Internet of Things (IoT) and big data are closely intertwined; although they are not the same thing, it is hard to talk about one without the other. Before we analyze their connection, let us take a closer look at these two practices.
Lightbend recently revealed the findings of a new survey of 2,457 global developers. We talked with Mark Brewer, CEO of Lightbend, about the key findings, the differences between fast data and big data, common misconceptions, and more.
Java’s not going anywhere, no doubt about that. But why are people choosing to use Java? And what sort of role will it play in the future development of Big Data and IoT? In this article, Jane Reyes explores the relationship between this old favorite of a programming language and the newest tech in the field.
The idea of collecting and analyzing data to gather insights isn’t really new. However, the specific roles involved in the collection and analysis of data have grown and evolved considerably over the last decade as the amount of data being created has increased at a staggering rate. In this article, Cher Zavala explains why data engineers are so important.
Containers are revolutionizing the way modern software is developed and operated. We talked to Johannes Unterstein, Distributed Applications Engineer at Mesosphere and JAX DevOps speaker, about container tools and technologies and the usefulness of containers in a DevOps context.
It’s been one year since Yahoo open-sourced CaffeOnSpark, and the tech giant has found a fitting way to celebrate: by open-sourcing TensorFlowOnSpark, its latest open source framework for distributed deep learning on big data clusters.
Apache Beam has successfully graduated from incubation, becoming a new Top-Level Project at the Apache Software Foundation. We invited the Apache Software Foundation’s Davor Bonaci and Jean-Baptiste Onofré to talk about the project’s journey to becoming a Top-Level Project and concrete plans for its future.
Big Data is changing. Buzzwords such as Hadoop, Storm, Pig and Hive are no longer the darlings of the industry; they are being replaced by a powerful duo: Fast Data and SMACK. Such rapid change in such a (relatively) young ecosystem raises a few questions: What is wrong with the current approach? What is the difference between Fast Data and Big Data? And what is SMACK?
Netflix Hollow is a Java library and comprehensive toolset for harnessing small to moderately sized in-memory datasets that are disseminated from a single producer to many consumers for read-only access. It is designed with busy servers in mind, ones handling requests at or near maximum capacity, and it aims to address the scaling challenges of in-memory datasets. Let’s look at the advantages that come from using Netflix Hollow.
As a (new) member of the R Consortium, IBM will work side by side with the R user community and support the project’s mission to pinpoint, create and implement infrastructure projects that drive standards and best practices for R code.
Version 1.8 of the Clojure Lisp dialect offers new string functions, as well as the possibility of direct linking, among other features.
It’s touted as the industry’s only open-source, enterprise-grade unified stream and batch processing platform. Apache Apex community manager Desmond Chan shows us what exactly that means and how this open-source engine handles big data.
After a preview version had been published at the end of November 2015, the final version of Apache Spark 1.6 is at long last ready for download. The update contains a total of over 1,000 changes; release highlights include a variety of performance improvements, the new Dataset API and expanded data science functions.
VMTurbo founders Yechiam Yemini and Yuri Rabover, along with Principal Solutions Engineer Eric Wright, have braved a look into the future and identified a few trends for the upcoming year.