Mad hatters hack Hadoop

Red Hat reveal big data plans, open-source HDFS replacement

Red Hat will be open-sourcing their Red Hat Storage Hadoop plug-in, submitting it to the Apache Software Foundation, the company announced yesterday as part of a wider reveal of their plans in the big data space.

Rather than becoming a Hadoop vendor themselves, Red Hat are planning to leverage their popular Linux distribution and middleware solutions to jump on board the big data bandwagon.

In addition, the company says it is building a “robust network of ecosystem and enterprise integration partners”, although who these partners will be and how they will “allow users to integrate and install comprehensive enterprise big data solutions” is as yet unclear.

Greg Kleiman, director of business strategy for Red Hat’s storage business unit, told The Register that they considered developing their own Hadoop distribution, but decided not to “at the moment”.

“We are going to work through the existing Hadoop distributors,” said Kleiman. “We are going to play the field and see what happens.”

Red Hat Storage Server (known as GlusterFS before Red Hat acquired Gluster in 2011) can serve as a drop-in replacement for Hadoop’s native HDFS, although the underlying technology differs in several respects, particularly in how it handles nodes.
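
To give a rough sense of what swapping out HDFS looks like in practice, here is a minimal Java sketch using Hadoop’s pluggable FileSystem API. The glusterfs:// scheme, the server address and the GlusterFileSystem class name are illustrative assumptions, not details confirmed by Red Hat’s announcement.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GlusterFsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Map the glusterfs:// scheme to the plug-in's FileSystem class
            // (class name assumed here for illustration).
            conf.set("fs.glusterfs.impl",
                     "org.apache.hadoop.fs.glusterfs.GlusterFileSystem");

            // Jobs then address the volume exactly as they would an HDFS URI.
            FileSystem fs = FileSystem.get(
                    URI.create("glusterfs://storage-server:9000/"), conf);

            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }

Once the scheme is mapped to an implementation, MapReduce jobs read and write the storage exactly as they would an HDFS path, which is what makes a drop-in replacement possible in the first place.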

The open-source Hadoop plug-in will be submitted to the Apache Software Foundation, allowing others to potentially integrate Red Hat’s storage technology into their own distributions of Hadoop. In addition, the company is working on a connector between Hive, Hadoop’s data warehouse system, and its JBoss middleware.
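
Red Hat have not published details of the Hive-to-JBoss connector yet, but Hive already exposes a standard JDBC interface, which gives a flavour of how middleware can query it today. The sketch below assumes a HiveServer running on the default port 10000 and a hypothetical logs table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcSketch {
        public static void main(String[] args) throws Exception {
            // Register Hive's JDBC driver (HiveServer1-era class name).
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

            // Host, port and database are placeholders for a local HiveServer.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");

            Statement stmt = conn.createStatement();
            // "logs" is a hypothetical table, used purely for illustration.
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM logs");
            while (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
            conn.close();
        }
    }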

In the accompanying press release, an IDC analyst is quoted as saying that the big data market is likely to grow from “$6 billion in 2011 to $23.8 billion in 2016”, so it’s no wonder that Red Hat have finally made a move for a slice of the Hadoop-flavoured pie.

Elliot Bentley

What do you think?
