
The Big ‘Big Data’ Question: Hadoop or Spark?

A question my clients have asked a lot recently is: should we go for Hadoop or Spark as our big data framework? Spark has overtaken Hadoop as the most active open source Big Data project, and while the two are not directly comparable products, they have many of the same uses.

To shed some light on the "Spark versus Hadoop" question, I thought an article explaining the essential differences and similarities between the two might be useful. As always, I have tried to keep it accessible to anyone, including those without a background in computer science.

Hadoop and Spark are both Big Data frameworks – they provide some of the most popular tools
used to carry out common Big Data-related tasks.

Hadoop, for many years, was the leading open source Big Data framework but recently the
newer and more advanced Spark has become the more popular of the two Apache Software
Foundation tools.

However, they do not perform exactly the same tasks, and they are not mutually exclusive: they are able to work together. Although Spark is reported to work up to 100 times faster than Hadoop in certain circumstances, it does not provide its own distributed storage system.

Distributed storage is fundamental to many of today’s Big Data projects as it allows vast multi-
petabyte datasets to be stored across an almost infinite number of everyday computer hard
drives, rather than involving hugely costly custom machinery which would hold it all on one
device. These systems are scalable, meaning that more drives can be added to the network as
the dataset grows in size.
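The basic idea behind such systems can be sketched in a few lines of plain Python. This is a conceptual toy, not Hadoop's actual API: one large dataset is split into chunks and spread across many drives, and adding a drive simply gives the partitioner more destinations.

```python
# Conceptual sketch of distributed storage (not HDFS's real API):
# spread one dataset's records across several drives, round-robin.

def partition(records, num_drives):
    """Assign each record to a drive in round-robin fashion."""
    drives = [[] for _ in range(num_drives)]
    for i, record in enumerate(records):
        drives[i % num_drives].append(record)
    return drives

dataset = [f"record-{n}" for n in range(10)]
drives = partition(dataset, num_drives=3)

for d, contents in enumerate(drives):
    print(f"drive {d}: {contents}")

# Scaling out = calling partition() with a larger num_drives;
# no single machine ever has to hold the whole dataset.
```

Real systems such as HDFS also replicate each chunk onto several machines, so the loss of any one drive does not lose data.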

As I mentioned, Spark does not include its own system for organizing files in a distributed way (a file system), so it requires one provided by a third party. For this reason, many Big Data projects involve installing Spark on top of Hadoop, where Spark's advanced analytics applications can make use of data stored using the Hadoop Distributed File System (HDFS).

What really gives Spark the edge over Hadoop is speed. Spark handles most of its operations "in memory", copying data from the distributed physical storage into far faster RAM. This cuts down the time-consuming reading from and writing to slow, clunky mechanical hard drives that Hadoop's MapReduce system requires.

MapReduce writes all of the data back to the physical storage medium after each operation. This was originally done to ensure a full recovery could be made in case something went wrong, as data held electronically in RAM is more volatile than data stored magnetically on disk. Spark instead arranges data in what are known as Resilient Distributed Datasets (RDDs), which can be rebuilt following a failure.
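To make the MapReduce pattern itself concrete, here is a toy, single-machine word count written in plain Python (not Hadoop's real Java API): a map phase turns each line into (word, 1) pairs, and a reduce phase sums the pairs per word. In real Hadoop, the intermediate pairs would be written to disk between the two phases; Spark keeps the equivalent data in RAM.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: group pairs by word and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big ideas", "big deal"]
print(reduce_phase(map_phase(lines)))
# → {'big': 3, 'data': 1, 'ideas': 1, 'deal': 1}
```

The disk round-trip between map and reduce is exactly the overhead that Spark's in-memory approach avoids on iterative workloads.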

Spark’s functionality for handling advanced data processing tasks such as real time stream
processing and machine learning is way ahead of what is possible with Hadoop alone. This,
along with the gain in speed provided by in-memory operations, is the real reason, in my
opinion, for its growth in popularity. Real-time processing means that data can be fed into an
analytical application the moment it is captured, and insights immediately fed back to the user
through a dashboard, to allow action to be taken. This sort of processing is increasingly being
used in all sorts of Big Data applications, for example recommendation engines used by
retailers, or monitoring the performance of industrial machinery in the manufacturing industry.
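The machinery-monitoring case above can be sketched with a sliding window, the core idea behind most stream processing. This is plain Python for illustration, not Spark Streaming's actual API; the window size and threshold are made-up example values.

```python
from collections import deque

class SlidingWindowMonitor:
    """Keep only the most recent readings; alert when their average drifts."""

    def __init__(self, window_size, threshold):
        self.window = deque(maxlen=window_size)  # old readings fall off automatically
        self.threshold = threshold

    def ingest(self, reading):
        # Called the moment each reading is captured; returns True on alert.
        self.window.append(reading)
        average = sum(self.window) / len(self.window)
        return average > self.threshold

monitor = SlidingWindowMonitor(window_size=3, threshold=50)
for temperature in [40, 45, 48, 70, 80]:
    alert = monitor.ingest(temperature)
    print(temperature, "ALERT" if alert else "ok")
```

The key property is that each reading is processed as it arrives, so the alert fires immediately rather than after a nightly batch job.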

Machine learning – creating algorithms that can "think" for themselves, improving and "learning" through a process of statistical modelling and simulation until an ideal solution to a given problem is found – is an area of analytics well suited to the Spark
platform, thanks to its speed and ability to handle streaming data. This sort of technology lies at
the heart of the latest advanced manufacturing systems used in industry which can predict when
parts will go wrong and when to order replacements, and will also lie at the heart of the
driverless cars and ships of the near future. Spark includes its own machine learning library, MLlib, whereas Hadoop systems must be interfaced with a third-party machine learning library, for example Apache Mahout.
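The "learn by iterative improvement" idea can be shown with a tiny example, again in plain Python rather than MLlib's API: we fit the slope w of a line y = w·x by repeatedly nudging w downhill on the squared error, the same loop that sits at the heart of far larger machine learning systems. The data and learning rate below are made-up illustrative values.

```python
# Toy gradient descent: learn the slope w so that y ≈ w * x.
data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # points lying on the line y = 2x

w = 0.0              # initial guess
learning_rate = 0.01
for _ in range(1000):
    # Gradient of the total squared error sum((w*x - y)**2) w.r.t. w.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= learning_rate * grad  # small step downhill

print(round(w, 3))  # → 2.0
```

Each pass improves the guess a little; after enough iterations w settles on 2.0, the slope that best explains the data.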

Although the existence of the two Big Data frameworks is often pitched as a battle for dominance, the reality is that it isn't really a battle at all. There is some crossover of function, but both are non-commercial products, so it isn't "competition" as such, and the corporate entities which do make money from providing support and installation of these free-to-use systems will often offer both services, allowing the buyer to pick and choose which functionality they require from each framework.

Many of the big vendors (e.g. Cloudera) now offer Spark as well as Hadoop, so they are in a good position to advise companies on which they will find most suitable, on a job-by-job basis. For example, if your Big Data simply consists of a huge amount of highly structured data (e.g. customer names and addresses), you may have no need for the advanced streaming analytics and machine learning functionality provided by Spark, and would be wasting time, and probably money, having it installed as a separate layer over your Hadoop storage. Spark, although developing very quickly, is still in its infancy, and its security and support infrastructure is not as advanced.

The increasing amount of Spark activity taking place (when compared to Hadoop activity) in the open source community is, in my opinion, a further sign that everyday business users are finding increasingly innovative uses for their stored data. The open source principle is a great thing in many ways, and one of them is how it enables seemingly similar products to exist alongside each other: vendors can sell both (or rather, provide installation and support services for both), based on what their customers actually need in order to extract maximum value from their data.

Bernard Marr is a globally recognized expert in big data, analytics and enterprise performance. He helps companies improve decision-making and performance using data. He has written a number of seminal books and over 200 high profile reports. Bernard is a regular contributor to the World Economic Forum, is acknowledged by the CEO Journal as one of today's leading business brains and by LinkedIn as one of the World's top 100 business influencers.
