SEMINAR REPORT on

HADOOP
(Source: www.techalone.com)

TABLE OF CONTENTS
INTRODUCTION
  Need for large data processing
  Challenges in distributed computing --- meeting hadoop
COMPARISON WITH OTHER SYSTEMS
  Comparison with RDBMS
ORIGIN OF HADOOP
SUBPROJECTS
  Core
  Avro
  Mapreduce
  HDFS
  Pig
THE HADOOP APPROACH
  Data distribution
  MapReduce: Isolated Processes
INTRODUCTION TO MAPREDUCE
  Programming model
  Types
HADOOP MAPREDUCE
  Combiner Functions
  HADOOP STREAMING
  HADOOP PIPES
HADOOP DISTRIBUTED FILESYSTEM (HDFS)
  ASSUMPTIONS AND GOALS
    Hardware Failure
    Streaming Data Access
    Large Data Sets
    Simple Coherency Model
    "Moving Computation is Cheaper than Moving Data"
    Portability Across Heterogeneous Hardware and Software Platforms
  DESIGN
  HDFS Concepts
    Blocks
    Namenodes and Datanodes
    The File System Namespace
    Data Replication
    Replica Placement
    Replica Selection
    Safemode
    The Persistence of File System Metadata
  The Communication Protocols
  Robustness
    Data Disk Failure, Heartbeats and Re-Replication
  Cluster Rebalancing
  Data Integrity
  Metadata Disk Failure
  Snapshots
  Data Organization
    Data Blocks
    Staging
    Replication Pipelining
  Accessibility
  Space Reclamation
    File Deletes and Undeletes
    Decrease Replication Factor
  Hadoop Filesystems
  Hadoop Archives
    Using Hadoop Archives
ANATOMY OF A MAPREDUCE JOB RUN
Hadoop is now a part of
INTRODUCTION

Computing in its purest form has changed hands multiple times. Near the beginning, mainframes were predicted to be the future of computing; indeed, mainframes and large-scale machines were built and used, and in some circumstances are used similarly today. The trend, however, turned from bigger and more expensive machines to smaller and more affordable commodity PCs and servers.


Most of our data is stored on local networks, with servers that may be clustered and share storage. This approach has had time to develop into a stable architecture and provides decent redundancy when deployed correctly. A newer technology, cloud computing, has emerged demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust S3 cloud storage model, it is clear that cloud computing has arrived with much to be learned from it.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.

Need for large data processing

We live in the data age. It is not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006 and forecast a tenfold growth by 2011, to 1.8 zettabytes. Some of the areas that need large-scale data processing include:

• The New York Stock Exchange generates about one terabyte of new trade data per day.

• Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.

• Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.


• The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.

• The Large Hadron Collider near Geneva, Switzerland, will produce about 15 petabytes of data per year.

This shows the significance of distributed computing.

Challenges in distributed computing --- meeting hadoop

Various challenges are faced while developing a distributed application. The problem is that while the storage capacities of hard drives have increased massively over the years, access speeds (the rate at which data can be read from drives) have not kept up. One typical drive from 1990 could store 1,370 MB of data and had a transfer speed of 4.4 MB/s, so we could read all the data from a full drive in around five minutes. Almost 20 years later, one terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk. This is a long time to read all data on a single drive, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read the data in under two minutes.

The first problem to solve is hardware failure: as soon as we start using many pieces of hardware, the chance that one will fail is fairly high. A common way of avoiding data loss is through replication: redundant copies of the data are kept by the system so that in the event of failure, there is another copy available. This is how RAID works, for instance, although Hadoop's filesystem, the Hadoop Distributed Filesystem (HDFS), takes a slightly different approach.

The second problem is that most analysis tasks need to be able to combine the data in some way: data read from one disk may need to be combined with the data from any of the other 99 disks. Various distributed systems allow data to be combined from multiple sources, but doing this correctly is notoriously challenging. MapReduce provides a programming model that abstracts the problem from disk reads and writes, transforming it into a computation over sets of keys and values.

This, in a nutshell, is what Hadoop provides: a reliable shared storage and analysis system. The storage is provided by HDFS and the analysis by MapReduce. There are other parts to Hadoop, but these capabilities are its kernel.
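To put the drive-speed figures quoted above in concrete terms (assuming, as above, 1 TB of data, a transfer rate of roughly 100 MB/s, and 100 drives working in parallel):

\[
t_{\text{single}} \approx \frac{10^{6}\ \text{MB}}{100\ \text{MB/s}} = 10^{4}\ \text{s} \approx 2.8\ \text{hours},
\qquad
t_{\text{parallel}} \approx \frac{t_{\text{single}}}{100} = 100\ \text{s} \approx 1.7\ \text{minutes}.
\]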

Hadoop is the popular open source implementation of MapReduce, a powerful tool designed for deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least two other nodes from which to retrieve that piece of information. This protects data availability from node failure, something which is critical when there are many nodes in a cluster (akin to RAID at a server level).

COMPARISON WITH OTHER SYSTEMS

Comparison with RDBMS

Unless we are dealing with very large volumes of unstructured data (hundreds of GB, TBs or PBs) and have large numbers of machines available, you will likely find the performance of Hadoop running a Map/Reduce query much slower than a comparable SQL query on a relational database.

But with all benchmarks everything has to be taken into consideration. For example, if the data starts life in a text file in the file system (e.g. a log file), the cost associated with extracting that data from the text file, structuring it into a standard schema and loading it into the RDBMS has to be considered. And if you have to do that for 1,000 or 10,000 log files, it may take minutes, hours or days (with Hadoop you still have to copy the files to its file system). It may also be practically impossible to load such data into an RDBMS in some environments, as data could be generated in such a volume that a load process into an RDBMS cannot keep up. So while using Hadoop your query time may be slower (speed improves with more nodes in the cluster), your access time to the data may potentially be improved. Also, as there aren't any mainstream RDBMSs that scale to thousands of nodes, at some point the sheer mass of brute-force processing power will outperform the optimized, but restricted in scale, relational access method. Hadoop uses a brute-force access method, whereas RDBMSs have optimization methods for accessing data such as indexes and read-ahead. The benefits really only come into play when the positive of mass parallelism is achieved, or the data is unstructured to the point where no RDBMS optimizations can be applied to help the performance of queries.

We can't use databases with lots of disks to do large-scale batch analysis. This is because seek time is improving more slowly than transfer rate. Seeking is the process of moving the disk's head to a particular place on the disk to read or write data. It characterizes the latency of a disk operation, whereas the transfer rate corresponds to a disk's bandwidth. If the data access pattern is dominated by seeks, it will take longer to read or write large portions of the dataset than streaming through it, which operates at the transfer rate. On the other hand, for updating a small proportion of records in a database, a traditional B-Tree (the data structure used in relational databases, which is limited by the rate it can perform seeks) works well. For updating the majority of a database, a B-Tree is less efficient than MapReduce, which uses Sort/Merge to rebuild the database.

Unlike small applications that can fit their most active data into memory, applications that sit on top of massive stores of shared content require a distributed solution if they hope to survive the long-tail usage pattern commonly found on content-rich sites. In our current RDBMS-dependent web stacks, scalability problems tend to hit the hardest at the database level. For applications with just a handful of common use cases that access a lot of the same data, distributed in-memory caches such as memcached provide some relief. However, for interactive applications that hope to reliably scale and support vast amounts of IO, the traditional RDBMS setup isn't going to cut it.

Another difference between MapReduce and an RDBMS is the amount of structure in the datasets that they operate on. Structured data is data that is organized into entities that have a defined format, such as XML documents or database tables that conform to a particular predefined schema. This is the realm of the RDBMS. Semi-structured data, on the other hand, is looser, and though there may be a schema, it is often ignored, so it may be used only as a guide to the structure of the data: for example, a spreadsheet, in which the structure is the grid of cells, although the cells themselves may hold any form of data. Unstructured data does not have any particular internal structure: for example, plain text or image data. MapReduce works well on unstructured or semi-structured data, since it is designed to interpret the data at processing time. In other words, the input keys and values for MapReduce are not an intrinsic property of the data, but are chosen by the person analyzing the data. Relational data is often normalized to retain its integrity and remove redundancy. Normalization poses problems for MapReduce, since it makes reading a record a non-local operation, and one of the central assumptions that MapReduce makes is that it is possible to perform (high-speed) streaming reads and writes.

              Traditional RDBMS              MapReduce
Data size     Gigabytes                      Petabytes
Access        Interactive and batch          Batch
Updates       Read and write many times      Write once, read many times
Structure     Static schema                  Dynamic schema
Integrity     High                           Low
Scaling       Non-linear                     Linear

But Hadoop hasn't become hugely popular yet. MySQL and other RDBMSs have stratospherically more market share than Hadoop, but like any investment, it's the future you should be considering. The industry is trending towards distributed systems, and Hadoop is a major player.

ORIGIN OF HADOOP

Hadoop was created by Doug Cutting, the creator of Apache Lucene, the widely used text search library. Hadoop has its origins in Apache Nutch, an open source web search engine, itself a part of the Lucene project.

Building a web search engine from scratch was an ambitious goal, for not only is the software required to crawl and index websites complex to write, but it is also a challenge to run without a dedicated operations team, since there are so many moving parts. It's expensive too: Mike Cafarella and Doug Cutting estimated a system supporting a 1-billion-page index would cost around half a million dollars in hardware, with a monthly running cost of $30,000. Nevertheless, they believed it was a worthy goal, as it would open up and ultimately democratize search engine algorithms.

Nutch was started in 2002, and a working crawler and search system quickly emerged. However, they realized that their architecture wouldn't scale to the billions of pages on the Web. Help was at hand with the publication of a paper in 2003 that described the architecture of Google's distributed filesystem, called GFS, which was being used in production at Google. GFS, or something like it, would solve their storage needs for the very large files generated as a part of the web crawl and indexing process. In particular, GFS would free up time being spent on administrative tasks such as managing storage nodes. In 2004, they set about writing an open source implementation, the Nutch Distributed Filesystem (NDFS).

In 2004, Google published the paper that introduced MapReduce to the world. Early in 2005, the Nutch developers had a working MapReduce implementation in Nutch, and by the middle of that year all the major Nutch algorithms had been ported to run using MapReduce and NDFS. NDFS and the MapReduce implementation in Nutch were applicable beyond the realm of search, and in February 2006 they moved out of Nutch to form an independent subproject of Lucene called Hadoop. At around the same time, Doug Cutting joined Yahoo!, which provided a dedicated team and the resources to turn Hadoop into a system that ran at web scale. This was demonstrated in February 2008 when Yahoo! announced that its production search index was being generated by a 10,000-core Hadoop cluster.

In April 2008, Hadoop broke a world record to become the fastest system to sort a terabyte of data. Running on a 910-node cluster, Hadoop sorted one terabyte in 209 seconds (just under 3.5 minutes), beating the previous year's winner of 297 seconds (described in detail in "TeraByte Sort on Apache Hadoop"). In November of the same year, Google reported that its MapReduce implementation sorted one terabyte in 68 seconds. In May 2009, it was announced that a team at Yahoo! used Hadoop to sort one terabyte in 62 seconds.

SUBPROJECTS

Although Hadoop is best known for MapReduce and its distributed filesystem (HDFS, renamed from NDFS), the other subprojects provide complementary services, or build on the core to add higher-level abstractions. The various subprojects of Hadoop include:

Core
A set of components and interfaces for distributed filesystems and general I/O (serialization, Java RPC, persistent data structures).

Avro
A data serialization system for efficient, cross-language RPC and persistent data storage. (At the time of this writing, Avro had been created only as a new subproject, and no other Hadoop subprojects were using it yet.)

Mapreduce
A distributed data processing model and execution environment that runs on large clusters of commodity machines.

HDFS
A distributed filesystem that runs on large clusters of commodity machines.

Pig
A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.

HBase
A distributed, column-oriented database. HBase uses HDFS for its underlying storage, and supports both batch-style computations using MapReduce and point queries (random reads).

ZooKeeper
A distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used for building distributed applications.

Hive
A distributed data warehouse. Hive manages data stored in HDFS and provides a query language based on SQL (which is translated by the runtime engine to MapReduce jobs) for querying the data.

Chukwa
A distributed data collection and analysis system. Chukwa runs collectors that store data in HDFS, and it uses MapReduce to produce reports. (At the time of this writing, Chukwa had only recently graduated from a "contrib" module in Core to its own subproject.)

THE HADOOP APPROACH

Hadoop is designed to efficiently process large volumes of information by connecting many commodity computers together to work in parallel. The theoretical 1000-CPU machine described earlier would cost a very large amount of money, far more than 1,000 single-CPU or 250 quad-core machines. Hadoop ties these smaller and more reasonably priced machines together into a single cost-effective compute cluster. Performing computation on large volumes of data has been done before, usually in a distributed setting. What makes Hadoop unique is its simplified programming model, which allows the user to quickly write and test distributed systems, and its efficient, automatic distribution of data and work across machines, which in turn utilizes the underlying parallelism of the CPU cores.

Data distribution

In a Hadoop cluster, data is distributed to all the nodes of the cluster as it is being loaded in. The Hadoop Distributed File System (HDFS) splits large data files into chunks which are managed by different nodes in the cluster. In addition, each chunk is replicated across several machines, so that a single machine failure does not result in any data being unavailable. An active monitoring system then re-replicates the data in response to system failures which can result in partial storage. Even though the file chunks are replicated and distributed across several machines, they form a single namespace, so their contents are universally accessible.

Data is conceptually record-oriented in the Hadoop programming framework. Individual input files are broken into lines or into other formats specific to the application logic. Each process running on a node in the cluster then processes a subset of these records. The Hadoop framework schedules these processes in proximity to the location of data/records, using knowledge from the distributed file system. Since files are spread across the distributed file system as chunks, each compute process running on a node operates on a subset of the data. Which data a node operates on is chosen based on its locality to the node: most data is read from the local disk straight into the CPU, alleviating strain on network bandwidth and preventing unnecessary network transfers. This strategy of moving computation to the data, instead of moving the data to the computation, allows Hadoop to achieve high data locality, which in turn results in high performance.

MapReduce: Isolated Processes

Hadoop limits the amount of communication which can be performed by the processes, as each individual record is processed by a task in isolation from the others. While this sounds like a major limitation at first, it makes the whole framework much more reliable. Hadoop will not run just any program and distribute it across a cluster; programs must be written to conform to a particular programming model, named "MapReduce." In MapReduce, records are processed in isolation by tasks called Mappers. The output from the Mappers is then brought together into a second set of tasks called Reducers, where results from different mappers can be merged together.

Separate nodes in a Hadoop cluster still communicate with one another. However, in contrast to more conventional distributed systems, where application developers explicitly marshal byte streams from node to node over sockets or through MPI buffers, communication in Hadoop is performed implicitly. Pieces of data can be tagged with key names which inform Hadoop how to send related bits of information to a common destination node. Hadoop internally manages all of the data transfer and cluster topology issues.

By restricting the communication between nodes, Hadoop makes the distributed system much more reliable. Individual node failures can be worked around by restarting tasks on other machines. Since user-level tasks do not communicate explicitly with one another, no messages need to be exchanged by user programs, nor do nodes need to roll back to pre-arranged checkpoints to partially restart the computation. The other workers continue to operate as though nothing went wrong, leaving the challenging aspects of partially restarting the program to the underlying Hadoop layer.

INTRODUCTION TO MAPREDUCE

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model. Most such computations involve applying a map operation to each logical record in the input in order to compute a set of intermediate key/value pairs, and then applying a reduce operation to all the values that share the same key, in order to combine the derived data appropriately. The use of a functional model with user-specified map and reduce operations allows large computations to be parallelized easily and re-execution to be used as the primary mechanism for fault tolerance. This abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages.

Programming model

The computation takes a set of input key/value pairs and produces a set of output key/value pairs. The user of the MapReduce library expresses the computation as two functions: Map and Reduce.

Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.

The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's reduce function via an iterator. This allows lists of values that are too large to fit in memory to be handled.

MAP

map (in_key, in_value) -> (out_key, intermediate_value) list

Example: Upper-case Mapper

let map(k, v) = emit(k.toUpper(), v.toUpper())
("foo", "bar")   --> ("FOO", "BAR")
("Foo", "other") --> ("FOO", "OTHER")
("key2", "data") --> ("KEY2", "DATA")

REDUCE

reduce (out_key, intermediate_value list) -> out_value list

Example: Sum Reducer

let reduce(k, vals) =
    sum = 0
    foreach int v in vals:
        sum += v
    emit(k, sum)
("A", [42, 100, 312]) --> ("A", 454)
("B", [12, 6, -2])    --> ("B", 16)

Example 2: Counting the number of occurrences of each word in a large collection of documents. The user would write code similar to the following pseudo-code:

map(String key, String value):

    // key: document name
    // value: document contents
    for each word w in value:
        EmitIntermediate(w, "1");

reduce(String key, Iterator values):
    // key: a word
    // values: a list of counts
    int result = 0;
    for each v in values:
        result += ParseInt(v);
    Emit(AsString(result));

The map function emits each word plus an associated count of occurrences (just '1' in this simple example). The reduce function sums together all counts emitted for a particular word.

In addition, the user writes code to fill in a mapreduce specification object with the names of the input and output files, and optional tuning parameters. The user then invokes the MapReduce function, passing it the specification object. The user's code is linked together with the MapReduce library (implemented in C++). Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code. As a reaction to this complexity, Google designed a new abstraction that allows simple computations to be expressed while hiding the messy details of parallelization, fault tolerance, data distribution and load balancing in a library.

Types

Even though the previous pseudo-code is written in terms of string inputs and outputs, conceptually the map and reduce functions supplied by the user have associated types:

map    (k1, v1)       -> list(k2, v2)
reduce (k2, list(v2)) -> list(v2)

That is, the input keys and values are drawn from a different domain than the output keys and values. Furthermore, the intermediate keys and values are from the same domain as the output keys and values. The C++ implementation passes strings to and from the user-defined functions and leaves it to the user code to convert between strings and appropriate types. Two further examples expressible in this model:

Inverted Index: The map function parses each document and emits a sequence of (word, document ID) pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs and emits a (word, list(document ID)) pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.

Distributed Sort: The map function extracts the key from each record and emits a (key, record) pair. The reduce function emits all pairs unchanged.

HADOOP MAPREDUCE

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. A MapReduce job usually splits the input dataset into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a filesystem. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks. Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the distributed filesystem run on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

A MapReduce job is a unit of work that the client wants to be performed: it consists of the input data, the MapReduce program, and configuration information. Hadoop runs the job by dividing it into tasks, of which there are two types: map tasks and reduce tasks. There are two types of nodes that control the job execution process: a jobtracker and a number of tasktrackers. The jobtracker coordinates all the jobs run on the system by scheduling tasks to run on tasktrackers. Tasktrackers run tasks and send progress reports to the jobtracker, which keeps a record of the overall progress of each job. If a task fails, the jobtracker can reschedule it on a different tasktracker.

Hadoop divides the input to a MapReduce job into fixed-size pieces called input splits, or just splits. Hadoop creates one map task for each split, which runs the user-defined map function for each record in the split.
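To make the preceding description concrete, the sketch below shows a complete word-count job written against Hadoop's Java MapReduce API (the org.apache.hadoop.mapreduce classes); it mirrors the pseudo-code from the previous section. The class names and the use of two command-line arguments for the input and output paths are illustrative choices, not something prescribed by this report.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // Optional: local aggregation before the shuffle (see Combiner Functions below).
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

In practice the job would be packaged as a jar and submitted with the hadoop jar command; the jobtracker then schedules its map and reduce tasks on the tasktrackers as described above.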

Having many splits means the time taken to process each split is small compared to the time to process the whole input. So if we are processing the splits in parallel, the processing is better load-balanced when the splits are small, since a faster machine will be able to process proportionally more splits over the course of the job than a slower machine. Even if the machines are identical, failed processes or other jobs running concurrently make load balancing desirable, and the quality of the load balancing increases as the splits become more fine-grained. On the other hand, if splits are too small, then the overhead of managing the splits and of map task creation begins to dominate the total job execution time. For most jobs, a good split size tends to be the size of an HDFS block, 64 MB by default, although this can be changed for the cluster (for all newly created files), or specified when each file is created.

Hadoop does its best to run the map task on a node where the input data resides in HDFS. This is called the data locality optimization. It should now be clear why the optimal split size is the same as the block size: it is the largest size of input that can be guaranteed to be stored on a single node. If the split spanned two blocks, it would be unlikely that any HDFS node stored both blocks, so some of the split would have to be transferred across the network to the node running the map task, which is clearly less efficient than running the whole map task using local data.

Map tasks write their output to local disk, not to HDFS. Map output is intermediate output: it is processed by reduce tasks to produce the final output, and once the job is complete the map output can be thrown away. So storing it in HDFS, with replication, would be overkill. If the node running the map task fails before the map output has been consumed by the reduce task, then Hadoop will automatically rerun the map task on another node to recreate the map output.

Reduce tasks don't have the advantage of data locality; the input to a single reduce task is normally the output from all mappers. In the present example, we have a single reduce task that is fed by all of the map tasks. Therefore the sorted map outputs have to be transferred across the network to the node where the reduce task is running, where they are merged and then passed to the user-defined reduce function. The output of the reduce is normally stored in HDFS for reliability. For each HDFS block of the reduce output, the first replica is stored on the local node, with other replicas being stored on off-rack nodes. Thus, writing the reduce output does consume network bandwidth, but only as much as a normal HDFS write pipeline consumes. The number of reduce tasks is not governed by the size of the input, but is specified independently (a configuration sketch follows below).
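The sketch below shows how these two knobs are typically surfaced in the Java API. The 128 MB figure and the dfs.block.size property name (as used in 0.20-era releases) are assumptions for illustration; the reducer count of four is likewise arbitrary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobTuning {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Block size (and hence default split size) for files this client creates;
    // property name as used in 0.20-era Hadoop releases.
    conf.setLong("dfs.block.size", 128L * 1024 * 1024);

    Job job = new Job(conf, "tuning example");
    // The number of reduce tasks is set explicitly; it is not derived from the input size.
    job.setNumReduceTasks(4);
  }
}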

The dotted boxes in the figure below indicate nodes, the light arrows show data transfers on a node, and the heavy arrows show data transfers between nodes.

MapReduce data flow with a single reduce task

When there are multiple reducers, the map tasks partition their output, each creating one partition for each reduce task. There can be many keys (and their associated values) in each partition, but the records for every key are all in a single partition. The partitioning can be controlled by a user-defined partitioning function, but normally the default partitioner, which buckets keys using a hash function, works very well. This diagram makes it clear why the data flow between map and reduce tasks is colloquially known as "the shuffle," as each reduce task is fed by many map tasks. The shuffle is more complicated than this diagram suggests, and tuning it can have a big impact on job execution time. Finally, it is also possible to have zero reduce tasks. This can be appropriate when you don't need the shuffle, since the processing can be carried out entirely in parallel.

MapReduce data flow with multiple reduce tasks

MapReduce data flow with no reduce tasks
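As an illustration of the user-defined partitioning function mentioned above, the sketch below shows a hypothetical partitioner, written against the org.apache.hadoop.mapreduce API, that buckets keys by their first character instead of the default hash; it would be registered on a job with job.setPartitionerClass(FirstCharPartitioner.class).

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each key to a reduce partition based on its first character.
// The default HashPartitioner instead uses (key.hashCode() & Integer.MAX_VALUE) % numPartitions.
public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;
    }
    return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
  }
}

All records sharing a key still land in exactly one partition, which is the property the shuffle relies on.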

Combiner Functions

Many MapReduce jobs are limited by the bandwidth available on the cluster, so it pays to minimize the data transferred between map and reduce tasks. Hadoop allows the user to specify a combiner function to be run on the map output; the combiner function's output forms the input to the reduce function. Since the combiner function is an optimization, Hadoop does not provide a guarantee of how many times it will call it for a particular map output record, if at all. In other words, calling the combiner function zero, one, or many times should produce the same output from the reducer.

HADOOP STREAMING

Hadoop provides an API to MapReduce that allows you to write your map and reduce functions in languages other than Java. Hadoop Streaming uses Unix standard streams as the interface between Hadoop and your program, so you can use any language that can read standard input and write to standard output to write your MapReduce program. Streaming is naturally suited for text processing (although as of version 0.21.0 it can handle binary streams, too), and when used in text mode it has a line-oriented view of data. Map input data is passed over standard input to your map function, which processes it line by line and writes lines to standard output. A map output key-value pair is written as a single tab-delimited line. Input to the reduce function is in the same format, a tab-separated key-value pair, passed over standard input. The reduce function reads lines from standard input, which the framework guarantees are sorted by key, and writes its results to standard output.

HADOOP PIPES

Hadoop Pipes is the name of the C++ interface to Hadoop MapReduce. Unlike Streaming, which uses standard input and output to communicate with the map and reduce code, Pipes uses sockets as the channel over which the tasktracker communicates with the process running the C++ map or reduce function. JNI is not used.

HADOOP DISTRIBUTED FILESYSTEM (HDFS)

Filesystems that manage the storage across a network of machines are called distributed filesystems. Since they are network-based, all the complications of network programming kick in, making distributed filesystems more complex than regular disk filesystems. For example, one of the biggest challenges is making the filesystem tolerate node failure without suffering data loss. Hadoop comes with a distributed filesystem called HDFS, which stands for Hadoop Distributed Filesystem. HDFS is designed to hold very large amounts of data (terabytes or even petabytes) and to provide high-throughput access to this information. Files are stored in a redundant fashion across multiple machines to ensure their durability against failure and their high availability to very parallel applications.

ASSUMPTIONS AND GOALS

Hardware Failure

Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the filesystem's data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

Streaming Data Access

Applications that run on HDFS need streaming access to their data sets. They are not general-purpose applications that typically run on general-purpose filesystems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS; POSIX semantics in a few key areas have been traded away to increase data throughput rates.

Large Data Sets

Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Simple Coherency Model

HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future.

"Moving Computation is Cheaper than Moving Data"

A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. Moving the computation minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

Portability Across Heterogeneous Hardware and Software Platforms

HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

DESIGN

HDFS is a filesystem designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware. Let's examine this statement in more detail:

Very large files
"Very large" in this context means files that are hundreds of megabytes, gigabytes, or terabytes in size. There are Hadoop clusters running today that store petabytes of data.

Streaming data access
HDFS is built around the idea that the most efficient data processing pattern is a write-once, read-many-times pattern. A dataset is typically generated or copied from source, then various analyses are performed on that dataset over time. Each analysis will involve a large proportion, if not all, of the dataset, so the time to read the whole dataset is more important than the latency in reading the first record.

Commodity hardware

Hadoop doesn't require expensive, highly reliable hardware to run on. It is designed to run on clusters of commodity hardware (commonly available hardware from multiple vendors) for which the chance of node failure across the cluster is high, at least for large clusters. HDFS is designed to carry on working without a noticeable interruption to the user in the face of such failure.

It is also worth examining the applications for which using HDFS does not work so well. While this may change in the future, these are areas where HDFS is not a good fit today:

Low-latency data access
Applications that require low-latency access to data, in the tens of milliseconds range, will not work well with HDFS. Remember that HDFS is optimized for delivering a high throughput of data, and this may be at the expense of latency. HBase is currently a better choice for low-latency access.

Lots of small files
Since the namenode holds filesystem metadata in memory, the limit to the number of files in a filesystem is governed by the amount of memory on the namenode. As a rule of thumb, each file, directory, and block takes about 150 bytes. So, if you had one million files, each taking one block, you would need at least 300 MB of memory (the arithmetic is worked out after this list). While storing millions of files is feasible, billions is beyond the capability of current hardware.

Multiple writers, arbitrary file modifications
Files in HDFS may be written to by a single writer. Writes are always made at the end of the file. There is no support for multiple writers, or for modifications at arbitrary offsets in the file. (These might be supported in the future, but they are likely to be relatively inefficient.)
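The namenode memory estimate in the small-files item above is simple arithmetic: one million files, each occupying one block, means roughly two million namespace objects (one file object plus one block object each) at about 150 bytes apiece:

\[
10^{6}\ \text{files} \times 2\ \text{objects per file} \times 150\ \text{bytes} \approx 3 \times 10^{8}\ \text{bytes} \approx 300\ \text{MB}.
\]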

HDFS Concepts

Blocks

A disk has a block size, which is the minimum amount of data that it can read or write. Filesystems for a single disk build on this by dealing with data in blocks, which are an integral multiple of the disk block size. Filesystem blocks are typically a few kilobytes in size, while disk blocks are normally 512 bytes. This is generally transparent to the filesystem user, who is simply reading or writing a file of whatever length. However, there are tools for filesystem maintenance, such as df and fsck, that operate at the filesystem block level.

HDFS too has the concept of a block, but it is a much larger unit: 64 MB by default. Like in a filesystem for a single disk, files in HDFS are broken into block-sized chunks, which are stored as independent units. Unlike a filesystem for a single disk, a file in HDFS that is smaller than a single block does not occupy a full block's worth of underlying storage. When unqualified, the term "block" in this report refers to a block in HDFS.

HDFS blocks are large compared to disk blocks, and the reason is to minimize the cost of seeks. By making a block large enough, the time to transfer the data from the disk can be made significantly larger than the time to seek to the start of the block. Thus the time to transfer a large file made of multiple blocks operates at the disk transfer rate. A quick calculation shows that if the seek time is around 10 ms and the transfer rate is 100 MB/s, then to make the seek time 1% of the transfer time, we need to make the block size around 100 MB (worked out below). The default is actually 64 MB, although many HDFS installations use 128 MB blocks. This figure will continue to be revised upward as transfer speeds grow with new generations of disk drives. This argument shouldn't be taken too far, however: map tasks in MapReduce normally operate on one block at a time, so if you have too few tasks (fewer than nodes in the cluster), your jobs will run slower than they could otherwise.

Having a block abstraction for a distributed filesystem brings several benefits. The first benefit is the most obvious: a file can be larger than any single disk in the network. There's nothing that requires the blocks from a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. In fact, it would be possible, if unusual, to store a single file on an HDFS cluster whose blocks filled all the disks in the cluster.

Second, making the unit of abstraction a block rather than a file simplifies the storage subsystem. Simplicity is something to strive for in all systems, but it is especially important for a distributed system in which the failure modes are so varied. The storage subsystem deals with blocks, simplifying storage management (since blocks are a fixed size, it is easy to calculate how many can be stored on a given disk) and eliminating metadata concerns (blocks are just a chunk of data to be stored; file metadata such as permissions information does not need to be stored with the blocks, so another system can handle metadata orthogonally).

Furthermore, blocks fit well with replication for providing fault tolerance and availability. To insure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client. A block that is no longer available due to corruption or machine failure can be replicated from its alternative locations to other live machines to bring the replication factor back to the normal level. Similarly, some applications may choose to set a high replication factor for the blocks in a popular file to spread the read load on the cluster.
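The block-size figure quoted in the seek-time argument above follows directly from the stated assumptions (10 ms seek time, 100 MB/s transfer rate, seek overhead held to 1% of transfer time):

\[
0.01 \times \frac{B}{100\ \text{MB/s}} = 10\ \text{ms}
\;\;\Longrightarrow\;\;
\frac{B}{100\ \text{MB/s}} = 1\ \text{s}
\;\;\Longrightarrow\;\;
B = 100\ \text{MB}.
\]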

Like its disk filesystem cousin, HDFS's fsck command understands blocks. For example, running:

% hadoop fsck / -files -blocks

will list the blocks that make up each file in the filesystem.

Namenodes and Datanodes

An HDFS cluster has two types of node operating in a master-worker pattern: a namenode (the master) and a number of datanodes (workers). The namenode manages the filesystem namespace. It maintains the filesystem tree and the metadata for all the files and directories in the tree. This information is stored persistently on the local disk in the form of two files: the namespace image and the edit log. The namenode also knows the datanodes on which all the blocks for a given file are located; however, it does not store block locations persistently, since this information is reconstructed from datanodes when the system starts.

Datanodes are the workhorses of the filesystem. They store and retrieve blocks when they are told to (by clients or the namenode), and they report back to the namenode periodically with lists of blocks that they are storing.

A client accesses the filesystem on behalf of the user by communicating with the namenode and datanodes.

The client presents a POSIX-like filesystem interface, so the user code does not need to know about the namenode and datanodes to function.

Without the namenode, the filesystem cannot be used. In fact, if the machine running the namenode were obliterated, all the files on the filesystem would be lost, since there would be no way of knowing how to reconstruct the files from the blocks on the datanodes. For this reason, it is important to make the namenode resilient to failure, and Hadoop provides two mechanisms for this.

The first way is to back up the files that make up the persistent state of the filesystem metadata. Hadoop can be configured so that the namenode writes its persistent state to multiple filesystems. These writes are synchronous and atomic. The usual configuration choice is to write to local disk as well as a remote NFS mount.

It is also possible to run a secondary namenode, which despite its name does not act as a namenode. Its main role is to periodically merge the namespace image with the edit log to prevent the edit log from becoming too large. The secondary namenode usually runs on a separate physical machine, since it requires plenty of CPU and as much memory as the namenode to perform the merge. It keeps a copy of the merged namespace image, which can be used in the event of the namenode failing. However, the state of the secondary namenode lags that of the primary, so in the event of total failure of the primary, data loss is almost guaranteed. The usual course of action in this case is to copy the namenode's metadata files that are on NFS to the secondary and run it as the new primary.
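From the client's side, all of this is hidden behind Hadoop's FileSystem API: the client asks the namenode where blocks live and then streams data to or from the datanodes directly. A minimal sketch follows (the path and the written string are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();       // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);           // talks to the namenode behind the scenes

    Path file = new Path("/user/demo/example.txt"); // illustrative path

    // Write: the client asks the namenode for block allocations,
    // then streams the bytes to the chosen datanodes.
    FSDataOutputStream out = fs.create(file);
    out.writeUTF("hello hdfs");
    out.close();

    // Read: the namenode returns block locations; the data itself comes from datanodes.
    FSDataInputStream in = fs.open(file);
    System.out.println(in.readUTF());
    in.close();

    fs.close();
  }
}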

The File System Namespace

HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The filesystem namespace hierarchy is similar to most other existing filesystems: one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.

The NameNode maintains the filesystem namespace. Any change to the filesystem namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

Data Replication

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.

The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
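Changing the replication factor of an existing file is a single call through the same FileSystem API; the path and the factor of five below are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplicationSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Ask the namenode to raise the target replication factor of one file to 5;
    // the actual re-replication happens asynchronously on the datanodes.
    boolean accepted = fs.setReplication(new Path("/user/demo/example.txt"), (short) 5);
    System.out.println("replication change accepted: " + accepted);
    fs.close();
  }
}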

Replica Placement

The placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed filesystems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation of the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies.

Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks. The NameNode determines the rack id each DataNode belongs to via the process outlined in Rack Awareness.

A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster, which makes it easy to balance load on component failure. However, it increases the cost of writes because a write needs to transfer blocks to multiple racks.

For the common case, when the replication factor is three, HDFS's placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic, which generally improves write performance. The chance of rack failure is far less than that of node failure, so this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data, since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks: one third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance. The current, default replica placement policy described here is a work in progress.

Replica Selection

To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If the HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.

Safemode

On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas and replicates these blocks to other DataNodes.

The Persistence of File System Metadata

The HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to filesystem metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS filesystem to store the EditLog. The entire filesystem namespace, including the mapping of blocks to files and filesystem properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode's local filesystem too.

The NameNode keeps an image of the entire filesystem namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future.

The DataNode stores HDFS data in files in its local filesystem. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local filesystem. The DataNode does not create all files in the same directory.

The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files, and sends this report to the NameNode: this is the Blockreport.

The Communication Protocols

All HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine. It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

Robustness

The primary objective of HDFS is to store data reliably even in the presence of failures. The three common types of failures are NameNode failures, DataNode failures and network partitions.

Data Disk Failure, Heartbeats and Re-Replication

Each DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more. DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise for many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.

Cluster Rebalancing

The HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.
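One of the re-replication triggers listed above, an increase in a file's replication factor, is something a client can request directly through the FileSystem API; the NameNode then schedules creation of the extra replicas. A minimal sketch (the path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: raise the replication factor of an existing file to 5. The NameNode
// notices the under-replicated blocks and asks DataNodes to copy them, so the
// extra replicas appear asynchronously.
public class RaiseReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/important.log");   // hypothetical path
    boolean accepted = fs.setReplication(file, (short) 5);
    System.out.println("Replication change accepted: " + accepted);
    fs.close();
  }
}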

Data Integrity

It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents, it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.

Metadata Disk Failure

The FsImage and the EditLog are central data structures of HDFS. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. The NameNode machine is a single point of failure for an HDFS cluster. If the NameNode machine fails, manual intervention is necessary. Currently, automatic restart and failover of the NameNode software to another machine is not supported. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.

Snapshots

Snapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time. HDFS does not currently support snapshots but will in a future release.
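The checksum mechanism described under Data Integrity above is transparent to applications: they simply read the stream, and a corrupt replica surfaces as a ChecksumException if no good replica can be used instead. A minimal sketch, with a hypothetical path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ChecksumException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: read a file; the HDFS client verifies each block against its stored
// checksum as the bytes arrive.
public class ReadWithChecksums {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/data.txt");   // hypothetical path
    byte[] buffer = new byte[4096];
    try {
      FSDataInputStream in = fs.open(file);
      long total = 0;
      int read;
      while ((read = in.read(buffer)) > 0) {
        total += read;                             // checksums are verified en route
      }
      in.close();
      System.out.println("Read " + total + " bytes, checksums OK");
    } catch (ChecksumException e) {
      System.err.println("Corrupt block detected: " + e.getMessage());
    }
    fs.close();
  }
}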

Data Organization

Data Blocks

HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times, and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.

Staging

A client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a temporary local file. Application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the block of data from the local temporary file to the specified DataNode. When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.

The above approach has been adopted after careful consideration of target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client side buffering, the network speed and the congestion in the network impact throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client side caching to improve performance. A POSIX requirement has been relaxed to achieve higher performance of data uploads.

Replication Pipelining

When a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn, starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline. In this way, the data is pipelined from one DataNode to the next.
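The block size and replication factor are per-file parameters that a client can set when the file is created; the staging and pipelining described above then happen behind the returned output stream. A minimal sketch (path and sizes are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: create a file with an explicit replication factor (3) and block
// size (64 MB). Client-side buffering and the DataNode write pipeline are
// hidden behind this output stream.
public class CreateWithBlockSize {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/output.dat");   // hypothetical path

    long blockSize = 64L * 1024 * 1024;              // 64 MB
    FSDataOutputStream out =
        fs.create(file, true /* overwrite */, 4096 /* buffer size */, (short) 3, blockSize);
    out.writeUTF("hello hdfs");                      // application writes go to the stream
    out.close();                                     // the block is committed on close
    fs.close();
  }
}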

Accessibility

HDFS can be accessed from applications in many different ways. Natively, HDFS provides a Java API for applications to use. A C language wrapper for this Java API is also available. In addition, an HTTP browser can also be used to browse the files of an HDFS instance. Work is in progress to expose HDFS through the WebDAV protocol.

Space Reclamation

File Deletes and Undeletes

When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.

A user can undelete a file after deleting it as long as it remains in the /trash directory. If a user wants to undelete a file that he/she has deleted, he/she can navigate the /trash directory and retrieve the file. The /trash directory contains only the latest copy of the file that was deleted. The /trash directory is just like any other directory, with one special feature: HDFS applies specified policies to automatically delete files from this directory. The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well defined interface.

Decrease Replication Factor

When the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.
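From the Java API, deletion via trash and replication changes look roughly as follows. The sketch assumes trash is enabled in the configuration (fs.trash.interval greater than zero); moveToTrash mirrors what the command-line shell does on delete, and setReplication triggers the lazy removal of excess replicas described above. Paths are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

// Sketch: move a file into /trash instead of deleting it outright, and lower
// another file's replication factor so excess replicas can be reclaimed.
public class ReclaimSpace {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Renames the file into the current user's /trash directory.
    Trash trash = new Trash(fs, conf);
    boolean moved = trash.moveToTrash(new Path("/user/demo/old-report.csv"));
    System.out.println("Moved to trash: " + moved);

    // Reduce replication from the default (typically 3) to 2; the NameNode
    // tells DataNodes to drop the excess replicas via later Heartbeats.
    fs.setReplication(new Path("/user/demo/archive.dat"), (short) 2);

    fs.close();
  }
}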

Hadoop Filesystems

Hadoop has an abstract notion of filesystem, of which HDFS is just one implementation. The Java abstract class org.apache.hadoop.fs.FileSystem represents a filesystem in Hadoop, and there are several concrete implementations, described below (the implementation classes are given relative to the org.apache.hadoop package):

• Local (URI scheme file, implementation fs.LocalFileSystem): a filesystem for a locally connected disk with client-side checksums. Use RawLocalFileSystem for a local filesystem with no checksums.
• HDFS (URI scheme hdfs, implementation hdfs.DistributedFileSystem): Hadoop's distributed filesystem. HDFS is designed to work efficiently in conjunction with MapReduce.
• HFTP (URI scheme hftp, implementation hdfs.HftpFileSystem): a filesystem providing read-only access to HDFS over HTTP. (Despite its name, HFTP has no connection with FTP.) Often used with distcp ("Parallel Copying with distcp").
• HSFTP (URI scheme hsftp, implementation hdfs.HsftpFileSystem): a filesystem providing read-only access to HDFS over HTTPS. (Again, this has no connection with FTP.)
• HAR (URI scheme har, implementation fs.HarFileSystem): a filesystem layered on another filesystem for archiving files. Hadoop Archives are typically used for archiving files in HDFS to reduce the namenode's memory usage.
• KFS (CloudStore) (URI scheme kfs, implementation fs.kfs.KosmosFileSystem): CloudStore (formerly the Kosmos filesystem) is a distributed filesystem like HDFS or Google's GFS, written in C++.
• FTP (URI scheme ftp, implementation fs.ftp.FTPFileSystem): a filesystem backed by an FTP server.
• S3 (native) (URI scheme s3n, implementation fs.s3native.NativeS3FileSystem): a filesystem backed by Amazon S3.
• S3 (block-based) (URI scheme s3, implementation fs.s3.S3FileSystem): a filesystem backed by Amazon S3, which stores files in blocks (much like HDFS) to overcome S3's 5 GB file size limit.
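All of these implementations are reached through the same abstract FileSystem API, keyed by the URI scheme. A minimal sketch (the NameNode address and paths are hypothetical):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: the concrete FileSystem implementation is chosen from the URI
// scheme, so the same code can work against the local disk, HDFS, or any of
// the other filesystems listed above.
public class ListAnyFilesystem {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // file:// resolves to LocalFileSystem, hdfs:// to DistributedFileSystem.
    FileSystem local = FileSystem.get(URI.create("file:///"), conf);
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);

    System.out.println("Local working directory: " + local.getWorkingDirectory());
    for (FileStatus status : hdfs.listStatus(new Path("/user/demo"))) {
      System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
    }
    local.close();
    hdfs.close();
  }
}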

Hadoop Archives

HDFS stores small files inefficiently, since each file is stored in a block, and block metadata is held in memory by the namenode. Thus, a large number of small files can eat up a lot of memory on the namenode. (Note, however, that small files do not take up any more disk space than is required to store the raw contents of the file. For example, a 1 MB file stored with a block size of 128 MB uses 1 MB of disk space, not 128 MB.) Hadoop Archives, or HAR files, are a file archiving facility that packs files into HDFS blocks more efficiently, thereby reducing namenode memory usage while still allowing transparent access to files. In particular, Hadoop Archives can be used as input to MapReduce.

Using Hadoop Archives

A Hadoop Archive is created from a collection of files using the archive tool. The tool runs a MapReduce job to process the input files in parallel, so you need a MapReduce cluster running to use it.
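Once an archive has been created with the archive tool (the hadoop archive command), its contents are addressed through the har:// URI scheme like any other filesystem. A minimal sketch; the archive name and the paths inside it are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: list files stored inside a Hadoop Archive. The har:// URI is
// layered on the default filesystem, and HarFileSystem resolves it.
public class ListHarContents {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    Path inArchive = new Path("har:///user/demo/files.har/logs");
    FileSystem harFs = inArchive.getFileSystem(conf);

    for (FileStatus status : harFs.listStatus(inArchive)) {
      System.out.println(status.getPath());
    }
    harFs.close();
  }
}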

Limitations

There are a few limitations to be aware of with HAR files. Creating an archive creates a copy of the original files, so you need as much disk space as the files you are archiving to create the archive (although you can delete the originals once you have created the archive). There is currently no support for archive compression, although the files that go into the archive can be compressed (HAR files are like tar files in this respect).

Archives are immutable once they have been created. To add or remove files, you must recreate the archive. In practice, this is not a problem for files that don't change after being written, since they can be archived in batches on a regular basis, such as daily or weekly.

As noted earlier, HAR files can be used as input to MapReduce. However, there is no archive-aware InputFormat that can pack multiple files into a single MapReduce split, so processing lots of small files, even in a HAR file, can still be inefficient.

ANATOMY OF A MAPREDUCE JOB RUN

At the highest level, there are four independent entities:

• The client, which submits the MapReduce job.
• The jobtracker, which coordinates the job run. The jobtracker is a Java application whose main class is JobTracker.
• The tasktrackers, which run the tasks that the job has been split into. Tasktrackers are Java applications whose main class is TaskTracker.
• The distributed filesystem, which is used for sharing job files between the other entities.
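The client's part of this picture is just configuring and submitting the job; the jobtracker and tasktrackers take over from there. A minimal sketch using the classic org.apache.hadoop.mapred API, with hypothetical input and output paths and identity map and reduce functions:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

// Sketch: the client side of a job run. JobClient.runJob() submits the job to
// the jobtracker, which schedules map and reduce tasks on the tasktrackers;
// the job JAR and configuration are shared through the distributed filesystem.
public class SubmitJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitJob.class);
    conf.setJobName("identity-pass-through");

    FileInputFormat.setInputPaths(conf, new Path("/user/demo/input"));   // hypothetical
    FileOutputFormat.setOutputPath(conf, new Path("/user/demo/output")); // hypothetical

    conf.setMapperClass(IdentityMapper.class);    // pass records through unchanged
    conf.setReducerClass(IdentityReducer.class);
    conf.setOutputKeyClass(LongWritable.class);   // key type produced by TextInputFormat
    conf.setOutputValueClass(Text.class);

    JobClient.runJob(conf);                       // blocks until the job completes
  }
}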

Hadoop is now a part of:

Amazon S3

Amazon S3 (Simple Storage Service) is a data storage service. You are billed monthly for storage and data transfer. Transfer between S3 and Amazon EC2 is free. This makes use of S3 attractive for Hadoop users who run clusters on EC2. Hadoop provides two filesystems that use S3:

• S3 Native FileSystem (URI scheme: s3n): a native filesystem for reading and writing regular files on S3. The advantage of this filesystem is that you can access files on S3 that were written with other tools. Conversely, other tools can access files written using Hadoop. The disadvantage is the 5 GB limit on file size imposed by S3. For this reason it is not suitable as a replacement for HDFS (which has support for very large files).

• S3 Block FileSystem (URI scheme: s3): a block-based filesystem backed by S3. Files are stored as blocks, just like they are in HDFS. This permits efficient implementation of renames. This filesystem requires you to dedicate a bucket for the filesystem; you should not use an existing bucket containing files, or write other files to the same bucket. The files stored by this filesystem can be larger than 5 GB, but they are not interoperable with other S3 tools.

There are two ways that S3 can be used with Hadoop's Map/Reduce: either as a replacement for HDFS using the S3 block filesystem (i.e. using it as a reliable distributed filesystem with support for very large files), or as a convenient repository for data input to and output from MapReduce, using either S3 filesystem. In the second case HDFS is still used for the Map/Reduce phase. Note also that by using S3 as an input to MapReduce you lose the data locality optimization, which may be significant.

FACEBOOK

Facebook's engineering team has posted some details on the tools it is using to analyze the huge data sets it collects. One of the main tools it uses is Hadoop, which makes it easier to analyze vast amounts of data. Some interesting tidbits from the post:

• Facebook has multiple Hadoop clusters deployed now, with the biggest having about 2500 cpu cores and 1 PetaByte of disk space. They are loading over 250 gigabytes of compressed data (over 2 terabytes uncompressed) into the Hadoop file system every day and have hundreds of jobs running each day against these data sets. The list of projects that are using this infrastructure has proliferated, from those generating mundane statistics about site usage, to others being used to fight spam and determine application quality.
• Over time, we have added classic data warehouse features like partitioning, sampling and indexing to this environment. This in-house data warehousing layer over Hadoop is called Hive.
• Some of these early projects have matured into publicly released features (like the Facebook Lexicon) or are being used in the background to improve user experience on Facebook (by improving the relevance of search results, for example).

YAHOO!

Yahoo! recently launched the world's largest Apache Hadoop production application. The Yahoo! Search Webmap is a Hadoop application that runs on a more than 10,000 core Linux cluster and produces data that is now used in every Yahoo! Web search query.

The Webmap build starts with every Web page crawled by Yahoo! and produces a database of all known Web pages and sites on the internet and a vast array of data about every page and site. This derived data feeds the Machine Learned Ranking algorithms at the heart of Yahoo! Search.

Some Webmap size data:
• Number of links between pages in the index: roughly 1 trillion links
• Size of output: over 300 TB, compressed!
• Number of cores used to run a single Map-Reduce job: over 10,000
• Raw disk used in the production cluster: over 5 Petabytes

This process is not new. What is new is the use of Hadoop. Hadoop has allowed us to run the identical processing we ran pre-Hadoop on the same cluster in 66% of the time our previous system took. It does that while simplifying administration.

REFERENCES

Tom White, Hadoop: The Definitive Guide, O'Reilly.
http://www.cloudera.com/hadoop-training-thinking-at-scale
http://developer.yahoo.com/hadoop/tutorial/module1.html
http://hadoop.apache.org/core/version_control.html
http://hadoop.apache.org/core/docs/current/api/