An Introduction to Hadoop
By Dan Harvey
“The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing”
Into the past

• Doug Cutting: Lucene search library
• Linear merge of sorted indexes
• Disk was the bottleneck

• How to speed and scale up?
Split over more disks...

Then more machines...
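Splitting the index over more machines is, at heart, partitioning: assign each document to a machine, let every machine build and merge its own sorted index segments, and query them all in parallel. A toy sketch of such a partitioner (hypothetical Java helper, not code from Lucene or from this talk):

// Hypothetical helper: spread documents over N machines by hashing the
// document id, so each machine can build and merge its own index locally.
final class Sharding {
    static int shardFor(String docId, int numMachines) {
        return Math.floorMod(docId.hashCode(), numMachines);
    }
}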
Jim Gray on Disks

[Chart: throughput on a SATA disk by access type. Random: 0.6 MBps. Sequential: 49.1 MBps.]

Barclay, T., Chong, W., & Gray, J. (2003). A Quick Look at Serial ATA Disk Performance.
Jim Gray on Disks

[Chart: time to read a 2TB disk. Random (0.6 MBps): 40.5 days. Sequential (49.1 MBps): 0.5 days.]
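A quick back-of-the-envelope check of the chart, assuming a 2 TiB disk and the throughputs measured above (illustrative Java, not from the talk):

public class DiskReadTime {
    public static void main(String[] args) {
        double diskMB = 2.0 * 1024 * 1024;   // assume 2 TiB, expressed in MB
        double randomMBps = 0.6;             // random-access throughput from the chart
        double sequentialMBps = 49.1;        // sequential throughput from the chart
        double secondsPerDay = 24 * 3600;
        System.out.printf("Random:     %.1f days%n", diskMB / randomMBps / secondsPerDay);      // ~40.5 days
        System.out.printf("Sequential: %.1f days%n", diskMB / sequentialMBps / secondsPerDay);  // ~0.5 days
    }
}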
So...

• Use more disks & machines
• Use sequential disk access

• Linear merge == sequential access! (see the sketch below)
• But how to make it accessible?
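A minimal sketch of that linear merge, assuming two already-sorted term streams (illustrative Java, not Lucene's actual merge code). Each input is read exactly once, front to back, so all disk access stays sequential:

import java.util.Iterator;
import java.util.function.Consumer;

final class SortedMerge {
    // Advance through both sorted inputs in lock-step, always emitting the
    // smaller head; neither input is ever re-read or seeked backwards.
    static void merge(Iterator<String> a, Iterator<String> b, Consumer<String> out) {
        String x = a.hasNext() ? a.next() : null;
        String y = b.hasNext() ? b.next() : null;
        while (x != null || y != null) {
            if (y == null || (x != null && x.compareTo(y) <= 0)) {
                out.accept(x);
                x = a.hasNext() ? a.next() : null;
            } else {
                out.accept(y);
                y = b.hasNext() ? b.next() : null;
            }
        }
    }
}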
Google’s MapReduce

[Slide shows the first page of the paper: Dean, J., & Ghemawat, S. (2004). MapReduce: Simplified Data Processing on Large Clusters. OSDI ’04.]
Map, Shuffle, Reduce
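The canonical word-count job makes the three phases concrete: map emits (word, 1) pairs, the framework shuffles and sorts them by word, and reduce sums the counts for each word. The sketch below uses the standard org.apache.hadoop.mapreduce Java API and is essentially the stock example that ships with Hadoop, trimmed slightly; it is illustrative, not code from this talk.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: emit (word, 1) for every token in a line of input.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: by now the framework has shuffled and sorted by key, so each
    // call sees every count emitted for one word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, this would be launched with something like: hadoop jar wordcount.jar WordCount <input dir> <output dir>.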
Distributed Storage
Replicated Blocks

[Diagram: the stored data split into blocks, with each block copied onto several machines; the text visible in the figure is repeated pages of the MapReduce paper.]
                                                                                                                                                            values associated with and USENIX Association executed on6th Symposium inputexplores Systemsof MapReduce withinbut hides the messy de- 137
                                                                                                                                                                                      the same intermediate and OSDI
                                                                                                                                                                                            cally parallelized key. Many
                                                                                                                                         MapReduce is athat of web documents, summaries of the number the pages computing to                            com- data
                                                                                                                                                                                                                                                large be distributed thewe 4 de- trying to with
                                                                                                                                                                                                                                  tation with Section 6amounts ofSection were to deal perform
                                                                                                                                                                                                                                                                  tions use code
                                                                                                                                                                                                                                                                    complex
                                                                 and distribution of large-scale computations, combined                    computations programming model                  an associ-
                                                                                                                                                                  process large amounts of raw data,                and of computations have environment. across
                                                                                                                                                                                                                  our cluster-based tasks.
                                                                 with an implementation of this interface that achieves                                     real crawled per and generating in this largeThe run-time
                                                                                                                                      ated such as crawled for processing are expressible large model,scribes severalthese issues. including our totails of in in using it as fault-tolerance, data distribution
                                                                                                                                                                 world tasks host, the set of most   machines. as shown
                                                                                                                                            implementation documents, web request modityetc., frequent queries in asystem takes carethe order experiences
                                                                                                                                                                                                                                       Google of in theof                  parallelization, the basis
                                                                                                                                                                                             logs,        to hundreds or thousands of machines
                                                                                                                                                                                                                                    refinements             programming model
                                                                                                                                                                                                                                                                    finish
   1 Introduction                                                high performance on large clusters of commodity PCs.                 data compute various theapaper.derived data, such as inverteda reasonable have data, schedulingSection 5 complexity, we designed aanew
                                                                                                                                                            in
                                                                                                                                            sets. Users specify map function that processesof partitioning the input found time. The the pro- hasand load balancing in library. Our abstraction is in-
                                                                                                                                                               kinds of
                                                                                                                                                                                            details a                                As a reaction to this
                                                                                                                                                                                                                  that we amount of useful. issues of how to par-   performance
                                                                    Section 2 describes the basic programming model and               key/value pair to generate a set of intermediatefunctional style are measurementsmachines, handling ma- forspired by the map and reduce primitives present in Lisp
                                                                                                                                           indices, various representations of
                                                                                                                                                                                            gram’s execution across a set of abstraction that allows us to express the simple computa-
                                                                                                                                                               Programs written inthe graph structure
                                                                                                                                                                                      this key/value               automati-
                                                                                                                                                                                                             allelize the computation, distribute the data, anda handle of
                                                                                                                                                                                                                                   of our implementation               variety
   Over the past five years, the authors and many others at       gives several examples. Section 3 describes an imple-                pairs, and a reduce cally parallelized of the number of large cluster of com- the tions we wereoriginalto perform many other functional languages. We realized that
                                                                                                                                                             function that merges executed on failures,failures conspire to required inter-machine
                                                                                                                                           of web documents, summaries and all intermediate
                                                                                                                                                                                            chine a
                                                                                                                                                                                                      pages
                                                                                                                                                                                                              and managing                                        and but hides
                                                                                                                                                                                                                  tasks. Section 6 explores thetrying simple compu-
                                                                                                                                                                                                                                    obscure the     use of MapReduce within the messy de-
   Google have implemented hundreds of special-purpose           mentation of the MapReduce interface tailored towards                values associated host, the same intermediate key.’04: 6thin takes This of the programmerscomplex inand to most with data distribution
                                                                                                                                                            modity set of most frequent queries Symposium allows tails Systems Design using it as of basis
                                                                                                                                                            USENIX Association OSDI system  communication. care on Operating of parallelization, fault-tolerance,
                                                                                                                                                                                                                                                without any Implementation
                                                                                                                                           crawled per with the machines. The run-timeMany a tation with large amounts experiences code deal the our computations involved applying a map op-
                                                                                                                                                                                                                  Google including our of                                                              137
   computations that process large amounts of raw data,          our cluster-based computing environment. Section 4 de-               real world tasks are expressible in this model, experience with parallelpro- distributed systems to eas-a library. Our each logical is in-
                                                                                                                                                            details of partitioning the inputshownscheduling the and
                                                                                                                                                                                            as data,         these issues. a largeand load balancing in
                                                                                                                                                                                                                                                                  eration to abstraction “record” in our input in order to
                                                                                                                                      in the paper.         gram’s execution across a set of machines, handlingof
                                                                                                                                                                                            ily utilize the resources ma-          distributed system.            compute a set of intermediate key/value pairs, and then
   such as crawled documents, web request logs, etc., to         scribes several refinements of the programming model                                                                                            As a reaction to spired by the mapwe designed primitives present in Lisp
                                                                                                                                                                                                                                   this complexity, and reduce a new
   compute various kinds of derived data, such as inverted       that we have found useful. Section 5 has performance                                       chine failures, and managing the required inter-machine
                                                                                                                                         Programs written in this functional style are automati-                                  and many other the simple computa-a reduce operation to all the values that shared
                                                                                                                                                                                               Our implementation of MapReduce runs on a large                    applying
                                                                                                                                                                                                             abstraction that allows us to express functional languages. We realized that
                                                                                                                                                            communication. This allows programmers without any                                                    the same key, 137in order to combine the derived data ap-
   indices, various representations of the graph structure       measurements of our implementation for a variety of                  cally parallelized and executed’04:a6th Symposium com- commodity machines mostis Implementation the involved applying a map op-
                                                                                                                                      USENIX Association OSDI on large cluster of on of     cluster Operating Systems Design andofhighly scalable:
                                                                                                                                                                                                                                  and
                                                                                                                                                                                                             tions we were trying to performcomputations messy de-
                                                                                                                                                                                                                                             our but hides
   of web documents, summaries of the number of pages            tasks. Section 6 explores the use of MapReduce within                                      experience with parallel and distributed systems computation processes many ter- “record” in our input in orderato
                                                                                                                                      modity machines. The run-time system takes care of the MapReduceto eas-
                                                                                                                                                                                            a typical                             eration to each logical         propriately. Our use of functional model with user-
                                                                                                                                                                                                             tails of parallelization, fault-tolerance, data distribution
                                                                                                                                                            ily utilize the resources of a abytesdistributed thousands of machines. a set of intermediate key/value pairs, reduce operations allows us to paral-
                                                                                                                                      details of partitioning the input data, scheduling the pro- dataandsystem.
                                                                                                                                                                                            large of          on                              Programmers         specified map and and then
   crawled per host, the set of most frequent queries in a       Google including our experiences in using it as the basis                                                                                        load balancingcompute
                                                                                                                                                                                                                                    in a library. Our abstraction is in-
                                                                                                                                                                                            find ma-                               applying a primitives present in Lisp computations easily and to use re-execution
                                                                                                                                                                                                             spired a the map and reduce reduce
                                                                                                                                                               Our implementation of MapReduce runs on by large                                          pro-     lelize large
                                                                                                                                      gram’s execution across a set of machines, handlingthe system easy to use: hundreds of MapReduceoperation to all the values that shared
                                                                                                                                                                                                                                                                  as the primary mechanism for fault tolerance.
                                                                                                                                      chine failures, and cluster of commodity machines and is been implemented functional languages. Weto combine the derived data ap-
                                                                                                                                                            managing the required inter-machine highlymany other and upwards of one order realized that
                                                                                                                                                                                            grams have
                                                                                                                                                                                                             and scalable:        the same key, in thou-
                                                                                                                                      communication. This allows programmers without MapReduce jobs our executed on Google’s clusters a afunctional model with user- this work are a simple and
                                                                                                                                                                                            sand any
                                                                                                                                                            a typical MapReduce computation processes many ter-
                                                                                                                                                                                                             most of
                                                                                                                                                                                                                       are        propriately. Our use of
                                                                                                                                                                                                                           computations involved applying map op-
                                                                                                                                                                                                                                                                     The major contributions of
USENIX Association OSDI ’04: 6th Symposium on Operating Systems Design and Implementation                                       137                                                         every day.                            specified map and reduce operations interface thatparal- automatic parallelization
                                                                                                                                                                                                                                                                  powerful allows us to enables
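The page above describes the core of the model: a user-supplied map function turns each input key/value pair into a set of intermediate key/value pairs, a reduce function merges all values that share the same intermediate key, and the runtime handles partitioning, scheduling, fault tolerance and data movement. As a concrete illustration, a minimal word count sketched against Hadoop's MapReduce Java API follows; it mirrors the shape of the familiar WordCount tutorial example rather than any code from this deck, and the exact driver calls differ slightly between Hadoop releases.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: emit a (word, 1) pair for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // optional map-side pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The map step emits a (word, 1) pair per token and the reduce step sums the counts for each word; registering the reducer as a combiner is an optional optimisation that pre-aggregates counts on the map side before the shuffle.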


An Introduction to Hadoop

  • 1. An Introduction to Hadoop By Dan Harvey
  • 2. “The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing”
  • 3. Into the past • Doug Cutting: Lucene search library • Linear merge sorted indexes • Disk was the bottleneck • How to speed and scale up?
  • 4. Split over more disks... Then more machines...
  • 5. Jim Gray on Disks: throughput on a SATA disk is roughly 0.6 MBps for random access versus 49.1 MBps for sequential access (Barclay, T., Chong, W., & Gray, J. (2003). A Quick Look at Serial ATA Disk Performance).
  • 6. Jim Gray on Disks: reading a full 2 TB disk takes about 40.5 days at 0.6 MBps (random access) but only about 0.5 days at 49.1 MBps (sequential access); the arithmetic is worked through just after this transcript.
  • 7. So... • Use more disks & machines • Use sequential disk access • Linear merge == sequential access! • But how to make it accessible? (a sequential-merge sketch follows this transcript)
  • 8. MapReduce: Simplified Data Processing on Large Clusters, Jeffrey Dean and Sanjay Ghemawat, Google, Inc. (the "Google's MapReduce" slide reproduces the paper's first page): users specify a map function that processes a key/value pair to generate intermediate key/value pairs and a reduce function that merges all intermediate values sharing the same key; the runtime parallelizes execution across a large cluster of commodity machines and handles partitioning, scheduling, fault tolerance, data distribution and load balancing.
  • 10. Distributed Storage MapReduce: Simplified Data Processing on Large Clusters Jeffrey Dean and Sanjay Ghemawat MapReduce: Simplified Data Processing on Large Clusters jeff@google.com, sanjay@google.com Google, Inc. Jeffrey Dean and Sanjay Ghemawat MapReduce: Simplified Data Processing on Large Clusters jeff@google.com, sanjay@google.com Abstract given day, etc. Most such computations are conceptu- Google, Inc. ally straightforward. However, the input data is usually Jeffrey Dean and Sanjay Ghemawatmodel and an associ- MapReduce is a programming large and the computations have to be distributed across ated implementation for processing and generating large hundreds or thousands of machines in order to finish in jeff@google.com, sanjay@google.com function that processes a data sets. Users specify a map MapReduce: Simplified Data Processing on Large Clusters key/value pair to generate a set of intermediate key/value a reasonable amount of time. The issues of how to par- allelize the computation, distribute the data, and handle Abstract a reduce function that merges all intermediate such computations are conceptu- pairs, Google, Inc. and given day, etc. Most failures conspire to obscure the original simple compu- values associated with the same intermediate key. ManyHowever, the input data is usually MapReduce is a programming model and an associ- ally straightforward. tation with large amounts of complex code to deal with Jeffrey Dean and Sanjay Ghemawat real world tasks are expressible largemodel, computations have to be distributed across this and the ated implementation for processing and generating large in hundreds or as shown of these issues. order to finish in thousands machines in data sets. Users specify a in thefunction that processes a map paper. a reasonable amount of time. Thereaction to thisto par- As a issues of how complexity, we designed a new jeff@google.com, sanjay@google.com Abstract Programs written key/value Most such are automati- areabstraction that allows given day, etc. computations conceptu- key/value pair to generate a set of intermediate in this functional stylethe computation, distribute the data, and us to express the simple computa- allelize handle ally straightforward.large clusterthe com- data is usually However, of input Google, Inc. MapReduce is apairs, and a reduce function that merges all intermediate on a programming model and cally parallelized andand the computations conspirebe distributed across trying to perform but hides the messy de- an associ- large executed failures have to to obscure thewe were simple compu- tions original values associated with the modity machines. The run-time system takes care of the same intermediate key. Many ated implementation for processing and generating large tation with large amounts tails of parallelization, fault-tolerance, data distribution of finish in code to deal with complex hundreds or thousands of machines in order to data sets. Users specifyworld tasks are expressible in this model, as shown data, scheduling the pro- of how to par- real a map function that processes partitioning the input these issues. details of a and load balancing in a library. Our abstraction is in- gram’s execution reasonable amount of time. 
The issues key/value pair to generatepaper.of intermediate key/value in the a set a across a set of machines, handling ma- As distribute to data, and handle map and reduce primitives present in Lisp spired by the allelize the computation, a reaction thethis complexity, we designed a new chine failures, and managing the required inter-machine pairs, and a reduce function that merges all intermediate style are automati- toabstraction that allowssimple compu-other functional languages. We realized that and manythe simple computa- Abstract given day, etc. Most such computations are conceptu- Programs written in this functional conspire obscure the original us to express failuresallows Replicated Blocks communication. This of com- programmers without any values associated with the same intermediate key. on a large cluster with large amounts of complex code to deal but hides the messy involved applying a map op- cally parallelized and executed Many most of our computations de- ally straightforward. However, the input data is usually tation tions we were trying to perform with MapReduce is a programming model and an associ- large and the computations have to be distributed across real world tasks are expressible in this model, as shown withthese issues. distributedof parallelization, fault-tolerance, data distribution in our input in order to experience parallel and modity machines. The run-time system takes care of the tails systems to eas- eration to each logical “record” ated implementation for processing and generating large in the paper. ily utilize the resources ofpro- a large distributed system. details of partitioning the input data, scheduling a reaction to and load balancing in a library. new set of intermediate key/value pairs, and then compute a hundreds or thousands of machines in order to finish in As the this complexity, we designed a Our abstraction is in- data sets. Users specify a map function that processes a a reasonable amount of time. The issues of how to par- gram’s functional style a setOurmachines, handling ma- allows usrunsthe map and reduce primitives present in Lisp all the values that shared Programs written in thisexecution across are automati-of implementation of that abstraction spired to express the simple computa- reduce operation to MapReduce by on a large applying a key/value pair to generate a set of intermediate key/value allelize the computation, distribute the data, and handle cally parallelized and executed on and managing of com-commodity machines and to many other functional messy de- on Largethat chine failures, a large cluster the requiredtions we were trying is performscalable:Processing We realized combine the derived data ap- cluster of MapReduce: Simplified Data inter-machine and highly but hides thethe same key, in order to Clusters languages. pairs, and a reduce function that merges all intermediate failures conspire to obscure the original simple compu- communication. This allows programmers without any care of the most of our computations propriately. Our use mapaop- modity machines. The run-time system takes a typical MapReduce computation processes many ter- involved applying a of functional model with user- tails of parallelization, fault-tolerance, data distribution values associated with the same intermediate key. Many tation with large amounts of complex code to deal with experience data, scheduling distributed on thousands of machines. 
Programmers “record” in map and in order to abytes of data details of partitioning the inputwith parallel and the pro- systems to balancingeration to each Our abstraction is in- and load eas- in a library. logical specified our input reduce operations allows us to paral- real world tasks are expressible in this model, as shown these issues. a large system easy to by the map Jeffrey Dean and Sanjay in large computations then and to use re-execution gram’s execution across a set the machines, of find thema- spired use: MapReduce intermediate key/value pairs, and easily ily utilize of resources handling distributed system. hundreds ofreduce set of pro- present Ghemawat compute a primitives and lelize Lisp in the paper. As a reaction to this complexity, we designed a new MapReduce: Simplified Data Processing one thou- as the primary grams have been implemented and applying a on Large Clustersvalues that shared fault tolerance. upwards ofreduce operation to all the mechanism for chine failures, and managing the required inter-machine runs on a large functional languages. We realized that Our implementation of MapReduce and many other Programs written in this functional style are automati- abstraction that allows us to express the simple computa- sand MapReduce jobs are executed onjeff@google.com, sanjay@google.com contributions ap-this work are a simple and cluster of programmers without any Google’s clusters to combine the derived data of the same key, in order communication. This allows commodity machines and is highly of our computations involved applying a map op- The major most scalable: cally parallelized and executed on a large cluster of com- tions we were trying to perform but hides the messy de- experience with parallel andMapReduce systems today.processes many each logical “record” Our use of apowerfulto model that enables automatic parallelization every eas- a typical distributed computation eration to ter- propriately. in our input in order interface with user- functional modity machines. The run-time system takes care of the tails of parallelization, fault-tolerance, data distribution ily utilize the resources of oflarge distributed system. specified mapGoogle, Inc. distribution of large-scale computations, combined and abytes a data on thousands of machines.Dean and set of intermediate key/value pairs, and then allows us to paral- Jeffrey compute a Sanjay Ghemawatand reduce with an implementation of this interface that achieves Programmers operations details of partitioning the input data, scheduling the pro- and load balancing in a library. Our abstraction is in- Our implementation the system easy to use:1hundreds of MapReduceapro- find of MapReduce runs on a large lelize large computations easily and to use re-execution applying reduce operation to all the values that shared Introduction gram’s execution across a set of machines, handling ma- spired by the map and reduce primitives present in Lisp MapReduce: Simplified scalable: the one key, Large Clusters derived Section 2 describeslargebasic programming model and cluster of commodity machines and is highlyData upwards ofsame thou- in order toprimary mechanism for fault tolerance. grams have been implemented and Processing on as the combine the jeff@google.com, sanjay@google.com high performance on data ap- the clusters of commodity PCs. chine failures, and managing the required inter-machine and many other functional languages. 
(Slide: first page of the MapReduce paper, Dean & Ghemawat, OSDI '04. The abstract and introduction explain that users write a map function that turns each input record into intermediate key/value pairs and a reduce function that merges all values sharing the same intermediate key; the runtime automatically parallelises such programs across large clusters of commodity machines, handling input partitioning, scheduling, inter-machine communication and machine failures, with re-execution as the primary fault-tolerance mechanism. Hundreds of such programs have been written at Google and thousands of MapReduce jobs run on its clusters every day.)
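To make the map and reduce steps concrete, here is a minimal word-count sketch written against Hadoop's org.apache.hadoop.mapreduce Java API. It is an illustration only, not part of the original slides: the class names are arbitrary and the input/output paths are taken from the command line.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // map: (offset, line of text) -> (word, 1) for every word in the line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // reduce: (word, [1, 1, ...]) -> (word, total count)
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

When submitted with hadoop jar, the framework splits the input files across map tasks, shuffles the intermediate (word, 1) pairs so that all values for a given word reach the same reducer, and each reducer emits the total count, mirroring the map / merge-by-key / reduce structure described in the paper.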

Editor's Notes

  1. An introduction to Hadoop. The slides will be a mix of technical and non-technical content, pitched at a fairly high level; not sure of the audience.
  4. To achieve both speed and scale, you need both of these.
  10.-13. Data is split into blocks; each block is replicated three or more times on different machines; the result is fault-tolerant storage (this is HDFS's storage model; see the Java sketch after these notes).
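The notes for slides 10-13 describe the block-and-replication storage model that HDFS implements. As a minimal sketch of what that looks like from client code, the snippet below writes a small file to HDFS while explicitly requesting a replication factor and block size. The NameNode address and file path are placeholders, and in practice these values usually come from the cluster's hdfs-site.xml (dfs.replication, dfs.blocksize) rather than being passed per call.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder NameNode address; a real client would normally pick this up from core-site.xml.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

    try (FSDataOutputStream out = fs.create(
        new Path("/data/example.txt"),   // placeholder path
        true,                            // overwrite if the file already exists
        4096,                            // client-side buffer size in bytes
        (short) 3,                       // replication: three copies on different machines
        128L * 1024 * 1024)) {           // block size: the file is split into 128 MB blocks
      out.writeUTF("hello hdfs");
    }

    fs.close();
  }
}

Each block of the file is stored on several different DataNodes, so losing any single machine or disk does not lose data; this is the fault tolerance the notes refer to.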