Hadoop 2.x Configuration &
Map/Reduce Performance Tuning
Suhas Gogate, Architect, Hadoop Engineering
CF-Meetup, SFO (20th May 2014)
A NEW PLATFORM FOR A NEW ERA
About Me (https://www.linkedin.com/in/vgogate)
 Since 2008, active in Hadoop infrastructure and ecosystem components
with leading Hadoop technology companies
– Yahoo, Netflix, Hortonworks, EMC-Greenplum/Pivotal
 Founder and PMC member/committer of the Apache Ambari project
 Contributed Apache “Hadoop Vaidya” – Performance diagnostics for M/R
 Prior to Hadoop,
– IBM Almaden Research (2000-2008), CS Software & Storage systems.
– In the early days (1993) of my career, worked with the team that built the first Indian
supercomputer, PARAM (a Transputer-based MPP system), at the Center for
Development of Advanced Computing (C-DAC, Pune)
Agenda
 Introduction to Hadoop 2.0
– HDFS, YARN, Map/Reduce (M/R)
 Hadoop Cluster
– Hardware selection & Capacity Planning
 Key Hadoop Configuration Parameters
– Operating System, Local FS, HDFS, YARN
 Performance Tuning of M/R Applications
– Hadoop Vaidya demo (MAPREDUCE-3202)
Introduction to Hadoop
Hadoop 1.0 -> 2.0
Image Courtesy Arun Murthy, Hortonworks
HDFS Architecture
Hadoop Map/Reduce 1.0
Hadoop YARN 2.0 Architecture
Hardware selection & Capacity planning
Hadoop Cluster: Hardware selection
 Hadoop runs on a cluster of commodity machines
– Does NOT mean unreliable & low-cost hardware
– 2008:
▪ Single/Dual socket, 1+ GHz, 4-8GB RAM, 2-4 Cores, 2-4 1TB SATA drives, 1GbE NIC
– 2014+:
▪ Dual socket 2+ GHz, 48-64GB RAM, 12-16 cores, 12-16 x 1-3TB SATA drives, 2-4 bonded NICs
 A Hadoop cluster comprises separate h/w profiles for:
– Master nodes
▪ More reliable & available configuration
▪ Lower storage/memory requirements compared to worker/data nodes (except the HDFS Name node)
– Worker nodes
▪ Not necessarily less reliable although not configured for high availability (e.g. JBOD not RAID)
▪ Resource requirements on storage, memory, CPU & network b/w are based on workload profile
See Appliance/Reference architectures from EMC/IBM/Cisco/HP/Dell etc
Cluster capacity planning
 Initial cluster capacity is commonly based on HDFS data size & growth projection
– Keep data compression in mind
 Cluster size
– HDFS space per node = (raw disk space per node – 20-25% for non-DFS local storage) / 3 (replication factor)
– Cluster nodes = total HDFS space required / HDFS space per node (see the worked sketch at the end of this list)
 Start with balanced individual node configuration in terms of CPU/Memory/Number of disks
– Keep provision for growth as you learn more about the workload
– Guidelines for individual worker node configuration
▪ Latest-generation processor(s) with 12-16 cores total is a reasonable start
▪ 4 to 6 GB memory per core
▪ 1 to 1.5 disks per core
▪ 1 to 3 TB per SATA disk
▪ 1GbE NIC
 Derive resource requirements for service roles (typically the Name node)
– NN -> 4-6 cores; heap: default 2GB + roughly 1 GB per 100TB of raw disk space or per 1M HDFS objects
– RM/NM/DN daemons -> 2GB RAM, 1 core (rule of thumb)
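To make the sizing arithmetic above concrete, here is a minimal shell sketch using illustrative numbers (12 x 3TB drives per node, 25% non-DFS reservation, ~1PB target HDFS capacity); none of these values come from the slides, so substitute your own.

# Sketch: worker-node count from the formulas above (all values illustrative)
RAW_TB_PER_NODE=$((12 * 3))      # 12 disks x 3 TB each
NON_DFS_PCT=25                   # reserve ~20-25% of raw space for non-DFS local storage
REPLICATION=3
HDFS_TB_PER_NODE=$(( RAW_TB_PER_NODE * (100 - NON_DFS_PCT) / 100 / REPLICATION ))   # = 9 TB
TOTAL_HDFS_TB=1000               # target HDFS capacity (~1 PB)
NODES=$(( (TOTAL_HDFS_TB + HDFS_TB_PER_NODE - 1) / HDFS_TB_PER_NODE ))              # round up -> 112
echo "HDFS space per node: ${HDFS_TB_PER_NODE} TB, worker nodes needed: ${NODES}"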
Ongoing Capacity Planning
 Workload profiling
– Make sure applications running on the cluster are well tuned to utilize cluster
resources
– Gather average CPU/IO/Mem/Disk utilization stats across worker nodes
– Identify the resource bottlenecks for your workload and provision accordingly
▪ E.g., add more cores per node if CPU is the bottleneck compared to other resources such as storage or I/O bandwidth
▪ E.g., if memory is the bottleneck (i.e. low utilization of CPU and I/O), add more memory per node so that more tasks can run per node
– HDFS storage growth rate & current capacity utilization
 Latency sensitive applications
– Do planned project on-boarding
▪ Estimate resource requirement ahead of time (Capacity Calculator)
– Use Resource Scheduler
▪ Schedule jobs to maximize resource utilization over time
Hadoop Cluster Configuration
Key Hadoop Cluster Configuration – OS
 Operating System (RHEL/CentOS 6.1+)
– Mount disk volumes with noatime (speeds up reads); see the command sketch after this list
– Disable transparent huge page compaction
▪ # echo never > /sys/kernel/mm/redhat_transparent_hugepages/defrag
– Turn off caching on disk controller
– vm.swappiness = 0
– vm.overcommit_memory = 1
– vm.overcommit_ratio = 100
– net.core.somaxconn=1024 (default socket listen queue size 128)
– Choice of Linux I/O scheduler
 Local File System
– Ext3 (reliable and recommended) vs Ext4 vs XFS
– Increase max open file descriptors per user (default 1K; raise to e.g. 32K)
– Reduce FS reserved block space (default 5% -> 0% on non-OS partitions)
 BIOS
– Disabling BIOS power-saving options may boost node performance
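A minimal shell sketch of the OS-level settings above for RHEL/CentOS 6.x; the THP path, device names, and mount points below are illustrative and vary by kernel and host, so treat this as a template rather than a recipe.

# Illustrative OS tuning for a Hadoop worker node (run as root)
echo never > /sys/kernel/mm/redhat_transparent_hugepages/defrag   # disable THP compaction (exact path varies by kernel)
sysctl -w vm.swappiness=0
sysctl -w vm.overcommit_memory=1
sysctl -w vm.overcommit_ratio=100
sysctl -w net.core.somaxconn=1024        # default listen backlog is 128
mount -o remount,noatime /data1          # repeat for each data volume (mount point is an example)
tune2fs -m 0 /dev/sdb1                   # 0% reserved blocks on a non-OS ext partition (device is an example)
ulimit -n 32768                          # raise open-file limit; persist via /etc/security/limits.conf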
Key Hadoop Cluster Configuration - HDFS
 Use multiple disk mount points
– dfs.datanode.data.dir (use all attached disks to data node)
– dfs.namenode.name.dir (NN metadata redundancy on disk)
 DFS Block size
– 128MB default (can be overridden with a different block size when writing new files)
 Local file system buffer
– io.file.buffer.size = 131072 (128KB)
– io.sort.factor = 50 to 100 (number of streams merged at once while sorting files)
 NN/DN concurrency
– dfs.namenode.handler.count (100)
– dfs.datanode.max.transfer.threads (4096)
 DataNode failed volumes tolerated
– dfs.datanode.failed.volumes.tolerated (an hdfs-site.xml sketch follows this list)
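Most of the HDFS settings above live in hdfs-site.xml (io.file.buffer.size belongs in core-site.xml and io.sort.factor in mapred-site.xml). A minimal sketch follows; the mount points and values are illustrative, not recommendations for any specific cluster.

# Sketch: key hdfs-site.xml properties (paths and values are illustrative)
cat > hdfs-site.sketch.xml <<'EOF'
<configuration>
  <property><name>dfs.datanode.data.dir</name>
    <value>/data1/dfs/dn,/data2/dfs/dn,/data3/dfs/dn</value></property>
  <property><name>dfs.namenode.name.dir</name>
    <value>/data1/dfs/nn,/data2/dfs/nn</value></property>
  <property><name>dfs.blocksize</name><value>134217728</value></property>  <!-- 128MB -->
  <property><name>dfs.namenode.handler.count</name><value>100</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>4096</value></property>
  <property><name>dfs.datanode.failed.volumes.tolerated</name><value>1</value></property>
</configuration>
EOF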
Key Hadoop Cluster Configuration - HDFS
 Short-circuit read (see the sketch after the JVM options below)
– dfs.client.read.shortcircuit = true
– dfs.domain.socket.path
 JVM options (hadoop-env.sh)
– export HADOOP_NAMENODE_OPTS="
▪ -Dcom.sun.management.jmxremote -Xms${dfs.namenode.heapsize.mb}m
▪ -Xmx${dfs.namenode.heapsize.mb}m
▪ -Dhadoop.security.logger=INFO,DRFAS
▪ -Dhdfs.audit.logger=INFO,RFAAUDIT
▪ -XX:ParallelGCThreads=8
▪ -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
▪ -XX:+HeapDumpOnOutOfMemoryError -XX:ErrorFile=${HADOOP_LOG_DIR}/hs_err_pid%p.log
$HADOOP_NAMENODE_OPTS"
– export HADOOP_DATANODE_OPTS="
▪ -Dcom.sun.management.jmxremote
▪ -Xms${dfs.datanode.heapsize.mb}m
▪ -Xmx${dfs.datanode.heapsize.mb}m
▪ -Dhadoop.security.logger=ERROR,DRFAS $HADOOP_DATANODE_OPTS"
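Short-circuit reads also need the native Hadoop library and a local domain socket; a minimal hdfs-site.xml fragment is sketched below with an illustrative socket path.

# Sketch: hdfs-site.xml entries for short-circuit local reads (socket path is illustrative)
cat > hdfs-shortcircuit.sketch.xml <<'EOF'
<property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
<property><name>dfs.domain.socket.path</name><value>/var/lib/hadoop-hdfs/dn_socket</value></property>
EOF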
Key Hadoop Cluster Configuration - YARN
 Use multiple disk mount points on the worker node
– yarn.nodemanager.local-dirs
– yarn.nodemanager.log-dirs
 Memory allocated for node manager containers
– yarn.nodemanager.resource.memory-mb
▪ Total memory on worker node allocated for all the containers running in parallel
– yarn.scheduler.minimum-allocation-mb
▪ Minimum memory allocation for a map/reduce task container
▪ Together with the CPU cores and memory available on the node, this parameter limits the maximum number of containers per node
– yarn.scheduler.maximum-allocation-mb
▪ Maximum allocation for a single container; commonly set no higher than yarn.nodemanager.resource.memory-mb
– mapreduce.map.memory.mb, mapreduce.reduce.memory.mb
▪ Default container memory for map/reduce tasks; users can override them through job configuration (a sizing sketch follows this list)
– yarn.nodemanager.vmem-pmem-ratio = 2.1
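As a concrete illustration of the memory settings above, the sketch below sizes a hypothetical 64GB worker node: roughly 48GB handed to YARN containers, a 2GB minimum allocation, and 4GB/8GB map/reduce containers with heaps at about 80% of the container size. All numbers are assumptions for illustration, not recommendations.

# Sketch: yarn-site.xml / mapred-site.xml memory settings for a hypothetical 64GB node
cat > yarn-memory.sketch.xml <<'EOF'
<!-- yarn-site.xml -->
<property><name>yarn.nodemanager.resource.memory-mb</name><value>49152</value></property>
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>2048</value></property>
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>49152</value></property>
<property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>2.1</value></property>
<!-- mapred-site.xml -->
<property><name>mapreduce.map.memory.mb</name><value>4096</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>8192</value></property>
<property><name>mapreduce.map.java.opts</name><value>-Xmx3276m</value></property>     <!-- ~80% of container -->
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx6553m</value></property>
EOF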
Key Hadoop Cluster Configuration - YARN
 Use Yarn log aggregation
– yarn.log-aggregation-enable
 RM/NM JVM options (yarn-env.sh)
– export YARN_RESOURCEMANAGER_HEAPSIZE=2048 # value is in MB (2GB)
– export YARN_NODEMANAGER_HEAPSIZE=2048 # value is in MB (2GB)
– YARN_OPTS="$YARN_OPTS -server
– -Djava.net.preferIPv4Stack=true
– -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
– -XX:+HeapDumpOnOutOfMemoryError
– -XX:ErrorFile=${YARN_LOG_DIR}/hs_err_pid%p.log"
Hadoop Configuration Advisor - Tool
 Given
– Data size, Growth rate, Workload profile, Latency/QoS requirements
 Suggests
– Capacity requirement for Hadoop cluster (reasonable starting point)
▪ Resource requirements for various service roles
▪ Hardware profiles for master/worker nodes (no specific h/w vendor)
– Cluster services topology i.e. placement of service roles to nodes
– Optimal services configuration for given hardware specs
Performance Tuning of M/R applications
Hadoop Map/Reduce - WordCount
Optimizing M/R applications – key features
 Speculative Execution
 Use of Combiner
 Data Compression
– Intermediate: LZO(native)/Snappy, Output: BZip2, Gzip
 Avoid map side disk spills
– io.sort.mb (mapreduce.task.io.sort.mb in Hadoop 2)
 Increased replication factor for out-of-band HDFS access
 Distributed cache
 Map output partitioner
 Appropriate granularity for M/R tasks (see the job-submission sketch after this list)
– mapreduce.input.fileinputformat.split.minsize
– mapreduce.input.fileinputformat.split.maxsize
– Optimal number of reducers
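Several of these knobs can be applied at job-submission time without code changes (a combiner and a custom partitioner still have to be set in the job driver). A hedged sketch using the standard examples jar follows; the jar name, paths, and values are illustrative.

# Sketch: common tuning knobs passed as -D options at submission time (paths/values illustrative)
hadoop jar hadoop-mapreduce-examples-2.x.x.jar wordcount \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -Dmapreduce.task.io.sort.mb=256 \
  -Dmapreduce.input.fileinputformat.split.minsize=134217728 \
  -Dmapreduce.job.reduces=32 \
  -Dmapreduce.map.speculative=true \
  /wordcount-input /wordcount-output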
Performance Benchmark – Teragen
 Running teragen out of the box will not fully utilize the cluster's hardware
resources
 Determine number of map tasks to run on each node to exploit
max I/O bandwidth
– Depends on number of disks on each node
 Example (consolidated as one command below)
– 10 nodes, 5 disks/node, 50GB of memory per node available for M/R tasks
– hadoop jar hadoop-mapreduce-examples-2.x.x.jar teragen
– -Dmapred.map.tasks=50 -Dmapreduce.map.memory.mb=10240
– 10000000000 /teragenoutput
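Put together as one command (jar path and output directory follow the example above; teragen rows are 100 bytes each, so 10,000,000,000 rows is roughly 1TB of data):

# Sketch: teragen sized for ~5 concurrent 10GB map containers on each of 10 nodes
hadoop jar hadoop-mapreduce-examples-2.x.x.jar teragen \
  -Dmapred.map.tasks=50 \
  -Dmapreduce.map.memory.mb=10240 \
  10000000000 /teragenoutput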
Hadoop M/R Benchmark – Terasort
 Terasort Example
hadoop jar hadoop-mapreduce/hadoop-mapreduce-examples-2.x.x.jar terasort \
– -Ddfs.replication=1 -Dmapreduce.task.io.sort.mb=500
– -Dmapreduce.map.sort.spill.percent=0.9
– -Dmapreduce.reduce.shuffle.parallelcopies=10
– -Dmapreduce.reduce.shuffle.memory.limit.percent=0.1
– -Dmapreduce.reduce.shuffle.input.buffer.percent=0.95
– -Dmapreduce.reduce.input.buffer.percent=0.95
– -Dmapreduce.reduce.shuffle.merge.percent=0.95
– -Dmapreduce.reduce.merge.inmem.threshold=0
– -Dmapreduce.job.speculative.speculativecap=0.05
– -Dmapreduce.map.speculative=false
– -Dmapreduce.reduce.speculative=false -Dmapreduce.job.jvm.numtasks=-1
Hadoop M/R Benchmark – Terasort
– -Dmapreduce.job.reduces=84 -Dmapreduce.task.io.sort.factor=100
– -Dmapreduce.map.output.compress=true
– -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
– -Dmapreduce.job.reduce.slowstart.completedmaps=0.4
– -Dmapreduce.reduce.merge.memtomem.enabled=false
– -Dmapreduce.reduce.memory.totalbytes=12348030976
– -Dmapreduce.reduce.memory.mb=12288
– -Dmapreduce.reduce.java.opts="-Xms11776m -Xmx11776m -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=4"
– -Dmapreduce.map.memory.mb=4096
– -Dmapreduce.map.java.opts="-Xmx1356m"
– /terasort-input /terasort-output
Hadoop Vaidya: Performance diagnostic tool
Hadoop Vaidya: Rule based performance diagnostic tool
• Rule based performance diagnosis of M/R jobs
– Set of pre-defined diagnostic rules
– Diagnostic rules execution against job config &
Job History logs
– Targeted advice for discovered problems
• Extensible framework
– You can add your own rules
• Based on a rule template and published job counters
– Write complex rules using existing simpler rules
Vaidya: An expert (versed in his own profession, esp. in medical science), skilled in the art of healing; a physician
Hadoop Vaidya: Diagnostic Test Rule
<DiagnosticTest>
<Title>Balanced Reduce Partitioning</Title>
<ClassName>
org.apache.hadoop.vaidya.postexdiagnosis.tests.BalancedReducePartitioning
</ClassName>
<Description>
This rule tests as to how well the input to reduce tasks is balanced
</Description>
<Importance>High</Importance>
<SuccessThreshold>0.40</SuccessThreshold>
<Prescription>advice</Prescription>
<InputElement>
<PercentReduceRecords>0.85</PercentReduceRecords>
</InputElement>
</DiagnosticTest>
Hadoop Vaidya: Report Element
<TestReportElement>
<TestTitle>Balanced Reduce Partitioning</TestTitle>
<TestDescription>
This rule tests as to how well the input to reduce tasks is balanced
</TestDescription>
<TestImportance>HIGH</TestImportance>
<TestResult>POSITIVE(FAILED)</TestResult>
<TestSeverity>0.69</TestSeverity>
<ReferenceDetails>
* TotalReduceTasks: 4096
* BusyReduceTasks processing 0.85% of total records: 3373
* Impact: 0.70
</ReferenceDetails>
<TestPrescription>
* Use the appropriate partitioning function
* For streaming job consider following partitioner and hadoop config parameters
* org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
* -jobconf stream.map.output.field.separator, -jobconf stream.num.map.output.key.fields
</TestPrescription>
</TestReportElement>
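As an illustration of that prescription for a streaming job, the key-field-based partitioner can be supplied on the command line roughly as below; the streaming jar name, mapper/reducer commands, separator, and key-field count are illustrative, and recent releases use -D in place of the deprecated -jobconf.

# Sketch: streaming job with KeyFieldBasedPartitioner (all values illustrative)
hadoop jar hadoop-streaming-2.x.x.jar \
  -Dstream.map.output.field.separator=. \
  -Dstream.num.map.output.key.fields=4 \
  -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
  -mapper cat -reducer cat \
  -input /streaming-input -output /streaming-output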
Hadoop Vaidya: Example Rules
 Balanced Reduce Partitioning
– Check if intermediate data is well partitioned among reducers.
 Map/Reduce tasks reading HDFS files as side effect
– Checks if HDFS files are being read as a side effect, causing an access bottleneck across
map/reduce tasks
 Percent Re-execution of Map/Reduce tasks
 Granularity of Map/Reduce task execution
 Map tasks data locality
– This rule detects % data locality for Map tasks
 Use of Combiner & Combiner efficiency
– Checks if there is potential for using a combiner after the map stage
 Intermediate data compression
– Checks if intermediate data is compressed to lower the shuffle time
Hadoop Vaidya: Demo
Thank you!
• For Hadoop Vaidya demo
• Download Pivotal HD single node VM
(https://network.gopivotal.com/products/pivotal-hd)
• Run an M/R job
• After the job completes, go to the Resource Manager UI
• Select Job History
• To view the diagnostic report, click the “Vaidya Report” link in the left menu
• Vaidya Patch submitted to Apache Hadoop
• https://issues.apache.org/jira/browse/MAPREDUCE-3202
A NEW PLATFORM FOR A NEW ERA