Abhishek Sinha is a senior product manager at Amazon for Amazon EMR. Amazon EMR makes it easy to run data processing frameworks such as Hadoop, Spark, and Presto on AWS. It provides a managed platform and tools to launch clusters in minutes that leverage the elasticity of AWS. Customers can customize clusters and choose from different applications, instance types, and access methods. Amazon EMR allows compute and storage to be separated, so low-cost Amazon S3 can serve as the persistent store while clusters are dynamically scaled to match the workload.
2. Amazon EMR
Making it easy, secure and cost-effective to run
data processing frameworks on the AWS cloud
3. Amazon EMR
• Managed platform
• Hadoop MapReduce, Spark, Presto,
and more
• Launch clusters in minutes
• Apache Bigtop based distribution
• Leverage the elasticity of the cloud
• Added security features
• Pay by the hour and save with Spot
• Flexibility to customize
• Programmable Infrastructure
4. What do I need to build a cluster?
1. Choose instances
2. Choose your software
3. Choose your access method
5. Cluster composition
• Master node – NameNode (HDFS), ResourceManager (YARN), and other components
• Core instance group – HDFS DataNode and YARN NodeManager
• Task instance groups – YARN NodeManager only
6. Choice of multiple instances
• CPU: c3 and c4 families – machine learning
• Memory: m2 and r3 families – in-memory (Spark & Presto)
• Disk/IO: d2 and i2 families – large HDFS
• General purpose: m1, m3, and m4 families – batch processing
Or add EBS volumes if you need additional on-cluster storage.
11. Use the AWS CLI to easily create clusters:
aws emr create-cluster \
  --release-label emr-4.3.0 \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge \
    InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
Or use your favorite SDK for programmatic provisioning:
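For example, with the AWS SDK for Python the same cluster maps onto the EMR `RunJobFlow` API. A minimal sketch below just builds the request structure that mirrors the CLI example; the cluster name is a hypothetical placeholder, and the actual `boto3` call is commented out because it needs AWS credentials.

```python
# Sketch: the request body an SDK would send to the EMR RunJobFlow API,
# mirroring the CLI example above. The cluster name is a placeholder and
# the boto3 call is commented out (requires credentials and the boto3 library).
instance_groups = [
    {"InstanceRole": "MASTER", "InstanceCount": 1, "InstanceType": "m3.xlarge"},
    {"InstanceRole": "CORE", "InstanceCount": 2, "InstanceType": "m3.xlarge"},
]

cluster_request = {
    "Name": "my-emr-cluster",  # hypothetical name
    "ReleaseLabel": "emr-4.3.0",
    "Instances": {"InstanceGroups": instance_groups},
}

# import boto3
# emr = boto3.client("emr")
# response = emr.run_job_flow(**cluster_request)
print(cluster_request["ReleaseLabel"])
```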
19. EMR can process data from many sources
• Hadoop Distributed File System (HDFS)
• Amazon S3 (EMRFS)
• Amazon DynamoDB, Redshift, Aurora, RDS
• Amazon Kinesis
• Other applications running in your architecture (Kafka, Elasticsearch, etc.)
20. Amazon S3 is your persistent data store
• 11 9’s of durability
• $0.03 / GB / month in US East
• Lifecycle policies
• Available across AZs
• Easy access
21. The EMR File System (EMRFS)
• Allows you to leverage S3 as a file system for Hadoop
• Streams data directly from S3
• Cluster still uses local disk/HDFS for intermediates
• Better read/write performance and error handling than open source components
• Optional consistent view for consistent listing
• Support for encryption
• Fast listing of objects
22. Going from HDFS to S3
CREATE EXTERNAL TABLE serde_regex(
  host STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
LOCATION 'samples/pig-apache/input/'
23. Going from HDFS to S3
CREATE EXTERNAL TABLE serde_regex(
  host STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
LOCATION 's3://elasticmapreduce/samples/pig-apache/input/'
34. Spot Integration with EMR
• Can provision instances from the Spot Market
• Replaces a spot instance in case of interruption
• Impact of interruption
• Master Node – Can lose the cluster
• Core Node – Can lose data stored in HDFS
• Task Nodes – lose the task (but the task will run elsewhere)
35. Scale up with Spot Instances
10-node cluster running for 14 hours at $1.00 per instance-hour
Cost = 1.00 × 10 × 14 = $140
51. Use DataFrames to easily interact with data
• Distributed collection of data organized in columns
• An extension of the existing RDD API
• Optimized for query execution
52. Easily create DataFrames from many formats
Additional libraries for Spark SQL Data Sources at spark-packages.org
53. Load data with the Spark SQL Data Sources API
Additional libraries at spark-packages.org
55. Use DataFrames for machine learning
• Spark ML libraries (replacing MLlib) use DataFrames as input/output for models
• Create ML pipelines with a variety of distributed algorithms
56. Create DataFrames on streaming data
• Access data in Spark Streaming DStream
• Create SQLContext on the SparkContext used for the Spark Streaming application for ad hoc queries
• Incorporate DataFrame in Spark Streaming application
• Checkpointing streaming jobs
58. Use R to interact with DataFrames
• SparkR package for using R to manipulate DataFrames
• Create SparkR applications or interactively use the SparkR shell (no Zeppelin support yet - ZEPPELIN-156)
• Comparable performance to Python and Scala DataFrames
60. Amazon EMR runs Spark on YARN
• Dynamically share and centrally configure the same pool of cluster resources across engines
• Schedulers for categorizing, isolating, and prioritizing workloads
• Choose the number of executors to use, or allow YARN to choose (dynamic allocation)
• Kerberos authentication
Stack (top to bottom): Applications (Pig, Hive, Cascading, Spark Streaming, Spark SQL) → engines (MapReduce for batch, Spark for in-memory) → YARN cluster resource management → Storage (S3, HDFS)
61. Inside Spark Executor on YARN
Max container size on node
Executor memory overhead – off-heap memory (VM overheads, interned strings, etc.):
spark.yarn.executor.memoryOverhead = executorMemory * 0.10
Config file: spark-defaults.conf
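As a quick sanity check, the overhead rule can be evaluated directly. Note one assumption beyond the slide: Spark's documented behavior also enforces a 384 MB floor on the overhead.

```python
# Sketch of the executor memory-overhead rule. The 384 MB minimum is Spark's
# documented floor for spark.yarn.executor.memoryOverhead; the slide shows
# only the 10% term.
def executor_memory_overhead_mb(executor_memory_mb: int) -> int:
    return max(384, int(executor_memory_mb * 0.10))

print(executor_memory_overhead_mb(10240))  # 10 GB executor -> 1024 MB overhead
print(executor_memory_overhead_mb(2048))   # 2 GB executor -> the 384 MB floor applies
```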
62. Inside Spark Executor on YARN
Max container size on node
Spark executor memory – amount of memory to use per executor process:
spark.executor.memory
Config file: spark-defaults.conf
63. Inside Spark Executor on YARN
Max container size on node
Shuffle memory fraction (pre-Spark 1.6): spark.shuffle.memoryFraction – default: 0.2
64. Inside Spark Executor on YARN
Max container size on node
Storage memory fraction (pre-Spark 1.6): spark.storage.memoryFraction – default: 0.6
65. Inside Spark Executor on YARN
Max container size on node
In Spark 1.6+, Spark automatically balances the amount of memory for execution and cached data in a single unified region (execution / cache, default: 0.6).
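A rough sketch of how that unified region is sized, assuming Spark's documented rule that a fixed ~300 MB is reserved from the heap before the fraction is applied. The 0.6 fraction is taken from the slide; the exact default varied across Spark releases.

```python
# Rough sizing sketch for the unified execution/cache region (Spark 1.6+).
# Assumptions: ~300 MB reserved memory, fraction 0.6 as shown on the slide.
RESERVED_MB = 300
MEMORY_FRACTION = 0.6

def unified_region_mb(executor_heap_mb: int) -> float:
    return (executor_heap_mb - RESERVED_MB) * MEMORY_FRACTION

print(unified_region_mb(10240))  # 10 GB heap -> (10240 - 300) * 0.6 ≈ 5964 MB
```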
66. Dynamic Allocation on YARN
Scaling up on executors:
- Request when you want the job to complete faster
- Idle resources on cluster
- Exponential increase in executors over time
New default beginning with EMR 4.4
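In configuration terms, dynamic allocation is turned on with Spark's standard properties; a minimal spark-defaults.conf fragment is sketched below (on YARN, dynamic allocation also requires the external shuffle service, hence the second line).

```
spark.dynamicAllocation.enabled  true
spark.shuffle.service.enabled    true
```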
68. Compress your input data set
• Always compress data files on Amazon S3
• Reduces storage cost
• Reduces bandwidth between Amazon S3 and Amazon EMR, which can speed up bandwidth-constrained jobs
69. Compressions
Compression types:
– Some are fast but offer less space reduction
– Some are space-efficient but slower
– Some are splittable and some are not

Algorithm   % Space Remaining   Encoding Speed   Decoding Speed
GZIP        13%                 21 MB/s          118 MB/s
LZO         20%                 135 MB/s         410 MB/s
Snappy      22%                 172 MB/s         409 MB/s
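The space/speed trade-off is easy to see with Python's standard-library gzip support. This is a toy illustration on a repetitive payload; real ratios depend on the data, and the table's figures come from typical benchmark corpora.

```python
import gzip

# Toy illustration of the space/speed trade-off: compress the same repetitive
# text payload at the fastest and the most space-efficient gzip levels.
data = b"GET /index.html HTTP/1.1 200\n" * 10_000

fast = gzip.compress(data, compresslevel=1)   # fast, less space reduction
small = gzip.compress(data, compresslevel=9)  # slower, more space reduction

print(len(data), len(fast), len(small))
assert len(small) <= len(fast) < len(data)
```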
70. Data Serialization
• Data is serialized when cached or shuffled
• Default: Java serializer
• Kryo serialization (10x faster than Java serialization)
• Does not support all Serializable types
• Register the class in advance
Usage: set in SparkConf
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
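The same choice can be made cluster-wide in spark-defaults.conf instead of per-application; a minimal fragment is sketched below. Registering classes via spark.kryo.classesToRegister is Spark's standard mechanism, and the class name here is a hypothetical placeholder.

```
spark.serializer              org.apache.spark.serializer.KryoSerializer
spark.kryo.classesToRegister  com.example.MyRecord
```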
72. Focus on deriving insights from your data instead of manually configuring clusters
• Easy to install and configure Spark
• Secured
• Spark submit, Oozie, or use the Zeppelin UI
• Quickly add and remove capacity
• Hourly, reserved, or EC2 Spot pricing
• Use S3 to decouple compute and storage
73. Launch the latest Spark version
Spark 1.6.1 is the current version on EMR.
Less than a 3-week cadence behind the latest open source release.
74. Create a fully configured cluster in minutes
AWS Management Console
AWS Command Line Interface (CLI)
Or use an AWS SDK directly with the Amazon EMR API
76. Many storage layers to choose from
• Amazon DynamoDB – EMR-DynamoDB connector
• Amazon RDS – JDBC Data Source w/ Spark SQL
• Amazon Kinesis – streaming data connectors
• Elasticsearch – Elasticsearch connector
• Amazon Redshift – Spark-Redshift connector
• Amazon S3 – EMR File System (EMRFS)
77. Decouple compute and storage by using S3 as your data layer
S3 is designed for 11 9’s of durability and is massively scalable. Multiple Amazon EMR clusters (EC2 instances using memory and local HDFS) can work against the same data in Amazon S3.
78. Easy to run your Spark workloads
Submit a Spark application:
• Amazon EMR Step API
• SSH to the master node and use Spark submit, Oozie, or Zeppelin
85. Security Fundamentals, Encryption, and Compliance
Security fundamentals:
• Identity and Access Management (IAM) policies
• Bucket policies
• Access Control Lists (ACLs)
• Query string authentication
• SSL endpoints
Encryption:
• Server-side encryption (SSE-S3)
• Server-side encryption with KMS-provided keys (coming soon)
• Client-side encryption
Compliance:
• Bucket access logs
• Lifecycle management policies
• Access Control Lists (ACLs)
• Versioning & MFA deletes
86. Networking: VPC private subnets
• Use Amazon S3 Endpoints for
connectivity to S3
• Use Managed NAT for connectivity to
other services or the Internet
• Control the traffic using Security Groups
• ElasticMapReduce-Master-Private
• ElasticMapReduce-Slave-Private
• ElasticMapReduce-ServiceAccess
87. Access Control: IAM Users and Roles
• IAM policies for access to the Amazon EMR service (IAM users or federated users)
  • AmazonElasticMapReduceFullAccess
  • AmazonElasticMapReduceReadOnlyAccess
• IAM policies for the Amazon EMR cluster
  • Service role (AmazonElasticMapReduceRole) – allowable actions for the Amazon EMR service, like creating EC2 instances
  • Instance profile (AmazonElasticMapReduceforEC2Role) – applications that run on Amazon EMR, like access to Amazon S3 for EMRFS on your cluster
88. Data at Rest: S3 client-side encryption
• EMRFS enabled for Amazon S3 client-side encryption uses the Amazon S3 encryption clients
• Keys come from a key vendor (AWS KMS or your custom key vendor)
• Objects are stored in Amazon S3 client-side encrypted