Autoscaling Flink at Netflix
Timothy Farkas
Senior Software Engineer @ Netflix
Problem Definition
Our Pain
● Thousands of stateless, single-source, single-sink Flink routers.
● All operators are chained.
● When lag for a router exceeds a threshold, we are paged.
Router pipeline: Kafka Consumer → Project → Filter → Sink
Definitions
● Workload: Events being produced to a Kafka topic. Two main knobs to turn:
○ Message Rate
○ Message Size
● Lag: The time it would take for a router to process all the remaining unprocessed events that are buffered in its Kafka topic.
● Healthy Router: A router is healthy if its lag is always under ~5 minutes.
● Autoscaling Solution: Adjust the number of nodes in the router dynamically based on the workload to keep the router healthy. Attempt to use the smallest number of nodes required to keep the pipeline healthy.
Solution Space
● Claim: There is no perfect solution. Any autoscaling algorithm can be defeated by one or more workloads.
● Proof: Take any autoscaling algorithm A. Provide A with a workload W that does the exact opposite of what A expects whenever A decides to resize the cluster. =>
A will always make the wrong decision for W by definition. =>
Q.E.D.
Since no algorithm is perfect, we instead:
○ Understand our limitations.
○ Make assumptions about our workloads.
○ Build a solution that works well when taking both into account.
Limitations
● Rescaling introduces processing pauses.
● Scaling down a Flink job suspends processing for 1 - 3 minutes and possibly
more.
○ CHEAP: Graceful shutdown with savepoint.
○ EXPENSIVE: Remove TMs.
○ CHEAP: Restart from savepoint with reduced parallelism.
● Scaling up a Flink job suspends processing for less than 1 minute.
○ EXPENSIVE: Add TMs.
○ CHEAP: Graceful shutdown with savepoint.
○ CHEAP: Restart from savepoint with increased parallelism.
● There is a two minute delay for propagating metrics through Netflix’s metrics
infrastructure.
Assumptions
● Better to accidentally over-allocate than to under-allocate.
● Average message size changes infrequently.
● Large spikes in the workload happen, but not frequently.
● Workloads tend to increase or decrease smoothly when they don't have a large spike.
Solution
Desirable Characteristics
● Minimal amount of state
● Deterministic behavior
● Easy to unit test
● Easy to control
Approaches
● Historical Prediction
● Rule Based
● PID Controller
● Statistical Short Term Prediction + Policies
Autoscaling - High Level Steps
● Collect: Receive a batch of metrics for the current 1 minute timebucket.
● Pre-Decision Policy: Apply policies which decide whether the cluster can be
rescaled or whether performance information can be collected about the cluster.
● Decide: Based on the latest batch of metrics, decide whether to:
○ Scale up
○ Scale down
○ Stay the same
○ This step also collects cluster performance information
● Calculate Size: If scaling up or down, decide how many nodes need to be
added or removed.
● Post-Decision Policy: Apply policies which can modify scale up and scale
down decisions.
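A minimal sketch of how these five steps might compose into one evaluation pass per metrics batch. The interfaces and names below are illustrative placeholders, not the actual Netflix implementation.

```java
import java.util.List;
import java.util.Optional;

/** Hypothetical skeleton of one autoscaler evaluation pass over the latest metrics batch. */
public class AutoscalerPass {

    enum Decision { SCALE_UP, SCALE_DOWN, STAY }

    interface MetricsSource { List<Double> latestMinute(); }
    interface PreDecisionPolicy { boolean allowRescale(); }
    interface Decider { Decision decide(List<Double> metrics); }
    interface SizeCalculator { int targetSize(Decision decision, List<Double> metrics); }
    interface PostDecisionPolicy { Optional<Integer> adjust(Decision decision, int proposedSize); }

    private final MetricsSource metrics;
    private final PreDecisionPolicy prePolicy;
    private final Decider decider;
    private final SizeCalculator sizeCalculator;
    private final PostDecisionPolicy postPolicy;

    AutoscalerPass(MetricsSource metrics, PreDecisionPolicy prePolicy, Decider decider,
                   SizeCalculator sizeCalculator, PostDecisionPolicy postPolicy) {
        this.metrics = metrics;
        this.prePolicy = prePolicy;
        this.decider = decider;
        this.sizeCalculator = sizeCalculator;
        this.postPolicy = postPolicy;
    }

    /** Returns the new cluster size, or empty if no rescale should happen this minute. */
    Optional<Integer> evaluate() {
        List<Double> batch = metrics.latestMinute();                // Collect
        if (!prePolicy.allowRescale()) {                            // Pre-Decision Policy
            return Optional.empty();
        }
        Decision decision = decider.decide(batch);                  // Decide
        if (decision == Decision.STAY) {
            return Optional.empty();
        }
        int proposed = sizeCalculator.targetSize(decision, batch);  // Calculate Size
        return postPolicy.adjust(decision, proposed);               // Post-Decision Policy
    }
}
```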
Metrics Collection
Each minute collect the following:
● Kafka consumer lag
● Records processed per second
● CPU utilization
● Max message latency
● Kafka messages in per second
● Net in / out utilization
● Sink health metrics
Pipeline diagram: Kafka (buffered events) → Router [Kafka Consumer → Filter / Projection] → Sink
Store the metrics for the past N minutes to inform scaling decisions and to do
regression to predict the workload.
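As a rough illustration of the collection step, a small rolling window of per-minute snapshots is enough to feed the later regression. The snapshot fields and window type here are assumptions, not the talk's actual data model.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical rolling window of per-minute metric snapshots used for scaling decisions. */
public class MetricsWindow {

    /** One minute of collected metrics; field names are illustrative. */
    record MinuteSnapshot(long epochMinute,
                          double consumerLagSeconds,
                          double recordsPerSecond,
                          double messagesInPerSecond,
                          double cpuUtilization,
                          double maxMessageLatencySeconds) { }

    private final int maxMinutes;
    private final Deque<MinuteSnapshot> window = new ArrayDeque<>();

    MetricsWindow(int maxMinutes) { this.maxMinutes = maxMinutes; }

    /** Append the latest minute and evict anything older than N minutes. */
    void add(MinuteSnapshot snapshot) {
        window.addLast(snapshot);
        while (window.size() > maxMinutes) {
            window.removeFirst();
        }
    }

    /** Incoming message rates, oldest first, as the input series for regression. */
    double[] messageRateSeries() {
        return window.stream().mapToDouble(MinuteSnapshot::messagesInPerSecond).toArray();
    }
}
```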
Pre-Decision Policy
Abort autoscaling process if:
● The router has recent task failures
● The router is currently redeploying
Decide - Scale Up
Scale up if:
● There is significant lag AND the sink is healthy
● Utilization exceeds the safe threshold AND the sink is healthy
Key Insight - Collect cluster performance information:
● If the cluster needs to be scaled up, that means the cluster is saturated.
● This is effectively a benchmark for the performance of the cluster at its current size.
● Save this information in a Performance Table for future scaling decisions.
● More on this in the Performance Table section later.
Decide - Scale Down
Scale down if:
● There is no lag AND we do not anticipate an increase in incoming messages
● More on how we anticipate incoming message rate in the Predict Workload
section later.
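A sketch of the two decision rules above in code form. The lag and utilization thresholds are placeholder values, and the "anticipate an increase" signal is assumed to come from the workload prediction described later.

```java
/** Sketch of the scale up / scale down rules; thresholds and inputs are illustrative. */
public class ScaleDecider {

    private static final double SIGNIFICANT_LAG_SECONDS = 300.0;  // "significant lag" (assumed value)
    private static final double SAFE_UTILIZATION = 0.6;           // safe CPU threshold (assumed value)

    enum Decision { SCALE_UP, SCALE_DOWN, STAY }

    Decision decide(double lagSeconds, double cpuUtilization, boolean sinkHealthy,
                    boolean anticipateRateIncrease) {
        boolean significantLag = lagSeconds > SIGNIFICANT_LAG_SECONDS;
        boolean overUtilized = cpuUtilization > SAFE_UTILIZATION;

        if ((significantLag || overUtilized) && sinkHealthy) {
            // The cluster is saturated, so this is also the moment to record its throughput
            // in the performance table (see the Performance Table slides).
            return Decision.SCALE_UP;
        }
        if (lagSeconds == 0.0 && !anticipateRateIncrease) {
            return Decision.SCALE_DOWN;
        }
        return Decision.STAY;
    }
}
```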
Calculate Size
● Predict Workload: Predict the future workload (messages in per second),
while taking spikes into account.
● Target Events Per Second: Compute target events / sec that the pipeline will
need to handle X minutes from now.
● Cluster Size Lookup: Use the target events / sec to estimate the desired
cluster size, which can handle the workload up to X minutes from now.
Predict Workload
● Quadratic regression for troughs: ax^2 + bx + c
● Linear regression for everything else: ax + b
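As an illustration of the linear case, a plain least-squares fit over the recent per-minute rates is enough; the quadratic fit for troughs would solve the analogous normal equations. This is a generic sketch, not the talk's implementation.

```java
/** Minimal least-squares fit of rate = a * t + b over recent per-minute message rates.
 *  Assumes at least two observations. */
public class LinearTrend {

    private final int n;
    final double slope;      // a: change in messages/sec per elapsed minute
    final double intercept;  // b: fitted rate at t = 0

    LinearTrend(double[] ratesPerMinute) {
        n = ratesPerMinute.length;
        double sumT = 0, sumY = 0, sumTT = 0, sumTY = 0;
        for (int t = 0; t < n; t++) {
            sumT += t;
            sumY += ratesPerMinute[t];
            sumTT += (double) t * t;
            sumTY += t * ratesPerMinute[t];
        }
        double denom = n * sumTT - sumT * sumT;
        slope = denom == 0 ? 0 : (n * sumTY - sumT * sumY) / denom;
        intercept = (sumY - slope * sumT) / n;
    }

    /** Predicted messages/sec, minutesAhead minutes past the last observation. */
    double predict(int minutesAhead) {
        return slope * (n - 1 + minutesAhead) + intercept;
    }
}
```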
Spike Detection
● Assume error for regression is normally distributed and centered at 0.
● Find the standard deviation of the error.
● Any error greater than 3 * sigma is an outlier.
● After enough consecutive outliers are observed, the baseline is reset.
Spike Detection - Baseline
Spike Detection - First Outlier
Var = (1 + 1 + 1 + 1) / 6 ≈ 0.67, std = sqrt(4 / 6) ≈ 0.82, 3 * std ≈ 2.45
Spike Detection - Outliers
Spike Detection - Baseline Reset
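A sketch of the outlier test and baseline reset described above. The number of consecutive outliers required for a reset is an assumed placeholder; the variance is computed over all residuals of the current baseline, matching the slide's Var = (1 + 1 + 1 + 1) / 6 example.

```java
/** Hypothetical 3-sigma spike detector over regression residuals, with baseline reset. */
public class SpikeDetector {

    private static final int CONSECUTIVE_OUTLIERS_FOR_RESET = 3;  // assumed, not from the talk

    private int consecutiveOutliers = 0;

    /**
     * @param residuals       errors (observed - predicted) of the current baseline regression
     * @param latestResidual  error of the newest observation
     * @return true if the baseline should be re-fit starting from the spike
     */
    boolean observe(double[] residuals, double latestResidual) {
        double sumSquares = 0;
        for (double r : residuals) {
            sumSquares += r * r;            // error is assumed centered at 0
        }
        double std = Math.sqrt(sumSquares / residuals.length);
        boolean outlier = Math.abs(latestResidual) > 3 * std;

        consecutiveOutliers = outlier ? consecutiveOutliers + 1 : 0;
        return consecutiveOutliers >= CONSECUTIVE_OUTLIERS_FOR_RESET;
    }
}
```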
Compute Target Processing Rate
● recoveryTime = 15 min = restartTime (2 minutes) + catchUpTime (13 minutes)
● Timeline: t1 = start of recovery, t2 = t1 + 15 min, t3 = t1 + 20 min (validTime)
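The slides give the time budget but not the exact formula, so the following is one plausible target-rate computation: keep up with the predicted incoming rate while draining the accumulated backlog within catchUpTime. All names and numbers are illustrative.

```java
/** One plausible target-rate computation; the exact formula isn't spelled out in the slides. */
public class TargetRate {

    /**
     * @param backlogEvents        events already buffered in Kafka plus those produced during restart
     * @param predictedRatePerSec  predicted incoming messages/sec over the recovery window
     * @param catchUpSeconds       time budget to drain the backlog (13 min in the slide example)
     * @return events/sec the resized cluster must sustain
     */
    static double targetEventsPerSecond(double backlogEvents,
                                        double predictedRatePerSec,
                                        double catchUpSeconds) {
        // Keep up with new traffic while also draining the backlog within catchUpTime.
        return predictedRatePerSec + backlogEvents / catchUpSeconds;
    }

    public static void main(String[] args) {
        // Example with made-up numbers: 1.2M backlogged events, 10k msg/sec predicted, 13 min catch-up.
        double target = targetEventsPerSecond(1_200_000, 10_000, 13 * 60);
        System.out.printf("target = %.0f events/sec%n", target);  // ~11,538 events/sec
    }
}
```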
Cluster Size Lookup - The Performance Table
● Lag and resource usage are high =>
● Pipeline is saturated =>
● We decide to scale up =>
● We know the maximum throughput of the cluster at its current size =>
● Record the performance in a lookup table
Cluster Size Lookup - The Performance Table
● Given a target rate, find the performance records above and below it.
● Do linear interpolation to find a suitable cluster size.
Performance Table:
Num Nodes | Max Rate
4 | 10,000
10 | 20,000
18 | 35,000
Example: targetRate = 15,000
ratio = (15,000 - 10,000) / (20,000 - 10,000) = 0.5
clusterSize = 0.5 * 4 + 0.5 * 10 = 7
Cluster Size Lookup - Corner Case
Performance Table:
Num Nodes | Max Rate
4 | 10,000
10 | 20,000
18 | 35,000
Example: targetRate = 40,000 (above the largest recorded rate)
clusterSize = 40,000 / (35,000 / 18) = 20.57
ceil(clusterSize) = 21
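A sketch of the lookup combining both slides: linear interpolation between the records that bracket the target rate, and per-node-rate extrapolation when the target exceeds the largest recorded rate. Rounding choices and the behavior below the smallest record are assumptions.

```java
import java.util.TreeMap;

/** Sketch of the performance-table lookup: interpolate between recorded points,
 *  extrapolate from the best recorded rate-per-node when above the table. */
public class PerformanceTable {

    // maxRate -> numNodes, recorded each time the cluster was observed saturated
    private final TreeMap<Double, Integer> byRate = new TreeMap<>();

    void record(int numNodes, double maxRate) { byRate.put(maxRate, numNodes); }

    int clusterSizeFor(double targetRate) {
        var floor = byRate.floorEntry(targetRate);
        var ceiling = byRate.ceilingEntry(targetRate);

        if (floor != null && ceiling != null) {
            if (floor.getKey().equals(ceiling.getKey())) {
                return floor.getValue();    // target rate exactly matches a recorded rate
            }
            // Linear interpolation between the records below and above the target rate.
            double ratio = (targetRate - floor.getKey()) / (ceiling.getKey() - floor.getKey());
            return (int) Math.ceil((1 - ratio) * floor.getValue() + ratio * ceiling.getValue());
        }
        if (floor != null) {
            // Corner case: target above the largest recorded rate; extrapolate its per-node rate.
            double ratePerNode = floor.getKey() / floor.getValue();
            return (int) Math.ceil(targetRate / ratePerNode);
        }
        // Below the smallest record (or empty table): assumed behavior, not from the talk.
        var first = byRate.firstEntry();
        return first == null ? 1 : (int) Math.ceil(targetRate / (first.getKey() / first.getValue()));
    }

    public static void main(String[] args) {
        PerformanceTable table = new PerformanceTable();
        table.record(4, 10_000);
        table.record(10, 20_000);
        table.record(18, 35_000);
        System.out.println(table.clusterSizeFor(15_000));  // 7, matching the interpolation slide
        System.out.println(table.clusterSizeFor(40_000));  // 21, matching the corner-case slide
    }
}
```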
Cluster Size Lookup - Complexities
● A few more corner cases
● Utilization also needs to be taken into account
● Want the new cluster size to have reasonable resource utilization (60% or less)
Calculate Size
Scale Up vs Scale Down
● Flow and logic is the same
● Minor differences in implementation details
Post-Decision Policy
● Minimum cluster size based on partition count of Kafka topic
● Maximum cluster size based on partition count of Kafka topic
● Cooldown period for scale ups
● Cooldown period for scale downs
● Disable scale downs during region failover (see Region Failover section)
● Safety limit for max scale up, e.g. cannot add more than 50 nodes during a single scale up
● Safety limit for max cluster size
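A sketch of how these policies might clamp or veto a proposed size. How the minimum and maximum are derived from the partition count is not spelled out in the talk, so they are passed in as precomputed limits.

```java
/** Sketch of the post-decision clamps; how the limits are derived is assumed, not from the talk. */
public class PostDecisionPolicy {

    private final int minClusterSize;        // derived from the topic's partition count
    private final int maxClusterSize;        // derived from partition count and a safety limit
    private final int maxNodesPerScaleUp;    // e.g. never add more than 50 nodes at once
    private final boolean inRegionFailover;  // scale downs disabled while a region is evacuated
    private final boolean inCooldown;        // separate cooldowns exist for ups and downs

    PostDecisionPolicy(int minClusterSize, int maxClusterSize, int maxNodesPerScaleUp,
                       boolean inRegionFailover, boolean inCooldown) {
        this.minClusterSize = minClusterSize;
        this.maxClusterSize = maxClusterSize;
        this.maxNodesPerScaleUp = maxNodesPerScaleUp;
        this.inRegionFailover = inRegionFailover;
        this.inCooldown = inCooldown;
    }

    /** Returns the size to deploy; returning currentSize means "veto the rescale". */
    int apply(int currentSize, int proposedSize) {
        if (inCooldown) {
            return currentSize;
        }
        if (proposedSize < currentSize && inRegionFailover) {
            return currentSize;                                        // no scale downs during failover
        }
        if (proposedSize > currentSize) {
            proposedSize = Math.min(proposedSize, currentSize + maxNodesPerScaleUp);
        }
        return Math.max(minClusterSize, Math.min(maxClusterSize, proposedSize));
    }
}
```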
Running In Production
Architecture Options
● Embed autoscaling in Flink
○ Pros:
■ Lower latency for retrieving metrics.
○ Cons:
■ Complex resource manager interactions get pushed down into Flink.
■ Rescale operation not easily integrated into operations history for the job.
■ Autoscaling changes require redeploying the job.
● Run autoscaling as a Mantis pipeline
○ Pros:
■ Flink service control plane handles all resource manager interactions already and it can be
re-used for rescaling the job.
■ Flink service control plane keeps history of all rescale actions.
■ Autoscaling can be changed without redeploying jobs.
○ Cons:
■ 2 minute latency for getting metrics.
Autoscaling Architecture
Diagram components: Autoscaler (Mantis Job), User Routers, Flink as a Service, Spinnaker, Titus, Atlas.
Results
Charts: Sine Wave, Linear Spikey, Production Router, Fleet Resource Usage.
Additional Considerations
Memory Requirements
● Direct memory has to be reserved for the Kafka Consumer and Kafka Producer
● Direct memory cannot be changed for TMs that are already running
● Smaller clusters require more Direct memory (each node handles more
partitions)
● Larger clusters require less Direct memory (each node handles fewer
partitions)
● Deploy cluster with Direct memory that works for the minimum cluster
size
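A sketch of sizing Direct memory for the worst case, i.e. the minimum cluster size, where each TM owns the most partitions it will ever own. The per-partition and producer buffer constants are made-up placeholders, and one TM per node is assumed.

```java
/** Sketch: size Direct memory for the minimum cluster size, since it cannot be changed
 *  on running TMs. All constants below are illustrative placeholders. */
public class DirectMemorySizing {

    static long directMemoryBytesPerTm(int topicPartitions, int minClusterSize,
                                       long consumerBytesPerPartition, long producerBytes) {
        // At the minimum cluster size each TM (one per node assumed) owns the most partitions.
        int maxPartitionsPerTm = (int) Math.ceil((double) topicPartitions / minClusterSize);
        return maxPartitionsPerTm * consumerBytesPerPartition + producerBytes;
    }

    public static void main(String[] args) {
        // Example with made-up numbers: 64 partitions, min cluster of 4 nodes,
        // 4 MiB of consumer buffers per partition, 64 MiB for the producer.
        long bytes = directMemoryBytesPerTm(64, 4, 4L << 20, 64L << 20);
        System.out.println(bytes / (1 << 20) + " MiB");  // 128 MiB
    }
}
```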
Partition Balancing
Example: 3 TMs, 2 Task Slots per TM, topic with 8 partitions. Worst-case assignment: two slots get 2 partitions each, the other four slots get 1 partition each.
TMs with N Task Slots can, in the worst case, be assigned N more partitions than other TMs.
● Aggregate consumer lag and average message latency may be low.
● A few partitions may have high latency due to the unbalanced distribution of partitions.
● Round up the cluster size so that the maximum possible partitions per subtask is reduced by 1.
● Note this is a much looser requirement on cluster size than requiring equal distribution of partitions to subtasks. This allows finer grained cluster size control.
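One simple reading of the rounding rule above, using ceil(partitions / subtasks) as the worst-case partitions per subtask; the talk's exact worst-case accounting (N extra partitions on a TM with N slots) may differ, so treat this as a sketch.

```java
/** Sketch: round the proposed cluster size up until the worst-case partitions-per-subtask
 *  drops by one, without requiring a perfectly even distribution. */
public class PartitionBalancing {

    static int roundUpForBalance(int proposedNodes, int slotsPerTm, int topicPartitions) {
        int currentMax = maxPartitionsPerSubtask(proposedNodes, slotsPerTm, topicPartitions);
        if (currentMax <= 1) {
            return proposedNodes;  // already at most one partition per subtask
        }
        int nodes = proposedNodes;
        while (maxPartitionsPerSubtask(nodes, slotsPerTm, topicPartitions) >= currentMax) {
            nodes++;
        }
        return nodes;
    }

    static int maxPartitionsPerSubtask(int nodes, int slotsPerTm, int topicPartitions) {
        int subtasks = nodes * slotsPerTm;
        return (topicPartitions + subtasks - 1) / subtasks;  // ceil(partitions / subtasks)
    }

    public static void main(String[] args) {
        // The slide's example: 3 TMs x 2 slots and 8 partitions -> worst case 2 partitions per subtask.
        System.out.println(maxPartitionsPerSubtask(3, 2, 8));  // 2
        System.out.println(roundUpForBalance(3, 2, 8));        // 4 nodes -> 1 partition per subtask
    }
}
```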
Outlier Containers
Example kernel log from a host reporting correctable (EDAC) memory errors; containers placed on such hosts can become performance outliers regardless of cluster size.
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: HANDLING MCE MEMORY ERROR
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: CPU 24: Machine Check Event: 0 Bank 13:
cc063f80000800c0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: TSC 0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: ADDR 57c24fb4c0
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: MISC 908511010100086
[Wed Sep 4 17:50:03 2019] EDAC skx MC2: PROCESSOR 0:50654 TIME 1567619410 SOCKET 1 APIC 40
[Wed Sep 4 17:50:03 2019] EDAC MC2: 6398 CE memory scrubbing error on
CPU_SrcID#1_MC#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x57c24fb offset:0x4c0 grain:32
syndrome:0x0 - OVERFLOW err_code:0008:00c0 socket:1 imc:0 rank:1 bg:3 ba:2 row:1aacf col:338)
Region Failover
Diagram: Netflix regions us-west, us-east, and eu-west, each serving its users with Flink routers.
Disable scaledowns in the evacuated region until traffic comes back.
Future Work
Eager Scale Up
● Current Scale Up Decision:
○ There is significant consumer lag AND sink is healthy
○ Utilization exceeds the safe threshold AND sink is healthy
This is not ideal, since in most cases latency builds up in the job before a scale up is triggered.
● Eager Scale Up Decision:
○ Use the performance table to determine the maximum processing rate of the
current cluster.
○ Use regression to determine if the workload will exceed the processing rate
of the cluster in the near future.
○ If this is the case, do a scale up before any lag builds up.
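A sketch of the proposed eager check: scale up as soon as the regression forecast exceeds the current cluster's known maximum rate from the performance table. The lookahead horizon is an assumption.

```java
/** Sketch of the proposed eager scale-up check; the lookahead horizon is an assumption. */
public class EagerScaleUp {

    /**
     * @param forecastRatePerSec  regression forecast of messages/sec a few minutes ahead
     * @param currentMaxRate      performance-table max rate for the current cluster size
     * @return true if we should scale up now, before any lag builds
     */
    static boolean shouldScaleUpEarly(double forecastRatePerSec, double currentMaxRate) {
        return forecastRatePerSec > currentMaxRate;
    }
}
```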
Downscale Optimization
● Current downscale operation
○ CHEAP: Graceful shutdown with savepoint.
○ EXPENSIVE: Remove TMs.
○ CHEAP: Restart from savepoint with reduced parallelism.
● Optimized downscale operation
○ CHEAP: Graceful shutdown with savepoint.
○ CHEAP: Blacklist TMs that will be removed.
○ CHEAP: Restart from savepoint with reduced parallelism.
○ EXPENSIVE: Remove TMs.
Complex DAGs
● Extend the algorithm to support multiple sources and sinks.
● Handle jobs where not all operators are chained.
Acknowledgements
● Steven Wu
● Andrew Nguonly
● Neeraj Joshi
● Mark Cho
● Netflix Flink Team
● Netflix Mantis Team
● Netflix Data Pipeline Team
● Netflix RTDI Team
● https://www.spinnaker.io/
● https://github.com/Netflix/mantis