Application Timeline Server
- Past, Present & Future
NAGANARASIMHA G R & VARUN SAXENA
Agenda
 Who we are?
 Why we need History Server?
 Application History Server
 Timeline Server V1
 Timeline Server V2
Who we are?
Naganarasimha G R
 Senior Technical Lead @ Huawei
 Active Apache Hadoop Contributor.
 Currently working in Hadoop Platform Dev team
 Earlier worked in Reporting Domain
Varun Saxena
 Technical Lead @ Huawei
 Active Apache Hadoop Contributor.
 Currently working in Hadoop Platform Dev team
 Earlier worked in Telecom Data Network Domain
Both of us are currently participating in ATS V2 development
Agenda
 Who we are?
 Why we need History Server?
 Application History Server
 Timeline Server V1 & V1.5
 Timeline Server V2
Need for new History Server
 Job History Server serves only MapReduce apps, while YARN supports many application types.
 YARN-level events and metrics are not captured.
 Storage is HDFS only, which is not good for ad-hoc analysis.
 JHS covers only historical (completed) jobs.
 On Application Master failure, data for the currently running application is lost.
 Storage format is very MR-specific:
• Counters
• Mappers and Reducers
Agenda
 Who we are?
 Why we need History Server?
 Application History Server
 Timeline Server V1 & V1.5
 Timeline Server V2
Application History Server
 Separate Process
 Resource Manager directly writes to Storage (HDFS)
 Aggregated Logs
 Separate UI, CLI and REST Endpoint
 Data stored:
• Application-level data (queue, user, etc.)
• List of ApplicationAttempts
• Information about each ApplicationAttempt
• List of containers for each ApplicationAttempt
• Generic information about each container
 CLI and REST query interfaces were supported
Drawbacks:
 Storing application-specific custom data is not supported
 If the RM crashes, the HDFS files are not readable
 Hard limit on the number of files
 Upgrades / updates are problematic
 Supports only completed jobs
Agenda
 Who we are?
 Why we need History Server?
 Application History Server
 Timeline Server V1
 Timeline Server V2
Application Timeline Service
Motivation:
 YARN takes care of it
- Relieves the application from running its own monitoring service
 Application diversity
- Framework-specific metadata/metrics
ATS V1 : Data Model
 Timeline Domain
- Namespace for the Timeline Server, supporting isolation of users and applications
- Timeline Server security is defined at this level
 Timeline Entity
- An abstract concept of anything
- Defines the relationship between entities
- Can be an application, an application attempt, a container or any user-defined object
- Contains primary filters, which are used to index the entities in the Timeline Store
- Uniquely identified by an EntityId and EntityType
 Timeline Event
- An event related to a specific Timeline Entity of an application
- Users are free to define what an event means, such as starting an application or getting allocated a container
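To make the model concrete, here is a minimal sketch of publishing a user-defined entity and event to ATS V1, assuming the Hadoop 2.x TimelineClient API; the entity type, ID and filter values are invented for the example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AtsV1PublishExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(conf);
    client.start();
    try {
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("MY_APP_TASK");           // user-defined type (hypothetical)
      entity.setEntityId("task_001");                // unique within the type
      entity.setStartTime(System.currentTimeMillis());
      entity.addPrimaryFilter("user", "joe");        // indexed in the Timeline Store

      TimelineEvent event = new TimelineEvent();     // user-defined event
      event.setEventType("TASK_STARTED");
      event.setTimestamp(System.currentTimeMillis());
      entity.addEvent(event);

      client.putEntities(entity);                    // synchronous REST PUT to ATS
    } finally {
      client.stop();
    }
  }
}
```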
ATS V1 : Architecture
 Separate Process
 Pluggable store – defaults to LevelDB
 REST Interfaces
ATS V1 : LevelDB
 Key-value store
 Lightweight
 Open-source-compatible license
 Used to store
- TimelineStore : Domains, Entities, Events and Metrics
- TimelineStateStore : Security Tokens
 Supports Data Retention
Agenda
 Who we are?
 Why we need History Server?
 Application History Server
 Timeline Server V1
 Timeline Server V2
Why ATSv2 ?
 Scalability
• Single global instance of writer/reader
• ATSv1 uses local disk based LevelDB storage
 Usability
• Handle flows as first-class concepts and model aggregation.
• Elevate configuration and metrics to first-class members.
• Better support for queries.
 Reliability
• Data is stored only on a local disk.
• Single daemon, so a single point of failure.
 Existing external tooling: hRaven, Finch, Dr. Elephant, etc. As new Hadoop versions are rolled out,
maintenance of these tools becomes an issue.
Key Design Points
 Distributed writers (per app and per node)
• Per App Writer/Collector launched as part of RM.
• Per Node Collector/Writer launched as an auxiliary service in NM.
• In future, will support standalone writers.
 Scalable and reliable backend storage (HBase)
 A new object model API with flows built into it.
 Separate reader instance(s). Currently have a single reader instance.
 Aggregation i.e. rolling up the metric values to the parent.
• Online aggregation for apps and flow runs.
• Offline aggregation for users, flows and queues.
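As a rough sketch of the distributed-writer model from the application side: each AM gets a v2 client bound to its application ID and writes (possibly asynchronously) to its app-level collector. Class and method names here follow the TimelineV2Client API as it eventually shipped in Hadoop; treat them as assumptions for the in-progress code described in this talk, and note that in practice the collector address reaches the AM via the RM heartbeat (see the flow a few slides ahead).

```java
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AtsV2PublishExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical app ID; a real AM would use its own.
    ApplicationId appId = ApplicationId.newInstance(1334432321L, 2);
    TimelineV2Client client = TimelineV2Client.createTimelineClient(appId);
    client.init(new YarnConfiguration());
    client.start();
    try {
      TimelineEntity entity = new TimelineEntity();
      entity.setType("MY_FRAMEWORK_TASK");  // user-defined entity type
      entity.setId("task_001");
      client.putEntitiesAsync(entity);      // non-blocking write to the app collector
    } finally {
      client.stop();
    }
  }
}
```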
ATSv2 Components
[Architecture diagram, summarized: the Application Master and Node Manager send app events/metrics and container events/metrics to a per-node Timeline Writer, and the Resource Manager writes through its own Timeline Writer; all writers persist to the backend storage. A pool of Timeline Readers serves user queries against the same storage.]
Distributed Writers / Collectors
[Flow diagram, summarized:]
1. User submits an app.
2. On new app submission, RMApp launches a companion app collector.
3. The App Master is launched on a node (Node 1).
4. The NM notifies its Aux Service to bind the new collector.
5. The Aux Service binds the new collector.
6. The new collector registers with the NM Collector Service.
7. The new collector info (IP + port) is reported to the RM.
The RM then distributes collector info via heartbeats: the AM reports app events to the app collector notified in its heartbeat, and NMs report container events to the same collector. Each Node Manager maintains a list of app collectors (e.g. { app_1_collector_info, app_2_collector_info, … }), and the collectors write to HBase.
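For reference, a hedged sketch of the knobs that turn this machinery on: the per-node collector runs as an NM auxiliary service and the timeline service version is set to 2. Property and constant names follow the ATSv2 documentation of later Hadoop releases and are assumptions for the pre-release state described here:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AtsV2ConfigExample {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
    // Select ATSv2 (yarn.timeline-service.version).
    conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f);
    // Per-node collector runs as an NM auxiliary service alongside shuffle.
    conf.setStrings(YarnConfiguration.NM_AUX_SERVICES,
        "mapreduce_shuffle", "timeline_collector");
    conf.set("yarn.nodemanager.aux-services.timeline_collector.class",
        "org.apache.hadoop.yarn.server.timelineservice.collector."
            + "PerNodeTimelineCollectorsAuxService");
  }
}
```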
Flow
[Diagram, summarized: a flow is a script or program (e.g. a Hive query or Pig script) run by a user such as Joe. Each execution of the flow (e.g. the run at 7:30 pm and the run at 9:00 pm) is a separate flow run, and each flow run consists of multiple apps (App 1 … App 4).]
Data Model
 Entity: ID + Type, Configurations, Metadata (Info), Parent-Child Relationships, Metrics, Events
 First-class entity types:
• Cluster: Type, Cluster Attributes
• Flow: Type, User, Flow Runs, Flow Attributes
• Flow Run: Type, User, Running apps, Flow Run Attributes
• Application: Type, User, Flow + Run, Queue, Attempts
• Attempt: Type, Application, Queue, Containers
• Container: Type, Attempt, Attributes
• User (via aggregation): Username (ID), Aggregated metrics
• Queue (via aggregation): Queue (ID), Sub-queues, Aggregated metrics
 Event: ID, Metadata, Timestamp
 Metric: ID, Metadata, Single Value or Time Series (with timestamps)
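A short sketch of how this object model looks in code, building one entity that exercises configs, info, a time-series metric, an event and a parent link. Setter names follow the v2 timelineservice records API as it eventually shipped in Hadoop; the entity type, IDs and values are invented for illustration:

```java
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;

public class DataModelExample {
  public static TimelineEntity buildTaskEntity() {
    TimelineEntity entity = new TimelineEntity();
    entity.setType("MY_FRAMEWORK_TASK");            // ID + Type
    entity.setId("task_001");
    entity.setCreatedTime(System.currentTimeMillis());
    entity.addInfo("status", "RUNNING");            // Metadata (Info)
    entity.addConfig("my.framework.retries", "3");  // Configurations

    TimelineMetric cpu = new TimelineMetric(TimelineMetric.Type.TIME_SERIES);
    cpu.setId("CPU");                               // Metric as a time series
    cpu.addValue(System.currentTimeMillis(), 0.5);
    entity.addMetric(cpu);

    TimelineEvent started = new TimelineEvent();    // Event with ID + timestamp
    started.setId("TASK_STARTED");
    started.setTimestamp(System.currentTimeMillis());
    entity.addEvent(started);

    // Parent-child relationship, e.g. to the owning attempt (hypothetical ID).
    entity.addIsRelatedToEntity("YARN_APPLICATION_ATTEMPT",
        "appattempt_1334432321_0002_000001");
    return entity;
  }
}
```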
HBase vs Phoenix evaluation
Based on an evaluation of both HBase and Phoenix, it was decided that HBase will be used on the write path. With HBase, much higher throughput, lower IO wait and far lower CPU load were observed.
| Test description | Map tasks | Entities per mapper | Total entities written | Phoenix transaction rate (per mapper, ops/sec) | HBase transaction rate (per mapper, ops/sec) | Phoenix write time (job counter TIMELINE_SERVICE_WRITE_TIME) | HBase write time (job counter TIMELINE_SERVICE_WRITE_TIME) |
|---|---|---|---|---|---|---|---|
| Synthetic Data | 170 | 1k | 170k | 112.83 | 2285.13 | 1506704 | 74394 |
| Synthetic Data | 170 | 10k | 1.7M | 53.029 | 636.41 | 32057957 | 2671241 |
| Synthetic Data | 1 | 50k | 50k | 196.67 | 19770.66 | 254225 | 2529 |
| 9 History Files | 33 | - | 85k | 319.19 (write errors) | 962.32 | 265460 | 88049 |
| 555 History Files | 33 | - | 810k | 206.25 (write errors) | 927.62 | 4102364 | 874151 |
Aggregation
 Aggregation basically means rolling up metrics from child entities to parent entities. We can perform different operations such as
SUM, AVG ,etc. while rolling them up and store them in the parent.
 App level aggregation will be done by app collector as and when it receives different metrics.
 Online or real time aggregation for apps would be a simple SUM of metrics of child entities. Additional metrics will also be stored
which indicate AVG, MAX, AREA(time integral) etc. More on this in next slide.
 App to flow run aggregation will be done via a HBase coprocessor on the read path. Cell tags used to achieve this.
 For user/flow, aggregation happens periodically(not real time i.e. offline). For this, Phoenix tables will be used. To achieve offline
aggregation, a MR job is run which reads application table and writes to user and flow aggregation tables
[Figure, summarized: Container A1 (CPUCoresMillis = 400) and Container A2 (CPUCoresMillis = 300) roll up to App A (CPUCoresMillis = 700); Container B1 (CPUCoresMillis = 200) rolls up to App B (CPUCoresMillis = 200); App A and App B roll up to the Flow (CPUCoresMillis = 900).]
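The SUM rollup in the figure above can be mirrored with a toy sketch (plain Java, not the actual collector code); the container, app and flow values come from the figure:

```java
import java.util.HashMap;
import java.util.Map;

public class RollupExample {
  // SUM rollup of one metric across all child entities.
  static long sumRollup(Map<String, Long> childMetricValues) {
    return childMetricValues.values().stream().mapToLong(Long::longValue).sum();
  }

  public static void main(String[] args) {
    Map<String, Long> appA = new HashMap<>();
    appA.put("containerA1.CPUCoresMillis", 400L);
    appA.put("containerA2.CPUCoresMillis", 300L);
    long appATotal = sumRollup(appA);                 // 700

    Map<String, Long> flow = new HashMap<>();
    flow.put("appA.CPUCoresMillis", appATotal);       // 700
    flow.put("appB.CPUCoresMillis", 200L);            // App B's only container
    System.out.println("Flow CPUCoresMillis = " + sumRollup(flow)); // 900
  }
}
```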
Accumulation
 While aggregating, we also accumulate metric values along the time dimension. This is especially useful for gauges. Consider the table below, which shows CPU utilization (in cores) for the containers belonging to an app; t1…t16 represent sample times 10 ms apart. The table shows how per-container values are aggregated for the app, and how they are accumulated and averages calculated. The trapezoidal rule is used to calculate the area under the curve, i.e. area per interval = ((value_t1 + value_t2) / 2) × Δt.
[The original slides built this table up step by step; the final state is shown here. Five containers contribute 0.5 or 1 core each over overlapping windows within t1…t16; the per-slot application total, the accumulated area and the running average are:]

| | t1 | t2 | t3 | t4 | t5 | t6 | t7 | t8 | t9 | t10 | t11 | t12 | t13 | t14 | t15 | t16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Application (cores) | 1 | 2.5 | 4 | 4 | 4 | 3.5 | 3 | 3 | 3 | 3 | 2 | 1.5 | 1 | 1 | 1 | 0 |
| Area (CoreMillis) | - | 15 | 42 | 82 | 122 | 160 | 192 | 222 | 252 | 282 | 307 | 325 | 335 | 345 | 355 | 360 |
| Average | - | 1.5 | 2.1 | 2.7 | 3.1 | 3.2 | 3.2 | 3.1 | 3.1 | 3.1 | 3.1 | 3 | 2.8 | 2.6 | 2.5 | 2.4 |

[Chart: per-slot CPU cores for the app together with the running average.]
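A minimal sketch of the accumulation itself, applying the trapezoidal rule over the application row above. Values are taken from the table; the printed areas may not reproduce the slide's intermediate numbers exactly:

```java
public class AccumulationExample {
  public static void main(String[] args) {
    // Per-slot app-level CPU cores (the "Application" row above).
    double[] appCores = {1, 2.5, 4, 4, 4, 3.5, 3, 3, 3, 3, 2, 1.5, 1, 1, 1, 0};
    double dtMillis = 10;   // samples are 10 ms apart
    double area = 0;
    for (int i = 1; i < appCores.length; i++) {
      // Trapezoidal rule: ((v_prev + v_cur) / 2) * dt for each interval.
      area += (appCores[i - 1] + appCores[i]) / 2 * dtMillis;
      double avg = area / (i * dtMillis);  // running average in cores
      System.out.printf("t%d: area=%.1f core-millis, avg=%.2f cores%n",
          i + 1, area, avg);
    }
  }
}
```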
HBase Table Schema
 Entity Table – Used for storing Timeline Entity object. Contains configs, metrics and other info (events,
parent child relationships, etc.).
Row Key : clusterId!user!flowId!flowRunId!appId!entityType!entityId
 Application Table – Used for storing YARN Application entity. Contains configs, metrics and other info.
Same as entity table but added for better performance.
Row Key : clusterId!user!flowId!flowRunId!appId
 App To Flow Table – Used for getting flowId and flowRunId information based on cluster and app. This is
helpful in querying entity table on the basis of just the cluster and app information.
Row Key : clusterId!appId
 Flow Run Table – Stores flow run information aggregated across apps.
Row Key : clusterId!user!flowId!flowRunId
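As an illustration of how such a row key might be assembled, here is a hypothetical helper (not the actual schema code; the real implementation may additionally encode fields for sort order, as the flow activity table's inverted timestamp suggests, which is omitted here):

```java
public final class EntityRowKey {
  private static final String SEP = "!";

  // Entity-table row key: clusterId!user!flowId!flowRunId!appId!entityType!entityId
  public static String of(String clusterId, String user, String flowId,
      long flowRunId, String appId, String entityType, String entityId) {
    return String.join(SEP, clusterId, user, flowId,
        Long.toString(flowRunId), appId, entityType, entityId);
  }

  public static void main(String[] args) {
    // All values below are invented for the example.
    System.out.println(of("cluster1", "joe", "wordcount-flow", 1465400000000L,
        "application_1334432321_0002", "YARN_CONTAINER",
        "container_1334432321_0002_01_000001"));
  }
}
```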
HBase Table Schema (Contd.)
 Flow Activity Table – Used for storing daily activity records for a flow. For quick lookup of flow level info.
Row Key : clusterId!inverted top of the day timestamp!user!flowId
Phoenix Tables for Offline Aggregation :
 Flow Aggregation Table – Stores aggregated metrics at flow level. Metrics are aggregated from application
table.
Primary Key : user, cluster, flowId
 User Aggregation Table – Stores aggregated metrics at user level. Metrics are aggregated from application
table.
Primary Key : user, cluster
Querying ATSv2
 ATSv2 offers a major enhancement over ATSv1 in terms of the queries supported. Efficient queries around flows, flow runs, apps, etc. are possible. Moreover, ATSv2 can support complex queries to filter results.
 ATSv1 offered only primary and secondary filters for filtering entities. ATSv2 offers the ability to filter entities based on config values, metric values, entity parent-child relationships and events. It also supports returning only certain configurations and metrics in the result.
 ATSv1 queries supported only "equal to" matches for primary and secondary filters. For metrics this does not quite make sense: a user filtering on metric values is more likely to use relational operators such as >=, <=, !=, etc. All of these relational operators are supported in ATSv2 for metrics. In addition, different predicates in filters can be combined using "AND" and "OR" operators.
All in all, this gives ATSv2 a very powerful query interface.
Querying ATSv2 (Contd.)
 ATSv2, like ATSv1, supports a REST API with JSON as the media type. Some examples are given below.
 Get Entities – Returns a set of TimelineEntity objects based on cluster, app and entity type. The query also supports multiple optional parameters, such as a limit on the number of entities returned, which configurations and metrics to return, filtering by created/modified time window, and config, metric and event filters.
http://localhost:8188/ws/v2/timeline/entities/{clusterId}/{appId}/{entityType}
Example:
http://localhost:8188/ws/v2/timeline/entities/cluster1/application_1334432321_0002/YARN_CONTAINER?limit=5&metrics=memory,cpu
 Get Entity – Returns a Timeline Entity object based on cluster, app, entity type and entityId.
http://localhost:8188/ws/v2/timeline/entity/{clusterId}/{appId}/{entityType}/{entityId}
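A minimal sketch of issuing the Get Entities query above from Java (using java.net.http, which requires Java 11+; host, port and parameters are taken verbatim from the example URL):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimelineQueryExample {
  public static void main(String[] args) throws Exception {
    String url = "http://localhost:8188/ws/v2/timeline/entities/"
        + "cluster1/application_1334432321_0002/YARN_CONTAINER"
        + "?limit=5&metrics=memory,cpu";
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        .header("Accept", "application/json")  // responses are JSON
        .GET()
        .build();
    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());       // JSON list of entities
  }
}
```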
Possible use cases
 Cluster utilization and inputs for capacity planning; the cluster can learn from a flow's/application's historical data.
 Mappers / reducers optimizations.
 Application performance over time.
 Identifying job bottlenecks.
 Ad-hoc troubleshooting and identification of problems in cluster.
 Complex queries possible at flow, user and queue level. For instance, queries like % of applications which
ran more than 10000 containers.
 Full DAG from flow to flow run to application to container level can be seen.
Team Members
 Sangjin Lee, Vrushali C and Joep Rottinghuis (Twitter)
 Junping Du, Li Lu and Vinod Kumar Vavillapalli (Hortonworks)
 Zhijie Shen (formerly Hortonworks)
 Varun Saxena and Naganarasimha G R (Huawei)
 Robert Kanter and Karthik Kambatla (Cloudera)
 Inputs from LinkedIn, Yahoo! and Altiscale.
Feature Status
 Distributed per-app and per-node writers (as Aux Service)
 RM Companion writer
 NM, RM and AM writing events and metrics to ATS
 File based readers and writers for test
 HBase and Phoenix writer implementations
 Performance evaluation of these writers
 HBase based reader implementation
 Support for flows
 App and flow run level online Aggregation
 Offline Aggregation
 Query Interface
Feature Status (Contd.)
 Standalone timeline writer
 Distributed timeline readers and a reader pool
 ATSv2 UI
 Security
 Support for migration
Thank You!