Apache CarbonData:
An indexed columnar file format for
interactive query with Spark SQL
Jihong MA, Jacky LI
HUAWEI
Data feeds many workloads: Report & Dashboard, OLAP & Ad-hoc, Batch Processing, Machine Learning, Real-Time Analytics
Challenge
• Wide spectrum of query analysis
• OLAP vs. detailed query
• Full scan vs. small scan
• Point queries
• Big scan query, small scan query, multi-dimensional OLAP query
How to choose storage engine to
facilitate query execution?
Available Options
1. NoSQL Database
• Key-value store: low latency, <5 ms
• No standard SQL support
2. MPP Relational Database
• Shared-nothing architecture enables fast query execution
• Poor scalability: <100-node clusters, no fault tolerance
3. Search Engine
• Advanced indexing techniques for fast search
• 3~4x data expansion in size, no SQL support
4. SQL on Hadoop
• Modern distributed architecture and high scalability
• Slow on point queries
Motivation for a New File Format
CarbonData: Unified File Format
A single copy of data, balanced to fit all data access: big scan queries, small scan queries, and multi-dimensional OLAP queries.
Apache
• Apache Incubator project since June 2016
• Apache releases: 4 stable releases; latest 1.0.0, Jan 28, 2017
• Contributors and production deployments: [logos]
Introducing CarbonData
CarbonData sits in the storage layer beneath the compute layer. Agenda:
1. CarbonData File: what is the CarbonData file format?
2. CarbonData Table: what forms a CarbonData table on disk?
3. CarbonData integration with Spark: what does it take to deeply integrate with a distributed processing engine like Spark?
CarbonData:
An Indexed Columnar File Format
CarbonData File Structure
• Built-in columnar layout & index
• Multi-dimensional index (B+ tree)
• Min/Max index
• Inverted index
• Encoding: RLE, delta, global dictionary
• Snappy compression; adaptive data-type compression
• Data types: primitive and nested types
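The global dictionary and RLE encodings listed above can be sketched in a few lines of Python. This is an illustrative sketch only, not CarbonData's actual implementation; all names are invented:

```python
# Sketch: how a global dictionary plus run-length encoding shrinks a
# low-cardinality string column into compact integer codes.

def dictionary_encode(values):
    """Map each distinct value to a small integer code (global dictionary)."""
    dictionary = {}
    codes = []
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        codes.append(dictionary[v])
    return dictionary, codes

def rle_encode(codes):
    """Collapse runs of identical codes into [code, run_length] pairs."""
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return runs

cities = ["boston", "boston", "boston", "nyc", "nyc", "boston"]
dictionary, codes = dictionary_encode(cities)
runs = rle_encode(codes)
# dictionary: {'boston': 0, 'nyc': 1}; codes: [0, 0, 0, 1, 1, 0]
# runs: [[0, 3], [1, 2], [0, 1]]
```

Because the dictionary is global to the table, the same value always maps to the same code in every file, which later enables aggregation directly on codes.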
CarbonData File Layout
• Blocklet: a set of rows in columnar format
• Data is sorted along the MDK (multi-dimensional key)
• Clustered data enables efficient filtering and scans
• Column chunk: data for one column within a blocklet
• Footer: metadata information
• File-level metadata & statistics
• Schema
• Blocklet index

Carbon data file layout:
  Blocklet 1
    Column 1 chunk
    Column 2 chunk
    …
    Column n chunk
  …
  Blocklet N
  File Footer
    File metadata
    Schema
    Blocklet index
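The MDK sorting described above can be sketched as an ordinary lexicographic sort over the leading dimension columns. This is a hedged illustration; the column layout and values are invented:

```python
# Sketch: sorting rows along a multi-dimensional key (MDK) so rows that
# share leading dimension values become adjacent, which is what makes
# blocklet-level min/max filtering effective.

rows = [
    (2, 1, "a", 5000),
    (1, 2, "b", 11000),
    (1, 1, "a", 12000),
    (1, 1, "b", 5000),
]

# MDK = tuple of the leading dimension columns, in index order (assumed here)
mdk_columns = (0, 1, 2)
sorted_rows = sorted(rows, key=lambda r: tuple(r[c] for c in mdk_columns))
# rows with identical leading dimensions are now clustered together
```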
File Level Blocklet Index

[Figure: a file of four blocklets holding MDK-sorted rows. For every blocklet, the footer's blocklet index records its start key, end key, and per-column (min, max) statistics for C1..C7; these entries form the leaf level of the file-level index tree.]

• An in-memory, file-level MDK index tree is built from the footers for filtering
• This is the major optimization for efficient scans
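The per-blocklet (min, max) statistics support a simple pruning rule: a scan can skip any blocklet whose value range cannot contain the predicate value. A minimal sketch, with invented field names and data (not CarbonData's structures):

```python
# Sketch: blocklet pruning via per-column min/max statistics kept in the
# blocklet index. Only blocklets whose range can contain the value are read.

blocklet_index = [
    {"id": 1, "minmax": {"c7": (5000, 12000)}},
    {"id": 2, "minmax": {"c7": (1000, 11000)}},
    {"id": 3, "minmax": {"c7": (1000, 20000)}},
    {"id": 4, "minmax": {"c7": (5000, 12000)}},
]

def prune(index, column, value):
    """Keep only blocklets whose min/max range can contain `value`."""
    hits = []
    for entry in index:
        lo, hi = entry["minmax"][column]
        if lo <= value <= hi:
            hits.append(entry["id"])
    return hits

# c7 = 20000 can only live in blocklet 3; the other three are skipped
assert prune(blocklet_index, "c7", 20000) == [3]
```

Because rows are MDK-sorted, values cluster tightly per blocklet, so the min/max ranges overlap little and pruning discards most blocklets.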
Rich Multi-Level Index Support

[Figure: the Spark driver holds the table-level index; each executor holds a file-level index & scanner over Carbon files (data + footer) and feeds results to Catalyst.]

Using the index info in the footer, a two-level B+ tree index can be built:
• File-level index: a local B+ tree for efficient blocklet-level filtering
• Table-level index: a global B+ tree for efficient file-level filtering
• Column-chunk inverted index: for efficient column-chunk scans
CarbonData Table:
Data Segment + Metadata

A CarbonData table consists of:
• Metadata: an appendable dictionary plus the schema
• Index: a separate index per segment
• Data: immutable data files

Each batch load from the data source produces a new segment: the dictionary is appended, a new index tree is built, and data files are written into a new folder. Queries read the table as the union of its segments (e.g., Segment 1, Segment 2).
CarbonData Table Layout

HDFS layout:
• /table_name/fact/segment_id: Carbon files (data + footer)
• /table_name/meta: dictionary file (dictionary map), index file (all footers), schema file (latest schema)

The Spark driver keeps the table-level index in memory.
Mutation: Delete

Deleting from a segment's base file writes a delete delta: a bitmap file that marks the deleted rows. There is no change to the index and no change to the dictionary.
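The delete-delta idea can be sketched in a few lines. This is a minimal illustration (a Python set standing in for the on-disk bitmap), not CarbonData's implementation:

```python
# Sketch: the immutable base file is left untouched; a per-file bitmap
# records which row ids are deleted, and readers mask those rows at scan
# time. No index or dictionary change is needed.

base_rows = ["r0", "r1", "r2", "r3", "r4"]

# delete delta: set of deleted row ids (a bitmap in the real format)
delete_delta = set()

def delete_where(rows, predicate):
    """Record matching row ids in the delete delta; base data is untouched."""
    for rowid, row in enumerate(rows):
        if predicate(row):
            delete_delta.add(rowid)

def scan(rows):
    """Read the base file, masking out rows marked in the delete delta."""
    return [row for rowid, row in enumerate(rows) if rowid not in delete_delta]

delete_where(base_rows, lambda r: r in ("r1", "r3"))
assert scan(base_rows) == ["r0", "r2", "r4"]
```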
Mutation: Update

Updating a segment's base file writes a delete delta (a bitmap file for the deleted rows) plus an insert delta (a regular data file for the inserted rows). A new index is added for the insert delta, and the dictionary is appended.
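Update-as-delta can be sketched as a delete delta plus an insert delta that the reader merges at scan time. A hedged illustration with invented structures (the values mirror the phone-revenue example on the next slide):

```python
# Sketch: an UPDATE is recorded as a delete delta on the base file plus an
# insert delta holding the new row versions; the reader merges the base
# file (minus deleted rows) with the insert delta.

base = {0: ("phone", 70), 1: ("car", 100), 2: ("phone", 30)}
deleted = set()          # delete delta (bitmap of base row ids)
insert_delta = []        # regular data file with the updated rows

def update_where(predicate, transform):
    for rowid, row in base.items():
        if predicate(row):
            deleted.add(rowid)                   # old version masked out
            insert_delta.append(transform(row))  # new version appended

def scan():
    live = [row for rowid, row in base.items() if rowid not in deleted]
    return live + insert_delta

# UPDATE ... SET revenue = revenue - 10 WHERE product = 'phone'
update_where(lambda r: r[0] == "phone", lambda r: (r[0], r[1] - 10))
assert sorted(scan()) == [("car", 100), ("phone", 20), ("phone", 60)]
```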
Data Compaction

Multiple segments (e.g., Segment 1 and Segment 2) are compacted into a new segment (Segment 1.1) with its own index and data files; reads switch to the compacted segment.
Segment Management

A ZooKeeper-based Segment Manager tracks segment state, coordinating batch loads (writes) into data segments with queries (reads) over them.
SparkSQL + CarbonData:
Enables fast interactive data analysis

Carbon-Spark Integration
• Built-in Spark integration: Spark 1.5, 1.6, 2.1
• Interfaces: SQL and the DataFrame API
• Operations: load, query (with optimization), update, delete, compaction, etc.
• Modules: reader/writer, data management, query optimization
Integration through File Format

At the file-format level, Carbon files in HDFS segments (Segment 1, 2, 3) are accessed through InputFormat/OutputFormat.
Deep Integration with SparkSQL

Deeper integration plugs CarbonData into the SparkSQL optimizer, enabling table-level read/write/update/delete over the HDFS segments.
CarbonData as a SparkSQL Data Source

Within the SparkSQL Catalyst framework, a SQL or DataFrame query flows through parser/analyzer → optimizer → physical planning → execution:
• Parser: new SQL syntax for DML-related statements
• Analyzer: ResolveRelation binds the query to the Carbon data source
• Optimizer (rule-based): a Carbon-specific optimization rule, Lazy Decode, leverages the global dictionary
• Physical planning (cost-based): a Carbon data source strategy
• Execution (Spark Core, on the driver and executors): CarbonScanRDD leverages the multi-level index for efficient filtering and scans, alongside DML-related RDDs
Efficient Filtering via Index

Example: SELECT c3, c4 FROM t1 WHERE c2='boston'

1. File pruning: the table-level index selects only the files whose footers may match
2. Blocklet pruning: within each file, the blocklet index skips non-matching blocklets; surviving blocklets are assigned to scan tasks
3. Read and decompress the filter column
4. Binary search using the inverted index; skip to the next blocklet if there is no match
5. Decompress the projection columns
6. Return decoded data to Spark
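Steps 3-4 above can be sketched with Python's bisect: the inverted index stores the column's values in sorted order alongside their original row ids, so an equality predicate becomes a binary search instead of a full chunk scan. Names and data here are illustrative:

```python
# Sketch: equality filtering inside one blocklet via an inverted index
# (sorted values with their original row positions).
import bisect

# column chunk for c2, plus its inverted index
c2_chunk = ["nyc", "boston", "austin", "boston"]
sorted_pairs = sorted((v, rowid) for rowid, v in enumerate(c2_chunk))
sorted_values = [v for v, _ in sorted_pairs]

def matching_rows(value):
    """Binary-search the inverted index; empty result means skip the blocklet."""
    lo = bisect.bisect_left(sorted_values, value)
    hi = bisect.bisect_right(sorted_values, value)
    if lo == hi:
        return []            # no match: skip to the next blocklet
    return sorted(rowid for _, rowid in sorted_pairs[lo:hi])

# WHERE c2 = 'boston' -> rows 1 and 3; the projection columns c3, c4 are
# then decompressed only for those rows
assert matching_rows("boston") == [1, 3]
assert matching_rows("seattle") == []
```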
Lazy Decoding by Leveraging Global Dictionary

Example: SELECT c3, sum(c2) FROM t1 WHERE c1>10 GROUP BY c3

Before applying Lazy Decode (stage 1 → stage 2):
• Scan with filter (decode c1, c2, c3) → partial aggregation → final aggregation

After applying Lazy Decode:
• Scan with filter (decode c1, c2 only) → partial aggregation → final aggregation, both aggregating on the encoded value of c3 → dictionary decode (decode c3 back to string type)
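The payoff of lazy decode can be sketched directly: because the global dictionary assigns one code per distinct value table-wide, GROUP BY can aggregate on compact integer codes through both stages and decode only the final, already-reduced groups. Illustrative values only:

```python
# Sketch: GROUP BY on dictionary codes; strings are materialized only for
# the handful of final groups, not for every scanned row.

dictionary = {0: "boston", 1: "nyc"}        # global dictionary for c3
encoded_c3 = [0, 1, 0, 0, 1]                # dictionary codes as scanned
c2 = [10, 20, 30, 40, 50]

# partial + final aggregation operate on codes, never on strings
groups = {}
for code, v in zip(encoded_c3, c2):
    groups[code] = groups.get(code, 0) + v

# dictionary decode happens last, on a handful of groups
result = {dictionary[code]: total for code, total in groups.items()}
assert result == {"boston": 80, "nyc": 70}
```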
Usage: Write

• Using SQL

CREATE TABLE tablename (name String, PhoneNumber String) STORED BY "carbondata"

LOAD DATA [LOCAL] INPATH 'folder path' [OVERWRITE] INTO TABLE tablename
OPTIONS(property_name=property_value, ...)

INSERT INTO TABLE tablename select_statement1 FROM table1;

• Using DataFrame

df.write
  .format("carbondata")
  .option("tableName", "t1")
  .mode(SaveMode.Overwrite)
  .save()
Usage: Read

• Using SQL

SELECT project_list FROM t1
WHERE cond_list
GROUP BY columns
ORDER BY columns

• Using DataFrame

df = sparkSession.read
  .format("carbondata")
  .option("tableName", "t1")
  .load("path_to_carbon_file")
df.select(…).show
Usage: Update and Delete

Modify two columns in table1 with values from table2:

UPDATE table1 A
SET (A.PRODUCT, A.REVENUE) =
(
  SELECT PRODUCT, REVENUE
  FROM table2 B
  WHERE B.CITY = A.CITY AND B.BROKER = A.BROKER
)
WHERE A.DATE BETWEEN '2017-01-01' AND '2017-01-31'

Modify one column in table1:

UPDATE table1 A
SET A.REVENUE = A.REVENUE - 10
WHERE A.PRODUCT = 'phone'

Delete records in table1:

DELETE FROM table1 A
WHERE A.CUSTOMERID = '123'
Performance Result

Tests:
1. TPC-H benchmark (500 GB)
2. Test on a production data set (billions of rows)
3. Test on a large data set for scalability (1000 billion rows, 103 TB)

Setup:
• Storage
– Parquet: partitioned by time column (c1)
– CarbonData: multi-dimensional index on c1~c10
• Compute: Spark 2.1

TPC-H: Query

[Bar chart: response time (sec) per TPC-H query, Parquet vs. CarbonData.]
For big scan queries, performance is similar, within ±20%.
TPC-H: Query

Response time (sec):

Query      Parquet  CarbonData
Query_1      138        73
Query_3      283       207
Query_12     173        41
Query_16    1910        82

For queries that include a filter on the fact table, CarbonData gets 1.5-20x better performance by leveraging its index.
TPC-H: Loading and Compression
83.27
44.30
0.00
10.00
20.00
30.00
40.00
50.00
60.00
70.00
80.00
90.00
Parquet CarbonData
loading throughput (MB/sec/node)
2.96
2.74
0.00
0.50
1.00
1.50
2.00
2.50
3.00
3.50
Parquet CarbonData
compression ratio
Test on Production Data Set

Filter queries (each filtering on some of columns c1~c10; per-query filter columns omitted):

Query   Parquet (sec)  CarbonData (sec)  Parquet tasks  CarbonData tasks
Q1           6.4            1.3               55                5
Q2          65              1.3              804                5
Q3          71              5.2              804                9
Q5          64              4.7              804                9
Q4          67              2.7              804              161
Q6          62              3.7              804              161
Q7          63             21.85             804              588
Q8          69             11.2              804              645

Observation: fewer scan tasks (and thus fewer resources) are needed because of the more efficient filtering enabled by the multi-level index.
Test on Production Data Set

Aggregation query: no filter, group by c5 (a dictionary-encoded column).

[Bar charts: total response time (sec) and per-stage execution time (sec) for stage 1 and stage 2, Parquet vs. CarbonData.]

Observation: both partial aggregation and final aggregation are faster, because aggregation operates on dictionary-encoded values.
Test on Large Data Set

Data: 200 to 1000 billion rows (half a year of telecom data from one Chinese province)
Cluster: 70 nodes, 1120 cores
Queries:
• Q1: filter (c1~c4), select *
• Q2: filter (c10), select *
• Q3: full scan aggregate

[Line chart: response time as a percentage of the 200B-row baseline, for Q1, Q2, Q3 at 200B to 1000B rows.]

Observations, as data volume increases:
• The index efficiently reduces response time for IO-bound queries (Q1, Q2)
• Spark scales linearly for the CPU-bound query (Q3)
What's Coming Next

• Enhancements to data loading & compression
• Streaming ingest:
  • Introduce a row-based format for fast ingestion
  • Gradually compact row-based data into column-based for analytic workloads
  • Optimizations for time-series data
• Broader integration across the big data ecosystem: Beam, Flink, Kafka, Kylin
Apache
• Feedback, trials, and contributions of any kind are welcome!
• Code: https://github.com/apache/incubator-carbondata
• JIRA: https://issues.apache.org/jira/browse/CARBONDATA
• Dev mailing list: dev@carbondata.incubator.apache.org
• Website: http://carbondata.incubator.apache.org
• Current contributors: 64
• Monthly resolved JIRA issues: 100+
Thank You.
Jihong.Ma@huawei.com
Jacky.likun@huawei.com
 

More from Spark Summit (20)

FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
 
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
 
Apache Spark Structured Streaming Helps Smart Manufacturing with Xiaochang Wu
Apache Spark Structured Streaming Helps Smart Manufacturing with  Xiaochang WuApache Spark Structured Streaming Helps Smart Manufacturing with  Xiaochang Wu
Apache Spark Structured Streaming Helps Smart Manufacturing with Xiaochang Wu
 
Improving Traffic Prediction Using Weather Data with Ramya Raghavendra
Improving Traffic Prediction Using Weather Data  with Ramya RaghavendraImproving Traffic Prediction Using Weather Data  with Ramya Raghavendra
Improving Traffic Prediction Using Weather Data with Ramya Raghavendra
 
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
 
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
 
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
 
Next CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub WozniakNext CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub Wozniak
 
Powering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin KimPowering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin Kim
 
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya RaghavendraImproving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
 
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
 
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
 
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
 
Goal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim SimeonovGoal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim Simeonov
 
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
 
Getting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir VolkGetting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir Volk
 
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
 
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
 

Recently uploaded

English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdf
English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdfEnglish-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdf
English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdfblazblazml
 
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...KarteekMane1
 
Bank Loan Approval Analysis: A Comprehensive Data Analysis Project
Bank Loan Approval Analysis: A Comprehensive Data Analysis ProjectBank Loan Approval Analysis: A Comprehensive Data Analysis Project
Bank Loan Approval Analysis: A Comprehensive Data Analysis ProjectBoston Institute of Analytics
 
Advanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsAdvanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsVICTOR MAESTRE RAMIREZ
 
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...Boston Institute of Analytics
 
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024Susanna-Assunta Sansone
 
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...Dr Arash Najmaei ( Phd., MBA, BSc)
 
Cyber awareness ppt on the recorded data
Cyber awareness ppt on the recorded dataCyber awareness ppt on the recorded data
Cyber awareness ppt on the recorded dataTecnoIncentive
 
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...Amil Baba Dawood bangali
 
Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Seán Kennedy
 
Real-Time AI Streaming - AI Max Princeton
Real-Time AI  Streaming - AI Max PrincetonReal-Time AI  Streaming - AI Max Princeton
Real-Time AI Streaming - AI Max PrincetonTimothy Spann
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPTBoston Institute of Analytics
 
Decoding Patterns: Customer Churn Prediction Data Analysis Project
Decoding Patterns: Customer Churn Prediction Data Analysis ProjectDecoding Patterns: Customer Churn Prediction Data Analysis Project
Decoding Patterns: Customer Churn Prediction Data Analysis ProjectBoston Institute of Analytics
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our WorldEduminds Learning
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxMike Bennett
 
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...Thomas Poetter
 
What To Do For World Nature Conservation Day by Slidesgo.pptx
What To Do For World Nature Conservation Day by Slidesgo.pptxWhat To Do For World Nature Conservation Day by Slidesgo.pptx
What To Do For World Nature Conservation Day by Slidesgo.pptxSimranPal17
 
convolutional neural network and its applications.pdf
convolutional neural network and its applications.pdfconvolutional neural network and its applications.pdf
convolutional neural network and its applications.pdfSubhamKumar3239
 

Recently uploaded (20)

Data Analysis Project: Stroke Prediction
Data Analysis Project: Stroke PredictionData Analysis Project: Stroke Prediction
Data Analysis Project: Stroke Prediction
 
English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdf
English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdfEnglish-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdf
English-8-Q4-W3-Synthesizing-Essential-Information-From-Various-Sources-1.pdf
 
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...
wepik-insightful-infographics-a-data-visualization-overview-20240401133220kwr...
 
Bank Loan Approval Analysis: A Comprehensive Data Analysis Project
Bank Loan Approval Analysis: A Comprehensive Data Analysis ProjectBank Loan Approval Analysis: A Comprehensive Data Analysis Project
Bank Loan Approval Analysis: A Comprehensive Data Analysis Project
 
Insurance Churn Prediction Data Analysis Project
Insurance Churn Prediction Data Analysis ProjectInsurance Churn Prediction Data Analysis Project
Insurance Churn Prediction Data Analysis Project
 
Advanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsAdvanced Machine Learning for Business Professionals
Advanced Machine Learning for Business Professionals
 
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...
Data Analysis Project Presentation: Unveiling Your Ideal Customer, Bank Custo...
 
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024
FAIR, FAIRsharing, FAIR Cookbook and ELIXIR - Sansone SA - Boston 2024
 
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...
6 Tips for Interpretable Topic Models _ by Nicha Ruchirawat _ Towards Data Sc...
 
Cyber awareness ppt on the recorded data
Cyber awareness ppt on the recorded dataCyber awareness ppt on the recorded data
Cyber awareness ppt on the recorded data
 
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
 
Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...
 
Real-Time AI Streaming - AI Max Princeton
Real-Time AI  Streaming - AI Max PrincetonReal-Time AI  Streaming - AI Max Princeton
Real-Time AI Streaming - AI Max Princeton
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
 
Decoding Patterns: Customer Churn Prediction Data Analysis Project
Decoding Patterns: Customer Churn Prediction Data Analysis ProjectDecoding Patterns: Customer Churn Prediction Data Analysis Project
Decoding Patterns: Customer Churn Prediction Data Analysis Project
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our World
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptx
 
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...
Minimizing AI Hallucinations/Confabulations and the Path towards AGI with Exa...
 
What To Do For World Nature Conservation Day by Slidesgo.pptx
What To Do For World Nature Conservation Day by Slidesgo.pptxWhat To Do For World Nature Conservation Day by Slidesgo.pptx
What To Do For World Nature Conservation Day by Slidesgo.pptx
 
convolutional neural network and its applications.pdf
convolutional neural network and its applications.pdfconvolutional neural network and its applications.pdf
convolutional neural network and its applications.pdf
 

Apache Carbondata: An Indexed Columnar File Format for Interactive Query with Spark SQL: Spark Summit East talk by Jacky Li and Jihong Ma

  • 1. Apache CarbonData: An indexed columnar file format for interactive query with Spark SQL
      Jihong Ma, Jacky Li (Huawei)
  • 2. Data feeds many workloads: Report & Dashboard, OLAP & Ad-hoc, Batch Processing, Machine Learning, Real-Time Analytics
  • 3. Challenge: a wide spectrum of query analysis
      - OLAP vs. detailed query
      - Full scan vs. small scan
      - Point queries
      (diagram: big scan query, small scan query, multi-dimensional OLAP query)
  • 4. How to choose a storage engine that facilitates query execution?
  • 5. Available Options
      1. NoSQL database
         - Key-value store: low latency (<5 ms)
         - No standard SQL support
      2. MPP relational database
         - Shared-nothing architecture enables fast query execution
         - Poor scalability: <100-node clusters, no fault tolerance
      3. Search engine
         - Advanced indexing techniques for fast search
         - 3-4x data expansion in size, no SQL support
      4. SQL on Hadoop
         - Modern distributed architecture and high scalability
         - Slow on point queries
  • 6. Motivation for a New File Format
      CarbonData: a unified file format, a single copy of data balanced to fit all data access patterns (big scan, small scan, and multi-dimensional OLAP queries)
  • 7. Apache
      - Apache Incubator project since June 2016
      - Apache releases: 4 stable releases, latest 1.0.0 (Jan 28, 2017)
      - Contributors and in-production users across compute and storage (logo slide)
  • 8. Introducing CarbonData
      1. CarbonData file: what is the CarbonData file format?
      2. CarbonData table: what forms a CarbonData table on disk?
      3. CarbonData integration with Spark: what does it take to deeply integrate with a distributed processing engine like Spark?
  • 10. CarbonData File Structure
      - Built-in columnar layout & index
        - Multi-dimensional index (B+ tree)
        - Min/max index
        - Inverted index
      - Encoding: RLE, delta, global dictionary; Snappy for compression; adaptive data-type compression
      - Data types: primitive and nested types
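To make the listed encodings concrete, here is a minimal, illustrative sketch (not CarbonData's actual implementation) of two of them: run-length encoding (RLE), which collapses repeated values, and delta encoding, which stores successive differences. Both shrink well-clustered columnar data.

```python
def rle_encode(values):
    """Collapse consecutive repeats into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def delta_encode(values):
    """Store the first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

# Sorted/clustered columns compress especially well under these schemes.
print(rle_encode(["US", "US", "US", "UK", "UK"]))  # [('US', 3), ('UK', 2)]
print(delta_encode([100, 101, 103, 106]))          # [100, 1, 2, 3]
```

Because CarbonData sorts rows along the multi-dimensional key, runs of equal values are common, which is what makes RLE and delta effective here.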
  • 11. CarbonData File Layout
      - Blocklet: a set of rows in columnar format
        - Data is sorted along the MDK (multi-dimensional key)
        - Clustered data enables efficient filtering and scan
      - Column chunk: data for one column in a blocklet
      - Footer: metadata information
        - File-level metadata & statistics
        - Schema
        - Blocklet index
      (layout: carbon data file = blocklets 1..N, each holding column chunks 1..n, followed by the file footer with file metadata, schema, and blocklet index)
  • 12. File-Level Blocklet Index
      - Builds an in-memory, file-level MDK index tree for filtering
      - A major optimization for efficient scan
      - The footer's blocklet index records, per blocklet, the start/end MDK values and per-column (min, max) statistics
      (diagram: example rows grouped into blocklets 1-4, with the index tree built over their start/end keys)
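The per-blocklet (min, max) statistics enable pruning: a blocklet can be skipped entirely when a filter value falls outside its range. A hypothetical sketch of the idea, with illustrative names (`Blocklet`, `prune`) that are not CarbonData's API:

```python
from dataclasses import dataclass

@dataclass
class Blocklet:
    rows: list     # row data (not touched by pruning itself)
    col_min: int   # min of the filter column in this blocklet
    col_max: int   # max of the filter column in this blocklet

def prune(blocklets, value):
    """Keep only blocklets whose [min, max] range can contain value."""
    return [b for b in blocklets if b.col_min <= value <= b.col_max]

blocklets = [
    Blocklet(rows=[], col_min=1,  col_max=10),
    Blocklet(rows=[], col_min=11, col_max=20),
    Blocklet(rows=[], col_min=21, col_max=30),
]
# A point filter `col = 15` only needs to scan the second blocklet.
print(len(prune(blocklets, 15)))  # 1
```

Because rows are sorted along the MDK, blocklet min/max ranges overlap little, which is what makes this kind of pruning selective in practice.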
  • 13. Rich Multi-Level Index Support
      Using the index info in the footer, a two-level B+ tree index can be built:
      - File-level index: local B+ tree, efficient blocklet-level filtering
      - Table-level index: global B+ tree (held in the Spark driver), efficient file-level filtering
      - Column chunk inverted index: efficient column chunk scan
  • 15. CarbonData Table
      Each batch load from the data source creates a segment, and the table consists of:
      - Metadata: appendable dictionary and schema
      - Index: a separate index per segment
      - Data: immutable data files
  • 16. CarbonData Table (cont.)
      A subsequent batch load creates segment 2 alongside segment 1: the dictionary is appended, a new index tree is built, and data files go into a new folder, while readers continue to see the table as a whole
  • 17. CarbonData Table Layout
      - HDFS /table_name/fact/segment_id: carbon files (data + footer)
      - HDFS /table_name/meta: dictionary file, index file (all footers), schema file (latest schema)
      - Spark driver holds the table-level index and the dictionary map
  • 18. Mutation: Delete
      - A delete writes a delete delta: a bitmap file that marks deleted rows in the base file of the segment
      - No change in index, no change in dictionary
  • 19. Mutation: Update
      - An update writes a delete delta plus an insert delta: a bitmap file for the deleted rows, and a regular data file for the inserted rows
      - A new index is added for the insert delta; the dictionary is appended
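The delete-delta idea can be sketched as follows, under the stated assumptions: the base file is immutable, a delete only records row positions in a side structure (here a plain set standing in for the bitmap), and readers apply it at scan time. Class and method names are illustrative, not CarbonData's code.

```python
class SegmentReader:
    def __init__(self, base_rows):
        self.base_rows = base_rows   # immutable base data file
        self.delete_bitmap = set()   # row ids marked deleted (the delete delta)

    def delete(self, row_ids):
        # No rewrite of the base file; just record the deleted positions.
        self.delete_bitmap.update(row_ids)

    def scan(self):
        # Readers skip rows present in the delete bitmap.
        return [r for i, r in enumerate(self.base_rows)
                if i not in self.delete_bitmap]

seg = SegmentReader(["a", "b", "c", "d"])
seg.delete([1, 3])
print(seg.scan())  # ['a', 'c']
```

An update is then delete plus insert: mark the old row positions in the bitmap and append the new rows as a separate data file, so the base segment never needs rewriting.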
  • 21. Segment Management
      A ZooKeeper-based segment manager tracks segment state, coordinating loads (writes) of new data segments with concurrent queries (reads)
  • 22. SparkSQL + CarbonData: Enables fast interactive data analysis
  • 23. Carbon-Spark Integration
      - Built-in Spark integration: Spark 1.5, 1.6, 2.1
      - Interfaces: SQL and DataFrame API
      - Operations: load, query (with optimization), update, delete, compaction, etc.
      - Components: reader/writer, data management, query optimization
  • 24. Integration through File Format
      Carbon files in HDFS (segments 1-3) are accessed via InputFormat/OutputFormat
  • 25. Deep Integration with SparkSQL
      Beyond the file format, CarbonData integrates with the SparkSQL optimizer, exposing the table for read/write/update/delete over the same HDFS segments
  • 26. CarbonData as a SparkSQL Data Source
      Hooks into the SparkSQL Catalyst framework at each stage (SQL or DataFrame input):
      - Parser/Analyzer: new SQL syntax for DML-related statements; ResolveRelation
      - Rule-based optimizer: Carbon-specific optimization rule (lazy decode leveraging the global dictionary)
      - Cost-based physical planning: Carbon data source strategy
      - Execution (Spark core): CarbonScanRDD leverages the multi-level index for efficient filtering and scan, plus DML-related RDDs
  • 27. Efficient Filtering via Index
      Example: SELECT c3, c4 FROM t1 WHERE c2 = 'boston'
      1. File pruning (Spark driver, using the table-level index)
      2. Blocklet pruning (executor, using the footer index)
      3. Read and decompress the filter column
      4. Binary search using the inverted index; skip to the next blocklet if there is no match
      5. Decompress the projection columns
      6. Return decoded data to Spark
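Step 4 works because values in a column chunk are kept in sorted order (via the inverted index), so the scanner can binary-search for the filter value instead of scanning every row. A conceptual sketch using Python's stdlib `bisect`; this is not CarbonData's scanner code, and `matching_rows` is a hypothetical helper:

```python
import bisect

def matching_rows(sorted_chunk, value):
    """Return the [start, end) row range equal to value, or None to skip."""
    lo = bisect.bisect_left(sorted_chunk, value)
    hi = bisect.bisect_right(sorted_chunk, value)
    return (lo, hi) if lo < hi else None   # None => skip to next blocklet

chunk = ["austin", "boston", "boston", "boston", "chicago"]
print(matching_rows(chunk, "boston"))   # (1, 4)
print(matching_rows(chunk, "denver"))   # None
```

Returning `None` corresponds to the "skip to next blocklet if no matching" shortcut: the projection columns (c3, c4 in the example) are only decompressed for blocklets that actually contain matches.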
  • 28. Lazy Decoding by Leveraging the Global Dictionary
      Example: SELECT c3, sum(c2) FROM t1 WHERE c1 > 10 GROUP BY c3
      - Before lazy decode: stage 1 scans with filter and decodes c1, c2, and c3; partial and final aggregation run on decoded values
      - After lazy decode: stage 1 scans with filter and decodes c1 and c2 only; aggregation runs on the encoded value of c3, and a final dictionary-decode step converts c3 back to its string type
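The payoff of lazy decode can be sketched in a few lines, assuming a global dictionary mapping integer codes to strings: the group-by hashes and compares cheap integer codes, and only the final (small) aggregated result is translated back to strings. The data and column names below are made up for illustration.

```python
from collections import defaultdict

dictionary = {1: "boston", 2: "chicago"}   # global dictionary (code -> value)
encoded_c3 = [1, 2, 1, 1, 2]               # c3 stored as dictionary codes
c2 = [10, 20, 30, 40, 50]

# Aggregate on encoded values: fast integer comparisons, compact hash keys.
partial = defaultdict(int)
for code, v in zip(encoded_c3, c2):
    partial[code] += v

# Decode only the aggregated result, not every input row.
result = {dictionary[code]: s for code, s in partial.items()}
print(result)  # {'boston': 80, 'chicago': 70}
```

With millions of input rows but only a handful of distinct groups, this moves the string-decoding cost from once per row to once per group.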
  • 29. Usage: Write
      Using SQL:

      CREATE TABLE tablename (name String, PhoneNumber String)
      STORED BY 'carbondata'

      LOAD DATA [LOCAL] INPATH 'folder path' [OVERWRITE] INTO TABLE tablename
      OPTIONS(property_name=property_value, ...)

      INSERT INTO TABLE tablename select_statement1 FROM table1;

      Using DataFrame:

      df.write
        .format("carbondata")
        .option("tableName", "t1")
        .mode(SaveMode.Overwrite)
        .save()
  • 30. Usage: Read
      Using SQL:

      SELECT project_list FROM t1
      WHERE cond_list
      GROUP BY columns
      ORDER BY columns

      Using DataFrame:

      df = sparkSession.read
        .format("carbondata")
        .option("tableName", "t1")
        .load("path_to_carbon_file")
      df.select(...).show
  • 31. Usage: Update and Delete
      Modify one column in table1:

      UPDATE table1 A
      SET A.REVENUE = A.REVENUE - 10
      WHERE A.PRODUCT = 'phone'

      Modify two columns in table1 with values from table2:

      UPDATE table1 A
      SET (A.PRODUCT, A.REVENUE) = (
        SELECT PRODUCT, REVENUE
        FROM table2 B
        WHERE B.CITY = A.CITY AND B.BROKER = A.BROKER
      )
      WHERE A.DATE BETWEEN '2017-01-01' AND '2017-01-31'

      Delete records in table1:

      DELETE FROM table1 A
      WHERE A.CUSTOMERID = '123'
  • 33. Test Setup
      1. TPC-H benchmark (500 GB)
      2. Test on production data set (billions of rows)
      3. Test on large data set for scalability (1000 billion rows, 103 TB)
      Storage:
      - Parquet: partitioned by the time column (c1)
      - CarbonData: multi-dimensional index on c1-c10
      Compute: Spark 2.1
  • 34. TPC-H: Query
      For big scan queries, Parquet and CarbonData show similar performance, within ±20%
      (chart: response time in seconds per query, Parquet vs. CarbonData)
  • 35. TPC-H: Query (filters on the fact table)
      For queries that include a filter on the fact table, CarbonData gains 1.5-20x performance by leveraging the index:

      Response time (sec)   Parquet   CarbonData
      Query_1                   138           73
      Query_3                   283          207
      Query_12                  173           41
      Query_16                 1910           82
  • 36. TPC-H: Loading and Compression
      - Loading throughput (MB/sec/node): Parquet 83.27, CarbonData 44.30
      - Compression ratio: Parquet 2.96, CarbonData 2.74
  • 37. Test on Production Data Set: Filter Queries
      Observation: fewer scan tasks (less resource) are needed because of more efficient filtering via the multi-level index
      (queries filter on different subsets of columns c1-c10)

      Query   Response time (sec)      Number of tasks
              Parquet   CarbonData     Parquet   CarbonData
      Q1          6.4          1.3          55            5
      Q2           65          1.3         804            5
      Q3           71          5.2         804            9
      Q5           64          4.7         804            9
      Q4           67          2.7         804          161
      Q6           62          3.7         804          161
      Q7           63        21.85         804          588
      Q8           69         11.2         804          645
  • 38. Test on Production Data Set: Aggregation Query
      Query: no filter, group by c5 (a dictionary-encoded column)
      Observation: both partial and final aggregation are faster, because aggregation operates on dictionary-encoded values
      (charts: overall response time and per-stage execution time, Parquet vs. CarbonData)
  • 39. Test on Large Data Set
      - Data: 200 to 1000 billion rows (half a year of telecom data from one Chinese province)
      - Cluster: 70 nodes, 1120 cores
      - Queries:
        Q1: filter (c1-c4), select *
        Q2: filter (c10), select *
        Q3: full scan aggregate
      Observation as data grows:
      - The index keeps response time low for IO-bound queries (Q1, Q2)
      - Spark scales linearly for the CPU-bound query (Q3)
      (chart: response time as a percentage of the 200B-row baseline)
  • 41. What's Coming Next
      - Enhancements to data loading & compression
      - Streaming ingest:
        - Introduce a row-based format for fast ingestion
        - Gradually compact row-based data to column-based for analytic workloads
        - Optimizations for time-series data
      - Broader integration across the big data ecosystem: Beam, Flink, Kafka, Kylin
  • 42. Apache
      We love feedback, try-outs, and any kind of contribution!
      - Code: https://github.com/apache/incubator-carbondata
      - JIRA: https://issues.apache.org/jira/browse/CARBONDATA
      - Dev mailing list: dev@carbondata.incubator.apache.org
      - Website: http://carbondata.incubator.apache.org
      - Current contributors: 64
      - Monthly resolved JIRA issues: 100+