Real-time analytics over large datasets has become an increasingly widespread demand. Over the past several years, the Hadoop ecosystem has been continuously evolving: even complex queries over large datasets can now be run interactively with distributed processing frameworks like Apache Spark, and new, efficient storage formats have been introduced to support these frameworks. Apache Parquet and ORC provide fast scans over columnar data, while Apache HBase offers fast ingest and millisecond-scale random access.
In this talk, we outline Apache CarbonData, a new addition to the open source Hadoop ecosystem: an indexed columnar file format aimed at bridging this gap to fully enable real-time analytics. It is deeply integrated with Spark SQL and dramatically accelerates query processing by leveraging efficient encoding/compression and effective predicate pushdown through CarbonData's multi-level index technique.
3. Challenge
• Wide spectrum of query analysis
• OLAP vs. detailed query
• Full scan vs. small scan
• Point queries
[Diagram: query spectrum spanning small scan queries, big scan queries, and multi-dimensional OLAP queries]
4. How to choose a storage engine to facilitate query execution?
5. Available Options
1. NoSQL database
• Key-value store: low latency, < 5 ms
• No standard SQL support
2. MPP relational database
• Shared-nothing architecture enables fast query execution
• Poor scalability: cluster size < 100 nodes, no fault tolerance
3. Search engine
• Advanced indexing techniques for fast search
• 3~4x data expansion in size, no SQL support
4. SQL on Hadoop
• Modern distributed architecture and high scalability
• Slow on point queries
6. Motivation for a New File Format
CarbonData: Unified File Format
[Diagram: one unified format serving small scan queries, big scan queries, and multi-dimensional OLAP queries]
A single copy of data, balanced to fit all data access patterns
7. Apache CarbonData
• Apache Incubator project since June 2016
• Apache releases:
  • 4 stable releases
  • Latest: 1.0.0, Jan 28, 2017
• Contributors: [company logos]
• In production: [company logos across compute and storage]
8. Introducing CarbonData
Three questions:
1. CarbonData File: What is the CarbonData file format?
2. CarbonData Table: What forms a CarbonData table on disk?
3. CarbonData Integration with Spark: What does it take to deeply integrate with a distributed processing engine like Spark?
10. CarbonData File Structure
§ Built-in Columnar Format & Index
§ Multi-dimensional Index (B+ Tree)
§ Min/Max index
§ Inverted index
§ Encoding:
§ RLE, Delta, Global Dictionary
§ Snappy for compression
§ Adaptive Data Type Compression
§ Data Type:
§ Primitive type and nested type
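As a rough illustration of the encoding step, here is a minimal, self-contained Scala sketch of global dictionary encoding followed by run-length encoding (RLE); the names and simplifications are ours, not CarbonData's actual encoders:

  // Sketch of global dictionary + run-length encoding (RLE).
  object EncodingSketch {
    // Dictionary-encode: map each distinct string to a dense integer id.
    def dictionaryEncode(column: Seq[String]): (Map[String, Int], Seq[Int]) = {
      val dict = column.distinct.zipWithIndex.toMap
      (dict, column.map(dict))
    }

    // RLE: collapse runs of equal ids into (id, runLength) pairs.
    def rle(ids: Seq[Int]): Seq[(Int, Int)] =
      ids.foldLeft(List.empty[(Int, Int)]) {
        case ((v, n) :: rest, x) if v == x => (v, n + 1) :: rest
        case (acc, x) => (x, 1) :: acc
      }.reverse

    def main(args: Array[String]): Unit = {
      val (dict, ids) = dictionaryEncode(Seq("boston", "boston", "nyc", "nyc", "boston"))
      println(dict)     // Map(boston -> 0, nyc -> 1)
      println(rle(ids)) // List((0,2), (1,2), (0,1))
    }
  }

On sorted, low-cardinality columns the dictionary ids form long runs, which is exactly where RLE pays off before Snappy compression is applied.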
11. CarbonData File Layout
• Blocklet: a set of rows in columnar format
  • Data is sorted along the MDK (multi-dimensional key)
  • Clustered data enables efficient filtering and scans
• Column chunk: data for one column in a blocklet
• Footer: metadata information
  • File-level metadata & statistics
  • Schema
  • Blocklet index
Carbon Data File layout:
  Blocklet 1
    Column 1 Chunk
    Column 2 Chunk
    …
    Column n Chunk
  …
  Blocklet N
  File Footer
    File Metadata
    Schema
    Blocklet Index
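To make the layout concrete, the structure above can be sketched as Scala types; the field names below are ours for illustration and do not match CarbonData's actual Thrift definitions:

  // Illustrative types mirroring the on-disk layout described above.
  case class ColumnChunk(columnName: String, compressedBytes: Array[Byte])
  case class Blocklet(chunks: Seq[ColumnChunk]) // one chunk per column
  case class BlockletIndexEntry(
    minValues: Seq[Array[Byte]], // per-column min, for filtering
    maxValues: Seq[Array[Byte]], // per-column max, for filtering
    offset: Long)                // where the blocklet starts in the file
  case class FileFooter(
    schema: Seq[String],                        // file-level metadata
    blockletIndex: Seq[BlockletIndexEntry])     // one entry per blocklet
  case class CarbonFile(blocklets: Seq[Blocklet], footer: FileFooter)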
13. Rich Multi-Level Index Support
[Diagram: the Spark driver (with Catalyst) holds the table-level index; each executor holds a file-level index & scanner over Carbon files (data + footer); column chunks carry an inverted index]
• Using the index info in the footer, a two-level B+ tree index can be built:
  • File-level index: local B+ tree for efficient blocklet-level filtering
  • Table-level index: global B+ tree for efficient file-level filtering
• Column chunk inverted index: efficient column chunk scans
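The pruning effect of the min/max part of this index can be shown in a few lines of Scala; this simplified model (per-blocklet min/max statistics and an equality predicate) is ours, not CarbonData's B+ tree implementation:

  // Min/max pruning: keep only blocklets whose [min, max] range can
  // contain the predicate value; all other blocklets are skipped without I/O.
  case class BlockletStats(id: Int, min: Int, max: Int)

  def pruneBlocklets(index: Seq[BlockletStats], value: Int): Seq[Int] =
    index.filter(s => s.min <= value && value <= s.max).map(_.id)

  // Because data is sorted along the MDK, blocklet ranges barely overlap:
  val index = Seq(BlockletStats(0, 1, 100), BlockletStats(1, 101, 200), BlockletStats(2, 201, 300))
  assert(pruneBlocklets(index, 150) == Seq(1)) // only blocklet 1 is scanned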
26. CarbonData as a SparkSQL Data Source
SQL and DataFrame queries flow through the SparkSQL Catalyst framework (Parser/Analyzer, Optimizer, Physical Planning, Execution), and the Carbon data source plugs into each phase:
• Parser: new SQL syntax for DML-related statements
• Analyzer: ResolveRelation resolves the Carbon data source
• Optimizer (rule-based): Carbon-specific optimization rule, e.g. Lazy Decode leveraging the global dictionary
• Physical Planning: cost-based planning strategy
• Execution (RDDs on Spark Core): CarbonScanRDD, leveraging the multi-level index for efficient filtering and scan, plus DML-related RDDs
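For a feel of the extension point used for the optimizer rule, here is a minimal sketch of registering a custom rule with Catalyst's rule-based optimizer through Spark 2.x's experimental API; the no-op rule below merely stands in for a real lazy-decode rewrite:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
  import org.apache.spark.sql.catalyst.rules.Rule

  // Placeholder rule: a real lazy-decode rule would rewrite aggregates to
  // operate on dictionary-encoded values and add a decode step on top.
  object LazyDecodeSketch extends Rule[LogicalPlan] {
    override def apply(plan: LogicalPlan): LogicalPlan = plan // no-op here
  }

  val spark = SparkSession.builder()
    .appName("catalyst-rule-sketch").master("local[*]").getOrCreate()

  // Inject the rule into the rule-based optimizer.
  spark.experimental.extraOptimizations ++= Seq(LazyDecodeSketch)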
27. Efficient Filtering via Index
Example: SELECT c3, c4 FROM t1 WHERE c2 = 'boston'
[Diagram: the Spark driver prunes Carbon files stored in HDFS; executors run tasks over the surviving blocklets and their column chunks C1~Cn]
1. File pruning: the driver's table-level index selects only the files to read
2. Blocklet pruning: the file-level index from each footer selects only the matching blocklets
3. Read and decompress the filter column
4. Binary search using the inverted index; skip to the next blocklet if there is no match
5. Decompress the projection columns
6. Return decoded data to Spark
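Step 4 can be illustrated with a small Scala sketch: when a column chunk stores its values sorted, with an inverted index mapping sorted positions back to row ids, an equality filter becomes a binary search instead of a full chunk scan. The structure below is our simplification:

  // Sorted values plus an inverted index (sorted position -> original row id).
  case class InvertedChunk(sortedValues: Array[Int], rowIds: Array[Int])

  // Return the original row ids matching `value`, via binary search.
  def lookup(chunk: InvertedChunk, value: Int): Seq[Int] = {
    val pos = java.util.Arrays.binarySearch(chunk.sortedValues, value)
    if (pos < 0) Seq.empty // no match: caller skips to the next blocklet
    else {
      // Expand to the full run of equal values around the hit.
      var lo = pos
      while (lo > 0 && chunk.sortedValues(lo - 1) == value) lo -= 1
      var hi = pos
      while (hi + 1 < chunk.sortedValues.length && chunk.sortedValues(hi + 1) == value) hi += 1
      (lo to hi).map(i => chunk.rowIds(i))
    }
  }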
28. Lazy Decoding by Leveraging the Global Dictionary
Example: SELECT c3, sum(c2) FROM t1 WHERE c1 > 10 GROUP BY c3
Before applying Lazy Decode:
• Stage 1: scan with filter (decode c1, c2, c3), then partial aggregation
• Stage 2: final aggregation
After applying Lazy Decode:
• Stage 1: scan with filter (decode c1, c2 only), then partial aggregation on the encoded value of c3
• Stage 2: final aggregation, then dictionary decode (decode c3 to string type)
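The payoff is easy to see in plain Scala: grouping on small integer codes is cheaper than grouping on strings, and only the few surviving group keys need dictionary decoding at the end. A toy model (the dictionary and column contents are made up):

  // Global dictionary for c3, and dictionary-encoded (c3, c2) row pairs.
  val dict: Map[Int, String] = Map(0 -> "boston", 1 -> "nyc")
  val rows: Seq[(Int, Long)] = Seq((0, 10L), (1, 5L), (0, 7L), (1, 3L))

  // Aggregate on the encoded key: cheap integer hashing and comparison.
  val encodedAgg: Map[Int, Long] =
    rows.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).sum }

  // Decode each distinct group key only once, at the very end.
  val result: Map[String, Long] = encodedAgg.map { case (k, v) => dict(k) -> v }
  // result: Map(boston -> 17, nyc -> 8)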
29. Usage: Write
• Using SQL

CREATE TABLE tablename (name String, PhoneNumber String) STORED BY 'carbondata'

LOAD DATA [LOCAL] INPATH 'folder path' [OVERWRITE] INTO TABLE tablename
OPTIONS(property_name=property_value, ...)

INSERT INTO TABLE tablename select_statement1 FROM table1;

• Using DataFrame

df.write
  .format("carbondata")
  .option("tableName", "t1")
  .mode(SaveMode.Overwrite)
  .save()
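Putting the DataFrame path together, a minimal end-to-end write could look like the following; it assumes the CarbonData Spark integration jar is on the classpath, and the data set is illustrative:

  import org.apache.spark.sql.{SaveMode, SparkSession}

  val spark = SparkSession.builder()
    .appName("carbon-write").master("local[*]").getOrCreate()
  import spark.implicits._

  // Small illustrative data set matching the CREATE TABLE above.
  val df = Seq(("alice", "555-0100"), ("bob", "555-0101"))
    .toDF("name", "PhoneNumber")

  // Write it as a CarbonData table, as on this slide.
  df.write
    .format("carbondata")
    .option("tableName", "t1")
    .mode(SaveMode.Overwrite)
    .save()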
30. Usage: Read
• Using SQL

SELECT project_list FROM t1
WHERE cond_list
GROUP BY columns
ORDER BY columns

• Using DataFrame

val df = sparkSession.read
  .format("carbondata")
  .option("tableName", "t1")
  .load("path_to_carbon_file")
df.select(…).show()
31. Usage: Update and Delete
Modify two columns in table1 with values from table2:

UPDATE table1 A
SET (A.PRODUCT, A.REVENUE) =
(
  SELECT PRODUCT, REVENUE
  FROM table2 B
  WHERE B.CITY = A.CITY AND B.BROKER = A.BROKER
)
WHERE A.DATE BETWEEN '2017-01-01' AND '2017-01-31'

Modify one column in table1:

UPDATE table1 A
SET A.REVENUE = A.REVENUE - 10
WHERE A.PRODUCT = 'phone'

Delete records in table1:

DELETE FROM table1 A
WHERE A.CUSTOMERID = '123'

[Diagram: sample rows of table1 and table2 illustrating each statement, e.g. the 'phone' row's REVENUE dropping from 70 to 60]
33. Test
1. TPC-H benchmark (500GB)
2. Test on Production Data Set (Billions of rows)
3. Test on Large Data Set for Scalability (1000B rows, 103TB)
• Storage
– Parquet:
• Partitioned by time column (c1)
– CarbonData:
• Multi-dimensional Index by c1~c10
• Compute
– Spark 2.1
34. TPC-H: Query
For big scan queries, similar performance, ±20%
[Chart: response time (sec, 0~800) per TPC-H query, Parquet vs. CarbonData]
37. Test on Production Data Set
Filter queries (each filtering on columns among c1~c10):

Query | Parquet time (sec) | CarbonData time (sec) | Parquet tasks | CarbonData tasks
Q1    | 6.4  | 1.3   | 55  | 5
Q2    | 65   | 1.3   | 804 | 5
Q3    | 71   | 5.2   | 804 | 9
Q5    | 64   | 4.7   | 804 | 9
Q4    | 67   | 2.7   | 804 | 161
Q6    | 62   | 3.7   | 804 | 161
Q7    | 63   | 21.85 | 804 | 588
Q8    | 69   | 11.2  | 804 | 645
Observation: fewer scan tasks (and thus less resource) are needed because of more efficient filtering leveraging the multi-level index
38. Test on Production Data Set
Aggregation query: no filter, group by c5 (a dictionary-encoded column)
[Charts: overall response time (sec) and per-stage execution time (sec, stage 1 and stage 2), Parquet vs. CarbonData]
Observation: both partial aggregation and final aggregation are faster, because aggregation operates on dictionary-encoded values
39. Test on Large Data Set
Data: 200 to 1000 billion rows (half a year of telecom data from one Chinese province)
Cluster: 70 nodes, 1120 cores
Queries:
Q1: filter (c1~c4), select *
Q2: filter (c10), select *
Q3: full scan aggregate
Observation: as data volume increases:
• The index effectively reduces response time for I/O-bound queries: Q1, Q2
• Spark scales linearly for CPU-bound queries: Q3
[Chart: response time as a percentage of the 200B-row baseline (0%~500%) for Q1, Q2, Q3 at 200B~1000B rows]
41. What's Coming Next
• Enhancements on data loading & compression
• Streaming ingest:
  • Introduce a row-based format for fast ingestion
  • Gradually compact row-based data into column-based data for analytic workloads
  • Optimizations for time series data
• Broader integration across the big data ecosystem: Beam, Flink, Kafka, Kylin
42. Apache CarbonData
• We would love feedback, try-outs, and any kind of contribution!
• Code: https://github.com/apache/incubator-carbondata
• JIRA: https://issues.apache.org/jira/browse/CARBONDATA
• dev mailing list: dev@carbondata.incubator.apache.org
• Website: http://carbondata.incubator.apache.org
• Current contributors: 64
• Monthly resolved JIRA issues: 100+