Sparser: Fast Analytics on Raw Data
by Avoiding Parsing
Shoumik Palkar, Firas Abuzaid, Peter Bailis, Matei Zaharia
Motivation
Bigger unstructured datasets and faster hardware.
[Charts: hardware speed over time; data volume over time]
Parsing unstructured data before querying it
is often very slow.
Today: Spark Data Sources API
+ E.g., column pruning directly in Parquet data loader
- Little support for unstructured formats (e.g., can’t avoid JSON parsing)
Spark Core Engine
Data Source API
Push part of query into the data source.
Parsing: A Computational Bottleneck
Example: Existing state-of-the-art JSON parsers are 100x slower than scanning a string!
[Chart: cycles per 5KB record for state-of-the-art parsers. RapidJSON parsing: 113,836; Mison index construction: 56,796; string scan: 360]
*Similar results on binary formats like Avro and Parquet
Parsing seems to be a necessary evil:
how do we get around doing it?
Key Opportunity: High Selectivity
High selectivity especially true
for exploratory analytics.
[Chart: CDF of query selectivity for Databricks and Censys workloads]
40% of customer Spark queries at Databricks select < 20% of data
99% of queries in Censys select < 0.001% of data
Data Source API provides access to
query filters at data source!
How can we exploit high selectivity to accelerate parsing?
Sparser: Filter Before You Parse
[Diagram: today, the parser runs over all of the raw data; Sparser runs a cascade of filters over the raw data first and parses only the records that pass]
Today: parse full input → slow!
Sparser: filter first, before parsing, using fast filtering functions with false positives, but no false negatives
Demo
Count Tweets where text contains “Trump” and “Putin”
Sparser in Spark SQL
Spark Core Engine
Data Source API
val df = spark.read.format(“json”)
Sparser in Spark SQL
Spark Core Engine
Data Source API
Sparser Filtering Engine
val df = spark.read.format(“edu.stanford.sparser.json”)
Sparser Data Source Reader (also supports Avro and Parquet!)
Sparser Overview
Raw Filter (“RF”): filtering function on a bytestream with false
positives but no false negatives.
Use an optimizer to combine RFs into a cascade.
[Diagram: example query: Tweets WHERE text contains “Trump”. An RF scans the raw bytes for “TRUM” and passes a record containing “Trumb”; other RFs may follow, and only the full parser checks whether the match is in the “text” field and is the full word “Trump”.]
Key Challenges in Using RFs
1. How do we implement the RFs efficiently?
2. How do we efficiently choose the RFs to maximize parsing
throughput for a given query and dataset?
Rest of this Talk
• Sparser’s API
• Sparser’s Raw Filter Designs (Challenge 1)
• Sparser’s Optimizer: Choosing a Cascade (Challenge 2)
• Performance Results on various file formats (e.g., JSON, Avro)
Sparser API
Filter | Example
Exact String Match | WHERE user.name = “Trump”; WHERE likes = 5000
Contains String | WHERE text contains “Trum”
Contains Key | WHERE user.url != NULL
Conjunctions | WHERE user.name = “Trump” AND user.verified = true
Disjunctions | WHERE user.name = “Trump” OR user.name = “Obama”
Currently does not support numerical range-based predicates.
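As a rough illustration only (our own sketch; the filter_t type and field names below are hypothetical, not Sparser's actual API), the supported filters could be represented as a small expression tree in C:

```c
#include <stddef.h>

/* Hypothetical representation of a Sparser filter expression (illustrative
 * only; not the library's real API). Leaves are the filter types from the
 * table above; AND / OR nodes combine them. */
typedef enum {
    FILTER_EXACT_MATCH,     /* user.name = "Trump", likes = 5000 */
    FILTER_CONTAINS_STRING, /* text contains "Trum"              */
    FILTER_CONTAINS_KEY,    /* user.url != NULL                  */
    FILTER_AND,
    FILTER_OR
} filter_kind_t;

typedef struct filter {
    filter_kind_t kind;
    const char *field;            /* leaf filters: e.g. "user.name"       */
    const char *value;            /* leaf filters: e.g. "Trump" (or NULL) */
    struct filter *left, *right;  /* children for FILTER_AND / FILTER_OR  */
} filter_t;

/* Example: user.name = "Trump" AND user.verified = true */
static filter_t name_eq  = { FILTER_EXACT_MATCH, "user.name", "Trump", NULL, NULL };
static filter_t verified = { FILTER_EXACT_MATCH, "user.verified", "true", NULL, NULL };
static filter_t query    = { FILTER_AND, NULL, NULL, &name_eq, &verified };
```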
Challenge 1: Efficient RFs
Raw Filters in Sparser
SIMD-based filtering functions that pass or discard a record by inspecting the raw bytestream.
[Chart: cycles per 5KB record. RapidJSON parsing: 113,836; Mison index construction: 56,796; raw filter: 360]
Example RF: Substring Search
Search for a small (e.g., 2, 4, or 8-byte) substring of a query
predicate in parallel using SIMD
On modern CPUs: compare 32 characters in parallel in ~4 cycles (2ns).
Length 4 Substring
packed in SIMD register
Input : “ t e x t ” : “ I j u s t m e t M r . T r u m b ! ! ! ”
Shift 1 : T r u m T r u m T r u m T r u m T r u m T r u m T r u m - - - - - - -
Shift 2 : - T r u m T r u m T r u m T r u m T r u m T r u m T r u m - - - - - -
Shift 3 : - - T r u m T r u m T r u m T r u m T r u m T r u m T r u m - - - - -
Shift 4 : - - - T r u m T r u m T r u m T r u m T r u m T r u m T r u m - - - -
Example query: text contains “Trump”
False positives (found “Trumb” by accident),
but no false negatives (No “Trum” ⇒ No “Trump”)
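A minimal C sketch of this SIMD substring comparison, assuming AVX2 intrinsics and an in-memory record; the function and variable names are ours, not Sparser's actual implementation:

```c
#include <immintrin.h>  /* AVX2 intrinsics */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Does the 4-byte fragment `needle` (e.g. "Trum") appear anywhere in the
 * record? The fragment is broadcast into all eight 32-bit lanes of a 256-bit
 * register and compared against the input at four byte offsets, covering 32
 * candidate start positions per outer iteration. Matching only a fragment of
 * the predicate is what allows false positives (e.g. "Trumb") but never
 * false negatives. */
static bool rf_substring4(const char *buf, size_t len, const char needle[4]) {
    uint32_t word;
    memcpy(&word, needle, 4);
    __m256i pattern = _mm256_set1_epi32((int32_t)word);

    size_t i = 0;
    for (; i + 35 <= len; i += 32) {          /* 4 shifted loads read buf[i .. i+34] */
        for (int shift = 0; shift < 4; shift++) {
            __m256i chunk = _mm256_loadu_si256((const __m256i *)(buf + i + shift));
            __m256i eq = _mm256_cmpeq_epi32(chunk, pattern);
            if (_mm256_movemask_epi8(eq) != 0)
                return true;                  /* record passes the raw filter */
        }
    }
    for (; i + 4 <= len; i++)                 /* scalar tail for the last bytes */
        if (memcmp(buf + i, needle, 4) == 0)
            return true;
    return false;                             /* record safely discarded */
}
```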
Other RFs also possible! Sparser selects them agnostic of implementation.
Key-Value Search RF
Searches for key, and if key is found, searches for value until
some stopping point. Searches occur with SIMD.
Only applicable for exact matches
Useful for queries with common substrings (e.g., favorited=true)
Key: name    Value: Trump    Delimiter: ,
Ex 1: “name”: “Trump”,            (Pass)
Ex 2: “name”: “Actually, Trump”   (Fail)
Ex 3: “name”: “My name is Trump”  (Pass, false positive)
The second example would result in a false negative if we allowed substring matches.
Other RFs also possible! Sparser selects them agnostic of implementation.
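A scalar C sketch of this key-value check, matching the three examples above (the real filter scans with SIMD; the function below is our simplified illustration):

```c
#include <stdbool.h>
#include <string.h>

/* Pass a record only if `value` appears after `key` and before the next
 * occurrence of `delim`. With key "name", value "Trump", delim ',':
 *   "name": "Trump",            -> pass
 *   "name": "Actually, Trump"   -> fail (delimiter precedes the value)
 *   "name": "My name is Trump"  -> pass (false positive)               */
static bool rf_key_value(const char *record, const char *key,
                         const char *value, char delim) {
    const char *k = strstr(record, key);
    if (k == NULL)
        return false;                      /* key absent: discard record  */
    const char *stop = strchr(k, delim);   /* stopping point after the key */
    const char *v = strstr(k, value);
    return v != NULL && (stop == NULL || v < stop);
}
```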
Challenge 2: Choosing RFs
Choosing RFs
To decrease false positive rate, combine RFs into a cascade
Sparser uses an optimizer to choose a cascade
[Diagram: several alternative cascades of raw-data filters and a final parse over the same raw data, compared against one another]
Sparser’s Optimizer
Step 1: Compile possible RFs from the predicates. Example: for (name = "Trump" AND text contains "Putin"): RF1: "Trump", RF2: "Trum", RF3: "rump", RF4: "Tr", …, RF5: "Putin"
Step 2: Measure params on a sample of records (1 = passed, 0 = failed), e.g.:
       S1 S2 S3
RF 1    0  1  1
RF 2    1  0  1
Step 3: Score and choose a cascade, e.g. C(RF1) = 4, C(RF1 → RF2) = 1, C(RF2 → RF3) = 6, C(RF1 → RF3) = 9, …
Step 4: Apply the chosen cascade to the raw bytestream: records for which an RF fails are discarded; the filtered bytes are sent to the full parser.
Sparser’s Optimizer: Configuring RF Cascades
Three high level steps:
1. Convert a query predicate to a set of RFs
2. Measure the passthrough rate and runtime of each RF and the runtime of the parser on a sample of records
3. Minimize optimization function to find min-cost cascade
1. Converting Predicates to RFs
Running Example
(name = “Trump” AND text contains “Putin”)
OR name = “Obama”
Possible RFs*
Substring Search: Trum, Tr, ru, …, Puti, utin, Pu, ut, …, Obam, Ob, …
* Translation from predicate to RF is format-dependent:
examples are for JSON data
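As an illustration only (our own sketch; the function name is made up), candidate substring RFs can be enumerated by sliding fixed-width windows over each string constant in the predicate:

```c
#include <stdio.h>
#include <string.h>

/* Emit every 4- and 2-byte window of a string constant as a candidate
 * substring-search RF (format-dependent details omitted). */
static void enumerate_substring_rfs(const char *constant) {
    static const size_t widths[] = { 4, 2 };
    size_t n = strlen(constant);
    for (size_t w = 0; w < sizeof(widths) / sizeof(widths[0]); w++) {
        size_t width = widths[w];
        for (size_t i = 0; i + width <= n; i++)
            printf("candidate RF: substring search for \"%.*s\"\n",
                   (int)width, constant + i);
    }
}

/* enumerate_substring_rfs("Trump") emits "Trum", "rump", "Tr", "ru", ...;
 * it would be called once per string constant: "Trump", "Putin", "Obama". */
```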
2. Measure RFs and Parser
Evaluate RFs and parser on a sample of records.
Measured for each RF and for the parser: passthrough rates* and runtimes.
*Passthrough rates of RFs are non-independent! But too expensive to measure the rate of each cascade (combinatorial search space)
2. Measure RFs and Parser
Passthrough rates of RFs are non-independent! But too expensive to measure the rate of each cascade (combinatorial search space).
Solution: Store rates as bitmaps.
Pr[a] ∝ 1s in Bitmap of RF a: 0011010101
Pr[b] ∝ 1s in Bitmap of RF b: 0001101101
Pr[a,b] ∝ 1s in Bitwise-& of a, b: 0001000101
foreach sampled record i:
    foreach RF j:
        BitMap[i][j] = 1 if RF j passes on i
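A small C sketch of this bitmap bookkeeping, assuming at most 64 sampled records per batch and GCC/Clang's __builtin_popcountll (names are illustrative, not Sparser's):

```c
#include <stdint.h>

#define NUM_SAMPLES 64   /* one bit per sampled record */

/* bits has bit i set iff the RF passed sampled record i. */
typedef struct {
    uint64_t bits;
} rf_bitmap_t;

/* Pr[a]: fraction of sampled records passing RF a. */
static double pass_rate(rf_bitmap_t a) {
    return (double)__builtin_popcountll(a.bits) / NUM_SAMPLES;
}

/* Pr[a, b]: joint passthrough rate of RFs a and b, obtained with a single
 * bitwise AND instead of re-running every cascade over the sample. */
static double joint_pass_rate(rf_bitmap_t a, rf_bitmap_t b) {
    return (double)__builtin_popcountll(a.bits & b.bits) / NUM_SAMPLES;
}
```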
3. Choosing Best Cascade
Running Example
(name = “Trump” AND text contains “Putin”)
OR name = “Obama”
[Diagram: candidate cascades over the RFs Trum, utin, and Obam, drawn as trees. At each RF, the right branch is taken on a pass and the left branch on a fail, terminating in Parse (P) or Discard (D). Cascades 1 and 2 are valid; cascade 3 is invalid because utin must consider Obam when it fails.]
Cascade cost is computed by traversing the tree from root to leaf. Each RF has a runtime and passthrough rate measured in the previous step.
3. Choosing Best Cascade
Cost of a cascade with RFs 0, 1, ..., R:
$C_{\text{cascade}} = \sum_{i \in \text{cascade}} \Pr[\text{execute}_i] \times c_i + \Pr[\text{execute}_{\text{parse}}] \times c_{\text{parse}}$
where $\Pr[\text{execute}_i]$ and $c_i$ are the probability and cost of executing RF $i$, and $\Pr[\text{execute}_{\text{parse}}]$ and $c_{\text{parse}}$ are the probability and cost of executing the full parser.
For the first valid cascade above (Trum, then utin on pass and Obam on fail):
Pr[execute_Trum] = 1
Pr[execute_utin] = Pr[Trum]
Pr[execute_Obam] = Pr[¬Trum] + Pr[Trum, ¬utin]
Pr[execute_parse] = Pr[¬Trum, Obam] + Pr[Trum, utin] + Pr[Trum, ¬utin, Obam]
Choose the cascade with minimum cost: Sparser considers up to min(32, # ORs) RFs and cascades up to depth 4.
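A C sketch of that cost computation for this two-level cascade; the probabilities would come from the sampled bitmaps above, and collapsing all RF costs into one c_rf is our simplification:

```c
/* Cost of the cascade (Trum -> utin on pass / Obam on fail), following
 * C_cascade = sum_i Pr[execute_i] * c_i + Pr[execute_parse] * c_parse.
 * All probabilities are measured on the sampled records. */
typedef struct {
    double p_trum;             /* Pr[Trum]                 */
    double p_trum_utin;        /* Pr[Trum, utin]           */
    double p_notrum_obam;      /* Pr[not Trum, Obam]       */
    double p_trum_noutin_obam; /* Pr[Trum, not utin, Obam] */
    double c_rf;               /* cost of one RF evaluation */
    double c_parse;            /* cost of the full parser   */
} cascade_params_t;

static double cascade_cost(cascade_params_t p) {
    double pr_trum  = 1.0;                         /* root RF always runs        */
    double pr_utin  = p.p_trum;                    /* runs only if Trum passes   */
    double pr_obam  = (1.0 - p.p_trum)             /* Trum failed                */
                    + (p.p_trum - p.p_trum_utin);  /* Trum passed, utin failed   */
    double pr_parse = p.p_notrum_obam + p.p_trum_utin + p.p_trum_noutin_obam;
    return (pr_trum + pr_utin + pr_obam) * p.c_rf + pr_parse * p.c_parse;
}
```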
Spark Integration
Sparser in Spark SQL
Spark Core Engine
Data Source API
Sparser Filtering Engine
[Diagram: the SQL query is converted to candidate raw filters; a native HDFS file reader feeds raw data to the Sparser optimizer + filtering engine, which returns a DataFrame to Spark]
Performance Results
Evaluation Setup
Datasets
Censys - Internet port scan data used in network security
Tweets - collected using Twitter Stream API
Distributed experiments: 20-node cluster (Google Compute Engine), 4 Broadwell vCPUs and 26GB of memory per node, locally attached SSDs
Single-node experiments: Intel Xeon E5-2690 v4 with 512GB of memory
Queries
Twitter queries from other academic work. Censys queries sampled randomly from top-3000
queries. PCAP and Bro queries from online tutorials/blogs.
Results: Accelerating End-to-End Spark Jobs
[Chart: runtime (seconds) for Disk and Q1-Q9, comparing Spark, Spark + Sparser, and query-only time]
Censys queries on 652GB of JSON data: up to 4x speedup by using Sparser.
[Chart: runtime (seconds) for Disk and Q1-Q4, comparing Spark, Spark + Sparser, and query-only time]
Twitter queries on 68GB of JSON data: up to 9x speedup by using Sparser.
Results: Accelerating Existing JSON Parsers
[Chart: runtime (s, log10) on Censys queries 1-9 for Sparser + RapidJSON, Sparser + Mison, RapidJSON, and Mison]
Censys queries compared against two state-of-the-art parsers
(Mison based on SIMD): Sparser accelerates them by up to 22x.
Results: Accelerating Binary Format Parsing
[Chart (above): runtime (seconds) for Q1-Q4, avro-c vs. Sparser + avro-c]
[Chart (below): runtime (seconds) for Q1-Q4, parquet-cpp vs. Sparser + parquet-cpp]
Sparser accelerates Avro (above) and Parquet (below) based queries by up to 5x and 4.3x respectively.
Results: Domain-Specific Tasks
[Chart: runtime (s, log10) and speedup over libpcap for packet traces P1-P4, comparing Sparser, libpcap, Tshark, and tcpdump]
Sparser accelerates packet parsing by up to 3.5x compared to standard tools.
Results: Sparser’s Sensitivity to Selectivity
[Chart: runtime (seconds) vs. selectivity for RapidJSON, Sparser + RapidJSON, Mison, and Sparser + Mison]
Sparser’s performance degrades gracefully as the selectivity of the query decreases.
Results: Nuts and Bolts
[Chart: runtime (s, log10) of each candidate cascade, with the cascade picked by Sparser marked]
Sparser picks a cascade within 5% of the optimal one using its sampling-based measurement and optimizer.
[Chart: parse throughput (MB/s) vs. file offset (100s of MB), with resampling vs. no resampling]
Resampling to calibrate the cascade can improve end-to-end parsing time by up to 20x.
Conclusion
Sparser:
• Uses raw filters to filter before parsing
• Selects a cascade of raw filters using an efficient optimizer
• Delivers up to 22x speedups over existing parsers
• Will be open source soon!
shoumik@cs.stanford.edu
fabuzaid@cs.stanford.edu
Questions or Comments? Contact Us!