Slides from: https://www.meetup.com/Sydney-Apache-Spark-User-Group/events/246892684/
Welcome to the first Sydney Spark Meetup in 2018!
We are very glad to have a visiting Apache Spark committer, Holden Karau, giving a talk on streaming machine learning. Title: Streaming ML w/Spark (and why it's a bit painful today & #workingonit)
Apache Spark is one of the most popular distributed systems, and it has built in libraries for both machine learning and streaming. This talk will cover Spark's two streaming libraries, look at the future, and how to make streaming ML work today (for both serving and prediction). If you aren't familiar with Spark, that's ok! We'll spend the first ~5 minutes covering just enough to get through the rest of the talk, and for those of you already familiar you can spend those ~5 minutes downloading the sample code :)
About Holden:
Holden is a transgender Canadian open source developer advocate @ Google with a focus on Apache Spark, BEAM, and related "big data" tools. She is the co-author of Learning Spark, High Performance Spark, and another Spark book that's a bit more out of date. She is a committer on the Apache Spark, SystemML, and Mahout projects. She was tricked into the world of big data while trying to improve search and recommendation systems and has long since forgotten her original goal.
• Important to know
A couple of us will be at the doors of 60 Margaret St to let people in until 6:10 pm.
3. Who am I?
● My name is Holden Karau
● Preferred pronouns are she/her
● Developer Advocate at Google focused on OSS Big Data
● Apache Spark PMC
● Contributor to a lot of other projects (including BEAM)
● previously IBM, Alpine, Databricks, Google, Foursquare & Amazon
● co-author of High Performance Spark & Learning Spark (+ more)
● Twitter: @holdenkarau
● Slideshare http://www.slideshare.net/hkarau
● Linkedin https://www.linkedin.com/in/holdenkarau
● Github https://github.com/holdenk
● Related Spark Videos http://bit.ly/holdenSparkVideos
5. Quick side notes:
● In town for LinuxConf AU (come see me tomorrow @ 1:40 pm)
● My voice feels like shit today
○ If you can’t hear me let me know
○ If I take a break to have some tea sorry!
● Depending on my voice I might just ask to do Q&A with e-mail later
6. What is going to be covered:
● Who I think y’all are
● What the fuck Spark is -- O’Reilly wouldn’t let me name a chapter 1 this...
● Abridged Introduction to Datasets
● Abridged Introduction to Structured Streaming
● What Structured Streaming is and is not
● How to write simple structured streaming queries
● The “exciting” part: Building machine learning on top of structured streaming
● Possible future changes to make structured streaming & ML work together nicely
Torsten Reuschling
7. Who I think you wonderful humans are?
● Nice* people
● Don’t mind pictures of cats
● Possibly know some Apache Spark
● May or may not know the Dataset API
● Want to take advantage of Spark’s Structured Streaming
● May care about machine learning
● Possibly distracted by the new Zelda game?
8. ALPHA =~ Please don’t use this in production
We decided to change all the APIs again :p
Image by Mr Thinktank
9. What are Datasets?
● New in Spark 1.6 (comparatively old hat now)
● Provide compile time strongly typed version of DataFrames
● Make it easier to intermix functional & relational code
○ Do you hate writing UDFs? So do I!
● The basis of Structured Streaming (new in 2.0, with more changes in 2.3)
○ Still an experimental component (API will change in future versions)
Houser Wolf
10. Using Datasets to mix functional & relational:
val ds: Dataset[RawPanda] = ...
val happiness = ds.filter($"happy" === true).
select($"attributes"(0).as[Double]).
reduce((x, y) => x + y)
Sephiroty Magno Fiesta
11. So what was that?
ds.filter($"happy" === true).
select($"attributes"(0).as[Double]).
reduce((x, y) => x + y)
A typed query (specifies the return type). Without the as[] this returns a DataFrame (Dataset[Row]).
Traditional functional reduction: arbitrary Scala code :)
Robert Couse-Baker
12. And functional style maps:
/**
 * Functional map + Dataset, sums the positive attributes for the pandas
 */
def funMap(ds: Dataset[RawPanda]): Dataset[Double] = {
ds.map{rp => rp.attributes.filter(_ > 0).sum}
}
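The snippets above reference a RawPanda type without defining it. As a minimal sketch (the example class in High Performance Spark has a few more fields), something like this is enough for the filter/select/map calls shown:
case class RawPanda(id: Long, happy: Boolean, attributes: Array[Double])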
13. And now we can use it for streaming too!
● Structured Streaming - new in Spark 2.0
○ Emphasis on new - be cautious when using
● Extends the Dataset & DataFrame APIs to represent continuous tables
● Still very early stages - but lots of really cool optimizations possible now
● We can build a machine learning pipeline with it together :)
○ Well we have to use some hacks - but ssssssh don’t tell TD
https://github.com/holdenk/spark-structured-streaming-ml
16. Aggregates: V2.0 API only for now?
abstract class UserDefinedAggregateFunction {
def initialize(buffer: MutableAggregationBuffer): Unit
def update(buffer: MutableAggregationBuffer, input: Row): Unit
def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit
def evaluate(buffer: Row): Any
}
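As a hedged illustration of this API (the listing above omits the schema-related members — inputSchema, bufferSchema, dataType, deterministic — which a concrete aggregate also has to provide), a minimal sum-of-doubles aggregate might look like the following; the class and column names are made up for the example:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class SumOfHappiness extends UserDefinedAggregateFunction {
  // One double input column, one double running total in the buffer
  def inputSchema: StructType = new StructType().add("happiness", DoubleType)
  def bufferSchema: StructType = new StructType().add("total", DoubleType)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = { buffer(0) = 0.0 }
  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) buffer(0) = buffer.getDouble(0) + input.getDouble(0)
  }
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
  }
  def evaluate(buffer: Row): Any = buffer.getDouble(0)
}

// e.g. someDS.groupBy($"coffees").agg(new SumOfHappiness()($"happiness"))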
17. Get a streaming dataset
// Read a streaming dataframe
val schema = new StructType()
.add("happiness", "double")
.add("coffees", "integer")
val streamingDS = spark
.readStream
.schema(schema)
.format("parquet")
.load(path)
(Diagram: a Dataset with isStreaming = true, backed by a streaming source)
18. Build the recipe for each query
val happinessByCoffee = streamingDS
.groupBy($"coffees")
.agg(avg($"happiness"))
(Diagram: the streaming Dataset (isStreaming = true) feeds an Aggregate node with groupBy = "coffees" and expr = avg("happiness"))
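As a hedged side note (standard structured streaming, not specific to this talk's POC), the quickest way to sanity-check an aggregate like this is the console sink, which for an aggregation needs complete (or update) output mode:

val debugQuery = happinessByCoffee.writeStream
  .outputMode("complete")
  .format("console")
  .start()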
26. How to train a streaming ML model
1. Future: directly use structured streaming to create model streams via stateful aggregators
○ https://spark-summit.org/eu-2016/events/online-learning-with-structured-streaming/
2. Today: use the sink to collect model updates and store them on the driver
32. Batch ML pipelines
(Diagram: Tokenizer → HashingTF → String Indexer → Naive Bayes, shown as both batch and streaming stages; fit(df) produces a Transformer from an Estimator)
● In the batch setting, an estimator is trained on a dataset, and produces a static, immutable transformer.
● There is no communication between the two.
33. Streaming ML pipelines
(Diagram: streaming pipelines — Tokenizer → HashingTF → Streaming String Indexer → Streaming Naive Bayes writing to a Model sink, and Tokenizer → HashingTF → Streaming String Indexer → Model writing to a Data sink)
34. Cool - let's build some ML with it!
Lauren Coolman
35. Streaming ML Pipelines (Proof of Concept)
(Diagram: two copies of the streaming pipeline — Tokenizer → HashingTF → Streaming String Indexer → Streaming Naive Bayes; fit(df) on the Estimator side produces a mutable Transformer that shares state with it)
● In this implementation, the estimator produces an initial transformer, and communicates updates to a specialized StreamingTransformer.
● Streaming transformers must provide a means of incorporating model updates into predictions.
Lauren Coolman
36. Streaming Estimator/Transformer (POC)
trait StreamingModel[S] extends Transformer {
def update(updates: S): Unit
}
trait StreamingEstimator[S] extends Estimator {
def model: StreamingModel[S]
def update(batch: Dataset[_]): Unit
}
Sufficient statistics for model updates (the S type parameter)
BlinkenArea
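To make the StreamingModel trait above concrete, here is a hedged toy example (not from the POC repo): a "model" whose sufficient statistics are a (count, sum) pair and whose prediction is just the running mean. The names and the prediction column are illustrative only.

import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.{DoubleType, StructType}

class StreamingMeanModel(override val uid: String = Identifiable.randomUID("streamingMean"))
    extends StreamingModel[(Long, Double)] {
  @volatile private var count: Long = 0L
  @volatile private var sum: Double = 0.0

  // Fold a (count, sum) update from the latest micro-batch into the model state
  override def update(updates: (Long, Double)): Unit = {
    count += updates._1
    sum += updates._2
  }

  // Predictions just broadcast the current mean as a new column
  override def transform(ds: Dataset[_]): DataFrame = {
    val mean = if (count == 0) 0.0 else sum / count
    ds.toDF().withColumn("prediction", lit(mean))
  }

  override def transformSchema(schema: StructType): StructType =
    schema.add("prediction", DoubleType)

  override def copy(extra: ParamMap): StreamingMeanModel = defaultCopy(extra)
}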
37. Getting a micro-batch view with distributed collection*
case class ForeachDatasetSink(func: DataFrame => Unit) extends Sink {
override def addBatch(batchId: Long, data: DataFrame): Unit = {
func(data)
}
}
https://github.com/holdenk/spark-structured-streaming-ml
38. And doing some ML with it:
def evilTrain(df: DataFrame): StreamingQuery = {
val sink = new ForeachDatasetSink({df: DataFrame => update(df)})
val sparkSession = df.sparkSession
val evilStreamingQueryManager = EvilStreamingQueryManager(sparkSession.streams)
evilStreamingQueryManager.startQuery(
Some("snb-train"),
None,
df,
sink,
OutputMode.Append())
}
39. And doing some ML with it:
def update(batch: Dataset[_]): Unit = {
  val newCountsByClass = add(batch) // Aggregate the new batch
  model.update(newCountsByClass)    // Merge with the previous aggregates
}
40. And doing some ML with it* (algorithm specific)
def update(updates: Array[(Double, (Long, DenseVector))]): Unit = {
updates.foreach { case (label, (numDocs, termCounts)) =>
countsByClass.get(label) match {
case Some((n, c)) =>
axpy(1.0, termCounts, c)
countsByClass(label) = (n + numDocs, c)
case None =>
// new label encountered
countsByClass += (label -> (numDocs, termCounts))
}
}
}
41. Non-Evil alternatives to our Evil:
● ForeachWriter exists
● Since everything runs on the executors it's difficult to update the model
● You could:
○ Use accumulators (see the sketch after this list)
○ Write the updates to Kafka
○ Send the updates to a param server of some type with RPC
○ Or do the evil things we did instead :)
● Wait for the “future?”: https://github.com/apache/spark/pull/15178
_torne
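A hedged sketch of what the accumulator option could look like; none of these names come from the talk, and whether accumulator updates are delivered exactly once across task retries is its own question:

import org.apache.spark.sql.{ForeachWriter, SparkSession}

// Illustrative record type for per-document sufficient statistics
case class LabeledCount(label: Double, count: Long)

def accumulatorWriter(spark: SparkSession) = {
  // Registered on the driver; executors add to it from process()
  val acc = spark.sparkContext.collectionAccumulator[LabeledCount]("model-updates")
  val writer = new ForeachWriter[LabeledCount] {
    def open(partitionId: Long, version: Long): Boolean = true
    def process(record: LabeledCount): Unit = acc.add(record)
    def close(errorOrNull: Throwable): Unit = ()
  }
  (acc, writer)
}
// The driver can then periodically drain acc.value into the model.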
42. Working with the results - foreach (1 of 2)
val foreachWriter: ForeachWriter[T] =
new ForeachWriter[T] {
def open(partitionId: Long, version: Long): Boolean = {
true // always open
}
def close(errorOrNull: Throwable): Unit = {
// No close logic - if we wanted to copy updates per-batch
}
def process(record: T): Unit = {
db.update(record)
}
}
43. Working with the results - foreach (2 of 2)
// Apply foreach
happinessByCoffee.writeStream.outputMode(OutputMode.Complete())
  .foreach(foreachWriter).start()
44. Structured Streaming in Review:
● Pre-2.3 Structured Streaming still uses Spark’s Microbatch approach
● 2.3 forward: New execution engine! Yes this breaks everything
● One of the areas that Matei is researching
○ Researching ==~ future , research !~ today
Windell Oskay
45. Ok but where can we not use it?
● A lot of random methods on DataFrames & Datasets won’t work
● They will fail at runtime rather than compile time - so have tests!
● Anything which roundtrips through an rdd() is going to be pretty sad (aka fail)
○ Lots of internals randomly do (like toJson) for historical reasons
● Need to run a query inside of a sink? That is not going to work
● Need a complex receiver type? Many receivers are not ported yet
● Also you will need distinct query names - even if you stop the previous query.
● Aggregations and Append output mode don't mix (and the file sink requires Append)
● DataFrame/Dataset transformations inside of a sink
46. Open questions for ML pipelines
● How to train and predict simultaneously, on the same data?
○ Transform thread should be executed first
○ Do we actually need to support this or is this just a common demo?
● How to ensure robustness to failures?
○ Treat the output of training as a stream of models, with the same robustness guarantees as any structured streaming query
○ Work based on this approach has already been prototyped
● Model training must be idempotent - should not train on the same data twice
○ Leverage batch ID, similar to `FileStreamSink`
● How to extend MLWritable for streaming
○ Spark’s format isn’t really all that useful - maybe PMML or PFA
Photo by bullet101
47. Structured Streaming ML vs DStreams ML
What could be different for ML on structured streaming vs ML on DStreams?
● Structured streaming is built on the Spark SQL engine
○ Catalyst optimizer
○ Project tungsten
● Pipeline integration
○ ML pipelines have been improved and iterated across 5 releases; we can leverage their mature design for streaming pipelines
○ This will make adding and working with new algorithms much easier than in the past
● Event time handling
○ Streaming ML algorithms typically use a decay factor
○ Structured streaming provides native support for event time, which is more appropriate for decay
Krzysztof Belczyński
48. Batch vs Streaming Pipelines (Draft POC API)
// Batch (left column):
val df = spark
  .read
  .schema(schema)
  .parquet(path)
val tokenizer = new RegexTokenizer()
val htf = new HashingTF()
val nb = new NaiveBayes()
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, htf, nb))
val pipelineModel = pipeline.fit(df)

// Streaming, draft POC API (right column):
val df = spark
  .readStream
  .schema(schema)
  .parquet(path)
val tokenizer = new RegexTokenizer()
val htf = new HashingTF()
val snb = new StreamingNaiveBayes()
val pipeline = new StreamingPipeline()
  .setStages(Array(tokenizer, htf, snb))
  .setCheckpointLocation(path)
val query = pipeline.fitStreaming(df)
query.awaitTermination()
https://github.com/sethah/spark/tree/structured-streaming-fun
49. Additional Spark Resources
● Programming guide (along with JavaDoc, PyDoc, ScalaDoc, etc.)
○ http://spark.apache.org/docs/latest/
● Books
● Videos
● Spark Office Hours
○ Normally in the Bay Area - will do Google Hangouts ones soon
○ follow me on twitter for future ones - https://twitter.com/holdenkarau
51. Surveys!!!!!!!! :D
● Interested in Structured Streaming?
○ http://bit.ly/structuredStreamingML - Let us know your thoughts
● Pssst: Care about Python DataFrame UDF performance?
○ http://bit.ly/pySparkUDF
● Care about Spark Testing?
○ http://bit.ly/holdenTestingSpark
● Want to give me feedback on this talk?
○ http://bit.ly/holdenTalkFeedback
Michael Himbeault
52. And some upcoming talks:
● Jan
○ If there's interest tomorrow: office hours? Tweet me @holdenkarau
○ LinuxConf AU - tomorrow!
○ Data Day Texas - kind of far from Sydney but….
● Feb
○ FOSDEM - One on testing one on scaling
○ JFokus in Stockholm - Adding deep learning to Spark
○ I disappear for a week and pretend computers work
● March
○ Strata San Jose - Big Data Beyond the JVM
53. Learning Spark
Fast Data Processing with Spark (out of date)
Fast Data Processing with Spark (2nd edition)
Advanced Analytics with Spark
Coming soon: Spark in Action
High Performance Spark
Learning PySpark
54. High Performance Spark!
The gift of whichever holiday season is next!
Cats love it!**
You can buy it from that scrappy Seattle bookstore; Jeff Bezos needs another newspaper and I want a cup of coffee.
http://bit.ly/hkHighPerfSpark
55. Cat wave photo by Quinn Dombrowski
k thnx bye!
If you <3 testing & want to fill out the survey: http://bit.ly/holdenTestingSpark
Want to tell me (and/or my boss) how I'm doing? http://bit.ly/holdenTalkFeedback
Want to e-mail me? Promise not to be creepy? Ok: holden@pigscanfly.ca
56. k thnx bye!
If you care about Spark testing and don't hate surveys: http://bit.ly/holdenTestingSpark
Will tweet results "eventually" @holdenkarau
Any PySpark users: have some simple UDFs you wish ran faster that you're willing to share? http://bit.ly/pySparkUDF
Pssst: Have feedback on the presentation? Give me a shout (holden@pigscanfly.ca) if you feel comfortable doing so :)
58. Start a continuous query
val query = happinessByCoffee
.writeStream
.format("parquet")
.outputMode("complete")
.trigger(ProcessingTime(5.seconds))
.start()
(Diagram: the StreamingQuery holds logicalPlan = source relation → groupBy → avg)
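A hedged usage note for the handle returned by start(): StreamingQuery exposes standard public monitoring and shutdown methods (not specific to this deck).

println(query.isActive)     // true while the micro-batch loop is running
println(query.lastProgress) // metrics for the most recent micro-batch (null before the first one)
query.awaitTermination()    // block until stop() is called or the query fails
// query.stop()             // call from another thread to shut the query down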
59. Launch a new thread to listen for new data
(Diagram: the MicroBatch Thread is listening; the Source's available offsets and the Sink's committed offsets are both empty; the StreamingQuery's logicalPlan = source relation → groupBy → avg)
Neil Falzon
60. Write new offsets to WAL
(Diagram: the Source's available offsets 0, 1, 2 are committed to the write-ahead log; the Sink's committed offsets are still empty)
April Weeks
61. Check the source for new offsets
(Diagram: the MicroBatch Thread calls getBatch() on the Source for batchId = 42; available offsets 0, 1, 2; committed offsets still empty)
cat-observer
62. Get the “recipe” for this micro batch
(Diagram: for batchId = 42 the logical plan — source relation → groupBy → avg — is transformed into a micro-batch plan — source scan → groupBy → avg)
Jackie
63. Send the micro batch Dataset to the sink
(Diagram: addBatch() hands the Sink a micro-batch Dataset for batchId = 42 with isStreaming = false, backed by an incremental execution plan)
Jason Rojas
64. Commit and listen again
(Diagram: the Sink's committed offsets are now 0, 1, 2 and the MicroBatch Thread goes back to listening)
S Orchard
65. Execution Summary
● Each query has its own thread - asynchronous
● Sources must be replayable
● Use write-ahead-logs for durability
● Sinks must be idempotent (see the sketch at the end of this slide)
● Each batch is executed with an incremental execution plan
● Sinks get a micro batch view of the data
snaxor
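A hedged sketch of one way to keep a sink idempotent, building on the same (internal) Sink trait that ForeachDatasetSink used earlier; the names here are made up for the example:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

case class IdempotentForeachSink(func: (Long, DataFrame) => Unit) extends Sink {
  @volatile private var lastBatchId: Long = -1L

  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    if (batchId > lastBatchId) {
      func(batchId, data) // first time we see this batch id: do the work
      lastBatchId = batchId
    }
    // else: this batch was already applied before a restart; skip it
  }
}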
66. Cool - let's build some ML with it!
Lauren Coolman
67. Get a dataframe
val schema = new StructType()
.add("happiness", "double")
.add("coffees", "integer")
val batchDS = spark
.read
.schema(schema)
.format("parquet")
.load(path)
(Diagram: a Dataset with isStreaming = false, backed by a batch data source)
68. Build the recipe for each query
val happinessByCoffee = batchDS
.groupBy($"coffees")
.agg(avg($"happiness"))
(Diagram: the batch Dataset (isStreaming = false) feeds an Aggregate node with groupBy = "coffees" and expr = avg("happiness"))