The machine learning libraries in Apache Spark are an impressive piece of software engineering, and they are maturing rapidly. What advantages does Spark.ml offer over scikit-learn, and which would you use in production?
At Data Science Retreat we've taken a real-world dataset and worked through the stages of building a predictive model -- exploration, data cleaning, feature engineering, and model fitting -- in several different frameworks. We'll show what it's like to work with native Spark.ml, and compare it to scikit-learn along several dimensions: ease of use, productivity, feature set, and performance.
In some ways Spark.ml is still rather immature, but it also conveys new superpowers to those who know how to use it.
A full machine learning pipeline in scikit-learn vs. Scala-Spark: pros and cons
1. A full machine learning pipeline in scikit-learn vs. Scala-Spark: pros and cons
Jose Quesada and David Anderson
@quesada, @alpinegizmo, @datascienceret
4. • How do you get from a single-machine workload to a fully distributed one?
• Answer: Spark machine learning
• Is there something I'm missing out on by staying with Python?
7. • Mentors are world-class: CTOs, library authors, inventors, founders of fast-growing companies, etc.
• DSR accepts fewer than 5% of applications
• Strong focus on commercial awareness
• 5 years of working experience on average
• 30+ partner companies in Europe
15. Scala
“Scala offers the easiest refactoring experience that I've ever had, due to the type system.”
Jacob, Coursera engineer
16. Spark
• Basically distributed Scala
• API
• Scala, Java, Python, and R bindings
• Libraries
• SQL, streams, graph processing, machine learning
• One of the most active open source projects
17. “Spark will inevitably become the de-facto Big Data framework for Machine Learning and Data Science.”
Dean Wampler, Lightbend
18. All under one roof (big win)
Source: Spark 2015 infographic
[Diagram: Spark Core at the base, with Spark SQL, Spark Streaming, Spark.ml (machine learning), and GraphX (graphs) built on top]
20. Data is partitioned; code is sent to the data
[Diagram: the Driver / SparkContext sends code to the Workers; each Worker holds a partition of the Data]
21. Example: word count
Input:
hello world
foo bar
foo foo bar
bye world
Data is immutable, and is partitioned across the cluster.
22. Example: word count
We get things done by creating new, transformed copies of the data. In parallel.
[Diagram: each input line is split into words (hello, world, foo, bar, ...), and each word is mapped to a (word, 1) pair]
23. Example: word count
Some operations require a shuffle to group data together.
[Diagram: the (word, 1) pairs are shuffled by key and summed, yielding (hello, 1), (foo, 3), (bar, 2), (bye, 1), (world, 2)]
24. Example: word count
lines = sc.textFile(input)
words = lines.flatMap(lambda x: x.split(" "))
word_count = (words.map(lambda x: (x, 1))          # map and reduceByKey are pipelined into
              .reduceByKey(lambda x, y: x + y))    # the same Python executor
word_count.saveAsTextFile(output)                  # nothing happens until this line: the
                                                   # "action" forces evaluation of the RDD
25. RDD – Resilient Distributed Dataset
• An immutable, partitioned collection of elements that can be operated on in parallel
• Lazy
• Fault-tolerant
26. PySpark RDD Execution Model
Whenever you provide a lambda to operate on an RDD:
• each Spark worker forks a Python worker
• data is serialized and piped to those Python workers
28. Impact of this execution model
• Worker overhead (forking, serialization)
• The cluster manager isn't aware of Python's memory needs
• Very confusing error messages
29. Spark Dataframes (and Datasets)
• Based on RDDs, but tabular; something like SQL tables
• Not Pandas
• Rescues Python from serialization overhead
• df.filter(df["color"] == "red") vs. rdd.filter(lambda x: x.color == "red")
• processed entirely in the JVM
• Python UDFs and maps still require serialization and piping to Python
• You can write (and register) Scala code, and then call it from Python (see the sketch below)
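A minimal sketch of that last point (the object and UDF names here are invented for illustration, not from the talk): a Scala UDF registered on the SparkSession can be invoked from PySpark through SQL, so the per-row work stays in the JVM.

import org.apache.spark.sql.SparkSession

object RegisterUdfs {
  // Registers a Scala UDF; once registered, PySpark can call it via SQL
  // without forking Python workers or serializing rows.
  def register(spark: SparkSession): Unit = {
    spark.udf.register("is_red", (color: String) => color == "red")
  }
}

// From Python, after registration:
//   spark.sql("SELECT * FROM colors WHERE is_red(color)")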
31. DataFrame execution: unified across languages
[Diagram: Python, Java/Scala, and R DataFrames all produce the same logical plan, which is then executed]
• API wrappers create a logical plan (a DAG)
• Catalyst optimizes the plan; Tungsten compiles the plan into executable code
34. Machine learning with scikit-learn
• Easy to use
• Rich ecosystem
• Limited to one machine (but see sparkit-learn package)
35. Machine learning with Hadoop (in short: NO)
• Each iteration is a new M/R job
• Each job must store data in HDFS – lots of overhead
36. How Spark killed Hadoop map/reduce
• Far easier to program
• More cost-effective, since less hardware can perform the same tasks much faster
• Can do real-time processing as well as batch processing
• Can do ML, graphs
37. Machine learning with Spark
• Spark was designed for ML workloads
• Caching (reuse data across iterations; see the sketch below)
• Accumulators (keep state across iterations)
• Functional, lazy, fault-tolerant
• Many popular algorithms are supported out of the box
• Simple to productionize models
• MLlib is RDD-based (the past); spark.ml is DataFrame-based (the future)
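A minimal sketch of the caching point (the DataFrame and column names are assumptions for illustration): marking the training data as cached keeps it in memory across the repeated passes an iterative algorithm makes over it.

import org.apache.spark.ml.classification.LogisticRegression

// "assembled" is a hypothetical DataFrame with "features" and "label" columns.
val training = assembled.select("features", "label").cache()  // kept in memory and reused

val lr = new LogisticRegression().setMaxIter(100)  // each iteration re-reads the cached data
val model = lr.fit(training)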
38. Spark is an Ecosystem of ML frameworks
• Spark was designed by people who understood the needs of ML practitioners (unlike Hadoop)
• MLlib
• Spark.ml
• System.ml (IBM)
• Keystone.ml
39. Spark.ml – the basics
• DataFrame: ML requires DFs holding vectors
• Transformer: transforms one DF into another
• Estimator: fit on a DF; produces a Transformer
• Pipeline: chain of Transformers and Estimators (see the sketch below)
• Parameter: there is a unified API for specifying parameters
• Evaluator: scores a model's output with a metric
• CrossValidator: model selection via grid search
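To make those pieces concrete, here is a minimal sketch (the column names and choice of stages are assumptions for illustration): StringIndexer and LogisticRegression are Estimators, VectorAssembler is a Transformer, and the Pipeline chains them together; fitting the Pipeline yields a PipelineModel, which is itself a Transformer.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

val indexer = new StringIndexer()            // Estimator: learns the index from the data
  .setInputCol("color").setOutputCol("color_index")
val assembler = new VectorAssembler()        // Transformer: concatenates columns into a Vector
  .setInputCols(Array("color_index", "price")).setOutputCol("features")
val lr = new LogisticRegression()            // Estimator: produces a LogisticRegressionModel

val pipeline = new Pipeline().setStages(Array(indexer, assembler, lr))
val model = pipeline.fit(trainingDF)         // fitting yields a PipelineModel (a Transformer)
val predictions = model.transform(testDF)    // appends prediction columns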
44. Q: Hardest scaling problem in data science?
A: Adding people
• Spark.ml has a clean architecture and APIs that should encourage code sharing and reuse
• Good first step: can you refactor some ETL code as a Transformer? (see the sketch below)
• We don't see much sharing of components happening yet
• Entire libraries, yes; components, not so much
• Perhaps because Spark has been evolving so quickly
• E.g., a pull request implementing non-linear SVMs has been stuck for a year
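One way that refactoring can look (a hedged sketch; the class, column, and transformation are invented for illustration): a small piece of ETL wrapped as a custom Transformer, so it can be shared and dropped into any pipeline.

import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.{col, lower, trim}
import org.apache.spark.sql.types.StructType

// Hypothetical example: normalize a "status" column as a reusable pipeline stage.
class StatusCleaner(override val uid: String) extends Transformer {
  def this() = this(Identifiable.randomUID("statusCleaner"))

  override def transform(dataset: Dataset[_]): DataFrame =
    dataset.withColumn("status", trim(lower(col("status"))))

  override def transformSchema(schema: StructType): StructType = schema  // no columns added

  override def copy(extra: ParamMap): StatusCleaner = defaultCopy(extra)
}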
45. Structured types in Spark
                | SQL     | DataFrames   | Datasets (Java/Scala only)
Syntax errors   | Runtime | Compile time | Compile time
Analysis errors | Runtime | Runtime      | Compile time
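A small sketch of that difference (the case class and column names are invented, and someDF stands for any existing DataFrame): with a typed Dataset a misspelled field is a compile-time error, while the equivalent DataFrame expression only fails at runtime, when the plan is analyzed.

import spark.implicits._          // encoders for case classes (spark is the SparkSession)

case class Record(color: String, price: Double)

val ds = someDF.as[Record]        // typed Dataset[Record]

ds.filter(_.color == "red")       // field names and types are checked at compile time
// ds.filter(_.colour == "red")   // would not compile: Record has no field `colour`

someDF.filter("colour = 'red'")   // compiles, but fails at runtime with an analysis error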
47. Indexing categorical features
• You are responsible for identifying and indexing categorical features
import org.apache.spark.ml.feature.StringIndexer

val color_indexer = new StringIndexer()
  .setInputCol("color")
  .setOutputCol("color_index")
  .fit(dataset)

val status_indexer = new StringIndexer()
  .setInputCol("status")
  .setOutputCol("status_index")
  .fit(dataset)
48. Assembling features
• You must gather all of your features into one Vector, using a VectorAssembler

import org.apache.spark.ml.feature.VectorAssembler

val assembler = new VectorAssembler()
  .setInputCols(Array("color_index", "status_index", ...))
  .setOutputCol("features")
49. Spark.ml – Scikit-learn: Pipelines (good news!)
• Spark ML and scikit-learn: same approach
• Chain together Estimators and Transformers
• Support non-linear pipelines (must be a DAG)
• Unify parameter passing
• Support for cross-validation and grid search (see the sketch below)
• Can write your own custom pipeline stages
Spark.ml just like scikit-learn
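A hedged sketch of grid search over the pipeline from the earlier sketch (the parameter values are arbitrary): ParamGridBuilder enumerates the combinations, and CrossValidator fits and evaluates each one.

import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1, 1.0))
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5))
  .build()

val cv = new CrossValidator()
  .setEstimator(pipeline)                            // the whole pipeline is the estimator
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

val cvModel = cv.fit(trainingDF)                     // each candidate fit runs as a distributed job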
50. Transformer | Description | scikit-learn equivalent
Binarizer | Threshold numerical feature to binary | Binarizer
Bucketizer | Bucket numerical features into ranges | (none)
ElementwiseProduct | Scale each feature/column separately | (none)
HashingTF | Hash text/data to vector; scale by term frequency | FeatureHasher
IDF | Scale features by inverse document frequency | TfidfTransformer
Normalizer | Scale each row to unit norm | Normalizer
OneHotEncoder | Encode k-category feature as binary features | OneHotEncoder
PolynomialExpansion | Create higher-order features | PolynomialFeatures
RegexTokenizer | Tokenize text using regular expressions | (part of text methods)
StandardScaler | Scale features to 0 mean and/or unit variance | StandardScaler
StringIndexer | Convert String feature to 0-based indices | LabelEncoder
Tokenizer | Tokenize text on whitespace | (part of text methods)
VectorAssembler | Concatenate feature vectors | FeatureUnion
VectorIndexer | Identify categorical features, and index | (none)
Word2Vec | Learn vector representation of words | (none)
Spark.ml – Scikit-learn: NLP tasks (thumbs up)
51. Graph stuff (GraphX, GraphFrames: not great)
• Extremely easy to run monster algorithms on a cluster
• GraphX has no Python API
• GraphFrames are cool, and should provide access to the graph tools in Spark from Python
• In practice, it didn't work too well
52. Things we liked in Spark ML
• Architecture encourages building reusable pieces
• Type safety, plus types are driving optimizations
• Model fitting returns an object that transforms the data
• Uniform way of passing parameters
• It's interesting to use the same platform for ETL and model fitting
• Very easy to parallelize ETL and grid search, or work with huge models
53. Disappointments using Spark ML
• Feature indexing and assembly can become tedious
• Surprised by the maximum depth limit for trees: 30
• Data exploration and visualization aren't easy in Scala
• Wish list: non-linear SVMs, deep learning (but see Deeplearning4j)
54. What is new for machine learning in Spark 2.0
• DataFrame-based machine learning API emerges as the primary ML API: with Spark 2.0, the spark.ml package, with its “pipeline” APIs, will emerge as the primary machine learning API. While the original spark.mllib package is preserved, future development will focus on the DataFrame-based API.
• Machine learning pipeline persistence: users can now save and load machine learning pipelines and models across all programming languages supported by Spark (see the sketch below).
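A minimal sketch of pipeline persistence (the path is a placeholder, and model is the fitted PipelineModel from the earlier pipeline sketch):

import org.apache.spark.ml.PipelineModel

// Persist the fitted pipeline to a directory of JSON metadata plus Parquet data.
model.write.overwrite().save("/models/color_pipeline")

// Later, possibly from PySpark or a different application:
val reloaded = PipelineModel.load("/models/color_pipeline")
val scored = reloaded.transform(newData)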
55. What is new for data structures in Spark 2.0
Unifying the API for streams and static data: infinite datasets get the same interface as DataFrames
57. … Other than distributed dataframes,
distributed machine learning,
easy distributed grid search,
distributed SQL,
distributed stream analysis,
more performance than MapReduce,
an easier programming model,
and easier deployment …
What have Spark and Scala ever given us?
58. Reminder: 25 videos explaining ML on Spark
• For people who already know ML
• http://datascienceretreat.com/videos/data-science-with-scala-and-spark
59. Thank you for your attention!
@quesada, @datascienceret
Editor's Notes
Scala and Spark are very close: if you learn one, you learn the other.
Spark is distributed Scala.
This has been possible for years, but nowadays it’s not only possible but pleasant
You attend a Retreat, not a training
A talk should give you a superpower.
- Am I missing out?
redo the diagram
fault-tolerant: missing partitions can be recomputed by using the lineage graph to rerun operations
When using Python, the SparkContext in Python is basically a proxy. Py4J is used to launch a JVM and create a native Spark context. Py4J manages communication between the Python and Java SparkContext objects.
In the workers, some operations can be executed directly in the JVM. But, for example, if you've implemented a map function in python, a python process is forked to execute this user-supplied mapping. Each thread in the spark worker will have its own python sub-process.
When the Python wrapper calls the underlying Spark code (written in Scala, running on a JVM), translation between the two environments and languages can be a source of bugs and confusing issues.
Just one Map / Reduce step, but many algorithms are iterative
Disk based → long startup times
-------
Spark is a wholesale replacement for MapReduce that leverages lessons learned from MapReduce. The Hadoop community realized that a replacement for MR was needed. While MR has served the community well, it's a decade old and shows clear limitations and problems, as we've seen. In late 2013, Cloudera, the largest Hadoop vendor, officially embraced Spark as the replacement. Most of the other Hadoop vendors have followed suit.
When it comes to one-pass ETL-like jobs, for example data transformation or data integration, MapReduce is ideal: this is what it was designed for.
Advantages for Hadoop: Security, staffing
sample use case for accumulators: gradient descent
Spark.ml Departs from scikit-learn quite a bit
Good
from https://databricks.com/blog/2015/07/29/new-features-in-machine-learning-pipelines-in-apache-spark-1-4.html