4. True Scalability
How long does it take to get to a predetermined accuracy?
Not about: how well you can implement Algorithm X.
Understand the tradeoffs between different algorithms.
7. The Dato Way
• Assume bounded resources
• Optimize for data scalability
• Scale excellently
• Require fewer machines to solve the same problem in the same runtime as other systems
8. Single Machine Scalability: Storage Hierarchy
[Figure: capacity vs. throughput across the storage hierarchy: 0.1 TB at ~1-10 GB/s, 1 TB at ~1 GB/s, 10 TB at ~0.1 GB/s. Random access is very slow!]
Good External Memory
Datastructures For ML
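The gap above is easy to feel in practice. A minimal sketch comparing sequential and random block reads (the scratch file path and the 1 GB size are arbitrary; if the file fits in the OS page cache the gap will shrink):

import os, random, time

PATH = "scratch.bin"            # hypothetical scratch file
BLOCK = 4096
N_BLOCKS = (1 << 30) // BLOCK   # 1 GB total

# Write the file once, sequentially.
with open(PATH, "wb") as f:
    chunk = os.urandom(BLOCK)
    for _ in range(N_BLOCKS):
        f.write(chunk)

def throughput(offsets):
    """Read one block at each offset and report GB/s."""
    start = time.time()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return (BLOCK * len(offsets) / 1e9) / (time.time() - start)

offsets = [i * BLOCK for i in range(N_BLOCKS)]
print("sequential: %.2f GB/s" % throughput(offsets))
random.shuffle(offsets)
print("random:     %.2f GB/s" % throughput(offsets))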
10. Data is usually stored as rows (user, movie, rating)…
But data engineering is typically columnar transformations…
11. Feature engineering is columnar
Normalize the feature:
sf['rating'] = sf['rating'] / sf['rating'].sum()
Create a new feature:
sf['rating-squared'] = sf['rating'].apply(lambda rating: rating * rating)
Create a new dataset with 2 of the features:
sf2 = sf[['rating', 'rating-squared']]
(Resulting columns: user, movie, rating, rating-squared.)
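A self-contained version of the snippets above, for reference (assumes GraphLab Create is installed; the toy data is illustrative):

import graphlab as gl

# A small ratings SFrame with user, movie, rating columns.
sf = gl.SFrame({'user':   [1, 1, 2, 2],
                'movie':  [10, 11, 10, 12],
                'rating': [4.0, 5.0, 3.0, 2.0]})

# Normalize the rating column (a purely columnar operation).
sf['rating'] = sf['rating'] / sf['rating'].sum()

# Create a new feature from an existing column.
sf['rating-squared'] = sf['rating'].apply(lambda rating: rating * rating)

# New dataset with two of the features.
sf2 = sf[['rating', 'rating-squared']]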
13. Out-of-Core Machine Learning
Rethink all ML algorithms:
• Random access → sequential-only access
• Sampling → sort/shuffle
Understand the statistical/convergence impacts of these ML algorithm variations.
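One common way the "sampling becomes sort/shuffle" idea plays out, sketched with plain numpy rather than any Dato API (the file name, column count, and chunk size are illustrative): instead of drawing random rows from disk, visit chunks in a random order and shuffle only inside the chunk that is already in memory.

import numpy as np

# Hypothetical on-disk dataset, memory-mapped as rows of 16 float32 features.
data = np.memmap("data.bin", dtype=np.float32, mode="r").reshape(-1, 16)

CHUNK = 100_000
rng = np.random.default_rng(0)

def sequential_epoch(data):
    """One SGD-style pass with no random row access: random chunk order,
    sequential reads within each chunk, in-memory shuffle per chunk."""
    n_chunks = (len(data) + CHUNK - 1) // CHUNK
    for c in rng.permutation(n_chunks):
        chunk = np.array(data[c * CHUNK:(c + 1) * CHUNK])  # sequential read
        rng.shuffle(chunk)                                  # cheap once in RAM
        yield chunk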
14. Single Machine Scaling
[Chart: runtime (seconds) of GraphLab-Create (1 node), MLlib 1.3 (5 nodes), MLlib 1.3 (1 node), and Scikit-Learn.]
Dataset source: LIBLinear binary classification datasets.
KDD Cup data: 8.4M data points, 20M features, 2.4GB compressed.
Task: predict student performance on math problems based on interactions with a tutoring system.
16. Graphs encode the relationships between people, facts, products, interests, and ideas, across social media, science, advertising, and the web:
• Big: trillions of vertices and edges and rich metadata
• Facebook (10/2012): 1B users, 144B friendships
• Twitter (2011): 15B follower edges
17. SGraph
1. Immutable disk-backed graph representation.
(Append only)
2. Vertex / Edge Attributes.
3. Optimized for bulk access, not fine-grained queries.
Getting the neighborhoods of 5 million vertices in bulk is fast; getting the neighborhood of 1 vertex is not.
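A minimal construction sketch, with API names as they appear in the GraphLab Create documentation (the edge values are just the toy numbers from the next slide):

import graphlab as gl

# Edges arrive in bulk as an SFrame.
edges = gl.SFrame({'src': [1, 132, 48, 129],
                   'dst': [102, 10, 999, 192]})

# SGraph is immutable and append-only: add_edges returns a new graph.
g = gl.SGraph().add_edges(edges, src_field='src', dst_field='dst')
print(g.summary())   # e.g. {'num_vertices': ..., 'num_edges': ...}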
18. Standard Graph Representations
Edge List (unsorted): easy to insert, but difficult to query.
  (1,102) (132,10) (48,999) (129,192) (998,23) (392,124) …
Sparse Matrix / Sorted Edge List: fast to query, but difficult to insert (random writes).
  (1,10) (1,99) (1,102) (1,105) (2,5) (2,10) (2,120) …
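The query-cost difference is just linear scan versus binary search. A small illustration in plain Python (bisect stands in for the index structure; the edges are the sample values above):

from bisect import bisect_left

edge_list = [(1, 102), (132, 10), (48, 999), (129, 192), (998, 23), (392, 124)]

# Unsorted edge list: O(1) append, but a neighbor query scans every edge.
neighbors_slow = [d for s, d in edge_list if s == 1]

# Sorted edge list: inserts need random writes, but queries are a binary search.
sorted_edges = sorted(edge_list)
lo = bisect_left(sorted_edges, (1,))   # first edge with src >= 1
hi = bisect_left(sorted_edges, (2,))   # first edge with src >= 2
neighbors_fast = [d for s, d in sorted_edges[lo:hi]]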
23. Common Crawl Graph
3.5 billion nodes and 128 billion edges: the largest available public graph.
2 TB raw, compressed to 200 GB (a 10:1 compression factor, 12.5 bits per edge) thanks to SFrame compression methods.
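The numbers are consistent with each other:

edges = 128e9                             # 128 billion edges
bits_per_edge = 12.5
print(edges * bits_per_edge / 8 / 1e9)    # -> 200.0 GB, roughly 10x smaller than the 2 TB raw graph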
25. Common Crawl Graph
3.5 billion nodes and 128 billion edges, on 1x r3.8xlarge using 1x SSD.
PageRank: 9 min per iteration.
Connected components: ~1 hr.
No other general-purpose library out there is capable of this.
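Running those two toolkits looks roughly like this (a sketch based on the GraphLab Create toolkit APIs; the max_iterations value and the result field names are from memory, so treat them as assumptions):

import graphlab as gl

# g is an SGraph, e.g. the Common Crawl graph built from its edge list.
pr = gl.pagerank.create(g, max_iterations=10)
ranks = pr['pagerank']              # SFrame of per-vertex PageRank scores

cc = gl.connected_components.create(g)
components = cc['component_id']     # SFrame mapping vertex id -> component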
29. Extending Single Machine to Distributed
One machine, one disk: time for 1 pass over the data (X, Y) = 100s.
30. Extending Single Machine to Distributed
Parallel disks: time for 1 pass over the data (X, Y) = 50s.
Good external memory datastructures for ML still help.
31. Distributed Optimization
Newton, LBFGS, FISTA, etc. alternate two phases over the data (X, Y):
• Parallel sweep over the data: make sure this is embarrassingly parallel.
• Synchronize parameters: talk quickly.
32. Distributed Optimization
1. Data begins on HDFS.
2. Every machine takes part of the data to local disk/SSD.
3. Inter-machine communication uses fast supercomputer-style primitives.
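A toy sketch of the sweep/synchronize pattern, using multiprocessing on one box as a stand-in for a cluster (the least-squares objective, shard count, and step size are all illustrative):

import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Embarrassingly parallel sweep: each worker scans only its local shard."""
    (Xs, ys), w = args
    return Xs.T @ (Xs @ w - ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(10_000, 5)), rng.normal(size=10_000)

    # "Every machine takes part of the data": shard rows across 4 workers.
    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

    w = np.zeros(5)
    with Pool(4) as pool:
        for _ in range(50):
            grads = pool.map(partial_gradient, [(s, w) for s in shards])
            w -= 0.1 * sum(grads) / len(y)   # synchronize parameters: one reduction per pass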
33. Criteo Terabyte Click Logs
Click prediction task: predict whether a visitor clicked on a link or not.
34. Criteo Terabyte Click Prediction
4.4 billion rows, 13 features, ½ TB of data.
[Chart: runtime vs. #machines, dropping from 3630s to 225s as machines scale from 1 to 16.]
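For reference, a click-prediction model on logs like these would be set up roughly as follows (the file name and the 'clicked' column name are placeholders, not Criteo's actual schema):

import graphlab as gl

logs = gl.SFrame.read_csv('criteo_day_0.csv')   # hypothetical path

# Binary target plus the remaining feature columns.
model = gl.logistic_classifier.create(logs, target='clicked')
probs = model.predict(logs, output_type='probability')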
36. Graph Partitioning: Minimizing Communication
Vertex-cut: place edges on machines and let vertices span machines.
Communication is linear in the number of machines each vertex spans.
38. Graph Partitioning
Large natural graphs are difficult to partition anyway, so trade off the time to compute a partition against the quality of the partition.
How good a partition can we get while doing almost no work at all?
39. Random Partitioning
Randomly assign edges to machines (Machine 1, Machine 2, Machine 3, …).
But this is probably the worst partition you can construct. Can we do better?
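The cost of a random vertex-cut is easy to measure directly: count how many machines each vertex ends up spanning. A small sketch (the toy edge list and machine count are arbitrary):

import random
from collections import defaultdict

def replication_factor(edges, n_machines, seed=0):
    """Average number of machines each vertex spans when edges are
    assigned to machines uniformly at random (a random vertex-cut)."""
    rng = random.Random(seed)
    spans = defaultdict(set)
    for src, dst in edges:
        m = rng.randrange(n_machines)
        spans[src].add(m)
        spans[dst].add(m)
    return sum(len(s) for s in spans.values()) / len(spans)

edges = [(random.randrange(1000), random.randrange(1000)) for _ in range(20_000)]
print(replication_factor(edges, n_machines=16))   # communication grows with this number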
43. Common Crawl Graph
3.5 billion nodes and 128 billion edges.
[Chart: PageRank runtime vs. #machines, 0 to 16.]
16 machines (c3.8xlarge, 512 vCPUs): 45 sec per iteration, about 3B edges per second.
44. In Search of Performance
Understand the memory access patterns of algorithms, single machine and distributed: sequential? random?
Optimize datastructures for those access patterns.
45. It is not merely about speed or scaling: it is about doing more with what you already have.
48. Our Tools Are Easy To Use
import graphlab as gl
train_data = gl.SFrame.read_csv(traindata_path)
train_data['1grams'] = gl.text_analytics.count_ngrams(train_data['text'], 1)
train_data['2grams'] = gl.text_analytics.count_ngrams(train_data['text'], 2)
cls = gl.classifier.create(train_data, target='sentiment')
Sentiment analysis in 5 lines.
But: you have preexisting code in NumPy, SciPy, and scikit-learn.
49. Automatic Numpy Scaling
Automatic in-memory, type-aware compression using SFrame type compression technology.
import graphlab.numpy
(prints "Scalable numpy activation successful")
Scales all numeric numpy arrays to datasets much larger than memory. Works with scipy and sklearn.
Demo
51. Automatic Numpy Scaling
Caveats apply:
• Sequential access is highly preferred.
• Scales most memory-bound sklearn algorithms by at least 2x, some by more.
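A sketch of the intended usage (the import and the "activation" message are from the slide; the array shape and the choice of SGDClassifier are illustrative assumptions):

import graphlab.numpy             # prints "Scalable numpy activation successful"
import numpy as np
from sklearn.linear_model import SGDClassifier

# Numeric numpy arrays are now backed by compressed SFrame storage, so they can
# grow past RAM; sequential access patterns work best.
X = np.zeros((50_000_000, 20), dtype=np.float32)
y = (np.arange(50_000_000) % 2).astype(np.int32)

clf = SGDClassifier().fit(X, y)   # memory-bound sklearn estimators keep working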