A quick overview of what Snowplow is, followed by a more in-depth dive into how Snowplow is architected, and which AWS services are used where in the Snowplow data pipeline. Some tips are included at the end on using EMR with Redshift.
This presentation was given to the Hadoop Users Group in London on July 19th 2013, and was part of an event focused on AWS and Redshift in particular.
Snowplow presentation to HUG UK
1. Snowplow: scalable open source web and event analytics platform, built on AWS
Using EMR, Redshift, Cloudfront and Elastic Beanstalk to build a scalable, log-everything, query-everything data infrastructure
2. What is Snowplow?
• Web analytics platform
  • Javascript tags -> event-level data delivered in your own Amazon Redshift or PostgreSQL database, for analysis in R, Excel, Tableau
• Open source -> run on your own AWS account
  • Own your own data
  • Join with 3rd-party data sets (PPC, Facebook, CRM)
  • Analyse with any tool you want
• Architected to scale
  • Ad networks track 100Ms of events (impressions) per day
• General-purpose event analytics platform -> Universal Event Analytics
  • Log-everything infrastructure works for web data and other event data sets
3. Why we built Snowplow
• Traditional web analytics tools are very limited
  • Siloed -> hard to integrate
  • Reports built for publishers and retailers in the 1990s
• Impressed by how easy AWS makes it to collect, manage and process massive data sets
  • More on this in a second…
• Impressed by the new generation of agile BI tools
  • Tableau, Excel, R…
• Commoditise and standardise event data capture (esp. data structure) -> enable innovation in the use of that data
  • Lots of tech companies have built a similar stack to handle data internally
  • Makes sense for everyone to standardise around an open source product
4. Snowplow's (loosely coupled) technical architecture
1. Trackers -> 2. Collectors -> 3. Enrich -> 4. Storage -> 5. Analytics (A–D: standardised data protocols between each stage)
• Trackers: generate event data (e.g. Javascript tracker)
• Collectors: receive data from trackers and log it to S3
• Enrich: clean and enrich raw data (e.g. geo-IP lookup, sessionization, referrer parsing)
• Storage: store data in a format suitable to enable analysis
6. The Snowplow technology stack: collectors
1. Trackers 2. Collectors 3. Enrich 4. Storage 5. Analytics
Cloudfront collector:
• Tracker: GET request to a pixel hosted on Cloudfront
• Event data appended to the GET request as a query string
• Cloudfront logging -> data automatically logged to S3
• Scalable – the Cloudfront CDN is built to handle enormous volume and velocity of requests
Clojure collector on Elastic Beanstalk:
• Enables tracking users across domains, by setting a 3rd-party cookie server side
• Runs on Tomcat: customize the format of the Tomcat logs to match the Cloudfront log file format
• Elastic Beanstalk supports rotation of Tomcat logs into S3
• Scalable: Elastic Beanstalk makes it easy to handle spikes in request volumes
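The pixel-plus-query-string mechanism described above can be sketched as follows; the host name, the `/i` path and the parameter names (`e`, `page`) are illustrative placeholders, not Snowplow's actual wire protocol:

```scala
import java.net.URLEncoder

// Build a tracking-pixel GET URL with event data in the query string.
// Parameter names here are illustrative, not Snowplow's real protocol.
def buildPixelUrl(collectorHost: String, params: Map[String, String]): String = {
  val qs = params.toSeq
    .sortBy(_._1) // deterministic ordering, for readability only
    .map { case (k, v) =>
      URLEncoder.encode(k, "UTF-8") + "=" + URLEncoder.encode(v, "UTF-8")
    }
    .mkString("&")
  s"https://$collectorHost/i?$qs"
}

val url = buildPixelUrl("example.cloudfront.net",
  Map("e" -> "pv", "page" -> "Home Page"))
// https://example.cloudfront.net/i?e=pv&page=Home+Page
```

Because the event is just a query string on a GET request, Cloudfront's own access logging captures everything with no server-side code at all, which is what makes the Cloudfront collector so cheap to scale.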
7. The Snowplow technology stack: data enrichment
1. Trackers 2. Collectors 3. Enrich 4. Storage 5. Analytics
Scalding Enrichment on EMR
• Enrichment process run 1-4x per day
• Consolidates log files from the collector, cleans them up, enriches them, and writes back to storage (S3)
• Enrichments incl. referrer parsing, geo-IP lookups, server-side sessionization
• Process written in Scalding: a Scala API for Cascading
  • Cascading: a high-level library for Hadoop, especially well suited to building robust data pipelines (ETL) that e.g. push bad data into sinks separate from validated data
• Powered by EMR: a cluster is fired up to perform the enrichment step, then shut down
8. Hadoop and EMR are excellent for data enrichment
• For many users, the volume of data processed with each run is not large enough to necessitate a big data solution…
• …but building the process on Hadoop / EMR means it is easy to rerun the entire historical Snowplow data set through Enrichment, e.g.
  • When a new enrichment becomes available
  • When the company wants to apply a new definition of a key variable in their Snowplow data set (e.g. a new definition of sessionization, or a new definition of user cohort), i.e. a change in business logic
• Reprocessing the entire data set isn't just possible -> it's easy (as easy as just processing new data) and fast (just fire up a larger cluster)
• This is game-changing in web analytics, where reprocessing data has never been possible
9. Scalding + Scalaz make it easy for us to build rich, validated ETL pipelines to run on EMR
• Scalaz is a functional programming library for Scala – it has a Validation data type which lets us accumulate errors as we process our raw Snowplow rows
• Scalding + Scalaz lets us write ETL in a very expressive way
• ValidatedMaybeCanonicalOutput contains either a valid Snowplow event, or a list of validation failures (Strings) which were encountered trying to parse the raw Snowplow log row
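The slide's code sample is not reproduced in this transcript; below is a minimal sketch of the idea, using Scala's built-in `Either` to stand in for Scalaz's error-accumulating `Validation` (the `parseRow` logic and `CanonicalOutput` fields are simplified assumptions, not Snowplow's actual schema):

```scala
// Simplified stand-in for Snowplow's CanonicalOutput event
case class CanonicalOutput(timestamp: String, eventType: String)

// Parse a raw tab-separated log row into either a list of validation
// errors (Left) or a canonical event (Right). Scalaz's Validation would
// additionally let us accumulate several independent errors per row.
def parseRow(row: String): Either[List[String], CanonicalOutput] = {
  val fields = row.split("\t", -1)
  if (fields.length != 2)
    Left(List(s"Expected 2 fields, got ${fields.length}"))
  else if (fields(1).isEmpty)
    Left(List("Event type is empty"))
  else
    Right(CanonicalOutput(fields(0), fields(1)))
}

val good = parseRow("2013-07-19 21:58:01\tpv")
val bad  = parseRow("2013-07-19 21:58:01")
```

The payoff of this style is that a bad row never throws mid-job: every row flows through the pipeline as either a valid event or a bundle of error messages.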
10. Scalding + Scalaz make it easy for us to build rich, validated ETL pipelines to run on EMR (continued)
• Scalding + Scalaz lets us route our bad raw rows into a "bad bucket" in S3, along with all of the validation errors which were encountered for that row
• (The slide shows this pretty-printed – in fact the flatfile is one JSON object per line)
• In the future we could add an aggregation job to process these "bad bucket" files and report on the number of errors encountered and the most common validation failures
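The slide's example record isn't reproduced in this transcript; a hypothetical bad-bucket line, in the one-JSON-object-per-line form actually stored (field names and the error message are illustrative), might look like:

```json
{"line":"2013-07-19 21:58:01 GET /i?e=pv&tr_tt=14.0x ...","errors":["Field [tr_tt]: cannot convert [14.0x] to Double"]}
```

Keeping the original raw line alongside its errors means a row can be fixed and replayed through Enrichment later, rather than being silently dropped.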
12. Loading Redshift from an EMR job is relatively straightforward, with some gotchas to be aware of
• Load Redshift from S3, not DynamoDB – the costs for loading from DynamoDB only make sense if you need the data in DynamoDB anyway
• Your EMR job can either write directly to S3 (slow), or write to local HDFS and then S3DistCp to S3 (faster)
• For Scalding, our Redshift table target is a POJO assembled using scala.reflect.BeanProperty – with fields declared in the same order as in Redshift
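A minimal sketch of such a POJO, with illustrative field names rather than Snowplow's actual schema (modern Scala locates the annotation at `scala.beans.BeanProperty`; the 2.10-era `scala.reflect.BeanProperty` the slide mentions is the same annotation, since deprecated):

```scala
import scala.beans.BeanProperty

// Redshift-facing POJO: fields must be declared in the same order
// as the columns in the target Redshift table. Names are illustrative.
class SnowplowEventDto {
  @BeanProperty var collectorTstamp: String = ""
  @BeanProperty var event: String = ""
  @BeanProperty var pageUrl: String = ""
}

val dto = new SnowplowEventDto
dto.setEvent("page_view")
```

`@BeanProperty` generates the Java-style `getX`/`setX` methods, which is what makes a plain Scala class usable where Cascading expects a Java bean.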
13. Make sure to escape tabs, newlines etc. in your strings
• Once we have Snowplow events in CanonicalOutput form, we simply unpack them into tuple fields for writing
• Remember you are loading tab-separated, newline-terminated values into Redshift, so make sure to escape all tabs, newlines and other special characters in your strings
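A minimal escaping helper along these lines – a sketch, not Snowplow's actual implementation, and the exact escape scheme must match what your Redshift COPY options expect:

```scala
// Escape characters that would break a tab-separated, newline-terminated
// load file: escape the escape character itself first, then the
// delimiter and line-terminator characters.
def makeTsvSafe(s: String): String =
  s.replace("\\", "\\\\")
   .replace("\t", "\\t")
   .replace("\n", "\\n")
   .replace("\r", "\\r")

val safe = makeTsvSafe("line one\nline two\twith tab")
```

Note that backslashes are escaped first; doing it in any other order would double-escape the sequences produced by the later replacements.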
14. You need to handle field length too
• You can either handle string length proactively in your code, or add TRUNCATECOLUMNS to your Redshift COPY command
• Currently we proactively truncate
• BUT this code is not unicode-aware (Redshift varchar field lengths are in terms of bytes, not characters) and rather fragile – we will likely switch to using TRUNCATECOLUMNS
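A byte-aware truncation helper might look like this – a sketch under the assumptions named in the comments; `truncateUtf8` is a hypothetical name, not Snowplow's code:

```scala
import java.nio.charset.StandardCharsets

// Truncate a string to at most maxBytes of UTF-8 - the byte-awareness
// the slide says character-based truncation lacks. Simplification: this
// sketch does not guard against splitting a surrogate pair.
def truncateUtf8(s: String, maxBytes: Int): String = {
  if (s.getBytes(StandardCharsets.UTF_8).length <= maxBytes) s
  else {
    // Drop characters from the end until the encoded form fits
    var end = s.length
    while (end > 0 && s.substring(0, end).getBytes(StandardCharsets.UTF_8).length > maxBytes)
      end -= 1
    s.substring(0, end)
  }
}
```

For example, truncating "héllo" to 3 bytes yields "hé" (1 byte for "h" plus 2 for "é"), whereas a character-based truncate to length 3 would produce "hél" at 4 bytes and still overflow the column.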
15. Then use STL_LOAD_ERRORS, Excel and MAXERROR to help debug load errors
• If you do get load errors, then check STL_LOAD_ERRORS in Redshift – it gives you all the information you need to fix the load error
• If the error is non-obvious, pull your POJO, Redshift table definition and bad row (from STL_LOAD_ERRORS) into Excel to compare
• COPY … MAXERROR X is your friend – it lets you see more than just the first load error
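Illustrative Redshift SQL for both tips; the table name, S3 path and credentials are placeholders:

```sql
-- Most recent load errors, with the offending raw line and the reason
SELECT starttime, filename, line_number, colname, raw_line, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;

-- Keep loading past individual bad rows (here: up to 100) so one bad
-- line doesn't abort the whole COPY, and you see every failure at once
COPY atomic.events FROM 's3://your-bucket/enriched/'
CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...'
DELIMITER '\t' MAXERROR 100;
```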
16. TSV text files are great for feeding Redshift, but be careful of using them as your "master data store"
• Some limitations to using tab-separated flat files to store your data:
  • Inefficient for storage/querying – versus e.g. binary files
  • Schemaless – no way of knowing the structure without visually eyeballing the file
  • Fragile – problems with field length, tabs, newlines, control characters etc.
  • Inexpressive – no support for things like union data types; rows can only be 65kb wide (you can insert fatter rows into Redshift, but cannot query them)
  • Brittle – adding a new field to Redshift means the old files don't load; you need to re-run the EMR job over all of your archived input data to re-generate them
• All of this means we will be moving to a more robust Snowplow event storage format on disk (Avro), and simply generating TSV files from those Avro events as needed to feed Redshift (or Postgres or Amazon RDS or …)
• Recommendation: write a new Hadoop job step to take your existing outputs from EMR and convert them into Redshift-friendly TSVs; don't start hacking on your existing data flow