Design Principles for a Modern Data Warehouse
CASE STUDIES AT DE BIJENKORF AND TRAVELBIRD
Old Challenges, New Considerations
Data warehouses still must deliver:
◦ Data integration of multiple systems
◦ Accuracy, completeness, and auditability
◦ Reporting for assorted stakeholders and business needs
◦ Clean data
◦ A “single version of the truth”
But the problem space now contains:
◦ Unstructured/Semi-structured data
◦ Real-time data
◦ Shorter time to access / self-service BI
◦ SO MUCH DATA (terabytes/hour to load)
◦ More systems to integrate (everything has an API)
New technologies are changing the landscape
What is best practice today?
A modern, best-in-class data warehouse:
◦ Is designed for scalability, ideally using cloud architecture
◦ Uses a bus-based lambda architecture
◦ Has a federated data model for structured and unstructured data
◦ Leverages MPP databases
◦ Uses an agile data model like Data Vault
◦ Is built using code automation
◦ Processes data using ELT, not ETL
All the buzzwords! But what does it look like and why do these things help?
Architectural overview at de Bijenkorf
Tools
AWS
◦ S3
◦ Kinesis
◦ Elasticache
◦ Elastic Beanstalk
◦ EC2
◦ DynamoDB
Open Source
◦ Snowplow Event Tracker
◦ Rundeck Scheduler
◦ Jenkins Continuous Integration
◦ Pentaho PDI
Other
◦ HP Vertica
◦ Tableau
◦ GitHub
◦ RStudio Server
DWH internal architecture, Travelbird and Bijenkorf
• Traditional three-tier DWH
• ODS generated automatically from staging
◦ Allows regeneration of the vault without replaying logs
• Ops mart reflects data in its original source form
◦ Helps offload queries from source systems
• Business marts materialized exclusively from the vault
Why use the cloud?
Cost Management
• Services billed by the hour; pay for what you use
• For small deployments (<50 machines), cloud hosting can be significantly cheaper
• e.g., a 3-node Vertica cluster in AWS with 25 TB of data: $2.2k/mo
Off-the-Shelf Services
• Minimize administration by using pre-built services like message buses (Kinesis), databases (RDS), and key/value stores (Elasticache), simplifying the technology stack
• Increase the speed of delivery of new functionality by eliminating most deployment tasks
• Full stack in a day? No problem!
Scalability
• Services can be scaled up/down automatically based on time, load, or other triggers (see the sketch below)
• Additional services can be added within minutes
• Services can scale (near) infinitely
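The scaling point above can be made concrete with a small script. A minimal sketch, assuming boto3 with configured credentials; the stream name, thresholds, and the doubling/halving rule are illustrative, not values from either deployment.

# Sketch only: scale a Kinesis stream up or down based on recent load.
import datetime
import boto3

kinesis = boto3.client("kinesis")
cloudwatch = boto3.client("cloudwatch")

STREAM = "clickstream-events"   # hypothetical stream name
now = datetime.datetime.utcnow()

# Total incoming records over the last 15 minutes
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kinesis",
    MetricName="IncomingRecords",
    Dimensions=[{"Name": "StreamName", "Value": STREAM}],
    StartTime=now - datetime.timedelta(minutes=15),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)
total = sum(point["Sum"] for point in stats["Datapoints"])

current = kinesis.describe_stream_summary(StreamName=STREAM)[
    "StreamDescriptionSummary"]["OpenShardCount"]

# Naive rule: double the shards under heavy load, halve them when quiet
target = current * 2 if total > 1_000_000 else max(1, current // 2)
if target != current:
    kinesis.update_shard_count(StreamName=STREAM,
                               TargetShardCount=target,
                               ScalingType="UNIFORM_SCALING")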
Lambda architecture: Right Now and Right Later
Designed to solve both primary data needs:
◦ Damn close, right now
◦ Correct, tomorrow
Data is processed twice per stream
As implemented at BYK and TB (sketched below):
◦ Real-time flow from Kinesis to the DWH
◦ Simultaneous write of the raw stream to S3
◦ Reprocessing as needed from S3 (batch)
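A minimal sketch of that dual path, assuming boto3 and a single-shard stream (a production consumer would use the Kinesis Client Library); the stream, bucket, and DWH loader function are illustrative.

# Sketch only: each event is read once, pushed to the DWH for "right now"
# and archived raw to S3 for "right later" (batch reprocessing).
import json
import time
import boto3

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")

def load_to_dwh(events):
    """Placeholder for the near-real-time micro-batch insert into the DWH."""
    ...

shard_iterator = kinesis.get_shard_iterator(
    StreamName="clickstream-events",
    ShardId="shardId-000000000000",
    ShardIteratorType="LATEST",
)["ShardIterator"]

while True:
    out = kinesis.get_records(ShardIterator=shard_iterator, Limit=500)
    records = [json.loads(r["Data"]) for r in out["Records"]]
    if records:
        # Speed layer: "damn close, right now"
        load_to_dwh(records)
        # Batch layer: raw copy to S3 so the vault can be rebuilt later
        s3.put_object(
            Bucket="dwh-raw-events",
            Key="events/{}.json".format(out["Records"][0]["SequenceNumber"]),
            Body="\n".join(json.dumps(r) for r in records),
        )
    shard_iterator = out["NextShardIterator"]
    time.sleep(1)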
Hadoop in the DWH
What is Hadoop?
◦ A distributed, fault tolerant file system
◦ A set of tools for file/data stream processing
Where does it fit into the DWH stack?
◦ Data lake: save all raw data for cheap; don't force schemas on unstructured data
◦ ETL: distributed batch processing, aggregation, and loading
Hadoop at Bijenkorf
◦ We had it but threw it out; the use cases didn’t fit
◦ Very little data is unstructured and the DWH supports JSON
◦ Data volumes are limited and growing slowly
◦ How did we solve the use cases?
◦ Data lake: S3 file storage + semi-structured data in Vertica (see the sketch below)
◦ Data processing: stream processing (stable event volumes + clean events)
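One hedged illustration of "semi-structured data in Vertica" is a flex table loaded straight from JSON. The schema, file path, and connection details here are assumptions, and vertica_python is just one client that can drive it.

# Sketch only: land raw JSON in a Vertica flex table, then query keys by name.
import vertica_python

conn_info = {"host": "dwh.example.com", "port": 5433,
             "user": "etl", "password": "...", "database": "dwh"}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# A flex table accepts JSON without a predeclared schema
cur.execute("CREATE FLEX TABLE stg.raw_events()")

# Load a day of raw events from a JSON file on the ETL host
with open("/data/events/2015-06-01.json", "rb") as f:
    cur.copy("COPY stg.raw_events FROM STDIN PARSER fjsonparser()", f)
conn.commit()

# Keys in the JSON can then be queried like ordinary columns
cur.execute("SELECT event_type, COUNT(*) FROM stg.raw_events "
            "GROUP BY event_type ORDER BY 2 DESC")
print(cur.fetchall())
conn.close()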
Hadoop at Travelbird
◦ Dirty, fast-growing event data, so…
◦ Hadoop in its typical role
◦ Raw data in AWS Elastic MapReduce (EMR) via S3
◦ Data cleaned and processed in Hadoop, then loaded into Redshift
The Role of Column Store Databases in the DWH
• C-stores persist each column independently and allow column compression
• Queries retrieve data only from the needed columns
Example: 7 billion rows, 25 columns, 10 bytes/column ≈ 1.6 TB table
Query: SELECT A, SUM(D) FROM table WHERE C >= X;
Row store: 1.6 TB of data scanned
Column store (50% compression): <100 GB of data scanned
Query performance comparison, C-store vs. Postgres (seconds):
Query                C-store    Postgres
Count Distinct       0.63       187
Count                2.1        230
Top 20, One Month    23         600
Top 20               62         600
Loads fast too! Facebook loads 35 TB/hour into Vertica.
But are there tradeoffs to a C-Store?
Weaknesses
◦ No PK/FK integrity enforced on write
◦ Slow on DELETE and UPDATE
◦ REALLY slow on single-record INSERT and SELECT
◦ Optimized for big queries at limited concurrency; only a few users can query at a time
Solutions
◦ Design around calculated keys (e.g., hashes)
◦ Build ETLs around COPY and TRUNCATE (see the sketch below)
◦ Route individual transactions to OLTP or key/value systems
◦ Optimize data structures for common queries and leverage big, slow disks to create denormalized tables
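The COPY/TRUNCATE point is easiest to see in code. A sketch only, assuming Vertica via the vertica_python client and invented table and file names; the same shape applies to Redshift with COPY from S3.

# Column stores dislike row-by-row DML, so staging is reloaded in bulk:
# one TRUNCATE, one COPY, no single-record INSERT/UPDATE/DELETE.
import vertica_python

conn = vertica_python.connect(host="dwh.example.com", port=5433,
                              user="etl", password="...", database="dwh")
cur = conn.cursor()

cur.execute("TRUNCATE TABLE stg.orders")
with open("/data/exports/orders.csv", "rb") as f:
    cur.copy("COPY stg.orders FROM STDIN DELIMITER ',' ABORT ON ERROR", f)
conn.commit()
conn.close()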
Data Vault 1.618 at Bijenkorf
3rd Normal Form vs. Data Vault
So many tables! WHY?!?!?!
What we gained
◦ Speed of integration of new entities
◦ Fast primary keys without lookups by using hash keys (see the sketch below)
◦ Data matches business processes, not systems
◦ Easy parallelization of table loading (24 concurrent tables? OK!)
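The "fast primary keys without lookups" point works because the key is computed rather than assigned. A small illustration of the common Data Vault hashing convention (MD5 over the normalised business key); the business keys shown are invented.

# Deterministic surrogate keys: no lookup against the target table needed.
import hashlib

def dv_hash(*parts):
    """MD5 over the normalised business key, the usual Data Vault convention."""
    normalised = "|".join(str(p).strip().upper() for p in parts)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Hub key from a single business key
hub_customer_key = dv_hash("C-10293")

# Link key from the combination of related business keys
link_order_customer_key = dv_hash("ORD-55821", "C-10293")

print(hub_customer_key, link_order_customer_key)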
ELT, not ETL
Advantages of ELT
◦ Performance: a Bijenkorf benchmark showed ELT to be >50x faster than ETL
◦ Plus horizontal scalability is web scale, big data, <insert buzzword here>
◦ Data availability: you want an exact replica of your source data in the DWH anyway
◦ Simpler architecture: fewer systems, fewer interdependencies (STG and DV are decoupled), and multiple transformations can be built from STG simultaneously
Myths of ELT
◦ "Source and target DB must match": intelligently coded ELT jobs use platform-agnostic code (or a library per source DB type) for loading to STG (sketched below)
◦ Bijenkorf runs MySQL and Oracle ELT into Vertica
◦ Travelbird runs MySQL and Postgres ELT into Redshift
◦ "Limited tool availability": DV 2.0 lends itself to code generators/managers, which are best built internally anyway
◦ Talend is free (like speech and hugs) and offers ELT for many systems
◦ "ELT takes longer to deploy": because data is replicated exactly from the source, getting records in is faster, and transformations can be iterated more quickly since they are independent of the source -> STG loading
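To illustrate the platform-agnostic loading idea: the extract side can stay plain DB-API code that works for MySQL, Postgres, or Oracle drivers alike, and only the final load statement knows the target. Everything here (module layout, table names, the IAM role) is hypothetical.

# Sketch only: generic extract to flat files, target-specific COPY templates.
import csv

def extract_to_csv(connection, query, path):
    """Dump any DB-API source query to a flat file for bulk loading."""
    cur = connection.cursor()
    cur.execute(query)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])
        writer.writerows(cur.fetchall())

# Only this mapping is target-specific
COPY_TEMPLATES = {
    "vertica":  "COPY {table} FROM LOCAL '{path}' DELIMITER ',' SKIP 1",
    "redshift": ("COPY {table} FROM '{s3_path}' CSV IGNOREHEADER 1 "
                 "CREDENTIALS 'aws_iam_role={role}'"),
}

# e.g. COPY_TEMPLATES["vertica"].format(table="stg.orders",
#                                       path="/data/exports/orders.csv")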
Targeted benefits of DWH automation at Bijenkorf
Objectives and achievements:
◦ Speed of development: integrating new sources or new data from existing sources takes 1-2 steps; adding a new vault dependency takes one step
◦ Simplicity: five jobs handle all ETL processes across the DWH
◦ Traceability: every record/source file is traced in the database, and every row is automatically identified by source file in the ODS
◦ Code simplification: most common key definitions replaced with dynamic variable replacement
◦ File management: every source file is automatically archived to Amazon S3 in the appropriate location, sorted by source, table, and date (see the sketch below); entire source systems, periods, etc. can be replayed in minutes
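The file-management behaviour lends itself to a one-function sketch. Assuming boto3; the bucket name and prefix layout are invented, but the source/table/date structure mirrors what is described above.

# Sketch only: archive each processed source file under source/table/date
# so any system or period can be replayed later.
import datetime
import os
import boto3

s3 = boto3.client("s3")
ARCHIVE_BUCKET = "byk-dwh-archive"   # hypothetical bucket

def archive_source_file(local_path, source_system, table_name):
    """Copy a processed source file to its archive location and return the URI."""
    load_date = datetime.date.today().isoformat()
    key = "{src}/{tbl}/{dt}/{name}".format(
        src=source_system, tbl=table_name, dt=load_date,
        name=os.path.basename(local_path))
    s3.upload_file(local_path, ARCHIVE_BUCKET, key)
    return "s3://{}/{}".format(ARCHIVE_BUCKET, key)

# e.g. archive_source_file("/data/exports/orders.csv", "webshop", "orders")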
Data Vault loading automation at BYK
1. All staging tables checked for changes
◦ New sources automatically added
◦ Last-change epoch based on load stamps, advanced each time all dependencies execute successfully
2. List of dependent vault loads identified
◦ Dependencies declared at time of job creation
◦ Load prioritization possible but not utilized
3. Loads planned in hub, link, sat order
◦ Jobs parallelized across tables but serialized per job
◦ Dynamic job queueing ensures appropriate execution order
4. Loads executed
◦ Variables automatically identified and replaced
◦ Each load records performance statistics and error messages
◦ The loader is fully metadata-driven, with a focus on horizontal scalability and management simplicity
◦ To support speed of development and performance, variable-driven SQL templates are used throughout (see the sketch below)
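A hedged illustration of what a variable-driven SQL template can look like in a metadata-driven loader; the template shape and metadata values are assumptions, not Bijenkorf's actual loader code.

# Sketch only: one generic statement per load type, filled in from loader
# metadata at run time and then executed against the DWH.
HUB_LOAD_TEMPLATE = """
INSERT INTO dv.{hub_table} ({hash_key}, business_key, load_dts, record_source)
SELECT DISTINCT {hash_expr},
       {business_key},
       CURRENT_TIMESTAMP,
       '{record_source}'
FROM stg.{staging_table} s
WHERE NOT EXISTS (
    SELECT 1 FROM dv.{hub_table} h WHERE h.{hash_key} = {hash_expr}
)
"""

metadata = {
    "hub_table": "hub_customer",
    "hash_key": "customer_hash_key",
    "hash_expr": "MD5(UPPER(TRIM(s.customer_number)))",
    "business_key": "s.customer_number",
    "staging_table": "customers",
    "record_source": "webshop",
}

sql = HUB_LOAD_TEMPLATE.format(**metadata)
# The metadata-driven loader would execute `sql` against the DWH here.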
Bringing it back: Best practice, in practice
◦ Code automation
◦ Cloud based
◦ Bus architecture
◦ MPP
◦ Data Vault
◦ Unstructured data stores
◦ ELT controlled by scheduler
Rob Winters
WintersRD@gmail.com

Editor's Notes
  1. http://lambda-architecture.net/ and http://www.semantikoz.com/blog/lambda-architecture-velocity-volume-big-data-hadoop-storm/
  2. https://redshiftuser.wordpress.com/2013/02/17/aws-redshift-query-comparison-times-against-hadoop-and-postgres/