HBase in Practice
Lars George – Partner and Co-Founder @ OpenCore
DataWorks Summit 2017 - Munich
NoSQL is no SQL is SQL?
About Us
• Partner & Co-Founder at OpenCore
• Before that
• Lars: EMEA Chief Architect at Cloudera (5+ years)
• Hadoop since 2007
• Apache Committer & Apache Member
• HBase (also in PMC)
• Lars: O’Reilly Author: HBase – The Definitive Guide
• Contact
• lars.george@opencore.com
• @larsgeorge
Website: www.opencore.com
Agenda
• Brief Intro To Core Concepts
• Access Options
• Data Modeling
• Performance Tuning
• Use-Cases
• Summary
Introduction To Core Concepts
HBase Tables
• From a user perspective, HBase is similar to a database, or spreadsheet
• There are rows and columns, storing values
• By default, asking for a specific row/column combination returns the current value (that is, the last value stored there)
HBase Tables
• HBase can have a different schema per row
• Could be called schema-less
• Primary access by the user-given row key and column name
• Sorting of rows and columns by their key (aka names)
HBase Tables
• Each row/column coordinate is tagged with a version number, allowing multi-versioned values
• Version is usually the current time (as epoch)
• API lets the user ask for versions (specific, by count, or by ranges)
• Up to 2B versions
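To make the version dimension concrete, here is a minimal sketch using the HBase 1.x Java client; the table name, family, and qualifier are invented for illustration:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionedGetExample {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("example"))) {
      Get get = new Get(Bytes.toBytes("row-1"));
      get.addColumn(Bytes.toBytes("d"), Bytes.toBytes("c"));
      get.setMaxVersions(5);                             // ask for up to five versions per cell
      get.setTimeRange(0L, System.currentTimeMillis());  // or bound versions by a time range
      for (Cell cell : table.get(get).rawCells()) {
        System.out.println(cell.getTimestamp() + " = "
            + Bytes.toString(CellUtil.cloneValue(cell)));
      }
    }
  }
}
```

The later snippets continue from this connection setup and reuse the `table` handle.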
HBase Tables
• Table data is cut into pieces to distribute over the cluster
• Regions split tables into shards at size boundaries
• Families split within regions to group sets of columns together
• At least one of each is needed
Scalability – Regions as Shards
• A region is served by exactly one region server
• Every region server serves many regions
• Table data is spread over servers
• Distribution of I/O
• Assignment is based on configurable logic
• Balancing cluster load
• Clients talk directly to region servers
Column Family-Oriented
• Group multiple columns into physically separated locations
• Apply different properties to each family
• TTL, compression, versions, …
• Useful to separate distinct data sets that are related
• Also useful to separate larger blobs from metadata
Data Management
• What is available is tracked in three locations
• System catalog table hbase:meta
• Files in HDFS directories
• Open region instances on servers
• The system aligns these locations
• Sometimes (very rarely) a repair may be needed using HBase Fsck
• Redundant information is useful to repair corrupt tables
HBase really is…
• A distributed hash map
• Imagine a complex, concatenated key comprising the user-given row key, the column name, and the timestamp (version)
• The complex key points to the actual value, that is, the cell
Fold, Store, and Shift
• Logical rows in tables are really stored as flat key-value pairs
• Each carries its full coordinates
• Pertinent information can be freely placed in the cell to improve lookup
• HBase is a column-family-grouped key-value store
HFile Format Information
• All data is stored in a custom (open-source) format, called HFile
• Data is stored in blocks (64KB default)
• Trade-off between lookups and I/O throughput
• Compression, encoding applied _after_ limit check
• Index, filter and meta data is stored in separate blocks
• Fixed trailer allows traversal of file structure
• Newer versions introduce multilayered index and filter structures
• Only the master index is loaded up front; partial index blocks are loaded on demand
• Reading data requires deserialization of a block into cells
• Kind of Amdahl’s Law applies
HBase Architecture
• One Master and many Worker servers
• Clients mostly communicate with workers
• Workers store actual data
• Memstore for accruing
• HFile for persistence
• WAL for fail-safety
• Data provided as regions
• HDFS is the backing store
• But it could be another file system
HBase Architecture (cont.)
HBase Architecture (cont.)
• Based on Log-Structured Merge-Trees (LSM-Trees)
• Inserts are done in the write-ahead log first
• Data is stored in memory and flushed to disk at regular intervals or based on size
• Small flushes are merged in the background to keep the number of files small
• Reads check the memory stores first and the disk-based files second
• Deletes are handled with “tombstone” markers
• Atomicity is on the row level, no matter how many columns
• Keeps the locking model easy
Merge Reads
• Read Memstore & StoreFiles using separate scanners
• Merge matching cells into a single row “view”
• Deletes mask existing data
• Bloom filters help skip StoreFiles
• Reads may have to span many files
APIs and Access Options
HBase Clients
• Native Java Client/API
• Non-Java Clients
• REST server
• Thrift server
• Jython, Groovy DSL
• Spark
• TableInputFormat/TableOutputFormat for MapReduce
• HBase as MapReduce source and/or target
• Also available for table snapshots
• HBase Shell
• JRuby shell adding get, put, scan etc. and admin calls
• Phoenix, Impala, Hive, …
Java API
From Wikipedia:
• CRUD: “In computer programming, create, read, update, and delete are the
four basic functions of persistent storage.”
• Other variations of CRUD include
• BREAD (Browse, Read, Edit, Add, Delete)
• MADS (Modify, Add, Delete, Show)
• DAVE (Delete, Add, View, Edit)
• CRAP (Create, Retrieve, Alter, Purge)
Java API (cont.)
• CRUD
• put: Create and update a row (CU)
• get: Retrieve an entire, or partial row (R)
• delete: Delete a cell, column, columns, or row (D)
• CRUD+SI
• scan: Scan any number of rows (S)
• increment: Increment a column value (I)
• CRUD+SI+CAS
• Atomic compare-and-swap (CAS)
• Combined get, check, and put operation
• Helps to overcome lack of full transactions
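A compact sketch of the CRUD + CAS calls named above, continuing from the connection setup in the earlier example (row and column names are invented; HBase 2.x replaces checkAndPut with checkAndMutate):

```java
byte[] row = Bytes.toBytes("user-42");
byte[] cf = Bytes.toBytes("d");

Put put = new Put(row);                                  // create/update (CU)
put.addColumn(cf, Bytes.toBytes("email"), Bytes.toBytes("old@example.com"));
table.put(put);

Result result = table.get(new Get(row));                 // read (R)

Put update = new Put(row);
update.addColumn(cf, Bytes.toBytes("email"), Bytes.toBytes("new@example.com"));
// CAS: apply the update only if the stored value still matches the expected one
boolean applied = table.checkAndPut(row, cf, Bytes.toBytes("email"),
    Bytes.toBytes("old@example.com"), update);

table.delete(new Delete(row));                           // delete (D)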
Java API (cont.)
• Batch Operations
• Support Get, Put, and Delete
• Reduce network round-trips
• If possible, batch operations to the server to gain better overall throughput
• Filters
• Can be used with Get and Scan operations
• Server side hinting
• Reduce data transferred to client
• Filters are no guarantee of fast scans
• Still full table scan in worst-case scenario
• Might have to implement your own
• Filters can hint next row key
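A hedged sketch of both ideas — batching mixed operations and filtering server-side — continuing from the earlier `table` handle, with illustrative names only:

```java
// One client call; the library groups actions per region server, saving round-trips.
List<Row> actions = new ArrayList<>();
actions.add(new Get(Bytes.toBytes("row-1")));
actions.add(new Put(Bytes.toBytes("row-2"))
    .addColumn(Bytes.toBytes("d"), Bytes.toBytes("c"), Bytes.toBytes("v")));
actions.add(new Delete(Bytes.toBytes("row-3")));
Object[] results = new Object[actions.size()];
table.batch(actions, results);

// Server-side filtering: less data on the wire, but possibly still a full scan.
Scan scan = new Scan();
scan.setFilter(new SingleColumnValueFilter(Bytes.toBytes("d"), Bytes.toBytes("status"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("ACTIVE")));
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result r : scanner) {
    System.out.println(Bytes.toString(r.getRow()));
  }
}
```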
Data Modeling
Where’s your data at?
Key Cardinality
• The best performance is gained from using row keys (see the scan sketch below)
• Time-range-bound reads can skip store files
• So can Bloom filters
• Selecting column families reduces the amount of data to be scanned
• Pure value-based access is a full table scan
• Filters often are too, but reduce network traffic
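In client code, restricting a scan along each of these dimensions looks roughly like this (key layout and names are assumptions, not part of the deck):

```java
// Assumes rows keyed by a "user|..." composite; all names are illustrative.
long dayEnd = System.currentTimeMillis();
long dayStart = dayEnd - 24L * 60 * 60 * 1000;  // last 24 hours, for illustration

Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("user42|"));     // partial key range instead of a full table scan
scan.setStopRow(Bytes.toBytes("user42}"));      // '}' sorts just after '|', closing the range
scan.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status")); // touch only one family/column
scan.setTimeRange(dayStart, dayEnd);            // lets the server skip store files out of range
```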
Key/Table Design
• Crucial to gain best performance
• Why do I need to know? Well, an RDBMS also only works well when columns are indexed and the query plan is OK
• Absence of secondary indexes forces use of row key or column name sorting
• Transfer multiple indexes into one
• Generate one large table -> Good, since it fits the architecture and spreads across the cluster
• DDI
• Stands for Denormalization, Duplication and Intelligent Keys
• Needed to overcome trade-offs of architecture
• Denormalization -> Replacement for JOINs
• Duplication -> Design for reads
• Intelligent Keys -> Implement indexing and sorting, optimize reads
Pre-materialize Everything
• Achieve one read per customer request if possible
• Otherwise keep at lowest number
• Reads between 10ms (cache miss) and 1ms (cache hit)
• Use MapReduce or Spark to compute exact results in batch
• Store and merge updates live
• Use increment() methods
Motto: “Design for Reads”
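A minimal sketch of the live-merge part using increment(), with an invented counter layout and the earlier `table` handle:

```java
byte[] row = Bytes.toBytes("metrics#2017-04-05");
Increment inc = new Increment(row);
inc.addColumn(Bytes.toBytes("c"), Bytes.toBytes("pageviews"), 1L);
table.increment(inc);

// Shortcut for a single counter column:
long views = table.incrementColumnValue(row, Bytes.toBytes("c"),
    Bytes.toBytes("pageviews"), 1L);
```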
Tall-Narrow vs. Flat-Wide Tables
• Rows do not split
• Might end up with one row per region
• Same storage footprint
• Put more details into the row key
• Sometimes dummy column only
• Make use of partial key scans
• Tall with Scans, Wide with Gets
• Atomicity only on row level
• Examples
• Large graphs, stored as adjacency matrix (narrow)
• Message inbox (wide)
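As an illustration of partial key scans on a tall-narrow design, here is a sketch with an invented <user>|<reverse timestamp> message key; setRowPrefixFilter is available in recent HBase 1.x clients:

```java
// Tall-narrow inbox sketch: one message per row, key = <user>|<reverse timestamp>,
// so the newest message sorts first. Names and layout are invented.
long reverseTs = Long.MAX_VALUE - System.currentTimeMillis();
byte[] messageKey = Bytes.add(Bytes.toBytes("user42|"), Bytes.toBytes(reverseTs));

// A partial key scan then reads one user's whole inbox in key order:
Scan inbox = new Scan();
inbox.setRowPrefixFilter(Bytes.toBytes("user42|"));
```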
Sequential Keys
<timestamp><more key>: {CF: {CQ: {TS : Val}}}
• Hotspotting on regions is bad!
• Instead do one of the following:
• Salting
• Prefix <timestamp> with distributed value
• Binning or bucketing rows across regions
• Key field swap/promotion
• Move <more key> before the timestamp (see OpenTSDB)
• Randomization
• Move <timestamp> out of key or prefix with MD5 hash
• Might also be mitigated by overall spread of workloads
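A small salting sketch; the bucket count and key layout are assumptions, not recommendations:

```java
// A hash-derived bucket prefix spreads sequential keys over regions, while the
// same key always maps to the same bucket. Requires java.util.Arrays.
int buckets = 16;                                  // e.g., roughly the region server count
byte[] key = Bytes.toBytes("2017-04-05T12:00|sensor-7");
byte salt = (byte) ((Arrays.hashCode(key) & 0x7fffffff) % buckets);
byte[] saltedKey = Bytes.add(new byte[] { salt }, key);
// Trade-off: reading a time range now fans out into one scan per bucket,
// merged client-side.
```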
Key Design Choices
• Based on the access pattern, either use sequential or random keys
• Often a combination of both is needed
• Overcome architectural limitations
• Neither is necessarily bad
• Use bulk import for sequential keys and reads
• Random keys are good for random access patterns
Checklist
• Design for Use-Case
• Read, Write, or Both?
• Avoid Hotspotting
• Hash leading key part, or use salting/bucketing
• Use bulk loading where possible
• Monitor your servers!
• Presplit tables
• Try prefix encoding when values are small
• Otherwise use compression (or both)
• For Reads: Restrict yourself
• Specify what you need, e.g. columns, families, time range
• Shift details to appropriate position
• Composite Keys
• Column Qualifiers
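Several checklist items can be applied at table-creation time; a sketch with the HBase 1.x admin API, where the table layout, encoding, and codec choices are illustrative only:

```java
// Assumes the Connection handle from the first sketch.
try (Admin admin = conn.getAdmin()) {
  HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("events"));
  HColumnDescriptor cf = new HColumnDescriptor("d");
  cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // prefix-style encoding for small values
  cf.setCompressionType(Compression.Algorithm.SNAPPY);  // and/or block compression
  desc.addFamily(cf);

  byte[][] splits = new byte[15][];                     // presplit, e.g. for 16 salt buckets
  for (int i = 1; i <= 15; i++) {
    splits[i - 1] = new byte[] { (byte) i };
  }
  admin.createTable(desc, splits);
}
```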
Performance Tuning
1000 knobs to turn… 20 are important?
Everything is Pluggable
• Cell
• Memstore
• Flush Policy
• Compaction Policy
• Cache
• WAL
• RPC handling
• …
Cluster Tuning
• First, tune the global settings
• Heap size and GC algorithm
• Memory share for reads and writes
• Enable Block Cache
• Number of RPC handlers
• Load Balancer
• Default flush and compaction strategy
• Thread pools (10+)
• Next, tune the per-table and family settings
• Region sizes
• Block sizes
• Compression and encoding
• Compactions
• …
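The per-table and per-family knobs can be set through the descriptors; a sketch with illustrative, not recommended, values (HBase 1.x API, assuming the Admin handle and table from the creation sketch):

```java
TableName name = TableName.valueOf("events");
HTableDescriptor desc = admin.getTableDescriptor(name);
desc.setMaxFileSize(10L * 1024 * 1024 * 1024);        // region split threshold: 10 GB

HColumnDescriptor cf = desc.getFamily(Bytes.toBytes("d"));
cf.setBlocksize(16 * 1024);                           // smaller blocks favor point gets
cf.setCompressionType(Compression.Algorithm.SNAPPY);  // per-family compression
admin.modifyTable(name, desc);
```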
Region Balancer Tuning
• A background process in the HBase Master tracks load on the servers
• The load balancer moves regions occasionally
• Multiple implementations exist
• Simple counts the number of regions
• Stochastic determines costs
• Favored Node pins HDFS block replicas
• Can be tuned further
• Cluster-wide setting!
RPC Tuning
• Default is one queue for all types of requests
• Can be split into separate queues for reads and writes
• The read queue can be further split into reads and scans
-> Stricter resource limits, but may avoid cross-starvation (settings sketched below)
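The queue split is driven by region-server settings that normally live in hbase-site.xml; the sketch below only documents the relevant HBase 1.x keys via the Configuration API, with example values:

```java
// These are region-server settings; they belong in the server's hbase-site.xml.
Configuration conf = HBaseConfiguration.create();
conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0.1f); // queues per handler
conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.6f);     // share of queues for reads
conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.5f);     // share of read queues for scans
```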
Key Tuning
• Design keys to match use-case
• Sequential, salted, or random
• Use sorting to convey meaning
• Colocate related data
• Spread load over all servers
• Clever key design can make use
of distribution: aging-out regions
Compaction Tuning
• Default compaction settings are aggressive
• Set for the update use-case
• For insert use-cases, Bloom filters are effective
• Allows tuning down compactions
• Saves resources by reducing write amplification
• More store files also enable faster time-range-bound scans
• The server can ignore older files
• Large regions may be eligible for advanced compaction strategies
• Stripe or date-tiered compactions (see the sketch below)
• Reduce rewrites to a fraction of the region size
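Opting a family into date-tiered compaction is a per-family setting; a sketch, where the engine class name matches HBase 1.3-era code (HBASE-15181) and should be verified for your version:

```java
HColumnDescriptor cf = new HColumnDescriptor("t");
cf.setConfiguration("hbase.hstore.engine.class",
    "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine");
cf.setBloomFilterType(BloomType.ROW);  // Blooms make fewer compactions viable for inserts
```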
Use-Cases
What works well, what does not, and what is so-so
Placing the Use-Case
Big Data Workloads
[Chart: workloads plotted by access pattern (Random Access, Short Scan, Full Scan) against latency (Low latency vs. Batch). HBase serves low-latency random access and short scans; HDFS + SQL serves low-latency full scans; HBase + MR/Spark and HBase + Snapshots -> HDFS + MR/Spark serve batch workloads; HDFS + MR (Hive/Pig) serves batch full scans.]
Big Data Workloads
[Chart: the same quadrants with example use-cases placed across them: current metrics, graph data, simple entities, messages, entity time series, hybrid entity time series with rollup serving and rollup generation, index building, and analytic archive.]
Summary
Wrapping it up…
What matters…
• For optimal performance, two things need to be considered:
• Optimize the cluster and table settings
• Choose the matching key schema
• Ensure load is spread over tables and cluster nodes
• HBase works best for random access and bound scans
• HBase can be optimized for larger scans, but its sweet spot is short burst scans (which can be parallelized too) and random point gets
• Java heap space limits addressable space
• Play with region sizes, compaction strategies, and key design to maximize result
• Using HBase for a suitable use-case will make for a happy customer…
• Conversely, forcing it into non-suitable use-cases may be cause for trouble
Questions?
Thank You!
@larsgeorge
Editor's Notes
  1. For Developers & End-Users – Apache Phoenix, Spark
  2. Importance of Row Key structure
  3. Time-series Data etc.
  4. Time-series Data etc.