5. Weak side of combining Parquet and HBase
• Complex code to manage the flow and synchronization of data
between the two systems.
• Managing consistent backups, security policies, and monitoring
across multiple distinct systems.
6. Lambda Architecture Challenges
• In the real world, systems often need to accommodate
• Late-arriving data
• Corrections on past records
• Privacy-related deletions on data that has already been
migrated to the immutable store.
7. Happy Medium
• High throughput: goal within 2x of Impala
• Low latency for random reads/writes: goal 1 ms on SSD
• SQL- and NoSQL-style APIs
(Diagram: a spectrum from Fast Scans to Fast Random Access)
9. Tables, Schemas, Keys
• Kudu is a storage system for tables of structured data
• A schema consists of a finite number of columns
• Each column has a name and a type:
• Boolean, Integers, Unixtime_Micros,
• Floating, String, Binary
10. Keys
• An ordered subset of those columns is specified as the
table’s primary key
• The primary key:
• enforces a uniqueness constraint
• acts as the sole index by which rows may be efficiently
updated or deleted
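A minimal sketch of defining such a schema and key with the kudu-python client (host, table, and column names are illustrative assumptions, not from the slides):

    # Sketch: a schema whose ordered column subset (host, metric, ts)
    # is the primary key - it enforces uniqueness and is the sole
    # index for efficient updates and deletes.
    import kudu

    client = kudu.connect(host='kudu-master.example.com', port=7051)

    builder = kudu.schema_builder()
    builder.add_column('host').type(kudu.string).nullable(False)
    builder.add_column('metric').type(kudu.string).nullable(False)
    builder.add_column('ts').type(kudu.unixtime_micros).nullable(False)
    builder.add_column('value').type(kudu.int32).nullable(True)
    builder.set_primary_keys(['host', 'metric', 'ts'])
    schema = builder.build()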
11. Write Operations
• Users mutate the table using the Insert, Update, and Delete
APIs
• Note: the primary key must be fully specified (sketch below)
• Java, C++, Python API
• No multi-row transactional APIs:
• each mutation conceptually executes as its own
transaction,
• despite being automatically batched with other mutations
for better performance.
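Continuing the sketch above, single-row mutations through the same kudu-python client; each apply is conceptually its own transaction, and the session only batches operations for performance:

    from datetime import datetime

    table = client.table('metrics')   # assumes the table already exists
    session = client.new_session()
    now = datetime.utcnow()

    # Insert: all primary key columns must be fully specified.
    session.apply(table.new_insert(
        {'host': 'h1', 'metric': 'cpu', 'ts': now, 'value': 42}))

    # Update and Delete also address the row by its full primary key.
    session.apply(table.new_update(
        {'host': 'h1', 'metric': 'cpu', 'ts': now, 'value': 43}))
    session.apply(table.new_delete(
        {'host': 'h1', 'metric': 'cpu', 'ts': now}))

    session.flush()  # batched for throughput, but no multi-row atomicity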
12. Read Operations
• Scan operation:
• any number of predicates to filter the results
• two types of predicates:
• comparisons between a column and a constant value,
• and composite primary key ranges.
• A user may specify a projection for a scan.
• A projection consists of a subset of columns to be
retrieved (sketch below).
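A sketch of a scan combining comparison predicates with a projection, again using kudu-python (the projection setter name is my assumption about that client's scanner API):

    from datetime import datetime

    scanner = table.scanner()
    # Comparison predicates: column vs. constant value.
    scanner.add_predicate(table['host'] == 'h1')
    scanner.add_predicate(table['ts'] >= datetime(2017, 1, 1))
    # Projection: retrieve only a subset of the columns.
    scanner.set_projected_column_names(['ts', 'value'])
    scanner.open()
    rows = scanner.read_all_tuples()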
15. Storage Layout Goals
• Fast columnar scans
• best-of-breed immutable data formats
such as Parquet
• efficiently encoded columnar data files.
• Low-latency random updates
• O(lg n) lookup complexity for random
access
• Consistency of performance
• Most users are willing to trade peak performance for
predictability
16. MemRowSet
• In-memory concurrent B-tree
• No removals from the tree – deletions become MVCC
records instead
• No in-place updates – only modifications that do not
change the value size
• Leaf nodes are linked together for sequential scans
• Row-wise layout (illustrative sketch below)
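An illustrative sketch (not Kudu source code) of how MVCC records stand in for removals and in-place updates:

    # Each row keeps its insert timestamp plus an append-only chain of
    # mutation records; readers materialize the row as of a snapshot.
    DELETED = object()  # sentinel mutation marking a deletion

    class MemRow:
        def __init__(self, values, insert_txid):
            self.insert_txid = insert_txid
            self.values = values       # row data at insert time
            self.mutations = []        # (txid, column, new_value) records

        def mutate(self, txid, column, new_value):
            # Never changed in place: append an MVCC record instead.
            self.mutations.append((txid, column, new_value))

        def delete(self, txid):
            # Never removed from the tree: record a deletion instead.
            self.mutations.append((txid, None, DELETED))

        def read_at(self, snapshot_txid):
            if snapshot_txid < self.insert_txid:
                return None            # row not yet inserted at snapshot
            row = dict(self.values)
            for txid, column, value in self.mutations:
                if txid <= snapshot_txid:
                    if value is DELETED:
                        return None    # deletion recorded, row invisible
                    row[column] = value
            return row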
17. DiskRowSet
• Column-organized
• Each column is written to disk in a single contiguous
block of data
• The column itself is subdivided into small pages,
allowing granular random reads
• An embedded B-tree index locates pages by row offset
18. Deltas
• A DeltaMemStore is a concurrent B-tree that shares its
implementation with the MemRowSet
• A DeltaMemStore flushes into a DeltaFile
• A DeltaFile is a simple binary column file
19. Insert Path
• Each DiskRowSet stores a Bloom filter of the set of keys
present
• For each DiskRowSet, we also store the minimum and maximum
primary key, so rowsets whose key range cannot contain the
new key are skipped (sketch below)
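An illustrative sketch (not Kudu source code) of how those two structures cull DiskRowSets during the duplicate-key check on insert:

    def key_may_exist(disk_rowsets, key):
        """Check candidate DiskRowSets for a primary key, cheapest test first."""
        for rs in disk_rowsets:
            if not (rs.min_key <= key <= rs.max_key):
                continue  # key-range pruning: key cannot be in this rowset
            if not rs.bloom.might_contain(key):
                continue  # Bloom filter: key is definitely absent
            if rs.seek_key(key):
                return True  # only survivors pay for a real disk lookup
        return False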
20. Read Path
• Converts the key range predicate into a row offset range
predicate
• Performs the scan one column at a time
• Seeks each target column to the correct row offset
• Consults the delta stores to see if any later updates apply
(sketch below)
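An illustrative sketch (not Kudu source code) of that sequence:

    def scan_rowset(rowset, key_lo, key_hi, projection, snapshot):
        # 1. Key-range predicate -> row-offset range, via the key index.
        off_lo = rowset.key_index.lower_bound(key_lo)
        off_hi = rowset.key_index.upper_bound(key_hi)
        # 2. One column at a time: seek each projected column to the
        #    target offset and read its pages.
        block = {col: rowset.column(col).read(off_lo, off_hi)
                 for col in projection}
        # 3. Consult the delta stores for later updates or deletes
        #    visible at the snapshot.
        return rowset.deltas.apply(block, off_lo, off_hi, snapshot)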
21. Delta Compaction
• A background maintenance manager periodically
• scans DiskRowSets to find any cases where a large
number of deltas have accumulated, and
• schedules a delta compaction operation which merges
those deltas back into the base data columns.
22. RowSet Compaction
• A key-based merge of two or more DiskRowSets
• The output is written back to new DiskRowSets rolling every
32 MB
• RowSet compaction has two goals:
• remove deleted rows, and
• reduce the number of DiskRowSets that overlap in key
range
23. Kudu Trade-Offs
• Random updates will be slower
• Kudu requires a key lookup before an update and a Bloom
lookup before an insert
• Single-row seeks may be slower
• The columnar design is optimized for scans
• Especially slow when reading a row with many recent
updates
26. The Kudu Master
Kudu’s central master process has several key responsibilities:
• A catalog manager
• keeping track of which tables and tablets exist, as well as their
schemas, desired replication levels, and other metadata
• A cluster coordinator
• keeping track of which servers in the cluster are alive and
coordinating redistribution of data
• A tablet directory
• keeping track of which tablet servers are hosting replicas of
each tablet
28. Partitioning
• Tables in Kudu are horizontally partitioned.
• Kudu, like BigTable, calls these partitions tablets
• Kudu supports a flexible array of partitioning schemes
31. Partitioning: Hash plus Range
Img source: https://github.com/cloudera/kudu/blob/master/docs/images/hash-range-partitioning-example.png
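A sketch of expressing such a combined layout with kudu-python, reusing the client and schema from the earlier sketches (bucket count and columns are illustrative, and the range-column setter name is my assumption about the API):

    from kudu.client import Partitioning

    partitioning = Partitioning()
    # Hash-partition rows across 4 buckets by (host, metric)...
    partitioning.add_hash_partitions(column_names=['host', 'metric'],
                                     num_buckets=4)
    # ...and range-partition within each bucket by timestamp.
    partitioning.set_range_partition_columns(['ts'])
    client.create_table('metrics', schema, partitioning)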
32. Partitioning Recommendations
• Partition bigger tables, such as fact tables, so that each
tablet holds about 1 GB of data
• Do not partition small tables like dimensions
• Note: Impala doesn’t allow omitting the partitioning
clause, so you need to specify the single range partition
explicitly (sketch below):
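A sketch of that DDL issued from Python via the impyla client; the table and column names, and the single lower-bounded range partition, are illustrative assumptions:

    from impala.dbapi import connect

    conn = connect(host='impala-daemon.example.com', port=21050)
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE promotion_dim (
          promo_id   BIGINT,
          promo_name STRING,
          PRIMARY KEY (promo_id)
        )
        PARTITION BY RANGE (promo_id) (
          PARTITION 0 <= VALUES   -- the one explicit range partition
        )
        STORED AS KUDU
    """)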
35. Replication Approach
• Kudu uses Leader/Follower (Master-Slave)
replication
• Kudu employs the Raft[25] consensus algorithm to
replicate its tablets
• If a majority of replicas accept the write and log it to
their own local write-ahead logs,
• the write is considered durably replicated and thus
can be committed on all replicas
36. Raft: Replicated State Machine
• A replicated log ensures the state machines execute the same commands in the same order
• Consensus module ensures proper log replication
• System makes progress as long as any majority of servers are up
• Visualization: https://raft.github.io/raftscope/index.html
37. Consistency Model
• Kudu provides clients the choice between two consistency
modes for reads (scans):
• READ_AT_SNAPSHOT
• READ_LATEST
38. READ_LATEST consistency
• Monotonic reads are guaranteed (?); read-your-writes is not
• Corresponds to the "Read Committed" ACID isolation mode:
• This is the default mode.
39. READ_LATEST consistency
• The server will always return committed writes at the time
the request was received.
• This type of read is not repeatable.
41. READ_AT_SNAPSHOT Consistency
• The server attempts to perform a read at the provided
timestamp
• In this mode reads are repeatable (sketch below)
• at the expense of waiting for in-flight transactions whose
timestamp is lower than the snapshot's timestamp to
complete
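A hedged sketch of requesting such a scan with kudu-python; treat the exact read-mode setter and its argument as assumptions about that client:

    scanner = table.scanner()
    scanner.set_read_mode('snapshot')   # READ_AT_SNAPSHOT
    # Without an explicit snapshot timestamp the server picks one;
    # repeating the scan at the same timestamp returns the same rows.
    scanner.open()
    rows = scanner.read_all_tuples()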
42. Write Consistency
• Writes to a single tablet are always internally consistent
• By default, Kudu does not provide an external consistency
guarantee.
• However, for users who require a stronger guarantee, Kudu
offers the option to manually propagate timestamps between
clients
43. Replication Factor Limitation
• Since Kudu 1.2.0:
• The replication factor of tables is now limited to a
maximum of 7
• In addition, it is no longer allowed to create a table with an
even replication factor
44. Kudu and CAP Theorem
• Kudu is a CP type of storage engine.
• Writing to a tablet will be delayed if
the server that hosts that tablet’s
leader replica fails
• Kudu gains the following properties
by using Raft consensus:
• Leader elections are fast
• Follower replicas don’t allow
writes, but they do allow reads
46. Applications for which Kudu is a viable solution
• Reporting applications where new data must be immediately
available for end users
• Time-series applications with
• queries across large amounts of historic data
• granular queries about an individual entity
• Applications that use predictive models to make real-time
decisions
48. Business Case
• A leader in health care
compliance consulting and
technology-driven managed
services
• Cloud-based multi-services
platform
• It offers
• enhanced data security and
scalability,
• operational managed services,
and access to business
information
Img source: http://ihealthone.com/wp-content/uploads/2016/12/Healthcare_Compliance_Consultants-495x400.jpg
49. ETL Approach
Key Points:
• Leverage Confluent platform with
Schema Registry
• Apply a configuration-based approach:
• Avro Schema in Schema Registry for
Input Schema
• Impala Kudu SQL scripts for Target
Schema
• Stick to Python App as primary ETL code,
but extend:
• Develop new abstractions to work
with mapping rules
• Streaming processing for both facts and
dimensions
Cons:
• Scaling needs extra effort
(Data-flow diagram: Event Topics → ETL Code → Analytics DWH; the ETL
code is driven by configuration comprising the Input Schema, Mapping
Rules, Target Schema, and other configurations.)
50. Stream ETL using Pipeline Architecture
(Pipeline diagram: Data Reader → Mapper/Flattener → Types Adjuster →
Data Enricher → DB Sinker, with the Cache Manager and Configuration
supporting the stages.)
Pipeline Modules:
• Data Reader: reads data from the source DB
• Mapper/Flattener: flattens the tree-like JSON structure and
maps field names to the target ones
• Types Adjuster: adjusts/converts data types properly
• Data Enricher: enriches the data structure with new data:
• Generates the surrogate key
• Looks up data from the target DB (using the cache)
• DB Sinker: writes data into the target DB
Other Modules:
• Cache Manager: manages the cache of dimension data (sketch below)
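A minimal sketch of the composition idea behind these modules; the module bodies are condensed placeholders, not the project's actual code:

    def build_pipeline(*stages):
        # Chain generator-style stages: each takes and yields records.
        def run(records):
            for stage in stages:
                records = stage(records)
            return records
        return run

    def mapper_flattener(records):
        # Flatten the tree-like JSON and map field names to target ones.
        for rec in records:
            yield {'promo_id': rec['promotion']['id'],
                   'promo_name': rec['promotion']['name']}

    def types_adjuster(records):
        # Adjust/convert value types to match the target schema.
        for rec in records:
            rec['promo_id'] = int(rec['promo_id'])
            yield rec

    pipeline = build_pipeline(mapper_flattener, types_adjuster)
    # The Data Reader and DB Sinker stages would bracket this chain.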
52. Kudu Numeric vs String Keys
• Reason:
• Generating surrogate numeric keys adds an extra processing
step and complexity to the overall ETL process
• Sample Schema:
• Dimensions:
• Promotion dimension with 1,000 unique members, 30
categories
• Products dimension with 50,000 unique members, 300
categories
• Facts:
• Fact table referencing the two dimensions above,
with 1 million rows
• Fact table referencing the two dimensions above,
with 100 million rows
55. Pain Points
• Frequent releases with many changes
• Data type limitations (especially in the Python library and
Impala)
• Lack of Sequences/Constraints
• Lack of Multi-Row transactions
56. Limitations
• More than 50 columns is not recommended
• Immutable primary keys
• Non-alterable primary key, partitioning, and column types
• Partitions cannot be split or merged after creation
57. Modeling Recommendations: Star Schema
Dimensions:
• Replication factor equal to the
number of nodes in the cluster
• 1 tablet per dimension
Facts:
• Aim for as many tablets as you
have cores in the cluster
59. What Kudu is Not
• Not a SQL interface itself
• It’s just the storage layer – you should use Impala or
SparkSQL
• Not an application that runs on HDFS
• It’s an alternative, native Hadoop storage engine
• Not a replacement for HDFS or HBase
• Select the right storage for the right use case
• Cloudera will support and invest in all three
61. Kudu vs MPP Data Warehouses
In Common:
• Fast analytics queries via SQL
• Ability to insert, update, delete data
Differences:
Kudu advantages:
• Faster streaming inserts
• Improved Hadoop integration
Kudu disadvantages:
• Slower batch inserts
• No transactional data loading, multi-row transactions, or
indexing
Structured storage in the Hadoop ecosystem has typically been achieved in two ways: for static data sets, data is typically stored on HDFS using binary data formats such as Apache Avro[1] or Apache Parquet[3]. However, neither HDFS nor these formats has any provision for updating individual records, or for efficient random access. Mutable data sets are typically stored in semi-structured stores such as Apache HBase[2] or Apache Cassandra[21]. These systems allow for low-latency record-level reads and writes, but lag far behind the static file formats in terms of sequential read throughput for applications such as SQL-based analytics or machine learning.
Following the design of BigTable, Kudu relies on:
A single Master server, responsible for metadata
(can be replicated for fault tolerance)
An arbitrary number of Tablet Servers, responsible for data
When READ_LATEST is specified, the server will always return committed writes at the time the request was received. This type of read does not return a snapshot timestamp and is not repeatable.
In ACID terms this corresponds to Isolation mode: "Read Committed"
This is the default mode.
Monotonic reads [19] is a guarantee that this kind of anomaly does not happen. It’s a lesser guarantee than strong consistency, but a stronger guarantee than eventual consistency. When you read data, you may see an old value; monotonic reads only means that if one user makes several reads in sequence, they will not see time go backwards, i.e. they will not read older data after having previously read newer data.
In this situation, we need read-after-write consistency, also known as read-your-writes consistency [20]. This is a guarantee that if the user reloads the page, they will always see any updates they submitted themselves. It makes no promises about other users: other users’ updates may not be visible until some later time. However, it reassures the user that their own input has been saved correctly.
By default, Kudu does not provide an external consistency guarantee. That is to say, if a client performs a write, then communicates with a different client via an external mechanism (e.g. a message bus) and the other performs a write, the causal dependence between the two writes is not captured. A third reader may see a snapshot which contains the second write without the first.
However, for users who require a stronger guarantee, Kudu offers the option to manually propagate timestamps between clients: after performing a write, the user may ask the client library for a timestamp token. This token may be propagated to another client through the external channel, and passed to the Kudu API on the other side, thus preserving the causal relationship between writes made across the two clients.