The document discusses options for scaling relational database management systems (RDBMS). It describes scale-up vs scale-out approaches, and compares solutions like master-slave replication, sharding, and using scale-out databases. It provides details on ClustrixDB's scale-out architecture with shared-nothing storage and automatic data distribution. Benchmark results show ClustrixDB outperforming Aurora for throughput and latency on OLTP workloads as nodes are added.
2. Agenda
u Database market landscape
u Options to scale a DBMS
u Scale-out architecture
u Comparison of solutions for high-transaction relational databases
2/11/16
3. Generalized and Specialized
(Diagram: database market landscape. Transactional side, serving high-concurrency, write-heavy, real-time analytics workloads: traditional databases, NoSQL, and operational/OLTP systems (NewSQL). Analytics side, serving historical and exploratory analytics: DW/analytical DBMS and Hadoop.)
5. RDBMS Scaling Techniques
u Scale-Up
u Master-Slave
u Master-Master
u MySQL Clustering Technologies
u Sharding
u Scale-Out
6. Options to Scale DBMS
(Diagram: three paths to scale a DBMS)
u Scale-out NoSQL (e.g., MongoDB): no transactions; may have weak consistency (CAP); application must take on DB coding
u Scale-out NewSQL (e.g., ClustrixDB): ACID; proven scalability for reads and writes; shared-nothing
u Scale-up (e.g., Aurora): reads scale, but limited scalability on writes; not shared-nothing scale-out
7. Scaling-Up
u Keep increasing the size of the (single) database server
u Pros
u Simple, no application changes needed
u Cons
u Expensive: at some point you’re paying 5x for 2x the performance
u ‘Exotic’ hardware (128 cores and above) becomes price-prohibitive
u Eventually you ‘hit the wall’, and you literally cannot scale up any further
8. Scaling Reads: Master/Slave
u Add one or more ‘Slave’ read servers to your ‘Master’ database server
u Pros
u Reasonably simple to implement.
u Read/write fan-out can be done at the proxy level
u Cons
u Only adds Read performance
u Data consistency issues can occur, especially if the application isn’t coded to ensure reads from the slaves are consistent with reads from the master
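The read/write fan-out mentioned above can be sketched as a tiny routing rule. This is a minimal illustration with in-memory stand-ins for the master and slaves; a real proxy (e.g., ProxySQL or HAProxy) applies the same idea to live connections, and the class and server names here are hypothetical.

```python
import itertools

class ReadWriteProxy:
    """Route writes to the master and fan reads out across the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        # Round-robin over the slaves spreads read load evenly.
        self._next_slave = itertools.cycle(slaves).__next__

    def route(self, statement):
        # Classify by the leading SQL verb: writes go to the master.
        verb = statement.lstrip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.master
        return self._next_slave()

proxy = ReadWriteProxy("master", ["slave1", "slave2"])
print(proxy.route("INSERT INTO t VALUES (1)"))  # master
print(proxy.route("SELECT * FROM t"))           # slave1
print(proxy.route("SELECT * FROM t"))           # slave2
```

Note that this routing alone does not solve the consistency issue above: a read routed to a lagging slave can still return stale data.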
9. Scaling Writes: Master/Master
u Add one or more additional ‘Master’ servers alongside your ‘Master’ database server
u Pros
u Adds Write scaling without needing to shard
u Cons
u Adds write scaling at the cost of read-slaves
u Adding read-slaves would add even more latency
u Application changes are required to ensure data consistency / conflict resolution
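The conflict resolution the last bullet refers to is typically application code. Below is a minimal sketch assuming a last-write-wins policy keyed on a timestamp column; the row format and field names are illustrative, not from the slides, and real deployments often need richer strategies (version vectors, per-column merges).

```python
def resolve(local_row, remote_row):
    """Keep the version with the newer updated_at timestamp.

    Ties break on the origin name so both masters converge
    deterministically on the same row.
    """
    key_local = (local_row["updated_at"], local_row["origin"])
    key_remote = (remote_row["updated_at"], remote_row["origin"])
    return local_row if key_local >= key_remote else remote_row

# The same row updated concurrently on two masters:
a = {"id": 7, "balance": 100, "updated_at": 1700000001, "origin": "master-a"}
b = {"id": 7, "balance": 120, "updated_at": 1700000005, "origin": "master-b"}
print(resolve(a, b)["balance"])  # 120: the newer write wins on both masters
```

The design choice to highlight: last-write-wins silently discards the losing update, which is exactly the kind of semantics the application must consciously accept.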
10. Scaling Reads & Writes: Sharding
(Diagram: four shards partitioned by key range: Shard01: A-K, Shard02: L-O, Shard03: P-S, Shard04: T-Z)
u Partitioning tables across separate database servers
u Pros
u Adds both write and read scaling
u Cons
u Loses the ability of an RDBMS to manage transactionality, referential integrity and ACID
u ACID compliance & transactionality must be managed at the application level
u Consistent backups across all the shards are very hard to manage
u Reads and writes can be skewed / unbalanced across shards
u Application changes can be significant
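The application-level burden the cons describe starts with routing: every query must first decide which shard holds the key. A minimal sketch of the A-K / L-O / P-S / T-Z range split (shard names here are illustrative):

```python
import bisect

# Inclusive upper bound of each shard's key range, in order.
SHARD_BOUNDS = ["K", "O", "S", "Z"]
SHARD_NAMES = ["shard01", "shard02", "shard03", "shard04"]

def shard_for(key):
    """Pick the shard whose letter range covers the key's first letter."""
    first = key[0].upper()
    return SHARD_NAMES[bisect.bisect_left(SHARD_BOUNDS, first)]

print(shard_for("Alice"))   # shard01
print(shard_for("Miller"))  # shard02
print(shard_for("Zhang"))   # shard04
```

This also makes the skew problem above concrete: if most customer names start with A-K, shard01 takes most of the traffic, and rebalancing means changing the bounds and migrating data.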
11. Scaling Reads & Writes: MySQL Cluster
u Provides shared-nothing clustering and auto-sharding for MySQL (designed for Telco deployments: minimal cross-node transactions, HA emphasis)
u Pros
u Distributed, multi-master model
u Provides high availability and high throughput
u Cons
u Only supports read-committed isolation
u Long-running transactions can block a node restart
u Statement-based replication (SBR) is not supported
u Range scans are expensive and perform worse than in stock MySQL
u Unclear how it scales with many nodes
12. Application Workload Partitioning
u Partition entire application + RDBMS stack across several “pods”
u Pros
u Adds both write and read scaling
u Flexible: can keep scaling with addition of pods
u Cons
u No data consistency across pods (only suited for cases where it is not needed)
u High overhead in DBMS maintenance and upgrade
u Queries / Reports across all pods can be very complex
u Complex environment to setup and support
(Diagram: several independent pods, each running its own full application + RDBMS stack)
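The "queries / reports across all pods" complexity above can be sketched as scatter-gather: the application must query every pod and merge the partial results itself, because no single DBMS sees all the data. Pod contents below are simulated with plain lists.

```python
# Each pod's database holds only its own order totals (simulated data).
pods = {
    "pod-a": [120, 80, 45],
    "pod-b": [200, 15],
    "pod-c": [60],
}

def total_revenue(pods):
    """Scatter the aggregate to every pod, then merge the partials."""
    partials = [sum(orders) for orders in pods.values()]  # one query per pod
    return sum(partials)                                  # app-side merge

print(total_revenue(pods))  # 520
```

Even this trivial sum needs application code; joins or ordered reports across pods get far more complex, which is the maintenance overhead the slide warns about.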
13. DBMS Capacity, Elasticity and Resiliency
DBMS Scaling   | Capacity                   | Resiliency                 | Elasticity | Application Impact
Scale-up       | Many cores, very expensive | Single point of failure    | No         | None
Master-Slave   | Reads only                 | Fail-over                  | No         | Yes, for read scale
Master-Master  | Read / Write               | Yes                        | No         | High (update conflicts)
MySQL Cluster  | Read / Write               | Yes                        | No         | None (or minor)
Sharding       | Unbalanced reads/writes    | Multiple points of failure | No         | Very high
Scale-Out      | Read / Write               | Yes                        | Yes        | None
14. DBMS Architecture: Scale-Out
Shared-Nothing Architecture
(Diagram: identical nodes, each holding a query compiler, data map, engine, and data)
Each node contains:
u Query parser/planner: distributes partial query fragments to the nodes
u Data map: every node holds metadata about data placement across the cluster
u Database engine: every node can perform all database operations (no leader, aggregator, leaf, or data-only nodes)
u Data: tables are automatically distributed and redistributed across all nodes
16. Distributed Query Processing
(Diagram: transactions arrive through a load balancer and can be fielded by any ClustrixDB peer node)
u Queries are fielded by any peer node
u Routed to node holding the data
u Complex queries are split into steps and processed in parallel
u Automatically distributed for optimized performance
u All nodes handle writes and reads
u Results are aggregated and returned to the user
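The split-into-steps-and-process-in-parallel flow above is the classic scatter-gather pattern. A minimal sketch, with node data simulated in memory and threads standing in for peer nodes (not ClustrixDB's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# The slice of a table held by each peer node (simulated).
NODE_DATA = {
    "node1": [3, 9, 4],
    "node2": [7, 1],
    "node3": [5, 8, 2],
}

def run_fragment(node):
    """Partial aggregate executed locally on one node's data slice."""
    return max(NODE_DATA[node])

def distributed_max():
    # Scatter: run one query fragment per node, in parallel.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(run_fragment, NODE_DATA)
    # Gather: aggregate the partial results into the final answer.
    return max(partials)

print(distributed_max())  # 9
```

Because each fragment touches only local data, adding nodes adds both storage and compute to the same query, which is the basis of the read-and-write scaling claimed for the shared-nothing design.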
17. DBMS Capacity, Elasticity and Resiliency
Features                          | ClustrixDB                       | Aurora
Write Scalability                 | Writes scale by adding nodes     | Cannot add write nodes
High-Concurrency Latency          | Low latency at high concurrency  | Latency climbs quickly at high concurrency
ACID                              | Yes                              | Yes
On-Demand Write Scale             | Yes                              | No
Automatically Distributed Queries | Yes: no application changes      | No: read/write fan-out needed; write contention on master
Cloud / On-Premises               | Yes                              | No, AWS cloud only
Shared-Nothing Storage            | Yes: parallel data access        | No: contention at high write concurrency