Cassandra by example - the path of read and
write requests

Abstract

This article describes how Cassandra handles and processes requests. It will give you a better
understanding of Cassandra's internals and architecture. The path of a single read request as well as
the path of a single write request is described in detail. The description is based on a single data
center Cassandra V1.1.4 cluster (default store configuration).


Example data model

Please note that this article is not an introduction to the Cassandra data model. In the examples below
a column family hotel is used. In short, a column family is analogous to a table in the relational
database approach. Each hotel record, or row, is identified by a unique key. The columns of a hotel
row include the hotel name as well as the category of the hotel.

The column family hotel lives inside the keyspace book_a_hotel. A keyspace is analogous to a
tablespace or database.




Thrift

The common way to access Cassandra is via Thrift. Thrift is a language-independent RPC protocol
originally developed at Facebook and contributed to Apache. Although Thrift is widely supported by
the most popular programming languages, the Cassandra project suggests using higher-level
Cassandra clients such as Hector or Astyanax instead of the raw Thrift-based API. In general, these
high-level clients try to hide the underlying middleware protocol.



Gregor Roth           Cassandra by example - the path of read and write requests                             1
The listing below shows a simple query by using the Hector client library V1.1.
// [1] prepare the client (cluster)
Cluster cluster = HFactory.getOrCreateCluster("TestClstr", "172.39.126.14, 172.39.126.93, 172.39.126.52");
Keyspace keyspaceOperator = HFactory.createKeyspace("book_a_hotel", cluster);


// [2] create the query (fetching the column category)
SliceQuery<String, String, String> query = HFactory.createSliceQuery(keyspaceOperator,
AsciiSerializer.get(), StringSerializer.get(), StringSerializer.get());
query.setColumnFamily("hotel");
query.setKey("26813445");
query.setColumnNames("category");


// [3] perform the request
QueryResult<ColumnSlice<String, String>> result = query.execute();
ColumnSlice<String, String> row = result.get();
String category = row.getColumnByName("category").getValue();
//...


// [4] release the client (cluster)
cluster.getConnectionManager().shutdown();


In the first line of the listing a set of server IP addresses is passed when creating a Hector Cluster
object. Each server address identifies a single Cassandra node. A collection of independent Cassandra
nodes (the Cassandra cluster) represents the Cassandra database. Within this cluster all nodes are
peers; there is no master node.

The client is free to connect to any Cassandra node to perform any request. In the listing above 3
addresses are configured. This does not mean that the Cassandra cluster consists of 3 nodes. It just
means that the client will communicate with these nodes only.

The connected Cassandra node potentially plays two roles. In any case the connected node is the
coordinator node, which is responsible for handling the dedicated request. In addition, the connected
node is a replica node if it is responsible for storing a replica of the requested data.
For instance, the requested Pavillon Nation hotel record of the example above does not have to be
stored on the connected node. Often the coordinator node has to send sub-requests to other replica
nodes to be able to handle the request. As shown in the diagram below, the nodes 172.39.126.14,
172.39.126.93 and 172.39.126.52 would not be able to serve a Pavillon Nation query directly
without sub-requesting other nodes.




Please note that coordinator node and replica node are role descriptions of a Cassandra node
in the context of a dedicated read or write operation. Every Cassandra node can act as a coordinator
node as well as a replica node.

Hector uses a round-robin strategy to select the node to use. When executing the example query, Hector
first connects to one of the configured nodes. The connect request is handled on the server side by
the CassandraServer.




By default the CassandraServer is bound to server port 9160 during the start sequence of a
Cassandra node. The CassandraServer implements Cassandra's Thrift interface, which defines remote
procedure methods such as set_keyspace(…) or get_slice(…). This means Cassandra's Thrift interface is
implicitly stateful. The Hector client has to call the remote method set_keyspace(..) first to assign the
keyspace book_a_hotel to the current connection session. After assigning the keyspace,
get_slice(..) can be called to request the columns of the Pavillon Nation hotel.

However, you are not forced to use Thrift to access Cassandra. Several alternative open-source
connectors such as REST-based connectors exist.


Determining the replica nodes

The CassandraServer is responsible only for handling the client-server communication. Internally, the
CassandraServer calls the local StorageProxy class to process the request. The StorageProxy
implements the coordinator logic, which includes determining the replica nodes for
the requested row key as well as calling these replica nodes.

By default a RandomPartitioner is used to determine the replica nodes for the row key of the
request. The RandomPartitioner spreads the data records (rows) evenly across the Cassandra nodes,
which are arranged in a circular ring. Within this ring each node is assigned a range of hash values
(tokens). To determine the first replica, the MD5 hash of the row key is calculated and the node
whose assigned token range contains the key hash is selected.


For instance, the token of the Pavillon Nation's row key 26813445 is
91851936251452796391746312281860607309. This token lies within the token range of node
172.39.126.86, which means that node 172.39.126.86 is responsible for storing a replica of the
Pavillon Nation record.
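The token computation can be sketched as follows. This is a simplified illustration of the RandomPartitioner idea (the MD5 hash of the row key interpreted as a non-negative big integer); the class name is made up for this sketch, and the real partitioner's exact token arithmetic differs in detail.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Simplified sketch of how a RandomPartitioner-style token is derived from a
// row key: the MD5 hash of the key is read as an unsigned big integer.
public class TokenSketch {

    public static BigInteger token(String rowKey) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] hash = md5.digest(rowKey.getBytes(StandardCharsets.UTF_8));
            // signum 1: interpret the 16 hash bytes as a non-negative integer
            return new BigInteger(1, hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The resulting token is deterministic per key and is then matched against the nodes' token ranges.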




In most cases a replica is stored by more than one node, depending on the keyspace's replication
factor. For instance, a replication factor of 2 means the clockwise next node of the ring will store the
replica, too. If the replication factor is 3, the node after that will also store the replica, and so forth.
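The clockwise replica placement can be sketched with a sorted map standing in for the token ring. The node addresses and class names below are hypothetical, and real placement strategies consider more (e.g. racks and data centers):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of replica selection on the token ring: the first replica is the node
// whose token covers the key's token, further replicas are the clockwise
// next nodes on the ring.
public class RingSketch {

    // maps each node's assigned token to the node's address
    private final TreeMap<BigInteger, String> ring = new TreeMap<>();

    public void addNode(BigInteger token, String address) {
        ring.put(token, address);
    }

    public List<String> replicasFor(BigInteger keyToken, int replicationFactor) {
        List<String> replicas = new ArrayList<>();
        // first replica: the next node token >= the key token,
        // wrapping around to the first node of the ring if necessary
        BigInteger current = (ring.ceilingKey(keyToken) != null)
                ? ring.ceilingKey(keyToken) : ring.firstKey();
        while (replicas.size() < Math.min(replicationFactor, ring.size())) {
            replicas.add(ring.get(current));
            BigInteger next = ring.higherKey(current); // clockwise neighbour
            current = (next != null) ? next : ring.firstKey();
        }
        return replicas;
    }
}
```

With three nodes at tokens 10, 20 and 30 and a replication factor of 2, a key token of 15 maps to the nodes at 20 and 30; a key token of 35 wraps around to the nodes at 10 and 20.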


Processing a read request

To handle a read request, the StorageProxy (which acts as the coordinator of the request) determines the
replica nodes as described above. Additionally, the StorageProxy checks whether enough replica nodes
are alive to handle the read request. If so, the replica nodes are sorted by proximity
(closest node first) and the first replica node is called to get the requested row data.

In contrast to the Thrift-based client-server communication, the Cassandra nodes exchange data
using a message-oriented, TCP-based protocol. This means the StorageProxy gets the requested
row data via Cassandra's messaging protocol.

Whether other replica nodes are called depends on the consistency level, which is specified by the
client request. If consistency level ONE is required, no further replica nodes are called. If
consistency level QUORUM is required, in total (replication_factor / 2) + 1 replica nodes
are called.
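The QUORUM count mentioned above follows directly from the replication factor; as a one-line sketch (the class name is made up):

```java
// QUORUM: a strict majority of the replicas, computed from the replication factor
public class ConsistencySketch {
    public static int quorum(int replicationFactor) {
        return (replicationFactor / 2) + 1;
    }
}
```

With the common replication factor of 3, QUORUM means 2 of the 3 replicas must answer.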

In contrast to the first full-data read call, all additional calls are digest calls. A digest call returns a
single MD5 hash over all column names, values and timestamps instead of the complete row
data. The hashes of all calls, including the first one, are then compared. If a hash does not
match, the replicas are inconsistent, and the out-of-date replicas are auto-repaired during the
read process. To do this, a full-data read request is sent to the additional nodes, the most recent
version of the data is computed and the diff is sent to the out-of-date replicas.
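A digest call can be sketched as reducing a replica's row version to a single MD5 hash over column names, values and timestamps. The model below (hypothetical class names, a row as a map of column name to value and timestamp) only illustrates the idea that equal rows produce equal digests while diverging replicas do not:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

// Sketch of a read-repair digest: a replica's row version is reduced to one
// MD5 hash over its column names, values and timestamps.
public class DigestSketch {

    public static class Column {
        public final String value;
        public final long timestamp;
        public Column(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    public static byte[] digest(Map<String, Column> row) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            // TreeMap gives a deterministic order, so equal rows hash equally
            for (Map.Entry<String, Column> e : new TreeMap<>(row).entrySet()) {
                md5.update(e.getKey().getBytes(StandardCharsets.UTF_8));
                md5.update(e.getValue().value.getBytes(StandardCharsets.UTF_8));
                md5.update(Long.toString(e.getValue().timestamp).getBytes(StandardCharsets.UTF_8));
            }
            return md5.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

If the digest of a replica differs from the hash of the full-data read, the coordinator knows that this replica holds a different row version and triggers the repair.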

Occasionally all replica nodes for the row key are called independently of the requested consistency
level. This depends on the column family's read_repair_chance parameter, which specifies the
probability with which read repairs are invoked. The default value of 0.1 means that a read repair is
performed for 10% of the read requests. However, the client response is always answered according
to the requested consistency level; the additional work is done in the background.
A read_repair_chance parameter larger than 0 ensures that frequently read data remains consistent
even though only consistency level ONE is required. The row becomes consistent eventually.


Performing the local data query

As already mentioned above, a dedicated messaging protocol is used for inter-node communication.
Similar to the CassandraServer, the MessagingService is started during the start sequence of a
Cassandra node. By default the MessagingService is bound to server port 7000.

The replica node receives the read call from the coordinator node through the replica node's
MessagingService. However, the MessagingService does not access the local store directly. To
read and write data locally, the ColumnFamilyStore has to be used. Roughly speaking, the
ColumnFamilyStore represents the underlying local store of a dedicated column family.




Please note that a coordinator node can also act as a replica node. This is the case if the client
calls node 172.39.126.52 to get the Mister bed city row instead of the Pavillon Nation row in the
example above. In this case the StorageProxy of the coordinator node does not call the
MessagingService of the same node. To avoid remote calls to the same node, the StorageProxy
calls the ColumnFamilyStore in the same way the MessagingService does to access local data.

When processing a query, the ColumnFamilyStore first tries to read the requested row data through
the row cache, if the row cache is activated for the column family. The row cache holds the entire row
and is deactivated by default. If the row cache contains the requested row data, no disk I/O is
required; the query is served very fast by in-memory operations only. However, an
activated row cache means that the full row has to be fetched internally even though only a subset of
the columns is requested. For this reason the row cache is often less efficient for large rows and small
subset queries.

If the requested row isn't cached, the Memtables and the SSTables (sorted string tables) have to be
read. Memtables and SSTables are maintained per column family. SSTables are data files containing
row data fragments, and they only allow appending data. A Memtable is an in-memory table which buffers
writes. If the Memtable is full, it is written to disk as a new SSTable file in the background. For this
reason the columns of the requested Pavillon Nation row may be fragmented over several SSTables
and unflushed Memtables. For instance, one SSTable book_a_hotel-hotel-he-1-Data.db could contain
the initially inserted columns ‘name’=’Pavillon Nation’ and ‘category’=’4’ of the Pavillon Nation row.
Another SSTable book_a_hotel-hotel-he-2-Data.db (or a Memtable) could contain the updated
category column ‘category’=’5’.




If an SSTable exists for the requested column family, first the associated (key-scoped) Bloom filter of
the SSTable file is read to avoid unnecessary disk I/O. For each SSTable the ColumnFamilyStore
holds an in-memory structure called SSTableReader which contains metadata as well as the Bloom
filter of the underlying SSTable file. The Bloom filter indicates whether the dedicated SSTable could
contain a row data fragment (false positives are possible, false negatives are not). If it could, the key
cache is queried to get the seek position. If the position is not found there, the on-disk index has to be
scanned; the fetched seek position is then added to the key cache. Based on the seek
position the row data fragment is read from the SSTable file. The data fragments of the SSTables
and Memtables are merged together using the column timestamps, and the requested row data
is returned to the caller.
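The merge of the row fragments can be sketched as a last-write-wins reconciliation per column: for each column name, the fragment with the highest timestamp supplies the value. The class names below are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of merging row fragments from several SSTables and Memtables:
// for each column name, the version with the newest timestamp wins.
public class MergeSketch {

    public static class Column {
        public final String value;
        public final long timestamp;
        public Column(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    public static Map<String, Column> merge(List<Map<String, Column>> fragments) {
        Map<String, Column> merged = new HashMap<>();
        for (Map<String, Column> fragment : fragments) {
            for (Map.Entry<String, Column> e : fragment.entrySet()) {
                Column existing = merged.get(e.getKey());
                if (existing == null || e.getValue().timestamp > existing.timestamp) {
                    merged.put(e.getKey(), e.getValue());
                }
            }
        }
        return merged;
    }
}
```

Applied to the example: the fragment holding ‘category’=’4’ (older timestamp) loses against the fragment holding ‘category’=’5’, so the client sees category 5 while the unchanged name column survives from the older fragment.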




Processing a write request

To insert, update or delete a row, Cassandra's mutate method has to be called. The listing below
shows such a mutate call using the Hector client.

//...


// [1.b] create and perform an update
Mutator<String> mutator = HFactory.createMutator(keyspaceOperator, AsciiSerializer.get());
mutator.addInsertion("26813445", "hotel",
           HFactory.createColumn("category", "5", StringSerializer.get(), StringSerializer.get()));

MutationResult result = mutator.execute();

//...



The write path is very similar to the read path. Like a read request, a write request also
includes the required consistency level. However, the coordinator node tries to send a write request
including the mutated columns to all replica nodes for the row key.

First, the StorageProxy of the coordinator node checks whether enough replica nodes for the row key are
alive according to the requested consistency level. If so, the write request is sent to the
live replica nodes. If not, an error response is returned. Write requests to temporarily failed
replica nodes are scheduled as a hinted handoff. This means that a hint is written locally
instead of calling the failed node. Once the failed replica node is back, the hint is sent to this node
to perform the write operation. By sending the hints, the failed node becomes consistent with the
other nodes. Please note that hints are no longer stored locally if the failed node has been down
longer than 1 hour (config parameter max_hint_window_in_ms).
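The hint window rule can be sketched as a simple time check; the one-hour default corresponds to max_hint_window_in_ms = 3600000 (the class and method names below are made up for this sketch):

```java
// Sketch of the hinted-handoff window check: a hint is only stored locally
// if the target replica has not been down longer than the configured window.
public class HintSketch {

    static final long MAX_HINT_WINDOW_IN_MS = 3_600_000; // 1 hour default

    public static boolean shouldStoreHint(long nodeDownSinceMillis, long nowMillis) {
        return (nowMillis - nodeDownSinceMillis) <= MAX_HINT_WINDOW_IN_MS;
    }
}
```

Once a node has been down beyond the window, no further hints accumulate for it, and the node has to be repaired by other means after recovery.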

The coordinator node returns the response to the client as soon as the replica nodes conforming to
the consistency level have confirmed the update (a hinted write does not count towards the
requested consistency level). The updates of the other replica nodes are still executed in the
background. If an error occurs while updating the replica nodes conforming to the consistency level, an
error response is returned. However, in this case the already updated nodes are not
reverted. Cassandra does not support distributed transactions, and hence it does not support a
distributed rollback.

The write operation supports an additional consistency level ANY, which means that the mutated
columns have to be written to at least one node, regardless of whether this node is a replica node for
the key or not. In contrast to consistency level ONE, the write will also succeed if only a hinted handoff
is written (by the coordinator node). However, in this case the mutated columns will not be readable
until the responsible replica nodes have recovered.




Performing the local update

Similar to the local data query, a local update is triggered by handling a message through the
MessagingService or by the StorageProxy. However, in contrast to the read path, first a commit log
entry is written for durability reasons. By default the commit log entry is written asynchronously
in the background.

The mutated columns are also written into the in-memory Memtable of the column family. After
inserting the changes, the local update is complete.

However, the memory size of a Memtable is limited. If the maximum size is exceeded, the Memtable is
written to disk as a new SSTable. This is done by a background thread which periodically checks the
current size of all unflushed Memtables of all column families. If a Memtable exceeds the maximum
size, the background thread replaces the current Memtable by a new one. The old Memtable is
marked as pending flush and is flushed by another thread. Under certain circumstances several
pending Memtables could exist for a column family. After writing the Memtable to disk, a new
SSTableReader referring to the written SSTable is created and added to the ColumnFamilyStore. Once
written, the SSTable file is immutable. By default the SSTable data is compressed
(SnappyCompression).
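The flush behaviour can be sketched with an in-memory map standing in for the Memtable and a list of frozen maps standing in for the SSTables. The names are hypothetical, and the real trigger is a memory-size threshold rather than the entry count used here:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of Memtable flushing: writes are buffered in memory and, once a
// threshold is exceeded, the Memtable is frozen and appended as a new
// (immutable) SSTable while a fresh Memtable takes over.
public class FlushSketch {

    private final int maxMemtableSize;
    private Map<String, String> memtable = new HashMap<>();
    private final List<Map<String, String>> sstables = new ArrayList<>();

    public FlushSketch(int maxMemtableSize) {
        this.maxMemtableSize = maxMemtableSize;
    }

    public void write(String key, String value) {
        memtable.put(key, value);
        if (memtable.size() >= maxMemtableSize) {
            sstables.add(memtable);     // frozen: never modified again
            memtable = new HashMap<>(); // fresh Memtable takes over
        }
    }

    public int sstableCount() {
        return sstables.size();
    }
}
```

Each flush produces one more immutable table, which is why a row's columns can end up fragmented across several SSTables until compaction merges them.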

Compacting

The SSTable file includes the modified columns of the row with their timestamps as well as
additional row metadata. For instance, the metadata section includes a (column name-scoped)
Bloom filter which is used to reduce disk I/O when fetching columns by name.




To reduce fragmentation and save space, SSTable files are occasionally merged into a new SSTable
file. This compaction is triggered by a background thread if the compaction threshold
is exceeded. The compaction threshold can be set for each column family.
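Compaction can be sketched as merging several SSTables into a single new one, where for overlapping keys only the most recent version survives. The class name is hypothetical, and the input tables are assumed to be ordered oldest first:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of compaction: several SSTables are merged into one new table; for
// each key only the most recent version survives, so the merged table is
// smaller than the sum of its inputs whenever keys overlap.
public class CompactionSketch {

    public static Map<String, String> compact(List<Map<String, String>> sstables) {
        Map<String, String> merged = new HashMap<>();
        // tables are ordered oldest first, so later entries overwrite older ones
        for (Map<String, String> sstable : sstables) {
            merged.putAll(sstable);
        }
        return merged;
    }
}
```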

About the author

Gregor Roth works as a software architect at United Internet group, a leading European Internet
Service Provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and
system architecture, enterprise architecture management, distributed computing, and development
methodologies.





More Related Content

What's hot

Introduction to Apache ZooKeeper
Introduction to Apache ZooKeeperIntroduction to Apache ZooKeeper
Introduction to Apache ZooKeeperSaurav Haloi
 
ETL With Cassandra Streaming Bulk Loading
ETL With Cassandra Streaming Bulk LoadingETL With Cassandra Streaming Bulk Loading
ETL With Cassandra Streaming Bulk Loadingalex_araujo
 
Facebook Messages & HBase
Facebook Messages & HBaseFacebook Messages & HBase
Facebook Messages & HBase强 王
 
Jvm tuning for low latency application & Cassandra
Jvm tuning for low latency application & CassandraJvm tuning for low latency application & Cassandra
Jvm tuning for low latency application & CassandraQuentin Ambard
 
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...ScyllaDB
 
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...DataStax
 
Cassandra vs. ScyllaDB: Evolutionary Differences
Cassandra vs. ScyllaDB: Evolutionary DifferencesCassandra vs. ScyllaDB: Evolutionary Differences
Cassandra vs. ScyllaDB: Evolutionary DifferencesScyllaDB
 
Deep Dive into Cassandra
Deep Dive into CassandraDeep Dive into Cassandra
Deep Dive into CassandraBrent Theisen
 
Indexing in Cassandra
Indexing in CassandraIndexing in Cassandra
Indexing in CassandraEd Anuff
 
How to understand and analyze Apache Hive query execution plan for performanc...
How to understand and analyze Apache Hive query execution plan for performanc...How to understand and analyze Apache Hive query execution plan for performanc...
How to understand and analyze Apache Hive query execution plan for performanc...DataWorks Summit/Hadoop Summit
 
Optimizing S3 Write-heavy Spark workloads
Optimizing S3 Write-heavy Spark workloadsOptimizing S3 Write-heavy Spark workloads
Optimizing S3 Write-heavy Spark workloadsdatamantra
 
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATS
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATSDeploy Secure and Scalable Services Across Kubernetes Clusters with NATS
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATSNATS
 
Dongwon Kim – A Comparative Performance Evaluation of Flink
Dongwon Kim – A Comparative Performance Evaluation of FlinkDongwon Kim – A Comparative Performance Evaluation of Flink
Dongwon Kim – A Comparative Performance Evaluation of FlinkFlink Forward
 
Introduction to Redis
Introduction to RedisIntroduction to Redis
Introduction to RedisDvir Volk
 
Apache Spark At Scale in the Cloud
Apache Spark At Scale in the CloudApache Spark At Scale in the Cloud
Apache Spark At Scale in the CloudDatabricks
 
Cassandra sharding and consistency (lightning talk)
Cassandra sharding and consistency (lightning talk)Cassandra sharding and consistency (lightning talk)
Cassandra sharding and consistency (lightning talk)Federico Razzoli
 
Ceph and RocksDB
Ceph and RocksDBCeph and RocksDB
Ceph and RocksDBSage Weil
 
Introduction to MongoDB
Introduction to MongoDBIntroduction to MongoDB
Introduction to MongoDBMike Dirolf
 

What's hot (20)

Introduction to Apache ZooKeeper
Introduction to Apache ZooKeeperIntroduction to Apache ZooKeeper
Introduction to Apache ZooKeeper
 
ETL With Cassandra Streaming Bulk Loading
ETL With Cassandra Streaming Bulk LoadingETL With Cassandra Streaming Bulk Loading
ETL With Cassandra Streaming Bulk Loading
 
Facebook Messages & HBase
Facebook Messages & HBaseFacebook Messages & HBase
Facebook Messages & HBase
 
Jvm tuning for low latency application & Cassandra
Jvm tuning for low latency application & CassandraJvm tuning for low latency application & Cassandra
Jvm tuning for low latency application & Cassandra
 
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...
How We Reduced Performance Tuning Time by Orders of Magnitude with Database O...
 
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...
The Missing Manual for Leveled Compaction Strategy (Wei Deng & Ryan Svihla, D...
 
Cassandra 101
Cassandra 101Cassandra 101
Cassandra 101
 
Cassandra vs. ScyllaDB: Evolutionary Differences
Cassandra vs. ScyllaDB: Evolutionary DifferencesCassandra vs. ScyllaDB: Evolutionary Differences
Cassandra vs. ScyllaDB: Evolutionary Differences
 
Deep Dive into Cassandra
Deep Dive into CassandraDeep Dive into Cassandra
Deep Dive into Cassandra
 
Indexing in Cassandra
Indexing in CassandraIndexing in Cassandra
Indexing in Cassandra
 
How to understand and analyze Apache Hive query execution plan for performanc...
How to understand and analyze Apache Hive query execution plan for performanc...How to understand and analyze Apache Hive query execution plan for performanc...
How to understand and analyze Apache Hive query execution plan for performanc...
 
Hive: Loading Data
Hive: Loading DataHive: Loading Data
Hive: Loading Data
 
Optimizing S3 Write-heavy Spark workloads
Optimizing S3 Write-heavy Spark workloadsOptimizing S3 Write-heavy Spark workloads
Optimizing S3 Write-heavy Spark workloads
 
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATS
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATSDeploy Secure and Scalable Services Across Kubernetes Clusters with NATS
Deploy Secure and Scalable Services Across Kubernetes Clusters with NATS
 
Dongwon Kim – A Comparative Performance Evaluation of Flink
Dongwon Kim – A Comparative Performance Evaluation of FlinkDongwon Kim – A Comparative Performance Evaluation of Flink
Dongwon Kim – A Comparative Performance Evaluation of Flink
 
Introduction to Redis
Introduction to RedisIntroduction to Redis
Introduction to Redis
 
Apache Spark At Scale in the Cloud
Apache Spark At Scale in the CloudApache Spark At Scale in the Cloud
Apache Spark At Scale in the Cloud
 
Cassandra sharding and consistency (lightning talk)
Cassandra sharding and consistency (lightning talk)Cassandra sharding and consistency (lightning talk)
Cassandra sharding and consistency (lightning talk)
 
Ceph and RocksDB
Ceph and RocksDBCeph and RocksDB
Ceph and RocksDB
 
Introduction to MongoDB
Introduction to MongoDBIntroduction to MongoDB
Introduction to MongoDB
 

Viewers also liked

Cassandra 2.1 boot camp, Read/Write path
Cassandra 2.1 boot camp, Read/Write pathCassandra 2.1 boot camp, Read/Write path
Cassandra 2.1 boot camp, Read/Write pathJoshua McKenzie
 
Migrating Netflix from Datacenter Oracle to Global Cassandra
Migrating Netflix from Datacenter Oracle to Global CassandraMigrating Netflix from Datacenter Oracle to Global Cassandra
Migrating Netflix from Datacenter Oracle to Global CassandraAdrian Cockcroft
 
An Overview of Apache Cassandra
An Overview of Apache CassandraAn Overview of Apache Cassandra
An Overview of Apache CassandraDataStax
 
Cassandra @Formspring
Cassandra @FormspringCassandra @Formspring
Cassandra @Formspringmartincozzi
 
Hadoop and Cassandra at Rackspace
Hadoop and Cassandra at RackspaceHadoop and Cassandra at Rackspace
Hadoop and Cassandra at RackspaceStu Hood
 
What Every Developer Should Know About Database Scalability
What Every Developer Should Know About Database ScalabilityWhat Every Developer Should Know About Database Scalability
What Every Developer Should Know About Database Scalabilityjbellis
 
From 100s to 100s of Millions
From 100s to 100s of MillionsFrom 100s to 100s of Millions
From 100s to 100s of MillionsErik Onnen
 
Understanding Data Partitioning and Replication in Apache Cassandra
Understanding Data Partitioning and Replication in Apache CassandraUnderstanding Data Partitioning and Replication in Apache Cassandra
Understanding Data Partitioning and Replication in Apache CassandraDataStax
 
BI, Reporting and Analytics on Apache Cassandra
BI, Reporting and Analytics on Apache CassandraBI, Reporting and Analytics on Apache Cassandra
BI, Reporting and Analytics on Apache CassandraVictor Coustenoble
 
Reverse Engineering
Reverse EngineeringReverse Engineering
Reverse Engineeringdswanson
 
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...DataStax
 
Advanced data modeling with apache cassandra
Advanced data modeling with apache cassandraAdvanced data modeling with apache cassandra
Advanced data modeling with apache cassandraPatrick McFadin
 

Viewers also liked (20)

Cassandra 2.1 boot camp, Read/Write path
Cassandra 2.1 boot camp, Read/Write pathCassandra 2.1 boot camp, Read/Write path
Cassandra 2.1 boot camp, Read/Write path
 
Migrating Netflix from Datacenter Oracle to Global Cassandra
Migrating Netflix from Datacenter Oracle to Global CassandraMigrating Netflix from Datacenter Oracle to Global Cassandra
Migrating Netflix from Datacenter Oracle to Global Cassandra
 
An Overview of Apache Cassandra
An Overview of Apache CassandraAn Overview of Apache Cassandra
An Overview of Apache Cassandra
 
Cassandra @Formspring
Cassandra @FormspringCassandra @Formspring
Cassandra @Formspring
 
Hadoop and Cassandra at Rackspace
Hadoop and Cassandra at RackspaceHadoop and Cassandra at Rackspace
Hadoop and Cassandra at Rackspace
 
What Every Developer Should Know About Database Scalability
What Every Developer Should Know About Database ScalabilityWhat Every Developer Should Know About Database Scalability
What Every Developer Should Know About Database Scalability
 
From 100s to 100s of Millions
From 100s to 100s of MillionsFrom 100s to 100s of Millions
From 100s to 100s of Millions
 
Camunda and Apache Cassandra
Camunda and Apache CassandraCamunda and Apache Cassandra
Camunda and Apache Cassandra
 
Understanding Data Partitioning and Replication in Apache Cassandra
Understanding Data Partitioning and Replication in Apache CassandraUnderstanding Data Partitioning and Replication in Apache Cassandra
Understanding Data Partitioning and Replication in Apache Cassandra
 
BI, Reporting and Analytics on Apache Cassandra
BI, Reporting and Analytics on Apache CassandraBI, Reporting and Analytics on Apache Cassandra
BI, Reporting and Analytics on Apache Cassandra
 
Management Consulting
Management ConsultingManagement Consulting
Management Consulting
 
Oprah Winfrey
Oprah WinfreyOprah Winfrey
Oprah Winfrey
 
Reverse Engineering
Reverse EngineeringReverse Engineering
Reverse Engineering
 
Chess
ChessChess
Chess
 
Lionel Messi
Lionel MessiLionel Messi
Lionel Messi
 
Lionel messi
Lionel messiLionel messi
Lionel messi
 
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...
Cassandra Internals: The Read Path (Tyler Hobbs, DataStax) | Cassandra Summit...
 
Jeff jonas big data new physics
Jeff jonas big data new physicsJeff jonas big data new physics
Jeff jonas big data new physics
 
Advanced data modeling with apache cassandra
Advanced data modeling with apache cassandraAdvanced data modeling with apache cassandra
Advanced data modeling with apache cassandra
 
Growth Hacking
Growth Hacking Growth Hacking
Growth Hacking
 

Similar to Cassandra by example - the path of read and write requests

                                             AsciiSerializer.get(), StringSerializer.get(), StringSerializer.get());
query.setColumnFamily("hotel");
query.setKey("26813445");
query.setColumnNames("category");

// [3] perform the request
QueryResult<ColumnSlice<String, String>> result = query.execute();
ColumnSlice<String, String> row = result.get();
String category = row.getColumnByName("category").getValue();
//...

// [4] release the client (cluster)
cluster.getConnectionManager().shutdown();

In the first line of the listing a set of server IP addresses is passed when creating a Hector Cluster object. Each server address identifies a single Cassandra node. A collection of independent Cassandra nodes (the Cassandra cluster) represents the Cassandra database. Within this cluster all nodes are peers; there is no master node. The client is free to connect to any Cassandra node to perform any request. In the listing above 3 addresses are configured. This does not mean that the Cassandra cluster consists of 3 nodes; it just means that the client will communicate with these nodes only.

The connected Cassandra node potentially plays two roles. In each case the connected node is the coordinator node, which is responsible for handling the dedicated request. Furthermore, the connected node will also be a replica node if it is responsible for storing a replica of the requested data. For instance, the requested Pavillon Nation hotel record of the example above does not have to be stored on the connected node.

Often the coordinator node has to send sub-requests to other replica nodes to be able to handle the request. As shown in the diagram below, the nodes 172.39.126.14, 172.39.126.93 and 172.39.126.52 would not be able to serve a Pavillon Nation query directly without sub-requesting other nodes.
Please consider that coordinator node and replica node are role descriptions of a Cassandra node in the context of a dedicated read or write operation. Every Cassandra node can act as a coordinator node as well as a replica node.

Hector uses a round-robin strategy to select the node to use. By executing the example query, Hector first connects to one of the configured nodes. The connect request will be handled on the server side by the CassandraServer. By default the CassandraServer is bound to server port 9160 during the start sequence of a Cassandra node. The CassandraServer implements Cassandra's Thrift interface, which defines remote procedure methods such as set_keyspace(…) or get_slice(…). This means Cassandra's Thrift interface is implicitly stateful. The Hector client has to call the remote method set_keyspace(..) first to assign the keyspace book_a_hotel to the current connection session. After assigning the keyspace, get_slice(..) can be called to request the columns of the Pavillon Nation hotel. However, you are not forced to use Thrift to access Cassandra. Several alternative open-source connectors, such as REST-based connectors, exist.

Determining the replica nodes

The CassandraServer is responsible for the client-server communication only. Internally, the CassandraServer calls the local StorageProxy class to process the request. The StorageProxy implements the coordinator logic, which includes determining the replica nodes for the row key of the request as well as requesting these replica nodes. By default a RandomPartitioner is used to determine the replica nodes for the row key. The RandomPartitioner spreads the data records (rows) evenly across the Cassandra nodes, which are arranged in a circular ring. Within this ring each node is assigned to a range of hash values (tokens).

To determine the first replica, the MD5 hash of the row key is calculated and the node is selected whose assigned token range covers the key hash.
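The token lookup described above can be sketched as follows. This is a simplified, hypothetical model (the class and the helper names are mine); the real RandomPartitioner derives the token as the absolute value of the MD5 hash of the row key, and each node owns the token range up to its own token.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeMap;

public class TokenRing {

    // RandomPartitioner-style token: absolute value of the MD5 hash of the row key
    public static BigInteger token(String rowKey) {
        try {
            byte[] md5 = MessageDigest.getInstance("MD5")
                                      .digest(rowKey.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(md5).abs();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // each node owns the range up to (and including) its own token; a key token
    // beyond the highest node token wraps around to the first node of the ring
    public static String firstReplica(TreeMap<BigInteger, String> ring, BigInteger keyToken) {
        var owner = ring.ceilingEntry(keyToken);
        return (owner != null) ? owner.getValue() : ring.firstEntry().getValue();
    }
}
```

With a ring of invented node tokens, firstReplica(ring, token("26813445")) selects the node whose token range covers the key's MD5 token.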
For instance, the token of the Pavillon Nation's row key 26813445 is 91851936251452796391746312281860607309. This token is within the token range of node 172.39.126.86, which means that node 172.39.126.86 is responsible for storing a replica of the Pavillon Nation record. In most cases a replica is stored by more than one node, depending on the keyspace's replication factor. For instance, a replication factor of 2 means that the clockwise next node of the ring will store the replica, too. With a replication factor of 3 the node after that will also store the replica, and so forth.

Processing a read request

To handle a read request, the StorageProxy (which is the coordinator of the request) determines the replica nodes as described above. Additionally, the StorageProxy checks that enough replica nodes are alive to handle the read request. If this is true, the replica nodes are sorted by proximity (closest node first) and the first replica node is called to get the requested row data. In contrast to the Thrift-based client-server communication, the Cassandra nodes interchange data using a message-oriented, TCP-based protocol. This means the StorageProxy gets the requested row data by using Cassandra's messaging protocol.

Whether other replica nodes are called depends on the consistency level, which is specified by the client request. If consistency level ONE is required, no further replica nodes are called. If consistency level QUORUM is required, in total (replication_factor / 2) + 1 replica nodes are called. In contrast to the first full-data read call, all additional calls are digest calls. A digest call queries a single MD5 hash over all column names, values and timestamps instead of requesting the complete row data. The hashes of all calls, including the first one, are then compared.
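The number of replicas the coordinator waits for, and the digest comparison, can be sketched as follows. This is a toy model with invented names; a real digest call hashes serialized columns including their timestamps, not plain strings.

```java
import java.security.MessageDigest;
import java.util.TreeMap;

public class ReadCoordination {

    public enum ConsistencyLevel { ONE, QUORUM, ALL }

    // number of replica responses the coordinator must collect before answering
    public static int blockFor(ConsistencyLevel level, int replicationFactor) {
        switch (level) {
            case ONE:    return 1;
            case QUORUM: return replicationFactor / 2 + 1;
            default:     return replicationFactor; // ALL
        }
    }

    // digest over column names and values; two replicas agree on the digest
    // exactly when they hold the same version of the row
    public static String digest(TreeMap<String, String> columns) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            for (var col : columns.entrySet()) {
                md5.update(col.getKey().getBytes());   // column name
                md5.update(col.getValue().getBytes()); // column value (and timestamp)
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md5.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

With a replication factor of 3, blockFor(QUORUM, 3) yields 2: one full-data read plus one digest read.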
If a hash does not match, the replicas are inconsistent and the out-of-date replicas will be auto-repaired during the
read process. To do this, a full-data read request is sent to the additional nodes, the most recent version of the data is computed, and the diff is sent to the out-of-date replicas.

Occasionally all replica nodes for the row key are called, independent of the requested consistency level. This depends on the column family's read_repair_chance parameter, which specifies the probability with which read repairs are invoked. The default value of 0.1 means that a read repair is performed for 10% of the reads. However, the client response is always answered according to the requested consistency level; the additional work is done in the background. A read_repair_chance parameter larger than 0 ensures that frequently read data remains consistent even though only consistency level ONE is required. The row eventually becomes consistent.

Performing the local data query

As already mentioned above, a dedicated messaging protocol is used for inter-node communication. Similar to the CassandraServer, the MessagingService is also started during the start sequence of a Cassandra node. By default the MessagingService is bound to server port 7000. The replica node receives the read call from the coordinator node through the replica node's MessagingService. However, the MessagingService does not access the local store directly. To read and write data locally, the ColumnFamilyStore has to be used. Roughly speaking, the ColumnFamilyStore represents the underlying local store of a dedicated column family.

Please consider that a coordinator node can also act as a replica node. This is the case if the client calls node 172.39.126.52 to get the Mister bed city row instead of the Pavillon Nation row in the example above. In this case the StorageProxy of the coordinator node will not call the
MessagingService of the same node. To avoid remote calls to the same node, the StorageProxy calls the ColumnFamilyStore in the same way the MessagingService does to access local data.

When processing a query, the ColumnFamilyStore first tries to read the requested row data through the row cache, if the row cache is activated for the column family. The row cache holds the entire row and is deactivated by default. If the row cache contains the requested row data, no disk I/O is required; the query is served very fast by performing in-memory operations only. However, an activated row cache means that the full row has to be fetched internally even though only a subset of columns is requested. For this reason the row cache is often less efficient for large rows and small subset queries.

If the requested row isn't cached, the Memtables and the SSTables (sorted strings tables) have to be read. Memtables and SSTables are maintained per column family. SSTables are data files containing row data fragments, and data can only be appended to them. A Memtable is an in-memory table which buffers writes. If the Memtable is full, it is written to disk as a new SSTable file in the background. For this reason the columns of the requested Pavillon Nation row could be fragmented over several SSTables and unflushed Memtables. For instance, one SSTable book_a_hotel-hotel-he-1-Data.db could contain the initially inserted columns 'name'='Pavillon Nation' and 'category'='4' of the Pavillon Nation row. Another SSTable book_a_hotel-hotel-he-2-Data.db (or a Memtable) could contain the updated category column 'category'='5'.

If an SSTable exists for the requested column family, first the associated (key-scoped) Bloom filter of the SSTable file is read to avoid unnecessary disk I/O. For each SSTable the ColumnFamilyStore holds an in-memory structure called SSTableReader which contains metadata as well as the Bloom filter of the underlying SSTable file.

The Bloom filter indicates whether the dedicated SSTable could contain a row data fragment (false positives are possible, false negatives are not). If this is the case, the key cache is queried to get the seek position. If the position is not found there, the on-disk index has to be scanned, and the fetched seek position is added to the key cache. Based on the seek position the row data fragment is read from the SSTable file. The data fragments of the SSTables and Memtables are merged together using the column timestamps, and the requested row data is returned to the caller.
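The timestamp-based merge of row fragments can be sketched like this (a toy in-memory model with invented names; actual SSTable fragments are serialized on disk). Using the example above, the category column from book_a_hotel-hotel-he-2-Data.db wins because its timestamp is newer:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FragmentMerge {

    public record Column(String name, String value, long timestamp) {}

    // for every column name, keep the fragment with the highest timestamp
    public static Map<String, Column> merge(List<List<Column>> fragments) {
        Map<String, Column> merged = new HashMap<>();
        for (List<Column> fragment : fragments) {
            for (Column column : fragment) {
                merged.merge(column.name(), column,
                             (a, b) -> a.timestamp() >= b.timestamp() ? a : b);
            }
        }
        return merged;
    }
}
```

Merging the two example fragments yields 'name'='Pavillon Nation' and the updated 'category'='5'.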
Processing a write request

To insert, update or delete a row, Cassandra's mutate method has to be called. The listing below shows such a mutate call using the Hector client.

//...
// [1.b] create and perform an update
Mutator<String> mutator = HFactory.createMutator(keyspaceOperator, AsciiSerializer.get());
mutator.addInsertion("26813445", "hotel", HFactory.createColumn("category", "5", StringSerializer.get(), StringSerializer.get()));
MutationResult result = mutator.execute();
//...

The write path is very similar to the read path. Like a read request, a write request also includes the required consistency level. However, the coordinator node tries to send a write request including the mutated columns to all replica nodes for the row key. First, the StorageProxy of the coordinator node checks if enough replica nodes for the row key are alive according to the requested consistency level. If this is true, the write request is sent to the living replica nodes. If not, an error response is returned.

Write requests to temporarily failed replica nodes are scheduled as a hinted handoff. This means that a hint is written locally instead of calling the failed node. Once the failed replica node is back, the hint is sent to this node to perform the write operation. By sending the hints, the failed nodes become consistent with the other nodes. Please consider that hints are no longer stored locally if the failed node has been dead for longer than 1 hour (config parameter max_hint_window_in_ms).

The coordinator node returns the response to the client as soon as the replica nodes conforming to the consistency level have confirmed the update (a hinted write does not count towards the requested consistency level). The updates of the other replica nodes are still executed in the background. If an error occurs while updating the replica nodes conforming to the consistency level, an error response is returned. However, in this case the already updated nodes will not be reverted. Cassandra does not support distributed transactions, and hence it does not support a distributed rollback.

The write operation supports an additional consistency level ANY, which means that the mutated columns have to be written to at least one node, regardless of whether this node is a replica node for the key or not. In contrast to consistency level ONE, the write will also succeed if a hinted handoff is written (by the coordinator node). However, in this case the mutated columns will not be readable until the responsible replica nodes have recovered.
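The coordinator's write logic, including hinted handoff, can be sketched as follows. This is a simplified, hypothetical model (names are mine); it ignores the hint window and consistency level ANY:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class WriteCoordinator {

    // the hints this (coordinator) node stored for temporarily failed replicas
    public final List<String> localHints = new ArrayList<>();

    // sends the mutation to all live replicas, stores a hint per dead replica,
    // and succeeds when enough live replicas acknowledged the write
    public boolean write(Map<String, Boolean> replicaAlive, int blockFor) {
        int acks = 0;
        for (Map.Entry<String, Boolean> replica : replicaAlive.entrySet()) {
            if (replica.getValue()) {
                acks++;                           // live replica applies the mutation
            } else {
                localHints.add(replica.getKey()); // hinted handoff for the dead node
            }
        }
        return acks >= blockFor;                  // hinted writes do not count
    }
}
```

With a replication factor of 3 and one dead replica, a QUORUM write (blockFor 2) succeeds and a hint is stored, while a write requiring all 3 replicas fails.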
Performing the local update

Similar to the local data query, a local update is triggered by handling a message through the MessagingService or by the StorageProxy. However, in contrast to the read path, first a commit log entry is written for durability reasons. By default the commit log entry is written asynchronously in the background. The mutated columns are also written into the in-memory Memtable of the column family. After inserting the changes, the local update is completed.

However, the memory size of a Memtable is limited. If the max size is exceeded, the Memtable is written to disk as a new SSTable. This is done by a background thread which periodically checks the current size of all unflushed Memtables of all column families. If a Memtable exceeds the max size, the background thread replaces the current Memtable by a new one. The old Memtable is marked as pending flush and is flushed by another thread. Under certain circumstances several pending Memtables could exist for a column family. After writing the Memtable to disk, a new SSTableReader referring to the written SSTable is created and added to the ColumnFamilyStore. Once written, the SSTable file is immutable. By default the SSTable data is compressed (SnappyCompression).

Compacting

The SSTable file includes the modified columns of the row, including their timestamps, as well as additional row metadata. For instance, the metadata section includes a (column name-scoped) Bloom filter which is used to reduce disk I/O when fetching columns by name. To reduce fragmentation and save space, SSTable files are occasionally merged into a new SSTable file. This compaction is triggered by a background thread if the compaction threshold is exceeded. The compaction threshold can be set for each column family.
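The Memtable flush described above can be sketched with a toy store (invented names; the real implementation bounds the memory size rather than a column count, flushes asynchronously, and writes serialized, compressed SSTable files):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class MemtableStore {

    private final int maxColumns;   // stand-in for the Memtable's max memory size
    TreeMap<String, String> memtable = new TreeMap<>();
    final List<TreeMap<String, String>> sstables = new ArrayList<>();

    public MemtableStore(int maxColumns) {
        this.maxColumns = maxColumns;
    }

    public void put(String column, String value) {
        memtable.put(column, value);          // writes are buffered in memory
        if (memtable.size() >= maxColumns) {  // max size exceeded:
            sstables.add(memtable);           // flush as a new, immutable "SSTable"
            memtable = new TreeMap<>();       // and replace the current Memtable
        }
    }
}
```

Subsequent updates to a flushed column land in the fresh Memtable, which is why a read may have to merge fragments as shown earlier.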
About the author

Gregor Roth works as a software architect at the United Internet group, a leading European Internet service provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and system architecture, enterprise architecture management, distributed computing, and development methodologies.