Apache Cassandra is an open-source, non-relational (NoSQL) database. It is massively scalable and is designed to handle large amounts of data across multiple servers (here, Amazon EC2 instances) while providing high availability. In this blog, we shall replicate data across nodes running in multiple Availability Zones (AZs) to ensure reliability and fault tolerance, and we shall verify that the data remains intact even when an entire AZ goes down.
The initial setup consists of a Cassandra cluster of six nodes (EC2 instances): two in AZ us-east-1a, two in us-east-1b, and two in us-east-1c.
Cassandra Cluster with six nodes.
Next, we have to make changes in the Cassandra configuration file. cassandra.yaml is the main configuration file for Cassandra: it controls how nodes are configured within a cluster, including inter-node communication, data partitioning, and replica placement. The key value we need to define in this context is the snitch. A snitch tells Cassandra which Region and Availability Zone each node in the cluster belongs to; it provides information about the network topology so that requests are routed efficiently. Cassandra's replication strategies then place replicas based on the information provided by the snitch. There are different types of snitches available, but in this case we shall use Ec2Snitch, since all of the nodes in the cluster are within a single region.
We shall set the snitch value as shown below:
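For example, the relevant line in cassandra.yaml (a minimal excerpt; everything else stays at its defaults):

# cassandra.yaml: derive Region and AZ from EC2 instance metadata
endpoint_snitch: Ec2Snitch

With Ec2Snitch, the region name (here, us-east) becomes the datacenter name and the Availability Zone suffix (1a, 1b, 1c) becomes the rack.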
Also, since we are using multiple nodes, we need a way to group them into one cluster. We do so by defining the seeds key in the configuration file (cassandra.yaml). Cassandra nodes use seeds to find each other and learn the topology of the ring; the list is used during startup to discover the cluster.
For instance, on Node 1, set the seeds value as sketched below. The same seeds list is then configured on the other nodes (Node 2 through Node 6) so that every node discovers the same ring.
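A sketch of the corresponding seed_provider section in cassandra.yaml; the private IP addresses below are placeholders (picking one seed node per AZ is a common practice):

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Placeholder private IPs, one seed node per AZ
      - seeds: "10.0.1.10,10.0.2.10,10.0.3.10"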
Cassandra nodes use this list of hosts to find each other and learn the topology of the ring. The nodetool utility is a command-line interface for managing a cluster. We shall check the status of the cluster using it, as shown below.
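A sketch of the command and its output (the output is illustrative and abbreviated; exact columns vary by Cassandra version, and the addresses are the placeholder IPs from the seeds example above):

$ nodetool status
Datacenter: us-east
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load  Owns  Host ID  Rack
UN  10.0.1.10  ...   ?     ...      1a
UN  10.0.1.11  ...   ?     ...      1a
UN  10.0.2.10  ...   ?     ...      1b
UN  10.0.2.11  ...   ?     ...      1b
UN  10.0.3.10  ...   ?     ...      1c
UN  10.0.3.11  ...   ?     ...      1c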
The Owns field indicates the percentage of data owned by each node. As we can see, it is nil at this point, as no keyspaces/databases have been created yet. So, let us go ahead and create a sample keyspace with a data replication strategy and a replication factor. The replication strategy determines on which nodes replicas are placed, and the total number of replicas across the cluster is known as the replication factor. We shall use NetworkTopologyStrategy, since our cluster is deployed across multiple Availability Zones: it places replicas on distinct racks/AZs, because nodes in the same rack/AZ often fail at the same time due to power, cooling, or network issues. Let us set the replication factor to 3 for our “first” keyspace:
CREATE KEYSPACE "first" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
The above CQL command creates a database/keyspace 'first' with NetworkTopologyStrategy as the class and 3 replicas in us-east (in this case, one replica in AZ/rack 1a, one in 1b, and one in 1c). Cassandra provides a command prompt called the Cassandra Query Language Shell (cqlsh), which acts as the interface for users to communicate with it. Using cqlsh, you can execute queries written in the Cassandra Query Language (CQL).
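To confirm that the replica placement took effect, we can describe the keyspace from cqlsh (output abbreviated):

cqlsh> DESCRIBE KEYSPACE "first";

CREATE KEYSPACE "first" WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': '3'} AND durable_writes = true;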
Next, we shall create a user table and insert five records for our tests.
CREATE TABLE user (user_id text, login text, region text, PRIMARY KEY (user_id));
Now, let us insert some records into this table:
insert into user (user_id, login, region) values ('1', 'test.1', 'IN');
insert into user (user_id, login, region) values ('2', 'test.2', 'IN');
insert into user (user_id, login, region) values ('3', 'test.3', 'IN');
insert into user (user_id, login, region) values ('4', 'test.4', 'IN');
insert into user (user_id, login, region) values ('5', 'test.5', 'IN');
We can read the records back from cqlsh:

cqlsh> select * from user;
Now that our keyspace/database contains data, let us check the ownership and effectiveness again:
As we can see, the Owns field is no longer nil after defining the keyspace; it indicates the percentage of data owned by each node.
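Passing the keyspace name to nodetool computes effective ownership for that keyspace's replication settings. The output below is illustrative: with a replication factor of 3 spread evenly across six nodes, effective ownership works out to roughly 50% per node (3 x 100% / 6):

$ nodetool status first
Datacenter: us-east
===================
--  Address    Owns (effective)  Rack
UN  10.0.1.10  50.0%             1a
UN  10.0.1.11  50.0%             1a
UN  10.0.2.10  50.0%             1b
UN  10.0.2.11  50.0%             1b
UN  10.0.3.10  50.0%             1c
UN  10.0.3.11  50.0%             1c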
Let us perform some tests to make sure the replicated data remains intact across multiple Availability Zones.
Test 1: Shut down one node in us-east-1a and query the user table from a surviving node; all five records were still returned.
Test 2: Shut down both nodes in us-east-1a, taking the entire AZ offline, and query again; the records were still intact, since each of the other two AZs holds a full replica.
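A sketch of how such a test can be run (the service name is an assumption and depends on how Cassandra was installed):

# On each node in us-east-1a, stop the Cassandra service
sudo service cassandra stop

# From any surviving node, confirm the rows are still readable
cqlsh> use first;
cqlsh> select * from user;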
Similar tests were done by shutting down the nodes in the us-east-1b and us-east-1c AZs to check that the records stayed intact even when an entire Availability Zone went down. From the above tests, it is clear that a six-node Cassandra cluster spread across three Availability Zones, with a minimum replication factor of 3 (one replica in each of the three AZs), makes Cassandra fault tolerant against a whole Availability Zone going down. This strategy will also help in case of disaster recovery.
Stay tuned for more blogs!!