Choose the "Start Free Now" plan, then "Get Started" to work in the free tier.
You will have plenty of free initial credit (renewed each month!), roughly corresponding
to 80 GB of storage and 20M read/write operations.
If this is not enough for you, congratulations! You are most likely running a mid- to
large-sized business! In that case you should switch to a paid plan.
(You can follow this guide to set up your free-tier database with the $25 monthly credit.)
To create the database, please note that the db_name and ks_name in the above image are
just placeholders:
For the database name - use workshops. While Astra DB allows you to fill in these
fields with values of your own choosing, please follow our recommendations to
ensure the application runs properly.
For the keyspace name - use chatsandra. Please stick to this name: it will make
the following steps much easier (otherwise you would have to customize things here and
there). In short, the values to use are summarized below:
Note: if you already have a workshops database, for instance from a previous workshop
with us, you can simply create the keyspace with the Add Keyspace button in your Astra
DB dashboard: the new keyspace will be available in a few seconds.
Parameter       Value
Database name   workshops
Keyspace name   chatsandra
For provider and region: Choose any provider (either GCP, AWS or Azure).
Region is where your database will reside physically (choose one close to you or
your users).
Create the database. Review all the fields to make sure they are as shown, and
click the Create Database button.
You will see your new database as Pending in the Dashboard; the status will change
to Active when the database is ready. This will only take 2-3 minutes (you will also
receive an email when it is ready).
⚠️ Important: make sure the database status is Active before proceeding to the next step.
2. Create tables
Ok, now that you have created a database, the next step is to create tables to work with.
General Methodology Note: We'll work with a (rather simplified) "chat application"
called ChatSandra: users, identified by a unique ID, write posts in several "rooms".
Rooms are also uniquely identified by their name, such as #gardening. The design of our
application is such that we need to be able to (a) retrieve all posts by a given user,
sorted by descending date, and (b) retrieve all posts for a given room, sorted by
descending date. As dictated by the best practices of data modeling with Cassandra,
these requirements are satisfied by creating two very similar tables (denormalization), as
you'll see momentarily: they will contain the same posts, but stored (a.k.a. partitioned) in
two different ways; and it will be our (that is, the application's) responsibility to keep
them in sync. Of course, we also need a users table - and that is the one we will start with.
✅ Step 2a. Navigate to the CQL Console and log in to the database
In the Summary screen for your database, select CQL Console from the top menu in the
main window. This will take you to the CQL Console and automatically log you in.
Note: if you are working with your own Cassandra cluster (other than Astra DB), you will
reach the CQL Console differently. Moreover, in that case you have to manually create
the keyspace once in the CQL Console, with a command similar to:
CREATE KEYSPACE chatsandra WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'replication_factor': 3};
See the Cassandra documentation for more details on this.
✅ Step 2b. Describe the available keyspaces and USE chatsandra
Ok, now we're ready to rock. Creating tables is quite easy, but before we create one we
need to tell the database which keyspace we are working with.
First, let's DESCRIBE all of the keyspaces that are in the database. This will give us a list
of the available keyspaces.
📘 Command to execute
DESC KEYSPACES;
CQL commands usually end with a semicolon ;. If you hit Enter and nothing happens (you
don't even get your prompt back), most likely you have not closed the command with ;.
If in trouble, you can always get back to the prompt with Ctrl-C and
start typing the command anew.
📗 Expected output
Depending on your setup you might see a different set of keyspaces than in the image.
The one we care about for now is chatsandra. From here, execute the USE command
with the chatsandra keyspace to tell the database our context is within chatsandra.
Take advantage of the TAB-completion in the CQL Console. Try typing use cha and then
pressing TAB, for example.
📘 Command to execute
USE chatsandra;
📗 Expected output
Notice how the prompt displays <username>@cqlsh:chatsandra> informing us we
are using the chatsandra keyspace. Now we are ready to create our table.
✅ Step 2c. Create the users table
At this point we can execute a command to create the users table. Just copy/paste the
following command into your CQL console at the prompt. Try to identify the primary
key, the partition key and the clustering columns (if any) for this table in the command:
📘 Command to execute
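// The original CREATE TABLE statement is not reproduced in this text; the
// following is a minimal sketch consistent with the rest of the guide (a table
// keyed by a user_id UUID, with a couple of illustrative profile columns):
CREATE TABLE IF NOT EXISTS users (
    user_id    UUID,
    name       TEXT,
    email      TEXT,
    PRIMARY KEY (( user_id ))
);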
Then DESCRIBE the tables in the keyspace to verify that the new users table is listed:
📘 Command to execute
DESC TABLES;
📗 Expected output
Aaaand BOOM, you created a table in your database. That's it. Now let's go ahead and
create a couple more tables before we do something interesting with the data.
Let us create two more tables, which will contain the posts. As remarked earlier, we will
store the posts in two tables which differ in how they are partitioned: look at the
commands below; the differences mostly lie in the PRIMARY KEY specification:
📘 Command to execute
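// Sketches of the two post tables, consistent with what is discussed below
// (the exact column types are assumptions where not stated in the text; the
// DESC clustering order matches the requirement of posts sorted by
// descending date):
CREATE TABLE IF NOT EXISTS posts_by_user (
    user_id    UUID,
    post_id    TIMEUUID,
    room_id    TEXT,
    text       TEXT,
    PRIMARY KEY (( user_id ), post_id)
) WITH CLUSTERING ORDER BY (post_id DESC);

CREATE TABLE IF NOT EXISTS posts_by_room (
    room_id    TEXT,
    post_id    TIMEUUID,
    user_id    UUID,
    text       TEXT,
    PRIMARY KEY (( room_id ), post_id)
) WITH CLUSTERING ORDER BY (post_id DESC);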
Then DESCRIBE your keyspace tables: you should see all three listed.
📘 Command to execute
DESC TABLES;
📗 Expected output
You may wonder, how did we arrive at this particular structure for the post tables? The
answer lies in the methodology for data modeling with Cassandra, which, at its very core,
states: first look at the application's needs and determine the required workflows, then
map them to a number of queries, and finally design a table around each query. We
create table posts_by_user to support a query such as "get all posts by a user X"; then we
also need table posts_by_room for a query of type "get all posts in room Y". The two
tables have the same columns, but the different choice of partition key is what will make
the two queries possible on the respective tables.
Our tables are in place so let's put some data in them. This is done with
the INSERT statement. We'll start by inserting three rows into the users table.
Note that we have three users in this example: "111...", "555..." and "999...", who are
having some pleasant conversations. In a real application, you would probably generate
user IDs at the application level or with the UUID() primitive offered by CQL. See
the documentation for more details on time/uuid-related CQL functions.
Copy and paste the following into your CQL Console. (Once you have carefully examined
the first INSERT statement, you can simply copy/paste the others, which are very
similar.)
📘 Commands to execute
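// Illustrative INSERTs, assuming the users table sketched above; the full
// UUIDs for users "555..." and "999...", as well as the names and e-mail
// addresses, are placeholder values:
INSERT INTO users (user_id, name, email)
  VALUES (11111111-1111-1111-1111-111111111111, 'Alice', 'alice@example.com');
INSERT INTO users (user_id, name, email)
  VALUES (55555555-5555-5555-5555-555555555555, 'Bob', 'bob@example.com');
INSERT INTO users (user_id, name, email)
  VALUES (99999999-9999-9999-9999-999999999999, 'Carol', 'carol@example.com');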
Let's run some more INSERT statements, this time for posts. We'll insert data into
the posts_by_user table. (Again, once you have carefully examined the first INSERT
statement, you can simply copy/paste the others, which are very similar.)
Note: in the following, we are using TIMEUUIDs crafted by hand, to make things easier to
visualize. In a real application, you would generate them at application level or, in some
cases, using the NOW() primitive offered by CQL. In the values below, you can just pay
attention to the first block of hex digits.
📘 Commands to execute
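// Illustrative INSERTs into posts_by_user, assuming the table sketched above.
// The first post_id and user_id match those used later in this guide; the
// remaining IDs, the rooms and the wording of the posts are placeholder values:
INSERT INTO posts_by_user (user_id, post_id, room_id, text)
  VALUES (11111111-1111-1111-1111-111111111111, aaaaaaaa-5cff-11ec-be16-1fedb0dfd057,
          '#hiking', 'Mt. Gumbo was SO easy!!!');
INSERT INTO posts_by_user (user_id, post_id, room_id, text)
  VALUES (55555555-5555-5555-5555-555555555555, bbbbbbbb-5cff-11ec-be16-1fedb0dfd057,
          '#hiking', 'Easy? Not everyone took the chairlift, though...');
INSERT INTO posts_by_user (user_id, post_id, room_id, text)
  VALUES (99999999-9999-9999-9999-999999999999, cccccccc-5cff-11ec-be16-1fedb0dfd057,
          '#gardening', 'My tomatoes are finally ripe!');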
Ok, we have a lovely bunch of posts in our chat application. But wait: data is
denormalized and the very same posts have to be inserted in table posts_by_room as
well! Let's do it with the following command (please note that the INSERT statements are
exactly the same as above, with only the table name changed):
📘 Commands to execute
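// The same INSERTs as above, with only the table name changed to posts_by_room
// (again, all values beyond those re-used later in the guide are placeholders):
INSERT INTO posts_by_room (user_id, post_id, room_id, text)
  VALUES (11111111-1111-1111-1111-111111111111, aaaaaaaa-5cff-11ec-be16-1fedb0dfd057,
          '#hiking', 'Mt. Gumbo was SO easy!!!');
INSERT INTO posts_by_room (user_id, post_id, room_id, text)
  VALUES (55555555-5555-5555-5555-555555555555, bbbbbbbb-5cff-11ec-be16-1fedb0dfd057,
          '#hiking', 'Easy? Not everyone took the chairlift, though...');
INSERT INTO posts_by_room (user_id, post_id, room_id, text)
  VALUES (99999999-9999-9999-9999-999999999999, cccccccc-5cff-11ec-be16-1fedb0dfd057,
          '#gardening', 'My tomatoes are finally ripe!');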
Now that we've inserted a set of rows (two sets, to be precise), let's take a look at how
to read the data back out. This is done with a SELECT statement. In its simplest form we
could just execute a statement like the following *cough* *cough*:
// Read all rows from "posts_by_user" table (careful with this ...)
SELECT * FROM posts_by_user;
You may have noticed my coughing fit a moment ago. Even though you can execute
a SELECT statement with no partition key defined, this is NOT something you should do
when using Apache Cassandra. We are doing it here for illustration purposes only and
because our whole dataset is just a handful of values. Given the data we inserted earlier,
a more proper statement would be something like (while we are at it, we also explicitly
specify which columns we want back):
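📘 Command to execute
// Read (some columns of) posts by a certain user (= a certain partition); the
// column names follow the posts_by_user sketch given earlier:
SELECT room_id, text FROM posts_by_user WHERE user_id = 11111111-1111-1111-1111-111111111111;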
The key is to ensure we are always selecting by some partition key at a minimum, so as
to avoid the dreaded full-cluster scans, whose performance is generally
unacceptable in production.
Ok, with that out of the way, we can READ the data from the other table as well -
remember we INSERTed into both tables?
📘 Commands to execute
// Read (some columns of) posts from a certain room (= a certain partition)
SELECT user_id, text FROM posts_by_room WHERE room_id = '#hiking';
(Again, in this second SELECT we explicitly specify some columns: something we will
want to do in most cases.)
📗 Expected output
Notice how the two tables contain the same set of posts, but group them differently:
table posts_by_user is partitioned by user, while table posts_by_room is partitioned by room
- and the corresponding outputs reflect this fact. This is very much related to the fact that
these two tables, in the data modeling process, were designed to answer two different
questions, "what are the posts by user X?" and "what are the posts in room Y?"
respectively. Moreover, within any partition of both tables, just as we required when
creating the tables, posts are kept (and displayed) sorted by decreasing post_id (which, due
to the nature of TIMEUUIDs, implies a time ordering as well).
Once you execute the above SELECT statements you should see something like the
expected output above. We have now READ the data we INSERTED earlier. Awesome
job!
BTW, just a little extra for those who are interested. Since we used a TIMEUUID type for
our post_id field we can use the dateOf() function to determine the timestamp from the
value. Check it out.
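📘 Command to execute
// A quick sketch (dateOf() extracts the timestamp encoded in a TIMEUUID; on
// recent Cassandra versions toTimestamp() can be used as well):
SELECT dateOf(post_id), text FROM posts_by_user WHERE user_id = 11111111-1111-1111-1111-111111111111;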
At this point we've CREATED and READ some data, but what happens when you want
to change some existing data to some new value? That's where UPDATE comes into
play. The use case is as follows: in our chat app, users are allowed to edit their previous
posts.
Let's take one of the records we created earlier and modify it: the post that
user "111..." INSERTed in the posts_by_user table.
Let's also take a look at how the posts_by_user table was created: indeed, in order
to UPDATE an existing record we need to know the primary key we defined
when we CREATEd the table.
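📘 Command to execute
// One way to review the table definition (and in particular its PRIMARY KEY):
DESC TABLE posts_by_user;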
Let's say that user "111..." has noticed the remark by "555..." and, perhaps a bit ashamed
by their own boasting, wants to correct their assessment on the hike difficulty!
So we can run the following UPDATE statement and help user "111..." fix their post on
table posts_by_user (we also subsequently read back the data as a check):
📘 Commands to execute
UPDATE posts_by_user
SET text = '... and Mt. Gumbo was NOT SO easy!!!'
WHERE user_id = 11111111-1111-1111-1111-111111111111
AND post_id = aaaaaaaa-5cff-11ec-be16-1fedb0dfd057;
📗 Expected output
But wait: data, again, is denormalized! This means that we have to make sure such an
edit is performed on table posts_by_room as well. Since the primary key of that table is
given as PRIMARY KEY ((room_id), post_id), these are the fields to provide, along
with text itself, to the UPDATE statement.
And we could run an UPDATE. But, lo and behold, in Cassandra UPDATEs and INSERTs
are (almost) the same, as a consequence of its architecture and the way storage and
write logic are structured. We can then update the row with an INSERT statement like
the following (note that we provide: primary key + any field that we want to modify; and
leave out the other, unchanged fields):
📘 Commands to execute
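// A sketch of the "upsert" on posts_by_room, followed by a read-back as a
// check (the '#hiking' room_id is an assumption, matching the reads above):
INSERT INTO posts_by_room (room_id, post_id, text)
  VALUES ('#hiking', aaaaaaaa-5cff-11ec-be16-1fedb0dfd057, '... and Mt. Gumbo was NOT SO easy!!!');

SELECT user_id, text FROM posts_by_room WHERE room_id = '#hiking';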
That's it, we successfully edited a post (on both tables). All that's left now is
to DELETE some data.
The final operation from our CRUD acronym is DELETE. This is the operation we use
when we want to remove data from the database. In Apache Cassandra you
can DELETE from the cell level all the way up to the partition (meaning you could remove a
single column in a single row, or a whole partition) using the same DELETE command.
Generally speaking, it's best to perform as few delete operations as possible, each acting
on as much data as possible. Think of it this way: if you want to delete ALL data in a table,
don't delete each individual cell, just TRUNCATE the table. If you need to delete all the
rows in a partition, don't delete each row: DELETE the whole partition, and so on.
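To make the different granularities concrete, here is a small sketch using our posts_by_user
table (for illustration only; no need to run these as part of the exercise):
// Delete a single column ("cell") in one row:
DELETE text FROM posts_by_user
  WHERE user_id = 11111111-1111-1111-1111-111111111111
  AND post_id = aaaaaaaa-5cff-11ec-be16-1fedb0dfd057;
// Delete a whole partition (all posts by that user):
DELETE FROM posts_by_user WHERE user_id = 11111111-1111-1111-1111-111111111111;
// Remove all rows from the table at once:
TRUNCATE posts_by_user;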
User "555..." notices the post by "111..." being edited and wants to remove their snarky
remark. Let's help them!
When deleting a row on a given table, we have to specify the values of the primary key
for that table. And don't forget that, in our data model, a post appears as two separate
rows in the two tables, so we have to perform two different DELETE operations!
📘 Commands to execute
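// A sketch of the clean-up, assuming the user ID and post_id used in the
// illustrative INSERTs above (use whatever values you actually inserted):
SELECT user_id, text FROM posts_by_user WHERE user_id = 55555555-5555-5555-5555-555555555555;
SELECT user_id, text FROM posts_by_room WHERE room_id = '#hiking';

DELETE FROM posts_by_user
  WHERE user_id = 55555555-5555-5555-5555-555555555555
  AND post_id = bbbbbbbb-5cff-11ec-be16-1fedb0dfd057;
DELETE FROM posts_by_room
  WHERE room_id = '#hiking'
  AND post_id = bbbbbbbb-5cff-11ec-be16-1fedb0dfd057;

SELECT user_id, text FROM posts_by_user WHERE user_id = 55555555-5555-5555-5555-555555555555;
SELECT user_id, text FROM posts_by_room WHERE room_id = '#hiking';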
(Notice in the above, for your convenience, we read the tables, then delete the rows,
then read them again).
📗 Expected output
Notice the rows are now removed from both tables: it is as simple as that.
Homework note
To submit the homework, please take a screenshot of the CQL Console showing the
rows in tables posts_by_user and posts_by_room before and after executing the DELETE
statements.
4. Wrapping up
We've just scratched the surface of what you can do using Astra DB, built on Apache
Cassandra. Go take a look at DataStax for Developers to see what else is possible.
There's plenty to dig into!
Done?
Congratulations: you made it to the end of today's workshop.