
Cluster virtualization and Multi-tenant CockroachDB

Introduction and strategy


Overview of run-time components
Summary table
Logical components: the account administrator's view
Deployment components: the deployment/SRE view
All-the-things: host clusters
Architectural terms
SQL Proxy
Segue: Instances, servers, pods and nodes
SQL
Shared storage cluster
Abstract concept: KV-only server, pod, node
Storage server, pod, node
cockroach demo: a hybrid server
Serverless Host Clusters: all-the-things
Logical concepts
Virtual CockroachDB clusters
Tenants: the owners of virtual clusters
What's a virtual cluster made of: tenant-specific data
VC servers and pods
System interface: the administrative environment
System instances and servers
Shared state in a multi-tenant deployment

Introduction and strategy


Cluster virtualization is a new way to structure the CockroachDB technology that achieves isolation between logical clusters. It is most
useful when a common distributed storage layer is shared across competing customers, i.e. multi-tenancy and Software-as-a-Service.

(As an analogy, CockroachDB’s cluster virtualization virtualizes CockroachDB SQL much as containers or VMs virtualize hosted
servers.)

Today (Summer 2023), cluster virtualization is only available inside the CockroachCloud Serverless product. However, we eventually wish
to evolve CockroachDB to serve all application traffic using cluster virtualization, including in CockroachCloud Dedicated and for
licensed self-hosted CockroachDB customers.

In the words of our CTO, “Virtual clusters is the way CockroachDB should have been designed from the start.”

This also means that we are now focusing our development on maximizing the application developer experience on top of cluster
virtualization.

Care must be taken to distinguish the internal product architecture, discussed here, from the ability to actually run two or more virtual
clusters side-by-side:

Cockroach Labs would retain the exclusive right to define more than one virtual cluster side-by-side on a shared storage cluster,
via the Serverless product offering.
In CockroachCloud Dedicated and for self-hosted deployments, applications will be able to utilize a single pre-defined virtual cluster,
without the capability to define more tenants.

Overview of run-time components

Summary table

Logical components: the account administrator's view

| What's virtualized | New name for the virtualized logical concept | Previous terminology | New name for the physical infrastructure |
| --- | --- | --- | --- |
| The CockroachDB cluster service, as a whole | NEW: “Virtual cluster”, or alternatively “logical cluster” | “Cluster” | N/A: the underlying infrastructure is not visible to end-users any more. |
| Run-time state for a (virtual) cluster | “VC servers/pods” | “Servers/pods” | NEW: “Shared storage servers/pods” |
| On-disk state for a (virtual) cluster | NEW: “VC-specific data” or “virtual keyspace” | “CockroachDB data” | NEW: “Shared storage data” |
| Ownership (not data) | “Tenant” or “Workload” | “User” | |

New: the SQL interface used to administer other virtual clusters = “system interface” (previously “system tenant”).

Beware of the difference between “Shared storage cluster” (the deployed system) and “System interface” (the logical cluster an
administrator connects to in order to create additional virtual clusters).

Deployment components: the deployment/SRE view

| Description | In-code abstraction | In-memory instance | Unix process | Running container |
| --- | --- | --- | --- | --- |
| Routes SQL clients to the right server | “SQL proxy” | “SQL proxy instance” | “SQL proxy server” | “SQL proxy pod” |
| Runs SQL queries | “SQL” or “SQL gateway” | “SQL instance” | “SQL server”, or “SQL-only server” to highlight that the server contains no KV instance | “SQL pod” (implies “SQL-only server”) |
| Runs KV queries | “KV components” (plural) | “KV instance” | “KV server”, but the term is inclusive of mixed servers; we don't yet support KV-only servers | N/A, we don't currently run KV-only servers |
| Stores data for multiple virtual clusters, one unit | | | NEW: “Shared storage server” | NEW: “Shared storage pod” |
| Runs both SQL and KV queries | | | NEW: “Mixed SQL/KV server” | NEW: “Mixed SQL/KV pod” |
| Stores data for all virtual clusters, fleet of all servers | | | NEW: “Shared storage cluster” | NEW: “Shared storage cluster” |

We also use the word “node” to designate either a unix process or Docker container, when the distinction does not matter.

All-the-things: host clusters

In CockroachCloud deployments, we need a word to designate a complete fleet consisting of:

a single storage cluster
all the SQL servers/pods connected to it
one or more SQL proxies directing traffic to the SQL pods
the corresponding DNS service, Prometheus instance, Kubernetes configuration, etc.

This complete fleet of “All the things” is named a Serverless host cluster.
Architectural terms

SQL Proxy

Role:

Accepts incoming connections from client apps.
Determines which tenant the connection is for.
Routes each connection to a SQL instance (a minimal routing sketch follows below).
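
To make the routing step concrete, here is a minimal Go sketch of what a tenant-routing decision could look like. It is not the actual sqlproxy code: the assumption that the virtual cluster name arrives in the pgwire "options" startup parameter, as well as the names routeConnection and directory, are illustrative only.

```go
// Hypothetical sketch of a proxy's routing decision; names are illustrative.
package sketch

import (
	"fmt"
	"io"
	"net"
	"regexp"
	"strings"
)

// clusterOptRE extracts "--cluster=<name>" from the pgwire "options"
// startup parameter (one possible way a client can name its virtual cluster).
var clusterOptRE = regexp.MustCompile(`--cluster=([A-Za-z0-9-]+)`)

// directory maps a virtual cluster name to the address of one of its SQL
// pods. In a real deployment this would be a dynamic service, not a map.
var directory = map[string]string{
	"acme-app": "10.0.1.17:26257",
}

// routeConnection picks a backend SQL pod for an incoming client connection,
// based on its startup parameters, then proxies bytes in both directions.
func routeConnection(client net.Conn, startupParams map[string]string) error {
	m := clusterOptRE.FindStringSubmatch(startupParams["options"])
	if m == nil {
		return fmt.Errorf("connection did not specify a virtual cluster")
	}
	tenant := strings.ToLower(m[1])

	addr, ok := directory[tenant]
	if !ok {
		return fmt.Errorf("unknown virtual cluster %q", tenant)
	}

	backend, err := net.Dial("tcp", addr) // connect to the tenant's SQL pod
	if err != nil {
		return err
	}
	defer backend.Close()

	// Forward traffic in both directions (error handling elided).
	go io.Copy(backend, client)
	_, err = io.Copy(client, backend)
	return err
}
```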

Segue: Instances, servers, pods and nodes

“Instance”: a run-time realization of a data structure in the source code. Think: class vs object.
TCP/UDP ports are attached to instances.
“Server”: a unix process started from an executable file. Contains diverse instances.
CPU/memory/IOPS accounting commonly happens here.
“Pod”: a container, a kind of reduced virtual machine that can be managed by Kubernetes.

Usually contains 1 process, can contain more.

IP addresses and storage volumes are attached to containers.

For example: a shared storage pod typically contains one storage server process, which in turn contains a KV instance and a SQL
instance; a SQL pod contains one SQL-only server process, which contains just a SQL instance.
We use the word “Node” when the distinction between “server” and “pod” does not matter.

SQL

NB: The name is just “SQL”.


Derived as “SQL instance”, “SQL server”, “SQL pod”, “SQL node” depending on the run-time properties of interest.
Role:

Accepts incoming connections from the SQL proxy.
Responsible for SQL query execution for client apps, scoped to one virtual cluster.
Performs KV data requests to a shared storage cluster.
Also offers HTTP APIs scoped to one virtual cluster.


Also known as “SQL-only server, pod, node” when the process only contains a SQL instance.
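
From a client application's perspective, a SQL pod reached through the proxy behaves like an ordinary PostgreSQL-compatible endpoint. Below is a minimal Go sketch using database/sql with the pgx driver; the host name, credentials, and the way the virtual cluster is named in the connection string (an options parameter) are assumptions for illustration and depend on the actual deployment.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL-compatible driver, registered as "pgx"
)

func main() {
	// Hypothetical connection string: the host is a SQL proxy endpoint and the
	// "options" parameter names the virtual cluster to route to.
	dsn := "postgresql://app_user:secret@proxy.example.com:26257/defaultdb" +
		"?sslmode=verify-full&options=--cluster%3Dacme-app"

	db, err := sql.Open("pgx", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ordinary SQL; the query runs on a SQL instance scoped to the
	// "acme-app" virtual cluster and only sees that tenant's keyspace.
	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to:", version)
}
```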

Shared storage cluster

Role (collective):

Accepts (KV) data requests from SQL instances.
Shared by many virtual clusters.
Responsible for persisting (storing) data.

Abstract concept: KV-only server, pod, node

“KV instance”: Accepts and serves KV requests for SQL instances. This does exist.

“KV-only server”: This does not exist yet: we have not yet built the capability to run a process containing only a KV instance.

Storage server, pod, node

“Storage server”: a process that contains both a KV and SQL instance.

Alternatively: “mixed KV/SQL server”.

Multiple storage servers collectively make up a “shared storage cluster”.

The SQL component here is “System SQL”:

invisible to virtual clusters.
used to administer virtual clusters and KV.
cockroach demo: a hybrid server
'cockroach demo' is a tool built out of testing code, which is able to run a single server process containing a system cluster and, optionally
when --multitenant=true, one additional non-system virtual cluster.

This is organized at run-time as follows, given --nodes=N:

N x KV instances
N x SQL instances for the system cluster.
Optionally, with --multitenant=true, N x additional SQL instances able to serve the one additional virtual cluster.

This gives a total of 2N or 3N instances able to run services inside the same 'demo' process.
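For example, cockroach demo --nodes=3 --multitenant=true runs 3 KV instances, 3 SQL instances for the system cluster, and 3 SQL instances for the extra virtual cluster: 9 service-capable instances in a single process. The same invocation with --multitenant=false runs 6.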

From the perspective of the users of 'cockroach demo', such a server process has two interfaces:

The SQL interface(s) to the non-system virtual cluster.
This is what is presented at the interactive prompt with --multitenant=true.
The SQL interface(s) to the system interface.
This is what is presented at the interactive prompt with --multitenant=false.
This could (hypothetically) be used to create additional virtual clusters inside the demo process.

There's currently some UX misdesign, in that the existence of two separate virtual clusters is not clear to the user of cockroach demo. We
know about this shortcoming and it should get fixed at some point.

Serverless Host Clusters: all-the-things


The CC deployment tooling needs to name the fleet of all components running around a storage cluster to support Serverless tenants.

We’ve called this the “Serverless Host Cluster”, often simplified to “host cluster”, and this includes:

one storage cluster
all the SQL servers connected to it
the accompanying Prometheus instance
the accompanying DNS glue service
one or more K8s clusters supporting the configuration (we need more than one when the host cluster is spread across multiple regions).
any other run-time components around the same storage cluster.
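
Purely as an illustration, the inventory above can be pictured as a single record; none of these type or field names exist in our code. The sketch only emphasizes that a host cluster contains exactly one storage cluster but possibly several Kubernetes clusters and many tenant-serving pods.

```go
package sketch

// HostCluster is an illustrative model of "all the things" deployed around
// one shared storage cluster; the field and type names are hypothetical.
type HostCluster struct {
	StorageCluster StorageClusterRef // exactly one shared storage cluster
	SQLPods        []PodRef          // all SQL-only pods connected to it
	SQLProxies     []PodRef          // one or more routing proxies
	Prometheus     PodRef            // accompanying metrics collector
	DNS            ServiceRef        // DNS glue for tenant routing
	K8sClusters    []ClusterRef      // more than one when spread across regions
}

// Opaque references; in reality these would carry addresses, credentials, etc.
type (
	StorageClusterRef string
	PodRef            string
	ServiceRef        string
	ClusterRef        string
)
```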

Logical concepts
The essence of cluster virtualization is to introduce logical boundaries inside a shared architecture, for the purpose of separate billing,
running client apps side-by-side, avoiding interference, etc. So we also need words to designate the things that have received logical
boundaries.

These concepts exist on a different semantic level than the run-time “deployment” aspects covered above. Hence the need for a separate
vocabulary.
Virtual CockroachDB clusters
To the extent that CockroachDB is perceived to serve a “database product” to end-users, the new architecture creates a virtualization of this
product.

This acknowledges a pattern already settled in our industry:

Datacenter hosting went from physical machines to virtual machines (VMs) running on a shared physical infrastructure.
Memory architectures have this same split between physical addressing (corresponding to hardware) and virtual addressing (multiple
logical address spaces using shared hardware, coordinated by MMUs).
Operating systems enable sharing physical processing units (cores) to present virtual processing units (threads) to software.

Likewise, in CockroachDB’s cluster virtualization technology,

The “per-tenant” product that end-users see is a virtual CockroachDB cluster.

The architecture shares a physical cluster (a set of interconnected shared storage servers) to produce the illusion of many virtual clusters
for end-users.

Tenants: the owners of virtual clusters


There's a lot of different data that is coordinated from a CockroachDB cluster: its KV persistent state, its backups (stored elsewhere, e.g. in
storage buckets), its authentication service for logins, etc. All this is “owned” by an organization / customer / end-user, identified as a single
entity in the control plane.

We're going to call the owner of a virtual cluster and its adjacent data a tenant.

This “owner” abstraction exists beyond the CC serverless infrastructure: when our self-hosted customers ask us to deploy multi-tenancy in
their infrastructure, it's because they want to split ownership of a physical cluster between multiple sub-organizations.

What's a virtual cluster made of: tenant-specific data


A single tenant does not own just a virtual CockroachDB cluster that can run SQL queries.

It really owns an adjacent constellation of data that is not shared with other tenants, including:

The tenant-specific keyspace, which defines the virtual CockroachDB cluster in KV. Also called the virtual keyspace (see the sketch below).
The tenant-specific log files.
The tenant-specific heap, profile and goroutine dumps.
The tenant-specific crash dumps.
The tenant-specific exported traces.
The tenant-specific debug zips.
The tenant-specific backups and exports.
The tenant-specific metrics.

The state of a virtual cluster is the collection of all the related tenant-specific data.
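
To give a flavor of what the virtual keyspace means, here is a deliberately simplified Go sketch of per-tenant key prefixing. The real CockroachDB key encoding is different and more involved; the marker byte and helper names below are invented for illustration. The point is only that every key written on behalf of a tenant lands under a prefix derived from that tenant's ID, so tenants' keyspaces never overlap.

```go
package sketch

import (
	"encoding/binary"
	"fmt"
)

// tenantPrefix returns an illustrative key prefix for a tenant's virtual
// keyspace. The actual CockroachDB key encoding differs; this only shows the
// idea of carving one shared keyspace into disjoint per-tenant regions.
func tenantPrefix(tenantID uint64) []byte {
	p := make([]byte, 1+8)
	p[0] = 0xfe // hypothetical marker byte for "tenant-prefixed keys"
	binary.BigEndian.PutUint64(p[1:], tenantID)
	return p
}

// tenantKey rewrites a logical SQL-level key into the shared storage keyspace
// by prepending the owning tenant's prefix.
func tenantKey(tenantID uint64, logicalKey []byte) []byte {
	return append(tenantPrefix(tenantID), logicalKey...)
}

// ExampleKeys shows that the same logical key lands in two disjoint regions
// of the shared keyspace when written by two different tenants.
func ExampleKeys() {
	fmt.Printf("%x\n", tenantKey(2, []byte("/Table/52/1/42")))
	fmt.Printf("%x\n", tenantKey(3, []byte("/Table/52/1/42")))
}
```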

VC servers and pods


Mostly for security reasons, and additionally for billing reasons, we find it important to ensure that a single SQL server process does not
serve instances on behalf of more than one tenant.

In other words, our architecture (currently) implies that a SQL-only server corresponds to exactly one tenant, the one that owns the virtual
cluster served by that SQL server.

We are thus tempted to equate the phrases “tenant server” = “SQL-only server” = “virtual cluster server/service”.
However, consider that next to SQL nodes (servers and pods), a deployment would also run other pods that are specific to just one tenant;
for example, a Prometheus pod and a log collector.

We'll name the fleet of run-time nodes (servers and pods) that serve just one tenant the tenant nodes (servers and pods). This
includes SQL-only servers as well as the other tenant-specific services needed to serve a virtual cluster.

System interface: the administrative environment


Currently, we have chosen to administer the creation/deletion of virtual clusters using SQL statements run in the context of a virtual cluster
with special privileges.

This was not the only possible choice; we could have chosen to design an API separate from SQL, that exists “outside” of the virtual cluster
APIs. But here we are.

So we need a word to designate that virtual cluster. To follow established terminology, we will call this the system interface.
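
As a sketch of what administering virtual clusters through the system interface can look like, the Go snippet below connects to the system interface and issues administration statements. The connection string is hypothetical, and the exact SQL spelling varies across versions (newer versions use CREATE VIRTUAL CLUSTER, older ones CREATE TENANT), so treat the statements as illustrative.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL-compatible driver, registered as "pgx"
)

func main() {
	// Hypothetical DSN pointing at the system interface (not a regular
	// virtual cluster); requires admin/operator privileges.
	db, err := sql.Open("pgx",
		"postgresql://root@storage-node.example.com:26257/defaultdb?sslmode=verify-full")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Illustrative administration statements; the spelling differs by version
	// (e.g. CREATE TENANT in older releases).
	if _, err := db.Exec(`CREATE VIRTUAL CLUSTER acme_app`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`ALTER VIRTUAL CLUSTER acme_app START SERVICE SHARED`); err != nil {
		log.Fatal(err)
	}
}
```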

Today, the term “system interface” largely overlaps with “shared storage cluster” because, implementation-wise, we have chosen to give
SQL semantics to the keyspace that does not use a VC prefix. However, this choice may be revisited in the future, such that we mandate a
VC prefix for all logical clusters including the system cluster. Should such plans materialize, the system interface would be supported by a
virtual cluster too. It is thus useful to be disciplined about distinguishing the term “system interface”, which designates a SQL
interface, and “shared storage cluster”, which designates the set of interconnected storage servers.

This system interface and all its “own” data also has an owner, which in the context of CC is Cockroach Labs itself. The owner of the system
interface is the system tenant.

System instances and servers


Currently, we have chosen to co-host the SQL instances that can serve queries for the system interface together with the KV instances for
the storage servers.

That's what our current “mixed SQL/KV servers” are about. They contain:

the KV instances shared by all virtual clusters;
SQL instances specific to the system interface, able to serve access to the storage cluster.

However, this is not the only way we can do this. In fact, we could also make a plan to enable running SQL instances for the system
interface in a separate SQL-only server.

Generally, we'll call any server that contains at least one SQL instance for the system interface, a system server. Our current shared
storage servers are also system servers; our future SQL-only servers with system cluster capability will be system servers too.

Our unit tests also run many SQL instances side-by-side, including multiple SQL instances that operate on system clusters; inside the
context of tests, these are system instances.

Shared state in a multi-tenant deployment


In addition to tenant-specific state that defines virtual clusters, a multi-tenant deployment needs shared state too:

At run-time:
The SQL proxy node(s) (server(s) and pod(s)), which routes SQL client apps to their own virtual cluster.
The shared storage/DB nodes (servers and pods).
The networked shared storage/DB cluster, as a fleet of nodes.
The run-time state of the system interface.
On disk:
The aggregate state of all virtual clusters stored on a single storage cluster.
