
1. Google Kubernetes Engine (GKE).

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling
your containerized applications using Google infrastructure. The GKE environment consists of multiple
machines (specifically, Compute Engine instances) grouped together to form a cluster.

2. Cluster.

A cluster is the foundation of Google Kubernetes Engine (GKE): the Kubernetes objects that represent
your containerized applications all run on top of a cluster. In GKE, a cluster consists of at least one
control plane and one or more worker machines called nodes.

A cluster is a dynamic system that places and manages containers, grouped together in Pods and running
on nodes, along with all of the interconnections and communication channels between them.

3. Node Pool.

A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a
NodeConfig specification. Each node in the pool carries a Kubernetes node label,
cloud.google.com/gke-nodepool, whose value is the node pool's name.
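This label can be used to pin workloads to a particular pool. A sketch of a Pod spec that selects on it; the pool name high-mem-pool and the image are hypothetical:

```yaml
# Illustrative Pod spec fragment: schedule this Pod only onto nodes
# belonging to a (hypothetical) node pool named "high-mem-pool".
apiVersion: v1
kind: Pod
metadata:
  name: pool-pinned-pod
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: high-mem-pool
  containers:
  - name: app
    image: nginx:1.25
```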

4. Node.

A node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending
on the cluster. Each node is managed by the control plane. A node can run multiple Pods, and the
Kubernetes control plane automatically handles scheduling Pods across the nodes in the cluster.

5. Pod.

Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance
of a running process in your cluster. Pods contain one or more containers, such as Docker containers.
When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's
resources.
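A minimal Pod manifest illustrating the definition above; the names, labels, and image are illustrative:

```yaml
# Smallest useful Pod: a single container with one exposed port.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```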

Pod autoscaling can be enabled with the kubectl autoscale command, which creates a Horizontal Pod
Autoscaler for the workload:

kubectl autoscale deployment web --max 4 --min 1 --cpu-percent 1

This scales the web Deployment between 1 and 4 replicas, targeting 1% average CPU utilization (in
practice a higher target, such as 80, is more typical).
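The same configuration can also be expressed declaratively. A sketch of the equivalent HorizontalPodAutoscaler manifest using the autoscaling/v2 API, assuming a Deployment named web:

```yaml
# Declarative equivalent of the kubectl autoscale command above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 1   # matches --cpu-percent 1 above
```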

6. Kubernetes Service.

A Kubernetes Service is a logical abstraction for a deployed group of Pods in a cluster that all
perform the same function. Because Pods are ephemeral, a Service lets a group of Pods providing a
specific function (web serving, image processing, etc.) be assigned a stable name and IP address
(the ClusterIP).
The idea of a Service is to group a set of Pod endpoints into a single resource. You can configure various
ways to access the grouping. By default, you get a stable cluster IP address that clients inside the cluster
can use to contact Pods in the Service.

A Service enables network access to a set of Pods in Kubernetes. Services select Pods based on their
labels. When a network request is made to the Service, it picks one of the Pods in the cluster matching
its selector and forwards the request to it.
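A sketch of a Service manifest implementing this label-based selection; the name, labels, and ports are illustrative:

```yaml
# Service exposing all Pods labeled app: web behind a stable cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # the default: reachable only inside the cluster
  selector:
    app: web             # traffic goes to Pods carrying this label
  ports:
  - port: 80             # port clients connect to
    targetPort: 8080     # port the container listens on
```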

7. Blue-green deployments.

Blue/Green deployments are a form of progressive delivery where a new version of the application is
deployed while the old version still exists. The two versions coexist for a brief period of time while user
traffic is routed to the new version, before the old version is discarded (if all goes well).
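One common way to implement blue/green in Kubernetes is to run the two versions as separate Deployments distinguished by a version label, and flip a Service's selector between them. A sketch, with hypothetical labels:

```yaml
# Service initially routing all traffic to the "blue" (old) version's Pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue   # change to "green" to cut all traffic over to the new version
  ports:
  - port: 80
```

Because the switch is a single selector change, rollback is equally fast: set the selector back to blue.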

8. Rolling Update (Ramped Rollout).

Rolling updates incrementally replace your resource's Pods with new ones, which are scheduled on
nodes with available resources. They are designed to update your workloads with zero downtime. In
Kubernetes, Deployment updates are versioned, and any update can be rolled back to a previous
(stable) revision.

This process can be time-consuming, and it offers no control over how traffic is split between the old
and new Pods.
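The pace of a rolling update is controlled by the Deployment's strategy fields. A sketch, with illustrative names and image:

```yaml
# Deployment using an explicit RollingUpdate strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count
      maxUnavailable: 1  # at most one Pod may be down during the update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # changing this image triggers a rolling update
```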

9. Canary Deployment.

A canary deployment is an upgraded version of an existing deployment, with all the required
application code and dependencies. When you add the canary deployment to a Kubernetes cluster, it is
managed by a service through selectors and labels. The service routes traffic to the pods that have the
specified label.

A canary deployment is a deployment strategy that releases an application or service incrementally to
a subset of users. Because of this control, a canary release is the least risky of the deployment
strategies.
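A simple label-based sketch: the canary Deployment shares the app: web label with the stable Deployment, so a Service selecting on app: web matches both, and traffic splits roughly in proportion to replica counts. Names, labels, and images are hypothetical:

```yaml
# Canary Deployment: 1 replica running alongside a 9-replica stable
# Deployment, both matched by a Service with selector app: web,
# giving the canary roughly 10% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web        # matched by the Service alongside the stable Pods
        track: canary   # distinguishes canary Pods from stable ones
    spec:
      containers:
      - name: web
        image: nginx:1.26   # the new version under test
```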
