
PODS:

Hello and welcome to this lecture on Kubernetes Pods.

Before we head into understanding pods, we assume that the following have been set up.

At this point, we assume that the application is already developed and built into Docker images, and that it is available on a Docker repository like Docker Hub, so Kubernetes can pull it down.

We also assume that the kubernetes cluster has already been set up and is working.

This could be a single node set up or a multi node setup.


It doesn't matter; all the services need to be in a running state.
As we discussed before, with Kubernetes our ultimate aim is to deploy our application in the form of containers on a set of machines that are configured as worker nodes in a cluster.
However, Kubernetes does not deploy containers directly on the worker nodes.
The containers are encapsulated into a Kubernetes object known as a pod.
A pod is a single instance of an application.
A pod is the smallest object that you can create in kubernetes.

Here we see the simplest of cases, where you have a single-node Kubernetes cluster with a single instance of your application running in a single Docker container encapsulated in a pod.

What if the number of users accessing your application increases and you need to scale your application?
You need to add additional instances of your web application to share the load.
Now, where would you spin up additional instances?
Do we bring up a new container instance within the same pod?
No, we create a new pod altogether with a new instance of the same application.
As you can see, we now have two instances of our web application running in two separate pods on the same Kubernetes node.

What if the user base further increases and your current node does not have sufficient capacity?
Well, then you can always deploy additional pods on a new node in the cluster.
You would add a new node to the cluster to expand the cluster's physical capacity.

So what I'm trying to illustrate in this slide is that pods usually have a one-to-one relationship with the containers running your application. To scale up, you create new pods, and to scale down, you delete existing pods.
You do not add additional containers to an existing pod to scale your application.

Also, if you're wondering how we implement all of this and how we achieve load balancing between the containers, etc., we will get into all of that in a later lecture.
For now, we are only trying to understand the basic concepts.

Multi container PODS:

We just said that pods usually have a one-to-one relationship with containers, but are we restricted to having a single container in a single pod?
No, a single pod can have multiple containers, except that they are usually not multiple containers of the same kind.
As we discussed in the previous slide, if our intention was to scale our application, then we would need to create additional pods.
But sometimes you might have a scenario where a helper container performs some kind of supporting task for your web application, such as processing user-entered data or processing a file uploaded by the user, and you want these helper containers to live alongside your application container.
In that case, you can have both of these containers as part of the same pod, so that when a new application container is created, the helper is also created, and when the application container dies, the helper also dies.
Since they are part of the same pod, the two containers can also communicate with each other directly by referring to each other as localhost, since they share the same network space.
They can easily share the same storage space as well.
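To make this concrete, here is a minimal sketch of what a multi-container pod definition could look like once we get to writing YAML definition files later in this course. The names and the busybox helper image are illustrative assumptions, not part of this lecture's example:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod              # illustrative pod name
spec:
  containers:
    - name: myapp-container    # the main application container
      image: nginx
    - name: helper-container   # illustrative helper that supports the app
      image: busybox
      command: ["sleep", "3600"]   # keep the helper alive for demonstration

Both containers share the pod's network namespace, so they can reach each other on localhost, and they can mount the same volumes to share storage.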

PODS explanation 2:

If you still have doubts on this topic, I would understand, because I did too the first time I learned these concepts. Let's take another shot at understanding pods from a different angle.

Let's for a moment keep Kubernetes out of our discussion and talk about simple Docker containers. Let's assume we are developing a process or a script to deploy our application on a Docker host.

We would first simply deploy our application using a simple docker run python-app command, and the application runs fine and our users are able to access it.
When the load increases, we deploy more instances of our application by running the docker run command many more times.

This works fine and we are all happy. Now, sometime in the future, our application is further developed, undergoes architectural changes, and grows and gets complex.

We now have a new helper container that helps our web application by processing or fetching data from elsewhere.

These helper containers maintain a one-to-one relationship with our application container and thus need to communicate with the application containers directly and access data from those containers.

For this, we would need to maintain a map of which app and helper containers are connected to each other, we would need to establish network connectivity between these containers ourselves using links and custom networks, we would need to create shareable volumes and share them among the containers, and we would need to maintain a map of that as well.

And most importantly, we would need to monitor the state of the application container, and when it dies, manually kill the helper container as well, as it is no longer required. When a new container is deployed, we would need to deploy the new helper container as well.

With pods, Kubernetes does all of this for us automatically. We just need to define what containers a pod consists of, and the containers in a pod will by default have access to the same storage and the same network namespace, and they share the same fate: they are created together and destroyed together.

Even if our application didn't happen to be so complex and we could live with a single container, Kubernetes still requires you to create pods.
But this is good in the long run, as your application is now equipped for architectural changes and scaling in the future.

However, also note that multi-container pods are a rare use case, and we are going to stick to single containers per pod in this course.

How to deploy pods:

Let us now look at how to deploy pods.


Earlier, we learned about the kubectl run command. What this command really does is deploy a Docker container by creating a pod.
It first creates a pod automatically and deploys an instance of the nginx Docker image.
But where does it get the application image from? For that, you need to specify the image name using the --image parameter.
The application image, in this case the nginx image, is downloaded from Docker Hub which, as we discussed, is a public repository where the latest images of various applications are stored.
You could configure Kubernetes to pull the image from the public repository or from a private repository within the organization.

Now that we have a pod created, how do we see the list of pods available?

The kubectl get pods command helps us see the list of pods in our cluster. In this case, we see the pod is in a ContainerCreating state and soon changes to a Running state when it is actually running.
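For reference, the output of kubectl get pods looks roughly like the following; the timings shown here are illustrative:

kubectl get pods
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          5s

kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          40s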

Also, remember that we haven't really talked about the concepts of how a user can access the nginx web server, so in the current state, we haven't made the web server accessible to external users.
You can access it internally from the node.
But for now, we will just see how to deploy a pod, and later, once we learn about networking and services, we will get to know how to make the service accessible to end users.
Well, that's it for this lecture.
Head over to the demo, and I will see you in the next one.

Deploying a pod in Minikube cluster:

In this demo, we're going to deploy a pod in our minikube cluster.

As we discussed, a pod is the most basic and the smallest unit in Kubernetes.

So we will use the Kubectl command line utility to interact with the kubernetes cluster.

Now, if you followed our earlier demo of deploying a cluster using minikube, then you already have the kubectl command-line utility configured to work with the cluster.
We will run the command kubectl run nginx, where nginx is the name of the pod, followed by --image=nginx.
Now here is where we specify the Docker image to be used.
While the pod name could be anything, the image name has to be the name of an image available at Docker Hub or any other container registry.
You can additionally specify a tag for the image name, or a different address to an image hosted on another registry, if the image is hosted in a place other than Docker Hub.

CMD to create POD:


kubectl run nginx --image=nginx

Here, nginx after run is the pod name.


Here, nginx after --image is the image name. You can additionally specify a tag or a full registry path.
Okay, so once we run this command, we see a pod by the name nginx has been created.
And you can check the status using the kubectl get pods command.
Now here you can see the pod name, which is nginx. The status is Running, and we see that there is a READY column which shows the number of containers in a ready state.
We also see if the container has restarted since it was created, and how long the pod has been running.
We can get more information related to the pod by running the kubectl describe pod nginx command.
And you'll notice that this provides a lot more information as compared to the get command.

CMD to get more details related to the pod:


kubectl describe pod nginx
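A trimmed, illustrative sketch of the describe output is shown below; the actual IPs, timestamps and event ordering will differ on your cluster:

kubectl describe pod nginx
Name:         nginx
Namespace:    default
Node:         minikube/<node-ip>
Start Time:   <timestamp>
Labels:       run=nginx
Status:       Running
IP:           <pod-ip>
Containers:
  nginx:
    Image:    nginx
    State:    Running
Events:
  Type    Reason     Message
  ----    ------     -------
  Normal  Scheduled  Successfully assigned default/nginx to minikube
  Normal  Pulling    Pulling image "nginx"
  Normal  Pulled     Successfully pulled image "nginx"
  Normal  Created    Created container nginx
  Normal  Started    Started container nginx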

So, for example, the name of the pod is nginx.

It shows any labels that have been assigned to this pod.
This label was created by default when we ran the run command.
It shows when it was started.
It shows the node that it is assigned to, along with the IP address of the node.
So in this case, we just have a single-node cluster set up using minikube, and the node name is minikube, and that is the IP address of this worker node.
Secondly, it also shows the IP address of the pod itself.
So in this case, the pod is assigned an IP of 172.16.0.3. We will cover more about the IP addresses of the pod later in the networking section.
Now, moving on, we can see that it displays information related to the container.
So we know that there is a single container which uses the image nginx.
If there were multiple containers, they would be listed here.
I'll explain creating a pod with multiple containers in the upcoming lectures.
But here we can also see that the nginx image was pulled from Docker Hub.
And if you scroll all the way to the bottom, you'll see an additional section called events.
Here you see the list of events that occurred since the pod was created.
It went through multiple stages before it started.
We can see that the pod was assigned to the minikube node.
If there were multiple nodes, you would see which node the pod was assigned to here.
Then it entered the pulling phase, where the nginx image was pulled down from Docker Hub successfully.
And then the container called nginx was created and started.
Now there is one other command that we can use to check the status of the pod, and that is the same kubectl get pods command as before, but with the -o wide option. This provides additional information such as the node where the pod is running and the IP address of the pod as well.
So this is the internal IP address of the pod itself.

One other command that we can use to check the status of the pod:
kubectl get pods -o wide
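An illustrative output, where the IP and age are placeholders:

NAME    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          2m    172.16.0.3   minikube   <none>           <none>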

So each pod gets an internal IP of its own within the Kubernetes cluster.
But more on that later.
That was a quick demonstration of how to run a pod in a minikube environment. In the upcoming videos, we will see how to create a pod using a YAML definition file.
YAML in kubernetes:

In this lecture, we'll talk about creating a pod using a YAML-based configuration file.
In the previous lecture we learned about YAML files in general.
Now we will learn how to develop YAML files specifically for Kubernetes. Kubernetes uses YAML files as inputs for the creation of objects such as pods, replica sets, deployments, services, etc.
All of these follow a similar structure. A Kubernetes definition file always contains four top-level fields:
the API version, kind, metadata and spec.
These are the top-level or root-level properties.
These are also required fields, so you must have them in your configuration file.
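As a rough skeleton, every definition file we write in this course will have this shape; the comments are only reminders of what goes where:

apiVersion:    # version of the Kubernetes API used to create the object, e.g. v1
kind:          # type of object being created, e.g. Pod
metadata:      # data about the object: its name, labels, etc.
spec:          # object-specific details, e.g. the containers inside a pod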

Let's just look at each one of them.


The first one is the API version.

API version:

This is the version of the Kubernetes API you're using to create the objects.
Depending on what we are trying to create, we must use the right API version. For now, since we are working on pods, we will set the API version to v1.
A few other possible values for this field are apps/v1, extensions/v1beta1, etc.
We will see what these are for later in this course.

Kind:

Next is the kind. The kind refers to the type of object we are trying to create, which in this case happens to be a pod.
So we will set it as Pod.

Some other possible values here could be ReplicaSet, Deployment or Service, which is what you see in the kind field in the table on the right. The next is metadata.

Metadata:

The metadata is data about the object, like its name, labels, etc. As you can see, unlike the first two, where you specified a string value, this is in the form of a dictionary.
So everything under metadata is indented to the right a little bit, and so name and labels are children of metadata.
The number of spaces before the two properties name and labels doesn't matter, but they should be the same, as they are siblings.
In this case, labels has more spaces on the left than name, and so it has become a child of the name property instead of a sibling, which is incorrect.
Also, the two properties must have more spaces than their parent, metadata, so that they are indented to the right a little bit.
In this case, all three of them have the same number of spaces before them, and so they are all siblings, which is not correct either.
Under metadata, the name is a string value, so you can name your pod my-app-pod, and labels is a dictionary.

Labels:

So labels is a dictionary within the metadata dictionary, and it can have any key-value pairs you wish.
For now, I have added a label app with the value myapp.
Similarly, you could add other labels as you see fit, which will help you identify these objects at a later point in time.
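For example, the metadata section could look like this; the second label is an illustrative addition:

metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end    # illustrative extra label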

Say, for example, there are hundreds of pods running a front-end application and hundreds of pods running a back-end application or a database.
It will be difficult for you to group these pods once they are deployed.
If you label them now as front-end, back-end or database, you will be able to filter the pods based on this label at a later point in time.
It's important to note that under metadata you can only specify name, labels or anything else that Kubernetes expects to be under metadata.

You cannot add any other property as you wish under it.

However, under labels you can have any kind of key-value pairs as you see fit.
So it's important to understand what each of these parameters expects.
So far we have only mentioned the type and name of the object we need to create, which happens to be a pod with the name my-app-pod, but we haven't really specified the container or image we need in the pod.

Spec:

The last section in the configuration file is the specification section, which is written as spec. Depending on the object we are going to create, this is where we would provide additional information to Kubernetes pertaining to that object.
This is going to be different for different objects, so it's important to understand or refer to the documentation to get the right format for each. Since we are only creating a pod with a single container in it, it is easy.

Spec is a dictionary.

Containers:

So add a property under it called containers. Containers is a list or an array.


The reason this property is a list is because pods can have multiple containers within them, as we learned in the earlier lecture.
In this case, though, we will only add a single item in the list, since we plan to have only a single container in the pod. The dash right before the name indicates that this is the first item in the list, and the item in the list is a dictionary.

So add a name and an image property. The value for image is nginx, which is the name of the Docker image in the Docker repository.
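Putting the pieces together, the pod-definition.yml built up in this lecture would look roughly like this; the container name nginx-container is an assumed name, since only the image was specified above:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container   # assumed container name
      image: nginx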
Once the file is created, run the command kubectl create -f followed by the file name, which is pod-definition.yml, and Kubernetes creates the pod.

CMD to create pod by kubernetes from pod definition config file:


kubectl create -f pod-definition.yml
Summary:

So to summarize remember the four top level properties


1)API version,
2)kind
3)metadata and
4)spec.

Then start by adding values to those, depending on the object you're going to create.
Once we create the pod, how do you see it?
Use the kubectl get pods command to see a list of pods available.
If you just want to see detailed information about the pod, run the kubectl describe pod command.
This will tell you information about the pod:
1) when it was created,
2) what labels are assigned to it,
3) what containers are part of it and
4) the events associated with that pod.
That's it for this lecture.
We will now head over to a demo and I will see you in the next lecture .

Demo – PODS with YAML:

In this demo, we're going to create a pod again, but this time, instead of making use of the kubectl run command, we're going to create it using a YAML definition file.
So our goal is to create a YAML file with the pod's specifications in it.
Now, there are many ways to do it.
You could just create one in any text editor.
So if you're on Windows, you could just use Notepad, or if you're on Linux, as I am, use a native editor like vi or Vim. An editor with support for the YAML language would be very helpful in getting the syntax right.
So instead of plain Notepad, a tool like Notepad++ on Windows or Vim on Linux would be better.
Now, I'll talk more about tips and tricks and other tools and IDEs that can help with this in the upcoming lectures.
For now, let's stick with the very basic form of creating a definition file using the vi editor on my Linux system.
So here I am on my Linux terminal.
I'm going to make use of the Vim text-based editor to create this pod definition file.
The name of the file I'm going to use is pod.yaml, and as seen in the lecture, we will start off with the four root-level elements, or root-level properties, that we saw, which are apiVersion, kind, metadata and spec.
So we know that the value of apiVersion for a pod is v1, and the kind is Pod with a capital P.
So it is case sensitive, and that's important.
And metadata is a dictionary, and it can have values where we define the name of the pod.
So I'm going to use the name nginx, and we can have additional labels that we can specify under it.
So labels again is also a dictionary, and it can have as many labels as you want under it.
So we can specify a label, which is a key-value pair, such as app set to nginx.
And we can also add more labels, like tier, and set it to front-end.
Anything that can help us group this particular pod.
Next, we have to define the spec.
So spec is also a dictionary.
It has an object called containers.
But before we move on to that, we have to make sure that we get the indentation right.
For example, app and tier are children of the labels property, so they have to be on the same vertical line here.
And similarly, under metadata, you have name and labels, which are the children of metadata, so they have to be within the same vertical line.
So you have to make sure that the spacing is correct.
Typically it would be two spaces or a tab, but it is recommended not to use tabs.
So always stick to two spaces, and stick to that throughout.
So the next thing that we're going to configure is the containers.
Containers is a list of objects.
Now, we first give it a name, and note that this is the name of the container within the pod.
There could be multiple containers, and each can have a different name.
So one container could be named app, and another container could be named helper, or any name that makes sense to you.
We're going to use the same name as that of the container image, so we will just name it nginx.
And the second property that we're going to add here is the image name, which is the Docker Hub image name of the container that we're going to create.
So the image name is, again, nginx.
If you're using registries other than Docker Hub, then make sure to specify the full path to that image repository here.
Now, remember that we can add additional containers to the pod as well.
To do that, we would have to declare a second element in the list, which would be the second object in the list.
So here I can, for example, add a busybox container using the busybox image, and that would be the second element of the array.
In this case, though, we're going to stick to one single container, so I'm going to just delete that.
And now I'm going to hit Escape and type :wq to save this file, and we will just use the cat command to make sure that the file was created with the expected contents.
So make sure the format is correct.
The name and labels are children of metadata, and you can see that they are on the same vertical line.
And similarly, labels has two children, which are the two labels app and tier, and spec has a list, and we are defining it as a list with a hyphen followed by the objects.
So we can make use of the kubectl create command or the kubectl apply command to create this pod.
The create and apply commands kind of work the same if you're creating a new object.
You can either use create or you can use apply; it doesn't matter.
And we pass in the file name using the -f option.
And here we can see that the pod has been created.
So let's check the status real quick.
You can see that it's in the ContainerCreating state, and then when we check again, we see that it's in the Running state.
And as before, if you want to get more details about the pod, you can always run the kubectl describe command and specify the name of the pod, and you should get much more in-depth information about the pod.
OK.
So that's it for this demo.
In the next section, we will learn some tips and tricks for developing YAML files easily using an IDE.
21. Tips & Tricks - Developing
Kubernetes Manifest files with VS code:
26. Replication Controllers and
ReplicaSets:

Hello and welcome to this lecture on Kubernetes controllers.


My name is Mumshad Mannambeth.
In this lecture, we will discuss Kubernetes and its controllers.
Controllers are the brains behind Kubernetes.
They are the processes that monitor Kubernetes objects and respond accordingly.
In this lecture, we will discuss one controller in particular, and that is the replication controller.
So what is a replica, and why do we need a replication controller?

High Availability:

Let's go back to our first scenario, where we had a single pod running our application.
What if for some reason our application crashes and the pod fails? Users will no longer be able to access our application. To prevent users from losing access to our application, we would like to have more than one instance or pod running at the same time.
That way, if one fails, we still have our application running on the other one.
The replication controller helps us run multiple instances of a single pod in the Kubernetes cluster, thus providing high availability.
So does that mean you can't use a replication controller if you plan to have a single pod?
No.
Even if you have a single pod, the replication controller can help by automatically bringing up a new pod when the existing one fails. Thus the replication controller ensures that the specified number of pods is running at all times, even if it's just one or 100.
Load balancing and scaling:

Another reason we need a replication controller is to create multiple pods to share the load across them.
For example, in this simple scenario, we have a single pod serving a set of users. When the number of users increases, we deploy an additional pod to balance the load across the two pods.
If the demand further increases and we were to run out of resources on the first node, we could deploy additional pods across the other nodes in the cluster.
As you can see, the replication controller spans multiple nodes in the cluster.

It helps us balance the load across multiple pods on different nodes, as well as scale our application when the demand increases.
Difference between Replication controller and replica set:
It's important to note that there are two similar terms, replication controller and replica set.
Both have the same purpose, but they are not the same. The replication controller is the older technology that is being replaced by the replica set.
The replica set is the new recommended way to set up replication.

However, whatever we discussed in the previous few slides remains applicable to both these technologies.
There are minor differences in the way each works, and we will look at that in a bit.
As such, we will try to stick to replica sets in all of our demos and implementations going forward.

Let us now look at how we create a replication controller:

As with the previous lecture, we start by creating a replication controller definition file. We will name it rc-definition.yml. As with any Kubernetes definition file, we have four sections:
1) API version
2) kind
3) metadata and
4) spec

The API version is specific to what we are creating.

In this case, the replication controller is supported in Kubernetes API version v1, so we will set it as v1. The kind, as we know, is ReplicationController.
Under metadata, we will add a name and we will call it myapp-rc, and we will also add a few labels, app and type, and assign values to them.
So far it has been very similar to how we created a pod in the previous section.

Next is the most crucial part of the definition file, and that is the specification, written as spec.
For any Kubernetes definition file, the spec section defines what's inside the object we are creating.
In this case, we know that the replication controller creates multiple instances of a pod. But what pod?
We create a template section under spec to provide a pod template to be used by the replication controller to create replicas.

Now how do we define the pod template?


It's not that hard because we have already done that in the previous exercise.
Remember we created a pod definition file in the previous exercise.

We could reuse the contents of the file to populate the template section.
Move all the contents of the pod definition file into the template section of the replication
controller except for the first few lines which are API version and kind.

Remember, whatever we move must be under the template section, meaning it should be indented to the right and have more spaces before it than the template line itself.
It should be a child of the template section. Looking at our file now, we have two metadata sections, one for the replication controller and another for the pod, and we have two spec sections, one for each.
We have nested two definition files together, the replication controller being the parent and the pod definition being the child.
Now there is something still missing.
We haven't mentioned how many replicas we need in the replication controller.
For that, add another property to the spec called replicas, and input the number of replicas you need under it.
Remember that the template and replicas are direct children of the spec section, so they are siblings and must be on the same vertical line, which means having an equal number of spaces before them.
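Assembled, the rc-definition.yml described above would look roughly like this; the label values and the container name are illustrative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3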

Once the file is ready, run the kubectl create command and input the file using the -f parameter.
CMD to create Replication controller:

kubectl create -f rc-definition.yml

The replication controller is created. When it is created, it first creates the pods using the pod definition template, as many as required, which is 3 in this case.
To view the list of created replication controllers, run the kubectl get replicationcontroller command and you will see the replication controller listed. We can also see the desired number of replicas or pods, the current number of replicas and how many of them are ready in the output.

CMD to view the replication controllers created:

kubectl get replicationcontroller
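The output looks roughly like this; the age shown is illustrative:

NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   3         3         3       20s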

If you would like to see the pods that were created by the replication controller, run the kubectl get pods command and you will see three pods running.
Note that all of them start with the name of the replication controller, which is myapp-rc, indicating that they were all created automatically by the replication controller.

Replica set:

What we just saw was the replication controller. Let us now look at the replica set. It is very similar to the replication controller.
As usual, first we have apiVersion, kind, metadata and spec. The API version, though, is a bit different.
It is apps/v1, which is different from what we had before for the replication controller, which was just v1.
If you get this wrong, you are likely to get an error that says no match for kind ReplicaSet, because the specified Kubernetes API version has no support for replica sets.
The kind would be 'ReplicaSet', and we add name and labels in the metadata.

The specification section looks very similar to the replication controller. It has a template section where we provide the pod definition as before.
So I'm going to copy the contents over from the pod definition file, and we have the number of replicas, which is set to three.
However, there is one major difference between the replication controller and the replica set.
The replica set requires a 'selector' definition. The selector section helps the replica set identify what pods fall under it.
But why would you have to specify what pods fall under it, if we have provided the contents of the pod definition file itself in the template?
It's because the replica set can also manage pods that were not created as part of the replica set creation.
Say, for example, pods created before the creation of the replica set that match the labels specified in the selector: the replica set will also take those pods into consideration when creating the replicas.
I will elaborate on this in the next slide, but before we get into that, I would like to mention that the selector is one of the major differences between the replication controller and the replica set.
The selector is not a required field in the case of a replication controller, but it is still available. When you skip it, as we did in the previous slide, it assumes it to be the same as the labels provided in the pod definition file.
In the case of a replica set, a user input is required for this property, and it has to be written in the form of matchLabels, as shown here.
The matchLabels selector simply matches the labels specified under it to the labels on the pods.
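A sketch of the resulting replicaset-definition.yml; the label values and the container name are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end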

The replica set selector also provides many other options for matching labels that were not available in a replication controller. As always, to create a replica set, run the kubectl create command, providing the definition file as input, and to see the created replica sets, run the kubectl get replicaset command. To get the list of pods, simply run the kubectl get pods command.
CMD to create a ReplicaSet:
kubectl create -f replicaset-definition.yml

Labels and selectors:

So what is the deal with labels and selectors?

Why do we label our pods and objects in Kubernetes?
Let us look at a simple scenario.
Say we deployed three instances of our front end web application as three pods.

We would like to create a replication controller or replica set to ensure that we have three active pods at any time.
And yes, that is one of the use cases of replica sets: you can use them to monitor existing pods, if you have them already created, as in this example. In case they were not created, the replica set will create them for you.
The role of the replica set is to monitor the pods, and if any of them were to fail, deploy new ones.
The replica set is in fact a process that monitors the pods.
Now, how does the replica set know which pods to monitor?
There could be hundreds of other pods in the cluster running different applications.

This is where labelling our pods during creation comes in handy. We can now provide these labels as a filter for the replica set.

Under the selector section, we use the matchLabels filter and provide the same label that we used while creating the pods.
This way, the replica set knows which pods to monitor. The same concept of labels and selectors is used in many other places throughout Kubernetes.
Let me ask you a question along the same lines. In the replica set specification section, we learned that there are three sections: template, replicas and the selector.
We need three replicas, and we have updated our selector based on our discussion in the previous slide.
Say, for instance, we have the same scenario as in the previous slide, where we have three existing pods that were created already, and we need to create a replica set to monitor the pods to ensure there are a minimum of three running at all times.
When the replica set is created, it is not going to deploy a new instance of the pod, as three of them with matching labels are already created.
In that case, do we really need to provide a template section in the replica set specification, since we are not expecting the replica set to create a new pod on deployment?
Yes, we do, because in case one of the pods were to fail in the future, the replica set needs to create a new one to maintain the desired number of pods, and for the replica set to create a new pod, the template definition section is required.
Scale:

Let's now look at how we scale the replica set.

Say we started with three replicas, and in the future we decided to scale to six.
How do we update our replica set to scale to six replicas?
Well, there are multiple ways to do it.
The first is to update the number of replicas in the definition file to six, then run the kubectl replace command, specifying the same file using the -f parameter, and that will update the replica set to have six replicas.
CMD to update replicas, 1st way:
kubectl replace -f replicaset-definition.yml
CMD to update replicas, 2nd way:
kubectl scale --replicas=6 -f replicaset-definition.yml

CMD to update replicas, 3rd way:

kubectl scale --replicas=6 replicaset myapp-replicaset
                           |          |
                           |__ type   |__ name

The second way to do it is to run the kubectl scale command.

Use the --replicas parameter to provide the new number of replicas, and specify the same file as input.
Either input the definition file or provide the replica set name in the type name format. However, remember that using the file name as input will not result in the number of replicas being updated automatically in the file.
In other words, the number of replicas in the replica set definition file will still be three, even though you scaled your replica set to have six replicas using the kubectl scale command and the file as input.
There are also options available for automatically scaling the replica set based on load, but that is an advanced topic and we will discuss it at a later time.
Let us now review the commands real quick. The kubectl create command, as we know, is used to create a replica set, or basically any object in Kubernetes, depending on the file we are providing as input.
You must provide the input file using the -f parameter.
Use the kubectl get replicaset command to see the list of replica sets created.
Use the kubectl delete replicaset command, followed by the name of the replica set, to delete the replica set. Then we have the kubectl replace command to replace or update the replica set, and also the kubectl scale command to scale the replica set simply from the command line, without having to modify the file. That's it for this lecture.
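For quick reference, the commands reviewed in this lecture, assuming a replica set named myapp-replicaset:

kubectl create -f replicaset-definition.yml
kubectl get replicaset
kubectl delete replicaset myapp-replicaset
kubectl replace -f replicaset-definition.yml
kubectl scale --replicas=6 -f replicaset-definition.yml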
Demo-ReplicaSet:

In this demo, we're going to create a replica set based on the pod definition file that we created earlier.
In the last demo, we created a pod using YAML.
So here, what I've done is I've created a directory called pods under the kubernetes-for-beginners directory, which is my project directory, and here are the two files that we created earlier for pods.
So now let us create a new directory for replica sets called replicasets, and inside this directory, let's create a new file called replicaset.yaml.
Let's start with the apiVersion.
If you remember from the lecture, the version for a ReplicaSet should be apps/v1.
Next we'll add the kind, and here we are going to make use of ReplicaSet.
The next root element is the same as that of pods: we are going to add metadata with the name of the replica set, which would be myapp-replicaset, and we will assign a label for our app as well.
So the key would be app and the value will be myapp.
Now I'm going to set the last property, which is the spec.
And as you can see, for replica sets the Visual Studio Code YAML extension has automatically understood that the object is a ReplicaSet and that it needs a selector.
So it has already created the selector field for us, and we'll just have to fill it in.
Here we have two possible options: it can be matchExpressions or matchLabels.
So let's use the matchLabels option.
And here we're going to use the same label that we used for the pod, which will tie the pod to the replica set.
So let us use the same label that we used before while creating pods.
For this, let me first open the nginx pod definition file on the right side of the screen; you can just right-click and open it on the right side.
The label we used there was the env label, which is set to production, so I'm just going to copy that over.
Now, the next property is replicas.
So for this example, let's make use of three replicas.
And then the next mandatory value that we need to add here is the template.
So for the template, we can make use of the pod definition file that we created earlier and copy the template.
So I'm just going to copy this whole section here, from metadata onwards, and paste it under the template section.
Now, as soon as we paste the contents, we see that the indentation is all out of order.
In order to fix this, one easy way is to select the whole section that we just pasted, except for the first line, and then press Tab twice, or until it fixes the indentation.
Now, some editors are intelligent enough to automatically correct that for you; however, this one doesn't.
But I know that there is an extension available for this, so if you're interested in that, check it out; I think it's called paste-and-indent or something like that.
OK.
So for now, we will just stick to the manual way of fixing it.
So now the format is corrected and there are no more errors in the file.
One thing to note here is the labels used for the pod and the labels used under the selector at the top.
They have to be the same.
The label set at the top on the replica set itself doesn't really matter.
It is the labels that are set on the pod, and the one set on the selector, that matter; that's what ties them together.
So they have to be the same.
Say we changed the labels on the pod template to something else, like app set to myapp; then we must change it at the top as well to use the same.
So now we're done with the file.
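For reference, the finished replicaset.yaml from this demo would look roughly like the following; the pod and container names in the template depend on the pod definition file you copied from:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      name: nginx          # assumed pod name from the copied pod definition
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx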
OK.
So now that we have the ReplicaSet definition file ready, let's go back to the terminal.
And here, in the project root directory, we see that we have the new directory called replicasets, and underneath that there is the replicaset.yaml definition file that we just created.
So let's quickly check to make sure that the file has the contents we created.
I'm just going to use the cat command to view the contents of the file, and everything is as expected.
So now let me clear the screen, and we're going to create this replica set using the kubectl create command with the -f option.
As soon as the replica set is created, let's check the status using the kubectl get replicaset command.
And we notice that there is one replica set which is created already.
The desired number of replicas is equal to three, and the current and ready number of replicas are three as well, and it was created about eight seconds ago.
So we can further inspect the status of the pods by running the kubectl get pods command.
And we notice here that we have three replicas, or three pods, for the replica set that we created.
And all of the pods have a unique name, but you will notice that the name of each pod begins with the name of the replica set, which is myapp-replicaset.
So that way you can look at any pod and identify whether it is a standalone pod or a pod that is part of a replica set.
Now, all of them are in a running state.
So we said that a replica set ensures that a sufficient number of replicas or pods are available at all times.
Now, let's see what happens if we were to delete one of these pods.
So let's go back and copy one of the pod names.
In this case, I'm copying this one here, whose name ends with a random character suffix.
So I'm going to use the kubectl delete pod command, and I'm going to paste the pod name here.
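The command looks like this, where the pod name is whatever name you copied from the kubectl get pods output; the random suffix shown here is a made-up example:

kubectl delete pod myapp-replicaset-8nxxl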
Now, whenever you delete a pod, it takes a few seconds for the pod to fully terminate.
So just give it a few seconds for the pod to fully terminate.
Once we get the prompt back, let's check the status of the pods again.
You'll notice that the replica set still has three pods running.
And you'll notice that one of the pods was just created fifteen seconds ago.
You'll also notice that the older pod, the one whose name we copied, has been deleted, and a new pod with a different name has been created.
So that's the replica set ensuring that a sufficient number of pods are always available on the cluster.
Now, if you run the kubectl describe replicaset command for the new replica set, we will see that the desired number of replicas is three, and here we see more information about the replica set, just like what we saw with the kubectl describe pod command.
So here we see the name of the replica set, the selectors, and the labels for the replica set itself.
If you scroll down, you'll see the labels that the selector makes use of, which are the labels on the pod.
And we also see the nginx container definition.
And if you scroll down below, we see the events.
So initially, when we created the replica set, it actually came up with three replicas.
But then we deleted one of the replicas, and then it spun up one additional replica to maintain that desired number.
And you see all of that in the history of events here.
So this is a handy tool and command if you'd like to inspect what happened to a replica set, in case you're troubleshooting something or you're just looking for more information on what's happening with the replica set.
So we said that a replica set ensures that a minimum number of replicas is available all the time.
But what if there are more replicas than what's required?
So let's try something else.
Let's try to create a new pod by making use of the same label that the replica set selector uses, which is app set to myapp.
And to do this, let's go back to our pod definition file.
So here I have the nginx pod definition file that we created, and you will notice that we have a pod template which tries to create a pod by the name nginx-2.
We have now changed its label to the same label as the pod definition template used in our replica set.
So now let's create the pod directly, not through the replica set, but directly, just as we have done before.
And we'll see what happens when we create a new pod outside of the replica set, but one that has the same labels that the replica set selector uses.
So before doing that, let me run the kubectl get pods command, and I see that there are only the three pods that were created by the replica set.
And so now I'm going to use the kubectl create command with the -f option and specify the nginx pod definition file, and you will see that the nginx-2 pod has been created.
But if I run the kubectl get pods command now, we'll see that the status of that pod is in a terminating state already.
The replica set is terminating the new pod that we just created; it is not allowing more pods with the same labels than the number of replicas configured on the replica set.
So if we run the kubectl describe command now, we'll see at the bottom, under the events section of the output of the describe command, that the replica set controller deleted the new nginx-2 pod that we just created.
So now, if I go back and run the kubectl get pods command, I can see that it is in a terminating state, and soon it should go away from the list altogether.
OK, so now let's see how to update a replica set.
So what if we want to change the number of replicas to, say, four instead of the current three?
Say, for instance, we are trying to scale up our application.
For this, we must edit the replica set and update its replicas count to four.
For this, we will make use of a new command called kubectl edit replicaset, and we will provide the name of the replica set, which is myapp-replicaset.
And now, when we run this command, we see that it opens up the running configuration of the replica set in a text editor, in text format; in this case, it opens up in Vim.
So note that this is not the actual file that we created at the beginning of this demo.
This is a temporary file that is created by Kubernetes in memory to allow us to edit the configuration of an existing object on Kubernetes, and that's why you'll see a lot of additional fields in this file other than the details that you provided.
So changes made to this file are applied directly to the running configuration on the cluster as soon as the file is saved.
So you must be very careful with the changes that you're making here.
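CMD to edit the running ReplicaSet, as used in this demo:

kubectl edit replicaset myapp-replicaset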
So now if I scroll down all the way to the spec section, which is over here, we can see the running configuration and the current number of replicas, which is set to three. I can change this to scale it up by another pod, so four in this case, and then I will save and exit from the editor.
Now, it should spin up a new pod in addition to the three that we already had, to match the new number of replicas that we specified.
So if you run the kubectl get pods command now, we see that there is a new pod which has spun up six seconds ago, and we can use the same approach to scale down as well.
And there's also a command available to scale the number of replicas without having to go in and edit the definition file, and that is the kubectl scale replicaset command.
So we will provide the name of the replica set, and we will set the number of replicas for it to scale to two.
You can specify a number which is greater or less than the current number of replicas.
And take note of the double dashes before the replicas option.
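CMD to scale the ReplicaSet from the command line, as used in this demo:

kubectl scale replicaset myapp-replicaset --replicas=2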
So if I run the kubectl get pods command again, we will see that the replica set is now scaling down to two replicas by terminating two of the pods.
So let's wait for these two replicas to be terminated.
And there you go; now we just have two pods.
Well, that's it for this demo.
I will see you in the next one.
