Containers provide an isolated environment for running an application: the entire
user space is dedicated to that application. Any changes made inside the
container are never reflected on the host or on other containers running on the
same host. Containers are an abstraction at the application layer; each container is
a separate application.
2. Docker Container
Docker containers include the application and all of its dependencies. They share the
kernel with other containers, running as isolated processes in user space on the host
operating system. Docker containers are not tied to any specific infrastructure: they
run on any computer, on any infrastructure, and in any cloud. A Docker container is
essentially a runtime instance of a Docker image.
Docker has become more popular with the rise of the cloud, because it is very useful
when we have to build a PaaS cloud. The reason is simple: the container concept allows
us to create a layer of isolated containers, each with a single application inside.
We can easily build a PaaS specific to a single customer. This allows great flexibility
in all aspects of designing and releasing our PaaS.
5. Google Compute Engine is composed of three basic components:
Virtual Machine
Network component
Persistent disk
A Pod guarantees that a container keeps running continuously, whereas a Job ensures
that its pods run a certain task to completion.
Example:
1) ClusterIP service
2) LoadBalancer service
3) NodePort service, and
4) ExternalName service
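As an illustration of one of these types, a minimal NodePort Service manifest might look like the following; the service name, selector label, and port numbers here are hypothetical, not values from the original text:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  type: NodePort          # one of: ClusterIP (default), NodePort, LoadBalancer, ExternalName
  selector:
    app: my-app           # hypothetical pod label to route traffic to
  ports:
    - port: 80            # port the service exposes inside the cluster
      targetPort: 8080    # port the container listens on
      nodePort: 30080     # port opened on every node (range 30000-32767)
```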
• Load balanced: Because all the instances are managed as one, the resource is
shared and balanced among the instances. Of course, we must create the load
balancer on top of that to ensure the functionality.
The shell has the az tool automatically installed and configured to work with
your Azure environment.
When you have the shell up and working, you can run a command to create a resource group.
Once the resource group is created, you can create a cluster.
This will take a few minutes. Once the cluster is created, you can get credentials for it.
If you don’t already have the kubectl tool installed, you can install it using:
$ az aks install-cli
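The resource-group and cluster-creation steps described above typically use commands like the following; the resource group name, cluster name, and location are hypothetical placeholders, not values from the original text:

```shell
# Create a resource group (name and location are hypothetical)
$ az group create --name=my-group --location=westus2

# Create an AKS cluster in that resource group (takes a few minutes)
$ az aks create --resource-group=my-group --name=my-cluster

# Fetch credentials so kubectl can talk to the new cluster
$ az aks get-credentials --resource-group=my-group --name=my-cluster
```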
The name comes from the convention used to name configuration files in
GNU/Linux operating systems: “/etc”. The extra letter “d” stands for “distributed”. etcd
is now open source and is managed by the Cloud Native Computing Foundation.
etcd can be defined as a distributed key-value store that coordinates state across
a distributed system. etcd is written in the Go programming language.
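As a sketch of the key-value model, the etcdctl client (v3 API) can write and read values; the key and value below are made-up examples:

```shell
# Store a value under a key (key and value are hypothetical)
$ etcdctl put /config/db-host "10.0.0.5"
OK
# Read it back: get prints the key, then the value
$ etcdctl get /config/db-host
/config/db-host
10.0.0.5
```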
The interactive method is the easiest way to create Docker images. The first
step is to launch Docker and open a terminal session. Then use the command
docker run image_name:tag_name. This starts a shell session with a container
launched from the image.
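The interactive flow described above can be sketched as follows; the base image, package, and image tag are illustrative placeholders:

```shell
# Start an interactive shell in a container from a base image
$ docker run -it ubuntu:22.04 /bin/bash

# ...inside the container, make changes, for example:
#   apt-get update && apt-get install -y curl
#   exit

# Commit the stopped container's state as a new image
$ docker commit <container_id> my-image:v1
```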
The official Kubernetes client is kubectl: a command-line tool for interacting with the
Kubernetes API. kubectl can be used to manage most Kubernetes objects, such as
Pods, ReplicaSets, and Services. kubectl can also be used to explore and verify
the overall health of the cluster.
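A few representative kubectl commands, consistent with the description above:

```shell
# List objects of common kinds
$ kubectl get pods
$ kubectl get replicasets
$ kubectl get services

# Inspect a single object in detail
$ kubectl describe pod <pod-name>

# Explore overall cluster health
$ kubectl get nodes
$ kubectl cluster-info
```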
Each container within a Pod runs in its own cgroup, but they share a number
of Linux namespaces. Applications running in the same Pod share the same IP
address and port space (network namespace), have the same hostname (UTS
namespace), and can communicate using native interprocess communication
channels over System V IPC or POSIX message queues (IPC namespace).
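A minimal two-container Pod manifest illustrating this sharing; because both containers share the network namespace, the sidecar can reach the web container via localhost. The names and images are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers      # hypothetical name
spec:
  containers:
    - name: web
      image: nginx          # listens on port 80 inside the shared network namespace
    - name: sidecar
      image: busybox
      # reaches the web container on localhost:80 thanks to the shared namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```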
17. Does a rolling update of a StatefulSet with replicas=1 make sense?
No. Because there is only one replica, any change to the StatefulSet results in
an outage: a rolling update of a StatefulSet has to tear down one (or more) old
pods before replacing them. With 2 replicas, a rolling update would create the
second pod, but it would not succeed, because the persistent disk is locked by the
first (old) running pod; the rolling update does not delete the first pod in time
to release the lock on the disk for the second pod to use. With only one replica,
the rolling update goes 1 -> 0 -> 1. If the app can run with multiple identical
instances concurrently, use a Deployment and roll 1 -> 2 -> 1 instead.
18. If a pod exceeds its memory “limit”, what signal is sent to the process?
SIGKILL, which immediately terminates the container; a new one is then spawned with
an OOM error. The OS, if using cgroup-based containerization (Docker, rkt, etc.),
does the OOM killing; Kubernetes simply sets the cgroup limits but is not ultimately
responsible for killing the processes. On a normal shutdown, by contrast, `SIGTERM`
is sent to PID 1 and Kubernetes waits `terminationGracePeriodSeconds` (default of
30 seconds) before sending `SIGKILL`; you can change that time by setting
terminationGracePeriodSeconds in the pod spec. As long as your container will
eventually exit, it is fine to have a long grace period. If you want a graceful
restart, it has to be done inside the pod. If you don't want the container killed,
don't set a memory `limit` on the pod; there is no way to disable OOM killing for
the whole node. Also, when the liveness probe fails, the container receives
SIGTERM, followed by SIGKILL after the grace period.
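A pod spec fragment showing where the memory limit and termination grace period discussed above are set; the pod name, image, and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod                   # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # time between SIGTERM and SIGKILL (default 30)
  containers:
    - name: app
      image: my-app:latest            # hypothetical image
      resources:
        requests:
          memory: "128Mi"
        limits:
          memory: "256Mi"             # exceeding this triggers the cgroup OOM killer
```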
1. Docker architecture
The Docker daemon (dockerd) listens for Docker API requests and manages Docker
objects such as images, containers, networks, and volumes. A daemon can also
communicate with other daemons to manage Docker services.
The Docker client (docker) is the primary way that many Docker users interact with
Docker. When you use commands such as docker run, the client sends these
commands to dockerd, which carries them out. The docker command uses the
Docker API. The Docker client can communicate with more than one daemon.
Docker Desktop
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone
can use, and Docker is configured to look for images on Docker Hub by default. You
can even run your own private registry.
When you use the docker pull or docker run commands, the required images are
pulled from your configured registry. When you use the docker push command, your
image is pushed to your configured registry.
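The pull/tag/push flow described above can be sketched as follows; the private registry host and repository path are hypothetical placeholders:

```shell
# Pull an image from the default registry (Docker Hub)
$ docker pull nginx:latest

# Tag it for a private registry (host and repo are hypothetical)
$ docker tag nginx:latest registry.example.com/myteam/nginx:latest

# Push it to that registry
$ docker push registry.example.com/myteam/nginx:latest
```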
Docker objects
When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of those
objects.
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one
or more networks, attach storage to it, or even create a new image based on its
current state.
By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container’s network, storage, or other
underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide
to it when you create or start it. When a container is removed, any changes to its
state that are not stored in persistent storage disappear.
A container is probably the best way to release and put a microservice into
production. To create a container in GCP, we can construct a new Compute Engine
instance with one of the container-optimized OSs. This is a family of OSs optimized
for running containers in Compute Engine. For example, CoreOS is one of these
operating systems.
project practicaldevopsgcpcli.
2. Click the board Compute Engine, which opens the page to manage our Compute
Engine instance.
3. This opens the page of our Compute Engine. We must now create a new instance,
based on one of the container-optimized OSs. Select the Create Instance button from
the toolbar.
6. Click the Create button to create the new Compute Engine instance.
We can access the new instance just by clicking its name. This opens the details of
the instance. Then scroll down to Remote access and click SSH. This opens a new
browser window for the instance with Docker and our image. We can see the image
actually installed with the command docker images, when we execute that command
in the instance.
starting point. Google offers a private registry, and we can upload our images to
the registry and use them in our instance. Using the Google registry, we can put in
place our basic CI/CD systems. For example, we can use Jenkins to create the
image and put it in the registry.
Using a Compute Engine instance to create our Docker image does have some
limitations.
In Compute Engine, an image from the Docker Hub registry can be used as
well; however, there is a limit of one container per VM. If we want to design a
microservice application, we probably need more than one container per VM. To do
so, Google offers another service called Kubernetes Engine.
3. Visualize the labels applied for the deployment of a project and explain the
concept of label creation and selectors.
Refer - Book.
Master Node: The master node is the first and most important component in the
Kubernetes cluster, and it is responsible for cluster administration. It serves as the
starting point for all administrative tasks. The cluster may include more than one
master node to ensure fault tolerance.
API Server: The API server is the entry point for any REST commands used to
operate the cluster.
Scheduler: The scheduler assigns tasks to the worker (slave) nodes. It keeps track of
how much of its resources each node uses, and it is in charge of distributing the workload.
ETCD: etcd writes values and stores the cluster's configuration information. Most of
the other components connect to it in order to receive instructions and function. It
also takes care of network rules and port forwarding.
Kubelet: It obtains a Pod's configuration from the API server and verifies that the
containers mentioned are up and running.
Kube-proxy: Kube-proxy is a component of the worker nodes; it runs on each of
them. It helps forward TCP/UDP packets across back-end network services.
Docker Container: Each worker node has a Docker container that executes the
defined pods.
Pods: A pod is a collection of one or more containers that execute logically on the
same node.
Many IT organizations run numerous containers with multiple tasks across numerous
nodes all over the world in an evenly distributed manner. As large enterprises,
they can use anything that supplies them with agility, top-notch capabilities, and
DevOps practices for cloud-based applications. With the Kubernetes platform they
can move to a scheduled architecture and also get support for various container
formats. Ultimately it solves their issue of maintaining work consistency.
Cloud Controller Manager:
The Cloud Controller Manager has certain important roles in maintaining the
cloud services residing in Kubernetes. It plays a significant role in routing the
network, maintaining persistent storage, and managing communication with
pre-existing cloud services. Further, it helps abstract the cloud-provider-specific
code away from the primary Kubernetes code.
Replica sets
kubectl get replicasets # get replica sets
NAME               DESIRED   CURRENT   READY   AGE
nginx-65899c769f   0         0         0       7m
nginx-6c9655f5bb   1         1         1       13s
One more replica set was added and then the other replica set was brought
down.
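The two ReplicaSets above are what a Deployment rollout produces. For example, updating a Deployment's image creates a new ReplicaSet and scales the old one down; the deployment and container names here are illustrative:

```shell
# Trigger a rollout by changing the pod template (names are hypothetical)
$ kubectl set image deployment/nginx nginx=nginx:1.25

# Watch the new ReplicaSet scale up while the old one scales down
$ kubectl get replicasets
$ kubectl rollout status deployment/nginx
```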
A Docker container passes through different stages, known together as the
Docker container lifecycle. Some of the states are:
Create Containers
Using the docker create command will create a new Docker container with the
specified docker image.
Start Container
To start a stopped container, we can use the docker start command.
Run Container
The docker run command does the work of both the “docker create” and “docker
start” commands. It creates a new container and runs the image in the
newly created container.
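The relationship between docker create, docker start, and docker run can be sketched as follows; the container names and image are hypothetical:

```shell
# Create a container without starting it
$ docker create --name web nginx:latest

# Start the created (or a previously stopped) container
$ docker start web

# Or do both in one step, detached
$ docker run -d --name web2 nginx:latest
```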
Pause Container
If we want to pause the processes running inside the container, we can use the
“docker pause” command.
Stop Container
Stopping a running Container means to stop all the processes running in that
Container. Stopping does not mean killing or ending the process.
A stopped container can be made into the start state, which means all the processes
inside the container will again start. When we do the docker stop command, the main
process inside the container receives a SIGTERM signal.
In our case, 4 containers are running which you can see using the docker
ps command.
To stop all the running containers we can use the following command:
$ docker stop $(docker container ls -aq)
Delete Container
Removing or deleting a container means destroying all the processes running
inside it and then deleting the container. It is preferable to delete a container
only when it is in the stopped state, rather than forcefully destroying a
running container.
We can delete or remove all containers with a single command only. In our
example, 4 containers (not necessarily running) are there which you can see using
the docker ps -a command.
We can see there are 4 containers which are not in the running state. Now we will
delete all of them using a single command:
$ docker rm $(docker ps -aq)
Kill Container