
PART A

1. Differentiate virtualization and containerization

Containers provide an isolated environment for running the application. The entire
user space is explicitly dedicated to the application. Any changes made inside the
container are never reflected on the host or on other containers running on the
same host. Containers are an abstraction of the application layer. Each container is
a different application.

Whereas in virtualization, hypervisors provide an entire virtual machine to the
guest (including the kernel). Virtual machines are an abstraction of the hardware layer.
Each VM behaves like a separate physical machine.

2. Docker Container.

Docker containers include the application and all of its dependencies. Each container
shares the kernel with other containers, running as an isolated process in user space
on the host operating system. Docker containers are not tied to any specific
infrastructure: they run on any computer, on any infrastructure, and in any cloud.
Docker containers are basically runtime instances of Docker images.

3. Describe Container Orchestration.


In layman's terms, orchestration is like the fusion of different types of
instruments to produce a great piece of music.

Orchestration integrates numerous services to promptly automate procedures or
synchronize data. If an application has several microservices running in different
containers, communication between them becomes difficult. The role of container
orchestration is to fuse the different components of an application to deliver a smooth
service. Orchestration allows all the services in the various containers to work together
to achieve a single objective.

4. Why Use Docker?

Docker has become more popular with the cloud because it is very useful when we
have to build a PaaS cloud. The reason is simple: the concept of containers allows
us to create a layer of different isolated containers, each with a single application inside.
We can easily build a PaaS specific to a single customer. This allows great flexibility
in every aspect of designing and releasing our PaaS.
5. Google Compute Engine is composed of three basic components:

 Virtual Machine
 Network component
 Persistent disk

6. Differentiate between a Job and a Pod.

A Pod guarantees that a container is continually running, whereas a Job ensures
that the pods run their task to completion. A Job entails performing a certain activity.
Example:

kubectl run mypod1 --image=nginx --restart=Never

kubectl run mypod2 --image=nginx --restart=OnFailure

$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
mypod1   1/1     Running   0          59s

$ kubectl get job
NAME     DESIRED   SUCCESSFUL   AGE
mypod2   1         0            19s

7. Enlist the various services available in Kubernetes.

Some of the services available in Kubernetes are:

1) ClusterIP service
2) LoadBalancer service
3) NodePort service, and
4) ExternalName service
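
As a quick illustration, the same Deployment (a hypothetical nginx one) can be exposed under the first three service types with kubectl expose; the service names below are illustrative:

$ kubectl expose deployment nginx --port=80 --type=ClusterIP --name=nginx-internal
$ kubectl expose deployment nginx --port=80 --type=NodePort --name=nginx-nodeport
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer --name=nginx-public

(An ExternalName service is usually created from a manifest, since it only maps a service name to a DNS name.)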

8. A managed instance group has several benefits.

• Autoscaling: When the application requires more resources, we can scale the
instances to fit the new requirements.

• Load balancing: Because all the instances are managed as one, the resource is
shared and balanced among the instances. Of course, we must create the load
balancer on top of that to ensure the functionality.

• Management of unhealthy instances: In the event an instance in the group is
stopped or crashes, it is automatically re-created with the same name as the previous
instance.
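
A minimal sketch of creating such a group with the gcloud CLI, assuming an instance template already exists (the group, template, and zone names are illustrative):

$ gcloud compute instance-groups managed create my-group --zone=us-central1-a --template=my-template --size=2
$ gcloud compute instance-groups managed set-autoscaling my-group --zone=us-central1-a --max-num-replicas=5 --target-cpu-utilization=0.75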
9. Installing Kubernetes with Azure Kubernetes Service.

The shell has the az tool automatically installed and configured to work with
your Azure environment.

Alternatively, you can install the az command-line interface (CLI) on your
local machine.

When you have the shell up and working, you can run:

$ az group create --name=kuar --location=westus

Once the resource group is created, you can create a cluster using:

$ az aks create --resource-group=kuar --name=kuar-cluster

This will take a few minutes. Once the cluster is created, you can get credentials for
the cluster with:

$ az aks get-credentials --resource-group=kuar --name=kuar-cluster

If you don’t already have the kubectl tool installed, you can install it using:

$ az aks install-cli

10. Define ETCD.

The name comes from the convention used to name configuration files in
GNU/Linux operating systems: “/etc”. The extra letter “d” stands for “distributed”. etcd
is now open source and is managed by the Cloud Native Computing Foundation.
ETCD can be defined as the distributed key-value store which establishes a
relation between the distributed works. The ETCD is basically written in a specific
language that is called a GO programming language.

It main function is to accumulate the configuration information of the cluster of


Kubernetes. This helps it to represent the form of the cluster at any following time.
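
As a small sketch of how it is used as a key-value store (the key and value here are illustrative, assuming the etcdctl v3 client):

$ etcdctl put /demo/config "max-replicas=5"
$ etcdctl get /demo/config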

11. Illustrate the steps to deploy an application to a docker container.

1. Install Docker on the machines you want to use it on;
2. Set up a registry at Docker Hub;
3. Initiate a Docker build to create your Docker image;
4. Set up your 'Dockerized' machines;
5. Deploy your built Docker image or application, as sketched below.
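
A minimal sketch of those steps on the command line (the image and container names are illustrative):

$ docker login # authenticate to the Docker Hub registry
$ docker build -t myuser/myapp:1.0 . # build the image from a Dockerfile
$ docker push myuser/myapp:1.0 # publish the image to the registry
$ docker run -d --name myapp myuser/myapp:1.0 # deploy it on a 'Dockerized' machine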
12. Write the command used to create docker images and deployment in
docker.

The interactive method is the easiest way to create Docker images. The first
step is to launch Docker and open a terminal session. Then use the docker run
image_name:tag_name command. This starts a shell session with the container
that was launched from the image.
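
A minimal sketch of the interactive method (the image name and tag are illustrative): start a container, make the changes you need inside it, exit, and then commit the container as a new image.

$ docker run -it ubuntu:22.04 /bin/bash # open a shell in a new container
$ docker ps -a # find the container ID after exiting
$ docker commit <container id> myuser/custom-ubuntu:1.0 # save it as a new image
$ docker images # the new image now appears locally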

13. The Kubernetes Client

The official Kubernetes client is kubectl: a command-line tool for interacting with the
Kubernetes API. kubectl can be used to manage most Kubernetes objects, such
as Pods, ReplicaSets, and Services. kubectl can also be used to explore and verify
the overall health of the cluster.
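
For example, a few common exploration and health checks (the node name is illustrative):

$ kubectl version # client and server versions
$ kubectl get nodes # list the cluster's nodes and their status
$ kubectl describe node node-1 # per-node health and resource details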

14. Pods in Kubernetes

A Pod represents a collection of application containers and volumes running in
the same execution environment. Pods, not containers, are the smallest deployable
artifact in a Kubernetes cluster. This means all of the containers in a Pod always
land on the same machine.

Each container within a Pod runs in its own cgroup, but they share a number
of Linux namespaces. Applications running in the same Pod share the same IP
address and port space (network namespace), have the same hostname (UTS
namespace), and can communicate using native interprocess communication
channels over System V IPC or POSIX message queues (IPC namespace).
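
A minimal sketch showing the shared network namespace (the pod name, container names, and images are illustrative): the sidecar container can reach nginx on localhost because both containers share one IP address and port space.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
EOF
$ kubectl exec shared-demo -c sidecar -- wget -qO- http://localhost:80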

15. Describe annotations in Kubernetes.

Annotations are key/value pairs designed to hold non-identifying information
that can be leveraged by tools and libraries.
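
For example, annotating a pod with a non-identifying note (the annotation key and value are illustrative):

$ kubectl annotate pod mypod1 example.com/build="ci-pipeline-42"
$ kubectl describe pod mypod1 # the annotation appears under Annotations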

16. Articulate the purpose of a load balancer used in Kubernetes.

Load balancing is a technique for distributing incoming traffic across different
backend servers and ensuring that the application is accessible to consumers.
In Kubernetes, all incoming traffic is routed to a single IP address on the load
balancer, which is a method to expose your service to the outside world and send
the traffic to a specific pod (through a service) using a round-robin algorithm.
Even if a pod goes down, load balancers are alerted, and traffic is not directed to
that unavailable pod. As a result, Kubernetes load balancers are in charge of
distributing a set of tasks (incoming traffic) to the pods.
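
A minimal sketch, assuming the cluster runs on a cloud provider that can provision the load balancer (the deployment and service names are illustrative):

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer --name=nginx-lb
$ kubectl get service nginx-lb # EXTERNAL-IP is the single entry point for incoming traffic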

17. Does a rolling update with StatefulSet replicas=1 make sense?

No. Because there is only 1 replica, any change to the StatefulSet would result
in an outage, since a rolling update of a StatefulSet needs to tear down one (or
more) old pods before replacing them. Even with 2 replicas, a rolling update would
create the second pod, but it would not succeed: the persistent disk is locked by the
first (old) running pod, and the rolling update does not delete the first pod in time to
release the lock on the disk for the second pod to use. If there is only one replica,
the rolling update goes 1 -> 0 -> 1. If the app can run with multiple identical instances
concurrently, use a Deployment and roll 1 -> 2 -> 1 instead.


18. If a pod exceeds its memory “limit” what signal is sent to the process?

SIGKILL, which immediately terminates the container; a new one is spawned with an
OOM error. The OS, if using cgroup-based containerization (Docker, rkt, etc.), does
the OOM killing. Kubernetes simply sets the cgroup limits but is not ultimately
responsible for killing the processes. By contrast, on a normal shutdown `SIGTERM` is
sent to PID 1 and Kubernetes waits for `terminationGracePeriodSeconds` (30 seconds
by default) before sending `SIGKILL`; you can change that time with
terminationGracePeriodSeconds in the pod spec. As long as your container will
eventually exit, it should be fine to have a long grace period. If you want a graceful
restart, it would have to be done inside the pod. If you don't want the process killed,
then you shouldn't set a memory `limit` on the pod, and there is no way to disable it
for the whole node. Also, when the liveness probe fails, the container receives
SIGTERM and then SIGKILL after the grace period.
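
A minimal sketch of a pod with a memory limit and an explicit grace period (the names and values are illustrative); if the container exceeds 128Mi it is OOM-killed with SIGKILL and the pod status shows OOMKilled:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: limited-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "128Mi"
EOF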

19. Which command do you use to create a new swarm?

docker swarm init --advertise-addr <MANAGER-IP>


PART B

1. Docker architecture

Docker uses a client-server architecture. The Docker client talks to the
Docker daemon, which does the heavy lifting of building, running, and distributing
your Docker containers. The Docker client and daemon can run on the same system,
or you can connect a Docker client to a remote Docker daemon. The Docker client
and daemon communicate using a REST API, over UNIX sockets or a network
interface. Another Docker client is Docker Compose, which lets you work with
applications consisting of a set of containers.

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker
objects such as images, containers, networks, and volumes. A daemon can also
communicate with other daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with
Docker. When you use commands such as docker run, the client sends these
commands to dockerd, which carries them out. The docker command uses the
Docker API. The Docker client can communicate with more than one daemon.
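
For example, pointing the client at a remote daemon (the host addresses are illustrative):

$ docker -H ssh://user@remote-host ps # talk to a remote daemon over SSH
$ DOCKER_HOST=tcp://10.0.0.5:2375 docker images # or select the daemon via an environment variable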

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac, Windows or Linux
environment that enables you to build and share containerized applications and
microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker
client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential
Helper.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone
can use, and Docker is configured to look for images on Docker Hub by default. You
can even run your own private registry.

When you use the docker pull or docker run commands, the required images are
pulled from your configured registry. When you use the docker push command, your
image is pushed to your configured registry.
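
A minimal sketch of the pull/push flow (the private registry address is illustrative):

$ docker pull ubuntu # pulled from Docker Hub by default
$ docker tag ubuntu registry.example.com/team/ubuntu:1.0
$ docker push registry.example.com/team/ubuntu:1.0 # pushed to your configured registry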

Docker objects

When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of those
objects.

Images

An image is a read-only template with instructions for creating a Docker container.
Often, an image is based on another image, with some additional customization. For
example, you may build an image which is based on the ubuntu image, but installs
the Apache web server and your application, as well as the configuration details
needed to make your application run.
You might create your own images or you might only use those created by others
and published in a registry. To build your own image, you create a Dockerfile with a
simple syntax for defining the steps needed to create the image and run it. Each
instruction in a Dockerfile creates a layer in the image. When you change the
Dockerfile and rebuild the image, only those layers which have changed are rebuilt.
This is part of what makes images so lightweight, small, and fast, when compared to
other virtualization technologies.
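
A minimal sketch of the ubuntu-plus-Apache example described above (the application path and image tag are illustrative; each instruction produces one layer):

$ cat > Dockerfile <<'EOF'
# base image; every instruction below adds a layer on top of it
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y apache2
# copy the application into Apache's document root
COPY ./app /var/www/html
CMD ["apachectl", "-D", "FOREGROUND"]
EOF
$ docker build -t my-apache-app .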
Containers

A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one
or more networks, attach storage to it, or even create a new image based on its
current state.

By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container’s network, storage, or other
underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide
to it when you create or start it. When a container is removed, any changes to its
state that are not stored in persistent storage disappear.

Example docker run command

The following command runs an ubuntu container, attaches interactively to your local
command-line session, and runs /bin/bash.

$ docker run -i -t ubuntu /bin/bash

2. Creating a Compute Engine Instance

A container is probably the best way to release and put a microservice into
production. To create a container in GCP, we can construct a new Compute Engine
instance with one of the container-optimized OSs. This is a family of OSs optimized
for running containers in Compute Engine. For example, CoreOS is one of these
operating systems.

The following is one of the ways to create a container application in GCP.

1. Connect to our Google Cloud Platform instance and select the
project practicaldevopsgcpcli.

2. Click the board Compute Engine, which opens the page to manage our Compute
Engine instances.

3. This opens the page of our Compute Engine. We must now create a new instance,
based on one of the container-optimized OSs. Select the Create Instance button from
the toolbar.

4. In the page for creating the instance, type the
name practicaldevopscontainerinstance and check the Container box. This changes
the default OS from Debian to one that is container-optimized.

5. In the container image text box, we can identify the Docker image we want to use.
For our test, we can use a busybox image. In the text box, we must insert the
following string: gcr.io/google-containers/busybox.

6. Click the Create button and then create the new Compute Engine
instance with the Docker image.

We can access the new instance just by clicking the name. This opens the details of
the instance. Then scroll down to Remote access and click SSH. This opens a new
browser window for the instance with Docker and our image. We can see the image
actually installed, with the command docker images, when we execute the command
in the instance.
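
The same instance can also be created from the command line; a minimal sketch with the gcloud CLI (the zone is illustrative):

$ gcloud compute instances create-with-container practicaldevopscontainerinstance --zone=us-central1-a --container-image=gcr.io/google-containers/busybox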

Creating a Compute Engine instance in which we create our container is a good
starting point. Google offers a private registry, and we can upload our images to
the registry and use that in our instance. Using the Google registry, we can put in
place our CI/CD basic systems. For example, we can use Jenkins to create the
image and put it in the registry.

Using a Compute Engine instance to create our Docker image does have some
limitations:

• It is possible to deploy only one container in the VM.

• It is possible only to use a container-optimized OS to create the instance.

In the Compute Engine, an image from the Docker Hub registry can be used as
well; however, there is a limit of one container per VM. If we want to design a
microservice application, we probably need more than one container per VM.
To do so, Google offers another service called Kubernetes Engine.

3. Visualize the labels applied for the deployment of a project and explain the
concept of label creation and selectors.

Refer: Brendan Burns, Joe Beda, Kelsey Hightower, Kubernetes: Up and Running:
Dive into the Future of Infrastructure, O'Reilly Media (2019), from page 92.
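
Since this section only points to the book, here is a minimal sketch of label creation and selectors (the deployment, pod, and label names are illustrative):

$ kubectl run nginx --image=nginx --labels="app=web,env=prod" # attach labels at creation
$ kubectl label pod nginx tier=frontend # add a label afterwards
$ kubectl get pods --selector="env=prod" # equality-based selector
$ kubectl get pods -l 'env in (prod, staging)' # set-based selector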


4. Explain the components in the architecture of Kubernetes.
Different components are:

Master Node: The master node is the first and most important component in the
Kubernetes cluster, and it is responsible for cluster administration. It serves as the
starting point for all administrative tasks. The cluster may include more than one
master node to ensure fault tolerance.

API Server: The API server is the entry point for any REST commands used to
operate the cluster.

Scheduler: The scheduler schedules the slave nodes' tasks. It keeps track of
how much of its resources each slave node uses. It is in charge of allocating the workload.

ETCD: etcd writes values and stores the cluster's configuration information. To accept
commands and function, it communicates with most of the other components. It also
takes care of network rules and port forwarding.

Worker/slave nodes: Worker/slave nodes are another important component that
provides all the services needed to handle container networking, connect with the
master node, and assign resources to scheduled containers.

Kubelet: It obtains a Pod's configuration from the API server and verifies that the
containers mentioned are up and running.
Kube-proxy: The Kube-proxy is a component of the worker nodes; it runs on each
node. It forwards TCP/UDP packets across back-end network services.

Docker Container: Each worker node has a Docker container that executes the
defined pods.

Pods: A pod is a collection of one or more containers that execute logically on the
same node.

5. Suppose there is an MNC that has a highly distributed system that
comprises a huge variety of data clouds, a large number of employees,
and multiple virtual machines.
a. Share your thoughts about how such an MNC can manage consistency
in the work with the help of Kubernetes?
b. Explain the role of Cloud Controller Manager.

Many IT organizations run numerous containers with multiple tasks across numerous
nodes all over the world in an evenly distributed manner.

As an MNC, they can utilize Kubernetes to gain agility, top-notch capabilities, and
DevOps practices for their cloud-based applications. Using the Kubernetes platform,
they can schedule workloads across this architecture and also get support for various
container formats. Ultimately, it solves their issue of maintaining work consistency.

Cloud Controller Manager:

The Cloud Controller Manager has certain important roles in maintaining the
cloud services residing in Kubernetes. It plays a significant role in routing the
network, maintaining consistent storage, and managing communication with the
pre-existing cloud-based services. Further, it also helps in abstracting the
cloud-specific code from the primary Kubernetes code.

It is categorized into various types of cloud containers, and each container
can be used on the basis of a particular Cloud Controller Manager platform.

Further, it permits cloud vendors to develop Kubernetes code. It is also
ensured that the Kubernetes code can be organized and deployed without
depending on any particular Cloud Controller Manager platform.

In order to do so, the cloud vendors first take time to develop a
specific code. After developing the code, these vendors connect it with the
Kubernetes Cloud Controller Manager while running Kubernetes.
6. Write the Kubectl commands to deploy a feature with zero downtime in
Kubernetes.

By default, a Deployment in Kubernetes uses RollingUpdate as its strategy. Let's
have an example that creates a deployment in Kubernetes:

kubectl run nginx --image=nginx # creates a deployment

$ kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           7s

Now let's assume we are going to update the nginx image:

kubectl set image deployment nginx nginx=nginx:1.15 # updates the image

Replica sets:

$ kubectl get replicasets # get replica sets
NAME               DESIRED   CURRENT   READY   AGE
nginx-65899c769f   0         0         0       7m
nginx-6c9655f5bb   1         1         1       13s

One more replica set was added and then the other replica set was brought
down.

kubectl rollout status deployment nginx # check the status of a deployment rollout

kubectl rollout history deployment nginx # check the revisions in a deployment

$ kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION   CHANGE-CAUSE
1          <none>
2          <none>
7. Explain the lifecycle of a Docker container.
Docker is a containerization platform for developing, shipping, and running
applications inside containers. We can deploy many containers simultaneously on a
given host. Containers are very fast and boot up quickly because, unlike virtual
machines, they don't need the extra load of a hypervisor; they run directly within the
host machine's kernel.

A Docker container passes through different stages, which are known as the
Docker container lifecycle. Some of the states are:

 Created: A container that has been created but not started


 Running: A container running with all its processes
 Paused: A container whose processes have been paused
 Stopped: A container whose processes have been stopped
 Deleted: A container in a dead state

Commands in Docker Container Lifecycle Management

Managing the states of the Docker containers is called Docker container
lifecycle management. For managing the Docker lifecycle, we have some common
commands, which are explained below.

Create Containers

Using the docker create command will create a new Docker container with the
specified docker image.

$ docker create --name <container name> <image name>

Start Container
To start a stopped container, we can use the docker start command.

$ docker start <container name>

Run Container

The docker run command does the work of both the “docker create” and “docker
start” commands. This command will create a new container and run the image in the
newly created container.

$ docker run -it --name <container name> <image name>

Pause Container

If we want to pause the processes running inside the container, we can use the
“docker pause” command.

$ docker pause <container name>

To unpause the container, use the “docker unpause” command.

$ docker unpause <container name>

Stop Container

Stopping a running Container means to stop all the processes running in that
Container. Stopping does not mean killing or ending the process.

$ docker stop <container name>

A stopped container can be brought back to the start state, which means all the
processes inside the container will start again. When we run the docker stop command,
the main process inside the container receives a SIGTERM signal.

In our case, 4 containers are running, which you can see using the docker
ps command.

To stop all the running containers we can use the following command:
$ docker stop $(docker container ls -aq)

Delete Container

Removing or deleting the container means destroying all the processes running
inside the container and then deleting the container. It is preferable to destroy a
container only when it is in the stopped state, instead of forcefully destroying a
running container.

$ docker stop <container name>

$ docker rm <container name>

We can delete or remove all containers with a single command. In our
example, there are 4 containers (not necessarily running), which you can see using
the docker ps -a command.

We can see there are 4 containers that are not in the running state. Now we will
delete all of them using the single command given below:

$ docker rm $(docker ps -aq)

Kill Container

Kills one or more running containers.

$ docker kill <container name>
