
YAML, Ansible, and Azure Kubernetes Service

Ansible is an open-source tool that automates cloud provisioning, configuration management, and application deployments. Using Ansible you can provision virtual machines, containers, networks, and complete cloud infrastructures. In addition, Ansible allows you to automate the deployment and configuration of resources in your environment.

Ansible includes a suite of Ansible modules that can be executed directly on remote hosts or via
playbooks. Users can also create their own modules. Modules can be used to control system
resources - such as services, packages, or files - or execute system commands.

For interacting with Azure services, Ansible includes a suite of Ansible cloud modules that
provides the tools to easily create and orchestrate your infrastructure on Azure.

YAML is a human-readable data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for "yet another markup language" or "YAML ain't markup language" (a recursive acronym).

YAML in Ansible

Ansible Playbooks are used to orchestrate IT processes. A playbook is a YAML file containing one or more plays, and is used to define the desired state of a system.

Each play can run one or more tasks, and each task invokes
an Ansible module. Modules are used to accomplish automation
tasks in Ansible. Ansible modules can be written in any language
that can return JSON, such as Ruby, Python, or bash.

An Ansible Playbook consists of maps and lists. To create a playbook, start a YAML list that names the play, and then lists tasks in a sequence. Remember that indentation is not an indication of logical inheritance. Think of each line as a YAML data type (a list or a map).
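
For instance, the following minimal sketch is a complete playbook with one play and one task (the play name, host group, and package shown here are illustrative, not taken from this guide):

---
# A play: a map naming the target hosts and listing tasks
- name: Example play
  hosts: webservers          # assumed inventory group
  tasks:
    # A task: invokes the ansible.builtin.package module
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present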

By using YAML templates, Ansible users can program repetitive tasks to happen automatically without having to learn an advanced programming language. Developers can also use the ansible-lint command, a YAML linter for Ansible Playbooks, to identify mistakes so errors don't occur during a critical stage of operation.

With the introduction of Ansible Lightspeed with IBM Watson Code Assistant, a generative AI service, developers can create Ansible automation content more efficiently. Users can enter a task request in plain English and get clean and compliant YAML code recommendations for automation tasks that are then used to create Ansible Playbooks.

Microsoft Azure Guide


Important

Red Hat Ansible Automation Platform will soon be available on Microsoft Azure. Sign up to
preview the experience.
Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you
the tools to easily create and orchestrate infrastructure on the Microsoft Azure Cloud.

Requirements

Using the Azure Resource Manager modules requires having specific Azure SDK modules
installed on the host running Ansible.

$ pip install 'ansible[azure]'


If you are running Ansible from source, you can install the dependencies from the root
directory of the Ansible repo.

$ pip install .[azure]


You can also directly run Ansible in Azure Cloud Shell, where Ansible is pre-installed.

Authenticating with Azure

Using the Azure Resource Manager modules requires authenticating with the Azure API.
You can choose from two authentication strategies:

- Active Directory Username/Password
- Service Principal Credentials

Follow the directions for the strategy you wish to use, then proceed to Providing Credentials to Azure Modules for instructions on how to actually use the modules and authenticate with the Azure API.

Using Service Principal

There is now a detailed official tutorial describing how to create a service principal.

After stepping through the tutorial you will have:

- Your Client ID, which is found in the "client id" box on the "Configure" page of your application in the Azure portal.
- Your Secret key, generated when you created the application. You cannot show the key after creation. If you lost the key, you must create a new one on the "Configure" page of your application.
- And finally, a tenant ID. It is a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
Using Active Directory Username/Password

To create an Active Directory username/password:

- Connect to the Azure Classic Portal with your admin account.
- Create a user in your default AAD. You must NOT activate Multi-Factor Authentication.
- Go to Settings - Administrators.
- Click on Add and enter the email of the new user.
- Check the checkbox of the subscription you want to test with this user.
- Log in to the Azure portal with this new user to change the temporary password to a new one. You will not be able to use the temporary password for OAuth login.
Providing Credentials to Azure Modules

The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible
AWX or Jenkins, you will most likely want to use environment variables. For local
development you may wish to store your credentials in a file within your home directory.
And of course, you can always pass credentials as parameters to a task within a playbook.
The order of precedence is parameters, then environment variables, and finally a file found in
your home directory.

Using Environment Variables



To pass service principal credentials through the environment, define the following variables:

- AZURE_CLIENT_ID
- AZURE_SECRET
- AZURE_SUBSCRIPTION_ID
- AZURE_TENANT

To pass Active Directory username/password through the environment, define the following variables:

- AZURE_AD_USER
- AZURE_PASSWORD
- AZURE_SUBSCRIPTION_ID

To pass Active Directory username/password in ADFS through the environment, define the following variables:

- AZURE_AD_USER
- AZURE_PASSWORD
- AZURE_CLIENT_ID
- AZURE_TENANT
- AZURE_ADFS_AUTHORITY_URL

AZURE_ADFS_AUTHORITY_URL is optional. It is necessary only when you have your own ADFS authority, like https://yourdomain.com/adfs.

Storing in a File

When working in a development environment, it may be desirable to store credentials in a file. The modules will look for credentials in $HOME/.azure/credentials. This file is an ini style file. It will look as follows:

[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Note

If your secret values contain non-ASCII characters, you must URL Encode them to avoid
login errors.
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the environment or pass a profile parameter to specify a specific profile.
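
For example, a task could select a non-default profile like this (a hedged sketch; the profile name and resource group values are hypothetical):

- name: Create a resource group using the "testing" credentials profile
  azure.azcollection.azure_rm_resourcegroup:
    name: myResourceGroup
    location: eastus
    profile: testing          # section name in $HOME/.azure/credentials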

Passing as Parameters

If you wish to pass credentials as parameters to a task, use the following parameters for
service principal:

- client_id
- secret
- subscription_id
- tenant

Or, pass the following parameters for Active Directory username/password:

- ad_user
- password
- subscription_id

Or, pass the following parameters for ADFS username/password:

- ad_user
- password
- client_id
- tenant
- adfs_authority_url

adfs_authority_url is optional. It is necessary only when you have your own ADFS authority, like https://yourdomain.com/adfs.
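
For the service principal variant, a task might look like the following sketch (the vault_* variable names are hypothetical placeholders, ideally kept in Ansible Vault rather than plain text):

- name: Create a resource group, passing service principal credentials as parameters
  azure.azcollection.azure_rm_resourcegroup:
    name: myResourceGroup
    location: eastus
    client_id: "{{ vault_client_id }}"
    secret: "{{ vault_secret }}"
    subscription_id: "{{ vault_subscription_id }}"
    tenant: "{{ vault_tenant }}"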

Other Cloud Environments

To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud,
Azure US Government Cloud, Azure Stack), pass the “cloud_environment” argument to
modules, configure it in a credential profile, or set the
“AZURE_CLOUD_ENVIRONMENT” environment variable. The value is either a cloud
name as defined by the Azure Python SDK (for example, “AzureChinaCloud”,
“AzureUSGovernment”; defaults to “AzureCloud”) or an Azure metadata discovery URL
(for Azure Stack).
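
For example, a task targeting Azure China Cloud might look like this sketch (the resource group name and region are illustrative):

- name: Create a resource group in Azure China Cloud
  azure.azcollection.azure_rm_resourcegroup:
    name: myResourceGroup
    location: chinaeast
    cloud_environment: AzureChinaCloud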

Creating Virtual Machines

There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine
module. We can either create a storage account, network interface, security group and public
IP address and pass the names of these objects to the module as parameters, or we can let the
module do the work for us and accept the defaults it chooses.

Creating Individual Components

An Azure module is available to help you create a storage account, virtual network, subnet,
network interface, security group and public IP. Here is a full example of creating each of
these and passing the names to the azure.azcollection.azure_rm_virtualmachine module at the
end:

- name: Create storage account
  azure.azcollection.azure_rm_storageaccount:
    resource_group: Testing
    name: testaccount001
    account_type: Standard_LRS

- name: Create virtual network
  azure.azcollection.azure_rm_virtualnetwork:
    resource_group: Testing
    name: testvn001
    address_prefixes: "10.10.0.0/16"

- name: Add subnet
  azure.azcollection.azure_rm_subnet:
    resource_group: Testing
    name: subnet001
    address_prefix: "10.10.0.0/24"
    virtual_network: testvn001

- name: Create public ip
  azure.azcollection.azure_rm_publicipaddress:
    resource_group: Testing
    allocation_method: Static
    name: publicip001

- name: Create security group that allows SSH
  azure.azcollection.azure_rm_securitygroup:
    resource_group: Testing
    name: secgroup001
    rules:
      - name: SSH
        protocol: Tcp
        destination_port_range: 22
        access: Allow
        priority: 101
        direction: Inbound

- name: Create NIC
  azure.azcollection.azure_rm_networkinterface:
    resource_group: Testing
    name: testnic001
    virtual_network: testvn001
    subnet: subnet001
    public_ip_name: publicip001
    security_group: secgroup001

- name: Create virtual machine
  azure.azcollection.azure_rm_virtualmachine:
    resource_group: Testing
    name: testvm001
    vm_size: Standard_D1
    storage_account: testaccount001
    storage_container: testvm001
    storage_blob: testvm001.vhd
    admin_username: admin
    admin_password: Password!
    network_interfaces: testnic001
    image:
      offer: CentOS
      publisher: OpenLogic
      sku: '7.1'
      version: latest

Each of the Azure modules offers a variety of parameter options. Not all options are
demonstrated in the above example. See each individual module for further details and
examples.

Creating a Virtual Machine with Default Options

If you simply want to create a virtual machine without specifying all the details, you can do
that as well. The only caveat is that you will need a virtual network with one subnet already
in your resource group. Assuming you have a virtual network already with an existing subnet,
you can run the following to create a VM:

azure.azcollection.azure_rm_virtualmachine:
  resource_group: Testing
  name: testvm10
  vm_size: Standard_D1
  admin_username: chouseknecht
  ssh_password_enabled: false
  ssh_public_keys: "{{ ssh_keys }}"
  image:
    offer: CentOS
    publisher: OpenLogic
    sku: '7.1'
    version: latest
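
The ssh_keys variable referenced above is not defined in this guide; as a sketch, it would typically be a list of path/key_data maps, per the module's documented format (the path and key below are placeholders):

vars:
  ssh_keys:
    - path: /home/chouseknecht/.ssh/authorized_keys
      key_data: "ssh-rsa AAAA... replace with your public key ..."
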
Creating a Virtual Machine in Availability Zones

If you want to create a VM in an availability zone, consider the following (a sketch follows this list):

- Both the OS disk and data disks must be 'managed disks', not 'unmanaged disks'.
- When creating a VM with the azure.azcollection.azure_rm_virtualmachine module, you need to explicitly set the managed_disk_type parameter to change the OS disk to a managed disk. Otherwise, the OS disk becomes an unmanaged disk.
- When you create a data disk with the azure.azcollection.azure_rm_manageddisk module, you need to explicitly specify the storage_account_type parameter to make it a managed disk. Otherwise, the data disk will be an unmanaged disk.
- A managed disk does not require a storage account or a storage container, unlike an unmanaged disk. In particular, note that once a VM is created on an unmanaged disk, an unnecessary storage container named "vhds" is automatically created.
- When you create an IP address with the azure.azcollection.azure_rm_publicipaddress module, you must set the sku parameter to standard. Otherwise, the IP address cannot be used in an availability zone.
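
Putting those points together, a hedged sketch of the zone-related parameters might look like this (resource names and the VM size are illustrative; check parameter support against your azure.azcollection version):

- name: Create a Standard SKU public IP (required for availability zones)
  azure.azcollection.azure_rm_publicipaddress:
    resource_group: Testing
    name: publicip001
    allocation_method: Static
    sku: standard

- name: Create a VM in zone 1 with a managed OS disk
  azure.azcollection.azure_rm_virtualmachine:
    resource_group: Testing
    name: testvm001
    vm_size: Standard_DS1_v2
    managed_disk_type: Premium_LRS    # makes the OS disk a managed disk
    zones: [1]
    admin_username: chouseknecht
    ssh_password_enabled: false
    ssh_public_keys: "{{ ssh_keys }}"
    image:
      offer: CentOS
      publisher: OpenLogic
      sku: '7.1'
      version: latest
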
Dynamic Inventory Script

If you are not familiar with Ansible’s dynamic inventory scripts, check out Intro to Dynamic
Inventory.

The Azure Resource Manager inventory script is called azure_rm.py. It authenticates with the
Azure API exactly the same as the Azure modules, which means you will either define the
same environment variables described above in Using Environment Variables, create
a $HOME/.azure/credentials file (also described above in Storing in a File), or pass command
line parameters. To see available command line options execute the following:

$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
$ ./azure_rm.py --help
As with all dynamic inventory scripts, the script can be executed directly, passed as a
parameter to the ansible command, or passed directly to ansible-playbook using the -i option.
No matter how it is executed the script produces JSON representing all of the hosts found in
your Azure subscription. You can narrow this down to just hosts found in a specific set of
Azure resource groups, or even down to a specific host.

For a given host, the inventory script provides the following host variables:

{
  "ansible_host": "XXX.XXX.XXX.XXX",
  "computer_name": "computer_name2",
  "fqdn": null,
  "id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
  "image": {
    "offer": "CentOS",
    "publisher": "OpenLogic",
    "sku": "7.1",
    "version": "latest"
  },
  "location": "westus",
  "mac_address": "00-00-5E-00-53-FE",
  "name": "object-name",
  "network_interface": "interface-name",
  "network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
  "network_security_group": null,
  "network_security_group_id": null,
  "os_disk": {
    "name": "object-name",
    "operating_system_type": "Linux"
  },
  "plan": null,
  "powerstate": "running",
  "private_ip": "172.26.3.6",
  "private_ip_alloc_method": "Static",
  "provisioning_state": "Succeeded",
  "public_ip": "XXX.XXX.XXX.XXX",
  "public_ip_alloc_method": "Static",
  "public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
  "public_ip_name": "object-name",
  "resource_group": "galaxy-production",
  "security_group": "object-name",
  "security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
  "tags": {
    "db": "mysql"
  },
  "type": "Microsoft.Compute/virtualMachines",
  "virtual_machine_size": "Standard_DS4"
}
Host Groups

By default, hosts are grouped by:

- azure (all hosts)
- location name
- resource group name
- security group name
- tag key
- tag key_value
- os_disk operating_system_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables
or creating an azure_rm.ini file in your current working directory.

NOTE: An .ini file will take precedence over environment variables.



NOTE: The name of the .ini file is the basename of the inventory script (in other words,
‘azure_rm’) with a ‘.ini’ extension. This allows you to copy, rename and customize the
inventory script and have matching .ini files all in the same directory.

Control grouping using the following variables defined in the environment:

- AZURE_GROUP_BY_RESOURCE_GROUP=yes
- AZURE_GROUP_BY_LOCATION=yes
- AZURE_GROUP_BY_SECURITY_GROUP=yes
- AZURE_GROUP_BY_TAG=yes
- AZURE_GROUP_BY_OS_FAMILY=yes

Select hosts within specific resource groups by assigning a comma separated list to:

- AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b

Select hosts for specific tag keys by assigning a comma separated list of tag keys to:

- AZURE_TAGS=key1,key2,key3

Select hosts for specific locations by assigning a comma separated list of locations to:

- AZURE_LOCATIONS=eastus,eastus2,westus

Or, select hosts for specific tag key:value pairs by assigning a comma separated list of key:value pairs to:

- AZURE_TAGS=key1:value1,key2:value2

If you don't need the powerstate, you can improve performance by turning off powerstate fetching:

- AZURE_INCLUDE_POWERSTATE=no
A sample azure_rm.ini file is included alongside the inventory script. An .ini file will contain the following:

[azure]
# Control which resource groups are included. By default, all resource groups are included.
# Set resource_groups to a comma separated list of resource group names.
#resource_groups=

# Control which tags are included. Set tags to a comma separated list of keys or key:value pairs.
#tags=

# Control which locations are included. Set locations to a comma separated list of locations.
#locations=

# Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
# Valid values: yes, no, true, false, True, False, 0, 1.
include_powerstate=yes

# Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
Examples

Here are some examples using the inventory script:

# Download the inventory script
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py

# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"

# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping

# Execute ping on all Linux instances
$ ansible -i azure_rm.py linux -m ping

# Use the inventory script to print instance specific information
$ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty

# Use the inventory script with ansible-playbook
$ ansible-playbook -i ./azure_rm.py test_playbook.yml
Here is a simple playbook to exercise the Azure inventory script:

- name: Test the inventory script
  hosts: azure
  connection: local
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
You can execute the playbook with something like:

$ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml


Disabling certificate validation on Azure endpoints

When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable
certificate validation for Azure endpoints in the Azure modules. This is not a recommended
security practice, but may be necessary when the system CA store cannot be altered to
include the necessary CA certificate. Certificate validation can be controlled by setting the
"cert_validation_mode" value in a credential profile, through the "AZURE_CERT_VALIDATION_MODE" environment variable, or by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value, which takes precedence over the environment value.
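
As a sketch, disabling validation for a single task would look like this (the resource group and location are placeholders; use only when the CA store cannot be fixed):

- name: Create a resource group without certificate validation (not recommended)
  azure.azcollection.azure_rm_resourcegroup:
    name: myResourceGroup
    location: eastus
    cert_validation_mode: ignore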

Get Started: Configure Ansible using Azure Cloud Shell



Get started with Ansible by configuring Ansible on Azure and creating a basic Azure
resource group.

Ansible is an open-source product that automates cloud provisioning, configuration management, and application deployments. Using Ansible you can provision virtual machines, containers, networks, and complete cloud infrastructures. Also, Ansible allows you to automate the deployment and configuration of resources in your environment.

This article describes getting started with Ansible from the Azure Cloud
Shell environment.

Configure your environment


 Azure subscription: If you don't have an Azure subscription, create a free
account before you begin.
 Configure Azure Cloud Shell - If you're new to Azure Cloud Shell,
see Quickstart for Bash in Azure Cloud Shell.

1. If you already have a Cloud Shell session open, you can skip to the next
section.

2. Browse to the Azure portal


13 yaml ansible -- Azure Kubernetes Service

3. If necessary, log in to your Azure subscription and change the Azure


directory.

4. Open Cloud Shell.

5. If you haven't previously used Cloud Shell, configure the environment


and storage settings.

6. Select the command-line environment.

Automatic credential configuration

When signed into the Cloud Shell, Ansible authenticates with Azure to manage
infrastructure without any extra configuration.

When working with multiple subscriptions, specify the subscription Ansible uses by
exporting the AZURE_SUBSCRIPTION_ID environment variable.

To list all of your Azure subscriptions, run the following command:

az account list

Using your Azure subscription ID, set the AZURE_SUBSCRIPTION_ID environment variable as follows:

export AZURE_SUBSCRIPTION_ID=<your-subscription-id>

Test Ansible installation

You have now configured Ansible for use within Cloud Shell.

This section shows how to create a test resource group within your new Ansible
configuration. If you don't need to do that, you can skip this section.

Create an Azure resource group

1. Save the following code as create_rg.yml.

---
- hosts: localhost
  connection: local
  tasks:
    - name: Creating resource group - "{{ name }}"
      azure_rm_resourcegroup:
        name: "{{ name }}"
        location: "{{ location }}"
      register: rg
    - debug:
        var: rg

2. Run the playbook using ansible-playbook. Replace the placeholders with the name and location of the resource group to be created.

ansible-playbook create_rg.yml --extra-vars "name=<resource_group_name> location=<resource_group_location>"

Key points:

- Because of the register variable and debug section of the playbook, the results display when the command finishes.

Delete an Azure resource group


1. Save the following code as delete_rg.yml.

---
- hosts: localhost
  tasks:
    - name: Deleting resource group - "{{ name }}"
      azure_rm_resourcegroup:
        name: "{{ name }}"
        state: absent
      register: rg
    - debug:
        var: rg

2. Run the playbook using the ansible-playbook command. Replace the placeholder with the name of the resource group to be deleted. All resources within the resource group will be deleted.

ansible-playbook delete_rg.yml --extra-vars "name=<resource_group>"

Key points:

- Because of the register variable and debug section of the playbook, the results display when the command finishes.

Most Frequently Asked Azure Kubernetes Service (AKS) Interview Questions

1. What is Azure Kubernetes Service (AKS)?
2. How does AKS compare to other container orchestration solutions?
3. How do you deploy applications to AKS?
4. What advantages does AKS provide?
5. What are the security and compliance considerations of using AKS?
6. How can you monitor your AKS deployments?
7. How do you scale an application running on AKS?
8. What challenges have you faced when working with AKS?
9. What services does AKS integrate with?
10. How does networking work in AKS?
11. What best practices should be followed when deploying to AKS?
12. How can you optimize workloads running in AKS?

What is Azure Kubernetes Service (AKS)?


Azure Kubernetes Service (AKS) is a managed Kubernetes offering
from Microsoft that helps reduce the complexity and operational
overhead of managing a Kubernetes cluster and provides a
production-ready environment to deploy and manage containerized
applications.
It allows users to quickly and easily create, manage, scale, and monitor
Kubernetes clusters on Azure while still maintaining full control over
their data, applications, and infrastructure.
AKS simplifies the deployment and maintenance of Kubernetes
clusters by providing users with access to a "single pane of glass" from
which they can manage all of their Kubernetes resources.
AKS also provides automated upgrades to the latest version of
Kubernetes and seamlessly integrates with other Azure services such
as Azure Container Registry for container image storage.
The following snippet is an example of creating a new Kubernetes cluster in Azure using AKS:

az aks create \
--resource-group myResourceGroup \
--name myK8sCluster \
--node-count 3 \
--generate-ssh-keys

How does AKS compare to other container orchestration solutions?
AKS (Azure Kubernetes Service) is a cloud-based container
orchestration solution offered by Microsoft Azure.
It provides a platform for deploying, managing, and scaling containers
in an automated and cost-effective way. Compared to other container
orchestration solutions such as Docker Swarm, AKS offers a
streamlined experience with an intuitive user interface, a tightly
integrated experience between applications and infrastructure, and
advanced features such as autoscaling, monitoring, and metering.
Additionally, AKS allows users to quickly deploy applications and
workloads without the need for complex configuration and setup.
To create an AKS cluster, you will first need to install the Azure CLI
(Command-Line Interface). Once this is done, you can create a
resource group and then create an AKS cluster with the following code
snippet:

az group create --name "myResourceGroup" --location "eastus"
az aks create --resource-group "myResourceGroup" --name "myAKSCluster" --node-count 1 --generate-ssh-keys

This will create an AKS cluster with one node, ready for you to begin
deploying applications. From there, you can specify more parameters
such as the number of nodes, SKU, network configuration, and more.
Finally, you can access the AKS dashboard to manage and monitor
your cluster usage.
Overall, AKS provides a comprehensive and robust solution for
running containerized applications in the cloud.
It is secure, reliable, and highly scalable, making it an ideal choice for
businesses of any size.

How do you deploy applications to AKS?
Deploying applications to an Azure Kubernetes Service (AKS) is a
straightforward process that requires a few steps. Firstly, you need to
ensure that the application is containerized and ready for deployment.
You can do this by creating a Docker image of the application and
pushing it to a container registry such as Docker Hub.
Once the image has been created, you can use the az aks create
command in the Azure CLI to create an AKS cluster. Then you can use
the az aks get-credentials command to pull down the credentials for
the cluster.
Once the cluster has been set up, you can use the kubectl create -f command to deploy the application to the cluster.
This command will install a service, replication controller, and pods
which will house the containers running your application.
You can use the following code snippet to deploy an application to an
AKS cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: my-application
          image: my.container.registry.io/my-application:latest

By following these steps, you can successfully deploy an application to an AKS cluster.

What advantages does AKS provide?
Azure Kubernetes Service (AKS) is a managed container orchestration
service for containerized applications.
As a managed service, Microsoft takes on responsibility for the
operation and maintenance of the infrastructure, ensuring high
availability and security of the clusters.
With AKS, you can manage and deploy container-based applications quickly while taking advantage of several features such as auto-scaling, self-healing, load balancing, and more.
The following code snippet shows how to create an AKS cluster in
Azure:
```
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --generate-ssh-keys
```

The main advantages of using AKS are the following:

1. Easy Deployment and Management: AKS simplifies deployment and management of highly available, secure clusters that are optimized for running Kubernetes workloads.

2. Cost Savings: AKS is cost-effective, reducing the costs associated with buying, renting, or leasing servers and other hardware for running Kubernetes workloads.

3. Security and Compliance: AKS integrates with Azure RBAC, providing secure access control to resources. It also supports LDAP integration and SAML authentication.

4. Scale and Resilience: AKS allows you to quickly scale up or down depending on your needs, making sure your applications are always running. Additionally, many features are built in that help protect the cluster from various types of failure.

5. Auto-healing: AKS's built-in auto-healing capabilities mean that Kubernetes applications can be automatically restarted if they crash or become unresponsive.

6. Monitoring and Logging: AKS integrates with Azure Monitor and Log Analytics, allowing you to fully monitor your Kubernetes resources.

What are the security and compliance considerations of using AKS?
Security and compliance considerations of using Azure Kubernetes
Service (AKS) are important to assess before implementation.
As with any public cloud technology, there are certain security
principles that should be kept in mind when using AKS.
Some of the key considerations include access control, identity
management, data protection, and networking.
Access control is a vital part of security when using AKS. AKS offers
role-based access control (RBAC), which allows users to assign access
rights to specific resources.
This helps to ensure that only those who need access to certain
components have it, while preventing malicious actors from gaining
access.
Identity management is also an important aspect of security when
using AKS. It is essential to ensure that each user has unique
credentials and can be authenticated.
Data protection is another important consideration when using AKS.
AKS offers features such as encryption at rest and encryption in transit,
which helps protect data stored in and passed through AKS.
Networking also plays an important role in the security of AKS. AKS
provides a secure network that ensures that only authorized traffic is
allowed in or out of the system.
Below is a code snippet that shows how to create an Azure Kubernetes
Service (AKS) cluster with security considerations in mind:

# Create a resource group
resourceGroupName="myResourceGroup"
az group create --name $resourceGroupName --location eastus

# Create an AKS cluster
clusterName="myCluster"
az aks create \
  --resource-group $resourceGroupName \
  --name $clusterName \
  --node-count 3 \
  --enable-rbac \
  --enable-private-cluster \
  --network-policy azure \
  --generate-ssh-keys \
  --no-wait

How can you monitor your AKS deployments?
Monitoring your AKS deployments can be achieved with a few simple
steps. First, you can use Azure Monitor to collect your pods and
containers logs. This will help you identify any errors or problems in
your deployment.
You can also use Grafana, a popular open source monitoring system,
to view performance information and metrics such as usage and
memory consumption. Additionally, Kubernetes has the ability to roll
out configuration changes and self-heal when unexpected issues arise.
To do this, you can set up an alert policy using the kube-prometheus
stack and configure it to detect and alert you on any changes in the
system. Lastly, you can use Azure PowerShell or the Azure CLI to
automate deployments and easily view all the resources associated
with your AKS cluster.
Using these tools together will give you a thorough understanding of
what's going on in your AKS system so that you can make the
necessary changes if needed.
An example code snippet to monitor your AKS deployments is shown below:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl logs --tail=100 <pod-name> --namespace <namespace>

This will list the last 100 lines of the logs for the given pod, which you can then analyze to determine any abnormality in your system.

How do you scale an application running on AKS?
Scaling an application running on Azure Kubernetes Service (AKS) can
be accomplished in a few simple steps.
First, you will need to increase the capacity of your cluster, which can
be done through the Azure portal.
Second, you will task the AKS API to increase the number of pods
available, which can be done via kubectl.
Lastly, you will need to modify your application deployment resources
and adjust the desired state of replicas in order to increase the
number of containers running.
To demonstrate, the following code snippet utilizes the kubectl CLI to
scale a deployment of nginx.

$ kubectl scale deployment nginx --replicas=6

This command will scale the deployment "nginx" to 6 replicas and thus
increase the number of containers running for that service. After
making sure that both the cluster and deployment is correctly scaled,
you will be able to use the application and handle higher traffic with
ease.

What challenges have you faced when working with AKS?
Working with Azure Kubernetes Service (AKS) can be challenging due
to its complexity.
To begin with, there are a variety of different APIs and architectures
that need to be understood before being able to effectively work with
AKS.
Additionally, many of the AKS related tasks have a steep learning
curve which makes it difficult to become productive quickly.
Furthermore, the AKS environment is highly complex and requires an
understanding of many different concepts such as networking,
storage, containers, and deployments.
In order to overcome these challenges, I had to invest in learning the
requisite concepts and technologies.
Once I was able to gain an understanding of the basics, I was then
able to utilize various development tools and frameworks to interact
with the AKS environment.
For example, I could use kubectl to interact with resources such as
deployments, services, nodes, and pods. I could also leverage Helm
charts to easily deploy applications and services into the AKS
environment.
By leveraging the automation capabilities of these tools, I was able to
drastically reduce the complexity of provisioning AKS resources.
Lastly, I had to understand and manage networking concepts such as
Ingress Controllers, Load Balancers, and Service Mesh.
This was important for implementing secure container
communications and routing traffic between microservices. To help
with this, I leveraged Azure CNI and Istio to create a reliable and
secure routing mesh.
Overall, working with AKS can be a challenging undertaking due to its complexity. However, once the core concepts and technologies have been mastered, it is possible to not only deploy applications into AKS but also to create secure and scalable solutions.

What services does AKS integrate with?
Microsoft Azure Kubernetes Service (AKS) integrates with a variety of
other services and solutions.
Depending on your specific needs, AKS can be integrated with
monitoring, logging, authentication, and identity management
services.
Additionally, AKS also has the ability to integrate with other cloud
services, such as Azure Machine Learning, as well as on-premise tools,
such as Jenkins, Helm, and more.
For example, you can integrate AKS with an identity provider in order
to authenticate users and grant them access to resources.
This can be done using the Azure Active Directory service, or any other
compatible identity provider.
You can also integrate AKS with monitoring services such as
Prometheus, Grafana, and Azure Monitor.
These services allow you to view and analyze the performance of your
clusters and nodes, helping you quickly diagnose any issues.
Finally, AKS also supports integration with external logging services
such as Elasticsearch.
This allows you to gather and store log data from your clusters, giving
you further insight into their performance.
Here's some simple code snippet to help you get started with
integrating AKS with other services:

# Authenticate users with Azure Active Directory
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad

# Integrate AKS with Azure Monitor (the monitoring add-on)
az aks enable-addons --addons monitoring --resource-group myResourceGroup --name myAKSCluster

# Send logs to a specific Log Analytics workspace
az aks enable-addons \
  --addons monitoring \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --workspace-resource-id <your-workspace-resource-id>

How does networking work in AKS?
Networking in Azure Kubernetes Service (AKS) is managed by the
underlying virtual networking infrastructure. Networking components
include:

- IP address management
- DNS services
- Load balancing
- Virtual Networks (VNETs)
- Network Security Groups (NSGs)

IP address management is done through the Azure Resource Manager, which allows you to create and manage IP addresses on a per-node basis. Additionally, namespaces can be defined and assigned to nodes, allowing for easy management.

DNS services are managed through Azure DNS, which allows for
automated name resolution of resources in AKS clusters.
Additionally, nodes can have their own hostnames assigned making it
easier to reach them.
Load balancing is handled using Azure's Load Balancer, which offers
multiple features such as traffic distribution, health checks and
performance tracking. Additionally, TCP, UDP and HTTP ports can be
opened and configured for incoming traffic.
Virtual Networks (VNETs) are used to group and contain AKS node
deployments.
This helps to ensure that nodes in the same network can communicate
with each other, but nodes outside of the network are unable to
communicate.
Finally, Network Security Groups (NSGs) are used to further enhance
security. These allow for control over which IP addresses and ports can
access AKS clusters, and also provides advanced filters and rules for
traffic.
An example code snippet that configures NSG rules for an AKS cluster's subnet is shown below:

# Create a new NSG (-Location is required)
$nsg = New-AzNetworkSecurityGroup -Name "myNSG" -ResourceGroupName "myRG" -Location "eastus"

# Add an NSG rule that allows inbound HTTP and save it to the NSG
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowHTTP" -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 80 | Set-AzNetworkSecurityGroup

# The NSG can then be associated with the AKS subnet, for example via
# Set-AzVirtualNetworkSubnetConfig -NetworkSecurityGroup $nsg on the cluster's virtual network.

What best practices should be followed when deploying to AKS?

When deploying to AKS (Azure Kubernetes Service), it is important to ensure that your environment is secure, reliable, and scalable. One way to do this is by taking advantage of the platform's built-in container orchestration capabilities. This includes using rolling updates, liveness probes, and readiness probes, which can be configured with a few lines of code, as in the sketch below.
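
As a hedged sketch, liveness and readiness probes are declared per container in the Pod spec (the /healthz and /ready paths and port 8080 are assumptions about the application, not AKS requirements):

containers:
  - name: my-application
    image: my.container.registry.io/my-application:latest
    livenessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready        # assumed readiness endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5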
First, you'll need to set up an AKS cluster with the desired compute
resources and number of nodes.
This can be done through the Azure portal or via the Azure CLI. Once
the cluster is created, you can deploy containers to it with the kubectl
command line utility.
When deploying applications, it is important to consider their resource
requirements.
To do this, use Kubernetes Resource Quotas, which allow you to limit
the amount of CPU, memory, and storage consumed by each
deployment. You can also leverage Kubernetes Horizontal Pod
Autoscaler to automatically scale deployment replicas up and down
based on resource usage.
When deploying applications to production, security should be a priority. Use Kubernetes Network Policies to control traffic between containers and enable secure communication between services; a minimal example follows this paragraph. You should also make sure to configure authentication and authorization for your applications.
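
For illustration, here is a minimal NetworkPolicy sketch that only admits traffic to backend Pods from frontend Pods (the labels and port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend           # assumed backend label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # assumed frontend label
      ports:
        - protocol: TCP
          port: 8080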
Finally, it is important to monitor application performance in
production.
To do this, use Prometheus or Azure Monitor for Containers, which
can be configured to track metrics such as memory usage, CPU
utilization, and response times.
In summary, deploying to AKS requires careful consideration of the environment's security, reliability, and scalability. By leveraging Kubernetes features such as resource quotas, horizontal pod autoscaler, and network policies, as well as monitoring tools like Prometheus and Azure Monitor for Containers, you can ensure your applications are running optimally.
Example Code Snippet:

# Create a resource quota limiting CPU, memory, and pod count
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myapp-quota
spec:
  hard:
    cpu: "4"
    memory: "8Gi"
    pods: "3"

How can you optimize workloads running in AKS?
Optimizing workloads running in Azure Kubernetes Service (AKS) can
be done in a few different ways.
First, it is important to analyze and understand the workloads that are
being deployed in AKS, as this will allow you to effectively identify any
performance issues. Once the workloads have been analyzed, one way
to address performance issues is by scaling the resources in the AKS
cluster.
This could involve scaling up the number of nodes, adjusting the size of the nodes, or both. Additionally, when configuring applications and services, one should consider leveraging the ecosystem of open source tools such as Helm, Prometheus, and Grafana.
By properly configuring these tools, it is possible to gain insights into
the performance of the cluster, proactively identify any potential
bottlenecks and take corrective action.
Additionally, you might consider orchestrating auto-scaling
capabilities of your workloads based on CPU/memory utilization
metrics by leveraging the horizontal pod autoscaling feature in
Kubernetes.
Finally, it is important to note that every workload is different and may
require different methods of optimization; therefore, experimentation
and testing are key in determining the most effective approach to
optimizing workloads running in AKS.
Here is a sample code snippet that can be used to set up auto-scaling
of an application running in Azure Kubernetes Service (AKS):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: <name-of-autoscaler>
spec:
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: <desired-level-of-utilization>
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <name-of-deployment>

Q1. How is Kubernetes different from Docker Swarm?

- Installation & Cluster Config: Kubernetes setup is very complicated, but once installed the cluster is robust. Docker Swarm installation is very simple, but the cluster is not robust.
- GUI: Kubernetes provides a GUI (the Kubernetes Dashboard). Docker Swarm has no GUI.
- Scalability: Kubernetes is highly scalable and scales fast. Docker Swarm is highly scalable and scales 5x faster than Kubernetes.
- Auto-scaling: Kubernetes can do auto-scaling. Docker Swarm cannot do auto-scaling.
- Load Balancing: Kubernetes needs manual intervention for load balancing traffic between different containers and pods. Docker Swarm does auto load balancing of traffic between containers in the cluster.
- Rolling Updates & Rollbacks: Kubernetes can deploy rolling updates and does automatic rollbacks. Docker Swarm can deploy rolling updates, but not automatic rollbacks.
- Data Volumes: Kubernetes can share storage volumes only with other containers in the same pod. Docker Swarm can share storage volumes with any other container.
- Logging & Monitoring: Kubernetes has in-built tools for logging and monitoring. Docker Swarm requires 3rd party tools like the ELK stack for logging and monitoring.

Q2. What is Kubernetes?



Fig 1: What is Kubernetes – Kubernetes Interview Questions


Kubernetes is an open-source container management tool that holds the responsibilities of container deployment, scaling and descaling of containers, and load balancing. Being Google's brainchild, it has excellent community support and works brilliantly with all the cloud providers. So, we can say that Kubernetes is not a containerization platform, but a multi-container management solution.

Q3. How is Kubernetes related to Docker?


It’s a known fact that Docker provides the lifecycle management of containers
and a Docker image builds the runtime containers. But, since these individual
containers have to communicate, Kubernetes is used. So, Docker builds the
containers and these containers communicate with each other via Kubernetes.
So, containers running on multiple hosts can be manually linked and
orchestrated using Kubernetes.

Q4. What is the difference between deploying applications on hosts and containers?

Fig 2: Deploying Applications On Hosts vs Containers – Kubernetes Interview Questions
Refer to the above diagram. The left side architecture represents deploying
applications on hosts. So, this kind of architecture will have an operating system
and then the operating system will have a kernel that will have various libraries
installed on the operating system needed for the application. So, in this kind of
framework you can have n number of applications and all the applications will share the libraries present in that operating system, whereas while deploying applications in containers the architecture is a little different.

This kind of architecture will have a kernel, and that kernel is the only thing common between all the applications. So, if there's a particular application that needs Java then only that particular application will get access to Java, and if there's another application that needs Python then only that particular application will have access to Python.

The individual blocks that you can see on the right side of the diagram are
basically containerized and these are isolated from other applications. So, the
applications have the necessary libraries and binaries isolated from the rest of
the system, and cannot be encroached by any other application.

Q5. What is Container Orchestration?


Consider a scenario where you have 5-6 microservices for an application. Now,
these microservices are put in individual containers, but won’t be able to
communicate without container orchestration. So, as orchestration means the
amalgamation of all instruments playing together in harmony in music, similarly
container orchestration means all the services in individual containers working
together to fulfill the needs of a single server.

Q6. What is the need for Container Orchestration?


Consider you have 5-6 microservices for a single application performing various
tasks, and all these microservices are put inside containers. Now, to make sure
that these containers communicate with each other we need container
orchestration.
Fig 3: Challenges Without Container Orchestration – Kubernetes Interview Questions
As you can see in the above diagram, many challenges arose without the use of container orchestration. So, container orchestration came into the picture to overcome these challenges.

Q7. What are the features of Kubernetes?

The features of Kubernetes are as follows:

Fig 4: Features Of Kubernetes – Kubernetes Interview Questions


Q8. How does Kubernetes simplify containerized Deployment?
As a typical application would have a cluster of containers running across multiple hosts, all these containers would need to talk to each other. So, to do this you need something big that would load balance, scale, and monitor the containers. Since Kubernetes is cloud-agnostic and can run on any public/private provider, it must be your choice to simplify containerized deployment.

Q9. What do you know about clusters in Kubernetes?


The fundamental behind Kubernetes is that we can enforce the desired state
management, by which I mean that we can feed the cluster services of a specific
configuration, and it will be up to the cluster services to go out and run that
configuration in the infrastructure.
Fig 5: Representation Of Kubernetes Cluster – Kubernetes Interview Questions
So, as you can see in the above diagram, the deployment file will have all the
configurations required to be fed into the cluster services. Now, the deployment
file will be fed to the API and then it will be up to the cluster services to figure
out how to schedule these pods in the environment and make sure that the right
number of pods are running.

So, the API which sits in front of services, the worker nodes & the Kubelet
process that the nodes run, all together make up the Kubernetes Cluster.

Q11. How to do maintenance activity on the K8 node?

Performing maintenance activities on a Kubernetes (K8s) node requires careful planning and execution to minimize disruption to running applications and ensure the overall health and stability of the cluster. Here are the general steps to perform maintenance on a Kubernetes node:

1. **Drain the Node**: Before performing maintenance, you should drain the node to gracefully evict all the running pods from the node. The Kubernetes control plane will schedule the evicted pods to other healthy nodes in the cluster. Use the following command to drain the node:

   kubectl drain <node_name> --ignore-daemonsets

   Replace `<node_name>` with the name of the node you want to drain.

2. **Mark the Node as Unschedulable**: Prevent new pods from being scheduled on the node during maintenance:

   kubectl cordon <node_name>

3. **Perform Maintenance Tasks**: Perform any required maintenance tasks on the node, such as OS upgrades, kernel updates, hardware replacements, etc.
4. **Verify Node Status**: After the maintenance is completed, verify that the node is back online and functioning correctly.
5. **Uncordon the Node**: Allow the node to accept new pods again:

   kubectl uncordon <node_name>

6. **Validate Pod Status**: Check the status of the pods that were running on the
node before draining to ensure they have been successfully rescheduled to
other nodes.
7. **Rollout Updates (if applicable)**: If you have made any changes that require
pod updates (e.g., container image updates), trigger a controlled rollout of the
affected pods to the updated version.
8. **Monitor Cluster Health**: Keep an eye on the overall health of the cluster
after maintenance. Monitor the logs and metrics to ensure that all components
and nodes are functioning as expected.

It's crucial to plan maintenance windows during low-traffic periods or periods of reduced load to minimize the impact on running applications. For critical production environments, consider using Kubernetes features like PodDisruptionBudgets and readiness probes to ensure high availability during maintenance activities.

Always document your maintenance activities and follow best practices recommended by the Kubernetes community to maintain a stable and reliable cluster.

Q12. How do we control the resource usage of a Pod?

Controlling the resource usage of a Pod in Kubernetes is essential to ensure fair allocation of resources and prevent individual Pods from consuming excessive CPU and memory, which could negatively impact other Pods and the overall cluster performance. Kubernetes provides several mechanisms to control the resource usage of Pods:

1. **Resource Requests and Limits**: Kubernetes allows you to set resource requests and limits for CPU and memory on a per-container basis within a Pod.

   - Resource Requests: It specifies the minimum amount of CPU and memory required for a container to run. Kubernetes will use this information for scheduling and determining the amount of resources allocated to a Pod.
   - Resource Limits: It specifies the maximum amount of CPU and memory that a container can consume. Kubernetes enforces these limits to prevent a single container from using more resources than specified, which helps in avoiding resource contention.

Here's an example of setting resource requests and limits in a Pod's container specification:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
      resources:
        requests:
          cpu: "0.5"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"

2. **Resource Quotas**: Kubernetes allows you to define Resource Quotas at the namespace level to limit the total amount of CPU and memory that can be consumed by all Pods within the namespace. Resource Quotas help prevent resource hogging and ensure a fair distribution of resources among different applications.

3. **Horizontal Pod Autoscaler (HPA)**: HPA automatically adjusts the number of


replicas of a Pod based on CPU utilization or custom metrics. It can scale up or
down the number of replicas to maintain a target CPU utilization, helping to
optimize resource usage dynamically.
4. **Vertical Pod Autoscaler (VPA)**: VPA automatically adjusts the resource
requests and limits of Pods based on their actual resource usage. It can resize
the resource requests and limits to optimize resource allocation based on real-
time usage patterns.
5. **Quality of Service (QoS) Classes**: Kubernetes assigns QoS classes to Pods
based on their resource requirements and usage. There are three classes:
Guaranteed, Burstable, and BestEffort. The QoS classes help prioritize resource
allocation and eviction decisions during resource contention.

By using these mechanisms, you can effectively control the resource usage of Pods in your Kubernetes cluster, ensuring efficient resource allocation, high availability, and optimal performance for all applications running in the cluster. As an illustration of the QoS classes above, a minimal sketch follows.
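
For example, a Pod whose containers set requests equal to limits is classified as Guaranteed (a minimal sketch; the image name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
    - name: app
      image: my-image           # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:                 # equal to requests -> Guaranteed QoS
          cpu: "500m"
          memory: "512Mi"
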

Q13. What are the various K8 services running on nodes and describe the
role of each service?

In a Kubernetes (K8s) cluster, several essential services run on nodes to ensure proper cluster management, networking, and communication between components. Here are some of the key services and their roles:

1. **kubelet**: The kubelet is an agent that runs on each node and is responsible
for managing the containers running on that node. It communicates with the
Kubernetes control plane and ensures that the containers specified in Pod
manifests are running and healthy.
2. **kube-proxy**: The kube-proxy is responsible for network proxying and load balancing for services running in the cluster. It enables communication between Pods and services and maintains network rules to forward traffic to the appropriate destinations.
3. **container runtime**: The container runtime is the software responsible for
pulling container images and running containers on the node. Kubernetes
supports various container runtimes, such as Docker, containerd, and others.
4. **kube-dns/coredns**: The kube-dns or CoreDNS service provides DNS
resolution within the cluster. It allows Pods to discover and communicate with
each other using DNS names instead of direct IP addresses.
5. **kubelet-certificate-controller**: This service ensures that each node has the
necessary TLS certificates required for secure communication with the control
plane.
6. **kubelet-eviction-manager**: The kubelet-eviction-manager monitors the
resource usage of the node and triggers Pod eviction when there is a lack of
resources, helping to maintain node stability and prevent node resource
exhaustion.
7. **kube-proxy (IPVS mode)**: In clusters running with IPVS (IP Virtual Server)
mode, kube-proxy uses IPVS to handle the load balancing of services more
efficiently.
8. **metrics-server**: The metrics-server collects resource usage metrics (CPU,
memory, etc.) from nodes and Pods and provides them to Kubernetes Horizontal
Pod Autoscaler (HPA) and other components for scaling decisions.
9. **node-problem-detector**: The node-problem-detector detects and reports
node-level issues, such as kernel panics or unresponsive nodes, to the
Kubernetes control plane for further actions.
10. **kube-reserved and kube-system-reserved cgroups**: These are control
groups that reserve CPU and memory resources for the kubelet and critical
system components to ensure their stability and proper functioning.

These services, running on every node, play a crucial role in maintaining the
health, networking, and performance of the Kubernetes cluster. They ensure
seamless communication, resource management, and container orchestration,
providing the foundation for deploying and managing containerized applications
effectively in the Kubernetes environment.

Q14. What is PDB (Pod Disruption Budget)?

A Pod Disruption Budget (PDB) is a Kubernetes resource that allows you to set
policies on how many Pods of a particular ReplicaSet or Deployment can be
simultaneously unavailable during voluntary disruptions. Voluntary disruptions
can occur during planned maintenance, scaling events, or other administrative
actions.

The main purpose of a Pod Disruption Budget is to ensure high availability and reliability of applications running in a Kubernetes cluster while allowing for necessary maintenance and updates. By setting a PDB, you define the maximum tolerable disruption to a group of Pods, ensuring that a minimum number of replicas remain available and operational at all times.


A typical use case for PDB is during rolling updates or scaling events. When you
update a deployment or scale it up or down, Kubernetes will try to ensure that
the disruption does not exceed the defined PDB. This prevents scenarios where
all instances of an application are taken down simultaneously, leading to service
outages or degraded performance.

Here’s how a Pod Disruption Budget is defined in a Kubernetes manifest:

```yaml
apiVersion: policy/v1   # the policy/v1beta1 version shown in older guides was removed in Kubernetes 1.25
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  selector:
    matchLabels:
      app: example-app
  maxUnavailable: 1
```

In this example, we create a Pod Disruption Budget named "example-pdb" for Pods labeled with `app: example-app`. The `maxUnavailable` parameter is set to 1, meaning that only one Pod can be unavailable at any time due to voluntary disruptions.

It’s important to note that a PDB does not prevent involuntary disruptions
caused by node failures or other unforeseen issues. Instead, it focuses on
controlling voluntary disruptions to maintain application availability during
planned events. PDBs are particularly useful for applications that require a
certain level of redundancy or have strict availability requirements.
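A PDB can equivalently be written with `minAvailable` instead of `maxUnavailable`; a minimal sketch, reusing the same hypothetical label:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb-min       # hypothetical name
spec:
  minAvailable: 2             # at least two matching Pods must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: example-app
```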

Q15. What is an init container, and when can it be used?

An init container is a special type of container in Kubernetes that runs and completes its tasks before the main containers in a Pod start running. Init containers are used to perform setup, initialization, or configuration tasks required by the main application containers before they can start processing requests or performing their primary functions.

Here are some key points about init containers:

1. **Sequential Execution**: Init containers run one after another, and each init
container must successfully complete before the next one starts. This allows for
a sequential setup of required resources or configurations.
2. **Temporary Nature**: Init containers are temporary and are not part of the
main application’s ongoing lifecycle. Once their tasks are completed, they
terminate, and the main containers start.
3. **Different Image**: Init containers can use a different container image than the
main application containers. This allows for separate tools or configurations to
be used for initialization tasks.

Use Cases for Init Containers:

1. **Data Initialization**: Init containers can be used to fetch or generate initial data required by the main application containers. For example, an init container might fetch static configuration files or set up a database schema before the main application starts.
2. **Dependency Handling**: When an application has dependencies on external services, an init container can be used to check and ensure that those services are available before the main application attempts to use them.
3. **Database Schema Migration**: In scenarios where the application requires a specific database schema or migration, an init container can handle database schema setup or migration tasks before the main application connects to the database.
4. **Certificate or Secret Injection**: Init containers can fetch secrets or SSL certificates from external sources and make them available to the main application containers securely.

Here’s an example of a Pod definition with an init container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: main-container
      image: my-app-image
      # Main application container specification
  initContainers:
    - name: init-container
      image: busybox
      command: ['sh', '-c', 'echo "Performing initialization..." && sleep 10']
      # Init container specification
```

In this example, the Pod contains an init container named “init-container” with a
simple command to echo a message and sleep for 10 seconds. The main
application container is named “main-container” and is specified below the init
container. When the Pod starts, the init container will run and complete its task
before the main application container starts.

Using init containers can help ensure that the required setup and configuration
tasks are completed successfully before the main application starts, improving
the reliability and stability of the overall application deployment.

Q16. What is the role of Load Balancing in Kubernetes?

The role of Load Balancing in Kubernetes is to distribute incoming network traffic across multiple instances of a service or a set of Pods that are part of a Kubernetes Deployment or ReplicaSet. Load balancing ensures that each instance or Pod receives a fair share of requests, optimizing resource utilization and providing high availability for applications.

Here’s how load balancing works in Kubernetes:

1. Service: In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy for accessing them. A Service acts as a stable endpoint for other applications to access the Pods running your application.
2. Load Balancer: When a Service is created, Kubernetes can automatically
provision a load balancer (external or internal, depending on the cloud provider
and configuration) to distribute incoming traffic across the Pods associated with
the Service.
3. Traffic Distribution: The load balancer continuously monitors the health and
availability of the Pods associated with the Service. It uses different algorithms,
such as round-robin, least connections, or IP hash, to evenly distribute incoming
requests to the available Pods. This ensures that each Pod gets its fair share of
traffic, preventing any single Pod from being overwhelmed.
4. High Availability: Load balancing also provides high availability. If a Pod
becomes unhealthy or unresponsive, the load balancer automatically routes
traffic to the remaining healthy Pods, ensuring that the application remains
accessible even if individual Pods fail.
5. Scaling and Rolling Updates: Load balancing plays a critical role in scaling and
rolling updates. When new Pods are added due to scaling or updates, the load
balancer automatically starts routing traffic to these new Pods, gradually
replacing the older ones. This allows for seamless scaling and updates with
minimal or no disruption to the application.
6. Service Discovery: Load balancing facilitates service discovery within the cluster.
Clients do not need to know the exact locations or IP addresses of individual
Pods; they can simply access the Service, and the load balancer routes their
requests to the appropriate Pod.
Overall, load balancing in Kubernetes ensures that applications are efficiently distributed across the available Pods, that they remain highly available, and that traffic is managed effectively as the cluster scales or undergoes updates. This enhances the overall performance, reliability, and scalability of applications running in a Kubernetes cluster.
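To make this concrete, here is a minimal sketch of a Service of type `LoadBalancer`; the name, selector, and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  type: LoadBalancer          # asks the cloud provider for an external load balancer
  selector:
    app: my-app               # hypothetical label on the Pods to receive traffic
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 8080        # port the container actually listens on
```

On a managed platform such as AKS, creating this object is what triggers the provisioning of the cloud load balancer described in step 2 above.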

Q17. What are the various things that can be done to increase Kubernetes
security?

Increasing Kubernetes security is crucial to protect your cluster, applications, and sensitive data from potential threats and unauthorized access. Here are several essential practices and measures to enhance Kubernetes security:

1. Use RBAC (Role-Based Access Control): Implement RBAC to control and restrict
access to different resources within the cluster. Assign roles and permissions
based on the principle of least privilege, ensuring that users and applications
have only the necessary access rights.
2. Enable Network Policies: Use Network Policies to control network traffic between Pods within the cluster. Network Policies help enforce communication rules, limiting the attack surface and preventing unauthorized access between Pods (see the sketch after this list).
3. Secure API Server: Ensure that the Kubernetes API server is properly secured.
Use TLS certificates for communication, disable insecure ports, and enable audit
logging to monitor API server activity.
4. Image Security: Scan container images for vulnerabilities and use trusted
sources for images. Implement container image signing and verification to
ensure image integrity.
5. Pod Security Policies: Utilize Pod Security Policies to enforce security standards and best practices for Pod specifications; they can prevent the creation of Pods that do not meet security requirements. (Note that PodSecurityPolicy was removed in Kubernetes 1.25 in favor of the built-in Pod Security admission controller.)
6. Secure Secrets Management: Use Kubernetes Secrets to store sensitive
information securely. Avoid exposing sensitive data directly in Pod specifications
or YAML files.
7. Regularly Update and Patch: Keep all components of the Kubernetes cluster,
including nodes, control plane, and add-ons, up to date with the latest security
patches and updates.
8. Secure etcd: Ensure that the etcd data store used by the Kubernetes control
plane is secure. Configure TLS encryption for etcd communication and consider
enabling role-based access control for etcd.
9. Limit External Access: Minimize external access to the Kubernetes API server
and use a VPN or private network for secure access.
10. Implement Pod Security Context: Set appropriate security context for Pods to
control their privileges and capabilities. Avoid running containers with excessive
permissions.

11. Monitoring and Logging: Implement robust monitoring and logging solutions to
detect and respond to security incidents. Monitor cluster activity, audit logs, and
network traffic for suspicious behavior.
12. Secure Network Communication: Use TLS for secure communication between
components in the cluster. Enable mutual TLS authentication for enhanced
security.
13. Limit Host OS Access: Restrict direct access to the host OS from within
containers, as it can pose security risks.
14. Regular Security Audits: Conduct regular security audits and vulnerability
assessments of your Kubernetes cluster and applications.
15. Training and Education: Educate your team about Kubernetes security best
practices and conduct security training regularly.
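As referenced in item 2 above, here is a minimal sketch of a NetworkPolicy that only admits traffic to backend Pods from frontend Pods; every name and label is an illustrative assumption:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                  # hypothetical label on the protected Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only Pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080                # assumed backend port
```

Note that a CNI plugin with NetworkPolicy support (for example Calico or Cilium) is required for the policy to actually be enforced.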

By following these security measures and best practices, you can significantly
enhance the security of your Kubernetes cluster, reducing the risk of potential
threats and ensuring a more resilient and protected environment for your
applications.

Q18. How to monitor the Kubernetes cluster?

Monitoring a Kubernetes cluster involves setting up various tools and practices to collect and analyze data on the cluster's health, performance, and resource usage. Here's a step-by-step guide to monitoring a Kubernetes cluster effectively:

1. Choose a Monitoring Solution: Select a monitoring solution suitable for your needs. Popular choices include Prometheus, Grafana, Datadog, New Relic, and others. Prometheus and Grafana are widely used in Kubernetes environments due to their flexibility and strong community support.
2. Deploy Monitoring Components: Set up the monitoring components within the
Kubernetes cluster. For Prometheus and Grafana, you can use Helm charts or
manifests to deploy them. Prometheus scrapes metrics from Kubernetes
components and applications, while Grafana provides visualization and
dashboard capabilities.
3. Node-Level Metrics: Collect and monitor node-level metrics (CPU, memory, disk,
network) using tools like Node Exporter or cAdvisor. These tools export metrics
to Prometheus, which stores and manages the data.
4. Application Metrics: Instrument your applications with client libraries like
Prometheus client libraries or OpenTelemetry to expose custom metrics. These
metrics can be scraped by Prometheus and visualized in Grafana.
5. Kubernetes Metrics: Use kube-state-metrics to expose Kubernetes-specific
metrics like the status of deployments, replicasets, pods, and services. These
metrics provide insights into the state of Kubernetes resources.
6. Monitor Cluster Components: Keep an eye on the health of Kubernetes components like the API server, controller manager, etcd, and scheduler. Prometheus can scrape metrics from these components, and alerting rules can be configured to notify of any issues.
7. Alerting: Configure alerting rules in Prometheus or through your monitoring solution to get notified of critical issues or abnormal behavior (a sample rule follows this list). Use Alertmanager to manage and route alerts to various channels like email, Slack, or other messaging platforms.
8. Visualize Data: Create custom dashboards in Grafana to visualize the collected
metrics. Display critical cluster metrics, application-specific metrics, and any
other relevant data for easy monitoring.
9. Long-Term Storage: Consider setting up long-term storage for historical metrics
data. Tools like Thanos or VictoriaMetrics can help store and query historical
data from Prometheus.
10. Log Aggregation: Use a centralized logging solution (e.g., ELK Stack, Fluentd, Loki)
to collect and analyze container logs for debugging and troubleshooting
purposes.
11. Security Monitoring: Implement security monitoring to detect potential security
threats and unauthorized access attempts in your Kubernetes cluster.
12. Regular Review and Maintenance: Regularly review the monitoring data, analyze
trends, and fine-tune alerting thresholds. Keep monitoring components updated
and ensure that they are functioning correctly.
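To illustrate step 7, here is a minimal sketch of a Prometheus alerting rule; the group name, threshold, and durations are illustrative assumptions, and the `node_cpu_seconds_total` metric assumes Node Exporter is running:

```yaml
groups:
  - name: node-alerts                  # hypothetical rule group
    rules:
      - alert: HighNodeCPU
        # average non-idle CPU per node over 5 minutes, alert above 90%
        expr: (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))) > 0.9
        for: 10m                       # condition must hold for 10 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} CPU above 90% for 10 minutes"
```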

By following these steps and continuously monitoring your Kubernetes cluster, you can gain valuable insights, detect and resolve issues promptly, and maintain a stable and well-performing environment for your applications.

Q19. How to get the central logs from Pods?

To collect central logs from Pods running in a Kubernetes cluster, you can use a
centralized logging solution. One popular approach is to use the ELK Stack,
which consists of three main components: Elasticsearch, Logstash (or Fluentd),
and Kibana. Here’s how you can set up central logging using the ELK Stack:

1. Install Elasticsearch: Deploy Elasticsearch as a central log storage and indexing solution. Elasticsearch will store and index the logs collected from various Pods.
2. Install Logstash or Fluentd: Choose either Logstash or Fluentd as the log collector
and forwarder. Both tools can collect logs from different sources, including
application logs from Pods, and send them to Elasticsearch.

– If using Logstash: Install and configure Logstash on a separate node or container. Create Logstash pipelines to process and forward logs to Elasticsearch.

– If using Fluentd: Deploy Fluentd as a DaemonSet on each node in the Kubernetes cluster. Fluentd will collect logs from containers running on each node and send them to Elasticsearch (a trimmed DaemonSet sketch follows the numbered steps below).

3. Configure Application Logs: Inside your Kubernetes Pods, ensure that your
applications are configured to log to the standard output and standard error
streams. Kubernetes will collect these logs by default.
4. Install Kibana: Set up Kibana as a web-based user interface to visualize and
query the logs stored in Elasticsearch. Kibana allows you to create custom
dashboards and perform complex searches on your log data.
5. Configure Log Forwarding: Configure Logstash or Fluentd to forward logs from
the Kubernetes Pods to Elasticsearch. This may involve defining log collection
rules, filters, and log parsing configurations.
6. View Logs in Kibana: Access Kibana using its web interface and connect it to the
Elasticsearch backend. Once connected, you can create visualizations, search
logs, and analyze log data from your Kubernetes Pods.
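For the Fluentd option referenced above, here is a heavily trimmed DaemonSet sketch; the image tag and mount paths are illustrative assumptions, and production setups typically start from the upstream fluentd-kubernetes-daemonset manifests:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd                        # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1   # assumed image tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log         # read container log files from the node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

Because it is a DaemonSet, one Fluentd Pod runs on every node and can tail the container logs written under `/var/log`.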

Additionally, you can consider using other centralized logging solutions like Loki
or Splunk for log aggregation and analysis. The process may vary slightly
depending on the logging tool you choose, but the core concept remains the
same: collect logs centrally from Kubernetes Pods and make them available for
analysis and visualization in a user-friendly interface.

Keep in mind that setting up and maintaining a centralized logging solution requires careful planning and consideration of resource usage, especially if you have a large number of Pods generating a significant volume of logs.

Q10. What is Google Container Engine?

Google Container Engine (GKE), now known as Google Kubernetes Engine, is Google's managed environment for Docker containers and Kubernetes clusters. This Kubernetes-based engine supports only those clusters which run within Google's public cloud services.



Q11. What is Heapster?
Heapster is a cluster-wide aggregator of data provided by the Kubelet running on each node. This container management tool is supported natively on the Kubernetes cluster and runs as a pod, just like any other pod in the cluster. It discovers all nodes in the cluster and queries usage information from the Kubernetes nodes via the on-machine Kubernetes agent. (Note: Heapster has since been retired in favor of metrics-server.)

Q12. What is Minikube?


Minikube is a tool that makes it easy to run Kubernetes locally. This runs a
single-node Kubernetes cluster inside a virtual machine.

Q13. What is Kubectl?


Kubectl is the command-line tool through which you can pass commands to the cluster. It provides the CLI to run commands against the Kubernetes cluster, with various ways to create and manage Kubernetes components.

Q14. What is Kubelet?


This is an agent service which runs on each node and enables the worker node to communicate with the master. So, Kubelet works on the description of containers provided to it in the PodSpec and makes sure that the containers described in the PodSpec are healthy and running.

Q15. What do you understand by a node in Kubernetes?

A node is a worker machine in Kubernetes. It can be a virtual or a physical machine, depending on the cluster, and it is where Pods are scheduled and run. Each node is managed by the master and runs the kubelet, a container runtime, and kube-proxy.

Fig 6: Node In Kubernetes – Kubernetes Interview Questions


Architecture-Based Kubernetes Interview Questions

This section of questions will deal with the questions related to the architecture
of Kubernetes.

Q1. What are the different components of Kubernetes Architecture?


The Kubernetes Architecture has mainly 2 components – the master node and
the worker node. As you can see in the below diagram, the master and the
worker nodes have many inbuilt components within them. The master node has
the kube-controller-manager, kube-apiserver, kube-scheduler, etcd. Whereas the
worker node has kubelet and kube-proxy running on each node.

Fig 7: Architecture Of Kubernetes – Kubernetes Interview Questions


Q2. What do you understand by Kube-proxy?
Kube-proxy runs on each and every node and performs simple TCP/UDP packet forwarding across the backend network services. It is essentially a network proxy that reflects the services configured in the Kubernetes API on each node. The Docker-linkable compatible environment variables provide the cluster IPs and ports which are opened by the proxy.


Q3. Can you brief on the working of the master node in Kubernetes?
The Kubernetes master controls the nodes, and inside the nodes the containers are present. Now, these individual containers are contained inside pods, and inside each pod you can have a varying number of containers based upon the configuration and requirements. So, if the pods have to be deployed, they can be deployed using either the user interface or the command-line interface. Then, these pods are scheduled on the nodes, and based on the resource requirements, the pods are allocated to these nodes. The kube-apiserver makes sure that there is communication established between the Kubernetes node and the master components.

Fig 8: Representation Of Kubernetes Master Node – Kubernetes Interview Questions
Q4. What is the role of kube-apiserver and kube-scheduler?
The kube-apiserver follows the scale-out architecture and is the front end of the master node control plane. It exposes all the APIs of the Kubernetes master node components and is responsible for establishing communication between the Kubernetes node and the Kubernetes master components.

The kube-scheduler is responsible for distributing and managing the workload on the worker nodes. So, it selects the most suitable node to run the unscheduled pod based on resource requirements and keeps track of resource utilization. It ensures that the workload is not scheduled on already full nodes.

Q5. Can you brief me about the Kubernetes controller manager?


Multiple controller processes run on the master node but are compiled together to run as a single process: the Kubernetes Controller Manager. So, the Controller Manager is a daemon that embeds controllers and handles namespace creation and garbage collection. It communicates with the API server to manage the endpoints.

So, the different types of controller manager running on the master node are :

Fig 9: Types Of Controllers – Kubernetes Interview Questions


Q6. What is ETCD?
Etcd is written in Go programming language and is a distributed key-value store
used for coordinating distributed work. So, Etcd stores the configuration data of
the Kubernetes cluster, representing the state of the cluster at any given point in
time.

Q7. What are the different types of services in Kubernetes?


The following are the different types of services used:

Fig 10: Types Of Services – Kubernetes Interview Questions


Q8. What do you understand by load balancer in Kubernetes?
A load balancer is one of the most common and standard ways of exposing
service. There are two types of load balancer used based on the working
environment i.e. either the Internal Load Balancer or the External Load Balancer.
The Internal Load Balancer automatically balances load and allocates the pods
with the required configuration whereas the External Load Balancer directs the
traffic from the external load to the backend pods.



Q9. What is Ingress network, and how does it work?
Ingress network is a collection of rules that acts as an entry point to the Kubernetes cluster. It allows inbound connections, which can be configured to expose services externally through reachable URLs, to load-balance traffic, or to offer name-based virtual hosting. So, Ingress is an API object that manages external access to the services in a cluster, usually over HTTP, and is the most powerful way of exposing services.
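A minimal sketch of such an Ingress object; the host, backend Service name, and port are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  rules:
    - host: app.example.com      # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service # hypothetical backend Service
                port:
                  number: 80
```

An Ingress controller (for example the NGINX Ingress Controller) must be running in the cluster for these rules to take effect.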

Now, let me explain the working of the Ingress network with an example.

There are 2 nodes having the pod and root network namespaces with a Linux bridge. In addition to this, there is also a new virtual ethernet device called flannel0 (a network plugin) added to the root network.

Now, suppose we want the packet to flow from pod1 to pod4. Refer to the below diagram.

Fig 11: Working Of Ingress Network – Kubernetes Interview Questions

 So, the packet leaves pod1’s network at eth0 and enters the root network
at veth0.
 Then it is passed on to cbr0, which makes the ARP request to find the destination
and it is found out that nobody on this node has the destination IP address.
 So, the bridge sends the packet to flannel0 as the node’s route table is
configured with flannel0.
 Now, the flannel daemon talks to the API server of Kubernetes to know all the
pod IPs and their respective nodes to create mappings for pods IPs to node IPs.
 The network plugin wraps this packet in a UDP packet with extra headers
changing the source and destination IP’s to their respective nodes and sends this
packet out via eth0.
 Now, since the route table already knows how to route traffic between nodes, it
sends the packet to the destination node2.
 The packet arrives at eth0 of node2 and goes back to flannel0, which decapsulates it and emits it back into the root network namespace.
 Again, the packet is forwarded to the Linux bridge to make an ARP request to
find out the IP that belongs to veth1.
 The packet finally crosses the root network and reaches the destination Pod4.

Q10. What do you understand by Cloud controller manager?


The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes-specific code, and managing the communication with the underlying cloud services. It might be split out into several different containers depending on which cloud platform you are running on, and it enables the cloud vendors and Kubernetes code to be developed without any inter-dependency. So, the cloud vendor develops their code and connects with the Kubernetes cloud-controller-manager while running Kubernetes.

The various types of cloud controller manager are as follows:

Fig 12: Types Of Cloud Controller Manager – Kubernetes Interview Questions
Q11. What is Container resource monitoring?
For users, it is really important to understand the performance of the application and resource utilization at all the different abstraction layers. Kubernetes factored the management of the cluster by creating abstractions at different levels like container, pods, services, and the whole cluster. Now, each level can be monitored, and this is nothing but Container resource monitoring.

The various container resource monitoring tools are as follows:

Fig 13: Container Resource Monitoring Tools – Kubernetes Interview Questions
Q12. What is the difference between a replica set and a replication controller?
Replica Set and Replication Controller do almost the same thing. Both ensure
that a specified number of pod replicas are running at any given time. The
difference comes with the usage of selectors to replicate pods. Replica Set uses
Set-Based selectors while replication controllers use Equity-Based selectors.

 Equity-Based Selectors: This type of selector allows filtering by label key and values. So, in layman's terms, the equity-based selector will only look for the pods with the exact same phrase as the label.
Example: Suppose your label key says app=nginx; then, with this selector, you can only look for those pods with label app equal to nginx.
 Set-Based Selectors: This type of selector allows filtering keys according to a set of values. So, in other words, the set-based selector will look for pods whose label has been mentioned in the set.
Example: Say your label key says app in (nginx, nps, apache). Then, with this selector, if your app is equal to any of nginx, nps, or apache, the selector will take it as a true result.
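The difference is visible in the manifests themselves; a minimal sketch with illustrative names and labels:

```yaml
# Replication Controller: equality-based selector
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc                 # hypothetical name
spec:
  replicas: 3
  selector:
    app: nginx                   # matches only Pods labeled exactly app=nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
---
# ReplicaSet: set-based selector
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                   # hypothetical name
spec:
  replicas: 3
  selector:
    matchExpressions:
      - key: app
        operator: In
        values: [nginx, nps, apache]   # matches any of the listed values
  template:
    metadata:
      labels:
        app: nginx               # in the set above, so the template satisfies the selector
    spec:
      containers:
        - name: nginx
          image: nginx
```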

Q13. What is a Headless Service?


Headless Service is similar to a 'normal' Service but does not have a Cluster IP. This service enables you to reach the Pods directly, without the need to access them through a proxy.
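A minimal sketch of a headless Service; the names are illustrative assumptions. Setting `clusterIP: None` is what makes it headless, so DNS lookups return the individual Pod IPs instead of a single virtual IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service    # hypothetical name
spec:
  clusterIP: None              # no Cluster IP: clients resolve Pods directly
  selector:
    app: my-app                # hypothetical Pod label
  ports:
    - port: 80
```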

Q14. What are the best security measures that you can take while using
Kubernetes?
The following are the best security measures that you can follow while using
Kubernetes:

Fig 14: Best Security Measures – Kubernetes Interview Questions


Q15. What are federated clusters?
Multiple Kubernetes clusters can be managed as a single cluster with the help of federated clusters. So, you can create multiple Kubernetes clusters within a data center/cloud and use federation to control/manage them all in one place.

The federated clusters can achieve this by doing the following two things. Refer
to the below diagram.

Fig 15: Federated Clusters – Kubernetes Interview Questions

Scenario-Based Kubernetes Interview Questions


This section of questions will consist of various scenario-based questions that
you may face in your interviews.

Scenario 1: Suppose a company built on monolithic architecture handles numerous products. Now, as the company expands in today's scaling industry, their monolithic architecture started causing problems.

How do you think the company shifted from monolithic to microservices and deployed their services in containers?

Solution:

As the company's goal is to shift from their monolithic application to microservices, they can end up building piece by piece, in parallel, and just switch configurations in the background. Then they can put each of these newly built microservices on the Kubernetes platform. So, they can start by migrating one or two services at a time and monitor them to make sure everything is running stable. Once they feel everything is going well, they can migrate the rest of the application into their Kubernetes cluster.

Scenario 2: Consider a multinational company with a very much distributed system, with a large number of data centers, virtual machines, and many employees working on various tasks.

How do you think such a company can manage all the tasks in a consistent way with Kubernetes?

Solution:

As all of us know, I.T. departments launch thousands of containers, with tasks running across numerous nodes across the world in a distributed system.

In such a situation the company can use something that offers them agility, scale-out capability, and DevOps practices for cloud-based applications.

So, the company can use Kubernetes to customize their scheduling architecture and support multiple container formats. This makes affinity between container tasks possible, which gives greater efficiency, with extensive support for various container networking solutions and container storage.

Scenario 3: Consider a situation where a company wants to increase its efficiency and the speed of its technical operations while maintaining minimal costs.

How do you think the company will try to achieve this?

Solution:

The company can implement the DevOps methodology by building a CI/CD pipeline, but one problem that may occur here is that the configurations may take time to get up and running. So, after implementing the CI/CD pipeline, the company's next step should be to work in the cloud environment. Once they start working in the cloud environment, they can schedule containers on a cluster and orchestrate them with the help of Kubernetes. This kind of approach will help the company reduce their deployment time and also move faster across various environments.

Scenario 4: Suppose a company wants to revise its deployment methods and wants to build a platform which is much more scalable and responsive.

How do you think this company can achieve this to satisfy their customers?

Solution:

In order to give millions of clients the digital experience they would expect, the company needs a platform that is scalable and responsive, so that they can quickly get data to the client website. Now, to do this, the company should move from their private data centers (if they are using any) to a cloud environment such as AWS. Not only this, but they should also implement the microservice architecture so that they can start using Docker containers. Once they have the base framework ready, they can start using the best orchestration platform available, i.e., Kubernetes. This would enable the teams to be autonomous in building applications and delivering them very quickly.

Scenario 5: Consider a multinational company with a very much distributed system, looking forward to solving the monolithic code base problem.

How do you think the company can solve their problem?

Solution

Well, to solve the problem, they can shift their monolithic code base to a microservice design, and then each microservice can be treated as a container. So, all these containers can be deployed and orchestrated with the help of Kubernetes.

Scenario 6: All of us know that the shift from monolithic to microservices solves
the problem from the development side, but increases the problem at the
deployment side.

How can the company solve the problem on the deployment side?

Solution

The team can experiment with container orchestration platforms, such as Kubernetes, and run them in data centers. So, with this, the company can generate a templated application, deploy it within five minutes, and have actual instances containerized in the staging environment at that point. This kind of Kubernetes project will have dozens of microservices running in parallel to improve the production rate, as even if a node goes down, it can be rescheduled immediately without performance impact.

Scenario 7: Suppose a company wants to optimize the distribution of its workloads by adopting new technologies.

How can the company achieve this distribution of resources efficiently?

Solution

The solution to this problem is none other than Kubernetes. Kubernetes makes sure that the resources are optimized efficiently, and only those resources are used which are needed by that particular application. So, with the usage of the best container orchestration tool, the company can achieve the distribution of resources efficiently.

Scenario 8: Consider a carpooling company that wants to increase its number of servers by simultaneously scaling its platform.

How do you think the company will deal with the servers and their installation?

Solution

The company can adopt the concept of containerization. Once they deploy all their applications into containers, they can use Kubernetes for orchestration and use container monitoring tools like Prometheus to monitor the actions in containers. Such usage of containers gives them better capacity planning in the data center, because they will now have fewer constraints due to the abstraction between the services and the hardware they run on.

Scenario 9: Consider a scenario where a company wants to provide all the required hand-outs to its customers having various environments.

How do you think they can achieve this critical target in a dynamic manner?

Solution


The company can use Docker environments to put together a cross-sectional team to build a web application using Kubernetes. This kind of framework will help the company achieve the goal of getting the required things into production within the shortest time frame. So, with such a machine running, the company can give the hand-outs to all the customers having various environments.

Scenario 10: Suppose a company wants to run various workloads on different cloud infrastructure, from bare metal to a public cloud.

How will the company achieve this in the presence of different interfaces?

Solution

The company can decompose its infrastructure into microservices and then
adopt Kubernetes. This will let the company run various workloads on different
cloud infrastructures.

Multiple Choice Kubernetes Interview Questions


This section consists of multiple-choice interview questions that are frequently asked in interviews.

Q1. What are minions in the Kubernetes cluster?

a. They are components of the master node.
b. They are the work-horse / worker nodes of the cluster. [Ans]
c. They are monitoring engines used widely in Kubernetes.
d. They are Docker container services.

Q2. Kubernetes cluster data is stored in which of the following?

a. Kube-apiserver
b. Kubelet
c. Etcd [Ans]
d. None of the above

Q3. Which of them is a Kubernetes Controller?

a. ReplicaSet
b. Deployment
c. Rolling Updates
d. Both ReplicaSet and Deployment [Ans]

Q4. Which of the following are core Kubernetes objects?

a. Pods
b. Services
c. Volumes
d. All of the above [Ans]

Q5. The Kubernetes Network proxy runs on which node?

a. Master Node
b. Worker Node
c. All the nodes [Ans]
d. None of the above

Q6. What are the responsibilities of a node controller?

a. To assign a CIDR block to the nodes
b. To maintain the list of nodes
c. To monitor the health of the nodes
d. All of the above [Ans]

Q7. What are the responsibilities of Replication Controller?

a. Update or delete multiple pods with a single command
b. Helps to achieve the desired state
c. Creates a new pod, if the existing pod crashes
d. All of the above [Ans]

Q8. How to define a service without a selector?

a. Specify the external name [Ans]
b. Specify an endpoint with IP address and port
c. Just by specifying the IP address
d. Specifying the label and api-version

Q9. What did the 1.8 version of Kubernetes introduce?

a. Taints and Tolerations [Ans]
b. Cluster level Logging
c. Secrets
d. Federated Clusters

Q10. The handler invoked by Kubelet to check if a container's IP address is open or not is?

a. HTTPGetAction
b. ExecAction
c. TCPSocketAction [Ans]
d. None of the above

Terraform and Ansible are both popular tools used for infrastructure automation, but they have
different approaches and use cases. Here's a comparison between the two:

**Terraform**:

1. **Infrastructure as Code (IaC)**: Terraform is specifically designed for Infrastructure as Code. It allows you to define your infrastructure using a declarative configuration language (HCL - HashiCorp Configuration Language). With Terraform, you describe the desired state of your infrastructure, and Terraform handles provisioning, updating, and destroying resources to achieve that state.

2. **Multi-Cloud Support**: Terraform supports multiple cloud providers (such as AWS, Azure,
Google Cloud Platform, etc.) as well as on-premises infrastructure. It provides a consistent workflow
for managing infrastructure across different environments and cloud providers.

3. **Resource Orchestration**: Terraform manages the lifecycle of infrastructure resources. It can provision and configure various resources including virtual machines, networks, storage, databases, and more. Terraform is well-suited for managing complex infrastructures and dependencies between resources.

4. **State Management**: Terraform maintains a state file that keeps track of the current state of
your infrastructure. This state file helps Terraform understand which resources are currently
provisioned and manage changes to your infrastructure in a safe and predictable manner.

**Ansible**:

1. **Automation Tool**: Ansible is a general-purpose automation tool that can be used for a wide
range of tasks including configuration management, application deployment, orchestration, and
more. It uses a procedural language based on YAML for defining automation tasks called playbooks.

2. **Agentless Architecture**: Ansible operates in an agentless manner, meaning it doesn't require any software to be installed on managed nodes. It communicates with remote systems over SSH or WinRM, making it lightweight and easy to set up.

3. **Idempotent Operations**: Ansible playbooks are idempotent, meaning running the same
playbook multiple times will result in the same state. This makes it safe to run Ansible playbooks
repeatedly, reducing the risk of unintended changes.

4. **Configuration Management**: Ansible is often used for configuration management tasks such
as installing software, managing configuration files, and ensuring system configurations are
consistent across multiple servers.

**Key Differences**:

1. **Declarative vs Procedural**: Terraform uses a declarative approach where you define the
desired state of your infrastructure, while Ansible uses a procedural approach where you specify the
steps to be executed to achieve a desired outcome.

2. **Scope**: Terraform is focused on infrastructure provisioning and management, while Ansible has a broader scope and can be used for configuration management, application deployment, orchestration, and more.

3. **State Management**: Terraform manages infrastructure state using state files, while Ansible
doesn't maintain state between playbook runs.

4. **Agentless vs Agent-based**: Ansible operates in an agentless manner, communicating directly with remote systems over SSH or WinRM, while some other configuration management tools may require agents to be installed on managed nodes.

In summary, Terraform is well-suited for managing infrastructure as code and provisioning resources
across multiple cloud environments, while Ansible is a versatile automation tool that can be used for
a wide range of tasks including configuration management, application deployment, and
orchestration. Depending on your specific use case, you may choose to use one or both of these
tools in your infrastructure automation workflows.

Source: https://www.spiceworks.com/tech/devops/articles/terraform-vs-ansible/

Terraform vs. Ansible : Key Differences and Comparison of Tools

What is the difference between Terraform and Ansible? Terraform is an open-source platform designed to provision cloud infrastructure, while Ansible is an open-source configuration management tool focused on the configuration of that infrastructure.

Terraform vs Ansible: which tool should you use? Here’s the answer from experts
who have working experience on both tools.

This post highlights the differences between Terraform and Ansible, explores the
similarities and concludes with the best way to manage infrastructure.

What is Terraform?
Terraform enables you to provision, manage, and deploy your infrastructure as code
(IaC) using a declarative configuration language called HashiCorp Configuration
Language (HCL). One of the most popular IaC tools available, it was initially
developed by Hashicorp under an open-source license, but it recently switched to a
Business Source License (BUSL).

Key features of Terraform:

 State management: Terraform tracks resources and their configuration in a state file.
 Declarative code: Users describe the desired state of their infrastructure, and Terraform manages it.
 Widely adopted: Terraform supports over 3,000 providers.
 Modules: You can divide your infrastructure into multiple reusable modules.
66 yaml ansible -- Azure Kubernetes Service

What is Ansible?
Ansible is a software tool designed for cross-platform automation and orchestration at
scale. Written in Python and backed by RedHat and a loyal open-source community, it
is a command-line IT automation application widely used for configuration
management, infrastructure provisioning, and application deployment use cases.

Key features of Ansible:

 YAML: A popular, simple data format that is easy for humans to understand.
 Modules: Reusable standalone scripts that perform a specific task
 Playbooks: A playbook is a YAML file that expresses configurations, deployments, and orchestration in Ansible. Playbooks contain one or multiple plays.
 Plays: Subset within a playbook. Defines a set of tasks to run on a specific host
or group of hosts.
 Inventories: All the machines you use with Ansible are listed in a single
simple file, together with their IP addresses, databases, servers, and other
details.
 Roles: Redistributable units of organization that make it easier for users to
share automation code.

Ansible vs Terraform: Similarities


At a very high level, given the capabilities of both products, Terraform and Ansible come across as similar tools. Both of them are capable of provisioning new cloud infrastructure and configuring it with the required application components.

Both Terraform and Ansible are capable of executing remote commands on a newly created virtual machine. This means both tools are agentless: there is no need to deploy agents on the machines for operational purposes.

Terraform uses cloud provider APIs to create infrastructure and basic configuration
tasks are achieved using SSH. The same goes with Ansible – it uses SSH to perform
all the required configuration tasks. The “state” information for both does not require
a separate set of infrastructure to manage, thus both the tools are masterless.

Differences Between Terraform and Ansible


The previous section gives an overview of the two tools in their broadest similarities. At a high level, it sounds like both Terraform and Ansible are capable of provisioning and configuration management. However, a deeper dive into them makes us realize the benefits of one over the other in certain areas.

In general, both the tools are great in their own ways. They have an overlap of
functions when it comes to infrastructure management. Infrastructure management
broadly encompasses 2 aspects – orchestration and configuration management.

Terraform and Ansible have their own ways of managing both – with strong and weak
points when it comes to overlaps. Thus, it is important to delve into some details of
both the tools to make a “perfect” choice or a combination with boundaries.

1. Orchestration vs. Configuration Management

Orchestration/provisioning is a process where we create the infrastructure: virtual machines, network components, databases, etc. Configuration management, on the other hand, is a process of automating versioned software component installation, OS configuration tasks, network and firewall configuration, etc.

Both Terraform and Ansible are capable of performing both tasks.

However, Terraform offers a comprehensive solution to manage infrastructure. Terraform uses cloud provider APIs to provision and de-provision the infrastructure based on declared resources.

Ansible, on the other hand, is also capable of provisioning the cloud infrastructure but
it is not comprehensive enough. It is mainly geared towards configuration
management. Configuration management is a process of keeping the applications and
dependencies up to date. This is where Ansible really shines as compared to
Terraform.

Both the tools can perform both kinds of activities. However, there are limitations
when implementing configuration management using Terraform, and infrastructure
automation using Ansible. They are not flexible enough when it comes to complex
infrastructure management.

Logically, we can identify orchestration as a Day 0 activity and configuration management as a Day 1 activity. Terraform works best for Day 0 activities and Ansible for Day 1 and onwards activities.

2. Declarative vs. Procedural Language

Terraform is used to write Infrastructure as Code (IaC). It uses HCL (HashiCorp Configuration Language), which is declarative in nature. It doesn't matter in which sequence the code is written, and the code could also be dispersed in multiple files.

No matter how you write the code, Terraform identifies the dependencies, and
provisions infrastructure. Writing or translating existing infrastructure to code is easy
in Terraform. Check this Terraform import tutorial if you would like to know more
about importing infrastructure under Terraform management.

Ansible uses YAML syntax to define the procedure to perform on the target
infrastructure. Ansible YAML scripts are procedural in nature – meaning when you
write the script, it will be executed from top to bottom.

Ansible scripts are called "Ansible playbooks". When you have to perform a certain series of tasks, you define them in the playbook, and the tasks will be performed in the sequence they are written. For example, to install an Apache server on a given virtual machine as a root user, you would have to write the user creation step before defining the task for the installation (a minimal sketch follows).
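Here is a minimal sketch of such a playbook; the inventory group, user name, and package module are illustrative assumptions, and the tasks run strictly top to bottom:

```yaml
---
- name: Install Apache after creating its admin user
  hosts: webservers                 # hypothetical inventory group
  become: true                      # escalate to root for the tasks below
  tasks:
    - name: Create the admin user first
      ansible.builtin.user:
        name: apache-admin          # hypothetical user
        state: present

    - name: Install the Apache package afterwards
      ansible.builtin.apt:          # assumes a Debian/Ubuntu target
        name: apache2
        state: present              # idempotent: unchanged if already installed
```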

3. Mutable vs. Immutable Infrastructure

Application deployment workflow involves provisioning of the infrastructure and installing the right version of the source code and dependencies on the provisioned infrastructure.

Mutability is an attribute associated with the underlying infrastructure that defines the
way newer versions of applications and services are deployed. Deployment either
takes place on existing infrastructure, or we can provision a completely new set of
infrastructure for the same.

The deployment practices typically determine whether the infrastructure is mutable or immutable. When newer versions of applications are released on the same infrastructure, it is called mutable. However, if the deployment happens on completely new infrastructure during releases, it is said to be immutable.

Mutability seems convenient, but the risk of failure associated with it is higher. When
application configurations are re-applied on the same infrastructure, there are
additional steps of uninstalling the previous version and then installing the desired
version. More steps also introduce more chances of failure. Doing this for a fleet of
servers can result in uneven configurations and unpredictable behavior.

Instead, if we focus on reducing the number of steps by skipping the uninstallation procedure and performing the installation on new infrastructure resources, we get a chance to test and revert the new deployment in case of failure. Treating infrastructure as immutable in this way provides greater control over introducing changes.

However, there is no golden rule defined that advocates one approach over the other.

Since Terraform's strength lies in handling the infrastructure lifecycle, it supports infrastructure immutability better. It is easier to provision a completely new set of infrastructure and deprovision the older set using Terraform. However, handling configuration changes is not something that can be done in the most efficient manner.

As far as the configuration changes are concerned, Ansible wins the race since it is
primarily a configuration management tool. Ansible supports infrastructure
immutability by offering VM image creation. However, maintaining these additional
images requires additional efforts.

It is recommended to follow the immutable infrastructure approach, where Terraform takes care of the infrastructure management, and Ansible helps apply the changed configuration. This is also known as the Blue/Green deployment strategy, where the risk of configuration failure is reduced.

4. State Management

Terraform manages the entire lifecycle of the resources under its management. It
maintains the mapping of infrastructure resources with the current configuration in
state files. State management plays a very important role in Terraform.

States are used to track changes to the configuration and provision the same. It is also
possible to import existing resources under Terraform management by importing the
real-world infrastructure in state files.

At any given time, it is possible to query the Terraform state files to understand the
infrastructure component and their attributes currently available.

As opposed to this, Ansible does not support any lifecycle management. Since Ansible mainly deals with configuration management and defaults to mutating infrastructure in place, any changes introduced in the configuration are executed directly on the target resource.

5. Configuration Drift

Configuration drift refers to the difference between the desired and actual state of your
configuration. One of the most common reasons for this is that engineers/machines
make changes outside the configuration. If you are using Terraform to manage your
infrastructure and you make a change outside it, you’ve introduced drift. Drift
happens, and solutions like Spacelift’s drift detection not only detect the drift but can
optionally remediate it, too.

While both Ansible and Terraform aim to mitigate drift, their methodologies differ.
Ansible relies on idempotent tasks and continuous execution without maintaining a
persistent state of the infrastructure. In contrast, Terraform relies on a stored state to
detect and manage drift, emphasizing a declarative approach to infrastructure as code.

In Microsoft Azure, pricing tiers refer to the different service plans or subscription
levels offered for Azure services. Each service in Azure typically offers multiple pricing
tiers with varying features, performance levels, and pricing models to meet the
diverse needs of customers.

Here are some common examples of pricing tiers in Azure:

1. Basic Tier: Basic tiers often offer entry-level features and limited
functionalities at a lower cost. They are suitable for small-scale applications or
testing purposes.
2. Standard Tier: Standard tiers provide additional features, higher performance,
and enhanced capabilities compared to basic tiers. They are suitable for
production workloads and applications that require higher reliability and
scalability.
3. Premium Tier: Premium tiers offer the highest level of performance,
reliability, and advanced features. They are designed for mission-critical
applications, high-demand workloads, and scenarios that require guaranteed
service levels and premium support.
4. Free Tier: Some Azure services offer a free tier with limited usage quotas or
trial periods, allowing users to explore the service and its capabilities at no
cost.
5. Pay-As-You-Go: Many Azure services operate on a pay-as-you-go pricing
model, where customers are billed based on their actual usage of resources or
services. This model offers flexibility and scalability, allowing customers to pay
only for the resources they consume.
6. Reserved Instances: Azure also offers reserved instance pricing, where
customers can commit to a specific usage level for a fixed term (e.g., one year
or three years) in exchange for discounted rates. Reserved instances provide
cost savings for predictable workloads with steady usage patterns.

The availability of pricing tiers and their specific features vary depending on the
Azure service. Customers can choose the pricing tier that best aligns with their
requirements, budget, and performance needs. Azure also provides pricing
calculators and cost management tools to help customers estimate and optimize
their cloud spending.
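
To illustrate how a tier is actually selected, here is a hedged sketch using the Azure
CLI to create an App Service plan; the plan and resource group names are hypothetical,
and the exact SKU codes vary by service:

az appservice plan create --name demo-plan --resource-group demo-rg --sku B1
az appservice plan create --name demo-plan --resource-group demo-rg --sku S1
az appservice plan create --name demo-plan --resource-group demo-rg --sku P1V2

Here B1, S1 and P1V2 correspond to the Basic, Standard and Premium tiers respectively,
so moving between tiers is usually a matter of changing the SKU value.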

Evaluate Azure CLI vs. PowerShell for resource management

When it comes to resource management in Microsoft Azure, there are two
primary tools IT teams can use: Azure CLI and PowerShell. One question
that comes up frequently is when to use one or the other.

Sometimes the choice between the two tools depends on what is familiar.
For those coming from an on-premises environment, PowerShell is the
more natural choice since it's been a part of Microsoft Windows for more
than a decade, but the Azure command-line interface (Azure CLI) has its
merits, too.

The differences between the two are often subtle, and your selection will
likely come down to personal preference. Thankfully, the process of choosing
is made easier by the fact that the Azure CLI and PowerShell call exactly the
same APIs. Let's take a closer look at Azure CLI and PowerShell, as well as
the possibility of combining them.

What are Azure CLI and PowerShell?


Before jumping into the code, let's talk briefly about PowerShell and the Azure
CLI and why they are so popular.

Azure CLI

The Azure CLI is a set of commands developers use to create, manage and
interact with Azure resources programmatically. The Azure CLI is built in
Python. You can use it from both the terminal on your computer and the
Azure Cloud Shell, a browser-accessible shell.
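
For instance, a first session typically starts by authenticating and listing what
already exists; a minimal sketch:

az login
az group list --output table

az login opens a browser-based sign-in, and az group list --output table prints
the subscription's resource groups in a readable table.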

Additionally, the Azure CLI is cross-platform, which means IT teams can use it
on any OS, including:

 Linux distributions

 macOS

 Windows

PowerShell

While it was originally only for Windows, PowerShell was re-created as an open
source scripting and programming language with an interactive command-line
shell. Microsoft built it to serve as an automation tool for system administrators.
Unlike most command-line shells, PowerShell was built on the .NET framework
and works with objects.

Similar to the Azure CLI, IT teams can run PowerShell on any OS. Developers can
also use it to interact with any platform or application that has a public-facing API.
PowerShell is still one of the top languages used, alongside Python and JavaScript,
for scripting and automating in Azure.
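
As a point of comparison, here is a minimal sketch of the same first session in
PowerShell, assuming the Az PowerShell module is installed (Install-Module Az):

Connect-AzAccount
Get-AzResourceGroup | Format-Table

Connect-AzAccount signs in to Azure, and Get-AzResourceGroup returns resource group
objects that can be piped to other cmdlets, here Format-Table.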

3 ways to create an Azure resource group


From a comparison perspective, the best way to see PowerShell and Azure CLI in
action is to create a resource and a PowerShell function. By creating a resource,
you can see the differences between code length, ease of use and speed.

First, let's create an Azure resource group using a cmdlet, which is a scaled-back
version of a PowerShell function:

New-AzResourceGroup -Location eastus2 -Name resource_group_name

Now, let's look at the same task using Azure CLI:

az group create -l eastus2 -n resource_group_name



As you can see, the length is pretty much the same. However, one thing that you
can do with PowerShell natively is expand the cmdlet and create a PowerShell
Tool out of it. This is sometimes referred to as Toolmaking.

Toolmaking is great if you want to add functionality to the PowerShell cmdlet or
restrict what's available. For example, you could use this technique to restrict
which parameters are exposed:

function New-ResourceGroup {
    param (
        # Name for the new resource group
        [parameter(Mandatory)]
        [string]$rgName,

        # Azure region, e.g. eastus2
        [parameter(Mandatory)]
        [string]$location
    )

    # Splat the validated parameters into the underlying Az cmdlet
    $params = @{
        'Name'     = $rgName
        'Location' = $location
    }

    New-AzResourceGroup @params
}

In practice, you're more likely to use the New-AzResourceGroup command directly, but the
point of this example is to show that you can extend the cmdlet with additional
functionality that you don't have with a command-line tool like Azure CLI. That's
because PowerShell gives you more "programmer" type options, including
parameter validation, error handling and more.
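
For instance, the wrapper is then invoked like any other cmdlet; the names below are
hypothetical:

New-ResourceGroup -rgName 'demo-rg' -location 'eastus2'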

Combine PowerShell and Azure CLI


One benefit of these two tools is that developers can combine them for more
functionality. Azure CLI is a good choice, but on its own there's no solid way to
make the code reusable. For example, if someone else wants to run the code with a
different resource group name, they have to set variables in a terminal session
before the code can run. When you add PowerShell into the mix, you get additional
features such as error handling and reusability.

As you can see in the example below, the code wraps the Azure CLI call in a
PowerShell function. This gives the Azure CLI code reusability and error handling,
making it a true script.

function New-ResourceGroup {
    param (
        [parameter(Mandatory)]
        [string]$rgName,

        [parameter(Mandatory)]
        [string]$location
    )

    try {
        # Wrap the Azure CLI call so the function can surface failures
        az group create -l $location `
                        -n $rgName
    }
    catch {
        $pscmdlet.ThrowTerminatingError($_)
    }
}

Combining both PowerShell and Azure CLI is a great approach if you cannot
decide which one to use.

Use Azure CLI with other programming languages


Let's say you're a Python developer: you've considered PowerShell, but the rest of
the stack you work on is Python, so you decide to stay with Python. Even in this
scenario, IT teams can still use the Azure CLI from other programming
languages.

Below is an example of using the Azure CLI with Python. If you're a Python
developer, as you can see, you're writing Python code the same way you would
always write it. There's no difference in syntax.

The azure.cli.core library provides a get_default_cli function whose invoke method
runs an Azure CLI command from Python. The scenario below lists the VMs in a
specific resource group.

from azure.cli.core import get_default_cli as azcli
import sys

def list_vms(resource_group):
    # invoke() takes the Azure CLI command as a list of tokens,
    # exactly as it would be typed at the command line
    azcli().invoke(['vm', 'list', '-g', resource_group])

if __name__ == '__main__':
    print('Running as main program...')
    resource_group = sys.argv[1]
    list_vms(resource_group)
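
Assuming the script above is saved as list_vms.py (a hypothetical file name), it runs
like any other Python program, taking the resource group as its first argument:

python list_vms.py demo-rg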
