
Container Services

Containers
Container Services in the Cloud (AWS)

A container is a form of virtualization in which applications run in isolated user spaces.

Containers are executable units of software in which application code is packaged along with its libraries and dependencies, so that it can run anywhere, whether on a desktop, in traditional IT, or in the cloud.
Why use containers?
Containers are a powerful way for developers to package and deploy their applications. They are lightweight and provide a consistent, portable software environment in which applications can easily run and scale anywhere.
Need for containers:

• Containers make it easy to share CPU, memory, storage, and network resources at the operating-system level, and they offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run.

• Containers require fewer system resources than traditional or hardware virtual-machine environments because they do not include operating-system images. Applications running in containers can be deployed easily to many different operating systems and hardware platforms.
Benefits of containers:
• Less overhead. Containers require fewer system resources than traditional or hardware virtual-machine environments because they do not include operating-system images.
• Increased portability.
• More consistent operation.
• Greater efficiency.
• Better application development.
• Flexibility.
• Security.
• Speed.
What are containers?
Virtual machines and containers stack up differently:

Virtual Machine          Container
App1 | App2              App1 | App2
Guest OS | Guest OS      Docker Engine
Hypervisor               Operating System
Operating System         Hardware
Hardware
Containers on AWS
Container services on AWS can be divided into three categories:
• Registry: Amazon ECR
• Orchestration (container management): ECS, EKS
• Compute: EC2, Fargate
What is Docker?
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.

Docker is an operating system for containers. Similar to how a virtual machine virtualizes (removes the need to directly manage) server hardware, containers virtualize the operating system of a server.

Docker can be thought of as a lightweight, more advanced form of virtualization.

Advantages of Docker:
o Consistent environments
o Speed and agility
o Efficient management of multi-cloud environments
o Security
o Optimized costs
o Caching a cluster of containers
o Flexible resource sharing
o Scalability: many containers can be placed on a single host
o Running your service on hardware that is much cheaper than standard servers
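The packaging idea above is easiest to see in a Dockerfile. The one below is a hypothetical sketch for a small Python web app; the file names and the app itself are assumptions for illustration, not part of these notes:

```dockerfile
# Hypothetical example: package an application together with its
# dependencies so the same image runs anywhere Docker runs.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building this with `docker build` produces an image that bundles the code, its libraries, and its runtime, which is exactly the portability benefit listed above.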
How to Install Docker:
Step 01 : Launch an EC2 instance (using a Linux AMI and a .ppk key file).
Step 02 : After the instance launches, select it and connect with PuTTY.
Step 03 : Run the commands below to install Docker.
1. Switch from the ec2-user login to the root user
# sudo su
2. Update your Linux machine
# yum update -y
3. Install Docker
# yum install docker -y
4. Check whether Docker is present
# which docker
5. Check the version
# docker -v
6. Check whether Docker is running
# service docker status OR docker info
7. Start the Docker engine
# service docker start
8. List the images present in Docker
# docker images
9. Check running containers, or both running and stopped containers
# docker ps OR docker ps -a
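The install can be sanity-checked with a tiny script. This sketch mirrors steps 4 and 5 above and only reports what it finds; it does not change anything:

```shell
#!/bin/sh
# Post-install sanity check: confirm the docker binary is on the PATH
# and print its version if present.
if command -v docker >/dev/null 2>&1; then
    docker -v
else
    echo "docker not found - install it with: sudo yum install docker -y"
fi
```

Running it immediately after step 3 gives a quick pass/fail signal before you move on to starting the engine.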
What is NGINX?
NGINX is a web server that accepts requests over HTTP/HTTPS and serves website content by storing, processing, and delivering web pages to users.
It can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
Architecture of NGINX (diagram): incoming HTTP traffic reaches NGINX, which acts as a proxy and load balancer; as a web server it serves content from the disk, and it passes dynamic requests on to application servers (FastCGI, Passenger).
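Those roles map onto a single NGINX configuration. The sketch below is hypothetical (the upstream addresses and paths are placeholders): static files are served from disk, while other requests are proxied and load-balanced across application servers.

```nginx
# Hypothetical config: NGINX as web server plus reverse proxy/load balancer.
upstream app_servers {
    server 10.0.0.11:8000;   # application server 1 (placeholder address)
    server 10.0.0.12:8000;   # application server 2 (placeholder address)
}

server {
    listen 80;

    # Web server role: serve static content straight from disk
    location /static/ {
        root /var/www/html;
    }

    # Reverse proxy / load balancer role: forward dynamic requests upstream
    location / {
        proxy_pass http://app_servers;
    }
}
```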
How to install NGINX on an EC2 Linux machine
Steps:
Step 01 : Open the EC2 console and launch an instance (using a Linux AMI and a .ppk key file).
Step 02 : After the instance launches, connect with PuTTY.
Step 03 : Follow the steps and commands below to install NGINX.
01. Update the Linux machine
# sudo yum update
02. Open the NGINX repo file in an editor
# sudo vim /etc/yum.repos.d/nginx.repo
03. Add the following repository definition in the editor
[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

04. Update your Linux machine again
# sudo yum update
05. Install the NGINX server
# sudo yum install nginx
06. Start the NGINX server
# sudo service nginx start
07. Check the process status
# sudo service nginx status
08. Verify that the NGINX server is running correctly
# curl localhost:80

Now verify from a web browser that the server is working, using the instance's public IPv4 address:

• Go to Instances and copy the public IPv4 address.
• Open a new browser tab, paste the IP, and press Enter.
What is AWS ECR?
Amazon Elastic Container Registry (Amazon ECR) is an AWS-managed container image service that is secure, scalable, and reliable. Amazon ECR supports private repositories with resource-based permissions using AWS IAM.

ECR, or Elastic Container Registry, is an AWS service that acts as a dedicated location for storing container images.
Reasons for using an image registry (need for ECR)
Six reasons to use a Docker registry for applications:
1. Docker registries enable developers to store and distribute Docker images.
2. Automate development.
3. Collaborate with your team.
4. Secure Docker images.
5. Gain insight into issues.
6. Reliably deploy containers.
Steps to create an ECR repository and push images
1. Log in to the AWS console and open Elastic Container Registry.
2. On the Repositories page, choose Create repository.
3. Enter a repository name and choose Create.
4. Go to the IAM service.
5. In the left panel, click Roles and choose Create role.
6. Select use case: EC2.
7. Add permissions: search for and select "AmazonEC2ContainerRegistryFullAccess".
8. Enter a name and create the IAM role.
9. Go to the EC2 console.
10. Launch an instance: enter a name, select a Linux AMI, create a new key pair (.ppk file), create or select a security group, attach the IAM role created above under Advanced details, and launch the instance.
11. Connect to the EC2 instance with PuTTY.
12. Go back to ECR, click "View push commands", and click "Getting started with ECR".
13. You can see some basic commands; execute them.
14. After executing those commands, click "View push commands" again; type the commands shown there to push an image to your repository.
15. You can now see that your image has been uploaded to your repository.
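The exact push commands come from your repository's "View push commands" page; as a rough guide, they typically follow the three-step pattern below. This sketch echoes the commands with placeholder account, region, and repository values rather than running them:

```shell
#!/bin/sh
# Placeholder values - the ECR console's "View push commands" page shows
# the real ones for your repository.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=my-app
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Echo (rather than run) the three typical steps: authenticate, tag, push.
echo "aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $REGISTRY"
echo "docker tag $REPO:latest $REGISTRY/$REPO:latest"
echo "docker push $REGISTRY/$REPO:latest"
```

Substituting your own account ID, region, and repository name reproduces the commands the console generates for you.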
What is ECS?
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances.
Create An ECS Cluster:
1. Open the ECS console in the region where you are looking to launch your cluster.
2. Click Create Cluster
3. Un-select New ECS Experience on the top left corner to work on previous ECS console version
(Capacity providers not supported on new version)
4. Under Select cluster template select EC2 Linux + Networking
5. Click Next step
6. Under Configure cluster for Cluster name, enter EcsSpotWorkshop
7. Select the checkbox Create an empty cluster
8. Select the checkbox Enable Container Insights
9. Click Create
10. Click View Cluster
11. Click the Capacity Providers tab
Creating ECS Service:
To create the service, follow these steps:
1. In the ECS console, select EcsSpotWorkshopUpdate to open the EcsSpotWorkshopUpdate cluster view
2. Select the Services Tab
3. Click on Create
4. For Capacity provider strategy, leave it to default value Cluster default Strategy
5. For Task Definition Family, select ec2-task
6. For Task Definition Revision, select 1
7. For Cluster, leave default value EcsSpotWorkshop
8. For Service name, enter ec2-service-split
9. For Service type, leave it to the default value REPLICA
10. For Number of tasks, enter 10
11. Leave the default values for Minimum healthy percent and Maximum percent
12. Under Deployments section, leave it to default values
13. Under Task Placement section, for Placement Templates, select BinPack
14. Under Task tagging configuration section, leave it to default values
15. Click on Next Step
16. Under Configure network section, in Load balancing, for Load balancer type*, select Application
Load Balancer
17. For Service IAM role, leave default value
18. For Load balancer name, select EcsSpotWorkshop
19. Under Container to load balance, for Container name : port, click on add to load balancer
20. For Production listener port, Select HTTP:80 from the dropdown list
21. For Production listener protocol, leave default value of HTTP
22. For Target group name, select EcsSpotWorkshop from the list
23. Leave default values for Target group protocol, Target type, Path pattern, Health check path
24. Click on Next Step
25. Under Set Auto Scaling (optional), leave default value for service auto scaling
26. Click on Next Step
27. Click on Create Service
28. Click on View Service
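The console flow above can be approximated from the CLI. The sketch below is a rough equivalent, not the workshop's exact configuration, and it echoes the command rather than running it, so nothing is created by accident:

```shell
#!/bin/sh
# Rough CLI equivalent of the console steps above (a sketch; the cluster,
# service, and task names match the workshop, but flags are assumptions).
# Echoed for preview - drop the 'echo' to actually create the service.
echo "aws ecs create-service \
  --cluster EcsSpotWorkshop \
  --service-name ec2-service-split \
  --task-definition ec2-task:1 \
  --desired-count 10 \
  --placement-strategy type=binpack,field=memory"
```

The binpack placement strategy here corresponds to the BinPack placement template chosen in step 13.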
What is Kubernetes (k8s)?

Kubernetes is open-source software that allows you to deploy and manage containerized applications at scale. Kubernetes manages clusters of Amazon Elastic Compute Cloud (EC2) compute instances and runs containers on those instances, with processes for deployment, maintenance, and scaling.
Kubernetes components:
When you deploy Kubernetes, you get a cluster.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

Master node components:
• etcd
• kube-apiserver
• kube-scheduler
• kube-controller-manager
• cloud-controller-manager

Worker node components:
• kubelet
• kube-proxy
• container runtime
• Master node:
A master node is a node that controls and manages a set of worker nodes (the workload runtime) and forms the control plane of a Kubernetes cluster.

etcd: The etcd data store is the Kubernetes backend, which holds the cluster information as key-value pairs.

kube-apiserver: The API server is the Kubernetes frontend that exposes the Kubernetes API. It also validates and configures data for API objects, including pods, services, deployments, replication controllers, and others.

kube-scheduler: The kube-scheduler is a control plane component that assigns unscheduled pods to suitable nodes based on resource requirements and availability.

kube-controller-manager: The kube-controller-manager is a control plane component that runs the controller processes.

cloud-controller-manager: The cloud controller manager lets you link your cluster to your cloud provider's API, and separates the components that interact with that cloud platform from the components that only interact with your cluster.
• Worker node:
A worker node is a node that runs applications in a cluster and reports to the control plane. The main responsibilities of a worker node are to process data stored in the cluster and to handle the networking that carries traffic between applications across the cluster and to the outside world.

Kubelet: The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures
that the containers described in those PodSpecs are running and healthy.
kube-proxy: kube-proxy maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of your cluster.
Container runtime: Kubernetes supports container runtimes such as containerd, CRI-O, and any other
implementation of the Kubernetes CRI (Container Runtime Interface).
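A PodSpec of the kind the kubelet consumes looks like this minimal, hypothetical manifest (the pod name and image are placeholder choices):

```yaml
# Minimal hypothetical PodSpec: the kubelet on the assigned node ensures
# the container described here is running and healthy.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```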
Steps to Create Cluster in Kubernetes:

1. Create an IAM (Identity and Access Management) role.
This role will be used to give your CI host permission to create and destroy resources on AWS. Instructions for creating a role are in the AWS IAM documentation. The following policies are required:

• AmazonEC2FullAccess
• IAMFullAccess
• AmazonS3FullAccess
• AmazonVPCFullAccess
• AmazonRoute53FullAccess (optional)
2. Create a new instance to use as your CI host. This node will deal with provisioning and tearing
down the cluster.
This instance can be small (t2.micro for example).
When creating it, assign the IAM role created in step 1.
Once created, download ssh keys (.pem file). Ensure permissions are restrictive on the file:
# chmod 400 name.pem
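The effect of chmod 400 can be checked on a placeholder file (demo.pem here is a stand-in, not your real key):

```shell
#!/bin/sh
# Demonstrate the permission fix on a placeholder key file.
touch demo.pem               # stand-in for your downloaded .pem key
chmod 400 demo.pem           # owner read-only; ssh refuses looser keys
ls -l demo.pem | cut -c1-10  # prints: -r--------
rm -f demo.pem               # clean up the placeholder
```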

3. SSH to your CI host (see the AWS documentation on connecting to EC2 instances).

4. Install kops and kubectl on your CI host.
Follow the instructions in the kubernetes/kops documentation.
5. Choose a cluster name:
Since we are not using pre-configured DNS we will use the suffix “.k8s.local”. Per the docs, if the DNS
name ends in .k8s.local the cluster will use internal hosted DNS.
# export NAME=<somename>.k8s.local
6. Set up an SSH key pair to use with the cluster:
# ssh-keygen
7. Create an S3 bucket to store your cluster configuration.
Since we are on AWS, we can use an S3 backing store. It is recommended to enable versioning on the S3 bucket. We do not need to pass the bucket into the kops commands; the kops tool picks it up automatically from the KOPS_STATE_STORE environment variable.
# export KOPS_STATE_STORE=s3://<your_s3_bucket_name_here>

8. Set the region to deploy in:
# export REGION=`curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}'`
9. Install the AWS CLI:
# sudo apt-get update
# sudo apt-get install awscli

10. Set the availability zones for the nodes.
For this guide we will allow nodes to be deployed in all AZs:
# export ZONES=$(aws ec2 describe-availability-zones --region $REGION | grep ZoneName | awk '{print $2}' | tr -d '"')
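To see what that pipeline extracts, the same grep/awk/tr chain can be run over a canned sample of describe-availability-zones output (the sample values below are hypothetical, standing in for the live AWS response):

```shell
#!/bin/sh
# Abridged, hypothetical output of:
#   aws ec2 describe-availability-zones --region us-east-1
SAMPLE='{
    "AvailabilityZones": [
        {
            "State": "available",
            "ZoneName": "us-east-1a"
        },
        {
            "State": "available",
            "ZoneName": "us-east-1b"
        }
    ]
}'

# Same extraction chain as step 10: keep the ZoneName lines, take the
# value field, strip the quotes.
echo "$SAMPLE" | grep ZoneName | awk '{print $2}' | tr -d '"'
# prints:
# us-east-1a
# us-east-1b
```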

11. Create the cluster
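Using the NAME, ZONES, and KOPS_STATE_STORE values set in the earlier steps, a typical kops invocation looks like the sketch below. It echoes the commands so nothing is provisioned by accident, and the node count and instance size are placeholder assumptions:

```shell
#!/bin/sh
# Placeholder values standing in for the exports from steps 5 and 10.
NAME=demo.k8s.local
ZONES=us-east-1a,us-east-1b

# Preview the kops commands; drop the 'echo' to actually create the cluster.
echo "kops create cluster --name $NAME --zones $ZONES --node-count 2 --node-size t2.micro"
echo "kops update cluster --name $NAME --yes"
```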


Thank you
