
1) What is a Container?

2) Benefits of Containerization
3) Containerization with Docker
4) Why Containerization on AWS?
5) How to Achieve Containerization on AWS?
6) Basic ECS Concepts
7) Hands-on Workshop

Containerization On AWS
“We are passionate cloud enthusiasts running
AWS communities organically and are not
Amazon or AWS employees”
CODE OF CONDUCT

● Please keep your video disabled
● Please keep your audio on mute
● You can put your workshop specific questions in the chat
● Moderators will help you resolve your issues either in the chat or in the Chime
rooms
● This is a 3+ hour workshop, so please be patient
● Recording for this workshop will not be available

Hope you all enjoy the session and learn something AWSome today.
Pre-requisites

● Ensure you have an activated AWS account.

● Ensure PuTTY and PuTTYgen are installed.
● Ensure you have AWS CLI v2 configured on your machine (EC2
instance)
What is a Container?
Containers are a method of operating system virtualization that allow you to run an application and its dependencies in
resource-isolated processes. Containers allow you to easily package an application's code, configurations, and dependencies into
easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version
control.
Benefits of Containerization
Microservices: Containers provide process isolation that makes it easy to break apart and run applications as independent
components called microservices.

Hybrid applications: Containers let you standardize how code is deployed, making it easy to build workflows for applications that
run between on-premises and cloud environments.

Application migration to the cloud: Containers make it easy to package entire applications and move them to the cloud without
needing to make any code changes.

Batch processing: Package batch processing and ETL jobs into containers to start jobs quickly and scale them dynamically in
response to demand.

Machine learning: Use containers to quickly scale machine learning models for training and inference and run them close to your
data sources on any platform.

Platform as a service: Use containers to build platforms that remove the need for developers to manage infrastructure and
standardize how your applications are deployed and managed.
Containerization with Docker
What is Docker ?
Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages
software into standardized units called containers that have everything the software needs to run including
libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any
environment and know your code will run.
Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run
distributed applications at any scale. AWS supports both Docker licensing models: open source Docker
Community Edition (CE) and subscription-based Docker Enterprise Edition (EE).

How Docker works

Docker works by providing a standard way to run your code. Docker is an


operating system for containers. Similar to how a virtual machine
virtualizes (removes the need to directly manage) server hardware,
containers virtualize the operating system of a server. Docker is installed
on each server and provides simple commands you can use to build,
start, or stop containers.
Why Containerization on AWS ?
AWS is the #1 place for you to run containers and 80% of all containers in the cloud run on AWS. Customers such as Samsung, Expedia, KPMG, GoDaddy, and Snap
choose to run their containers on AWS because of our security, reliability, and scalability.
What's more?

Secure: AWS offers 210 security, compliance, and governance services and key features which is about 40 more than the next largest cloud provider. AWS provides
strong security isolation between your containers, ensures you are running the latest security updates, and gives you the ability to set granular access permissions for
every container.

Reliable: AWS container services run on the best global infrastructure with 69 Availability Zones (AZ) across 22 Regions. AWS provides >2x more regions with multiple
availability zones than the next largest cloud provider (22 vs. 8). There are SLAs for all our container services giving you ease of mind.

Choice: AWS container services offer the broadest choice of services to run your containers.

Deeply Integrated with AWS: AWS container services are deeply integrated with AWS by design. This allows your container applications to leverage the breadth and depth
of the AWS cloud from networking, security, to monitoring. AWS combines the agility of containers with the elasticity and security of the cloud.
How to Achieve Containerization on AWS
AWS provides the following popular services for achieving containerization:
Amazon Elastic Container Service: Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo,
Samsung, GE, and Cookpad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability.

ECS is a great choice to run containers for several reasons. First, you can choose to run your ECS clusters using AWS Fargate, which is serverless compute for
containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application
isolation by design. Second, ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com’s
recommendation engine, ensuring ECS is tested extensively for security, reliability, and availability.

Additionally, because ECS has been a foundational pillar for key Amazon services, it can natively integrate with other services such as Amazon Route 53, Secrets
Manager, AWS Identity and Access Management (IAM), and Amazon CloudWatch providing you a familiar experience to deploy and scale your containers. ECS is also
able to quickly integrate with other AWS services to bring new capabilities to ECS. For example, ECS allows your applications the flexibility to use a mix of Amazon EC2
and AWS Fargate with Spot and On-Demand pricing options. ECS also integrates with AWS App Mesh, which is a service mesh, to bring rich observability, traffic controls
and security features to your applications. ECS has grown rapidly since launch and is currently launching 5X more containers every hour than EC2 launches instances.
Basic ECS Concepts
ECS Cluster: One or more servers (i.e. EC2 instances) that ECS can use for deploying Docker containers.

Container Instance: A single node (i.e. EC2 Instance) in an ECS Cluster.

ECS Task: One or more Docker containers that should be run as a group on a single instance.

ECS Task Definition (aka ECS Container Definition): A JSON file that defines an ECS Task, including the container(s) to run,
the resources (memory, CPU) those containers need, the volumes to mount, the environment variables to set, and so on.

Task Definition Revision: ECS Task Definitions are immutable. Once you create a Task Definition, you can never change it: you can
only create new Task Definitions, which are known as revisions. The most common reason for a revision is to change which version
of a Docker container image to deploy.

ECS Service: A way to deploy and manage long-running ECS Tasks, such as a web service. The service can deploy your
Tasks across one or more instances in the ECS Cluster, restart any failed Tasks, and route traffic across your Tasks using an
optional Elastic Load Balancer.
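The concepts above can be tied together with a minimal Task Definition. The sketch below builds one as a Python dict and serializes it to the JSON shape ECS expects; it reuses the names, image URI placeholder, and limits from this workshop's Step 5, but is an illustrative skeleton, not a complete definition.

```python
import json

# Minimal, illustrative ECS task definition for the workshop's service-1.
# Registering a modified copy creates a new revision; existing revisions
# are never changed in place.
task_definition = {
    "family": "service-1-td",  # groups all revisions of this definition
    "containerDefinitions": [
        {
            "name": "service-1-cont",
            "image": "16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-1",
            "memory": 256,  # hard memory limit in MiB
            "portMappings": [
                {"containerPort": 5001, "hostPort": 80, "protocol": "tcp"}
            ],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

The same structure (as a JSON file) can be fed to `aws ecs register-task-definition --cli-input-json`, though the workshop creates it through the console instead.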
Hands on Workshop - Containerize Microservice with AWS
In this workshop, we will create a Python-based microservice and deploy it to Amazon ECS behind an
Application Load Balancer with dynamic port mapping. We will create three images of the microservice,
which will run behind an Application Load Balancer on ECS.

To achieve this we will perform the following steps:

Step 1: Create the microservice app [3 separate apps to see the ALB effect]
Step 2: Create Dockerfile(s)
Step 3: Create an ECR repository and push images to ECR
Step 4: Create an ECS cluster
Step 5: Create a Task Definition and add container information
Step 6: Create a Service to run the Task Definition
Step 7: Create an Application Load Balancer
Step 8: Fix security group settings
Step 9: Complete creation of the service by providing the ALB name
Step 10: Verify the running services
Step 11: Delete resources (ECS cluster, Load Balancer)
Step1: Create microservice
Here we are creating a simple Python-based 'Hello World' microservice app. Let's
write its code in index.py using Flask, a small HTTP server framework for Python apps.

Filename: index.py

from flask import Flask

app = Flask(__name__)

@app.route("/service-1")
def hello():
    return "Hello World from service-1!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001, debug=True)
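For readers who want to check the endpoint logic without installing Flask, here is a dependency-free sketch of the same '/service-1' route using only the Python standard library. This is purely illustrative; the workshop app itself uses the Flask code above.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Serve '/service-1' the same way the Flask route does."""

    def do_GET(self):
        if self.path == "/service-1":
            body = b"Hello World from service-1!"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request log lines
        pass

# Port 0 asks the OS for any free port, so the demo never collides.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/service-1"
response = urllib.request.urlopen(url).read().decode()
print(response)  # -> Hello World from service-1!
server.shutdown()
```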
Step 2: Create Dockerfile
Filename: Dockerfile

FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5001
ENTRYPOINT ["python","./index.py"]

Where the FROM directive tells Docker which base image to pull from Docker Hub,
COPY copies the application into the container image,
WORKDIR sets the working directory,
RUN calls pip (PyPI) to install the dependencies listed in requirements.txt,
EXPOSE documents the port used by Flask, and
ENTRYPOINT executes the actual application script.
Step2 contd.: Docker Dependencies

Filename: requirements.txt

flask

Check if Docker is available on the machine:

docker --version

Steps to install Docker on a Linux EC2 instance (if not already available):

sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
exit
==> log in again
docker info
docker --version

So by now we have created index.py, requirements.txt, and Dockerfile for our first
microservice, service-1, which is set to run on port 5001. Similarly, create
service-2 and service-3 on ports 5002 and 5003 respectively.
Step3: Create ECR Repository
Go to the AWS management console
Open ECR dashboard, provide the repository name as ‘microservices-repo’
Leave everything default and click on create repository.
Now select the repository and click on the 'View Push Commands' option,
which will give you the commands to log in (via CLI), build the image, tag it, and push
the tag to the ECR repository.
Ensure you have AWS CLI v2 configured; to check, you can run 'aws --version'.
To set it up, refer to: https://medium.com/@learning.dipali/install-awscli-v2-b54931091bbb
Then run 'aws configure' and use your access key and secret key to configure it.
Reference commands to create and push the three images to ECR repo are on next slide.
Step3: Create ECR Repository (commands)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin
16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-repo:service-1 .
docker tag microservices-repo:service-1 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-1
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-1

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin
16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-repo:service-2 .
docker tag microservices-repo:service-2 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-2
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-2

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin
16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-repo:service-3 .
docker tag microservices-repo:service-3 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-3
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-3
Step4: Create ECS Cluster
Go to the Amazon ECS dashboard => Clusters => Create Cluster and consider the following
configurations while creating the cluster:
Select the cluster template "EC2 Linux + Networking"
Provide the cluster name 'microservices-cluster'
Choose EC2 instance type 't3.small'
Number of instances: 2
Leave everything else as default.
This will create a cluster with 2 EC2 instances in
a new VPC, plus a new security group with a name
starting with 'EC2ContainerService...'
Step5: Create Task Definition & add container information
From the ECS dashboard go to Task Definition => Create New Task Definition with
launch type as EC2
Name: service-1-td, and click on Add Container (refer snapshot below)
Now use following configurations to add container:
Container Name: service-1-cont
image: image URI of service-1 tag from ECR
Memory Limit: Hard Limit — 256
Port mapping: 80 => 5001
then click on Add
Then click on create for Task definition to get created.
Similarly create 2 more Task Definitions for service-2 and service-3.
The only difference will be in the container port mapping: here, to allow dynamic port
mapping via the ALB, we will map port 0 to 5002 and 0 to 5003. Please refer to the snapshot below:
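The difference between the static and dynamic mappings is only the hostPort value. The sketch below shows the three mappings side by side; hostPort 0 tells ECS to let the instance pick a free ephemeral port, which the ALB target group learns automatically when ECS registers the task.

```python
# service-1: host port 80 is fixed, so only one such task can run per
# instance (the host port can be bound only once).
static_mapping = {"containerPort": 5001, "hostPort": 80, "protocol": "tcp"}

# service-2 / service-3: hostPort 0 requests a dynamic (ephemeral) host
# port, so multiple tasks of the same service can share one instance
# behind the ALB.
dynamic_mapping_2 = {"containerPort": 5002, "hostPort": 0, "protocol": "tcp"}
dynamic_mapping_3 = {"containerPort": 5003, "hostPort": 0, "protocol": "tcp"}

def is_dynamic(mapping):
    """hostPort 0 means ECS assigns an ephemeral host port at launch."""
    return mapping["hostPort"] == 0

flags = [is_dynamic(m) for m in (static_mapping, dynamic_mapping_2, dynamic_mapping_3)]
print(flags)  # -> [False, True, True]
```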
Step6: Create Service to run the Task definition
Open the cluster and click on ‘Create’ under Services tab
Consider following service configurations:
Launch type: EC2
Task Definition: service-1-td
service name: service-1
No. of tasks: 1
Choose Task Placement as 'BinPack' and click on 'Next Step'. Under Load balancing we need to use an
Application Load Balancer, for which we need to create an ALB first.

Task Placement Strategies:


Binpack: Place Tasks based on the least amount of available CPU and Memory.
Random: Place tasks randomly. Use this strategy when task placement or termination does not matter.
Spread: Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs,
InstanceID or host.
Spread is typically used to achieve high availability.
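The three strategies above can be sketched as simple selection rules. The cluster state below (free CPU units and running task counts per instance) is made up for illustration; the real scheduler also accounts for memory, ports, and placement constraints.

```python
import random

# Hypothetical two-instance cluster: free CPU units per instance.
free_cpu = {"i-aaa": 512, "i-bbb": 1024}

def binpack(instances):
    # Pick the instance with the LEAST remaining capacity, packing tasks
    # densely so whole instances can stay empty and be scaled in.
    return min(instances, key=instances.get)

def spread(instances, running_tasks):
    # Pick the instance running the FEWEST tasks, spreading the service
    # out for high availability.
    return min(instances, key=lambda i: running_tasks.get(i, 0))

def place_randomly(instances):
    # Any instance will do when placement does not matter.
    return random.choice(list(instances))

print(binpack(free_cpu))                            # -> i-aaa (least free CPU)
print(spread(free_cpu, {"i-aaa": 3, "i-bbb": 1}))   # -> i-bbb (fewest tasks)
```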
Step7: Create Application Load Balancer
In a new tab, open the EC2 console and go to 'Load Balancers'. While creating the ALB, it is very important
to create it in the same VPC as the ECS cluster. Use the following configurations for the ALB:
Name: microservicesLB
Listeners: HTTP : 80
VPC: (choose VPC as of ECS cluster) and select AZs
Then click on “Configure Security Settings” and then “Next: Configure Security Groups”
Step7 contd.: Create Application Load Balancer
Create new Security Group named: MicroservicesLB-SG

Now, Click on “Next: Configure Routing”

Create New Target Group with Name: MicroservicesLB-TG


Step7 contd.: Create Application Load Balancer
Then, Click on “Next: Register Targets”
Here, we do not need to register any targets (these will be registered via ECS). Simply click
on "Next: Review" and then "Create".
Once the load balancer is created, note down (copy) its security group id.
Step8: Fix security group settings
Click on this Security group id, it will take you to the Security Groups page under EC2
Open the EC2 container service security group, and edit inbound rules to allow traffic from
Load balancer security group with All TCP protocol and save rule.
Step9: Complete creation of service by providing ALB name
Now, come back to the service creation tab. Under the Load Balancing section, choose Application
Load Balancer and select the load balancer just created. Also ensure the Container name : Port
mapping is already populated with the correct port.
Step9 contd.: Complete creation of service by providing ALB
name
Now, Click on “Add to Load balancer”
Choose following configuration in ‘Container to Load balance’ section
Production Listener Port: 80:HTTP
Leave the default Target group name
Make sure the Path pattern exactly matches the endpoint in your application script (index.py)
followed by /*
Provide Evaluation order as 1, 2, or 3 for service-1, service-2, service-3 respectively
The Health check path should also be the same as the endpoint
Step10: Verify the running services
Once the service is created, take the DNS name of the ALB followed by /service-1 (or /service-2, /service-3);
it should display the running application with the message: Hello World from service-1!
e.g. refer snapshot below:
Step11: Delete resources (ECS Cluster, Load Balancer)

Go to ECS Dashboard => open cluster => Delete Cluster


Go to Load Balancers via EC2 => Select Load Balancer => Actions => Delete
Serverless Containerization
Using
AWS Fargate
What is AWS Fargate
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic
Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage
servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources
required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel
providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by
design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate.

In this workshop demo we will show how you can utilize AWS Fargate to run containers with Amazon ECS.
AWS Fargate Benefits
Deploy and manage applications, not infrastructure:

With Fargate, you can focus on building and operating your applications whether you are running it with ECS or EKS. You only interact with and pay for
your containers, and you avoid the operational overhead of scaling, patching, securing, and managing servers. Fargate ensures that the infrastructure
your containers run on is always up-to-date with the required patches.

Right-sized resources with flexible pricing options

Fargate launches and scales the compute to closely match the resource requirements you specify for the container. With Fargate, there is no
over-provisioning and paying for additional servers. You can also get Spot and Compute Savings Plan pricing options with Fargate just like with Amazon
EC2 instances. Compared to On-Demand prices, Fargate Spot provides up to 70% discount for interrupt-tolerant applications, and Compute Savings Plan
offers up to 50% discount on committed spend for persistent workloads.
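The "up to 70%" and "up to 50%" figures above are simple multipliers on the On-Demand rate. The sketch below makes the arithmetic explicit; the On-Demand rate used is hypothetical, since real Fargate pricing varies by region and over time.

```python
# Hypothetical On-Demand cost for a small Fargate task; check the current
# AWS price list for real figures.
on_demand_per_hour = 0.05  # $/hour (illustrative only)

# "Up to 70% discount" for interrupt-tolerant workloads on Fargate Spot.
fargate_spot = on_demand_per_hour * (1 - 0.70)

# "Up to 50% discount" on committed spend via a Compute Savings Plan.
savings_plan = on_demand_per_hour * (1 - 0.50)

print(f"spot: ${fargate_spot:.4f}/h, savings plan: ${savings_plan:.4f}/h")
```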

Secure isolation by design

Individual ECS tasks or EKS pods each run in their own dedicated kernel runtime environment and do not share CPU, memory, storage, or network
resources with other tasks and pods. This ensures workload isolation and improved security for each task or pod.

Rich observability of applications

With Fargate, you get out-of-box observability through built-in integrations with other AWS services including Amazon CloudWatch Container Insights.
Fargate allows you to gather metrics and logs for monitoring your applications through an extensive selection of third party tools with open interfaces.
Hands on Workshop - Use Fargate to Run Containerized App
In this workshop, we will create a Node.js-based microservice and deploy it to Amazon ECS on Fargate
behind an Application Load Balancer. Here we will create one image of the microservice, which will run
behind an Application Load Balancer on ECS.

To achieve this we will perform the following steps:

Step 1: Create a Node.js microservice app
Step 2: Create Dockerfile
Step 3: Create an ECR repository and push the image to ECR
Step 4: Create an ECS cluster
Step 5: Create a Task Definition and add container information
Step 6: Create a Service to run the Task Definition
Step 7: Create a new Application Load Balancer
Step 8: Fix security group settings
Step 9: Complete creation of the service by providing the ALB name
Step 10: Verify the running service
Step 11: Delete resources (ECS cluster, Load Balancer)
Step1: Create microservice
Here we are creating a simple Node.js-based 'Hello World' microservice app. Let's
write its code in server.js using JavaScript.

File name: server.js

const express = require('express');

// Constants
const PORT = process.env.PORT || 8080;
const HOST = process.env.HOST || 'localhost';

// App
const app = express();

app.get('/fargate-service', (req, res) => {
  return res.send(`Hello World from fargate-service`);
});

app.listen(PORT);
console.log(`Running on port :${PORT}`);

File name: package.json

{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "test": "jest"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
Step 2: Create Dockerfile
Filename: Dockerfile

FROM mhart/alpine-node:12
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]

Where the FROM directive tells Docker which base image to pull from Docker Hub,
COPY copies the application into the container image,
WORKDIR sets the working directory,
RUN calls npm to install the dependencies listed in package.json,
EXPOSE documents the port used by Express.js, and
CMD starts the actual application script.
Step2 contd.: Docker Dependencies

Filename: package.json (dependencies listed above; install them with 'npm install')

Check if Docker is available on the machine:

docker --version

Steps to install Docker on a Linux EC2 instance (if not already available):

sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
exit
==> log in again
docker info
docker --version

So by now we have created server.js, package.json, and Dockerfile for our app,
which is set to run on port 8080.
Step3: Create ECR Repository
Go to the AWS management console
Open ECR dashboard, provide the repository name as ‘microservices-fargate-repo’
Leave everything default and click on create repository.
Now select the repository and click on the 'View Push Commands' option,
which will give you the commands to log in (via CLI), build the image, tag it, and push
the tag to the ECR repository.
Ensure you have AWS CLI v2 configured; to check, you can run 'aws --version'.
To set it up, refer to: https://medium.com/@learning.dipali/install-awscli-v2-b54931091bbb
Then run 'aws configure' and use your access key and secret key to configure it.
Reference commands to create and push the image to the ECR repo are on the next slide.
Step3: Create ECR Repository (commands)

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin
16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-fargate-repo:fargate-service .
docker tag microservices-fargate-repo:fargate-service
16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-fargate-repo:fargate-service
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-fargate-repo:fargate-service
Step4: Create ECS Cluster
Go to the Amazon ECS dashboard => Clusters => Create Cluster and consider the following
configurations while creating the cluster:
Select the cluster template "Networking only"
Provide the cluster name 'microservices-fargate-cluster'
Under networking, check "Create VPC"

Optional: Enable CloudWatch Container Insights (for debugging)

Leave everything else as default.

This will create the necessary resources.

Note: Keep a note of the created resources, especially the VPC id, which will be useful in the next steps
Step5: Create Task Definition & add container information
From the ECS dashboard go to Task Definition => Create New Task Definition with launch type as Fargate
Name: fargate-service-td,
Set Task Size: Task Memory:0.5 GB and vCPU: 0.25
click on Add Container (refer snapshot below)
Now use following configurations to add container:
Container Name: fargate-service-cont
image: image URI of the fargate-service tag from ECR
Memory Limit: Soft Limit — 512
Port mapping: 8080 (TCP)
then click on Add
Then click on create for Task definition to get created.
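Unlike the EC2 launch type, Fargate only accepts task-level CPU/memory values from a fixed set of combinations. The sketch below encodes a partial table covering the small sizes (values in CPU units and MiB, as the API expects) and validates the workshop's choice of 0.25 vCPU with 0.5 GB; larger sizes are omitted for brevity.

```python
# Partial table of valid Fargate task sizes (cpu units -> allowed MiB).
# 256 CPU units = 0.25 vCPU. See the ECS documentation for the full list.
VALID_COMBOS = {
    256: [512, 1024, 2048],           # 0.25 vCPU -> 0.5, 1 or 2 GB
    512: [1024, 2048, 3072, 4096],    # 0.5 vCPU  -> 1 to 4 GB
}

def is_valid(cpu_units, memory_mib):
    """True if Fargate accepts this task-level CPU/memory pairing."""
    return memory_mib in VALID_COMBOS.get(cpu_units, [])

# The workshop's Task Size: 0.25 vCPU with 0.5 GB memory.
print(is_valid(256, 512))   # -> True
print(is_valid(256, 4096))  # -> False (too much memory for 0.25 vCPU)
```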
Step6: Create Service to run the Task definition
Open the cluster and click on ‘Create’ under Services tab
Consider following service configurations:
Launch type: FARGATE
Task Definition: fargate-service-td
service name: fargate-service
No. of tasks: 1
Choose Task Placement as 'BinPack' and click on 'Next Step'.

In the Security Group, allow port 8080 to be accessed from anywhere (later this should be changed to traffic from the load balancer security group only).

Under Load balancing we need to use an Application Load Balancer, for which we need to create an ALB first if not done already,
or utilize a previously created load balancer.

Task Placement Strategies:


Binpack: Place Tasks based on the least amount of available CPU and Memory.
Random: Place tasks randomly. Use this strategy when task placement or termination does not matter.
Spread: Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, InstanceID or host.
Spread is typically used to achieve high availability.
Step7: Create Application Load Balancer
In a new tab, open the EC2 console and go to 'Load Balancers'. While creating the ALB, it is very important
to create it in the same VPC as the ECS cluster. Use the following configurations for the ALB:
Name: microservicesFargateLB
Listeners: HTTP : 80
VPC: (choose VPC as of ECS cluster) and select AZs
Then click on “Configure Security Settings” and then “Next: Configure Security Groups”
Step7 contd.: Create Application Load Balancer
Create new Security Group named: MicroservicesFargateLB-SG

Now, Click on “Next: Configure Routing”

Create New Target Group with Name: MicroservicesFargateLB-TG


Step7 contd.: Create Application Load Balancer
Then, click on "Next: Register Targets"
Here, we do not need to register any targets (these will be registered via ECS). Simply click
on "Next: Review" and then "Create".
Once the load balancer is created, note down (copy) its security group id.
Step8: Fix security group settings
Come back to the service creation tab and edit the security group to allow traffic on port 8080 from
the security group id of the load balancer.
Step9: Complete creation of service by providing ALB name
In the service creation tab, select the created load balancer and make sure the port mapping is done
correctly.
Step9 contd.: Complete creation of service by providing ALB
name
Now, Click on “Add to Load balancer”
Choose following configuration in ‘Container to Load balance’ section
Production Listener Port: 80:HTTP
Leave the default Target group name and create new one, name it, MicroservicesFargateLB-TG-New
Make sure the Path pattern exactly matches the endpoint in your application script (server.js)
followed by /*
Provide Evaluation order as 1 if it's a new load balancer, else the next order number based on your
existing load balancer
Step10: Verify the running services
Once the service is created, take the DNS name of the ALB followed by /fargate-service;
it should display the running application with the message: Hello World from fargate-service
Step11: Delete resources (ECS Cluster, Load Balancer)

Go to ECS Dashboard => open cluster => Delete Cluster


Go to Load Balancers via EC2 => Select Load Balancer => Actions => Delete
Good luck!
I hope you’ll use this knowledge and build
awesome solutions.

If you run into any issues, feel free to let us know.
