Containerization On AWS

Agenda:
2) Benefits of Containerization
3) Containerization with Docker
4) Why Containerization On AWS?
5) How to Achieve Containerization on AWS?
6) Basic ECS Concepts
7) Hands on Workshop
“We are passionate cloud enthusiasts running
AWS communities organically and are not
Amazon or AWS employees”
CODE OF CONDUCT
Hope you all enjoy the session and learn something AWSome today.
Pre-requisites
Hybrid applications: Containers let you standardize how code is deployed, making it easy to build workflows for applications that
run between on-premises and cloud environments.
Application migration to the cloud: Containers make it easy to package entire applications and move them to the cloud without
needing to make any code changes.
Batch processing: Package batch processing and ETL jobs into containers to start jobs quickly and scale them dynamically in
response to demand.
Machine learning: Use containers to quickly scale machine learning models for training and inference and run them close to your
data sources on any platform.
Platform as a service: Use containers to build platforms that remove the need for developers to manage infrastructure and
standardize how your applications are deployed and managed.
Containerization with Docker
What is Docker?
Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages
software into standardized units called containers that have everything the software needs to run including
libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any
environment and know your code will run.
Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run
distributed applications at any scale. AWS supports both Docker licensing models: open source Docker
Community Edition (CE) and subscription-based Docker Enterprise Edition (EE).
Why Containerization On AWS?
Secure: AWS offers 210 security, compliance, and governance services and key features, about 40 more than the next largest cloud provider. AWS provides
strong security isolation between your containers, ensures you are running the latest security updates, and gives you the ability to set granular access permissions for
every container.
Reliable: AWS container services run on global infrastructure with 69 Availability Zones (AZs) across 22 Regions. AWS provides more than 2x as many Regions with multiple
Availability Zones as the next largest cloud provider (22 vs. 8). There are SLAs for all AWS container services, giving you peace of mind.
Choice: AWS container services offer the broadest choice of services to run your containers.
Deeply Integrated with AWS: AWS container services are deeply integrated with AWS by design. This allows your container applications to leverage the breadth and depth
of the AWS cloud from networking, security, to monitoring. AWS combines the agility of containers with the elasticity and security of the cloud.
How to Achieve Containerization on AWS
AWS provides the following popular services for achieving containerization:
Amazon Elastic Container Service: Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo,
Samsung, GE, and Cookpad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability.
ECS is a great choice to run containers for several reasons. First, you can choose to run your ECS clusters using AWS Fargate, which is serverless compute for
containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application
isolation by design. Second, ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com’s
recommendation engine, ensuring ECS is tested extensively for security, reliability, and availability.
Additionally, because ECS has been a foundational pillar for key Amazon services, it can natively integrate with other services such as Amazon Route 53, Secrets
Manager, AWS Identity and Access Management (IAM), and Amazon CloudWatch providing you a familiar experience to deploy and scale your containers. ECS is also
able to quickly integrate with other AWS services to bring new capabilities to ECS. For example, ECS allows your applications the flexibility to use a mix of Amazon EC2
and AWS Fargate with Spot and On-Demand pricing options. ECS also integrates with AWS App Mesh, which is a service mesh, to bring rich observability, traffic controls
and security features to your applications. ECS has grown rapidly since launch and is currently launching 5X more containers every hour than EC2 launches instances.
Basic ECS Concepts
ECS Cluster: One or more servers (i.e. EC2 instances) that ECS can use for deploying Docker containers.
ECS Task: One or more Docker containers that should be run as a group on a single instance.
ECS Task Definition (aka ECS Container Definition): A JSON file that defines an ECS Task, including the container(s) to run,
the resources (memory, CPU) those containers need, the volumes to mount, the environment variables to set, and so on.
Task Definition Revision: Task Definitions are immutable. Once you create a Task Definition, you can never change it: you can only
create new Task Definitions, which are known as revisions. The most common revision is to change which version of a Docker
image to deploy.
ECS Service: A way to deploy and manage long-running ECS Tasks, such as a web service. The service can deploy your
Tasks across one or more instances in the ECS Cluster, restart any failed Tasks, and route traffic across your Tasks using an
optional Elastic Load Balancer.
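To make these concepts concrete, here is a minimal sketch of a Task Definition in JSON. The family, container name, and image are hypothetical, and real Task Definitions support many more fields:

```json
{
  "family": "example-td",
  "containerDefinitions": [
    {
      "name": "example-cont",
      "image": "nginx:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ],
      "environment": [
        { "name": "APP_ENV", "value": "dev" }
      ]
    }
  ]
}
```

Registering a new revision of this file (for example, with a new image tag) is how a new version of the application gets deployed.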
Hands on Workshop - Containerize Microservice with AWS
In this workshop, we will create a Python-based microservice and deploy it to Amazon ECS behind an
Application Load Balancer with dynamic port mapping. We will create three images of the microservice,
which will run behind the Application Load Balancer on ECS.
Step2: Create Dockerfile
Filename: Dockerfile
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5001
ENTRYPOINT ["python","./index.py"]
The FROM directive tells Docker which base image to pull from Docker Hub.
The COPY directive copies the application into the container image.
WORKDIR sets the working directory.
The RUN directive calls pip (PyPI) to install the dependencies listed in requirements.txt.
The EXPOSE directive documents the port used by Flask.
The ENTRYPOINT instruction executes the actual application script.
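The Dockerfile above expects an index.py Flask application listening on port 5001. The workshop's actual file is not shown here; a minimal sketch (the route and response text are assumptions) could look like:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Hypothetical response identifying which service replied
    return "Hello from service-1!"

if __name__ == "__main__":
    # Bind to all interfaces on the port EXPOSEd in the Dockerfile
    app.run(host="0.0.0.0", port=5001)
```

The requirements.txt shown next supplies the flask dependency that pip installs during the image build.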
Step2 contd.: Docker Dependencies
Filename: requirements.txt
flask
Step3: Build & Push Docker Images to ECR
docker --version
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-repo:service-2 .
docker tag microservices-repo:service-2 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-2
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-2
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-repo:service-3 .
docker tag microservices-repo:service-3 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-3
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-repo:service-3
Step4: Create ECS Cluster
Go to the Amazon ECS dashboard => Cluster =>Create Cluster and consider following
configurations while cluster creation:
Select Cluster template as “EC2 Linux + Networking”
Provide cluster name as ‘microservices-cluster’
choose EC2 instance type as ‘t3.small’
Number of instances ‘2’
Leave everything else as default. This will create a cluster with 2 EC2 instances in
a new VPC, and a new security group whose name
starts with 'EC2ContainerService…'.
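The console wizard can also be approximated with the AWS SDK. Note that the console's "EC2 Linux + Networking" template additionally provisions the EC2 instances, VPC, and security group through CloudFormation, which a bare create_cluster call does not; this boto3 sketch only creates the cluster itself:

```python
def cluster_params():
    # Cluster name from this workshop; instance capacity comes from the
    # console template, not from this API call
    return {"clusterName": "microservices-cluster"}

if __name__ == "__main__":
    import boto3  # assumed installed and configured with AWS credentials
    ecs = boto3.client("ecs", region_name="us-east-1")
    cluster = ecs.create_cluster(**cluster_params())
    print(cluster["cluster"]["clusterArn"])
```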
Step5: Create Task Definition & add container information
From the ECS dashboard go to Task Definition => Create New Task Definition with
launch type as EC2
Name: service-1-td, and click on Add Container (refer snapshot below)
Now use following configurations to add container:
Container Name: service-1-cont
image: image URI of service-1 tag from ECR
Memory Limit: Hard Limit — 256
Port mapping: 80 => 5001
then click on Add
Then click 'Create' to create the Task Definition.
Similarly, create 2 more Task Definitions for service-2 and service-3.
The only difference will be in the container port mapping. Here, to allow dynamic port
mapping via the ALB, we will map host port 0 to container ports 5002 and 5003. Please refer to the snapshot below:
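In the Task Definition JSON, a dynamic host port is expressed by setting hostPort to 0 (or omitting it), so the container instance assigns an ephemeral port and the ALB target group registers whichever port was assigned. A fragment for service-2 might look like:

```json
"portMappings": [
  { "containerPort": 5002, "hostPort": 0, "protocol": "tcp" }
]
```

This is what lets multiple copies of the same task run on one instance without port conflicts.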
Step6: Create Service to run the Task definition
Open the cluster and click on ‘Create’ under Services tab
Consider following service configurations:
Launch type: EC2
Task Definition: service-1-td
service name: service-1
No. of tasks: 1
Choose Task Placement as 'BinPack' and click 'Next Step'. Under Load balancing we need to use an
Application Load Balancer, for which we need to create an ALB first.
Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources
required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel
providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by
design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate.
In this workshop demo we will show how you can utilize AWS Fargate to run containers with Amazon ECS.
AWS Fargate Benefits
Deploy and manage applications, not infrastructure:
With Fargate, you can focus on building and operating your applications, whether you run them on ECS or EKS. You only interact with and pay for
your containers, and you avoid the operational overhead of scaling, patching, securing, and managing servers. Fargate ensures that the infrastructure
your containers run on is always up-to-date with the required patches.
Fargate launches and scales the compute to closely match the resource requirements you specify for the container. With Fargate, there is no
over-provisioning and paying for additional servers. You can also get Spot and Compute Savings Plan pricing options with Fargate just like with Amazon
EC2 instances. Compared to On-Demand prices, Fargate Spot provides up to 70% discount for interrupt-tolerant applications, and Compute Savings Plan
offers up to 50% discount on committed spend for persistent workloads.
Individual ECS tasks or EKS pods each run in their own dedicated kernel runtime environment and do not share CPU, memory, storage, or network
resources with other tasks and pods. This ensures workload isolation and improved security for each task or pod.
With Fargate, you get out-of-box observability through built-in integrations with other AWS services including Amazon CloudWatch Container Insights.
Fargate allows you to gather metrics and logs for monitoring your applications through an extensive selection of third party tools with open interfaces.
Hands on Workshop - Use Fargate to Run Containerized App
In this workshop, we will create a Node.js-based microservice and deploy it to Amazon ECS on Fargate,
behind an Application Load Balancer. We will create one image of the microservice, which will run behind an
Application Load Balancer on ECS.
Step 1: Create Microservice
Filename: server.js
const app = require('express')();
const PORT = 8080;
app.get('/', (_req, res) => res.send('Hello from fargate-service'));
app.listen(PORT, () => console.log(`Running on port :${PORT}`));
Step 2: Create Dockerfile
Filename: Dockerfile
FROM mhart/alpine-node:12
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
The FROM directive tells Docker which base image to pull from Docker Hub.
The COPY directive copies the application into the container image.
WORKDIR sets the working directory.
The RUN directive calls npm to install the dependencies listed in package.json.
The EXPOSE directive documents the port used by Express.js.
The CMD instruction starts the actual application script.
Step2 contd.: Docker Dependencies
Filename: package.json
npm install
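The contents of package.json are not shown in the slide; a minimal sketch consistent with the Dockerfile (the Express version is an assumption) might be:

```json
{
  "name": "fargate-service",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```

Running npm install against this file is what the Dockerfile's RUN step performs inside the image.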
Step3: Build & Push Docker Image to ECR
docker --version
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com
docker build -t microservices-fargate-repo:fargate-service .
docker tag microservices-fargate-repo:fargate-service 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-fargate-repo:fargate-service
docker push 16xxxxxx.dkr.ecr.us-east-1.amazonaws.com/microservices-fargate-repo:fargate-service
Step4: Create ECS Cluster
Go to the Amazon ECS dashboard => Cluster =>Create Cluster and consider following configurations
while cluster creation:
Select Cluster template as “Networking only”
Provide cluster name as ‘microservices-fargate-cluster’
Under Networking, check 'Create VPC'
Note: Keep a note of the created resources, especially the VPC ID; it will be useful in the next steps.
Step5: Create Task Definition & add container information
From the ECS dashboard go to Task Definition => Create New Task Definition with launch type as Fargate
Name: fargate-service-td
Set Task Size: Task Memory: 0.5 GB and Task CPU: 0.25 vCPU
click on Add Container (refer snapshot below)
Now use following configurations to add container:
Container Name: fargate-service-cont
image: image URI of service-4 tag from ECR
Memory Limit: Soft Limit — 512
Port mapping: 8080 (TCP)
then click on Add
Then click 'Create' to create the Task Definition.
Step6: Create Service to run the Task definition
Open the cluster and click on ‘Create’ under Services tab
Consider following service configurations:
Launch type: FARGATE
Task Definition: fargate-service-td
service name: fargate-service
No. of tasks: 1
Click on 'Next Step'. (Task Placement strategies do not apply to the Fargate launch type, since Fargate manages placement for you.)
In the Security Group, allow port 8080 to be accessed from anywhere (later, this should be restricted to traffic from the load balancer's security group only).
Under Load balancing we need to use an Application Load Balancer, for which we need to create an ALB first (if not
done already) or reuse a previously created load balancer.
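The service creation above can also be scripted. A hedged boto3 sketch, where the subnet and security-group IDs are placeholders you must replace with the ones created in Step4:

```python
def service_params(subnet_id, sg_id):
    # Names from this workshop; network IDs are placeholders
    return {
        "cluster": "microservices-fargate-cluster",
        "serviceName": "fargate-service",
        "taskDefinition": "fargate-service-td",
        "desiredCount": 1,
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": [subnet_id],
                "securityGroups": [sg_id],
                "assignPublicIp": "ENABLED",
            }
        },
    }

if __name__ == "__main__":
    import boto3  # assumed installed and configured with AWS credentials
    ecs = boto3.client("ecs", region_name="us-east-1")
    ecs.create_service(**service_params("subnet-xxxx", "sg-xxxx"))
```

Fargate tasks use awsvpc networking, which is why the service must be told explicitly which subnets and security groups its tasks run in.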