BELAGAVI, KARNATAKA
B. V. V. SANGHA’S
BILURU GURUBASAVA MAHASWAMIJI INSTITUTE OF
TECHNOLOGY, MUDHOL - 587313
DEPARTMENT OF
ELECTRONICS AND COMMUNICATIONS ENGINEERING
Technical Seminar on
“DevOps on AWS”
Submitted by :
Mr. Prem Koppal
[2LB18EC017]
CERTIFICATE
This is to certify that the technical seminar entitled “DevOps on AWS” is a
bona fide work carried out by Mr. Prem Koppal in partial fulfilment of the requirements for the
award of the degree of Bachelor of Engineering in the Department of Electronics and
Communication Engineering, Biluru Gurubasava Mahaswamiji Institute of Technology, Mudhol,
affiliated to Visvesvaraya Technological University, Belagavi, Karnataka, during the
academic year 2021-2022. It is verified that all corrections/suggestions indicated
for internal assessment have been incorporated in the report deposited in the
department library. The seminar report has been approved as it satisfies the
academic requirements in respect of the technical seminar prescribed for the Bachelor of
Engineering degree.
Submitted by
Mr. Prem Koppal
[2LB18EC017]
DECLARATION
I hereby declare that the dissertation entitled “DevOps on AWS” has been carried out at the
Department of Electronics and Communication Engineering, Biluru Gurubasava Mahaswamiji
Institute of Technology, Mudhol, and submitted in partial fulfilment of the requirements for the
award of the degree of Bachelor of Engineering in Electronics and Communication Engineering by
Biluru Gurubasava Mahaswamiji Institute of Technology, Mudhol, during the academic year 2021-
22. Further, the matter embodied in this dissertation has not been submitted by anybody for the
award of any degree or diploma to any other university.
ABSTRACT
Nowadays the world is moving towards automation; whether a task is small or big,
automation is preferred. With the power of Docker Engine, we create a customized Docker image
from the official CentOS Docker image. This image consists of the required software and Python
libraries. The generated deep learning model is loaded into backend Python code. In the front end,
a form is displayed to the user. As the user enters the required values, they are carried over to the
backend server. The backend code receives the data entered by the user through a GET request;
these values act as input to the pre-trained model, which gives the predicted output. This result is
sent back to the user. The backend process is run by the Flask framework of Python, which
provides an inbuilt server.
As innovation accelerates and customer needs rapidly evolve, businesses must become
increasingly agile. Time to market is key, and to facilitate overall business goals, IT departments
need to be agile. Over the years, software development lifecycles have moved from waterfall to
agile models of development. These improvements are now moving downstream toward IT
operations with the evolution of DevOps.
In order to meet the demands of an agile business, IT operations need to deploy applications
in a consistent, repeatable, and reliable manner. This can only be fully achieved with the
adoption of automation. Amazon Web Services (AWS) supports numerous DevOps principles
and practices that IT departments can capitalize on to improve business agility. This paper
focuses on DevOps principles and practices supported on the AWS platform. A brief
introduction to the origins of DevOps sets the scene and explains how and why DevOps has
evolved.
INDEX
CHAPTER 1: INTRODUCTION
CHAPTER 2: CONTINUOUS INTEGRATION
CHAPTER 3: CONTINUOUS DELIVERY
CHAPTER 4: DEPLOYMENT STRATEGIES
CHAPTER 5: INFRASTRUCTURE AS CODE
CHAPTER 6: AUTOMATION
CHAPTER 7: SECURITY
CHAPTER 8: ADVANTAGES AND DISADVANTAGES
CHAPTER 9: CONCLUSION
REFERENCES
CHAPTER 1
INTRODUCTION
Historically many organizations have been vertically structured with poor integration among
development, infrastructure, security and support teams. Frequently the groups report into
different organizational structures with different corporate goals and philosophies.
Deploying software has predominantly been the role of the IT operations group.
Fundamentally, developers like to build software and change things quickly, whereas IT
operations focus on stability and reliability. This mismatch of goals can lead to conflict, and
ultimately the business may suffer.
Today, these old divisions are breaking down, with the IT and developer roles merging and
following a series of systematic principles:
Infrastructure as code
Continuous deployment
Automation
Monitoring
Security
An examination of each of these principles reveals a close connection to the offerings
available from Amazon Web Services.
CHAPTER 2
CONTINUOUS INTEGRATION
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts
private Git repositories. CodeCommit eliminates the need for you to operate your own source
control system; there is no hardware to provision and scale and no software to install, configure,
and operate. You can use CodeCommit to store anything from code to binaries, and it supports
the standard functionality of Git, allowing it to work seamlessly with your existing Git-based
tools. Your team can also use CodeCommit’s online code tools to browse, edit, and collaborate
on projects. AWS CodeCommit has several benefits:
Collaboration - AWS CodeCommit is designed for collaborative software development. You
can easily commit, branch, and merge your code, enabling you to maintain control of your
team’s projects. CodeCommit also supports pull requests, which provide a mechanism to
request code reviews and discuss code with collaborators.
Encryption - You can transfer your files to and from AWS CodeCommit using HTTPS or SSH,
as you prefer. Your repositories are also automatically encrypted at rest through AWS Key
Management Service (AWS KMS) using customer-specific keys.
Access Control - AWS CodeCommit uses AWS Identity and Access Management (IAM) to
control and monitor who can access your data as well as how, when, and where they can access
it. CodeCommit also helps you monitor your repositories through AWS CloudTrail and Amazon
CloudWatch.
High Availability and Durability - AWS CodeCommit stores your repositories in Amazon
Simple Storage Service (Amazon S3) and Amazon DynamoDB. Your encrypted data is
redundantly stored across multiple facilities. This architecture increases the availability and
durability of your repository data.
Notifications and Custom Scripts - You can receive notifications for events impacting
your repositories. Notifications come in the form of Amazon Simple Notification Service
(Amazon SNS) notifications. Each notification includes a status message as well as a link to
the resources whose event generated that notification. Additionally, using AWS CodeCommit
repository triggers, you can send notifications and create HTTP webhooks with Amazon SNS or
invoke AWS Lambda functions in response to the repository events you choose.
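As a sketch of how such a trigger can be wired up, the following CloudFormation fragment
defines a CodeCommit repository whose branch pushes publish to an SNS topic. The repository
and topic names are illustrative, not taken from any real setup:

```yaml
Resources:
  SeminarRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: devops-demo-repo          # illustrative repository name
      Triggers:
        - Name: notify-on-push
          DestinationArn: !Ref RepoEventsTopic  # publish to the SNS topic below
          Events:
            - updateReference                   # fires when a branch is pushed to
  RepoEventsTopic:
    Type: AWS::SNS::Topic                       # subscribers (email, Lambda) are added separately
```

Subscribing an email endpoint or a Lambda function to the topic then completes the notification path described above.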
AWS CodeBuild is a fully managed continuous integration service that compiles source code,
runs tests, and produces software packages that are ready to deploy. You don’t need to provision,
manage, and scale your own build servers. CodeBuild can use GitHub, GitHub
Enterprise, Bitbucket, AWS CodeCommit, or Amazon S3 as a source provider.
CodeBuild scales continuously and can process multiple builds concurrently. CodeBuild
offers preconfigured build environments for various versions of Windows and Linux.
Customers can also bring their own customized build environments as Docker containers. CodeBuild
also integrates with open source tools such as Jenkins and Spinnaker.
CodeBuild can also create reports for unit, functional, or integration tests. These reports provide
a visual view of how many test cases were executed and how many passed or failed. The build
process can also be executed inside an Amazon Virtual Private Cloud (Amazon VPC), which can
be helpful if your integration services or databases are deployed inside a VPC.
With AWS CodeBuild, your build artifacts are encrypted with customer-specific keys that are
managed by AWS KMS. CodeBuild is integrated with IAM, so you can assign user-specific
permissions to your build projects.
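To illustrate, a minimal buildspec.yml of the kind CodeBuild reads from the repository root
might look like the following. The Python runtime version, test command, and report group
name are assumptions made for this sketch, not prescribed values:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: "3.9"            # assumed runtime for this sketch
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest --junitxml=reports/unit.xml   # emit a JUnit-style report
reports:
  unit-tests:                  # CodeBuild surfaces this as a test report
    files:
      - reports/unit.xml
artifacts:
  files:
    - '**/*'                   # package the workspace as the build output
```

The reports section is what feeds the visual pass/fail view mentioned above.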
CHAPTER 3
CONTINUOUS DELIVERY
Continuous delivery is a software development practice where code changes are automatically
prepared for release to production. A pillar of modern application development, continuous
delivery expands upon continuous integration by deploying all code changes to a testing
environment and/or a production environment after the build stage. When properly implemented,
developers always have a deployment-ready build artifact that has passed through a
standardized test process.
Continuous delivery lets developers automate testing beyond just unit tests so they can verify
application updates across multiple dimensions before deploying to customers. These tests may
include UI testing, load testing, integration testing, API reliability testing, etc. This helps
developers more thoroughly validate updates and pre-emptively discover issues. With the cloud,
it is easy and cost-effective to automate the creation and replication of multiple environments for
testing, which was previously difficult to do on-premises.
AWS supports continuous delivery with services such as:
AWS CodeBuild
AWS CodeDeploy
AWS CodePipeline
AWS CodeDeploy is a fully managed deployment service that automates software deployments
to a variety of compute services such as Amazon Elastic Compute Cloud (Amazon EC2), AWS
Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for
you to rapidly release new features, helps you avoid downtime during application deployment,
and handles the complexity of updating your applications. You can use CodeDeploy to automate
software deployments, eliminating the need for error-prone manual operations. The service
scales to match your deployment needs.
CodeDeploy has several benefits that align with the DevOps principle of continuous deployment:
Automated deployments: CodeDeploy fully automates software deployments, allowing you to
deploy reliably and rapidly.
Centralized control: CodeDeploy enables you to easily launch and track the status of your
application deployments through the AWS Management Console or the AWS CLI. CodeDeploy
gives you a detailed report enabling you to view when and to where each application revision
was deployed. You can also create push notifications to receive live updates about your
deployments.
Minimize downtime: CodeDeploy helps maximize your application availability during the
software deployment process. It introduces changes incrementally and tracks application health
according to configurable rules. Software deployments can easily be stopped and rolled back if
there are errors.
Easy to adopt: CodeDeploy works with any application and provides the same experience
across different platforms and languages. You can easily reuse your existing setup code.
CodeDeploy can also integrate with your existing software release process or continuous delivery
toolchain (e.g., AWS CodePipeline, GitHub, Jenkins). AWS CodeDeploy supports multiple
deployment options. For more information, see Deployment Strategies.
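For an EC2/on-premises deployment, CodeDeploy reads an appspec.yml packaged with the
application revision. A minimal sketch is shown below; the source paths and lifecycle script
names are hypothetical placeholders:

```yaml
version: 0.0
os: linux
files:
  - source: /app                  # files from the revision bundle...
    destination: /var/www/app     # ...copied to this path on each instance
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh    # hypothetical lifecycle scripts
      timeout: 60
  AfterInstall:
    - location: scripts/configure.sh
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```

The hooks map deployment lifecycle events to scripts, which is how CodeDeploy introduces changes and checks health at each step.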
CHAPTER 4
DEPLOYMENT STRATEGIES
Deployment strategies define how you want to deliver your software. Organizations follow
different deployment strategies based on their business model. Some may choose to deliver only
software that is fully tested, while others may want their users to provide feedback and evaluate
features that are still under development (e.g., beta releases). The following section discusses
various deployment strategies.
All-at-once deployment means all traffic is shifted from the original environment to the
replacement environment at once.
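In CodeDeploy, the strategy is selected through a deployment configuration. As a sketch, a
CloudFormation deployment group could pin the built-in all-at-once configuration; the
application and role resources referenced here are hypothetical:

```yaml
Resources:
  DemoDeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref DemoApplication          # hypothetical CodeDeploy application
      ServiceRoleArn: !GetAtt DeployServiceRole.Arn  # hypothetical service role
      DeploymentConfigName: CodeDeployDefault.AllAtOnce  # shift all traffic at once
```

Swapping the DeploymentConfigName (for example to CodeDeployDefault.OneAtATime) changes the rollout behavior without touching the application itself.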
CHAPTER 5
INFRASTRUCTURE AS CODE
A fundamental principle of DevOps is to treat infrastructure the same way developers treat code.
Application code has a defined format and syntax. If the code is not written according to the
rules of the programming language, applications cannot be created. Code is stored in a version
management or source control system that logs a history of code development, changes, and bug
fixes. When code is compiled or built into applications, we expect a consistent application to be
created, and the build is repeatable and reliable.
Practicing infrastructure as code means applying the same rigor of application code development
to infrastructure provisioning. All configurations should be defined in a declarative way and
stored in a source control system such as AWS Code-Commit, the same as application code.
Infrastructure provisioning, orchestration, and deployment should also support the use of the
infrastructure as code.
Infrastructure was traditionally provisioned using a combination of scripts and manual processes.
Sometimes these scripts were stored in version control systems or documented step by step in
text files or runbooks. Often the person writing the runbooks is not the same person executing
the scripts or following the runbooks. If these scripts or runbooks are not updated
frequently, they can become a show-stopper in deployments. As a result, the creation of new
environments is not always repeatable, reliable, or consistent.
In contrast to the above, AWS provides a DevOps-focused way of creating and maintaining
infrastructure. Similar to the way software developers write application code, AWS provides
services that enable the creation, deployment and maintenance of infrastructure in a
programmatic, descriptive, and declarative way. These services provide rigor, clarity, and
reliability. The AWS services discussed in this paper are core to a DevOps methodology and
form the underpinnings of numerous higher-level AWS DevOps principles and practices.
AWS CloudFormation
AWS Cloud Development Kit (AWS CDK)
AWS Cloud Development Kit for Kubernetes
AWS CloudFormation
AWS CloudFormation is a service that enables developers to create AWS resources in an orderly
and predictable fashion. Resources are written in text files using JavaScript Object Notation
(JSON) or YAML format. The templates require a specific
syntax and structure that depends on the types of resources being created and managed. You
author your resources in JSON or YAML with any code editor such as AWS Cloud9, check them
into a version control system, and then CloudFormation builds the specified services in a safe,
repeatable manner.
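As a concrete sketch, a complete (if tiny) template that creates a single versioned S3 bucket
looks like the following; the logical name ArtifactBucket is illustrative:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack with one versioned S3 bucket
Resources:
  ArtifactBucket:                      # illustrative logical name
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled                # keep a history of object versions
Outputs:
  BucketName:
    Value: !Ref ArtifactBucket         # surfaced so other tooling can find it
```

Because the template is plain text, it can be stored in CodeCommit and reviewed like any other code change.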
A CloudFormation template is deployed into the AWS environment as a stack. You can manage
stacks through the AWS Management Console, AWS Command Line Interface, or AWS
CloudFormation APIs. If you need to make changes to the running resources in a stack you
update the stack. Before making changes to your resources, you can generate a change set, which
is a summary of your proposed changes. Change sets enable you to see how your changes might
impact your running resources, especially for critical resources, before implementing them.
Figure 2 - AWS CloudFormation creating an entire environment (stack) from one template
You can use a single template to create and update an entire environment or separate templates to
manage multiple layers within an environment. This enables templates to be modularized, and
also provides a layer of governance that is important to many organizations.
When you create or update a stack in the console, events are displayed showing the status of the
configuration. If an error occurs, by default the stack is rolled back to its previous state. Amazon
Simple Notification Service (Amazon SNS) provides notifications on events. For example, you
can use Amazon SNS to track stack creation and deletion progress via email and integrate with
other processes programmatically.
AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources and
lets you describe any dependencies or pass in special parameters when the stack is configured.
With CloudFormation templates, you can work with a broad set of AWS services, such as
Amazon S3, Auto Scaling, Amazon CloudFront, Amazon DynamoDB, Amazon EC2, Amazon
ElastiCache, AWS Elastic Beanstalk, Elastic Load Balancing, IAM, AWS OpsWorks, and
Amazon VPC. For the most recent list of supported resources, see AWS resource and property
types reference.
CHAPTER 6
AUTOMATION
Another core philosophy and practice of DevOps is automation. Automation focuses on the
setup, configuration, deployment, and support of infrastructure and the applications that run on
it. By using automation, you can set up environments more rapidly in a standardized and
repeatable manner. The removal of manual processes is a key to a successful DevOps strategy.
Historically, server configuration and application deployment have been predominantly manual
processes. Environments become nonstandard, and reproducing an environment when issues arise
is difficult. The use of automation is critical to realizing the full benefits of the cloud. Internally,
AWS relies heavily on automation to provide the core features of elasticity and scalability.
Manual processes are error prone, unreliable, and inadequate to support an agile business.
Frequently an organization may tie up highly skilled resources to provide manual configuration,
when time could be better spent supporting other, more critical, and higher value activities within
the business. Modern operating environments commonly rely on full automation to eliminate
manual intervention or access to production environments. This includes all software releasing,
machine configuration, operating system patching, troubleshooting, or bug fixing. Many levels
of automation practices can be used together to provide a higher level end-to-end automated
process.
Automation provides the following benefits:
Rapid changes
Improved productivity
Repeatable configurations
Reproducible environments
Leveraged elasticity
Leveraged auto scaling
Automated testing
Automation is a cornerstone of AWS services and is internally supported in all services,
features, and offerings.
CHAPTER 7
SECURITY
Whether you are going through a DevOps transformation or implementing DevOps principles
for the first time, you should treat security as an integral part of your DevOps processes, as a
cross-cutting concern across your build, test, and deployment stages. Before discussing security
in DevOps on AWS, let’s look at the AWS Shared Responsibility Model.
This shared model can help relieve the customer’s operational burden, as AWS operates, manages,
and controls the components from the host operating system and virtualization layer down to the
physical security of the facilities in which the service operates. This is critical in cases where
customers want to understand the security of their build environments.
For DevOps we want to assign permissions based on the least-privilege permissions model. This
model states that a user (or service) should be granted the minimal set of permissions required
to get the job done. Permissions are maintained in IAM. IAM is a web service that helps
you securely control access to AWS resources. You can use IAM to control who is authenticated
(signed in) and authorized (has permissions) to use resources.
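As a sketch of least privilege in practice, the following CloudFormation fragment defines a
managed policy that allows only Git pull and push against a single CodeCommit repository;
the repository name is hypothetical:

```yaml
Resources:
  DeveloperRepoPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - codecommit:GitPull    # clone and fetch only...
              - codecommit:GitPush    # ...plus push, to this one repository
            Resource: !Sub "arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:devops-demo-repo"
```

A developer attached to this policy can work with that repository but cannot create, delete, or read any other CodeCommit resource.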
CHAPTER 8
ADVANTAGES AND DISADVANTAGES
8.1 ADVANTAGES
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
8.2 DISADVANTAGES
Amazon Web Services may share some common cloud computing issues when you move
to the cloud, for example downtime, limited control, and backup protection.
AWS sets default limits on resources, which vary from region to region. These resources
include images, volumes, and snapshots. You can launch only a limited number of
instances per region.
CHAPTER 9
CONCLUSION
In order to make the journey to the cloud smooth, efficient, and effective, technology
companies should embrace DevOps principles and practices. These principles are embedded in
the AWS platform and form the cornerstone of numerous AWS services, especially those in the
deployment and monitoring offerings.
Begin by defining your infrastructure as code using AWS CloudFormation or the AWS
Cloud Development Kit (CDK). Next, define the way your applications will use
continuous deployment with the help of services like AWS CodeBuild, AWS CodeDeploy,
AWS CodePipeline, and AWS CodeCommit. At the application level, use services like AWS
Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes
Service (Amazon EKS), and AWS OpsWorks to simplify the configuration of common
architectures. Using these services also makes it easy to include other important services like
Auto Scaling and Elastic Load Balancing.
Finally, use monitoring services such as Amazon CloudWatch and solid security practices such
as AWS IAM. With AWS as your partner, your DevOps principles will bring agility to your
business and IT organization and accelerate your journey to the cloud.
REFERENCES
https://aws.amazon.com/devops/what-is-devops/
https://aws.amazon.com/about-aws/
https://en.wikipedia.org/wiki/Amazon_Web_Services