
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

BELAGAVI, KARNATAKA

B. V. V. SANGHA’S
BILURU GURUBASAVA MAHASWAMIJI INSTITUTE OF
TECHNOLOGY, MUDHOL - 587313
DEPARTMENT OF
ELECTRONICS AND COMMUNICATION ENGINEERING
Technical Seminar on

“DevOps on AWS”

Bachelor of Engineering in ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by:
Mr. Prem Koppal
[2LB18EC017]

Under the Guidance of


Prof. Jyoti M Katagi
Assistant Professor
Department of Electronics and Communication Engineering
Biluru Gurubasava Mahaswamiji Institute of Technology
Mudhol-587313
2021-22
B. V. V. SANGHA’S
BILURU GURUBASAVA MAHASWAMIJI INSTITUTE
OF TECHNOLOGY, MUDHOL-587313

DEPARTMENT OF
ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE
This is to certify that the technical seminar entitled “DevOps on AWS” is a
bonafide work carried out by Mr. Prem Koppal in partial fulfilment of the
requirements for the award of Bachelor of Engineering in the Department of
Electronics and Communication Engineering, Biluru Gurubasava Mahaswamiji
Institute of Technology, Mudhol, affiliated to Visvesvaraya Technological
University, Belagavi, Karnataka, during the academic year 2021-2022. It is
verified that all corrections/suggestions indicated for internal assessment have
been incorporated in the report deposited in the department library. The seminar
report has been approved as it satisfies the academic requirements in respect of
the technical seminar prescribed for the Bachelor of Engineering degree.

Prof. Jyoti Katagi Prof. Sneha B. Kotin Dr. S. B. Kerur


Guide HOD Principal

Submitted by
Mr. Prem Koppal
[2LB18EC017]

Name of Examiners Signature with Date


1.

2.
DECLARATION

I, the student of 8th semester Bachelor of Engineering in the Department of Electronics and
Communication Engineering, Biluru Gurubasava Mahaswamiji Institute of Technology, Mudhol,
affiliated to Visvesvaraya Technological University, Belagavi, Karnataka, hereby declare that the
technical seminar entitled “DevOps on AWS” has been carried out at the Department of
Electronics and Communication Engineering, Biluru Gurubasava Mahaswamiji Institute of
Technology, Mudhol, and submitted in partial fulfilment of the requirements for the award of the
degree of Bachelor of Engineering during the academic year 2021-22. Further, the matter
embodied in this report has not been submitted by anybody for the award of any degree or
diploma to any other university.

Place: Mudhol

Submitted by
Mr. Prem Koppal
ACKNOWLEDGMENT
“SUCCESS IS 1% INSPIRATION AND 99% PERSPIRATION”, and it is this 1% of
inspiration that enabled me to put in all the hard work needed to complete this seminar. I consider
it a great privilege to place on record my profound sense of gratitude to all those who helped me
during the course of the seminar.
First and foremost, I would like to thank the Almighty and my parents for their continuous
encouragement and support in successfully completing this work.
I am highly indebted to my institution, Biluru Gurubasava Mahaswamiji Institute of
Technology, Mudhol. This work would not have been possible without the kind support and help
of many individuals, and I would like to extend my sincere thanks to all of them.
I wish to thank our Principal, Dr. S. B. Kerur, for constantly inspiring us in all
endeavours and for his valuable suggestions during the course of this work.
I wish to thank our HOD, Prof. Sneha B. Kotin, for providing all the support and facilities to
make this a huge success.
I grab this opportunity to express my gratitude to my guide and coordinator, Prof.
Jyoti M. Katagi, for her kind support, valuable suggestions, and co-operation during this period.
Her constant motivation enabled me to widen the horizon of the seminar.
Without the cheerful support of the teaching and non-teaching staff of our department, whose
names have not been mentioned, this work would not have seen the light of day. I would also like
to express warm gratitude to all my friends who contributed directly or indirectly to making this
seminar a success.

Mr. Prem Koppal



ABSTRACT

Nowadays the world is moving towards automation; for any task, small or big,
automation is preferred. With the power of Docker Engine, we create a customized Docker image
from the official CentOS Docker image. This image consists of the required software and Python
libraries. The generated deep learning model is loaded into backend Python code. In the front end,
a form is displayed to the user. As the user enters the required values, they are carried over to the
backend server. The backend code receives the data entered by the user through a GET request;
these values act as input to the pre-trained model, which gives the predicted output. This result is
sent back to the user. The backend process is run by Python's Flask framework, which provides an
inbuilt server.
As innovation accelerates and customer needs rapidly evolve, businesses must become
increasingly agile. Time to market is key, and to facilitate overall business goals, IT departments
need to be agile. Over the years software development lifecycles moved from waterfall to agile
models of development. These improvements are moving downstream toward IT operations with
the evolution of DevOps.
In order to meet the demands of an agile business, IT operations need to deploy applications
in a consistent, repeatable, and reliable manner. This can only be fully achieved with the
adoption of automation. Amazon Web Services (AWS) supports numerous DevOps principles
and practices that IT departments can capitalize on to improve business agility. This paper
focuses on DevOps principles and practices supported on the AWS platform. A brief
introduction to the origins of DevOps sets the scene and explains how and why DevOps has
evolved.


INDEX

CHAPTER 1: INTRODUCTION 3

CHAPTER 2: CONTINUOUS INTEGRATION 4

CHAPTER 3: CONTINUOUS DELIVERY 6

CHAPTER 4: DEPLOYMENT STRATEGIES 8

CHAPTER 5: INFRASTRUCTURE AS CODE 10

CHAPTER 6: AUTOMATION 13

CHAPTER 7: SECURITY 14

CHAPTER 8: ADVANTAGES & DISADVANTAGES 16

CHAPTER 9: CONCLUSION 17

REFERENCES


CHAPTER 1
INTRODUCTION

DevOps is a new term that primarily focuses on improved collaboration, communication,


and integration between software developers and IT operations. It’s an umbrella term that
some describe as a philosophy, cultural change, and paradigm shift.

Figure 1: Developer throwing code "over the wall"

Historically many organizations have been vertically structured with poor integration among
development, infrastructure, security and support teams. Frequently the groups report into
different organizational structures with different corporate goals and philosophies.
Deploying software has predominantly been the role of the IT operations group.
Fundamentally developers like to build software and change things quickly, whereas IT
operations focus on stability and reliability. This mismatch of goals can lead to conflict, and
ultimately the business may suffer.
Today, these old divisions are breaking down, with the IT and developer roles merging and
following a series of systematic principles:
• Infrastructure as code
• Continuous deployment
• Automation
• Monitoring
• Security
An examination of each of these principles reveals a close connection to the offerings
available from Amazon Web Services.


CHAPTER 2
CONTINUOUS INTEGRATION

Continuous Integration (CI) is a software development practice where developers regularly


merge their code changes into a central code repository, after which automated builds and tests
are run. CI helps find and address bugs more quickly, improve software quality, and reduce the
time it takes to validate and release new software updates.

AWS offers the following three services for continuous integration:

2.1 AWS CODE-COMMIT

AWS Code-Commit is a secure, highly scalable, managed source control service that hosts
private Git repositories. Code-Commit eliminates the need for you to operate your own source
control system and there is no hardware to provision and scale or software to install, configure,
and operate. You can use Code-Commit to store anything from code to binaries, and it supports
the standard functionality of Git, allowing it to work seamlessly with your existing Git-based
tools. Your team can also use Code-Commit’s online code tools to browse, edit, and collaborate
on projects. AWS Code-Commit has several benefits:

Collaboration - AWS Code-Commit is designed for collaborative software development. You
can easily commit, branch, and merge your code, helping you maintain control of your
team's projects. Code-Commit also supports pull requests, which provide a mechanism to
request code reviews and discuss code with collaborators.

Encryption - You can transfer your files to and from AWS Code-Commit using HTTPS or SSH,
as you prefer. Your repositories are also automatically encrypted at rest through AWS Key
Management Service (AWS KMS) using customer-specific keys.

Access Control - AWS Code-Commit uses AWS Identity and Access Management (IAM) to
control and monitor who can access your data as well as how, when, and where they can access
it. Code-Commit also helps you monitor your repositories through AWS CloudTrail and Amazon
CloudWatch.

High Availability and Durability - AWS Code-Commit stores your repositories in Amazon
Simple Storage Service (Amazon S3) and Amazon DynamoDB. Your encrypted data is
redundantly stored across multiple facilities. This architecture increases the availability and
durability of your repository data.


Notifications and Custom Scripts - You can now receive notifications for events impacting
your repositories. Notifications will come in the form of Amazon Simple Notification Service
(Amazon SNS) notifications. Each notification will include a status message as well as a link to
the resources whose event generated that notification. Additionally, using AWS Code-Commit
repository triggers, you can send notifications and create HTTP webhooks with Amazon SNS or
invoke AWS Lambda functions in response to the repository events you choose.
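
As a brief illustration of the basic workflow, the sketch below creates and clones a repository
with the AWS CLI and standard Git. The repository name and region are hypothetical:

    # Create a repository (name and region are hypothetical)
    aws codecommit create-repository --repository-name demo-app --region us-east-1

    # Let Git authenticate through the AWS CLI credential helper
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true

    # Clone over HTTPS; from here on, normal Git commands apply
    git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/demo-app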

2.2 AWS Code-Build

AWS Code-Build is a fully managed continuous integration service that compiles source code,
runs tests, and produces software packages that are ready to deploy. You don’t need to provision,
manage, and scale your own build servers. Code-Build can use GitHub, GitHub Enterprise,
Bitbucket, AWS Code-Commit, or Amazon S3 as a source provider.

Code-Build scales continuously and can process multiple builds concurrently. Code-Build
offers various pre-configured environments for various versions of Windows and Linux.
Customers can also bring their own customized build environments as Docker containers.
Code-Build also integrates with open source tools such as Jenkins and Spinnaker.

Code-Build can also create reports for unit, functional, or integration tests. These reports provide
a visual view of how many test cases were executed and how many passed or failed. The build
process can also be executed inside an Amazon Virtual Private Cloud (Amazon VPC) which can
be helpful if your integration services or databases are deployed inside a VPC.

With AWS Code-Build, your build artifacts are encrypted with customer-specific keys that are
managed by AWS KMS. Code-Build is integrated with IAM, so you can assign user-specific
permissions to your build projects.
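
Code-Build takes its instructions from a buildspec.yml file at the root of the source. The
following is a minimal sketch, assuming a Python project with a requirements.txt and a tests/
directory (both assumptions):

    version: 0.2
    phases:
      install:
        runtime-versions:
          python: 3.9            # assumed runtime; depends on the chosen build image
      build:
        commands:
          - pip install -r requirements.txt
          - pytest tests/        # assumed test suite location
    artifacts:
      files:
        - '**/*'                 # package everything as the build artifact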

2.3 AWS Code-Artifact


AWS Code-Artifact is a fully managed artifact repository service that organizations can use to
securely store, publish, and share software packages used in their software development
process. Code-Artifact can be configured to automatically fetch software packages and
dependencies from public artifact repositories, so developers have access to the latest
versions.
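
For example, a developer can point pip at a Code-Artifact repository with a single CLI
command; the domain and repository names below are hypothetical:

    # Configures pip to resolve packages through Code-Artifact; the
    # authorization token it obtains expires after 12 hours by default.
    aws codeartifact login --tool pip --domain my-domain --repository my-shared-repo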


CHAPTER 3
CONTINUOUS DELIVERY
Continuous delivery is a software development practice where code changes are automatically
prepared for a release to production. A pillar of modern application development, continuous
delivery expands upon continuous integration by deploying all code changes to a testing
environment and/or a production environment after the build stage. When properly implemented,
developers will always have a deployment-ready build artifact that has passed through a
standardized test process.
Continuous delivery lets developers automate testing beyond just unit tests so they can verify
application updates across multiple dimensions before deploying to customers. These tests may
include UI testing, load testing, integration testing, API reliability testing, etc. This helps
developers more thoroughly validate updates and pre-emptively discover issues. With the cloud,
it is easy and cost-effective to automate the creation and replication of multiple environments for
testing, which was previously difficult to do on-premises.

AWS offers the following services for continuous delivery:

• AWS Code-Build
• AWS Code-Deploy
• AWS Code-Pipeline

3.1 AWS Code-Deploy

AWS Code-Deploy is a fully managed deployment service that automates software deployments
to a variety of compute services such as Amazon Elastic Compute Cloud (Amazon EC2), AWS
Fargate, AWS Lambda, and your on-premises servers. AWS Code-Deploy makes it easier for
you to rapidly release new features, helps you avoid downtime during application deployment,
and handles the complexity of updating your applications. You can use Code-Deploy to automate
software deployments, eliminating the need for error-prone manual operations. The service
scales to match your deployment needs.
Code-Deploy has several benefits that align with the DevOps principle of continuous deployment:
Automated Deployments: Code-Deploy fully automates software deployments, allowing you to
deploy reliably and rapidly.
Centralized control: Code-Deploy enables you to easily launch and track the status of your
application deployments through the AWS Management Console or the AWS CLI. Code-Deploy
gives you a detailed report enabling you to view when and to where each application revision
was deployed. You can also create push notifications to receive live updates about your
deployments.
Minimize downtime: Code-Deploy helps maximize your application availability during the
software deployment process. It introduces changes incrementally and tracks application health

according to configurable rules. Software deployments can easily be stopped and rolled back if
there are errors.
Easy to adopt: Code-Deploy works with any application, and provides the same experience
across different platforms and languages. You can easily reuse your existing setup code. Code-
Deploy can also integrate with your existing software release process or continuous delivery
toolchain (e.g., AWS Code-Pipeline, GitHub, Jenkins). AWS Code-Deploy supports multiple
deployment options. For more information, see Deployment Strategies.
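
Code-Deploy is driven by an appspec.yml file bundled with each application revision. A
minimal sketch for an EC2/on-premises deployment follows; the install path and hook scripts
are assumptions:

    version: 0.0
    os: linux
    files:
      - source: /app                        # files in the revision bundle
        destination: /var/www/demo-app      # hypothetical install path
    hooks:
      BeforeInstall:
        - location: scripts/stop_server.sh  # assumed helper scripts
          timeout: 300
      ApplicationStart:
        - location: scripts/start_server.sh
          timeout: 300
      ValidateService:
        - location: scripts/health_check.sh
          timeout: 300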

3.2 AWS Code-Pipeline


AWS Code-Pipeline is a continuous delivery service that enables you to model, visualize, and
automate the steps required to release your software. With AWS Code-Pipeline, you model the
full release process for building your code, deploying to pre-production environments, testing
your application, and releasing it to production. AWS Code-Pipeline then builds, tests, and
deploys your application according to the defined workflow every time there is a code change.
You can integrate partner tools and your own custom tools into any stage of the release process
to form an end-to-end continuous delivery solution.
AWS Code-Pipeline has several benefits that align with the DevOps principle of continuous
deployment:
Rapid Delivery: AWS Code-Pipeline automates your software release process, allowing you to
rapidly release new features to your users. With Code-Pipeline, you can quickly iterate on
feedback and get new features to your users faster.
Easy to Integrate: AWS Code-Pipeline can easily be extended to adapt to your specific needs.
You can use the pre-built plugins or your own custom plugins in any step of your release
process. For example, you can pull your source code from GitHub, use your on-premises
Jenkins build server, run load tests using a third-party service, or pass on deployment
information to your custom operations dashboard.
Configurable Workflow: AWS Code-Pipeline enables you to model the different stages of your
software release process using the console interface, the AWS CLI, AWS CloudFormation, or
the AWS SDKs. You can easily specify the tests to run and customize the steps to deploy your
application and its dependencies.
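
Because a pipeline can itself be modeled in AWS CloudFormation, the release workflow can
live in source control alongside the application. The sketch below shows a trimmed-down
two-stage pipeline; the role ARN, bucket, repository, and project names are all hypothetical:

    Resources:
      ReleasePipeline:
        Type: AWS::CodePipeline::Pipeline
        Properties:
          RoleArn: arn:aws:iam::111122223333:role/PipelineServiceRole  # hypothetical
          ArtifactStore:
            Type: S3
            Location: demo-artifact-bucket                             # hypothetical
          Stages:
            - Name: Source
              Actions:
                - Name: FetchSource
                  ActionTypeId:
                    Category: Source
                    Owner: AWS
                    Provider: CodeCommit
                    Version: "1"
                  Configuration:
                    RepositoryName: demo-app
                    BranchName: main
                  OutputArtifacts:
                    - Name: SourceOutput
            - Name: Build
              Actions:
                - Name: BuildAndTest
                  ActionTypeId:
                    Category: Build
                    Owner: AWS
                    Provider: CodeBuild
                    Version: "1"
                  Configuration:
                    ProjectName: demo-build-project
                  InputArtifacts:
                    - Name: SourceOutput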


CHAPTER 4
DEPLOYMENT STRATEGIES

Deployment strategies define how you want to deliver your software. Organizations follow
different deployment strategies based on their business model. Some may choose to deliver
only fully tested software, while others may want their users to provide feedback and evaluate
under-development features (e.g., beta releases). The following sections describe various
deployment strategies.

4.1 In-Place Deployments


In this strategy, the deployment is done in place: the application on each instance in the
deployment group is stopped, the latest application revision is installed, and the new version of
the application is started and validated. You can use a load balancer so that each instance is
deregistered during its deployment and then restored to service after the deployment is complete.
In-place deployments can be done all at once, accepting a brief service outage, or as a rolling
update. AWS Code-Deploy and AWS Elastic Beanstalk offer deployment configurations for one
at a time, half at a time, and all at once. These same deployment strategies for in-place
deployments are available within blue-green deployments.

4.2 Blue-Green Deployments


Blue-Green, sometimes referred to as red-black deployment, is a technique for releasing
applications by shifting traffic between two identical environments running different versions of
the application. Blue-green deployments help you minimize downtime during application updates
and mitigate the risks surrounding rollback. They enable you to launch a new version (green) of
your application alongside the old version (blue), and to monitor and test the new version before
rerouting traffic to it, rolling back if issues are detected.

4.3 Canary Deployments


In a canary deployment, traffic is shifted in two increments. It is a more risk-averse variant of
the blue-green strategy that uses a phased approach: new application code is deployed and
exposed for trial, and upon acceptance it is rolled out either to the rest of the environment or in
a linear fashion.


4.4 Linear Deployments


Linear deployments mean traffic is shifted in equal increments with an equal number of minutes
between each increment. You can choose from predefined linear options that specify the
percentage of traffic shifted in each increment and the number of minutes between each
increment.

4.5 All-at-once Deployments

All-at-once deployments mean all traffic is shifted from the original environment to the
replacement environment in one step.

4.6 AWS Elastic Beanstalk Deployment Strategies

AWS Elastic Beanstalk supports the following types of deployment strategies (a configuration sketch follows the list):

1. All-at-Once: Performs an in-place deployment on all instances.


2. Rolling: Splits the instances into batches and deploys to one batch at a time.
3. Rolling with Additional Batch: Splits the deployments into batches but for the first
batch creates new EC2 instances instead of deploying on the existing EC2 instances.
4. Immutable: Deploys to a fresh set of new instances instead of updating the existing
instances.
5. Traffic Splitting: Performs an immutable deployment and then forwards a percentage of
traffic to the new instances for a predetermined period of time. If the instances stay
healthy, all traffic is forwarded to the new instances and the old instances are terminated.
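
These policies are selected through Elastic Beanstalk configuration options. The sketch below
shows an .ebextensions option file choosing a rolling policy; the file name and batch size are
assumptions:

    # .ebextensions/deploy.config (hypothetical file name)
    option_settings:
      aws:elasticbeanstalk:command:
        DeploymentPolicy: RollingWithAdditionalBatch
        BatchSizeType: Percentage
        BatchSize: 25    # assumed batch size: update 25% of instances at a time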


CHAPTER 5
INFRASTRUCTURE AS CODE
A fundamental principle of DevOps is to treat infrastructure the same way developers treat code.
Application code has a defined format and syntax. If the code is not written according to the
rules of the programming language, applications cannot be created. Code is stored in a version
management or source control system that logs a history of code development, changes, and bug
fixes. When code is compiled or built into applications, we expect a consistent application to be
created, and the build is repeatable and reliable.

Practicing infrastructure as code means applying the same rigor of application code development
to infrastructure provisioning. All configurations should be defined in a declarative way and
stored in a source control system such as AWS Code-Commit, the same as application code.
Infrastructure provisioning, orchestration, and deployment should also support the use of
infrastructure as code.

Infrastructure was traditionally provisioned using a combination of scripts and manual processes.
Sometimes these scripts were stored in version control systems or documented step by step in
text files or runbooks. Often the person writing the runbooks is not the same person executing
the scripts or following the runbooks. If these scripts or runbooks are not updated frequently,
they can become a show-stopper in deployments. As a result, the creation of new environments
is not always repeatable, reliable, or consistent.

In contrast to the above, AWS provides a DevOps-focused way of creating and maintaining
infrastructure. Similar to the way software developers write application code, AWS provides
services that enable the creation, deployment and maintenance of infrastructure in a
programmatic, descriptive, and declarative way. These services provide rigor, clarity, and
reliability. The AWS services discussed in this paper are core to a DevOps methodology and
form the underpinnings of numerous higher-level AWS DevOps principles and practices.

AWS offers the following services to define infrastructure as code:

• AWS CloudFormation
• AWS Cloud Development Kit (AWS CDK)
• AWS Cloud Development Kit for Kubernetes (cdk8s)


5.1 AWS CloudFormation
AWS CloudFormation is a service that enables developers to create AWS resources in an orderly
and predictable fashion. Resources are written in text files using JavaScript Object Notation
(JSON) or YAML ("YAML Ain't Markup Language") format. The templates require a specific
syntax and structure that depends on the types of resources being created and managed. You
author your resources in JSON or YAML with any code editor, such as AWS Cloud9, check them
into a version control system, and then CloudFormation builds the specified services in a safe,
repeatable manner.
A CloudFormation template is deployed into the AWS environment as a stack. You can manage
stacks through the AWS Management Console, AWS Command Line Interface, or AWS
CloudFormation APIs. If you need to make changes to the running resources in a stack, you
update the stack. Before making changes to your resources, you can generate a change set, which
is a summary of your proposed changes. Change sets enable you to see how your changes might
impact your running resources, especially for critical resources, before implementing them.

Figure 2 - AWS CloudFormation creating an entire environment (stack) from one template

You can use a single template to create and update an entire environment or separate templates to
manage multiple layers within an environment. This enables templates to be modularized, and
also provides a layer of governance that is important to many organizations.
When you create or update a stack in the console, events are displayed showing the status of the
configuration. If an error occurs, by default the stack is rolled back to its previous state. Amazon
Simple Notification Service (Amazon SNS) provides notifications on events. For example, you


can use Amazon SNS to track stack creation and deletion progress via email and integrate with
other processes programmatically.
AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources and
lets you describe any dependencies or pass in special parameters when the stack is configured.
With CloudFormation templates, you can work with a broad set of AWS services, such as
Amazon S3, Auto Scaling, Amazon CloudFront, Amazon DynamoDB, Amazon EC2, Amazon
ElastiCache, AWS Elastic Beanstalk, Elastic Load Balancing, IAM, AWS OpsWorks, and
Amazon VPC. For the most recent list of supported resources, see the AWS resource and
property types reference.
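
As a small illustration, the sketch below declares a single versioned S3 bucket; the logical name
is hypothetical:

    AWSTemplateFormatVersion: "2010-09-09"
    Description: Minimal illustrative stack
    Resources:
      ArtifactBucket:                # hypothetical logical name
        Type: AWS::S3::Bucket
        Properties:
          VersioningConfiguration:
            Status: Enabled
    Outputs:
      BucketName:
        Value: !Ref ArtifactBucket   # returned after stack creation

Running "aws cloudformation deploy --template-file stack.yml --stack-name demo-stack" (stack
name hypothetical) creates the stack, and subsequent runs update it through change sets.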

5.2 AWS Cloud Development Kit


The AWS Cloud Development Kit (AWS CDK) is an open source software development
framework to model and provision your cloud application resources using familiar programming
languages. AWS CDK enables you to model application infrastructure using TypeScript, Python,
Java, and .NET. Developers can work in their existing integrated development environment
(IDE), leveraging tools like autocomplete and in-line documentation to accelerate development
of infrastructure.
AWS CDK utilizes AWS CloudFormation in the background to provision resources in a safe,
repeatable manner. Constructs are the basic building blocks of CDK code. A construct represents
a cloud component and encapsulates everything AWS CloudFormation needs to create the
component. The AWS CDK includes the AWS Construct Library containing constructs
representing many AWS services. By combining constructs together, you can quickly and easily
create complex architectures for deployment in AWS.
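
A minimal CDK v2 app in Python gives a flavor of this; the stack name and bucket construct ID
are hypothetical:

    from aws_cdk import App, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class StorageStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # One construct stands in for the CloudFormation resource,
            # with CDK filling in sensible defaults.
            s3.Bucket(self, "ArtifactBucket", versioned=True)

    app = App()
    StorageStack(app, "StorageStack")
    app.synth()  # emits a CloudFormation template, deployable with `cdk deploy`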

5.3 AWS Cloud Development Kit for Kubernetes


AWS Cloud Development Kit for Kubernetes (cdk8s) is an open-source software development
framework for defining Kubernetes applications using general-purpose programming languages.
Once you have defined your application in a programming language (as of the date of
publication, only Python and TypeScript are supported), cdk8s converts your application
description into plain Kubernetes YAML. This YAML file can then be consumed by any
Kubernetes cluster running anywhere. Because the structure is defined in a programming
language, you can use the rich features the language provides, such as its abstraction facilities,
to create your own reusable building blocks and share them across all of your deployments.
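
The sketch below follows the shape of a cdk8s Python app, assuming a project where running
"cdk8s import" has already generated the typed imports/k8s module; the names and container
image are hypothetical:

    from constructs import Construct
    from cdk8s import App, Chart
    from imports import k8s  # generated by running `cdk8s import`

    class WebChart(Chart):
        def __init__(self, scope: Construct, id: str):
            super().__init__(scope, id)
            label = {"app": "web"}
            # A two-replica Deployment; the label ties the pod template
            # to the Deployment's selector.
            k8s.KubeDeployment(self, "deployment",
                spec=k8s.DeploymentSpec(
                    replicas=2,
                    selector=k8s.LabelSelector(match_labels=label),
                    template=k8s.PodTemplateSpec(
                        metadata=k8s.ObjectMeta(labels=label),
                        spec=k8s.PodSpec(containers=[
                            k8s.Container(name="web", image="nginx:1.25"),
                        ]),
                    ),
                ))

    app = App()
    WebChart(app, "web")
    app.synth()  # writes dist/web.k8s.yaml for kubectl apply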


CHAPTER 6
AUTOMATION

Another core philosophy and practice of DevOps is automation. Automation focuses on the
setup, configuration, deployment, and support of infrastructure and the applications that run on
it. By using automation, you can set up environments more rapidly in a standardized and
repeatable manner. The removal of manual processes is key to a successful DevOps strategy.
Historically, server configuration and application deployment have been predominantly manual
processes. Environments become non-standard, and reproducing an environment when issues
arise is difficult. The use of automation is critical to realizing the full benefits of the cloud. Internally
AWS relies heavily on automation to provide the core features of elasticity and scalability.
Manual processes are error prone, unreliable, and inadequate to support an agile business.
Frequently an organization may tie up highly skilled resources to provide manual configuration,
when time could be better spent supporting other, more critical, and higher value activities within
the business. Modern operating environments commonly rely on full automation to eliminate
manual intervention or access to production environments. This includes all software releasing,
machine configuration, operating system patching, troubleshooting, or bug fixing. Many levels
of automation practices can be used together to provide a higher level end-to-end automated
process.

Automation has the following key benefits:

• Rapid changes
• Improved productivity
• Repeatable configurations
• Reproducible environments
• Leveraged elasticity
• Leveraged auto scaling
• Automated testing

Automation is a cornerstone of AWS services and is internally supported in all services,
features, and offerings.


CHAPTER 7
SECURITY
Whether you are going through a DevOps transformation or implementing DevOps principles
for the first time, you should treat security as integrated into your DevOps processes. It should
be a cross-cutting concern across your build, test, and deployment stages. Before we talk about
security in DevOps on AWS, let's look at the AWS Shared Responsibility Model.

7.1 AWS Shared Responsibility Model


Security is a shared responsibility between AWS and the customer. The different parts of the
Shared Responsibility Model are explained below:
• AWS responsibility “Security of the Cloud” - AWS is responsible for protecting the
infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure
is composed of the hardware, software, networking, and facilities that run AWS Cloud
services.
• Customer responsibility “Security in the Cloud” - Customer responsibility will be
determined by the AWS Cloud services that a customer selects. This determines the
amount of configuration work the customer must perform as part of their security
responsibilities.

This shared model can help relieve the customer’s operational burden as AWS operates, manages
and controls the components from the host operating system and virtualization layer down to the
physical security of the facilities in which the service operates. This is critical in cases where
customers want to understand the security of their build environments.

Figure 3 - AWS Shared Responsibility Model


For DevOps we want to assign permissions based on the least-privilege model. This model
states that a user (or service) should be granted the minimal set of permissions required to get
the job done. Permissions are maintained in IAM. IAM is a web service that helps you securely
control access to AWS resources. You can use IAM to control who is authenticated (signed in)
and authorized (has permissions) to use resources.

7.2 Identity and Access Management


AWS Identity and Access Management (IAM) defines the controls and policies that are used to
manage access to AWS resources. Using IAM, you can create users and groups and define
permissions to various DevOps services.
In addition to users, various services may also need access to AWS resources. For example,
your Code-Build project may need to store Docker images in Amazon Elastic Container Registry
(Amazon ECR) and will need permission to write to ECR. These types of permissions are
defined by a special type of role known as a service role.
IAM is one component of the AWS security infrastructure. With IAM, you can centrally manage
groups, users, service roles, and security credentials such as passwords, access keys, and
permissions policies that control which AWS services and resources users can access. An IAM
policy lets you define a set of permissions. This policy can then be attached to a role, a user, or a
service to define its permissions. You can also use IAM to create roles that are used widely
within your desired DevOps strategy. In some cases it can make perfect sense to programmatically
assume a role instead of directly being granted the permissions. When a service or user assumes
a role, they are given temporary credentials to access a service that they normally do not have
access to.
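
As an illustration of least privilege, the sketch below shows the kind of policy such a service
role might carry, allowing a build project to push images to one specific ECR repository only;
the region, account ID, and repository name are hypothetical:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PushToOneRepositoryOnly",
          "Effect": "Allow",
          "Action": [
            "ecr:BatchCheckLayerAvailability",
            "ecr:InitiateLayerUpload",
            "ecr:UploadLayerPart",
            "ecr:CompleteLayerUpload",
            "ecr:PutImage"
          ],
          "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/demo-app"
        },
        {
          "Sid": "AuthTokenIsNotResourceScoped",
          "Effect": "Allow",
          "Action": "ecr:GetAuthorizationToken",
          "Resource": "*"
        }
      ]
    }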


CHAPTER 8

8.1 ADVANTAGES

• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization

8.2 DISADVANTAGES

• Amazon Web Services may suffer from some common cloud computing issues when you
move to the cloud, for example downtime, limited control, and limited backup protection.

• AWS sets default limits on resources, which vary from region to region. These resources
include images, volumes, and snapshots, and you can launch only a limited number of
instances per region.


CHAPTER 9
CONCLUSION
In order to make the journey to the cloud smooth, efficient, and effective, technology
companies should embrace DevOps principles and practices. These principles are embedded in
the AWS platform and form the cornerstone of numerous AWS services, especially those in the
deployment and monitoring offerings.
Begin by defining your infrastructure as code using AWS CloudFormation or the AWS Cloud
Development Kit (CDK). Next, define the way in which your applications will use continuous
deployment with the help of services like AWS Code-Build, AWS Code-Deploy, AWS
Code-Pipeline, and AWS Code-Commit. At the application level, use services like AWS Elastic
Beanstalk, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes
Service (Amazon EKS), and AWS OpsWorks to simplify the configuration of common
architectures. Using these services also makes it easy to include other important services like
Auto Scaling and Elastic Load Balancing.
Finally, use monitoring services such as Amazon CloudWatch and solid security practices such
as AWS IAM. With AWS as your partner, your DevOps principles will bring agility to your
business and IT organization and accelerate your journey to the cloud.


REFERENCES

https://aws.amazon.com/devops/what-is-devops/
https://aws.amazon.com/about-aws/
https://en.wikipedia.org/wiki/Amazon_Web_Services

