
Santhosh Kumar "San" Srinivasan

Solutions Architect, Cloud / DevOps Engineer

san@sanspace.in
+91 739 709 5811
https://sanspace.in

Profile
 AWS Certified Solutions Architect
 Bachelor's Degree in Computer Applications.
 AWS Cloud Engineer with 12 years of experience in the IT industry spanning DevOps, SRE, solutions architecture, full stack development, software configuration management (SCM), and build and release management.
 In-depth knowledge and Hands-on experience using AWS cloud services like Compute,
Network, Storage, Database, Serverless and Identity & Access Management.
 Strong experience designing and deploying highly available, resilient, cost-effective, fault-
tolerant, and scalable distributed systems on AWS.
 Knowledge of recommended best practices for building enterprise-wide scalable, secure,
self-healing and reliable applications on AWS.
 Leveraging AWS, Terraform, Python and bash to rapidly provision multi-team cloud
environments.
 Define and deploy monitoring, metrics, and logging systems
 Implement cost-control strategies such as resizing, off-hours downsizing, and targeting an optimal utilization ratio.
 Currently managing DevOps for a fast-paced development team building an enterprise-wide Python web portal.
 Maintaining highly available, resilient AWS environments with RTO < 1 hour and RPO near
real time.
 Engaged in Vulnerability Management for both infrastructure and application components.
 Collaborated with a team of 8 developers on Git and GitHub, covering branching strategy, segregation of duties, and release management.
 Developed CI/CD pipelines for infrastructure and code deployment using GitHub, Jenkins, and Artifactory, from code check-in to production deployment.
 Set up Terraform backends, modules, and var-files to manage multiple environments with the same infrastructure code (see the sketch after this list).
 Responsible for ensuring Systems & Network Security, maintaining performance and setting
up monitoring using CloudWatch, DataDog, NewRelic and PagerDuty.
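
A minimal sketch of the multi-environment Terraform workflow referenced above, assuming a layout with per-environment backend configuration and var-files (the file names here are illustrative, not the actual project layout):

# Select the dev environment: its own state backend and variable values
terraform init -backend-config=backends/dev.hcl
terraform plan -var-file=envs/dev.tfvars -out=dev.plan
terraform apply dev.plan

# Re-point the backend and apply the same code to prod
terraform init -reconfigure -backend-config=backends/prod.hcl
terraform apply -var-file=envs/prod.tfvars

The same root module is reused for every environment; only the backend configuration and var-file change.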

Skills
 Cloud: AWS
 DevOps: Terraform, CloudFormation, AWS SDK (Boto3)
 CI/CD: Jenkins, Artifactory
 Containerization: Docker, Kubernetes
 Scripting: Python, Perl, Shell
 Web/App Servers: Apache, nginx, uWSGI
 SCM: Git, GitHub, ServiceNow
 OS: UNIX, Linux (RHEL, Amazon Linux, Ubuntu)
 DB: SQL, PostgreSQL, MySQL, MongoDB, SQLite
 Programming: C, Javascript
 Monitoring: DataDog, NewRelic, Splunk
 Automation: Selenium, Puppeteer
 Other Tools: jMeter, RegEx

Experience
DevOps Engineer
Client: Top US FinCorp

AWS, Terraform, Jenkins, Docker, Agile

 Manage DEV, QA, PROD environments with Terraform
 Perform routine regional failover and disaster recovery exercises
 Secure components in a multi-team cloud environment
 Automate deployments with bash and Jenkins
 Create S3 buckets and secure them using IAM and Bucket Policies
 Manage lifecycle policies to move objects to different S3 storage tiers and Glacier
 Use Terraform to manage and provision Infrastructure as Code
 Write bash wrapper scripts to enable developers to deploy and destroy Terraform-managed mini environments with one click
 Set up monitors and dashboards using DataDog
 Install and Configure NewRelic for APM
 Manage AWS Services EC2, Lambda, S3, ELB, Route53, EFS and IAM
 Set up Alarms in CloudWatch for monitoring EC2 and RDS performance
 Create and maintain Dockerfiles for Python Flask Applications
 Use Jira to track effort and manage sprints
 Create and advocate a branching strategy and best practices for GitHub-based collaboration with developers.
 Utilize Route53 to manage DNS records and CNAMEs
 Create custom AMIs for faster boot time and efficient Auto Scaling
 Manage weekly releases with several teams working on the same monolithic application
 Migrate a monolithic app to a microservices based architecture
 Create Infrastructure and Code Deployments using established Change Management Process

2017 - 2020

Cognizant USA

Senior Developer

Client: Top US Retailer

C++, UNIX, Perl, SQL

 Develop and Test EMV Payment Cards functionality with Tokenization in POS systems
 Write performant, PCI-compliant tokenization code to tokenize card numbers on the fly using the STL and Boost libraries.
 Enhance C and C++ POS and Post Processing Applications
 Write High Performance code to process trillions of lines in text files
 Convert billions of existing database records and log files with PAN to tokenized PAN across
stores
 Use Regular Expressions to identify different types of Credit Card Numbers and Providers (illustrative patterns follow this list)
 Manage deployments to 1700 retail stores across US in different time zones
 Write monitoring scripts to profile and assess backend applications' performance
 Collaborate with Performance Testing teams to test against established KPIs from business
 Write concurrent scripts to simulate peak load to different applications
 Create parsing logic to extract logs and assess rate of failures in a legacy environment
 Triaged production and deployment issues in War room environments and provided
workarounds and solutions
 Plan and execute scaling for peak events during the holiday season
 Architect and deploy custom configurations to scale dynamically based on traffic volume
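
Illustrative shell patterns for the kind of issuer classification mentioned in the list above; these are the well-known public card-number prefixes, not the project's actual rules, and pans.txt is a placeholder file name:

grep -E '^4[0-9]{12}([0-9]{3})?$' pans.txt   # Visa: 13 or 16 digits starting with 4
grep -E '^5[1-5][0-9]{14}$' pans.txt         # MasterCard: 16 digits starting with 51-55
grep -E '^3[47][0-9]{13}$' pans.txt          # American Express: 15 digits starting with 34 or 37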

2016 - 2017

Cognizant USA

Developer

Client: Top US Retailer

C, Pro* C, UNIX, SQL

 Write, compile and deploy Pro*C components for integration with Planogram application
 Rewrite EOD processing with 50 modules
 Write Perl CGI app to provide web interface to AIX boxes
 Write Shell and Perl scripts to automate tasks
 Monitor logs across POS and backstore applications to track and identify missing
transactions
 Review Developers code for adherence to coding standards, performance, and security
requirements
 Create and manage dotfiles to efficiently manage the development server environment using bash and ksh scripts
 Collaborate with Mainframe developers to triage daily batch file transfers

Overview
As a DevOps Solution Architect consultant, I provide infrastructure automation services
within private and public clouds as well as team training and DevOps mentoring. I like
uncharted territory and problems that are difficult to solve. I love to:

 Advocate for, facilitate, and create DevOps culture, and automate all the things with the right DevOps tools
 Finally reach continuous delivery, and build clouds with OpenStack and AWS

I have many years of experience conducting and managing business on the Internet and have created technical solutions for businesses around the world and for my own business ventures. To briefly name a few of my major accomplishments, I have:

 Designed and built highly available, private AWS clouds.
 Designed and built automated infrastructure using open source tools.
 Developed web applications.
 Designed and built networks and information systems.
 Provided training and mentoring to others in these fields.

I am currently working as a Principal DevOps Engineer, with a primary focus on build, release, and delivery automation engineering. My responsibilities involve setting the overall delivery automation strategy via investments in automation at several layers of the technology stack to implement a Continuous Deployment/Delivery pipeline.

My specific areas of expertise are cloud systems automation and architecture, and my specialties encompass agile and lean development practices, along with open source software strategy: identification, evaluation, selection, implementation, and documentation. My primary focus is on enabling development/engineering professionals, as well as project, program, and technical leads, to realize the benefits of adaptive automation processes.

I have extensive experience with build architecture and automation, continuous delivery, and troubleshooting complex dependency graphs. I am also a software craftsman, applying state-of-the-art tools and methods to enable efficiency, performance, and agility for those who rely on me.

 Mastery of build technologies like Hudson, Jenkins, Ivy, Maven, Gradle, NuGet, etc.; integration and automation of source control applications like Perforce, Subversion, Git, and Artifactory.
 Management of library versions and deprecated code; design and sequencing of automated builds and test runs; troubleshooting expertise for build failures due to dependencies, tests, etc.
 Evangelism of best practices and tools; programming/scripting skills in shell, Python, Groovy, and PowerShell; strong communication and cross-functional skills; and the ability to execute autonomously given a set of clearly defined strategies.
 Currently doing Rails and AWS consulting. From the backend persistence layer to the Ruby layer to client-side JavaScript, I've got projects covered. I'm typically adding code to GitHub and submitting a pull request within a day of joining a project.

Specialties: Systems engineering and automation (mainly GNU/Linux), cloud/distributed computing, Opscode Chef, OpenStack, Amazon AWS, Apache Software Foundation projects, Unix/Linux shell scripting, Ruby, PHP, Python, Java, HTML5, JavaScript, Git, GitHub, Jenkins.

Experience

Angie's List, Indianapolis, IN
2012 to present
Principal DevOps Lead @ Angie's List

 Manage DevOps and Infrastructure Teams supporting tools and infrastructure for
100+ developers on 3-5 concurrent releases.
 Manage implementation of the company’s first repeatable, traceable, documented, and
packaged product ensuring quality upon delivery.
 Implement the first release tracking and reporting providing full visibility into
software releases.
 Manage re-architecture of Jenkins and integration with Confluence for release management and documentation assets. Re-architect a Maven-based system, reducing build times.
 Manage implementation and installation of server-class hardware, migrating the company's assets from desktops around the office.
 Manage hardware request and support from developers and infrastructure.
 Manage all CM tools (JIRA, Confluence, Artifactory, SVN, Maven, Jenkins, ANT,
Git, GitHub, Visual Studio) and their usage / process ensuring traceability,
repeatability, quality, and support.
 Re-architect a legacy SVN repository from pure script dependency with no representation of releases to a clear structure showing where code resides and the differences between releases.
 Implemented a Continuous Delivery pipeline with Docker, Jenkins, GitHub, and AWS AMIs. Whenever a new GitHub branch gets started, Jenkins, our Continuous Integration server, automatically attempts to build a new Docker container from it. The Docker container leverages Linux containers and has the AMI baked in. Converted our staging and production environments from a handful of AMIs to a single bare-metal host running Docker.

Tools used: Artifactory, NuGet, OctopusDeploy, Django, Python, .NET, Nexus, Ivy, Maven, MSBuild, MSDeploy, NAnt, Docker, Puppet, AWS CloudFormation, AWS OpsWorks, Ruby, Scala, Play, JIRA, Confluence, SVN, Jenkins, Ant, Git, GitHub, Visual Studio.

Build and Release manager @ Angie’s List

This role is key to providing proactive, continuous acceleration of development velocity by managing, optimizing, and redesigning builds (and related artifacts) for 10+ distinct high-traffic services (edge, middle, big data) and related libraries owned by the Infrastructure team. We develop, deploy, and operate critical functionality, global management, operational insight, and developer support. We depend on a large, distributed set of services and others depend on us. We produce and consume 100s of code artifacts and we're working towards daily deployments.

Tools used: Artifactory, NuGet, OctopusDeploy, Django, Python, .NET, Nexus, Ivy, Maven, MSBuild, MSDeploy, NAnt
Configuration Manager @ Angie’s List

 Responsible for applying the Corporation's Software Configuration Management processes to projects, setting up and maintaining TFS/Git/GitHub infrastructure, and supporting a continuous delivery model by automating software build and package migration processes.
 Responsible for planning, developing, executing and supporting the Corporation's software development lifecycle from the point of developer check-in through production deployment. TFS/Git responsibilities include maintaining the version control system (branching model, security), creation and maintenance of build definitions/scripts, and the setup of work item areas and iterations. Worked closely with key members of the development and operations teams; responsible for troubleshooting build breaks, enforcement of software quality standards, and proper communication of package/installation steps to operations for both non-production and production environments; also responsible for maintaining and supporting developer tools (e.g. PostSharp, CodeSmith, ReSharper) and developer environments.
 Working with developers to reduce friction of code flow from the developer's fingertips to production. This includes improving and maintaining Continuous Integration and deployment systems.

Tools Used: CruiseControl, Hudson, Anthill Pro, Fitnesse, TFS, MSBuild, Maven, Ivy, Nexus, NAnt, Ant, Sonar, Selenium, QTP, Agile, XP, Scrum, Lean/Kanban, Mingle, Rally, Jira, Crucible, Git, GitHub, Gerrit.

Delivery Lead @ Angie’s List

Came up with a Continuous Delivery platform and reference implementation to provide a complete working Continuous Delivery solution using industry-standard open source tools such as Jenkins, Puppet, Chef, Capistrano, Ruby, Maven, Git, Java, etc. It uses the following tools:

 CloudFormation – CloudFormation is a domain-specific language expressed in JSON for automating the provisioning/management of AWS resources. These CloudFormation scripts automate the provisioning of AWS resources (IAM, EC2, S3, Route 53, SNS, RDS, ELB and Auto Scaling) and make calls to Puppet scripts that execute the rest (provisioning/configuration of servers on the instances).
 IAM – IAM is the user authentication and authorization service for AWS. The
Delivery Scripts provisions IAM users for secure access to AWS resources in its
CloudFormation scripts.
 EC2 – (Security Groups, KeyPairs) EC2 is the service for launching compute
instances in AWS and automatically provisions EC2 instances based on AMIs. It also
configures security groups as part of virtual firewall configuration.
 S3 – It has CloudFormation scripts that push files, create keys, etc. in S3.
 Route 53 – Route53 is an automated DNS. From CloudFormation, it configures
Route 53 to allocate domain names.
 Simple Notification Service (SNS) – In CloudFormation, it configures SNS to send
notification messages to inform users and services based on events.
 RDS – In CloudFormation, configures RDS
 Elastic Load Balancer – In CloudFormation, provisions ELB to distribute traffic
across multiple Amazon EC2 instances, as necessary.
 Auto Scaling – In CloudFormation, provisions launch configurations and Auto
Scaling groups to provide support for scaling instances across AZs
 Github – uses Github for version control
 Jenkins – automates the provisioning and configuration for all of the Jenkins
Continuous Integration Server including plugins, jobs, server configuration, etc.
 Puppet – Puppet scripts are called by CloudFormation. It’s Puppet scripts automate
the provisioning and configuration of servers on the EC2 instances. These include, but
are not limited to, Java, Ruby, GCC, Tomcat, MySQL and PostgreSQL.
 Chef – The platform is in the process of adding support for Chef.
 Cucumber – Cucumber is a behavior-driven framework written in Ruby. It calls
Cucumber scripts from Jenkins to test the infrastructure and deployment to ensure
they were successful or, otherwise, notify team members.
 Capistrano – Capistrano is a deployment framework written in Ruby. It calls Capistrano scripts from Jenkins to perform deployments.
 Ruby – uses Ruby as the “glue” between all of its components. These Ruby scripts
run tests, delete stacks and instances, etc.
 Dev Platforms – It provides "out-of-the-box" automation for Rails, Java and Grails on the Linux operating system. Other platforms can be supported by OpenDelivery, and it provides analogous automation for MySQL, PostgreSQL, Tomcat and Apache.

Tools Used: Jenkins, Puppet, Chef, Capistrano, Ruby, Maven, Git, Java

Vanguard Investments Group, Malvern, PA


2012 – 2012, Configuration Analyst
 Constructed/Architected a Continuous Integration CI Server and Implemented
Build/Deploy automation Server utilizing CI Technologies like Jenkins/Hudson,
Subversion, Maven, Ivy, Nexus, MSBuild, Ant, Sonar, JIRA and Selenium for
both .NET and J2EE Applications on mixed OS (Windows/Linux/Unix).
 Modeled and automated the End to End Continuous Integration/Deployment/Delivery
pipeline which included building a Continuous Integration server utilizing tools like
Jenkins, Ivy, Nexus, maven, Jira, Subversion, Git, Ant, Selenium, and Sonar.
 Administered and supported the Continuous Integration server infrastructure: completing software builds and elevations, creating directories and security groups, and recreating prior versions. Monitored software, hardware, and/or middleware updates.

Tools Used: Jenkins/Hudson, Ant, MSBuild, TFS Team Explorer, Subversion, Travis CI
HP Enterprise Services
2010-2012, CM Solutions Architect, Release manager

CM Solution Architect for Healthcare Emerging Products

 As CM Solution Architect at HP Healthcare, I was the focal point for managing, communicating and strategizing application and infrastructure releases. Main responsibilities included planning, developing, coordinating and leading release activities with Engineering, Quality, Project Management, Operations, Product and Support teams.
 Led the successful deployment of multiple product releases to QA, Staging and Production environments and acted as the gatekeeper, ensuring all agreed entry/exit criteria and process checkpoints were satisfactorily negotiated and met. In addition, I applied my proven communication and problem-solving skills to guide and assist the support groups on issues related to the testing and deployment of business-critical information and applications. Specific responsibilities:
 Engaged with Multiple projects in early phases of the development life cycle to
ensure alignment with the overall release schedules and corresponding demand
management. Defined and formulated releases to ensure alignment to established
software development processes in an effort to meet business/Corporate needs.

Tools Used: Jenkins, Puppet, Chef, Capistrano, Ruby, Maven, Git, Java

Configuration manager for Healthcare Emerging Products

 Conducted multiple CM assessments including "As-Is" and "To-Be" with recommendations to improve Healthcare CM and release management practices. Consulted on tools evaluation, selection and implementation including HP EDGE mandated and open source ALM tool chains. Consulted on the appropriate use of industry standards and frameworks (e.g. CMMI/EDGE).
 Developed Healthcare Product Release and Configuration plan and managed CM
team responsible for build automation (development and maintenance), code
promotion through environments, and production packaging and installation support
for customers. The configuration management tools utilized were HP CMDB, HP
Service manager, Quality Center, Subversion, Team Foundation Server, Ant, Electric
Commander, Jenkins, Nexus, Ivy and InstallAnywhere, InstallShield. Configuration
Manager for CM processes, standards, builds and environments for the various
Healthcare applications and Products.
 Directed setup, use, and build scheduling for environments and implemented a
Continuous Delivery pipeline. Designed and implemented CM requirements,
approach, and tooling for Java (J2EE) and .NET -based application development.
Designed, coded, and implemented automated build scripting in Ant, Ivy,
Jenkins/Hudson, and Maven.
 Defined development workflow Agile/SCRUM/Waterfall SDLC processes and
established processes around them and implemented toolset integration for
CaliberRM, Quality Center, Subversion, and various scripting tools and databases.
Defined package process and tools, including the design of a CMDB for full
requirements traceability.
 Led team of CM build specialists, tool integrators, environment coordinators and
packagers and defined and assisted with Data Management (and Testing data
baseline) CM strategies. Managed Healthcare product packaging for release to
customers utilizing InstallShield and InstallAnywhere.

Tools Used: Jenkins, Puppet, Chef, Capistrano, Ruby, Maven, Git, Java

InterDigital, King Of Prussia, PA


2005 – 2009, Configuration/Release manager
As CM manager, I was responsible for building and running continuous integration environments that support multiple development teams working toward common builds. Major responsibilities included:

 Designing and implementing a continuous development and deployment process that is uniform throughout several development teams and across projects. Implemented Continuous Integration validation tests on code check-in. Designed and implemented, working with the QA automation team, post-check-in automated unit and system tests, and established and documented the workflow. Trained development, QA and production deployment teams. Managed tools like Subversion, Jenkins and JIRA, and performed maintenance and troubleshooting of build/deployment systems.
 Planned, coordinated and executed releases to QA, stage and production environments, and managed complex code branches from multiple development teams for current and future releases. Merged code and ensured successful builds with intended functionality. Ensured releases were documented for supportability and functionality and that stakeholders spanning multiple organizations were notified in advance.
 Responsible for configuration management including deployment of new
software/configuration changes into our UAT, Training, Production and DR
environments. Additional duties included working with development and
infrastructure teams to improve the configuration and release management processes
and environments for more efficient, higher quality software deployments.

Aspen Software Consultants, Dallas, TX


2002-2005, Consultant
 CM Administrator at IRS, Dallas, TX: Participated in and led software configuration management boards and provided support for the release process from the different vendors. Involved in building and deploying software releases, building
and compiling code of varying complexity using automated and manual efforts to
ensure complete and accurate code compilation for release into various critical
environments. Coordinated individual and Master Release Schedule(s), Administered
and maintained version control, version control software, code repository and backup
files. Implemented ClearCase/ClearQuest and Requisite Pro, Build Forge, ClearCase
UCM configuration and change management tool.
 Configuration Analyst at Capital One, Plano TX : Responsible for Rational
development tools support over multiple environments consisting of ClearCase,
ClearCase Multisite & ClearQuest. Ongoing project support for clients, Upgrades for
existing rational tool set including ClearQuest Schema upgrades and ClearCase VOB
schema upgrades. ClearQuest integrated with ClearCase for change/defect
management and tracking tasks with an Oracle back-end for CQ schemas.
Recommended security policies and created triggers using PERL scripts, which were
applied to VOB’s.
 Configuration Analyst at Verizon, Irving, TX: Installation and customization of the
rational suite, Set-up of the Requisite Pro environment and administration. Integration
between ClearQuest and Test director, ClearCase UCM configuration and change
management tool. Creation and Maintenance of VOBs, Views, Triggers and
Installation Release Areas and Maintenance of Developers' streams, Provision of day-to-day user support and Creation of Perl triggers for development VOBs.

Tellabs Inc., Naperville, IL


1998 - 2002, Configuration Management Manager
 Developed plan and organizational processes to improve configuration management
within the enterprise to include the establishment of a change control board (CCB).
Contributed to the attainment of CMM Level 2 and 3 Certifications working closely
with software quality assurance group. Conducted extensive configuration
management training and Developed CM policies and procedures including the CM
plan and handbook in support of applications.
 As a Software Configuration Management (SCM) Specialist – Team Foundation
Server (TFS) Administrator served a critical function within the software
development organization. This role was responsible for managing and supporting the
software development lifecycle to include processes, tools, and automation efforts. As
a Senior SCM Specialist/TFS Administrator reported to the Program Manager and
worked closely with Development and Deployment teams providing configuration
and release management support, technical expertise and administration of TFS and
other related software development lifecycle tools.
Education
 Introduction to CMMI, Carnegie Mellon University.
 M.E (Elect Engg), Andhra University, India
 B.E (Elect Engg) Andhra University, India

Talks/Presentations/Publications/Projects
Some of my Talks and Presentations

Projects

Publications

Tutorial: Continuous Delivery in the Cloud, Part 1 of 6
We help companies deliver software reliably and repeatedly using Continuous Delivery in the
Cloud. With Continuous Delivery (CD), teams can deliver new versions of software to
production by flattening the software delivery process and decreasing the cycle time between
an idea and usable software through the automation of the entire delivery system: build,
deployment, test, and release. CD is enabled through a delivery pipeline. With CD, our
customers can choose when and how often to release to production. On top of this, we utilize
the cloud so that customers can scale their infrastructure up and down and deliver software to
users on demand.

We offer a solution called Elastic Operations, which provides a Continuous Delivery platform along with expert engineering support and monitoring of a delivery pipeline that builds, tests, provisions and deploys software to target environments – as often as our customers choose. We're in the process of open sourcing the platform utilized by Elastic Operations. In this six-part blog series, I am going to go over how we built out a Continuous Delivery solution for one of our customers, the Sea to Shore Alliance:

Part 1: Introduction – What you're reading now; Part 2: CD Pipeline – Automated pipeline to build, test, deploy, and release software continuously; Part 3: CloudFormation – Scripted virtual resource provisioning; Part 4: Dynamic Configuration – "Property file less" infrastructure; Part 5: Deployment Automation – Scripted deployment orchestration; Part 6: Infrastructure Automation – Scripted environment provisioning (Infrastructure Automation)

This year, we delivered this Continuous Delivery in the Cloud solution to the Sea to Shore
Alliance. The Sea to Shore Alliance is a non-profit organization whose mission is to protect
and conserve the world’s fragile coastal ecosystems and its endangered species such as
manatees, sea turtles, and right whales. One of their first software systems tracks and
monitors manatees. Prior to Stelligent‘s involvement, the application was running on a single
instance that was manually provisioned and deployed. As a result of the manual processes,
there were no automated tests for the infrastructure or deployment. This made it impossible to
reproduce environments or deployments the same way every time. Moreover, the knowledge to recreate these environments, builds and deployments was locked in the heads of a few key individuals. The production application for tracking these manatees, developed by Sarvatix,
is located here.

In this case study, I describe how we went from an untested manual process in which the
development team was manually building software artifacts, creating environments and
deploying, to a completely automated delivery pipeline that is triggered with every change.

Figure 1 illustrates the AWS architecture of the infrastructure that we designed for this
Continuous Delivery solution.

There are two CloudFormation stacks being used, the Jenkins stack – or Jenkins
environment – as shown on the left and the Manatee stack – or Target environment – as
shown on the right.

The Jenkins Stack

1. Creates the jenkins.example.com Route53 Hosted Zone
2. Creates an EC2 instance with Tomcat and Jenkins installed and configured on it
3. Runs the CD Pipeline

The Manatee stack is slightly different; it utilizes the configuration provided by SimpleDB to
create itself. This stack defines the target environment for which the application software is
deployed.

The Manatee Stack

1. Creates the manatee.example.com Route53 Hosted Zone
2. Creates an EC2 instance with Tomcat, Apache and PostgreSQL installed on it
3. Runs the Manatee application

The Manatee stack is configured with CPU alarms that send an email notification to the
developers/administrators when it becomes over-utilized. We’re in the process of scaling to
additional instances when these types of alarms are triggered.
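
As a rough sketch, an equivalent CPU alarm could be created with today's AWS CLI as shown below; the instance ID, threshold, and topic ARN are placeholders, and the actual alarm is defined in the Manatee CloudFormation stack:

aws cloudwatch put-metric-alarm \
  --alarm-name manatee-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:manatee-alerts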

Both instances are encapsulated behind a security group so that they can talk between each
other using the internal AWS network.

Fast Facts
Industry: Non-Profit
Profile: Customer tracks and monitors endangered species such as manatees.
Key Business Issues: The customer's development team needed to have unencumbered access to resources along with automated environment creation and deployment.
Stakeholders: Development team, scientists, and others from the Sea to Shore Alliance.
Solution: Continuous Delivery in the Cloud (Elastic Operations).
Key Tools/Technologies: AWS – Amazon Web Services (CloudFormation, EC2, S3, SimpleDB, IAM, CloudWatch, SNS), Jenkins, Capistrano, Puppet, Subversion, Cucumber, Liquibase.

The Business Problem

The customer needed an operations team that could be scaled up or
down depending on the application need. The customer’s main requirements were to have
unencumbered access to resources such as virtual hardware. Specifically, they wanted to have
the ability to create a target environment and run an automated deployment to it without
going to a separate team and submitting tickets, emails, etc. In addition to being able to create
environments, the customer wanted to have more control over the resources being used; they
wanted to have the ability to terminate resources if they were unused. To address these
requirements we introduced an entirely automated solution which utilizes the AWS cloud for
providing resources on-demand, along with other solutions for providing testing, environment
provisioning and deployment.

On the Manatee project, we have five key objectives for the delivery infrastructure. The
development team should be able to:

1. Deliver new software or updates to users on demand
2. Reprovision target environment configuration on demand
3. Provision environments on demand
4. Remove configuration bottlenecks
5. Allow users to terminate instances

Our Team

Stelligent's team consisted of an account manager and one poly-skilled DevOps Engineer who built, managed, and supported the Continuous Delivery pipeline.

Our Solution

Our solution is a single delivery pipeline that gives our customer (developers, testers, etc.) unencumbered access to resources and single-click automated deployment to production. To enable this, the pipeline needed to include:

1. The ability for any authorized team member to create a new target environment using a single click
2. Automated deployment to the target environment
3. End-to-end testing
4. The ability to terminate unnecessary environments
5. Automated deployment into production with a single click

The delivery pipeline improves efficiency and reduces costs by not limiting the development
team. The solution includes:

On-Demand Provisioning – All hardware is provided via EC2’s virtual instances in the
cloud, on demand. As part of the CD pipeline, any authorized team member can use the
Jenkins CreateTargetEnvironment job to order target environments for development work.

Continuous Delivery Solution so that the team can deliver software to users on demand:

 Dependency Management using Ivy (through Grails).
 Database Integration/Change using Liquibase
 Testing using Cucumber
 Custom Capistrano scripts for remote deployment.
 Continuous Integration server using Jenkins
 Continuous Delivery pipeline system – we customized Jenkins to build a delivery
pipeline

Development Infrastructure – Consists of:

 Tomcat: used for hosting the Manatee Application
 Apache: Hosted the front-end website and used virtual hosts for proxying and
redirection.
 PostgreSQL: Database for the Manatee application
 Groovy: the application is written in Grails which uses Groovy.

Instance Management – Any authorized team member is able to monitor virtual instance
usage by viewing Jenkins. There is a policy that test instances are automatically terminated
every two days. This promotes ephemeral environments and test automation.
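
A hedged sketch of how such a termination policy might be automated from a scheduled job, assuming test instances carry an identifying tag (the tag name and value are assumptions, not part of the actual setup):

#!/bin/bash
# Find running instances tagged as test environments and terminate them.
ids=$(aws ec2 describe-instances \
        --filters "Name=tag:Environment,Values=test" "Name=instance-state-name,Values=running" \
        --query "Reservations[].Instances[].InstanceId" --output text)
[ -n "$ids" ] && aws ec2 terminate-instances --instance-ids $ids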

Deployment to Production – There’s a boolean value (i.e. a checkbox the user selects) in the
delivery pipeline used for deciding whether to deploy to production.

System Monitoring and Disaster Recovery – Using the AWS CloudWatch service, AWS
provides us with detailed monitoring to notify us of instance errors or anomalies through
statistics such as CPU utilization, Network IO, Disk utilization, etc. Using these solutions
we’ve implemented an automated disaster recovery solution.

A list of the AWS tools we utilized is enumerated below.


Tool: AWS EC2 What is it? Cloud-based virtual hardware instances Our Use: We use EC2
for all of our virtual hardware needs. All instances, from development to production are run
on EC2

Tool: AWS S3 What is it? Cloud-based storage Our Use: We use S3 as both a binary
repository and a place to store successful build artifacts.

Tool:  AWS IAM What is it? User-based access to AWS resources Our Use: We create
users dynamically and use their AWS access and secret access keys so we don’t have to store
credentials as properties

Tool: AWS CloudWatch What is it? System monitoring Our Use: Monitors all instances in
production. If an instance takes an abnormal amount of strain or shuts down unexpectedly,
SNS sends an email to designated parties

Tool: AWS SNS What is it? Email notifications Our Use: When an environment is created
or a deployment is run, SNS is used to send notifications to affected parties.

Tool: Cucumber What is it? Acceptance testing Our Use: Cucumber is used for testing at
almost every step of the way. We use Cucumber to test infrastructure, deployments and
application code to ensure correct functionality. Cucumber's unique English-like verbiage allows both technical personnel and customers to communicate using an executable test.

Tool: Liquibase What is it? Automated database change management Our Use: Liquibase is used for all database changesets. When a change is necessary within the database, it is made to a Liquibase changelog.xml file.

Tool: AWS CloudFormation What is it? Templating language for orchestrating all AWS
resources Our Use: CloudFormation is used for creating a fully working Jenkins environment
and Target environment. For instance, for the Jenkins environment it creates the EC2 instance with CloudWatch monitoring alarms, an associated IAM user, an SNS notification topic – everything required for Jenkins to build. This, along with Jenkins itself, makes up the major pieces of the infrastructure.

Tool: AWS SimpleDB What is it? Cloud-based NoSQL database Our Use: SimpleDB is used for storing dynamic property configuration and passing properties through the CD Pipeline. As part of the environment creation process, we store multiple values such as IP addresses that we need when deploying the application to the created environment.

Tool: Jenkins What is it? We’re using Jenkins to implement a CD pipeline using the Build
Pipeline plugin. Our Use: Jenkins runs the CD pipeline which does the building, testing,
environment creation and deploying. Since the CD pipeline is also code (i.e. configuration
code), we version our Jenkins configuration.

Tool: Capistrano What is it? Deployment automation Our Use: Capistrano orchestrates and
automates deployments. Capistrano is a Ruby-based deployment DSL that can be used to
deploy to multiple platforms including Java, Ruby and PHP. It is called as part of the CD
pipeline and deploys to the target environment.
Tool: Puppet What is it? Infrastructure automation Our Use: Puppet takes care of the
environment provisioning. CloudFormation requests the environment and then calls Puppet to
do the dynamic configuration. We configured Puppet to install, configure, and manage the
packages, files and services.

Tool: Subversion What is it? Version control system Our Use: Subversion is the version
control repository where every piece of the Manatee infrastructure is stored. This includes the
environment scripts such as the Puppet modules, the CloudFormation templates, Capistrano
deployment scripts, etc.

We applied the on-demand usability of the cloud with a proven continuous delivery approach
to build an automated one click method for building and deploying software into scripted
production environments.

In the blog series, I will describe the technical implementation of how we went about
building this infrastructure into a complete solution for continuously delivering software.
This series will consist of the following:

Part 2 of 6 – CD Pipeline: I will go through the technical implementation of the CD pipeline using Jenkins. I will also cover Jenkins versioning, pulling and pushing artifacts from S3, and Continuous Integration.

Part 3 of 6 – CloudFormation: I will go through a CloudFormation template we're using to orchestrate the creation of AWS resources and to build the Jenkins and target infrastructure.

Part 4 of 6 – Dynamic Configuration: I will cover dynamic property configuration using SimpleDB.

Part 5 of 6 – Deployment Automation: I will explain Capistrano in detail along with how we used Capistrano to deploy build artifacts and run Liquibase database changesets against target environments.

Part 6 of 6 – Infrastructure Automation: I will describe the features of Puppet in detail along with how we're using Puppet to build and configure target environments to which the software is deployed.

Tutorial: Continuous Delivery in the Cloud, Part 2 of 6
In part 1 of this series, I introduced the Continuous Delivery (CD) pipeline for the Manatee
Tracking application and how we use this pipeline to deliver software from checkin to
production. In this article I will take an in-depth look at the CD pipeline. A list of topics for
each of the articles is summarized below.

Part 1: Introduction – Introduction to continuous delivery in the cloud and the rest of the
articles; Part 2: CD Pipeline – What you’re reading now; Part 3: CloudFormation – Scripted
virtual resource provisioning; Part 4: Dynamic Configuration – “Property file less”
infrastructure; Part 5: Deployment Automation – Scripted deployment orchestration; Part 6:
Infrastructure Automation – Scripted environment provisioning (Infrastructure Automation)

The CD pipeline consists of five Jenkins jobs. These jobs are configured to run one after the
other. If any one of the jobs fails, the pipeline fails and that release candidate cannot be
released to production. The five Jenkins jobs are listed below (further details of these jobs are
provided later in the article).

1. A job that sets the variables used throughout the pipeline (SetupVariables)
2. Build job (Build)
3. Production database update job (StoreLatestProductionData)
4. Target environment creation job (CreateTargetEnvironment)
5. A deployment job (DeployManateeApplication) which enables a one-click
deployment into production.

We used Jenkins plugins to add additional features to the core Jenkins configuration. You can
extend the standard Jenkins setup by using Jenkins plugins. A list of the plugins we use for
the Sea to Shore Alliance Continuous Delivery configuration are listed below.

 Grails: http://updates.jenkins-ci.org/download/plugins/grails/1.5/grails.hpi
 Groovy: http://updates.jenkins-ci.org/download/plugins/groovy/1.12/groovy.hpi
 Subversion: http://updates.jenkins-ci.org/download/plugins/subversion/1.40/subversion.hpi
 Parameterized Trigger: http://updates.jenkins-ci.org/download/plugins/parameterized-trigger/2.15/parameterized-trigger.hpi
 Copy Artifact: http://updates.jenkins-ci.org/download/plugins/copyartifact/1.21/copyartifact.hpi
 Build Pipeline: http://updates.jenkins-ci.org/download/plugins/build-pipeline-plugin/1.2.3/build-pipeline-plugin.hpi
 Ant: http://updates.jenkins-ci.org/download/plugins/ant/1.1/ant.hpi
 S3: http://updates.jenkins-ci.org/download/plugins/s3/0.2.0/s3.hpi

The parameterized trigger, build pipeline and S3 plugins are used for moving the application
through the pipeline jobs. The Ant, Groovy, and Grails plugins are used for running the build
for the application code. The Subversion plugin is used for polling and checking out from version control.
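
One hedged way to script installation of these plugins is the Jenkins CLI; the short names below are inferred from the .hpi file names above, and the server URL is a placeholder:

for plugin in grails groovy subversion parameterized-trigger copyartifact build-pipeline-plugin ant s3; do
  java -jar jenkins-cli.jar -s http://localhost:8080/jenkins install-plugin "$plugin"
done
java -jar jenkins-cli.jar -s http://localhost:8080/jenkins safe-restart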

Below, I describe each of the jobs that make up the CD pipeline in greater detail.

SetupVariables: Jenkins job used for entering in necessary property values which are
propagated along the rest of the pipeline.

Parameter: STACK_NAME

Type: String

Where: Used in both CreateTargetEnvironment and DeployManateeApplication jobs


Purpose: Defines the CloudFormation Stack name and SimpleDB property domain
associated with the CloudFormation stack.

Parameter: HOST

Type: String

Where: Used in both CreateTargetEnvironment and DeployManateeApplication jobs

Purpose: Defines the CNAME of the domain created in the CreateTargetEnvironment job.
The DeployManateeApplication job uses it when it dynamically creates configuration files.
For instance, in test.oneclickdeployment.com, test would be the HOST

Parameter: PRODUCTION_IP

Type: String

Where: Used in the StoreProductionData job

Purpose: Sets the production IP for the job so that it can SSH into the existing production environment and run a database script that exports the data and uploads it to S3.

Parameter: deployToProduction

Type: Boolean

Where: Used in both CreateTargetEnvironment and DeployManateeApplication jobs

Purpose: Determines whether to use the development or production SSH keypair.

In order for the parameters to propagate through the pipeline, we pass the current build parameters using the Parameterized Trigger plugin.

Build: Compiles the Manatee application’s Grails source code and creates a WAR file.

To do this, we utilize a Jenkins grails plugin and run grails targets such as compile and prod
war. Next, we archive the grails migrations for use in the DeployManateeApplication job
and then the job pushes the Manatee WAR up to S3 which is used as an artifact repository.

Lastly, using the Parameterized Trigger plugin, we trigger the StoreProductionData job with the current build parameters.
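
Roughly, the Build job boils down to shell steps like the following; the bucket, paths, and artifact names are illustrative, and the real job uses the Jenkins Grails and S3 plugins rather than the AWS CLI:

grails compile
grails prod war

# Archive the Grails database migrations for the DeployManateeApplication job
tar czf migrations.tar.gz grails-app/migrations/

# Push the WAR to the S3 artifact repository
aws s3 cp target/manatee.war s3://manatee-artifacts/builds/manatee-${BUILD_NUMBER}.war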

StoreProductionData: This job performs a pg_dump (PostgreSQL dump) of the production database and then stores it in S3 for the environment creation job to use when building up the environment. Below is a snippet from this job.

ssh -i /usr/share/tomcat6/development.pem -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no ec2-user@${PRODUCTION_IP} ruby /home/ec2-user/database_update.rb

On the target environments created using the CD pipeline, a database script is stored. The
script goes into the PostgreSQL database and runs a pg_dump. It then pushes the pg_dump
SQL file to S3 to be used when creating the target environment.
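
In shell terms, a script like database_update.rb amounts to something like the following sketch; the database name, user, and bucket are assumptions:

# Dump the production database and push the SQL file to S3
pg_dump -U manatee -f /tmp/production_dump.sql manatee_production
aws s3 cp /tmp/production_dump.sql s3://manatee-artifacts/db/production_dump.sql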

After the SQL file is stored successfully, the CreateTargetEnvironment job is triggered.

CreateTargetEnvironment: Creates a new target environment using a CloudFormation template to create all the AWS resources, and calls Puppet to provision the environment itself from a base operating system to a fully working target environment ready for deployment. Below is a snippet from this job.

if [ "$deployToProduction" = "true" ]; then
  SSH_KEY=development
else
  SSH_KEY=production
fi

# Create CloudFormation Stack
ruby ${WORKSPACE}/config/aws/create_stack.rb ${STACK_NAME} \
  ${WORKSPACE}/infrastructure/manatees/production.template \
  ${HOST} ${JENKINSIP} ${SSH_KEY} ${SGID} ${SNS_TOPIC}

# Load SimpleDB Domain with Key/Value Pairs
ruby ${WORKSPACE}/config/aws/load_domain.rb ${STACK_NAME}

# Pull and store variables from SimpleDB
host=`ruby ${WORKSPACE}/config/aws/showback_domain.rb ${STACK_NAME} InstanceIPAddress`

# Run Acceptance Tests
cucumber ${WORKSPACE}/infrastructure/manatees/features/production.feature \
  host=${host} user=ec2-user key=/usr/share/tomcat6/.ssh/id_rsa

# Publish notifications to SNS
sns-publish --topic-arn $SNS_TOPIC --subject "New Environment Ready" \
  --message "Your new environment is ready. IP Address: $host. An example command to ssh into the box would be: ssh -i development.pem ec2-user@$host This instance was created by $JENKINS_DOMAIN" \
  --aws-credential-file /usr/share/tomcat6/aws_access

Once the environment is created, a set of Cucumber tests is run to ensure it’s in the correct
working state. If any test fails, the entire pipeline fails and the developer is notified that something went wrong. Otherwise, if the tests pass, the DeployManateeApplication job is kicked
off and an AWS SNS email notification with information to access the new instance is sent to
the developer.

DeployManateeApplication: Runs a Capistrano script which uses steps in order to coordinate the deployment. A snippet from this job is displayed below.

if [ "$deployToProduction" != "true" ]; then
  SSH_KEY=/usr/share/tomcat6/development.pem
else
  SSH_KEY=/usr/share/tomcat6/production.pem
fi

#/usr/share/tomcat6/.ssh/id_rsa

cap deploy:setup stack=${STACK_NAME} key=${SSH_KEY}

sed -i "s@manatee0@${HOST}@" ${WORKSPACE}/deployment/features/deployment.feature

host=`ruby ${WORKSPACE}/config/aws/showback_domain.rb ${STACK_NAME} InstanceIPAddress`
cucumber deployment/features/deployment.feature host=${host} user=ec2-user key=${SSH_KEY} artifact=

sns-publish --topic-arn $SNS_TOPIC --subject "Manatee Application Deployed" \
  --message "Your Manatee Application has been deployed successfully. You can view it by going to http://$host/wildtracks This instance was deployed to by $JENKINS_DOMAIN" \
  --aws-credential-file /usr/share/tomcat6/aws_access

This deployment job is the final piece of the delivery pipeline; it pulls together all of the
pieces created in the previous jobs to successfully deliver working software.

During the deployment, the Capistrano script SSH’s into the target server, deploys the new
war and updated configuration changes and restarts all services. Then the Cucumber tests are
run to ensure the application is available and running successfully. Assuming the tests pass,
an AWS SNS email gets dispatched to the developer with information on how to access their new development application.
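
In rough shell terms, the remote steps the Capistrano deploy performs look something like the sketch below; the paths and service names are assumptions based on the packages installed by the CloudFormation template:

# Copy the new WAR to the target host and restart services
scp -i ${SSH_KEY} target/manatee.war ec2-user@${host}:/usr/share/tomcat6/webapps/wildtracks.war
ssh -i ${SSH_KEY} ec2-user@${host} "sudo service tomcat6 restart && sudo service httpd restart"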

We use Jenkins as the orchestrator of the pipeline. Jenkins executes a set of scripts and passes around parameters as it runs each job. Because of the role Jenkins plays, we want to make sure it's treated the same way as an application – meaning versioning and testing all of our changes to the system. For example, if a developer modifies the create-environment job configuration, we want to have the ability to revert back if necessary. Due to this requirement, we version the Jenkins configuration: the jobs, plugins and main configuration. To do this, a script is executed each hour using cron.hourly that checks for new jobs or updated configuration and commits them to version control.
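
A minimal sketch of what that hourly script (the jenkins_backup.sh referenced elsewhere in the CloudFormation template) might look like, assuming Subversion as the version control system; the Jenkins home path is an assumption:

#!/bin/bash
# Commit new or changed Jenkins jobs and configuration to version control.
cd /usr/share/tomcat6/.jenkins || exit 1
svn add --force *.xml jobs/*/config.xml > /dev/null 2>&1
svn commit -m "Hourly automated backup of Jenkins configuration"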

The CD pipeline that we have built for the Manatee application enables any change in the
application, infrastructure, database or configuration to move through to production
seamlessly using automation. This allows any new features, security fixes, etc. to be fully
tested as they are delivered to production at the click of a button.

In the next part of our series – which is all about using CloudFormation – we’ll go through a
CloudFormation template used to automate the creation of a Jenkins environment. In this next
article, you’ll see how CloudFormation procures AWS resources and provisions our Jenkins
CD Pipeline environment.

Tutorial: Continuous Delivery in the Cloud, Part 3 of 6
In part 1 of this series, I introduced the Continuous Delivery (CD) pipeline for the Manatee Tracking application. In part 2 I went over how we use this CD pipeline to deliver software
from checkin to production. A list of topics for each of the articles is summarized below.

Part 1: Introduction – introduction to continuous delivery in the cloud and the rest of the
articles; Part 2: CD Pipeline – In-depth look at the CD Pipeline Part 3: CloudFormation –
What you’re reading now
Part 4: Dynamic Configuration – “Property file less” infrastructure; Part 5: Deployment
Automation – Scripted deployment orchestration; Part 6: Infrastructure Automation –
Scripted environment provisioning (Infrastructure Automation)
In this part of the series, I am going to explain how we use CloudFormation to script our
AWS infrastructure and provision our Jenkins environment.

What is CloudFormation? CloudFormation is an AWS offering for scripting AWS virtual resource allocation. A CloudFormation template is a JSON script which references various
AWS resources that you want to use. When the template runs, it will allocate the AWS
resources accordingly.

A CloudFormation template is split up into four sections:

1. Parameters: Parameters are values that you define in the template. When creating the
stack through the AWS console, you will be prompted to enter in values for the
Parameters. If the value for the parameter generally stays the same, you can set a
default value. Default values can be overridden when creating the stack. The
parameter can be used throughout the template by using the “Ref” function.
2. Mappings: Mappings are for specifying conditional parameter values in your
template. For instance you might want to use a different AMI depending on the region
your instance is running on. Mappings will enable you to switch AMIs depending on
the region the instance is being created in.
3. Resources: Resources are the most vital part of the CloudFormation template. Inside
the resource section, you define and configure your AWS components.
4. Outputs: After the stack resources are created successfully, you may want to have it
return values such as the IP address or the domain of the created instance. You use
Outputs for this. Outputs will return the values to the AWS console or command line
depending on which medium you use for creating a stack.

CloudFormation parameters and resources can be referenced throughout the template. You do this using intrinsic functions: Ref, Fn::Base64, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs and Fn::Join.

These functions enable you to pass properties and resource outputs throughout your template
– reducing the need for most hardcoded properties (something I will discuss in part 4 of this
series, Dynamic Configuration).

How do you run a CloudFormation template? You can create a CloudFormation stack
using either the AWS Console, CloudFormation CLI tools or the CloudFormation API.
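
For example, with today's AWS CLI (the original series used the CloudFormation command line tools of that era), the Jenkins stack could be launched like this; the parameter values are placeholders:

aws cloudformation create-stack \
  --stack-name jenkins \
  --template-body file://jenkins.template \
  --parameters ParameterKey=Email,ParameterValue=team@example.com \
               ParameterKey=HostedZone,ParameterValue=integratebutton.com \
               ParameterKey=KeyName,ParameterValue=jenkins-key \
  --capabilities CAPABILITY_IAM

The CAPABILITY_IAM flag is needed because the template creates an IAM user.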

Why do we use CloudFormation? We use CloudFormation in order to have a fully scripted, versioned infrastructure. From the application to the virtual resources, everything is created
from a script and is checked into version control. This gives us complete control over our
AWS infrastructure which can be recreated whenever necessary.

CloudFormation for Manatees In the Manatee Infrastructure, we use CloudFormation for setting up the Jenkins CD environment. I am going to go through each part of the jenkins.template and explain its use and purpose. In the template's lifecycle, the user launches the stack using the jenkins.template and enters in the Parameters. The template then starts to work:

1. IAM User with AWS Access keys is created
2. SNS Topic is created
3. CloudWatch Alarm is created and the SNS topic is used for sending alarm notifications
4. Security Group is created
5. Wait Condition is created
6. Jenkins EC2 Instance is created with the Security Group from step 4. This security group is used for port configuration. It also uses AWSInstanceType2Arch and AWSRegionArch2AMI to decide what AMI and OS type to use
7. Jenkins EC2 Instance runs the UserData script and executes cfn_init
8. Wait Condition waits for the Jenkins EC2 instance to finish the UserData script
9. Elastic IP is allocated and associated with the Jenkins EC2 instance
10. Route53 domain name is created and associated with the Jenkins Elastic IP
11. If everything creates successfully, the stack signals complete and the outputs are displayed

Now that we know at a high level what is being done, let's take a deeper look at what's going on inside the jenkins.template.

Parameters
 Email: Email address that SNS notifications will be sent. When we create or deploy
to target environments, we use SNS to notify us of their status.
 ApplicationName: Name of A Record created by Route53. Inside the template, we
dynamically create a domain with A record for easy access to the instance after
creation. Example: jenkins.integratebutton.com, jenkins is the ApplicationName
 HostedZone: Name of the domain used by Route53. Inside the template, we dynamically
create a domain with A record for easy access to the instance after creation. Example:
jenkins.integratebutton.com, integratebutton.com is the HostedZone.
 KeyName: EC2 SSH Keypair to create the Instance with. This is the key you use to
ssh into the Jenkins instance after creation.
 InstanceType: Size of the EC2 instance. Example: t1.micro, c1.medium
 S3Bucket: We use an S3 bucket to contain the resources for the Jenkins template to use; this parameter specifies the name of that bucket.

Mappings
"Mappings" : {
  "AWSInstanceType2Arch" : {
    "t1.micro"    : { "Arch" : "64" },
    "m1.small"    : { "Arch" : "32" },
    "m1.large"    : { "Arch" : "64" },
    "m1.xlarge"   : { "Arch" : "64" },
    "m2.xlarge"   : { "Arch" : "64" },
    "m2.2xlarge"  : { "Arch" : "64" },
    "m2.4xlarge"  : { "Arch" : "64" },
    "c1.medium"   : { "Arch" : "64" },
    "c1.xlarge"   : { "Arch" : "64" },
    "cc1.4xlarge" : { "Arch" : "64" }
  },
  "AWSRegionArch2AMI" : {
    "us-east-1" : { "32" : "ami-ed65ba84", "64" : "ami-e565ba8c" }
  }
},
These Mappings are used to define what operating system architecture and AWS AMI (Amazon Machine Image) ID to use based upon the instance size. The instance size is specified using the InstanceType parameter.

The conditional logic to interact with the Mappings is done inside the EC2 instance.

"ImageId" : {
  "Fn::FindInMap" : [
    "AWSRegionArch2AMI",
    { "Ref" : "AWS::Region" },
    { "Fn::FindInMap" : [
      "AWSInstanceType2Arch",
      { "Ref" : "InstanceType" },
      "Arch"
    ] }
  ]
},

Resources
AWS::IAM::User

"CfnUser" : {
  "Type" : "AWS::IAM::User",
  "Properties" : {
    "Path": "/",
    "Policies": [{
      "PolicyName": "root",
      "PolicyDocument": { "Statement":[{
        "Effect":"Allow",
        "Action":"*",
        "Resource":"*"
      }]}
    }]
  }
},

"Type" : "AWS::IAM::AccessKey",
"Properties" : {
  "UserName" : { "Ref": "CfnUser" }
}

We create the AWS IAM user and then create the AWS Access and Secret access keys for
the IAM user which are used throughout the rest of the template. Access and Secret access
keys are authentication keys used to authenticate to the AWS account.

AWS::SNS::Topic

"MySNSTopic" : {
  "Type" : "AWS::SNS::Topic",
  "Properties" : {
    "Subscription" : [ {
      "Endpoint" : { "Ref": "Email" },
      "Protocol" : "email"
    } ]
  }
},

SNS is a highly available solution for sending notifications. In the Manatee infrastructure it is
used for sending notifications to the development team.

AWS::Route53::RecordSetGroup

"JenkinsDNS" : {
  "Type" : "AWS::Route53::RecordSetGroup",
  "Properties" : {
    "HostedZoneName" : { "Fn::Join" : [ "", [ {"Ref" : "HostedZone"}, "." ]]},
    "RecordSets" : [{
      "Name" : { "Fn::Join" : ["", [ { "Ref" : "ApplicationName" }, ".", { "Ref" : "HostedZone" }, "." ]]},
      "Type" : "A",
      "TTL" : "900",
      "ResourceRecords" : [ { "Ref" : "IPAddress" } ]
    }]
  }
},

Route53 is a highly available DNS service. We use Route53 to create domains dynamically
using the given HostedZone and ApplicationName parameters. If the parameters are not
overridden, the domain jenkins.integratebutton.com will be created. We then reference the
Elastic IP and associate it with the created domain. This way the jenkins.integratebutton.com
domain will route to the created instance.

AWS::EC2::Instance

EC2 gives access to on-demand compute resources. In this template, we allocate a new EC2
instance and configure it with a Keypair, Security Group, and Image ID (AMI). Then for
provisioning the EC2 instance we use the UserData property. Inside UserData we run a set
of bash commands along with cfn_init. The UserData script is run during instance
creation.

1 "WebServer": {
2   "Type": "AWS::EC2::Instance",
3   "Metadata" : {
4     "AWS::CloudFormation::Init" : {
5       "config" : {
6         "packages" : {
7           "yum" : {
8             "tomcat6" : [],
9             "subversion" : [],
10             "git" : [],
11             "gcc" : [],
12             "libxslt-devel" : [],
13             "ruby-devel" : [],
14             "httpd" : []
15           }
16         },
17
18         “sources” : {
19           “/opt/aws/apitools/cfn” : { “Fn::Join” : [“”,
20 [“https://s3.amazonaws.com/”, { “Ref” : “S3Bucket” },
21 “/resources/aws_tools/cfn-cli.tar.gz”]]},
22           “/opt/aws/apitools/sns” : { “Fn::Join” : [“”,
23 [“https://s3.amazonaws.com/”, { “Ref” : “S3Bucket” },
24 “/resources/aws_tools/sns-cli.tar.gz”]]}
25         },
26
27         “files” : {
28           “/usr/share/tomcat6/webapps/jenkins.war” : {
29             “source” : “http://mirrors.jenkins-
30 ci.org/war/1.480/jenkins.war”,
31             “mode” : “000700”,
32             “owner” : “tomcat”,
33             “group” : “tomcat”,
34             “authentication” : “S3AccessCreds”
35           },
36
37           “/usr/share/tomcat6/webapps/nexus.war” : {
38             “source” : “http://www.sonatype.org/downloads/nexus-
39 2.0.3.war”,
40             “mode” : “000700”,
41             “owner” : “tomcat”,
42             “group” : “tomcat”,
43             “authentication” : “S3AccessCreds”
44           },
45
46           “/usr/share/tomcat6/.ssh/id_rsa” : {
47             “source” : { “Fn::Join” : [“”, [“https://s3.amazonaws.com/”,
48 { “Ref” : “S3Bucket” }, “/private/id_rsa”]]},
49             “mode” : “000600”,
50             “owner” : “tomcat”,
51             “group” : “tomcat”,
52             “authentication” : “S3AccessCreds”
53           },
54
55           “/home/ec2-user/common-step-definitions-1.0.0.gem” : {
56             “source” : { “Fn::Join” : [“”,[“https://s3.amazonaws.com/”,
57 { “Ref” : “S3Bucket” }, “/gems/common-step-definitions-1.0.0.gem”]]},
58             “mode” : “000700”,
59             “owner” : “root”,
60             “group” : “root”,
61             “authentication” : “S3AccessCreds”
62           },
63
64           “/etc/cron.hourly/jenkins_backup.sh” : {
65             “source” : { “Fn::Join” : [“”, [“https://s3.amazonaws.com/”,
66 { “Ref” : “S3Bucket” }, “/jenkins_backup.sh”]]},
67             “mode” : “000500”,
68             “owner” : “root”,
69             “group” : “root”,
70             “authentication” : “S3AccessCreds”
71           },
72
73           “/etc/tomcat6/server.xml” : {
74             “source” : { “Fn::Join” : [“”, [“https://s3.amazonaws.com/”,
75 { “Ref” : “S3Bucket” }, “/server.xml”]]},
76             “mode” : “000554”,
77             “owner” : “root”,
78             “group” : “root”,
79             “authentication” : “S3AccessCreds”
80           },
81
82           “/usr/share/tomcat6/aws_access” : {
83             “content” : { “Fn::Join” : [“”, [
84               “AWSAccessKeyId=”, { “Ref” : “HostKeys” }, “n”,
85               “AWSSecretKey=”, {“Fn::GetAtt”: [“HostKeys”,
86 “SecretAccessKey”]}
87             ]]},
88             “mode” : “000400”,
89             “owner” : “tomcat”,
90             “group” : “tomcat”,
91             “authentication” : “S3AccessCreds”
92           },
93
94           “/opt/aws/aws.config” : {
95             “content” : { “Fn::Join” : [“”, [
96               “AWS.config(n”,
97               “:access_key_id => ”“, {”Ref" : “HostKeys” }, “”,n",
98               “:secret_access_key => ”“, {”Fn::GetAtt“:
99 [”HostKeys“,”SecretAccessKey“]},”“)n”
10             ]]},
0             “mode” : “000500”,
10             “owner” : “tomcat”,
1             “group” : “tomcat”
10           },
2
10           “/etc/httpd/conf/httpd.conf2” : {
3             “content” : { “Fn::Join” : [“”, [
10               “NameVirtualHost *:80n”,
4               “n”,
10               “ProxyPass /jenkins http://”, { “Fn::Join” : [“”, [{ “Ref”
5 : “ApplicationName” }, “.”, { “Ref” : “HostedZone” }]] },
10 “:8080/jenkinsn”,
6               “ProxyPassReverse /jenkins http://”, { “Fn::Join” : [“”,[{
10 “Ref” : “ApplicationName” }, “.”, { “Ref” :
7 “HostedZone” }]] },“:8080/jenkinsn”,
10               “ProxyRequests Offn”,
8               “n”,
10               “Order deny,allown”,
9               “Allow from alln”,
11               “n”,
0               “RewriteEngine Onn”,
11               “RewriteRule ^/$ http://”, { “Fn::Join” : [“”, [{ “Ref” :
1 “ApplicationName” }, “.”, { “Ref” : “HostedZone” }]] },“:8080/jenkins$1
11 [NC,P]n”, “”
2             ]]},
11             “mode” : “000544”,
3             “owner” : “root”,
11             “group” : “root”
4           },
11
5           “/root/.ssh/config” : {
11             “content” : { “Fn::Join” : [“”, [
6               “Host github.comn”,
11               “StrictHostKeyChecking non”
7             ]]},
11             “mode” : “000600”,
8             “owner” : “root”,
11             “group” : “root”
9           },
12
0           “/usr/share/tomcat6/.route53” : {
12             “content” : { “Fn::Join” : [“”, [
1               “access_key:”, { “Ref” : “HostKeys” }, “n”,
12               “secret_key:”, {“Fn::GetAtt”:
2 [“HostKeys”,“SecretAccessKey”]}, “n”,
12               “api: ‘2012-02-29’n”,
3               “endpoint: https://route53.amazonaws.com/n”,
12               “default_ttl: ‘3600’”
4             ]]},
12             “mode” : “000700”,
5             “owner” : “tomcat”,
12             “group” : “tomcat”
6           }
12         }
7       }
12     },
8
12     “AWS::CloudFormation::Authentication” : {
9       “S3AccessCreds” : {
13         “type” : “S3”,
0         “accessKeyId” : { “Ref” : “HostKeys” },
13         “secretKey” : {“Fn::GetAtt”: [“HostKeys”, “SecretAccessKey”]},
1         “buckets” : [ { “Ref” : “S3Bucket”} ]
13       }
2     }
13   },
3
13   “Properties”: {
4     “ImageId” : { “Fn::FindInMap” : [ “AWSRegionArch2AMI”, { “Ref” :
13 “AWS::Region” }, { “Fn::FindInMap” : [ “AWSInstanceType2Arch”, { “Ref” :
5 “InstanceType” }, “Arch” ] } ] },
13     “InstanceType” : { “Ref” : “InstanceType” },
6     “SecurityGroups” : [ {“Ref” : “FrontendGroup”} ],
13     “KeyName” : { “Ref” : “KeyName” },
7     “Tags”: [ { “Key”: “Name”, “Value”: “Jenkins” } ],
13     “UserData” : { “Fn::Base64” : { “Fn::Join” : [“”, [
8       “#!/bin/bash -vn”,
13       “yum -y install java-1.6.0-openjdk*n”,
9       “yum update -y aws-cfn-bootstrapn”,
14
0       “# Install packagesn”,
14       “/opt/aws/bin/cfn-init -s”, { “Ref” : “AWS::StackName” }, " -r
1 WebServer ",
14       " –access-key “, {”Ref" : “HostKeys” },
2       " –secret-key “, {”Fn::GetAtt“: [”HostKeys“,”SecretAccessKey"]},
14       " –region “, {”Ref" : “AWS::Region” }, " || error_exit ‘Failed to
3 run cfn-init’n",
14
4       “# Copy Github credentials to root ssh directoryn”,
14       “cp /usr/share/tomcat6/.ssh/* /root/.ssh/n”,
5
14       “# Installing Ruby 1.9.3 from RPMn”,
6       “wget -P /home/ec2-user/ https://s3.amazonaws.com/”, { “Ref” :
14 “S3Bucket” }, “/resources/rpm/ruby-1.9.3p0-2.amzn1.x86_64.rpmn”,
7       “rpm -Uvh /home/ec2-user/ruby-1.9.3p0-2.amzn1.x86_64.rpmn”,
14
8       “cat /etc/httpd/conf/httpd.conf2 >> /etc/httpd/conf/httpd.confn”,
14
9       “# Install S3 Gemsn”,
15       “gem install /home/ec2-user/common-step-definitions-1.0.0.gemn”,
0
15       “# Install Public Gemsn”,
1       “gem install bundler –version 1.1.4 –no-rdoc –no-rin”,
15       “gem install aws-sdk –version 1.5.6 –no-rdoc –no-rin”,
2       “gem install cucumber –version 1.2.1 –no-rdoc –no-rin”,
15       “gem install net-ssh –version 2.5.2 –no-rdoc –no-rin”,
3       “gem install capistrano –version 2.12.0 –no-rdoc –no-rin”,
15       “gem install route53 –version 0.2.1 –no-rdoc –no-rin”,
4       “gem install rspec –version 2.10.0 –no-rdoc –no-rin”,
15       “gem install trollop –version 2.0 –no-rdoc –no-rin”,
5
15       “# Update Jenkins with versioned configurationn”,
6       “rm -rf /usr/share/tomcat6/.jenkinsn”,
15       “git clone
7 git@github.com:stelligent/continuous_delivery_open_platform_jenkins_confi
15 guration.git /usr/share/tomcat6/.jenkinsn”,
8
15       “# Get S3 bucket publisher from S3n”,
9       “wget -P /usr/share/tomcat6/.jenkins/ https://s3.amazonaws.com/”,
16 { “Ref” : “S3Bucket” }, “/hudson.plugins.s3.S3BucketPublisher.xmln”,
0
16       “wget -P /tmp/
1 https://raw.github.com/stelligent/continuous_delivery_open_platform/maste
16 r/config/aws/cd_security_group.rbn”,
2       “ruby /tmp/cd_security_group –securityGroupName”, { “Ref” :
16 “FrontendGroup” }, " –port 5432n",
3
16       “# Update main Jenkins confign”,
4       “sed -i ’s@.*@”, { “Ref” : “HostKeys” }, “@’
16 /usr/share/tomcat6/.jenkins/hudson.plugins.s3.S3BucketPublisher.xmln”,
5       “sed -i ‘s@.*@“, {”Fn::GetAtt“:
16 [”HostKeys“,”SecretAccessKey“]},”@’
6 /usr/share/tomcat6/.jenkins/hudson.plugins.s3.S3BucketPublisher.xmln”,
16
7       “# Add AWS Credentials to Tomcatn”,
16       “echo ”AWS_ACCESS_KEY=“, {”Ref" : “HostKeys” }, “” >>
8 /etc/sysconfig/tomcat6n",
16       “echo ”AWS_SECRET_ACCESS_KEY=“, {”Fn::GetAtt“:
9 [”HostKeys“,”SecretAccessKey“]},”" >> /etc/sysconfig/tomcat6n",
17
0       “# Add AWS CLI Toolsn”,
17       “echo ”export AWS_CLOUDFORMATION_HOME=/opt/aws/apitools/cfn" >>
1 /etc/sysconfig/tomcat6n",
17       “echo ”export AWS_SNS_HOME=/opt/aws/apitools/sns" >>
2 /etc/sysconfig/tomcat6n",
17       “echo ”export
3 PATH=$PATH:/opt/aws/apitools/sns/bin:/opt/aws/apitools/cfn/bin" >>
17 /etc/sysconfig/tomcat6n",
4
17       “# Add Jenkins Environment Variablen”,
5       “echo ”export SNS_TOPIC=“, {”Ref" : “MySNSTopic” }, “” >>
17 /etc/sysconfig/tomcat6n",
6       “echo ”export JENKINS_DOMAIN=“, {”Fn::Join" : [“”, [“http://”,
17 { “Ref” : “ApplicationName” }, “.”, { “Ref” : “HostedZone” }]] }, “”>>
7 /etc/sysconfig/tomcat6n",
17       “echo ”export JENKINS_ENVIRONMENT=“, {”Ref" : “ApplicationName” },
8 “” >> /etc/sysconfig/tomcat6n",
17
9       “wget -P /tmp/
18 https://raw.github.com/stelligent/continuous_delivery_open_platform/maste
0 r/config/aws/showback_domain.rbn”,
18       “echo ”export SGID=`ruby /tmp/showback_domain.rb –item properties
1 –key SGID`" >> /etc/sysconfig/tomcat6n",
18
2       “chown -R tomcat:tomcat /usr/share/tomcat6/n”,
18       “chmod +x /usr/share/tomcat6/scripts/aws/*n”,
3       “chmod +x /opt/aws/apitools/cfn/bin/*n”,
18
4       “service tomcat6 restartn”,
18       “service httpd restartn”,
5
18       “/opt/aws/bin/cfn-signal”, " -e 0“,” ’“, {”Ref" :
6 “WaitHandle” },“’”
18     ]]}}
7   }
18 },
8
18
9
19
0
19
1
19
2
19
3
19
4
19
5
19
6
19
7
19
8
19
9
20
0
20
1
20
2
20
3
20
4
20
5
20
6
20
7
20
8
20
9
21
0
21
1
21
2
21
3
21
4
21
5
21
6
21
7
21
8
21
9
22
0
22
1
22
2
22
3
22
4
22
5
22
6
22
7
22
8
22
9

Calling cfn_init from UserData

"# Install packagesn",


1 "/opt/aws/bin/cfn-init -s ", { "Ref" : "AWS::StackName" }, " -r
2 WebServer ",
3 " --access-key ", { "Ref" : "HostKeys" },
4 " --secret-key ", {"Fn::GetAtt": ["HostKeys", "SecretAccessKey"]},
5 " --region ", { "Ref" : "AWS::Region" }, " || error_exit 'Failed to run
6 cfn-init'n",
},

cfn_init is used to retrieve and interpret the resource metadata, install packages, create
files and start services. In the Manatee template we use cfn_init for easy access to other
AWS resources, such as S3.

1 "/etc/tomcat6/server.xml" : {
2   "source" : { "Fn::Join" : ["", ["https://s3.amazonaws.com/", { "Ref" :
3 "S3Bucket" }, "/server.xml"]]},
4   "mode" : "000554",
5   "owner" : "root",
6   "group" : "root",
7   "authentication" : "S3AccessCreds" },
8
"AWS::CloudFormation::Authentication" : {
9
  "S3AccessCreds" : {
10
    "type" : "S3",
11
    "accessKeyId" : { "Ref" : "HostKeys" },
12
    "secretKey" : {"Fn::GetAtt": ["HostKeys", "SecretAccessKey"]},
13
    "buckets" : [ { "Ref" : "S3Bucket"} ]
14
  }
15
}

When possible, we try to use cfn_init rather than UserData bash commands because it
stores a detailed log of Cfn events on the instance.
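
When something a cfn_init config declares fails to materialize, that on-instance log is usually the quickest way to find out why. The paths below are the standard cfn-bootstrap and cloud-init locations on Amazon Linux:

# Inspect what cfn-init did (and where it failed) on the instance itself.
less /var/log/cfn-init.log

# UserData output, including the cfn-init invocation, ends up in the cloud-init log.
less /var/log/cloud-init.log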

AWS::EC2::SecurityGroup

When creating a Jenkins instance, we only want certain ports to be open, and only open to
certain users. For this we use security groups. Security groups are firewall rules defined at
the AWS level. You can use them to specify which ports, or ranges of ports, should be open.
In addition to defining which ports are open, you can define who they are open to using
CIDR blocks.

"FrontendGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",   
1
"Properties" : {
2
    "GroupDescription" : "Enable SSH and access to Apache and Tomcat",
3
    "SecurityGroupIngress" : [
4
      {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22",
5
"CidrIp" : "0.0.0.0/0"},
6
      {"IpProtocol" : "tcp", "FromPort" : "8080", "ToPort" : "8080",
7
"CidrIp" : "0.0.0.0/0"},
8
      {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80",
9
"CidrIp" : "0.0.0.0/0"}
10
    ]
11
  }
},

In this security group we are opening ports 22, 80 and 8080. Since we are opening 8080, we
are able to access Jenkins once the template completes. By default, all ports on an instance
are closed, so these rules must be specified in order to reach Jenkins.

AWS::EC2::EIP

When an instance is created, it is given a public DNS name similar to ec2-107-20-139-
148.compute-1.amazonaws.com. By using an Elastic IP, you can associate a static IP address
with your instance rather than relying on the generated DNS name.

1 "IPAddress" : {   "Type" : "AWS::EC2::EIP"


2 },
3
4 “IPAssoc” : {
5   “Type” : “AWS::EC2::EIPAssociation”,
6   “Properties” : {
7     “InstanceId” : { “Ref” : “WebServer” },
8     “EIP” : { “Ref” : “IPAddress” }
9   }
10 },
In the snippets above, we create a new Elastic IP and then associate it with the EC2 instance
created above. We do this so we can reference the Elastic IP when creating the Route53
Domain name.

AWS::CloudWatch::Alarm

1 "CPUAlarmLow": {
2   "Type": "AWS::CloudWatch::Alarm",
3   "Properties": {
4     "AlarmDescription": "Scale-down if CPU < 70% for 10 minutes",
5     "MetricName": "CPUUtilization",     "Namespace": "AWS/EC2",
6     "Statistic": "Average",     "Period": "300",
7     "EvaluationPeriods": "2",
8     "Threshold": "70",
9     "AlarmActions": [ { "Ref": "SNSTopic" } ],
10     "Dimensions": [{
11       "Name": "WebServerName",
12       "Value": { "Ref": "WebServer" }
13     }],
14     "ComparisonOperator": "LessThanThreshold"
15   }
16 },

There are many reasons an instance can become unavailable. CloudWatch is used to monitor
instance usage and performance. CloudWatch can be set to notify specified individuals if the
instance experiences higher than normal CPU utilization, disk usage, network usage, etc. In
the Manatee infrastructure we use CloudWatch to monitor disk utilization and notify team
members if it reaches 90 percent.

If the Jenkins instance goes down, our CD pipeline becomes temporarily unavailable. This
presents a problem, as the development team is blocked from testing their code.
CloudWatch helps notify us if this is an impending problem.

AWS::CloudFormation::WaitConditionHandle,
AWS::CloudFormation::WaitCondition

Wait conditions are used to wait for all of the resources in a template to be completed before
signaling template success.

1 "WaitHandle" : {
2   "Type" : "AWS::CloudFormation::WaitConditionHandle"
3 },
4
5 “WaitCondition” : {
6   “Type” : “AWS::CloudFormation::WaitCondition”,
7   “DependsOn” : “WebServer”,
8   “Properties” : {
9     “Handle” : { “Ref” : “WaitHandle” },
10     “Timeout” : “990”
11   }
12 }

When creating the instance, if a wait condition is not used, CloudFormation won’t wait for
the completion of the UserData script. It will signal success if the EC2 instance is allocated
successfully rather than waiting for the UserData script to run and signal success.
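
The UserData script shown earlier ends by calling cfn-signal against the wait condition handle. A minimal bash sketch of that pattern, with the failure branch spelled out for illustration (the handle value is a placeholder for the presigned URL passed in via { "Ref" : "WaitHandle" }):

# Placeholder for the wait condition handle URL.
WAIT_HANDLE='<wait condition handle URL>'

error_exit() {
  # Report failure (non-zero code) and the reason, then stop provisioning.
  /opt/aws/bin/cfn-signal -e 1 -r "$1" "$WAIT_HANDLE"
  exit 1
}

# ... provisioning commands, each followed by `|| error_exit '<reason>'` ...

# Report success so the WaitCondition, and therefore the stack, can complete.
/opt/aws/bin/cfn-signal -e 0 "$WAIT_HANDLE"
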
Outputs
Outputs are used to return information about the resources created during the CloudFormation
stack creation to the user. In order to return values, you define the Output name and then the
resource you want to reference:

"Outputs" : {
1   "Domain" : {
2     "Value" : { "Fn::Join" : ["", ["http://", { "Ref" :
3 "ApplicationName" }, ".", { "Ref" : "HostedZone" }]]
4 },
5     "Description" : "URL for newly created Jenkins app"   
6 },
7   "NexusURL" : {
8     "Value" : { "Fn::Join" : ["", ["http://", { "Ref" : "IPAddress" },
9 ":8080/nexus"]] },
10     "Description" : "URL for newly created Nexus repository"
11   },
12   "InstanceIPAddress" : {
13     "Value" : { "Ref" : "IPAddress" }
14 }
}

For instance, with InstanceIPAddress we are referencing the IPAddress resource, which
happens to be the Elastic IP. This returns the Elastic IP address to the CloudFormation
console.
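
The same outputs can also be read programmatically. A hedged sketch using the AWS CLI (the stack name is a placeholder):

# Print the outputs of the stack, including InstanceIPAddress.
aws cloudformation describe-stacks \
  --stack-name jenkins \
  --query 'Stacks[0].Outputs'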

CloudFormation allows us to completely script and version our infrastructure. This enables
our infrastructure to be recreated the same way every time by just running the
CloudFormation template. Because of this, your environments can be run in a Continuous
integration cycle, rebuilding with every change in the script.

In the next part of our series – which is all about Dynamic Configuration – we’ll go through
building your infrastructure to only require a minimal amount of hard coded properties if any.
In this next article, you’ll see how you can use CloudFormation to build “property file less”
infrastructure.

Resources:

 http://aws.amazon.com/cloudformation/
 http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-
template-resource-type-ref.html

Tutorial : Continuous Delivery in the Cloud


Part 4 of 6
In part 1 of this series, I introduced the Continuous Delivery (CD) pipeline for the Manatee
Tracking application. In part 2 I went over how we use this CD pipeline to deliver software
from checkin to production. In part 3, we focused on how CloudFormation is used to script
the virtual AWS components that create the Manatee infrastructure. A list of topics for each
of the articles is summarized below:

Part 1: Introduction – Introduction to continuous delivery in the cloud and the rest of the
articles; Part 2: CD Pipeline – In-depth look at the CD Pipeline; Part 3: CloudFormation –
Scripted virtual resource provisioning; Part 4: Dynamic Configuration –  What you’re reading
now; Part 5: Deployment Automation – Scripted deployment orchestration; Part 6:
Infrastructure Automation – Scripted environment provisioning (Infrastructure Automation)

In this part of the series, I am going to explain how we dynamically generate our
configuration and avoid property files whenever possible. Instead of using property files, we
store and retrieve configuration on the fly – as part of the CD pipeline – without predefining
these values in a static file (i.e. a properties file) ahead of time. We do this using two
methods: AWS SimpleDB and CloudFormation.

SimpleDB is a highly available non-relational data storage service that only stores strings in
key value pairs. CloudFormation, as discussed in Part 3 of the series, is a scripting language
for allocating and configuring AWS virtual resources.

Using SimpleDB

Throughout the CD pipeline, we often need to manage state across multiple Jenkins jobs. To
do this, we use SimpleDB. As the pipeline executes, values that will be needed by subsequent
jobs get stored in SimpleDB as properties. When the properties are needed, we use a simple
Ruby script to return the key/value pair from SimpleDB and then use it as part of the
job. The values being stored and retrieved range from IP addresses and domain names to
AMI (Amazon Machine Image) IDs.

So what makes this dynamic? As Jenkins jobs or CloudFormation templates are run, we often
end up with properties that need to be used elsewhere. Instead of hard coding all of the values
to be used in a property file, we create, store and retrieve them as the pipeline executes.

Below is the CreateTargetEnvironment Jenkins job script that creates a new target
environment from the CloudFormation template production.template.

if [ "$deployToProduction" == "true" ]
then
  SSH_KEY=production
else
  SSH_KEY=development
fi

# Create CloudFormation Stack
ruby /usr/share/tomcat6/scripts/aws/create_stack.rb ${STACK_NAME} ${WORKSPACE}/production.template ${HOST} ${JENKINSIP} ${SSH_KEY} ${SGID} ${SNS_TOPIC}

# Load SimpleDB Domain with Key/Value Pairs
ruby /usr/share/tomcat6/scripts/aws/load_domain.rb ${STACK_NAME}

# Pull and store variables from SimpleDB
host=`ruby /usr/share/tomcat6/scripts/aws/showback_domain.rb ${STACK_NAME} InstanceIPAddress`

# Run Acceptance Tests
cucumber features/production.feature host=${host} user=ec2-user key=/usr/share/tomcat6/.ssh/id_rsa

Referenced in the CreateTargetEnvironment snippet above, the load_domain.rb script
iterates over a file and sends key/value pairs to SimpleDB.

require 'rubygems'
require 'aws-sdk'
load File.expand_path('../../config/aws.config', __FILE__)

stackname=ARGV[0]

file = File.open("/tmp/properties", "r")

sdb = AWS::SimpleDB.new

AWS::SimpleDB.consistent_reads do
  domain = sdb.domains["stacks"]
  item = domain.items["#{stackname}"]

  file.each_line do |line|
    key,value = line.split '='
    item.attributes.set(
      "#{key}" => "#{value}")
  end
end

Also referenced in the CreateTargetEnvironment snippet above, the showback_domain.rb
script connects to SimpleDB and returns a key/value pair.

load File.expand_path('../../config/aws.config', __FILE__)

item_name=ARGV[0]
key=ARGV[1]

sdb = AWS::SimpleDB.new

AWS::SimpleDB.consistent_reads do
  domain = sdb.domains["stacks"]
  item = domain.items["#{item_name}"]

  item.attributes.each_value do |name, value|
    if name == "#{key}"
      puts "#{value}".chomp
    end
  end
end

In the CreateTargetEnvironment snippet above, we store the outputs of the
CloudFormation stack in a temporary file. We then iterate over the file with the
load_domain.rb script and store the key/value pairs in SimpleDB.

Following this, we make a call to SimpleDB with the showback_domain.rb script and return
the instance IP address (created in the CloudFormation template) and store it in the host
variable. host is then used by cucumber to ssh into the target instance and run the acceptance
tests.

Using CloudFormation

In our CloudFormation templates we allocate multiple AWS resources. Every time we run a
template, new resources are created. For example, in our jenkins.template we create a new
IAM user; every run produces a different IAM user with different credentials. We need a way
to reference these resources. This is where
CloudFormation comes in. You can reference resources within other resources throughout the
script. You can define a reference to another resource using the Ref function in
CloudFormation. Using Ref, you can dynamically refer to values of other resources such as
an IP Address, domain name, etc.

In the script we are creating an IAM user, referencing that user to create AWS access keys,
and then storing them in environment variables.

"CfnUser" : {
1   "Type" : "AWS::IAM::User",
2   "Properties" : {
3     "Path": "/",
4     "Policies": [{
5       "PolicyName": "root",
6       "PolicyDocument": {
7         "Statement":[{
8           "Effect":"Allow",
9           "Action":"*",
10           "Resource":"*"
11         }
12       ]}
13     }]
14   }
15 },
16
17 “HostKeys” : {
18   “Type” : “AWS::IAM::AccessKey”,
19   “Properties” : {
20     “UserName” : { “Ref”: “CfnUser” }
21   }
22 },
23
24 “# Add AWS Credentials to Tomcatn”,
25 “echo ”AWS_ACCESS_KEY=“, {”Ref" : “HostKeys” }, “” >>
26 /etc/sysconfig/tomcat6n",
27 “echo ”AWS_SECRET_ACCESS_KEY=“, {”Fn::GetAtt“:
[”HostKeys“,”SecretAccessKey“]},”" >> /etc/sysconfig/tomcat6n",

We can then use these access keys in other scripts by referencing the $AWS_ACCESS_KEY and
$AWS_SECRET_ACCESS_KEY environment variables.
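
A minimal sketch of what that looks like from a shell script on the instance (the file path matches the template above; the echo is purely illustrative):

# /etc/sysconfig/tomcat6 is read by the Tomcat init script; other shell scripts
# on the instance can source it too and reuse the same credentials.
. /etc/sysconfig/tomcat6

# The keys are now available to anything launched from this shell.
export AWS_ACCESS_KEY AWS_SECRET_ACCESS_KEY
echo "Using access key ${AWS_ACCESS_KEY}"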

How is this different from typical configuration management?

Typically in many organizations, there's a big property file with hard coded key/value pairs
that gets passed into the pipeline. The pipeline executes using the given parameters and
cannot scale or change without a user modifying the property file. It is unable to scale or
adapt because all of the properties are hard coded: if the property file hard codes the IP of an
EC2 instance and that instance goes down for whatever reason, the pipeline doesn't work until
someone fixes the property file. There are more effective ways of doing this when using the
cloud. The cloud provides on-demand resources that will constantly be changing. These
resources will have different IP addresses, domain names, etc. associated with them every time.

With dynamic configuration, there are no property files; every property is generated as part of
the pipeline.

With this dynamic approach, the pipeline values change with every run. As new cloud
resources are allocated, the pipeline adjusts itself automatically without the need for users to
constantly modify property files. This leads to less time spent debugging the cumbersome
property file management issues that plague most companies.

In the next part of our series – which is all about Deployment Automation – we’ll go through
scripting and testing your deployment using industry-standard tools. In this next article,
you’ll see how to orchestrate deployment sequences and configuration using Capistrano.

Tutorial : Continuous Delivery in the Cloud
Part 6 of 6
In part 1 of this series, I introduced the Continuous Delivery (CD) pipeline for the Manatee
Tracking application. In part 2, I went over how we use this CD pipeline to deliver software
from checkin to production. In part 3, we focused on how CloudFormation is used to script the
virtual AWS components that create the Manatee infrastructure. Then in part 4, we focused on
a "property file less" environment by dynamically setting and retrieving properties. Part 5
explained how we use Capistrano for scripting our deployment. A list of topics for each of
the articles is summarized below:

Part 1: Introduction – Introduction to continuous delivery in the cloud and the rest of the
articles; Part 2: CD Pipeline – In-depth look at the CD Pipeline; Part 3: CloudFormation –
Scripted virtual resource provisioning; Part 4: Dynamic Configuration – “Property file less”
infrastructure; Part 5: Deployment Automation – Scripted deployment orchestration; Part 6:
Infrastructure Automation – What you’re reading now;

In this part of the series, I am going to show how we use Puppet in combination with
CloudFormation to script our target environment infrastructure, preparing it for a Manatee
application deployment.

What is Puppet?

Puppet is a Ruby based infrastructure automation tool. Puppet is primarily used for
provisioning environments and managing configuration. Puppet is made to support multiple
operating systems, making your infrastructure automation cross-platform.

How does Puppet work?

Puppet uses a library called Facter which collects facts about your system. Facter returns
details such as the operating system, architecture, IP address, etc. Puppet uses these facts to
make decisions for provisioning your environment. Below is an example of the facts returned
by Facter.

# facter
architecture => i386
...
ipaddress => 172.16.182.129
is_virtual => true
kernel => Linux
kernelmajversion => 2.6
...
operatingsystem => CentOS
operatingsystemrelease => 5.5
physicalprocessorcount => 0
processor0 => Intel(R) Core(TM)2 Duo CPU P8800 @ 2.66GHz
processorcount => 1
productname => VMware Virtual Platform

Puppet uses the operating system fact to decide the service name, as shown below:

case $operatingsystem {
  centos, redhat: {
    $service_name = 'ntpd'
    $conf_file    = 'ntp.conf.el'
  }
}

With this case statement, if the operating system is either centos or redhat, the service
name ntpd and the configuration file ntp.conf.el are used.

Puppet is declarative by nature. Inside a Puppet module you define the end state the
environment should be in after the Puppet run, and Puppet enforces this state during the run.
If at any point the environment does not conform to the desired state, the Puppet run fails.
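
One practical consequence of this declarative model is that you can ask Puppet what it would change before letting it enforce anything. A small sketch (the module and manifest paths are placeholders):

# Dry run: report the changes Puppet *would* make, without applying them.
puppet apply --noop --modulepath=/path/to/modules /path/to/manifests/site.pp

# Inspect the current state of a single managed resource.
puppet resource service httpd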

Anatomy of a Puppet Module

To script the infrastructure, Puppet uses modules for organizing related code to perform a
specific task. A Puppet module has multiple subdirectories that contain resources for
performing the intended task (a scaffolding sketch follows the list below):

manifests/: Contains the manifest class files for defining how to perform the intended task
files/: Contains static files that the node can download during the installation
lib/: Contains plugins
templates/: Contains templates which can be used by the module's manifests
tests/: Contains tests for the module
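
Scaffolding a new module with this layout is just a matter of creating the directories; the module name below is illustrative:

# Hypothetical example: create the standard layout for a module called "tomcat6".
mkdir -p modules/tomcat6/{manifests,files,lib,templates,tests}

# Each module's entry point lives in manifests/init.pp.
touch modules/tomcat6/manifests/init.pp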

Puppet also uses a manifest to manage multiple modules together, site.pp, and another
manifest to define what to install on each node, default.pp.

How to run Puppet

Puppet can be run using either a master agent configuration or a solo installation (puppet
apply).

Master Agent: With a master agent installation, you configure one main Puppet master node
which manages and configures all of your agent nodes (target environments). The master
initiates the installation of the agent and manages it throughout its lifecycle. This model
enables you to roll out infrastructure changes to your agents in parallel by controlling the
master node.

Solo: In a solo Puppet run, it's up to the user to place the desired Puppet module on the target
environment. Once the module is on the target environment, the user needs to run puppet
apply --modulepath=/path/to/modules/ /path/to/site.pp. Puppet will then provision
the server with the provided modules and site.pp without relying on another node.

Why do we use Puppet?

We use Puppet to script and automate our infrastructure, making our environment
provisioning repeatable, fully automated, and less error prone. Furthermore, scripting our
environments gives us complete control over our infrastructure and the ability to terminate
and recreate environments as often as we choose.

Puppet for Manatees


In the Manatee infrastructure, we use Puppet for provisioning our target environments. I am
going to go through our manifests and modules while explaining their use and purpose. In our
Manatee infrastructure, we create a new target environment as part of the CD pipeline –
discussed in part 2 of the series, CD Pipeline. Below I provide a high-level summary of the
environment provisioning process:

1. CloudFormation dynamically creates a params.pp manifest with AWS variables
2. CloudFormation runs puppet apply as part of UserData
3. Puppet runs the modules defined in hosts/default.pp
4. Cucumber acceptance tests are run to verify the infrastructure was provisioned correctly

Now that we know at a high-level what’s being done during the environment provisioning,
let’s take a deeper look at the scripts in more detail. The actual scripts can be found here:
Puppet

First we will start off with the manifests.

The site.pp (shown below) serves two purposes. It loads the other manifests, default.pp and
params.pp, and also sets the stages pre, main and post.

1 import "hosts/*" import "classes/*"


2
3 stage { [pre, post]: }
4 Stage[pre] -> Stage[main] -> Stage[post]

These stages are used to define the order in which Puppet modules should be run. If a
Puppet module is defined as pre, it will run before Puppet modules defined as main or post.
Moreover, if stages aren't defined, Puppet will determine the order of execution. The
default.pp (referenced below) shows how staging is defined for executing Puppet modules.

node default {
  class { "params": stage => pre }
  class { "java": stage => pre }
  class { "system": stage => pre }
  class { "tomcat6": stage => main }
  class { "postgresql": stage => main }
  class { "subversion": stage => main }
  class { "httpd": stage => main }
  class { "groovy": stage => main }
}

The default.pp manifest also defines which Puppet modules to use for provisioning the target
environment.

params.pp (shown below), loaded from site.pp, is dynamically created using
CloudFormation. params.pp is used for setting AWS property values that are used later in
the Puppet modules.

class params {
  $s3_bucket = ''
  $application_name = ''
  $hosted_zone = ''
  $access_key = ''
  $secret_access_key = ''
  $jenkins_internal_ip = ''
}
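
The article doesn't reproduce the UserData that fills these values in, but conceptually it amounts to writing this file with the stack's parameter and resource values substituted. A hypothetical bash sketch of that step (the module path and the bracketed placeholder values are assumptions, not the article's actual code):

# Hypothetical: write params.pp with values that CloudFormation substitutes
# via { "Ref" : ... } / { "Fn::GetAtt" : ... } in the real template.
cat > /path/to/modules/params/manifests/init.pp <<EOF
class params {
  \$s3_bucket = '<S3Bucket>'
  \$application_name = '<ApplicationName>'
  \$hosted_zone = '<HostedZone>'
  \$access_key = '<HostKeys access key>'
  \$secret_access_key = '<HostKeys secret key>'
  \$jenkins_internal_ip = '<Jenkins internal IP>'
}
EOF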

Now that we have an overview of the manifests used, let's take a look at the Puppet modules
themselves.

In our java module, which is run in the pre stage, we are running a simple installation using
packages. This is easily dealt with in Puppet by using the package resource. This relies on
Puppet’s knowledge of the operating system and the package manager. Puppet simply
installs the package that is declared.

class java {
  package { "java-1.6.0-openjdk": ensure => "installed" }
}

The next module we’ll discuss is system. System is also run during the pre stage and is used
for the setup of all the extra operations that don’t necessarily need their own module. These
actions include setting up general packages (gcc, make, etc.), installing ruby gems (AWS sdk,
bundler, etc.), and downloading custom scripts used on the target environment.

class system {

  include params

  $access_key = $params::access_key
  $secret_access_key = $params::secret_access_key

  Exec { path => '/usr/bin:/bin:/usr/sbin:/sbin' }

  package { "gcc": ensure => "installed" }
  package { "mod_proxy_html": ensure => "installed" }
  package { "perl": ensure => "installed" }
  package { "libxslt-devel": ensure => "installed" }
  package { "libxml2-devel": ensure => "installed" }
  package { "make": ensure => "installed" }

  package {"bundler":
    ensure => "1.1.4",
    provider => gem
  }

  package {"trollop":
    ensure => "2.0",
    provider => gem
  }

  package {"aws-sdk":
    ensure => "1.5.6",
    provider => gem,
    require => [
      Package["gcc"],
      Package["make"]
    ]
  }

  file { "/home/ec2-user/aws.config":
    content => template("system/aws.config.erb"),
    owner => 'ec2-user',
    group => 'ec2-user',
    mode => '500',
  }

  define download_file($site="",$cwd="",$creates=""){
    exec { $name:
      command => "wget ${site}/${name}",
      cwd => $cwd,
      creates => "${cwd}/${name}"
    }
  }

  download_file {"database_update.rb":
    site => "https://s3.amazonaws.com/sea2shore",
    cwd => "/home/ec2-user",
    creates => "/home/ec2-user/database_update.rb",
  }

  download_file {"id_rsa.pub":
    site => "https://s3.amazonaws.com/sea2shore/private",
    cwd => "/tmp",
    creates => "/tmp/id_rsa.pub"
  }

  exec {"authorized_keys":
    command => "cat /tmp/id_rsa.pub >> /home/ec2-user/.ssh/authorized_keys",
    require => Download_file["id_rsa.pub"]
  }
}

First I want to point out that at the top we are specifying to include params. This enables
the system module to access the params.pp file. This way we can use the properties defined
in params.pp.

include params

$access_key = $params::access_key
$secret_access_key = $params::secret_access_key

This enables us to define the parameters in one central location and then reference that
location from other modules.

As we move through the script we are using the package resource similar to previous
modules. For each rubygem we use the package resource and explicitly tell Puppet to use the
gem provider. You can specify other providers like rpm and yum.
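
Roughly speaking, the gem provider shells out to the rubygems tool, while the default provider on a RHEL-family box goes through yum. An illustrative comparison of what those declarations amount to (Puppet drives this itself during the run):

# What the gem provider effectively runs for the aws-sdk declaration above:
gem install aws-sdk --version 1.5.6 --no-rdoc --no-ri

# What the default (yum) provider effectively runs for plain package resources:
yum install -y gcc make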

We use the file resource to create files from templates.

AWS.config(
  :access_key_id => "<%= "#{access_key}" %>",
  :secret_access_key => "<%= "#{secret_access_key}" %>"
)

In the aws.config.erb template (referenced above) we are using the properties defined in
params.pp for dynamically creating an aws.config credential file. This file is then used by
our database_update.rb script for connecting to S3.

Speaking of the database_update.rb script, we need to get it onto the target environment.
To do this, we define a download_file resource.

define download_file($site="",$cwd="",$creates=""){
  exec { $name:
    command => "wget ${site}/${name}",
    cwd => $cwd,
    creates => "${cwd}/${name}"
  }
}

This creates a new resource for Puppet to use. Using this we are able to download both the
database_update.rb and id_rsa.pub public SSH key.

As a final step in setting up the system, we execute a bash line that copies the id_rsa.pub
contents into the authorized_keys file for the ec2-user. This enables clients holding the
corresponding id_rsa private key to ssh into the target environment as ec2-user.

The Manatee infrastructure uses Apache for the web server, Tomcat for the app server, and
PostgreSQL for its database. Puppet sets these up as part of the main stage, meaning they run
in order after the pre stage modules have run.

In our httpd module, we perform several steps discussed previously: the httpd package is
installed and a new configuration file is created from a template.

class httpd {

  include params

  $application_name = $params::application_name
  $hosted_zone = $params::hosted_zone

  package { 'httpd':
    ensure => installed,
  }

  file { "/etc/httpd/conf/httpd.conf":
    content => template("httpd/httpd.conf.erb"),
    require => Package["httpd"],
    owner => 'ec2-user',
    group => 'ec2-user',
    mode => '664',
  }

  service { 'httpd':
    ensure => running,
    enable => true,
    require => [
      Package["httpd"],
      File["/etc/httpd/conf/httpd.conf"]],
    subscribe => Package['httpd'],
  }
}

The new piece of functionality used in our httpd module is service. service allows us to
define the state the httpd service should be in at the end of our run. In this case, we are
declaring that it should be running.

The Tomcat module again uses package to define what to install and service to declare the
end state of the tomcat service.

class tomcat6 {

  Exec { path => '/usr/bin:/bin:/usr/sbin:/sbin' }

  package { "tomcat6":
    ensure => "installed"
  }

  $backup_directories = [
    "/usr/share/tomcat6/.sarvatix/",
    "/usr/share/tomcat6/.sarvatix/manatees/",
    "/usr/share/tomcat6/.sarvatix/manatees/wildtracks/",
    "/usr/share/tomcat6/.sarvatix/manatees/wildtracks/database_backups/",
    "/usr/share/tomcat6/.sarvatix/manatees/wildtracks/database_backups/backup_archive",
  ]

  file { $backup_directories:
    ensure => "directory",
    owner => "tomcat",
    group => "tomcat",
    mode => 777,
    require => Package["tomcat6"],
  }

  service { "tomcat6":
    enable => true,
    require => [
      File[$backup_directories],
      Package["tomcat6"]],
    ensure => running,
  }
}

The tomcat6 module uses the file resource differently than previous modules: it uses file to
create directories. This is defined using ensure => "directory".

In the postgresql module we use the package resource to install PostgreSQL, build files from
templates using the file resource, perform bash executions with exec, and declare the
intended state of the PostgreSQL service using the service resource.

class postgresql {

  include params

  $jenkins_internal_ip = $params::jenkins_internal_ip

  Exec { path => '/usr/bin:/bin:/usr/sbin:/sbin' }

  define download_file($site="",$cwd="",$creates=""){
    exec { $name:
      command => "wget ${site}/${name}",
      cwd => $cwd,
      creates => "${cwd}/${name}"
    }
  }

  download_file {"wildtracks.sql":
    site => "https://s3.amazonaws.com/sea2shore",
    cwd => "/tmp",
    creates => "/tmp/wildtracks.sql"
  }

  download_file {"createDbAndOwner.sql":
    site => "https://s3.amazonaws.com/sea2shore",
    cwd => "/tmp",
    creates => "/tmp/createDbAndOwner.sql"
  }

  package { "postgresql8-server":
    ensure => installed,
  }

  exec { "initdb":
    command => "service postgresql initdb",
    require => Package["postgresql8-server"]
  }

  file { "/var/lib/pgsql/data/pg_hba.conf":
    content => template("postgresql/pg_hba.conf.erb"),
    require => Exec["initdb"],
    owner => 'postgres',
    group => 'postgres',
    mode => '600',
  }

  file { "/var/lib/pgsql/data/postgresql.conf":
    content => template("postgresql/postgresql.conf.erb"),
    require => Exec["initdb"],
    owner => 'postgres',
    group => 'postgres',
    mode => '600',
  }

  service { "postgresql":
    enable => true,
    require => [
      Exec["initdb"],
      File["/var/lib/pgsql/data/postgresql.conf"],
      File["/var/lib/pgsql/data/pg_hba.conf"]],
    ensure => running,
  }

  exec { "create-user":
    command => "echo CREATE USER root | psql -U postgres",
    require => Service["postgresql"]
  }

  exec { "create-db-owner":
    require => [
      Download_file["createDbAndOwner.sql"],
      Exec["create-user"],
      Service["postgresql"]],
    command => "psql < /tmp/createDbAndOwner.sql -U postgres"
  }

  exec { "load-database":
    require => [
      Download_file["wildtracks.sql"],
      Exec["create-user"],
      Service["postgresql"],
      Exec["create-db-owner"]],
    command => "psql -U manatee_user -d manatees_wildtrack -f /tmp/wildtracks.sql"
  }
}

In this module we are creating a new user on the PostgreSQL database:

1 exec { "create-user":
2   command => "echo CREATE USER root | psql -U postgres",
3   require => Service["postgresql"]
4 }

In this next section we download the latest Manatee database SQL dump.

1 download_file {"wildtracks.sql":
2   site => "https://s3.amazonaws.com/sea2shore",
3   cwd => "/tmp",   creates => "/tmp/wildtracks.sql"
4 }

In the section below, we load the database with the SQL file. This builds our target
environments with the production database content, giving developers an exact replica
sandbox to work in.

exec { "load-database":
  require => [
    Download_file["wildtracks.sql"],
    Exec["create-user"],
    Service["postgresql"],
    Exec["create-db-owner"]],
  command => "psql -U manatee_user -d manatees_wildtrack -f /tmp/wildtracks.sql"
}

Lastly in our Puppet run, we install subversion and groovy on the target node. We could
have just included these in our system module, but they seemed general purpose enough to
create individual modules.

Subversion manifest:

class subversion {
  package { "subversion":
    ensure => "installed"
  }
}

Groovy manifest:

class groovy {

  Exec { path => '/usr/bin:/bin:/usr/sbin:/sbin' }

  define download_file($site="",$cwd="",$creates=""){
    exec { $name:
      command => "wget ${site}/${name}",
      cwd => $cwd,
      creates => "${cwd}/${name}"
    }
  }

  download_file {"groovy-1.8.2.tar.gz":
    site => "https://s3.amazonaws.com/sea2shore/resources/binaries",
    cwd => "/tmp",
    creates => "/tmp/groovy-1.8.2.tar.gz",
  }

  file { "/usr/bin/groovy-1.8.2/":
    ensure => "directory",
    owner => "root",
    group => "root",
    mode => 755,
    require => Download_file["groovy-1.8.2.tar.gz"],
  }

  exec { "extract-groovy":
    command => "tar -C /usr/bin/groovy-1.8.2/ -xvf /tmp/groovy-1.8.2.tar.gz",
    require => File["/usr/bin/groovy-1.8.2/"],
  }
}

The Subversion manifest is relatively straightforward, as we are only using the package
resource. The Groovy manifest is slightly different: we download the Groovy tarball, place it
on the filesystem, and then extract it.

We've gone through how the target environment is provisioned. We do, however, have one
more task: testing. It's not enough to assume that everything was installed successfully just
because Puppet didn't error out. For this reason, we use Cucumber to run acceptance tests
against our environment. Our tests check that services are running, configuration files are
present, and the right packages have been installed.
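
These checks are kicked off the same way as the acceptance-test step shown in the CreateTargetEnvironment job earlier. A sketch of such a run (the feature file name here is illustrative):

# Run the infrastructure acceptance tests against the freshly provisioned node.
cucumber features/infrastructure.feature host=${host} user=ec2-user \
  key=/usr/share/tomcat6/.ssh/id_rsa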

Puppet allows us to completely script and version our target environments. Consequently, this
enables us to treat environments as disposable entities. As a practice, we create a new target
environment every time our CD pipeline is run. This way we are always deploying against a
known state.

As our blog series is coming to a close, let's recap what we've gone through. In the Manatee
infrastructure we use a combination of CloudFormation for scripting AWS resources, Puppet
for scripting target environments, Capistrano for deployment automation, SimpleDB and
CloudFormation for dynamic properties, and Jenkins for coordinating all the resources into
one cohesive unit for moving a Manatee application change from check-in to production in
just a single click.

DevOps in the cloud Tutorial


Lesson 1 – Walkthrough of Pipeline
Components
Learning Objectives
By the end of this lesson you will be able to:

 Check in changes to application code.


 Make simple configuration changes.
 Change an existing automated test.
 Make scripted changes to the database.
 Make a change to a simple build script.
 Make a change to a deployment script.
 Make a change to the infrastructure scripts.
 Run Jenkins Continuous Integration server jobs.
 View and run jobs within a deployment pipeline.
 View static analysis reports.
 View feedback from a dashboard.

Lesson 2 – Definitions, Practices, patterns


and tools for implementing Continuous
Delivery in the Cloud
Learning Objectives
By the end of this lesson you will be able to:

 Define Continuous Delivery, DevOps, Continuous Deployment and the Cloud.


 Identify Practices, patterns and tools for implementing Continuous Delivery in the
Cloud.
 Create a ‘spaghetti diagram’ and a ‘value-stream map’.
Lesson 3 – Primer/Introduction to Amazon
Web Services
Learning Objectives
By the end of this lesson you will be able to:

 Create an Amazon Web Services (AWS) account and access services.


 Use basic features of the AWS Management Console.
 Define security groups.
 Use basic features of the Elastic Compute Cloud (EC2).
 Use basic features of Elastic Load Balancing (ELB).
 Employ Auto Scaling.
 Use Amazon CloudWatch to monitor resources.
 Use Amazon Route 53 to manage a domain.
 Use Amazon Simple Storage Service (S3) to store objects.
 Identify the basic features of AWS CloudFormation for automating infrastructures.
 Identify the basic features of AWS Elastic Beanstalk.
 Identify the purpose of other available AWS services.

Lesson 4 – Implementing Continuous


Integration pipeline in AWS
Learning Objectives
By the end of this lesson you will be able to:

 Set up your development environment.


 Install and configure Jenkins plug-ins.
 Configure a scripted environment job.
 Configure a scripted build job.
 Configure a scripted deployment job.
 Configure and run on-demand jobs.
 Configure and run scheduled jobs.
 Configure and run continuous jobs.
 Create continuous feedback mechanisms.

Lesson 5 – Implementing Infrastructure


Automation in AWS
Learning Objectives
By the end of this lesson you will be able to:

 Create a CloudFormation template.


 Integrate Puppet with CloudFormation.
 Create a transient environment.
 Lock down environments.
 Create a ‘Chaos Monkey’.

Lesson 6 – Building and Deploying


Software
Learning Objectives
By the end of this lesson you will be able to:

 Script a build.
 Script a deployment.
 Set up and utilize a dependency-management repository.
 Deploy to target environments.
 Perform a self-service deployment.

Lesson 7 – Configuration Management


Learning Objectives
By the end of this lesson you will be able to:

 Work from the canonical version.


 Version system configurations and other artifacts.
 Setup a dynamic configuration management database.

Lesson 8 – Database
Learning Objectives
By the end of this lesson you will be able to:

 Script a database.
 Script the upgrade and downgrade of a database.
 Use a database sandbox.

Lesson 9 – Testing
Learning Objectives
By the end of this lesson you will be able to:

 View and run unit tests.

Lesson 10 – Delivery Pipeline


Learning Objectives
By the end of this lesson you will be able to:

 Create a delivery pipeline with dependent jobs.
