
UNIVERSITY OF PETROLEUM AND ENERGY STUDIES, DEHRADUN

BACHELOR OF TECHNOLOGY COMPUTER SCIENCE
Specialization in
CLOUD COMPUTING & VIRTUALIZATION TECHNOLOGY

SEMESTER – VI
Lab Report
DevOps

Under the guidance of


PROF. ABHIRUP KHANNA

SUBMITTED BY:
Shubham Bhardwaj
500087527
R2142201874
Exercise 1. Setup Environment for DevOps.
What this exercise is about: In this step-by-step lab we'll walk through setting up the environment for the DevOps model.
What you should be able to do: This experiment is intended to explore the installation process and primary setup for DevOps operations.
Introduction: DevOps provides integration with popular open source and third-party tools and services across the entire DevOps workflow.
Requirements: Docker Toolbox, Git, Atom, ConEmu and an Internet connection.
Instructor exercise overview: Software
• Docker Toolbox
• Git
• Atom
• ConEmu

Enabling Virtualization from BIOS:

Installing Required Software:

Docker Toolbox:
Exercise 2. Setup Environment for Docker in
Windows and Linux.

Install Docker and Run a Docker Container


on Ubuntu

Retrieve and add the GPG Public Keys

To download the key, we will use the wget command from the Terminal.

Command: wget https://download.docker.com/linux/ubuntu/gpg

The downloaded key in my case is called gpg. After the public key is downloaded, add it to the
system keys using apt-key.
Command: sudo apt-key add gpg
Verify Key Fingerprint
Now, to verify that we have added the proper key, we need to check the fingerprint for Docker’s key.
Command: sudo apt-key fingerprint 0EBFCD88

Install Required Packages


To set up the Docker repository, our system must have packages which allow us to download files
over HTTPS. So, you need the packages below, which can be installed using apt or apt-get.

# Install packages to allow apt to use a repository over HTTPS:


Command: sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Add the Docker Repository to the Sources List
We need to add the Docker source to our list of sources in the system so that any future updates can
occur using that source URL when using apt update.

Command: sudo vi /etc/apt/sources.list

Go to the end of the file, and add this line to ensure that we add the repository source link.
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
Install Docker on Ubuntu
Now that we have our sources with us, we are ready to install Docker!
Command: sudo apt update
Command: sudo apt install docker-ce

Now, if there aren’t any errors, we will return to our shell prompt, and our install has completed
successfully!

Verify Docker Installation


If Docker was installed correctly, it will have automatically started a Docker daemon process.
So, we use the systemctl command to check whether the docker service has started.

Command: sudo systemctl status docker


Now, let us look at how we can configure Docker and run a Docker Container.

Run a Docker Container


A Docker Container, being similar to a Virtual Machine, also needs an image to work on. There are
various images hosted on Docker Hub, Docker’s official website for hosting images. Any image you
need can be fetched from this website.

Fetch the image


Command: sudo docker pull hello-world

Run the container


Command: sudo docker run hello-world

Running a Docker Debian Container


Command: sudo docker run -it debian

Installation of Docker in Windows


Install Git in Windows

Install Git in Linux

Download Docker images


docker pull jenkins

docker pull sonarqube

docker pull mattgruter/artifactory

docker pull tomcat

docker pull alpine
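The five pulls above can also be run in one loop; a minimal sketch (the echo is left in so the loop only prints each command — remove it to actually pull, which requires Docker and network access):

```shell
# Print (or, without the echo, run) a docker pull for each lab image.
for img in jenkins sonarqube mattgruter/artifactory tomcat alpine; do
  echo docker pull "$img"
done
```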


Exercise 3. Setup Environment for Chef
Fundamentals
What this exercise is about:
In this step-by-step lab we'll walk through setting up the environment for the Chef tool.
What you should be able to do:
This experiment is intended to explore the installation process and primary setup of Chef for DevOps
operations.
Introduction:
Chef is a configuration management tool that automates the setup of systems and applications,
replacing the manual cycle of customizing, checking and deploying them. It supports various
platforms such as Linux (Ubuntu, CentOS), Solaris, etc. Chef has a client-server architecture. You
can use AWS, Google Cloud Platform, OpenStack, etc. to connect it to cloud infrastructure.
Requirements:
Docker Toolbox, Git, Atom, ConEmu and an Internet connection
Instructor exercise overview:
COMPONENT: MINIMUM REQUIREMENT
System: Laptop/Desktop with internet connection
Memory: 8 GB RAM
CPU: Quad Core CPU
Disk Space: 20 GB available
OS: Windows / OSX

Software Prerequisites
Docker Desktop: latest
VirtualBox: 5.0.20
Vagrant: 1.8.1
Atom: 1.7.4
Git for Windows (Windows only): 2.8.3
ConEmu (Windows only): 150813g

Systems Preparation
To prepare your system to run virtual machines and set up useful utilities, refer to the Common Lab
Setup Instructions.
Additional Software
• Download and install the Chef Development Kit
• Install the Chef plugin for Atom
• Open the Atom editor
• On OS X, select Atom -> Preferences. On Windows, select File -> Settings
• From Install, search for language-chef and install the extension.
• Restart Atom
Exercise 4. Setup Environment for Puppet
Fundamentals
What this exercise is about:
In this step-by-step lab we'll walk through setting up the environment for the Puppet tool.
What you should be able to do:
This experiment is intended to explore the installation process and primary setup of the Puppet tool
for DevOps operations.
Introduction:
Puppet is a Configuration Management tool that is used for deploying, configuring and managing
servers. It performs the following functions:
• Defining distinct configurations for each and every host, and continuously checking and confirming
whether the required configuration is in place and is not altered (if altered Puppet will revert to
the required configuration) on the host.
• Dynamic scaling-up and scaling-down of machines.
• Providing control over all your configured machines, so a centralized (master-server or repo-based)
change gets propagated to all, automatically.

Requirements:
Instructor exercise overview:
COMPONENT: MINIMUM REQUIREMENT
System: Laptop/Desktop with internet connection
Memory: 8 GB RAM
CPU: Quad Core CPU
Disk Space: 20 GB available
OS: Windows / OSX
Software Prerequisites
Docker Desktop: latest
VirtualBox: 5.0.20
Vagrant: 1.8.1
Atom: 1.7.4
Git for Windows (Windows only): 2.8.3
ConEmu (Windows only): 150813g
Systems Preparation
To prepare your system to run virtual machines and set up useful utilities, refer to the Common Lab
Setup Instructions.
Additional Software
• Download and install the Puppet Development Kit (PDK)
• Install the Puppet plugin for Atom
• Open the Atom editor
• On OS X, select Atom -> Preferences. On Windows, select File -> Settings
• From Install, search for language-puppet and install the extension.
• Restart Atom
Exercise 5. Setting up Learning Environment
for Jenkins.
What this exercise is about:
In this step-by-step lab we'll walk through setting up the environment for the Jenkins tool.
What you should be able to do:
This experiment is intended to explore the installation process and primary setup of the Jenkins tool
for DevOps operations.
Introduction:
Jenkins is an open-source automation tool written in Java with plugins built for Continuous
Integration
purposes. Jenkins is used to build and test your software projects continuously, making it easier for
developers to integrate changes to the project and for users to obtain a fresh build. It
also allows you to continuously deliver your software by integrating with many testing and
deployment technologies.
With Jenkins, organizations can accelerate the software development process through automation.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test,
package, stage, deploy, static analysis, and much more.
Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of
various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for
that tool, for example Git, Maven 2 project, Amazon EC2, HTML publisher etc.

Requirements:
Docker Toolbox, Git, Atom, ConEmu and an Internet connection
Instructor exercise overview:
Setting up Learning Environment with Docker
This is the easiest method to set up Jenkins and is the recommended option.
Installing Docker Engine
Proceed with installing Docker Engine on your choice of Operating System. For details on how to
install docker visit the official installation page at docs.docker.com.
We assume you have installed docker and are ready to launch containers before proceeding. To
validate the docker environment, run:
docker ps
If the above command goes through without errors, you are all set.
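That validation can also be scripted; the sketch below checks both that the docker CLI is installed and that the daemon answers, printing a diagnostic either way (plain POSIX shell, no tools assumed beyond coreutils):

```shell
# Check the docker environment: CLI present first, then daemon reachable.
if ! command -v docker >/dev/null 2>&1; then
  echo "docker CLI not found in PATH" >&2
elif ! docker ps >/dev/null 2>&1; then
  echo "docker daemon not reachable (is the service running?)" >&2
else
  echo "docker environment OK"
fi
```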
After installing docker, pull our Jenkins docker image from docker hub. This is the simplest way of
installing Jenkins and requires minimal effort.

docker run -idt --name jenkins -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 -p 50000:50000 jenkins/jenkins:2.178-slim

If you install it using the instructions above, find out the IP address and go to
http://YOUR_IP_ADDRESS:8080 to access the Jenkins UI.
After that, you have to select Install suggested plugins. To start/stop Jenkins with docker, use the
following commands:
docker start jenkins
docker stop jenkins
Common Post Installation Steps
After the installation, you will be asked for a password. The password is saved in the following
file:
/var/jenkins_home/secrets/initialAdminPassword
The password can also be fetched from the logs. You could run the following command to view the
password:
docker logs jenkins
or, to follow the logs:
docker logs -f jenkins
Exercise 6. Setting up Learning Environment
for Ansible.
Introduction
Configuration management systems are designed to streamline the process of
controlling large numbers of servers, for administrators and operations teams. They
allow you to control many different systems in an automated way from one central
location.
While there are many popular configuration management tools available for Linux
systems, such as Chef and Puppet, these are often more complex than many people
want or need. Ansible is a great alternative to these options because it offers an
architecture that doesn’t require special software to be installed on nodes, using SSH
to execute the automation tasks and YAML files to define provisioning details.
In this guide, we’ll discuss how to install Ansible on an Ubuntu 20.04 server and go
over some basics of how to use this software. For a more high-level overview of
Ansible as a configuration management tool, please refer to An Introduction to
Configuration Management with Ansible.

Prerequisites
To follow this tutorial, you will need:
• One Ansible Control Node: The Ansible control node is the machine we’ll use
to connect to and control the Ansible hosts over SSH. Your Ansible control node
can either be your local machine or a server dedicated to running Ansible,
though this guide assumes your control node is an Ubuntu 20.04 system. Make
sure the control node has:
• A non-root user with sudo privileges. To set this up, you can follow Steps
2 and 3 of our Initial Server Setup Guide for Ubuntu 20.04. However,
please note that if you’re using a remote server as your Ansible Control
node, you should follow every step of that guide.
• An SSH keypair associated with this user. To set this up, you can
follow Step 1 of our guide on How to Set Up SSH Keys on Ubuntu 20.04.
• One or more Ansible Hosts: An Ansible host is any machine that your Ansible
control node is configured to automate. This guide assumes your Ansible hosts
are remote Ubuntu 20.04 servers. Make sure each Ansible host has:
• The Ansible control node’s SSH public key added to
the authorized_keys of a system user. This user can be either root or
a regular user with sudo privileges. To set this up, you can follow Step
2 of How to Set Up SSH Keys on Ubuntu 20.04.
Step 1 — Installing Ansible
To begin using Ansible as a means of managing your server infrastructure, you need
to install the Ansible software on the machine that will serve as the Ansible control
node.
From your control node, run the following command to include the official project’s
PPA (personal package archive) in your system’s list of sources:

sudo apt-add-repository ppa:ansible/ansible

Press ENTER when prompted to confirm the PPA addition, then refresh your system’s
package index:

sudo apt update

Following this update, you can install the Ansible software with:

sudo apt install ansible
Your Ansible control node now has all of the software required to administer your hosts. Next, we will
go over how to add your hosts to the control node’s inventory file so that it can control them.

Step 2 — Setting Up the Inventory File

The inventory file contains information about the hosts you’ll manage with Ansible.
You can include anywhere from one to several hundred servers in your inventory file,
and hosts can be organized into groups and subgroups. The inventory file is also often
used to set variables that will be valid only for specific hosts or groups, in order to be
used within playbooks and templates. Some variables can also affect the way a
playbook is run, like the ansible_python_interpreter variable that we’ll see in a
moment.
To edit the contents of your default Ansible inventory, open the /etc/ansible/hosts file using your text
editor of choice on your Ansible control node.
The default inventory file provided by the Ansible installation contains a number of examples that
you can use as references for setting up your inventory. The following example defines a group
named [servers] with three different servers in it, each identified by a custom alias: server1, server2,
and server3. Be sure to replace the highlighted IPs with the IP addresses of your Ansible hosts.
sudo nano /etc/ansible/hosts
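A minimal example of such an inventory might look like this (the IP addresses are placeholders from the documentation range and must be replaced with those of your own hosts):

```
[servers]
server1 ansible_host=203.0.113.111
server2 ansible_host=203.0.113.112
server3 ansible_host=203.0.113.113

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```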

The all:vars subgroup sets the ansible_python_interpreter host parameter that


will be valid for all hosts included in this inventory. This parameter makes sure the
remote server uses the /usr/bin/python3 Python 3 executable instead
of /usr/bin/python (Python 2.7), which is not present on recent Ubuntu versions.
When you’re finished, save and close the file by pressing CTRL+X then Y and ENTER to
confirm your changes.
Whenever you want to check your inventory, you can run:

ansible-inventory --list -y

Step 3 — Testing Connection

After setting up the inventory file to include your servers, it’s time to check if Ansible is able to
connect to these servers and run commands via SSH.
For this guide, we’ll be using the Ubuntu root account because that’s typically the only account
available by default on newly created servers. If your Ansible hosts already have a regular sudo user
created, you are encouraged to use that account instead.
You can use the -u argument to specify the remote system user. When not provided,
Ansible will try to connect as your current system user on the control node.
From your local machine or Ansible control node, run:

ansible all -m ping -u root

This command will use Ansible’s built-in ping module to run a connectivity test on all nodes from
your default inventory, connecting as root. The ping module will test:

• if hosts are accessible;


• if you have valid SSH credentials;
• if hosts are able to run Ansible modules using Python.

If this is the first time you’re connecting to these servers via SSH, you’ll be asked to
confirm the authenticity of the hosts you’re connecting to via Ansible. When prompted,
type yes and then hit ENTER to confirm.
Once you get a "pong" reply from a host, it means you’re ready to run Ansible
commands and playbooks on that server.
Exercise 7. Adding Build Triggers
What this exercise is about:
In this step-by-step lab we'll walk through Jenkins Adding Triggers to the Jobs.
What you should be able to do:
This experiment is intended to explore Adding Triggers to the Jobs for DevOps operations.
Introduction:
Build triggers decide when a Jenkins job is run. Whether it happens based on an external event (e.g.
a push to a git repository), on a schedule, or after another job is completed, there are plenty
of options to trigger builds.
Requirements:
Docker Toolbox, Git, Atom, ConEmu and an Internet connection
Instructor exercise overview:
From the project page, click on Configure.
Types of Triggers
1. Trigger builds remotely
2. Build after other projects are built
3. Build periodically
4. Poll SCM
Execute Jobs Remotely
Jobs can be triggered remotely outside of Jenkin’s. This is very useful when you would like the jobs
to be triggered based on some event or part of the logic you have written as part of your script. This is
also, the way you would trigger the job based on the activities performed on the repositories. e.g.,
adding new commit to git hub.

• Click on job1 and then select Configure


• In Build Triggers, check Trigger builds remotely
• Define a token (a more complex one than the example)
• Save the job.
• To trigger the job, you need two things:
• a user
• the user's API token
• We will use the admin (first user we created) user's API token to trigger this job. You can find this
at Jenkins homepage -> People -> admin -> Configure.
Click on Show API Token and note down the user's API token.
• Visit the trigger URL from a browser or use curl:
http://<USER>:<API_TOKEN>@<JENKINS_URL>/job/job1/build?token=<JOB_TOKEN>
Example:
http://admin:552dab89b070c0fcc3fad281c51318ad@10.40.1.14:8080/job/job1/build?
token=mytoken
• This will trigger the build.
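The trigger URL can also be assembled from its parts in a small script. The sketch below uses the example values from this section — every value is a placeholder, so substitute your own user, tokens and Jenkins address; the curl line is commented out so nothing fires by accident:

```shell
# Assemble the remote-trigger URL; all values are placeholders from the example.
USER=admin
API_TOKEN=552dab89b070c0fcc3fad281c51318ad
JENKINS_URL=10.40.1.14:8080
JOB_TOKEN=mytoken
TRIGGER_URL="http://${USER}:${API_TOKEN}@${JENKINS_URL}/job/job1/build?token=${JOB_TOKEN}"
echo "$TRIGGER_URL"
# curl -X POST "$TRIGGER_URL"   # uncomment to actually trigger the build
```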

Building Jobs Pipeline


One of the important features of Jenkins is its ability to build a pipeline of jobs, where, based on the
outcome of one job, another can be triggered. For example, only if you are able to compile the code
would you want to proceed with testing; otherwise it's quite useless to do so. Using the Build after
other projects are built trigger, this can be easily achieved. We will be creating a job pipeline using
this feature in the next LAB.
Scheduled Runs
As with cronjobs or scheduled jobs, it's possible to define a run schedule with Jenkins.
Polling SCM
This option allows Jenkins to regularly poll the source code management system (e.g. a remote git
repository) to check if there are any updates, and launch a job based on them. Ideally, post-commit
hooks/webhooks with git should trigger the builds, however that may not always be possible. For
example, if you are hosting Jenkins inside a private network which is not reachable by a git
repository hosted in the cloud, triggering a webhook will not be possible. In such cases the next best
option is to poll the git repository at a regular interval and trigger the builds.
Exercise 8. Building a Pipeline
What this exercise is about:
In this step-by-step lab we'll walk through Building a Pipeline for Jenkins Jobs
What you should be able to do:
This experiment is intended to explore Building a Pipeline for Jenkins Jobs for DevOps operations.
Introduction:
DevOps provides integration with popular open source and third-party tools and services across the
entire DevOps workflow for Building a Pipeline for Jenkins Jobs.
Requirements:
Docker Toolbox, Git, Atom, ConEmu and an Internet connection
Instructor exercise overview:
Creating More Jobs to Add to the Pipeline
Let's create two more jobs so that we can connect them together to set up a mock build pipeline.
To create the new jobs, you can click on New Item, name the job and select the last option, which
says Copy from Another Job.

For this tutorial, let's create jobs named job2 and job3 which should be copies of job1.
At the end of this exercise, you should see 3 jobs listed on Jenkins dashboard as above. While creating
Job2 and Job3 for the first time, do not use any build triggers. We will update the configurations while
defining upstream/downstream.
Connecting jobs
Let's now create a pipeline by connecting these jobs together. We will create a pipeline with
job1 => job2 => job3
where job2 should run only if job1 is built successfully, and should trigger job3 once it builds
successfully. We could either define both connections from job2, or go to job1 and job3 and define
their relationship with job2. We will do the latter.
Let’s open Job1 configurations and from Post Build Actions select Build Other Projects and select
job2.
Exercise 9. Creating Java Build Project
What this exercise is about:
In this step-by-step lab we'll walk through Creating Java Build Project for DevOps.
What you should be able to do:
This experiment is intended to explore Creating Java Build Project for DevOps operations and link
with other applications.
Introduction:
DevOps provides integration with popular open source and third-party tools and services across the
entire DevOps workflow for Creating Java Build Project
Requirements:
Docker Toolbox, Git, Atom, ConEmu and an Internet connection
Instructor exercise overview:
Creating Build Job for a Java Project
In this LAB, we are going to create a job to build/compile a sample java application with maven.
Creating Maven Project
Before we start to create our build job, we need to install maven-integration plugin.

To create a build project,


• From New Item, select Maven Project and provide it a name e.g. "build".
Note: If you do not see Maven Project option on the job creation page, install Maven Integration
Plugin from plugins manager

From the configuration screen, scroll to Source Code Management, select Git and provide the
repository URL.
From Build Triggers select Poll SCM. Let's configure it to poll every two minutes using the following
schedule:
H/2 * * * *
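A cron-style schedule has five fields, and the leading H tells Jenkins to hash the job name into the field's range, spreading polls out so every job does not hit the SCM at the same instant. The sketch below splits the schedule above to label each field (plain shell; set -f keeps the asterisks literal):

```shell
# Split "H/2 * * * *" into its five cron fields and label them.
SCHEDULE="H/2 * * * *"
set -f                  # disable globbing so each * stays a literal field
set -- $SCHEDULE
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
set +f
```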
Scroll down to the Build step and you should see Root POM selected, since it's a Maven Project. In
the Goals and options section, provide compile as a goal.
In addition to compile, the following are the goals a Maven project can take:
1. validate
2. compile - compile source code
3. test - unit tests
4. package - build jar/war
5. integration-test
6. verify
7. install
8. deploy
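These goals are phases of Maven's default lifecycle, and running one phase runs every phase before it (so mvn package also validates, compiles and tests). The loop below only prints the phases in order; no Maven installation is needed to run it:

```shell
# Print the default-lifecycle phases in the order Maven executes them.
for phase in validate compile test package integration-test verify install deploy; do
  echo "$phase"
done
```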
• Save the job and click on Build Now. The following is a snippet from the output of the build job.
