
Ramrao Adik Institute of Technology

DEPARTMENT OF INFORMATION TECHNOLOGY


ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 1

EXPERIMENT TITLE To understand DevOps: Principles, Practices, and DevOps Engineer Role and Responsibilities

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 21/07/2021

SUBMISSION DATE 28/07/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
DevOps Experiments

Name: Ayush Premjith


Roll No.:19IT2034
Batch: B-B2

Experiment No. : 1
Aim: To understand DevOps: Principles, Practices, and DevOps
Engineer Role and Responsibilities
Theory:

What is DevOps?
DevOps stands for development and operations. It’s a practice that
aims at merging development, quality assurance, and operations
(deployment and integration) into a single, continuous set of
processes. This methodology is a natural extension of Agile and
continuous delivery approaches.

By adopting DevOps, companies gain three core advantages that cover the technical, business, and cultural aspects of development.
Higher speed and quality of product releases. DevOps speeds up product release by introducing continuous delivery, encouraging faster feedback, and allowing developers to fix bugs in the system at an early stage. Practicing DevOps, the team can focus on the quality of the product and automate a number of processes.

Faster responsiveness to customer needs. With DevOps, a team can react to change requests from customers faster, adding new features and updating existing ones. As a result, the time-to-market and value-delivery rates increase.
Better working environment. DevOps principles and practices lead to better communication between team members, and to increased productivity and agility. Teams that practice DevOps are considered to be more productive and cross-skilled. Members of a DevOps team, both those who develop and those who operate, act in concert.
These benefits come only with the understanding that DevOps isn't merely a set of actions, but rather a philosophy that fosters cross-functional team communication. More importantly, it doesn't require substantial technical changes, as the main focus is on altering the way people work. Success depends entirely on adhering to DevOps principles.
DevOps principles
In 2010 Damon Edwards and John Willis came up with the CAMS
model to showcase the key values of DevOps. CAMS is an acronym
that stands for Culture, Automation, Measurement, and Sharing. As
these are the main principles of DevOps, we’ll examine them in more
detail.

Culture
DevOps is initially the culture and mindset forging strong collaborative bonds
between software development and infrastructure operations teams. This culture is
built upon the following pillars.
Constant collaboration and communication. These have been the building blocks
of DevOps since its dawn. Your team should work cohesively with the understanding
of the needs and expectations of all members.
Gradual changes. The implementation of gradual rollouts allows delivery teams to
release a product to users while having an opportunity to make updates and roll back
if something goes wrong.
Shared end-to-end responsibility. When every member of a team moves towards one goal and is equally responsible for a project from beginning to end, they work cohesively and look for ways of facilitating other members' tasks.
Early problem-solving. DevOps requires that tasks be performed as early in the project lifecycle as possible, so that any issues are addressed more quickly.
DevOps model and practices
DevOps requires a delivery cycle that comprises planning, development, testing,
deployment, release, and monitoring with active cooperation between different
members of a team.
Agile planning
In contrast to traditional approaches to project management, Agile planning organizes work in short iterations (e.g. sprints) to increase the number of releases. The team outlines only high-level objectives, while making detailed plans two iterations in advance. This allows for flexibility and pivots once the ideas are tested on an early product increment.

Continuous development
The concept of continuous “everything” embraces continuous or iterative software
development, meaning that all the development work is divided into small portions
for better and faster production. Engineers commit code in small chunks multiple
times a day for it to be easily tested.
Continuous automated testing
A quality assurance team sets up automated testing of committed code using tools like Selenium, Ranorex, UFT, etc. If bugs and vulnerabilities are revealed, they are sent back to the engineering team. This stage also entails version control to detect integration problems in advance. A Version Control System (VCS) allows developers to record changes in files and share them with other members of the team, regardless of their location.
Continuous deployment
At this stage, the code is deployed to run in production on a public server. Code
must be deployed in a way that doesn’t affect already functioning features and can
be available for a large number of users. Frequent deployment allows for a “fail fast”
approach, meaning that the new features are tested and verified early. There are
various automated tools that help engineers deploy a product increment. The most
popular are Chef, Puppet, Azure Resource Manager, and Google Cloud Deployment
Manager.
Continuous monitoring
The final stage of the DevOps lifecycle is oriented to the assessment of the whole
cycle. The goal of monitoring is detecting the problematic areas of a process and
analyzing the feedback from the team and users to report existing inaccuracies and
improve the product’s functioning.
DevOps tools
The main reason to implement DevOps is to improve the delivery pipeline and
integration process by automating these activities. As a result, the product gets a
shorter time-to-market. To achieve this automated release pipeline, the team must
acquire specific tools instead of building them from scratch.

Currently, existing DevOps tools cover almost all stages of continuous delivery,
starting from continuous integration environments and ending with containerization
and deployment. While today some of the processes are still automated with custom
scripts, mostly DevOps engineers use various products. Let’s have a look at the
most popular ones.

Server configuration tools are used to manage and configure servers in DevOps.
Puppet is one of the most widely used systems in this category. Chef is a tool for
infrastructure as code management that runs both on cloud and hardware servers.
One more popular solution is Ansible, which automates configuration management, cloud provisioning, and application deployment.

CI/CD stages also require task-specific tools for automation, such as Jenkins, which comes with lots of additional plugins to tweak continuous delivery workflows, or GitLab CI, a free and open-source CI/CD instrument presented by GitLab.

Containerization and orchestration stages rely on a bunch of dedicated tools to build, configure, and manage containers that allow software products to function across various environments. Docker is the most popular instrument for building self-contained units and packaging code into them. The widely used container orchestration platforms are the commercial OpenShift and the open-source Kubernetes.

Monitoring and alerting in DevOps is typically facilitated by Nagios, a powerful tool that presents analytics in visual reports, or by the open-source Prometheus.

A DevOps Engineer: role and responsibilities


In the book Effective DevOps by Ryn Daniels and Jennifer Davis, the existence of a
specific DevOps person is questioned: “It doesn’t usually make much sense to have
a director of DevOps or some other position that puts one person in charge of
DevOps. DevOps is at its core a cultural movement, and its ideas and principles
need to be used throughout entire organizations in order to be effective.”

Some other DevOps experts partly disagree with this statement. They also believe that a team is the key to effectiveness, but in this interpretation a team, including developers, a quality assurance leader, a code release manager, and an automation architect, works under the supervision of a DevOps engineer.
So, the title of DevOps Engineer is an arguable one. Nonetheless, DevOps engineers are still in demand on the IT labor market. Some consider this person to be either a system administrator who knows how to code or a developer with a system administrator's skills.
DevOps engineer responsibilities
In a way, both definitions are fair. The main function of a DevOps engineer is to
introduce the continuous delivery and continuous integration workflow, which
requires the understanding of the mentioned tools and the knowledge of several
programming languages.

Depending on the organization, job descriptions differ. Smaller businesses look for engineers with broader skillsets and responsibilities. The basic and widely accepted responsibilities of a DevOps engineer are:

• Writing specifications and documentation for the server-side features

• Continuous deployment and continuous integration (CI/CD) management

• Performance assessment and monitoring


• Infrastructure management

• Cloud deployment and management

• Assistance with DevOps culture adoption


Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 2

EXPERIMENT TITLE To understand Version Control System / Source Code Management, install git and create a GitHub account.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 28/07/2021

SUBMISSION DATE 4/08/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. : 2
Aim: To understand Version Control System / Source Code
Management, install git and create a GitHub account

Theory:

What is version control?

Version control allows you to keep track of your work and helps you to easily explore the changes you have made, be it data, coding scripts, notes, etc. You are probably already doing some type of version control if you save multiple files, such as Dissertation_script_25thFeb.R, Dissertation_script_26thFeb.R, etc.

This approach will leave you with tens or hundreds of similar files, making it rather cumbersome to directly compare different versions, and it is not easy to share among collaborators. With version control software such as Git, version control is much smoother and easier to implement. Using an online platform like GitHub to store your files means that you have an online backup of your work, which is beneficial for both you and your collaborators.

Git uses the command line to perform more advanced actions, and it is worth working through extra learning resources to get more comfortable with it. Until then, pairing Git with an online platform like GitHub offers a gentle introduction, so you can start using version control in minutes.

What are the benefits of using version control?

Having a GitHub repo makes it easy for you to keep track of collaborative and personal projects: all files necessary for certain analyses can be held together, and people can add their code, graphs, etc. as the projects develop. Each file on GitHub has a history, making it easy to explore the changes that occurred to it at different points in time. You can review other people's code, add comments to certain lines or the overall document, and suggest changes. For collaborative projects, GitHub allows you to assign tasks to different users, making it clear who is responsible for which part of the analysis. You can also ask certain users to review your code. For personal projects, version control allows you to keep track of your work and easily navigate among the many versions of the files you create, whilst also maintaining an online backup.

Step 1: Install Git and create a GitHub account
Download and install Git from the official Git website. Then go to the GitHub sign-up page and click Sign up. Fill in a username, email address, and password, and click Create account. Once the account is created, you can sign in and see your account dashboard.

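Once Git is installed, the setup can be verified from the command line. A minimal sketch; the name and email below are placeholders to be replaced with your own details:

```shell
# Check that Git is installed and report its version
git --version

# One-time identity setup used to label your commits (placeholder values)
git config --global user.name "Ayush Premjith"
git config --global user.email "you@example.com"

# Confirm the configuration took effect
git config --global user.name
```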
Conclusion: Understood the concept of a version control system, installed Git, and created a GitHub account.


Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 3

EXPERIMENT TITLE To Perform various GIT operations on local and Remote repositories using GIT Cheat-Sheet.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 28/07/2021

SUBMISSION DATE 4/08/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. : 3
Aim: To Perform various GIT operations on local and Remote
repositories using GIT Cheat-Sheet

Theory: Git is a distributed version control system that helps developers collaborate on projects of any scale.

Linus Torvalds, the developer of the Linux kernel, created Git in 2005 to
help control the Linux kernel's development.

What is a Distributed Version Control System?


A distributed version control system is a system that helps you keep
track of changes you've made to files in your project.

This change history lives on your local machine and lets you revert to a
previous version of your project with ease in case something goes
wrong.

Git makes collaboration easy. Everyone on the team can keep a full
backup of the repositories they're working on on their local machine.
Then, thanks to an external server like BitBucket, GitHub or GitLab, they
can safely store the repository in a single place.

This way, different members of the team can copy it locally and
everyone has a clear overview of all changes made by the whole
team.
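The local workflow described above can be sketched with common cheat-sheet commands. The repository and file names here are illustrative, and the remote operations are shown as comments because they require a real remote URL:

```shell
# Create a local repository and record a change in its history
mkdir demo-repo && cd demo-repo
git init                                 # initialise an empty repository
git config user.name "Ayush Premjith"    # identity used for this repository
git config user.email "you@example.com"

echo "DevOps notes" > notes.txt
git status                               # show the untracked file
git add notes.txt                        # stage the change
git commit -m "Add DevOps notes"         # record it in the history
git log --oneline                        # inspect the commit history

# Remote operations (require a real GitHub repository URL):
#   git remote add origin https://github.com/<user>/demo-repo.git
#   git push -u origin master
#   git pull origin master
#   git clone https://github.com/<user>/demo-repo.git
```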
Conclusion: Performed various Git operations on local and remote repositories using the Git cheat-sheet.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 4

EXPERIMENT TITLE To understand Continuous Integration, install and configure Jenkins with Maven/Ant/Gradle to setup a build Job.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 4/8/2021, 11/8/2021

SUBMISSION DATE 18/08/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. 4

Aim: To understand Continuous Integration, install and configure Jenkins with Maven/Ant/Gradle to setup a build Job.
Theory: Jenkins is an open-source continuous integration (CI) and continuous delivery (CD) tool built in Java. Jenkins builds, tests, and deploys software projects, and is one of the most useful development tools to master. In this experiment we install Jenkins on Ubuntu 18.04.
Jenkins is used by teams of different sizes, for projects in various languages such as Java, Ruby, .NET, PHP, etc. Jenkins is platform-independent, which means you can use it on Windows, Linux, or any other operating system.

Why Use Jenkins?

In order to understand Jenkins, you must have an understanding of Continuous Integration (CI) and Continuous Delivery (CD):

• Continuous Integration – the practice of constantly merging development work with the main branch.

• Continuous Delivery – the continual delivery of code to an environment once the code is ready to ship. This could be to staging or production. The product is delivered to a user base, which can be QAs or customers, for review and inspection.

Developers regularly update the code in a shared repository (such as GitHub or TFS). With nightly builds, changes made in the source code throughout the day are built together at the end of the day, making it hard to find the errors. This is where Jenkins comes in: as soon as a developer commits any change to the shared repository, Jenkins will immediately trigger a build and, in case of an error, immediately notify the team (Continuous Integration, CI).
With Jenkins, we can also set up post-build tests (unit test, performance test, acceptance test) in an automated manner. Whenever there is a successful build, Jenkins will perform these tests and generate a report (Continuous Delivery, CD).

Installation Steps of Jenkins on Ubuntu

Step 1: Update the Ubuntu package repository

$ sudo apt-get update

Step 2: Install the Java Development Kit

$ sudo apt-get install default-jdk

Installation and Configuration of Jenkins

Step 1: Install the JDK

The first Jenkins prerequisite is the JDK. The `sudo apt-get install default-jdk` command above installs both the JDK and the JRE.
Step 2: Install a web server

Make sure Nginx is up and running on your Ubuntu-based machine by typing your server's IP into your web browser and hitting Enter. You should be greeted by the Nginx welcome screen.
Step 3: Install Jenkins

Next we will install Jenkins. Issue the following four commands in sequence to install Jenkins on Ubuntu:

wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
Type in the administrator password

By default Jenkins runs on port 8080. To open Jenkins, type the IP of your VPS followed by the port number 8080 into your browser; for a local installation it would look something like 127.0.0.1:8080.
You will be asked to enter the administrator password. You can find it in the /var/lib/jenkins/secrets/initialAdminPassword file and display it with the cat command:

cat /var/lib/jenkins/secrets/initialAdminPassword
Conclusion: Successfully installed and configured Jenkins on Ubuntu.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 5

EXPERIMENT TITLE To Build the pipeline of jobs using Maven / Gradle / Ant
in Jenkins, create a pipeline script to Test and deploy
an application over the tomcat server.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 4/8/2021, 11/8/2021

SUBMISSION DATE 18/08/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
PRACTICAL NO: 5

Aim: To build the pipeline of jobs using Maven / Gradle / Ant in Jenkins, create a
pipeline script to test and deploy an application over the tomcat server.

LO Number and Statement:

LO1: To understand the fundamentals of DevOps engineering and be fully proficient with DevOps terminologies, concepts, benefits, and deployment options to meet your business requirements.

LO3: To understand the importance of Jenkins to build and deploy software applications in a server environment.

Theory:

Tomcat:

Tomcat is an open-source Java servlet container that implements several Java Enterprise specifications, such as the WebSocket API, JavaServer Pages (JSP) and, last but not least, the Java Servlet API. It is still one of the most widely used Java servers thanks to its good extensibility and its proven, well-tested, and durable core engine. Since the term "servlet" appears repeatedly here: a Java servlet is software that enables a web server to handle dynamic (Java-based) content over the HTTP protocol.

Jenkins Tomcat deploy prerequisites:

The following pieces of software are required to follow this example of a Jenkins
deployment of a WAR file to Tomcat:

• Java web application that Jenkins can deploy to Tomcat;


• Git source code management tool installation;
• Java 8 or newer installation of the JDK;
• build tool -- this tutorial uses Maven, but any build tool, such as Ivy, ANT or
Gradle, that can package a Java application into a WAR file will do; and
• Jenkins CI tool installation.

A WAR file for Jenkins to deploy:


Step 1: Add to Tomcat a user with deployment rights

For a successful Jenkins Tomcat deploy of a WAR file, you must add a new user
to Tomcat with manager-script rights. You can do this with an edit of the
tomcat-users.xml file, which can be found in Tomcat's conf directory.

<!-- User to deploy WAR file from Jenkins to Tomcat -->


<user username="username" password="<password>" roles="manager-script" />

After you edit the tomcat-users.xml file, it's a good idea to bounce the Tomcat server
to confirm the changes have taken effect.

Step 2: Add the 'Deploy WAR/EAR to a container Jenkins' plugin

Out of the box, there are no built-in features that perform a Jenkins WAR file
deployment to Tomcat. That means a Jenkins Tomcat deploy plugin must be installed
in the CI tool to make a deployment happen.
The most popular Jenkins Tomcat deployment plugin is named Deploy to container,
which can be installed through the Plugin Manager tab under the "Manage Jenkins"
section of the tool.

Install the Deploy to container Jenkins Tomcat deploy plugin.

Step 3: The Jenkins build job

With the Jenkins Tomcat deployment plugin installed, it's time to create a new
Jenkins build job that can build an application and deploy a packaged WAR file to
Tomcat.
Step 3A: Create a Jenkins freestyle project

The Jenkins build job we need to create will be named deploy-war-from-jenkins-to-tomcat, and it will be a freestyle project type.

This freestyle Jenkins job will build a WAR and deploy it to Tomcat.

Step 3B: Configure standard build job properties

The Jenkins build job will be configured with the following properties:

JDK: java8
Git Repository URL: https://github.com/cameronmcnz/rock-paper-scissors.git
Git branch specifier: */patch-1
Maven Goals: clean install

Basic Jenkins build job settings for a Tomcat WAR deployment.


Step 3C: The Jenkins Tomcat deploy plugin post-build action

The Jenkins deploy war/ear to a container post-build action.

After a build, the final step of a Jenkins pipeline deploy to Tomcat is to use the Deploy
to container plugin in a post-build action.
Three of the four settings used by the Deploy WAR/EAR to a container plugin can
be typed in directly:
WAR/EAR files: **/*.war
Context path: rps
Containers: Tomcat 8.x
Tomcat URL: http://localhost:8081

To configure the credentials, you must click the Add button next to the empty entry
field and create a new Jenkins credentials object:

Jenkins credential used by the Deploy war/ear to a container plugin.

The username and password need to match what was entered into the tomcat-users.xml file in an earlier step:
Username: Sam
Password: *******

Step 3D: Run the Jenkins build job

Now that you have specified all of the configurations, the Jenkins build job can be
saved and run.
When the build job finishes, the Jenkins Tomcat deploy of a WAR file will have
also completed, and a file named rps.war will be visible in the webapps directory
of Tomcat.
A WAR file deployed to Tomcat through Jenkins.

Step 4: Test the deployed WAR file

With the WAR file deployed, test the application by running Tomcat and pointing
your browser to the following URL:

http://localhost:8081/rps/#

Jenkins Tomcat WAR deployment summary

To recap, here is a summary of the steps required to perform a Jenkins Tomcat WAR
file deployment:

1. Add a user with WAR deployment rights to the tomcat-users.xml.


2. Add the Deploy to container Jenkins Tomcat plugin.
3. Create a Jenkins build job with a Deploy to container post-build action.
4. Run the Jenkins build job.
5. Test to ensure the Jenkins deployment of the WAR file to Tomcat was successful.
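The same steps can also be expressed as a declarative pipeline script rather than a freestyle job. A minimal sketch, assuming a Maven installation named Maven3 is configured in Jenkins, and using tomcat-creds as a hypothetical credentials ID holding the tomcat-users.xml username and password:

```groovy
pipeline {
    agent any
    tools { maven 'Maven3' }   // assumed name of a Maven installation in Jenkins
    stages {
        stage('Build and test') {
            steps {
                git branch: 'patch-1',
                    url: 'https://github.com/cameronmcnz/rock-paper-scissors.git'
                sh 'mvn clean install'   // runs the unit tests and packages the WAR
            }
        }
        stage('Deploy to Tomcat') {
            steps {
                // Pipeline step provided by the Deploy to container plugin
                deploy adapters: [tomcat8(credentialsId: 'tomcat-creds',
                                          url: 'http://localhost:8081')],
                       contextPath: 'rps',
                       war: '**/*.war'
            }
        }
    }
}
```

After a successful run, the application would be reachable at the same URL as in the freestyle setup, http://localhost:8081/rps/.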

Conclusion: We have successfully built the pipeline of jobs using Maven / Gradle / Ant in Jenkins and also created a pipeline script to test and deploy an application over the Tomcat server.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 6

EXPERIMENT TITLE To understand Jenkins Master-Slave Architecture and scale your Jenkins standalone implementation by implementing slave node.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 18/8/2021, 25/8/2021

SUBMISSION DATE 1/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
PRACTICAL NO: 6
Aim: To understand Jenkins Master-Slave Architecture and scale your Jenkins standalone
implementation by implementing slave nodes.

LO Number and Statement:
LO1 – To understand the fundamentals of DevOps engineering and be fully proficient with DevOps terminologies, concepts, benefits, and deployment options to meet your business requirements.
LO3 – To understand the importance of Jenkins to build and deploy software applications in a server environment.

Theory:

Need for the Jenkins Master-Agent (Slave) Architecture:


When we build Jenkins jobs on a single Jenkins master node, Jenkins uses the resources of the base machine, and if no executor is available, jobs are queued on the Jenkins server. Sometimes you might need several different environments to test your builds, which a single Jenkins server cannot provide; it is not recommended to run jobs that require different environments on the same system. In such scenarios we need separate machines, each with a different environment, that take specific jobs from the master to build.
On the same Jenkins setup, multiple teams work with their jobs. All jobs run on the same base operating system, which has limited resources. Also, we don't want to put our private data on a system where other teams can read it.

Jenkins Distributed Architecture:

Jenkins uses a master-slave architecture to manage distributed builds. The machine where we install the Jenkins software is the Jenkins master, and it runs on port 8080 by default. On each slave machine we install a program called an agent, which requires a JVM. The agent executes the tasks provided by the Jenkins master. We can launch any number of agents and, from the Jenkins master, configure which task will run on which agent server by assigning the agent to the task.
Steps to Configure Jenkins Master and Slave Nodes:
1. Click on Manage Jenkins in the left corner on the Jenkins dashboard.
2. Click on Manage Nodes.

3. Select New Node and enter the name of the node in the Node Name field.
4. Select Permanent Agent and click the OK button. Initially, you will get only one
option, "Permanent Agent." Once you have one or more slaves you will get the
"Copy Existing Node" option.

5. After clicking OK, the following configuration page will appear for the machine Test; enter the required information.
6. The node is created.

7. Click on agent.jar to download the agent.

8. In the same directory where you downloaded the agent, run the command shown on the node's page.

We can see the node is connected.


Creating a Pipeline and Running It on the Slave Machine:
1. Click New Item in the top left corner of the dashboard.
2. Enter the name of your project in the Enter an item name field, select the Pipeline project, and click the OK button.
3. Enter a description (optional).
4. Go to the Pipeline section and make sure the Definition field has the Pipeline script option selected.
5. Copy and paste the declarative Pipeline script into the script field.
6. Click Save; it will redirect to the Pipeline view page.
7. Add C:\Program Files\Git\bin to the PATH environment variable of the slave node in Jenkins to get access to sh.
8. Then add C:\Program Files\Git\usr\bin to the PATH environment variable locally on the Windows slave machine to get access to nohup.
9. Restart the node.
10. Go back to your project dashboard.
11. On the left pane, click the Build Now button to execute your Pipeline.
12. After Pipeline execution is completed, the Pipeline view will be as shown below.
13. We can verify the history of executed builds under the Build History by clicking the build number.
14. Click on the build number and select Console Output. Here you can see that the pipeline ran on the slave machine.
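The declarative Pipeline script referenced above is only shown as a screenshot in the original. A minimal sketch that pins the run to the slave node created earlier; the node label Test and the echoed text are illustrative:

```groovy
pipeline {
    agent { label 'Test' }   // run every stage on the agent labelled "Test"
    stages {
        stage('Build') {
            steps {
                echo 'Running on the slave node'
                // sh works on a Windows node only after Git's bin directories
                // have been added to PATH, as described in the steps above
                sh 'echo build step executed'
            }
        }
    }
}
```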
Conclusion: Successfully understood Jenkins Master-Slave Architecture, created
a Slave Node and created a Pipeline Running on the Slave Machine.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 7

EXPERIMENT TITLE To Setup and Run Selenium Tests in Jenkins Using Maven.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 1/09/2021

SUBMISSION DATE 8/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
PRACTICAL NO: 7
Aim: To Setup and Run Selenium Tests in Jenkins Using Maven

LO Number and Statement:
LO1 – To understand the fundamentals of DevOps engineering and be fully proficient with DevOps terminologies, concepts, benefits, and deployment options to meet your business requirements.
LO3 – To understand the importance of Jenkins to build and deploy software applications on a server environment.
LO4 – To understand the importance of Selenium and Jenkins to test software applications.

Theory:

Selenium is a free (open-source) automated testing framework used to validate


web applications across different browsers and platforms. You can use multiple
programming languages like Java, C#, Python etc to create Selenium Test
Scripts. Testing done using the Selenium testing tool is usually referred to as
Selenium Testing.
Selenium Software is not just a single tool but a suite of software, each piece
catering to different Selenium QA testing needs of an organization. Here is the
list of tools
• Selenium Integrated Development Environment (IDE)
• Selenium Remote Control (RC)
• WebDriver
• Selenium Grid
At the moment, Selenium RC and WebDriver are merged into a single
framework to form Selenium 2. Selenium 1, by the way, refers to Selenium RC.

How to Choose the Right Selenium Tool for Your Needs

Selenium IDE: why choose it?
• To learn about concepts of automated testing and Selenium, including Selenese commands such as type, open, clickAndWait, assert, verify, etc., and locators such as id, name, xpath, css selector, etc.
• To execute customized JavaScript code using runScript.
• To export test cases in various formats.
• To create tests with little or no prior knowledge of programming.
• To create simple test cases and test suites that you can export later to RC or WebDriver.
• To test a web application against Firefox and Chrome only.

Selenium RC: why choose it?
• To design a test using a more expressive language than Selenese.
• To run your test against different browsers (except HtmlUnit) on different operating systems.
• To deploy your tests across multiple environments using Selenium Grid.
• To test your application against a new browser that supports JavaScript.
• To test web applications with complex AJAX-based scenarios.

WebDriver: why choose it?
• To use a certain programming language in designing your test case.
• To test applications that are rich in AJAX-based functionalities.
• To execute tests on the HtmlUnit browser.
• To create customized test results.

Selenium Grid: why choose it?
• To run your Selenium RC scripts in multiple browsers and operating systems simultaneously.
• To run a huge test suite that needs to complete in the soonest time possible.

Steps to install Maven and use it with TestNG Selenium:

For this tutorial, we will use the Eclipse (Juno) IDE for Java Developers to
set up the Selenium WebDriver project. Additionally, we need to add the
m2eclipse plugin to Eclipse to facilitate the build process and create the
pom.xml file. Let’s add the m2eclipse plugin to Eclipse with the following steps:

Step 1) In Eclipse IDE, select Help | Install New Software from Eclipse Main Menu.
Step 2) On the Install dialog, enter the
URL http://download.eclipse.org/technology/m2e/releases/ in the Work
with field and select the m2e plugin as shown in the following screenshot:
Step 3) Click on Next button and finish installation.

Configure Eclipse with Maven:

With the m2e plugin installed, we now need to create a Maven project.


Step 1) In Eclipse IDE, create a new project by selecting File |
New | Other from Eclipse menu.
Step 2) On the New dialog, select Maven | Maven Project and click Next
Step 3) On the New Maven Project dialog select the Create a simple project
and click Next
Step 4) Enter WebdriverTest in both Group Id: and
Artifact Id:, and click Finish

Step 5) Eclipse will create WebdriverTest with following structure:


Step 6) Right-click on JRE System Library and select the Properties option
from the menu. On the Properties for JRE System Library dialog box, make
sure Workspace default JRE is selected and click OK

Step 7) Select pom.xml from the Project Explorer; the pom.xml file will open
in the Editor section

Step 8) Add the Selenium, Maven, TestNG, Junit dependencies to pom.xml in the
<project> node:
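The dependency block added in this step appears only as a screenshot in the original. A sketch of what it typically contains follows; the group and artifact IDs are the real Maven coordinates, but the version numbers here are illustrative assumptions, not the exact ones from the lab:

```xml
<!-- Sketch only: versions are illustrative examples. -->
<dependencies>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>2.53.1</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.8</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
</dependencies>
```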
Step 9) To add the TestNG library in Eclipse, install it from Help | Eclipse
Marketplace and restart the IDE after installation.
Step 10) Create a New TestNG Class. Enter Package name as “example” and “NewTest”
in the Name: textbox and click on the Finish button as shown in the
following screenshot:
Step 11) Eclipse will create the NewTest class as shown in the following screenshot:

Step 12) Add the following code to the NewTest class:

This code will verify the title of the Guru99 Selenium page.
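The class itself is shown only as a screenshot in the original. A plausible sketch of such a TestNG + Selenium test is given below; the URL and the expected title substring are assumptions for illustration:

```java
package example;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class NewTest {
    @Test
    public void testTitle() {
        // Launch a browser and open the page under test
        WebDriver driver = new FirefoxDriver();
        driver.get("https://www.guru99.com/selenium-tutorial.html");
        // Verify the page title; the expected substring is an assumption here
        Assert.assertTrue(driver.getTitle().contains("Selenium"));
        driver.quit();
    }
}
```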
Step 13) Right-click on the WebdriverTest and select TestNG | Convert to TestNG.
Eclipse will create testng.xml which says that you need to run only one test with
the name NewTest as shown in the following screenshot:
Update the project and make sure that file appears in the tree Package Explorer
(right click on the project – Refresh).
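The generated testng.xml is shown only in a screenshot; a sketch of what Eclipse typically produces for this project follows (the suite and test names may differ in your project):

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
  <test name="Test">
    <classes>
      <class name="example.NewTest"/>
    </classes>
  </test>
</suite>
```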
Step 14) Now you need to run the test through this testng.xml. In the menu
bar, go to Run | Run Configurations, create a new TestNG launch
configuration, select the project, set the Suite field to testng.xml, and click Run.

Make sure that build finished successfully.


Step 15) Additionally, we need to add the following to pom.xml:
1. maven-compiler-plugin
2. maven-surefire-plugin
3. a reference to testng.xml

The maven-surefire-plugin is used to configure and execute tests. Here the
plugin is used to point at the testng.xml for the TestNG test and to generate
test reports. The maven-compiler-plugin helps compile the code using a
particular JDK version. Add the plugins in the following code snippet to
pom.xml in the <plugins> node:
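The plugin snippet itself is shown only in a screenshot. A sketch of a typical configuration follows; the plugin coordinates and `suiteXmlFiles` element are the standard Maven/Surefire ones, but the versions and JDK level here are illustrative assumptions:

```xml
<!-- Sketch only: plugin versions and JDK level are illustrative. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.2</version>
      <configuration>
        <suiteXmlFiles>
          <suiteXmlFile>testng.xml</suiteXmlFile>
        </suiteXmlFiles>
      </configuration>
    </plugin>
  </plugins>
</build>
```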
Step 16) To run the tests in the Maven lifecycle, right-click on
WebdriverTest and select Run As | Maven test. Maven will execute the tests
from the project. Make sure that the build finishes successfully.
Configure Jenkins to Run Maven with TestNg Selenium:

Step 1) Navigate to the Jenkins Dashboard (http://localhost:8080 by default) in
the browser window. Click on the New Item link to create a CI job.

Step 2) Select the Maven project button as shown in the following screenshot:
Using the Build a Maven Project option, Jenkins supports building and testing Maven
projects.

Step 3) Click on the OK button. A new job with the name “WebdriverTest” is
created in the Jenkins Dashboard.

Step 4) Go to the Build section of the new job.

In the Root POM textbox, enter the full path to pom.xml.
In the Goals and options section, enter “clean test”.

Click on the Apply button.

Step 5) On the WebdriverTest project page, click on the Build Now link.
Console Output:

Step 6) Once the build process is completed, go back to the WebdriverTest
project. The WebdriverTest project page displays the build history and links
to the results as shown in the following screenshot:
Step 7) Click on the “Latest Test Result” link to view the test results as shown in
the following screenshot:

Conclusion: Successfully set up and ran Selenium tests in Jenkins using Maven.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 8

EXPERIMENT TITLE To understand Docker Architecture and Container Life Cycle, install Docker and execute docker commands to manage images and interact with containers.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 8/09/2021

SUBMISSION DATE 15/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
NAME : Ayush Premjith
ROLL NO : 19IT2034
BATCH : B2

EXPERIMENT NO. 8
Practical name: Docker architecture and container Life cycle, install Docker and execute docker
commands to manage images and interact with containers.

Aim: To understand Docker architecture and container Life cycle, install Docker and execute docker
commands to manage images and interact with containers.

Lab Outcome: LO1: To understand the fundamentals of DevOps engineering and be fully
proficient with DevOps terminologies, concepts, benefits, and deployment options to meet your
business requirements.

LO5: To understand the concept of containerization and analyze the containerization of
an OS image and deployment of an application over Docker.

Theory:

What is Docker?

Docker is an open-source containerization platform. It enables developers to package


applications into containers—standardized executable components combining application
source code with the operating system (OS) libraries and dependencies required to run that code
in any environment. Containers simplify delivery of distributed applications, and have become
increasingly popular as organizations shift to cloud-native development and hybrid multi cloud
environments.

Developers can create containers without Docker, but the platform makes it easier, simpler, and
safer to build, deploy and manage containers. Docker is essentially a toolkit that enables
developers to build, deploy, run, update, and stop containers using simple
commands and work-saving automation through a single API.

Docker also refers to Docker, Inc., the company that sells the commercial
version of Docker, and to the Docker open-source project, to which Docker,
Inc. and many other organizations and individuals contribute.

Why to use Docker?


Docker is so popular today that “Docker” and “containers” are used
interchangeably. But the first container-related technologies were available
for years, even decades, before Docker was released to the public in 2013.

Most notably, in 2008, LinuXContainers (LXC) was implemented in the Linux kernel, fully
enabling virtualization for a single instance of Linux. While LXC is still used today, newer
technologies using the Linux kernel are available. Ubuntu, a modern, open-source Linux
operating system, also provides this capability.
Docker enhanced the native Linux containerization capabilities with technologies that enable:
● Improved and seamless portability: While LXC containers often reference
machine-specific configurations, Docker containers run without modification
across any desktop, data center and cloud environment.
● Even lighter weight and more granular updates: With LXC, multiple processes can be
combined within a single container. With Docker containers, only one process can run
in each container. This makes it possible to build an application that can continue
running while one of its parts is taken down for an update or repair.
● Automated container creation: Docker can automatically build a container based on
application source code.
● Container versioning: Docker can track versions of a container image, roll back to
previous versions, and trace who built a version and how. It can even upload only the
deltas between an existing version and a new one.
● Container reuse: Existing containers can be used as base images essentially like
templates for building new containers.
● Shared container libraries: Developers can access an open-source registry containing
thousands of user-contributed containers.

Implementation and Output:


● Install Docker
To install Docker we need the following commands:
Step 1:
$ sudo apt-get update

Step 2:
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Step 3: Add Docker’s official GPG key from the official website and perform the next commands
in the list.

● $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
● ls

● echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Step 4: Now Install Docker Engine:


$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

Docker Installation is complete on your system.

● Perform operation on docker:


Step 1: For that we need to use these commands:

● $ sudo su
● docker image ls

Step 2:
● Create a git repository docker-java
● git clone https://github.com/Nidhhiii/docker.java.git
● cd docker-java
● ls
Step 3: nano Dockerfile
Type the following code, then press Ctrl+X and Y to save.
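The Dockerfile contents are shown only as a screenshot in the original. A minimal sketch of a Dockerfile that compiles and runs the Java file from this repository follows; the base image and paths are assumptions:

```dockerfile
# Sketch only: base image and layout are assumptions, not the lab's exact file.
FROM openjdk:8
WORKDIR /app
COPY HelloWorld.java /app/
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
```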

Step 4: nano HelloWorld.java
Type the following code, then press Ctrl+X and Y to save.
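The HelloWorld.java contents are also shown only as a screenshot; a minimal sketch of what such a file typically contains:

```java
// Sketch of the HelloWorld.java used in this step; the exact message text
// in the lab's screenshot is an assumption.
public class HelloWorld {
    static final String MESSAGE = "Hello, World!";

    public static void main(String[] args) {
        System.out.println(MESSAGE);
    }
}
```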

Conclusion:

Hence, we understood docker architecture and container Life cycle, installed Docker and executed
docker commands to manage images and interact with containers.

Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 9

EXPERIMENT TITLE To learn Dockerfile instructions, build an image for a sample web application using Dockerfile. The web application can be python based, java based, or nodejs.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

15/09/2021, 22/09/2021
GIVEN DATE

SUBMISSION DATE 29/09/2021

CORRECTION DATE

REMARK
TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. 9

To learn Dockerfile instructions, build an image for a sample web application using a Dockerfile.

Aim - To learn Dockerfile instructions, build an image for a sample web application using a Dockerfile.

Theory - To deploy our custom web app using a Dockerfile, we need a server;
for that we used the nginx server. First we pull the nginx image from Docker
Hub, then create an nginx container and run it on port 80 (localhost). We
created a folder on the Ubuntu desktop containing index.html, style.css and
the Dockerfile. With these, we are able to deploy our app using the Dockerfile.
1. We cloned the Dockerfile from GitHub

2. Pulled the nginx image from Docker Hub

3. Then, using the docker images command, we are able to see the available
images

4. Now, go to the directory where our HTML, CSS and Dockerfile are.
We can run our container with the command:

docker run -it -p 80:80 myapp bash

Or we can build the image.
Our Dockerfile must contain the server image (base image).
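The Dockerfile for this experiment is not reproduced in the text. A minimal sketch of one matching the description above, with nginx as the base (server) image and the site files copied into nginx's default web root:

```dockerfile
# Sketch only: a minimal Dockerfile matching the steps described above.
FROM nginx:latest
COPY index.html /usr/share/nginx/html/
COPY style.css /usr/share/nginx/html/
EXPOSE 80
```

It would typically be built with `docker build -t myapp .` so that the `docker run -it -p 80:80 myapp bash` command above finds the image by name.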
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 10

EXPERIMENT TITLE Installation of Ansible on top of AWS instance, Configure SSH access to Ansible Host/Slave and set up ansible host and tested connection.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 20/09/2021

SUBMISSION DATE 6/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. 10

Name : Ayush Premjith


Roll NO. : 19IT2034
Batch : B/B2

Aim : Installation of Ansible on top of an AWS instance, configure SSH access to
Ansible Host/Slave, set up the ansible host and test the connection.
Theory : Ansible is an open-source software provisioning, configuration
management, and application deployment tool. Ansible was written by Michael
DeHaan. Ansible is agentless, connecting to remote machines temporarily via SSH.

Step 1: Install Ansible on ansible_master; on ansible_slave, install Python.

Launch 3 instances on AWS from the AWS Console.
Open MobaXterm and connect to the instances by providing their IPs and the
generated key.

Step 2 : Connect the host and master, check that they ping, and download
Ansible on the master machine.
With the above command we can see that the master and host are able to
connect with each other with the help of Ansible.

Conclusion : Thus, we successfully installed Ansible on top of an AWS
instance, configured SSH access to the Ansible Host/Slave, set up the ansible
host and tested the connection.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 11

EXPERIMENT TITLE Deploy a WEB APPLICATION BY PROVISIONING LAMP STACK USING ANSIBLE PLAYBOOK.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 6/10/2021

SUBMISSION DATE 13/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment NO. 11

Name : Ayush Premjith


Roll No. : 19IT2034
Batch : B/B2
Aim : DEPLOY A WEB APPLICATION BY PROVISIONING LAMP STACK USING ANSIBLE
PLAYBOOK

Theory : Playbooks use YAML format, so there is not much syntax needed, but
indentation must be respected. Ansible playbooks tend to be more of a
configuration language than a programming language. A playbook is a
collection of plays. Through a playbook, you can designate specific roles to
some of the hosts and other roles to other hosts. By doing so, you can
orchestrate multiple servers in very diverse scenarios, all in one playbook.

Each ansible playbook works with an inventory file. The inventory file contains a
list of servers divided into groups for better control for details like IP address
and SSH port for each host. Ansible Playbook to install LAMP stack with
necessary packages and tools.

Ensure that you have copied the workspace folder which has all the .yml files,
PHP files and the users.sql file present in it.

Set up Ansible and check that all clients respond to an ansible ping.
Create a workspace folder named “ansible”, cd into it, add the remote ansible
git repository to the folder with the following command, and move to the codes
folder.

After cd'ing to the codes folder, make an index.html file, write the content
in it, and then open lampstack_1.yml.
To deploy the app with Ansible, run the command: ansible-playbook
lampstack_1.yml

Now run the playbook; then go to the browser and enter the IP address of
ansible_slave.
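The lampstack_1.yml used in the lab is not reproduced in the text. A minimal sketch of what a LAMP provisioning play typically contains; the task list, package set and paths below are illustrative assumptions, not the lab's exact file:

```yaml
# Sketch only: an illustrative LAMP provisioning play.
- hosts: all
  become: yes
  tasks:
    - name: Install Apache, MySQL and PHP
      apt:
        name: [apache2, mysql-server, php, libapache2-mod-php]
        state: present
        update_cache: yes
    - name: Copy site content
      copy:
        src: index.html
        dest: /var/www/html/index.html
    - name: Ensure Apache is running
      service:
        name: apache2
        state: started
```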
Conclusion : Thus, we successfully provisioned a LAMP stack on Ubuntu using
an Ansible playbook on top of an AWS instance.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : DevOps Lab


COURSE CODE ITL503

EXPERIMENT NO. 12

EXPERIMENT TITLE Deploy a WEBSITE CODE ON THE NODE BY PROVISIONING MYSQL SERVER and DATABASE USING ANSIBLE PLAYBOOK.

NAME OF STUDENT Ayush Premjith

ROLL NO. 19IT2034

CLASS TE - IT

SEMESTER V

GIVEN DATE 13/10/2021

SUBMISSION DATE 20/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION: 4    UNDERSTANDING: 7    TOTAL MARKS: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No. 12

Name : Ayush Premjith


Roll No. : 19IT2034

Batch : B/B2.
Aim : DEPLOY A WEBSITE CODE ON THE NODE BY PROVISIONING MYSQL
SERVER and DATABASE USING ANSIBLE PLAYBOOK

Theory : Playbooks use YAML format, so there is not much syntax needed, but
indentation must be respected. Ansible playbooks tend to be more of a
configuration language than a programming language. A playbook is a
collection of plays. Through a playbook, you can designate specific roles to
some of the hosts and other roles to other hosts. By doing so, you can
orchestrate multiple servers in very diverse scenarios, all in one playbook.

Each ansible playbook works with an inventory file. The inventory file contains
a list of servers divided into groups for better control for details like IP address
and SSH port for each host.

Use an Ansible playbook to install the necessary packages and tools. Ensure
that you have copied the workspace folder which has all the .yml files, PHP
files and the users.sql file present in it.
To run and deploy the playbook:

$ansible-playbook mysqlmodule.yml
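The mysqlmodule.yml itself is not reproduced in the text. A sketch of what such a play typically contains, using Ansible's mysql_db and mysql_user modules; the database and user names and the password variable are placeholders, and these modules require a MySQL Python driver (e.g. PyMySQL) on the node:

```yaml
# Sketch only: names and variables are placeholders.
- hosts: all
  become: yes
  tasks:
    - name: Create application database
      mysql_db:
        name: appdb
        state: present
    - name: Create database user
      mysql_user:
        name: appuser
        password: "{{ db_password }}"
        priv: "appdb.*:ALL"
        state: present
    - name: Import schema and seed data
      mysql_db:
        name: appdb
        state: import
        target: /tmp/users.sql
```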
Conclusion : Thus, we successfully deployed website code on the node by
provisioning a MySQL server and database using an Ansible playbook.
Name: Ayush Premjith
Roll No. :19IT2034
Batch:B2

DEVOPS LAB
ASSIGNMENT NO. 1
Case study on DevOps Implementation in real world

Adobe is a fantastic case study for DevOps because their software-


engineering process shows a fundamental understanding of DevOps
thinking and a focus on quality attributes through automation-assisted
process. Recall, DevOps practitioners espouse a driven focus on quality
attributes to meet business needs, leveraging automated processes to
achieve consistency and efficiency.

How Adobe Does DevOps :-

First they moved their infrastructure from on-prem to cloud to be able to


scale their service, a process that took years to complete.

To all the companies that think building your own infrastructure and tools
from scratch is the best approach because no one can do it as well as
you: one of the main reasons Adobe is so successful today is that
they realized the scalability advantages of cloud early on, and let Amazon
handle the heavy lifting of building the best datacentres.

Adobe has been building out a managed service that includes continuous
integration/continuous delivery capabilities on the Adobe cloud
service. Now Adobe is extending the DevOps processes enabled by its
CI/CD platform to make them more customizable in addition to
expanding the scope of the application development tools it provides to
include support for single page application (SPA) JavaScript frameworks.

DevOps processes are now an integral part of how Adobe enables


customers to build a custom application experience using low-code tools
it provides in Adobe Cloud. In the case of Adobe, that DevOps framework
is made available via Cloud Manager, a feature of Experience Manager
Managed Services that enables the streamlining of code development
across staging and production environments in addition to providing code
inspection, security validation and performance testing capabilities.
Developers can now customize their CI/CD pipeline within Cloud Manager
against specific key performance indicators (KPIs) they define.

Adobe Experience Manager as a cloud service :-

Adobe Experience Manager is a modern, cloud-native application that


accelerates delivery of omni-channel personalized experiences
throughout the customer journey. Informed by data insights, Experience
Manager optimizes both marketer and developer workflows throughout
the entire content lifecycle. Adobe Experience Manager as Cloud Service
consists of industry-leading cloud applications for hybrid content
management (CMS) and digital asset management (DAM), each of which
can scale up to help meet the demands of even the largest global
corporations. The modern cloud-native architecture of Experience
Manager as a Cloud Service is built upon a container-based infrastructure
offering API-driven development and a guided DevOps process. It allows
IT to focus on strategic business outcomes instead of getting slowed
down by operational concerns. This helps organizations achieve faster
time to market while being flexible and extensible to meet unique
business requirements. With Experience Manager as a Cloud Service, your
teams can focus on innovating instead of planning for product upgrades.
New product features are thoroughly tested and delivered to your teams
without any interruption so that they always have access to the
state-of-the-art application.

Adobe’s Container Management Platform :-

To support building modern cloud-based applications that can easily run


and scale across multiple cloud infrastructure providers, Adobe has
developed a container-based application platform based on the
Kubernetes container orchestration engine. Adobe Experience Manager
as a Cloud Service is built on this new container-based platform. This
container platform provides core security functionality built-in to further
strengthen applications built upon it. This platform also provides more
flexibility in implementing stronger security and compliance controls on-
the-fly without disrupting existing applications. This platform will serve as
the foundation for future versions of Adobe’s solutions helping to ensure
that industry-standard security practices are built into everything we do.
DevOps process managed by Adobe is enabling many organizations to
smooth out what otherwise often can be a choppy digital business
transformation. Building a new generation of customer experience
applications requires access to modern tools that can transcend
organizational silos spanning data that resides in sales, marketing and
financial applications. Last month, Adobe announced it is acquiring
Marketo, a marketing automation platform to facilitate that transition.
Once that deal is closed, the DevOps framework Adobe has developed in
the cloud will be extended to include the Marketo platform.

It’s hard to say to what degree Adobe will be able to drive adoption of
DevOps processes across the Adobe Cloud platform. What might be even
more interesting to see is how many of the organizations that rely on
Adobe Cloud to develop applications will even realize they made the
transition to DevOps.
Name :- Ayush Premjith
Roll No :- 19IT2034
Batch :- B2.

Assignment No 2

Self Learning :- BITBUCKET

Name : - Ayush Premjith

Roll No. :- 19IT2034

Batch :- B(B2).

Bitbucket :-

Bitbucket Cloud is a Git-based code hosting and collaboration tool, built for
teams. Bitbucket's best-in-class Jira and Trello integrations are designed to
bring the entire software team together to execute on a project. We provide
one place for your team to collaborate on code from concept to Cloud, build
quality code through automated testing, and deploy code with confidence.

To create repositories in Bitbucket we will follow this process step by step:

Step 1) Create your Bitbucket account, or sign in with Google.
After logging in you will see the following interface.

Step 2) Create a new private or public repository and make the changes you want.
Step 3) After creating the repository you will have the dashboard for that
particular repository.

Step 4) Then clone the repository to your desktop/laptop with the cloning
link given under the Clone button on the repository dashboard.

Step 5) After cloning, it will ask for the password of your Bitbucket
account, and you have to provide it.

As you can see, the cloning is done in our terminal.

Step 6) We will now add some code to our repository by going to the Source
tab and providing the code with a file name.

We will commit the changes by clicking on the Commit button, and we can see
the committed changes in the repository dashboard.

In this way we can manage our repository.
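The clone, edit, commit and push cycle from the steps above can be sketched on the command line as follows. A bare local repository stands in for the Bitbucket remote so the sketch is self-contained; with a real account you would clone https://bitbucket.org/<username>/<repo>.git instead, and the file name and commit message below are illustrative:

```shell
set -e
workdir=$(mktemp -d)
git init --bare "$workdir/remote.git"               # stand-in for the Bitbucket repo
git clone "$workdir/remote.git" "$workdir/myrepo"   # Step 4: clone the repository
cd "$workdir/myrepo"
echo '<h1>Hello Bitbucket</h1>' > index.html        # Step 6: add some code
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -m "Add index.html"
git push origin HEAD                                # publish the commit to the remote
```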

Step 7) We can also add code and commit the changes via the code editor,
following the steps below.

As we can see, the committed changes are reflected when committing through
the code editor.

Conclusion :- Thus we have learned about Bitbucket, code management, and
repository management in Bitbucket.
