
JENKINS

Continuous Integration - Continuous Delivery (shown in Screenshot (18).png)


===========================================
Jenkins is a tool for performing CI-CD

Stage 1 (Continuous Download)


-----------------------------
whenever developers create or modify code they upload it into some version control
system (git, svn). Jenkins will immediately get a notification and it will download
the code from the remote version control server. If the download is unsuccessful,
jenkins will send a notification to the version control server admin.

Stage 2 (Continuous Build)


-------------------------------
the code downloaded in the previous stage has to be converted into an artifact. this
is called the build process and it can be done using tools like Ant, Maven, Gradle,
MSBuild, NAnt etc. these build tools are installed as plugins in jenkins and with their
help jenkins will convert the code into an artifact. this artifact can be in the form
of jar, war, ear, exe etc. files. if the build process fails, i.e. if jenkins is not
able to create an artifact, it will notify the developers that the build has failed,
and the developers will fix the defect and upload the modified code into the version
control server.

Stage 3 (Continuous Deployment)


------------------------------------
The artifact created in the previous stage has to be deployed into the QA environment
where a team of testers can access the application and test it. the QA environment
might be running on some application server like tomcat, jboss or weblogic. jenkins
will perform this deployment and if the deployment fails it will notify the
middleware team about the failure.

Stage 4 (Continuous Testing)


------------------------------------
Jenkins will run the automation testing programs created by the testers and check
if the application deployed in the QA environment is working properly. these
automation testing programs can be created by the testers using selenium, jmeter etc.
if the automation testing programs fail, jenkins will send a notification
to the testers and developers.

Stage 5 (Continuous Delivery)


-----------------------------------------
If the tests passed, Jenkins will deploy the application into the production environment
where it becomes live, i.e. the end users or clients start accessing it.

In AWS create 3 instances: Dev, QA and Prod servers


1. Dev server
----------
we need to install
JDK, Jenkins, Git, Maven
sudo apt-get install -y openjdk-8-jdk
sudo apt-get install -y git maven
download jenkins.war from the official jenkins site using wget (see the sketch below), then run:
java -jar jenkins.war
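A minimal sketch of the download step above, assuming the stable-war URL published on jenkins.io (verify the current link on the site):

wget https://get.jenkins.io/war-stable/latest/jenkins.war   # download the latest stable war
java -jar jenkins.war                                       # jenkins starts on port 8080 by default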

QA & Prodservers
---------------------
we need to install
tomcat8,tomcat8-admin
sudo apt-get inastall tomcat8
sudo apt-get inastall -y tomcat8-admin
goto cd /etc/tomcat8
tomcat-user.xml
{OR}
to open
sudo vim /etc/tomcat8/tomcat-users.xml(edit shown in Screenshot (19).png)
sudo service tomcat8 restart
(<users username="koti" password="koti@1995" roles="manager-script"/>)

Stage 1
add the git repository link
Stage 2
add the build step as maven
add the package goal
Stage 3
1. open the dashboard of jenkins
2. click on manage jenkins ----> search plugins
3. go to the available section -----> search for the "Deploy to container" plugin
4. click on install without restart
5. goto the development job ---> click on configure
6. goto post build actions ---> click on add post build action
7. click on deploy war/ear to a container ---> add the tomcat credentials and give the
private ip (http://ip:8080)

Stage 4 (Continuous Testing)


1. Create a new item ---> enter the item name as "Testing"
2. goto source code management ---> click on git
3. enter the github url of the testing repo
4. go to the build section
5. click on add build step ---> Execute shell, with the command (sketched below)
java -jar testing.war (echo "testing passed")
6. click on save and apply
7. go to the dashboard of jenkins ---> go to the testing job ---> click on the build icon
The above job will download the selenium testing programs from github
and execute them on the application that was deployed into the QA server in stage 3
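A hypothetical version of the "Execute shell" step above (testing.war is the artifact name used in these notes):

java -jar testing.war      # run the packaged test programs against the QA deployment
echo "testing passed"      # marker printed when the tests complete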
Linking the development job with the testing job
1. Go to the Development job ---> click on configure
2. go to post build actions ---> click on Add post build action
3. click on build other projects
4. enter the project name as "testing"

Copy artifacts created by the development job to the testing job


1. click on manage jenkins ---> manage plugins
2. go to the available section
3. search for the "copy artifacts" plugin ----> install it
4. goto dashboard ----> click on configure (development job)
5. goto add post build actions ---> click on Archive the artifacts
6. enter the file name as **/*.war -----> click on save
7. goto dashboard ---> click on configure (testing job) ----> goto the build section --> add build
step
8. click on copy artifacts from another project
9. enter the project name as development ---> save

Stage 5 (Continuous Delivery)


1. goto dashboard
2. goto the testing job ---> click on configure
3. goto post build actions --> add post build action
4. click on Deploy war/ear to a container --- **/*.war , prodwebapp and the private ip -->
apply and save

Master Slave
when we want to run multiple jobs on jenkins in parallel it might degrade the
performance of the jenkins server.
to overcome this problem we use master-slave jenkins, where we can distribute the
workload. this is also called a distributed jenkins build.

the main machine where jenkins is running is called the master machine. from the
master to the slave we should establish passwordless connectivity.
1. create a new AWS ubuntu instance and name it as slave
2. connect to the slave machine using gitbash
3. set a password for the default "ubuntu" user
sudo passwd ubuntu
4. edit the sshd_config file to allow ssh password access
sudo vim /etc/ssh/sshd_config
go to insert mode, search for "PasswordAuthentication" and change it from no to
yes
5. restart ssh
sudo service ssh restart
6. connect to the master machine
7. generate the ssh keys
ssh-keygen
this will generate two keys, public and private, in the .ssh folder
8. copy the public key to the slave machine
ssh-copy-id ubuntu@private_ip_of_slave
this command will copy the content of the public key into a file called
"authorized_keys" on the remote server
Note: once the above setup is done the master will be able to ssh to the slave without
a password (a consolidated sketch follows below)
ssh ubuntu@private_ip_of_slave
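A consolidated sketch of the steps above, assuming the slave's private IP is 10.0.0.5 (a hypothetical example):

# on the slave: enable password login once so the key can be copied
sudo passwd ubuntu
sudo vim /etc/ssh/sshd_config        # set PasswordAuthentication yes
sudo service ssh restart
# on the master: generate a key pair and push the public key to the slave
ssh-keygen                           # accept the defaults; keys land in ~/.ssh
ssh-copy-id ubuntu@10.0.0.5          # appends the public key to authorized_keys
ssh ubuntu@10.0.0.5                  # should now log in without a password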

Build Pipeline
---------------
this is a plugin which is used for getting a better gui for the jenkins interface. it shows
the list of upstream and downstream jobs in a linked format in a separate gui. here we
can see the individual logs of each job and control them.
---> click on manage jenkins --> click on manage plugins ---> goto the available section,
search for build pipeline and install it ---> go back and click on the + symbol,
enter some view name ---> go to upstream job and select the initial job.

Pipeline
-----------
This is a feature of jenkins where we can implement all the stages of ci/cd
from the level of code. this code is created using Groovy and the file is known as a
jenkins file.
generally this file is uploaded into the remote repository along with the
development code; from the github repo this jenkins file will trigger all the stages of
ci/cd.

Advantages
1. since jenkins files are checked into the version control system they give the team
members the ability to modify the code and still maintain multiple versions.
2. jenkins files or pipeline codes survive both planned and unplanned restarts of
the jenkins master.
3. pipeline code can implement real world scenarios like if conditions, loops etc., i.e.
if a stage fails we can execute one set of actions and if it passes we can
execute another set of actions.
4. jenkins files can implement ci/cd with a minimum number of plugins so they are much
faster.
5. it is possible to pause a jenkins file at any stage, take interactive input and
then continue from there.

Pipeline as code can be implemented in 2 ways


1. Scripted Pipeline
2. Declarative Pipeline

1. Scripted Pipeline
-------------------------------
Syntax:

node('master/slave')
{
stage('Stage name in ci/cd')
{
// actual groovy code for implementing this stage
}
}

2. Declarative Pipeline
-----------------------------------------
pipeline
{
agent any
stages
{
stage('Stage name in ci/cd')
{
steps
{
// actual groovy code implementing this stage
}
}
}
}
----------------------------------------------------------------
Create a scripted pipeline job (Screenshot (26).png)

goto the dashboard of jenkins and click on new item


goto the pipeline section and enter the groovy code, for example the sketch below
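A minimal scripted-pipeline sketch (the repository URL, stage names and shell steps are assumptions, not from these notes); the groovy between the EOF markers can be pasted into the job's pipeline section, or committed to the repo as a Jenkinsfile:

cat > Jenkinsfile <<'EOF'
node {
    stage('Build') {
        git 'https://github.com/example/myapp.git'   // hypothetical repo
        sh 'mvn package'                             // build the artifact
    }
    stage('Test') {
        sh 'echo "testing passed"'                   // placeholder test step
    }
}
EOF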

jenkins: started as Hudson, a very popular open source java based project; after
Oracle took it over, the project was forked and renamed jenkins.
continuous integration
5 stages in jenkins
1. continuous download (integrated with git) - download the code from git
2. build (the downloaded code is built into an artifact (jar/war) and checked whether
it is correct or not)
3. deployment (aws)
4. testing (UAT testing)
5. delivery
or
MASTER SLAVE jenkins (when autoscaling is not working)

GIT (distributed version control)

version control (new or old)

1. working directory
2. staging area
3. untracked files

VPC (Virtual Private Cloud) - region dependent


IGW (Internet Gateway)

resource group ---> vnets ---> subnets ---> nsg (network security group (firewall))
---> VM

GIT
version control software --> git

Repository: GitHub, Bitbucket, GitLab, VSTS


1. git init
2. git status
3. git add .
4. git commit -m "(commit message)"
5. git log --oneline
6. git checkout (commit)
7. git checkout -b (name) - create a branch
7.1 git reset HEAD (file name) --> (this command will reset your staging/index back to
the working directory, to the state of your last commit, effectively taking you back)
7.2 git reset --soft (commit id) - moves HEAD back; the changes stay in the staging area
7.3 git reset --mixed (commit id) - moves HEAD back (n-1); the changes go back to the
workspace (see the sketch after this list)
8. git revert HEAD (creates a new commit that undoes the previous commit)
git pull --rebase (any files modified in the central repository will be
updated in the local repository)
9. git rebase -i HEAD~1 (change or edit the commit name)
10. git clean -n (asks whether to remove or not)
11. git clean -f (forcefully removes unwanted untracked files)
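A hypothetical sequence illustrating the reset commands above (the commit id ab12cd3 is an example):

git log --oneline            # note the id of the commit to reset to, e.g. ab12cd3
git reset --soft ab12cd3     # HEAD moves back; the changes stay staged
git reset --mixed ab12cd3    # HEAD moves back; the changes return to the working directory
git revert HEAD              # safer alternative: a new commit that undoes the last one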

1. mkdir gitproject


2. code gitproject/ (open VS Code)
3. cd gitproject/
4. touch index.html (create one file in the git project)
5. git init
6. git status
7. git add .
8. git commit -m "initial commit" (needed before the log shows anything)
9. git log --oneline
$ git config --global user.name

MAVEN
*local repository
*central repository
*remote repository

download the maven software from google, then paste the maven path into the environment


variables
1. mvn archetype:generate (downloads all the plugins into the local repository)
2. maven asks for the group id and artifact name
mvn archetype:generate > test.out (the downloaded plugin output is saved in one file)
3.mvn package
MAVEN ON LINUX
the winscp software is used to copy files from windows to linux
the java file is transferred to linux and extracted there
1. tar -xvf (extract, verbose, file) file_name

install java on linux using a script


vi /etc/profile.d/jdk9.sh ---command
export JAVA_HOME=/opt/java/jdk-9.0.1
export PATH=${JAVA_HOME}/bin:${PATH}
source /etc/profile.d/jdk9.sh ----execution command
yum install wget (to download packages)
create a maven folder in linux, then go to google, copy the maven install link and paste it
on the linux command line (wget link)
to extract the file (tar -xvf filename), change directory to apache-maven-version

cd /etc/profile.d
vi maven.sh ---command
export M2_HOME=/opt/maven/apache-maven-version
export PATH=${M2_HOME}/bin:${PATH}
source maven.sh ---command, then goto the root location
create a maven project folder, then run
mvn archetype:generate

maven has 23 phases or goals; some important ones are listed below (a usage sketch follows)


1.validate 2.compile 3.test 4.package 5.integration-test 6.verify 7.install
8.deploy
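The phases run in a fixed order, so invoking one phase also runs everything before it, for example:

mvn package          # runs validate, compile and test first, then packages the artifact
mvn clean install    # cleans old output, then runs every phase up to install
mvn deploy           # runs everything up to install, then uploads to the remote repository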

Kubernetes (K8s)

This is also a container orchestration tool. it was a product of google; currently it is


open source.
k8s can handle all the production related problems like high availability, load
balancing, scaling,
performing rolling updates, disaster recovery etc.

Pod: this is the smallest k8s object and it is used for storing containers. k8s does
not deploy docker containers directly;
instead it deploys the containers within pods. a single pod can contain
multiple containers but generally containers
and pods share a one to one relationship.

Create a cluster and connect to the cluster


1. To see the list of nodes in the cluster or their ip addresses
kubectl get nodes or kubectl get nodes -o wide

UseCase 1
start nginx as a pod in the kubernetes cluster and name the pod as webserver
-----> kubectl run --image nginx webserver

To get detailed info about the pod


kubectl describe pods pod_name

To delete the pod


kubectl delete pods pod_name

UseCase 2
1. Start tomcat in the kubernetes cluster with 3 replicas and name it appserver
kubectl run --image tomcat appserver --replicas 3
2. To see the list of pods related to tomcat
kubectl get pods -o wide | grep appserver

UseCase 3
start mysql with 2 replicas in the k8s cluster and name it as mydb
1. kubectl run --image mysql:5 mydb --env MYSQL_ROOT_PASSWORD=koti --replicas=2

2. To see the list of pods related to mysql


kubectl get pods -o wide | grep mydb

3. To delete the mydb deployment completely from the cluster


kubectl delete deployments mydb
To scale the above mysql from 2 replicas to 4
kubectl scale deployments/mydb --replicas=4

Implementing K8s objects with YAML files. these YAML files contain 4 top level
fields (shown in Screenshot (20).png); a minimal example is sketched below
apiVersion: this is the version of the K8s api that is used for creating the objects
kind: this is used to specify the type of K8s object that we want to create
metadata: this contains information like name, labels etc. labels is again a dict
object which can contain any key value pairs
spec: this contains the exact information about the docker images, container
names, port mappings, environment variables
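A minimal pod definition showing the 4 fields (the pod and container names are examples), fed to kubectl through a here-document:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF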

orchestration tool

1. load balancing
2. auto scaling
3. health checkups
4. up and down times
----> it is an open source tool
4 objects

1. node
2. services
3. pod (stores the containers)
4.

LINUX COMMANDS

1. uname (operating system name of the server (EX: Linux))


2. uname -r (release/kernel version)
3. uname -a (release, kernel version, host name, everything)
4. whoami (present user, e.g. root)
5. who am i or who or w (how many users are on the server)
6. pwd (present working directory)
7. ls -l or ls -al (long list; ls = list)
8. ls -i (inode number)
9. ls -a (hidden files)
10. vim editor commands (these are done in the vim editor):
Esc dd, 3dd, ndd = delete lines
nyy = copy n lines, yy = copy the entire line
1. Esc i = insert mode, 2. Esc :wq = save and quit, 3. command line mode
p = paste, u = undo, G = go to the last line, H = go to the top of the screen
dw = delete a particular word, x = delete a single letter
yw = copy a particular word, r = replace a single letter, R = replace mode (overwrite)
$ = end of the line
o = create a new line below
O = create a new line above
a = append after the cursor
/ = search
11. rm filename (remove file)
12. rm -i (asks for confirmation)
13. rm -rf (forcefully delete)
14. touch (create files)
15. rm -rf * (remove the files in the present directory)
16. mkdir -p dirname (EX: d5/d6/d7) to create a directory inside a directory
17. rmdir dirname (remove directory) used after the files inside the directory are deleted
18. cat /etc/shells (list the available shells)
19. echo $SHELL (default shell)
20. yum update -y = update

AWS
VPN (Virtual Private Network)

1. First launch an instance


2. login to the vpn router and check the static IP given by the ISP
3. go to AWS and create a customer gateway
4. In AWS, create a virtual private gateway.
5. create a VPN connection in AWS, download the config and share it with the network
team
6. the network team will use the config details and create the VPN from the datacenter
7. once the VPN is established, connect to the server using the private IP 10.1.1.100

DOCKER
1. apt-get update
go to docs.docker.com https://docs.docker.com/install/linux/docker-ce/ubuntu/
2. sudo apt-get remove docker docker-engine docker.io (uninstall old versions)
3. apt-get update
4. Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
5. Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
6. sudo apt-key fingerprint 0EBFCD88
7. add repository

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

8. apt-get update
9. sudo apt-get install docker-ce docker-ce-cli containerd.io
10. docker ps OR docker container ls (show containers)
11. docker pull nginx (download an image from docker hub)
12. docker images (show images)
13. docker run -it -p 80:80 nginx /bin/bash
14. inside the container some commands will not run at first, that is why we
update
15. apt-get update (next)
apt-get install net-tools
the attach command is used to go inside a container (EX: docker attach container_id)
16. service nginx status (check status)
17. service nginx start
18. docker stop container_id
19. docker rm container_id (remove one or two ids)
20. docker ps -a
21. docker rm $(docker ps -aq) (remove all containers)
22. docker pull ubuntu (image)
23. docker run --rm -it --name=ubuntuout --hostname=ubuntuin -p 8080:80 ubuntu bash
24. docker attach container_id
apt-get install iputils-ping (inside the container some commands are not available,
that is why this command is used)
25. docker commit container_id repository_name (save the changed container as a docker image)
26. docker login
27. docker push repository_name (push the image to docker hub; see the sketch below)
28. docker inspect container_id (show all information)
NETWORKING
29. docker network ls (show networks)
30. docker container create (refer to docs.docker.com)
VOLUMES
31. docker volume create testvol2
32. docker volume inspect testvol1
copy a directory and create a container

Docker Image: a collection of binaries and libraries which are necessary for one
software to run.
Container: a running instance of an image is called a container. any number of
containers can be created from one image.
Docker Host: the machine on which docker is installed is called the Docker Host.
Docker Client: this is an application which is part of the docker engine and
which is responsible for accepting the docker commands
from the user and passing them to the Docker Daemon.
Docker Daemon: this is the background process that carries out the docker operations.
Docker Registry: this is the location where all the docker images are stored.
1.public 2.private (shown in Screenshot (13))
Docker Compose: docker compose is a feature of docker which is used for creating
microservices architectures where multiple containers are linked
with each other. docker compose uses YAML files for performing
these activities. the main advantage is reusability.
installing docker-compose: open docs.docker.com /compose/install,
goto linux and copy the 1st & 2nd commands. to check the version
of docker compose: docker-compose --version
sample YAML file (shown in Screenshots (14,15))
to run the file: docker-compose up -d
create a docker compose file for setting up a testing environment where a selenium hub
container should run with two node containers, a firefox node
and a chrome node (Screenshots (16,17)); a hedged sketch follows below
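A sketch of such a compose file, assuming the selenium/hub, selenium/node-firefox and selenium/node-chrome images and their HUB_HOST variable (verify against the image docs):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  firefox:
    image: selenium/node-firefox
    environment:
      - HUB_HOST=hub
    depends_on:
      - hub
  chrome:
    image: selenium/node-chrome
    environment:
      - HUB_HOST=hub
    depends_on:
      - hub
EOF
docker-compose up -d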

Docker volumes: 1. simple docker volumes 2. docker volume containers


1. simple docker volumes are only for preserving the data even
after the container is deleted, but they cannot be shared between
containers. to create volumes that can be shared between multiple
containers we use docker volume containers.
(Ex: create c1, c2, c3; the c2 and c3 containers should use the /data volume
mounted by c1)
docker run --name c1 -it -v /data centos
touch /data/f1 /data/f2
docker run --name c2 -it --volumes-from c1 centos
Docker volumes: containers are ephemeral but the data should persist, i.e. it should
be available even after the container is deleted. this can be done
using volumes. a volume is an external folder or device which is
mounted onto the container in such a way that the data survives even
if the container is deleted.

Dockerfile: this is a text based file with predefined keywords using which we
can create a customized docker image
Important keywords of the dockerfile
FROM: this represents the base image from which we want to create the customized
image
MAINTAINER: this is the name of the author or organization that has created the
dockerfile
CMD: this is used for specifying the default process of the container; it can be
overridden from outside the container
ENTRYPOINT: every docker container starts a default process, and as long as that
process is running the container will keep running
RUN: this is used for running commands within the image at build time; it is generally
used for running commands related to package management
COPY, ADD, VOLUME, USER, WORKDIR, LABEL, STOPSIGNAL, ENV, EXPOSE. the advantage of using
a dockerfile over the commit command is that we can perform version controlling on the
dockerfile
Create a dockerfile

1. vim Dockerfile
go into insert mode by pressing i
FROM nginx
MAINTAINER koti
save and quit: ESC :wq Enter

2. create an image from the above dockerfile


docker build -t mynginx .

Creating customized docker images: this can be done in 2 ways, using 1. docker
commit 2. dockerfile
UseCase
1. Start ubuntu as a container
docker run --name c1 -it ubuntu

2. Install git
apt-get update
apt-get install -y git
git --version
exit

3. save the above container as an image


docker commit c1 myubuntu

4. delete the ubuntu container


docker rm -f c1

5. create a new container from the above image


docker run --name c1 -it myubuntu

6. check if git is already present


git --version

Dockerfile
----------------------
important keywords
1. FROM: this represents the base image from which we want to create the customized image.
2. MAINTAINER: this is the name of the author or the organization that has created
the dockerfile.
3. CMD
4. ENTRYPOINT
5. RUN, COPY, ADD, VOLUME, USER, WORKDIR, LABEL, STOPSIGNAL, ENV, EXPOSE
(a sample dockerfile using several of these keywords is sketched below)
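A sample dockerfile using several of the keywords above (the package choice and image name are illustrative assumptions), written here through a shell here-document:

cat > Dockerfile <<'EOF'
FROM ubuntu
MAINTAINER koti
RUN apt-get update && apt-get install -y nginx
ENV APP_ENV=dev
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF
docker build -t mynginx-custom .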
