create an artifact; if the build fails it will notify the developers, and the developers
will fix the defect and upload the modified code to the version control server.
QA & Prodservers
---------------------
we need to install
tomcat8, tomcat8-admin
sudo apt-get install tomcat8
sudo apt-get install -y tomcat8-admin
go to cd /etc/tomcat8
tomcat-users.xml
{OR}
to open:
sudo vim /etc/tomcat8/tomcat-users.xml (edit shown in Screenshot (19).png)
sudo service tomcat8 restart
(<user username="koti" password="koti@1995" roles="manager-script"/>)
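The user entry above belongs inside the <tomcat-users> root element of tomcat-users.xml; a minimal sketch of the relevant part of the file (note the element name is <user>, and the username/password are the example values from these notes):

```xml
<tomcat-users>
  <!-- account with the manager-script role so Jenkins can deploy over HTTP -->
  <user username="koti" password="koti@1995" roles="manager-script"/>
</tomcat-users>
```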
Stage 1
add the git repository link
Stage 2
add the build as maven
add the packages
Stage 3
1. open the dashboard of jenkins
2. click on manage jenkins ----> search plugins
3. go to the available section -----> search for the "Deploy to container" plugin
4. click on install without restart
5. go to the development job ---> click on configure
6. go to post-build actions ---> click on add post-build action
7. click on deploy war/ear to a container ---> add the tomcat credentials and give the private
ip (http://ip:8080)
Master Slave
when we want to run multiple jobs on jenkins in parallel it might degrade the
performance of the jenkins server.
to overcome this problem we use master-slave jenkins, where we can distribute the
workload. this is also called a distributed jenkins build.
the main machine where jenkins is running is called the master machine. from the
master to the slave we should establish passwordless connectivity
1. create a new AWS ubuntu instance and name it as slave
2. connect to the slave machine using git bash
3. set a password for the default "ubuntu" user
sudo passwd ubuntu
4. edit the sshd_config file to allow ssh access
sudo vim /etc/ssh/sshd_config
go to insert mode, search for "PasswordAuthentication" and change it from no to
yes
5. restart ssh
sudo service ssh restart
6. connect to the master machine
7. generate the ssh keys
ssh-keygen
this will generate two keys, public and private, in the .ssh folder
8. copy the public key to the slave machine
ssh-copy-id ubuntu@private_ip_of_slave
this command will copy the content of the public key into a file called
"authorized_keys" on the remote server
Note: once the above setup is done the master will be able to ssh to the slave without
a password
ssh ubuntu@private_ip_of_slave
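Steps 7 and 8 can be sketched non-interactively as below; the key path is illustrative, and ssh-copy-id still asks for the slave's password once:

```shell
# work in a scratch directory so we don't disturb the real ~/.ssh
mkdir -p /tmp/ssh-demo

# generate an RSA key pair with no passphrase (path is illustrative)
ssh-keygen -t rsa -N "" -f /tmp/ssh-demo/jenkins_slave_key

# two files are produced: the private key and the public key
ls /tmp/ssh-demo

# copy the public key into ~/.ssh/authorized_keys on the slave
# (uncomment and replace private_ip_of_slave with the real IP):
# ssh-copy-id -i /tmp/ssh-demo/jenkins_slave_key.pub ubuntu@private_ip_of_slave
```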
Build Pipeline
---------------
this is a plugin which is used for getting a better GUI for the jenkins interface. it shows
the list of upstream and downstream jobs
in linked format in a separate GUI. here we can see individual logs of each job and
control them
---> click on manage jenkins --> click on manage plugins ---> go to the available section,
search for build pipeline and install it ---> go back and click on the + symbol,
enter some view name ---> go to the upstream job and select the initial job.
Pipeline
-----------
This is a feature of jenkins where we can implement all the stages of ci/cd
from the level of code. this code is created using Groovy, and the file is known as a
Jenkinsfile.
generally this file is uploaded into the remote repository along with the
development code. from the github repo this Jenkinsfile will trigger all the stages of
ci/cd.
Advantages
1. since Jenkinsfiles are checked into the version control system, they give the team
members the ability to modify the code and still maintain multiple versions.
2. Jenkinsfiles or pipeline codes survive both planned and unplanned restarts of
the jenkins master.
3. pipeline code can implement real-world scenarios like if conditions, loops etc. i.e.
if a stage fails we can execute one set of actions, and if it passes we can
execute another set of actions.
4. Jenkinsfiles can implement ci/cd with a minimum number of plugins, so they are much
faster.
5. it is possible to pause a Jenkinsfile at any stage, take interactive input and
then continue from there.
1. Scripted Pipeline
-------------------------------
Syntax:
node('master/slave')
{
    stage('Stage name in ci/cd')
    {
        Actual groovy code implementing this stage
    }
}
2. Declarative pipeline
-----------------------------------------
pipeline
{
    agent any
    stages
    {
        stage('Stage name in ci/cd')
        {
            steps
            {
                Actual groovy code implementing this stage
            }
        }
    }
}
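A minimal concrete Jenkinsfile following the declarative syntax above; the repository URL and build command are placeholders, not taken from these notes:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // hypothetical repository URL
                git 'https://github.com/example/demo.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
    }
}
```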
----------------------------------------------------------------
Create scripted pipeline job(Screenshot (26).png)
jenkins: Hudson was a very popular open-source java-based project; after it passed to
Oracle it was forked and renamed jenkins.
continuous integration
5 steps in Jenkins
1. continuous download (integrated with git): download the code from git
2. build (build the downloaded code into an artifact and test whether it is correct or not (jar/war))
3. deployment (aws)
4. testing (UAT testing)
5. delivery
or
MASTER SLAVE Jenkins (autoscaling not working)
1. working directory
2. staging area
3. untracked files
resource group ---> vnets ---> subnets ---> nsg(network security group(Firewall))
---> VM
GIT
version control software --> git
MAVEN
*local repository
*central repository
*remote repository
cd /etc/profile.d
vi maven.sh ---command
export M2_HOME=/opt/maven (path where Maven is installed, not the JDK)
export PATH=${M2_HOME}/bin:${PATH}
source maven.sh ---command, after go to the root location
create a maven folder, after that
mvn archetype:generate
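The maven.sh profile script above can be created and verified in one step; the Maven install path /opt/maven is an assumption for illustration:

```shell
# write the environment script to a scratch path for the demo
# (on the real server this would be /etc/profile.d/maven.sh)
cat > /tmp/maven.sh <<'EOF'
export M2_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}
EOF

# load it into the current shell and check the variable
. /tmp/maven.sh
echo "$M2_HOME"   # prints the value just set
```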
Kubernetes (K8s)
Pod : this is the smallest K8s object and it is used for storing containers. K8s does
not deploy docker containers directly;
instead it deploys the containers within pods. a single pod can contain
multiple containers, but generally containers
and pods share a one-to-one relationship
UseCase 1
start nginx as a pod in the kubernetes cluster and name the pod as webserver
-----> kubectl run --image nginx webserver
UseCase 2
1. Start tomcat in the kubernetes cluster with 3 replicas and name it appserver
kubectl run --image tomcat appserver --replicas 3
2. To see the list of pods related to tomcat
kubectl get pods -o wide | grep appserver
UseCase 3
start mysql with replicas 2 in the k8 cluster and name it as mydb
1. kubectl run --image mysql:5 mydb --env MYSQL_ROOT_PASSWORD=koti --replicas=2
Implementing K8s objects with yaml files. these yaml files contain 4 top-level
fields (shown in Screenshot (20).png)
apiVersion: this is the version of the K8s API that is used for creating the object
kind: this is used to specify the type of K8s object that we want to create
metadata: this contains information like name, labels etc. labels is again a dict
object which can contain any key-value pairs
spec: this contains exact information about the docker images, container
names, port mappings, environment variables
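A minimal pod manifest showing the 4 top-level fields (the pod name, labels and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```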
orchestration tool
1. load balancing
2. auto scaling
3. health checkups
4. up and down times
----> it is an open-source tool
4 steps
1. node
2. services
3. pod (storage purpose)
4.
LINUX COMMANDS
AWS
VPN (virtual private network)
DOCKER
1. apt-get update
go to docs.docker.com https://docs.docker.com/install/linux/docker-ce/ubuntu/
2. sudo apt-get remove docker docker-engine docker.io (uninstall old versions)
3. apt-get update
4. Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
5. Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
6. sudo apt-key fingerprint 0EBFCD88
7. add repository
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
8. apt-get update
9. sudo apt-get install docker-ce docker-ce-cli containerd.io
10. docker ps OR docker container ls (show containers)
11. docker pull nginx (download the image from Docker Hub)
12. docker images (show images)
13. docker run -it -p 80:80 nginx /bin/bash
14. the prompt changes from ubuntu to the container; in this location some commands will
not run, that's why we update
15. apt-get update (next)
apt-get install net-tools
the attach command is used to go inside the container (EX: docker attach container_id)
16. service nginx status (check status)
17. service nginx start
18. docker stop container id
19. docker rm container id (remove one or two id's)
20. docker ps -a
21. docker rm $(docker ps -aq) (remove all containers)
22.docker pull ubuntu (container)
23.docker run --rm -it --name=ubuntuout --hostname=ubuntuin -p 8080:80 ubuntu bash
24. docker attach container id
apt-get install iputils-ping (inside the container some commands are not working,
that's why this command is used)
25. docker commit container_id repository_name (create a docker image from the container)
26. docker login
27. docker push repository_name (push the image to Docker Hub)
28. docker inspect container id (show all information)
NETWORKING
29. docker network ls (show networks)
30. docker container create (refer google docs.docker)
VOLUMES
31. docker volume create testvol2
32. docker volume inspect testvol2
copy directory and create container
Docker image : is a collection of binaries and libraries which are necessary for one
software to run
container : a running instance of an image is called a container. any number of
containers can be created from one image
Docker Host : the machine on which docker is installed is called the Docker
Host.
Docker client : this is an application which is part of the docker engine,
which is responsible for accepting the docker commands
from the user and passing them to the Docker Daemon
Docker Daemon : this is the background process
Docker Registry : this is the location where all the Docker images are stored.
1. public 2. private (shown Screenshot (13))
Docker Compose : docker compose is a feature of Docker which is used for creating
micro-services architecture where multiple containers are linked
with each other. docker compose uses yaml files for performing
these activities. the main advantage is reusability
installing docker-compose : open docs.docker.com /compose/install |
go to linux and copy commands 1 & 2 | to check the version
of docker compose: docker-compose --version
sample yaml file (shown Screenshot (14,15))
to run the file: docker-compose up -d
create a docker compose file for setting up a testing environment where a selenium hub
container should run with two node containers, a firefox node
and a chrome node (Screenshot (16,17))
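The referenced screenshots are not reproduced here; a sketch of such a compose file, assuming the standard selenium images on Docker Hub and the HUB_HOST variable used by the older selenium node images:

```yaml
version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  firefox:
    image: selenium/node-firefox
    environment:
      - HUB_HOST=hub
    depends_on:
      - hub
  chrome:
    image: selenium/node-chrome
    environment:
      - HUB_HOST=hub
    depends_on:
      - hub
```

Run it with `docker-compose up -d`, as described above.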
Docker File : this is a text-based file which uses predefined keywords, using which we
can create a customized docker image
Important keywords of the docker file
FROM : this represents the base image from which we want to create the customized
image
MAINTAINER : this is the name of the author or organization that has created the docker
file
CMD : this is used for specifying the default process of the container (it can be
overridden from outside the container)
ENTRYPOINT : every docker container starts a default process, and as long as that process
is running the container will keep running
RUN : this is used for running commands within the container. it is generally
used for running commands related to package management
COPY, ADD, VOLUME, USER, WORKDIR, LABEL, STOPSIGNAL, ENV, EXPOSE. the advantage of
using a docker file over the commit command is that we can perform version controlling
on the docker file
Create a docker file
1. vim dockerfile
go into insert mode by pressing i
FROM nginx
MAINTAINER koti
save and quit: ESC :wq Enter
Creating customized Docker images : this can be done in 2 ways, using 1. Docker
commit 2. Docker file
UseCase
1. Start ubuntu as a container
docker run --name c1 -it ubuntu
2. Install git
apt-get update
apt-get install -y git
git --version
exit
Docker file
----------------------
important keywords
1. FROM : this represents the base image from which we want to create the customized image.
2. MAINTAINER : this is the name of the author or the organization that has created
the docker file.
3. CMD
4. ENTRYPOINT
5. RUN, COPY, ADD, VOLUME, USER, WORKDIR, LABEL, STOPSIGNAL, ENV, EXPOSE
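A sketch of a Dockerfile combining several of the keywords above, continuing the git-on-ubuntu use case (the image name and values are illustrative):

```dockerfile
# base image
FROM ubuntu
# author of the docker file
MAINTAINER koti
# package-management commands go in RUN
RUN apt-get update && apt-get install -y git
# working directory and an environment variable inside the image
WORKDIR /app
ENV APP_ENV=demo
# default process when the container starts
CMD ["git", "--version"]
```

Build it with `docker build -t ubuntu-git .` and run with `docker run ubuntu-git`.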