
Configuring a professional open source

development environment – Part 2


By Thomas Arnbjerg, TechPeople, 2018

Jenkins Continuous Integration Server

Introduction
Prerequisites
Installing Jenkins
Introduction
Steps
Installing external programs
Cppcheck
cpplint
sloccount
valgrind
gcovr
Eclipse
Installing Jenkins plugins
Creating a build job
Build job ‘HelloWorld’
Generating a data basis
Visualizing the result
Other
Tour of ‘HelloWorld’ and the Jenkins user interface
Summary page
Live build console output
Job overview
Automatic start of build
Jenkins and branches
Branching – GitFlow
Branches and Jenkins file
Multibranch pipeline build and HelloWorld
Final Jenkins tips
Mission Control Plugin
Build Monitor Plugin
ThinBackup
Job Configuration History
Docker support
Cpplint support
More Jenkins nodes
Blue Ocean user interface
Rounding off

Introduction
In Part 1, a variety of Jenkins functions were visualized. In this part, we are going to drill
below the surface and look at the configuration and use in a Continuous Integration (CI)
scenario in which Jenkins supports development in feature branches.
We configure a number of build jobs using the support functions which were visualized in
Part 1 (shown below in reduced format).

CppCheck, Unit Tests (TAP format), Code Coverage, SLOCCount (Source Lines Of Code
Count) and Valgrind analysis.

For a Linux system, we will then configure Jenkins to do the following:


● Check the source code using the CppCheck and Cpplint tools, i.e. run static code
analysis to locate basic code errors and deviations from the code standard.
● Run the unit tests and visualize the result.
● Check how much of the code is covered by the unit tests (code coverage) using gcov.
● Count the number of source code lines with sloccount and visualize the result.
● Check binaries with Valgrind and visualize the result. We only do the memory check
(Valgrind contains more tools).

Finally, a number of tips for distributed installations, etc. are provided.

Implementing all these checks is low-cost, and they have been able to detect errors in
every project I have participated in!

As is evident from the above description, Jenkins support typically falls into two parts:
● Use of an external tool which is activated by Jenkins

● Visualization of the result. Typically, this is carried out via a purpose-built Jenkins
plugin.

First, some practical aspects and with these a disclaimer:

This subject is very comprehensive, and the description following below should be seen as
an appetizer – even though it is fully functional. Jenkins can do so much more!

Prerequisites
The configuration of Jenkins is described below. For that, the source code is loaded into a
Git repository. The prerequisite for the examples is that the Git repository described is
available – and this includes authentication configured with SSH keys. The private key in the
SSH key pair must be available when configuring Jenkins.

A Git server can be created quickly using a ready-made Docker image, e.g. this one:
https://hub.docker.com/r/jkarlos/git-server-docker/, which also allows authentication with SSH
public/private key pairs.
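Since the private key must be pasted into Jenkins later, it is handy to generate a dedicated
key pair for this purpose. A minimal sketch (the file name and comment are arbitrary choices,
not anything Jenkins requires):

```shell
# Generate a dedicated RSA key pair for Jenkins <-> Git authentication.
# An empty passphrase (-N "") keeps the example simple; consider a
# passphrase-protected key in a real setup.
ssh-keygen -t rsa -b 2048 -f jenkins_git_key -N "" -C "jenkins-ci" -q

# The public half is registered with the Git server; the private half is
# pasted into a Jenkins 'SSH Username with private key' credential later on.
cat jenkins_git_key.pub
```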

Installing Jenkins
*** NOTE *** The screenshots are created in a browser with Danish as the main language.
Consequently, the Jenkins user interface is mostly in Danish. In some instances, the Danish
translation is not available and the browser then lapses back into English – hence the
occasional language confusion.

Jenkins may be installed in four ways.


1) You can download the WAR file from the home page: https://jenkins.io/download/. The
advantages are that you get complete control of the installation and may regenerate
it later on, and that you get the most up-to-date version (contrary to many other tools,
the Jenkins community is REALLY well organized with regard to dependencies and
quality, and I have never experienced problems with their latest versions).
2) It can be installed using the package manager which comes with the Linux distro that
is to be used (Jenkins is Java based and also runs on Windows, but this is out of
scope for the purposes of the present document). The advantage is that the
installation is well integrated. Conversely, it is not the latest version. In practice this
is rarely important, though.
3) A ready-made image may be installed in a cloud solution. The advantage is that you will
quickly be up and running. On the other hand, you will drag all sorts of things into the
installation. Usually, this will not be the latest version either.
4) It is possible to use the “official” Docker image which is maintained by the Jenkins
community. Docker is a container technology with some of the same characteristics
as a virtual machine – only way faster and less resource intensive – refer to
Wikipedia. Docker will have you up and running in no time, and the image is kept
very current. Again, the disadvantage is that it is difficult to get a full understanding
of how the image is put together, which may be a problem in regulated environments
(e.g. development of medical devices). The Jenkins community maintains two Docker
images as described here: https://hub.docker.com/r/jenkins/jenkins/. Their use is
described on GitHub here: https://github.com/jenkinsci/docker/blob/master/README.md

Here, we are going to use the latter option and use one of the official Docker images –
jenkins:lts (long term support).

Introduction
The following description details the steps on a Linux machine with ‘openSUSE Tumbleweed’
installed. The procedure will largely be the same in other Linux distributions, and in all
circumstances it is a prerequisite that Docker is functioning on the Linux machine (you can
type the command ‘docker version’ in a console to determine this).

Below follows a condensed walkthrough.

Steps
1. Start a console and type ‘docker pull jenkins/jenkins:lts’. This downloads the Docker
image to the machine.

Screenshot partway through the download of the Docker image ‘jenkins:lts’
2. Now, go to the Linux user’s ‘HOME’ folder by typing ‘cd’ + <Enter>. Create the
‘jenkins_home’ folder by executing ‘mkdir jenkins_home’ + <Enter>.
3. Start the newly downloaded image with this command:
‘docker run -p 8080:8080 -p 50000:50000 -v
$HOME/jenkins_home:/var/jenkins_home jenkins/jenkins:lts’
4. Jenkins is now available on port 8080 on the machine as illustrated:

Starting page for Jenkins in a browser.

When a Docker container with an image is started, everything in the Docker container is
generally ‘read-only’. After a reboot of the container, everything is thus reset. Usually, a
connection is created out of the Docker container to the host machine. In the command
above, ‘-v $HOME/jenkins_home:/var/jenkins_home’ indicates that the ‘/var/jenkins_home’
folder INSIDE the Docker container must be mapped to ‘$HOME/jenkins_home’ OUTSIDE
the Docker container. Items placed in ‘$HOME/jenkins_home’ therefore survive reboots and
fortunately this is also true for configuration of build jobs and plugins. The configuration of
Jenkins itself is therefore not deleted in connection with a reboot.
After start of the Jenkins Docker container, Jenkins is now available in a browser on port
8080.
The displayed path, ‘/var/jenkins_home/secrets/initialAdminPassword’, refers to a path inside
the Docker container but since ‘/var/jenkins_home’ inside the container is mapped to
‘$HOME/jenkins_home’ outside, we can simply run the ‘cat
~/jenkins_home/secrets/initialAdminPassword’ command to get access to the initial
administrator password.

Listing the Jenkins secret at first startup. The administrator password is simply
pasted into the password field.

Clicking ‘Continue’ displays the image for installation of plugins. Just click ‘Install suggested
plugins’. As is the case with everything else in Jenkins, this is really well-thought-out.
Jenkins then starts installing plugins.

Select ‘Install suggested plugins’. Jenkins’ installation of plugins.

Enter the first user and Jenkins is ready for use.

Creating the first user. Ready for use.

We now proceed to installing the necessary external tools.

Installing external programs


We have chosen to run Jenkins in a read-only Docker image. This means that all the
programs we install will disappear the next time the Docker container is restarted. Normally, a
custom Docker image is created using a purpose-built Dockerfile, but this is outside the
scope of this document (googling ‘custom jenkins docker image’ will produce lots of relevant
hits).
Below, we are going to install the programs temporarily, but the same commands may easily
be added to a Dockerfile, which will bring you a long way in the process.
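As a sketch of that approach (the base image tag and package names are assumptions to be
verified against the actual image), such a Dockerfile could look like this:

```dockerfile
# Hedged sketch: a custom Jenkins image with the external tools baked in.
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        cppcheck sloccount valgrind python-pip && \
    pip install gcovr && \
    rm -rf /var/lib/apt/lists/*
# Switch back so Jenkins does not run as root.
USER jenkins
```

Build it with e.g. ‘docker build -t my-jenkins .’ and substitute ‘my-jenkins’ for
‘jenkins/jenkins:lts’ in the run command shown earlier.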

To install the programs within the Docker container we need to know its ID. This is retrieved
by typing the command ‘sudo docker ps’ in a terminal and reading the ID as illustrated below.

Identifying the ID for a Docker container.

The ID is used in the next sections with the reference <DOCKER ID>.

In each of the following sections, we will start by entering the Docker container and end by
exiting it again. Naturally, this is not necessary, if all the installations are done together.

Cppcheck
The Cppcheck tool is used to carry out static code analysis on the source code. The project
is hosted on SourceForge here: http://cppcheck.sourceforge.net/

This is a tool which carries out a VERY thorough code analysis; it's a good idea to add it right
from the start to enable developers to rectify findings on an ongoing basis.

The project is very active, and a new release appears approximately every three months.
You may either download the tool or build it yourself if you want the ‘latest and greatest’.
Alternatively, you can install the version which is normally found in the Linux distro you use.
We choose the latter and install the version supported by the Linux distro on which our
Jenkins Docker image is based.

Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.

Starting a console session inside the Docker container. Notice that the prompt
changes to ‘root@<DOCKER ID>’.
● Now type the ‘apt-get update’ command. The list of available packages is now
updated.

Updating the package list. As shown, the Docker image is based on Debian
stretch.
● Now type ‘apt-get install cppcheck’ and confirm the installation (press <Enter>).
Cppcheck is now installed. Type ‘cppcheck --version’ to verify the installation.

Version 1.76 of cppcheck is installed. At the time of writing, the latest version is
1.82, which is 18 months newer.
● Type ‘exit’ to leave the container.

When you type ‘exit’, you leave the Docker container.

cpplint
Cpplint is a Python script originally developed by Google for checking compliance with their
code standard. The project is hosted here: https://github.com/cpplint/cpplint
You only need to download the ‘cpplint.py’ file and then adapt the script to your own
requirements if needed. A quick search for ‘cpplint.py’ produces a large number of
examples.

Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.
● Type this command in the Docker container:
‘wget https://raw.githubusercontent.com/cpplint/cpplint/master/cpplint.py’ (note: use
the raw URL – the GitHub ‘blob’ page URL would download HTML rather than the
script). Then ‘cpplint.py’ is downloaded as illustrated.

Download of ‘cpplint.py’ (the console is distorted since the first instance of ‘.py’
should be placed at the end of line 1 – a copy/paste occurrence).
● Type ‘exit’ to leave the container.

sloccount
sloccount is a tool which counts the number of source code lines for a vast variety of
languages. The program is found here: https://sourceforge.net/projects/sloccount/

Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.

● Type this command in the Docker container: ‘apt-get install sloccount’. sloccount is
now installed.
● Type ‘exit’ to leave the container.

valgrind
The Valgrind tool is capable of analyzing the dynamic behaviour of an application, including
memory management and threading. The project is found here: http://valgrind.org/

Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.
● Type this command in the Docker container: ‘apt-get install valgrind’. When you
confirm, valgrind is installed.
● Type ‘exit’ to leave the container.

gcovr
The ‘gcovr’ tool handles post-processing of the results of gcov profiling and generates
coverage reports. The tool is found here: https://gcovr.com/
We choose the easy solution and install it using the Python pip tool which has to be installed
first.

Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.
● Type this command in the Docker container: ‘apt-get install python-pip’ and confirm.
Then pip is installed from the Python environment.
● Type ‘pip install gcovr’ and then pip installs gcovr.

Installing gcovr via pip.


● Type ‘exit’ to leave the container.

Eclipse
We need to use eclipse to build the project. Consequently, Eclipse also has to be installed.
Do the following:
● Type the following console command to start a console session in the Docker
container: sudo docker exec -it -u root <DOCKER ID> bash.

● Type this command in the Docker container: ‘wget
http://mirror.dkm.cz/eclipse/technology/epp/downloads/release/oxygen/3a/eclipse-cpp-oxygen-3a-linux-gtk-x86_64.tar.gz’
and confirm. Eclipse downloads.
● Type the command below to extract Eclipse and confirm:
‘tar -xvzf eclipse-cpp-oxygen-3a-linux-gtk-x86_64.tar.gz’
● Type ‘exit’ to leave the container.

Eclipse is now installed in ‘/eclipse’ in the Docker container.

Installing Jenkins plugins


Having installed the external tools from the previous chapter, we continue with the
configuration of Jenkins.

First, a number of plugins have to be installed.

Do the following:
● Open a browser and type ‘localhost:8080’. Log in as the user who was created
during the configuration of Jenkins above.
● Click the ‘Manage Jenkins’ link and then ‘Manage Plugins’.

The starting point for configuration of Jenkins. Starting point for handling plugins.
● Open the ‘Available’ tab and select the ‘cppcheck’, ‘valgrind’, ‘cobertura’, ‘sloccount’,
‘xunit’ and ‘warnings’ plugins. Click the ‘Install without reboot’ button.

Plugin manager – available plugins. The installation is activated by pressing
the ‘Install without reboot’ button.
● Jenkins then installs the selected plugins. Use the top left link to return to the front
page after the installation is completed.

Installation of plugins is active. Link to front page.
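If you later bake the plugins into a custom image, the same selection can be kept as a plain
list of plugin short names (the exact IDs should be verified in the plugin manager), which the
official image's install-plugins.sh helper can consume:

```
cppcheck
valgrind
cobertura
sloccount
xunit
warnings
```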

Everything is now prepared for configuration of the first job.

Creating a build job


Having completed the Jenkins installation, we may now create the first build job. We will start
by doing the basic configurations and review the final results at the end.

Build job ‘HelloWorld’


We want to build the HelloWorld project each time new source code is checked into Git.
Moreover, we want to carry out static code analysis, unit tests and a memory check using the
tools described above. Last but not least, we need a SLOC count and a code coverage score
for our unit tests.

First, we need to generate the data basis and then visualize the result.

Generating a data basis


Do this in Jenkins:
● Click the ‘Create new jobs’ link from the summary page in Jenkins.

Creating a new job.


● Type the name ‘HelloWorld’ and choose the ‘Freestyle project’ type. Click the ‘Ok’
button to configure.

Creating the freestyle job ‘HelloWorld’
● Enter a description of the build as illustrated and select ‘Git’ as source code
management (SCM). Type the address of the Git repository in ‘Repository URL’.
Jenkins displays an error, marked in red. We will get back to that shortly.

The first step in creating a build job. Description and choice of SCM system.
● Click the ‘Add’ button to create a login. Choose the ‘SSH Username with private
key’ type. Enter the username which is to be associated with this key (e.g.
developer). Copy the private key into the ‘Key’ field as shown here. Then click ‘Add’ at
the bottom of the window (not visible in the figure below).

Button for creating a new login. Creating an SSH login based on a
public/private key pair. The public key part is registered with the Git server.
The private key is registered in Jenkins – e.g. as shown here.
● Choose to build the ‘develop’ branch (REMEMBER)

Selecting the ‘develop’ branch.


● Various circumstances may trigger a new build. We want Jenkins to carry out regular
Git polling and, in case of changes, to start a build. So, check the ‘Poll source code
management’ box and enter the mask ‘* * * * *’ to check once every minute.

Configuration of polling of the Git repository once every minute; in case of any
changes, a new build is to be started.
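The mask follows Jenkins' cron-style syntax with five fields: minute, hour, day of month,
month and day of week. A few illustrative schedules (the ‘H’ token spreads jobs
pseudo-randomly over the interval to avoid load spikes):

```
* * * * *       poll every minute (as configured above)
H/15 * * * *    roughly every 15 minutes
H 2 * * 1-5     once between 02:00 and 02:59 on weekdays
```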
● Click ‘Save’ and proceed to the summary page. The build job is now displayed in
‘HelloWorld’.

The summary page in Jenkins after the ‘HelloWorld’ job has been created.
● Click the button to the right to start a build. When the build has completed, the
‘Latest success’ column displays build #1. And of course the blue circle indicates
‘Success’.

Manual start of the build. Status after the first successful build.
The colour blue indicates success.
● Now we are going to add substance to the build job (so far, only the source code
is being fetched). Click the ‘HelloWorld’ link in the overview and select ‘Configure’.
We have now returned to the configuration window.

Link to the build job configuration. Configuration link.
● Click the ‘Build’ tab, click the ‘Add build step’ button and select ‘Execute shell’.

Creating build steps – running shell command (shell script)
● Write this in the command box:

/eclipse/eclipse --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -importAll $WORKSPACE -data $WORKSPACE -cleanBuild HelloWorld/Debug

This command starts Eclipse in ‘headless mode’ (without user interface), imports all
projects in the $WORKSPACE folder and builds the HelloWorld project in the debug
configuration. View more command line arguments here:
https://gnu-mcu-eclipse.github.io/advanced/headless-builds/.
If you use cmake or make you could of course just replace the call to Eclipse
with the equivalent commands.

● Add yet another build step of the ‘shell command’ type in the same way as above.
Specify this command:

sloccount --duplicates --wide --details $WORKSPACE > sloccount.sc

So, we activate sloccount and save the result in the ‘sloccount.sc’ file.

2 build steps – the first one compiles with Eclipse, the second runs the sloccount
tool.
● Add yet another build step of the ‘shell command’ type in the same way as above.
Specify this command:

cppcheck --xml --std=c++11 --enable=all $WORKSPACE 2> cppcheck.xml

Here, we activate cppcheck and save the XML report in the ‘cppcheck.xml’ file (note
that cppcheck writes the report to stderr, hence the ‘2>’ redirection).
● Add yet another build step of the ‘shell command’ type in the same way as above.
Specify the command field as:

/eclipse/eclipse --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data $WORKSPACE -cleanBuild HelloWorldUnitTest/Debug
./HelloWorldUnitTest/Debug/HelloWorldUnitTest -r junit 1> unitTestResult.xml

Using these commands we first run Eclipse in ‘headless’ mode and build the
debug configuration of the unit test program ‘HelloWorldUnitTest’. Then
‘HelloWorldUnitTest’ is executed and the result of the unit tests is saved in JUnit
format in the ‘unitTestResult.xml’ file.

● We have of course built ‘HelloWorldUnitTest’ with gcov profiling (flags
-ftest-coverage -fprofile-arcs), and therefore running ‘HelloWorldUnitTest’ in the previous
step has generated gcov profiling files (.gcda and .gcno files). These must be
processed with ‘gcovr’ for the code coverage to be visualized.
Therefore, add yet another build step of the ‘shell command’ type in the same way as
above. Specify the command field as:

gcovr --root=$WORKSPACE/HelloWorldUnitTest --xml 1>codeCoverage.xml

This command processes the gcov files and saves the results in ‘codeCoverage.xml’.

Build steps for cppcheck, unit test and code coverage.


● The final thing missing is the memory check using Valgrind. The output will usually
have to be filtered (suppressions), and Valgrind can generate such suppressions
itself in a separate mode. Consequently, we create two runs with Valgrind: the first
generates possible new suppressions, the second performs the check itself. If a
need for adding new filters should arise, we then already have the input. So, create
a new build step of the ‘shell command’ type as above. Specify the command field as:

valgrind --leak-check=full --gen-suppressions=all HelloWorldUnitTest/Debug/HelloWorldUnitTest > VALGRIND_new_suppressions_HelloWorldUnitTest.txt 2>&1
valgrind --xml=yes --xml-file=VALGRIND_HelloWorldUnitTest.xml HelloWorldUnitTest/Debug/HelloWorldUnitTest

So, we save possible suppressions in the
‘VALGRIND_new_suppressions_HelloWorldUnitTest.txt’ file and the result of the memory check
in the ‘VALGRIND_HelloWorldUnitTest.xml’ file.
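For reference, a suppression generated by the first run is a brace-delimited block; collected in
a file, such blocks can be passed back to Valgrind with ‘--suppressions=<file>’. A sketch with
illustrative frame names:

```
{
   known_third_party_leak
   Memcheck:Leak
   fun:malloc
   fun:third_party_init
   ...
}
```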

Visualizing the result
Once the build steps in the previous section have completed, we have a number of result
files which are to be visualized.

Do the following:
● First SLOCCount: Click ‘Add post-build-action’ and select ‘Publish SLOCCount
analysis result’. Specify the ‘SLOCCOUNT reports’ as ‘sloccount.sc’ and click ‘Apply’.

Adding ‘Post build action’. Selecting sloccount results. Identifying the file with
results.
● Then CPPCheck. Click ‘Add post-build-action’ again and select ‘Publish CPPCheck
results’. Set ‘Cppcheck report XMLs’ to ‘cppcheck.xml’ and click ‘Apply’ (the
figure below shows the configuration in ‘Advanced’ mode).

Cppcheck configuration after clicking the ‘Advanced’ button. Notice that it
is possible to define threshold values for when the build should fail (red
dots) and warnings (yellow dots).
● Then follows the unit test. Click ‘Add post-build-action’ again and select ‘Publish
JUnit Test Result Report’. Set ‘Test Report XMLs’ to ‘unitTestResult.xml’. Click
‘Apply’.
● Code coverage resembles the others. Click ‘Add post-build-action’ again and select
‘Publish Cobertura Coverage Report’. Set ‘Cobertura xml report pattern’ to
‘codeCoverage.xml’ and click ‘Apply’ (the figure below shows the setup in ‘Advanced’
mode).

The Cobertura coverage report after pressing ‘Advanced’. Notice that threshold
values can be specified which influence the ‘weather’ for the build job (see section
below about the summary window)
● And finally we have to add the Valgrind report. Click ‘Add post-build-action’ again,
and select ‘Publish Valgrind Results’. Set ‘Report pattern’ to
‘VALGRIND_HelloWorldUnitTest.xml’ and click ‘Apply’.
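For reference, the ‘unitTestResult.xml’ consumed by the JUnit publisher above is plain JUnit
XML. A minimal hand-written sketch of the format (suite and test names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="HelloWorldUnitTest" tests="2" failures="1" errors="0" time="0.012">
  <testcase classname="HelloWorldTests" name="greets_the_world" time="0.004"/>
  <testcase classname="HelloWorldTests" name="handles_empty_name" time="0.008">
    <failure message="expected 'Hello' but got ''"/>
  </testcase>
</testsuite>
```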

Other
Moreover, we want to save the binary ‘HelloWorld’ application. So, another step is added:

● Click ‘Add post-build-action’ again, and select ‘Archive artifacts’. Set the files to be
archived to ‘**/HelloWorld, **/*.xml’ and click ‘Apply’

The basic configuration is now completed. Click ‘Save’; Jenkins now returns to the summary
page.

The summary page in Jenkins after configuration of the HelloWorld build job.

Tour of ‘HelloWorld’ and the Jenkins user interface
This section is a quick tour of the Jenkins user interface and describes how the HelloWorld
job is handled.

Summary page
The list of configured build jobs is displayed in the right-hand side. Information about the
latest status, trend for the latest builds, build number of the latest successful and failed
builds, length of a build and a button for starting the build is displayed for each build job.

From left to right: build status (blue = success), trend (weather), ‘Build 1 succeeded 26
minutes ago’, no failed builds, duration of a build, and the ‘Start new build’ button.

Click the ‘Start new build’ button to start a new build. On the left-hand side, the status for ‘job
executors’ is displayed, where the new build starts. If ‘Start new build’ is pressed again
while a build of the job in question is running, the new build is queued.

After clicking ‘Start new build’, a new build (build 101 in this case) appears in the status
with a progress bar. Mousing over the progress bar displays estimated times. The red X
aborts the build. A job can only be activated once; additional activations are queued.

Clicking the progress bar for a build in progress results in a live feed of console output.

Live build console output

Live console output for a build in progress.

Click the word ‘Jenkins’ in the top left corner to return to the overview.
Click a job name to enter the overview for the job in question.

Job overview
The checks we configured in the previous section are displayed, along with the same
elements as in Part 1.

The job overview in ‘HelloWorld’. The following areas are displayed seen from left to right,
from the top and down:
● Navigation/command links (e.g. ‘Build now’)
● Build history with status, build number and time stamp (e.g. blue, build 86)
● Download links to build artifacts (e.g. codeCoverage.xml of 414.14 KB)
● Metric tables (e.g. CppCheck results)
● Trend graphs (e.g. CppCheck trend).

Automatic start of build
In connection with the configuration of ‘HelloWorld’, we configured Jenkins to check Git for
any changes every minute. We are going to test that now. So, we introduce a change in
HelloWorld and commit it to Git.

Change in HelloWorld carried out by ‘tar’ with commit message.

We are now waiting in Jenkins…and within a minute, a new build job is queued and starts
shortly after.

New build is queued... ... and started

Click the build number in the overview (#107 in the figure) to view details. It appears
that the new build was triggered by an SCM change (Source Control Management change,
i.e. Git) and we can see the commit comment.

Click the build number to see details. Details for the build.

Click ‘detail’ next to the commit comment. It now becomes evident that the changed file is
‘main.cpp’ and that it was ‘tar’ who made the change.

Jenkins and branches

Branching – GitFlow
It is quite customary to develop new functions in separate branches of the source code
(feature branches) and merge them into a common code base once the feature is
complete. ‘GitFlow’ is a widely used paradigm – see here:
http://nvie.com/posts/a-successful-git-branching-model/
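The branch dance can be sketched with plain git commands in a disposable scratch
repository (git-flow tooling wraps the same operations):

```shell
# GitFlow by hand: 'develop' is the common code base, feature work happens
# on its own branch and is merged back when the feature is complete.
git init -q gitflow-demo && cd gitflow-demo
git config user.name dev && git config user.email dev@example.com
git commit -q --allow-empty -m "initial commit"

git checkout -q -b develop                 # the common code base
git checkout -q -b feature/new_feature     # start a feature branch

echo "int answer() { return 42; }" > feature.cpp
git add feature.cpp
git commit -q -m "Implement new feature"

git checkout -q develop                    # feature finished: merge back
git merge -q --no-ff -m "Merge feature/new_feature" feature/new_feature
git log --oneline develop
```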

In GitFlow the common code base is called ‘develop’. Each time a new feature is started, a
new branch of the source code is created in which all work is carried out. Once development
has finished, the feature branch is merged back into ‘develop’ and the work is consolidated.
When we configured ‘HelloWorld’, we chose to build the ‘develop’ branch. So,
our build job only relates to this branch.
It would be optimal if the unit tests, cppcheck, valgrind check, etc., were likewise carried out in
the individual feature branches, meaning that QA for the code is carried out BEFORE the
merge with develop.

Fortunately, this is possible with Jenkins and the solution is called ‘Multibranch Pipeline
Builds’.

Branches and Jenkins file
In a multibranch pipeline build, Jenkins automatically clones the job configuration for each SCM
branch. Once configuration has been done, feature branches come at no extra cost
and you constantly get the quality activities.
A description of the job configuration must be placed in a text file named ‘Jenkinsfile’ in the
root of the repository.

Jenkinsfile placed in the root of the Eclipse workspace, which is equal to the root of the Git
repository.

The Jenkinsfile must comply with the pipeline syntax described here:
https://jenkins.io/doc/book/pipeline/syntax/. You can do just about everything in a
Jenkinsfile – including everything that was demonstrated in the graphic configuration of
‘HelloWorld’.
The content displayed below is from a reduced Jenkinsfile which:
● Runs cppcheck on the code and generates the report
● Builds ‘HelloWorld’
● Archives HelloWorld + all XML files
● Generates sloccount and publishes the result
● Builds HelloWorldUnitTest, runs the program and publishes the result.

pipeline {
    agent any

    options {
        disableConcurrentBuilds()
    }

    stages {
        stage('Static Code Analysis') {
            steps {
                sh 'cppcheck --template="{file},{line},{severity},{id},{message}" --std=c++11 --enable=all $WORKSPACE 2> cppcheck.txt'
                step([$class: 'WarningsPublisher', canComputeNew: false, canResolveRelativePaths: false, defaultEncoding: '', excludePattern: '', healthy: '', includePattern: '', messagesPattern: '', parserConfigurations: [[parserName: 'cppcheck', pattern: 'cppcheck.txt']], unHealthy: ''])
            }
        }
        stage('Building') {
            steps {
                sh "/eclipse/eclipse --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -importAll $WORKSPACE -data $WORKSPACE -cleanBuild HelloWorld/Debug > build.log 2>&1"
                archiveArtifacts artifacts: '**/HelloWorld, **/*.xml', fingerprint: true
                sh 'sloccount --wide --details $WORKSPACE/HelloWorldUnitTest/HelloWorld > sloccount.sc'
                sloccountPublish encoding: '', pattern: 'sloccount.sc'
            }
        }
        stage('Testing') {
            steps {
                sh "/eclipse/eclipse --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data $WORKSPACE -cleanBuild HelloWorldUnitTest/Debug"
                sh "./HelloWorldUnitTest/Debug/HelloWorldUnitTest -r junit 1> unitTestResult.xml"
                step([$class: 'XUnitBuilder', thresholds: [[$class: 'SkippedThreshold', failureThreshold: '0'], [$class: 'FailedThreshold', failureThreshold: '0']], tools: [[$class: 'JUnitType', pattern: 'unitTestResult.xml']]])
            }
        }
    }
    post {
        failure {
            sh "echo Something went wrong. Jenkins may send a mail or slack message"
        }
    }
}

Jenkinsfile which builds HelloWorld etc.

Having committed a Jenkinsfile to demo_workspace.git as described above, we will now
configure a new multibranch pipeline job.

Multibranch pipeline build and HelloWorld


Do the following:
● Click the ‘New Item’ link. Enter a name and select the Multibranch Pipeline type.
Click Ok.

‘New Item’ link Creating ‘Multibranch Pipeline’ for ‘HelloWorld’.

● Enter the same Git credentials as for ‘HelloWorld’ in the ‘Branch Sources’ section.
Tick ‘Periodically if not otherwise run’ and set the interval to 1 minute.
Click ‘Save’.

Configuration of a ‘Multibranch Pipeline’ for ‘HelloWorld’.
The Git configuration is displayed at the top; it matches the simple ‘HelloWorld’
configuration.
As indicated just below, branches are ‘discovered’.
The build is managed by a Jenkinsfile.
Jenkins scans Git once every minute to discover changes.

A new job now appears in the overview. Clicking the name displays a folder with jobs and
reveals the only branch at the moment – viz. ‘develop’.

New multibranch pipeline job. Content of HelloWorldMultibranchPipeline

Triggering a new build via the link on the right-hand side starts a new build of the ‘develop’
branch.

Build of the ‘develop’ branch in the pipeline build. With a finished build, the sun is now
shining on ‘develop’.

Clicking the ‘develop’ branch displays the branch status as illustrated below (a couple of
builds have run).

Overview of the ‘develop’ branch. We can now see what was configured in our Jenkinsfile:
build and archiving of ‘HelloWorld’, Cppcheck result, Sloccount, unit test results (none
failed – hence no results are displayed) and artifacts. An overview of the pipeline’s stages
with time spent is also displayed.

We will now create a new feature branch in Git and make a number of changes.

Activating new feature in GitFlow.

Naming the feature ‘new_feature’. Change in the feature branch with commit message.

Following the commit, we open the multibranch pipeline job in Jenkins and wait; a new branch appears within one minute. The build starts, and if we open the branch status we will see an exact copy of what was seen for the ‘develop’ branch.

New branch detected in the multibranch pipeline. Overview of the new feature branch.

So, all QA activities can take place in the feature branch, which reduces the noise when we merge back (particularly if you have already merged ‘develop’ into the feature branch beforehand; then there is almost no excuse for breaking the build).
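As a sketch, the branch creation above can be reproduced with plain Git commands; the git-flow command ‘git flow feature start new_feature’ boils down to roughly this (the repository name, file name and commit messages are illustrative):

```shell
set -e
# Create a throwaway repository with a 'develop' branch to demonstrate the flow.
git init -q demo && cd demo
git config user.email "ci@example.com"
git config user.name "CI"
git commit -q --allow-empty -m "initial"
git checkout -q -b develop
# "git flow feature start new_feature" is essentially:
git checkout -q -b feature/new_feature develop
# Make a change in the feature branch with a commit message:
echo "change" > feature.txt
git add feature.txt
git commit -q -m "Change in feature branch"
git branch --list 'feature/*'    # prints "* feature/new_feature"
```

After a `git push` of the feature branch, Jenkins' periodic scan discovers it and starts a build.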

Final Jenkins tips


There are hundreds of plugins for Jenkins. Below are a few that may prove very useful.

Mission Control Plugin


This plugin provides a visual overview suited for a dashboard in an office environment.

Example of a mission control dashboard on a laptop.

Build Monitor Plugin


Using this plugin generates a useful overview of critical jobs. It can be used as a ‘build light’.

Build monitor status on a laptop screen.

ThinBackup
The ThinBackup plugin will handle backup/restore of the Jenkins setup.

Job Configuration History
This plugin monitors changes in the setup of the individual jobs. It is thus possible to see
what has been changed and who has carried out the change. It is also possible to switch
between versions – which is a really big help in an environment where managing the
configurations is important!

Docker support
Jenkins supports Docker images in many contexts. Each step in a Jenkinsfile may, for example, run in its own Docker container, which means that a build environment packaged as a Docker image can be rolled out very quickly to a number of Jenkins servers. There are many contributed plugins, e.g. CloudBees Docker Build and Publish and docker-build-step.
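As a sketch, a stage in a declarative Jenkinsfile can be pinned to a Docker image like this (the ‘gcc:8’ image name is illustrative):

```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            // Run this stage inside a container started from the given image;
            // the workspace is mounted into the container automatically.
            agent { docker { image 'gcc:8' } }
            steps {
                sh 'g++ --version'
            }
        }
    }
}
```

Any node with Docker installed can then run the stage without having the compiler toolchain installed on the host itself.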

Cpplint support
A huge number of hacks have been made that make it possible to use existing plugins for new purposes. Use this link to see an example:
https://stackoverflow.com/questions/14172232/how-to-make-cpplint-work-with-jenkins-warnings-plugin

- where a Python script is used to convert output from cpplint to a format that is readable for the cppcheck plugin.
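A minimal sketch of such a conversion (not the exact script from the link): it turns cpplint's ‘file:line:  message  [category] [confidence]’ output lines into the XML shape the Cppcheck plugin parses; XML escaping of the message text is omitted for brevity.

```python
import re

# cpplint line format: "path/file.cpp:42:  Missing space around =  [whitespace/operators] [4]"
LINE_RE = re.compile(r"^(.+):(\d+):\s+(.*?)\s+\[(.+?)\]\s+\[(\d)\]$")

def cpplint_to_cppcheck_xml(lines):
    out = ['<?xml version="1.0" encoding="UTF-8"?>', "<results>"]
    for line in lines:
        match = LINE_RE.match(line.rstrip())
        if not match:
            continue  # skip e.g. the "Total errors found: N" summary line
        path, lineno, msg, category, confidence = match.groups()
        # cpplint confidence runs 1-5; map the highest level to "error"
        severity = "error" if int(confidence) >= 5 else "style"
        out.append('<error file="%s" line="%s" id="%s" severity="%s" msg="%s"/>'
                   % (path, lineno, category, severity, msg))
    out.append("</results>")
    return "\n".join(out)
```

Run cpplint, pipe its stderr through a script like this, and point the plugin at the resulting XML file.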

More Jenkins nodes


It is easy to distribute jobs to several machines. Jenkins is not installed on each slave machine; instead, a simple Java client is started which connects to the Jenkins master machine. Jenkins then commands the Java client to run build jobs. The prerequisite is that the build tools are installed on the slave machines (here, Docker might be a help).
There may be many reasons for wanting more Jenkins nodes:
● Different build jobs must run on different operating systems (Jenkins is Java
based).
● Parallelizing builds.
● Different Jenkins instances for different tasks, e.g. a cloud-hosted master Jenkins
and several local Jenkins servers to run system, component and integration tests
against target devices.
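A sketch of spreading work across nodes in a declarative Jenkinsfile; the ‘linux’ and ‘windows’ labels are assumptions about how the agents have been labelled in the Jenkins node configuration:

```groovy
pipeline {
    agent none
    stages {
        stage('Build on all platforms') {
            parallel {
                stage('Linux build') {
                    agent { label 'linux' }    // runs on any node labelled 'linux'
                    steps { sh 'make' }
                }
                stage('Windows build') {
                    agent { label 'windows' }  // runs on any node labelled 'windows'
                    steps { bat 'build.cmd' }
                }
            }
        }
    }
}
```

Both builds run at the same time on their respective nodes, and the pipeline fails if either of them fails.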

Blue Ocean user interface


In the sections above we reviewed the traditional Jenkins user interface. The Jenkins project is working on a major effort to create a new user interface, ‘Blue Ocean’, from scratch. A number of Blue Ocean plugins are already available.

Visualization in Blue Ocean of the projects from the previous section.

Activities in our Multibranch pipeline build in Blue Ocean

Visualization of a single pipeline build from Blue Ocean.

Rounding off
Jenkins and CI pipelines are a much more comprehensive subject than can be covered here; we have only scratched the surface. As a tool, Jenkins is incredibly flexible thanks to its large number of plugins and numerous usage scenarios. I have used Jenkins:
● As a clean build server where the configuration management dimension was used to
create full traceability from code to binary, at all levels.
● To run unit tests on the build server itself.
● To run component tests on the roof.
● To run pseudo system tests on the build server (binary built for x86 but with the same
code base as for an ARM target).

The only annoying aspect of Jenkins is that, like all other open source projects, it does not have a large marketing department to highlight its excellent qualities; and that every functionality available today, as well as every future improvement, has required and will continue to require that a person who understands the problems and has the skills and the time to solve them (possibly sponsored) works out an extension that makes life easier for everyone. These are the mechanisms that have created the super tool that can be installed today free of charge.

From a product perspective, Jenkins is therefore a model for the way in which products
should be developed and designed.
