
LIST OF EXPERIMENTS

SL.NO. EXPERIMENT NAME


1. AWS Cloud 9 IDE
2. Create a simple pipeline (S3 bucket)
3. Creating a cluster with kubeadm
4. Install and Set Up kubectl on Windows
5. Working with Terraform
6. Deploying infrastructure with an approval job
using Terraform
7. How to Integrate Jenkins SAST to SonarQube
8. Running Jenkins and SonarQube on Docker
9. Working with Nagios
10. Monitoring Windows Server with Nagios Core
11. Creating a Serverless Workflow
12. Using an Amazon S3 trigger to invoke a Lambda function
Ex.1 AWS CLOUD 9 IDE

LAB OBJECTIVES:

To understand the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, and launch the
Cloud9 IDE.

LAB OUTCOMES:

On successful completion, the student will be able to understand the Cloud9 IDE and use it to meet
the application requirement.

PROCEDURE:

Step 1. Sign in to your AWS account

1. Sign in to your AWS account at https://aws.amazon.com with an IAM user role that has the
necessary permissions.

2. Use the region selector in the navigation bar to choose the AWS Region where you want to
deploy AWS Cloud9.

3. Select the key pair that you created earlier. In the navigation pane of the Amazon EC2
console, choose Key Pairs, and then choose your key pair from the list

Step 2. Launch the Quick Start

1. Choose Deploy AWS Cloud9 into a new VPC on AWS.

2. On the Select Template page, keep the default setting for the template URL, and then

choose Next.

3. Monitor the status of the stack. When the status is CREATE_COMPLETE, the AWS Cloud9
instance is ready.

Step 3. Test the deployment

Upon successful completion of the CloudFormation stack, you will be able to navigate to the
AWS Cloud9 Console and log in to your new instance.

Conclusion:

Thus the concept of the AWS Cloud9 IDE is studied and verified for the application.
EX.2 CREATE A SIMPLE PIPELINE (S3 BUCKET)

LAB OBJECTIVES:

To understand about a simple pipeline and to know the concept of S3 bucket.

LAB OUTCOMES:

On successful completion, the student will be able to create a simple pipeline to meet the
application requirement.

PROCEDURE:

Step 1: Create an S3 bucket for your application

You can store your source files or applications in any versioned location. In this tutorial, you
create an S3 bucket for the sample applications and enable versioning on that bucket.

To create an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a name for your bucket (for example, awscodepipeline-demobucket-
example-date).

4. After the bucket is created, a success banner displays. Choose Go to bucket details.

5. On the Properties tab, choose Versioning. Choose Enable versioning, and then choose
Save.

When versioning is enabled, Amazon S3 saves every version of every object in the bucket.

6. On the Permissions tab, leave the defaults. For more information about S3 bucket and
object permissions, see Specifying Permissions in a Policy.
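
Equivalently, the bucket can be created and versioning enabled from the AWS CLI; a minimal
sketch (the bucket name and region are examples):

aws s3api create-bucket --bucket awscodepipeline-demobucket-example-date --region us-east-1
aws s3api put-bucket-versioning --bucket awscodepipeline-demobucket-example-date --versioning-configuration Status=Enabled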

Step 2: Create Amazon EC2 Windows instances and install the CodeDeploy agent
To create an instance role

1. Open the IAM console at https://console.aws.amazon.com/iam/.


2. Choose Next: Review. Enter a name for the role (for example, EC2InstanceRole).

Step 3: Create an application in CodeDeploy


To create an application in CodeDeploy

1. Open the CodeDeploy console at https://console.aws.amazon.com/codedeploy.

2. In Compute Platform, choose EC2/On-premises.

3. Choose Create application.

Step 4: Create your first pipeline in CodePipeline


To create a CodePipeline automated release process

1. Sign in to the AWS Management Console and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.

2. On the Welcome page, Getting started page, or the Pipelines page, choose Create
pipeline.

3. Leave the settings under Advanced settings at their defaults, and then choose Next.
Under Change detection options, leave the defaults. This allows CodePipeline to use Amazon
CloudWatch Events to detect changes in your source bucket.

Choose Next.

4. In Step 4: Add deploy stage, in Deploy provider, choose AWS CodeDeploy. The Region field
defaults to the same AWS Region as your pipeline. In Application name, enter
MyDemoApplication, or choose the Refresh button, and then choose the application name from
the list. In Deployment group, enter MyDemoDeploymentGroup, or choose it from the list, and
then choose Next.
5. In Step 5: Review, review the information, and then choose Create pipeline.

6. The pipeline starts to run. You can view progress and success and failure messages as the
CodePipeline sample deploys a webpage to each of the Amazon EC2 instances in the
CodeDeploy deployment.

Now, verify the results.

The following page is the sample application you uploaded to your S3 bucket.
CONCLUSION:

Thus the creation of S3 Bucket is done and the relevant application is verified.

EX.3 CREATING A CLUSTER WITH KUBEADM

LAB OBJECTIVES:

• Install a single control-plane Kubernetes cluster

• Install a Pod network on the cluster so that your Pods can talk to each other

LAB OUTCOMES:

On successful completion, the student will be able to create a cluster with kubeadm to meet the
application requirement.

PROCEDURE:

Instructions

Installing kubeadm on your hosts

Note:
If you have already installed kubeadm, run apt-get update && apt-get upgrade or yum update to
get the latest version of kubeadm.

1. (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high
availability, you should specify the --control-plane-endpoint option to set the shared endpoint for
all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a
load balancer.
2. (Optional) Since version 1.14, kubeadm tries to detect the container runtime on Linux by using
a list of well-known domain socket paths. To use a different container runtime, or if more than one
runtime is installed on the provisioned node, specify the --cri-socket argument to kubeadm init.
See Installing runtime.

3. (Optional) Run kubeadm config images pull prior to kubeadm init to verify connectivity


to the gcr.io container image registry.

To initialize the control-plane node run:

kubeadm init <args>

Make a record of the kubeadm join command that kubeadm init outputs. You need this command
to join nodes to your cluster.

The token is used for mutual authentication between the control-plane node and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this token can add
authenticated nodes to your cluster. These tokens can be listed, created, and deleted with
the kubeadm token command. See the kubeadm reference guide.
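
For example, existing bootstrap tokens can be listed, and a new token with its full join command
can be generated, using these standard kubeadm subcommands:

kubeadm token list
kubeadm token create --print-join-command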

Installing a Pod network add-on

Caution:

• By default, kubeadm sets up your cluster to use and enforce use of RBAC (role based access
control). Make sure that your Pod network plugin supports RBAC, and so do any manifests that
you use to deploy it.

• If you want to use IPv6--either dual-stack, or single-stack IPv6 only networking--for your
cluster, make sure that your Pod network plugin supports IPv6. IPv6 support was added to CNI
in v0.6.0.
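
A network add-on is installed with a single kubectl apply of its manifest, and the note below
refers to an example (from the upstream kubeadm guide) of running kubectl from a machine other
than the control-plane node. A typical form, assuming root SSH access to the control-plane host
(the manifest URL and host name are placeholders), is:

kubectl apply -f <pod-network-add-on-manifest.yaml>

scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes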

Note:
The example above assumes SSH access is enabled for root. If that is not the case, you can copy
the admin.conf file to be accessible by some other user and scp using that other user instead.

The admin.conf file gives the user superuser privileges over the cluster. This file should be used
sparingly.

CONCLUSION:

Thus creating a cluster with kubeadm is done and the related work is verified for the application.
EX.4 INSTALL AND SET UP KUBECTL ON WINDOWS

LAB OBJECTIVES:

To understand about the Install and Set Up kubectl on Windows

LAB OUTCOMES:

On successful completion, the student will be able to install and set up kubectl on Windows to meet
the application requirement.

PROCEDURE:

Install kubectl on Windows

The following methods exist for installing kubectl on Windows:

• Install kubectl binary with curl on Windows


• Install on Windows using Chocolatey or Scoop

Install kubectl binary with curl on Windows

1. Download the latest release v1.21.0.

Or if you have curl installed, use this command:

curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe

Note: To find out the latest stable version (for example, for

scripting), take a look at https://dl.k8s.io/release/stable.txt.

2. Validate the binary (optional)

Validate the kubectl binary against the checksum file:

Using Command Prompt to manually compare CertUtil's output to the checksum file
downloaded:
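
The commands mirror the kubectl-convert validation shown later in this exercise; the checksum file
sits next to the binary at the same URL path (the version shown is the one used above):

curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe.sha256
CertUtil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256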

Note: Docker Desktop for Windows adds its own version of kubectl to PATH. If you have
installed Docker Desktop before, you may need to place your PATH entry before the one added
by the Docker Desktop installer or remove the Docker Desktop's kubectl.

Install on Windows using Chocolatey or Scoop

1. To install kubectl on Windows you can use either Chocolatey package manager or Scoop
command-line installer.

o choco

o scoop

2. choco install kubernetes-cli

3. Test to ensure the version you installed is up-to-date:

4. kubectl version --client

5. Navigate to your home directory:


6. # If you're using cmd.exe, run: cd %USERPROFILE%

7. cd ~

8. Create the .kube directory:

9. mkdir .kube

10. Change to the .kube directory you just created:

11. cd .kube

12. Configure kubectl to use a remote Kubernetes cluster:

13. New-Item config -type file

Verify kubectl configuration

In order for kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file, which is
created automatically when you create a cluster using kube-up.sh or successfully deploy a
Minikube cluster. By default, kubectl configuration is located at ~/.kube/config.

Install kubectl convert plugin

A plugin for the Kubernetes command-line tool kubectl, which allows you to convert manifests
between different API versions. This can be particularly helpful to migrate manifests to a
non-deprecated API version with a newer Kubernetes release. For more info, see the Migrate to
non-deprecated APIs documentation.

1. Download the latest release with the command:


2. curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl-convert.exe
3. Validate the binary (optional)
o Using Command Prompt to manually compare CertUtil's output to the checksum
file downloaded:
o CertUtil -hashfile kubectl-convert.exe SHA256
o type kubectl-convert.exe.sha256
o Using PowerShell to automate the verification using the -eq operator to get
a True or False result:
o $($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
4. Add the binary in to your PATH.
5. Verify plugin is successfully installed
6. kubectl convert --help
If you do not see an error, it means the plugin is successfully installed.
CONCLUSION:

Thus the installing and set up of kubectl on Windows is done and verified.

EX.5 WORKING WITH TERRAFORM

LAB OBJECTIVES:

To understand about Terraform

LAB OUTCOMES:

On Successful Completion, the Student will be able to understand about working with
Terraform

PROCEDURE:

We will install Terraform on Ubuntu and provision a very basic infrastructure.

Install Terraform

Download the latest terraform package.

Refer to the official download page to get the latest version for the respective OS.

terraform_0.13.0_linux_amd64.zip

Extract the downloaded package.

inflating: terraform
Move the terraform executable file to the path shown below. Check the terraform version.
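
A typical command sequence for the download, extract, and install steps above (the release URL
matches the version shown; the install path is the conventional one and may differ on your system):

wget https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip
unzip terraform_0.13.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version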

Terraform v0.13.0

You can see these are the available commands in terraform for execution.

geekflare@geekflare:~$ terraform
Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below.


The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
apply Builds or changes infrastructure
workspace Workspace management

All other commands:


force-unlock Manually unlock the terraform state
push Obsolete command for Terraform Enterprise legacy (v1)
state Advanced state management

Provision AWS EC2 Instance Using Terraform

In this demo, I am going to launch a new AWS EC2 instance using Terraform.

Create a working directory for this Terraform demo.

Go to the directory and create a terraform configuration file where you define the provider and
resources to launch an AWS EC2 instance.

Note: I have changed the access and secret keys 😛, you need to use your own.
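
A minimal configuration file of the kind described, using placeholder credentials and the AMI and
instance type mentioned below (treat all values as examples):

provider "aws" {
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
  region     = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0a634ae95e11c6f91"
  instance_type = "t2.micro"
}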

From the configuration mentioned above, you can see I am specifying the provider, AWS.
Inside the provider block, I am giving the AWS user credentials and the region where the instance
must be launched.

In resources, I am giving the AMI details of Ubuntu (ami-0a634ae95e11c6f91) and specifying that
the instance type should be t2.micro.

You can see how easy and readable the configuration file is, even if you are not a die-hard coder.

Now, the first step is to initialize terraform:

terraform init

terraform plan

Next is the plan stage; it will create the execution graph for creating and provisioning the
infrastructure.

terraform apply

The apply stage will execute the configuration file and launch an AWS EC2 instance. When you
run the apply command, it will ask "Do you want to perform these actions?"; type yes and hit
enter.

You have successfully launched an AWS EC2 instance using Terraform.


terraform destroy

Finally, if you want to delete the infrastructure, you need to run the destroy command.

If you recheck the EC2 dashboard, you will see the instance got terminated.

CONCLUSION:
Thus the installation and working of Terraform are studied and verified.

EX.6 DEPLOYING INFRASTRUCTURE WITH AN APPROVAL JOB USING


TERRAFORM

LAB OBJECTIVES:

To understand deploying infrastructure with an approval job using Terraform.

LAB OUTCOMES:
On successful completion, the student will be able to deploy infrastructure with an approval job
using Terraform.

PROCEDURE:

Installing the Terraform CLI

I have found that the easiest way to install the Terraform CLI is to download the prebuilt binary for
your platform from the official download page and move it to a folder that is currently in
your PATH environment variable.
Note: Terraform offers extensive documentation about installation, if you would like to try a
different method.
On Linux, for example, installing Terraform is as simple as unzipping the downloaded package and
moving the binary into a directory on your PATH.
Confirm that it was installed correctly by running:

terraform version

It should print something similar to Terraform v0.13.5 (Terraform is updated regularly so don’t


be surprised if your version is something different)
Now that Terraform is working, we can create our project.

Setting up Google Cloud

Our first task is to create a project on Google Cloud to store everything related to Terraform
itself, like the state and service accounts. Using the gcloud CLI, enter:
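
The commands are not reproduced in this handout; based on how the circleci-k8s-cluster project is
created later in this exercise, they look roughly like the following (the RANDOM_ID value, project
name, and flags are assumptions):

export RANDOM_ID=$RANDOM
gcloud projects create terraform-admin-$RANDOM_ID --name terraform-admin --set-as-default
export TERRAFORM_PROJECT_IDENTIFIER=terraform-admin-$RANDOM_ID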

Link the recently created project to your billing account by running:

gcloud beta billing projects \

link $TERRAFORM_PROJECT_IDENTIFIER \

--billing-account [billing-account-id]
Remember to replace [billing-account-id] with the actual ID of your billing account. If you do
not know what ID to use, the easiest way to get it is to run:

gcloud beta billing accounts list

Your ID is the first column of the output. Use the ID whose OPEN status (the third column)
is TRUE.
Next, enable the required APIs on the Terraform project:

gcloud services enable \

cloudresourcemanager.googleapis.com \

cloudbilling.googleapis.com \

compute.googleapis.com \

iam.googleapis.com \

serviceusage.googleapis.com \

container.googleapis.com

Creating a service account

Create the service account we will be using with Terraform:

gcloud iam service-accounts create terraform \

--display-name "Terraform admin account"

It should show something like this:


Created service account [terraform].
Store the service account email in a variable:
export TERRAFORM_SERVICE_ACCOUNT_EMAIL="terraform@$TERRAFORM_PROJECT_IDENTIFIER.iam.gserviceaccount.com"
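
The service account key referenced later in this exercise (~/gcloud-terraform-admin.json) is
created with a command along these lines; the file name follows the later step and should
otherwise be treated as an assumption:

gcloud iam service-accounts keys create ~/gcloud-terraform-admin.json \
  --iam-account $TERRAFORM_SERVICE_ACCOUNT_EMAIL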

Adding roles

Next, we need to grant roles so Terraform can store its state inside a storage bucket we will
create later on. In this step, we will add the viewer and storage.admin roles to the service account
inside our Terraform Google Cloud project:

gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER \

--member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \

--role roles/viewer
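
The storage.admin role mentioned above is granted with the same pattern:

gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  --role roles/storage.admin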

Create a separate project to create our Kubernetes Cluster on:

gcloud projects create circleci-k8s-cluster-$RANDOM_ID --name circleci-k8s-cluster

Note that we are using the commands that we used for the previous project, but we are using the
name circleci-k8s-cluster. Another difference is that we are not setting this project as the default
for the gcloud CLI.
Put the identifier in a variable just like we did before:

export CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER=circleci-k8s-cluster-$RANDOM_ID

Just like the Terraform project, the new project must be linked to your billing account. Run:

gcloud beta billing projects \

link $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \
--billing-account [billing-account-id]

Remember to replace [billing-account-id] with your billing account ID.


To complete Google Cloud setup, give the Terraform service account full access on this project.
Assign it the owner role:

gcloud projects add-iam-policy-binding $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \

--member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \

--role roles/owner

Creating our Infrastructure as Code project

For this tutorial, we will deploy a simple Kubernetes Cluster to Google Cloud. The first thing we
need to do is to create our repository on GitHub and initialize a local repository in our machine
that points to this repo. Name your project circleci-terraform-automated-deploy.
With the GitHub CLI this is as easy as running:

gh repo create circleci-terraform-automated-deploy

Set Visibility to the one you want, and answer Yes to both questions.


Go to the new repository:

cd circleci-terraform-automated-deploy

Create new files:

|- backend.tf
|- k8s-cluster.tf

|- main.tf

|- outputs.tf

|- variables.tf

Create each file, leaving them empty:

touch backend.tf k8s-cluster.tf main.tf outputs.tf variables.tf

Finally, we will export a new shell variable called GOOGLE_APPLICATION_CREDENTIALS.


The google provider we are using in Terraform checks for this variable to authenticate. The new
variable points to the service account key we created earlier:

export GOOGLE_APPLICATION_CREDENTIALS=~/gcloud-terraform-admin.json

Check that Terraform is able to authenticate with Google Cloud to create the initial state. In your
git repository, run:

terraform init
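
The main.tf provider configuration is not reproduced in this handout; a sketch consistent with the
description below ([circleci-project-full-identifier] is the placeholder used in the text, and the
provider version and region values are assumptions):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.46"
    }
  }
}

provider "google" {
  project = "[circleci-project-full-identifier]"
  region  = "us-central1"
}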

In addition to the version of the Google provider, we are also setting the default values
for project and region. These values will be used by default when creating resources. Make sure to
replace [circleci-project-full-identifier] with the actual value. In our case, this is the value of
the $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER shell variable.
Because we changed a provider, we must also reinitialize our Terraform state. Run:

terraform init

Your output should include:


Terraform has been successfully initialized!
Terraform resources are based on actual infrastructure resources, so to create a Kubernetes
Cluster on Google Cloud, we must create two resources:

• google_container_cluster
• google_container_node_pool
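
A minimal sketch of these two resources for k8s-cluster.tf, with assumed names, location, machine
type, and node count (the real lab configuration may differ):

resource "google_container_cluster" "circleci_cluster" {
  name                     = "circleci-cluster"
  location                 = "us-central1-a"
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "circleci_cluster_nodes" {
  name       = "circleci-cluster-node-pool"
  cluster    = google_container_cluster.circleci_cluster.name
  location   = google_container_cluster.circleci_cluster.location
  node_count = 1

  node_config {
    machine_type = "e2-small"
  }
}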

After setting the value of the environment variable to the contents of the JSON key file,
click Add Environment Variable.
Now that we have finished creating our context, go back to the projects page in CircleCI (by
clicking on the X icon on the top right). Search for the GitHub repository you created for your
infrastructure. Click Set Up Project.
And then Start Building.

This will run the first build of our repository.


If everything works correctly, the UI will show a different icon when it gets to the hold-apply job.
Click the terraform/plan job to review the output of the terraform plan step. If it is correct, go
back to the workflow page and click the hold-apply job. You can approve or cancel it.

Approve it to start the terraform/apply.

This job may take a while to finish. Once it ends, the infrastructure should be available on
Google Cloud Console and our build would be green.
Success!
When you do not need the infrastructure anymore, you can run (locally):

terraform destroy

Conclusion:

Thus the Deploying infrastructure with an approval job using Terraform is done and
verified.

EX.7 HOW TO INTEGRATE JENKINS SAST TO SONARQUBE – DEVSECOPS.

LAB OBJECTIVES:

To understand how to integrate Jenkins SAST with SonarQube – DevSecOps.

LAB OUTCOMES:

On successful completion, the student will be able to integrate Jenkins SAST with SonarQube –
DevSecOps.
PROCEDURE:

SonarQube Setup
SonarQube Instance
Before proceeding with the integration, we will set up a SonarQube instance. The choice of
platform is yours. In this tutorial, we are using a SonarQube Docker container.
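
For example, a local instance can be started with Docker (the image tag is an example; any recent
SonarQube image works):

docker run -d --name sonarqube -p 9000:9000 sonarqube:lts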

Generate User Token


Now, we need to get the SonarQube user token to make a connection between Jenkins and
SonarQube. To do this, go to User > My Account > Security; at the bottom of the page you can
create new tokens by clicking the Generate button. Copy the token and keep it safe.

SonarQube Token Generation


Add necessary plugin
In this tutorial, we are following a Python-based application, so we need to add a Python plugin
to SonarQube so that it can collect the bugs and static code analysis results sent from Jenkins. To
do this, go to Administration > Marketplace > Plugins. Then, in the search box, search
for Python. You will see Python Code Quality and Security (Code Analyzer for Python).
Just install it. That's all from the SonarQube side.

Python Code Analyzer for SonarQube


Configuring SonarQube in Codebase.
In the Movie Database Application code base from the GitHub
(https://github.com/PrabhuVignesh/movie-crud-flask ), we will add the soanr-
project.properties file and add the following code inside the file.
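
The properties are not reproduced in this handout; a minimal sketch consistent with the description
below (the project key, name, and report path are assumptions):

sonar.projectKey=movie-crud-flask
sonar.projectName=movie-crud-flask
sonar.sources=.
sonar.python.bandit.reportPaths=bandit-report.json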

This basically tells the sonar scanner to send the analysis data under the project name with the
mentioned project key. Along with this, we are using Python Bandit to scan for Python dependency
vulnerabilities and more, so we add the path of its report in the properties file.
SonarQube Scanner Plugin for Jenkins
Tool Configuration SonarQube Scanner
Now, we need to configure the Jenkins plugin for SonarQube Scanner to make a connection with
the SonarQube instance. For that, go to Manage Jenkins > Configure System > SonarQube
Servers. Then, choose Add SonarQube, give the installation name and server URL, add the
authentication token in the Jenkins Credential Manager, and select it in the configuration.
SonarQube Server Configuration in Jenkins
Then, we need to set up the SonarQube Scanner to scan the source code in the various stages. For
that, go to Manage Jenkins > Global Tool Configuration > SonarQube Scanner. Then, click the
Add SonarQube Scanner button, give the scanner a name, and add an installer of your choice. In
this case, I have selected SonarQube Scanner from Maven Central.
SonarQube Scanner Configuration for Jenkins
SonarQube Scanner in Jenkins Pipeline
Now, it's time to integrate the SonarQube Scanner into the Jenkins pipeline. For this, we are
going to add one more stage in the Jenkinsfile called sonar-publish, and inside that, I am adding
the following code.
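
The stage is not reproduced in this handout; a minimal sketch, assuming the scanner installation is
named SonarQubeScanner and the server configuration is named sonarqube (both names must match
whatever you configured above):

stage('sonar-publish') {
    steps {
        script {
            // Tool name from Manage Jenkins > Global Tool Configuration (assumed name)
            def scannerHome = tool 'SonarQubeScanner'
            // Server name from Manage Jenkins > Configure System (assumed name)
            withSonarQubeEnv('sonarqube') {
                sh "${scannerHome}/bin/sonar-scanner"
            }
        }
    }
}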

Here, this will collect the SonarQube server information from the sonar-project.properties file
and publish the collected information to the SonarQube server, so the overall pipeline is the
existing Jenkinsfile with this extra stage added.

Once we execute the Jenkins pipeline for this project, we will get the following output.
Jenkins Pipeline for SonarQube
Here it will execute the SonarQube Scanner, collect the SAST information and the Python Bandit
report in JSON format, and then publish them to the SonarQube server. If you log in to
SonarQube and visit the dashboard, you will see the analysis of the project there.
Code analysis Result in SonarQube
Since we have both Jenkins and SonarQube at the enterprise standard, we have a lot of features,
including the alert system, where we can configure email or instant message notifications for the
findings in SonarQube or Jenkins. In the best case, we can automatically convert certain bugs or
findings into tickets and assign them to the respective developers.

Conclusion:
Thus integrating Jenkins SAST with SonarQube – DevSecOps is studied and verified.
EX.8. RUNNING JENKINS AND SONARQUBE ON DOCKER

LAB OBJECTIVES:

To understand running Jenkins and SonarQube on Docker.

LAB OUTCOMES:

On Successful Completion, the Student will be able to understand about Running Jenkins and
SonarQube on Docker

PROCEDURE:

Enough on the introductions. Let's jump into the configurations, shall we? First of all, let's
spin up Jenkins and SonarQube using Docker containers. Note that we are going to use
Docker Compose, as it is an easy method to handle multiple services. Below is the content of
the docker-compose.yml file which we are going to use.
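
The file is not reproduced in this handout; a minimal sketch with the two services (the original
also defined a Jenkins slave/agent service, and the image tags, ports, and volume are assumptions):

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
  sonarqube:
    image: sonarqube:lts
    ports:
      - "9000:9000"
volumes:
  jenkins_home:
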
docker-compose up is the command to run the docker-compose.yml file.

This file, when run, will automatically host Jenkins listening on port 8080, along with a slave.
Jenkins hosted using Docker
SonarQube will be hosted listening on port 9000.
SonarQube hosted using Docker
Configuring Jenkins for SonarQube Analysis

For this, let's go to Jenkins -> Manage Jenkins -> Manage Plugins. There, navigate to the
"Available" view and look for the plugin "SonarQube Scanner". Select the plugin, click on
"Install without restart", and wait for the plugin to be installed.
Installing SonarQube Scanner Plugin
Next, we need to configure the SonarQube server details in Jenkins. For that, let's click on
Jenkins -> Manage Jenkins -> Configure System -> SonarQube Servers and fill in the required
details.
SonarQube Server Configuration
Here,

• Name: Anything meaningful. Eg. sonarqube
• Server URL: <your sonarqube server url>
• Server Authentication Token: Refer below

To get the server authentication token, log in to SonarQube and go to Administration ->
Security -> Users and then click on Tokens. There, enter a token name, click on Generate, copy
the token value, paste it into the Jenkins field, and then click on "Done".
Creating Authorization Token
Finally, save the Jenkins global configuration by clicking on the "Save" icon.
Next, configure the scanner tool under Manage Jenkins -> Global Tool Configuration -> SonarQube
Scanner -> SonarQube Scanner installations. Enter any meaningful name under the Name field and
select an appropriate method in which you want to install this tool in Jenkins. Here, we are going
to select the "Install automatically" option. Then, click on "Save".
SonarQube Scanner Configuration in Jenkins
Creating and Configuring Jenkins Pipeline Job
Let's click on "New Item" on the Jenkins home page, enter the job name as
"sonarqube_test_pipeline", select the "Pipeline" option, and then click on "OK".
Creating Jenkins Pipeline job
Now, inside the job configuration, let's go to the Pipeline step, select Pipeline script from SCM,
then select Git, enter the Repository URL, and save the job.
Pipeline Job Configuration
As shown in the image, the source code is under the "develop" branch of the repository
"MEANStackApp". We have also committed a Jenkinsfile there which will be the input for
our pipeline job.
The Jenkinsfile has the logic to check out the source code and run the SonarQube tool to
perform code analysis on it. Below is the content of this Jenkinsfile.
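
The Jenkinsfile itself is not reproduced in this handout; a minimal sketch with the two stages
described (the repository URL, branch, project key, and the tool/server names are assumptions that
must match your own setup):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Branch and repository URL follow the description above (placeholder account)
                git branch: 'develop', url: 'https://github.com/<your-account>/MEANStackApp.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                script {
                    def scannerHome = tool 'SonarQubeScanner'   // tool name from Global Tool Configuration
                    withSonarQubeEnv('sonarqube') {             // server name from Configure System
                        sh "${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=MEANStackApp -Dsonar.sources=."
                    }
                }
            }
        }
    }
}
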
Building the Jenkins Pipeline Job
Since we have configured everything, let’s build the job and see what happens. For that,
click on the “Build Now” option in the job.
Building the Jenkins job
From the logs below, it can be seen that the Jenkins job is successful.
Logs of Jenkins Pipeline Job
Below is the job view in Blue Ocean. Pretty, isn’t it?
Job View in Blue Ocean
To check the analysis report, let’s go to the link as shown in the build logs. The link
basically points to the SonarQube server URL.
SonarQube Analysis Report
Here, it says there are no bugs and vulnerabilities in this code, and the Quality Gate status
looks "Passed". Though it's a simple app, it is good to know that the code quality is good.

We have reached the end of this article. Here, we have learned how to integrate SonarQube
with Jenkins for a simple node.js app in order to perform code analysis. The same
procedure can be followed for applications written in any other programming language.

Conclusion:
Thus Running Jenkins and SonarQube on Docker is studied and verified.

EX.9 CONTINUOUS MONITORING BY NAGIOS

LAB OBJECTIVES:

To understand continuous monitoring with Nagios.

LAB OUTCOMES:
On successful completion, the student will be able to understand continuous monitoring with
Nagios.

PROCEDURE:

Let’s start by installing Nagios Core

Install Nagios Core:

The complete process to install Nagios can be summarized in four steps:

1. Install Required Packages In The Monitoring Server


2. Install Nagios Core, Nagios Plugins And NRPE (Nagios Remote Plugin Executor)
3. Set Nagios Password To Access The Web Interface
4. Install NRPE In Client

Step – 1: Install Required Packages On The Monitoring Server:


Visit the website: http://dl.fedoraproject.org/pub/epel/6/

Click on i386, and then you will be redirected to a page.


Since we are using CentOS 6, so we will right click and copy the link location of ‘epel-release-
6-8.noarch.rpm‘, as shown in the above screenshot.

Open the terminal and use rpm -Uvh command and paste the link.

We need to download one more repository, for that visit the website
‘http://rpms.famillecollet.com/enterprise/‘
Right-click and copy the link location for ‘remi-release-6.rpm‘

Again open the terminal and use rpm -Uvh command and paste the link.

Fine, so we are done with the pre-requisites. Let’s proceed to the next step.

Step – 2: Install Nagios Core, Nagios Plugins And NRPE (Nagios Remote Plugin Executor):

Execute the below command in the terminal:
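
The command is not reproduced in this handout; on the CentOS 6 system used here it is the usual
yum install of the web server, PHP, Nagios Core, the plugins, and NRPE (package names follow the
client install shown later in this exercise):

yum -y install httpd php nagios nagios-plugins-all nrpe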

The Apache web server is required to host the Nagios web interface, and PHP is used to process
the dynamic content of the site.

Next, we need to enable Apache and Nagios service:

chkconfig httpd on && chkconfig nagios on


Our next step is to start Nagios and Apache:

service httpd start && service nagios start

Now, we will enable swap memory of at least 1 GB (the dd command below creates a 2 GB file). It's
time to create the swap file itself using the dd command:

dd if=/dev/zero of=/swap bs=1024 count=2097152

Swap is basically used to free some, not so frequently accessed information from RAM, and
move it to a specific partition on our hard drive.
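
Before activating the file, it has to be given safe permissions and formatted as swap; this step is
not shown in the handout, and a typical form is:

chmod 600 /swap
mkswap /swap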

If we see no errors, our swap space is ready to use. To activate it immediately, type:

swapon /swap
This file will last on the virtual private server until the machine reboots. You can ensure that the
swap is permanent by adding it to the fstab file.

echo /swap swap swap defaults 0 0 >> /etc/fstab

The operating system kernel can adjust how often it relies on swap through a configuration
parameter known as swappiness.

To find the current swappiness settings, type:

cat /proc/sys/vm/swappiness
Finally, we are done with the second step.

Let’s proceed further and set Nagios password to access the web interface.

Step – 3: Set Nagios Password To Access The Web Interface:

Set the password to access the web interface, use the below command:

htpasswd -c /etc/nagios/passwd nagiosadmin

Type the password and confirm it by retyping it.


Now, open the browser. Here, type your public IP or hostname followed by /nagios. Consider the
example below:

Here, give the user name and password. By default, the user name is nagiosadmin, and password
is what you have set in the previous step. Finally, press OK.

After this, you will be directed to the Nagios Core dashboard.


We can click on Hosts and see all the hosts that Nagios Core is currently monitoring.
We can notice it is only monitoring one host, i.e. localhost. If we want Nagios Core to
monitor a remote host, we need to install NRPE on that remote host. This brings us to the next
step: installing NRPE on the client machine that we want Nagios to monitor.

Step – 4: Install NRPE In Client:

Alrighty then, let’s install NRPE in the client machine.

Firstly, we need to install the required packages just like we did on the Nagios server machine, so
execute the same repository setup commands as before:

Now install Nagios, Nagios Plugins and NRPE in client:

yum -y install nagios nagios-plugins-all nrpe


Once it is installed, enable the NRPE service:

chkconfig nrpe on

Our next step is to edit the nrpe.cfg file (/etc/nagios/nrpe.cfg). We will be using the vi editor;
you can choose any other editor.

We need to add the IP address of the monitoring server to the allowed_hosts line, as shown below:
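
The screenshot is not reproduced here; the edited line in /etc/nagios/nrpe.cfg looks like this,
using the monitoring server address given below:

allowed_hosts=127.0.0.1,192.168.56.101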

Here, the IP address of my monitoring server is 192.168.56.101.


Now, we need to set up firewall rules to allow connections between the monitoring server and the
client. First, create a new chain for NRPE:

iptables -N NRPE
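
The remaining rules are not shown in the handout; a common pattern that accepts NRPE traffic (TCP
port 5666) from the monitoring server and drops it from everywhere else is the following (treat the
exact rules as an assumption):

iptables -I INPUT -s 0/0 -p tcp --dport 5666 -j NRPE
iptables -I NRPE -s 192.168.56.101 -j ACCEPT
iptables -A NRPE -s 0/0 -j DROP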

Now, we will save these configurations:

/etc/init.d/iptables save

Start NRPE service now.

service nrpe start

Now go back to the Monitoring server.

Here, we need to edit nagios.cfg file.

vi /etc/nagios/nagios.cfg

Uncomment the line – cfg_dir=/etc/nagios/servers


Make the ‘servers’ directory; for that, use the mkdir command.

mkdir /etc/nagios/servers/

Change your working directory to servers.

cd /etc/nagios/servers

Create a new file in this directory with a .cfg extension and edit it. We will name it client.cfg,
and we will be using the vi editor.

vi /etc/nagios/servers/client.cfg

Here add the below lines:
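
The lines are not reproduced in this handout; a minimal sketch of the host and service definitions
meant here (the host name, alias, address, and check are placeholders to adapt):

define host {
    use        linux-server
    host_name  client
    alias      client
    address    192.168.56.102
}

define service {
    use                  generic-service
    host_name            client
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}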


This basically defines the host and the kind of services we want to monitor. Give the hostname and
IP address of the machine that you want Nagios to monitor.

Similarly, you can add any number of services that you want to monitor. The same configuration
can be used to add any number of clients.

Last step, set the folder permissions correctly and restart Nagios.

chown -R nagios. /etc/nagios/

Now, restart Nagios

service nagios restart


Open the browser and again type the host name or public IP followed by /nagios/. In our case it is
localhost/nagios/.

Click on hosts to see all the machines Nagios is currently monitoring.


Here we can notice, it is currently monitoring the client machine (hostname of the machine that
we want Nagios to monitor). Basically, we have added a remote host using NRPE.

Conclusion:

Thus the Continuous Monitoring by Nagios is done and verified.

EX.10 MONITORING WINDOWS SERVER WITH NAGIOS CORE

LAB OBJECTIVES:

To understand monitoring a Windows server with Nagios Core.


LAB OUTCOMES:

On Successful Completion, the Student will be able to understand about Monitoring Windows
Server with Nagios Core

PROCEDURE:

Install the NRDP agent

In this simplified example, we want to install the listener onto the Nagios server, using an
Ubuntu system. Replace {version} with the current version of the NRDP service:

In the /usr/local/nrdp/server/config.inc.php file, generate a list of tokens permitted to send data.


Define one or more tokens -- these are arbitrary and can be set in any way:
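
In config.inc.php the tokens live in the authorized_tokens array; a sketch with an arbitrary
example value:

$cfg['authorized_tokens'] = array(
    "mysecrettoken123"
);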

Finally, restart Apache to enable the changes to take effect:
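
On Ubuntu this is typically:

sudo systemctl restart apache2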

Test the NRDP agent

Navigate to the Nagios server and the NRDP listener, such as http://10.0.0.10/nrdp. Use the
token previously retrieved from the authorized_tokens section of the configuration
file /usr/local/nrdp/server/config.inc.php to send the following JSON to test the listener:

Install the check_ncpa.py plugin

The check_ncpa.py plugin enables Nagios to monitor the installed NCPAs on the hosts. Follow
these steps to install the plugin:

1. Download the plugin.

2. Add the file to the standard Nagios Core location, /usr/local/nagios/libexec.


Apply these agent configurations

After the NRDP installation, install the NCPA. Download the installation files and run the
install.
For the listener configuration, follow these guidelines:

• API Token: Create an arbitrary token to query the API interface.
• Bind IP: Leave as default to listen on all addresses.
• Bind Port: Leave as default.
• SSL Version: Leave as default of TLSv1.2.
• Log Level: Leave as default.

For the NRDP passive configuration, apply the following:

• URL: Use the IP or host name of the Nagios server that hosts the installed NRDP agent.
• NRDP Token: Use the token retrieved from /usr/local/nrdp/server/config.inc.php.
• Hostname: Replace with the host name of the system.
• Check Interval and Log Level: Leave as default.

Create an NCPA check

We need to create a simple command to use the check_ncpa.py plugin; it normally lives in
/usr/local/nagios/etc/commands.cfg. The final step to monitor Windows Server with Nagios is to
create a simple CPU check in /usr/local/nagios/etc/ncpa.cfg, as sketched below.
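
A sketch of the command definition and a matching host and CPU service check (the API token, host
name, address, and thresholds are placeholders; the check_ncpa.py flags follow the plugin's
documented -H/-t/-P/-M usage):

# /usr/local/nagios/etc/commands.cfg
define command {
    command_name  check_ncpa
    command_line  $USER1$/check_ncpa.py -H $HOSTADDRESS$ -t 'mytoken' -P 5693 -M $ARG1$
}

# /usr/local/nagios/etc/ncpa.cfg
define host {
    use        windows-server
    host_name  my-windows-host
    address    10.0.0.20
}

define service {
    use                  generic-service
    host_name            my-windows-host
    service_description  CPU Usage
    check_command        check_ncpa!cpu/percent -w 80 -c 90
}
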
Challenges with Nagios Windows monitoring

Since Nagios was primarily designed for Linux, it does have some Windows monitoring
limitations. However, the Nagios agent for Windows has been around and actively
developed for a long time.

Conclusion:
Thus the study of monitoring Windows Server with Nagios Core is done and verified.

EX.11 CREATING A SERVERLESS WORKFLOW WITH AWS STEP FUNCTIONS AND


AWS LAMBDA

Lab Objectives:

The Lab Experiment aims:

To understand about the Concepts of Creating a Serverless Workflow with AWS Step Functions
and AWS Lambda

Lab Outcomes:

On Successful Completion, the student will be able to

Know the basic concepts of Creating a Serverless Workflow with AWS Step Functions and AWS
Lambda

PROCEDURE:

Step 1. Create a State Machine & Serverless Workflow


Our first step is to design a workflow that describes how we want support tickets to be handled in
our call center. Workflows describe a process as a series of discrete tasks that can be repeated
again and again.

We sit down with the call center manager to talk through best practices for handling
support cases. Using the visual workflows in Step Functions as an intuitive reference, we define
the workflow together.

Then, we'll design our workflow in AWS Step Functions. Our workflow will call one AWS
Lambda function to create a support case, invoke another function to assign the case to a support
representative for resolution, and so on.
a. Open the AWS Step Functions console. Select Author with code snippets. In the Name text
box, type CallCenterStateMachine.

b. Replace the contents of the State machine definition window with the Amazon States
Language (ASL) state machine definition (a sketch is shown after this list). Amazon States
Language is a JSON-based, structured language used to define your state machine.
c. Click the refresh button to show the ASL state machine definition as a visual workflow. In our
scenario, we can easily verify that the process is described correctly by reviewing the visual
workflow with the call center manager.

d. Click Next.
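
The full definition from the AWS tutorial is not reproduced here; a sketch with the same overall
shape (the Resource ARNs are placeholders that are filled in later in Step 5, and the $.Status
field used by the choice state is an assumption):

{
  "Comment": "A simple call center workflow (abbreviated sketch)",
  "StartAt": "Open Case",
  "States": {
    "Open Case": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:OpenCaseFunction",
      "Next": "Assign Case"
    },
    "Assign Case": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:AssignCaseFunction",
      "Next": "Work on Case"
    },
    "Work on Case": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:WorkOnCaseFunction",
      "Next": "Is Case Resolved"
    },
    "Is Case Resolved": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.Status", "NumericEquals": 1, "Next": "Close Case" },
        { "Variable": "$.Status", "NumericEquals": 0, "Next": "Escalate Case" }
      ],
      "Default": "Escalate Case"
    },
    "Close Case": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:CloseCaseFunction",
      "End": true
    },
    "Escalate Case": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:EscalateCaseFunction",
      "End": true
    }
  }
}
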
Step 2. Create an AWS Identity and Access Management (IAM) Role
AWS IAM is a web service that helps us securely control access to AWS resources. In this step,
we will create an IAM role that allows Step Functions to access Lambda.
a. In another browser window, open the AWS Management Console. When the screen loads,
type IAM  in the search bar, then select IAM to open the service console.
b. Click Roles and then click Create Role.

c. On the Create Role screen, leave AWS service selected, select Step Functions, then
click Next: Permissions. On the next screen, click Next: Review.
d. Enter the Role name as step_functions_basic_execution and choose Create role.
e. Our role is created and appears in the list. Select the name of our role to view it.

f. Copy the Role ARN on the next screen.



Step 3. Add the IAM Role to the State Machine


Next, we’ll add the IAM role ARN you created to the State Machine in AWS Step Functions.
a. Select the browser tab with the Step Functions console.

b. Paste the ARN you copied in the IAM role ARN text box.


c. Click Create State Machine.


Step 4. Create your AWS Lambda Functions


Now that we've created the state machine, we can decide how we want it to perform work. We
can connect our state machine to AWS Lambda functions and other microservices that already
exist in our environment, or create new ones. In this lab, we'll create a few simple Lambda
functions that simulate various steps for handling support calls, such as assigning the case to a
Customer Support Representative.
a. Click Services, type Lambda in the search text box, then select Lambda to open the service
console.

b. Click Create function.
c. Select Author from scratch.

d. Configure our first Lambda function with these settings:

Name – OpenCaseFunction.
Runtime – Node.js 4.3.
Role – Create custom role.

A new IAM window appears.

e. For the Role name, keep lambda_basic_execution and click Allow.


We are automatically returned to the Lambda console.
f. Click Create function.

g. Replace the contents of the Function code window with the code for the function (a sketch is
shown after this list), and then click Save.
h. At the top of the page, click Functions.
When complete, you should have 5 Lambda functions.
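The function bodies from the AWS tutorial are not reproduced here; a sketch of the kind of Node.js
function used for OpenCaseFunction (the input and output field names are assumptions that must
match the state machine's input and its choice state):

exports.handler = (event, context, callback) => {
    // Open a support case using the input case ID, then return a confirmation message
    var myCaseID = event.inputCaseID;
    var myMessage = "Case " + myCaseID + ": opened...";
    var result = { Case: myCaseID, Message: myMessage };
    callback(null, result);
};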

Step 5. Populate your Workflow


The next step is to populate the task states in your Step Functions workflow with the Lambda
functions you just created.
a. Click Services, type Step in the search text box, then select Step Functions to open the service
console.
b. On the State machines screen, select your CallCenterStateMachine and click Edit.

c. In the State machine definition section, find the line below the Open Case state which starts
with Resource.

Replace the ARN with the ARN of your OpenCaseFunction.

If you click the sample ARN, a list of the AWS Lambda functions in your account will appear
and you can select it from the list.

d. Repeat the previous step to update the Lambda function ARNs for the Assign Case, Work on
Case, Close Case, and Escalate Case Task states in your state machine, then click Save.

Step 6. Execute your Workflow


Our serverless workflow is now ready to be executed. A state machine execution is an instance of
your workflow, and occurs each time a Step Functions state machine runs and performs its tasks.
Each Step Functions state machine can have multiple simultaneous executions, which we can
initiate from the Step Functions console (which is what we’ll do next), or using the AWS SDKs,
the Step Functions API actions, or the AWS CLI. An execution receives JSON input and
produces JSON output.
a. Click Start execution.


b. A New execution dialog box appears. To supply an ID for your support case, enter the content
from below in the New execution dialog box in the Input window, then click Start execution.
{
"inputCaseID": "001"
}

c. As our workflow executes, each step will change color in the Visual workflow pane. Wait a
few seconds for execution to complete. Then, in the Execution details pane,
click Input and Output to view the inputs and results of our workflow.
d. Step Functions lets us inspect each step of our workflow execution, including the inputs and
outputs of each state. Click on each task in our workflow and expand the Input and Output fields
under Step details. We can see that the case ID we entered into our state machine is passed
from each step to the next, and that the messages are updated as each Lambda function
completes its work.
e. Scroll down to the Execution event history section. Click through each step of execution to see
how Step Functions called your Lambda functions and passed data between functions.
f. Depending on the output of our WorkOnCaseFunction, our workflow may have ended by
resolving the support case and closing the ticket, or by escalating the ticket to the next tier of
support. We can re-run the execution a few more times to observe this different behavior. We could
also modify our state machine to loop back to the Work on Case state; no changes to our Lambda
functions would be required. The functions we built for this lab are samples only, so we'll move on
to the next step.

Step 7. Terminate your Resources


In this step we will terminate our AWS Step Functions and AWS Lambda related resources.

Important: Terminating resources that are not actively being used reduces costs and is a best
practice. Not terminating our resources can result in a charge.
a. At the top of the AWS Step Functions console window, click State machines.
b. In the State machines window, select the CallCenterStateMachine and click Delete. To
confirm that we want to delete the state machine, in the dialog box that appears, click Delete state
machine. Our state machine will be deleted in a minute or two, after Step Functions has
confirmed that any in-process executions have completed.
c. Next, we’ll delete your Lambda functions. Click Services in the AWS Management Console
menu, then select Lambda.
d. In the Functions screen, click each of the functions you created for this lab and then
select Actions and then Delete. Confirm the deletion by clicking Delete again.
e. Lastly, we’ll delete your IAM roles. Click Services in the AWS Management Console menu,
then select IAM.
f. Select both of the IAM roles that we created for this lab, then click Delete role. Confirm the
delete by clicking Yes, Delete on the dialog box.

We can now sign out of the AWS Management console.

Conclusion:

Thus the study of Creating a Serverless Workflow with AWS Step Functions and AWS Lambda
is done and verified.
EX.12. AMAZON S3 TRIGGER TO INVOKE A LAMBDA FUNCTION

LAB OBJECTIVES:

To understand about Amazon S3 trigger to invoke a Lambda function

LAB OUTCOMES:

On Successful Completion, the Student will be able to understand about Amazon S3 trigger to
invoke a Lambda function

PROCEDURE:

Prerequisites
To use Lambda and other AWS services, we need an AWS account. If we do not have an
account, visit aws.amazon.com and choose Create an AWS Account. For instructions, see How
do I create and activate a new AWS account?

This lab assumes that we have some knowledge of basic Lambda operations and the Lambda
console. If we haven't already, follow the instructions in Getting started with Lambda to create
your first Lambda function.

Create a bucket and upload a sample object

Create an Amazon S3 bucket and upload a test file to our new bucket. Our Lambda function
retrieves information about this file when we test the function from the console.

To create an Amazon S3 bucket using the console

1. Open the Amazon S3 console.


2. Choose Create bucket.
3. Under General configuration, do the following:
a. For Bucket name, enter a unique name.
b. For AWS Region, choose a Region. Note that we must create our Lambda function in the same
Region.
4. Choose Create bucket.

After creating the bucket, Amazon S3 opens the Buckets page, which displays a list of all
buckets in our account in the current Region.

To upload a test object using the Amazon S3 console

1. On the Buckets page of the Amazon S3 console, choose the name of the bucket that we created.
2. On the Objects tab, choose Upload.
3. Drag a test file from our local machine to the Upload page.
4. Choose Upload.

Create the Lambda function

Use a function blueprint to create the Lambda function. A blueprint provides a sample function
that demonstrates how to use Lambda with other AWS services. Also, a blueprint includes
sample code and function configuration presets for a certain runtime. For this lab, we can choose
the blueprint for the Node.js or Python runtime.

To create a Lambda function from a blueprint in the console

1. Open the Functions page on the Lambda console.


2. Choose Create function.
3. On the Create function page, choose Use a blueprint.
4. Under Blueprints, enter s3 in the search box.
5. Under Basic information, do the following:
a. For Function name, enter my-s3-function.
b. For Execution role, choose Create a new role from AWS policy templates.
c. For Role name, enter my-s3-function-role.
6. Under S3 trigger, choose the S3 bucket that you created previously.

When we configure an S3 trigger using the Lambda console, the console modifies our
function's resource-based policy to allow Amazon S3 to invoke the function.
7. Choose Create function.

Review the function code

The Lambda function retrieves the source S3 bucket name and the key name of the uploaded
object from the event parameter that it receives. The function uses the Amazon S3 getObject API
to retrieve the content type of the object.

While viewing your function in the Lambda console, you can review the function code on
the Code tab, under Code source. The blueprint provides both Node.js and Python versions; the
Python code looks like the following:
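
This is a sketch of the S3 get-object blueprint behaviour described above (the exact blueprint code
may differ slightly):

import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Read the bucket name and object key from the S3 event record
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        # Fetch the object and report its content type, as described in the text
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}.'.format(key, bucket))
        raise e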


Test in the console

Invoke the Lambda function manually using sample Amazon S3 event data

1. On the Upload page, upload a few .jpg or .png image files to the bucket.


2. Open the Functions page on the Lambda console.
3. Choose the name of your function (my-s3-function).
4. To verify that the function ran once for each file that we uploaded, choose the Monitor tab. This
page shows graphs for the metrics that Lambda sends to CloudWatch. The count in
the Invocations graph should match the number of files that we uploaded to the Amazon S3
bucket.

For more information on these graphs, see Monitoring functions in the AWS Lambda console.
5. (Optional) To view the logs in the CloudWatch console, choose View logs in CloudWatch.
Choose a log stream to view the logs output for one of the function invocations.

Clean up the resources

We can now delete the resources that we created for this lab, unless we want to retain them. By
deleting AWS resources that we’re no longer using, we prevent unnecessary charges to your
AWS account.
Conclusion:

Thus the study of Amazon S3 trigger to invoke a Lambda function is done and verified.
