
Introduction to

Fullstack and DevOps


Nicolas El Khoury
DevOps Consultant
Overview

Background Information.

Course Curriculum

Course Outcome
Background
Information
General Information

DevOps Engineer | Software Engineer | Solutions Architect

Seven years of experience.

University Instructor

Linkedin Profile
Academia

BE Computer Engineering.

MSc Information Systems Security.

Publications:
Energy-Aware Placement and Scheduling of Network Traffic Flows with Deadlines on Virtual Network Functions.

Placement And Scheduling of Network Traffic on Virtual Network Functions


Community

AWS Beirut User Group - Community Lead: Organized dozens of workshops and tutorials related to AWS.

Meta Developer Circle: Beirut - Mentor: Delivered multiple presentations and workshops related to software development and microservices.

AWS Community Builders - Member: Generated multiple articles for the global tech community.
Awards and Certifications

AWS Certified SysOps Administrator - Associate

AWS Certified Solutions Architect - Associate

AWS Certified Developer - Associate

HashiCorp Certified: Terraform Associate


Course
Curriculum
Topics
● Introduction to Linux commands.
● Introduction to Web Applications.
● Web Application Concepts.
● Introduction to Databases.
● Introduction to Backend Services.
● Introduction to Infrastructure Types.
● Introduction to Docker.
● Advanced Docker Concepts.
Course
Outcome
Basic Understanding of:
● Web Applications.
● Deployment Requirements.
● Containers and Docker.
● Amazon Web Services.
● DevOps Concepts
Thank
You
DevOps:
what it is
what it isn’t
Nicolas El Khoury
DevOps Consultant
Overview

Evolution of the Software Industry.


Software Delivery Models.
What is DevOps.
What are DevOps Engineers.
Evolution
of Software
Evolution of the Software Industry
● Software companies are everywhere.

● Microservices are now a thing.

● Kubernetes is everywhere.

● Cloud Providers now have hundreds of managed services.

● Continuous Delivery tools allow us to easily automate everything.

● Infrastructure can now be created and managed through code.

● Multi-cloud solutions are becoming popular.

● Everyone wants to become a DevOps Engineer, and all companies want to apply DevOps.

There is no clear definition of DevOps.


Software
Development
Methods
Waterfall Model

[Diagram: Design (Business Analysts) → Develop (Developers) → Test (QA Testers) → Deploy (Operations)]

• Rigid Process.
• Discourages changes.
• Difficult to measure progress.
• Slow and complex deployments.
Agile

[Diagram: iterative cycle of Design (Business Analysts) → Develop (Developers) → Test (QA Testers), with Deploy (Operations) outside the cycle]

• Operations team is left out.
• Deployment is still considered a black box, resulting in outcomes similar to those of the Waterfall model.
DevOps

DevOps is a bouquet of philosophies, sets of tools, and practices that aims to:

● Create and manage infrastructure resources.
● Release software changes.
● Perform necessary tests.
● Automatically spin new environments seamlessly.
● Enhance system security.
● Ensure scalability.
● Improve collaboration.

[Diagram: Design (Business Analysts), Develop (Developers), Test (QA Testers), Deploy (Operations) as one continuous cycle]
DevOps: What it isn’t

● Deployment of software on the cloud using the Agile approach.

● Creating software using the Microservices approach.

● Using Infrastructure as Code tools with no clear purpose.

● The adoption of unneeded automation tools in general.


DevOps Engineer: What it isn’t

● Engineers who create cloud infrastructure.
● Kubernetes Gurus.
● Cloud Enthusiasts.

DevOps Engineer: What it is

● Create any required infrastructure.
● Deploy the application.
● Provide continuous delivery mechanisms.
● Automate processes.
Thank You
Introduction to Linux
Commands
Nicolas El Khoury

Introduction
Linux Shell
Basic Linux Commands

Introduction
Linux is an Operating System, similar to Microsoft Windows and macOS. It is
completely open source and free. Several distributions (flavors) exist, including, but
not limited to, Ubuntu, Kali Linux, Red Hat Enterprise Linux (RHEL), CentOS, etc.

Linux powers the vast majority of servers online because it is fast, secure, and free.

Linux Shell
One way to interact with the Operating System is through the Graphical User
Interface. However, this is not the only way. As a matter of fact, most Linux servers
online cannot be accessed through a GUI. An alternative is using the Command Line
Interface, which allows the user to interact with the Operating System through
commands. The Linux Shell is then a program that takes these commands from the
user and sends them to the Operating System to process.

Basic Linux Commands


In this lecture, we go over some of the Linux commands, especially those that we
will use in this course:

pwd - Short for Print Working Directory. As the name states, this command
prints the absolute path to the current directory.

ls - List files and directories. There are many flags that can be used with this
command. An example is ls -lah:



1. -l: Lists files in the long format (permissions, file and directory owners, file
and directory size, date modified, etc).

2. -a : Includes hidden directories and files.

3. -h : Prints sizes in a human-readable format.

man - Displays the user manual for any Linux command (i.e., man ls displays
information about the ls command).

mkdir - Used to create directories. mkdir /home/ubuntu/directories creates the
directory /home/ubuntu/directories . The -p flag ensures that intermediate
directories are created when needed. For example, creating
/home/ubuntu/directories/directory1/subdirectory1 without the -p flag will not
succeed if the directory1 directory does not exist: mkdir -p
/home/ubuntu/directories/directory1/subdirectory1

cd - Short for Change Directory, used to navigate between directories. For
instance, cd /home/ubuntu/directories/directory1/subdirectory1 will navigate the
user to the directory /home/ubuntu/directories/directory1/subdirectory1 . cd ..
navigates the user to the parent directory.



touch - used to create a file. For instance touch newFile.txt

cp - Copy files and directories from the source to the destination. For example,
cp /home/ubuntu/directories/directory1/newFile.txt /home/ubuntu/directories copies
the newFile.txt file from its old directory to /home/ubuntu/directories .

mv - Move files and directories from the source to the destination. For example,
mv /home/ubuntu/directories/directory1/newFile.txt /home/ubuntu/directories/directory1/subdirectory1
moves the newFile.txt file from its current directory to
/home/ubuntu/directories/directory1/subdirectory1 .



echo - Writes characters to the console. The "echo" command also allows
writing content to a file.

1. Write to the console: echo 'hello world!' prints “hello world!” to the console.

2. Write to file: echo 'hello world!' > file.txt prints “hello world!” to a file
named file.txt

cat - Prints the content of a file to the console. cat file.txt

nano - Text Editor. nano file.txt . Allows us to access and edit a file. nano

allows the creation of a file if it doesn’t exist: nano nanoFile.txt



chmod - Modify the set of permissions for a file or directory. Currently,
nanoFile.txt has read/write permissions. Modify the permissions of nanoFile.txt
to read-only: chmod 400 nanoFile.txt . As you can see, the permissions clearly
changed, giving the user only read permissions. Now attempting to modify the
content of the file won’t work.

Code Permission User

0400 Read Owner
0200 Write Owner
0100 Execute / Search Owner
0040 Read Group
0020 Write Group
0010 Execute / Search Group
0004 Read Others
0002 Write Others
0001 Execute / Search Others
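The octal codes above are additive per user class. A minimal sketch of combining them, reusing the nanoFile.txt example (the exact file is illustrative):

# 0400 + 0200 = 0600: the owner can read and write; group and others have no access
chmod 600 nanoFile.txt
ls -l nanoFile.txt
# 0400 + 0040 + 0004 = 0444: everyone can read, nobody can write
chmod 444 nanoFile.txt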

chown - Changes the ownership of a file or directory. Currently, nanoFile.txt is


owned by the ubuntu user that created the file. Change the ownership of the
nanoFile.txt file to root chown root:root nanoFile.txt . This command cannot be

performed without root privileges, which brings us to the next command.

sudo - Short for “SuperUser Do”. Performs a command with root permissions or
privileges. Similar to “Run as Administrator” on Windows. sudo chown root:root
nanoFile.txt . This command is now performed using root privileges. Since we

are logged in using the ubuntu user, we are no longer able to see the contents of
the file without using sudo .

rm - Delete files or directories. rm -rf /home/ubuntu/directories

1. -r remove the directory and all subdirectories and files.

2. -f remove the desired files and directories without prompt.



Introduction to
Web Applications
Nicolas El Khoury
DevOps Consultant
Overview

The Internet.
The World Wide Web.
Client-Server Architecture.
Domain Resolution.
Load Balancing.

Demo - Deploy and Serve a Static Website.


Demo - Deploy and Serve two Static Websites.
Demo - Add (fake) Domain Names to the Applications.
Demo - Enable Load Balancing.
What is
Everything
and why
The Internet

Global Network.
Allows connection between devices.
Devices connect to the internet using the TCP/IP protocol.
The World Wide Web

It is not the Internet!!!

A global collection of documents and resources linked together.

Can be accessed through HyperText Transfer Protocol (HTTP)

Made out of several components: HTTP protocol, URLs, URIs, HTML.


Web Application

An application served through the internet and consumed by a client

Platform agnostic

Examples: Gmail, Facebook, Whatsapp, etc.


Connecting the Dots
Client – Server Architecture

A computing model to serve and consume resources.

Clients: Mobiles, Browsers, IoT Devices, etc.

Servers: Mail servers, File servers, etc.

Servers are reached by IPs, and serve applications using ports.


Domain Resolution

Impossible to memorize the IP address of every server.

Impractical on large scale.

Domain Resolution is the mapping of a domain name to the server’s IP addresses.


Domain Name System

A database system containing domain names and their corresponding IP addresses.
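For example, a domain can be resolved from the command line (the dig and nslookup tools are assumed to be available; on Ubuntu, dig ships with the dnsutils package, and example.com is purely illustrative):

# Print the IP addresses that example.com resolves to
dig +short example.com
# Alternative lookup tool
nslookup example.com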


Load Balancing

The act of distributing traffic across multiple replicas of a service.

Performance.

Scalability.

Availability.

Security.
Load Balancing - Continued

Algorithms: Round Robin, Least Connections, Least Response Time, Least Bandwidth.

Health Checks: Protocol, Port, Path, HealthCheck Interval, Healthy Threshold Count
Knowledge
Application
Demo – Deploy and Serve a Static Website

Create an AWS EC2 machine.

Install a Web Server (Apache2).

Deploy a simple HTML application.

Configure the webserver.


Demo – Deploy and serve two static websites

Create a second HTML page.

Serve the application on port 81.

Configure the webserver.


Demo – Add (fake) domain names to the applications

First app served through: myfirstapp.com

Second app served through: mysecondapp.com

Serve both apps on port 80 (default HTTP port).


Demo – Enable Load Balancing

Create two webservers.

Deploy the applications on each webserver.

Create and configure a load balancer.


Thank You
Demo – Deploy and Serve a
Static Website
Nicolas El Khoury

Introduction
Solution
Compute and Networking Resources
SSH Keypair
Security Group
EC2 instance
Apache2 Installation and configuration
Application Deployment
Webserver Configuration

Introduction
In this demo, we are going to deploy a simple HTML website on an AWS EC2
Ubuntu Machine. To do so, we are going to:

Create a simple HTML page.

Create the networking and compute resources on AWS.

Install the Apache2 webserver.

Deploy the HTML page.

Configure the webserver to serve the application on port 80.



The webpage to be deployed is nothing but a simple HTML page:

<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>
<p>I have no idea what I'm doing.</p>
</body>
</html>

Solution
Compute and Networking Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.

2. Create a Keypair.

3. The key will be automatically downloaded. Move it to a hidden directory.

4. Modify the permissions to read only: chmod 400 <keyName>.pem

Security Group
1. Navigate to the Security group option from the left menu.

2. Specify a name: aws-demo.

3. Attach it to the default VPC.

4. Enable ports 22 and 80 to all IPv4 addresses.



EC2 instance
Navigate to AWS EC2 —> instances —> Launch instances, with the following
parameters:

Name: webserver

AMI: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type

Instance Type: t3.medium (Or any type of your choice)

Key pair name: aws-demo

Network Settings:

Select existing security group: aws-demo

Configure storage: 1 x 25 GiB gp2 Root volume



SSH to the machine: ssh ubuntu@<Public IP address> -i <path to key>.pem

Apache2 Installation and configuration



1. Update the local package index to reflect the latest upstream changes: sudo apt-get update

2. Install the Apache2 webserver: sudo apt-get install -y apache2

3. Verify that the deployment worked by performing a request to the machine using
its public IP. The Apache2 default page must be loaded on the browser:

Application Deployment
Perform the following steps to deploy the application:

# Create a directory
sudo mkdir /var/www/myfirstapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myfirstapp
# Create the index.html file and paste the HTML code into it
sudo nano /var/www/myfirstapp/index.html
# Create the log directory
sudo mkdir /var/log/myfirstapp
# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/myfirstapp/

Webserver Configuration
1. Create the virtual host file: sudo nano /etc/apache2/sites-available/myfirstapp.conf



2. Paste the configuration below:

<VirtualHost *:80>
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log
CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>

Enable the configuration:

# Enable the site configuration


sudo a2ensite myfirstapp.conf
# Disable the default configuration
sudo a2dissite 000-default.conf
# Test the configuration
sudo apache2ctl configtest
# Restart apache
sudo systemctl restart apache2

Perform a request on the server. The response will change this time, loading the
custom page that was configured.



Demo – Deploy and Serve two
Static Websites
Nicolas El Khoury

Introduction
Solution
Application Deployment
Webserver Configuration

Introduction
In the previous demo, we installed Apache and configured it to listen and serve a
static website on port 80. In this demo, on the same machine, we are going to:

Create a second HTML page.

Deploy the HTML page.

Configure the webserver to serve the application on port 81.

The webpage to be deployed is nothing but a simple HTML page:

<!DOCTYPE html>
<html>
<head>
<title>My Second Application</title>
</head>
<body>
<p>Neither do I.</p>
</body>
</html>

Solution
Application Deployment
Perform the following steps to deploy the application:



# Create a directory
sudo mkdir /var/www/mysecondapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/mysecondapp
# Change the directory permissions
sudo chmod -R 755 /var/www/mysecondapp
# Create the index.html file and paste the code in it
sudo nano /var/www/mysecondapp/index.html
# Create the log directory
sudo mkdir /var/log/mysecondapp
# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/mysecondapp/

Webserver Configuration
a. Create the virtual host file: sudo nano /etc/apache2/sites-available/mysecondapp.conf

b. Paste the following:

<VirtualHost *:81>
DocumentRoot /var/www/mysecondapp
ErrorLog /var/log/mysecondapp/error.log
CustomLog /var/log/mysecondapp/requests.log combined
</VirtualHost>

Enable the configuration:

# Enable the site configuration


sudo a2ensite mysecondapp.conf
# Test the configuration
sudo apache2ctl configtest
# Restart apache
sudo systemctl restart apache2

1. Performing a request on port 81 will not load the application:

a. Modify the Security Group to include another inbound rule allowing requests
on port 81 from anywhere.

b. Instruct Apache to listen on port 81:

i. Edit the configuration file: sudo nano /etc/apache2/ports.conf

ii. Add the port: Listen 81

iii. Restart apache: sudo service apache2 restart



2. Perform a request on port 81.



Demo - Add Domain Names to
the Application
Nicolas El Khoury

Introduction
Solution
Application1 Virtualhost
Application2 Virtualhost
Hosts File Modification
Application Testing

Introduction
In the previous demos, we deployed two HTML applications on one EC2
machine. One application is served on port 80, while the other one is served on
port 81.

Instead of serving the applications using the IP address of the machine, and
different ports, we will use domain names, and both applications will be served
on port 80.

Application1 will be served using domain name: myfirstapp.com

Application2 will be served using domain name: mysecondapp.com

Solution
Application1 Virtualhost
Modify the virtualhost of Application1: sudo nano /etc/apache2/sites-available/myfirstapp.conf

Include the domain name:

<VirtualHost *:80>
ServerName myfirstapp.com
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log



CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>

Application2 Virtualhost
Modify the virtualhost of Application2: sudo nano /etc/apache2/sites-available/mysecondapp.conf

Include the domain name:

<VirtualHost *:80>
ServerName mysecondapp.com
DocumentRoot /var/www/mysecondapp
ErrorLog /var/log/mysecondapp/error.log
CustomLog /var/log/mysecondapp/requests.log combined
</VirtualHost>

Restart Apache: sudo service apache2 restart

Hosts File Modification


Add the following record to the local machine’s hosts file (macOS or Linux: sudo nano /etc/hosts ):

<SERVER IP> myfirstapp.com mysecondapp.com

Application Testing
Query the first application myfirstapp.com :

Query the second application mysecondapp.com :
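A sketch of how both queries can be performed from the terminal of the machine whose hosts file was modified (curl is assumed to be installed):

curl http://myfirstapp.com
curl http://mysecondapp.com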



Demo – Enable Load Balancing
Nicolas El Khoury

Introduction
Solution
AWS Networking and Compute Resources
SSH Keypair
Security Group
EC2 Instances
App1 Server configuration
Apache2 Installation
Application Deployment
Virtualhost Configuration
App2 Server configuration
Server testing
Load Balancer Configuration
Apache2 Installation
Apache2 Configuration
Application Testing

Introduction
To better understand load balancing, we are going to:

Create three Ubuntu EC2 machines on AWS.

Create two simple HTML pages.

Deploy each application on one VM.

Create and configure a load balancer on the third VM, and instruct it to distribute
the requests on the two machines.

The first application is the following HTML Page:

<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>
<p>I have no idea what I'm doing.</p>



</body>
</html>

The second application is the following HTML Page:

<!DOCTYPE html>
<html>
<head>
<title>My Second Application</title>
</head>
<body>
<p>Neither do I.</p>
</body>
</html>

Solution
AWS Networking and Compute Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.

2. Create a Keypair.

3. The key will be automatically downloaded. Move it to a hidden directory.

4. Modify the permissions to read only: chmod 400 <keyName>.pem

Security Group
1. Navigate to the Security group option from the left menu.

2. Specify a name.

3. Attach it to the default VPC.

4. Enable ports 22 and 80 to all IPv4 addresses.

EC2 Instances
Create three Ubuntu 20.04 VMs:

Navigate to AWS EC2 —> instances —> Launch instances, with the following
parameters:



Name: app1 | app2 | loadbalancer (each name corresponds to one VM)

AMI: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type

Instance Type: t3.medium (Or any type of your choice)

Key pair name: aws-demo

Network Settings:

Select existing security group: aws-demo

Configure storage: 1 x 25 GiB gp2 Root volume

App1 Server configuration


Apache2 Installation
1. Update the local package index to reflect the latest upstream changes: sudo apt-get update

2. Install the Apache2 package: sudo apt-get install -y apache2

Application Deployment
a. Perform the following steps to deploy the first application:

# Create a directory
sudo mkdir /var/www/myapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myapp
# Create the index.html file and paste the code of the first app in it
sudo nano /var/www/myapp/index.html
# Create the log directory
sudo mkdir /var/log/myapp
# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/myapp/

Virtualhost Configuration
Create the virtual host file: sudo nano /etc/apache2/sites-available/myapp.conf

Paste the following configuration:



<VirtualHost *:80>
DocumentRoot /var/www/myapp
ErrorLog /var/log/myapp/error.log
CustomLog /var/log/myapp/requests.log combined
</VirtualHost>

Enable the configuration:

# Enable the site configuration


sudo a2ensite myapp.conf
# Disable the default configuration
sudo a2dissite 000-default.conf
# Test the configuration
sudo apache2ctl configtest
# Restart apache
sudo systemctl restart apache2

Perform a request on the server to ensure that the configuration is done


properly.

App2 Server configuration


Repeat the same steps exactly on the second server to:

Install Apache2.

Deploy the application.

Create the virtual host.

Test a request

Server testing
Perform a request on the server to ensure that the configuration is done properly.

Load Balancer Configuration


Perform the following on the loadbalancer VM:



Apache2 Installation
1. Update the local package index to reflect the latest upstream changes: sudo apt-get update

2. Install the Apache2 package: sudo apt-get install -y apache2

Apache2 Configuration
Install the required modules

sudo a2enmod proxy


sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod headers

Create the virtual host: sudo nano /etc/apache2/sites-available/lbmanager.conf

Paste the following configuration:

<VirtualHost *:80>
<Proxy balancer://myservers>
BalancerMember http://<APP1 IP>:80
BalancerMember http://<APP2 IP>:80
</Proxy>

ProxyPass "/" "balancer://myservers/"


ProxyPassReverse "/" "balancer://myservers/"
</VirtualHost>

Enable the configuration:

# Disable the default configuration


sudo a2dissite 000-default.conf
# Enable the lbmanager
sudo a2ensite lbmanager.conf
# Test the configuration
sudo apache2ctl configtest
# Restart apache
sudo systemctl restart apache2

Restart Apache2: sudo service apache2 restart

Application Testing



Entering the IP of the load balancer should balance the load between the two
machines:



Web Application
Concepts
Nicolas El Khoury
DevOps Consultant
Overview

Websites vs Web Applications.

HTTP Protocol.

Web Application Layers.

Web Application Components.

Web Application Architecture.


Websites vs
Web Applications
Websites

Set of interconnected documents.

Developed using HTML, CSS, JavaScript.

Limited User Interaction.

Stateless.

Blogs, News Sites, etc.


Web Applications

Provide complex functionalities.

Allow for User Interaction.

More complex to architect.

Online Games, e-commerce, Online Learning, etc.


Client – Server Architecture

A computing model to serve and consume resources.

Clients: Mobiles, Browsers, IoT Devices, etc.

Servers: Mail servers, File servers, etc.

Servers are reached by IPs, and serve applications using ports.


HTTP
Protocol
Definition

Designed to fetch web pages deployed on the internet.

Communication is done using HTTP Messages.


HTTP Protocol
Components
HTTP Methods

Indicates the desired actions.

GET: Fetch data from the server.

POST: Create data on the server.

PUT: Modify data on the server.

DELETE: Delete data from the server.


HTTP Version

The version of the HTTP protocol used.


HTTP Uniform Resource Locator (URL)

The complete and unique address of a resource on the web.

• Protocol: http
• Domain Name: mywebapp.com
• Application port: 80
• Path: /some/api
• Query String Parameters: key1=val1, key2=val2
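Putting these components together, the full URL looks like this:

http://mywebapp.com:80/some/api?key1=val1&key2=val2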
HTTP Headers

Contain information about the request/response.

Stored as key-value pairs.


HTTP Body

Contains information sent by the client to the server.


HTTP Status Code

Code dictating the status of the request.

Status Codes:

2XX: Success
4XX: Client (logical) error
5XX: Server Error
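A quick way to observe these pieces (method, headers, body, status code) in practice is with curl; a minimal sketch, where the host and path are purely illustrative:

# Send a GET request and print the status line and response headers (-i)
curl -i http://mywebapp.com/some/api
# Send a POST request with a header and a body; -X sets the HTTP method
curl -i -X POST http://mywebapp.com/some/api -H "Content-Type: application/json" -d '{"key1": "val1"}'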
Web Application Layers

Presentation Layer: Client side application.

Application (Business Logic) Layer: Server side application.

Database Layer: Data storage and persistence.


Web Application Components

● Frontend Application.
● Backend Application.
● Database.
● Message Bus.
● Content Delivery Network.
● Workflow Management Platform.
● Container Orchestration Tool.

[Architecture diagram: a public load balancer and public/operation APIs in front of manager and worker nodes, an internal load balancer, MySQL and NoSQL databases, a Redis cache, Amazon S3 with a CDN, and Amazon CloudWatch for metrics, logs, and event-based alarms, all within private subnets]


Web
Application
Architecture
Definition

[Diagram: a Monolithic Architecture bundling the Catalog, Customer, Order, and Payment modules into a single application, versus a Microservices Architecture splitting them into separate Catalog, Customer, Order, and Payment services]


Monolithic Applications - Advantages

Ease of Development.

Ease of Deployment.

Ease of testing.
Monolithic Applications - Disadvantages

Slower Development Lifecycles.

Code Dependency.

Performance issues.

Scalability issues.

Code Ownership and Team Division Problems.

Technology Lock-in.

Technical Debt.

Infrastructure Costs.


Microservices Applications - Advantages

Fault Tolerance.

High Scalability.

Ease of maintenance.

Ease of Deployment.

Technological Freedom.

Fast Development Lifecycles.


Microservices Applications - Disadvantages

Complex Infrastructure.

Need for DevOps.

Increased Network Calls.

Complex End to End Testing.


Thank You
Introduction to
Server Side Apps
Nicolas El Khoury
DevOps Consultant
Overview

Databases: DBMS, Data Model, Database Types.

NK-backend Service.

Demo: Deploy the NK-backend service on AWS.


Databases
Databases

Database Management System: Software programs that enable the creation and management of databases.

Data Model: allows developers to logically structure data in different tables and collections.

Database Types: Relational Databases, Document Databases, Graph Databases.


Relational Databases

Students (Table)

ID    Name (String)   Age (Integer)
111   Jhon            25
25    Peter           87

Courses (Table)

ID    Name                        isAvailable
124   Introduction to Fullstack   true
629   Introduction to DevOps      true

StudentsToCourses (Table)

ID    studentID   courseID
1     629         25
2     629         111
Document Databases

[Diagram: the same Students, Courses, and StudentsToCourses data represented as document collections]
Graph Databases
Connecting
the Dots
How Everything Works Together
NK-backend
Service
NK-Backend Service

Open Source Project developed using NodeJS and SailsJS.

Uses ArangoDB to store and manage its data.

RESTful service that exposes CRUD APIs to manage “Persons”.

● GET /health: The health endpoint of the service.

● POST /person: Creates a Person Record in the database.

● GET /persons: Fetches all existing persons from the database.

● DELETE /person/:id: Deletes a person record from the database.
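A sketch of how these APIs could be exercised with curl once the service is deployed (the host and port 8080 follow the deployment demo later in the course; the JSON body fields are illustrative assumptions):

# Health check
curl http://<MACHINE IP>:8080/health
# Create a person (body fields are assumptions for illustration)
curl -X POST http://<MACHINE IP>:8080/person -H "Content-Type: application/json" -d '{"name": "John", "age": 25}'
# Fetch all existing persons
curl http://<MACHINE IP>:8080/persons
# Delete a person record by id (replace <id> with a real record id)
curl -X DELETE http://<MACHINE IP>:8080/person/<id>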


Demo
NK-Backend Service Deployment

● Create the AWS required resources.

● Configure the virtual machine with the necessary prerequisites.

● Deploy the database and connect to it using a client.

● Deploy the backend application.

● Perform API requests.


Thank You
NK-backend Service Deployment
Nicolas El Khoury

Introduction
Solution
AWS Networking and Compute Resources
SSH Keypair
Security Group
EC2 instance
ArangoDB Installation
Backend Service Deployment
Git Installation
NodeJS and NPM Installation
Backend Service
API testing
Health API
Postman Installation
Create User API

Introduction
To better understand how an application is deployed and serves traffic over the internet, in this demo, we
will deploy the nk-backend service and its database on an AWS EC2 machine, and explore some of its
APIs by communicating with them using the Postman API client. The following steps will be completed:

Create an AWS EC2 virtual machine.

Configure the virtual machine with the necessary prerequisites.

Deploy the database and connect to it using its client.

Deploy the backend application.

Perform API requests to validate the deployment.

Solution
AWS Networking and Compute Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.

2. Create a Keypair.

3. The key will be automatically downloaded. Move it to a hidden directory.

4. Modify the permissions to read only: chmod 400 <keyName>.pem

Security Group
Create a Security Group with the following inbound rules:



8080: To communicate with the backend application.

8529: To communicate with the database.

22: To SSH to the machine.

EC2 instance
Navigate to AWS EC2 —> instances —> Launch instances, with the following parameters:

Name: webserver

AMI: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type

Instance Type: t3.medium (Or any type of your choice)

Key pair name: aws-demo

Network Settings:

Select existing security group: aws-demo

Configure storage: 1 x 25 GiB gp2 Root volume

Finally, SSH to the instance:

ArangoDB Installation

# Update the package repository


sudo apt-get update
# Add the repository key to the package manager
curl -OL https://download.arangodb.com/arangodb310/DEBIAN/Release.key
sudo apt-key add - < Release.key

# install arangodb
echo 'deb https://download.arangodb.com/arangodb310/DEBIAN/ /' | sudo tee /etc/apt/sources.list.d/arangodb.list
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install arangodb3=3.10.0-1 -y

The installer will prompt you to enter and confirm the root user’s password: rootPassword

A few more questions must be answered with Yes or No; leave them as the defaults.

After the installation is complete, check that the database is running: sudo service arangodb3 status

Modify the ArangoDB configuration file: sudo nano /etc/arangodb3/arangod.conf and replace the
endpoint = tcp://127.0.0.1:8529 line with endpoint = tcp://0.0.0.0:8529



Restart the database service and check its status:

sudo service arangodb3 restart


sudo service arangodb3 status

Login to the database management console using the machine’s IP and the database’s port. In this
case: http://<MACHINE IP>:8529/



Login using the configured username (root) and password (rootPassword)

Select the _system database.



Reaching this point signals the success of the deployment of the database.

Backend Service Deployment


Git Installation
Update the package repository: sudo apt-get update

Install Git: sudo apt-get install git -y

Ensure git is installed by checking the version: git --version

NodeJS and NPM Installation


Install NodeJS and NPM: sudo apt-get install nodejs npm -y

Ensure NodeJS is installed by checking the version: node -v

Backend Service
Clone the code from the repository: git clone https://github.com/devops-beyond-limits/nk-backend-service.git

Navigate into the root directory cd nk-backend-service and print the content ls -lah



Install the application packages: npm install

Export the applications environment variables:

# The port on which the application is listening


export PORT=8080
# Variables related to the database credentials already installed
export ARANGODB_HOST=localhost
export ARANGODB_PORT=8529
export ARANGODB_USERNAME=root
export ARANGODB_PASSWORD=rootPassword
# The name of the database to be used by the application
export ARANGODB_DB_NAME=persons
# We will explain these Environment Variables at a later stage.
export ARANGO_MAX_RETRY_ATTEMPTS=3



export ARANGO_RETRY_DELAY=250
export JWT_SECRET=mysecret
export JWT_ACCESS_TOKEN_VALIDITY=3600
export JWT_REFRESH_TOKEN_VALIDITY=86400

Launch the application: node app.js

The application logs clearly show that the application successfully connected to the database and created
a database and a collection, both called persons .

Validate the creation of the database by logging in to the database administration console.



Notice the persons collection created by the application.



API testing
Health API
Validate that the application is running by calling the /health API: http://<MACHINE IP>:8080/health
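The same check can be done from a terminal (curl is assumed to be installed):

curl http://<MACHINE IP>:8080/health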

Postman Installation
Download and Install Postman.

Save the Postman collection in a json file. This file contains a template for all APIs.

Import the Postman Collection.



As can be seen, three APIs are now populated in Postman.

Create User API


Click on the Create Person API.

Submit a request with the following parameters. Make sure to replace the variables with the
corresponding IP and port.



Notice the successful response with a status code of 200 and the returned data to the client

The application logs clearly indicate the success of the operation

The database clearly shows a new record added



Infrastructure
Types
Nicolas El Khoury
DevOps Consultant
Overview

Physical Servers

Virtual Machines.

Containers

Serverless
Infrastructure Types
Physical Servers

Advantages:
● Ownership and Customization
● Performance

Disadvantages:
● Large CAPEX and OPEX
● Management Overhead
● Lack of Scalability
● Resource Mismanagement
● Improper Isolation
● Performance Degradation over Time

Virtual Machines

Advantages:
● Low CAPEX
● Flexibility
● Disaster Recovery
● Better Resource Management
● Proper Environment Isolation

Disadvantages:
● Performance Issues
● Security Concerns
● Increased Resource Waste

Containers

Advantages:
● Decreased Overhead
● Portability
● Rapid Delivery Cycles

Disadvantages:
● Data Persistence
● Cross-Platform Incompatibility

Serverless

Advantages:
● Cost
● Scalability
● Fast Delivery Cycles

Disadvantages:
● Security and Privacy
● Vendor Lock-in
● Complex Troubleshooting
Thank You
Introduction to
Containers and
Docker
Nicolas El Khoury
DevOps Consultant
Overview

Container Image

Container

Container Registry

Dockerfile

Container Data Management (Bind Mounts and Volumes)


Terminology and Definitions

Container Image: A snapshot of an application and its packages.

Container: A running instance of an image.


Container Registry: Service to store and share container images.
Docker: Container Engine that aids in delivering software packages as containers.
Dockerfile: Text document. Contains all the instructions needed to build an image.
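A minimal sketch tying these terms together (image and tag names are illustrative; Docker is assumed to be installed):

# Pull a container image from a registry (Docker Hub here)
docker pull httpd:2.4-alpine
# Run a container, i.e. a running instance of that image
docker run -d --name demo -p 80:80 httpd:2.4-alpine
# Build a new image from a Dockerfile in the current directory
docker build -t my-image:v1 .
# Push the image to a registry to share it (a real push needs the registry/namespace prefix in the tag)
docker push my-image:v1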
Container Data
Management
Terminology and Definitions

Volumes

Bind Mounts
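A brief sketch of the difference between the two (names and paths are illustrative; Docker is assumed to be installed):

# Named volume: Docker creates and manages the storage under /var/lib/docker/volumes
docker run -d --name web-volume -v my-volume:/usr/local/apache2/htdocs httpd:2.4-alpine
# Bind mount: an existing host directory is mounted directly into the container
docker run -d --name web-bind -v /home/ubuntu/site:/usr/local/apache2/htdocs httpd:2.4-alpine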
Demo
Demo

AWS Networking and Compute resources

Deployment of an HTML application on an EC2 machine

Containerization of the application


Redeployment of the application using containers
Demo
Demo

Deploy the Containerized version of ArangoDB without data persistence.

Enable data persistence using Docker volumes.

Build the backend service using a Dockerfile.


Deploy the containerized version of the backend service.
Perform API requests to validate the deployment.
Thank You
Demo - Introduction to
Containers
Nicolas El Khoury

Introduction
AWS Infrastructure
Security Group
Key pair
AWS EC2 Machine
Application Deployment on the EC2 machine
Application Code
Apache2 Installation
Application Deployment
Virtual Host
Application Deployment using containers
IAM Role
AWS CLI
Docker Installation
Base Image
Customize the Base Image
Create a Custom Image
Push the Image to AWS ECR
Create a Docker image using Dockerfiles

Introduction
To better understand the difference between the concepts explained, we will attempt
to deploy a simple HTML application on an Ubuntu EC2 machine. Then we will
containerize and redeploy it. The following steps will be performed:

Create the networking and compute resources on AWS.

Deploy a simple HTML application on an AWS EC2 machine.

Containerize the application.

Store the image on AWS Elastic Container Registry.

Deploy the containerized application.



AWS Infrastructure
Security Group
Navigate to AWS EC2 —> Security Groups —> Create security group, with
the following parameters:

Security group name: aws-demo

Description: Allows inbound connections to ports 22 and 80 from anywhere

VPC: default VPC

Inbound rules:

Rule1:

Type: SSH

Source: Anywhere-IPv4

Rule2:

Type: HTTP

Source: Anywhere-IPv4

Key pair
Navigate to AWS EC2 —> Key pairs —> Create key pair, with the following
parameters:

Name: aws-demo

Private key file format: .pem



# Create a hidden directory (and any missing parent directory)
mkdir -p ~/.keypairs/aws-demo
# Move the key to the created directory
mv ~/Downloads/aws-demo.pem ~/.keypairs/aws-demo/
# Change the permissions of the key
sudo chmod 400 ~/.keypairs/aws-demo/aws-demo.pem

AWS EC2 Machine


Navigate to AWS EC2 —> instances —> Launch instances, with the following
parameters:

Name: aws-demo

AMI: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type

Instance Type: t3.medium (t3.micro can be used for free tier, but may suffer
from performance issues)

Key pair name: aws-demo

Network Settings:

Select existing security group: aws-demo

Configure storage: 1 x 25 GiB gp2 Root volume

Leave the rest as defaults and launch the instance.



An EC2 VM is created, and is assigned both a private and a public IPv4 address.

Telnet is one way to ensure the machine is accessible on ports 22 and 80:

# Make sure to replace the machine's IP with the one attributed to your machine
telnet 3.250.206.251 22
telnet 3.250.206.251 80



SSH to the machine, using the key pair created: ssh ubuntu@3.250.206.251 -i ~/.keypairs/aws-demo/aws-demo.pem

Application Deployment on the EC2 machine


Application Code
The application to be deployed is a simple HTML document:

<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>



<p>I have no idea what I'm doing.</p>
</body>
</html>

Apache2 Installation
Update the local package index to reflect the latest upstream changes: sudo apt-get update

Install the Apache2 package: sudo apt-get install -y apache2

Check if the service is running: sudo service apache2 status

Verify that the deployment worked by hitting the public IP of the machine:

Application Deployment

# Create a directory
sudo mkdir /var/www/myfirstapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myfirstapp
# Create the index.html file and paste the code in it
sudo nano /var/www/myfirstapp/index.html
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp/index.html
# Create the log directory
sudo mkdir /var/log/myfirstapp



# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/myfirstapp/

Virtual Host
Create the virtual host file: sudo nano /etc/apache2/sites-available/myfirstapp.conf

Paste the following:

<VirtualHost *:80>
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log
CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>

Enable the configuration:

# Enable the site configuration


sudo a2ensite myfirstapp.conf
# Disable the default configuration
sudo a2dissite 000-default.conf
# Test the configuration
sudo apache2ctl configtest
# Restart apache
sudo systemctl restart apache2

Perform a request on the server. The response will now return the HTML
document created:

Stop the apache webserver: sudo service apache2 stop



Application Deployment using containers
IAM Role
Navigate to IAM —> Roles —> Create Role, with the following parameters:

Trusted entity type: AWS Service

Common use cases: EC2

Permissions policies: AdministratorAccess

Role Name: aws-demo

Attach this role to the EC2 Machine: Actions —> Security —> Modify IAM Role

AWS CLI

# Update the package repository


sudo apt-get update
# Install unzip on the machine
sudo apt-get install -y unzip
# Download the zipped package
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" \
-o "awscliv2.zip"
# unzip the package
unzip awscliv2.zip
# Run the installer
sudo ./aws/install



Ensure the AWS CLI is installed by checking the version: aws --version

Docker Installation

# Update the package index and install the required packages


sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker’s official GPG key:


sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository


echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list \
> /dev/null

# Update the package index again


sudo apt-get update

# Install the latest version of docker


sudo apt-get install -y docker-ce docker-ce-cli containerd.io \
docker-compose-plugin

# Add the Docker user to the existing User's group


#(to run Docker commands without sudo)
sudo usermod -aG docker $USER

To validate that Docker is installed and the changes are all applied, restart the SSH
session, and query the docker containers: docker ps -a . A response similar to the
one below indicates the success of the installation.

Base Image
Pull the Apache2 Docker Image: docker pull httpd:2.4-alpine .

List the available images docker images .



Create a Docker container: docker run -d --name myfirstcontainer -p 80:80 httpd:2.4-alpine

Ensure that the container is successfully running: docker ps -a

Monitor the container logs: docker logs -f myfirstcontainer

Attempt to make a request to the container, using the machine’s public IP and
port 80: http://<MACHINE IP>:80

Customize the Base Image


In this example, the following simple HTML page representing a website will be
added, thus creating a custom image.

<!DOCTYPE html>
<html>
<head>
<title>My First Dockerized Website</title>
</head>



<body>
<p>I am inside a Docker Container.</p>
</body>
</html>

Create an interactive sh shell on the container: docker exec -it myfirstcontainer sh .

Navigate to the designated directory: cd /usr/local/apache2/htdocs/ . The directory
already has a file named index.html which contains the default Apache page
loaded above. Modify it to include the custom HTML page above (one way to do
this is sketched below), and hit the container again: http://<MACHINE IP>:80
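A minimal sketch of overwriting index.html from within the container’s sh session with a heredoc, assuming no text editor is available in the Alpine-based image:

cat > /usr/local/apache2/htdocs/index.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
<title>My First Dockerized Website</title>
</head>
<body>
<p>I am inside a Docker Container.</p>
</body>
</html>
EOF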

Clearly, the image shows that the changes have been reflected.

Create a Custom Image


The changes performed will not persist, especially when the container crashes.
As a matter of fact, by default, containers are ephemeral. To verify it, remove the
container and start it again:

docker rm -f myfirstcontainer
docker ps -a
docker run -d --name myfirstcontainer -p 80:80 httpd:2.4-alpine

Now hit the container again: http://<MACHINE IP>:80 .

The changes performed disappeared. To persist the changes, a custom image


must be built. The custom image is a snapshot of the container after adding the
custom website. Repeat the steps above to add the HTML page, and ensure the
container is returning the new page again.

Create a Docker Image from the running container: docker commit myfirstcontainer .

Name and tag the image: docker tag <image ID> custom-httpd:v1 .



Remove the old container, and create a new one using the new image:

docker rm -f myfirstcontainer
docker run -d --name mysecondcontainer -p 80:80 custom-httpd:v1

Hitting the machine on port 80 should return the new HTML page now no matter
how many times the container is destroyed and created.

Push the Image to AWS ECR


Create a Container repository on AWS ECR, navigate to Amazon ECR —>
Repositories —> Private —> Create repository, with the following parameters:

Visibility Settings: Private

Repository name: custom-httpd

Leave the rest as defaults and create the repository.



First, login to the ECR from the VM:

aws ecr get-login-password --region <REGION ID> | docker login --username AWS \
--password-stdin <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com

Tag the image with that found in the ECR repository:

# Tag the image with the correct repository name


docker tag custom-httpd:v1 \
<ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v1
# Push the image
docker push <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v1



Remove all the images and containers from the VM.

# Delete all the containers from the server
docker rm -f $(docker ps -a -q)
# Delete all the images from the server
docker rmi -f $(docker images -q)
# List all the available images and containers (should return empty)
docker images
docker ps -a

Create a third container, but this time, reference the image located in the ECR:
docker run -d --name mythirdcontainer -p 80:80 <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v1

Finally, hit the server again http://3.250.206.251:80

Remove the images and containers from the VM:

# Delete all the containers from the server


docker rm -f $(docker ps -a -q)
# Delete all the images from the server
docker rmi -f $(docker images -q)
# List all the available images and containers (should return empty)
docker images
docker ps -a

Create a Docker image using Dockerfiles



Create a temporary directory: mkdir ~/tempDir

Place the application code inside the directory in a file called index.html

<!DOCTYPE html>
<html>
<head>
<title>My Final Dockerized Website</title>
</head>
<body>
<p>I am Dockerized using a Dockerfile.</p>
</body>
</html>

Create a Dockerfile next to the index.html file, with the following content:

FROM httpd:2.4-alpine
COPY index.html /usr/local/apache2/htdocs/

The resultant directory should look as follows:

Build the Docker Image: docker build -f Dockerfile -t <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v-Dockerfile .

Push the image to the ECR: docker push <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v-Dockerfile



Simulate a fresh installation of the image, remove all the containers and images
from the server, and create a final container from the newly pushed image:

# Remove existing containers
docker rm -f $(docker ps -a -q)
# Remove the images
docker rmi -f $(docker images -q)
# Create the final container
docker run -d --name myfinalcontainer -p 80:80 <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v-Dockerfile

Hit the machine via its IP



Advanced Docker Concepts
Nicolas El Khoury

Introduction
Solution
ArangoDB Deployment
No Data Persistence
Data Persistence
NK-backend Service Deployment

Introduction
Now that we learned the basics of containers and Docker, we will take it to the next
level in this demo. We will containerize and deploy the NK-backend application that
we previously deployed on an Ubuntu VM. To do so, we will deploy the containerized
version of the Arango database, as well as the NK Backend Service. The following
steps will be completed:

Deploy the Containerized version of ArangoDB without data persistence.

Enable data persistence using Docker volumes.

Build the backend service using a Dockerfile.

Deploy the containerized version of the backend service.

Perform API requests to validate the deployment.

Solution
ArangoDB Deployment
No Data Persistence
Create an ArangoDB Docker container:

docker run -d --name person-db -p 8529:8529 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=rootPassword arangodb/arangodb:3.6.3



Ensure the image is pulled ( docker images ) and the container is running ( docker ps -a ):

Login to the database (username: root, password: rootPassword), and create


some data for testing purposes.

ArangoDB is now deployed as a Docker container, and is able to serve requests on


port 8529. However, if the container fails, the data will disappear. To verify,
completely remove the container, and then start a new one, simulating a total failure:

# Remove the container


docker rm -f person-db
# Ensure that the container is totally removed
docker ps -a
# Recreate the container with the same command used
docker run -d --name person-db -p 8529:8529 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=rootPassword arangodb/arangodb:3.6.3

Logging back into the management console clearly shows that all the databases,
collections, and data entered are now missing.



Data Persistence
Create an ArangoDB container with a named volume:

# Delete the container


docker rm -f person-db
# Make sure the container is deleted
docker ps -a
# Recreate the container with a named volume
docker run -d --name person-db -p 8529:8529 -v arango-volume:/var/lib/arangodb3 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=rootPassword arangodb/arangodb:3.6.3

Examine the Volume:

# Check if the container is successfully up
docker ps -a
# List all the available volumes. The picture clearly shows the creation of the arango-volume volume
docker volume ls
# Navigate inside the volume directory (sudo permissions are needed). Clearly, the volume is created inside the directory.
sudo ls -lah /var/lib/docker/volumes
# Navigate inside the volume directory. Each volume is created in such a format: /var/lib/docker/volumes/<volume>/_data/
sudo ls -lah /var/lib/docker/volumes/arango-volume
# Inspect what's inside the _data directory. Evidently, it is Arango's data
sudo ls -lah /var/lib/docker/volumes/arango-volume/_data
# Exec into the container. The same data that was found on the host volume is present in the container
docker exec -it person-db sh
# Once inside the container, list the content of the data directory
ls -lah /var/lib/arangodb3

Simulate a Failure, and ensure data persistence:

Create temporary data (e.g., database, collection, data)

Remove the container: docker rm -f person-db

Recreate the container: docker run -d --name person-db -p 8529:8529 -v arango-volume:/var/lib/arangodb3 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=rootPassword arangodb/arangodb:3.6.3

Log back into the management console. Unlike the previous attempt, the data created still exists.

NK-backend Service Deployment


Clone the repository:

# Clone the repository


git clone https://github.com/devops-beyond-limits/nk-backend-service.git
# Navigate to the root directory
cd nk-backend-service

Modify the Dockerfile: nano Dockerfile

Modify the ARANGODB_HOST variable to include the machine’s public IP.

Modify the ARANGODB_PASSWORD variable to match that specified on the


ArangoDB container.

Build the Docker image: docker build -t backend-service:v-Dockerfile -f Dockerfile .

List all the available images on the server: docker images



Run a container from the backend service image: docker run -d --name backend -p 80:1337 backend-service:v-Dockerfile

Ensure that the application connected to the database through the logs: docker logs backend



Hi and welcome to this lecture entitled DevOps What It Is and What It Isn't.
DevOps, SysOps, DevSecOps, and CloudOps are all catchy and trendy buzzwords that are circulating
all over the world. A simple search for such keywords will return thousands of people and
companies applying them one way or the other.
Unfortunately, as with most trends, numerous definitions and variations arise, confusing the
general public, creating irrelevant job positions and career paths, and leading to inefficient
software development lifecycles, further complicating everything.
In this lecture, we try to define what DevOps really is by discussing the evolution of the software
industry, some of the most popular software delivery models, and finally what DevOps is and
what DevOps engineers are. A lot of things changed in the past years, especially in software
related solutions, which can be summarized in these points.
I'm going to read them as they are since they are self-descriptive. Today, almost everything is
digitized. Software companies are everywhere. Microservices are now a thing and Kubernetes is
everywhere. Cloud providers now have hundreds of managed services as opposed to having a
few compute services a few years ago. Continuous delivery tools allow us to easily automate
everything. Infrastructure can now be created and managed through code. Multi-cloud solutions
are becoming popular.
Everyone wants to become a DevOps engineer and all companies want to apply DevOps.
However, one thing really did not change: there is no clear definition of DevOps. DevOps is
nothing more than a software delivery model embracing today's available technologies and
aiming to enhance the software development lifecycle. To better understand DevOps,
it is important to understand the preceding models, namely the Waterfall and Agile models. The
Waterfall model is one of the oldest software delivery models, introduced in the seventies, long
before I was born. It divides the software development lifecycle into predefined sequential
phases, each performing a specific activity that must be fully complete before the next one can
begin, with no overlap between them.
For example, business analysts must complete all the application requirements and design. The
developers will then use the completed documents to develop all of the application. Once done,
the QA engineers will fully test the code before it is finally deployed by the operations team. With
the current technological advancements and capabilities, this model becomes cumbersome and
inefficient.
In fact, the model employs rigid processes, discouraging changes. It is also difficult to measure
progress due to the siloed mode of work. And finally, the deployments are slow and complex.
The Agile model, formally launched in the early 2000s, provides a more flexible approach to
delivering software. Unlike the Waterfall model, Agile promotes continuous iteration of design,
development and testing throughout the software development lifecycle of the project,
breaking down silos between the different phases and shortening release cycles from months to
weeks in what is called sprints.
As the name states, the model increases agility through continuous planning, improvement,
team collaboration, development and delivery, and response to change.
However, the operations teams are left out, given that infrastructure and operations did not
require the same agility at that time. The birth of cloud computing, which is the on-demand
delivery of IT resources, revolutionized software delivery. Before, software development and
delivery required an iterative approach while the infrastructure remained rigid. With the
adoption of the cloud, the need for owning and maintaining physical data centers was replaced
by renting them out from cloud providers using different flexible payment models, for example
the pay-as-you-go model. Cloud computing came with several benefits, including but not limited
to: agility, by providing ease of access to a wide range of compute resources on demand,
allowing for the creation of complex resources in minutes; elasticity, as resource utilization is no
longer a problem, especially with the ability to quickly modify the compute resources based on
the varying needs; and finally, cost saving, as the pay-as-you-go model and the elasticity of the
resources permit the users to continuously optimize the cost of the compute resources.
With the adoption of cloud computing in software delivery, development and operations teams can no longer be siloed, as is the case with the Agile model. As a matter of fact, the development and management of the infrastructure must now align with that of the application itself. In light of the above, DevOps is a bouquet of philosophies, sets of tools, and practices that aim to decrease the cost, time, and complexity of delivering software applications by unifying both the software development and infrastructure management processes. DevOps aims to automate as many processes as possible to reliably and efficiently create and manage infrastructure resources, release software changes, perform necessary tests (for example unit, integration, and stress tests), automatically spin up new environments seamlessly, enhance system security, ensure scalability, and improve collaboration.
Having said this, in my opinion, DevSecOps and all the other -Ops variations can all be replaced by the term DevOps. Clearly, DevOps is nothing more than a set of philosophies and best practices to enhance software delivery using today's existing technologies. Therefore, DevOps is not the deployment of software on the cloud using the Agile approach, although much of today's understanding confuses this with DevOps. It is also not creating software using the microservices approach. It is also not using Infrastructure as Code tools with no clear purpose. And finally, it is not the adoption of unneeded automation tools in general.
Many entities attempting to apply DevOps might fall for the misconceptions listed above and unknowingly still apply the Agile model, but on the cloud. The inability to truly define DevOps resulted in the creation of a lot of inefficient and weird job positions that might not necessarily contribute to the software development lifecycle. Worse, this can lead to the further deterioration of the quality of the application and the lifecycle as a whole. In light of the above, DevOps engineers are not engineers who create cloud infrastructure; those are site reliability engineers. They are also not Kubernetes gurus; those are just Kubernetes gurus, not DevOps engineers. And finally, they are not cloud enthusiasts. In brief, a DevOps engineer is someone with enough skillset to bridge the gap between the development and operations teams by creating the required infrastructure, deploying the application, providing continuous delivery mechanisms, and automating all the processes previously done manually by the different departments, for example development, testing, security, etc. DevOps engineers must have a strong background in system administration and software development. In conclusion, DevOps is still a confusing term for most of the tech industry. Several definitions have been created without clear understanding and straightforward value. DevOps is nothing more than a culture that further enhances the software delivery lifecycle. Becoming a successful DevOps engineer requires you to have strong knowledge in both development and operations.
Hi and welcome to this lecture entitled Introduction to Web Applications. Currently, thousands of applications are deployed and managed every day on dynamically created network resources. Technology advancement allowed for most of these operations to be carried out quickly and efficiently through processes and automation tools. Fullstack and DevOps engineers are two of the main pillars of realizing software delivery. Computers in the sixties were the size of a room, as opposed to today. Data exchange between computers was not a walk in the park. In fact, to access data stored on a computer, one had to either physically travel to the computer site or have the magnetic tapes holding the data shipped through the traditional mail system. Imagine the hassle of doing that.
However, in order to appreciate the present, it is essential to go back to the beginning and
understand what is everything and why. In this session, we will go back in history and
understand what the Internet, the worldwide web and
client server architecture actually are and understand the basics of name resolution and load
balancing.
Four demos, as can be seen in the slides, will be performed to apply the knowledge learned in this lecture. Let us start with what is everything and why. The Internet is a global network.
It's a network of networks that allows computers and other electronic devices to communicate
and share information with each other. Initially used by very few academic and research
institutes, its use and importance increased over time. The TCP/IP protocols standardized the connection of computer systems to the Internet, leading to the Internet being adopted by the wider public.
The worldwide web, also known as the Web, constitutes an interconnected repository of
documents accessible through the Internet using a browser. The Internet and the worldwide
web are not the same things. The former provides connectivity between computer systems
across the globe. The latter is an application built on top of the Internet. The Web, as we know
today, is composed of several components, the most important of which are the HTTP protocol,
which standardizes data exchange between clients and web servers; Uniform Resource Locators and Uniform Resource Identifiers to access unique documents; and the Hypertext Markup Language for building web documents. Finally, a web application is, as the name states, an
application that is served through the Internet and consumed by a client through a web
browser. Web applications are platform agnostic, meaning they can run on any machine using a
browser. Examples of popular web applications include, but are not limited to Google,
Facebook, YouTube, and Gmail. Having learned the basics of the Internet, the World Wide Web, and web applications, this diagram depicts how everything is connected together to form what we enjoy today and collectively call the Internet. The Internet is the tool that allows all the other networks to be interconnected. As can be seen, public clouds hosting applications such as email and web servers, private data centers, personal computers, and mobile devices are all interconnected through the Internet. The client-server architecture is a computing model to serve and consume web resources. In this model, clients, for example mobiles, laptops, IoT devices, etc., consume resources from applications hosted on remote servers. In order for the client to fetch the web application, an HTTP request must be made to the server using its IP address. The web server listens and serves the application through a port.
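As a small illustration, a client can fetch a page directly from a server's IP address and port, for example with curl; the address below is a reserved documentation IP used as a placeholder, not one of the demo machines:

    # Ask the web server at this IP address, listening on port 80, for its default page
    curl http://203.0.113.10:80/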
Evidently, it is impossible and impractical for the user to communicate with the servers using their IP addresses. In addition to the technical complexities that may arise, which are out of the scope of this session, imagine the experience of having to communicate daily with dozens of applications using their IP addresses. To solve this problem, domain resolution, which is the mapping of domain names to IP addresses, was invented. With domain resolution, websites are accessed using domain names, which are translated by specialized servers to their corresponding IP addresses. The picture here shows the IP addresses of Google and Facebook. A domain name system is thus a database system containing all the registered domain names and their corresponding IP addresses. When a user types google.com in their browser, for example, the request performs the following trajectory (note that the diagram is oversimplified). First, the request leaves the client all the way to the DNS server. The DNS server returns the IP address of google.com to the browser. Then another HTTP request is made to the IP address. Finally, the Google web server returns to the client the webpage to be rendered. Modern-day web applications can serve thousands or even millions of requests per second. Moreover, due to the advancement of the services, online requests may be resource intensive, for example streaming, transferring large data, persistent connections, etc. To cater to such requests effectively, one server, no matter how large, is neither a guarantee nor a best practice. A better alternative would be to horizontally scale, that is, to add multiple replicas of the server running the same application and distribute the load across them. A load balancer is a network appliance, be it hardware or software, that sits between the application servers and the clients, acting as an entry point to the system and distributing traffic based on customized rules. Load balancers come with great advantages. One of them is performance, through distributing the load across multiple servers; availability, through distributing traffic across healthy replicas, in addition to solving single point of failure issues; scalability, through the ability to add and remove servers upon need; and finally, security, through filtering traffic based on custom-defined rules.
Load balancers can route traffic to the backend servers using different algorithms. First, we have round robin, which is a simple solution to distribute the traffic to all servers by rotation; this method does not take into account the server performance. Least connections distributes traffic to the server with the least number of connections. Least response time distributes traffic to the server with the fastest response times to health checks. And finally, least bandwidth distributes traffic to the server with the least amount of bandwidth.
Health checks are a mechanism offered by most load balancers to continuously monitor the health of the backend servers and distribute the traffic accordingly. With the proper setup, a load balancer may detect a failed node and avoid distributing traffic to it until it is repaired. Some of the important parameters to configure when setting up health checks are: the protocol to use; the port to perform the request on; the health check path; the health check interval, which is the amount of time between health checks for a specific target; the unhealthy threshold count, which is the number of consecutive failed attempts before considering a target as unhealthy; and finally, the healthy threshold count, which is the number of consecutive successful attempts before considering the target as healthy.
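Conceptually, a health check is little more than a periodic request combined with a failure counter. The following sketch illustrates the parameters above with a simple script; the target URL, interval, and threshold values are illustrative assumptions, not the settings of any particular load balancer:

    #!/bin/bash
    # Minimal health-checker sketch: probe a target every INTERVAL seconds and
    # report it unhealthy after UNHEALTHY_THRESHOLD consecutive failures.
    TARGET="http://10.0.1.10:80/health"   # protocol, port, and path to check (placeholder)
    INTERVAL=10                           # health check interval, in seconds
    UNHEALTHY_THRESHOLD=3                 # consecutive failures before marking unhealthy
    failures=0
    while true; do
        if curl --silent --fail --max-time 5 "$TARGET" > /dev/null; then
            failures=0
            echo "target is healthy"
        else
            failures=$((failures + 1))
            [ "$failures" -ge "$UNHEALTHY_THRESHOLD" ] && echo "target is unhealthy"
        fi
        sleep "$INTERVAL"
    done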
Hi and welcome to this demo entitled Deploy and Serve a Static Website. In this demo we are going to deploy a simple HTML website on an EC2 Ubuntu machine. To do so, we are going to create a simple HTML page, create the networking and compute resources required on AWS, then install the Apache2 web server on the machine, deploy the HTML page, and finally configure the web server to serve the application on port 80.
So the first thing that we should do is actually create the key pair so we can SSH into the machine. I will click on Create key pair, I will give it a name, and I will leave everything else as default. Now the key pair is downloaded on my machine, but in order to be able to SSH, I have to move it to a hidden directory. Let me get it from the Downloads folder and place it in the hidden keys directory that I had already created. Now it is over here, and I will change the permissions to read only. Now I also have to create a security group. So: create security group. I will call it AWS demo, and I will also set the description to AWS demo. And I will add two inbound rules: one to allow me to SSH from anywhere and one to allow requests on port 80 from anywhere as well. And I will create the security group.
Finally, I will create an Ubuntu instance. I will give it a name: web server. I will use the Ubuntu 20.04 AMI and the t3.medium instance type, but you can of course use any type that you want. I will use the demo key pair and the demo security group, which allows inbound rules on ports 22 and 80 from anywhere. I will add a little bit more storage and then I will launch the instance. So now the instance is being initialized and it will take a little bit of time, so I will pause the video and continue after the instance is ready. As you can see now, the instance is ready, so I will SSH inside it using its public IP. Let me get the public IP. So I SSH as ubuntu at the public IP, and I will use the AWS demo key that I had created. So now I am inside the machine.
Now I want to actually install Apache. To do so, I will update the package index to reflect the latest changes in the packages. This will take a few seconds, and now I will install Apache, which will also take a few seconds. Once installed, I can test that the configuration is working by hitting the IP of the machine on port 80. When done, Apache should return its default page. I will copy the public IP and paste it in the browser. Apache returned its default page, so now we are sure that all the configuration we have done so far is correct, since we can hit the machine on port 80. So now it is time to actually deploy the custom HTML page on the machine. First, I will create a directory in which I will deploy the application; my-first-app is the name of my directory. I will change its ownership from root to www-data, and then I will create inside this directory an index.html file and paste the HTML document inside it. So there you go.
It's there. It has a title called My First Application and a small paragraph that says, I have no idea
what I'm doing.
Let me save it. Now that the application is deployed, I will also create a directory for the logs. This will be used by Apache to log the requests and the errors, and I will also change its ownership from root to www-data.
Now the final step is to actually create a virtual host. The virtual host will instruct Apache on what to do when it receives requests. I will create a file called my-first-app.com.conf inside the /etc/apache2/sites-available directory, and I will paste this virtual host, which tells Apache: listen on port 80, and whenever you receive a request on port 80, load to the client the document whose root is found in this directory. In addition to that, write the error logs in this directory and the request logs in this directory. So I will save. Now I have to enable this configuration file using the a2ensite command, and I will disable the default configuration. I will also check that the syntax of my configuration is correct; it says Syntax OK. And finally, I will restart Apache.
Now, if I hit the machine using its public IP on port 80, the default page will no longer be returned. Instead, it will return the custom HTML page that I had already deployed.
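The steps of this demo can be condensed into the following sketch; the directory paths and the virtual host file name follow the pattern used in the demo but are illustrative rather than the exact files shown on screen:

    # Install Apache on Ubuntu
    sudo apt update
    sudo apt install -y apache2

    # Deploy the page and the log directory, and give Apache ownership of both
    sudo mkdir -p /var/www/my-first-app /var/log/my-first-app
    echo '<html><body><h1>My First Application</h1></body></html>' | sudo tee /var/www/my-first-app/index.html
    sudo chown -R www-data:www-data /var/www/my-first-app /var/log/my-first-app

    # Virtual host: serve the page on port 80 and log to the custom directories
    sudo tee /etc/apache2/sites-available/my-first-app.com.conf > /dev/null <<'EOF'
    <VirtualHost *:80>
        DocumentRoot /var/www/my-first-app
        ErrorLog /var/log/my-first-app/error.log
        CustomLog /var/log/my-first-app/access.log combined
    </VirtualHost>
    EOF

    # Enable the new site, disable the default one, check the syntax, and restart
    sudo a2ensite my-first-app.com.conf
    sudo a2dissite 000-default.conf
    sudo apache2ctl configtest
    sudo systemctl restart apache2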
Hello and welcome to this demo entitled Deploy and Serve Two Static Websites. In the previous demo, we installed Apache and configured it to listen and serve a static website on port 80. In this demo, on the same machine, we are going to create a second HTML page, deploy it, and then configure the web server to serve the application on port 81.
First, let us make sure that the first application is still running.
It is still running. Now I will SSH again into the machine. I am inside the machine, and I will start by creating the directory in which I will deploy the code. Then I will change its ownership from root to www-data. Then I will create the index.html file and paste the second application code, which is almost identical to the first one. Then I will create the log directory and change its ownership as well. Finally, I will create another virtual host to tell Apache what to do when it receives requests on port 81. This virtual host instructs Apache to listen on port 81, and whenever it receives a request on port 81, it will load the documents that are present in /var/www/my-second-app. I also told Apache where to place the error and request logs. So finally I will save, I will enable the configuration, I will test the syntax, and finally I will restart Apache.
Now I will try to hit the machine using its public IP on port 81. Unfortunately, it is not working. But this is normal, because the security group attached to this machine does not allow requests on port 81. So let us change this. Let's go back to the security group, modify the inbound rules, and add another rule: Custom TCP, port 81, from anywhere. Save. Now let's test again. I will get the public IP, paste it with port 81, and it still won't work. But this is also normal, because by default Apache does not listen on port 81, and we have to enable it for it to work. So let us do that. We will edit the ports.conf file inside /etc/apache2 and add a line that says: listen on port 81. I will save and restart Apache. And now I will test again. There you go, it's working. So now I have an application running on port 80 and another one running on port 81 on the same server.
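As a rough sketch, the additions made in this demo boil down to a second virtual host and an extra Listen directive; paths and file names are illustrative:

    # Second site, served on port 81
    sudo mkdir -p /var/www/my-second-app /var/log/my-second-app
    sudo chown -R www-data:www-data /var/www/my-second-app /var/log/my-second-app

    sudo tee /etc/apache2/sites-available/my-second-app.com.conf > /dev/null <<'EOF'
    <VirtualHost *:81>
        DocumentRoot /var/www/my-second-app
        ErrorLog /var/log/my-second-app/error.log
        CustomLog /var/log/my-second-app/access.log combined
    </VirtualHost>
    EOF

    # Apache only binds to the ports listed in ports.conf, so add port 81
    echo 'Listen 81' | sudo tee -a /etc/apache2/ports.conf

    sudo a2ensite my-second-app.com.conf
    sudo apache2ctl configtest
    sudo systemctl restart apache2

Remember to also open port 81 in the instance's security group, as done in the demo.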

Welcome to this demo entitled Add Domain Names to the Application. In the previous demos, we deployed two HTML applications on one EC2 machine. One application is served on port 80, while the other one is served on port 81. Instead of serving the applications using the IP address of the machine and on different ports, in this demo we will use domain names, and both applications will be served on port 80. So the first application will be served using the domain name my-first-app.com, and the second application will be served using the domain name my-second-app.com. I am logged into my machine. So first of all, I will modify the virtual host of the first application. Since it is already listening on port 80, we will not modify the port, but I will add the ServerName directive, which is my-first-app.com. So now this virtual host will listen on port 80 and will serve requests coming with the server name my-first-app.com.
Let me save this, and now I will modify the virtual host of my second app. So let's do this. As you can see, it is listening on port 81; I will change this to listen on port 80. And I will add the ServerName directive so that it serves requests using the server name my-second-app.com. And now I will restart Apache. Now, the last thing to do: since we are using fake domain names, I need to instruct my machine on how to translate the domain names into the corresponding IP address. To do so, I have to modify the hosts file on my machine. I will open a new tab and modify the hosts file. I need to add the IP address of the machine and the domain names. This line instructs my machine that whenever I put either my-first-app.com or my-second-app.com in my browser, it should translate it to this IP address. Let me go to the browser and request my-first-app.com; as you can see, the first application is loaded. Now let me try my-second-app.com. I have my first app loaded using my-first-app.com, and I have my second app loaded using my-second-app.com.
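A condensed sketch of this demo's changes follows; the domain names are the fake ones used in the demo, and the IP address is a placeholder for the instance's public IP:

    # Both virtual hosts now listen on port 80 and are distinguished by ServerName
    sudo tee /etc/apache2/sites-available/my-first-app.com.conf > /dev/null <<'EOF'
    <VirtualHost *:80>
        ServerName my-first-app.com
        DocumentRoot /var/www/my-first-app
    </VirtualHost>
    EOF

    sudo tee /etc/apache2/sites-available/my-second-app.com.conf > /dev/null <<'EOF'
    <VirtualHost *:80>
        ServerName my-second-app.com
        DocumentRoot /var/www/my-second-app
    </VirtualHost>
    EOF

    sudo systemctl restart apache2

    # On the client machine, map the fake domains to the server's public IP
    # (203.0.113.10 is a placeholder)
    echo '203.0.113.10 my-first-app.com my-second-app.com' | sudo tee -a /etc/hosts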
Hi and welcome to this lecture entitled Web Application Concepts. So far we have learned to
deploy and serve websites using Apache web servers. Moreover, we leveraged the power of load balancing in order to scale the website when needed. With great innovation comes great complexity. With the Internet and technological advancement, websites are the simplest forms of applications created and deployed.
Web applications, on the other hand, are more complicated to design, develop, deploy and
maintain. This lecture provides a deep dive into web applications with the aim to understand
their nature, importance, characteristics and challenges.
In this video, I will discuss the differences between websites and web applications, the HTTP
protocol, and the web application layers, components, and architecture.
Websites versus Web applications. By definition, websites are a set of interconnected
documents, images, videos, or any other piece of information developed using HTML, CSS and
JavaScript and deployed and served using one of the ways we introduced previously.
User interaction with websites is limited to the user fetching the website's information only. Moreover, websites are usually stateless, and thus requests from different users yield the same results at all times. Examples of websites include but are not limited to company websites, blogs, news websites, etc. Web applications, on the other hand, are much more complex than websites and offer more functionalities to the user. Google, Facebook, online gaming, and e-commerce platforms are all examples of web applications. Such applications allow the user to interact with them in
different ways, such as creating accounts, playing games, buying and selling products, etc..
Evidently, in order to provide such complex functionalities, the architecture of the web
application can prove to be much more complex than that of a website.
As already discussed in its simplest form, most web applications follow the client server
architecture, in which a client application is loaded from the server to the browser, allowing
communication with the web application on the server through the internet using the HTTP
protocol. Communication between application components happens through the application
programming interface or API. Now let us discuss the HTTP protocol.
By definition, the Hypertext Transfer Protocol or HTTP is a protocol designed to load web pages
deployed and exposed on the Internet.
The protocol is designed to standardize the exchange of information between connected
devices. To better understand the protocol.
Consider this diagram. A typical flow begins when the client machine sends an HTTP request to the server. The server takes the appropriate action and responds with an HTTP response, which will be processed by the client.
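As an illustration, this request-response exchange can be observed from the command line with a verbose curl call; example.com is a reserved example domain used here as a placeholder:

    # -v prints the outgoing request line and headers, followed by the
    # response status line, headers, and body returned by the server
    curl -v http://example.com/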
Communication over HTTP is done using HTTP messages.
It can either be an HTTP request from the client to the server or an HTTP response from the
server to the client. Usually HTTP requests and responses are composed of the same structure
and components with some minor
differences. The first component to discuss is the HTTP method, which is used to indicate the desired action. Some of the methods include GET, which is used to fetch data from the server; POST, used to create new data on the server; PUT, used to modify existing data on the server; and DELETE, used to delete data from the server. The HTTP version is used to indicate the version of the HTTP protocol used. The uniform resource locator, or URL, constitutes the complete address of a resource on the web. It is unique per resource.
A URL can be composed of multiple fields. Dissecting the URL in this picture results in the following: the protocol is http; the domain name is mywebapp.com; the application port is 80; the path is /some/api; and the rest are the query string parameters. HTTP headers include information about the request or response stored in key-value pairs. The HTTP headers provide more context and information to the HTTP request. For example, headers specify the accepted languages, preferred media format, etc. The HTTP body contains the information sent by the client to the server or vice versa. Finally, the HTTP status code dictates the status of the request. Some of the most widely used status code blocks are 2xx, to indicate a successful completion of the request; 4xx, to indicate a logical error while serving the request, for example 404 Not Found; and 5xx, to indicate a server error while serving the request, for example a 504 when the server is unable to reach the database. An application is
divided into three layers. More layers can be added to the application design, but for simplicity
purposes, this lecture explains only three. The first one is the presentation layer, also known as
the client side. The client applications are designed for the user to interact and manipulate the
application. Front end applications are developed using many technologies, for example AngularJS, React, Vue.js, etc. The application or business logic layer is part of the application server side. It accepts and processes the user requests and interacts with the database for data modification. Such applications can be developed using Node.js, Python, PHP, Java, etc. And finally we have the database layer. This is where all the data resides and is persisted. Usually a website is composed of simple code developed entirely using HTML, CSS, and JavaScript alone.
On the other hand, web applications are more complex and are made of different components.
In its simplest form, a web application is composed of a front end application which represents
the client side and is used to allow the user to interact with the application; the backend application, which represents the server side and is used to serve the user requests; and a database to store and load the application and user data. The components above are essential to create web applications. However, the latter may require additional components to serve more complex functionalities in a more optimized way, for example an in-memory database for caching, a message bus for asynchronous communication, a content delivery network for serving and caching static content, a workflow management platform for organizing processes, and many other components. Clearly, as the application's use case grows in size and complexity, so will the complexity of designing it. Therefore, a proper way to architect and organize the
application is needed. Here is a diagram representing the architecture of an eCommerce web
application that I designed and deployed on AWS a few years ago. You can clearly notice the
amount of interconnected components required to make it work.
Web application architecture mainly depicts the way the different components are built and
interact with each other. As a matter of fact, as the application grows in size, a well tailored
architecture is essential to ensure the proper functioning of the application. There exist multiple
architectures when designing web applications. Nonetheless, two of the most prominent ones
are monolithic and microservices. To better understand the difference between microservices
and monolithic applications, consider the example of building an e-commerce platform.
Typically, such a platform contains several functionalities, namely catalog to serve and display
available items. Customer to handle customer related functionalities.
Order to manage orders happening on the platform and payment to allow online payment
functionalities.
From a monolith perspective, all the aforementioned functionalities are designed using one technology, for example Java, PHP, or Node.js, having one large code base, and deployed entirely on one server. Each functionality is designed as a separate module, and the modules interact with one another by requiring each other.
On the other hand, if designed using the microservices approach, each functionality turns into a separate and independent service, most probably with its own database too, that is developed,
deployed and managed on its own. Evidently, both approaches possess numerous advantages
as well as disadvantages. Monolithic applications indeed provide several attractive advantages,
especially the ease of management, since they are very easy to develop, deploy and test.
Unfortunately, the aforementioned advantages begin disappearing as the application grows in size. As a matter of fact, monolithic applications come with many great disadvantages,
especially at scale. One of them is slower development lifecycle. As the size of the application
grows, so does the amount of time and complexity required to build, test and deploy the
application at every change. And with scale, changes are more frequent. Evidently the bigger
the application, the slower it will become to continuously develop and publish the application.
Codependency: no matter how organized one can be, the aggregation of the code in one place and the communication between modules become inefficient and inconsistent in the end. Performance issues: as the functionalities grow in size and complexity, maintaining a
proper performance becomes an issue. In fact, the different services and capabilities of the
application will require further capabilities and resources from the hosting server. Moreover,
having different functionalities may require different specialized infrastructure for different
types of functionalities, which may prove to be difficult to achieve given the centralized nature of the monolith, since everything is deployed on one server. Scalability issues: scaling a monolith becomes inefficient at scale. Typically, different parts of the application may require different scalability rules. For example, considering the e-commerce application, assume most of the traffic is directed towards the catalog module, whereas all the remaining ones are receiving minimal traffic. In the case of a monolith, all of the application will have to be scaled out using large servers, although only one small part of the application requires it. Infrastructure costs:
Provisioning resources to operate the monolith may prove to generate unwanted costs. Code
ownership and team division problems: due to the interconnected nature of the code in the monolith, onboarding members on the team and clearly dividing responsibilities between them
becomes problematic. Technology lock-in: being centralized in one code repository, all of the application with its different functionalities is developed using a specific set of technologies. However, this may prove to be costly, as each technology has advantages and disadvantages. Being locked to one stack not only deprives the application of the advantages of the available technologies but may affect its performance as well. Technical debt: technology is advancing rapidly, and platform changes with enhancements are being rolled out quickly. As the monolith grows, upgrading it becomes costly, which may affect the performance and the continuity of the application. And finally, the single point of failure: a failure in one method, service, or endpoint may lead to the failure of the whole application. Clearly, centralized architectures, especially
monolithic applications, come with great advantages, especially for simple applications on small
scales. It is intuitive to design, develop, deploy and maintain them. Unfortunately, the above
disappears quickly as the application and the maintaining team grows. Microservices, on the
other hand, alleviate most of the monolith disadvantages due to their extremely distributed
nature. The first advantage is fault tolerance due to their distributed and independent nature.
Failure in one microservice should not bring all of the application down. Therefore, single points
of failure and downtimes are minimized. High scalability: microservices allow different scaling mechanisms and rules for different microservices, depending on the nature of the traffic and resource consumption of each of them. This allows for more optimized resource utilization and scalability rules. Ease of maintenance: even if the application grows in size and complexity, teams can be organized to work independently on different microservices, with proper communication between teams. The main thing is to agree upon all the exposed end
points and communication between services. For instance, the catalog service and the order
service can be developed and maintained separately by completely different teams. The teams
must only collaborate on exposing and using the end points of each service, assuming that one
service will have to communicate with the other. Ease of deployment. It is easier and faster to
build and package a small microservice, especially with today's powerful automation tools, and
serve them on smaller commodity servers. Technological freedom. Each microservice can be
designed, developed and maintained on its own, using the most convenient set of tools and
technologies and regardless of the other microservices forming the application. This provides a
great advantage by allowing these teams to leverage the existing choices and technologies and
use those that fit the purpose of each functionality. For example, the catalog service may be
developed using Node.js while the customer service is developed using Golang. Fast development lifecycles: developers working on specific, independent, and small-scale services allow the services and the overall application to be maintained and updated with fast
development and release cycles. Despite their many advantages, microservices come with
multiple disadvantages, which must be considered and addressed by teams willing to develop
software using this architecture. The first one is complex infrastructure. The distributed nature
of microservices dictates a more complex infrastructure to be created. As a matter of fact, as
the number of services grows, each requiring its own setup, configurations, and scalability rules, a proper infrastructure capable of serving these different needs is required.
The need for DevOps: microservices are decoupled from each other. However, microservices and
their underlying infrastructures are highly coupled. Deploying and managing microservices is
quite different than that of a monolithic application. Developers and operations teams must
collaborate efficiently to ensure that the application and infrastructure being developed work
well together. This entails both teams to understand each other, learn additional skills, and
continuously coordinate. Increased network calls. Microservices are deployed on different
servers, different subnets, and in different physical locations. Moreover, a simple request coming into a microservice application generally must traverse multiple services and components, each reached through a network call. This greatly increases the need for a robust infrastructure. And finally, complex end-to-end testing: while unit testing may be a great advantage with microservices, end-to-end or integration testing may not be as pleasant. As a matter of fact, as the number of services and components grows, automation tools and robust testing processes are required to continuously create testbeds and successful test scenarios at every release, which may prove to be complex and costly at the same time.
Hi and welcome to this demo entitled Enable Load Balancing. To better understand load balancing, we are going to create three Ubuntu EC2 machines on AWS and we will create two simple HTML pages. We will deploy the first one on one VM, we will deploy the second one on the second VM, and the third VM will be configured as a load balancer using Apache; we will instruct it to distribute the load on the first two virtual machines. Now, to save time, I already did some of the work. So on the first server, which is app one, I installed Apache and deployed the first application. I did the same thing on app two, and I installed Apache on the third virtual machine. The configuration that I have done is working correctly: I will hit the first server and the first application is returned. I will now hit the IP of the second machine, and the second application is returned. Now I will hit the load balancer machine, which has only Apache installed, and it will return the default Apache page. So far, everything I have done is working correctly.
Now what I will do in this demo is configure Apache on the load balancer machine to actually work as a load balancer. So the first step is that I have to enable a few modules for Apache to work as a load balancer. I will do this on the third machine. Now I will configure a virtual host, which is going to be a little bit different than the ones we used before. Let me paste it and I will explain what it means. I am instructing Apache on this third machine that it will listen on port 80, and whenever it receives a request, it is going to route it either to the first member, which is the first virtual machine, or to the second member, which is the second virtual machine. Now all I have to change is to put the IPs of each machine instead of these placeholders. So I will get the IP of the first machine and replace it here, and I will do the same for the IP of the second VM. I will save the configuration. Now I will disable the default virtual host and enable the load balancer virtual host. I will test my syntax. And finally, I will restart Apache. So now, if I hit the load balancer machine, it is supposed to route me one time to the first virtual machine and load the first application, and the other time to the second virtual machine and load the second application. Let's test it: I will get the IP of the load balancer and place it in the browser. It loaded the first application. Now if I hit it again, it loads the second application, and if I repeat the steps, it will keep on distributing the load across the two virtual machines. And this is load balancing.
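The configuration performed in this demo corresponds roughly to the following sketch; the module list reflects Apache's standard proxy and balancer modules, and the member IP addresses are placeholders for the two application VMs:

    # Enable the modules Apache needs in order to act as a load balancer
    sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests

    # Balancer virtual host: distribute requests arriving on port 80
    # across the two application servers
    sudo tee /etc/apache2/sites-available/load-balancer.conf > /dev/null <<'EOF'
    <VirtualHost *:80>
        <Proxy balancer://myapps>
            BalancerMember http://10.0.1.10:80
            BalancerMember http://10.0.1.11:80
            ProxySet lbmethod=byrequests
        </Proxy>
        ProxyPass        / balancer://myapps/
        ProxyPassReverse / balancer://myapps/
    </VirtualHost>
    EOF

    sudo a2dissite 000-default.conf
    sudo a2ensite load-balancer.conf
    sudo apache2ctl configtest
    sudo systemctl restart apache2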
Hi and welcome to this lecture entitled Introduction to Server Side Applications. After learning
different concepts about web applications such as the differences between websites and web
applications and web application layers, components and architecture, we are going to focus in
this video on the application server side.
In this video, we are going to discuss different database concepts, and then we will discuss the backend service which we will deploy on AWS in the next video.
A database is a software program that enables capabilities to efficiently store and query data in
a system.
Almost every system in the world includes some sort of database that manages the data. For
example, smart devices, mobile phones, etc..
Most of today's web applications rely on constantly changing data.
For instance, Facebook and Twitter are all examples of web applications with massive amounts
of data flowing in and out of the system.
Therefore, there must be a reliable way to store and manage all this data.
A database management system, or DBMS, is a software program that enables the creation and
management of databases and data models, thus providing efficient tools for managing the
data. MySQL, MongoDB, and ArangoDB are all examples of DBMSs. Database management systems are designed with multiple tools and capabilities, such as a database engine, a data query language, monitoring tools, and user management tools. An application's data may grow to become extremely complex. Moreover, the application needs to constantly store, generate, modify, and remove data from the database in a dynamic and reliable way.
Data models are designed processes to organize the way the data is managed.
Consider the e-commerce application explained in the previous lecture. Such an application must manage data related to the catalog, for example product descriptions and product prices; data related to the customer, for example name, age, and address; data related to orders generated on the system, such as order details and products purchased; and data related to payments, such as credit card information and transaction records. Clearly, each service requires different types of data to be stored. A data model allows the developers to logically structure this data into different tables and collections in a way to reliably store and fetch the required data. Different database types currently exist, each supporting a distinct data model. Of all existing database types, this lecture explains relational databases, document databases, and graph databases.
Relational databases are based on the relational data model which organizes data into tables.
Each column in a table defines an attribute with characteristics.
Each row in a table corresponds to one record and is identified by a unique ID.
For instance, a relational database can be used to store the names and ages of all students in a university. As an example, a table called students can be created with two attributes: name, of type string, and age, of type integer. Moreover, several tables can be created referencing one another. For instance, another table called courses, containing a list of available courses in the university, has two properties: name, of type string, and is_available, of type boolean. Finally, a third table called student-to-course registrations can be created to keep track of student registrations to the courses, with two columns, one corresponding to the student ID and the other corresponding to the course ID. The Structured Query Language, or SQL, is the language used to query the data from relational databases. As a matter of fact, SQL provides efficient capabilities to manage data in much more complex and large data models. Examples of relational DBMSs include MySQL, Oracle Database, and PostgreSQL.
Relational databases are best used when the data on the system is highly structured and doesn't
change frequently over time.
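As a small sketch of this university data model, the tables above could be created as follows; the SQL is MySQL-flavored and purely illustrative, assuming a local MySQL server and an existing database called university:

    # Create the example tables (mysql will prompt for the root password)
    mysql -u root -p university <<'SQL'
    CREATE TABLE students (
        id   INT PRIMARY KEY AUTO_INCREMENT,
        name VARCHAR(100),
        age  INT
    );
    CREATE TABLE courses (
        id           INT PRIMARY KEY AUTO_INCREMENT,
        name         VARCHAR(100),
        is_available BOOLEAN
    );
    -- The registration table references both students and courses
    CREATE TABLE student_course_registrations (
        student_id INT,
        course_id  INT,
        FOREIGN KEY (student_id) REFERENCES students(id),
        FOREIGN KEY (course_id)  REFERENCES courses(id)
    );
    SQL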
A document database is a type of non-relational database designed to store and query data in JSON-formatted records. Such databases provide more flexibility, especially for the developers, due to their loose model that can change dynamically over time and between records without much administrative overhead. Modeling the university example presented earlier results in three JSON documents, as can be seen in the picture. Query languages are provided by the vendor to manage the data. Document databases are best used for use cases similar to catalogs, user profiles, and new projects that need to be built with agility in mind. Examples of document databases include MongoDB, Amazon DocumentDB, and Cosmos DB. Graph databases are NoSQL databases that store data without schemas while connecting the records in each table using relationships. Data records are stored in what are known as nodes. Nodes can have relationships with one another using edges. Graph databases combine the flexibility of NoSQL databases with the power of relational databases.
The university example is depicted in this diagram.
The student node stores JSON records, each of which represents information about a student.
The courses node stores JSON records, each of which represents information about a course
the student took. Course registration is of type edge and stores JSON records related to the
registration of each student to each course.
Edges must have valid database record IDs, one for the student and another one for the course.
Graph databases offer a flexible data model as well as a robust way to manage relationships
between data points. Therefore, although graph databases can be useful in most use cases, they
are most widely used in fraud management systems, identity and access management and
recommendation engines. Examples of graph databases include ArangoDB and Neo4j. This diagram summarizes how a web application is deployed on AWS and how users interact with it.
Consider an application consisting of a front end application, backend application, and a
relational database. One way to deploy it is as follows.
The database is deployed and served using a managed database service, for example Amazon RDS. For security purposes, it is recommended not to expose the database directly to the Internet, meaning deploy it in private subnets and properly secure the access. The backend application is deployed on an EC2 server with a public IP. The front end application is deployed and served on AWS S3. All the application components reside within an AWS virtual private cloud (VPC). A typical HTTP request-response cycle is as follows: the client sends an HTTP request to the front end application.
The front end application code is returned and is loaded on the client's browser.
The client sends an API call through the front end application to the back end application. The
backend application validates and processes the request.
The backend application communicates with the database for managing the data related to the
request. And finally, the backend application sends an HTTP response containing the information requested by the client. The backend service used in this course is an open source project that serves as a basic backend service. The service is developed using Node.js and Sails.js. The service uses ArangoDB to store and manage its data. It is a RESTful service and exposes CRUD APIs to manage person records.
It contains the following APIs. Get health, which is the health endpoint; it is used to ensure that the service is up and running. Post person, which creates a person record in the database; the function performs a transaction against the database and checks if the person already exists using the email. If the person already exists, a logical error is returned; else the person record is created in the database and a success response is returned to the client. Get persons, which fetches all the existing persons in the database. And finally, delete person by ID, which deletes a person record from the database using the ID. In the next video, we will deploy the backend service on AWS.
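For orientation, consuming such APIs from a terminal could look roughly like the sketch below; the port, paths, and field names are assumptions for illustration and may differ from the actual service:

    # Health check
    curl http://<server-public-ip>:8080/health

    # Create a person record (field names are illustrative)
    curl -X POST http://<server-public-ip>:8080/person \
         -H 'Content-Type: application/json' \
         -d '{"name": "Jane Doe", "email": "jane@example.com"}'

    # Fetch all existing persons
    curl http://<server-public-ip>:8080/persons

    # Delete a person record by ID
    curl -X DELETE http://<server-public-ip>:8080/person/<id>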
Hi, and welcome to this demo entitled Backend Service Deployment. To better understand how an application is deployed and serves traffic over the Internet, in this tutorial we will deploy the backend service and its database on an EC2 machine and explore some of its APIs by communicating with them using the Postman API client. The following steps will be completed. First, we will create an AWS virtual machine. Then we will configure it with the necessary prerequisites. Afterwards, we will deploy the database and connect to it using a client. And finally, we will deploy the backend application and perform some API requests to validate the deployment. The backend service and the database will both be deployed on one virtual machine. Moreover, the backend service will be exposed on port 8080, while the database uses port 8529 for communication. Therefore, to successfully perform the deployment, we will create a security group with the following inbound rules: port 8080 to communicate with the backend application, port 8529 to communicate with the database, and port 22 to SSH to the machine. Now, to save time, I have already created the security group and the machine. As you can see here, there is a security group with the desired inbound rules and the Ubuntu 20.04 machine. So now I will SSH to the machine using its public IP and the AWS demo key. So now I am inside the machine.
The first thing to install is the ArangoDB database. Now, the official ArangoDB documentation explains all the steps required to deploy the database. So I will start by updating the package repository, then run a bunch of other commands, before updating the package repository again and finally installing the ArangoDB database, which should take a few seconds. During the installation, the installer will prompt us to enter and confirm the root user's password, so I will put the password, and I will leave the other questions as they are. Once the installation is complete, I will make sure that the service is running by checking its status. So ArangoDB is now installed and is running. Now the first thing I will attempt to do is to actually connect to it using its client, and I will not be able to communicate with the database. But this is normal, because by default ArangoDB does not allow connections from outside of the VM. To solve this issue, the ArangoDB configuration file must be edited. So let us do this. I will edit the ArangoDB configuration file and change the endpoint to allow external connections. I will save the file, restart the ArangoDB service, and now I will attempt to connect to it again. This time it should work. So now I will add the root username and
password and I will select the system database which will redirect me to the administration
console. The console provides me with different functionalities, such as the ability to create
databases, perform queries, monitor performance and logs, etc.. So reaching this point signals
the success of the deployment of the database. Now it's time to actually deploy the backend
service.
As explained previously, the backend service is a Node.js RESTful service built using Sails.js.
Moreover, the application code is stored on a public GitHub repository which everyone has
access to. So in order to fully deploy and run the back end service, some prerequisites must be
installed. First of all, I will update the package repository again, and then I need to install Git. Git is an open source tool for source code management.
Git allows teams of developers to collaborate together and track changes efficiently happening
on the code while storing the code safely in a remote repository.
In this demo we will use Git to download the backend service from the GitHub repository to the server. So first of all, I will install it, and I will make sure that it is installed using the git version command. This shows that it is installed. So now the next step is to actually install the Node.js platform. To do so, I will install Node.js and npm using the apt-get tool.
So now that Node.js is installed along with NPM, I will clone the repository on the server and I
will go inside the directory. So the application code is successfully installed on the machine. For
simplicity purposes, understanding the use of the different files is out of scope. Rather, in this
demo we focus on the system requirements for the application to run. Being a Node.js application, the service requires some packages to be installed. Therefore, to do so, we will install the packages using the npm install command. Finally, the
application depends on some environment variables. Environment variables are very similar to
global variables in any programming language. Once a global variable is defined in a file, it can
be accessed from anywhere inside that file.
Similarly, environment variables are system wide variables in Linux.
Once an environment variable is exported, it can be used by any process on the machine. So let us start by exporting the environment variables of the application. The first one is the port; it tells the application to use port 8080.
Next, we have five environment variables related to the database credentials, such as the host, port, username, password, and finally the database name, which is persons.
And finally, the application requires five more environment variables that we don't really have
to care about, but they are necessary for the application to run.
Finally, I will run the application using the node app.js command.
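Putting the deployment steps of this demo together, a rough sketch looks as follows; the repository URL and the environment variable names are placeholders based on the description, not necessarily the exact ones used by the service:

    # Prerequisites
    sudo apt-get update
    sudo apt-get install -y git nodejs npm

    # Fetch the application code (placeholder repository URL)
    git clone https://github.com/<account>/<backend-service>.git
    cd <backend-service>

    # Install the Node.js packages the service depends on
    npm install

    # Environment variables consumed by the service (names and values are illustrative)
    export PORT=8080
    export DB_HOST=localhost
    export DB_PORT=8529
    export DB_USERNAME=root
    export DB_PASSWORD='<the ArangoDB root password>'
    export DB_NAME=persons

    # Run the service
    node app.js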
The application logs clearly show that the application successfully connected to the database
and created a database called persons, as well as a collection with the same name. So to validate the creation of the database, I will log back into the database administration console.
And as you can see, there is a persons database, which I will select, and there is a persons collection, which is empty. This was created by the application. So now I will validate that the deployment is successful and that the application is reachable from my machine using the health API. So I will use the Postman client which I have downloaded over here. I will put the machine IP, port 8080, and the health path. We have a successful response indicating that the backend service is running. So now, to test the available APIs, we need to download the Postman collection, which you can find in the repository, and import it into the Postman API client.
Now I have already done so, and as you can see, there are three APIs. I will only test one API, which is create person. Now, before I do so, I have to change the variables to reflect the server's IP address.
I will put the correct IP address and the correct port. And finally, I will hit the Create Person API, which has a JSON body over here. So this information should be added to the database if everything is successful. So let's send the request. I have a successful response indicating that this record has been added to the database. Now we can validate it through the application logs, which clearly indicate that the application received the request, added it to the database, and returned a response to the client. And finally, I will go back to the database and refresh the persons collection, and I can see the record that was added over here.

Hi and welcome to this lecture entitled Infrastructure Types.


Currently there exist multiple infrastructure types to deploy and manage web applications.
Each option possesses its own advantages, disadvantages and use cases.
Moreover, a combination of more than one type can be used together to create the desired
infrastructure.
In this lecture, we discuss four of the most popular infrastructure types: physical servers, virtual machines, containers, and serverless.
This diagram lists different infrastructure options for deploying and managing applications.
To better understand the difference between each type.
Consider a scenario where Company X requires two applications A and B to be deployed.
Let us begin by talking about physical servers. In this type of infrastructure, the company would have to purchase, configure, and manage a physical server in a physical location, for example a data center.
Moreover, once configured and an operating system is installed, both applications A and B can
be deployed on the server.
The company must ensure the correct configuration and management of the server and
application throughout its lifecycle.
Physical servers have many advantages, being customizable and giving powerful performance through full dedication of the server resources to the application.
However, some of the disadvantages include: large CapEx and OpEx, since setting up the required infrastructure components may require a large upfront investment, in addition to another one for maintaining the resources; management overhead to continuously support and manage the resources; lack of scalability, since modifying the compute resources is not intuitive and requires complicated labor and a lot of time; the risk of over- or under-provisioning due to the lack of scalability; performance degradation, since hardware components will degrade and fail over time; and finally, improper isolation between applications, since all the applications deployed on the same physical host share all the host resources together.
Virtual Machines.
One of the best practices when deploying web applications is to isolate the application
components on dedicated environment and resources.
Consider an application composed of several components: a MySQL database, Node.js backend API servers, a Node.js backend consumer service, a React.js front end, and RabbitMQ.
Typically, each of these components must be properly installed and configured on the server
with enough resources available.
Deploying and managing such an application on physical servers may become cumbersome,
especially at scale.
In fact, deploying all the components on one physical server may pose several risks.
For example, improper isolation for each application component, race conditions, deadlocks,
and resource overconsumption by components.
The server also represents a single point of failure.
Deploying the components on multiple servers, on the other hand, is also not an intuitive
approach, especially due to cost, lack of scalability and all the other disadvantages that we
already discussed.
Virtual machines represent the digitized version of the physical servers.
As a matter of fact, hypervisors, for example Oracle VirtualBox, Hyper-V, and VMware, are software solutions that allow the creation and management of one or more virtual machines on a physical server. Different virtual machines with different flavors can be created and configured on the same physical host. For example, one physical server may host three different VMs, each with its own dedicated resources and operating system, and each can be managed separately from the other ones.
Some of the advantages include low capital expenditure, since there is no need to buy and
manage hardware components.
Flexibility through the ability to quickly create, destroy and manage different VM sizes with
different flavors.
Disaster recovery: most virtual machine vendors ship with solid backup and recovery mechanisms for the virtual machines. Reduced risk of resource misuse. And finally, proper environment isolation.
Some of the disadvantages include performance issues: virtual machines add an extra level of virtualization before accessing the compute resources, rendering them less performant than physical machines.
Security issues: multiple virtual machines share the compute resources of the underlying host; without proper security mechanisms, this may pose a huge security risk for the data in each virtual machine.
And finally, increased overhead and resource consumption: the virtualization includes the operating system, so as the number of virtual machines placed on a host increases, more resources are wasted as overhead to manage each virtual machine's requirements.
While virtual machines virtualize the underlying hardware, containerization is another form of virtualization, but for the operating system only. Container engines, for example Docker, are software applications that allow the creation of lightweight environments containing only the application and the binaries required for it to run on the underlying operating system.
All the containers on a single machine share the system resources and the operating system, making containers a much more lightweight solution than virtual machines in general.
A container engine deployed on the server, whether it is a physical or virtual machine, takes
care of the creation and management of containers on the server.
Some of the advantages of containers include decreased overhead: containers require fewer resources than virtual machines, especially since the virtualization does not include the operating system.
Portability: container images are highly portable and can be easily deployed on different platforms. A Docker image can be deployed on any container engine that supports it, for example Docker, Kubernetes, AWS ECS, AWS EKS, Microsoft AKS, or Google Kubernetes Engine.
And finally, faster build and release cycles: containers, due to their nature, enhance the software development lifecycle, from development to continuous delivery of software changes.
Some of the disadvantages include data persistence: although containers support data persistence through different mechanisms, they are still considered a poor solution for applications that require persistent data, for example stateful applications, databases, etc. Up until today, it is not advised to deploy such applications as containers.
Resource overhead and performance issues: containers are more lightweight, require fewer resources from the underlying host, and generally perform better than VMs. However, being a virtualization technology, resource overhead and performance issues still exist with containers, especially when improperly configured and managed.
And finally, cross-platform incompatibility: containers designed to work on one platform will not work on other platforms.
For instance, Linux containers do not work on Windows operating systems and vice versa.
Serverless solutions, for example AWS Lambda functions, Microsoft Azure Functions, and Google Cloud Functions, mainly designed by cloud providers not long ago, are also becoming greatly popular nowadays.
Despite the name, Serverless Architectures are not really without servers.
Rather, solution providers went deeper into virtualization, removing the need to focus on
anything but writing the application code.
The code is then packaged and deployed into specialized functions that take care of managing
and running it.
Serverless solutions paved the way for new concepts, especially function as a service which
promotes the creation and deployment of a single function per serverless application.
For example, one function to send verification emails as soon as a new user is created.
The diagram clearly showcases the architecture of serverless solutions.
The application code is packaged and uploaded to a function, which represents a virtualized environment that is completely taken care of by the provider.
Serverless architectures, although they alleviate a lot of the challenges presented by the previous three infrastructure types, are still unable to replace any of them due to their many limitations.
As a matter of fact, serverless architectures do not meet the requirements of all use cases and therefore work best in conjunction with other infrastructure types.
Moreover, serverless solutions are offered by providers, rather than being a solution that can be deployed and managed by anyone.
Some of the advantages of serverless include cost.
Users only pay for the resources consumed during the time of execution.
Idle functions generally do not use any resources and therefore the cost of operation is greatly
reduced. Scalability.
Serverless models are highly scalable by design and do not require the intervention of the user.
And finally, faster build and release cycles.
Developers only need to focus on writing code and uploading it to a readily available
infrastructure.
Some of the disadvantages include security.
The application code and data are handled by third party providers.
Therefore, security measures are all outsourced to the managing provider.
The security concerns are some of the biggest for users of the serverless model, especially when
it comes to sensitive applications that have strict security requirements.
Privacy: the application code and data are executed on shared environments with other application code, which poses huge privacy and security concerns.
Vendor lock-in: serverless solutions are generally offered by third-party providers, for example AWS, Microsoft, Google, etc. Each of these solutions is tailored to the provider's interests.
For instance, a function deployed on AWS Lambda may not necessarily work on Azure Functions without code modifications.
Excessive use of and dependence on a provider may lead to serious vendor lock-in issues, especially as the application grows.
Complex troubleshooting: in contrast with the ease of use and deployment of the code, troubleshooting and debugging the applications is not always straightforward.
In fact, serverless models do not provide any access to the underlying infrastructure and offer their own generic troubleshooting tools, which may not always be enough.
In conclusion, technology is advancing at a rapid pace.
The reliance on software solutions is increasing by the day, and therefore robust infrastructure solutions are required to efficiently deploy, run, and manage these software solutions.
This lecture described several infrastructure options, along with the advantages and disadvantages of each of them.
Evidently, there is no best solution or a solution that fits all use cases.
Rather, each infrastructure type favors a certain function or use case. To get the best performance, users must carefully assess each option, or combination of options, depending on their requirements.
Hi and welcome to this lecture entitled Introduction to Containers and Docker.
With the rapid evolution of the software industry in general, developing and deploying web
applications is not as straightforward as writing the code and deploying it on remote servers.
As a matter of fact, today's software development lifecycle requires the collaboration of
different teams for example, developers, designers, managers, etc. working on different tools
and technologies to serve the challenging application requirements and meet the customer
needs in an organized and optimized way.
Such collaboration may prove to be extremely complex and costly, if not properly managed.
Docker is a containerization software that aids in simplifying the workflow by enabling a
portable and consistent application that can be deployed rapidly anywhere, thus allowing
software development teams to operate the application in a more optimized way.
This lecture explains different container tools and terminologies, namely container images,
containers, container registries, Docker files, and container data management.
Moreover, Docker is introduced and used to reinforce the information learned with examples
and scenarios.
A container image is nothing but a snapshot of the desired environment.
A container uses an image as a starting point for its process.
For example, a sample image may contain a database, NGINX, or a customized Node.js RESTful service. An image is therefore a snapshot of an isolated environment, usually created by a maintainer, and can be stored in a container registry.
A container is then the running version of an image.
A container registry is a service to store and maintain images.
Container registries can either be public, allowing any user to download the images or private
requiring user authentication to manage the images.
Examples of container registries include Docker Hub, Amazon Elastic Container Registry and
Microsoft Azure Container Registry.
As already mentioned, Docker is a container engine that aids in creating and managing Docker
images and containers.
A Dockerfile is a text document interpreted by Docker and contains all the commands required to build a certain Docker image.
A Dockerfile allows the creation of the resulting image using only one build command.
By default, Docker containers do not support data persistence.
When a container no longer exists, all the data saved inside it will disappear.
Worse, sharing data between containers cannot be achieved intuitively.
One of the many ways to persist data would be to store them on the host machine rather than
in the container itself.
Therefore, even if the container crashes or is restarted or stopped, the data remains intact on
the host machine.
Docker presents two options for data persistence on the host machine: volumes and bind mounts.
Docker volumes are created and managed by Docker.
Volumes can either be created explicitly by the client, using the docker volume create command, or automatically by Docker when mounted to a container.
Volumes are usually stored locally on the host machine, for example inside the Docker volumes directory on Linux, and can also support volume drivers, allowing data to be stored on remote hosts or on cloud providers.
A volume can be mounted into multiple containers simultaneously, either with read-only or read-write policies. Furthermore, volumes can either be named or anonymous.
Docker ensures the uniqueness of volume names.
Bind mounts serve the same purpose as volumes, that is, persisting data.
However, bind mounts have limited functionalities compared to volumes and are not managed by Docker.
A bind mount can be located inside any file system directory, for example /opt/someDirectory, or any other directory of your choice.
In general, it is advised to always rely on named volumes rather than on bind mounts.
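As a minimal sketch of the difference, assuming a hypothetical volume named my-volume, a hypothetical host directory /opt/someDirectory, and the httpd image purely as an example:

# Create a named volume (managed by Docker) and mount it into a container at /data
docker volume create my-volume
docker run -d --name with-volume -v my-volume:/data httpd:2.4-alpine

# Mount a host directory (bind mount) into another container at the same path
docker run -d --name with-bind-mount -v /opt/someDirectory:/data httpd:2.4-alpine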
Hi, and welcome to this demo entitled Introduction to Containers. To better understand the differences between the concepts explained in the previous lectures, in this demo we will attempt to deploy a simple HTML application on an EC2 machine.
Then we will containerize the application and redeploy it.
The following steps will be performed.
First, we will create the networking and compute resources on AWS.
Then we will deploy a simple HTML application on an EC2 machine.
Afterwards, we will containerize the application and store the image on AWS Elastic Container Registry, or ECR.
And finally, we will deploy the containerized version of the application.
Now, I have already created the security group and the EC2 machine.
I also installed Apache and deployed the HTML application.
We have done all these steps many times before.
So to test my setup, I will perform an HTTP request using the machine's IP, and the custom page should be returned.
I will put the machine's IP in the browser, and the custom page is now loaded.
In conclusion, to deploy the application on the EC2 machine, several tools have to be deployed and configured.
While this is manageable for one simple application, things won't be as easy and straightforward when the application grows.
For instance, assume an application of five components must be deployed.
Managing each component on the server will not be so easy.
So in the next part of this demo, we will show how containerization can remove such problems. First of all, I will stop the Apache web server.
And I will validate it by hitting the machine's IP again.
And the custom page shouldn't be loaded anymore.
Now it is time to containerize the application.
But before doing so, there are three prerequisites that must be completed.
First, we need to create and attach an IAM role with enough permissions to allow the VM to communicate with the ECR, since we will be pushing and pulling images to and from the ECR. Then we will install the AWS CLI.
And finally, we will install Docker.
So let us start by creating the IAM role.
Here I am in the IAM service.
I will create a role.
The use case will be EC2, since we will be attaching the role to an EC2 machine.
I will give it administrator access to make sure I have enough permissions.
Finally, I will give it a name and create the role.
Now that the role is created, I will proceed to attach it to the machine.
Now, I have already installed the AWS CLI and Docker.
However, the document attached to this video contains all the steps and commands required to install everything in great detail.
So I will validate that the AWS CLI is installed by checking its version, and that Docker is installed using the docker ps command, which lists all the available containers.
Evidently, I have no containers over here.
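As a rough sketch, the validation commands look like this (the exact output differs per installation):

aws --version    # prints the installed AWS CLI version
docker ps        # lists running containers; empty at this point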
There exist several official Docker images, curated and hosted on Docker Hub, aiming to serve as starting points for users attempting to use them or build on top of them.
There is also an official repository for the Apache web server, containing all the information necessary to deploy and operate the image.
So first, I will start by downloading the image to the server using the docker pull command.
I will use the 2.4-alpine image tag.
An image tag is an identifier to distinguish the different image versions available.
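The download and listing steps correspond to commands like the following:

docker pull httpd:2.4-alpine   # download the Apache image with the 2.4-alpine tag
docker images                  # list the images available on the server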
So after downloading the image successfully, I will list the available images, and as we can see, the httpd image with a tag of 2.4-alpine is now successfully installed on my server.
Next, it is time to run a container from this image.
The goal is to run the Apache server on port 80 and have it accept requests.
So to create a Docker container, I will use the docker run command with -d, which means run in detached mode, or in the background.
I will give the container a name, my-first-container, and I will map the container port to the host port so I can actually hit the container from outside the Docker network.
And finally, I will use the image and the tag.
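A minimal sketch of this run command, assuming the host's port 80 is mapped to the container's port 80 and the container name is written as my-first-container:

docker run -d --name my-first-container -p 80:80 httpd:2.4-alpine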
So now, as we can see, there is a container with the name my-first-container, using the httpd image, running on my machine.
Now I will hit the container using the machine's IP, and hopefully I will get the default Apache page.
We have the default Apache page loaded as soon as we hit the machine.
So now that the base image is successfully deployed and running, it is time to customize it.
The official documentation presents clear instructions on the different ways the image can be customized, for example creating custom virtual hosts, adding static content, adding custom certificates, and so on and so forth. So in this example, we need to deploy a custom HTML page on the container.
The first thing is that I will go inside the container using the docker exec command.
Now I am in the container, inside the /usr/local/apache2 directory.
I will navigate to htdocs.
And as you can see, there is an index.html file, which contains the default page that gets loaded every time we hit the container.
So I need to modify this and add my custom HTML page. First of all, I will install nano.
Then I will edit the index file, remove the old content, and replace it with my HTML document.
Now I will save it, exit the container, and I will perform another request, hoping to get the custom HTML page now. So let me get the machine's IP, put it in the browser, and we have the custom HTML application loaded now.
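A sketch of these customization steps, assuming the container name used above (the nano installation uses apk because the 2.4-alpine image is Alpine-based):

# Open a shell inside the running container
docker exec -it my-first-container sh
# Inside the container: install an editor and modify the default page
apk add --no-cache nano
nano /usr/local/apache2/htdocs/index.html
# Save the file, then leave the container
exit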
Unfortunately, the changes performed will not persist, especially when the container crashes.
As a matter of fact, by default, containers are ephemeral, which means that custom data
generated during runtime will disappear as soon as the container fails.
To verify it, I will simulate a failure in the container by removing it and starting it again.
So I will remove the container. I will make sure that it is removed. And I will start another
container using the same Docker run command.
So now I will hit the container again using the machine's IP.
And as you can see, the changes performed disappeared.
So to persist the changes, a custom image must be built. The custom image is a snapshot of the container after adding the custom website.
So I will repeat the steps to add the custom HTML page and ensure that the container is returning the new document again.
So let me go back inside the container and repeat the same steps: edit the index file, remove the default page, and replace it again with the custom document. Then I will exit the container and hit the machine again.
So, to create a new image from the customized running container, I will use the docker commit command on my-first-container.
And now I will list all the images available.
Clearly, there is a new image with no name and no tag that has just been created using the docker commit command.
So now I need to give it a name and a tag, and I can do so using the docker tag command.
I will add the image ID, and then I will give it a name, custom-httpd, and a tag, v1.
Now the image should be created. Yes, it was created.
So now I have a new image called custom-httpd with a tag of v1.
So what I will do is remove the old container and create a new container, but this time using the custom-httpd image that I just created.
I gave it a new name, my-second-container, and I am using the new image now.
Now, if I hit the machine again, I should receive the custom document instead of the default page. So we were able to persist the changes.
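A sketch of this commit-and-tag flow, using the names from this demo and a placeholder image ID:

# Snapshot the customized container into a new (unnamed) image
docker commit my-first-container
# Name and tag the resulting image (replace <image-id> with the ID shown by docker images)
docker tag <image-id> custom-httpd:v1
# Replace the old container with one running the custom image
docker rm -f my-first-container
docker run -d --name my-second-container -p 80:80 custom-httpd:v1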
The custom image is now located on the virtual machine.
However, storing the image on the VM alone is not a best practice, especially in real life
scenarios. In fact, this is not a good way to share the image with other developers working on
the same image.
Also, I have no good or easy way to download and run the image on different servers.
And finally, there is a risk of losing the image, especially if the virtual machine crashes and it
is not well backed up. So, a better solution would be to host the image in a container registry.
In this demo, I will be using ECR to store the image.
So first, I need to create a container repository.
Here I am in the Elastic Container Registry service. I will create a repository that is private.
I will give it a name, custom-httpd. I will leave everything else as default, and I will create the repository.
So now, as you can see, AWS has created a repository for me, with a URI for it.
And as you can see, there are no images pushed yet.
So the first thing to do is to log into the ECR from my machine.
And I will do this using this command. Login succeeded.
Now, as you can see here, the repository created on AWS has a different name than the one we created locally. Therefore, we need to tag the image with the correct name before we are able to push it. So let us do this using the docker tag command.
And now, as you can see, I have a third image with a different name.
So, I will push the image from the VM to the ECR using the docker push command.
And if we look here, the image with the tag v1 is successfully pushed to the ECR.
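The login, tag, and push flow looks roughly like the following; the account ID, region, and repository URI are placeholders, not the actual values used in this demo:

# Authenticate Docker against the ECR registry
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
# Re-tag the local image with the repository URI, then push it
docker tag custom-httpd:v1 123456789012.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v1
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v1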
Clearly, we were able to push the image to the ECR. Now, to finally test that the new custom image is successfully built and pushed, I will create a third container, but this time from the image located in the container registry.
So, to do so, I will clean the server from all existing containers and images. Let me first delete all the containers.
And all the images.
Let's make sure that everything is removed. So I have no containers, but I still have the images.
Let's remove them one by one. So, no more images and no more containers.
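A common way to perform this cleanup is sketched below; the second command assumes you really want to delete every local image:

docker rm -f $(docker ps -a -q)    # force-remove all containers, running or stopped
docker rmi -f $(docker images -q)  # remove all local images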
Now I will run a third container, but this time I will use the image URI from the ECR.
So, as you can see: docker run, the name my-third-container, the port mapping, and finally the image from the ECR.
So let's try it. It is downloading the image, and it created the container.
And now I will attempt to hit the machine, and I hope to receive the custom HTML page.
So let's do this. Let's get the machine's IP, paste it in the browser, and I have my custom HTML page. Creating images from existing containers is one intuitive way.
However, such an approach may prove to be inefficient and inconsistent.
In fact, Docker images may need to be built several times a day.
Also, a Docker image may require several complex commands to be built.
And finally, this approach is difficult to maintain as the number of services and teams grow.
Dockerfiles are considered a better alternative, capable of providing more consistency and allowing the build steps to be automated.
To better understand Dockerfiles, we will containerize the application again, but this time using a Dockerfile.
To do so, the application code and the Dockerfile must be placed together on the server.
I will first create a directory called tempDir.
Then I will place the application code inside the directory, in a file called index.html.
And finally, I will create a Dockerfile next to the index.html file, with the following content.
The Dockerfile has two instructions.
First, it says: use httpd:2.4-alpine as the base image, and copy the index.html file from the server into the container, inside this directory.
So basically, we are automating everything we have done manually before. I will save.
And now, to build the image using the Dockerfile, I will use the docker build command, and I will explain the command after I paste it.
So I am telling Docker: please build, use this Dockerfile, and tag the resulting image with this name. And the dot here points to the context, or the location of the files.
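As a sketch, the Dockerfile and the build command look roughly like this; the destination path inside the image and the tag custom-httpd:v2 are assumptions for illustration:

# Dockerfile
FROM httpd:2.4-alpine
COPY index.html /usr/local/apache2/htdocs/index.html

# Build the image from the directory containing the Dockerfile and index.html
docker build -f Dockerfile -t custom-httpd:v2 .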
So I will hit enter. And as you can see, it executes all the instructions in the Dockerfile.
And finally, it has built the image and tagged it.
So if I run docker images, I can clearly see the resulting image over here with its new tag. Now, I will push the image to the ECR.
Let me get the command: docker push, followed by the image name.
And I will navigate again to the ECR and refresh.
And the newly created image is over here.
Finally, to simulate a fresh installation of the image, I will remove all the containers and the images from the server, and I will create a final container from the newly pushed image.
So first of all, remove all the containers, then all the images.
I will make sure that there are no containers and no images.
And finally, I will run a final version of the container, using the custom httpd image that I just pushed. And now I will make sure that the container is running.
And finally, I will perform a request.
And there you go.
The custom page that I just created is now loaded.
Hi, and welcome to this demo entitled Advanced Docker Concepts.
Now that we have learned the basics of containers and Docker, we will take it to the next level, and this time we will containerize and deploy the backend application that we previously deployed on an Ubuntu VM. To do so, we will deploy the containerized version of the database with data persistence, as well as the Node.js backend service.
The following steps will be completed.
We are going to first deploy the containerized version of ArangoDB without data persistence.
Then we are going to enable data persistence using Docker volumes.
Afterwards, we will build the backend service image using a Dockerfile, and we will deploy the containerized version of the backend service.
Finally, we are going to perform some API requests to validate the deployment.
I have already created the security group with all the inbound rules required and the virtual
machine.
In addition to that, I installed Docker on the machine.
The containerized version of ArangoDB can be found on Docker Hub.
I will create an ArangoDB Docker container using the following command: docker run in detached mode, map the container's port to the host port, and, using environment variables, specify RocksDB as the storage engine and the root password to be the password of the root user. Finally, I am using the official ArangoDB Docker image with the tag 3.6.3.
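A sketch of this command, assuming a hypothetical container name and a placeholder password, and using the ARANGO_* environment variables documented for the official image:

docker run -d --name arangodb -p 8529:8529 \
  -e ARANGO_STORAGE_ENGINE=rocksdb \
  -e ARANGO_ROOT_PASSWORD=changeme \
  arangodb:3.6.3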
Once finished, I will make sure that the image is downloaded on the server and that the container is now running.
Using the browser, I will log into the database using the root username and password that we specified, and I will attempt to create some data for testing purposes.
So first of all, I will create a test database. Then I will log into it.
And I will create a test collection.
ArangoDB is now deployed as a Docker container and is able to serve requests on port 8529.
However, if the container fails, the data will disappear.
To verify this, I will completely remove the container, and then I will start a new one with the same command, simulating a total failure.
So let us do this first. I will remove the container.
I will make sure that it is completely removed.
And now I will start it again using the same command.
Now I will log in again to the management console and see what happens.
So only the system database exists.
All the data that we entered is now missing. On production systems, this is a recipe for disaster.
To enable data persistence, I will delete the existing container and recreate it with a named volume mounted to ArangoDB's data directory in the container, which can be found at /var/lib/arangodb3. So first of all, remove the container.
Make sure that it is removed, and then recreate it with the same command, but with a named volume mounted to the /var/lib/arangodb3 directory.
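A sketch of the persistent version of the command, assuming a hypothetical volume name arangodb-data and the same placeholder password as before:

docker run -d --name arangodb -p 8529:8529 \
  -e ARANGO_STORAGE_ENGINE=rocksdb \
  -e ARANGO_ROOT_PASSWORD=changeme \
  -v arangodb-data:/var/lib/arangodb3 \
  arangodb:3.6.3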
So now I will make sure that the container exists, and that the volume is created.
Now I will go back to the management console and create some test data.
I will enter the username and password. I will now create a database, log into it, and create a test collection. Now I will remove the container to simulate a failure, and then I will recreate it using the same command and the same volume, and see if the data persists.
Now the container is created again. I will log back into the management console with the root username and password. And the data persisted.
The backend service is developed using Node.js and Sails.js. To successfully deploy it on an Ubuntu VM, we had to perform the following steps: install Node.js on the server, clone the repository, install the dependencies, export the environment variables, and finally start the application.
Instead of manually performing all these steps to containerize the service, a better option would be to use a Dockerfile containing all the commands.
So to build the image on the server, first I will clone the repository.
And then let's examine the Dockerfile. We are instructing Docker to: use Node.js as the base image, set the working directory to /app, add all the files inside the directory to the container inside the /app directory, install the packages, and export the environment variables.
And finally, we specify the run command.
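A rough sketch of such a Dockerfile, with an assumed Node.js base tag, placeholder environment variable names, and an assumed start command, since the actual repository contents are not shown here:

FROM node:12
WORKDIR /app
# Copy the application code into the image
ADD . /app
# Install the dependencies
RUN npm install
# Placeholder variables for the database connection
ENV DB_HOST=<machine-public-ip>
ENV DB_PASSWORD=<root-password>
# Start the service
CMD ["npm", "start"]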
Now, there are two things to modify over here.
First, we must change the ArangoDB password to the one that we specified in the DB container, and then the DB host must be replaced with the machine's public IP.
So now I will save, and I will build the image using the docker build command, giving the image the name backend-service and a tag.
Now that the build is complete, I will list all the images, and we can clearly see that there is the base image and the final image that was created.
Now let's run a container from the backend-service image using the docker run command.
Let's make sure that the container is up.
And let's monitor the logs of the container to make sure that it was able to successfully connect to the database.
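A sketch of these two steps, assuming a hypothetical container name and that the service listens on port 3000 inside the container:

docker run -d --name backend -p 80:3000 backend-service
docker logs -f backend   # follow the logs to confirm the database connection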
Now, the last step would be to perform some API requests to make sure that the whole flow is working. So I have my Postman collection up and running.
First of all, I will hit the health API. I need to get the machine's public IP.
I will replace it here, on port 80. And it is successfully returning a success message.
Now I will attempt to create a person, but first, I need to modify the environments.
And I will send an API request with the following body. A success response is returned.
And finally, if we log into the DB management console, we can see that there is a persons database created by the application, a persons collection, and finally a record.
4. Training Exercises

True Or False Questions


1. The Internet and the World Wide Web are the same
thing.
Solution
Answer: False
Explanation: The Internet and the World Wide Web are not the same thing. The
former is the “Network of Networks”, and provides connectivity between computer
systems across the globe. The latter is an application built on top of the internet.

2. The TCP/IP Protocol is used to exchange data


between clients and servers.
Solution
Answer: False
Explanation: The TCP/IP protocol is used to connect devices to the internet. The
HTTP protocol is used to exchange data between clients and servers.

3. The FTP protocol is used to remotely connect to


Windows machines.
Solution
Answer: False
Explanation: RDP is the protocol used to connect to remote Windows machines.

4. The Domain Registry System (DRS) is used to map


domain names to their corresponding IP addresses.
Solution

Answer: False
Explanation: There is no such thing as a Domain Registry System. The tool in
question is the Domain Name System (DNS)

5. Load Balancing is best used with Vertical Scaling.


Solution
Answer: False
Explanation: Load balancing is best used with Horizontal Scaling (The process of
placing similar servers in parallel)

6. The Least Response Time Load Balancing algorithm


routes requests to instances with the longest response
times to health checks.
Solution
Answer: False
Explanation: The Least Response Time Load Balancing algorithm routes requests
to instances with the shortest response times to health checks.

Multiple Choice Questions


1. Select the valid load balancing algorithms for
Apache2.
a. Robin Hood

b. Least Connections

c. Least CPU consumption

d. Least response Time

e. All of the above

Solution
Answer: b, d

2. Select the valid components to build static websites.
a. C++

b. Databases

c. HTML

d. CSS

e. All of the above

Solution
Answer: c, d

3. Select the true statements.


a. Apache2 is an example of Web servers.

b. Web servers are used to store important information, such as the registered users
in an application.

c. The dig command in Linux fetches the IP address of a certain domain name.

d. Vertical Scaling is the process of changing the operating system of a machine, in


order to accommodate traffic.

e. All of the above

Solution
Answer: a, c

4. Select the true statements.


a. The ls command prints out all the files and directories that are present in the
system.

b. The -z flag in the ls command prints the user that created the file.

c. The help command is used to display the user manual for any linux command.

d. The -p flag in the mkdir command is used to create intermediate directories.

e. All of the above

Solution
Answer: d

Linux Commands
1. Create an empty file called index.html inside the /opt/
directory
Solution
Answer: touch /opt/index.html

2. Create a directory path called /opt/mnt/tempDir/ .


Assume that only the /opt directory is created
Solution
Answer: mkdir -p /opt/mnt/tempDir/

3. Using one command, create a file called temp.txt .


The file should be in the already created directory /mnt/ .
The file content must be: Hellozzz . Assume you are
currently present in the /opt/myfirstapp/ directory.
Solution
Answer: echo 'Hellozzz' > /mnt/temp.txt

4. Append the sentence How are you? to the temp.txt file


created in the question above
Solution
Answer: echo 'How are you?' >> /mnt/temp.txt

5. Delete the temp.txt file inside the /mnt directory


Solution
Answer: rm -f /mnt/temp.txt

1. The command mv /mnt/file.txt /opt moves the file.txt from the /opt directory to the /mnt
directory.
a. True
b. False
2. Select all true statements.
a. A message bus is used to enable asynchronous communication between the application
components.
b. It is a good practice to serve a website over HTTPS instead of HTTP.
c. It is a good practice to deploy the database in public subnets and protect it with a
username and password
d. Redis is used for serving and caching static data such as videos and images
3. Select the valid HTTP methods
a. INSERT
b. REMOVE
c. RETRIEVE
d. PATCH
4. The default ports for the MySQL, MongoDB, and SSH protocols are (The order is important).
a. 1, 21, 80, 543
b. 3306, 27017, 22
c. 8080, 8443, 23
d. 22, 80, 45
5. Print the content of the index.html file to stdout

cat index.html

6. What is the functionality performed by the bash script below:


#!/bin/bash
mkdir /opt/photos
chown -R ubuntu:ubuntu /opt/photos
cp /tempPhotos/* /opt/photos/
rm -rf /tempPhotos/
a. Create a file called /opt/photos -> Create a linux user called ubuntu -> Override the photos
file by the already existing tempPhotos files -> Removing the tempPhotos files
b. Create a file called photos -> Change its ownership to the ubuntu user and group -> Move
the contents of the /tempPhotos directory to the /opt/photos directory
c. Create a directory called /opt/photos -> Change the ownership of the /opt/photos and
everything inside it to the ubuntu user and group -> Move the files from the /tempPhotos
directory to the /opt/photos/ directory -> Remove the /tempPhotos directory
d. Create a directory called /opt/photos -> Change the ownership of the /opt/photos and
everything inside it to the ubuntu user and group -> Copy the files from the /tempPhotos
directory to the /opt/photos/ directory -> Remove the /tempPhotos directory
7. Display information about the amount of free disk space on a partition on the machine
df
8. Select all true statements about databases
a. MongoDB is a graph database
b. Databases are considered as part of the client side of the application
c. MySQL databases usually store data in JSON format
d. Document databases are best used in cases which data models constantly change over
time
9. The command sudo mkdir -p /opt/temp/directory/file.txt creates a file called file.txt inside the
/opt/temp/directory/ directory.
a. True
b. False
10. The Application (Business Logic) Layer can be developed using frameworks such as NodeJS,
Python, PHP.
a. True
b. False
11. HTTP status codes 5XX indicate a logical error (e.g. Database unreachable)
a. True
b. False
12. Write "hello world!" to file.txt:

echo "hello world!" > file.txt

13. The Least Response Time Load Balancing algorithm routes traffic to the server with the fastest
response time to health checks.
a. True
b. False
14. Select the valid HTTP components
a. Column
b. Method
c. Body
d. Index
15. The command rm is used to create new files
a. True
b. False
16. Load Balancing is the process of distributing traffic across multiple applications on one server
a. True
b. False
17. Ubuntu, Centos, Fedora are all official Linux distributions
a. True
b. False
18. A Devops Engineer is supposed to deploy a HTML website on AWS. The solution will be deployed
on a single AWS EC2 Linux VM, with Apache2 installed and configured to serve the website. No
domain name is assigned to the website. Therefore, the website can be reached via
the server's IP, on port 82 (e.g. http://14.3.2.1:82/some/route). The virtual machine already
exists, and Apache is already installed with its default configuration. The engineer deployed the
code, configured the virtual host, and restarted Apache2. The requests are not successfully
reaching the website. What are the possible causes?
a. The security group (Inbound traffic) rules are not properly configured
b. Apache2 is not properly configured to listen to port 82
c. The virtual machine must be restarted
d. A database must be configured
19. MongoDB, Apache, and NGINX are ALL valid examples of Document databases.
a. True
b. False
20. Select all true statements
a. NGINX is an alternative to Apache
b. The PUT HTTP method is used to create new data on the application
c. The default ports for HTTP and HTTPS are 80 and 443 respectively
d. 504 code is usually used to signal a “Resource Not Found” error (Resource could be
page, file, video, etc…)
21. Change the ownership of the index.html to the ubuntu user and group

chown ubuntu:ubuntu index.html

22. Horizontal scaling is the process of adding or removing servers. Vertical scaling is the process of
modifying the resources of the server.
a. True
b. False
23. HTML and CSS are used to build backend applications
a. True
b. False
24. Select all true statements
a. 403 code is usually used to signal a “Resource Not Found” error (resource could be page,
file, video, etc…)
b. 4XX codes usually indicate a server side error (e.g. unreachable database)
c. 2XX codes indicate that the request was successfully processed and returned to the
client
d. 5XX codes usually indicate a logical error (e.g. Client requesting a file that doesn’t exist)
25. The command cd stands for current directory, and is used to print the absolute path of the
current directory
a. True
b. False
26. The World Wide Web is the “Network of Networks”, and provides connectivity between
computer systems across the globe
a. True
b. False
27. MySQL and PostgreSQL are best used in cases where the data model is stable, and does not
change frequently
a. True
b. False
28. Facebook, Google, LinkedIn are all examples of web applications
a. True
b. False
29. Print the absolute path to the current directory
pwd
30. Select the valid Database Management Systems (DBMS)
a. MongoDB
b. Files
c. MySQL
d. Notebook
• Question 1
1 out of 1 points
Ubuntu Unity, Ubuntu Cinnamon, and Xubuntu are all official Ubuntu variants

Selected Answer:
True
Answers:
True
False
• Question 2
1 out of 1 points
MariaDB and PostgreSQL are best used in cases where the data model is stable, and does
not change frequently.

Selected Answer:
True
Answers:
True
False
• Question 3
1 out of 1 points
The command cd stands for change directory. It is used to change the current directory of
the terminal.
Selected Answer:
True
Answers:
True
False
• Question 4
1 out of 1 points
Load Balancing is the process of distributing traffic across multiple application replicas each
on a different server.

Selected Answer:
True
Answers:
True
False
• Question 5
1 out of 1 points
The internet is the “Network of Networks”, and provides connectivity between computer
systems across the globe.

Selected Answer:
True
Answers:
True
False
• Question 6
3 out of 3 points
A DevOps engineer is attempting to write a completely useless script. Unfortunately, the engineer left some of the commands incomplete. Help the
engineer complete the commands of the script. (PS: Do not be like this engineer)
#!/bin/bash

# Create the /opt/photos directory


[A1] -p /opt/photos

# Change the ownership of /opt/photos to the ubuntu user and group


[A2] -R [A3]:ubuntu /opt/photos

# Copy the content of /tempPhotos/* to /opt/photos/


[A4] /tempPhotos/* /opt/photos/

# List the content of /opt/photos/


[A5] -lah /opt/photos/

# Remove the content of /tempPhotos/


[A6] -rf /tempPhotos/
Specified Answer for: A1 mkdir
Specified Answer for: A2 chown
Specified Answer for: A3 ubuntu
Specified Answer for: A4 cp
Specified Answer for: A5 ls
Specified Answer for: A6 rm
Correct Answers (Exact Match): A1 = mkdir, A2 = chown, A3 = ubuntu, A4 = cp, A5 = ls, A6 = rm
• Question 7
0 out of 1 points
The command sudo mkdir -p /opt/temp/directory/file.txt creates a directory
called file.txt inside the /opt/temp/directory/ directory
Selected Answer:
False
Answers:
True
False
• Question 8
1 out of 1 points
The default ports for the ArangoDB, Redis, and MySQL are (The order is important).

Selected Answers: 4.
8529, 6379, 3306
Answers: 1.
8524, 6385, 3358
2.
22, 80, 45
3.
8925, 9756, 6033
4.
8529, 6379, 3306
• Question 9
1 out of 1 points
Facebook, Google, LinkedIn are all examples of static websites.

Selected Answer:
False
Answers: True

False
• Question 10
1 out of 1 points
Display the user manual for the Linux command used for moving files:

[A1] [A2]
Specified Answer for: A1 man
Specified Answer for: A2 mv
Correct Answers (Exact Match): A1 = man, A2 = mv
• Question 11
1 out of 1 points
Append "helloz!" to file.txt:
[A1] "helloz!" [A2] file.txt
Specified Answer for: A1 echo
Specified Answer for: A2 >>
Correct Answers (Exact Match): A1 = echo, A2 = >>
• Question 12
1 out of 1 points
Select all true statements

Selected 2.
Answers: It is a good practice to serve a website over HTTPS instead of HTTP.
4.
It is a good practice to deploy the database in private subnets and disable
direct access from the internet.
Answers: 1.
A Content Delivery Network is used to enable asynchronous communication
between the application components.
2.
It is a good practice to serve a website over HTTPS instead of HTTP.
3.
Redis is used for serving and caching static data such as videos and images.
4.
It is a good practice to deploy the database in private subnets and disable
direct access from the internet.
• Question 13
1 out of 1 points
The Least Connection Load Balancing algorithm routes traffic to the server with the least
number of active connections at the time the client request is received.
Selected Answer:
True
Answers:
True
False
• Question 14
1 out of 1 points
Select all true statements.

Selected 1.
Answers: 404 code is usually used to signal a “Resource Not Found” error (Resource
could be page, file, video, etc).
Answers: 1.
404 code is usually used to signal a “Resource Not Found” error (Resource
could be page, file, video, etc).
2.
Apache2 is an alternative to MongoDB.
3.
The INSERT HTTP method is used to create new data on the application.
4.
The default ports for HTTP and HTTPS are 8080 and 4443 respectively.
• Question 15
1 out of 1 points
HTML, CSS, and Javascript can be used to build Frontend applications

Selected Answer:
True
Answers:
True
False
• Question 16
1 out of 1 points
Change the permissions of the file.txt file:
[A1] 753 file.txt
Specified Answer for: A1 chmod
Correct Answers (Exact Match): A1 = chmod
• Question 17
1 out of 1 points
Select the valid HTTP methods.

Selected Answers: 1.
PUT
3.
GET
Answers: 1.
PUT
2.
EXEC
3.
GET
4.
MODIFY
• Question 18
1 out of 1 points
Select the valid Database Management Systems (DBMS)

Selected Answers: 1.
MongoDB
Answers: 1.
MongoDB
2.
Notebook
3.
Files
4.
Hard Disk Drive
• Question 19
1 out of 1 points
The command cp /mnt/file.txt /opt copies the file.txt file from the /opt directory to
the /mnt directory
Selected Answer:
False
Answers: True

False
• Question 20
1 out of 1 points
Vertical scaling is the process of adding or removing servers. Horizontal scaling is the
process of modifying the resources of the server.

Selected Answer:
False
Answers: True

False
• Question 21
1 out of 1 points
Select all true statements

Selected 4.
Answers: 3XX codes indicate that further action needs to be taken by the user agent
in order to fulfill a request.
Answers: 1.
3XX codes indicate that the request was successfully processed and returned
to the client.
2.
3XX codes usually indicate a server side error (e.g., Unreachable database).
3.
3XX codes usually indicate a logical error (e.g., Client requesting a file that
doesn’t exist)
4.
3XX codes indicate that further action needs to be taken by the user agent
in order to fulfill a request.
• Question 22
1 out of 1 points
Select the valid HTTP components.

Selected Answers: 2.
Header
4.
Protocol
Answers: 1.
Shard
2.
Header
3.
Row
4.
Protocol
• Question 23
1 out of 1 points
Change the ownership of the index.html file to the ubuntu user and root group:

chown [A1]:[A2] index.html


Specified Answer for: A1 ubuntu
Specified Answer for: A2 root
Correct Answers (Exact Match): A1 = ubuntu, A2 = root
• Question 24
1 out of 1 points
List the content of the /opt directory:
[A1] -lah /opt
Specified Answer for: A1 ls
Correct Answers (Exact Match): A1 = ls
• Question 25
1 out of 1 points
Select all true statements about databases.

Selected 3.
Answers: Document databases are best used in cases which data models constantly
change over time.
4.
MongoDB is a NoSQL database.
Answers: 1.
MySQL databases usually store data in JSON format.
2.
Databases are considered as part of the client side of the application.
3.
Document databases are best used in cases which data models constantly
change over time.
4.
MongoDB is a NoSQL database.
• Question 26
1 out of 1 points
MongoDB, Amazon DynamoDB, and ArangoDB are ALL valid examples of NoSQL
databases.

Selected Answer:
True
Answers:
True
False
• Question 27
1 out of 1 points
HTTP status code 403 is returned when the request does not have the correct permissions to
be processed by the server (e.g., Fetching someone else's data).

Selected Answer:
True
Answers:
True
False
• Question 28
1 out of 1 points
The Application (Business Logic) Layer can be developed using frameworks such as
NodeJS, Python, PHP.

Selected Answer:
True
Answers:
True
False
A Docker image is a running process of a Docker Container.
• True
• False
A Dockerfile is used to store and share Docker images between users.
• True
• False
docker pull alpine:latest is used to run a Docker container.
• True
• False
docker rmi alpine is used to remove the alpine image from the server.
• True
• False
A Docker registry repository can be either public or private.
• True
• False

HTTP status codes 5XX indicate a server error (e.g., Database unreachable).
• True
• False
The command cp /mnt/file.txt /opt moves the file.txt file from
the /mnt directory to the /opt directory.
• True
• False
A Message bus is used to enable asynchronous communication between the
application components.
• True
• False
Content Delivery Networks are used for serving and caching static data such as
videos and images.
• True
• False
MongoDB and Redis are relational databases.
• True
• False
It is a good practice to serve a website over HTTP instead of HTTPS.
• True
• False

It is a good practice to deploy the database in private subnets (no public IP) and
protect it with a username and password.
• True
• False
Containers are more lightweight than Virtual Machines.
• True
• False
Docker can be installed on Linux and/or Windows machines.
• True
• False
docker rm -f $(docker ps -a -q) removes all the existing images on the server
• True
• False
A Content Delivery Network (CDN) is used to enable asynchronous
communication between the application components.
• True
• False
RabbitMQ is used for caching and serving static data such as websites.
• True
• False
Git is a tool for source code management, allowing multiple developers to
efficiently collaborate together.
• True
• False
It is not a best practice to expose the database directly to the internet.
• True
• False
MySQL and PostgreSQL are document databases.
• True
• False
