Background Information.
Course Curriculum
Course Outcome
Background Information
General Information
University Instructor
LinkedIn Profile
Academia
BE Computer Engineering.
Publications:
Energy-Aware Placement and Scheduling of Network Traffic Flows with Deadlines on Virtual Network Functions.
Entity Achievements
AWS Beirut User Group - Community Lead.
- Organized dozens of workshops and tutorials related to AWS.
Meta Developer Circle Beirut - Mentor.
- Delivered multiple presentations and workshops related to software development and microservices.
AWS Community Builders - Member.
- Published multiple articles for the global tech community.
Awards and Certifications
● Kubernetes is everywhere.
● Everyone wants to become a DevOps Engineer, and all companies want to apply DevOps.
• Rigid Process.
• Discourages changes.
• Difficult to measure progress.
• Slow and complex deployments.
Agile: Developers develop and deploy; the Operations team is left out.
DevOps: Development and Operations collaborate across the develop and deploy cycle.
DevOps: What it isn’t
● Automate processes
Thank You
Introduction to Linux
Commands
Nicolas El Khoury
Introduction
Linux Shell
Basic Linux Commands
Introduction
Linux is an operating system, similar to Microsoft Windows and macOS. It is
completely open source and free. Several distributions (flavors) exist, including, but
not limited to, Ubuntu, Kali Linux, Red Hat Enterprise Linux (RHEL), CentOS, etc.
Linux powers the vast majority of servers online because it is fast, secure, and free.
Linux Shell
One way to interact with the operating system is through the Graphical User
Interface (GUI). However, this is not the only way. As a matter of fact, most Linux
servers online cannot be accessed through a GUI. An alternative is the Command Line
Interface (CLI), which allows the user to interact with the operating system through
commands. The Linux shell is a program that takes these commands from the
user and sends them to the operating system to process.
pwd - Short for Print Working Directory. As the name states, this command
prints the absolute path to the current directory.
ls - List files and directories. There are many flags that can be used with this
command. An example is ls -lah:
man - Displays the user manual for any Linux command (i.e., man ls displays
information about the ls command).
cp - Copy files and directories from the source to the destination. For example,
cp /home/ubuntu/directories/directory1/newFile.txt /home/ubuntu/directories copies
the newFile.txt file from its old directory to /home/ubuntu/directories .
mv - Move files and directories from the source to the destination. For example,
mv /home/ubuntu/directories/directory1/newFile.txt
1. Write to the console: echo 'hello world!' prints “hello world!” to the console.
2. Write to file: echo 'hello world!' > file.txt prints “hello world!” to a file
named file.txt
nano - Text editor. nano file.txt allows us to access and edit the file.
chmod - Change file permissions. For example, chmod 400 nanoFile.txt makes the
file read-only. As you can see, the permissions clearly changed, giving the user only
read permissions. Now attempting to modify the content of the file won't work.
sudo - Short for “SuperUser Do”. Performs a command with root permissions or
privileges. Similar to “Run as Administrator” on Windows. sudo chown root:root
nanoFile.txt . This command is now performed using root privileges. Since we
are logged in using the ubuntu user, we are no longer able to see the contents of
the file without using sudo .
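As a quick recap, the commands above can be combined in a short terminal session. The paths and file names below are illustrative, not taken from the slides:

```shell
# Create a working directory and enter it (illustrative paths)
mkdir -p /tmp/demo/directory1
cd /tmp/demo/directory1

# Print the absolute path to the current directory
pwd

# Write to the console, then to a file
echo 'hello world!'
echo 'hello world!' > newFile.txt

# Copy the file one level up, then rename (move) the original
cp newFile.txt /tmp/demo/copyFile.txt
mv newFile.txt renamedFile.txt

# List files and directories, with details
ls -lah

# Make the file read-only for the owner
chmod 400 renamedFile.txt
```

Running `man` on any of these commands (e.g. `man chmod`) shows its full set of flags.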
The Internet.
The World Wide Web.
Client-Server Architecture.
Domain Resolution.
Load Balancing.
Global Network.
Allows connection between devices.
Devices connect to the internet using the TCP/IP protocol.
The World Wide Web
Platform agnostic
Performance.
Scalability.
Availability.
Security.
Load Balancing - Continued
Algorithms: Round Robin, Least Connections, Least Response Time, Least Bandwidth.
Health Checks: Protocol, Port, Path, HealthCheck Interval, Healthy Threshold Count
Knowledge
Application
Demo – Deploy and Serve a Static Website
Introduction
Solution
Compute and Networking Resources
SSH Keypair
Security Group
EC2 instance
Apache2 Installation and configuration
Application Deployment
Webserver Configuration
Introduction
In this demo, we are going to deploy a simple HTML website on an AWS EC2
Ubuntu machine, following the steps outlined above. The application to deploy is the following HTML document:
<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>
<p>I have no idea what I'm doing.</p>
</body>
</html>
Solution
Compute and Networking Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.
2. Create a Keypair.
Security Group
1. Navigate to the Security group option from the left menu.
Name: webserver
Network Settings:
2. Install Apache2: sudo apt-get update && sudo apt-get install apache2 -y
3. Verify that the deployment worked by performing a request to the machine using
its public IP. The Apache2 default page must be loaded on the browser:
Application Deployment
Perform the following steps to deploy the application:
# Create a directory
sudo mkdir /var/www/myfirstapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myfirstapp
# Create the index.html file and paste the HTML code into it
sudo nano /var/www/myfirstapp/index.html
# Create the log directory
sudo mkdir /var/log/myfirstapp
# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/myfirstapp/
Webserver Configuration
1. Create the virtual host file: sudo nano /etc/apache2/sites-available/myfirstapp.conf
<VirtualHost *:80>
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log
CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>
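The slides jump from creating the virtual host file to testing it. On Ubuntu, a new virtual host typically has to be enabled and Apache reloaded before it takes effect; a minimal sketch, assuming the default Apache2 layout:

```shell
# Enable the new virtual host (reads from sites-available, links into sites-enabled)
sudo a2ensite myfirstapp.conf
# Disable the default site so it no longer answers on port 80 (assumed default name)
sudo a2dissite 000-default.conf
# Reload Apache2 so the new configuration takes effect
sudo systemctl reload apache2
```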
Perform a request on the server. The response will change this time, loading the
custom page that was configured.
Introduction
Solution
Application Deployment
Webserver Configuration
Introduction
In the previous demo, we installed Apache and configured it to listen and serve a
static website on port 80. In this demo, on the same machine, we are going to deploy
a second application, served on port 81. The application to deploy is the following HTML document:
<!DOCTYPE html>
<html>
<head>
<title>My Second Application</title>
</head>
<body>
<p>Neither do I.</p>
</body>
</html>
Solution
Application Deployment
Perform the following steps to deploy the application:
Webserver Configuration
a. Create the virtual host file: sudo nano /etc/apache2/sites-available/mysecondapp.conf
<VirtualHost *:81>
DocumentRoot /var/www/mysecondapp
ErrorLog /var/log/mysecondapp/error.log
CustomLog /var/log/mysecondapp/requests.log combined
</VirtualHost>
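Since Apache only binds to the ports declared in its configuration, serving on port 81 also typically requires a Listen directive. A sketch of the usual steps, assuming the default ports.conf location:

```shell
# Add port 81 to the list of ports Apache listens on (assumed default path)
echo 'Listen 81' | sudo tee -a /etc/apache2/ports.conf
# Enable the second site and reload Apache2
sudo a2ensite mysecondapp.conf
sudo systemctl reload apache2
```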
a. Modify the Security Group to include another inbound rule allowing requests
on port 81 from anywhere.
Introduction
Solution
Application1 Virtualhost
Application2 Virtualhost
Hosts File Modification
Application Testing
Introduction
In the previous demos, we deployed two HTML applications on one EC2
machine. One application is served on port 80, while the other one is served on
port 81.
Instead of serving the applications using the IP address of the machine, and
different ports, we will use domain names, and both applications will be served
on port 80.
Solution
Application1 Virtualhost
Modify the virtualhost of Application1: sudo nano /etc/apache2/sites-
available/myfirstapp.conf
<VirtualHost *:80>
ServerName myfirstapp.com
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log
CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>
Application2 Virtualhost
Modify the virtualhost of Application2: sudo nano /etc/apache2/sites-
available/mysecondapp.conf
<VirtualHost *:80>
ServerName mysecondapp.com
DocumentRoot /var/www/mysecondapp
ErrorLog /var/log/mysecondapp/error.log
CustomLog /var/log/mysecondapp/requests.log combined
</VirtualHost>
Since the domain names are not registered, modify the hosts file on the client machine ( sudo nano /etc/hosts ) to map both domain names to the machine's public IP.
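The hosts file entry can look like the following; the IP address below is illustrative, so replace it with your machine's public IP:

```shell
# Entry appended to /etc/hosts — maps both fake domains to the server's IP
3.250.206.251 myfirstapp.com mysecondapp.com
```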
Application Testing
Query the first application myfirstapp.com :
Introduction
Solution
AWS Networking and Compute Resources
SSH Keypair
Security Group
EC2 Instances
App1 Server configuration
Apache2 Installation
Application Deployment
Virtualhost Configuration
App2 Server configuration
Server testing
Load Balancer Configuration
Apache2 Installation
Apache2 Configuration
Application Testing
Introduction
To better understand load balancing, we are going to deploy each of the two
applications below on its own VM, then create and configure a load balancer on the
third VM and instruct it to distribute the requests across the two machines.
<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>
<p>I have no idea what I'm doing.</p>
</body>
</html>
<!DOCTYPE html>
<html>
<head>
<title>My Second Application</title>
</head>
<body>
<p>Neither do I.</p>
</body>
</html>
Solution
AWS Networking and Compute Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.
2. Create a Keypair.
Security Group
1. Navigate to the Security group option from the left menu.
2. Specify a name.
EC2 Instances
Create three Ubuntu 20.04 VMs:
Navigate to AWS EC2 —> instances —> Launch instances, with the following
parameters:
Network Settings:
Install Apache2: sudo apt-get update && sudo apt-get install apache2 -y
Application Deployment
a. Perform the following steps to deploy the first application:
# Create a directory
sudo mkdir /var/www/myapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myapp
# Create the index.html file and paste the code of the first app in it
sudo nano /var/www/myapp/index.html
# Create the log directory
sudo mkdir /var/log/myapp
# Change the ownership of the directory
sudo chown -R www-data:www-data /var/log/myapp/
Virtualhost Configuration
Create the virtual host file: sudo nano /etc/apache2/sites-available/myapp.conf
Install Apache2.
Test a request
Server testing
Perform a request on the server to ensure that the configuration is done properly.
Install Apache2 on the load balancer machine: sudo apt-get update && sudo apt-get install apache2 -y
Apache2 Configuration
Install the required modules
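The modules required for Apache load balancing are typically enabled as follows; this is a sketch assuming the standard mod_proxy family and the by-requests balancing method:

```shell
# Enable the proxy and load-balancing modules (assumed standard module names)
sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests
# Restart Apache2 so the modules are loaded
sudo systemctl restart apache2
```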
<VirtualHost *:80>
<Proxy balancer://myservers>
BalancerMember http://<APP1 IP>:80
BalancerMember http://<APP2 IP>:80
</Proxy>
ProxyPass / balancer://myservers/
ProxyPassReverse / balancer://myservers/
</VirtualHost>
Application Testing
HTTP Protocol.
Stateless.
• Protocol: http
• Domain Name: mywebapp.com
• Application port: 80
• Path: /some/api
• Query String Parameters: key1=val1,key2=val2
HTTP Headers
Status Codes:
2XX: Success
4XX: Logical error
5XX: Server Error
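The URL components and status codes above can be observed with curl. The endpoint is the hypothetical one from the slide; -i prints the status line and headers along with the body:

```shell
# Request a resource, spelling out protocol, domain, port, path, and query string
curl -i "http://mywebapp.com:80/some/api?key1=val1&key2=val2"

# Print only the returned status code (e.g. a 2XX, 4XX, or 5XX value)
curl -s -o /dev/null -w "%{http_code}\n" "http://mywebapp.com:80/some/api"
```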
Web Application Layers
Backend Application.
Internal LB
MySQL Amazon S3 CDN
Database.
Event-based alarms
Catalog Module
Catalog Service A
Customer Module
Customer Service
Order Module
Order Service
Payment Module
Payment Service
Ease of Development.
Ease of Deployment.
Ease of testing.
Monolithic Applications - Disadvantages
Fault Tolerance.
High Scalability.
Ease of maintenance.
Ease of Deployment.
Technological Freedom.
Complex Infrastructure.
NK-backend Service.
Database Management System: Software programs that enable the creation and management of databases.
Data Model: allows the developers to logically structure this data in different tables and collections.
StudentsToCourses (Table)

ID  studentID  courseID
1   629        25
2   629        111
Document Databases
Introduction
Solution
AWS Networking and Compute Resources
SSH Keypair
Security Group
EC2 instance
ArangoDB Installation
Backend Service Deployment
Git Installation
NodeJS and NPM Installation
Backend Service
API testing
Health API
Postman Installation
Create User API
Introduction
To better understand how an application is deployed and serves traffic over the internet, in this demo, we
will deploy the nk-backend service and its database on an AWS EC2 machine, and explore some of its
APIs by communicating with them using the Postman API client. The following steps will be completed:
Solution
AWS Networking and Compute Resources
SSH Keypair
1. Navigate to the EC2 service, Key Pairs option from the left menu.
2. Create a Keypair.
Security Group
Create a Security Group with the following inbound rules:
EC2 instance
Navigate to AWS EC2 —> instances —> Launch instances, with the following parameters:
Name: webserver
Network Settings:
ArangoDB Installation
# install arangodb
echo 'deb https://download.arangodb.com/arangodb310/DEBIAN/ /' | sudo tee /etc/apt/sources.list.d/arangodb.list
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install arangodb3=3.10.0-1 -y
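If apt-get update fails with a signature error, the ArangoDB repository signing key likely needs to be imported first. A sketch based on ArangoDB's published Debian/Ubuntu instructions for the 3.10 line (verify the URL against the current docs):

```shell
# Download and trust the ArangoDB repository signing key (assumed URL for arangodb310)
curl -OL https://download.arangodb.com/arangodb310/DEBIAN/Release.key
sudo apt-key add - < Release.key
```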
The installer will prompt you to enter and confirm the root user's password: rootPassword
After the installation is complete, check that the database is running: sudo service arangodb3 status
Modify the ArangoDB configuration file ( sudo nano /etc/arangodb3/arangod.conf ) and replace the endpoint with tcp://0.0.0.0:8529 so the database accepts remote connections.
Login to the database management console using the machine’s IP and the database’s port. In this
case: http://<MACHINE IP>:8529/
Backend Service
Clone the code from the repository: git clone https://github.com/devops-beyond-limits/nk-backend-service.git
Navigate into the root directory cd nk-backend-service and print the content ls -lah
The application logs clearly show that the application successfully connected to the database,
and created a database and a collection, both called persons .
Validate the creation of the database by logging in to the database administration console.
Postman Installation
Download and Install Postman.
Save the Postman collection in a json file. This file contains a template for all APIs.
Submit a request with the following parameters. Make sure to replace the variables with the
corresponding IP and port.
Physical Servers
Virtual Machines .
Containers
Serverless
Infrastructure Types
Physical Servers
Advantages:
- Ownership and Customization
- Performance
Disadvantages:
- Large CAPEX and OPEX
- Management Overhead
- Lack of Scalability
- Resource Mismanagement
- Improper Isolation
- Performance Degradation over Time
Virtual Machines
Advantages:
- Low CAPEX
- Flexibility
- Disaster Recovery
- Better Resource Management
- Proper Environment Isolation
Disadvantages:
- Performance Issues
- Security Concerns
- Increased Resource Waste
Containers
Advantages:
- Decreased Overhead
- Portability
- Rapid Delivery Cycles
Disadvantages:
- Data Persistence
- Cross-Platform Incompatibility
Serverless
Advantages:
- Cost
- Scalability
- Fast Delivery Cycles
Disadvantages:
- Security and Privacy
- Vendor Lock-in
- Complex Troubleshooting
Thank You
Introduction to
Containers and
Docker
Nicolas El Khoury
DevOps Consultant
Overview
Container Image
Container
Container Registry
Dockerfile
Volumes
Bind Mounts
Demo
Demo
Introduction
AWS Infrastructure
Security Group
Key pair
AWS EC2 Machine
Application Deployment on the EC2 machine
Application Code
Apache2 Installation
Application Deployment
Virtual Host
Application Deployment using containers
IAM Role
AWS CLI
Docker Installation
Base Image
Customize the Base Image
Create a Custom Image
Push the Image to AWS ECR
Create a Docker image using Dockerfiles
Introduction
To better understand the difference between the concepts explained, we will attempt
to deploy a simple HTML application on an Ubuntu EC2 machine. Then we will
containerize and redeploy it. The following steps will be performed:
Inbound rules:
Rule1:
Type: SSH
Source: Anywhere-IPv4
Rule2:
Type: HTTP
Source: Anywhere-IPv4
Key pair
Navigate to AWS EC2 —> Key pairs —> Create key pair, with the following
parameters:
Name: aws-demo
Name: aws-demo
Instance Type: t3.medium (t3.micro can be used for free tier, but may suffer
from performance issues)
Network Settings:
Telnet is one way to ensure the machine is accessible on ports 22 and 80:
# Make sure to replace the machine's IP with the one attributed to your machine
telnet 3.250.206.251 22
telnet 3.250.206.251 80
~/.keypairs/aws-demo/aws-demo.pem
<!DOCTYPE html>
<html>
<head>
<title>My First Application</title>
</head>
<body>
<p>I have no idea what I'm doing.</p>
</body>
</html>
Apache2 Installation
Update the local package index to reflect the latest upstream changes ( sudo apt-get update ), then install Apache2 ( sudo apt-get install apache2 -y ).
Verify that the deployment worked by hitting the public IP of the machine:
Application Deployment
# Create a directory
sudo mkdir /var/www/myfirstapp
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp
# Change the directory permissions
sudo chmod -R 755 /var/www/myfirstapp
# Create the index.html file and paste the code in it
sudo nano /var/www/myfirstapp/index.html
# Change the ownership to www-data
sudo chown -R www-data:www-data /var/www/myfirstapp/index.html
# Create the log directory
sudo mkdir /var/log/myfirstapp
Virtual Host
Create the virtual host file: sudo nano /etc/apache2/sites-available/myfirstapp.conf
<VirtualHost *:80>
DocumentRoot /var/www/myfirstapp
ErrorLog /var/log/myfirstapp/error.log
CustomLog /var/log/myfirstapp/requests.log combined
</VirtualHost>
Perform a request on the server. The response will now return the HTML
document created:
Attach this role to the EC2 Machine: Actions —> Security —> Modify IAM Role
AWS CLI
Docker Installation
To validate that Docker is installed and the changes are all applied, restart the SSH
session, and query the docker containers: docker ps -a . A response similar to the
one below indicates the success of the installation.
Base Image
Pull the Apache2 Docker Image: docker pull httpd:2.4-alpine .
Create a container from the image: docker run -d --name myfirstcontainer -p 80:80 httpd:2.4-alpine
Attempt to make a request to the container, using the machine’s public IP and
port 80: http://<MACHINE IP>:80
<!DOCTYPE html>
<html>
<head>
<title>My First Dockerized Website</title>
</head>
Access the container's shell: docker exec -it myfirstcontainer sh .
Clearly, the image shows that the changes have been reflected.
docker rm -f myfirstcontainer
docker ps -a
docker run -d --name myfirstcontainer -p 80:80 httpd:2.4-alpine
Create an image from the container: docker commit myfirstcontainer .
Name and tag the image: docker tag <image ID> custom-httpd:v1 .
docker rm -f myfirstcontainer
docker run -d --name mysecondcontainer -p 80:80 custom-httpd:v1
Hitting the machine on port 80 should return the new HTML page now no matter
how many times the container is destroyed and created.
aws ecr get-login-password --region <REGION ID> | docker login --username AWS \
--password-stdin <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com
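Between logging in and running the container, the image still needs to be tagged with the repository URI and pushed. A sketch, assuming the custom-httpd repository already exists in ECR (keep the same placeholders as above):

```shell
# Tag the local image with the ECR repository URI, then push it
docker tag custom-httpd:v1 <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v1
docker push <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v1
```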
create a third container, but this time, reference the image located in the ECR:
docker run -d --name mythirdcontainer -p 80:80 <ACCOUNT ID>.dkr.ecr.<REGION
ID>.amazonaws.com/custom-httpd:v1
Place the application code inside the directory in a file called index.html
<!DOCTYPE html>
<html>
<head>
<title>My Final Dockerized Website</title>
</head>
<body>
<p>I am Dockerized using a Dockerfile.</p>
</body>
</html>
Create a Dockerfile next to the index.html file, with the following content:
FROM httpd:2.4-alpine
COPY index.html /usr/local/apache2/htdocs/
Build the image, referencing the ECR repository: docker build -t <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v-Dockerfile .
Push the image: docker push <ACCOUNT ID>.dkr.ecr.<REGION ID>.amazonaws.com/custom-httpd:v-Dockerfile
Introduction
Solution
ArangoDB Deployment
No Data Persistence
Data Persistence
NK-backend Service Deployment
Introduction
Now that we learned the basics of containers and Docker, we will take it to the next
level in this demo. We will containerize and deploy the NK-backend application that
we previously deployed on an Ubuntu VM. To do so, we will deploy the containerized
version of the Arango database, as well as the NK Backend Service. The following
steps will be completed:
Solution
ArangoDB Deployment
No Data Persistence
Create an ArangoDB Docker container:
Logging back into the management console clearly shows that all the databases,
collections, and data entered are now missing.
Create an ArangoDB container, this time attaching a named volume for data persistence (the container and volume names here are illustrative): docker run -d --name arangodb -p 8529:8529 -v arangodb-volume:/var/lib/arangodb3 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=rootPassword arangodb/arangodb:3.6.3
Log back into the management console. Unlike with the previous container, the
data created still exists.
Create the backend service container: docker run -d --name backend -p 80:1337 backend-service:v-Dockerfile
Ensure that the application connected to the database through the logs: docker
logs backend
Welcome to this demo, entitled Add Domain Names to the Applications. In the previous demos,
we deployed two HTML applications on one EC2 machine.
One application is served on port 80, while the other one is served on port 81.
Instead of serving the applications using the IP address of the machine
and different ports, in this demo we will use domain names, and both applications will be
served on port 80. So the first application will be served using the domain name myfirstapp.com,
and the second application will be served using the domain name mysecondapp.com. I
am logged into my machine. So first of all, I will modify the virtual host of the first application.
Since it is listening on port 80, we will not modify that, but I will add the ServerName
directive, which is myfirstapp.com. So now this virtual host will listen on port 80 and will
serve requests coming with the server name myfirstapp.com.
Let me save this, and now I will modify the virtual host of my second app. So let's do this. Now, as
you can see, it is listening on port 81. I will change this to listen on port 80,
and I will add the ServerName directive, so that it listens using the server name mysecondapp.com.
And now I will restart Apache. Now, the last thing to do, since we are using fake domain
names, is to instruct my machine on how to translate the domain names into the
corresponding IP address. To do so, I have to modify the hosts file on my machine. I will open
a new tab and modify the hosts file.
So I need to add the IP address of the machine and the domain names.
This line instructs my machine that whenever I put either myfirstapp.com
or mysecondapp.com in my browser, it should translate it to this IP address.
Let me go to the browser and hit myfirstapp.com. And as you can see, the first
application is loaded. Now let me try mysecondapp.com.
I have my first application loaded using myfirstapp.com, and I have my second application
loaded using mysecondapp.com.
Hi, and welcome to this lecture, entitled Web Application Concepts. So far, we have learned to
deploy and serve websites using Apache web servers. Moreover, we leveraged the power of load
balancing in order to scale the website when needed. With great innovation comes great
complexity; with the internet and technological advancement, websites are the simplest forms
of applications created and deployed.
Web applications, on the other hand, are more complicated to design, develop, deploy and
maintain. This lecture provides a deep dive into web applications with the aim to understand
their nature, importance, characteristics and challenges.
In this video, I will discuss the differences between websites and web applications, the HTTP
protocol, and the web application layers, components, and architecture.
Websites versus Web applications. By definition, websites are a set of interconnected
documents, images, videos, or any other piece of information developed using HTML, CSS and
JavaScript and deployed and served using one of the ways we introduced previously.
User interaction with websites is limited to fetching the website's information only.
Moreover, websites are usually stateless, and thus requests from different users yield the
same results at all times.
Examples of websites include, but are not limited to, company websites, blogs, news websites,
etc. Web applications, on the other hand, are much more complex than websites and offer
more functionalities to the user. Google, Facebook, online gaming, and e-commerce platforms are all
examples of web applications. Such applications allow the user to interact with them in
different ways, such as creating accounts, playing games, buying and selling products, etc.
Evidently, in order to provide such complex functionalities, the architecture of a web
application can prove to be much more complex than that of a website.
As already discussed in its simplest form, most web applications follow the client server
architecture, in which a client application is loaded from the server to the browser, allowing
communication with the web application on the server through the internet using the HTTP
protocol. Communication between application components happens through the application
programming interface or API. Now let us discuss the HTTP protocol.
By definition, the Hypertext Transfer Protocol or HTTP is a protocol designed to load web pages
deployed and exposed on the Internet.
The protocol is designed to standardize the exchange of information between connected
devices. To better understand the protocol.
Consider this diagram. A typical flow begins when the client machine sends an HTTP request to
the server. The server takes the appropriate action and responds with an HTTP
response, which will be processed by the client.
Communication over HTTP is done using HTTP messages.
It can either be an HTTP request from the client to the server or an HTTP response from the
server to the client. Usually HTTP requests and responses are composed of the same structure
and components with some minor
differences. The first component to discuss is the HTTP method, which is used to indicate the
desired action. Some of the methods include GET, used to fetch data from the server;
POST, used to create new data on the server; PUT, used to modify existing data on the server; and
DELETE, used to delete data from the server. The HTTP version is used to indicate the version of
the HTTP protocol used. The HTTP uniform resource locator, or URL, constitutes the complete
address of a resource on the web. It is unique per resource.
A URL can be composed of multiple fields. Dissecting the URL in this picture results in the
following: the protocol is http, the domain name is mywebapp.com, the application port
is 80, the path is /some/api, and the rest are the query string
parameters. HTTP headers include information about the request or response, stored in key-value
pairs. The HTTP headers provide more context and information to the HTTP
request. For example, headers specify the accepted languages, preferred media format, etc. The
HTTP body contains the information sent by the client to the server or vice versa. Finally, the
HTTP status code dictates the status of the request. Some of the most widely used status code
blocks are 2XX, to indicate a successful completion of the request; 4XX, to indicate a logical
error while serving the request (for example, 404 Not Found); and 5XX, to indicate a server
error while serving the request (for example, 504 Unable to reach database). An application is
divided into three layers. More layers can be added to the application design, but for simplicity
purposes, this lecture explains only three. The first one is the presentation layer, also known as
the client side. The client applications are designed for the user to interact with and manipulate the
application. Front-end applications are developed using many technologies, for example
AngularJS, ReactJS, Vue.js, etc. The application, or business logic, layer is part of the application's
server side. It accepts and processes the user requests and interacts with the database for data
modification. Such applications can be developed using Node.js, Python, PHP, Java, etc. And
finally, we have the database layer. This is where all the data resides and is persisted. Usually, a
website is composed of simple code developed entirely using HTML, CSS, and JavaScript alone.
On the other hand, web applications are more complex and are made of different components.
In its simplest form, a web application is composed of: a front-end application, which represents
the client side and allows the user to interact with the application; a backend
application, which represents the server side and serves the user requests; and a database
to store and load the application and user data. The components above are essential to create
web applications. However, the latter may require additional components to serve more
complex functionalities in a more optimized way: for example, an in-memory database for
caching, a message bus for asynchronous communication, a content delivery network for serving
and caching static content, a workflow management platform for organizing processes, and
many other components. Clearly, as the application's use case grows in size and complexity, so
will the complexity of designing it. Therefore, a proper way to architect and organize the
application is needed. Here is a diagram representing the architecture of an e-commerce web
application that I designed and deployed on AWS a few years ago. You can clearly notice the
amount of interconnected components required to make it work.
Web application architecture mainly depicts the way the different components are built and
interact with each other. As a matter of fact, as the application grows in size, a well-tailored
architecture is essential to ensure the proper functioning of the application. There exist multiple
architectures for designing web applications; nonetheless, two of the most prominent ones
are monolithic and microservices. To better understand the difference between microservices
and monolithic applications, consider the example of building an e-commerce platform.
Typically, such a platform contains several functionalities, namely catalog to serve and display
available items. Customer to handle customer related functionalities.
Order to manage orders happening on the platform and payment to allow online payment
functionalities.
From a monolith perspective, all the aforementioned functionalities are designed using one
technology for example Java, PHP or Node.js.
Having one large code base and deployed entirely on one server.
Each functionality is designed as a separate module which interacts with one another through
requiring each other.
On the other hand, if designed using the microservices approach, each functionality turns into a
separate and independent service, most probably with its own database, that is developed,
deployed, and managed on its own. Evidently, both approaches possess numerous advantages
as well as disadvantages. Monolithic applications indeed provide several attractive advantages,
especially the ease of management, since they are very easy to develop, deploy, and test.
Unfortunately, the aforementioned advantages begin disappearing as the
application grows in size.
As a matter of fact, monolithic applications come with many great disadvantages,
especially at scale. One of them is slower development lifecycle. As the size of the application
grows, so does the amount of time and complexity required to build, test and deploy the
application at every change. And with scale, changes are more frequent. Evidently the bigger
the application, the slower it will become to continuously develop and publish the application.
Codependency. No matter how organized one can be, aggregating the code in
one place makes the communication between modules inefficient and inconsistent in
the end. Performance issues. As the functionalities grow in size and complexity, maintaining
proper performance becomes an issue. In fact, the different services and capabilities of the
application will require further capabilities and resources from the hosting server. Moreover,
different functionalities may require different specialized infrastructure, which may prove
difficult to achieve given the centralized nature of the monolith, since everything is deployed
on one server. Scalability issues. Scaling a monolith becomes inefficient at scale. Typically,
different parts of the application may require different scalability rules. For example,
considering the e-commerce application above, most of the traffic is directed towards the
catalog module, whereas all the remaining ones receive minimal traffic. In the case of a
monolith, all of the application will have to be scaled out using large servers, although only
one small part of the application requires it. Infrastructure costs.
Provisioning resources to operate the monolith may prove to generate unwanted costs. Code
ownership and team division problems. Due to the interconnected nature of the code in the
monolith, onboarding members on the team and clearly dividing responsibilities between them
becomes problematic. Technology lock-in. Being centralized in one code repository, all of the
application with its different functionalities is developed using a specific set of technologies.
However, this may prove to be costly as each technology has advantages and disadvantages.
Being locked to one stack not only deprives the application of the advantages of the available
technologies but may affect its performance as well. Technical debt. Technology is advancing
rapidly, and platform changes with enhancements are rolled out quickly. As the monolith
grows, upgrading it becomes costly, which may affect the performance and the continuity of the
application. And finally, the single point of failure. A failure in one method, service, or endpoint
may lead to the failure of the whole application. Clearly, centralized architectures, especially
monolithic applications, come with great advantages, especially for simple applications at small
scale. They are intuitive to design, develop, deploy, and maintain. Unfortunately, these
advantages disappear quickly as the application and the maintaining team grow. Microservices, on the
other hand, alleviate most of the monolith disadvantages due to their extremely distributed
nature. The first advantage is fault tolerance due to their distributed and independent nature.
Failure in one microservice should not bring all of the application down. Therefore, single points
of failure and downtimes are minimized. High scalability. Microservices allow different
scaling mechanisms and rules for different microservices, depending on the nature of the traffic
and resource consumption of each of them. This allows for more optimized resource
utilization and scalability rules. Ease of maintenance. Even if the application grows in size and
complexity, teams can be organized to work independently on different microservices, with
proper communication between teams. The main thing is to agree upon all the exposed
endpoints and communication between services. For instance, the catalog service and the order
service can be developed and maintained separately by completely different teams. The teams
must only collaborate on exposing and using the endpoints of each service, assuming that one
service will have to communicate with the other. Ease of deployment. It is easier and faster to
build and package a small microservice, especially with today's powerful automation tools, and
serve them on smaller commodity servers. Technological freedom. Each microservice can be
designed, developed and maintained on its own, using the most convenient set of tools and
technologies and regardless of the other microservices forming the application. This provides a
great advantage by allowing these teams to leverage the existing choices and technologies and
use those that fit the purpose of each functionality. For example, the catalog service may be
developed using Node.js while the customer service is developed using Golang. Fast
development lifecycles. Developers working on specific, independent, and small-scale services
allow the services and the overall application to be maintained and updated with fast
development and release cycles. Despite their many advantages, microservices come with
multiple disadvantages, which must be considered and addressed by teams willing to develop
software using this architecture. The first one is complex infrastructure. The distributed nature
of microservices dictates a more complex infrastructure to be created. As a matter of fact, as
the number of services grows, each requiring its own setup, configurations, and
scalability rules, a proper infrastructure capable of serving these different needs is required.
The need for DevOps. Microservices are decoupled from each other; however, microservices and
their underlying infrastructures are highly coupled. Deploying and managing microservices is
quite different than that of a monolithic application. Developers and operations teams must
collaborate efficiently to ensure that the application and infrastructure being developed work
well together. This requires both teams to understand each other, learn additional skills, and
continuously coordinate. Increased network calls. Microservices are deployed on different
servers, in different subnets, and in different physical locations. Moreover, a simple request
coming into a microservice application generally must traverse multiple services and
components, each reached through a network call. This greatly increases the need for a robust
infrastructure. And finally, complex end-to-end testing. While unit testing may be a great
advantage with microservices, end-to-end or integration testing may not be as pleasant. As a
matter of fact, as the number of services and components grows, automation tools and robust
testing processes are required to continuously create testbeds and successful test scenarios at
every release, which may prove to be complex and costly at the same time.
Hi, and welcome to this demo entitled Enable Load Balancing. To better understand load
balancing, we are going to create three Ubuntu virtual machines on AWS, and we will create two
simple HTML pages. We will deploy the first one on one VM.
We will deploy the second one on the second VM, and the third VM will be configured as a load
balancer using Apache, and we will instruct it to distribute the load on the first two virtual
machines. Now, to save time, I already did some of the work.
So on the first server, which is app one, I installed Apache and I deployed the first application.
I did the same thing on app two and I installed Apache on the third virtual machine. The
configuration that I have done is working correctly. I will hit the first server and the first
application is returned. I will now hit the IP of the second machine.
And the second application is returned. Now I will hit the load balancer machine, which has only
Apache installed and it will return the default
Apache page. So far, everything I have done is working correctly.
Now what I will do in this demo is that I will configure Apache on the load balancer machine to
actually work as a load balancer. So the first step is that I have to enable a few modules for
Apache to work as a load balancer. So I will do this on the third machine.
And now I will configure a virtual host, which is going to be a little bit different than the ones
we used to do before. So let me paste it and I will explain what it means.
So I am instructing Apache on this third machine to listen on port 80, and whenever it
receives a request, it will route it either to the first member, which is the first
virtual machine, or to the second virtual machine. Now all I have to change is to put the IPs of each
machine instead of these placeholders. So I will get the IP of the first machine and I will replace it
here. And I will do the same for the IP of the second VM.
And I will save the configuration. Now I will disable the default virtual host and I will enable
the load balancer, virtual host.
I will test my syntax. And finally, I will restart Apache.
So now if I hit the load balancer machine, it is supposed to route me one time to the first virtual
machine and load the first application and the other time to the second virtual machine and
load the second application. Let's test it so I will get the IP of the load balancer and I will place it
in the browser. So now it loaded the first application.
Now if I hit it again, it loaded the second application and if I repeat the steps, it will keep on
distributing the load across the two virtual machines. And this is load balancing.
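The virtual host pasted in this demo can be sketched as follows. This is a minimal sketch assuming Apache 2.4 on Ubuntu with mod_proxy_balancer; the balancer name and the APP1_IP/APP2_IP placeholders are illustrative and stand in for the real IPs of the two application VMs:

```apache
# Modules enabled beforehand with:
#   sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests

<VirtualHost *:80>
    <Proxy "balancer://appcluster">
        # Placeholders for the two application servers
        BalancerMember "http://APP1_IP:80"
        BalancerMember "http://APP2_IP:80"
    </Proxy>
    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"
</VirtualHost>
```

Mirroring the steps in the demo, the syntax can be checked with `sudo apachectl configtest` and the service restarted with `sudo systemctl restart apache2`.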
Hi and welcome to this lecture entitled Introduction to Server Side Applications. After learning
different concepts about web applications such as the differences between websites and web
applications and web application layers, components and architecture, we are going to focus in
this video on the application server side.
In this video, we are going to discuss different database concepts, and then we will discuss the
backend service, which we will deploy on AWS in the next video.
A database is a software program that provides capabilities to efficiently store and query data in
a system.
Almost every system in the world includes some sort of database that manages the data. For
example, smart devices, mobile phones, etc..
Most of today's web applications rely on constantly changing data.
For instance, Facebook and Twitter are all examples of web applications with massive amounts
of data flowing in and out of the system.
Therefore, there must be a reliable way to store and manage all this data.
A database management system, or DBMS, is a software program that enables the creation and
management of databases and data models, thus providing efficient tools for managing the
data. MySQL, MongoDB, and ArangoDB are all examples of DBMSs. Database management
systems are designed with multiple tools and capabilities, such as a database engine, a data
query language, monitoring tools, and user management tools and applications. Data may grow
to become extremely complex. Moreover, the application needs to constantly store, generate,
modify and remove data from the database in a dynamic and reliable way.
Data models are design processes to organize the way the data is managed.
Consider the e-commerce application explained in the previous lecture.
Such an application must manage data related to the catalog (for example, product descriptions
and product prices), data related to the customer (for example, name, age, and address), data
related to orders generated on the system (such as order details and products purchased), and
data related to payments (such as credit card information and transaction records). Clearly,
each service requires different types of data to be stored. A data
model allows the developers to logically structure this data into different tables and collections
in a way that reliably stores and fetches the required data. Different database types currently
exist, each supporting a distinct data model. Of all existing database types,
this lecture explains relational databases, document databases, and graph databases.
Relational databases are based on the relational data model which organizes data into tables.
Each column in a table defines an attribute with characteristics.
Each row in a table corresponds to one record and is identified by a unique ID.
For instance, to store the names and ages of all students in a university, a relational database
can be used.
A table called students can be created with two attributes: name of type string and age
of type integer. Moreover, several tables can be created referencing one another. For instance,
another table called courses, containing a list of available courses in the university, has two
properties: name of type string and is_available of type boolean.
Finally, a third table called student_to_course_registrations can be created to keep track
of student registrations to the courses,
with two columns: one corresponding to the student ID, and the other corresponding to the
course ID. Structured Query Language (SQL) is the language used to query the data in
relational databases. As a matter of fact, SQL provides efficient capabilities to manage data in
much more complex and large
data models. Examples of relational DBMSs include MySQL, Oracle Database, and
PostgreSQL.
Relational databases are best used when the data on the system is highly structured and doesn't
change frequently over time.
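The students, courses, and registrations tables described above can be sketched in SQL. This is a minimal sketch; the exact column names and types are assumptions for illustration:

```sql
-- Illustrative schema for the university example (names are assumptions)
CREATE TABLE students (
    id   INTEGER PRIMARY KEY,
    name VARCHAR(100),
    age  INTEGER
);

CREATE TABLE courses (
    id           INTEGER PRIMARY KEY,
    name         VARCHAR(100),
    is_available BOOLEAN
);

-- Join table referencing the two tables above by their unique IDs
CREATE TABLE student_to_course_registrations (
    student_id INTEGER REFERENCES students(id),
    course_id  INTEGER REFERENCES courses(id)
);
```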
A document database is a type of non-relational database designed to store and
query data in JSON-formatted records.
Such databases provide more flexibility, especially for the developers, due to their loose data
model that can change dynamically over time and between records without much administrative
overhead. Modeling the university example presented earlier results in three JSON documents, as
can be seen in the picture. Query languages are provided by the vendor to manage the data.
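As an illustration of such records, a single student document might look like the following JSON; the field names and values here are assumptions, not taken from the actual picture:

```json
{
  "_id": "students/1",
  "name": "Jane Doe",
  "age": 21
}
```

Note how an extra field (say, an address) could later be added to some student documents but not others, which is exactly the loose model described above.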
Document databases are best used for use cases similar to catalogs, user profiles and new
projects that need to be built with agility in mind. Examples of document databases include
MongoDB, Amazon DocumentDB, and Cosmos DB. Graph databases are NoSQL
databases that store data without schemas while connecting the records using
relationships. Data records are stored in what are known as nodes. Nodes can have relationships
with one another using edges. Graph databases
combine the flexibility of NoSQL databases with the power of relational databases.
The university example is depicted in this diagram.
The students node stores JSON records, each of which represents information about a student.
The courses node stores JSON records, each of which represents information about a course
a student took. The course registration is of type edge and stores JSON records related to the
registration of each student to each course.
Edges must have valid database record IDs: one for the student and another one for the course.
Graph databases offer a flexible data model as well as a robust way to manage relationships
between data points. Therefore, although graph databases can be useful in most use cases, they
are most widely used in fraud management systems, identity and access management and
recommendation engines. Examples of graph databases include ArangoDB and Neo4j. This
diagram summarizes how a web application is deployed on AWS and how users interact with it.
Consider an application consisting of a front end application, backend application, and a
relational database. One way to deploy it is as follows.
The database is deployed and served using a managed database service, Amazon RDS. For
security purposes, it is recommended not to expose the database directly to the Internet,
meaning deploy it in private subnets and properly secure the access.
The backend application is deployed on an EC2 server with a public IP.
The front end application is deployed and served on AWS S3.
All the application components reside within an AWS Virtual Private Cloud (VPC).
A typical HTTP request-response cycle is as follows: the client sends an HTTP request to the front
end application.
The front end application code is returned and is loaded on the client's browser.
The client sends an API call through the front end application to the back end application. The
backend application validates and processes the request.
The backend application communicates with the database for managing the data related to the
request. And finally, the backend application sends an HTTP response containing the
information requested by the client. The backend service is an open source project that serves
as a basic backend service. The service is developed using Node.js and Sails.js. The service uses
ArangoDB to store and manage its data.
It is a RESTful service and exposes CRUD APIs to manage person records.
It contains the following APIs: GET health, which is the health endpoint.
It is used to ensure that the service is up and running.
POST person: it creates a person record in the database.
The function performs a transaction against the database and checks if the person already exists
using the email. If the person already exists, a logical error is returned.
Else, the person record is created in the database and a success response is returned to the
client. GET persons: it fetches all the existing persons in the database.
And finally, DELETE person by ID: it deletes a person record from the database using the ID. In
the next video, we will deploy the backend service on AWS.
Hi, and welcome to this demo entitled Backend Service Deployment. To better
understand how an application is deployed and serves traffic over the internet, in this tutorial
we will deploy the backend service and its database on an EC2 machine and explore some
of its APIs by communicating with them using the Postman API client. The following steps will
be completed. First, we will create an AWS virtual machine. Then we will configure it with the
necessary prerequisites.
Afterwards, we will deploy the database and connect to it using a client.
And finally, we will deploy the backend application and perform some API requests to validate
the deployment. The backend service and the database will both be deployed on one virtual
machine. Moreover, the backend service will be exposed on port 8080, while the database uses
port 8529 for communication. Therefore, to successfully perform the deployment, we will create
a security group with the following inbound rules: port 8080 to communicate with the backend
application, port 8529 to communicate with the database, and port 22 to SSH into the machine.
Now, to save time, I have already created the security group and the machine.
As you can see here, there is a security group with the desired inbound rules and the
Ubuntu 20.04 machine. So now I will SSH into the machine using its public IP
and the SSH key. So now I am inside the machine.
The first thing to install is the ArangoDB database. Now, the official ArangoDB documentation
explains all the steps required to deploy the database. So I will start by updating the package
repository, and then a bunch of other commands,
before updating the package repository again, and finally installing the ArangoDB database,
which should take a few seconds. During the installation, the installer will prompt us to enter and
confirm the root user's password. So I will put the password.
And I will leave those questions as they are. And once the installation is complete, I will make
sure that the service is running by checking its status.
So ArangoDB is now installed and is running.
Now the first thing I will attempt to do is to actually connect to it using its client.
And I will not be able to communicate with the database.
But this is normal, because by default ArangoDB does not allow connections from outside of the VM.
To solve this issue, the ArangoDB configuration file must be edited.
So let us do this. I will edit the ArangoDB configuration file and I will edit the endpoint to allow
external connections. I will save the file and restart the arangodb service. And now I will attempt
to connect to it again. This time it should work. So now I will add the root username and
password and I will select the system database which will redirect me to the administration
console. The console provides me with different functionalities, such as the ability to create
databases, perform queries, monitor performance and logs, etc.. So reaching this point signals
the success of the deployment of the database. Now it's time to actually deploy the backend
service.
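Before moving on, for reference, the endpoint edit performed a moment ago is typically made in the ArangoDB configuration file; the path and value below are the common defaults on Ubuntu and should be confirmed against the official ArangoDB documentation:

```ini
# /etc/arangodb3/arangod.conf -- [server] section
# Change the default tcp://127.0.0.1:8529 so that connections
# from outside the VM are accepted:
endpoint = tcp://0.0.0.0:8529
```

followed by a restart with `sudo systemctl restart arangodb3`.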
As explained previously, the backend service is a Node.js RESTful service built using Sails.js.
Moreover, the application code is stored on a public GitHub repository which everyone has
access to. So in order to fully deploy and run the back end service, some prerequisites must be
installed. First of all, I will update the package repository again, and then I need to install Git.
Git is an open source tool for source code management.
Git allows teams of developers to collaborate together and track changes efficiently happening
on the code while storing the code safely in a remote repository.
In this demo, we will use Git to download the backend service from the GitHub repository to
the server. So first of all, I will install it.
And I will make sure that it is installed using the Git version command.
And this shows that it is already installed. So now the next step is to actually install the Node.js
platform. So to do so I will install Node.js and NPM using the apt get tool.
So now that Node.js is installed along with NPM, I will clone the repository on the server and I
will go inside the directory. So the application code is successfully installed on the machine. For
simplicity purposes, understanding the use of the different files is out of scope. Rather, in this
demo we focus on the system requirements for the application to run. Being a Node.js
application, the service requires some packages to be installed.
Therefore, to do so, we will install the packages using the NPM install command. Finally, the
application depends on some environment variables. Environment variables are very similar to
global variables in any programming language. Once a global variable is defined in a file, it can
be accessed from anywhere inside that file.
Similarly, environment variables are system wide variables in Linux.
Once an environment variable is exported, it can be used by any process on the machine. So let
us start by exporting the environment variables of the application. The first one is the
port. It tells the application to use port 8080.
Next, we have five environment variables related to the database credentials, such as the host,
port, username, password, and finally the database name, which is persons.
And finally, the application requires five more environment variables that we don't really have
to care about, but they are necessary for the application to run.
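The exports described in this step can be sketched as follows; the variable names and values are assumptions for illustration, since the exact names expected by the service are defined in its repository:

```shell
# Illustrative environment variables (names and values are assumptions)
export PORT=8080            # port the backend service listens on
export DB_HOST=localhost    # database host
export DB_PORT=8529         # default ArangoDB port
export DB_USERNAME=root
export DB_PASSWORD=secret   # placeholder password
export DB_NAME=persons      # database name used by the service

# Exported variables are visible to any child process:
echo "$DB_NAME"             # → persons
```

Exports only last for the current shell session; to persist them across logins they would typically be added to a file such as ~/.bashrc.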
Finally, I will run the application using the node app.js command.
The application logs clearly show that the application successfully connected to the database
and created a database called persons, as well as a collection with the same name. So to
validate the creation of the database, I will log back into the database administration console.
And as you can see, there is a persons database which I will select. And there is a person's
collection which is empty. This was created by the application. So now I will validate that the
deployment is successful and that the application is reachable from my machine using the
GET health API. So I will use the Postman client, which I have downloaded over here. I will
put the machine IP, port 8080, and the /health path. We have a successful response indicating
that the back end service is running. So now to test the available APIs, we need to download the
Postman collection, which you can find in the repository and import it to the Postman API client.
Now I have already done so, and as you can see, there are three APIs. I will only test one API,
which is create person. Now, before I do so, I have to change the variables to reflect the server's
IP address.
I will put the correct IP address and the correct port. And finally, I will hit the Create Person
API, which has a JSON body over here.
So this information should be added to the database if everything is successful. So let us send
the API request. I have a successful response indicating that this database record has been added to the
database. Now we can validate it through the application logs, which clearly indicate that the
application received the request, added it to the database, and returned a response to the
client. And finally, I will go back to the database and refresh the persons collection, and I can
see the record that was added over here.
4. Training Exercises 1
Answer: False
Explanation: There is no such thing as a Domain Registry System. The tool in
question is the Domain Name System (DNS)
b. Least Connections
Solution
Answer: b, d
4. Training Exercises 2
2. Select the valid components to build static websites.
a. C++
b. Databases
c. HTML
d. CSS
Solution
Answer: c, d
b. Web servers are used to store important information, such as the registered users
in an application.
c. The dig command in Linux fetches the IP address of a certain domain name.
Solution
Answer: a, c
b. The -z flag in the ls command prints the user that created the file.
c. The help command is used to display the user manual for any linux command.
4. Training Exercises 3
Solution
Answer: d
Linux Commands
1. Create an empty file called index.html inside the /opt/
directory
Solution
Answer: touch /opt/index.html
4. Training Exercises 4
4. Training Exercises 5
1. The command mv /mnt/file.txt /opt moves the file.txt from the /opt directory to the /mnt
directory.
a. True
b. False
2. Select all true statements.
a. A message bus is used to enable asynchronous communication between the application
components.
b. It is a good practice to serve a website over HTTPS instead of HTTP.
c. It is a good practice to deploy the database in public subnets and protect it with a
username and password
d. Redis is used for serving and caching static data such as videos and images
3. Select the valid HTTP methods
a. INSERT
b. REMOVE
c. RETRIEVE
d. PATCH
4. The default ports for the MySQL, MongoDB, and SSH protocols are (The order is important).
a. 1, 21, 80, 543
b. 3306, 27017, 22
c. 8080, 8443, 23
d. 22, 80, 45
5. Print the content of the index.html file to stdout
cat index.html
13. The Least Response Time Load Balancing algorithm routes traffic to the server with the fastest
response time to health checks.
a. True
b. False
14. Select the valid HTTP components
a. Column
b. Method
c. Body
d. Index
15. The command rm is used to create new files
a. True
b. False
16. Load Balancing is the process of distributing traffic across multiple applications on one server
a. True
b. False
17. Ubuntu, Centos, Fedora are all official Linux distributions
a. True
b. False
18. A Devops Engineer is supposed to deploy a HTML website on AWS. The solution will be deployed
on a single AWS EC2 Linux VM, with Apache2 installed and configured to serve the website. No
domain name is assigned to the website. Therefore, the website can be reached via
the server's IP, on port 82 (e.g. http://14.3.2.1:82/some/route). The virtual machine already
exists, and Apache is already installed with its default configuration. The engineer deployed the
code, configured the virtual host, and restarted Apache2. The requests are not successfully
reaching the website. What are the possible causes?
a. The security group (Inbound traffic) rules are not properly configured
b. Apache2 is not properly configured to listen to port 82
c. The virtual machine must be restarted
d. A database must be configured
19. MongoDB, Apache, and NGINX are ALL valid examples of Document databases.
a. True
b. False
20. Select all true statements
a. NGINX is an alternative to Apache
b. The PUT HTTP method is used to create new data on the application
c. The default ports for HTTP and HTTPS are 80 and 443 respectively
d. 504 code is usually used to signal a “Resource Not Found” error (Resource could be
page, file, video, etc…)
21. Change the ownership of the index.html to the ubuntu user and group
22. Horizontal scaling is the process of adding or removing servers. Vertical scaling is the process of
modifying the resources of the server.
a. True
b. False
23. HTML and CSS are used to build backend applications
a. True
b. False
24. Select all true statements
a. 403 code is usually used to signal a “Resource Not Found” error (resource could be page,
file, video, etc…)
b. 4XX codes usually indicate a server side error (e.g. unreachable database)
c. 2XX codes indicate that the request was successfully processed and returned to the
client
d. 5XX codes usually indicate a logical error (e.g. Client requesting a file that doesn’t exist)
25. The command cd stands for current directory, and is used to print the absolute path of the
current directory
a. True
b. False
26. The World Wide Web is the “Network of Networks”, and provides connectivity between
computer systems across the globe
a. True
b. False
27. MySQL and PostgreSQL are best used in cases where the data model is stable, and does not
change frequently
a. True
b. False
28. Facebook, Google, LinkedIn are all examples of web applications
a. True
b. False
29. Print the absolute path to the current directory
pwd
30. Select the valid Database Management Systems (DBMS)
a. MongoDB
b. Files
c. MySQL
d. Notebook
• Question 1
1 out of 1 points
Ubuntu Unity, Ubuntu Cinnamon, and Xubuntu are all official Ubuntu variants
Selected Answer:
True
Answers:
True
False
• Question 2
1 out of 1 points
MariaDB and PostgreSQL are best used in cases where the data model is stable, and does
not change frequently.
Selected Answer:
True
Answers:
True
False
• Question 3
1 out of 1 points
The command cd stands for change directory. It is used to change the current directory of
the terminal.
Selected Answer:
True
Answers:
True
False
• Question 4
1 out of 1 points
Load Balancing is the process of distributing traffic across multiple application replicas, each
on a different server.
Selected Answer:
True
Answers:
True
False
• Question 5
1 out of 1 points
The internet is the “Network of Networks”, and provides connectivity between computer
systems across the globe.
Selected Answer:
True
Answers:
True
False
• Question 6
3 out of 3 points
A DevOps engineer is attempting to write a completely useless script. Unfortunately, the engineer
left it unfinished. Help the engineer complete the commands of the script. (PS: Do not be like this engineer)
#!/bin/bash
Selected Answers: 4.
8529, 6379, 3306
Answers: 1.
8524, 6385, 3358
2.
22, 80, 45
3.
8925, 9756, 6033
4.
8529, 6379, 3306
• Question 9
1 out of 1 points
Facebook, Google, LinkedIn are all examples of static websites.
Selected Answer:
False
Answers: True
False
• Question 10
1 out of 1 points
Display the user manual for the Linux command used for moving files:
[A1] [A2]
Specified Answer for: A1 man
Specified Answer for: A2 mv
Correct Answers for: A1
Selected 2.
Answers: It is a good practice to serve a website over HTTPS instead of HTTP.
4.
It is a good practice to deploy the database in private subnets and disable
direct access from the internet.
Answers: 1.
A Content Delivery Network is used to enable asynchronous communication
between the application components.
2.
It is a good practice to serve a website over HTTPS instead of HTTP.
3.
Redis is used for serving and caching static data such as videos and images.
4.
It is a good practice to deploy the database in private subnets and disable
direct access from the internet.
• Question 13
1 out of 1 points
The Least Connection Load Balancing algorithm routes traffic to the server with the least
number of active connections at the time the client request is received.
Selected Answer:
True
Answers:
True
False
• Question 14
1 out of 1 points
Select all true statements.
Selected 1.
Answers: 404 code is usually used to signal a “Resource Not Found” error (Resource
could be page, file, video, etc).
Answers: 1.
404 code is usually used to signal a “Resource Not Found” error (Resource
could be page, file, video, etc).
2.
Apache2 is an alternative to MongoDB.
3.
The INSERT HTTP method is used to create new data on the application.
4.
The default ports for HTTP and HTTPS are 8080 and 4443 respectively.
• Question 15
1 out of 1 points
HTML, CSS, and Javascript can be used to build Frontend applications
Selected Answer:
True
Answers:
True
False
• Question 16
1 out of 1 points
Change the permissions of the file.txt file:
[A1] 753 file.txt
Specified Answer for: A1 chmod
Correct Answers for: A1
Selected Answers: 1.
PUT
3.
GET
Answers: 1.
PUT
2.
EXEC
3.
GET
4.
MODIFY
• Question 18
1 out of 1 points
Select the valid Database Management Systems (DBMS)
Selected Answers: 1.
MongoDB
Answers: 1.
MongoDB
2.
Notebook
3.
Files
4.
Hard Disk Drive
• Question 19
1 out of 1 points
The command cp /mnt/file.txt /opt copies the file.txt file from the /opt directory to
the /mnt directory
Selected Answer:
False
Answers: True
False
• Question 20
1 out of 1 points
Vertical scaling is the process of adding or removing servers. Horizontal scaling is the
process of modifying the resources of the server.
Selected Answer:
False
Answers: True
False
• Question 21
1 out of 1 points
Select all true statements
Selected 4.
Answers: 3XX codes indicate that further action needs to be taken by the user agent
in order to fulfill a request.
Answers: 1.
3XX codes indicate that the request was successfully processed and returned
to the client.
2.
3XX codes usually indicate a server side error (e.g., Unreachable database).
3.
3XX codes usually indicate a logical error (e.g., Client requesting a file that
doesn’t exist)
4.
3XX codes indicate that further action needs to be taken by the user agent
in order to fulfill a request.
• Question 22
1 out of 1 points
Select the valid HTTP components.
Selected Answers: 2.
Header
4.
Protocol
Answers: 1.
Shard
2.
Header
3.
Row
4.
Protocol
• Question 23
1 out of 1 points
Change the ownership of the index.html file to the ubuntu user and root group:
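The command this question asks for would be `chown ubuntu:root index.html`. The sketch below substitutes the current user and group so it runs on machines where no ubuntu user exists:

```shell
# chown OWNER:GROUP FILE changes a file's owner and group.
# For the quiz scenario: chown ubuntu:root index.html
# Portable demo with the current user and group instead:
cd "$(mktemp -d)"
touch index.html
chown "$(id -un):$(id -gn)" index.html
stat -c '%U:%G' index.html   # prints the current user and group
```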
Selected Answers: 3.
Document databases are best used in cases where data models constantly
change over time.
4.
MongoDB is a NoSQL database.
Answers: 1.
MySQL databases usually store data in JSON format.
2.
Databases are considered as part of the client side of the application.
3.
Document databases are best used in cases where data models constantly
change over time.
4.
MongoDB is a NoSQL database.
• Question 26
1 out of 1 points
MongoDB, Amazon DynamoDB, and ArangoDB are ALL valid examples of NoSQL
databases.
Selected Answer:
True
Answers:
True
False
• Question 27
1 out of 1 points
HTTP status code 403 is returned when the request does not have the correct permissions to
be processed by the server (e.g., Fetching someone else's data).
Selected Answer:
True
Answers:
True
False
• Question 28
1 out of 1 points
The Application (Business Logic) Layer can be developed using frameworks such as
NodeJS, Python, PHP.
Selected Answer:
True
Answers:
True
False
A Docker image is a running process of a Docker Container.
• True
• False
A Dockerfile is used to store and share Docker images between users.
• True
• False
docker pull alpine:latest is used to run a Docker container.
• True
• False
docker rmi alpine is used to remove the alpine image from the server.
• True
• False
A Docker registry repository can be either public or private.
• True
• False
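The statements above hinge on the distinction between an image (a read-only template) and a container (a running instance of an image). A typical lifecycle, assuming Docker is installed and can reach a registry:

```shell
# Pull an image: a template stored locally, not a running process
docker pull alpine:latest

# Run a container from the image; the container is the running process
docker run --name demo alpine:latest echo "hello from a container"

# docker rm removes the (stopped) container; docker rmi removes the image
docker rm demo
docker rmi alpine:latest
```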
HTTP status codes 5XX indicate a server error (e.g., Database unreachable).
• True
• False
The command cp /mnt/file.txt /opt moves the file.txt file from
the /mnt directory to the /opt directory.
• True
• False
A Message bus is used to enable asynchronous communication between the
application components.
• True
• False
Content Delivery Networks are used for serving and caching static data such as
videos and images.
• True
• False
MongoDB and Redis are relational databases.
• True
• False
It is a good practice to serve a website over HTTP instead of HTTPS.
• True
• False
It is a good practice to deploy the database in private subnets (no public IP) and
protect it with a username and password.
• True
• False
Containers are more lightweight than Virtual Machines.
• True
• False
Docker can be installed on Linux and/or Windows machines.
• True
• False
docker rm -f $(docker ps -a -q) removes all the existing images on the server.
• True
• False
A Content Delivery Network (CDN) is used to enable asynchronous
communication between the application components.
• True
• False
RabbitMQ is used for caching and serving static data such as websites.
• True
• False
Git is a tool for source code management, allowing multiple developers to
efficiently collaborate together.
• True
• False
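A minimal Git workflow illustrating the statement above: initialize a repository, stage a file, and record a commit (the user name and email here are placeholder values for the demo):

```shell
# Initialize a repository, stage a file, and record a commit
cd "$(mktemp -d)"
git init -q
echo "hello" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Initial commit"
git log --oneline   # shows the single initial commit
```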
It is not a best practice to expose the database directly to the internet.
• True
• False
MySQL and PostgreSQL are document databases.
• True
• False