
Activity-1

1. Compare features of different OS (Windows, Linux, RTOS-VxWorks/Android)

Windows Operating System


Windows is an operating system designed by Microsoft to run on standard x86 Intel and
AMD processors. It provides an interface, known as a graphical user interface (GUI),
which eliminates the need to memorize commands for the command line by using a mouse
to navigate through menus, dialog boxes, buttons, tabs, and icons. The operating system
was named Windows because programs are displayed in rectangular windows on the screen. The
Windows operating system has been designed both for novice users at home and for
professionals who are into development.

Features:
· It is designed to run on standard x86 Intel and AMD hardware, so most hardware
vendors, such as Dell and HP, make drivers for Windows.
· It supports enhanced performance by utilizing multi-core processors.
· It comes preloaded with many productivity tools which help to complete all types of
everyday tasks on your computer.
· Windows has a very large user base, so there is a much larger selection of available
software programs and utilities.
· Windows is backward compatible, meaning old programs can run on newer versions.
· Hardware is automatically detected, eliminating the need to manually install device
drivers.
LINUX Operating System

The Linux OS is an open-source operating system project: a freely distributed, cross-
platform operating system developed based on UNIX. It was originally developed
by Linus Torvalds. The name Linux comes from the Linux kernel. It is basically the system
software on a computer that allows applications and users to perform specific tasks on the
computer. The development of the Linux operating system pioneered open-source
development and became a symbol of software collaboration.

Features:
· Linux is free, can be downloaded from the Internet or redistributed under the GNU licenses,
and has the best community support.
· Linux OS is easily portable, which means it can be installed on various types of devices,
such as mobile phones and tablet computers.
· It is a multi-user, multitasking operating system.
· Bash is the default Linux command interpreter (shell) and can be used to execute commands.
· Linux provides a multi-level, hierarchical file structure in which all
the files required by the system and those created by the user are arranged.
· Linux provides user security using authentication features, and threat detection and
resolution are very fast because Linux is mainly community driven.

What is a Real-Time Operating System (RTOS)?


A real-time operating system is a software system designed to manage real-time
applications. Real-time applications require immediate responses to inputs and events, and the
real-time OS is responsible for ensuring that these responses occur in a timely and
deterministic manner. In this article, we will discuss what a real-time operating system is, its
uses, the types of real-time operating systems, how a real-time OS works, examples of
real-time operating systems, and the advantages and disadvantages of real-time operating systems.

What is Real Time Operating System?

A real-time operating system is a type of operating system used in computing systems that
require strict completion deadlines for all tasks that need to be performed. A real-time OS is
critical in applications that need immediate and deterministic behavior, such as industrial
control systems, aerospace and defense, medical devices, and the automotive industry. Overall,
a real-time operating system ensures that a system is reliable, safe, and efficient.

Features of Real Time Operating System

· A real-time OS occupies very little space.

· The response time of a real-time OS is predictable.

· It consumes few system resources.

· In a real-time OS, the kernel restores the state of a task and passes control of the CPU to that
task.
VxWorks Operating System

The VxWorks operating system provides unrivalled deterministic high performance. It establishes
the benchmark for a scalable, safe, secure, and dependable operating environment for
mission-critical computing systems with the most demanding requirements. Leading global
innovators have used VxWorks for more than 40 years to produce award-winning, creative
solutions for aerospace, military, rail, vehicles, medical devices, manufacturing facilities, and
communications networks.

In this article, you will learn about the VxWorks operating system's history and architecture,
capabilities, functions, and features.

What is VxWorks Operating System?

VxWorks was created as proprietary software by Wind River Systems, a wholly owned
subsidiary of Aptiv. It was first released in 1987. It is mainly intended for embedded
systems that require real-time and deterministic performance, and in many cases safety
and security certification, in aerospace and robotics, medical devices, industrial equipment,
energy, transportation, defence, automotive, network infrastructure, and consumer
electronics.

It supports the AMD/Intel architecture, the ARM architecture, the POWER architecture,
and the RISC-V architecture. On 32 and 64-bit processors, the real-time operating system
may be utilized in multicore mixed modes, symmetric multiprocessing, multi-OS
architectures, and asymmetric multiprocessing.

The VxWorks development environment contains the kernel, board support packages, the
Wind River Workbench development suite, and third-party software and hardware
technologies. The real-time operating system in VxWorks 7 version has been redesigned for
modularity and upgradeability, with the operating system kernel separated from middleware,
applications, and other packages. Scalability, security, safety, connectivity, and graphics have
all been enhanced to meet the demands of the Internet of Things (IoT).

Features of the VxWorks Operating System

There are various features of the VxWorks OS. Some features of the VxWorks OS are as
follows:

1. Memory protection strategies insulate user-mode applications from other user-mode
apps and from the kernel.
2. It provides memory protection.
3. It provides real-time processing.
4. It contains several file systems, including the Disk Operating System Filing System, the
High-Reliability File System, and the Network File System.
5. It provides an error-handling framework.
6. It offers an Internet Protocol version 6 (IPv6) networking stack.
7. It has a multitasking kernel with preemptive, round-robin scheduling and quick
interrupt response.
8. It is available as a 64-bit operating system.
9. It contains a dual-mode IPv6 networking stack with IPv6 Ready Logo certification.
10. It offers support for symmetric multiprocessing and asymmetric multiprocessing.

Android Operating System

Android is a mobile operating system based on a modified version of the Linux kernel and
other open-source software, designed primarily for touchscreen mobile devices such as
smartphones and tablets. Android is developed by a partnership of developers known as the
Open Handset Alliance and commercially sponsored by Google. It was unveiled in
November 2007, with the first commercial Android device, the HTC Dream, launched in
September 2008.

It is free and open-source software. Its source code is known as the Android Open Source Project (AOSP),
primarily licensed under the Apache License. However, most Android devices ship with
additional proprietary software pre-installed, most notably Google Mobile Services (GMS),
which includes core apps such as Google Chrome, the digital distribution platform Google Play,
and the associated Google Play Services development platform.

o About 70% of Android smartphones run Google's ecosystem, some with a vendor-
customized user interface and software suite, such as TouchWiz and
later One UI by Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and
LineageOS. However, the "Android" name and logo are trademarks of Google, which
imposes standards to restrict "uncertified" devices outside its ecosystem from using
Android branding.

Features of Android Operating System

Below are some of the unique features and characteristics of the Android operating
system:

1. Near Field Communication (NFC)

Most Android devices support NFC, which allows electronic devices to interact across short
distances easily. The main goal here is to create a payment option that is simpler than
carrying cash or credit cards, and while the market hasn't exploded as many experts had
predicted, there may be an alternative in the works, in the form of Bluetooth Low Energy
(BLE).

2. Infrared Transmission
The Android operating system supports a built-in infrared transmitter that allows you to use
your phone or tablet as a remote control.

3. Automation

The Tasker app allows control of app permissions and also automates them.

4. Wireless App Downloads

You can download apps on your PC by using the Android Market or third-party options
like AppBrain; they then sync automatically to your device, with no cable required.

5. Storage and Battery Swap

Android phones also have unique hardware capabilities. Google's OS makes it possible to
upgrade, replace, and remove a battery that no longer holds a charge. In addition, Android
phones come with SD card slots for expandable storage.

6. Custom Home Screens

While it's possible to hack certain phones to customize the home screen, Android comes with
this capability from the get-go. Download a third-party launcher like Apex or Nova, and you
can add gestures, new shortcuts, or even performance enhancements for older-model devices.

7. Widgets

Apps are versatile, but sometimes you want information at a glance instead of having to open
an app and wait for it to load. Android widgets let you display just about any feature you
choose on the home screen, including weather apps, music widgets, or productivity tools that
helpfully remind you of upcoming meetings or approaching deadlines.

8. Custom ROMs

Because the Android operating system is open-source, developers can tweak the current OS
and build their own versions, which users can download and install in place of the stock OS. Some
are filled with features, while others change the look and feel of a device. Chances are, if
there's a feature you want, someone has already built a custom ROM for it.

2. Study the evolution of the OS to recognize the importance of current OS trends.

Introduction to Evolution of the Operating System

A computer system has many resources, such as software and hardware, that are needed
to finish a task. Generally, the required resources are file storage, the CPU, memory, input and
output devices, and so on. The operating system acts as the controller of all the above-
mentioned resources and assigns them to the specific programs executed to perform the task.
Hence, the operating system is a resource manager that handles resources from both a user view
and a system view. The evolution of the operating system ranges from programming with
punch cards to training machines to speak and interpret any language.

Various Evolution of the Operating System

The various stages in the evolution of the operating system are given below:

1. Serial Processing
Serial processing developed between the 1940s and the 1950s, when programmers worked directly
with the hardware components without any operating system. The problems here were scheduling
and setup time. Users signed up for machine time, and much of that time was wasted. Setup time
was spent loading the compiler, loading the source program, saving the compiled program, and
linking and buffering. If any intermediate error occurred, the process had to start over.

2. The Batch System


It was introduced to improve the utilization and application of computers. Jobs were scheduled and
submitted on cards and tapes, then sequentially executed on the monitors using a Job
Control Language. The first computers used in batch operation processed a batch of jobs
without any pause or stop. Programs were written on punch cards and then copied to tape for
processing. When the computer completed a single job, it instantly began the next task on the
tape. Trained professional operators communicated with the machine: users dropped off their jobs
and came back to pick up the results after the jobs were executed.

Though it was inconvenient for the users, it kept the expensive computer as busy as possible
by running a continuous stream of jobs. Memory protection does not allow the memory area
containing the monitor to be altered, and a timer protects the system from being monopolized
by any one job. The processor still sits idle while the input and output devices
are in use, which leads to poor utilization of CPU time.

3. Multi-programmed Batch System


In this approach, several jobs to be executed are kept in main memory at the same time. Job
scheduling decides which program the processor executes next.

4. Time-Shared Operating System


It was developed as a substitute for batch systems. The user communicated directly with the
computer through printing terminals such as an electric teletype. A few users shared the computer
simultaneously, and the system spent a fraction of a second on every job before moving on to the next
one. Because the fast server returned to each user's process so quickly, each user seemed to be
receiving its full attention. Timesharing systems allow multiple
programs to use the computer system by sharing it interactively.

Multi-programming is used to manage multiple interactive jobs. The time of the
processor is shared among multiple users, and many users can simultaneously access the
system via terminals. Printing terminals required programs with a command-line user
interface, where the user typed responses to prompts or typed commands, and the
interaction scrolled down like a roll of paper.

Video terminals replaced printing terminals and displayed fixed-size
characters. Some were used to present forms on the screen, but many simply scrolled
like a glass teletype. Personal computers became affordable in the mid-1970s. The first
commercially feasible personal computer, the Altair 8800, came onto the market and shook up the
industry. The Altair did not have an operating system because it had only light-emitting
diodes and toggle switches for input and output, so people started to use floppy disks and
connected terminals.

Digital Research implemented the CP/M operating system in 1976 for the Altair and similar
computers. Later, DOS and CP/M had a command-line interface similar to the time-sharing
operating systems. These computers were dedicated to single users and were not
shared.

5. Macintosh Operating System


It was built on decades of research into graphically oriented personal computer operating
systems and applications. In the early 1960s, Ivan Sutherland's pioneering Sketchpad program
already demonstrated many characteristics of the modern graphical user interface, but the
hardware cost millions of dollars and occupied a room.

After many years of research on large computers and improvements in hardware, the
Macintosh became commercially and economically feasible. Research prototypes such as
Sketchpad are still in progress at many research labs, and such work formed the basis of
later products.

Operating System in the Trend


The current operating system provides program execution, I/O operations, communication,
file-system manipulation, error detection, allocation of resources, accounting, and protection.

· The derivatives of CP are CP-VM, CP/M, CP/M-86, DOS, DR-DOS, and FreeDOS.
· Versions of Microsoft Windows include Windows 3.x, Windows 95/98, Windows XP,
Windows Vista, Windows 7, and Windows 8.
· The derivatives of MULTICS are UNIX, Xenix, Linux, QNX, VSTa, RISC iX, Mac
OS X and so on.
· The derivatives of VMS are OS/2, React OS, KeyKOS, OS/360, and OS/400.

The real-time operating system is an advanced multi-feature operating system applied when
there are rigid time requirements for the flow of data or the operation of a processor. The distributed
operating system is an interconnection between two or more nodes whose processors
do not share any memory. It is also called a loosely coupled system.

3. Explain the different flavors of LINUX


The Flavors of Linux Operating System

Fedora
This flavour of the operating system is the foundation for the commercial Red Hat Linux
version. It weighs more on features and functionality along with free software. Fedora
can be used freely within a community and has third-party repositories; this makes it an
unlicensed distro version of the Linux OS and imparts a community-driven character for
accessibility. This OS receives a regular update every six months, making it
more scalable in terms of performance.

Unlike Ubuntu, Fedora doesn't make its own desktop environment or other software. The Fedora
project uses "upstream" software, providing a platform that integrates all the upstream
software without adding custom tools or patching it much. Fedora ships with the
GNOME 3 desktop environment by default, though it is also available with other
desktop environments.
Redhat
Red Hat Enterprise Linux is a commercial Linux distribution intended for servers and
workstations. It is the most favored version of the Linux OS and is based on the open-source
Fedora version.
This version offers long release cycles to ensure stability among its features. It
is trademarked to prohibit the Red Hat Enterprise Linux software from being
redistributed. Nonetheless, the core software is free and open-source.

CentOS
CentOS is a community version of Red Hat. It is a community project that takes the Red Hat
Enterprise Linux code, removes all Red Hat's trademarks, and makes it available for free
use and distribution.
It is available for free, and support comes from the community as opposed to Red Hat itself.
Activity-2

1. Explain OS level virtualization and state its benefits.

What is Operating System Virtualization?


Operating system virtualization refers to a modified form of a normal operating system that
lets different users run different end-use applications. This whole process
is performed on a single computer at a time.

In OS virtualization, the virtualized environment accepts commands from any of the users
operating it and performs different tasks on the same machine by running different
applications.

In operating system virtualization, one application does not interfere with another
even though they are functioning on the same computer.
The kernel of an operating system allows more than one isolated user-space instance to exist.
These instances are called software containers, and they act as virtualization engines.

Uses of Operating System Virtualization

These are the reasons why we use Operating System Virtualization
in Cloud Computing:

· Operating System Virtualization is used to consolidate server hardware by moving services
from separate servers.
· It provides security for hardware resources that could be harmed by distrusted users.
· OS Virtualization is used for virtual hosting environments.
· It can separate several applications into containers.

How Operating System Virtualization Works?

The operating system of the computer manages all the software and hardware of the
computer. With the help of the operating system, several different computer programs can
run at the same time.

This is done by using the CPU of the computer. Several components of the computer,
coordinated by the operating system, work together so that every program runs successfully.
2. Compare VMS and Containers
Virtual machines and Containers are two ways of deploying multiple, isolated services
on a single platform.

Virtual Machine:
It runs on top of an emulating software called the hypervisor which sits between the
hardware and the virtual machine. The hypervisor is the key to enabling virtualization.
It manages the sharing of physical resources into virtual machines. Each virtual
machine runs its own guest operating system. They are less agile and have lower portability
than containers.

Container:
It sits on top of a physical server and its host operating system. Containers share a
common operating system that requires care and feeding for bug fixes and patches.
They are more agile and have higher portability than virtual machines.

SNo. Virtual Machines (VM) vs. Containers

1. VM: A VM is a piece of software that allows you to install other software inside of it, so you
control it virtually, as opposed to installing the software directly on the computer.
Container: A container is software that allows different functionalities of an application to run independently.

2. VM: Applications running on a VM system, or hypervisor, can run different operating systems.
Container: Applications running in a container environment share a single OS.

3. VM: A VM virtualizes the computer system, meaning its hardware.
Container: Containers virtualize the operating system, or the software, only.

4. VM: VM size is very large, generally in gigabytes.
Container: The size of a container is very light, generally a few hundred megabytes, though it may vary as per use.

5. VM: A VM takes longer to run than containers, the exact time depending on the underlying hardware.
Container: Containers take far less time to run.

6. VM: A VM uses a lot of system memory.
Container: Containers require very little memory.

7. VM: A VM is more secure, as the underlying hardware isn't shared between processes.
Container: Containers are less secure, as the virtualization is software-based and memory is shared.

8. VM: VMs are useful when we require all of the OS resources to run various applications.
Container: Containers are useful when we need to maximize the number of running applications using minimal servers.

9. VM: Examples of Type 1 hypervisors are KVM, Xen, and VMware; VirtualBox is a Type 2 hypervisor.
Container: Examples of containers are RancherOS, PhotonOS, and containers by Docker.
3. Identify the difference between hypervisors and Linux
containers.

Hypervisor / VMs vs. Container technology

· Hypervisors virtualize the hardware of the host machine to create multiple VMs; container
technology virtualizes the OS of the host machine to create multiple containers.
· VMs are an abstraction of hardware; container technology is an abstraction of the app layer.
· The created VMs need to have their own OS; the created containers use the OS of the host machine.
· VMs take more space (e.g. RAM, ROM, etc.); containers take less space.
· Hypervisors are mostly used by cloud service providers to create the required VMs on powerful
hardware for customers; containers are used to run multiple copies of an application.
· Considering resource utilization, VMs are not cost-effective for running multiple copies of an
application; containers are much more cost-effective compared to VMs.
· VMs do not provide sharing and running options as efficient and flexible as containers; container
technology enables us to share and run the application anywhere.
· VMs are the result of using a hypervisor; containers are the result of using container technology.
Basically, we need to use a container engine to create the containers.

4. Comprehend the benefits of virtualization.

Benefits of Virtualization
· Virtualized services help businesses scale faster and be more flexible.
· 1. Cut Your IT Expenses
· Lower spending is one of the main virtualization benefits. When using
virtualization technology, one physical server hosts many virtual machines. You
can use these machines for different purposes. Half of business owners consider
this point to be very important.

· 2. Eliminate Downtime & Improve Resiliency


· The recovery time of a virtual machine is shorter compared to the repair of a
physical server. 63% of business owners define this factor as critical. 31% say this
is one of the most important benefits of virtualization, according to Statista.

In the event of a disaster, an operator can retrieve a file with all the VM data from
a computer in minutes. This caters to business interoperability, trustworthiness,
and resilience.

· 3. Improve Productivity, Efficiency, & Agility


· This is one of the essential advantages of virtualization. Your team only has to
maintain one server instead of several. This frees up time that you would otherwise
spend on technical support. You can instead use this time to respond to changing
environments and concentrate on critical tasks.

· 4. Monitor Independence and DevOps Delivery


· Virtualization allows you to split a working environment into several virtual
machines. Any developer can quickly switch from one VM to another to test an
application or do another task.

Developers won’t need to request a new computer with the required operating
system. Instead, they can complete their tasks. Thus, the independence of DevOps
makes for another critical point in the list of the benefits of virtualization.

· 5. Faster Provisioning of Applications & Resources


· With a VM, you are free from having to install hardware components. You don’t
have to set up new machines, create networks, or perform similar tasks. You can
do this all virtually. Thus, your team can get a new environment, storage section,
or network in minutes.
· 6. Simplified Data Center Management
· One of the key benefits of virtualization lies in the domain of data centers. To
manage the latter more effectively, companies can organize additional spaces and
build extra computer rooms to secure operations and use them as a disaster
recovery center.

Additionally, virtualization eliminates the need for high-power supplies and


cooling facilities for data centers.

· 7. Greater Business Continuity & Disaster Recovery


· If a computer goes down unexpectedly, there is no better solution for data recovery
than virtualization. A virtual machine is easy to move from one hypervisor to
another on a different device in minutes. The expected uptime is 99.99%. This is
the main factor that motivates people to develop a virtualization strategy.

· 8. Greater ROI
· Among the other benefits of virtualization is the idea of getting a significant return
on smaller investments. You had to spend a lot of money on setting up an on-site
working environment and buying hardware in the past.

Now, you can just spend some time setting up a machine, purchasing a license
(from Microsoft Azure or AWS), or just getting access for starters. According to
Statista, enterprises spend just 9% of their IT budget on virtualization.

Aside from this small initial investment, the rest of the money you earn is yours.
Add here that you are then able to dedicate more time to your business rather than
setting up a new environment.

· 9. Enhanced Systems Security


· According to a report on the annual number of data breaches in the US, the number
of data exposures has grown from 2005 to 2020, reaching the peak in 2017.

However, the number of data leaks in 2020 still remained high. In the US alone,
155.8 million people suffered data leaks. The reason was almost always attributed
to weak security.
Security enhancement is one of the most important benefits of virtualization.

Virtual firewalls give you the best of two worlds. First, they are isolated, like other
virtual applications. Thus, VMs are safe from viruses and malicious attacks of
different sorts. Second, they are cheaper and simpler to install, maintain, and
update.

In terms of virtualization benefits, you get higher visibility of what’s going on both
in virtual and physical environments. This enables faster provisioning of resources.
You also react to adverse events more quickly.

· 10. Company Becomes Eco-Friendly in all Senses


· Hardware maintenance is not just a matter of time and cost, but is also energy-
consuming. Power supplies to information technology offices have a strong
climate impact. This impact can be reduced considerably thanks to the
implementation of virtual machines. Thus, low-energy use belongs in the category
of benefits of virtualization. Lower energy use brings profit to society in general.

The advantages of virtualization are obvious. It is no wonder more businesses are


adopting virtualization solutions every year.

According to a Virtualization report, 29% of companies worldwide have started


dabbling in virtualization. However, there is one curious thing: among the
different benefits of virtualization, the most important one is stability.

In some cases, the disadvantages of virtualization are also a piece of the puzzle.
For example, virtualization takes more time compared to the use of local systems.

As well, with time, you still may face scalability issues since you cannot grow
endlessly. At some point, you will need to expand your hardware base.
Activity-3

1. Explain ext2/ext3 filesystem attributes.

Introduction

There are many different Linux filesystems available, each with its own advantages and
disadvantages. In this article, we will compare the four most popular Linux filesystems: Ext2,
Ext3, Ext4 and Btrfs.

Ext2 is the oldest of the four filesystems, and is still used by many Linux distributions. It is a
very stable filesystem, but does not support features such as journaling or extended attributes.

Ext3 is a journaled version of Ext2, and is therefore more reliable. However, it does not
support some of the newer features found in other filesystems such as extent-based allocation
or delayed allocation.

What is a Linux Filesystem?

A Linux filesystem is the underlying software component that provides access to the data on
a storage device. The three most popular Linux filesystems are Ext, Ext2, and Ext3. Each has
its own strengths and weaknesses, which we will explore in this article.

Ext: The original Linux filesystem was created by Linus Torvalds himself. It is simple and
straightforward, but does not have some of the features that newer filesystems have.

Ext2: The second version of the Ext filesystem was released in 1993. It added support for
larger file sizes and extended attributes.

Ext3: The third version of the Ext filesystem was released in 2001. It added journaling,
which helps to prevent data loss in the case of a power failure or system crash.
Overview of Ext2, Ext3

There are many different Linux filesystems available, but the most popular are Ext2, Ext3,
Ext4 and Btrfs. All of these filesystems have their own advantages and disadvantages, so it’s
important to choose the right one for your needs.

Ext2 is the oldest of the four filesystems, and it’s also the simplest. It doesn’t have any
journaling features, so it’s not as reliable as the other options. However, it’s also much faster
than the other options, so it’s a good choice if you need maximum performance.

Ext3 is a slightly newer version of Ext2 that includes journaling. This makes it more reliable,
but it also means that it’s slightly slower. However, most people feel that the reliability is
worth the trade-off in speed.

Pros and Cons of Each Filesystem

There are a few different types of Linux filesystems available, each with its own set of pros
and cons. Here’s a quick rundown of the most popular ones:

– EXT2: One of the most popular Linux filesystems, EXT2 is known for being fast and
stable. However, it doesn’t support journaling, which means that data can be lost in the event
of a power failure or system crash.

– EXT3: An extension of EXT2, EXT3 adds journaling support to help prevent data loss.
It’s also compatible with most major Linux distributions.

– XFS: Another popular Linux filesystem, XFS is known for being scalable and efficient. Unlike
Ext2, it is a journaling (metadata-journaling) filesystem, which helps protect the filesystem's
structures in the event of a power failure or system crash.

Comparison of Features Between the Different Filesystems

There are many different types of Linux filesystems available, each with its own benefits and
drawbacks. In this article, we’ll take a look at four of the most popular filesystems: Ext2,
Ext3, Ext4, and Btrfs.
Ext2:

Ext2 is the oldest filesystem in this list, having been first introduced in 1992. Despite its age,
it’s still widely used thanks to its simplicity and reliability. One downside of Ext2 is that it
doesn’t support journaling, which means that data can be lost in the event of a power failure
or system crash.

Ext3:

Ext3 was introduced in 2001 and is a journaled version of Ext2. This means that data is less
likely to be lost in the event of a power failure or system crash. However, Ext3 is not as
widely used as Ext2 due to its slightly higher overhead.

Choosing the Right Filesystem for Your System

There are many different types of Linux filesystems available, and choosing the right one for
your system can be a daunting task. In this article, we will compare the most popular Linux
filesystems: Ext2, Ext3, Ext4 and Btrfs.

Ext2:

Ext2 is the oldest and one of the most popular Linux filesystems. It is not a journaling
filesystem, which means it does not keep a log of changes made to the filesystem. This keeps
it simple and fast, but it also means the filesystem is harder to recover cleanly after a crash.
Because there is no journaling overhead, it remains a reasonable choice for systems that require
high performance but can tolerate the reduced reliability.

Ext3:

Ext3 is an improved version of Ext2 that adds support for journaling. This makes it even
more reliable than Ext2, but it also comes with a performance penalty. If you need maximum
reliability, then Ext3 is a good choice. However, if you need maximum performance, then
you should consider using another filesystem such as Ext4 or Btrfs.
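As a hands-on illustration of ext2/ext3 file attributes, the hedged C sketch below reads the per-file flag bits that ext2/ext3 expose and that the lsattr and chattr tools manage. It assumes a Linux system with <linux/fs.h> available; the file path /etc/hostname and the output wording are only examples, not part of any source above.

/* Minimal sketch: read ext2/ext3-style attribute flags of a file on Linux.
 * Build with: cc show_attrs.c -o show_attrs   (file name is illustrative) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
    int attr = 0;
    int fd = open("/etc/hostname", O_RDONLY);   /* any file on an ext2/ext3/ext4 volume */

    if (fd < 0 || ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0) {
        perror("could not read attribute flags");
        return 1;
    }

    printf("immutable   (chattr +i): %s\n", (attr & FS_IMMUTABLE_FL) ? "yes" : "no");
    printf("append-only (chattr +a): %s\n", (attr & FS_APPEND_FL) ? "yes" : "no");

    close(fd);
    return 0;
}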
2. Discuss the file-mount and unmount system calls.

The mount system call makes a directory accessible by attaching the root directory of one file
system to another directory. In UNIX, directories are represented by a tree structure, and
hence mounting means attaching a file system to one of the branches. This means the file system
found on one device can be attached to the tree. The location in the system where the file
system is attached is called a mount point.

Example:
mount -t type device dir

- This will attach, or mount, the file system found on device, of type type, to the directory dir.
- The umount system call does the opposite: it unmounts, or detaches, the attached file system
from the target or mount point. A file system that is open or in use by some process cannot be detached.

The attaching of one file system to another file system is done using the mount system call. At
the time of mounting, one directory tree is essentially spliced onto a branch of another
directory tree. Mount takes two arguments: one, the mount point, which is a
directory in the current file naming system, and two, the file system to mount at that point. When
a CD-ROM is inserted into the system, the file system on the CD-ROM device (/dev/cdrom)
is automatically mounted into the directory tree.
The unmount system call is used to detach a file system.
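For illustration, the C sketch below calls the underlying mount() and umount() system calls directly, which is what the mount and umount commands use internally. It is only a sketch: it assumes Linux, root privileges, and purely hypothetical paths (/dev/sdb1, /mnt/usb) and filesystem type (ext4).

/* Minimal sketch of the mount()/umount() system calls (requires root). */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* attach the ext4 filesystem found on /dev/sdb1 at the directory /mnt/usb */
    if (mount("/dev/sdb1", "/mnt/usb", "ext4", 0, NULL) != 0) {
        perror("mount failed");
        return 1;
    }

    /* ... files under /mnt/usb are now part of the directory tree ... */

    /* detach it again; this fails (EBUSY) if some process is still using it */
    if (umount("/mnt/usb") != 0) {
        perror("umount failed");
        return 1;
    }
    return 0;
}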
Activity-4:

1. Compare Linux fork() and Windows CreateProcess() functions.

· In UNIX, the fork() function creates a new process by duplicating the existing process.
The new process is called the child process and has its own unique process ID (PID).
· In Windows, the CreateProcess() function creates a new process by creating a new
process image.

How do the UNIX fork() and Windows CreateProcess() functions differ?


The fundamental differences between UNIX fork() and Windows CreateProcess() functions
are as follows:

1. Purpose: fork() is used to create a new process in UNIX, which is a clone of the parent
process with a separate memory space. CreateProcess() in Windows is used to create a new
process with a specified executable file to run.
2. Inheritance: In UNIX, fork() duplicates the parent process, inheriting its file descriptors,
environment variables, and memory layout. In Windows, CreateProcess() doesn't inherit
these attributes directly; instead, it requires explicit configuration during process creation.

3. Memory: fork() shares the same memory layout between parent and child processes
initially, with copy-on-write protection. CreateProcess() assigns a completely separate
memory space for the new process in Windows.

4. Process ID: fork() returns the child's process ID in the parent process and zero in the child
process. CreateProcess() returns a PROCESS_INFORMATION structure containing the new
process's handle and process ID.
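To make the contrast concrete, here is a minimal, hedged C sketch of fork() on Linux/UNIX; it is only an illustration, not a definitive implementation. The Windows equivalent would be a call to CreateProcess() that is given the path of an executable to run, as described in point 1 above.

/* Minimal fork() sketch: the parent duplicates itself and waits for the child. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid < 0) {
        perror("fork failed");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child branch: fork() returned 0 */
        printf("Child  PID %d, parent PID %d\n", getpid(), getppid());
    } else {
        /* parent branch: fork() returned the child's PID */
        printf("Parent PID %d created child PID %d\n", getpid(), pid);
        wait(NULL);                     /* wait for the child to finish */
    }
    return 0;
}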
Activity-5:

1. Study the probable conditions for deadlock occurrence and how to overcome them.

What is Deadlock in OS?

All the processes in a system require resources such as the central processing unit (CPU),
file storage, and input/output devices to execute. Once execution is finished, the process
releases the resources it was holding. However, when many processes run on a system, they
also compete for the resources they require for execution. This may give rise to a deadlock
situation.

A deadlock is a situation in which two or more processes are blocked because each is holding a
resource while also requiring a resource that has been acquired by some other process. Therefore,
none of the processes gets executed.

Necessary Conditions for Deadlock

The four necessary conditions for a deadlock to arise are as follows.

· Mutual Exclusion: Only one process can use a resource at any given time i.e. the
resources are non-sharable.
· Hold and wait: A process is holding at least one resource at a time and is waiting
to acquire other resources held by some other process.
· No preemption: A resource can be released only voluntarily by the process holding it, i.e.
after that process finishes its execution.
· Circular Wait: A set of processes are waiting for each other in a circular fashion.
For example, let's say there is a set of processes {P0, P1, P2, P3} such
that P0 depends on P1, P1 depends on P2, P2 depends
on P3, and P3 depends on P0. This creates a circular relation between all
these processes, and they have to wait forever to be executed.
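The hedged sketch below (assuming POSIX threads; all names are illustrative) shows how hold and wait plus circular wait can arise in practice: each thread holds one lock while waiting for the lock held by the other.

/* Minimal deadlock sketch: two threads acquire two mutexes in opposite order.
 * Compile with: cc deadlock.c -o deadlock -pthread   (file name is illustrative) */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lockA);     /* holds A ...                           */
    sleep(1);
    pthread_mutex_lock(&lockB);     /* ... and waits for B (held by worker2) */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

void *worker2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lockB);     /* holds B ...                           */
    sleep(1);
    pthread_mutex_lock(&lockA);     /* ... and waits for A -> circular wait  */
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);         /* with the sleep() calls this typically never returns */
    pthread_join(t2, NULL);
    return 0;
}

One simple prevention technique, matching the deadlock prevention method described below, is to make every thread acquire the locks in the same fixed order, which breaks the circular wait condition.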

Methods of Handling Deadlocks in Operating System

The first two methods are used to ensure the system never enters a deadlock.

Deadlock Prevention

This is done by restraining the ways a request can be made. Since deadlock occurs when all
the above four conditions are met, we try to prevent any one of them, thus preventing a
deadlock.
Deadlock Avoidance

When a process requests a resource, the deadlock avoidance algorithm examines the
resource-allocation state. If allocating that resource sends the system into an unsafe state, the
request is not granted.

Therefore, it requires additional information, such as how many resources of each type are
required by a process. If the system enters into an unsafe state, it has to take a step back to
avoid deadlock.

Deadlock Detection and Recovery

We let the system fall into a deadlock and if it happens, we detect it using a detection
algorithm and try to recover.

Some ways of recovery are as follows.

· Aborting all the deadlocked processes.


· Abort one process at a time until the system recovers from the deadlock.
· Resource Preemption: Resources are taken one by one from a process and assigned to
higher priority processes until the deadlock is resolved.

Deadlock Ignorance

In this method, the system assumes that deadlock never occurs. Since deadlock situations
are not frequent, some systems simply ignore them. Operating systems such as UNIX and
Windows follow this approach. However, if a deadlock occurs, we can reboot the system and
the deadlock is resolved automatically.

Note: The above approach is an example of Ostrich Algorithm. It is a strategy of ignoring


potential problems on the basis that they are extremely rare.

2. Identify the relationship between threads and processes.

What is Process?

A process is an instance of a program that is being executed. When we run a program, it


does not execute directly. It takes some time to follow all the steps required to execute the
program, and following these execution steps is known as a process.

A process can create other processes to perform multiple tasks at a time; the created
processes are known as clone or child processes, and the main process is known as the parent
process. Each process contains its own memory space and does not share it with other
processes. It is known as an active entity. A typical process resides in memory in the form
shown below.
A process in OS can remain in any of the following states:

o NEW: A new process is being created.


o READY: A process is ready and waiting to be allocated to a processor.
o RUNNING: The program is being executed.
o WAITING: Waiting for some event to happen or occur.
o TERMINATED: Execution finished.

How do Processes work?

When we start executing the program, the processor begins to process it. It takes the
following steps:

o Firstly, the program is loaded into the computer's memory in binary code after
translation.
o A program requires memory and other OS resources to run. These resources include
registers, a program counter, and a stack, and they are provided by the OS.
o A register can hold an instruction, a storage address, or other data that is required by
the process.
o The program counter maintains the track of the program sequence.
o The stack has information on the active subroutines of a computer program.
o A program may have different instances of it, and each instance of the running
program is known as an individual process.

A thread is the subset of a process and is also known as the lightweight process. A process
can have more than one thread, and these threads are managed independently by the
scheduler. All the threads within one process are interrelated to each other. Threads share
some common information, such as the data segment, code segment, and files, with their
peer threads, but each thread contains its own registers, stack, and counter.

How do threads work?

As we have discussed, a thread is a subprocess or an execution unit within a process. A
process can contain anywhere from a single thread to multiple threads. A thread works as follows:

o When a process starts, OS assigns the memory and resources to it. Each thread within
a process shares the memory and resources of that process only.
o Threads are mainly used to improve the processing of an application. In reality, only a
single thread is executed at a time, but fast context switching between threads
gives the illusion that the threads are running in parallel.
o If a single thread executes in a process, it is known as a single-threaded process, and if
multiple threads execute simultaneously, then it is known as multithreading.
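A minimal POSIX-threads sketch of this idea is shown below: both threads share the process's global variable, while each keeps its own stack and registers. All names are illustrative, and the mutex is only there to keep the shared update orderly.

/* Minimal multithreading sketch: two threads of one process share a counter.
 * Compile with: cc threads.c -o threads -pthread   (file name is illustrative) */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                              /* shared by all threads of the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *run(void *arg)
{
    const char *name = arg;                          /* local variable: lives on this thread's own stack */
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;
        printf("%s incremented counter to %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, run, "thread-1");
    pthread_create(&t2, NULL, run, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);  /* 6: both threads updated the same memory */
    return 0;
}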

3. Comprehend the differences between types of threads.

Threads and its types in Operating System

A thread is a single sequence stream within a process. Threads have some of the same properties as
processes, so they are called lightweight processes. Threads are executed one after another
but give the illusion that they are executing in parallel. Each thread has different states.
Each thread has
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other as they share the code, data, OS resources etc.

Types of Threads:

1. User Level Thread (ULT) – It is implemented in a user-level library; such threads are not
created using system calls. Thread switching does not need to call the OS or cause an
interrupt to the kernel. The kernel doesn't know about user-level threads and manages them as
if they were single-threaded processes.
· Advantages of ULT –
· Can be implemented on an OS that doesn’t support multithreading.
· Simple representation since thread has only program counter, register set,
stack space.
· Simple to create since no intervention of kernel.
· Thread switching is fast since no OS calls need to be made.
· Limitations of ULT –
· No or less co-ordination among the threads and Kernel.
· If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead of a
thread table in each process, the kernel itself has a thread table (a master one) that keeps
track of all the threads in the system. In addition, the kernel also maintains the traditional
process table to keep track of processes. The OS kernel provides system calls to create and
manage threads.
· Advantages of KLT –
· Since the kernel has full knowledge about the threads in the system, the scheduler
may decide to give more time to processes having a large number of threads.
· Good for applications that frequently block.
· Limitations of KLT –
· Slow and inefficient.
· It requires a thread control block, so there is additional overhead.
Activity-6:

1. Compare the features of swapping and paging.


An OS handles the computer system's primary functions. It manages hardware devices,
processes, files, and various other duties. Memory management is one of them. It collects
data about all memory regions and their allocation or free status. The OS uses two memory
management techniques: swapping and paging. Swapping may be added to any processor
scheduling approach to move jobs from the main memory to the back store. In contrast,
paging allows a process's physical address space to be non-contiguous.

In this article, you will learn about the difference between Paging and Swapping in the
operating system. But before discussing the differences, you must know about the Paging and
Swapping in the operating system.

What is paging in the operating system?

Paging is a memory management technique that assigns a process to a non-contiguous


address area. External fragmentation does not occur when a process's physical address space is non-
contiguous. Usually, the page size is 4 KB, and paging always occurs between currently active
pages.

Paging is accomplished by dividing the RAM into fixed-size sections known as frames. A
process's logical memory is divided into identical fixed-size units called pages. The hardware
determines the page size and frame size. Because a process must be executed from main
memory, whenever a process has to run, its pages are loaded from the source, or backing
store, into any free frames in the main memory.
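As a small worked example of the idea above, and assuming the 4 KB page size mentioned earlier, a logical address can be split into a page number and an offset as follows:

/* Minimal sketch: split a logical address into page number and offset (4 KB pages). */
#include <stdio.h>

#define PAGE_SIZE 4096UL   /* 4 KB */

int main(void)
{
    unsigned long logical_address = 20500;                    /* example address   */
    unsigned long page_number = logical_address / PAGE_SIZE;  /* which page  -> 5  */
    unsigned long offset = logical_address % PAGE_SIZE;       /* where in it -> 20 */

    printf("address %lu -> page %lu, offset %lu\n",
           logical_address, page_number, offset);
    return 0;
}

The hardware (via the page table) then maps page 5 to whichever physical frame holds it, and the offset is kept unchanged within that frame.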

What is Swapping in the operating system?

A memory management technique called Swapping removes inactive programs from the
computer system's main memory. Any process must be executed in memory. Still, it can be
temporarily swapped out of memory to a backup store and then returned to memory to
continue its execution. Swapping is done to provide memory for the operation of other
processes.

The swapping mechanism typically impacts performance, but it also aids in executing many
large operations concurrently. Swapping is another name for a method of memory
compression. Generally, low priority processes can be swapped so that higher priority
processes may be loaded and executed.
Key Differences between the Paging and Swapping

There are various key differences between Paging and Swapping in the operating system.
Some main differences between Paging and Swapping in the operating system are as follows:

1. Paging is a memory management method that enables systems to store and get the
data from secondary storage for usage in the main memory. In contrast, Swapping
temporarily transfers a process from the primary to secondary memory.
2. Paging is more flexible than swapping because paging transfers pages. On the other
hand, Swapping is less flexible.
3. There are many processes in the main memory during swapping. On the other hand,
there are some processes in the main memory while paging.
4. Swapping involves processes switching between the main memory and secondary
memory. On the other hand, pages are equal-size memory blocks that transfer
between the main memory and secondary memory during paging.
5. Swapping allows the CPU to access processes more quickly. On the other hand,
paging allows virtual memory to be implemented.
6. Swapping is appropriate for heavy workloads. On the other hand, the paging is
appropriate for light to medium workloads.
7. Swapping allows multiprogramming. In contrast, paging allows a process's physical
address space to be non-contiguous, which prevents external fragmentation.

Head-to-head comparison between the Paging and Swapping

There are various head-to-head comparisons between Paging and Swapping. Some
differences between Paging and Swapping are as follows:
Features: Paging vs. Swapping

Definition: Paging is a memory management method that enables systems to store and get data
from secondary storage for usage in the RAM. Swapping temporarily transfers a process from the
primary to secondary memory.

Basic: Paging permits a process's memory address space to be non-contiguous. Swapping allows
multiple programs in the operating system to run concurrently.

Flexibility: Paging is more flexible, as only the pages of a process are moved. Swapping is less
flexible because it moves the entire process back and forth between RAM and the back store.

Main Functionality: During paging, pages (equal-size memory chunks) travel between the primary
and secondary memory. Swapping involves entire processes switching between main memory and
secondary memory.

Multiprogramming: Paging enables more processes to run in the main memory. Compared to paging,
swapping enables fewer programs to run in the main memory.

Workloads: Paging is appropriate for light to medium workloads. Swapping is appropriate for
heavy workloads.

Usage: Paging allows virtual memory to be implemented. Swapping allows the CPU to access
processes more quickly.

Processes: There are some processes in the main memory while paging. There are many processes
in the main memory during swapping.
Activity-7:

1. Compare different Linux shells.

What is a Linux Shell


A Linux shell is a command-line interface that allows users to interact with the Linux
operating system by typing commands. It serves as a bridge between the user and the core
Linux subsystems, including the Linux kernel.
Shell is a very powerful component of any Linux distribution that takes text commands from
the user, interprets them, and interacts with system utilities and apps. As a result, you can
carry out a long list of activities, such as creating files, monitoring processes, downloading
files from the Internet, and creating a database and text processing without leaving the shell.

Why Do We Need Different Linux Shells?


Now that you understand the benefits of shells, the next question is why are there different
shells?

The straight answer is that different shells cater to specific user preferences and requirements.
Each shell has unique features and styles, making it suitable for a particular set of tasks and
UX requirements.

Users can choose the shell that aligns with their workflow or requirements. While all shells
can accomplish essential operations, shells vary in the level of flexibility and user
experience.

Let’s now discuss the 8 popular types of Linux shells and explore the benefits and features of
these shells.

Types of Linux Shells

The following discussion will help you pick the right shell that fits your particular workflows
and operational usage.

Note that you need a basic understanding of Linux commands and using the command-line
utilities available on a typical Linux distribution.

#1. Bourne Shell (sh)


Bourne Shell (sh) is one of the earliest Unix shells, created by Stephen Bourne at Bell Labs in
the 1970s. It’s known for its simplicity and availability on virtually all Unix-like systems.

While lacking advanced features, it has the necessary basic scripting and automation
capabilities. As a result, the Bourne Shell scripts are highly portable across Unix-like
systems, making them reliable for simple command-line tasks and script execution.

Features

· It is typically referred to as sh.


· Full Path to Command: /bin/sh or /sbin/sh
· Default Prompt for Non-Root User: $.
· Default Prompt for Root User: #.

Advanced Features
It has no advanced features, but it’s used as the base for other shells.

Limitations

· It lacks logical and arithmetic expansion.


· There is no support for command history or recall.
· There is no command autocomplete option in sh.

Example
Here’s an example of a Bourne Shell script:

#!/bin/sh

echo "Hello, World!"

In this script:

The #!/bin/sh line means the script should be run in the Bourne Shell.

When you run the script, the echo “Hello, World!” line prints “Hello, World!” on the screen.
The /bin/sh file still exists on modern Linux systems, but it acts like a shortcut (a symbolic link)
to the system's main shell, such as Bash (on many Linux distributions) or dash (on Ubuntu and
Debian). This link is set up to behave like the Bourne shell.

#2. C Shell (csh)

The C Shell, often abbreviated as csh, is another shell with a long association with Linux.
Developed by Bill Joy at the University of California, Berkeley, in the late 1970s, the C
Shell is known for its unique syntax and command-line editing capabilities. It was the first
shell that introduced the command history feature.
Features

· Shell Name: It is referred to as csh in scripts.


· Full Path to Command: /bin/csh.
· Default Prompt for Non-Root User: hostname %.
· Default Prompt for Root User: hostname #.


Advanced Features

· The C shell introduced the command history feature that tracks and recalls previously
executed commands.
· Users can create custom aliases for frequently used apps.
· It introduced tilde (~) to represent the user’s home directory for enhanced
convenience.
· Csh incorporated a built-in expression grammar for more flexible command
execution.

Limitations

· The C shell was criticized for its syntax inconsistencies, which can confuse even the
advanced users.
· It lacked full support for standard input/output (stdio) file handles and functions,
limiting specific capabilities.
· Its limited recursion abilities meant that users couldn’t use sequences of complex
commands.
Compared to the Bourne shell, the C shell improved readability and performance. Its
interactive features and innovations influenced the development of subsequent Unix shells.

#3. TENEX C Shell (tcsh)

The TENEX C Shell (tcsh for short) is an upgraded version of the C Shell. It can remember past
commands, helps complete file names, and generally allows more complex scripts. Many Unix
systems already have tcsh installed.

Features

· Shell Name: It is referred to as tcsh.


· Full Path to Command: /bin/tcsh.
· Default Prompt for Non-Root User: hostname:directory>.
· Default Prompt for Root User: hostname:directory#.

Advanced Features

· Enhanced command history management.


· Customizable auto-completion capabilities.
· Support for wildcard pattern matching.
· Comprehensive job control functionalities.
· Built-in where command.

Limitations

· It’s not the best choice for running complex scripts.


· It might not work on all distributions due to portability issues.
· The shell script grammar can be a little confusing for beginners.

#4. KornShell (ksh)

The KornShell, or ksh, was developed by David Korn at AT&T Bell Laboratories. It
combines the best features of the Bourne Shell and the C Shell, offering a powerful and
user-friendly shell with advanced scripting capabilities. This shell has superior speed
performance compared to the C and Bourne shells.

Features:

· Shell Name: It is referred to as ksh.


· Full Path to Command: /bin/ksh or /bin/ksh93.
· Default Prompt for Non-Root User: $.
· Default Prompt for Root User: #.

Advanced Features

· Support for built-in mathematical functions and floating-point arithmetic.


· Integration of object-oriented programming capabilities.
· Enhanced extensibility for built-in commands.
· Compatibility with the Bourne shell.

Limitations

· ksh usually doesn’t work well with extremely complex scripts.


· ksh scripts may not work universally on all shells.
Activity-8:

1. Write a cron job that runs all essential apps on an hourly/daily/weekly/monthly basis
(for example, executing an antivirus scan).

How to Automate Tasks with cron Jobs in Linux

If you're working in IT, you might need to schedule various repetitive tasks as part of your
automation processes.

For example, you could schedule a particular job to periodically execute at specific times of
the day. This is helpful for performing daily backups, monthly log archiving, weekly file
deletion to create space, and so on.

And if you use Linux as your OS, you'll use something called a cron job to make this happen.

What is a cron?
Cron is a job scheduling utility present in Unix-like systems. The crond daemon enables cron
functionality and runs in the background. Cron reads the crontab (cron tables) to run
predefined scripts.
By using a specific syntax, you can configure a cron job to schedule scripts or other
commands to run automatically.

For individual users, the cron service checks the following location: /var/spool/cron/crontabs (each user's crontab is stored as a file in this directory).

What are cron jobs in Linux?


Any task that you schedule through crons is called a cron job. Cron jobs help us automate our
routine tasks, whether they're hourly, daily, monthly, or yearly.

Now, let's see how cron jobs work.

How to Control Access to crons


In order to use cron jobs, an admin needs to allow cron jobs to be added for users in the
'/etc/cron.allow' file.

If you get a prompt like this, it means you don't have permission to use cron.

Cron job addition denied for user John.


To allow John to use crons, include his name in '/etc/cron.allow'. This will allow John to
create and edit cron jobs.
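
As a quick illustration (the username is only an example), an administrator could append the user to the allow file from the shell:

# Add the user 'john' to the list of users permitted to use cron
echo "john" | sudo tee -a /etc/cron.allow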


Users can also be denied access to cron job access by entering their usernames in the file
'/etc/cron.d/cron.deny'.

How to Add cron Jobs in Linux


First, to use cron jobs, you'll need to check the status of the cron service. If cron is not
installed, you can easily download it through the package manager. Just use this to check:

# Check cron service on Linux system


sudo systemctl status cron.service
Cron job syntax
Crontabs use the following flags for adding and listing cron jobs.

· crontab -e: edits crontab entries to add, delete, or edit cron jobs.
· crontab -l: list all the cron jobs for the current user.
· crontab -u username -l: list another user's crons.
· crontab -u username -e: edit another user's crons.
When you list crons, you'll see something like this:

# Cron job example


* * * * * sh /path/to/script.sh
In the above example,

· * * * * * represents minute(s) hour(s) day(s) month(s) weekday(s), respectively.
· sh represents that the script is a bash script and should be run from /bin/bash.
· /path/to/script.sh specifies the path to the script.

The values each field can take are summarized below.

FIELD    VALUES  DESCRIPTION
Minutes  0-59    Command would be executed at the specific minute.
Hours    0-23    Command would be executed at the specific hour.
Days     1-31    Command would be executed on these days of the month.
Months   1-12    The month in which tasks need to be executed.
Weekdays 0-6     Days of the week on which commands would run. Here, 0 is Sunday.

Below is the summary of the cron job syntax.

* * * * * sh /path/to/script/script.sh
| | | | | |
| | | | | Command or Script to Execute
| | | | |
| | | | |
| | | | |
| | | | Day of the Week(0-6)
| | | |
| | | Month of the Year(1-12)
| | |
| | Day of the Month(1-31)
| |
| Hour(0-23)
|
Min(0-59)

Cron job examples

Below are some examples of scheduling cron jobs.


SCHEDULE      SCHEDULED VALUE
5 0 * 8 *     At 00:05 in August.
5 4 * * 6     At 04:05 on Saturday.
0 22 * * 1-5  At 22:00 on every day-of-week from Monday through Friday.

It is okay if you are unable to grasp this all at once. You can practice and generate cron
schedules with the crontab guru.
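
Tying this back to the activity, below is a hedged sketch of crontab entries that run an antivirus scan on hourly, daily, weekly, and monthly schedules; the clamscan/freshclam paths, the scanned directories, and the log files are assumptions for illustration, and the lines would be added with crontab -e:

# minute hour day-of-month month day-of-week   command
0 * * * *   /usr/bin/clamscan -r /home/john/Downloads >> /var/log/clamscan-hourly.log 2>&1    # hourly quick scan
30 2 * * *  /usr/bin/clamscan -r /home                >> /var/log/clamscan-daily.log 2>&1     # daily at 02:30
0 3 * * 0   /usr/bin/clamscan -r /var/www             >> /var/log/clamscan-weekly.log 2>&1    # weekly, Sunday 03:00
0 4 1 * *   /usr/bin/freshclam                        >> /var/log/freshclam-monthly.log 2>&1  # monthly, 1st at 04:00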

How to set up a cron job


In this section, we will look at an example of how to schedule a simple script with a cron job.

1. Create a script called date-script.sh which prints the system date and time
and appends it to a file. The script is shown below:

Script for printing date.
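
The screenshot itself is not reproduced here; a minimal sketch of what date-script.sh could contain (the output file date-out.txt matches the file checked in step 4, and its location is assumed to be the user's home directory):

#!/bin/bash
# Append the current system date and time to date-out.txt
echo "$(date)" >> ~/date-out.txt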

2. Make the script executable by giving it execution rights.

chmod 775 date-script.sh


3. Add the script in the crontab using crontab -e.
Here, we have scheduled it to run per minute.

Adding a cron job in crontab every minute.
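
The corresponding crontab entry would look something like the following (the absolute path to the script is an assumption):

# Run date-script.sh every minute
* * * * * /bin/bash /home/john/date-script.sh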

4. Check the output of the file date-out.txt. According to the script, the
system date should be printed to this file every minute.
Output of our cron job.
Activity-9

1. Compare static and DHCP IP addresses and check whether these can be switched over.

Static IP Address and Dynamic IP Address are both used to identify a computer on a network
or on the Internet.

· A static IP address is provided by the Internet Service Provider and remains fixed for as long as the system is connected to the network.
· A dynamic IP address is provided by DHCP; generally, a company gets a single static IP address and then generates dynamic IP addresses for the computers within the organization's network.

Read through this article to find out more about static and dynamic IP addresses and how
they are different from each other.

What is an IP Address?

An IP address is a numerical identifier that identifies a computer on a network that


communicates using the Internet Protocol. An IP address can be used to identify a host or
network interface and address a specific location.

IP address is provided by the Internet Service Provider and is called the logical address of a
computer connected on a network. Every unique instance linked to any computer
communication network employing the TCP/IP communication protocols is given an IP
address.

When network nodes connect to a network, the Dynamic Host Configuration Protocol (DHCP) server allocates IP addresses. DHCP assigns IP addresses from a pool of available addresses that are part of the overall addressing scheme. Even though DHCP primarily offers dynamic addresses, many machines reserve static IP addresses that are assigned to a single entity and cannot be reused.
IP addresses are generally represented by a 32-bit unsigned binary value. It is represented in a
dotted decimal format. For example, "192.165.20.40" is a valid IP address.

What is Static IP Address?

A static IP address is explicitly allocated to a device rather than one that a DHCP server has
assigned. Because it does not change, it is called static.

Static IP addresses can be configured on routers, phones, tablets, desktops, laptops, and any
other device that can use an IP address. This can be done either by the device itself handing
out IP addresses or by manually typing the IP address into the device.

If you want to host a website from your home, have a file server on your network, utilize
networked printers, forward ports to a specific device, run a print server, or use a remote
access application, you'll need a static IP address. DNS servers are an example of a static IP
address at work.

What is Dynamic IP Address?

An ISP gives you a dynamic IP address that you can use for a limited time. If a dynamic
address isn't in use, it can be allocated to another device automatically. DHCP or PPPoE are
used to assign dynamic IP addresses.

Internet Service Providers and networks with many connecting clients or end-nodes
commonly use dynamic IP addresses. A DHCP server handles the task of assigning,
reassigning, and altering dynamic IP addresses. The scarcity of static IP addresses on IPv4 is
one of the key reasons for using dynamic IP addresses. Dynamic IP addresses allow a single
IP address to be swapped across many nodes to get around this problem.

Difference between Static and Dynamic IP Address

The following points highlight the major differences between a static IP address and a dynamic IP address:

· Provider: A static IP address is provided by the Internet Service Provider (ISP), while a dynamic IP address is generated by DHCP (Dynamic Host Configuration Protocol).
· Changes: A static IP address does not get changed with time, while a dynamic IP address can be changed at any time.
· Security: A static IP address is less secure, while a dynamic IP address, being volatile in nature, is less risky.
· Designation: A static IP address is difficult to assign or reassign, while a dynamic IP address is easy to assign and reassign.
· Device tracking: A device using a static IP address can be traced easily, while a device using a dynamic IP address is difficult to trace.
· Stability: A static IP address is highly stable, while a dynamic IP address is less stable than a static IP address.
· Cost: A static IP address is costly to maintain, while a dynamic IP address is cheaper to use and maintain.

Conclusion

To conclude, static IP addresses are provided by ISPs and they remain fixed, while dynamic
IP addresses are assigned by the DHCP and they keep changing regularly when a user logs in.
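
As for whether the two can be switched over: on a typical Linux system this is only a configuration change, not a property of the hardware. A minimal sketch using NetworkManager's nmcli (the connection name "Wired connection 1", the addresses, and the DNS server are assumptions):

# Switch the connection to a manually configured (static) address
sudo nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8

# Switch the same connection back to DHCP (dynamic addressing)
sudo nmcli con mod "Wired connection 1" ipv4.method auto

# Re-activate the connection so the change takes effect
sudo nmcli con up "Wired connection 1"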
2. Compare and study the different options offered by Linux for package management.

Overview

Packages in Linux are similar to the installer executables used in the Windows operating system, but a package itself is not executable. A package in Linux is a compressed software archive file containing all the files that ship with a software application and provide its functionality. A package can be a command-line utility, a GUI application, or a software library. Installing a package is the same process as installing any application, software, or utility in Windows.

Introduction to Packages in Linux

Linux distributions use a package manager to install packages. Package management in Linux can be explained as follows: a package is a set of files, metadata, and other information required to install any software, application, or tool on a Linux system. Packages are required to install, update, and upgrade these tools. The tools, software, and the Linux distribution itself are upgraded through package managers. Instructions on how to update and upgrade packages are covered further in the article.

Core Concepts for Package Management

Regarding package management in Linux, package management is the term used to signify
the installation and maintenance of Packages in your system. Package managers reduce the
requirement for manually downloading and installing various dependencies required for the
software.

Packages

A package contains all the necessary data required for the installation and maintenance of the
software package. These packages are created by someone known as a package maintainer. A
package maintainer takes care of the packages. They ensure active maintenance, bug fixes if
any, and the final compilation of the package.
Repositories

These Packages are present in the Repositories that contain packages specially designed,
compiled, and maintained for each Linux version and distribution. These Repositories contain
thousands of packages created by the distribution vendors. Sometimes projects may handle
their packaging and distribution.

Dependencies

Some packages might require other pre-installed software to function correctly. A resource or piece of software that a package depends on is called its dependency. Dependencies include metadata on how to build the code you depend on and information on where to find the files containing it. The package manager takes care of all these problems for you: it will install, modify, upgrade, update, and remove package files, and it provides dependency resolution. As an example of resolving a dependency, suppose you have software that requires Python version 3.1 and you are trying to install another piece of software that requires version 3.3; this causes a conflict, which must be resolved before the installation of this software can proceed, and the package manager facilitates this. The package manager also ensures we receive the original and authentic package by verifying its certificates and checksums to confirm it has not been modified.

What is a Package Manager in Linux?

In simple terms, a package manager is a software tool used for package management in Linux
i.e. to manage the installation, removal, and updating of various software packages. It can be
thought of as a hub for all software packages available for your system. The package manager
keeps track of all the installed packages on the system, including their dependencies, and uses
this information to resolve conflicts and handle updates.

Using a package manager in Linux can save us a lot of time and effort compared to manually
installing software and its dependencies. When we install a package, the package manager
automatically checks if any other software is required for it to work correctly and installs
these dependencies for us. This relieves us from the problem of figuring out what other
softwares are needed and manually installing them. A package manager can also
automatically check for updates to installed packages and install them for us. This helps us to
keep our system up-to-date and secure.

Functions of Package Manager

Package managers can be of two types based on their functions. The first type is low-level, which handles installing a package, upgrading a package, or checking which packages are installed. The other type is high-level, which additionally handles dependency resolution.

Comparison between Various Package Managers and How to Use Them?

· DPKG – This is the Debian package management tool (short for "Debian Package"). All Debian-based Linux systems and their distros use DPKG. DPKG is used with packages made for Debian-based Linux, which end with the .deb extension, although it cannot download and install packages and their dependencies automatically.

o To install a package with DPKG, use the following command dpkg -i


package_name.deb // This command will install the package with name
package_name.deb
o To remove a package with DPKG, use the following command dpkg -r
package_name // This command will remove the package named
package_name

· APT - APT is the abbreviation for Advanced Packaging Tool. It is the most widely
used tool and the default package manager available in Ubuntu and other Debian-
based distros.
o To install a package with apt, use the following command sudo apt install
package_name // This command will install the package with the name
package_name, change it according to the package name you wish to install
o To remove a package with apt, use the following command sudo apt remove
package_name // This command will remove the package with the name
package_name. However, this doesn’t remove the dependencies and package
configurations.
o To completely remove the package with apt, use the following command sudo
apt purge package_name // This command completely removes the package as
well as the dependencies and configuration of the packages.
o To remove any leftover dependencies, use the following command sudo apt
autoremove // This will automatically remove any dependencies or leftovers
from previously removed packages.
o The apt update command: sudo apt update // This command gets a copy of the
latest version of all the packages installed in our system from the repositories.
Please note this does not upgrade any packages and only fetches the latest
version of the package.
o The apt upgrade command: sudo apt upgrade // This command will check the
list of available upgrades and then upgrade the packages one by one. Usually,
this command is run after “sudo apt update” so that initially, the list of all
available updates is updated with the update command, and the upgrade is
done with the Sudo upgrade command.
o To upgrade one specific package as per the requirement sudo apt upgrade
package_name // This command will only upgrade that specific package.
However, you need to run the update command first to get an update, and then
you can upgrade the package.
· APT and APT – GET

o APT and APT-GET are very similar. You can consider apt a more modern and user-friendly front end to apt-get. Apt is more commonly used than apt-get, but apt-get still has its place, such as for low-level operations and use in scripts.

· YUM - This is the abbreviation for "Yellow Dog Updater, Modified". It was once known as YUP, or Yellow Dog Updater. This package manager is primarily used in Red Hat Enterprise Linux. It is a high-level package manager that can perform functions such as dependency resolution. Because yum downloads and installs the packages itself, it does not require manually downloaded package files.
o To install a package with yum, use the following command yum install
package_name // This command will install the package with the name
package_name, and change it according to the package name you wish to
install.
o To remove a package with yum, use the following command yum remove
package_name // This command will remove the package named
package_name and resolve any dependencies
o To update a package using yum, use the following command yum update
package_name // This command will automatically resolve any dependencies
and update the package to the latest stable version.
o The update command: yum update // This command will automatically fetch
and install all the updates available for your system.
· DNF - This is the abbreviation for "Dandified YUM". This package manager is the successor to YUM. This version includes several improvements, such as better performance and quicker dependency resolution.
· RPM - This is the abbreviation for "Red Hat Package Manager". This package manager is used in Red Hat-based Linux operating systems such as Fedora, CentOS, etc. RPM is used with packages made for Red Hat-based Linux, and these packages end with the .rpm extension. It is a low-level package manager that can perform functions such as installation, upgrade, and removal of packages. RPM requires the package file to be downloaded before it can install the package.
o To install a package with RPM, use the following command rpm -i
package_name.rpm // This command will install the package with name
package_name.rpm
o To upgrade a package with RPM, use the following command rpm -U package_name.rpm // This command will upgrade the package with the name package_name.rpm
o To remove or erase a package with RPM, use the following command rpm -e
package_name // This command will remove the package named
package_name
· Pacman – Lastly, we have a very famous package manager called Pacman
abbreviation for "Package manager". This Package manager is used majorly in Arch
Linux and Arch Linux-based distros. In addition to automatically obtaining and
installing all required packages, Pacman is capable of resolving dependencies.
Pacman simplifies the process of installation and maintenance of packages.
o To install a package with pacman, use the following command pacman -S
package_name // This command will install the package with name
package_name
o To upgrade packages with pacman, use the following command pacman -Syu // This command will update all the packages in the system. It synchronizes with the repository and updates all the system packages based on the updates available.
o To remove or erase a package with pacman, use the following
command pacman -Rs package_name // This command will remove the
package named package_name
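
To make the comparison above concrete, here is a hedged side-by-side of equivalent operations across the three main families (the package name htop is only an example):

# Refresh package metadata
sudo apt update           # Debian/Ubuntu
sudo dnf check-update     # Fedora/RHEL
sudo pacman -Sy           # Arch Linux

# Install a package
sudo apt install htop
sudo dnf install htop
sudo pacman -S htop

# Remove a package
sudo apt remove htop
sudo dnf remove htop
sudo pacman -R htop

# Upgrade the whole system
sudo apt upgrade
sudo dnf upgrade
sudo pacman -Syu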

What are the Various Packaging Formats in Linux?

Various vendors provide their package manager and package format. Some package
managers do allow the usage of multiple packaging formats to be used. Some of the prevalent
packaging formats include:

· RPM packages (.rpm)

o The .rpm package extension was designed and developed by the Red Hat
Linux distribution and used in the Red Hat Package manager (RPM)

· Debian packages (.deb)

o The .deb package was designed and developed by the Debian Linux
distribution. They are majorly used in Debian-based Linux and distros.

· TAR archives (.tar)

o The .tar format is short for Tape Archive. It is used just for creating an archive, i.e., combining multiple files and directories into one file. Tar archives do not compress the files and directories they contain.

· TGZ archives (.tgz)


o The .tgz format is the same as a Tar archive, except that the files in a tgz archive are compressed using the GNU Zip (gzip) compression technique. The result is a compressed archive file that is smaller in size.

· GZip Archives (.gz)

o The .gz archives are created after direct compression using the GZIP Utility.

Activity-10

1. Identify a few alternatives to OpenLDAP and make a comparison.

· OpenLDAP has been one of the most popular choices for implementing the LDAP
protocol since its inception in 1998.

· However, as more LDAP and directory solutions enter the scene, understanding each
and deciding which best suits your needs becomes more challenging.

· OpenLDAP Overview
· OpenLDAP is command-line driven software that allows IT admins to build and
manage an LDAP directory. Due to its minimal UI and reliance on the CLI, it requires
an in-depth knowledge of the LDAP protocol and directory structure.

· However, IT admins can supplement OpenLDAP with a third-party application, like


phpLDAPadmin, which is a web application that allows admins to interact with
OpenLDAP via a basic UI. Of course, because of its open source nature, it can be
highly flexible and customizable.

· OpenLDAP’s pure-LDAP approach differs from most LDAP software, which


generally includes more features and functionality than OpenLDAP does. This makes
OpenLDAP a tech-savvy option that suits technical use cases, like supporting Linux
servers and Linux-based applications. Further, because it requires more expertise,
OpenLDAP has historically been favored by the Ops crowd.

· OpenLDAP's Benefits
· OpenLDAP often wins out over its competitors for its cost, flexibility, and OS-
agnosticism. We’ll cover these below, and then dive into the OpenLDAP alternatives
it’s most often up against.

· Low Costs
· OpenLDAP is free from a software perspective (of course, not free to implement if
you include somebody’s time, hosting costs, etc.). This is a significant driving factor
in its popularity, making OpenLDAP a common choice for startups and lean IT
teams.

· While the software is free, however, OpenLDAP incurs hidden costs in its
maintenance and management. Since it is generated as simple-source code that needs
to be built into the “service,” the challenge of OpenLDAP is installing, configuring,
and implementing the code into a working directory service instance.

· For MSPs, every additional client multiplies this challenge, as each individual
customer generally requires their own OpenLDAP instance. Due to this hurdle, some
organizations and MSPs opt for a more user-friendly and feature-rich option.

· OS-Agnosticism
· OpenLDAP supports Windows, Mac, and Linux operating systems. This contrasts
with other solutions, like Microsoft AD; as a Windows product, AD fares better with
Windows than with other operating systems.

· OpenLDAP isn’t the only OS-agnostic solution, however. Other directory solutions,
like JumpCloud, are OS-agnostic as well.

· Flexibility
· Being open-source makes OpenLDAP incredibly flexible. Its minimal UI and code-
reliant functionality don’t lock users into predetermined workflows; rather, IT can
manipulate the software to do exactly what they need.

· This gives it broad applicability; however, the minimal interface also requires more
expertise than competing solutions. We’ll get into this trade-off next.
· Where OpenLDAP Falls Short

· Manual-Intensive Configuration Management


· With OpenLDAP, directory configuration and management are manual. This makes
app additions and directory modifications difficult; keeping up with app dependencies
and maintaining your directory’s format and integrity takes significant ongoing
manual labor. This need for ongoing maintenance, combined with OpenLDAP’s
reliance on code, means OpenLDAP requires significant expertise that’s available on
an ongoing basis.

· More Limited Toolset than Competitors


· While OpenLDAP is flexible in terms of how LDAP can be implemented, it is not
generally considered to be a robust toolset. This is because OpenLDAP’s functionality
is limited to implementing the LDAP protocol; other directory services, such as
JumpCloud, work with several other protocols as well, broadening their capabilities
which helps establish a more foundational technology for IT admins to build upon.

· Limited Scope
· By only working with LDAP, OpenLDAP’s directory approach is more narrow than
other solutions on the market. As SaaS and cloud-based solutions replace legacy-
owned software, the number of protocols different solutions use to authenticate and
authorize users is growing. Modern directory services have begun to follow suit with
multi-protocol approaches. These allow the directory to unify more resources — not
just those that are compatible with LDAP — and connect them with users.

· A robust multi-protocol directory like JumpCloud, for example, can unify resources
that use LDAP, SAML, SCIM, RADIUS, and many other protocols.

· By comparison, OpenLDAP only works with LDAP-compatible resources. Because


not all resources are likely to be compatible with LDAP anymore, this disperses
resources and precludes the option of building a truly unified directory.

· OpenLDAP Alternatives
· While there are many directory solutions out there, there are a few big competitors that OpenLDAP often goes up against.

· Compare OpenLDAP and JumpCloud

· Because both OpenLDAP and JumpCloud are free to try, we recommend testing each
out in your own environment with a small subset or test environment. This will allow
you to experience the pros and cons of each and evaluate which would work better for
your team and environment.
Activity-11

1. Compare and explore other network commands required for a sysadmin and interpret their functions and usage.

ifconfig Display and manipulate route and network interfaces.

ip It is a replacement of ifconfig command.

traceroute Network troubleshooting utility.

tracepath Similar to traceroute but doesn't require root privileges.

ping To check connectivity between two nodes.

netstat Display connection information.


ss It is a replacement of netstat.

dig Query DNS related information.

nslookup Find DNS related query.

route Shows and manipulate IP routing table.

host Performs DNS lookups.

arp View or add contents of the kernel's ARP table.

iwconfig Used to configure wireless network interface.

hostname To identify a network name.

curl or wget To download a file from internet.

mtr Combines ping and tracepath into a single command.

whois Will tell you about the website's whois.

ifplugstatus Tells whether a cable is plugged in or not.

Linux Networking Commands

Every computer is connected to some other computer through a network, whether internally or externally, to exchange information. This network can be as small as a few computers connected in your home or office, or as large and complicated as a big university network or the entire Internet.
Maintaining a system's network is a task of System/Network administrator. Their task
includes network configuration and troubleshooting.

Explanation of the above commands:

ifconfig: ifconfig is short for interface configurator. This command is utilized in network inspection, initializing an interface, enabling or disabling an IP address, and configuring an interface with an IP address. It is also used to display network and routing interface details.

The basic details shown with ifconfig are:

o MTU
o MAC address
o IP address

Syntax:

1. ifconfig

ip: It is the updated and more recent replacement for the ifconfig command. The command provides the information of every network interface, as ifconfig does. Also, it can be used to get information about a particular interface.

Syntax:

1. ip a
2. ip addr

traceroute: The traceroute command is one of the most helpful commands in the networking field. It's used to troubleshoot the network. It identifies the delay and determines the pathway to our target. Basically, it aids in the below ways:

o It determines the location of the network latency and informs it.


o It follows the path to the destination.
o It gives the names and recognizes all devices on the path.
Syntax:

1. traceroute <destination>

tracepath: The tracepath command is the same as the traceroute command, and it is used to
find network delays. Besides, it does not need root privileges. By default, it comes pre-
installed in Ubuntu. It traces the path to the destination and recognizes all hops in it. It
identifies the point at which the network is weak if our network is not strong enough.

Syntax:

1. tracepath <destination>

ping: It is short for Packet Internet Groper. The ping command is one of the widely used
commands for network troubleshooting. Basically, it inspects the network connectivity
between two different nodes.

Syntax:

1. ping <destination>

netstat: It is short for network statistics. It gives statistical figures for the network interfaces, including open sockets, connection information, and routing tables.

Syntax:

1. netstat

ss: This command is the substitute for the netstat command. The ss command is more informative and much faster than netstat. The ss command's faster response is possible because it fetches all the information directly from kernel space.

Syntax:

1. ss
nslookup: The nslookup command is an older alternative to the dig command. It is also used for DNS-related problems.

Syntax:

1. nslookup <domainname>

dig: dig is short for Domain Information Groper. The dig command is an improved edition of the nslookup command. It is used to perform DNS lookups and query DNS name servers. Also, it is used to troubleshoot DNS-related problems. Mainly, it is used to verify DNS mappings, host addresses, MX records, and every other DNS record for a better understanding of the DNS topology.

Syntax:

1. dig <domainname>

route: The route command shows and manipulates the routing table available on our system. Basically, a route is used to determine the best way to transfer packets toward a destination.

Syntax:

1. route

host: The host command shows the IP address for a hostname and the domain name for an
IP address. Also, it is used to get DNS lookup for DNS related issues.

Syntax:

1. host -t <resourceName>

arp: The arp command is short for Address Resolution Protocol. This command is used to view and add entries in the kernel's ARP table.

Syntax:

1. arp
iwconfig: It is used to view and configure a wireless network interface (for example, to check or set its SSID).

Syntax:

1. iwconfig

hostname: It is a simple command which is used to see and set the system's hostname.

Syntax:

1. hostname

curl and wget: These commands are used to download files from the internet via the CLI. curl must be given the -O option to save the file, while wget can be used directly.

curl Syntax:

1. curl -O <fileLink>

wget Syntax:

1. wget <fileLink>

mtr: The mtr command is a mix of the traceroute and ping commands. It continually displays information about the packets transferred, along with the ping time of every hop. Also, it is used to spot network problems.

Syntax:

1. mtr <path>

whois: The whois command fetches the registration information related to a website. We can get details such as the owner and the registrar information.

Syntax:

1. whois <websiteName>

ifplugstatus: The ifplugstatus command checks whether a cable is currently plugged into a
network interface. It is not available in Ubuntu directly. We can install it with the help of the
below command:

1. sudo apt-get install ifplugd


Syntax:

1. ifplugstatus

iftop: The iftop command is utilized in traffic monitoring.

tcpdump: The tcpdump command is widely used in network analysis alongside the other Linux networking commands. It captures the traffic passing through a network interface and displays it. When troubleshooting the network, this kind of packet-level visibility is crucial.

Syntax:

1. $ tcpdump -i <network_device>
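
As a hedged sketch of how a sysadmin might chain several of these commands into one quick health check (the target host example.com is an assumption; any reachable host would do):

#!/bin/bash
# Quick network health check built from the commands discussed above
TARGET="example.com"

ip addr show                                # interfaces and their IP addresses
ip route show                               # routing table
ping -c 4 "$TARGET"                         # basic connectivity test
dig +short "$TARGET"                        # DNS resolution
ss -tuln                                    # listening TCP/UDP sockets
mtr --report --report-cycles 5 "$TARGET"    # combined ping/traceroute report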

Activity-12:

1. Compare and study the difference between an application server and a web server.

A server is a central repository where information and computer programs are held and accessed by programs within the network. A web server and an application server are both kinds of servers: the former is employed to deliver websites, while the latter deals with application operations performed between users and the organization's back-end business applications.
Web Server:
It is a computer program that accepts requests for data and sends back the specified documents. A web server is a computer where the web content is kept. Essentially, a web server is used to host websites, although other kinds of servers also exist, such as gaming, storage, FTP, and email servers.
Example of Web Servers:
· Apache Tomcat
· Resin
Application server:
It encompasses a web container as well as an EJB container. Application servers organize the run environment for enterprise applications. An application server is a kind of server designed to install, host, and operate applications and services for users, IT services, and organizations. In this, a GUI as well as HTTP and RPC/RMI protocols are used.
Examples of Application Server:
· Weblogic
· JBoss
· Websphere

Difference between web server and application server:

1. A web server encompasses a web container only, while an application server encompasses a web container as well as an EJB container.
2. A web server is fitted for static content, whereas an application server is fitted for dynamic content.
3. A web server consumes or utilizes fewer resources, while an application server utilizes more resources.
4. Web servers arrange the run environment for web applications, while application servers arrange the run environment for enterprise applications.
5. In web servers, multithreading is supported, while in application servers multithreading is not supported.
6. A web server's capacity is lower than an application server's.
7. In a web server, HTML and HTTP protocols are used, while in an application server, GUI as well as HTTP and RPC/RMI protocols are used.
8. A web server supports processes that are not resource-intensive, while an application server supports resource-intensive processes.
9. Transactions and connection pooling are not supported by a web server but are supported by an application server.
10. A web server's fault tolerance is low compared to an application server, which has high fault tolerance.
11. Web server examples are Apache HTTP Server and Nginx; application server examples are JBoss and GlassFish.
2. Identify the role of virtual host.

Introduction to Virtual Host

The Virtual Host term refers to the practice of running multiple websites (like enterprise1.test.com and enterprise2.test.com) on one machine. Virtual hosts can be "IP-based", meaning that there is a distinct IP address for every website, or "name-based", meaning that there is more than one name running on each IP address. The fact that they are running on the same physical server is not apparent to the end user.

Apache server was one of the initial servers for supporting IP-based virtual hosts. The 1.1 and
later versions of Apache support both name-based and IP-based virtual hosts. The latter
virtual host variants are sometimes also known as non-IP or host-based virtual hosts.

A virtual host began, in its early days, with the aim of hosting multiple websites on one machine. It also allows sharing of a single machine's resources, like CPU and memory. The resources are utilized and shared in such a manner that maximum capacity is achieved.

The virtual host serves more aims than ever with the development of cloud computing,
including solutions like virtual storage hosting, virtual server hosting, virtual application
hosting, and sometimes entire/virtual data center hosting as well.

Virtual Host Working

There are several ways to set up a virtual host, and almost all ways are listed and explained
below that are utilized today:

o IP-based
o Name-based
o Port-based

IP-based

It is one of the easiest ways and can be used to apply distinct directives based on the IP address. We use a distinct IP for each domain in IP-based virtual hosting.

Multiple IP addresses point to the server's different domains, with a single IP dedicated to each site. This virtual hosting is achieved by configuring more than one IP address on one server.

Name-based

These virtual hosts are the most frequently and commonly used virtual hosting method today. One IP address is used for every domain on the given server. When a browser attempts to connect to the server, the name-based mechanism sends the server the domain name the browser is trying to reach. The server inspects the host configuration and returns the correct website if that domain name is configured.

Port-based

This type of virtual hosting is also similar to IP-based virtual hosting. The main difference between the two is that rather than using a distinct IP address for every virtual host, we utilize ports: the server is configured to respond with more than one website depending on the server port.
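
A minimal, hedged sketch of how the name-based and port-based variants look in an Apache configuration (the document roots, the extra port 8080, and the third site are assumptions; the domain names reuse the examples above):

# Name-based: both sites share one IP address and are distinguished by ServerName
<VirtualHost *:80>
    ServerName enterprise1.test.com
    DocumentRoot /var/www/enterprise1
</VirtualHost>

<VirtualHost *:80>
    ServerName enterprise2.test.com
    DocumentRoot /var/www/enterprise2
</VirtualHost>

# Port-based: the same IP address answers on a second port for another site
Listen 8080
<VirtualHost *:8080>
    DocumentRoot /var/www/enterprise3
</VirtualHost>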

Virtual Hosting

Virtual hosting is a technique for hosting more than one domain name (with separate handling of each name) on one server. It allows a server to share its resources, like processor cycles and memory, without requiring every service provided to use the same hostname. The virtual hosting term is generally used in web server contexts, but the principle carries over to other internet services.
Shared web hosting is one of the most extensively used applications. The price for shared web hosting is lower than for a dedicated web server because many customers can be hosted on one server. Also, it is very common for a single entity to want to use more than one name on the same machine so that the names reflect the services offered rather than where those services happen to be hosted.

o There are two primary types of virtual hosting: IP-based and name-based.
o Name-based virtual hosting utilizes a hostname presented by the client.
o It saves IP addresses and the related administrative overhead.
o However, the protocol being served must supply the hostname at the right point.
o There are serious difficulties with name-based virtual hosting with TLS/SSL.
o IP-based virtual hosting utilizes a separate IP address for every hostname; it can be implemented with any protocol but needs a dedicated IP address per domain name served.
o Also, port-based virtual hosting is possible in principle but is rarely used in practice because it is unfamiliar to users.

IP-based and name-based virtual hosting can be merged: a server may contain more than
one IP address and serve more than one name on a few or each of those IP addresses. It
can be helpful when using TLS/SSL with wildcard certificates.

3. Explain different types of Apache virtual hosts and how they are set up.

Introduction
The Apache HTTP server is a popular open-source web server that offers flexibility, power,
and widespread support for developers. Apache server configuration does not take place in a
single monolithic file, but instead happens through a modular design where new files can be
added and modified as needed. Within this modular design, you can create an individual site
or domain called a virtual host.

Using virtual hosts, one Apache instance can serve multiple websites. Each domain or
individual site that is configured using Apache will direct the visitor to a specific directory
holding that site’s information. This is done without indicating to the visitor that the same
server is also responsible for other sites. This scheme is expandable without any software
limit as long as your server can handle the load.

In this guide, you will set up Apache virtual hosts on an Ubuntu 20.04 server. During this
process, you’ll learn how to serve different content to different visitors depending on which
domains they are requesting by creating two virtual host sites.

Note: If you do not have domains available at this time, you can use test values locally on
your computer. Step 6 of this tutorial will show you how to test and configure your test
values. This will allow you to validate your configuration even though your content won’t be
available to other visitors through the domain name.
Step 1 — Creating the Directory Structure

The first step is to create a directory structure that will hold the site data that you will be
serving to visitors.

Your document root, the top-level directory that Apache looks at to find content to serve, will
be set to individual directories under the /var/www directory. You will create a directory here
for each of the virtual hosts.
Within each of these directories, you will create a public_html directory.
The public_html directory contains the content that will be served to your visitors. The parent
directories, named here as your_domain_1 and your_domain_2, will hold the scripts and
application code to support the web content.

Use these commands, with your own domain names, to create your directories:

1. sudo mkdir -p /var/www/your_domain_1/public_html


2. sudo mkdir -p /var/www/your_domain_2/public_html
Be sure to replace your_domain_1 and your_domain_2 with your own respective domains.
For example, if one of your domains was example.com you would create this directory
structure: /var/www/example.com/public_html.
Step 2 — Granting Permissions
You’ve created the directory structure for your files, but they are owned by the root user. If
you want your regular user to be able to modify files in these web directories, you can change
the ownership with these commands:
1. sudo chown -R $USER:$USER /var/www/your_domain_1/public_html
2. sudo chown -R $USER:$USER /var/www/your_domain_2/public_html
The $USER variable will take the value of the user you are currently logged in as when you
press ENTER. By doing this, the regular user now owns the public_html subdirectories where
you will be storing your content.
You should also modify your permissions to ensure that read access is permitted to the
general web directory and all of the files and folders it contains so that the pages can be
served correctly:

1. sudo chmod -R 755 /var/www



Your web server now has the permissions it needs to serve content, and your user should be
able to create content within the necessary folders. The next step is to create content for your
virtual host sites.
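
Since each public_html directory needs something to serve, a minimal placeholder page can be dropped into each document root (the one-line markup below is only an example, not part of the original directory-setup commands):

echo "<html><body><h1>Welcome to your_domain_1</h1></body></html>" > /var/www/your_domain_1/public_html/index.html
echo "<html><body><h1>Welcome to your_domain_2</h1></body></html>" > /var/www/your_domain_2/public_html/index.html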

Step 3 — Creating New Virtual Host Files

Virtual host files are the files that specify the actual configuration of your virtual hosts and
dictates how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf. You can copy this file
to create virtual host files for each of your domains.

Copy the default configuration file over to the first domain:

1. sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/your_domain_1.conf
Be aware that the default Ubuntu configuration requires that each virtual host file end
in .conf.
Open the new file in your preferred text editor with root privileges:
1. sudo nano /etc/apache2/sites-available/your_domain_1.conf

With comments removed, the file will be similar to this:

/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Within this file, customize the items for your first domain and add some additional directives.
This virtual host section matches any requests that are made on port 80, the default HTTP
port.
First, change the ServerAdmin directive to an email that the site administrator can receive
emails through:
/etc/apache2/sites-available/your_domain_1.conf
ServerAdmin admin@your_domain_1
After this, add two additional directives. The first, called ServerName, establishes the base
domain for the virtual host definition. The second, called ServerAlias, defines further names
that should match as if they were the base name. This is useful for matching additional hosts
you defined. For instance, if you set the ServerName directive to example.com you could
define a ServerAlias to www.example.com, and both will point to this server’s IP address.
Add these two directives to your configuration file after the ServerAdmin line:
/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_1
ServerName your_domain_1
ServerAlias www.your_domain_1
DocumentRoot /var/www/html
...
</VirtualHost>
Next, change your virtual host file location for the document root for this domain. Edit
the DocumentRoot directive to point to the directory you created for this host:
/etc/apache2/sites-available/your_domain_1.conf
DocumentRoot /var/www/your_domain_1/public_html

Here is an example of the virtual host file with all of the adjustments made above:

/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_1
ServerName your_domain_1
ServerAlias www.your_domain_1
DocumentRoot /var/www/your_domain_1/public_html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
...
</VirtualHost>

Save and close the file.

Create your second configuration file by copying over the file from your first virtual host site:
1. sudo cp /etc/apache2/sites-available/your_domain_1.conf /etc/apache2/sites-available/your_domain_2.conf

Open the new file in your preferred editor:

1. sudo nano /etc/apache2/sites-available/your_domain_2.conf



You now need to modify all of the pieces of information to reference your second domain.
When you are finished, it should look like this:

/etc/apache2/sites-available/your_domain_2.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_2
ServerName your_domain_2
ServerAlias www.your_domain_2
DocumentRoot /var/www/your_domain_2/public_html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
...
</VirtualHost>

Save and close the file when you are finished.

Step 4— Enabling the New Virtual Host Files

Now that you have created your virtual host files, you must enable them. Apache includes some tools that allow you to do this. Enable each site with the a2ensite tool:

1. sudo a2ensite your_domain_1.conf
2. sudo a2ensite your_domain_2.conf

There will be output for both sites, similar to the example below, reminding you to reload
your Apache server:

Output
Enabling site example.com.
To activate the new configuration, you need to run:
systemctl reload apache2
Before reloading your server, disable the default site defined in 000-default.conf by using
the a2dissite command:
1. sudo a2dissite 000-default.conf
Output
Site 000-default disabled.
To activate the new configuration, you need to run:
systemctl reload apache2

Next, test for configuration errors:

1. sudo apache2ctl configtest



You should receive the following output:

Output
...
Syntax OK

When you are finished, restart Apache to make these changes take effect.

1. sudo systemctl restart apache2



Optionally, you can check the status of the server after all these changes with this command:

1. sudo systemctl status apache2



Your server should now be set up to serve two websites. If you’re using real domain names,
you can skip Step 6 and move on to Step 7. If you’re testing your configuration locally,
follow Step 6 to learn how to test your setup using your local computer.

Step 5— Testing Your Results

Now that you have your virtual hosts configured, you can test your setup by going to the
domains that you configured in your web browser:

http://your_domain_1

You can also visit your second host page and view the file you created for your second site:

http://your_domain_2

If both of these sites work as expected, you’ve successfully configured two virtual hosts on
the same server.

Note: If you adjusted your local computer’s hosts file, like in Step 6 of this tutorial, you
may want to delete the lines you added now that you verified that your configuration works.
This will prevent your hosts file from being filled with entries that are no longer necessary.
3 virtual host types

Someone who wants to visit your website types in an address and hopes to end up in the right
destination. Virtual servers handle that query in a few different ways.

You could base your virtual server on:

· Internet protocol (IP) address. Use a different IP for each domain, but point them to
one server. Allow that one server to resolve multiple IP addresses.
· Name. Use one IP for all domains on the server. During connection, ask your visitors
which site they'd like to visit. After that query, resolve the visit to the proper site.
· Port. Assign each website to a different port on the server.

Drawbacks of these methods include:

· Delays. Choose the name system, and some browsers will struggle to authenticate the
site. Your visitors could be told your site is not secure, or others may wait long
periods for your site to load.
· Complexity. It takes little coding to set up IP addresses for each site, but you may run
out of available IP addresses to use. And you must keep track of which address
corresponds to each site.

Activity-13

1. Compare the features between RAID and SSD.

What is a solid-state drive?

A solid-state drive (SSD) is a type of mass storage device used in place of a spinning hard
disk drive (HDD). Solid-state drives have no moving parts and information is saved onto
integrated circuits (ICs).

Although SSDs serve the same functions as hard drives, their internal parts are different.
SSDs store data using flash memory, allowing them to access data much faster than hard
drives.
Benefits of using a solid-state drive (SSD)

SSDs have access speeds of 35-100 microseconds, making them 25 to 100 times faster than an HDD. This makes an SSD more reliable and efficient: it uses less power, has faster access times, increases battery life, and gives quicker file transfers.

Since there are no moving parts, it is more resistant to drops and shocks, making it more resilient against data loss caused by physical or external trauma.

The absence of a rotating metal platter to store data and a moving read arm means an SSD makes little noise. In an HDD, by comparison, the rotation of the metal platter and the movement of the read arm create noise.

Lastly, a SSD is considerably more compact than a HDD because there are no moving parts.
This also means that solid-state drives are more suitable for portable electronic devices such
as tablets and cellular phones [2].

Why are solid-state drives (SSDs) important?

Everything is faster

SSDs enable “instant on” allowing your system to boot almost immediately. Imagine sitting
in class being able to access LEARN instantly, and being able to switch slides during a
lecture without waiting.

Seamless multitasking

The improved data access of a SSD allows computers to run multiple programs with ease.
Sometimes being a student means tackling a number of things at once.

Seamless multitasking gives you the opportunity to not only maximize your learning but
gives you the ability to conquer more than one task on one screen.

Increased durability and reliability


Being a student can be difficult at times. Whether you find yourself having to run from
Hagey Hall to St. Jerome’s, or trying to get to your 8:30am lecture on time, it is important
that your laptop can handle these situations.

Since SSDs have no moving parts that are susceptible to damage, they are extremely durable
and reliable.

Better system cooling

SSDs use flash memory which means they’re able to maintain more consistent operating
temperatures which will not only keep overall system temperatures down, but it will also
ensure your system stays alive for longer.

The longer your computer can stay alive, the less worries you will have stressing over buying
a new computer, and stressing over losing your files.

Better gaming

Faster data access speeds enable faster load times. If you are a part of the Games Institute,
there is an increased chance at first strikes and a seamless gaming experience.

Flexible storage

SSDs are available in multiple forms. Some forms like mSATA are able to plug into your
system’s motherboard allowing the drive to work alongside your existing hard drive.

Flexible storage is significant, especially for students. The growing number of assignments stored on your computer will eventually slow it down. Flexible storage allows you to organize your computer and lets it run more efficiently.

More time for what matters

The increased speed of a SSD means you will be able to get more done in less time. This
means you have more time to better your academic and personal development.

How do solid-state drives (SSD) work?

Solid-state drives use semiconductor chips to store data. The chips used in a solid-state drive provide non-volatile memory, meaning the data stays even without power.

SSDs cannot overwrite existing information; they have to erase it first. However, when you
delete a file in Windows or Mac OS, it is not erased immediately – the space is marked as
available for re-use. In order to actually re-use this space, the SSD has to be given a “TRIM”
command. Once there are enough pages to be erased, then the SSD will do a “garbage
collection” operation and delete the data as a block.
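
On Linux, TRIM can be checked and issued manually; a minimal sketch (which filesystems support it depends on your drive and mount options):

# Check whether the drives advertise discard/TRIM support
lsblk --discard

# Trim all mounted filesystems that support it, verbosely
sudo fstrim -av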

SSDs have more space available than what is advertised because of over-provisioning. Over-
provisioning is storage that is not available to the operating system but is instead used for
internal tasks. The over-provisioned space takes up a small percentage of the overall solid-
state drive.

Block remapping occurs at around the 70% full mark: when there is no data left to be deleted, the solid-state drive will move files around in a cycle, causing the drive to slow down.

The last process is wear levelling, the process designed to extend the life of a solid-state
drive. It arranges data so that the erase cycles are distributed evenly throughout the blocks of
the device.

What is RAID?
RAID (redundant array of independent disks) is a way of storing the same data in different
places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive
failure. There are different RAID levels, however, and not all have the goal of providing
redundancy.

How RAID works


RAID works by placing data on multiple disks and allowing input/output (I/O) operations to
overlap in a balanced way, improving performance. Because using multiple disks increases
the mean time between failures, storing data redundantly also increases fault tolerance.

RAID arrays appear to the operating system (OS) as a single logical drive.

RAID employs the techniques of disk mirroring or disk striping. Mirroring copies identical data onto more than one drive. Striping partitions the data, spreading it across multiple disk drives. Each drive's storage space is divided into units ranging from a sector of 512 bytes up to several megabytes. The stripes of all the disks are interleaved and addressed in order. Disk mirroring and disk striping can also be combined in a RAID array.

An image of a hard drive in a RAID array.


In a single-user system where large records are stored, the stripes are typically set up to be
small (512 bytes, for example) so that a single record spans all the disks and can be accessed
quickly by reading all the disks at the same time.

In a multiuser system, better performance requires a stripe wide enough to hold the typical or
maximum size record, enabling overlapped disk I/O across drives.

RAID levels
RAID devices use different versions, called levels. The original paper that coined the term
and developed the RAID setup concept defined six levels of RAID -- 0 through 5. This
numbered system enabled those in IT to differentiate RAID versions. The number of levels
has since expanded and has been broken into three categories: standard, nested and
nonstandard RAID levels.

Standard RAID levels


RAID 0. This configuration has striping but no redundancy of data. It offers the best
performance, but it does not provide fault tolerance.

A visualization of RAID 0.

RAID 1. Also known as disk mirroring, this configuration consists of at least two drives that
duplicate the storage of data. There is no striping. Read performance is improved, since either
disk can be read at the same time. Write performance is the same as for single disk storage.
A visualization of RAID 1.

RAID 2. This configuration uses striping across disks, with some disks storing error checking
and correcting (ECC) information. RAID 2 also uses a dedicated Hamming code parity, a
linear form of ECC. RAID 2 has no advantage over RAID 3 and is no longer used.

A visualization of RAID 2.

RAID 3. This technique uses striping and dedicates one drive to storing parity information.
The embedded ECC information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Because an I/O operation
addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this reason, RAID 3
is best for single-user systems with long record applications.
A visualization of RAID 3.

RAID 4. This level uses large stripes, which means a user can read records from any single
drive. Overlapped I/O can then be used for read operations. Because all write operations are
required to update the parity drive, no I/O overlapping is possible.

A visualization of RAID 4.

RAID 5. This level is based on parity block-level striping. The parity information is striped
across each drive, enabling the array to function, even if one drive were to fail. The array's
architecture enables read and write operations to span multiple drives. This results in
performance better than that of a single drive, but not as high as a RAID 0 array. RAID 5
requires at least three disks, but it is often recommended to use at least five disks for
performance reasons.

RAID 5 arrays are generally considered to be a poor choice for use on write-intensive
systems because of the performance impact associated with writing parity data. When a disk
fails, it can take a long time to rebuild a RAID 5 array.

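To illustrate the parity mechanism described above, here is a minimal Python sketch of XOR
parity: the parity block is the XOR of the data blocks, so any single missing block can be
rebuilt from the surviving blocks plus parity. The block values and the xor_blocks helper are
illustrative assumptions; real arrays operate on whole stripe units and rotate the parity
position across the drives.

# Minimal sketch of RAID 5 style XOR parity (illustrative data only).
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_group) for byte_group in zip(*blocks))

# Data blocks in one stripe (one block per data disk).
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
d2 = bytes([0x01, 0x02, 0x03, 0x04])

parity = xor_blocks([d0, d1, d2])          # stored in the stripe's parity position

# Suppose the disk holding d1 fails: rebuild it from the survivors plus parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print("rebuilt block matches:", rebuilt_d1 == d1)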

RAID 6. This technique is similar to RAID 5, but it includes a second parity scheme
distributed across the drives in the array. The use of additional parity enables the array to
continue functioning, even if two disks fail simultaneously. However, this extra protection
comes at a cost. RAID 6 arrays often have slower write performance than RAID 5 arrays.

Nested RAID levels
Some RAID levels that are based on a combination of RAID levels are referred to as nested
RAID. Here are some examples of nested RAID levels.

RAID 10 (RAID 1+0). Combining RAID 1 and RAID 0, this level is often referred to as
RAID 10, which offers higher performance than RAID 1, but at a much higher cost. In RAID
1+0, the data is mirrored and the mirrors are striped.
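As a rough sketch of how the two layers combine, the following Python example stripes
logical blocks across mirrored pairs of disks. The four-disk, two-pair layout and the
place_block helper are illustrative assumptions, not a description of any specific controller.

# Minimal sketch of RAID 1+0: stripe across mirrored pairs of disks
# (assumed layout: 4 disks arranged as 2 mirrored pairs).

NUM_PAIRS = 2  # each pair is a RAID 1 mirror; the pairs together form the RAID 0 stripe

def place_block(block_number: int):
    """Return the two physical disks that receive a copy of this logical block."""
    pair = block_number % NUM_PAIRS          # RAID 0: round-robin across pairs
    primary = pair * 2                       # RAID 1: both disks in the pair
    mirror = primary + 1                     # hold identical data
    return primary, mirror

for blk in range(4):
    print("logical block", blk, "-> disks", place_block(blk))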


RAID 01 (RAID 0+1). RAID 0+1 is similar to RAID 1+0, except the data organization
method is slightly different. Rather than creating a mirror and then striping it, RAID 0+1
creates a stripe set and then mirrors the stripe set.

RAID 03 (RAID 0+3, also known as RAID 53 or RAID 5+3). This level uses striping in
RAID 0 style for RAID 3's virtual disk blocks. This offers higher performance than RAID 3,
but at a higher cost.
RAID 50 (RAID 5+0). This configuration combines RAID 5 distributed parity with RAID 0
striping to improve RAID 5 performance without reducing data protection.

Nonstandard RAID levels


Nonstandard RAID levels vary from standard RAID levels and are usually developed by
companies or organizations mainly for proprietary use. Here are some examples.

RAID 7. A nonstandard RAID level based on RAID 3 and RAID 4 that adds caching. It
includes a real-time embedded OS as a controller, caching via a high-speed bus and other
characteristics of a standalone computer.

Adaptive RAID. This level enables the RAID controller to decide how to store the parity on
disks. It will choose between RAID 3 and RAID 5. The choice depends on what RAID set
type will perform better with the type of data being written to the disks.

Linux MD RAID 10. This level, provided by the Linux kernel, supports the creation of
nested and nonstandard RAID arrays. Linux software RAID can also support the creation of
standard RAID 0, RAID 1, RAID 4, RAID 5 and RAID 6 configurations.

Benefits of RAID
Advantages of RAID include the following:

· Improved cost-effectiveness because lower-priced disks are used in large numbers.


· Using multiple hard drives enables RAID to improve on the performance of a single hard
drive.
· Increased computer speed and reliability after a crash, depending on the configuration.
· Reads and writes can be performed faster than with a single drive with RAID 0. This is
because a file system is split up and distributed across drives that work together on the
same file.
· There is increased availability and resiliency with RAID 5's distributed parity. With
mirroring (RAID 1), two drives contain the same data, ensuring that one will continue to
work if the other fails.
When should you use RAID?
Instances where it is useful to have a RAID setup include:

· When a large amount of data needs to be restored. If a drive fails and data is lost, that
data can be restored quickly because a copy is also stored on other drives.
· When uptime and availability are important business factors. If data needs to be
restored, it can be done quickly without downtime.
· When working with large files. RAID provides speed and reliability when working with
large files.
· When an organization needs to reduce strain on physical hardware and increase
overall performance. As an example, a hardware RAID card can include additional
memory to be used as a cache.
· When disk I/O is a bottleneck. RAID provides additional throughput by reading and
writing data across multiple drives, instead of waiting for a single drive to perform all
tasks.
· When cost is a factor. The cost of a RAID array is lower than it was in the past, and
lower-priced disks are used in large numbers, making it cheaper.
