Submission on
“Operating System”
Submitted to:
Prof. A S Manjunath
Assistant Professor
Department of Computer Applications
JSS Science and Technology University, Mysuru
Submitted by:
Prajwal K P
Reg. No: 01JST19PMC031
1. Define the term process and differentiate between heavyweight and
lightweight processes. Assume that following jobs have arrived in the order
1,2,3,4 and 5:
Job    Arrival Time    Burst Time    Priority
 1          0              15            2
 2          2               3            1
 3          5               5            5
 4          6               8            4
 5          7              12            3
Give Gantt chart and calculate Avg. Turn-around Time and Waiting Time for:
i) FCFS ii) SJF iii) Pre-emptive priority scheduling.
A process is a program in execution. It is a unit of work within the system. A program
is a passive entity; a process is an active entity.
A process needs resources (CPU time, memory, files, I/O devices) to accomplish its task.
In a lightweight process, threads are used to divide up the workload. Here you would
see one process executing in the OS (for this application or service). This process would
possess one or more threads. Each of the threads in this process shares the same address space.
Because threads share their address space, communication between the threads is simple and
efficient. Each thread could be compared to a process in a heavyweight scenario.
In a heavyweight process, new processes are created to perform the work in parallel.
Here (for the same application or service), you would see multiple processes running. Each
heavyweight process contains its own address space. Communication between these
processes would involve additional communications mechanisms such as sockets or pipes.
The benefits of a lightweight process come from the conservation of resources. Since threads
share the same code section, data section and OS resources, fewer resources are used overall.
The drawback is that you now have to ensure your system is thread-safe: you have to make
sure the threads don't step on each other. Fortunately, Java provides the necessary tools
to do this.
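The averages the question asks for can be worked out with a short sketch (Python; it assumes a lower priority number means higher priority, which is the usual convention and the only reading under which job 2 can preempt job 1):

```python
# Jobs from the question: id -> (arrival time, burst time, priority).
jobs = {1: (0, 15, 2), 2: (2, 3, 1), 3: (5, 5, 5), 4: (6, 8, 4), 5: (7, 12, 3)}

def fcfs(jobs):
    t, ct = 0, {}
    for j, (at, bt, _) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        t = max(t, at) + bt          # run to completion in arrival order
        ct[j] = t
    return ct

def sjf(jobs):                        # non-preemptive shortest job first
    pending, t, ct = dict(jobs), 0, {}
    while pending:
        ready = [j for j, (at, _, _) in pending.items() if at <= t]
        if not ready:                 # CPU idle: jump to next arrival
            t = min(at for at, _, _ in pending.values())
            continue
        j = min(ready, key=lambda j: pending[j][1])
        t += pending[j][1]
        ct[j] = t
        del pending[j]
    return ct

def preemptive_priority(jobs):        # lower number = higher priority
    rem = {j: bt for j, (_, bt, _) in jobs.items()}
    t, ct = 0, {}
    while rem:
        ready = [j for j in rem if jobs[j][0] <= t]
        if not ready:
            t += 1
            continue
        j = min(ready, key=lambda j: jobs[j][2])
        rem[j] -= 1                   # run the highest-priority job for 1 unit
        t += 1
        if rem[j] == 0:
            ct[j] = t
            del rem[j]
    return ct

def averages(ct):                     # (avg turnaround time, avg waiting time)
    tat = {j: ct[j] - jobs[j][0] for j in ct}
    wt = {j: tat[j] - jobs[j][1] for j in ct}
    return sum(tat.values()) / len(tat), sum(wt.values()) / len(wt)

print("FCFS    :", averages(fcfs(jobs)))
print("SJF     :", averages(sjf(jobs)))
print("Priority:", averages(preemptive_priority(jobs)))
```

Under these assumptions, FCFS and non-preemptive SJF happen to produce the same schedule here, because job 1 holds the CPU until t = 15, by which time all the other jobs have arrived: Gantt chart | J1 0-15 | J2 15-18 | J3 18-23 | J4 23-31 | J5 31-43 |, Avg. TAT = 22, Avg. WT = 13.4. Pre-emptive priority gives | J1 0-2 | J2 2-5 | J1 5-18 | J5 18-30 | J4 30-38 | J3 38-43 |, Avg. TAT = 22.8, Avg. WT = 14.2.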
2. a) Define a file system. What are various components of a file system? State
and explain various file allocation methods.
A file is a collection of related information that is recorded on secondary storage;
alternatively, a file is a collection of logically related entities. From the user's perspective,
a file is the smallest allotment of logical secondary storage.
A file's attributes vary from one operating system to another but typically consist of these:
Name: Name is the symbolic file name and is the only information kept in human
readable form.
Identifier: This unique tag, usually a number, identifies the file within the file system; it
is the non-human-readable name of the file.
Type: This information is needed for systems that support different types of files and
their formats.
Location: This information is a pointer to a device which points to the location of the
file on the device where it is stored.
Size: The current size of the file (in bytes, words or blocks) and possibly the
maximum allowed size are included in this attribute.
Protection: Access-control information establishes who can do the reading, writing,
executing, etc.
Date, Time & user identification: This information may be kept for the file's creation,
last modification and last use. Such data can be useful for protection, security and
usage monitoring.
Three major methods of allocating disk space are in wide use: contiguous, linked, and
indexed.
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example,
if a file requires n blocks and is given a block b as the starting location, then the blocks
assigned to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting block
address and the length of the file (in terms of blocks required), we can determine the blocks
occupied by the file.
The directory entry for a file with contiguous allocation contains
Address of starting block
Length of the allocated portion.
The file ‘mail’ in the following figure starts from block 19 with length = 6 blocks.
Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.
Advantages:
Both the Sequential and Direct Accesses are supported by this. For direct access, the
address of the kth block of the file which starts at block b can easily be obtained as
(b+k).
This is extremely fast since the number of seeks is minimal because of the contiguous
allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous
memory at a particular instance.
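The direct-access arithmetic above can be sketched in a few lines (Python; the 'mail' values are the ones from the example):

```python
def contiguous_block(start, length, k):
    """Disk address of the k-th block (0-based) of a contiguously allocated file."""
    if not 0 <= k < length:
        raise IndexError("block index outside the file")
    return start + k          # the (b + k) formula from the text

# The 'mail' file from the example: starting block b = 19, length = 6 blocks.
print(contiguous_block(19, 6, 0))   # first block -> 19
print(contiguous_block(19, 6, 5))   # last block  -> 24
```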
2. Linked Allocation
In this scheme, each file is a linked list of disk blocks, which may be scattered anywhere
on the disk. The directory entry contains a pointer to the first block of the file, and each
block contains a pointer to the next block.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks
are needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We cannot directly access the blocks of a
file. A block k of a file can be accessed by traversing k blocks sequentially (sequential
access) from the starting block of the file via block pointers.
Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all
the blocks occupied by a file. Each file has its own index block. The ith entry in the index
block contains the disk address of the ith file block. The directory entry contains the address
of the index block as shown in the image:
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides
fast access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation would
keep one entire block (the index block) for the pointers, which is inefficient in terms
of memory utilization. However, in linked allocation we lose the space of only one
pointer per block.
For files that are very large, a single index block may not be able to hold all the pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for holding the
pointers. Every index block would then contain a pointer or the address to the next
index block.
2. Multilevel index: In this policy, a first level index block is used to point to the second
level index blocks which in turn points to the disk blocks occupied by the file. This can
be extended to 3 or more levels depending on the maximum file size.
Combined Scheme:
In this scheme, a special block called the Inode (information Node) contains all the
information about the file such as the name, size, authority, etc. and the remaining space of
Inode is used to store the Disk Block addresses which contain the actual file as shown in the
image below. The first few of these pointers in Inode point to the direct blocks i.e. the
pointers contain the addresses of the disk blocks that contain data of the file. The next few
pointers point to indirect blocks. Indirect blocks may be single indirect, double indirect or
triple indirect. Single Indirect block is the disk block that does not contain the file data but the
disk address of the blocks that contain the file data. Similarly, double indirect blocks do not
contain the file data but the disk address of the blocks that contain the address of the blocks
containing the file data.
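As a rough illustration of how far the combined direct/indirect scheme extends the maximum file size, here is a back-of-the-envelope calculation. The block size (4 KB), pointer size (4 bytes) and pointer counts (12 direct pointers, one single, one double and one triple indirect) are assumed, ext2-style values for illustration, not figures from the text:

```python
BLOCK = 4096            # block size in bytes (assumed)
PTR = 4                 # size of one disk address in bytes (assumed)
PPB = BLOCK // PTR      # pointers that fit in one block = 1024

direct = 12 * BLOCK             # 12 direct pointers in the inode
single = PPB * BLOCK            # one single indirect block
double = PPB**2 * BLOCK         # one double indirect block
triple = PPB**3 * BLOCK         # one triple indirect block

max_size = direct + single + double + triple
print(max_size)                 # about 4 TiB with these parameters
```

The triple indirect level dominates: each added level multiplies the reachable size by the number of pointers per block, which is why two or three levels suffice even for very large files.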
b) What problems could occur if a system allowed a file system to be mounted
simultaneously at more than one location?
Consistency semantics specify how multiple users are to access a shared file
simultaneously, similar to process synchronization algorithms, though they tend to be less
complex due to disk I/O and network latency (for remote file systems). The Andrew File
System (AFS) implemented complex remote file sharing semantics. The UNIX file system
(UFS) implements:
Writes to an open file are visible immediately to other users of the same open file.
Sharing of the file pointer to allow multiple users to read and write concurrently.
AFS has session semantics: writes are only visible to sessions starting after the file is closed.
There would be multiple paths to the same file, which could confuse users or
encourage mistakes (deleting a file with one path deletes the file in all the other paths).
These types of operating systems are a recent advancement in the world of computer
technology and are being widely accepted all over the world, and that too at a great pace.
Various autonomous interconnected computers communicate with each other using a shared
communication network. Independent systems possess their own memory unit and CPU.
These are referred to as loosely coupled systems or distributed systems. These systems'
processors differ in size and function. The major benefit of working with these types of
operating system is that it is always possible for one user to access files or software which
are not actually present on his system but on some other system connected within the
network, i.e., remote access is enabled within the devices connected in that network.
Advantages of Distributed Operating System:
Failure of one will not affect the other network communication, as all systems are
independent from each other
Electronic mail increases the data exchange speed
Since resources are being shared, computation is highly fast and durable
Load on host computer reduces
These systems are easily scalable as many systems can be easily added to the network
Delay in data processing reduces
Disadvantages of Distributed Operating System:
Failure of the main network will stop the entire communication
The languages used to establish distributed systems are not yet well defined
These types of systems are not readily available as they are very expensive. Not only
that, the underlying software is highly complex and not yet well understood
Examples of distributed operating systems are LOCUS, etc.
Open Source – The Linux source code is freely available and it is a community-based
development project. Multiple teams work in collaboration to enhance the capability of the
Linux operating system and it is continuously evolving.
Multi-User – Linux is a multiuser system, meaning multiple users can access system
resources like memory/RAM/application programs at the same time.
Hierarchical File System – Linux provides a standard file structure in which system
files/ user files are arranged.
Shell – Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to do various types of operations, call
application programs etc.
Security – Linux provides user security using authentication features like password
protection/ controlled access to specific files/ encryption of data.
Linux is fast, free and easy to use, and powers laptops and servers around the world.
Linux has many more features to amaze its users, such as:
Live CD/USB: Almost all Linux distributions have Live CD/USB feature by which
user can run/try the OS even without installing it on the system.
Application Support: Linux has its own software repository from where users can
download and install thousands of applications just by issuing a command in Linux
Terminal or Shell. Linux can also run Windows applications if needed.
An operating system manages the devices in a computer system with the help of
device controllers and device drivers. Each device in the computer system is equipped with
a device controller. For example, the various device controllers in a computer system may
be the disk controller, printer controller, tape-drive controller and memory controller. All
these device controllers are connected with each other through a system bus. The device
controllers are actually the hardware components that contain buffer registers to store data
temporarily. The transfer of data between a running process and the various devices of the
computer system is accomplished only through these device controllers.
5. The other major responsibility of the device management function is to implement the
Application Programming Interface (API).
Wait
    wait(S) {
        while (S <= 0)
            ;       // busy wait
        S--;
    }
Signal
    signal(S) {
        S++;
    }
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows:
Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate resource access, where the semaphore count is
the number of available resources. If resources are added, the semaphore count is
automatically incremented, and if resources are removed, the count is
decremented.
Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0
and 1. The wait operation only works when the semaphore is 1 and the signal
operation succeeds when semaphore is 0. It is sometimes easier to implement binary
semaphores than counting semaphores.
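A minimal sketch of a counting semaphore coordinating resource access, using Python's `threading.Semaphore` (the pool size of 3 and the worker count of 10 are arbitrary choices for illustration):

```python
import threading
import time

pool = threading.Semaphore(3)   # count = number of available resources
lock = threading.Lock()
active = 0                      # workers currently holding a resource
peak = 0                        # highest concurrency ever observed

def worker():
    global active, peak
    pool.acquire()              # wait(S): block while S == 0, then S -= 1
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)            # hold the resource briefly
    with lock:
        active -= 1
    pool.release()              # signal(S): S += 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                     # never exceeds the semaphore's initial count
```

However many workers contend, at most three are ever inside the guarded section at once, which is exactly the counting-semaphore guarantee described above.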
Advantages of Semaphores
Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
Disadvantages of Semaphores
Semaphores are complicated so the wait and signal operations must be implemented
in the correct order to prevent deadlocks.
Semaphores are impractical for large scale use as their use leads to loss of modularity.
This happens because the wait and signal operations prevent the creation of a
structured layout for the system.
Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric
multiprocessors. Details about them are as follows:
Symmetric Multiprocessors
In these types of systems, each processor runs an identical copy of the operating system and
they all communicate with each other. All the processors are in a peer to peer relationship i.e.
no master - slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instruction to all the other processors. Asymmetric multiprocessor system contains
a master slave relationship.
Asymmetric multiprocessors were the only type of multiprocessor available before symmetric
multiprocessors were created. Even now, they are the cheaper option.
There are multiple processors in a multiprocessor system that share peripherals, memory,
etc. So it is much more complicated to schedule processes and impart resources to processes
than in single processor systems. Hence, a more complex and complicated operating system
is required in multiprocessor systems.
6. Explain any two Page Replacement algorithms with a suitable example?
Page Replacement Algorithms:
First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in memory in a queue, with the oldest page at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 —> 1
Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 —> 1 Page
Fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 Page Fault. Total: 6 page faults.
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults
when increasing the number of page frames while using the First in First Out (FIFO) page
replacement algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1,
0, 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
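The FIFO behaviour, including Belady's anomaly, can be checked with a short simulation (a Python sketch; the reference strings are the ones used above):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement with the given frame count."""
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict the oldest resident page
            mem.add(p)
            queue.append(p)                    # newest page joins the back
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))                  # 6 faults
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3))                                 # 9 faults
print(fifo_faults(belady, 4))                                 # 10 faults: more frames, more faults
```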
Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of time
in the future.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.
Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page
Faults.
0 is already there, so —> 0 Page Fault.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time
in the future —> 1 Page Fault.
0 is already there, so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page fault because they are already available
in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.
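Optimal replacement can be simulated offline by looking ahead in the reference string, which is exactly why it serves as a benchmark (a Python sketch; the reference string is the one from the example above):

```python
def optimal_faults(refs, frames):
    """Count page faults for optimal (farthest-future-use) replacement."""
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p in mem:
            continue                     # hit: no fault
        faults += 1
        if len(mem) == frames:
            # Evict the resident page whose next use is farthest in the
            # future (or that is never used again).
            future = refs[i + 1:]
            victim = max(mem, key=lambda q: future.index(q)
                         if q in future else float('inf'))
            mem.discard(victim)
        mem.add(p)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6 faults
```

This matches the walkthrough above: 4 compulsory faults, then one each when 3 replaces 7 and 4 replaces 1, and no faults thereafter.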
Program execution
I/O operations
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities is
encapsulated as a process.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major activities
of an operating system with respect to file management −
The operating system gives the permission to the program for operation on file.
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention
and security. Following are the major activities of an operating system with respect to
communication −
Both the processes can be on one computer or on different computers, but are
connected through a computer network.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices
or in the memory hardware. Following are the major activities of an operating system with
respect to error handling −
Resource Management
Protection
Shortest Job first has the advantage of having a minimum average waiting time
among all scheduling algorithms.
It is a Greedy Algorithm.
It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
It is practically infeasible as the operating system may not know burst times and therefore
may not sort them. While it is not possible to predict execution time, several methods
can be used to estimate the execution time for a job, such as a weighted average of
previous execution times. SJF can be used in specialized environments where accurate
estimates of running time are available.
Algorithm:
1. Sort all the processes according to arrival time.
2. Then select the process which has the minimum arrival time and minimum burst time.
3. After completion of a process, make a pool of the processes that arrived before the
completion of the previous process, and select the process from that pool which has the
minimum burst time.
How to compute below times in SJF using a program?
1. Completion Time: Time at which process completes its execution.
2. Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
3. Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
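The three formulas above can be turned into a small non-preemptive SJF program (a Python sketch; the four example processes are made up for illustration):

```python
def sjf_times(procs):
    """Non-preemptive SJF. procs: {pid: (arrival, burst)} -> {pid: (CT, TAT, WT)}."""
    pending, t, out = dict(procs), 0, {}
    while pending:
        ready = [p for p, (at, _) in pending.items() if at <= t]
        if not ready:                    # CPU idle: jump to the next arrival
            t = min(at for at, _ in pending.values())
            continue
        p = min(ready, key=lambda p: pending[p][1])   # minimum burst time
        at, bt = pending.pop(p)
        t += bt
        # Completion Time, Turn Around Time = CT - AT, Waiting Time = TAT - BT
        out[p] = (t, t - at, t - at - bt)
    return out

times = sjf_times({1: (0, 7), 2: (2, 4), 3: (4, 1), 4: (5, 4)})
for p, (ct, tat, wt) in sorted(times.items()):
    print(p, ct, tat, wt)
```

P1 runs first (nothing else has arrived), then the shortest ready job P3, then P2 and P4, giving completion times 7, 8, 12 and 16; the turnaround and waiting times follow directly from the two formulas.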
Multiple-level queues are not an independent scheduling algorithm. They make use
of other existing algorithms to group and schedule jobs with common characteristics.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.
The pages for the private code and data can appear anywhere in the logical address space
It is very common for many computer users to be running the same program at the same time
in a large multiprogramming computer system.
Now, to avoid having two copies of same page in the memory at the same time, just share the
pages.
But a problem arises that not all the pages are shareable.
Generally, read-only pages are shareable, for example, program text; but data pages are not
shareable.
With shared pages, a problem occurs, whenever two or more than two processes (multiple
processes) share some code.
Let's suppose that the process X and process Y, both are running the editor and sharing its
pages.
Now if the scheduler decides to remove process X from memory, evicting all its pages
and filling the empty page frames with another program will cause process Y to generate a
large number of page faults just to bring them back again.
Similarly, whenever process X terminates, it is essential to be able to discover that the
pages are still in use so that their disk space is not freed by accident.
Windows
Generally referred to as Microsoft Windows, these OS are manufactured and
developed by the tech giant Microsoft and are the most commonly used OS for personal
computers and, to some extent, in mobile phones (the Windows Phone). Microsoft Windows
is a collection of graphics-oriented operating systems, first developed and launched in
1985 under the name Windows 1.0. At the start, its aim was to provide a graphical shell to
the then-famous MS-DOS, which had a character user interface, but it didn't gain much
popularity then. Slowly, with the implementation of innovative features, the OS gained
popularity and soon dominated the computer industry, owing to its freedom of use
and user-friendly environment. Let's look at the advantages and disadvantages of using
Microsoft Windows.
Advantages:
Hardware compatibility: Almost every computer hardware manufacturer
supports Microsoft Windows. This lets users buy a computer from any
brand and get the latest version of Microsoft Windows 10 pre-loaded on
it.
Pre-loaded and available software: Windows comes with much user-friendly
software to make everyday tasks easier, and if a program is not available, one
can easily get it from the Internet and run it.
Ease of Use: Microsoft Windows is by far the most user-friendly OS on
the market, designed to serve most types of users in the
world. It is the most preferred OS for personal computers.
Game Runner: Windows supports a plethora of games released to date and
comes with all the supporting base software to drive the game engines. So it is the most
popular OS among game lovers.
Disadvantages:
Expensive: Microsoft Windows is a closed-source OS and the license cost is really high.
It is not possible for everyone to buy a new license every time one expires. The
latest Windows 10 costs around 6000 to 8000 INR.
Poor Security: Windows is much more prone to virus and malware in comparison to
other OS like Linux or Mac in the market.
Not reliable: Windows starts to lag with time and eventually needs a reboot every
now and then to get back its initial speed.
Many versions of Windows have been developed since 1985, but the few that
revolutionized the operating system industry are:
1. Windows 95
2. Windows 98
3. Windows NT
4. Windows XP
5. Windows Vista
6. Windows 7
7. Windows 8
8. Windows 8.1
9. Windows 10(Latest)
According to Net Applications, which tracks usage based on web traffic, Windows is the
most-used operating system family for personal computers as of July 2017, with close to
90% usage share and rising.
9. Explain the security attacks on operating system. Write the steps to protect the computer
system from various attacks.
(Protection) dealt with protecting files and other resources from accidental misuse by
cooperating users sharing a system, generally using the computer for normal
purposes.
(Security) deals with protecting systems from deliberate attacks, either internal or
external, from individuals intentionally attempting to steal information, damage
information, or otherwise deliberately wreak havoc in some manner.
Some of the most common types of violations include:
o Breach of Confidentiality - Theft of private or confidential information, such
as credit-card numbers, trade secrets, patents, secret formulas, manufacturing
procedures, medical information, financial information, etc.
o Breach of Integrity - Unauthorized modification of data, which may have
serious indirect consequences. For example a popular game or other program's
source code could be modified to open up security holes on users' systems
before being released to the public.
o Breach of Availability - Unauthorized destruction of data, often just for the
"fun" of causing havoc and for bragging rights. Vandalism of web sites is a
common form of this violation.
o Theft of Service - Unauthorized use of resources, such as theft of CPU cycles,
installation of daemons running an unauthorized file server, or tapping into the
target's telephone or networking services.
o Denial of Service, DOS - Preventing legitimate users from using the system,
often by overloading and overwhelming the system with an excess of requests
for service.
One common attack is masquerading, in which the attacker pretends to be a trusted third
party. A variation of this is the man-in-the-middle, in which the attacker masquerades as
both ends of the conversation to two targets.
3. Enable a firewall
A firewall acts as a barrier between your computer or network and the internet. It
effectively closes computer ports, preventing unsolicited communication with your device.
This protects your computer by stopping threats from entering the system and spreading
between devices. It can also help prevent your data from leaving your computer.
4. Adjust your browser settings
Most browsers have options that enable you to adjust the level of privacy and security
while you browse. These can help lower the risk of malware infections reaching your
computer and malicious hackers attacking your device. Some browsers even enable you to
tell websites not to track your movements by blocking cookies.
8. Use a VPN
A Virtual Private Network (VPN) is an excellent way to step up your security,
especially when browsing online. While using a VPN, all of your internet traffic is encrypted
and tunneled through an intermediary server in a separate location. This masks your IP,
replacing it with a different one, so that your ISP can no longer monitor your activity.
10.What are device management policies for storing data in operating System?
• Managing the memory component that controls buffering, caching and spooling.
Device management:
A running program may need additional resources such as more memory, tape drives,
access to files and so on.
• request device, release device: To request a required device and to release the device
after use.
• get device attributes, set device attributes: To determine and reset the device attributes.
Keeps track of all devices; the program responsible for this is called the
I/O controller.
Monitoring the status of each device such as storage drives, printers and other
peripheral devices.
Enforcing preset policies and taking a decision which process gets the device when
and for how long.
Allocates and deallocates devices in an efficient way, de-allocating them at two
levels: at the process level when the I/O command has been executed and the device is
temporarily released, and at the job level when the job is finished and the device is
permanently released.
Optimizes the performance of individual devices.
Types of devices
The OS peripheral devices can be categorized into 3: Dedicated, Shared, and Virtual.
The differences among them are a function of the characteristics of the devices, as well as
how they are managed by the Device Manager.
Dedicated devices:-
These devices are assigned to only one job at a time and serve that job for the entire
time it is active, e.g. tape drives, printers and plotters. The disadvantage is that the device
may sit idle for much of the job's duration.
Shared devices:-
These devices can be allocated to several processes. A disk (DASD) can be shared among
several processes at the same time by interleaving their requests. The interleaving is carefully
controlled by the Device Manager and all conflicts must be resolved on the basis of
predetermined policies.
Virtual Devices:-
These devices are a combination of the first two types: they are dedicated
devices which are transformed into shared devices. For example, a printer is converted into a
shareable device via a spooling program which re-routes all print requests to a disk. A print
job is not sent straight to the printer; instead, it goes to the disk (the spool) until it is fully
prepared with all the necessary sequences and formatting, and then it goes to the printer.
This technique can transform one printer into several virtual printers, which leads to better
performance and use.
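The spooling idea can be sketched as a producer-consumer queue: processes drop jobs into a spool and a single daemon drains it to the one real printer (a Python sketch; the names and job counts are illustrative):

```python
from queue import Queue
import threading

spool = Queue()          # stands in for the disk spool area
printed = []             # pages actually sent to the physical printer

def printer_daemon():
    while True:
        job = spool.get()        # jobs wait here, not at the printer
        if job is None:          # sentinel: shut the daemon down
            break
        printed.append(job)      # "print" the fully prepared job

daemon = threading.Thread(target=printer_daemon)
daemon.start()

# Several "processes" submit print jobs concurrently; each sees a
# virtual printer of its own, yet only one real printer exists.
for user in range(3):
    for page in range(2):
        spool.put((user, page))

spool.put(None)
daemon.join()
print(len(printed))      # all 6 jobs printed, one at a time
```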
11. a) What are the advantages and disadvantages of multiprocessor systems?
Advantages of Multiprocessor Systems
In a multiprocessor system, even if one processor fails, the system will not halt. This
ability to continue working despite hardware failure is known as graceful degradation. For
example: If there are 5 processors in a multiprocessor system and one of them fails, the
remaining 4 processors are still working. So the system only becomes slower and does not
grind to a halt.
Enhanced Throughput
If multiple processors are working in tandem, then the throughput of the system
increases, i.e., the number of processes executed per unit of time increases. If there are N
processors, the throughput increases by an amount just under N.
More Economical in the Long Run
Multiprocessor systems are cheaper than multiple single-processor systems in the long run
because they share data storage, peripheral devices, power supplies, etc. If there are
multiple processes that share data, it is better to schedule them on a multiprocessor system
with shared data than to have different computer systems with multiple copies of the data.
There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple
computer systems, still they are quite expensive. It is much cheaper to buy a simple single
processor system than a multiprocessor system.
Increased Memory Requirement
All the processors in a multiprocessor system share the memory. So a much larger
pool of memory is required as compared to single-processor systems.
I/O-bound process – spends more time doing I/O than computations; has many short CPU
bursts
CPU-bound process – spends more time doing computations; has few, very long CPU bursts
I/O controllers are a series of microchips which help in the communication of data
between the central processing unit and the motherboard. The main purpose of this system is
to help in the interaction of peripheral devices with the control units (CUs). Put simply, the
I/O controller helps in the connection and control of various peripheral devices, which are
input and output devices. It is usually installed on the motherboard of a computer. However,
it can also be used as an accessory in the case of replacements or in order to add more
peripheral devices to the computer.
One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-
mapped screen, LED, Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.
An I/O system is required to take an application I/O request and send it to the physical
device, then take whatever response comes back from the device and send it to the
application. I/O devices can be divided into two categories:
*Block devices – A block device is one with which the driver communicates by
sending entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key,
etc.
*Character devices – A character device is one with which the driver communicates
by sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.
Device Drivers and Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. The Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units
(keyboard, mouse, printer, etc.) typically consist of a mechanical component and an
electronic component; the electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with
the Operating System. A device controller may be able to handle multiple devices. As an
interface, its main task is to convert a serial bit stream to a block of bytes and perform error
correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
Synchronous vs. asynchronous I/O
Synchronous I/O – in this scheme, CPU execution waits while I/O proceeds
Asynchronous I/O – I/O proceeds concurrently with CPU execution
Draw a timeline for each of the following scheduling algorithms and determine
which one gives the best results.
1) FCFS
2) SJF
3) Priority scheduling.
Less complicated
Can only be accessed via two indivisible (atomic) operations:
wait (S) {
    while (S <= 0)
        ; // no-op (busy wait)
    S--;
}
signal (S) {
    S++;
}
Also known as mutex locks. Can implement a counting semaphore S as a binary semaphore.
do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);
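The do-while pattern above can be demonstrated with Python's threading module, using a counting semaphore initialized to 1 as a binary mutex (the worker function and iteration counts are illustrative):

```python
import threading

# A counting semaphore with initial value 1 behaves as a binary mutex.
mutex = threading.Semaphore(1)
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex)
        counter += 1             # critical section
        mutex.release()          # signal(mutex)
        # remainder section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40000: the semaphore serializes every update
```

Without the acquire/release pair, concurrent read-modify-write of `counter` could lose updates; with it, the final value is exactly 4 × 10000.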
Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same
semaphore at the same time. Thus, the implementation becomes a critical-section problem
where the wait and signal code are placed in the critical section. We could now have busy
waiting in the critical-section implementation, but the implementation code is short, and there
is little busy waiting if the critical section is rarely occupied. Note that applications may
spend lots of time in critical sections, and therefore this is not a good solution.
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has
two data items:
a value (of type integer)
a pointer to the next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue.
wakeup – remove one of processes in the waiting queue and place it in the ready queue.
Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
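A minimal sketch of this value-plus-waiting-queue implementation in Python, modeling block() and wakeup(P) with one condition variable per blocked caller (the class name and structure are illustrative, not a standard API):

```python
import threading
from collections import deque

class SleepingSemaphore:
    """Sketch of the wait()/signal() implementation above: an integer
    value plus a FIFO waiting list; block()/wakeup(P) are modeled with
    a condition variable per blocked caller."""

    def __init__(self, value=0):
        self.value = value
        self.waiters = deque()            # the semaphore's S->list
        self.lock = threading.Lock()      # makes wait/signal atomic

    def wait(self):
        with self.lock:
            self.value -= 1
            if self.value < 0:
                cv = threading.Condition(self.lock)
                self.waiters.append(cv)   # add this process to S->list
                cv.wait()                 # block()

    def signal(self):
        with self.lock:
            self.value += 1
            if self.value <= 0:
                cv = self.waiters.popleft()  # remove a process P from S->list
                cv.notify()                  # wakeup(P)

sem = SleepingSemaphore(0)
order = []

def consumer():
    sem.wait()                   # suspends until the producer signals
    order.append("consumed")

t = threading.Thread(target=consumer)
t.start()
order.append("produced")
sem.signal()
t.join()
```

Unlike the busy-waiting version, a blocked caller consumes no CPU while it waits; it is parked on the queue until a signal() wakes it.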
Deadlock Prevention
Mutual Exclusion – not required for sharable resources; must hold for non-sharable
resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources. Either require a process to request and be allocated all its resources
before it begins execution, or allow a process to request resources only when it holds none.
This leads to low resource utilization, and starvation is possible.
No Preemption –
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.
Pre-empted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration
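The total-ordering idea can be sketched as follows; the resource numbering and helper names are illustrative:

```python
import threading

# Circular-wait prevention: number every resource and make each
# process acquire locks only in increasing order of that number.
disk = threading.Lock()      # resource #1 (enumeration is illustrative)
printer = threading.Lock()   # resource #2
ORDER = {id(disk): 1, id(printer): 2}

def acquire_in_order(*locks):
    # Sorting by the imposed total order makes a wait cycle impossible.
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

log = []

def job(name):
    # Jobs may *ask* for resources in any textual order, but
    # acquire_in_order normalizes them, so no deadlock can form.
    acquire_in_order(printer, disk)
    log.append(name)
    release_all(printer, disk)

t1 = threading.Thread(target=job, args=("P1",))
t2 = threading.Thread(target=job, args=("P2",))
t1.start(); t2.start(); t1.join(); t2.join()
```

If the two jobs instead took `printer` then `disk` and `disk` then `printer` directly, they could each grab one lock and wait forever for the other; the enforced ordering rules that interleaving out.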
Deadlock Avoidance
Requires that the system has some additional a priori information available. The simplest and
most useful model requires that each process declare the maximum number of resources of
each type that it may need. The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-wait condition. The
resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes.
Safe State
When a process requests an available resource, system must decide if immediate allocation
leaves the system in a safe state
The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
If Pi's resource needs are not immediately available, then Pi can wait until all Pj have
finished. When Pj is finished, Pi can obtain the needed resources, execute, return its allocated
resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so
on.
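The safe-state test can be sketched as the textbook safety algorithm; the resource matrices below are a classic illustrative example, not taken from this document:

```python
def is_safe(available, max_demand, allocation):
    """Safety algorithm: try to build an ordering <P1..Pn> in which every
    process's remaining need can be met from Available plus the resources
    released by processes that finish earlier."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Pi can run to completion and return its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

# Five processes, three resource types (illustrative values):
safe, seq = is_safe(
    available=[3, 3, 2],
    max_demand=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
```

Here a safe sequence exists (P1, P3, P4, P0, P2), so any request that would preserve this property can be granted.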
Basic Facts
REPEATED
16. What is the need of Page replacement? Consider the following reference
string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Find the number of Page Faults with FIFO, Optimal Page replacement and
LRU with four free frames which are empty initially. Which algorithm gives the
minimum number of page faults?
(A). FIFO:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
F1:  7 7 7 7 7 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2
F2:    0 0 0 0 0 0 4 4 4 4 4 4 4 4 4 4 7 7 7
F3:      1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
F4:        2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1
     F F F F H F H F H H F H H F F H H F H H
In the FIFO algorithm, 10 page faults and 10 hits will occur.
Optimal:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
F1:  7 7 7 7 7 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7
F2:    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
F3:      1 1 1 1 1 4 4 4 4 4 4 1 1 1 1 1 1 1
F4:        2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
     F F F F H F H F H H H H H F H H H F H H
In optimal page replacement, 8 page faults and 12 hits will occur.
LRU:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
F1:  7 7 7 7 7 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7
F2:    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
F3:      1 1 1 1 1 4 4 4 4 4 4 1 1 1 1 1 1 1
F4:        2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
     F F F F H F H F H H H H H F H H H F H H
In LRU page replacement, 8 page faults and 12 hits will occur. With four frames, Optimal
and LRU both give the minimum number of page faults (8), while FIFO gives 10.
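The three fault counts can be checked with a small simulator; this is a sketch, and the policy names are parameters of the function, not a standard API:

```python
def count_faults(refs, frames, policy):
    """Count page faults for the reference string with the given number
    of frames. policy: 'FIFO', 'LRU', or 'OPT' (Belady's optimal)."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":                 # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        if policy == "FIFO":
            memory.pop(0)                       # evict the oldest arrival
        elif policy == "LRU":
            memory.pop(0)                       # front = least recently used
        else:  # OPT: evict the page used farthest in the future
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future
                         else len(future) + 1)
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
fifo = count_faults(refs, 4, "FIFO")   # 10 faults
lru  = count_faults(refs, 4, "LRU")    # 8 faults
opt  = count_faults(refs, 4, "OPT")    # 8 faults
```

The LRU list keeps its least-recently-used page at the front by moving every page to the back on each access, so eviction is simply `pop(0)`.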
Page fault:
When a process tries to reference a page not currently present in RAM, the processor treats
this invalid memory reference as a page fault and transfers control from the program to the
operating system. The operating system must:
1. Determine the location of the data on backing store.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to refer to the new page frame.
5. Return control to the program, transparently retrying the instruction that caused the
page fault.
The method the operating system uses to select the page frame to reuse, which is
its page replacement algorithm, is important to efficiency. The operating system predicts the
page frame least likely to be needed soon, often through the least recently used (LRU)
algorithm or an algorithm based on the program's working set. To further increase
responsiveness, paging systems may predict which pages will be needed soon, preemptively
loading them into RAM before a program references them.
Demand paging
When pure demand paging is used, pages are loaded only when they are referenced. A
program begins execution with none of its pages in RAM. As the program commits page
faults, the operating system copies the needed pages from backing store (e.g., a memory-
mapped file, paging file, or swap partition containing the page data) into RAM.
Anticipatory paging
This technique, sometimes also called swap prefetch, predicts which pages will be
referenced soon, to minimize future page faults. For example, after reading a page to
service a page fault, the operating system may also read the next few pages even though
they are not yet needed (a prediction using locality of reference). If a program ends, the
operating system may delay freeing its pages, in case the user runs the same program
again.
Free page queue, stealing, and reclamation
The free page queue is a list of page frames that are available for assignment.
Preventing this queue from being empty minimizes the computing necessary to service a
page fault. Some operating systems periodically look for pages that have not been
recently referenced and then free the page frame and add it to the free page queue, a
process known as "page stealing". Some operating systems support page reclamation; if a
program commits a page fault by referencing a page that was stolen, the operating system
detects this and restores the page frame without having to read the contents back into
RAM.
Pre-cleaning
The operating system may periodically pre-clean dirty pages: write modified pages
back to disk even though they might be further modified. This minimizes the amount of
cleaning needed to obtain new page frames at the moment a new program starts or a new data
file is opened, and improves responsiveness.
REPEATED
Hardware upgrades
New services
Fixes the issues of resources
Controls the user and hardware operations
A set of blocked processes each holding a resource and waiting to acquire a resource held by
another process in the set
Example
P1 and P2 each hold one disk drive and each needs another one.
Example
Semaphores A and B, each initialized to 1:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
Deadlock can arise if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
No preemption: a resource can be released only voluntarily by the process holding it, after
that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, and Pn is
waiting for a resource held by P0
In modern computers, most of the secondary storage is in the form of magnetic disks.
Hence, knowing the structure of a magnetic disk is necessary to understand how the data in
the disk is accessed by the computer.
Structure of a magnetic disk
A magnetic disk consists of platters; each platter is logically divided into circular tracks,
and each track is further divided into sectors. Tracks at the same distance from the centre
form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.
Transfer rate: This is the rate at which the data moves from disk to the computer.
Random access time: It is the sum of the seek time and the rotational latency.
Seek time is the time taken by the arm to move to the required track. Rotational
latency is the time taken by the required sector to rotate under the read-write head.
Even though the disk is arranged as sectors and tracks physically, the data is logically
arranged and addressed as an array of blocks of fixed size. The size of a block can
be 512 or 1024 bytes. Each logical block is mapped with a sector on the disk, sequentially. In
this way, each sector in the disk will have a logical address.
Multiple I/O requests may arrive from different processes, and only one I/O request can
be served at a time by the disk controller. Thus other I/O requests need to wait in the
waiting queue and need to be scheduled.
Two or more requests may be far from each other, which can result in greater disk arm
movement.
Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick
look at some of the important terms:
Seek Time: Seek time is the time taken to move the disk arm to the specified track
where the data is to be read or written. So the disk scheduling algorithm that gives
minimum average seek time is better.
Rotational Latency: Rotational latency is the time taken by the desired sector of the
disk to rotate into a position where it can be accessed by the read/write head. So the disk
scheduling algorithm that gives minimum rotational latency is better.
Transfer Time: Transfer time is the time to transfer the data. It depends on the
rotating speed of the disk and number of bytes to be transferred.
Disk Access Time: Disk Access Time is:
Seek Time +
Rotational Latency +
Transfer Time
Disk Response Time: Response time is the average time spent by a request waiting to
perform its I/O operation. Average response time is the average of the response times of
all requests. Variance of response time is a measure of how individual requests are
serviced with respect to the average response time. So the disk scheduling algorithm that
gives minimum variance of response time is better.
Disk Scheduling Algorithms
1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the
requests are addressed in the order they arrive in the disk queue.
Advantages:
Every request gets a fair chance
No indefinite postponement
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and
services the requests coming in its path; after reaching the end of the disk, it reverses its
direction and again services the requests arriving in its path. This algorithm works like
an elevator and hence is also known as the elevator algorithm. As a result, the requests at
the mid-range are serviced more, and those arriving behind the disk arm have to wait.
Advantages:
High throughput
Low variance of response time
Average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been
scanned, after reversing its direction. So, it may be possible that too many requests are
waiting at the other end or there may be zero or few requests pending at the scanned
area.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, goes to the other end of the disk and starts servicing the requests from
there. So the disk arm moves in a circular fashion; this algorithm is similar to SCAN, and
hence it is known as C-SCAN (Circular SCAN).
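The difference in total head movement between these algorithms can be sketched with a small calculator. It assumes the head initially moves toward higher cylinder numbers and that SCAN/C-SCAN travel all the way to the disk ends; the request queue and disk size are illustrative:

```python
def total_head_movement(requests, head, policy, disk_size=200):
    """Total seek distance in cylinders for FCFS, SCAN, or CSCAN."""
    if policy == "FCFS":
        movement, pos = 0, head
        for r in requests:          # serve strictly in arrival order
            movement += abs(r - pos)
            pos = r
        return movement
    low = [r for r in requests if r < head]
    if policy == "SCAN":
        # Sweep up to the last cylinder, reverse, and come back down
        # to the lowest pending request (the elevator behaviour).
        movement = disk_size - 1 - head
        if low:
            movement += (disk_size - 1) - min(low)
        return movement
    if policy == "CSCAN":
        # Sweep up to the last cylinder, jump back to cylinder 0,
        # then sweep up again to the highest request below the start.
        movement = (disk_size - 1 - head) + (disk_size - 1)
        if low:
            movement += max(low)
        return movement
    raise ValueError(policy)

requests = [98, 183, 37, 122, 14, 124, 65, 67]   # illustrative queue
fcfs  = total_head_movement(requests, head=53, policy="FCFS")   # 640
scan  = total_head_movement(requests, head=53, policy="SCAN")   # 331
cscan = total_head_movement(requests, head=53, policy="CSCAN")  # 382
```

FCFS pays for every back-and-forth jump in arrival order, while the sweeping algorithms serve requests along the way, which is where their lower totals come from.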
Advantages:
Multiprogramming is needed for efficiency: a single user cannot keep the CPU and I/O
devices busy at all times. Multiprogramming organizes jobs (code and data) so the CPU
always has one to execute. A subset of the total jobs in the system is kept in memory, and
one job is selected and run via job scheduling.
When it has to wait (for I/O, for example), the OS switches to another job.
Each user has at least one program executing in memory (a process). If several jobs are
ready to run at the same time, CPU scheduling is needed.
If processes don't fit in memory, swapping moves them in and out to run.
• Each layer is an abstract object model that contains objects and the routines that are
required to access these objects.
Hardware: The innermost layers represent the hardware part.
Kernel:
• The core of the operating system that manages the hardware resources of a computer.
• It interacts directly with the hardware but never interacts with the user.
Shell: The command interpreter, which is an interface between the user and the Kernel.
File:
• A file is a sequence of bits, bytes, lines, or records whose meaning is defined by the
user or programmer.
Terms:
File system:
The part of the OS that manages the files is known as file system.
1. Name – It is the attribute which identifies the file; the symbolic file name is kept in
human-readable form.
2. Identifier – It is a unique identifier for each file within the file system, in a non-human-
readable form. Generally the identifier is represented by a number.
5. Size – It indicates the current file size (in bytes, words, or blocks) and may also contain
the maximum allowable file size.
6. Protection – access-control information determines who can do reading, writing, and
executing. For example, here user1 can read the vvfgc file but this user cannot write
content to the file.
7. Time, date, and user identification – contains information about creation, last
modification, and last use. This data is used for protection, security, and usage monitoring.
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU
to one of them
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
Scheduling Criteria
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
Optimization criteria: max CPU utilization, max throughput, min turnaround time, min
waiting time, min response time
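Criteria such as average waiting time and turnaround time can be computed directly; the sketch below applies FCFS to the five jobs from question 1 (arrival and burst times as given there):

```python
def fcfs_metrics(jobs):
    """Average waiting and turnaround time under FCFS.
    jobs: list of (arrival, burst) pairs already sorted by arrival."""
    clock = 0
    waiting, turnaround = [], []
    for arrival, burst in jobs:
        start = max(clock, arrival)        # CPU may sit idle until arrival
        clock = start + burst              # job runs to completion
        waiting.append(start - arrival)
        turnaround.append(clock - arrival)
    n = len(jobs)
    return sum(waiting) / n, sum(turnaround) / n

# Jobs 1-5: (arrival, burst) = (0,15), (2,3), (5,5), (6,8), (7,12)
avg_wait, avg_tat = fcfs_metrics([(0, 15), (2, 3), (5, 5), (6, 8), (7, 12)])
# avg_wait = 13.4, avg_tat = 22.0
```

The Gantt chart implied by the loop is J1: 0-15, J2: 15-18, J3: 18-23, J4: 23-31, J5: 31-43, from which the averages follow.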
To introduce the notion of a thread — a fundamental unit of CPU utilization that forms the
basis of multithreaded computer systems
To discuss the APIs for the Pthreads, Win32, and Java thread libraries
Process vs. Thread:
• A process needs interaction with the operating system, but threads do not need to
interact with the OS.
• Each process can execute its code in its own local memory area, while all threads of a
process can share the same memory.
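The shared-memory point can be demonstrated in Python; this is a sketch, and the variable and function names are illustrative:

```python
import threading

# All threads of one process share the same address space, so they can
# communicate through an ordinary variable -- no pipes or sockets needed,
# unlike heavyweight processes with separate address spaces.
shared = []                      # lives in the single shared address space
lock = threading.Lock()

def append_squares(start, stop):
    for n in range(start, stop):
        with lock:               # threads must still synchronize access
            shared.append(n * n)

t1 = threading.Thread(target=append_squares, args=(0, 3))
t2 = threading.Thread(target=append_squares, args=(3, 6))
t1.start(); t2.start(); t1.join(); t2.join()
# Both threads wrote into the very same list object.
```

With two separate heavyweight processes, each would get its own copy of `shared`, and the results would have to be exchanged through an explicit IPC mechanism.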