
JSS Mahavidyapeetha

JSS Science and Technology University, Mysuru – 570 006

DEPARTMENT OF COMPUTER APPLICATIONS

Submission on

“Operating System”

Submitted to:

Prof. A S Manjunath
Assistant Professor
Department of Computer Applications
JSS Science and Technology University, Mysuru

Submitted by:
Prajwal K P
Reg. No: 01JST19PMC031
1. Define the term process and differentiate between heavyweight and
lightweight processes. Assume that following jobs have arrived in the order
1,2,3,4 and 5:
Job   Arrival Time   Burst Time   Priority
 1         0             15           2
 2         2             03           1
 3         5             05           5
 4         6             08           4
 5         7             12           3

Give Gantt chart and calculate Avg. Turn-around Time and Waiting Time for:
i) FCFS ii) SJF and iii) Pre-emptive priority scheduling.
A process is a program in execution. It is a unit of work within the system. A program is
a passive entity, whereas a process is an active entity.
A process needs resources such as CPU time, memory, files and I/O devices to accomplish its task.

In a lightweight process, threads are used to divide up the workload. Here you would
see one process executing in the OS (for this application or service). This process would
possess one or more threads. Each of the threads in this process shares the same address space.
Because threads share their address space, communication between the threads is simple and
efficient. Each thread can be compared to a process in a heavyweight scenario.
In a heavyweight process, new processes are created to perform the work in parallel.
Here (for the same application or service), you would see multiple processes running. Each
heavyweight process contains its own address space. Communication between these
processes would involve additional communications mechanisms such as sockets or pipes.
The benefits of a lightweight process come from the conservation of resources. Since threads
use the same code section, data section and OS resources, fewer resources are used overall. The
drawback is that you now have to ensure your system is thread-safe: you have to make sure the
threads don't step on each other. Fortunately, Java provides the necessary tools to allow you
to do this.
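As an illustration of the FCFS part of the question, the sketch below computes the completion, turnaround and waiting times for the five jobs in the table above (a minimal sketch; the SJF and pre-emptive priority averages would be obtained the same way once their Gantt charts fix the completion times):

#include <stdio.h>

/* FCFS for the jobs in the table above (arrival, burst), already ordered by arrival. */
int main(void) {
    int arrival[] = {0, 2, 5, 6, 7};
    int burst[]   = {15, 3, 5, 8, 12};
    int n = 5, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];            /* CPU idles until the job arrives */
        time += burst[i];                 /* completion time of job i */
        int tat = time - arrival[i];      /* turnaround = completion - arrival */
        int wt  = tat - burst[i];         /* waiting = turnaround - burst */
        printf("Job %d: TAT=%d WT=%d\n", i + 1, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", total_tat / n, total_wt / n);
    return 0;
}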

2. a) Define a file system. What are various components of a file system? State
and explain various file allocation methods.
A file is a collection of related information that is recorded on secondary storage; in
other words, a file is a collection of logically related entities. From the user's perspective, a
file is the smallest allotment of logical secondary storage.
A file's attributes vary from one operating system to another but typically consist of these:

 Name: Name is the symbolic file name and is the only information kept in human
readable form.
 Identifier: This unique tag, usually a number, identifies the file within the file system; it
is the non-human-readable name of the file.
 Type: This information is needed for systems that support different types of files or
file formats.
 Location: This information is a pointer to a device and to the location of the file on
that device.
 Size: The current size of the file (in bytes, words or blocks) and possibly the
maximum allowed size are included in this attribute.
 Protection: Access-control information determines who can do the reading, writing,
executing, and so on.
 Date, Time & user identification: This information may be kept for the creation of
the file, its last modification and its last use. These data can be useful for protection,
security, and monitoring of usage.

Three major methods of allocating disk space are in wide use: contiguous, linked, and
indexed.
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example,
if a file requires n blocks and is given a block b as the starting location, then the blocks
assigned to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting block
address and the length of the file (in terms of blocks required), we can determine the blocks
occupied by the file.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts at block 19 with length = 6 blocks.
Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.
Advantages:
 Both sequential and direct access are supported by this. For direct access, the
address of the kth block of the file which starts at block b can easily be obtained as
(b+k).
This is extremely fast since the number of seeks is minimal because of the contiguous allocation
of file blocks.

Disadvantages:
 This method suffers from both internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous
memory at a particular instance.

2. Linked List Allocation


In this scheme, each file is a linked list of disk blocks which need not be contiguous.
The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following image shows how the blocks can be randomly distributed.
The last block (25) contains -1, indicating a null pointer, and does not point to any other block.
Advantages:
 This is very flexible in terms of file size. File size can be increased easily since the
system does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively
better in terms of memory utilization.

Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks
are needed to access every block individually. This makes linked allocation slower.
 It does not support random or direct access. We cannot directly access the blocks of a
file. A block k of a file can be accessed by traversing k blocks sequentially (sequential
access) from the starting block of the file via block pointers.
 Pointers required in the linked allocation incur some extra overhead.
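A small sketch of why direct access is costly under linked allocation: reaching block k requires following k pointers from the start of the file. The next_block table and the 9 → 16 → 1 → 10 → 25 chain for 'jeep' are assumptions made for illustration, not an actual file-system API.

#include <stdio.h>

#define END_OF_FILE -1

/* Hypothetical on-disk pointer table: next_block[b] holds the block that follows b. */
int next_block[32];

/* Return the disk block holding the k-th block of a file that starts at 'start'. */
int kth_block(int start, int k) {
    int b = start;
    for (int i = 0; i < k && b != END_OF_FILE; i++)
        b = next_block[b];            /* one disk read per hop in a real system */
    return b;
}

int main(void) {
    /* Assumed chain for file 'jeep': 9 -> 16 -> 1 -> 10 -> 25 -> end */
    next_block[9] = 16; next_block[16] = 1; next_block[1] = 10;
    next_block[10] = 25; next_block[25] = END_OF_FILE;
    printf("Block k=3 of the file lives in disk block %d\n", kth_block(9, 3));
    return 0;
}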

3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all
the blocks occupied by a file. Each file has its own index block. The ith entry in the index
block contains the disk address of the ith file block. The directory entry contains the address
of the index block as shown in the image:

Advantages:
 This supports direct access to the blocks occupied by the file and therefore provides
fast access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation
keeps one entire block (the index block) for the pointers, which is inefficient in terms
of memory utilization. In linked allocation, by contrast, we lose the space of only 1 pointer
per block.
For very large files, a single index block may not be able to hold all the pointers.
The following mechanisms can be used to resolve this:

1. Linked scheme: This scheme links two or more index blocks together for holding the
pointers. Every index block would then contain a pointer or the address to the next
index block.
2. Multilevel index: In this policy, a first level index block is used to point to the second
level index blocks which in turn points to the disk blocks occupied by the file. This can
be extended to 3 or more levels depending on the maximum file size.

Combined Scheme: 

In this scheme, a special block called the Inode (information Node) contains all the
information about the file such as the name, size, authority, etc. and the remaining space of
Inode is used to store the Disk Block addresses which contain the actual file as shown in the
image below. The first few of these pointers in Inode point to the direct blocks i.e. the
pointers contain the addresses of the disk blocks that contain data of the file. The next few
pointers point to indirect blocks. Indirect blocks may be single indirect, double indirect or
triple indirect. Single Indirect block is the disk block that does not contain the file data but the
disk address of the blocks that contain the file data. Similarly, double indirect blocks do not
contain the file data but the disk address of the blocks that contain the address of the blocks
containing the file data.
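The Inode lookup described above can be sketched as a simple offset calculation; the block size and the number of direct pointers below are assumptions for illustration only (real file systems differ):

#include <stdio.h>

#define BLOCK_SIZE   1024                      /* assumed block size in bytes */
#define NUM_DIRECT   12                        /* assumed number of direct pointers */
#define PTRS_PER_BLK (BLOCK_SIZE / 4)          /* 4-byte disk addresses per block */

/* Report which kind of inode pointer covers a given byte offset in the file. */
void locate(long offset) {
    long blk = offset / BLOCK_SIZE;            /* logical file block number */
    if (blk < NUM_DIRECT)
        printf("offset %ld -> direct pointer %ld\n", offset, blk);
    else if (blk < NUM_DIRECT + PTRS_PER_BLK)
        printf("offset %ld -> single indirect block, entry %ld\n",
               offset, blk - NUM_DIRECT);
    else
        printf("offset %ld -> double (or triple) indirect block\n", offset);
}

int main(void) {
    locate(500);       /* falls inside the direct blocks */
    locate(20000);     /* past the direct blocks -> single indirect */
    locate(400000);    /* beyond the single-indirect range -> double indirect */
    return 0;
}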
b) What problems could occur if a system allowed a file system to be mounted
simultaneously at more than one location?

Consistency semantics specify how multiple users are to access a shared file
simultaneously; they are similar to process synchronization algorithms, but tend to be less
complex due to disk I/O and network latency (for remote file systems). The Andrew File
System (AFS) implemented complex remote file-sharing semantics, whereas the UNIX file
system (UFS) implements:
 Writes to an open file are visible immediately to other users of the same open file.
 A shared file pointer allows multiple users to read and write concurrently.
AFS has session semantics: writes are only visible to sessions starting after the file is closed.
If a file system could be mounted simultaneously at more than one location, there would be
multiple paths to the same file, which could confuse users or encourage mistakes (deleting a
file through one path deletes the file in all the other paths).

3. Write short notes on:


a) Distributed operating systems

These types of operating systems are a recent advancement in the world of computer technology
and are being widely accepted all over the world, and at a great pace. Various autonomous,
interconnected computers communicate with each other using a shared communication network.
The independent systems possess their own memory unit and CPU, and are referred to as loosely
coupled or distributed systems. These systems' processors may differ in size and function. The
major benefit of working with these types of operating systems is that a user can always access
files or software which are not actually present on his own system but on some other system
connected within the network, i.e., remote access is enabled within the devices connected to
that network.
Advantages of Distributed Operating System:
 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available as they are very expensive. Moreover,
the underlying software is highly complex and not yet well understood
Examples of distributed operating systems are LOCUS, etc.

b) Features of LINUX file system


 Portable – Portability means the software can work on different types of hardware in
the same way. The Linux kernel and application programs support installation on any kind of
hardware platform.

 Open Source – Linux source code is freely available and it is a community-based
development project. Multiple teams work in collaboration to enhance the capability of the
Linux operating system, and it is continuously evolving.

 Multi-User – Linux is a multiuser system, which means multiple users can access system
resources like memory, RAM and application programs at the same time.

 Multiprogramming – Linux is a multiprogramming system, which means multiple
applications can run at the same time.

 Hierarchical File System – Linux provides a standard file structure in which system
files/ user files are arranged.

 Shell – Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to do various types of operations, call
application programs etc.
 Security – Linux provides user security using authentication features like password
protection/ controlled access to specific files/ encryption of data.

Linux is fast, free and easy to use, and powers laptops and servers around the world. Linux
has many more features to amaze its users, such as:

 Live CD/USB: Almost all Linux distributions have Live CD/USB feature by which
user can run/try the OS even without installing it on the system.

 Graphical user interface (X Window System): People think that Linux is a
command-line OS; to some extent that is true, but not necessarily, since Linux has packages
which can be installed to make the whole OS graphics-based, like Windows.

 Supports most national or customized keyboards: Linux is used worldwide and
is hence available in multiple languages, and supports most custom national keyboards.

 Application Support: Linux has its own software repository from where users can
download and install thousands of applications just by issuing a command in Linux
Terminal or Shell. Linux can also run Windows applications if needed.

4. (A) Write a detailed note on Device management policies.


Device Management is another important function of the operating system. Device
management is responsible for managing all the hardware devices of the computer system. It
may also include the management of the storage device as well as the management of all the
input and output devices of the computer system. It is the responsibility of the operating
system to keep track of the status of all the devices in the computer system. The status of any
computing devices, internal or external may be either free or busy. If a device requested by a
process is free at a specific instant of time, the operating system allocates it to the process.

An operating system manages the devices in a computer system with the help of
device controllers and device drivers. Each device in the computer system is equipped with
a device controller. For example, the various device controllers in a computer system may be
the disk controller, printer controller, tape-drive controller and memory controller. All these
device controllers are connected with each other through a system bus. The device controllers
are actually the hardware components that contain buffer registers to store data temporarily.
The transfer of data between a running process and the various devices of the computer system
is accomplished only through these device controllers.

Some important points of device management


1. An operating system communicates with the devices controllers with the help of device
drivers while allocating the device to the various processes running on the computer system.
2. Device drivers are the software programs that are used by an operating system to control
the functioning of various devices in a uniform manner.
3. The device drivers may also be regarded as the system software programs acting as an
intermediary between the processes and the devices controllers.
4. The device controller used in the device management operation usually includes three
different registers: command, status, and data.

5. The other major responsibility of the device management function is to implement the
Application Programming Interface (API).

(b) Explain the role of I/O traffic controller in detail


I/O controllers are a series of microchips which help in the communication of data
between the central processing unit and the motherboard. The main purpose of this system is
to help in the interaction of peripheral devices with the control units (CUs). Put simply, the
I/O controller helps in the connection and control of various peripheral devices, which are
input and output devices. It is usually installed on the motherboard of a computer. However,
it can also be used as an accessory in the case of replacements or in order to add more
peripheral devices to the computer.
One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-
mapped screen, LED, Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.
An I/O system is required to take an application I/O request and send it to the
physical device, then take whatever response comes back from the device and send it to the
application. I/O devices can be divided into two categories −
*Block devices − A block device is one with which the driver communicates by
sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key
etc.
*Character devices − A character device is one with which the driver communicates
by sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.
*Device Controllers: Device drivers are software modules that can be plugged into
an OS to handle a particular device. Operating System takes help from device drivers
to handle all I/O devices.
*The Device Controller works like an interface between a device and a device driver.
I/O units (Keyboard, mouse, printer, etc.) typically consist of a mechanical
component and an electronic component where electronic component is called the
device controller.
There is always a device controller and a device driver for each device to
communicate with the Operating Systems. A device controller may be able to handle
multiple devices. As an interface its main task is to convert serial bit stream to block of
bytes, perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the
socket is connected to a device controller. Following is a model for connecting the CPU,
memory, controllers, and I/O devices where CPU and device controllers all use a common
bus for communication.

Synchronous vs. asynchronous I/O


Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
Asynchronous I/O − I/O proceeds concurrently with CPU execution

5. (a) Explain the concept of semaphores in detail.


Semaphores are integer variables that are used to solve the critical section problem by
using two atomic operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −

 Wait

The wait operation decrements the value of its argument S if it is positive. If S is zero
or negative, the calling process waits (the decrement is not performed) until S becomes
positive.

wait(S) {
    while (S <= 0)
        ;        // busy wait until S becomes positive
    S--;
}

 Signal

The signal operation increments the value of its argument S.

signal(S) {
    S++;
}

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows:

 Counting Semaphores

These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore count is
the number of available resources. If the resources are added, semaphore count
automatically incremented and if the resources are removed, the count is
decremented.
 Binary Semaphores

The binary semaphores are like counting semaphores but their value is restricted to 0
and 1. The wait operation proceeds only when the semaphore value is 1 (and sets it to 0),
and the signal operation sets the value back to 1. It is sometimes easier to implement binary
semaphores than counting semaphores.

Advantages of Semaphores

Some of the advantages of semaphores are as follows:

 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.

 There is no resource wastage because of busy waiting in semaphores as processor


time is not wasted unnecessarily to check if a condition is fulfilled to allow a process
to access the critical section.

 Semaphores are implemented in the machine independent code of the microkernel. So


they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated so the wait and signal operations must be implemented
in the correct order to prevent deadlocks.

 Semaphores are impractical for large-scale use as their use leads to loss of modularity.
This happens because the wait and signal operations prevent the creation of a
structured layout for the system.

 Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.
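The wait/signal operations above correspond directly to the POSIX semaphore calls sem_wait and sem_post; a minimal sketch of a binary semaphore protecting a shared counter (compile with -pthread) is shown below.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;               /* binary semaphore guarding the critical section */
long counter = 0;          /* shared resource */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);  /* wait(S): blocks while the value is 0 */
        counter++;         /* critical section */
        sem_post(&mutex);  /* signal(S): increments the value, waking a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);               /* initial value 1 -> binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 when the semaphore is used */
    sem_destroy(&mutex);
    return 0;
}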

(b) Explain in detail the concept of Multiprocessor Operating Systems.


Most computer systems are single-processor systems, i.e., they have only one processor.
However, multiprocessor or parallel systems are increasing in importance nowadays. These
systems have multiple processors working in parallel that share the computer clock, memory,
bus, peripheral devices etc. An image demonstrating the multiprocessor architecture is:

Types of Multiprocessors

There are mainly two types of multiprocessors i.e. symmetric and asymmetric
multiprocessors. Details about them are as follows:

Symmetric Multiprocessors

In these types of systems, each processor contains a similar copy of the operating system and
they all communicate with each other. All the processors are in a peer to peer relationship i.e.
no master - slave relationship exists between them.

An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.

Asymmetric Multiprocessors

In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instructions to all the other processors. An asymmetric multiprocessor system thus
contains a master-slave relationship.

The asymmetric multiprocessor was the only type of multiprocessor available before symmetric
multiprocessors were created. Even now, it is the cheaper option.

Advantages of Multiprocessor Systems

There are multiple advantages to multiprocessor systems. Some of these are:

More reliable Systems


In a multiprocessor system, even if one processor fails, the system will not halt. This ability
to continue working despite hardware failure is known as graceful degradation. For example,
if there are 5 processors in a multiprocessor system and one of them fails, the remaining 4
processors are still working, so the system only becomes slower and does not grind to a
halt.

Enhanced Throughput

If multiple processors are working in tandem, then the throughput of the system increases,
i.e., the number of processes getting executed per unit of time increases. If there are N
processors, then the throughput increases by an amount just under N.

More Economic Systems

Multiprocessor systems are cheaper than single processor systems in the long run because
they share the data storage, peripheral devices, power supplies etc. If there are multiple
processes that share data, it is better to schedule them on multiprocessor systems with shared
data than have different computer systems with multiple copies of the data.

Disadvantages of Multiprocessor Systems

There are some disadvantages as well to multiprocessor systems. Some of these are:

Increased Expense

Even though multiprocessor systems are cheaper in the long run than using multiple
computer systems, still they are quite expensive. It is much cheaper to buy a simple single
processor system than a multiprocessor system.

Complicated Operating System Required

There are multiple processors in a multiprocessor system that share peripherals, memory etc.
So, it is much more complicated to schedule processes and allocate resources to processes than
in single-processor systems. Hence, a more complex and complicated operating system is
required in multiprocessor systems.

Large Main Memory Required

All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single-processor systems.
6. Explain any two Page Replacement algorithms with a suitable example?
Page Replacement Algorithms:
 First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in memory in a queue; the oldest page is at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number
of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 —> 1
Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 —> 1 Page
Fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 Page Fault. Total = 6 page faults.
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults
when increasing the number of page frames while using the First in First Out (FIFO) page
replacement algorithm.  For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1,
0, 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
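The FIFO example above can be checked with a short sketch that keeps the frames in a circular array and always replaces the oldest page:

#include <stdio.h>

int main(void) {
    int ref[]  = {1, 3, 0, 3, 5, 6, 3};      /* reference string from the example */
    int n      = sizeof(ref) / sizeof(ref[0]);
    int frames[3] = {-1, -1, -1};            /* 3 page frames, initially empty */
    int next = 0, faults = 0;                /* 'next' points at the oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = ref[i];           /* replace the oldest page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);   /* prints 6 for this string */
    return 0;
}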
 Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of time
in the future.
Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames.
Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there so —> 0 Page Fault.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time
in the future —> 1 Page Fault.
0 is already there so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.

Now for the further page reference string —> 0 Page Faults because the pages are already
available in memory. Total = 6 page faults.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.

7. a) Write a note on the various services provided by the operating systems.


An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.

 It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system −

 Program execution

 I/O operations

 File System manipulation

 Communication

 Error Detection

 Resource Allocation

 Protection

Program execution

Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities is
encapsulated as a process.

A process includes the complete execution context (code to execute, data to


manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −

 Loads a program into memory.

 Executes the program.

 Handles program's execution.

 Provides a mechanism for process synchronization.


 Provides a mechanism for process communication.

 Provides a mechanism for deadlock handling.

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.

An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.

 Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major activities
of an operating system with respect to file management −

 Program needs to read a file or write a file.

 The operating system gives the permission to the program for operation on file.

 Permission varies from read-only, read-write, denied and so on.

 Operating System provides an interface to the user to create/delete files.

 Operating System provides an interface to the user to create/delete directories.

 Operating System provides an interface to create the backup of file system.

Communication

In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention
and security. Following are the major activities of an operating system with respect to
communication −

 Two processes often require data to be transferred between them

 Both the processes can be on one computer or on different computers, but are
connected through a computer network.

 Communication may be implemented by two methods, either by Shared Memory or


by Message Passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices
or in the memory hardware. Following are the major activities of an operating system with
respect to error handling −

 The OS constantly checks for possible errors.

 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In case of multi-user or multi-tasking environment, resources such as main memory,


CPU cycles and file storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.

 CPU scheduling algorithms are used for better utilization of CPU.

Protection

Considering a computer system having multiple users and concurrent execution of


multiple processes, the various processes must be protected from each other's activities.

Protection refers to a mechanism or a way to control the access of programs,


processes, or users to the resources defined by a computer system. Following are the major
activities of an operating system with respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.

 The OS provides authentication features for each user by means of passwords.

b) Explain in detail about the following CPU scheduling algorithms


(a) Shortest Job First
Shortest job first (SJF), or shortest job next (SJN), is a scheduling policy that selects the
waiting process with the smallest execution time to execute next. SJF is a non-preemptive
algorithm.

 Shortest Job first has the advantage of having a minimum average waiting time
among all scheduling algorithms.
 It is a Greedy Algorithm.
 It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
 It is practically infeasible as the operating system may not know the burst times in
advance and therefore may not be able to sort the processes on them. While it is not
possible to predict execution time exactly, several methods can be used to estimate the
execution time for a job, such as a weighted average of previous execution times. SJF can
be used in specialized environments where accurate estimates of running time are available.
Algorithm:
1. Sort all the processes according to arrival time.
2. Then select the process which has the minimum arrival time and the minimum burst time.
3. After a process completes, form a pool of the processes that arrived before the previous
process completed, and select from this pool the process having the minimum burst
time.
How are the following times computed in SJF using a program?
1. Completion Time: Time at which process completes its execution.
2. Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
3. Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
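Following the three steps above, a minimal non-preemptive SJF sketch is shown below; the four jobs are an assumed example, not data from any question in this document.

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};        /* assumed arrival times */
    int burst[N]   = {8, 4, 9, 5};        /* assumed burst times */
    int done[N]    = {0};
    int time = 0;
    double total_tat = 0, total_wt = 0;

    for (int finished = 0; finished < N; ) {
        int pick = -1;
        /* among the jobs that have arrived, pick the one with minimum burst time */
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }    /* CPU idle: nothing has arrived yet */
        time += burst[pick];                     /* run the job to completion */
        int tat = time - arrival[pick];          /* Turn Around Time */
        int wt  = tat - burst[pick];             /* Waiting Time */
        printf("Job %d: completion=%d TAT=%d WT=%d\n", pick + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        done[pick] = 1;
        finished++;
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", total_tat / N, total_wt / N);
    return 0;
}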

(b) Multilevel feedback Queue scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use
of other existing algorithms to group and schedule jobs with common characteristics.

 Multiple queues are maintained for processes with common characteristics.

 Each queue can have its own scheduling algorithms.

 Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue. In a multilevel
feedback queue, processes may additionally move between queues: a process that uses too
much CPU time is demoted to a lower-priority queue, while an I/O-bound or long-waiting
(aged) process can be promoted to a higher-priority queue.

8. a) Explain with an example the concept of shared pages in detail.


Shared Pages
One copy of read-only (reentrant) code can be shared among processes (e.g., text editors,
compilers, window systems). Shared code must appear in the same location in the logical
address space of all processes. With private code and data, each process keeps a separate copy
of its code and data.

The pages for the private code and data can appear anywhere in the logical address space

Sharing is another design issue for paging system.

It is very common for many computer users to be running the same program at the same time
in a large multiprogramming computer system.

Now, to avoid having two copies of the same page in memory at the same time, the pages are
simply shared.

But a problem arises that not all the pages are shareable.

Generally, read-only pages are shareable, for example, program text; but data pages are not
shareable.

With shared pages, a problem occurs whenever two or more processes share some code.

Let's suppose that process X and process Y are both running the editor and sharing its
pages.

Now if the scheduler decides to remove process X from memory, evicting all its pages
and filling the empty page frames with another program will cause process Y to generate a
large number of page faults just to bring those pages back again.

Similarly, whenever process X terminates, it is essential to be able to discover
that the pages are still in use so that their disk space is not freed by accident.

b) Write a brief note on Windows based operating Systems

Windows
Generally referred to as Microsoft Windows, this family of operating systems is developed
by the tech giant Microsoft and is the most commonly used OS for personal computers and,
to some extent, mobile phones (Windows Phone). Microsoft Windows is a collection of
graphics-oriented operating systems, first developed and launched in 1985 under the name
Windows 1.0. At the start its aim was to provide a graphical shell to the then-famous MS-DOS,
which had a character user interface, but it did not gain much popularity then. Slowly, with
the implementation of innovative features, the OS gained popularity and soon dominated the
computer industry, owing to its freedom of use and user-friendly environment. Let's look at
the advantages and disadvantages of using Microsoft Windows.

Advantages:
 Hardware compatibility: Almost every computer hardware manufacturer supports
Microsoft Windows. This means users can buy a computer from almost any manufacturer
and get the latest version of Microsoft Windows 10 pre-loaded on it.
 Pre-loaded and available software: Windows comes with a lot of user-friendly
software to make everyday tasks easier, and if a program is not available, one can easily
get it from the Internet and run it.
 Ease of Use: Microsoft Windows is by far the most user-friendly OS on the market,
keeping in mind that it serves the purpose of most types of markets in the world. It is
the most preferred OS for personal computers.
 Game Runner: Windows supports a plethora of games manufactured to date and
comes with all the supporting base software to drive the game engines, so it is the most
popular OS among game lovers.
Disadvantages:
 Expensive: Microsoft Windows is a closed-source OS and the license cost is quite high.
It is not possible for everyone to buy a new license every time one expires. The latest
Windows 10 costs around 6000 to 8000 INR.
 Poor Security: Windows is much more prone to viruses and malware in comparison to
other OSs like Linux or macOS.
 Not reliable: Windows starts to lag with time and eventually needs rebooting every now
and then to get back its initial speed.
There are many versions of Windows that have been developed since 1985, but a few that
revolutionized the operating system industry are:

1. Windows 95
2. Windows 98
3. Windows NT
4. Windows XP
5. Windows Vista
6. Windows 7
7. Windows 8
8. Windows 8.1
9. Windows 10(Latest)
According to Net Applications, which tracks usage based on web use, Windows is the most-used
operating system family for personal computers as of July 2017, with close to 90% usage
share and rising.

9. Explain the security attacks on operating system. Write the steps to protect the computer
system from various attacks.

The Security Problem

 (Protection) dealt with protecting files and other resources from accidental misuse by
cooperating users sharing a system, generally using the computer for normal
purposes.
 (Security) deals with protecting systems from deliberate attacks, either internal or
external, from individuals intentionally attempting to steal information, damage
information, or otherwise deliberately wreak havoc in some manner.
 Some of the most common types of violations include:
o Breach of Confidentiality - Theft of private or confidential information, such
as credit-card numbers, trade secrets, patents, secret formulas, manufacturing
procedures, medical information, financial information, etc.
o Breach of Integrity - Unauthorized modification of data, which may have
serious indirect consequences. For example a popular game or other program's
source code could be modified to open up security holes on users systems
before being released to the public.
o Breach of Availability - Unauthorized destruction of data, often just for the
"fun" of causing havoc and for bragging rights. Vandalism of web sites is a
common form of this violation.
o Theft of Service - Unauthorized use of resources, such as theft of CPU cycles,
installation of daemons running an unauthorized file server, or tapping into the
target's telephone or networking services.
o Denial of Service, DOS - Preventing legitimate users from using the system,
often by overloading and overwhelming the system with an excess of requests
for service.

One common attack is masquerading, in which the attacker pretends to be a trusted third
party. A variation of this is the man-in-the-middle, in which the attacker masquerades as
both ends of the conversation to two targets.

To protect the computer system from various attack:

1. Keep up with system and software security updates


While software and security updates can often seem like an annoyance, it really is
important to stay on top of them. Aside from adding extra features, they often cover security
holes. This means the provider of the operating system (OS) or software has found
vulnerabilities which give hackers the opportunity to compromise the program or even your
entire computer.

2. Have your wits about you


It should go without saying; being suspicious is one of the best things you can do to
keep your computer secure. Admittedly, with hacker techniques becoming increasingly
sophisticated, it can be difficult to tell when you’re under attack. All it takes is one email
open or link click and your computer could be compromised.

3. Enable a firewall
A firewall acts as a barrier between your computer or network and the internet. It
effectively closes computer ports to prevent unwanted communication with your device. This
protects your computer by stopping threats from entering the system and spreading between
devices. It can also help prevent your data leaving your computer.
4. Adjust your browser settings
Most browsers have options that enable you to adjust the level of privacy and security
while you browse. These can help lower the risk of malware infections reaching your
computer and malicious hackers attacking your device. Some browsers even enable you to
tell websites not to track your movements by blocking cookies.

5. Install antivirus and anti spyware software


Any machine connected to the internet is inherently vulnerable to viruses and other
threats, including malware, ransomware, and Trojan attacks. Antivirus software isn't a
completely foolproof option, but it can definitely help.

6. Password protect your software and lock your device


Most web-connected software that you install on your system requires login
credentials. The most important thing here is not to use the same password across all
applications. This makes it far too easy for someone to hack into all of your accounts and
possibly steal your identity.

7. Encrypt your data


Whether your computer houses your life’s work or a load of files with sentimental
value like photos and videos, it’s likely worth protecting that information. One way to ensure
it doesn’t fall into the wrong hands is to encrypt your data. Encrypted data will require
resources to decrypt it; this alone might be enough to deter a hacker from pursuing action.

8. Use a VPN
A Virtual Private Network (VPN) is an excellent way to step up your security,
especially when browsing online. While using a VPN, all of your internet traffic is encrypted
and tunneled through an intermediary server in a separate location. This masks your IP,
replacing it with a different one, so that your ISP can no longer monitor your activity.

10.What are device management policies for storing data in operating System?

I/O Device Management

The I/O subsystem of OS is responsible for:

•Managing device drivers for I/O devices.


• Managing I/O operations with devices such as keyboard, mouse, printer etc.,

• Managing the memory component that controls buffering, caching and spooling.

Device management:

A running program may need additional resources such as more memory, tape drives,
access to files and so on.

• request device, release device: To request a required device and to release the device
after use.

• read, write, and reposition: To read, write and reposition a device.

• get device attributes, set device attributes: To determine and reset the device attributes.

An operating system or the OS manages communication with the devices through


their respective drivers. The operating system component provides a uniform interface to
access devices of varied physical attributes. For device management in operating system:

 Keeps track of all devices; the program responsible for this is called the I/O controller.
 Monitors the status of each device, such as storage drives, printers and other
peripheral devices.
 Enforces preset policies and decides which process gets the device, when, and for
how long.
 Allocates and deallocates the device in an efficient way, de-allocating at two
levels: at the process level when the I/O command has been executed and the device is
temporarily released, and at the job level when the job is finished and the device is
permanently released.
 Optimizes the performance of individual devices.

Types of devices

The OS peripheral devices can be categorized into three: dedicated, shared, and virtual.
The differences among them are functions of the characteristics of the devices as well as
of how they are managed by the Device Manager.
Dedicated devices:-

Such devices are dedicated, or assigned, to only one job at a time until that job releases
them. Devices like printers, tape drives and plotters demand such an allocation scheme, since
it would be awkward if several users shared them at the same time. The disadvantage of such
devices is the inefficiency resulting from allocating the device to a single user for the entire
duration of job execution, even though the device is not in use 100% of the time.

Shared devices:-

These devices can be allocated to several processes. A disk (DASD) can be shared among
several processes at the same time by interleaving their requests. The interleaving is carefully
controlled by the Device Manager, and all conflicts must be resolved on the basis of
predetermined policies.

Virtual Devices:-

These devices are a combination of the first two types: they are dedicated devices which are
transformed into shared devices. For example, a printer can be converted into a shareable
device via a spooling program which re-routes all print requests to a disk. A print job is not
sent straight to the printer; instead, it goes to the disk (spool) until it is fully prepared with
all the necessary sequences and formatting, and then it goes to the printer. This technique
can transform one printer into several virtual printers, which leads to better performance and
use.

11. (A) What are the advantages and disadvantages of multiprocessor systems?
Advantages of Multiprocessor Systems

There are multiple advantages to multiprocessor systems. Some of these are:

More reliable Systems

In a multiprocessor system, even if one processor fails, the system will not halt. This
ability to continue working despite hardware failure is known as graceful degradation. For
example, if there are 5 processors in a multiprocessor system and one of them fails, the
remaining 4 processors are still working, so the system only becomes slower and does not
grind to a halt.

Enhanced Throughput

If multiple processors are working in tandem, then the throughput of the system
increases, i.e., the number of processes getting executed per unit of time increases. If there
are N processors, then the throughput increases by an amount just under N.

More Economic Systems

Multiprocessor systems are cheaper than single processor systems in the long run
because they share the data storage, peripheral devices, power supplies etc. If there are
multiple processes that share data, it is better to schedule them on multiprocessor systems
with shared data than have different computer systems with multiple copies of the data.

Disadvantages of Multiprocessor Systems

There are some disadvantages as well to multiprocessor systems. Some of these are:

Increased Expense

Even though multiprocessor systems are cheaper in the long run than using multiple
computer systems, still they are quite expensive. It is much cheaper to buy a simple single
processor system than a multiprocessor system.

Complicated Operating System Required

There are multiple processors in a multiprocessor system that share peripherals,


memory etc. So, it is much more complicated to schedule processes and impart resources to
processes than in single processor systems. Hence, a more complex and complicated
operating system is required in multiprocessor systems.

Large Main Memory Required

All the processors in the multiprocessor system share the memory. So a much larger
pool of memory is required as compared to single processor systems.

b. What are the basic components of Linux?


Linux is an operating system, just like Windows and Mac OS X. An operating
system is software that makes the best use of the hardware of devices such as laptops,
desktops or tablets. In a simple way, we can say the operating system is a bridge between the
software and the hardware. Without an OS it is not possible to run or execute software or
programs.

 Hardware: Peripheral devices such as RAM, HDD and CPU together constitute the
hardware layer for the LINUX operating system.
 Kernel: The Core part of the Linux OS is called Kernel, it is responsible for many
activities of the LINUX operating system. It interacts directly with hardware, which
provides low-level services like providing hardware details to the system. We have
two types of kernels – Monolithic Kernel and Microkernel
 Shell: The shell is an interface between the user and the kernel; it hides the
complexity of functions of the kernel from the user. It accepts commands from the
user and performs the action.
 Utilities: Operating system functions are granted to the user through the utilities.
Individual and specialized functions can be utilized from the system utilities.

12. a) What is the role of the scheduler? What requirements are to be satisfied for a
solution of the critical section problem?
Schedulers
 Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
 Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates CPU

Addition of Medium Term Scheduling

The short-term scheduler is invoked very frequently (milliseconds) ⇒ it must be fast. The
long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ it may be slow. The
long-term scheduler controls the degree of multiprogramming. Processes can be described as either:

I/O-bound process – spends more time doing I/O than computations; many short CPU
bursts

CPU-bound process – spends more time doing computations; few very long CPU bursts

A solution to the critical section problem must satisfy three requirements: mutual exclusion
(only one process may execute in its critical section at a time), progress (the selection of the
next process to enter its critical section cannot be postponed indefinitely), and bounded
waiting (there is a bound on the number of times other processes may enter their critical
sections after a process has made a request and before that request is granted).

b) Explain the use of I/O traffic controller in operating system.

I/O controllers are a series of microchips which help in the communication of data
between the central processing unit and the motherboard. The main purpose of this system is
to help in the interaction of peripheral devices with the control units (CUs). Put simply, the
I/O controller helps in the connection and control of various peripheral devices, which are
input and output devices. It is usually installed on the motherboard of a computer. However,
it can also be used as an accessory in the case of replacements or in order to add more
peripheral devices to the computer.

One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-
mapped screen, LED, Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.

An I/O system is required to take an application I/O request and send it to the physical
device, then take whatever response comes back from the device and send it to the
application. I/O devices can be divided into two categories −
*Block devices − A block device is one with which the driver communicates by
sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key
etc.

*Character devices − A character device is one with which the driver communicates
by sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.

*Device Controllers Device drivers are software modules that can be plugged into an OS to
handle a particular device. Operating System takes help from device drivers to handle all I/O
devices.

The Device Controller works like an interface between a device and a device driver. I/O units
(Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an
electronic component where electronic component is called the device controller.

There is always a device controller and a device driver for each device to communicate with
the Operating Systems. A device controller may be able to handle multiple devices. As an
interface its main task is to convert serial bit stream to block of bytes, perform error
correction as necessary.

Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
Synchronous vs. asynchronous I/O

Synchronous I/O − In this scheme CPU execution waits while I/O proceeds

Asynchronous I/O − I/O proceeds concurrently with CPU execution

13. Given the following information:

Job number   Arrival time   CPU cycle   Priority

     1              0            75          3
     2             10            40          1
     3             10            25          4
     4             80            20          5
     5             85            45          2

Draw a timeline for each of the following scheduling algorithms and determine
which one gives the best results.

1) FCFS

2) SJF

3) Round Robin (using a time quantum of 15)

4) Priority scheduling.

Assume a small integer means higher priority.
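As one worked option, the Round Robin case (time quantum = 15) can be simulated directly from the table above; a rough sketch (compile with a standard C compiler) that prints the timeline and the average turnaround time is given below, and the FCFS, SJF and priority timelines can be produced and compared the same way.

#include <stdio.h>

#define N 5
#define QUANTUM 15

int arrival[N] = {0, 10, 10, 80, 85};     /* arrival times from the table */
int cycle[N]   = {75, 40, 25, 20, 45};    /* CPU cycles from the table */
int remaining[N], queued[N];
int queue[64], head = 0, tail = 0;

/* Admit every job that has arrived by 'now' and has not yet joined the queue. */
void admit(int now) {
    for (int i = 0; i < N; i++)
        if (!queued[i] && arrival[i] <= now) {
            queue[tail++] = i;
            queued[i] = 1;
        }
}

int main(void) {
    int time = 0, finished = 0;
    double total_tat = 0;

    for (int i = 0; i < N; i++) remaining[i] = cycle[i];
    admit(time);
    while (finished < N) {
        if (head == tail) { time++; admit(time); continue; }   /* CPU idle */
        int j = queue[head++];
        int slice = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
        printf("t=%3d: job %d runs for %d\n", time, j + 1, slice);
        time += slice;
        remaining[j] -= slice;
        admit(time);                      /* arrivals during the slice enter first */
        if (remaining[j] == 0) {
            finished++;
            total_tat += time - arrival[j];   /* turnaround = completion - arrival */
        } else {
            queue[tail++] = j;            /* pre-empted job re-joins at the back */
        }
    }
    printf("Average turnaround time = %.2f\n", total_tat / N);
    return 0;
}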

14. a) Explain the concept of semaphores in detail.


Synchronization tool that does not require busy waiting. Semaphore S – integer variable.

Two standard operations modify S: wait() and signal()

Originally called P() and V()

Less complicated.
Can only be accessed via two indivisible (atomic) operations:

wait (S) {
    while (S <= 0)
        ;        // no-op (busy wait)
    S--;
}

signal (S) {
    S++;
}

Semaphore as General Synchronization Tool

Counting semaphore – integer value can range over an unrestricted domain

Binary semaphore – integer value can range only between 0

and 1; can be simpler to implement

Also known as mutex locks. Can implement a counting semaphore S as a binary semaphore.

Provides mutual exclusion:

Semaphore mutex;    // initialized to 1

do {
    wait (mutex);
    // Critical Section
    signal (mutex);
} while (TRUE);

Semaphore Implementation

Must guarantee that no two processes can execute wait() and signal() on the same
semaphore at the same time. Thus, the implementation itself becomes a critical-section
problem where the wait and signal code are placed in the critical section. We could now have
busy waiting in the critical-section implementation, but the implementation code is short and
there is little busy waiting if the critical section is rarely occupied. Note that applications may
spend lots of time in critical sections, so this is not a good solution in general.

Semaphore Implementation with no Busy waiting

With each semaphore there is an associated waiting queue. Each entry in a waiting queue has
two data items: a value (of type integer) and a pointer to the next record in the list.

Two operations:

block – place the process invoking the operation on the appropriate waiting queue.

wakeup – remove one of processes in the waiting queue and place it in the ready queue.

Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

b) Explain in brief about deadlock prevention


Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes

Deadlock Prevention

Restrain the ways request can be made

Mutual Exclusion – not required for sharable resources; must hold for non-sharable
resources

Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources. Require each process to request and be allocated all its resources
before it begins execution, or allow a process to request resources only when it has none.
This gives low resource utilization; starvation is possible.

No Preemption –

If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.

Pre-empted resources are added to the list of resources for which the process is waiting

Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting

Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration
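The circular-wait rule above is exactly what lock ordering enforces in practice; a small illustrative sketch (not from the source material, compile with -pthread) with two pthread mutexes is shown below: because every thread acquires the resources in the same increasing order, a cycle of waiting threads can never form.

#include <stdio.h>
#include <pthread.h>

/* Global ordering: resource A (lower number) is always locked before resource B. */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    const char *name = arg;
    /* Every thread requests resources in increasing order of enumeration,
       so the circular-wait condition can never hold. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("%s holds both resources\n", name);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}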

Deadlock Avoidance

Requires that the system has some additional a priori information available. The simplest and
most useful model requires that each process declare the maximum number of resources of
each type that it may need. The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-wait condition. The
resource-allocation state is defined by the number of available and allocated resources, and the
maximum demands of the processes.

Safe State

When a process requests an available resource, system must decide if immediate allocation
leaves the system in a safe state

System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all the Pj, with j < i. That is:

If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished. When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
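
This safe-state test can be sketched as code in the style of the Banker's safety algorithm. The sketch below is illustrative only: the number of processes and resource types and all matrix contents are assumptions supplied by the caller, not data from the question.

#include <stdbool.h>
#include <string.h>

#define NP 5    /* number of processes (assumed)      */
#define NR 3    /* number of resource types (assumed) */

/* Returns true if a safe sequence exists for the given state. */
bool is_safe(const int available[NR], int need[NP][NR], int alloc[NP][NR]) {
    int  work[NR];
    bool finished[NP] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < NP; ) {
        bool progressed = false;
        for (int i = 0; i < NP; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int r = 0; r < NR; r++)
                if (need[i][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                for (int r = 0; r < NR; r++)
                    work[r] += alloc[i][r];   /* Pi finishes and returns what it holds */
                finished[i] = true;
                progressed  = true;
                done++;
            }
        }
        if (!progressed) return false;        /* no Pi can proceed: state is unsafe */
    }
    return true;                              /* a safe sequence exists */
}

A request would be granted only if, after pretending to allocate it, the state still passes this check.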

Basic Facts

If a system is in safe state ⇒ no deadlocks.

If a system is in unsafe state ⇒ possibility of deadlock.

Avoidance ⇒ ensure that a system will never enter an unsafe state.

15. a) Explain in detail the concept of Multiprocessor Operating Systems.

b) Write a detailed note on role of I/O traffic controller.

REPEATED

16. What is the need of Page replacement? Consider the following reference
string

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Find the number of Page Faults with FIFO, Optimal Page replacement and
LRU with four free frames which are empty initially. Which algorithm gives the
minimum number of page faults?
Page replacement is needed when a page fault occurs and no frame is free: the operating system must select a victim frame, write it back to disk if it is dirty, and load the required page into the freed frame.

(A) FIFO (in each table below, every row shows the contents of one frame over time, starting from the reference at which that frame is first filled; F = page fault, H = hit):

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 4 4 4 4 4 4 4 4 4 4 7 7 7

7 7 7 7 7 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2

F F F F H F H F H H F H H F F H H F H H
In the FIFO algorithm 10 page faults and 10 hits will occur.

(B). Optimal page replacement:

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

1 1 1 1 1 4 4 4 4 4 4 1 1 1 1 1 1 1

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7 7 7 7 7 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7

F F F F H F H F H H H H H F H H H F H H
In optimal page replacement 8 page faults and 12 hits will occur.

(C) Least Recently Used (LRU):

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

1 1 1 1 1 4 4 4 4 4 4 1 1 1 1 1 1 1

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7 7 7 7 7 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7

F F F F H F H F H H H H H F H H H F H H

In the LRU algorithm 8 page faults and 12 hits will occur; with four frames, LRU happens to make the same replacements as Optimal for this reference string.

Therefore, Optimal and LRU give the minimum number of page faults (8), while FIFO gives 10.
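
Counts like these are easy to double-check by simulation. The short sketch below replays the same reference string against four frames under LRU and prints the number of faults; the bookkeeping arrays are written just for this illustration.

#include <stdio.h>

#define FRAMES 4
#define REFS   20

int main(void) {
    int ref[REFS] = { 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 };
    int frame[FRAMES], last_used[FRAMES];
    int used = 0, faults = 0;

    for (int t = 0; t < REFS; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (frame[i] == ref[t]) { hit = i; break; }

        if (hit >= 0) {
            last_used[hit] = t;               /* hit: just refresh the recency stamp */
        } else {
            faults++;
            int victim = 0;
            if (used < FRAMES) {
                victim = used++;              /* a free frame is still available */
            } else {
                for (int i = 1; i < FRAMES; i++)    /* evict the least recently used page */
                    if (last_used[i] < last_used[victim]) victim = i;
            }
            frame[victim]     = ref[t];
            last_used[victim] = t;
        }
    }
    printf("LRU page faults = %d\n", faults);   /* prints 8 for this input */
    return 0;
}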

17. a) Write a detailed note on Paging scheme of memory management.


In computer operating systems, paging is a memory management scheme by which a
computer stores and retrieves data from secondary storage for use in main memory. In this
scheme, the operating system retrieves data from secondary storage in same-
size blocks called pages. Paging is an important part of virtual memory implementations in
modern operating systems, using secondary storage to let programs exceed the size of
available physical memory.

For simplicity, main memory is called "RAM" (an acronym of "random-access


memory") and secondary storage is called "disk" (a shorthand for "hard disk drive"), but the
concepts do not depend on whether these terms apply literally to a specific computer system.
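
As a small illustration of fixed-size pages, the sketch below splits a virtual address into a page number and an offset. The 4 KB page size and the sample address are assumptions chosen only for the example; a real page table would then map the page number to a frame number.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                       /* assumed 4 KB pages     */

int main(void) {
    uint32_t vaddr  = 0x00012ABC;             /* sample virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;      /* page number            */
    uint32_t offset = vaddr % PAGE_SIZE;      /* offset within the page */

    /* With a (hypothetical) page table entry mapping 'page' to frame f,
       the physical address would be f * PAGE_SIZE + offset. */
    printf("page = %u, offset = 0x%X\n", page, offset);
    return 0;
}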

Page fault:

When a process tries to reference a page not currently present in RAM, the processor treats
this invalid memory reference as a page fault and transfers control from the program to the
operating system. The operating system must:

1. Determine the location of the data on disk.


2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to refer to the new page frame.
5. Return control to the program, transparently retrying the instruction that caused the
page fault.
When all page frames are in use, the operating system must select a page frame to
reuse for the page the program now needs. If the evicted page frame was dynamically
allocated by a program to hold data, or if a program modified it since it was read into RAM
(in other words, if it has become "dirty"), it must be written out to disk before being freed. If
a program later references the evicted page, another page fault occurs and the page must be
read back into RAM.

The method the operating system uses to select the page frame to reuse, which is
its page replacement algorithm, is important to efficiency. The operating system predicts the
page frame least likely to be needed soon, often through the least recently used (LRU)
algorithm or an algorithm based on the program's working set. To further increase
responsiveness, paging systems may predict which pages will be needed soon, preemptively
loading them into RAM before a program references them.

Demand paging
When pure demand paging is used, pages are loaded only when they are referenced. A program whose pages come from a memory-mapped file therefore begins execution with none of its pages in RAM. As the program incurs page faults, the operating system copies the needed pages from the backing store (e.g., a memory-mapped file, paging file, or swap partition) into RAM.
Anticipatory paging
This technique, sometimes also called swap prefetch, predicts which pages will be
referenced soon, to minimize future page faults. For example, after reading a page to
service a page fault, the operating system may also read the next few pages even though
they are not yet needed (a prediction using locality of reference). If a program ends, the
operating system may delay freeing its pages, in case the user runs the same program
again.
Free page queue, stealing, and reclamation
The free page queue is a list of page frames that are available for assignment.
Preventing this queue from being empty minimizes the computing necessary to service a
page fault. Some operating systems periodically look for pages that have not been
recently referenced and then free the page frame and add it to the free page queue, a
process known as "page stealing". Some operating systems support page reclamation; if a
program commits a page fault by referencing a page that was stolen, the operating system
detects this and restores the page frame without having to read the contents back into
RAM.
Pre-cleaning
The operating system may periodically pre-clean dirty pages: write modified pages
back to disk even though they might be further modified. This minimizes the amount of
cleaning needed to obtain new page frames at the moment a new program starts or a new data
file is opened, and improves responsiveness.

b) Write a brief note on distributed operating system

REPEATED

18. a) Explain the different views of an operating system in brief.


Application View of an Operating System:
The OS provides an execution environment for running programs.
 The execution environment provides a program with the processor time and memory
space that it needs to run.
 The execution environment provides interfaces through which a program can use
networks, storage, I/O devices, and other system hardware components.
 Interfaces provide a simplified, abstract view of hardware to application programs.
 The execution environment isolates running programs from one another and prevents
undesirable interactions among them.
Other Views of an Operating System:
The OS manages the hardware resources of a computer system.
 Resources include processors, memory, disks and other storage devices, network
interfaces, I/O devices such as keyboards, mice and monitors, and so on.
 The operating system allocates resources among running programs.
 It controls the sharing of resources among programs.
 The OS itself also uses resources, which it must share with application programs.

Implementation View: The OS is a concurrent, real-time program.


 Concurrency arises naturally in an OS when it supports concurrent applications, and
because it must interact directly with the hardware.
 Hardware interactions also impose timing constraints.

Application View of an Operating System

User View of Operating System:


The Operating System is an interface that hides the details of the underlying machine and presents a virtual machine to the user that is easier to use. The Operating System provides the
following services to the user.
 Execution of a program
 Access to I/O devices
 Controlled access to files
 Error detection (Hardware failures, and software errors)

Hardware View of Operating System:


The Operating System manages the resources efficiently in order to offer its services
to the user programs. The Operating System acts as a resource manager:
 Allocation of resources
 Controlling the execution of a program
 Control the operations of I/O devices
 Protection of resources
 Monitors the data

System View of Operating System:


The Operating System is a program that functions in the same way as other programs. It is
a set of instructions that are executed by the processor. The Operating System acts as a program to
perform the following.

 Hardware upgrades
 New services
 Fixes the issues of resources
 Controls the user and hardware operations

b) Define the term deadlock. Explain various necessary conditions for a


deadlock to occur.
The objective is to develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks, and to present a number of different methods for preventing or avoiding deadlocks in a computer system.

The Deadlock Problem

A set of blocked processes each holding a resource and waiting to acquire a resource held by
another process in the set

Example

System has 2 disk drives

P1 and P2 each hold one disk drive and each needs another one

Example

semaphores A and B, initialized to 1

P0:  wait (A);  wait (B);

P1:  wait (B);  wait (A);


Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously

Mutual exclusion: only one process at a time can use a resource

Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes

No preemption: a resource can be released only voluntarily by the process holding it, after
that process has completed its task

Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.

19. Write a detailed note on secondary storage structure.


Secondary storage devices are those devices whose memory is nonvolatile, meaning,
the stored data will be intact even if the system is turned off. Here are a few things worth
noting about secondary storage.

 Secondary storage is also called auxiliary storage.


 Secondary storage is less expensive when compared to primary memory like RAMs.
 The speed of the secondary storage is also lesser than that of primary storage.
 Hence, the data which is less frequently accessed is kept in the secondary storage.
 A few examples are magnetic disks, magnetic tapes, removable thumb drives etc.

Magnetic Disk Structure

In modern computers, most of the secondary storage is in the form of magnetic disks.
Hence, knowing the structure of a magnetic disk is necessary to understand how the data in
the disk is accessed by the computer.
Structure of a magnetic disk

A magnetic disk contains several platters. Each platter is divided into circular


shaped tracks. The length of the tracks near the centre is less than the length of the tracks
farther from the centre. Each track is further divided into sectors, as shown in the figure.

Tracks of the same distance from centre form a cylinder. A read-write head is used to read
data from a sector of the magnetic disk.

The speed of the disk is measured in two parts:

 Transfer rate: This is the rate at which data moves from the disk to the computer.
 Random access time: It is the sum of the seek time and the rotational latency.

Seek time is the time taken by the disk arm to move to the required track. Rotational
latency is the time taken for the required sector of the track to rotate under the read-write head.

Even though the disk is arranged as sectors and tracks physically, the data is logically
arranged and addressed as an array of blocks of fixed size. The size of a block can
be 512 or 1024 bytes. Each logical block is mapped with a sector on the disk, sequentially. In
this way, each sector in the disk will have a logical address.
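
To illustrate how a logical block address relates to the physical geometry, here is a hedged sketch that converts a logical block number into a (cylinder, head, sector) triple. The geometry constants and the sample block number are assumptions, since the text above does not fix a geometry.

#include <stdio.h>

#define HEADS_PER_CYLINDER 16     /* assumed geometry, for illustration only */
#define SECTORS_PER_TRACK  63

int main(void) {
    long lba      = 123456;                                          /* example logical block */
    long cylinder =  lba / (HEADS_PER_CYLINDER * SECTORS_PER_TRACK);
    long head     = (lba / SECTORS_PER_TRACK) % HEADS_PER_CYLINDER;
    long sector   = (lba % SECTORS_PER_TRACK) + 1;                   /* sectors are numbered from 1 */

    printf("LBA %ld -> cylinder %ld, head %ld, sector %ld\n",
           lba, cylinder, head, sector);
    return 0;
}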

20. Explain in detail the various Algorithms of Disk Scheduling with an


example.
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:

 Multiple I/O requests may arrive by different processes and only one I/O request can
be served at a time by the disk controller. Thus other I/O requests need to wait in the
waiting queue and need to be scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick
look at some of the important terms:

 Seek Time: Seek time is the time taken to move the disk arm to the specified track
where the data is to be read or written. So the disk scheduling algorithm that gives the
minimum average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the
disk to rotate into a position where it can be accessed by the read/write head. So the disk
scheduling algorithm that gives the minimum rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the
rotating speed of the disk and number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
 Disk Response Time: Response time is the average time a request spends waiting to
perform its I/O operation. Average response time is the mean response time of all requests,
and variance of response time is a measure of how individual requests are serviced with
respect to the average response time. So the disk scheduling algorithm that gives the
minimum variance of response time is better.
Disk Scheduling Algorithms
1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the
requests are addressed in the order they arrive in the disk queue.
Advantages:

 Every request gets a fair chance


 No indefinite postponement
Disadvantages:

 Does not try to optimize seek time


 May not provide the best possible service
2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are
executed first. So, the seek time of every request is calculated in advance in the queue
and then they are scheduled according to their calculated seek time. As a result, the
request near the disk arm will get executed first. SSTF is certainly an improvement over
FCFS as it decreases the average response time and increases the throughput of system.
Advantages:

 Average Response Time decreases


 Throughput increases
Disadvantages:

 Overhead to calculate seek time in advance


 Can cause Starvation for a request if it has higher seek time as compared to incoming
requests
 High variance of response time as SSTF favours only some requests

3. SCAN: In SCAN algorithm the disk arm moves into a particular direction and
services the requests coming in its path and after reaching the end of disk, it reverses its
direction and again services the request arriving in its path. So, this algorithm works as
an elevator and hence also known as elevator algorithm. As a result, the requests at the
midrange are serviced more and those arriving behind the disk arm will have to wait.
Advantages:

 High throughput
 Low variance of response time
 Average response time
Disadvantages:

 Long waiting time for requests for locations just visited by disk arm
4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been
scanned, after reversing its direction. So, it may be possible that too many requests are
waiting at the other end or there may be zero or few requests pending at the scanned
area.
These situations are avoided in CSCAN algorithm in which the disk arm instead of reversing
its direction goes to the other end of the disk and starts servicing the requests from there. So,
the disk arm moves in a circular fashion and this algorithm is also similar to SCAN algorithm
and hence it is known as C-SCAN (Circular SCAN).

Advantages:

 Provides more uniform wait time compared to SCAN


5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going to the end of the disk, goes only as far as the last request to be serviced
in front of the head and then reverses its direction from there. Thus it prevents the extra
delay caused by unnecessary traversal to the end of the disk.
6. CLOOK: As LOOK is similar to SCAN, CLOOK is similar to the CSCAN disk
scheduling algorithm. In CLOOK, the disk arm, instead of going to the end of the disk,
goes only as far as the last request to be serviced in front of the head and then jumps to
the last request at the other end. Thus, it also prevents the extra delay caused by
unnecessary traversal to the end of the disk.
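
For a numeric feel of how much the choice of algorithm matters, the sketch below computes the total head movement for FCFS and SSTF on a made-up request queue; the queue contents and the initial head position (53) are assumptions, not values given in the question.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 8

static const int queue[N] = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* assumed requests       */
static const int start    = 53;                                     /* assumed head position  */

static int fcfs(void) {
    int head = start, total = 0;
    for (int i = 0; i < N; i++) {
        total += abs(queue[i] - head);   /* service requests in arrival order */
        head   = queue[i];
    }
    return total;
}

static int sstf(void) {
    int head = start, total = 0;
    bool done[N] = { false };
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)      /* pick the closest pending request */
            if (!done[i] && (best < 0 || abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        total += abs(queue[best] - head);
        head   = queue[best];
        done[best] = true;
    }
    return total;
}

int main(void) {
    printf("FCFS total head movement: %d cylinders\n", fcfs());     /* 640 for this queue */
    printf("SSTF total head movement: %d cylinders\n", sstf());     /* 236 for this queue */
    return 0;
}

On this assumed queue the program reports 640 cylinders of movement for FCFS and 236 for SSTF, which is why SSTF is described above as an improvement in average response time.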

21. (a) Explain in detail the Layered Architecture of an OS.

Operating System Definition

The OS is a resource allocator: it manages all resources and decides between conflicting
requests for efficient and fair resource use. The OS is also a control program: it controls the
execution of programs to prevent errors and improper use of the computer. There is no
universally accepted definition; "everything a vendor ships when you order an operating
system" is a good approximation, but it varies wildly.

Operating System Structure

Multiprogramming is needed for efficiency: a single user cannot keep the CPU and I/O
devices busy at all times. Multiprogramming organizes jobs (code and data) so that the CPU
always has one to execute. A subset of the total jobs in the system is kept in memory, and one
job is selected and run via job scheduling.

When that job has to wait (for I/O, for example), the OS switches to another job.

Timesharing (multitasking) is a logical extension in which the CPU switches jobs so
frequently that users can interact with each job while it is running, creating interactive
computing. Response time should be < 1 second.

Each user has at least one program executing in memory (a process). If several jobs are
ready to run at the same time, CPU scheduling decides which one gets the CPU.

If processes don’t fit in memory, swapping moves them in and out to run

Virtual memory allows execution of processes not completely in memory

Operating System Structure

• Operating system is divided into number of layers.

• Each layer is an abstract object model that contains objects and the routines that are
required to access these objects.
Hardware: The innermost layers represent the hardware part.

Kernel:

• The core of the operating system that manages the hardware resources of a computer.

• It interacts directly with the hardware but never interacts with the user.

Shell: The command interpreter which is an interface between the user and the Kernel
architecture.

(b) Write a brief note on Logical File System.

File:

• A file is a named collection of related information that is recorded in secondary storage


such as magnetic disks, tapes etc. [or]

• A file is a sequence of bits, bytes, lines, or records whose meaning is defined by the user or
programmer.

Terms:

Field: it is a collection of bytes that can be identified by a user.

Record: It is a collection of fields.

File: It is a collection of records.

Directory/Folder: It is a collection of files and their attributes.

File system:

The part of the OS that manages the files is known as file system.

Function of file system:

To manage the disk space on a secondary storage.

The file system consists of two parts:

1) A collection of files: each storing data


2) A directory structure: This organizes and provides information about all the files in the
system.

The common file attributes are:

1. Name – It is an attribute which identifies the name of file which is in the form of human
readable format.

Suppose “vvfgc” is a file name.

2. Identifier – It is a unique identifier of each file within the file system and is in a non-human-readable form, generally represented by a number.

For example “vvfgc” is a file with having unique id “12345”.

3. Type – It indicates type of file with extension.

Ex: vvfgc (id 12345) is a text file, so its full name is “vvfgc.txt”.

4. Location – It is a Pointer to a device and location of a file on device.

5. Size – it indicates current file size (in bytes, words or blocks) and also contains the
maximum allowable file size.

Suppose vvfgc file having 1024 KB size.

6. Protection – Access-control information determines who can do the reading, writing and
executing.

Suppose the file vvfgc has the entry: username user1, permission “r”.

Here user1 can read the vvfgc file but cannot write content into the file.

7. Time, date, and user identification – contains information about creation, last
modification, and last use. These data are used for protection, security, and usage monitoring.

22. Explain any three Page Replacement algorithms with an example.


REPEATED
23. Write a detailed note on CPU scheduling criteria.
To introduce CPU scheduling, which is the basis for multiprogrammed operating
systems

To describe various CPU-scheduling algorithms

To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular


system

Maximum CPU utilization obtained with multiprogramming

CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait

CPU burst distribution

CPU Scheduler

Selects from among the processes in memory that are ready to execute, and allocates the CPU
to one of them

CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready

4. Terminates

Scheduling under 1 and 4 is nonpreemptive

All other scheduling is preemptive

Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this

involves:

switching context
switching to user mode

jumping to the proper location in the user program to restart that program

Dispatch latency – time it takes for the dispatcher to stop one process and start another
running

Scheduling Criteria

CPU utilization – keep the CPU as busy as possible

Throughput – # of processes that complete their execution per time unit

Turnaround time – amount of time to execute a particular process

Waiting time – amount of time a process has been waiting in the ready queue

Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)

The optimization criteria are:

Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time
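
To connect these criteria to concrete numbers, the sketch below computes the average waiting time and average turnaround time for FCFS scheduling on a small assumed workload: three CPU bursts of 24, 3 and 3 time units, all arriving at time 0. The burst values are illustrative only.

#include <stdio.h>

#define N 3

int main(void) {
    int burst[N] = { 24, 3, 3 };         /* assumed bursts, served in FCFS order */
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < N; i++) {
        total_wait += wait;              /* waiting time of process i        */
        total_tat  += wait + burst[i];   /* turnaround = waiting + burst     */
        wait       += burst[i];          /* the next process waits this long */
    }

    printf("Average waiting time    = %.2f\n", (double)total_wait / N);   /* 17.00 */
    printf("Average turnaround time = %.2f\n", (double)total_tat  / N);   /* 27.00 */
    return 0;
}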

24. Explain in detail about views of an Operating System.


Application View of an Operating System:
The OS provides an execution environment for running programs.
 The execution environment provides a program with the processor time and memory
space that it needs to run.
 The execution environment provides interfaces through which a program can use
networks, storage, I/O devices, and other system hardware components.
 Interfaces provide a simplified, abstract view of hardware to application programs.
 The execution environment isolates running programs from one another and prevents
undesirable interactions among them.

Other Views of an Operating System:


The OS manages the hardware resources of a computer system.
 Resources include processors, memory, disks and other storage devices, network
interfaces, I/O devices such as keyboards, mice and monitors, and so on.
 The operating system allocates resources among running programs.
 It controls the sharing of resources among programs.
 The OS itself also uses resources, which it must share with application programs.

Implementation View: The OS is a concurrent, real-time program.


 Concurrency arises naturally in an OS when it supports concurrent applications, and
because it must interact directly with the hardware.
 Hardware interactions also impose timing constraints.

Application View of an Operating System


User View of Operating System:
The Operating System is an interface that hides the details of the underlying machine and
presents a virtual machine to the user that is easier to use. The Operating System provides the
following services to the user.
 Execution of a program
 Access to I/O devices
 Controlled access to files
 Error detection (Hardware failures, and software errors)

Hardware View of Operating System:


The Operating System manages the resources efficiently in order to offer its services
to the user programs. The Operating System acts as a resource manager:
 Allocation of resources
 Controlling the execution of a program
 Control the operations of I/O devices
 Protection of resources
 Monitors the data
System View of Operating System:
The Operating System is a program that functions in the same way as other programs. It is a set
of instructions that are executed by the processor. The Operating System acts as a program to
perform the following.
 Hardware upgrades
 New services
 Fixes the issues of resources
 Controls the user and hardware operations

25. What causes a process/thread to change the state?


Threads

To introduce the notion of a thread — a fundamental unit of CPU utilization that forms the
basis of multithreaded computer systems

To discuss the APIs for the Pthreads, Win32, and Java thread libraries

To examine issues related to multithreaded programming

Single and Multithreaded Processes

Process vs. thread:

 A process is a heavyweight unit; a thread is a lightweight unit.
 A process needs interaction with the operating system; threads do not need to interact with the OS.
 Each process executes its code in its own memory area; all threads of a process share the same memory.
 Processes are dependent in the sense that if one process is blocked, no other process can execute until the first process is unblocked; threads are independent in the sense that if one thread is blocked or waiting, a second thread in the same task can run.

i. From running to ready: the running process is preempted, for example when its time slice expires or a higher-priority process becomes ready, and the scheduler returns it to the ready queue.

ii. From ready to running: the CPU scheduler selects the process from the ready queue and the dispatcher allocates the CPU to it.

iii. From running to blocked: the process requests an event it must wait for, such as an I/O operation, a wait() on a semaphore, or the termination of a child process.

iv. From blocked to ready: the event the process was waiting for occurs (for example, the I/O operation completes), so the process is moved back to the ready queue.
