
2.

a) Define a file system. What are various components of a file system? State and explain
various file allocation methods.
A file system is the part of the operating system responsible for file management. It provides a mechanism to store data and to access file contents, including both data and programs.

File Structure

A File Structure should be according to a required format that the operating system can
understand.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that the machine can understand.
 When an operating system defines different file structures, it must also contain the code to support them. UNIX and MS-DOS support a minimal number of file structures.

File Type

File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files, and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −
Ordinary files

 These are the files that contain user information.


 These may contain text, databases, or executable programs.
The user can apply various operations to such files, such as add, modify, delete, or even remove the entire file.
Directory files

 These files contain lists of file names and other information related to those files.
Special files

 These files are also known as device files.


 These files represent physical devices such as disks, terminals, printers, networks, tape drives, etc.
These files are of two types −
 Character special files − data is handled character by character, as in the case of terminals or printers.
 Block special files − data is handled in blocks, as in the case of disks and tapes.
FILE ALLOCATION METHODS

1. Contiguous Allocation: A single contiguous set of blocks is allocated to a file at the time of file creation. Thus, this is a pre-allocation strategy, using variable-size portions. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. This method is best from the point of view of an individual sequential file. Multiple blocks can be read in at a time to improve I/O performance for sequential processing. It is also easy to retrieve a single block. For example, if a file starts at block b and the ith block of the file is wanted, its location on secondary storage is simply b+i-1.
Disadvantage
 External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient length. A compaction algorithm will be necessary to free up additional space on the disk.
 Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.
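The direct-access arithmetic b+i-1 described above can be sketched in a few lines (a minimal illustration; the function name `contiguous_block` is ours, not from any particular operating system):

```python
def contiguous_block(start_block, i):
    """Locate the i-th block (1-indexed) of a contiguously allocated file.

    With contiguous allocation the file allocation table stores only the
    starting block and length, so direct access is pure arithmetic: b + i - 1.
    """
    return start_block + i - 1

# A file starting at block 7: its 4th block lives at disk block 10.
print(contiguous_block(7, 4))  # → 10
```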

2. Linked Allocation (Non-contiguous allocation): Allocation is on an individual block basis.

Each block contains a pointer to the next block in the chain. Again, the file allocation table needs just a single entry for each file, showing the starting block and the length of the file. Although pre-allocation is possible, it is more common simply to allocate blocks as needed. Any free block can be added to the chain, and the blocks need not be contiguous. An increase in file size is always possible if a free disk block is available. There is no external fragmentation because only one block at a time is needed; internal fragmentation can occur, but only in the last disk block of the file.

Disadvantage:
 Internal fragmentation exists in the last disk block of the file.
 There is an overhead of maintaining a pointer in every disk block.
 If the pointer in any disk block is lost, the file is truncated.
 It supports only sequential access of files.
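The pointer-chasing that makes linked allocation sequential-access-only can be sketched with a toy in-memory "disk" (the block numbers and the `read_block` helper are hypothetical, chosen only to illustrate the chain):

```python
# Hypothetical model of linked allocation: each disk block stores
# (data, pointer-to-next-block); -1 marks the end of the chain.
disk = {4: ("A", 9), 9: ("B", 2), 2: ("C", -1)}  # file occupies blocks 4 -> 9 -> 2

def read_block(start, i):
    """Reading the i-th block (0-indexed) requires walking the chain from
    the start; this is why linked allocation only supports sequential access
    efficiently."""
    block = start
    for _ in range(i):
        block = disk[block][1]   # follow the pointer stored in the block
    return disk[block][0]

print(read_block(4, 2))  # walks 4 -> 9 -> 2 and returns "C"
```

Note that losing any one pointer in `disk` would make every later block unreachable, which is exactly the truncation risk listed above.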

3. Indexed Allocation:
Indexed allocation addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file: the index has one entry for each block allocated to the file. Allocation may be on the basis of fixed-size blocks or variable-size portions. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size portions improves locality. This allocation technique supports both sequential and direct access to the file and is thus the most popular form of file allocation.
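The per-file one-level index can be sketched as a simple table lookup (the file name and block numbers below are made up for illustration):

```python
# Hypothetical index: one entry per block allocated to the file,
# as kept in the file allocation table under indexed allocation.
index_table = {"report.txt": [12, 3, 25, 7]}

def block_of(filename, i):
    """Direct access to the i-th block is a single lookup: no chain
    walking, and the blocks need not be contiguous on disk."""
    return index_table[filename][i]

print(block_of("report.txt", 2))  # → 25
```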
b) What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?

There is no problem with this beyond the problems that occur when more than one program accesses a local file system at the same time, or even when a single program accesses the same files from multiple threads.

The potential challenges of any multi-threaded access to storage include:

1. File semantics are not tightly specified. For a counter-example, look at Apache ZooKeeper. With ZooKeeper, all znodes (similar to files or directories) are updated completely or not at all, and all updates to any znode are completely ordered for all updaters and observers. This makes it much easier to write distributed programs, and it also makes ZooKeeper much slower than a high-performance file system. MapR FS, in contrast, only globally orders updates to overlapping byte ranges in a single file, to the same row in a table, or to the same topic in a stream.
2. In addition, your program may be buffering updates for you. This is really good for performance and can be really confusing for correctness. All your updates should have been persisted by the time a flush returns, but you don't know whether other updates got in ahead of you, and you don't know what happened after your flush.
3. Failure modes in distributed systems are complex. If you do a write, do a flush and
the flush doesn’t return (because your program crashes) or returns an error, you really
don’t know if your update occurred.
4. For files, it is allowable for the contents of a large write to become visible out of order, and possibly even in different orders for different readers. The only real guarantee is that after a successful flush all updates will have happened, and that before the first write after a flush no updates will have happened. If you write a megabyte, the last block of the write could become visible first to one reader, while the first block becomes visible first to another reader. This can be very confusing if you discover it only after baking all kinds of assumptions into your program.

Note that pretty much all modern computers are really distributed systems. They have multiple
hardware threads and many have multiple sockets. Memory caches can lead to contradictory
views of what is in memory so don’t be surprised when the situation is more difficult for
persisted data.

Also, to repeat, this can happen even when you have one program running as a single process on
a single computer. Even if you don’t think you have multiple threads going, you probably do
have multiple threads in your I/O system. Your single-threaded program is a distributed system
that only gives you an illusion of being anything else, especially when it comes to failure modes.
3. Write short notes on:

a) Distributed Operating System.


An operating system performs all the basic tasks like managing files, processes, and memory. The operating system thus acts as the manager of all resources, i.e. a resource manager, and becomes an interface between the user and the machine.
Distributed operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world at a great pace. Various autonomous interconnected computers communicate with each other over a shared communication network. The independent systems possess their own memory units and CPUs, and are referred to as loosely coupled systems or distributed systems. Their processors may differ in size and function. The major benefit of working with these types of operating systems is that a user can always access files or software that are not actually present on his own system but on some other system connected within the network, i.e., remote access is enabled within the devices connected to that network.

Advantages of Distributed Operating System:


 Failure of one system will not affect communication among the others, as all systems are independent of each other
 Electronic mail increases the data exchange speed
 Since resources are shared, computation is highly fast and durable
 Load on the host computer is reduced
 These systems are easily scalable, as many systems can easily be added to the network
 Delay in data processing is reduced.
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood

b) Features of LINUX file system.

4.
a) Write a detailed note on device management policies.

Device management is another important function of the operating system. It is responsible for managing all the hardware devices of the computer system, which may include the storage devices as well as all the input and output devices. It is the responsibility of the operating system to keep track of the status of all devices in the computer system. The status of any computing device, internal or external, is either free or busy. If a device requested by a process is free at a specific instant of time, the operating system allocates it to the process.

An operating system manages the devices in a computer system with the help of device controllers and device drivers. Each device in the computer system is equipped with a device controller. For example, the various device controllers in a computer system may be the disk controller, printer controller, tape-drive controller, and memory controller. All these device controllers are connected to each other through a system bus. The device controllers are actually hardware components that contain buffer registers to store data temporarily. The transfer of data between a running process and the various devices of the computer system is accomplished only through these device controllers.

Some important points of device management


1. An operating system communicates with the device controllers with the help of device drivers when allocating devices to the various processes running on the computer system.
2. Device drivers are software programs that are used by an operating system to control the functioning of various devices in a uniform manner.
3. The device drivers may also be regarded as system software programs acting as intermediaries between the processes and the device controllers.
4. The device controller used in the device management operation usually includes three different registers: command, status, and data.
5. The other major responsibility of the device management function is to implement the Application Programming Interface (API).

b) Explain the role of I/O traffic controller in detail.


5)

a) Explain the concept of semaphores in detail.


Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, that are used for process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero, the process performing the wait waits until S becomes positive, and only then decrements it.
 Signal
The signal operation increments the value of its argument S.
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows:

 Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When resources are added the count is automatically incremented, and when resources are removed the count is decremented.

 Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation succeeds only when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
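The wait/signal pair maps onto `acquire()`/`release()` in Python's `threading.Semaphore`. The sketch below uses a binary semaphore (initial value 1) for mutual exclusion; without it, some of the concurrent increments could be lost:

```python
import threading

# A binary semaphore (initial value 1) used for mutual exclusion:
# wait corresponds to acquire(), signal corresponds to release().
mutex = threading.Semaphore(1)
counter = 0

def increment():
    global counter
    for _ in range(10000):
        mutex.acquire()    # wait: enter the critical section
        counter += 1       # critical section, one thread at a time
        mutex.release()    # signal: leave the critical section

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 40000, no updates lost
```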
Advantages of Semaphores
Some of the advantages of semaphores are as follows:

 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is
not wasted unnecessarily to check if a condition is fulfilled to allow a process to access
the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, and the wait and signal operations must be implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use, as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low priority processes access the critical section first and high priority processes access it later.

b) Explain in detail the concept of multiprocessor operating system.

Multiprocessing is the use of two or more central processing units within a single computer system. These systems have multiple processors working in parallel that share the computer clock, memory, bus, and peripheral devices.
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows:
Symmetric Multiprocessors
In these types of systems, each processor runs an identical copy of the operating system, and the processors communicate with each other. All the processors are in a peer-to-peer relationship, i.e. no master-slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor that gives instructions to all the other processors, so an asymmetric multiprocessor system follows a master-slave relationship.
Asymmetric multiprocessing was the only type available before symmetric multiprocessors were created, and even now it is the cheaper option.
Advantages of Multiprocessor Systems:
There are multiple advantages to multiprocessor systems. Some of these are:

More reliable Systems

Enhanced Throughput

More Economic Systems


Disadvantages of Multiprocessor Systems

Increased Expense

Complicated Operating System Required

Large Main Memory Required

6. Explain any two page replacement algorithms with suitable examples.


In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page to replace when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory.
Page Replacement Algorithms :
 First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault.
6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 Page Fault.
Total: 6 page faults.
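The FIFO trace above can be reproduced with a short sketch (the helper name `fifo_faults` is ours, chosen for illustration):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement: the front of the queue
    holds the oldest page, which is evicted when memory is full."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

# The reference string from the example above, with 3 frames:
print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # → 6
```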
 Least Recently Used (LRU) –
In this algorithm, the page that has been least recently used is replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Faults. When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 Page Fault.

0 is already in memory —> 0 Page Faults.

4 takes the place of 1 —> 1 Page Fault.

For the rest of the page reference string —> 0 Page Faults, because the pages are already in memory.
Total: 6 page faults.
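The LRU trace above can likewise be sketched in a few lines (a recency-ordered list stands in for the usage tracking that real hardware and operating systems approximate):

```python
def lru_faults(refs, frames):
    """Count page faults under LRU: evict the page whose last use is
    furthest in the past. memory[0] is the least recently used page."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)          # most recently used goes to the tail
    return faults

# The reference string from the example above, with 4 frames:
print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # → 6
```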

7.

a) Write a note on the various services provided by the operating systems.


An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.


 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution

Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −

 Loads a program into memory.


 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its
own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −

 Program needs to read a file or write a file.


 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

Communication

In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −

 Two processes often require data to be transferred between them


 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles, and file storage must be allocated to each user or job. Following are the major activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.

Protection

Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an
operating system with respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.
b) Explain in detail the CPU scheduling algorithms.
a) Shortest Job First
Shortest job first (SJF), or shortest job next (SJN), is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJN is a non-preemptive algorithm.
 Shortest Job First has the advantage of giving the minimum average waiting time among all scheduling algorithms.
 It is a greedy algorithm.
 It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of aging.
 It is often impractical because the operating system may not know burst times in advance and therefore cannot sort by them. While it is not possible to predict execution time exactly, several methods can be used to estimate it, such as a weighted average of previous execution times. SJF can be used in specialized environments where accurate estimates of running time are available.

Algorithm:
1. Sort all the processes according to arrival time.
2. Select the process with the minimum arrival time and minimum burst time.
3. After a process completes, form a pool of the processes that arrived while the previous process was running, and select from that pool the process with the minimum burst time.

How are the times below computed in SJF?


1. Completion Time: Time at which process completes its execution.
2. Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
3. Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
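The algorithm steps and the three formulas above can be sketched as follows (the four-process workload is a made-up example, not taken from the text):

```python
def sjf_times(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)} where
    TAT = completion - arrival and WT = TAT - burst."""
    remaining = sorted(procs, key=lambda p: p[1])   # sort by arrival time
    time, result = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                # CPU idles until next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        time += burst                                # run it to completion
        result[name] = (time, time - arrival, time - arrival - burst)
        remaining.remove((name, arrival, burst))
    return result

# Hypothetical workload: (name, arrival time, burst time)
print(sjf_times([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Here P1 runs first (it is the only arrival at time 0); at time 7 the pool is {P2, P3, P4}, and P3 is chosen for its 1-unit burst, exactly as step 3 of the algorithm describes.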

b) Multilevel Feedback Queue Scheduling


In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on
entry to the system. Processes do not move between queues. This setup has the advantage of low
scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The
idea is to separate processes with different CPU-burst characteristics. If a process uses too much
CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in
a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents
starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues.


 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when that process needs
service.

The definition of a multilevel feedback queue scheduler makes it the most general CPU-
scheduling algorithm. It can be configured to match a specific system under design.
Unfortunately, it also requires some means of selecting values for all the parameters to define the
best scheduler. Although a multilevel feedback queue is the most general scheme, it is also
the most complex.
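The parameters listed above can be made concrete with a toy two-level sketch (the quanta and the demote-on-full-quantum rule are assumed parameters, chosen only for illustration):

```python
from collections import deque

def mlfq(procs, quanta=(2, 8)):
    """Toy two-level feedback queue. procs: list of (name, burst).
    Level 0 is the highest priority; a process that uses its whole
    quantum without finishing is demoted to level 1."""
    queues = [deque(procs), deque()]
    schedule = []                              # records (name, level, time run)
    while queues[0] or queues[1]:
        level = 0 if queues[0] else 1          # always serve the higher queue first
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        schedule.append((name, level, run))
        if remaining - run > 0:                # used its full quantum: demote
            queues[1].append((name, remaining - run))
    return schedule

print(mlfq([("A", 3), ("B", 1)]))
```

Process A exhausts its level-0 quantum and is demoted, while short process B finishes within one quantum; this is the separation of CPU-burst characteristics described above.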
8. a) Explain with an example the concept of shared pages in detail.

Sharing is another design issue for paging system.

It is very common for many computer users to be running the same program at the same time in
a large multiprogramming computer system.

Now, to avoid having two copies of the same page in memory at the same time, the pages are simply shared.

But a problem arises that not all the pages are shareable.

Generally, read-only pages are shareable, for example, program text; but data pages are not
shareable.

With shared pages, a problem occurs whenever two or more processes share some code.

Let's suppose that the process X and process Y, both are running the editor and sharing its pages.

Now, if the scheduler decides to remove process X from memory, evicting all its pages and filling the empty page frames with another program will cause process Y to generate a large number of page faults just to bring them back again.

Similarly, whenever process X terminates, it is essential to be able to discover that the pages are still in use so that their disk space is not freed by accident.

b) Write a brief note on Windows-based operating systems.

Windows OS is a computer operating system (OS) developed by Microsoft Corporation to run personal computers (PCs). Featuring the first graphical user interface (GUI) for IBM-compatible PCs, the Windows OS soon dominated the PC market. Approximately 90 percent of PCs run some version of Windows.
The first version of Windows, released in 1985, was simply a GUI offered as an extension of
Microsoft’s existing disk operating system, or MS-DOS. Based in part on licensed concepts
that Apple Inc. had used for its Macintosh System Software, Windows for the first time allowed
DOS users to visually navigate a virtual desktop, opening graphical “windows” displaying the
contents of electronic folders and files with the click of a mouse button, rather than typing
commands and directory paths at a text prompt.
Subsequent versions introduced greater functionality, including native Windows File Manager,
Program Manager, and Print Manager programs, and a more dynamic interface. Microsoft also
developed specialized Windows packages, including the networkable Windows for Workgroups
and the high-powered Windows NT, aimed at businesses. The 1995 consumer release Windows
95 fully integrated Windows and DOS and offered built-in Internet support, including the World
Wide Web browser Internet Explorer.

With the 2001 release of Windows XP, Microsoft united its various Windows packages under a
single banner, offering multiple editions for consumers, businesses, multimedia developers, and
others. Windows XP abandoned the long-used Windows 95 kernel for a more powerful code
base and offered a more practical interface and improved application and memory management.
The highly successful XP standard was succeeded in late 2006 by Windows Vista, which
experienced a troubled rollout and met with considerable marketplace resistance, quickly
acquiring a reputation for being a large, slow, and resource-consuming system. Responding to
Vista’s disappointing adoption rate, Microsoft in 2009 released Windows 7, an OS whose
interface was similar to that of Vista but was met with enthusiasm for its noticeable speed
improvement and its modest system requirements.

Windows 8 in 2012 offered a start screen with applications appearing as tiles on a grid and the
ability to synchronize settings so users could log on to another Windows 8 machine and use their
preferred settings. In 2015 Microsoft released Windows 10, which came with Cortana, a digital
personal assistant like Apple’s Siri, and the Web browser Microsoft Edge, which replaced
Internet Explorer. Microsoft also announced that Windows 10 would be the last version of
Windows, meaning that users would receive regular updates to the OS but that no more large-
scale revisions would be done.

9. Explain the security attacks on operating systems. Write the steps to protect the system from various attacks.

Security refers to providing a protection system for computer system resources such as the CPU, memory, disk, and software programs, and most importantly the data and information stored in the computer system. If a computer program is run by an unauthorized user, then he or she may cause severe damage to the computer or the data stored in it. So a computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms, etc.

Program Threats

An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. A common example of a program threat is a program installed on a computer that stores user credentials and sends them over the network to some hacker. Following is a list of some well-known program threats.
 Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.
 Trap Door − If a program that is designed to work as required has a security hole in its code and performs illegal actions without the user's knowledge, it is said to have a trap door.
 Logic Bomb − A logic bomb is a program that misbehaves only when certain conditions are met; otherwise it works as a genuine program. This makes it harder to detect.
 Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and can modify or delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As the user accesses the program, the virus starts embedding itself in other files and programs and can make the system unusable for the user.

System Threats

System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network, called a program attack. System threats create an environment in which operating system resources and user files are misused. Following is a list of some well-known system threats.
 Worm − A worm is a process that can choke system performance by using system resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and prevents other processes from getting the resources they require. Worm processes can even shut down an entire network.
 Port Scanning − Port scanning is a mechanism by which a hacker can detect system vulnerabilities in order to attack the system.
 Denial of Service − Denial of service attacks normally prevent the user from making legitimate use of the system. For example, a user may not be able to use the Internet if a denial of service attack targets the browser's connectivity.

Security Measures Taken –


To protect the system, Security measures can be taken at the following levels:
 Physical:
The sites containing computer systems must be physically secured against armed and
malicious intruders. The workstations must be carefully protected.
 Human:
Only appropriate users must have the authorization to access the system.
Phishing (collecting confidential information) and dumpster diving (collecting basic
information so as to gain unauthorized access) must be avoided.
 Operating system:
The system must protect itself from accidental or purposeful security breaches.
 Networking System:
Almost all of the information is shared between different systems via a network.
Intercepting these data could be just as harmful as breaking into a computer. Henceforth,
Network should be properly secured against such attacks.
Usually, anti-malware programs are used to periodically detect and remove such viruses and
threats. Additionally, to protect the system from network threats, a firewall can also be used.
11)
a) What are the advantages and disadvantages of multiprocessor systems?
Advantages of multiprocessor system
There are multiple advantages to multiprocessor systems. Some of these are:
More reliable Systems
In a multiprocessor system, even if one processor fails, the system will not halt. This ability to
continue working despite hardware failure is known as graceful degradation. For example, if
there are 5 processors in a multiprocessor system and one of them fails, the remaining 4
processors still work. The system only becomes slower; it does not grind to a halt.
Enhanced Throughput
If multiple processors are working in tandem, then the throughput of the system increases, i.e.
the number of processes executed per unit of time increases. If there are N processors, the
throughput increases by a factor just under N.
More Economic Systems
Multiprocessor systems are cheaper than single processor systems in the long run because they
share the data storage, peripheral devices, power supplies etc. If there are multiple processes that
share data, it is better to schedule them on multiprocessor systems with shared data than have
different computer systems with multiple copies of the data.

Disadvantages of multiprocessor system

There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer
systems, still they are quite expensive. It is much cheaper to buy a simple single processor
system than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory etc. So
it is much more complicated to schedule processes and allocate resources to processes than in
single processor systems. Hence, a more complex and complicated operating system is required
in multiprocessor systems.
Large Main Memory Required
All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single processor systems.
b)What are the basic components of linux?
Linux is a popular version of the UNIX operating system. It is open source, as its source
code is freely available, and it is free to use. Linux was designed with UNIX compatibility in
mind, and its functionality list is quite similar to that of UNIX.

Components of Linux System

Linux Operating System has primarily three components


 Kernel − The kernel is the core part of Linux. It is responsible for all major activities of
the operating system. It consists of various modules and interacts directly with the
underlying hardware. The kernel provides the abstraction required to hide low-level
hardware details from system and application programs.
 System Library − System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries
implement most of the functionality of the operating system and do not require kernel
module code access rights.
 System Utility − System utility programs are responsible for performing specialized,
individual-level tasks.

14)Explain in brief about deadlock prevention.

If we picture deadlock as a table standing on its four legs, then the four legs correspond to the
four conditions which, when they occur simultaneously, cause the deadlock.

If we break one of the legs of the table, the table will definitely fall. The same applies to
deadlock: if we can violate one of the four necessary conditions and prevent them from
occurring together, then we can prevent the deadlock.
Let's see how we can prevent each of the conditions.

1. Mutual Exclusion

Mutual exclusion, from the resource point of view, means that a resource can never be used by
more than one process simultaneously, which is fair enough but is the main reason behind
deadlock. If a resource could be used by more than one process at the same time, no process
would ever have to wait for a resource.

However, if we can stop resources from behaving in a mutually exclusive manner, then
deadlock can be prevented.

Spooling

For a device like a printer, spooling can work. There is a memory buffer associated with the
printer which stores jobs from each process. The printer then collects the jobs and prints each
one of them in FCFS order. With this mechanism, a process doesn't have to wait for the
printer and can continue whatever it was doing; it collects the output later, once the output
has been produced.

Although spooling can be an effective approach to violating mutual exclusion, it suffers from
two kinds of problems.

1. This cannot be applied to every resource.


2. After some point in time, a race condition may arise between the processes competing for
space in the spool.
We cannot force a resource to be used by more than one process at the same time, since that
would not be fair and could cause serious performance problems. Therefore, in practice we
cannot violate mutual exclusion.

2. Hold and Wait

The hold and wait condition arises when a process holds one resource while waiting for
another resource to complete its task. Deadlock occurs because there can be more than one
process holding one resource and waiting for another in cyclic order.

To prevent this, we need a mechanism by which a process either doesn't hold any resource or
doesn't wait. That means a process must be assigned all the necessary resources before its
execution starts, and must not wait for any resource once execution has started.

!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't hold or you
don't wait)

This can be implemented if a process declares all of its resources initially. However, although
this sounds simple, it can't be done in a real computer system, because a process can't
determine its necessary resources in advance.

A process is a set of instructions executed by the CPU. Each instruction may demand multiple
resources at multiple times, and the OS cannot fix this need in advance.

The problem with the approach is:

1. Practically not possible.


2. The possibility of starvation increases, because some process may hold a resource for a
very long time.

3. No Preemption

Deadlock arises partly because a resource cannot be taken away from a process once it has
been allocated. However, if we take the resource away from the process that is causing the
deadlock, we can prevent the deadlock.

This is not a good approach, since if we take away a resource that is being used by a
process, all the work it has done so far can become inconsistent.

Consider a printer being used by some process. If we take the printer away from that process
and assign it to some other process, all the data that has been printed can become
inconsistent and ineffective. Moreover, the process can't resume printing from where it left
off, which causes performance inefficiency.

4. Circular Wait
To violate circular wait, we can assign a priority number to each resource. A process can then
request resources only in increasing order of priority. This ordering ensures that no cycle of
waiting processes can be formed.

Among all the methods, violating Circular wait is the only approach that can be implemented
practically.
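As a sketch of this resource-ordering idea, the following Python snippet acquires locks strictly in ascending priority, so no circular wait can form. The two locks and their priority numbers are made-up examples, not part of any standard API:

```python
import threading

# Two hypothetical resources, each assigned a fixed priority number.
lock_a = threading.Lock()  # priority 1
lock_b = threading.Lock()  # priority 2
ORDER = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    # Acquire strictly in ascending priority: no process can hold a
    # higher-priority lock while waiting for a lower-priority one,
    # so a cycle of waiting processes can never form.
    ordered = sorted(locks, key=lambda l: ORDER[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

# Even if a caller names the locks in the "wrong" order, lock_a is
# always taken before lock_b.
held = acquire_in_order(lock_b, lock_a)
print([ORDER[id(l)] for l in held])  # acquisition order by priority
for lock in held:
    lock.release()
```

If every process in the system follows the same `acquire_in_order` discipline, the circular wait condition can never hold, which is exactly why this is the one condition that can be violated practically.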

17)
a)Explain the paging scheme for memory management in detail.

Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be
non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set
of all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available in the memory unit
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses

Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words
(1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical
Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words
(1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical
Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU)
which is a hardware device and this mapping is known as paging technique.
 The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size

Let us consider an example:


 Physical Address = 12 bits, then Physical Address Space = 2^12 = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 2^13 = 8 K words
 Page size = frame size = 1 K words (assumption)

Address generated by CPU is divided into


 Page number(p): Number of bits required to represent the pages in Logical Address
Space or Page number
 Page offset(d): Number of bits required to represent particular word in a page or page
size of Logical Address Space or word number of a page or page offset.
Physical Address is divided into
 Frame number(f): Number of bits required to represent the frame of Physical Address
Space or Frame number.
 Frame offset(d): Number of bits required to represent particular word in a frame or
frame size of Physical Address Space or word number of a frame or frame offset.
 
The hardware implementation of the page table can be done using dedicated registers. But the
use of registers for the page table is satisfactory only if the page table is small. If the page table
contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special,
small, fast look-up hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in the TLB consists of two parts: a tag and a value.
 When this memory is used, an item is compared with all tags simultaneously. If the
item is found, the corresponding value is returned.
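Continuing the 1 K-word page-size example above, the translation can be sketched in a few lines of Python. The page-table contents and the dictionary-based TLB are illustrative assumptions, not real hardware structures:

```python
PAGE_SIZE = 1024  # 1 K words, as in the example above

# Hypothetical page table: page number -> frame number.
page_table = {0: 2, 1: 0, 2: 3, 3: 1}
tlb = {}  # small cache of recent page -> frame translations

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)  # page number, page offset
    if p in tlb:            # TLB hit: no page-table lookup needed
        f = tlb[p]
    else:                   # TLB miss: walk the page table, then cache it
        f = page_table[p]
        tlb[p] = f
    return f * PAGE_SIZE + d  # frame number * frame size + frame offset

# Logical address 1034 lies in page 1 at offset 10; page 1 maps to
# frame 0, so the physical address is 0 * 1024 + 10 = 10.
print(translate(1034))  # 10
```

A second lookup of any address in page 1 would now hit the TLB and skip the page-table walk, which is the whole point of the cache.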

18)
a)Explain the different views of an operating system in brief.
User View of Operating System:
The operating system is an interface that hides the details which must be performed and
presents a virtual machine to the user, making the system easier to use. The operating system
provides the following services to the user:
 Execution of a program
 Access to I/O devices
 Controlled access to files
 Error detection (Hardware failures, and software errors)
Hardware View of Operating System:
The operating system manages resources efficiently in order to offer services to user
programs. The operating system acts as a resource manager:
 Allocation of resources
 Controlling the execution of a program
 Controlling the operations of I/O devices
 Protection of resources
 Monitoring the data
System View of Operating System:
The operating system is a program that functions in the same way as other programs. It is a set
of instructions that are executed by the processor. The operating system acts as a program to
perform the following:
 Hardware upgrades
 New services
 Fixes the issues of resources
 Controls the user and hardware operations

b)Define deadlock. Explain various necessary conditions for a deadlock to occur.

Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.

Necessary conditions of deadlock:

There are four different conditions that result in Deadlock. These four conditions are also
known as Coffman conditions and these conditions are not mutually exclusive. Let's look at
them one by one.

 Mutual Exclusion: A resource can be held by only one process at a time. In other
words, if a process P1 is using some resource R at a particular instant of time, then
some other process P2 can't hold or use the same resource R at that particular instant of
time. The process P2 can make a request for that resource R but it can't use that
resource simultaneously with process P1.

 Hold and Wait: A process can hold a number of resources at a time and at the same
time, it can request for other resources that are being held by some other process. For
example, a process P1 can hold two resources R1 and R2 and at the same time, it can
request some resource R3 that is currently held by process P2.
 No preemption: A resource can't be forcefully preempted from a process by another
process. For example, if a process P1 is using some resource R, then some other
process P2 can't forcefully take that resource. The process P2 can request the resource R
and wait for that resource to be freed by the process P1.
 Circular Wait: Circular wait is a condition when the first process is waiting for the
resource held by the second process, the second process is waiting for the resource held
by the third process, and so on. At last, the last process is waiting for the resource held
by the first process. So, every process is waiting for each other to release the resource
and no one is releasing their own resource. Everyone is waiting here for getting the
resource. This is called a circular wait.

Deadlock will happen if all the above four conditions happen simultaneously.
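The circular wait condition in particular can be checked mechanically on a wait-for graph. The sketch below is illustrative: the process names and the simplification that each process waits on at most one other process are assumptions for the example:

```python
def has_circular_wait(wait_for):
    # wait_for maps each blocked process to the process it is waiting on.
    # Follow the chain from every process; revisiting a node means a cycle.
    for start in wait_for:
        seen = set()
        p = start
        while p in wait_for:
            if p in seen:
                return True
            seen.add(p)
            p = wait_for[p]
    return False

# P1 -> P2 -> P3 -> P1 is a cycle: circular wait holds.
print(has_circular_wait({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
# A simple chain with no cycle cannot satisfy circular wait.
print(has_circular_wait({"P1": "P2", "P2": "P3"}))              # False
```

Real deadlock-detection algorithms generalize this idea to processes that wait on multiple resources at once.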

19) Write a detailed note on secondary storage structure.

Secondary storage structure concerns the computer's storage peripherals, which are more
reliable and dependable than older versions. Storage devices are becoming more powerful with
the passage of time and with the development of newer technologies, and the functions and
facilities of these storage devices keep increasing as better and more powerful machines
evolve.

Disk Scheduling
Disk scheduling is a scheduling process that mainly focuses on the servicing of the
disk input and output requests in a proper order.

Disk Structure
Disk structures such as hard disks, compact disks, removable compact disks, and DVDs are
used to store data and information for long periods. As these various types of disks are widely
used these days, it is important for users to know about their structure.

Swap-Space Management
Swap-space management is a method by which a single page of memory can be copied
to a preconfigured space on a hard disk. This process is used to free up the memory page that
was previously occupied. For example, Linux breaks physical RAM into chunks of memory
called pages.

Disk Management
Disk management is the management of disk space by creating the different disk drives
installed and the partitions associated with those drives. There can be several layers of caching
between the computer's main memory and the disk platters.

20) Explain in detail the various algorithms of disk scheduling with an example.

Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive by different processes and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the waiting
queue and need to be scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
 Seek Time: Seek time is the time taken to move the disk arm to the specified track where
the data is to be read or written. A disk scheduling algorithm that gives a lower
average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk
to rotate into a position where the read/write heads can access it. A disk scheduling
algorithm that gives lower rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and number of bytes to be transferred.
 Disk Response Time: Response time is the average time a request spends waiting
to perform its I/O operation. Average response time is the response time over all
requests, and variance of response time measures how individual requests are serviced
relative to the average response time. A disk scheduling algorithm that gives a lower
variance of response time is better.

Disk Scheduling Algorithms


1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the
requests are addressed in the order they arrive in the disk queue.

Advantages:
 Every request gets a fair chance
 No indefinite postponement

Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service
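As an illustration, total head movement under FCFS can be computed with a short sketch. The request queue and starting head position are made-up example values:

```python
def fcfs_seek(requests, head):
    # Service requests in arrival order, summing the arm movement.
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

# Head at track 50, requests listed in arrival order.
print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 642
```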

2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are
executed first. So, the seek time of every request is calculated in advance in the queue and
then they are scheduled according to their calculated seek time. As a result, the request near
the disk arm will get executed first. SSTF is certainly an improvement over FCFS as it
decreases the average response time and increases the throughput of system.

Advantages:
 Average Response Time decreases
 Throughput increases

Disadvantages:
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has higher seek time as compared to incoming
requests
 High variance of response time as SSTF favours only some requests
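For comparison with FCFS, here is a sketch of SSTF on the same made-up request queue; it always picks the pending request nearest the current head position:

```python
def sstf_seek(requests, head):
    # Repeatedly service the pending request with the shortest seek.
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

# Same queue and head position as the FCFS example, far less movement.
print(sstf_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```

The much smaller total (208 vs. 642 tracks for FCFS) shows the throughput gain; the `min` call at every step is the overhead mentioned above.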

3. SCAN: In the SCAN algorithm, the disk arm moves in a particular direction and services
the requests coming in its path; after reaching the end of the disk, it reverses direction
and services the requests arriving in its path on the way back. The algorithm works like an
elevator and hence is also known as the elevator algorithm. As a result, requests in the
midrange are serviced more, while those arriving behind the disk arm have to wait.

Advantages:
 High throughput
 Low variance of response time
 Low average response time

Disadvantages:
 Long waiting time for requests for locations just visited by disk arm
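A sketch of SCAN's total head movement when the arm sweeps toward the high-numbered end first; the disk size, queue, and head position are assumed example values:

```python
def scan_seek(requests, head, disk_size):
    # Sweep right to the last track, then reverse to the leftmost request.
    total = (disk_size - 1) - head
    left = [t for t in requests if t < head]
    if left:
        total += (disk_size - 1) - min(left)
    return total

# 200-track disk (tracks 0..199), head at track 50.
print(scan_seek([82, 170, 43, 140, 24, 16, 190], 50, 200))  # 332
```

Note the arm travels all the way to track 199 even though the highest request is 190; the LOOK variant described below removes exactly that wasted traversal.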

4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned,
after reversing its direction. So, it may be possible that too many requests are waiting at the
other end or there may be zero or few requests pending at the scanned area.

These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, goes to the other end of the disk and starts servicing requests from
there. The disk arm thus moves in a circular fashion; since the algorithm is otherwise similar
to SCAN, it is known as C-SCAN (Circular SCAN).

Advantages:
 Provides more uniform wait time compared to SCAN

5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going to the end of the disk, goes only up to the last request to be serviced in
front of the head and then reverses direction from there. This prevents the extra delay
caused by unnecessary traversal to the end of the disk.

6. CLOOK: Just as LOOK is similar to the SCAN algorithm, CLOOK is similar to the
C-SCAN disk scheduling algorithm. In CLOOK, the disk arm, instead of going to the end
of the disk, goes only up to the last request to be serviced in front of the head and then
jumps to the last request at the other end. This also prevents the extra delay caused by
unnecessary traversal to the end of the disk.
Each algorithm is unique in its own way. Overall Performance depends on the number and type
of requests.

21
a)Explain in detail the layered architecture of an OS.
There are six layers in the layered operating system, arranged from the hardware at the bottom
to user programs at the top. Details about the six layers are:
Hardware
This layer interacts with the system hardware and coordinates with all the peripheral devices
used such as printer, mouse, keyboard, scanner etc. The hardware layer is the lowest layer in the
layered operating system architecture.
CPU Scheduling
This layer deals with scheduling the processes for the CPU. There are many scheduling queues
that are used to handle processes. When the processes enter the system, they are put into the job
queue. The processes that are ready to execute in the main memory are kept in the ready queue.
Memory Management
Memory management deals with memory and the moving of processes from disk to primary
memory for execution and back again. This is handled by the third layer of the operating system.
Process Management
This layer is responsible for managing the processes i.e assigning the processor to a process at a
time. This is known as process scheduling. The different algorithms used for process scheduling
are FCFS (first come first served), SJF (shortest job first), priority scheduling, round-robin
scheduling etc.
I/O Buffer
I/O devices are very important in the computer systems. They provide users with the means of
interacting with the system. This layer handles the buffers for the I/O devices and makes sure
that they work correctly.
User Programs
This is the highest layer in the layered operating system. This layer deals with the many user
programs and applications that run in an operating system such as word processors, games,
browsers etc.

b)Write a brief note on Logical File System.


The logical file system is the level of the file system at which users can request file operations
through system calls. This level of the file system provides the kernel with a consistent view of
what might be multiple physical file systems and multiple file system implementations. As far as
the logical file system is concerned, file system types, whether local, remote, or strictly logical,
and regardless of implementation, are indistinguishable.
A consistent view of file system implementations is made possible by the virtual file system
abstraction. This abstraction specifies the set of file system operations that an implementation
must include in order to carry out logical file system requests. Physical file systems can differ in
how they implement these predefined operations, but they must present a uniform interface to the
logical file system. A list of file system operators can be found in Requirements for a File
System Implementation. For more information on the virtual file system, see Virtual File
System Overview.
Each set of predefined operations implemented constitutes a virtual file system. As such, a single
physical file system can appear to the logical file system as one or more separate virtual file
systems.
Virtual file system operations are available at the logical file system level through the virtual file
system switch. This array contains one entry for each virtual file system, with each entry holding
entry point addresses for separate operations. Each file system type has a set of entries in the
virtual file system switch.
The logical file system and the virtual file system switch support other operating system file-
system access semantics. This does not mean that only other operating system file systems can
be supported. It does mean, however, that a file system implementation must be designed to fit
into the logical file system model. Operations or information requested from a file system
implementation need be performed only to the extent possible.
Logical file system can also refer to the tree of known path names in force while the system is
running. A virtual file system that is mounted onto the logical file system tree itself becomes part
of that tree. In fact, a single virtual file system can be mounted onto the logical file system tree at
multiple points, so that nodes in the virtual subtree have multiple names. Multiple mount points
allow maximum flexibility when constructing the logical file system view.

23) Write a detailed note on CPU scheduling criteria.

There are many different criteria to consider when judging the "best" scheduling algorithm;
they are:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept working
most of the time (ideally 100% of the time). In a real system, CPU usage should range from
40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, or in other words the total amount of
work done in a unit of time. This may range from 10 per second to 1 per hour depending on the
specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of
submission of the process to the time of completion of the process (wall-clock time).
Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the
CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution(final
response).
In general, CPU utilization and throughput are maximized while the other factors are minimized
for proper optimization.
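These criteria can be computed for a concrete schedule. The sketch below derives turnaround and waiting times for a simple FCFS schedule; the arrival and burst times are made-up example values:

```python
def fcfs_metrics(arrival, burst):
    # Processes are assumed to be listed in order of arrival.
    time, turnaround, waiting = 0, [], []
    for a, b in zip(arrival, burst):
        time = max(time, a) + b       # completion time of this process
        turnaround.append(time - a)   # completion - submission
        waiting.append(time - a - b)  # turnaround - burst
    return turnaround, waiting

tat, wt = fcfs_metrics([0, 1, 2], [4, 3, 1])
print(tat, wt)  # [4, 6, 6] [0, 3, 5]
print(sum(wt) / len(wt))  # average waiting time across the ready queue
```

Comparing such numbers across algorithms is exactly how the "best" scheduler is chosen for a given workload.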
