
MCS (Morning)

Graded Assignment #: 02
Subject: Operating System
Submitted to: Madam Munazza Mah Jabeen
Submitted by: Zulfiqar Ahmed
Reg No: 1441-320006
Q1. Briefly Define the Following:

File Access Method


The information stored in a file must be accessed and read into memory before it can be used. Though there are many ways to access a file, some systems provide only one method, while others provide many, from which you must choose the right one for the application.

Sequential Access Method

In this method, the information in the file is processed in order, one record after another. Compilers and various editors, for example, access files in this manner.

The read-next operation reads the next portion of the file and advances the file pointer, which tracks the I/O location. Similarly, write-next appends at the current end of the file and advances the pointer to the new end of the file.

Sequential access to File


Sequential access also supports a reset operation, which repositions the pointer to the beginning of the file, and, on some systems, a skip operation that moves forward or backward n records. It works on both sequential devices and random-access devices.
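A minimal sketch of these operations in Python, assuming fixed-size records and an illustrative file name; the file object's own position serves as the file pointer:

```python
# Sequential access sketch: read_next consumes one record at a time and the
# file position acts as the pointer. Record size and file name are assumptions.
RECORD_SIZE = 16

def read_next(f):
    """Read the next record and advance the file pointer."""
    return f.read(RECORD_SIZE)

def write_next(f, record):
    """Append a record at the end of the file; the pointer moves to the new end."""
    f.seek(0, 2)          # whence=2 means "relative to end of file"
    f.write(record)

with open("records.dat", "wb") as f:
    for i in range(3):
        write_next(f, bytes([i]) * RECORD_SIZE)

with open("records.dat", "rb") as f:
    first = read_next(f)   # record 0
    second = read_next(f)  # record 1
    f.seek(0)              # reset: back to the beginning of the file
    again = read_next(f)   # record 0 once more
```

The reset is just a seek to offset 0; skipping n records would be a seek of n * RECORD_SIZE bytes.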

Direct Access Method


The other method of file access is direct access, or relative access. For direct access, the file is viewed as a numbered sequence of blocks or records. This method is based on the disk model of a file, since disks allow random access to any file block.
Direct Access Method Using Index

You can read block 34, then read block 45, and then write block 78; there is no restriction on the order of access to the file.
The direct access method is used in database management: a query is satisfied immediately by directly accessing the large amount of information stored in database files.

The database maintains an index that contains the block number for each record. That block can then be accessed directly and the information retrieved.
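A sketch of direct access in Python, assuming fixed-size blocks and an invented file name; a seek jumps straight to any block, with no restriction on the order of access:

```python
# Direct (relative) access sketch: the file is treated as a numbered sequence
# of fixed-size blocks. Block size and file name are illustrative assumptions.
BLOCK_SIZE = 32

def read_block(f, n):
    """Jump straight to block n; no other block has to be read first."""
    f.seek(n * BLOCK_SIZE)
    return f.read(BLOCK_SIZE)

def write_block(f, n, data):
    """Write data into block n, padded out to the full block size."""
    f.seek(n * BLOCK_SIZE)
    f.write(data.ljust(BLOCK_SIZE, b"\x00"))

with open("blocks.dat", "wb") as f:
    f.truncate(100 * BLOCK_SIZE)       # pre-size the file to 100 blocks
    write_block(f, 78, b"block 78")
    write_block(f, 34, b"block 34")    # blocks may be written in any order

with open("blocks.dat", "rb") as f:
    b34 = read_block(f, 34)            # read block 34 ...
    b78 = read_block(f, 78)            # ... then block 78, in any order
```

An index, as described above, would simply map a record key to the block number passed to read_block.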

Search Structures of Directory in Operating System


A directory is a container used to hold folders and files. It organizes files and folders in a hierarchical manner.

There are several logical structures of a directory; these are given below.

Single-level directory –
The single-level directory is the simplest directory structure. All files are contained in the same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names: if two users both call their data file test, the unique-name rule is violated.

Two-level directory –
As we have seen, a single-level directory often leads to confusion over file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a user logs in. The MFD is indexed by user name or account number, and each entry points to the UFD for that user.

Tree-structured directory –
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to extend the directory structure to a tree of arbitrary height.
This generalization allows users to create their own subdirectories and to organize their files accordingly.

A tree structure is the most common directory structure. The tree has a root directory, and every file in the system has a unique path.
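The hierarchy can be sketched with nested dictionaries; the directory names and file contents below are invented for illustration. Note how two users can both own a file named test, because each file is identified by its unique path rather than by its name alone:

```python
# Tree-structured directory sketch: each directory maps names to either a
# subdirectory (dict) or a file's contents (str). Layout is made up.
root = {
    "home": {
        "alice": {"test": "alice's data"},
        "bob":   {"test": "bob's data"},     # same name, different path: OK
    },
    "etc": {"hosts": "127.0.0.1 localhost"},
}

def resolve(tree, path):
    """Walk a unique absolute path like /home/alice/test from the root."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]                    # KeyError if the path is bad
    return node

alice_file = resolve(root, "/home/alice/test")
bob_file = resolve(root, "/home/bob/test")
```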
Disk Quotas
A quota is the amount of space you have to store files, whether you create or access them from Linux or Windows. The amount of storage space for files is based on the available disk space on the ECS file servers and on the type of computer account you hold.
Each computer account has both a hard and a soft quota. The soft quota is the point at which you are warned that you are approaching your hard quota. The hard quota is the absolute maximum amount of disk space the system grants your account. Do not exceed your hard quota: the system will not let you do anything in your account that requires additional disk space, you cannot create new files, and any files that you try to edit may become corrupted. The hard quota takes effect as soon as you exceed it; there is no grace period.

Q2. Explain Banker's Algorithm for deadlock avoidance with the help of an example.

Banker’s Algorithm in Operating System


The banker's algorithm is a resource-allocation and deadlock-avoidance algorithm that tests for safety by simulating allocation up to the predetermined maximum possible amounts of all resources, then performs a safe-state check before deciding whether an allocation should be allowed to continue.
The banker's algorithm is so named because it models how a bank decides whether a loan can be sanctioned. Suppose there are n account holders in a bank. When a person applies for a loan, the bank grants it only if, after subtracting the loan amount from the cash it holds, it could still satisfy the maximum needs of every account holder; this guarantees that if all the account holders come to withdraw their money, the bank can pay them.
In other words, the bank would never allocate its money in such a way that it could no longer satisfy the needs of all its customers. The bank always tries to stay in a safe state.
The following data structures are used to implement the banker's algorithm.
Let 'n' be the number of processes in the system and 'm' be the number of resource types.
Available:
A 1-D array of size 'm' indicating the number of available resources of each type.
Available[j] = k means there are 'k' instances of resource type Rj available.
Max:
A 2-D array of size 'n*m' that defines the maximum demand of each process in the system.
Max[i, j] = k means process Pi may request at most 'k' instances of resource type Rj.
Allocation:
A 2-D array of size 'n*m' that defines the number of resources of each type currently allocated to each process.
Allocation[i, j] = k means process Pi is currently allocated 'k' instances of resource type Rj.
Need:
A 2-D array of size 'n*m' that indicates the remaining resource need of each process.
Need[i, j] = k means process Pi may still need 'k' instances of resource type Rj to complete its execution.
Need[i, j] = Max[i, j] − Allocation[i, j]

Allocation specifies the resources currently allocated to process Pi, and Need specifies the additional resources that Pi may still request to complete its task.
The banker's algorithm consists of a safety algorithm and a resource-request algorithm.
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource type A has 10 instances, B has 5 instances, and C has 7 instances. Suppose at time t0 the following snapshot of the system has been taken:
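The snapshot table itself did not survive in this copy of the document. The sketch below implements the safety algorithm on the classic textbook snapshot, which matches the stated totals (A = 10, B = 5, C = 7 instances); treat the matrix values as an assumed reconstruction rather than the original table.

```python
# Banker's safety algorithm sketch. Allocation and Max are assumed values
# consistent with totals A=10, B=5, C=7; Available = totals - allocated.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

def is_safe(available, allocation, need):
    """Return (True, safe_sequence) if every process can finish in some order."""
    work = available[:]                  # resources the system can still hand out
    finish = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend Pi runs to completion and releases its resources.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                order.append(i)
                break
        else:
            return False, order          # no process could proceed: unsafe
    return True, order

safe, order = is_safe(available, allocation, need)
```

With these values the state is safe; the first process that can run to completion is P1, since its Need (1, 2, 2) fits within Available (3, 3, 2). Several safe sequences exist; the code returns whichever one its first-fit scan discovers.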

Q3. Briefly Explain the Following

Disk Arm Scheduling Algorithms


The time required to read or write a disk block is determined by three factors:
Seek time: the time to move the arm to the proper cylinder.
Rotational delay: the time for the proper sector to rotate under the head.
Actual data transfer time.
Among these, the seek time dominates. Error checking is done by the controllers.
Several algorithms exist to schedule the servicing of disk I/O requests. We illustrate them with a request queue for a disk with cylinders 0-199:
        98, 183, 37, 122, 14, 124, 65, 67         Head pointer: 53
FCFS:
When the current request is finished, the disk driver has to decide which request to handle next. Using FCFS, it services the requests in arrival order: cylinder 98 first, then 183, then 37, and so on, for a total head movement of 640 cylinders.

SSF (Shortest Seek First):


Also called SSTF (shortest seek time first). It selects the request with the minimum seek time from the current head position. SSF scheduling is a form of SJF scheduling and may cause starvation of some requests.

SCAN:
The disk arm starts at one end of the disk and moves toward the other end, servicing requests as it goes, until it reaches the other end of the disk, where the head movement is reversed and servicing continues.
Sometimes called the elevator algorithm. For the queue above, with the head at 53 moving toward cylinder 0, SCAN gives a total head movement of 236 cylinders (53 down to 0, then up to 183).

C-SCAN:
It provides a more uniform wait time than SCAN. The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip. It treats the cylinders as a circular list that wraps around from the last cylinder to the first one.

C-LOOK:
It is a version of C-SCAN. The arm goes only as far as the final request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
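The first two policies above can be compared directly. A minimal simulation, assuming the request queue and starting head position given earlier (cylinders 0-199, head at 53):

```python
# Total head movement for FCFS and SSTF on the request queue from the text.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

def fcfs(head, queue):
    """Service requests strictly in arrival order."""
    total = 0
    for r in queue:
        total += abs(head - r)
        head = r
    return total

def sstf(head, queue):
    """Always service the pending request closest to the current head."""
    pending = queue[:]
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

fcfs_total = fcfs(head, requests)   # 640 cylinders
sstf_total = sstf(head, requests)   # 236 cylinders
```

On this queue FCFS moves the head 640 cylinders and SSTF only 236; SSTF's greedy choice is also exactly what exposes it to starvation under a steady stream of nearby requests.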
Selecting a Disk-Scheduling Algorithm:
SSTF is common and has a natural appeal
SCAN and C-SCAN perform better for systems that place a heavy load on the
disk.
Performance depends on the number and types of requests.
Requests for disk service can be influenced by the file-allocation method.
The disk-scheduling algorithm should be written as a separate module of the
operating system, allowing it to be replaced with a different algorithm if
necessary.
Either SSTF or LOOK is a reasonable choice for the default algorithm
Device Driver
A device driver operates a specific device attached to a computer. It provides a software interface to the device controller so that the operating system and other programs can access the hardware without knowing much detail about that hardware component. The device driver allows data to be sent to and received from the connected hardware device.
Figure 1: Device Driver
When the operating system or a program needs to communicate with a hardware device, it
invokes a routine in the driver. Then the driver issues commands to that device. When the
device sends data back to the driver, the driver invokes routines in the original calling program.

Device Controller
A device controller is a system that handles the incoming and outgoing signals of the CPU. A device is connected to the computer via a plug and socket, and the socket is connected to a device controller. Device controllers use binary and digital codes. An I/O device contains mechanical and electrical parts; the device controller is the electrical part of the I/O device.

Figure 2: Device Controller
The device controller receives data from a connected device and stores it temporarily in a special-purpose register called a local buffer inside the controller. Each device controller has a corresponding device driver. The memory is connected to the memory controller, the monitor to the video controller, and the keyboard to the keyboard controller. The disk drive is connected to the disk controller, and the USB drive to the USB controller. These controllers are connected to the CPU via the common bus.

Multiprocessors Systems.
Most computer systems are single-processor systems, i.e., they have only one processor. However, multiprocessor or parallel systems are increasingly important nowadays. These systems have multiple processors working in parallel that share the computer clock, memory, bus, peripheral devices, etc.

Figure: Multiprocessor architecture

Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows −
Symmetric Multiprocessors
In these systems, each processor runs a similar copy of the operating system, and the processors all communicate with each other. All the processors are in a peer-to-peer relationship, i.e., no master-slave relationship exists between them.

An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor that gives instructions to all the other processors, so an asymmetric multiprocessor system contains a master-slave relationship.

The asymmetric multiprocessor was the only type of multiprocessor available before symmetric multiprocessors were created. Even now, it remains the cheaper option.

Distributed System
A distributed system, in its simplest definition, is a group of computers working together so as to appear as a single computer to the end user.
These machines have a shared state, operate concurrently and can fail independently without
affecting the whole system’s uptime.
I propose we incrementally work through an example of distributing a system so that you can
get a better sense of it all:
A traditional stack
Let's go with a database! Traditional databases are stored on the filesystem of one single machine; whenever you want to fetch or insert information, you talk to that machine directly.
To distribute this database system, we would need to have the database run on multiple machines at the same time. The user must be able to talk to whichever machine they choose and should not be able to tell that they are not talking to a single machine: if they insert a record into node #1, node #3 must be able to return that record.

An architecture that can be considered distributed


Why distribute a system?
Systems are always distributed by necessity. The truth of the matter is that managing distributed systems is a complex topic chock-full of pitfalls and landmines. It is a headache to deploy, maintain, and debug distributed systems, so why go there at all?
What a distributed system enables you to do is scale horizontally. Going back to our previous
example of the single database server, the only way to handle more traffic would be to
upgrade the hardware the database is running on. This is called scaling vertically.
Scaling vertically is all well and good while you can, but after a certain point you will see that
even the best hardware is not sufficient for enough traffic, not to mention impractical to host.
Scaling horizontally simply means adding more computers rather than upgrading the
hardware of a single one.

Figure: Horizontal scaling becomes much cheaper after a certain threshold
It is significantly cheaper than vertical scaling after a certain threshold, but that is not its main case for preference.
Vertical scaling can only bump your performance up to the latest hardware’s capabilities.
These capabilities prove to be insufficient for technological companies with moderate to big
workloads.
The best thing about horizontal scaling is that there is no cap on how much you can scale: whenever performance degrades, you simply add another machine, potentially up to infinity.
Easy scaling is not the only benefit you get from distributed systems. Fault tolerance and low
latency are also equally as important.
Fault tolerance: a cluster of ten machines across two data centers is inherently more fault-tolerant than a single machine. Even if one data center catches fire, your application would still work.
Low latency: the time for a network packet to travel the world is physically bounded by the speed of light. For example, the shortest possible round-trip time for a request in a fiber-optic cable between New York and Sydney is 160 ms. Distributed systems allow you to have a node in both cities, so traffic hits whichever node is closest to it.
For a distributed system to work, though, you need the software running on those machines
to be specifically designed for running on multiple computers at the same time and handling
the problems that come along with it. This turns out to be no easy feat.

The end
