
Material is Prepared By: Mr. Ajit Kumar Mahapatra, Lect. in Computer Science (Contact Number: 9853277844)

Unit-1
Evolution of the operating system, resident monitor, batch processing,
multiprogramming, multiprocessing, time sharing, real time system, I/O interrupt,
DMA, dual mode operation, operating system services.

EVOLUTION OF THE OPERATING SYSTEM

An operating system is an integrated set of programs that is used to manage the various
resources and overall operations of a computer system. The operating system goes by many
different names depending on the manufacturer of the computer. Other terms used to describe
operating systems are monitor, executive, supervisor, and control program. An operating
system manages and co-ordinates the functions performed by the computer hardware,
including the CPU, input devices, secondary storage devices, and communication and network
equipment.
When a number of computers are connected through a network and more than one computer is
trying to use a common printer or other common resource, the operating system follows some
order and manages the resources in an efficient manner. Generally, resources are shared in
two ways: "in time" and "in space". When a resource is shared in time, first one of the tasks
gets the resource for some time, then another, and so on. For example, the CPU is a
time-shared resource. In a time sharing system, the operating system fixes the time slot of
the CPU: first one of the processes gets the CPU; when the time slot expires, the CPU
switches to the next process in the ready queue. The other kind of sharing is space sharing,
in which the users share the space of the resource. For example, the main memory holds
several processes at a time. So the main difference between an "in time" shared resource and
an "in space" shared resource is that an "in time" resource is not divided into units,
whereas an "in space" resource is divided into units. The structure of an operating system
consists of four layers: hardware, software, system programs and application programs. The
hardware part consists of the CPU, main memory, input/output devices, secondary storage,
etc. The software includes process management routines, memory management routines,
input/output control routines, and file management routines. The system programs layer
consists of the compiler, assembler, linker, etc. The application programs depend on the user.

EVOLUTION OF THE OPERATING SYSTEM:


The first operating system was developed in the year 1950 for the IBM 701 computer. This
operating system was elementary in nature and was not as powerful as the operating systems
of today's computers. Since then, a lot of research has been carried out in this direction,
with the result that today we have very powerful operating systems that can execute several
jobs at a time on the same machine.

In the early days of computers, job-to-job transition was not automatic. For each job to be
executed by the computer, the operator had to clear the main memory to remove any data
remaining from the previous job, load the program and data of the current job from the input
device, set the appropriate switches, and finally run the job to obtain the result from the
output device. After the completion of one job, the same process had to be repeated for the
next job by the computer operator. Because of this manual transition from one job to another,
a lot of computer time was wasted, since the computer remained idle while the operator loaded
or unloaded jobs. In order to reduce this idle time, a method of automatic job-to-job
transition was devised. With this facility, when one job finishes, system control is
automatically transferred back to the operating system, which automatically performs the
housekeeping jobs needed to load and run the next job.
FUNCTION OF OPERATING SYSTEM:
1. Processor management, that is, assignment of the processor to the different tasks being
performed by the computer system.
2. Memory management, that is, allocation of main memory and other storage areas to the
system programs as well as the user programs and data.
3. Input/output management, that is, allocation of the different input and output devices
while one or more programs are being executed.
4. Interpretation of commands and instructions, facilitating easy communication between the
computer system and the computer operator.
5. Transfer input from the keyboard (or any other input device) to the memory.
6. Display messages, be they input or output, on the screen.
7. Store data or programs on external storage devices.
8. Output data to the printer (or any other output device) from the memory.
9. Control the printer and other peripherals.
10. Load programs and packages from storage devices and media into the main memory.
11. Copy data or programs from one device to another.
12. Communicate, control, and provide error messages giving the status of peripherals and
processes.
13. Execute user programs and commands.
14. Protect working storage from overwriting by another program.
15. Store details of data and their locations for all media and drives.
16. Provide security and protection for the user's data, programs and files.
MEASURING SYSTEM PERFORMANCE
The efficiency of an operating system and the overall performance of a computer system are
usually measured in terms of the following:
Throughput: Throughput is the amount of work that the system is able to do per unit time. It
is measured as the number of processes that are completed by the system per unit time.
For example: if n processes are completed in an interval of t seconds, the throughput is taken
as n/t processes per second during that interval. Throughput is normally measured in
processes per hour. The performance of the CPU is measured in terms of throughput.
Turnaround time: Turnaround time is the interval from the time of submission of a job to
the system for processing to the time of completion of the job.

Turnaround time = Time of completion of the job - Time of submission of the job.
Response time: Another measure used in the case of interactive systems is response time, which
is the interval from the time of submission of a job to the system for processing to the
time the first response for the job is produced by the system.
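To make these three measures concrete, here is a small, self-contained C sketch that computes
throughput, turnaround time and response time from recorded timestamps. The job records and
the interval are invented for illustration:

#include <stdio.h>

/* Hypothetical record of one job's timestamps (in seconds). */
struct job {
    double submitted;       /* time the job was submitted       */
    double first_response;  /* time the first response appeared */
    double completed;       /* time the job finished            */
};

int main(void) {
    struct job jobs[] = {
        { 0.0, 1.5, 4.0 },
        { 1.0, 2.0, 6.0 },
        { 2.0, 5.0, 9.0 },
    };
    int n = sizeof jobs / sizeof jobs[0];
    double interval = 9.0;  /* all n jobs completed within this interval */

    printf("throughput = %.2f processes/second\n", n / interval);
    for (int i = 0; i < n; i++) {
        printf("job %d: turnaround = %.1f s, response = %.1f s\n", i,
               jobs[i].completed - jobs[i].submitted,       /* completion - submission     */
               jobs[i].first_response - jobs[i].submitted); /* first response - submission */
    }
    return 0;
}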
RESIDENT MONITOR
A small program, called a resident monitor, was created to transfer control automatically from
one job to another. The resident monitor is always in memory, i.e. resident. When the
computer is turned on, the resident monitor is invoked, and it transfers control to a
program. When the program terminates, it returns control to the resident monitor, which then
goes on to the next program. Thus the resident monitor automatically sequences from one
program to another and from one job to another. How would the resident monitor know which
program to execute? In addition to the program or data for a job, the programmer included
control cards, which contained directives to the resident monitor indicating the program to
run. Control cards provide this information directly to the monitor. Example: a normal user
program may require one of three programs to run:
1) The FORTRAN compiler (FTN)
2) The assembler (ASM)
3) The user's program (RUN)
We could use a separate control card for each of the three:
$FTN: - Execute the FORTRAN compiler.
$ASM: - Execute the assembler.
$RUN: - Execute the user program.
These cards tell the resident monitor which program to run. We can use two additional control
cards to define the boundaries of each job:
$JOB: - First card of the job.
$END: - Final card of the job.
These two cards might be useful in accounting for the machine resources used by the
programmer. Parameters can be used to define the job name, the account number to be charged,
and so on. Other control cards can be defined for other functions, such as asking the operator
to load or unload a tape. One problem with control cards is how to distinguish them from data
or program cards. The solution is to identify them by a special character or pattern on the
card. Several systems used the dollar-sign character ($) in the first column to identify a
control card; others used a different code. A resident monitor has several identifiable parts:

1) The control card interpreter is responsible for reading and carrying out the instructions on
the cards at the point of execution.
2) The loader is invoked by the control card interpreter to load system programs and
application programs into memory at intervals.
3) The device drivers are used by both the control card interpreter and the loader to perform
I/O on the system's I/O devices. The system and application programs are linked to these
same device drivers, providing continuity in their operation, as well as saving memory space
and programming time. In a batch system, the resident monitor provides automatic job
sequencing as indicated by the control cards. When a control card indicates that a program is
to be run, the monitor loads the program into memory and transfers control to it. When the
program completes, it transfers control back to the monitor, which reads the next control
card, loads the appropriate program, and so on. This cycle is repeated until all control cards
are interpreted for the job. Then the monitor automatically continues with the next job.
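The automatic job sequencing described above amounts to a simple dispatch loop. The following
C sketch simulates it: the control card names come from the text, but the deck contents and
helper routines are invented for illustration and are not part of any real monitor:

#include <stdio.h>
#include <string.h>

/* Simulated input deck; a real monitor would read these from the card reader. */
static const char *deck[] = { "$JOB", "$FTN", "...source cards...",
                              "$RUN", "...data cards...", "$END" };
static int next_card = 0;
static const int ncards = sizeof deck / sizeof deck[0];

static const char *read_card(void) {
    return next_card < ncards ? deck[next_card++] : NULL;
}

/* Stand-in for the loader: a real monitor would load the named
 * program into memory and transfer control to it. */
static void load_and_run(const char *prog) {
    printf("monitor: loading and running %s\n", prog);
}

int main(void) {
    const char *card;
    while ((card = read_card()) != NULL) {
        if (strncmp(card, "$FTN", 4) == 0)      load_and_run("FORTRAN compiler");
        else if (strncmp(card, "$ASM", 4) == 0) load_and_run("assembler");
        else if (strncmp(card, "$RUN", 4) == 0) load_and_run("user program");
        else if (strncmp(card, "$END", 4) == 0) break;   /* final card of the job */
        /* other cards ($JOB, program or data cards) are skipped here */
    }
    return 0;
}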
BATCH PROCESSING
In the old days (before 1960) it was difficult to execute a program using a computer because
the computer was located in three different rooms: one room for the card reader, one room for
executing the program, and another room for printing the result. The user or machine
operator ran between these rooms to complete a job. This problem was solved using batch
processing. Batch processing is one of the oldest methods of running programs: a job is
prepared off line and submitted to the computer centre. A computer operator collects the
programs, which have been punched on cards, and stacks one program or job on top of another.
When a batch of programs has been collected, the operator loads this batch of programs into
the computer at one time, where they are executed one after another. Batch processing is also
known as serial, sequential or stacked job processing. When a computer is used in this way,
the input data are introduced into the computer and processed automatically without operator
intervention. Many different jobs (or sets of data) are processed one after another in this
way, but without any interaction from the users during program execution. The method of batch
processing reduces the idle time of a computer system because the transition from one job to
another does not require operator intervention.
MULTIPROGRAMMING
Basically there are two types of programs: I/O-bound programs and CPU-bound programs.
Programs used for commercial data processing normally read in vast amounts of data, perform
very little computation, and output large amounts of information; such programs are known as
I/O-bound programs. On the other hand, programs used for scientific and engineering
applications need very little I/O but require enormous computation; such programs are called
CPU-bound programs, because more CPU time is required for processing them. In order to
overcome the problem of under-utilization of main memory and the CPU, the concept of
multiprogramming was introduced in operating systems. When one program is waiting for an I/O
transfer, another program is ready to utilize the CPU. Thus it is possible for several users
to share the time of the CPU.
Multiprogramming is a technique to execute a number of programs concurrently with a single
processor. In multiprogramming, a number of processes reside in main memory at a time. The
operating system picks and begins to execute one of the jobs in the main memory. In a non-
multiprogramming system the CPU can execute only one program at a time; if the running
program is waiting for an I/O device, the CPU becomes idle, which affects the performance
of the CPU. But in a multiprogramming environment, if an I/O wait occurs in a process, the
CPU switches from that job to another job in the job pool, so the CPU is not idle at any time.
An analogy: a doctor doesn't have only one patient at a time; a number of patients reside in
the hospital under treatment. If the doctor has enough patients, the doctor is never idle.

It is important to note here that multiprogramming is not defined to be the execution of
instructions from several programs simultaneously. Rather, it means that there are a
number of programs available to the CPU (stored in the main memory) and that a portion of
one is executed, then a segment of another, and so on. Although two or more user programs
reside in the main memory simultaneously, the CPU is capable of executing only one
instruction at a time. Hence, at any given time, only one of the programs has control of the
CPU and is executing instructions. Simultaneous execution of more than one program with a
single CPU is impossible. In some multiprogramming systems, only a fixed number of jobs can
be processed concurrently (multiprogramming with a fixed number of tasks, MFT), while in
others the number of jobs can vary (multiprogramming with a variable number of tasks, MVT).

A typical scenario of jobs in a multiprogramming system is shown in the figure. At the
particular time instance shown in the figure, job A is not utilizing the CPU, since it is
busy writing output data on the disk (an I/O operation). Hence the CPU is being utilized to
execute job B, which is also present in the main memory. Job C, also residing in the main
memory, is waiting for the CPU to become free. Actually, as shown in the figure, in the case
of multiprogramming all the jobs residing in the main memory will be in one of the following
three states: running (it is using the CPU), blocked (it is performing an I/O operation), and
ready (it is waiting for the CPU to be assigned to it). In our example, jobs A, B and C are in
the blocked, running and ready states respectively. Since job C is in the ready state, as soon
as the execution of job B is completed or job B needs to perform an I/O operation, the CPU
will start executing job C. In the meanwhile, if job A completes its output operation, it will
be in the ready state waiting for the CPU. Hence, in multiprogramming, the CPU will never be
idle as long as there is some job to execute. Note that although many jobs may be in the ready
and blocked states, only one job can be running at any instant. The area occupied by each
job residing simultaneously in the main memory is known as a memory partition. The actual
number of partitions, and hence jobs, allowed in the main memory at any given time varies
depending upon the operating system in use. Moreover, those jobs awaiting entry into main
memory are queued on a fast secondary storage device such as a magnetic disk. The first job
from this queue will be loaded into the main memory as soon as one of the jobs already
occupying the main memory is completed and the corresponding memory partition becomes
free.
Requirements of multiprogramming system
Multiprogramming has two main advantages: increased throughput (performance of
the CPU) and lowered response time. Throughput is increased by utilizing the idle time of
the CPU for running other programs that are already residing in the main memory. Response
time is lowered by recognizing the priority of a job as it enters the system and by processing
jobs on a priority basis.

On the other hand, the incorporation of multiprogramming in the operating system
has, of course, complicated matters. For a computer to work simultaneously on many
programs, the following additional hardware and software features are required:

Large memory: For multiprogramming to work satisfactorily, a large main memory is required
to accommodate a good number of user programs along with the operating system.
Memory protection: Computers designed for multiprogramming must provide some type of
memory protection mechanism to prevent a job in one memory partition from changing
information or instructions of a job in another memory partition. For example, in the figure
we would not want job A to inadvertently destroy something in the completely independent job
B or job C. In a multiprogramming system, this is achieved by the memory protection
feature, a combination of hardware and software, which prevents one job from addressing
beyond the limits of its own allocated memory area.
Job status preservation: In multiprogramming, when a running job is blocked for I/O
processing, the CPU is taken away from this job and given to another job that is ready for
execution. Later, the former job will be allocated the CPU to continue its execution. Notice
that this requires preserving the job's complete status information when the CPU is taken
away from it, and restoring this information before the CPU is given back to it again.
To enable this, the operating system maintains a process control block (PCB) for each
loaded process. With this arrangement, before taking away the CPU from the running
process, its status is preserved in its PCB, and before the process resumes execution when
the CPU is given back to it later, its status is restored from its PCB. Hence, the process
can continue execution without any problem.
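As a rough illustration of what a PCB might contain, here is a minimal C sketch; the exact
fields vary from one operating system to another, and the names below are assumptions for
illustration:

/* Minimal, illustrative process control block (PCB).
 * Real operating systems keep many more fields. */
enum proc_state { READY, RUNNING, BLOCKED };

struct pcb {
    int   pid;                 /* process identifier                     */
    enum proc_state state;     /* running, ready or blocked              */
    unsigned long pc;          /* saved program counter                  */
    unsigned long regs[16];    /* saved general-purpose registers        */
    unsigned long mem_base;    /* base of the job's memory partition     */
    unsigned long mem_limit;   /* size of the partition (for protection) */
};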

Proper job mix: A proper mix of I/O-bound and CPU-bound jobs is required to effectively
overlap the operations of the CPU and I/O devices. It is necessary that when a program is
waiting for an I/O operation, another program has enough computation to keep the CPU
busy. If all programs need I/O at the same time, the CPU will again be idle. Hence, the main
memory should contain some CPU-bound programs and some I/O-bound programs in its
various partitions, so that at least one program which does not need I/O is always
available to the CPU for processing.
CPU scheduling: In a multiprogramming system, there will often be situations in which two
or more jobs are in the ready state, waiting for the CPU to be allocated for execution. When
more than one process is in the ready state when the CPU becomes free, the operating system
must decide which of the ready jobs should be allocated the CPU for execution. The part of
the operating system concerned with this decision is called the CPU scheduler, and the
algorithms it uses are called CPU scheduling algorithms.
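To make the scheduler's decision concrete, the sketch below picks the next process from a
simple first-come-first-served ready queue. It is only one of many possible scheduling
algorithms, and the data layout (and the minimal pcb stand-in) is an assumption for
illustration:

#include <stddef.h>

/* Minimal stand-ins so the sketch is self-contained. */
enum proc_state { READY, RUNNING, BLOCKED };
struct pcb { int pid; enum proc_state state; };

#define MAXPROC 8

/* Illustrative FCFS ready queue: dispatch the process that
 * has waited longest; return NULL if no process is ready. */
struct ready_queue {
    struct pcb *items[MAXPROC];
    int head, tail, count;
};

struct pcb *schedule_next(struct ready_queue *rq) {
    if (rq->count == 0)
        return NULL;              /* CPU would enter its idle loop */
    struct pcb *p = rq->items[rq->head];
    rq->head = (rq->head + 1) % MAXPROC;
    rq->count--;
    p->state = RUNNING;           /* this process now gets the CPU */
    return p;
}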
MULTIPROCESSING

Multiprocessing is a technique to execute two or more programs simultaneously on a computer
system having more than one CPU. The term multiprocessing is used to describe interconnected
computer configurations, or computers with two or more independent CPUs, that have the
ability to execute several programs simultaneously. In such a system, instructions from
different and independent programs can be processed at the same instant of time by different
CPUs, or the CPUs may simultaneously execute different instructions from the same program.
There is an almost limitless number of possible multiprocessing configurations.
Multiprocessing provides a built-in backup: if one of the CPUs breaks down, the other CPUs
automatically take over the complete workload until repairs are made. In some systems, CPUs
are connected into computer networks. In these networks, small CPUs called front-end
processors are used for data communication, and the main CPU (the host computer) or back-end
processors are used only for major processing jobs and not for data communication. In some
multiprocessing systems each CPU performs only a specific type of application; for example,
in a multiprocessing system with two CPUs, one may be used to process only on-line jobs
while the other may be meant for processing only batch applications. However, these systems
are so designed that in case of breakdown of one CPU the other CPU takes over the complete
workload until repairs are made. Moreover, different multiprocessing systems use different
types of memory configuration: in some systems each CPU has its own main memory, in others
all the CPUs share a common memory, while in some others each CPU may have access to both
separate and common memories. Multiprocessing systems are of two types:
tightly coupled systems and loosely coupled systems. In a tightly coupled system, there is a
single system-wide primary memory, which is shared by all the processors. On the other
hand, in a loosely coupled system, the processors do not share memory, and each processor
has its own local memory.
Advantages and limitations of Multiprocessing:
Multiprocessing systems typically have the following advantages:
Better performance: Due to the multiplicity of processors, multiprocessor systems have better
performance (shorter response times and higher throughput) than single-processor systems.
For example, if there are two different programs to be run, two processors are evidently more
powerful than one, because the programs can be run simultaneously on different processors.
Better reliability: Due to the multiplicity of processors, multiprocessor systems also have
better reliability than single-processor systems. In a properly designed multiprocessor
system, if one of the processors breaks down, the other processor(s) automatically take over
the system workload until repairs are made. Hence, a complete breakdown of such systems can
be avoided.
Multiprocessing systems, however, require a very sophisticated operating system to
schedule, balance, and coordinate the input, output and processing activities of multiple
processors. The design of such an operating system is a complex and time-consuming job.
Moreover, multiprocessing systems are expensive to procure and maintain. In addition to the
high price paid initially, the regular operation and maintenance of these systems is also a
costly affair.
TIME SHARING PROCESS
Time sharing is a way of allowing more than one person to use a computer at the
same time: a number of terminals all share the same computer. A time sharing system
may have many, even hundreds, of terminals linked up to the same computer at the same time.
It resembles on-line multiprogramming, where programs are executed on a priority basis, but
in time sharing the computer's time is divided among all users on a scheduled basis. For
example, let us assume that the time slice for a time sharing system is 10 milliseconds, i.e.
the time sharing operating system allocates 10 milliseconds to each user, during which a
program belonging to that user is executed. Suppose there are 100 users of this time sharing
system; then, if 10 milliseconds is allocated to each user, a particular user will get the
CPU's attention once every 100 x 10 milliseconds = 1 second. In a time sharing system only
one program can be in control of the CPU at a given time; as a result, all the users of a
time sharing system will be in one of the following three states:
1. Active state: The user's program currently has control of the CPU. Obviously, only one
user will be active at a time.
2. Ready state: The user's program is ready to continue but is waiting for its turn to
get the attention of the CPU. More than one user can be in the ready state at a time.
3. Waiting or blocked state: The user has made no request for execution of his job, or
the user's program is waiting for some I/O operation. Again, more than one user can be
in the wait state at a time.


In fig (b), user 2 is active; users 1, 3 and 4 are in the wait state; and users 5 and 6 are in
the ready state. As soon as the time slice of user 2 is completed, the time sharing supervisor
moves on to the next ready user (those in the wait state are skipped, since they are making no
demand for the CPU). The next ready user in the queue is user 5, which now becomes active, as
shown in fig (c). A user will remain active until the allocated time slice expires, or until
the program's execution finishes within this time period; at that time, control is passed on
to the next ready user in the queue, which is user 6 in our example. Whenever an I/O operation
is completed for a waiting user, that user's state is changed to ready, and it is serviced the
next time around.

In a typical time sharing system, hundreds of users may be using the system simultaneously.
As the total main memory available in a computer is limited, it is not possible to keep the
programs of all the users of a time sharing system in the main memory simultaneously.
Thus, in this case, the time sharing operating system keeps only a few programs in the
main memory, and the rest are stored on disk storage. The memory-resident programs
include the active program and some of the ready programs which will get CPU attention
very shortly; a waiting program in the main memory is normally replaced by a ready
program from the disk storage. When a program is to be executed, it is brought back to the
main memory from the disk, and an inactive program is sent to the disk. The operation of
transferring programs from the main memory to the disk storage and back is known as
swapping. Sometimes this swapping process is also known as roll-in/roll-out.

Requirements of time sharing systems:

Time sharing systems typically require the following additional hardware and software
features:
1. A number of terminals simultaneously connected to the system, so that multiple users
can simultaneously use the system in interactive mode.
2. A relatively large memory to support multiprogramming.
3. A memory protection mechanism to prevent one job's instructions and data from
interfering with those of other jobs in a multiprogramming environment.
4. A job status preservation mechanism to preserve a job's complete status information
when the CPU is taken away from it, and to restore this information before the
CPU is given back to it again.
5. A special CPU scheduling algorithm which allocates the CPU for a very short
period, one by one, to each user process in a circular fashion (see the round-robin
sketch after this list).
6. An alarm clock mechanism to send an interrupt signal to the CPU after every time slice.
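The round-robin allocation named in point 5 can be illustrated with a short, self-contained C
simulation; the user states and time-slice length below are invented for the example:

#include <stdio.h>

/* Round-robin simulation: each of NUSERS user processes gets one
 * SLICE_MS time slice in turn; waiting users are skipped. */
#define NUSERS   4
#define SLICE_MS 10

enum state { ACTIVE, READY, WAITING };

int main(void) {
    enum state users[NUSERS] = { READY, READY, WAITING, READY };
    int elapsed_ms = 0;

    for (int round = 0; round < 2; round++) {
        for (int u = 0; u < NUSERS; u++) {
            if (users[u] == WAITING)
                continue;                 /* no demand for the CPU: skip */
            users[u] = ACTIVE;
            printf("t=%3d ms: user %d runs for %d ms\n",
                   elapsed_ms, u, SLICE_MS);
            elapsed_ms += SLICE_MS;       /* time slice expires          */
            users[u] = READY;             /* back to the ready queue     */
        }
    }
    return 0;
}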
Advantages of time sharing systems:
Although time sharing systems are complex to design, they provide several advantages to
their users. The main advantages of time sharing systems are as follows:
Reduced CPU idle time:
The speed of a user's thinking and typing is much slower than the processing speed of the
computer. Hence, during interactive usage of a system, while a particular user is engaged in
thinking or typing his/her input, a time sharing system can service many other users. In this
manner, time sharing systems help in reducing the CPU idle time, increasing the system
throughput.
Provides the advantage of quick response time:
The special CPU scheduling algorithm used in time sharing systems ensures quick response
time to all users. This feature can be effectively used for interactive programming and
debugging to improve programmer efficiency. Multiple programmers can simultaneously work
on writing, testing, and debugging portions of their programs, or try out various approaches
to a problem's solution.
Offers good computing facilities to small users:
Small users can get direct access to much more sophisticated hardware and software than
they could otherwise justify or afford. In time sharing systems, they merely pay a fee for the
resources used and are relieved of the hardware, software and personnel problems associated
with acquiring and maintaining their own installation.
REAL TIME PROCESSING
In a real-time or on-line operating system, all the resources are accessible 24 hours a day
(on line). The computer processes one or all of the inputs immediately and delivers or
transmits the output instantaneously. These operating systems are generally
single-application oriented; users are not permitted to prepare or modify programs, but are
allowed only to input data, make enquiries and get reports. These are dedicated systems meant
for only one specific application. Transaction processing is an on-line method in which the
data are processed immediately and files get updated as soon as a transaction takes place.
On-line processing facilitates the use of
interactive programs, by which users can communicate with the computer during the processing
operation. When such an interactive program supplies the results of processing so that they
can be used when needed to control or modify an operation, or to answer a customer's inquiry,
the procedure is referred to as real-time processing. An airline reservation system, for
example, requires immediate processing. Each time a ticket is issued or cancelled, the data
must be immediately entered into a computer, processed and made available. This is how the
system works: the airline agent interacts with the computer and gets information about
flights that fit the customer's schedule. When an available flight is found, the computer
immediately records the sale of the desired seat. This automatically reduces the number of
seats available on the flight. Since more than one terminal is used to make reservations for
a flight, the ability to record a transaction immediately prevents duplication of a sale that
has already been made. Real-time processing usually uses terminals linked to a CPU via
telecommunication lines. When an on-line computer system operates quickly enough to
facilitate the decision-making process in an organization, it is a real-time system.
There are many applications that require an immediate response from the computer. Getting
a stock market quotation, finding the current level of product inventory, and searching
criminal data files for a possible suspect may all be actions that need to be done without
delay. In these cases, a real-time processing system is needed. Real time means immediate
response from the computer. A system in which a transaction accesses and updates a file
quickly enough to affect the original decision making is called a real-time system. The
essential feature is that the input data must be processed quickly enough so that further
action can then be promptly taken on the result. One of the early and very sophisticated
commercial real-time systems was the American Airlines SABRE reservation system. The
following factors justify the use of real-time processing for an airline reservation system:
1. There are hundreds of flights daily.
2. Each flight may have as many as 300 seats or more "in inventory".
3. As soon as a seat is reserved or cancelled, the concerned files are updated
before the next transaction can be processed.
4. The response time should be very short, because a customer's
reservation is to be done while he waits.
5. Seats may be sold for only a portion of a flight. For example, Mr. X may
book a seat to Baroda on a Delhi to Bombay flight which stops in
Baroda. That seat will then be available for the Baroda to Bombay leg.
6. Hundreds of agents throughout the country are selling seats from the
inventory. An airline seat is a very perishable item: if it is not sold, it
is lost once the flight is made.

A few more examples of business real-time processing are:

1. Air traffic control systems.
2. Reservation systems used by hotels and car rental agencies.
3. Systems that provide immediate updating of customer accounts in savings banks.
4. Systems that provide up-to-the-minute information on stock prices.
5. Process control systems, as in nuclear reactor plants and steel mills.
I/O STRUCTURE AND I/O INTERRUPT

A general-purpose computer system consists of a CPU and multiple device controllers that
are connected through a common bus. Each device controller is in charge of a specific type
of device. Depending on the controller, there may be more than one attached device. A device
controller maintains some local buffer storage and a set of registers. The device controller
is responsible for moving the data between the peripheral devices that it controls and its
local buffer storage. The size of the local buffer within a device controller varies from one
controller to another, depending on the particular device being controlled.
To start an I/O operation, the CPU loads the appropriate registers within the device
controller. The device controller examines the contents of these registers to determine what
action to take. For example, if it finds a read request, the controller will start the
transfer of data from the device to its local buffer. Once the transfer of data is complete,
the device controller informs the CPU that it has finished its operation. It accomplishes
this communication by triggering an interrupt.

These events occur as the result of a user process requesting I/O. Once the I/O is started,
two courses of action are possible: 1. With synchronous I/O, the I/O is started, and control
is returned to the user program only upon I/O completion. 2. Asynchronous I/O returns control
to the user program without waiting for the I/O to complete; the I/O then can continue while
other system operations occur.

Waiting for I/O completion may be accomplished in one of two ways:
1) Some computers have a special wait instruction that idles the CPU until the next
interrupt. If the CPU always waits for I/O completion, at most one I/O request is
outstanding at a time, so whenever an I/O interrupt occurs, the operating system knows
exactly which device is interrupting. On the other hand, this approach excludes
concurrent I/O operations to several devices and also excludes the possibility of
overlapping useful computation with I/O.
2) A better alternative is to start the I/O and then to continue processing other
operating-system or user program code. A system call is then needed to allow the user program

to wait for I/O completion, if desired. If no user program is ready to run and the
operating system has no other work to do, we still require the wait instruction or an idle
loop. We also need to be able to keep track of many I/O requests at the same time. For
this purpose, the operating system uses a table containing an entry for each I/O
device: the device-status table. Each table entry indicates the device's type, address,
and state (not functioning, idle or busy). If the device is busy with a request, the
type of request and other parameters will be stored in the table entry for that device.
Since it is possible for other processes to issue requests to the same device, the
operating system will also maintain a wait queue, a list of waiting requests, for each
I/O device.
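A device-status table of the kind just described might be declared as in the following C
sketch; the field names and the queue representation are assumptions for illustration:

/* Illustrative device-status table entry. The operating system keeps
 * one such entry per I/O device, plus a queue of waiting requests. */
enum dev_state { NOT_FUNCTIONING, IDLE, BUSY };

struct io_request {
    int  type;                 /* e.g. read or write               */
    long block;                /* device address of the transfer   */
    struct io_request *next;   /* next request in the wait queue   */
};

struct device_entry {
    int  type;                 /* device type (disk, printer, ...) */
    long address;              /* device address on the bus        */
    enum dev_state state;      /* not functioning, idle or busy    */
    struct io_request *queue;  /* wait queue of pending requests   */
};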

An I/O device interrupts when it needs service. When an interrupt occurs, the operating
system first determines which I/O device caused the interrupt. It then indexes into the
device-status table to determine the status of that device, and modifies the table entry to
reflect the occurrence of the interrupt. For most devices, an interrupt signals completion of
an I/O request. If there are additional requests waiting in the queue for this device, the
operating system starts processing the next request.
Finally, control is returned from the I/O interrupt. If a process was waiting for this request
to complete (as recorded in the device-status table), we can now return control to it.
Otherwise, we return to whatever we were doing before the I/O interrupt: to the execution of
the user program (the program started an I/O operation and that operation has now finished,
but the program has not yet waited for the operation to complete) or to the wait loop (the
program started two or more I/O operations and is waiting for a particular one to finish, but
this interrupt was from one of the other operations).
Many interactive systems allow users to type ahead, i.e. to enter data on the keyboard before
the data are requested. In this case, interrupts may occur, signaling the arrival of
characters from the terminal, while the device-status table indicates that no program has
requested input from the device. If type-ahead is to be allowed, then a buffer must be
provided to store the type-ahead characters until some program wants them. We may need a
buffer for each input device.
DMA (Direct Memory Access):
One key to obtaining good performance in a computer system is minimizing the number of
interrupts that occur while a program executes. Direct memory access (DMA) requires
only a single interrupt for each block of characters transferred in an I/O operation. It is
faster than the method in which the processor is interrupted for each character transferred.
Once an I/O operation is initiated, characters are transferred to primary storage on a
cycle-stealing (top priority) basis: the channel temporarily usurps the processor's path to
storage while a character is being transferred, and then the processor continues operation.
When a device is ready to transmit one character of the block, it "interrupts" the processor.
But with DMA, the processor state does not have to be saved; the processor is merely
delayed rather than interrupted. Under the control of special hardware, the character is
transferred to primary storage. When the transfer is complete, the processor resumes
operation. DMA is useful in systems that support a very large volume of I/O transfers. The
hardware responsible for stealing cycles and operating the I/O devices in DMA mode is called
a DMA channel.
Normally, a programmed CPU loop that reads the bytes one at a time from the controller wastes
CPU time. DMA was developed to free the CPU from this low-level work. When DMA is
used, the CPU gives the controller the information it needs: the disk address of the block,
the memory address where the block is to go, and the number of bytes to transfer.
After the controller has read the entire block from the device into its buffer, it copies the
first byte or word into main memory at the address specified by the DMA memory address.
Then it increments the DMA address and decrements the DMA count by the number of bytes
just transferred. This process is repeated until the DMA count becomes zero, at which time
the controller causes an interrupt. When the operating system takes over, it does not have to
copy the block to memory; it is already there.
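The increment-address/decrement-count cycle just described can be mimicked in software. The
following self-contained C sketch simulates a DMA controller draining a block into memory; it
is purely illustrative, since real DMA transfers are performed by hardware, not by CPU code
like this:

#include <stdio.h>
#include <string.h>

#define WORD 4  /* bytes transferred per stolen cycle, for illustration */

/* Simulated DMA transfer: copy 'count' bytes from the controller's
 * buffer to 'mem_addr', advancing the address and decrementing the
 * count exactly as the controller hardware would. */
void dma_transfer(unsigned char *mem_addr, const unsigned char *buffer,
                  int count) {
    while (count > 0) {
        int n = count < WORD ? count : WORD;
        memcpy(mem_addr, buffer, n); /* one word per stolen memory cycle */
        mem_addr += n;               /* increment the DMA address        */
        buffer   += n;
        count    -= n;               /* decrement the DMA count          */
    }
    /* count == 0: the controller would now raise a single interrupt */
    printf("dma: block transferred, raising completion interrupt\n");
}

int main(void) {
    unsigned char disk_block[16] = "example block!!";
    unsigned char memory[16];
    dma_transfer(memory, disk_block, sizeof disk_block);
    return 0;
}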

DUAL-MODE OPERATION
To ensure proper operation, we must protect the operating system and all other programs and
their data from any malfunctioning program. Protection is needed for any shared resource. We
need two separate modes of operation:
a) User mode, b) Monitor mode (also called system mode, privileged mode or supervisor mode).
A bit, called the mode bit, is added to the hardware of the computer to indicate the current
mode: monitor (0) or user (1). When the mode bit is 0, the task is executed on behalf of the
operating system, and when the mode bit is 1, the task is executed on behalf of the user.
At system boot time, the hardware starts in monitor mode. The operating system is then
loaded, and starts user processes in user mode. Whenever an interrupt occurs, the hardware
switches from user mode to monitor mode, i.e. changes the state of the mode bit to 0. Thus,
whenever the operating system gains control of the computer, it is in monitor mode. The
system always switches to user mode by setting the mode bit to 1 before passing control to a
user program.
The dual mode of operation provides us with the means for protecting the operating system
from errant users, and errant users from one another. We accomplish this
protection by designating machine instructions that may cause harm as privileged
instructions. The hardware allows privileged instructions to be executed only in monitor
mode. If an attempt is made to execute a privileged instruction in user mode, the hardware
does not execute the instruction, but rather treats the instruction as illegal and traps it to
the operating system.
When a system call is executed, it is treated by the hardware as a software interrupt. Control
passes through the interrupt vector to a service routine in the operating system, and the mode
bit is set to monitor mode. The system call service routine is a part of the operating system.
The monitor examines the interrupting instruction to determine what system call has
occurred; a parameter indicates what type of service the user program is requesting.
Additional information needed for the request may be passed in registers, on the stack, or in
memory (with pointers to the memory locations passed in registers). The monitor verifies that
the parameters are correct and legal, executes the request, and returns control to the
instruction following the system call.
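The dual-mode rule can be summarized in a few lines of C. This is a conceptual model of the
hardware check (privileged instructions execute only when the mode bit is 0), not real
hardware or kernel code:

#include <stdio.h>

enum mode { MONITOR = 0, USER = 1 };   /* the mode bit */

/* Conceptual model of the hardware's dual-mode check: a privileged
 * instruction executes only in monitor mode; in user mode it traps
 * to the operating system instead. */
void try_privileged_instruction(enum mode mode_bit) {
    if (mode_bit == MONITOR) {
        printf("executed on behalf of the operating system\n");
    } else {
        printf("illegal instruction: trap to the operating system\n");
        /* the trap handler would now run with mode_bit = MONITOR */
    }
}

int main(void) {
    try_privileged_instruction(MONITOR);  /* allowed */
    try_privileged_instruction(USER);     /* trapped */
    return 0;
}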
OPERATING SYSTEM SERVICES
The operating system provides a user-friendly environment for the creation and
execution of programs and provides services to the user. The main services of
operating systems are:
1. Program execution: A number of tasks are required to execute a program: instructions
and data must be loaded into the main memory, I/O devices and files must be initialized,
and other resources must be prepared. The operating system handles these tasks for the user.
2. Input/output operations: A running program may require input and output. This I/O
may involve a file or an I/O device directly; the operating system must provide
some means to do I/O.
3. File system manipulation: Programs need to read and write files. Programs also need
to create and delete files by name. The operating system provides the facilities for the
user to create files under his/her name, and also provides the operations associated with
those files.
4. Communication: In many circumstances, one process needs to exchange information
with another process. Such communication can occur in two ways: firstly, between
processes that are executing on the same computer; and secondly, between processes that
are executing on different computer systems that are tied together by a computer network.
Communication may be implemented via shared memory, or by the technique of message passing,
in which packets of information are moved between processes by the operating system.
5. Error detection: The operating system detects the different types of errors and should
take appropriate action. The errors include memory errors, power failure, the printer
running out of

paper, and illegal instructions in a program (division by zero, arithmetic overflow, an
attempt to access an illegal memory location).
6. Resource allocation: When multiple users are logged on to the system or multiple jobs
are running at the same time, resources must be allocated to each of them. Many
different types of resources are managed by the operating system, including CPU cycles,
main memory, I/O devices, file storage and so on.
7. Accounting: The operating system can keep track of which users use how much and what
kind of computer resources. This record keeping is useful for improving computing
services.
8. Protection: Protection is a mechanism or technique through which we protect
our files against unauthorized users who might corrupt them. The operating
system provides facilities to protect the user's files.

MULTITASKING
Technically speaking, multitasking is the same as multiprogramming. That is, multitasking is
the system's capability to concurrently work on more than one task (job or process). This
means that whenever a task (job or process) needs to perform I/O operations, the CPU can be
used for executing some other task that is also residing in the system and is ready to use
the CPU.
Many authors do not distinguish between multiprogramming and multitasking
because both terms refer to the same concept. However, some authors prefer to use the
term multiprogramming for multi-user systems and multitasking for single-user systems. Note
that even in a single-user system, it is not necessary that the system work on only one job
at a time. In fact, a user of a single-user system often has multiple tasks concurrently
processed by the system. For example, while editing a file in the foreground, a sorting job
can be given in the background. Similarly, while compilation of a program is in progress in
the background, the user may be reading his/her electronic mail in the foreground. In this
manner, a user may concurrently work on many tasks. In such a situation, the status of each
of the tasks is normally viewed in a different window of a multitasking system.
Hence, for those who like to differentiate between the two terms: multiprogramming is the
concurrent execution of multiple jobs in a multi-user system, while multitasking is the
concurrent execution of multiple jobs in a single-user system.
DISTRIBUTED DATA PROCESSING:
Earlier computer systems were centralized, that is, an organization had a large computer
system at one place, and the programs to be executed were brought to this centre. Distributed
processing systems have brought the computing power to the place where it is needed. To
understand the concept of a distributed processing system, let us consider an example: a big
organization has diversified, with its offices situated at different places. Each office has
its own computer system and is capable of accomplishing its own computational needs; this, in
fact, is just decentralization. However, there may be a data processing requirement which a
particular office is unable to complete, perhaps due to lack of data or hardware limitations.
In this situation, the office requires the services of the other computer systems located in
other offices. The best way to solve this problem is to link all the computer systems in all
these offices, thus reducing the burden on the host computer. Thus, a computer system capable of
distributing the computational work among various computer installations is called a
distributed processing system. In such a data processing technique, each computer
installation is capable of fulfilling its own needs, but in exceptional situations it may call
upon the other installations for help. The advantages of distributed systems are:
1. Reduction of the load on the host computer.
2. Minimization of cost in data processing.
3. Reduction of delays in data processing.
4. Better service to the customers.
SPOOLING PROCESS
The process of storing the input data and output results on tape or disk is known as spooling.
The full form of SPOOL is Simultaneous Peripheral Operation On-Line. Spooling is a technique
that has been successfully used on a number of computer systems to reduce the speed
mismatch between devices and to cut the idle time of the CPU. It is the process of placing
all data that comes from an input device, or goes to an output device, on either a magnetic
tape or a disk. A batch of programs, when fed through the keyboard, card reader or any other
input device, is read and temporarily stored on a magnetic tape or disk instead of being
directly stored in the main memory. The programs stored on tape or disk are then fed to and
processed by the main computer. The results obtained are again written on tape or disk
instead of being directly printed on an output device. Special spooling programs are executed
by the operating system to transfer the data from the disk or tape to the main memory, or
from an input to an output device. In this case the disk or tape device acts as a buffer area
between main storage, which is extremely fast, and the I/O devices, which are relatively
slow. A buffer is a temporary storage area which takes information from one device and holds
it until another device is ready to receive it; the two transfers can proceed at different
speeds. If data is typed in very fast, some characters go into a buffer before they appear on
the screen. Spooling programs are executed when the CPU is not too busy with other jobs.
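The buffer described above behaves like a small first-in-first-out queue between a fast device
and a slow one. The following self-contained C sketch is a minimal illustration of that idea,
not an actual spooler:

#include <stdio.h>

/* Minimal FIFO buffer between a fast producer (e.g. the CPU writing
 * output) and a slow consumer (e.g. the printer). */
#define BUFSIZE 8

static char buf[BUFSIZE];
static int head = 0, tail = 0, count = 0;

int buffer_put(char c) {             /* fast device deposits a character */
    if (count == BUFSIZE) return 0;  /* buffer full: producer must wait  */
    buf[tail] = c;
    tail = (tail + 1) % BUFSIZE;
    count++;
    return 1;
}

int buffer_get(char *c) {            /* slow device drains when ready    */
    if (count == 0) return 0;        /* buffer empty: nothing to output  */
    *c = buf[head];
    head = (head + 1) % BUFSIZE;
    count--;
    return 1;
}

int main(void) {
    char c;
    for (const char *p = "spool"; *p; p++)
        buffer_put(*p);              /* input arrives quickly             */
    while (buffer_get(&c))
        putchar(c);                  /* output is drained at its own pace */
    putchar('\n');
    return 0;
}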


UNIT-2
File system, file concept, file attribute, file operation, file type, file structure, access
method, sequential access, index sequential access and direct access, directory structure,
single level, two level, tree structure, file protection and access control.
FILE SYSTEM
An important component of an operating system is the file system. A file system generally
contains:
Access methods: These are concerned with the manner in which data stored within a file is
accessed.
File management: This is concerned with providing the mechanisms for files to be stored,
referenced, shared, and secured.
Auxiliary storage management: This is concerned with allocating space for files on
secondary storage devices.
File integrity mechanisms: These are concerned with guaranteeing that the information in a
file is uncorrupted.

[Figure: a hierarchical file system - the root directory points to user directories, which
in turn point to user files.]

On large-scale time sharing systems and distributed processing systems, it is common for
user accounts to contain between 10 and 100 files. Thus, with a user community of several
thousand users, a system disk might easily contain 50,000 to 100,000 or more separate files.
These files need to be accessed quickly to keep response times small. A file system for this
kind of environment may be organized as in the figure above. A root is used to indicate
where on disk the root directory begins. The root directory points to the various user
directories. A user directory contains an entry for each of a user's files; each entry points
to where the corresponding file is stored on disk. File names need only be unique within a
given user directory. In hierarchically structured file systems, the system name of a file is
usually formed as the path name from the root directory to the file.

File system function:


Some of the functions normally attributed to the file system follow:
1. Users should be able to create, modify and delete files.
2. The mechanism for sharing files should provide various types of controlled access, such
as read access, write access, execute access, etc.
3. Users should be able to order the transfer of information between files.
4. Backup and recovery capabilities must be provided to prevent either accidental loss or
malicious destruction of information.
5. Most important, the file system should provide a user-friendly interface. It should give
users a logical view of their data and of the functions to be performed upon it, rather than
a physical view.
[Figure: the layers of a file system - Application Program, Logical file system, File
organization module, Basic file system, I/O control, Devices.]
To provide efficient and convenient access to the disk, the operating system imposes one or
more file systems to allow the data to be stored, located and retrieved easily. A file system
poses two quite different design problems. The first problem is defining how the file system
should look to the user. This task involves defining a file and its attributes, the
operations allowed on a file, and the directory structure for organizing files. The second
problem is creating algorithms and data structures to map the logical file system onto the
physical secondary storage devices. Files are managed by the operating system; how they are
structured, named, accessed, used, protected and implemented are major topics in operating
system design. As a whole, the part of the operating system dealing with files is known as
the file system.
The file system itself is generally composed of many different levels.
The lowest level, I/O control, consists of device drivers and interrupt handlers that
transfer information between the main memory and the disk system. A device driver can be
thought of as a translator.
The basic file system needs only to issue generic commands to the appropriate device
driver to read and write physical blocks of the disk.

The file organization module knows about files and their logical blocks as well as
physical blocks. By knowing the type of file allocation used and the location of the file, the
file organization module can translate logical block addresses to physical block addresses
for the basic file system to transfer.
Finally, the logical file system manages metadata information. Metadata includes all of the
file system structure excluding the actual data. A file control block (FCB) contains
information about the file, including permissions and the location of the file contents. The
logical file system is also responsible for protection and security.
FILE CONCEPT
Computers can store information on several different storage media, such as magnetic
disks, magnetic tapes and optical disks. So that the computer system is convenient to
use, the operating system provides a uniform logical view of information storage. The
operating system abstracts from the physical properties of its storage devices to define a
logical storage unit, the file. Files are mapped by the operating system onto physical
devices. These storage devices are persistent through power failures and system reboots. A
file is a named collection of related information that is recorded on secondary storage.
Commonly, files represent programs (both source and object forms) and data. Data files may
be numeric, alphabetic, alphanumeric or binary. In general, a file is a sequence of bits,
bytes, lines or records, the meaning of which is defined by the file's creator and user. The
information in a file is defined by its creator. Many different types of information may be
stored in a file: source programs, object programs, executable programs, numeric data, text,
graphic images, sound recordings and so on.
All computer applications need to store and retrieve information. While a process is running,
it can store a limited amount of information within its own address space. A second problem
with keeping information within a process's address space is that the information is lost
when the process terminates. A third problem is that it is frequently necessary for
multiple processes to access the information at the same time. Thus we have three essential
requirements for long-term information storage:
a. It must be possible for multiple processes to access the information concurrently.
b. It must be possible to store a very large amount of information.
c. The information must survive the termination of the process using it.
The usual solution to all these problems is to store information on disks and other
external media in units called files. In other words, a file is a named collection of related
information that is recorded on secondary storage.
A file is a named collection of data. It normally resides on a secondary storage device such
as disk or tape. It may be manipulated as a unit by operations such as:
a) OPEN: Prepare a file to be referenced.
b) CLOSE: Prevent further reference to a file until it is reopened.
c) CREATE: Build a new file.
d) DESTROY: Remove a file.
e) COPY: Create another version of the file with a new name.
f) RENAME: Change the name of the file.
g) LIST: Print or display the contents of a file.
Individual data items within the file may be manipulated by operations like the following (a
small example using the POSIX file API appears after this list):

a) READ: Input a data item from a file to a process.
b) WRITE: Output a data item from a process to a file.
c) UPDATE: Modify an existing data item in a file.
d) INSERT: Add a new data item to a file.
e) DELETE: Remove a data item from a file.
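As a concrete illustration of the OPEN/WRITE/READ/CLOSE/DESTROY cycle, here is a small
example using the standard POSIX file API; the file name is arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[32];

    /* CREATE/OPEN: build the file if needed and prepare it for use */
    int fd = open("example.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* WRITE: output a data item from the process to the file */
    write(fd, "hello, file\n", 12);

    /* READ: input the data back, starting from the beginning */
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

    /* CLOSE: prevent further reference until reopened */
    close(fd);

    /* DESTROY: remove the file */
    unlink("example.txt");
    return 0;
}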
Files can be characterized by:
a) VOLATILITY: This refers to the frequency with which additions and deletions are
made to a file.
b) ACTIVITY: This refers to the percentage of a file's records accessed during a given
period of time.
c) SIZE: This refers to the amount of information stored in the file.
A Text File: A text file is a sequence of characters organized into lines.
A Source File: A source file is a sequence of subroutines and functions, each of which is
further organized as declarations followed by executable statements.
An Object File: An object file is a sequence of bytes organized into blocks understandable by
the system linker.
An Executable File: An executable file is a series of code sections that the loader can bring
into memory and execute.
FILE ATTRIBUTE
A file is named, for the convenience of its human users, and is referred to by its name. A
name is usually a string of characters such as ajit.doc. Some systems differentiate between
upper and lowercase characters in names, whereas other systems consider the two cases to be
equivalent. When a file is named, it becomes independent of the process, the user and even
the system that created it. For instance, one user might create the file ajit.doc, whereas
another user might edit that file by specifying its name. The file's owner might write the
file to a floppy disk, send it in an e-mail or copy it across a network, and it could still be
called ajit.doc on the destination system.
A file has certain other attributes, which vary from one operating system to another but typically
consist of these:
 NAME: The symbolic file name is the only information kept in human readable form.
 IDENTIFIER: This unique tag, usually a number, identifies the file within the file
system; it is the non-human-readable name for the file.
 TYPE: This information is needed for those systems that support different file types.
 LOCATION: This information is a pointer to a device and to the location of the file on
that device.
 SIZE: The current size of the file (in bytes, words or blocks) and possibly the maximum
allowed size are included in this attribute.
 PROTECTION: Access control information determines who can do reading, writing,
executing and so on.
 TIME, DATE AND USER IDENTIFICATION: This information may be kept for
creation, last modification and last use. These data can be useful for protection, security
and usage monitoring.
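On POSIX systems, several of these attributes can be read with the stat() system call; the
following is only an illustrative sketch (the file name ajit.doc is assumed from the example
above):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        struct stat sb;
        if (stat("ajit.doc", &sb) == -1) {  /* look up the file's attributes by name */
            perror("stat");
            return 1;
        }
        printf("size: %lld bytes\n", (long long)sb.st_size);       /* SIZE attribute */
        printf("protection: %o\n", (unsigned)(sb.st_mode & 0777)); /* PROTECTION bits */
        printf("last modified: %lld\n", (long long)sb.st_mtime);   /* TIME attribute */
        return 0;
    }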
The information about all the files is kept in the directory structure, which also resides on
secondary storage. Typically a directory entry consists of the file's name and its unique
identifier. In a system with many files, the size of the directory itself may be
megabytes. Because directories, like files, must be nonvolatile, they must be stored on the
device and brought into memory piecemeal as needed.
FILE OPERATION:
Files exist to store information and to allow it to be retrieved later. Different systems provide
different operations for storage and retrieval. Some of the operations on files are:
CREATE: The file is created with no data. The purpose of the call is to announce that the file
is coming and to set some of the attributes.
DELETE: When the file is no longer needed, it has to be deleted to free up disk space.
OPEN: Before using a file, a process must open it. The purpose of the open call is to allow
the system to fetch the attributes.
CLOSE: When all the accesses are finished, the attributes and disk addresses are no longer
needed, so the file should be closed to free up internal table space.
READ: Data are read from the file. The caller must specify how much data are needed and also
provide a buffer to put them in.
WRITE: Data are written to the file, usually at the current position. If the current
position is the end of the file, the file size increases. If the current position is in the middle of
the file, existing data are overwritten and lost forever.
APPEND: This call is a restricted form of write. It can only add data to the end of the file.
SEEK: For random access files, a method is needed to specify from where to take the data.
One common approach is the seek system call.
SET ATTRIBUTE: Some of the attributes are user settable and can be changed after the
file has been created. The protection mode information is an obvious example.
RENAME: It frequently happens that a user needs to change the name of an existing file. It
is not always strictly necessary, because the file can usually be copied to a new file with the
new name and the old file then deleted.
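A short hedged C sketch of the SEEK and APPEND operations using the standard library
(the file name log.txt is assumed for illustration):

    #include <stdio.h>

    int main(void) {
        FILE *fp = fopen("log.txt", "a+"); /* open for APPEND plus reading */
        if (fp == NULL) return 1;
        fputs("new line at the end\n", fp); /* APPEND: data can be added only at the end */

        fseek(fp, 0, SEEK_SET);            /* SEEK: set the current position to the start */
        int c = fgetc(fp);                 /* READ then proceeds from the new current position */
        printf("first byte: %c\n", c);
        fclose(fp);
        return 0;
    }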
FILE TYPE
A common technique for implementing file types is to include the type as a part of the file
name. The name is split into two parts, a name and an extension, usually separated by
a period (.) character. The system uses the extension to indicate the type of the file and the
type of operations that can be done on that file. For instance, only a file with a .com, .exe or
.bat extension can be executed. The .com and .exe files are two forms of binary executable
files, whereas a .bat file is a batch file containing, in ASCII format, commands to the
operating system. MS-DOS recognizes only a few extensions, but application programs also
use extensions to indicate file types in which they are interested. For example, assemblers
expect source files to have an .asm extension, and the WordPerfect word processor expects
its files to end with a .wp extension. These extensions are not required, so a user may specify
a file without its extension, and the application will look for a file with the given name and
the extension it expects. Because these extensions are not supported by the operating system,
they can be considered as "hints" to the applications that operate on them.
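As a small illustration (a hypothetical helper, not from the notes), the extension can be
extracted in C by searching for the last period in the name:

    #include <stdio.h>
    #include <string.h>

    /* Return a pointer to the extension part of a file name, or "" if there is none. */
    static const char *file_extension(const char *name) {
        const char *dot = strrchr(name, '.');  /* the last '.' separates name and extension */
        return (dot != NULL && dot != name) ? dot + 1 : "";
    }

    int main(void) {
        printf("%s\n", file_extension("ajit.doc")); /* prints: doc */
        printf("%s\n", file_extension("prog.asm")); /* prints: asm */
        return 0;
    }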
FILE TYPE         USUAL EXTENSION                  FUNCTION
Executable file   .exe, .com, .bin or none         Ready-to-run machine language program
Source code       .c, .cc, .java, .pas, .asm, .a   Source code in various languages
Batch             .bat, .sh                        Commands to the command interpreter
Text              .txt, .doc                       Textual data, documents
Word processor    .wp, .tex, .rtf, .doc            Various word-processor formats
Library           .lib, .a, .so, .dll              Libraries of routines for programmers
Multimedia        .mpeg, .mov, .rm                 Binary file containing audio or audio/video information
Many operating systems support several types of files. UNIX and MS-DOS, for example, have
regular files and directories.
Regular file: Regular files are the files that contain user information. Regular files are
generally either ASCII files or binary files. A file that is created by the user with the
help of any language (C, C++, assembly, etc.) other than machine level language is known
as a regular file. ASCII files consist of lines of text. The great advantage of ASCII files is that
they can be displayed and printed as is, and they can be edited with an ordinary text editor.
Directories: The file which contains the information about the regular files in a hierarchical
manner is known as a directory. Directories are the system files for maintaining the structure
of the file system.
Character special file: Character special files are related to input/output and used to model
serial I/O devices such as terminals, printers, and networks. Basically they are used in network
systems for input/output operations.
Block special file: Block special files are used to model disks. The operating system will
only execute a file if it has the proper format. The file which contains the information of
various regular files to do a particular task is known as a blocked file. It has five sections:
header, text, data, relocation bits, and symbol table. The header starts with a magic number,
identifying the file as an executable file. Following the header are the text and data of the
program itself. These are loaded into memory and relocated using the relocation bits. The
symbol table is used for debugging.
FILE STRUCTURE
File types also may be used to indicate the internal structure of the file. Source and object
files have structures that match the expectations of the programs that read them. Further,
certain files must conform to a required structure that is understood by the operating system.
For example, the operating system may require that an executable file have a specific
structure so that it can determine where in memory to load the file and what the location of
the first instruction is. The Macintosh operating system also supports a minimal number of
file structures. It expects files to contain two parts: a resource fork and a data fork. The
resource fork contains information of interest to the user. A data fork contains program code
or data. Too few structures make programming inconvenient, whereas too many cause
operating system bloat and programmer confusion. The resource fork contains instructions
and information of interest to the user, whereas the data fork contains the user's program
code or data; this two-fork structure is used basically in the Macintosh operating system.
Files can be structured in any of several ways. Three common possibilities are depicted below:
(a) byte sequence, (b) record sequence and (c) tree.
[Figure: (a) an unstructured byte sequence, (b) a sequence of fixed length records, (c) a tree of records]
The file in figure (a) is an unstructured sequence of bytes. The operating system does not
know or care what is in the file; all it sees are bytes. Any meaning must be imposed by
user level programs. Both UNIX and MS-DOS use this approach. In figure (b), a file is a
sequence of fixed length records, each with some internal structure. The third kind of file
structure is shown in figure (c). In this organization a file consists of a tree of records, not
necessarily all the same length, each containing a key field in a fixed position in the record. The
tree is sorted on the key field, to allow rapid searching for a particular key. Because the records
are kept sorted on the key in the tree, any record is easy to access whenever necessary.
FILE ACCESS
System designers choose to organize, access and process records and files in different ways,
depending on the type of application and the needs of the user. The three access methods
commonly used in business data processing applications are: sequential access, direct
access/random access, and indexed sequential access. The selection of a particular file
organization or access method depends upon the type of application. The best access method
to use in a given application is the one that meets the user's needs in the most effective and
economical manner. File organization or access requires the use of some key field, a unique
identifying value that is found in every record in the file. The key value must be unique for
each record of the file, because duplication would cause serious problems. In the payroll
example, the employee code field may be used as the key field.
SEQUENTIAL ACCESS
In a sequential file, records are stored one after another in an ascending or descending order
determined by the key field of the records. In the payroll example, the records of the
employees may be accessed sequentially in employee code sequence. Sequentially
organized files that are processed by computer systems are normally stored on storage media
such as magnetic tape or magnetic disk. To access these records, the computer must read the
file in sequence from the beginning. The first record is read and processed first, then the
second record in the file sequence, and so on. To locate a particular record, the computer
program must read each record in sequence and compare its key field to the one that is
needed. The retrieval search ends only when the desired key matches the key field of
the record currently being read.
Advantages of sequential access:
1. Easy to organize and maintain.
2. It is fast and efficient when dealing with large volumes of data that need to be processed
periodically (batch systems).
Disadvantages of sequential access:
1. Requires that all new transactions be sorted into the proper sequence for sequential
access processing.
2. Locating, storing, modifying, deleting or adding records in the file requires rearranging
the file.
3. This method is too slow to handle applications requiring immediate updating or
responses.
4. The timeliness of data in the file deteriorates while batches are being accumulated.
5. Data redundancy is typically high, since the same data may be stored in several files
sequenced on different keys.
DIRECT ACCESS OR RANDOM ACCESS OR RELATIVE ACCESS
Files whose bytes or records can be read in any order are called random access or direct
access or relative access files. Random access files are essential for many applications. For
example, if an airline customer calls up and wants to reserve a seat on a particular flight, the
reservation program/operator must be able to access the record for that flight without having
to read the records for thousands of other flights first. Two methods are used for
specifying where to start reading. In the first one, every READ operation gives the position
in the file to start reading at. In the second one, a special operation, SEEK, is provided to set the
current position. After a SEEK, the file can be read sequentially from the new current
position. In some older mainframe operating systems, files are classified as being either
sequential or random access at the time they are created. This allows the system to use
different storage techniques for the two classes. Modern operating systems do not make
this distinction; all their files are automatically random access. A direct file consists of
records organized in such a way that it is possible for the computer to directly locate the key of
the desired record without having to search through a sequence of other records. This means
that the time required for online inquiry and updating of a few records is much faster than
when batch techniques are used. However, a direct access storage device (DASD) such as a
drum, disk, etc. is essential for storing a direct file. A record is stored in a direct file by its
key field. Although it might be possible to directly use the storage location numbers in the DASD
as the keys for the records stored at those locations, this is seldom done. Instead, an arithmetic
procedure called "hashing" is frequently used. In this method an address-generating
function is used to convert the record key number into a DASD storage address. The address-
generating function is selected in such a manner that the generated addresses are
distributed uniformly over the entire range of the file area and a unique address is
generated for each record key. However, in practice, the above constraints are usually not
satisfied, and the address-generating function often maps a large number of records to the
same storage address. Several methods are followed to overcome this problem of collision
when it occurs.
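A minimal hedged sketch of such an address-generating (hash) function in C; the file area of
1000 block addresses and the modulo function are assumptions chosen only for illustration:

    #include <stdio.h>

    #define FILE_AREA_BLOCKS 1000  /* assumed size of the direct file area */

    /* Map a record key to a DASD block address in the range [0, FILE_AREA_BLOCKS). */
    static unsigned address_for_key(unsigned key) {
        return key % FILE_AREA_BLOCKS;  /* collisions are possible and must be resolved */
    }

    int main(void) {
        printf("key 12345 -> block %u\n", address_for_key(12345)); /* block 345 */
        printf("key 42345 -> block %u\n", address_for_key(42345)); /* also block 345: a collision */
        return 0;
    }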
Advantages of direct file access:
Some of the advantages of direct access are:
1. The access to and retrieval of a record is quick and direct. Any record can be located
and retrieved directly in a fraction of a second, without the need for a sequential search
of the file.
2. If required, it is also possible to process direct file records sequentially in record key
sequence.
3. A direct file organization is most suitable for interactive online applications such as
airline or railway reservation systems, teller facilities in banking applications, etc.
4. Transactions need not be sorted and placed in sequence prior to processing.
Disadvantages of direct file access:
Some of the disadvantages of direct access are:
1. These files must be stored on a direct access storage device. Hence relatively expensive
hardware and software resources are required.
2. File updating (addition and deletion of records) is more difficult as compared to
sequential file access.
3. Address generation overhead is involved in accessing each record, due to the hashing
function.
4. Special security measures are necessary for online direct files that are accessible from
several stations.
INDEX ACCESS (OR) INDEXED SEQUENTIAL FILES
We are all familiar with the concept of an index. For example, the directory in a large multi-
storeyed building is an index that helps us locate a particular person's room within the
building. For instance, to find the room of Mr. X within the building, we could look up his
name in the directory (index) and read the corresponding floor number and room number.
Index table:                        File storage:
Employee Code   Address Location    Address Location   Employee Record
0001            1003                1001               0002 (Mr. X)
0002            1001                1002               0004 (Mr. Y)
0003            1004                1003               0001 (Mr. Z)
0004            1002                1004               0003 (Mr. P)
0005            1005                1005               0005 (Mr. A)
.......         .......             .......            .......
Nth number      Nth number          Nth number         Nth number
Similarly, if we wished to read the section in a book about printers, we need not begin on
page 1 and read every page until we came across the topic of interest. Rather, we could find
the subject in the contents to locate the page number and then turn directly to that page to
begin reading. Indexed sequential files use exactly the same principle. The records in this
type of file are organized in sequence, and an index table is used to speed up access to the
records without requiring a search of the entire file. The records of the file can be stored in
random sequence, but the index table is in sorted sequence on the key value. This provides
the user with a very powerful tool: not only can the file be processed randomly, it can also
be processed sequentially. Since the index table is in a sorted sequence on the key values, the
file management system simply accesses the data records through the index values. As in the
above figure, this technique of file management is commonly referred to as the indexed
sequential access method (ISAM). Files of this type are called ISAM files.
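A hedged C sketch of the index idea: a sorted index table is searched (here by binary search)
for an employee code, and the matching entry yields the storage address; the table values
mirror the illustrative figure above:

    #include <stdio.h>

    struct index_entry { int key; int address; };  /* employee code -> storage location */

    /* Sorted index table, as in the figure. */
    static const struct index_entry idx[] = {
        {1, 1003}, {2, 1001}, {3, 1004}, {4, 1002}, {5, 1005}
    };

    /* Binary search the index; return the storage address, or -1 if the key is absent. */
    static int lookup(int key) {
        int lo = 0, hi = (int)(sizeof idx / sizeof idx[0]) - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (idx[mid].key == key) return idx[mid].address;
            if (idx[mid].key < key) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        printf("employee 0003 is stored at location %d\n", lookup(3)); /* 1004 */
        return 0;
    }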
Advantages:
Some of the advantageous features of indexed file access are:
1. Permits the efficient and economical use of sequential processing techniques when
the activity ratio is high.
2. Permits direct access processing of records in a relatively efficient way when the
activity ratio is low.
Disadvantages:
Some of the disadvantages of indexed file access are:
1. Access to records may be slower than with direct access.
2. Less efficient in the use of storage space than some other alternatives. These files must
be stored on a direct access storage device. Hence relatively expensive hardware and
software resources are required.
DIRECTORY STRUCTURE:
The file systems of computers can be extensive. Some systems store millions of files on
terabytes of disk. To manage all these data, we need to organize them. This organization is
usually done in two parts.
First, disks are split into one or more partitions, also known as minidisks in the IBM world
or volumes in the PC world. Typically each disk on a system contains at least one partition,
which is a low-level structure in which files and directories reside. Sometimes partitions are
used to provide several separate areas within one disk, each treated as a separate storage
device, whereas other systems allow a partition to be larger than a disk so as to group disks
into one logical structure. In this way, the user needs to be concerned with only the logical
directory and file
structure and can ignore completely the problems of physically allocating space for files. For
this reason partitions can be thought of as virtual disks.
Second, each partition contains information about the files within it. This information is kept in
entries in a device directory or volume table of contents. The device directory records
information such as name, location, size and type for all files on that partition.
When considering a particular directory structure, we need to keep in mind the operations that
are to be performed on a directory:
SEARCH FOR A FILE: We need to be able to search a directory structure to find the entry
for a particular file. Since files have symbolic names, and similar names may indicate a
relationship between files, we may also want to find all files whose names match a particular
pattern.
CREATE A FILE: New files need to be created and added to the directory.
DELETE A FILE: When a file is no longer needed, we want to be able to remove it from the
directory.
LIST A DIRECTORY: We need to be able to list the files in a directory, and the contents of
the directory entry for each file in the list (a C sketch of this operation follows the list).
RENAME A FILE: Because the name of a file represents its contents to its users, the name
must be changeable when the contents or use of the file changes. Renaming a file may also
allow its position within the directory structure to be changed.
TRAVERSE THE FILE SYSTEM: We may wish to access every directory and every file
within a directory structure. For reliability, it is a good idea to save the contents and structure
of the entire file system at regular intervals. This saving often consists of copying all files to
magnetic tape.
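On POSIX systems, the "list a directory" operation can be sketched with opendir()/readdir();
this is illustrative only (the current directory "." is assumed):

    #include <stdio.h>
    #include <dirent.h>

    int main(void) {
        DIR *d = opendir(".");           /* open the directory to be listed */
        if (d == NULL) return 1;
        struct dirent *e;
        while ((e = readdir(d)) != NULL) /* iterate over the directory entries */
            printf("%s\n", e->d_name);   /* print each entry's file name */
        closedir(d);
        return 0;
    }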
LOGICAL STRUCTURE OF A DIRECTORY:
Depending on the nature of the files, directory structures are mainly of four types, such as:
a. Single level directory structure
b. Two level directory structure
c. Tree structured directory
d. Acyclic graph directory structure.
The directory structure depends upon three factors: how many files we have, the
requirements of the user, and the way of searching. The user may choose a particular
directory structure mainly for quick access to a particular file.
Single level directory structure:
The simplest directory structure is the single-level directory. A directory structure having a
single level is called a single level directory structure: the number of levels is not more than
one, and no level behaves as an MFD distinct from a UFD. All files are contained in the same
directory, which is easy to support and understand. A single-level directory has significant
limitations, however, when the number of files increases or when the system has more than
one user. Since all files are in the same directory, they must have unique names. If two users
give their data files the same name, the unique name rule is violated. The MS-DOS operating
system allows only 11 character file names; UNIX allows 255 characters.
Even a single user on a single-level directory may find it difficult to remember the names of
all the files as the number of files increases. It is not uncommon for a user to have hundreds
of files on one computer system and an equal number of additional files on another system.
In such an environment, keeping track of so many files is a daunting task. In this directory
system there is only one directory, consisting of all the files.
Advantage: Ability to locate all the files quickly.
Disadvantages: (i) Different users may accidentally use the same names for their files. For
example, if user-1 creates a file called "sample" and user-2 also creates a file with the same
name "sample", then the file is overwritten.
(ii) It is not suitable for multi-user systems. It is used on small embedded systems that have
only a few files.
Two-level directory structure:
A single level directory often leads to confusion of file names between different users. The
standard solution is to create a separate directory for each user. In the two level directory
structure, each user has his own user file directory (UFD). Each UFD has a similar
structure but lists only the files of a single user. When a user job starts or a user logs in, the
system's master file directory (MFD) is searched. The MFD is indexed by user
name or account number, and each entry points to the UFD for that user.
When a user refers to a particular file, only his own UFD is searched. Thus different users
may have files with the same name, as long as all the file names within each UFD are
unique. To delete a file, the operating system confines its search to the local UFD; thus it
cannot accidentally delete another user's file that has the same name. The user directories
themselves must be created and deleted as necessary. A special system program is run with
the appropriate user name and account information. The program creates a new UFD and
adds an entry for it to the MFD. Although the two level directory structure solves the name
collision problem, it still has disadvantages. This structure effectively isolates one user from
another. This isolation is an advantage when the users are completely independent, but is a
disadvantage when the users want to cooperate on some task and to access one another's
files. Some systems simply do not allow local user files to be accessed by other users. In this
structure the MFD is one level and the UFDs are another level, which is why it is called a two
level directory structure. To name a particular file uniquely in a two level directory, we must
give both the user name and the file name. A two level directory can be thought of as a tree,
or at least an inverted tree. The root of the tree is the MFD and the files are the leaves of the
tree. A special case of this situation occurs in regard to the system files. Those programs
provided as a part of the system (loaders, assemblers, compilers, utility routines, libraries and
so on) are generally defined as files. When the appropriate commands are given to the
operating system, these files are read by the loader and executed. The standard solution is to
complicate the search procedure slightly. Whenever a file name is given to be loaded, the
operating system first searches the local UFD. If the file is found, it is used. If it is not found,
the system automatically searches the special user directory that contains the system files.
The sequence of directories searched when a file is named is called the search path.
DISADVANTAGES: It is not satisfactory for users with a large number of files.
TREE STRUCTURED DIRECTORY:
The two level directory eliminates name conflicts among users, but it is not satisfactory for
users with a large number of files. To avoid this, we create sub-directories and group files
of the same type in them.
Once we have seen how to view a two level directory as a two level tree, the natural
generalization is to extend the directory structure to a tree of arbitrary height. This
generalization allows users to create their own sub-directories and to organize their files
accordingly. The tree has a root directory. Every file in the system has a unique path name. A
path name is the path from the root, through all the sub-directories, to a specified file. A
directory contains a set of files or sub-directories. A sub-directory is simply another file, but
it is treated in a special way. All directories have the same internal format. In normal use,
each user has a current directory. The current directory should contain most of the files that
are of current interest to the user. When a reference is made to a file, the current directory is
searched. If a file is needed that is not in the current directory, then the user must either
specify a path name or change the current directory to be the directory holding that file. To
change directories, a system call is provided that takes a directory name as a parameter and
uses it to redefine the current directory. Path names can be of two types: absolute path
names and relative path names. An absolute path name begins at the root and follows a path
down to the specified file, giving the directory names on the path. A relative path name
defines a path from the current directory. Allowing the user to define his own sub-directories
permits him to impose a structure on his files. This structure might result in separate
directories for files associated with different topics or different forms of information. An
interesting policy decision in a tree structured directory is how to handle the
deletion of a directory. If a directory is empty, its entry in its containing directory can simply
be deleted. However, suppose the directory to be deleted is not empty but contains several
files or sub-directories; then one of two approaches can be taken. One is to insist that the
directory be emptied first: to delete a directory, the user must first delete all the files in that
directory, and if any sub-directories exist, this procedure must be applied recursively to them,
so that they can be deleted also. A path to a file in a tree-structured directory can be longer
than that in a two level directory. To allow users to access programs without having to
remember these long paths, the Macintosh operating system automates the search for
executable programs. It maintains a file, called the desktop file, containing the names and
locations of all executable programs it has seen. A double click on a file causes the desktop
file to be searched for a match; once the match is found, the appropriate executable program
is started with the clicked-on file as its input.
DISADVANTAGES: When we add links to an existing tree structured directory, the tree
structure is destroyed.
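A hedged POSIX C sketch of the current directory, absolute path names and relative path
names (the path /tmp is assumed to exist):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[1024];
        if (getcwd(buf, sizeof buf) != NULL)  /* report the current directory */
            printf("current directory: %s\n", buf);

        chdir("/tmp"); /* absolute path name: begins at the root */
        chdir("..");   /* relative path name: defined from the current directory */

        if (getcwd(buf, sizeof buf) != NULL)
            printf("now in: %s\n", buf);      /* "/" after the two changes above */
        return 0;
    }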
ACYCLIC GRAPH DIRECTORY STRUCTURE:
Consider two programmers who are working on a joint project. The files associated with that
project can be stored in a sub-directory, separating them from other projects and files of the
two programmers. But since both programmers are equally responsible for the project, both
want the sub-directory to be in their own directories. The common sub-directory should be
shared. A shared directory or file will exist in the file system in two (or more) places at
once.
A tree structure prohibits the sharing of files or directories. An acyclic graph allows
directories to have shared sub-directories and files. The same file or sub-directory may be in
two different directories. An acyclic graph, that is, a graph with no cycles, is a natural
generalization of the tree structured directory scheme.
A shared file or directory is not the same as two copies of the file. With two copies, each
programmer can view the copy rather than the original, but if one programmer changes the
file, the changes will not appear in the other copy. With a shared file, only one actual file
exists, so any changes made by one person are immediately visible to the other. Sharing is
particularly important for shared directories: a new file created by one person will automatically
appear in all the shared sub-directories.
When people are working as a team, all the files they want to share may be put into one
directory, and each team member keeps this directory of shared files as a sub-directory of his
own. Even when there is a single user, his file organization may require that the same file be
put into different sub-directories. For example, a program written for a particular project
should be both in the directory of all programs and in the directory for that project.
An acyclic graph directory structure is more flexible than a simple tree structure, but it is also
more complex. Several problems must be considered carefully. A file may now have
multiple absolute path names; consequently, distinct file names may refer to the same file.
This situation is similar to the aliasing problem for programming languages. One way to
implement a shared file is a link; a link is clearly different from the original directory entry,
thus the two are not equal. Another common approach to implementing shared files is simply
to duplicate all information about them in both sharing directories, so that both entries are
identical and equal. Duplicate directory entries, however, make the original and the copy
indistinguishable. A major problem with duplicate directory entries is maintaining
consistency when the file is modified.
FILE PROTECTION
When information is kept in a computer system, we want to keep it safe from physical
damage (reliability) and improper access (protection). Reliability is generally provided by
duplicate copies of files. Many computers have system programs that automatically copy
disk files to tape at regular intervals to maintain a copy should the file system be accidentally
destroyed. File systems can be damaged by hardware problems (such as errors in reading and
writing), power surges or failures, head crashes, dirt, temperature extremes and vandalism. Files
may be deleted accidentally. Bugs in the file system software can also cause file contents to
be lost. The need to protect files is a direct result of the ability to access files. Systems that do not
permit access to the files of other users do not need protection. Thus we could provide
complete protection by prohibiting access. Alternatively, we could provide free access with
no protection. Both approaches are too extreme for general use.
Protection is a mechanism or technique through which a user may protect
his/her files against unauthorized users who may disturb or corrupt them.
There is no need of a protection mechanism in a single user operating system, but in a
multi user operating system, that is, when a single system is operated by more than one
user, there must be a protection mechanism to protect each user's files against the others.
The protection mechanism provides controlled access by limiting the types of file access that can
be made. Access is permitted or denied depending upon several factors, one of which is
the type of access requested. Several different types of operations may be controlled, such as:
READ: Read from the file.
WRITE: Write or rewrite the file.
EXECUTE: Load the file into memory and execute it.
APPEND: Write new information at the end of the file.
DELETE: Delete the file and free its space for possible reuse.
LIST: List the name and attributes of the file.
Other operations, such as renaming, copying or editing the file, may also be controlled.
There are three methods or techniques to provide the protection mechanism, such
as: (a) Password protection, (b) Access list or access control, (c) Access group.
Password Protection:
One approach to the protection problem is to associate a password with each file. Just as access
to the computer system is often controlled by a password, access to each file can be controlled by a
password. If the passwords are chosen randomly and changed often, this may be effective in
limiting access to a file to only those users who know the password. This scheme, however,
has several disadvantages. Firstly, the number of passwords that a user needs to remember
may become large, making the scheme impractical. Secondly, if only one password is used
for all the files, then once it is discovered, all files are accessible.
Access list or access control:
The most general scheme to implement identity-dependent access is to associate with each
file and directory an access control list (ACL) specifying the user names and the types of
access allowed for each user. When a user requests access to a particular file, the operating
system checks the access list associated with that file. If the user is listed for the requested
access, the access is allowed. Otherwise, a protection violation occurs and the user's job is
denied access to the file. The problem with access lists is their length. If we want to allow
everyone to read a file, we must list all users with read access. This technique has two
undesirable consequences:
1) Constructing such a list may be a tedious and unrewarding task.
2) The directory entry, previously of fixed size, now needs to be of variable size,
resulting in more complicated space management.
Access group:
To condense the length of the access control list, many systems recognize three classifications of
users in connection with each file, such as:
OWNER: The user who created the file is the owner.
GROUP: A set of users who are sharing the file and need similar access is a group or work
group.
UNIVERSE: All other users in the system constitute the universe.
As an example, consider a person named SARA, who is writing a new book. She has hired
three graduate students, JIM, DAWN and JILL, to help with the project. The text of the
book is kept in a file named BOOK. The protection associated with the file is as follows:
SARA should be able to invoke all operations on the file.
JIM, DAWN and JILL should be able only to read and write the file; they should not be
allowed to delete the file.
All other users should be able to read, but not write, the file.
To achieve such protection, we must create a new group, say TEXT, with members JIM,
DAWN and JILL. The name of the group TEXT must then be associated with the file BOOK,
and the access rights must be set in accordance with the policy we have outlined.
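On UNIX-like systems this owner/group/universe policy maps onto the rwx permission bits,
which can be set with chmod(); a hedged sketch (the file BOOK and its group assignment are
assumed already in place):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* Owner (SARA): read+write+execute; group (TEXT): read+write; universe: read only. */
        if (chmod("BOOK", S_IRWXU | S_IRGRP | S_IWGRP | S_IROTH) == -1) {
            perror("chmod");
            return 1;
        }
        printf("permissions set to rwxrw-r--\n");
        return 0;
    }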
UNIT-4
Logical Vs Physical address space, Overlays, swapping, Contiguous allocation: Single
partition, multiple partitions and dynamic partition, Internal and external
fragmentation, Concept of virtual memory. Non-Contiguous memory allocation,
paging, demand paging, page replacement algorithm: FIFO, Optimal and LRU.
LOGICAL Vs PHYSICAL ADDRESS SPACE
An address generated by the CPU is called a logical address, whereas an address actually seen
by the memory unit is called a physical address.
For example: suppose the program written by the user has a size of 100 KB (assumed here),
but the program is loaded into main memory from 2100 to 2200 (assumed here); this actual
loaded address range in main memory is the "physical address". The set of all logical
addresses generated by a program is referred to as the "logical address space". The set of all
physical addresses corresponding to these logical addresses is referred to as the "physical
address space".
As per the above example:
0 to 100KB is the logical address space and 2100 to 2200 is the physical address space.
Physical address = logical address + contents of the base (relocation) register.
That is, physical address = 100KB + 2100 (starting base address) = 2200.
The compile time and load time address binding methods generate identical logical and
physical addresses. The run time mapping from virtual to physical addresses is done by a
hardware device called the "memory management unit (MMU)"; that is, the hardware
device which converts a logical address into a physical address is called the memory
management unit.
The user program never sees the real physical addresses. We have two different types of
addresses: logical addresses (in the range 0 to max; as per the above example, max refers to
100KB) and physical addresses (in the range R+0 to R+max, where R refers to the starting
base address). The user generates only logical addresses and thinks the process runs in
locations 0 to max. Within memory the mapping is actually:
0+2100=2100, 1+2100=2101, 2+2100=2102, ..., 100+2100=2200.
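A hedged C sketch of this relocation-register mapping (the base value 2100 and the limit 100
are taken from the example above; real systems do this in MMU hardware, not software):

    #include <stdio.h>

    #define BASE  2100 /* relocation (base) register: where the process is loaded */
    #define LIMIT 100  /* size of the logical address space */

    /* Translate a logical address to a physical address, as the MMU would. */
    static int translate(int logical) {
        if (logical < 0 || logical > LIMIT) return -1; /* out of range: addressing-error trap */
        return logical + BASE;
    }

    int main(void) {
        printf("logical 0   -> physical %d\n", translate(0));   /* 2100 */
        printf("logical 100 -> physical %d\n", translate(100)); /* 2200 */
        printf("logical 150 -> physical %d\n", translate(150)); /* -1: trap */
        return 0;
    }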
CONCEPT OF VIRTUAL MEMORY
A virtual memory system provides a mechanism for translating program-generated
addresses into correct main memory locations. This is done dynamically, while the program is
being executed in the CPU.
(OR) Virtual memory is a technique that allows the execution of a process even when the logical
address space is greater than the physically available memory.
For example: suppose the program size (logical address space) is 15MB, but the available
memory is 12MB. Then 12MB is loaded in main memory and the remaining 3MB is kept in
secondary memory. When that 3MB is needed for execution, 3MB is swapped out from main
memory to secondary memory and the needed 3MB is swapped in from secondary memory to primary
memory. The combined concept of swap in and swap out is called the swapping process.
Requirement of virtual memory:
Conventional memory management schemes suffer from the following two main limitations:
1) A process cannot be loaded, and has to keep waiting for its execution to start, until
sufficient free memory for loading the entire process becomes available. This may
delay a process's turnaround time.
2) A process cannot be loaded in a system whose main memory size is less than the
total memory required by the process.
Virtual memory is a memory management scheme which overcomes the above
mentioned limitations by allowing the execution of processes that might not be completely
loaded in the main memory.
How virtual memory is realized:
The three basic concepts used for the realization of virtual memory are:
On-line secondary storage: This is a secondary storage device whose capacity is much larger
than the main memory capacity and which is always kept on-line to the system. High speed
disk storage is usually used for this purpose.
Swapping: Swapping is the process of transferring a block of data between the on-line secondary
storage and main memory. When data is transferred from on-line secondary storage to main
memory, it is called swapping in of data; when data is transferred from main memory to the
on-line secondary storage device, it is called swapping out of data.
Demand paging:
In a virtual memory system, all processes are partitioned into pages and reside on the on-line
secondary storage. The physical memory is also partitioned into page frames of the same
size. Now, instead of swapping in the entire process before its execution can start, a
swapping algorithm (called demand paging) is used, which swaps in only those pages of a
process that are currently needed in the memory for continuing the process's execution.
Advantages:
Some of the advantages of virtual memory are:
a) It provides a very large virtual memory to programmers on a system having a smaller
physical memory. That is, the logical memory size is no longer constrained by the
physical memory size of the system.
b) It enables the execution of a process whose total memory requirement is greater than
the main memory size of the system.
c) It enables a process's execution to be started even when sufficient free memory for
loading the entire process is not available.
d) Efficient CPU utilization.
e) Virtual memory makes the task of programming much easier, because the
programmer no longer needs to worry about the amount of physical memory
available.
Disadvantages:
Some of the disadvantages of virtual memory are:
a) It is difficult to implement because it requires algorithms to support demand paging.
b) If used carelessly, it may substantially decrease performance instead of increasing
it. This happens when the page fault rate is very high for a process, that is, when
the process spends more time in swapping pages out and in than in its
execution.
OVERLAYS
To enable a process to be larger than the amount of memory allocated to it, we can use
overlays. The idea of overlays is to keep in memory only those instructions and data that are
needed at any given time. When other instructions are needed, they are loaded into the space
previously occupied by instructions that are no longer needed.
For example: consider a two-pass assembler with a pass-1 and a pass-2. During pass-1 it
constructs a symbol table, and during pass-2 it generates machine language code; both passes
also use some common routines. We may be able to partition such an assembler into pass-1
code, pass-2 code, the symbol table and the common routines used by both pass-1 and pass-2.
Let pass-1 occupy 70 KB, pass-2 80 KB, the symbol table 20 KB and the common routines
30 KB. To load everything at once, we would require 200KB (70KB+80KB+20KB+30KB) of
memory. If only 150 KB is available, we cannot run the whole assembler at once. But
pass-1 and pass-2 do not need to be in memory at the same time. We thus define two
overlays: overlay A is the symbol table, common routines and pass-1, and overlay B is the
symbol table, common routines and pass-2. We add an overlay driver (10 KB) and start
with overlay A in memory. When we finish pass-1, we jump to the overlay driver, which
reads overlay B into memory, overwriting overlay A, and then transfers control to pass-2.
Overlay A needs only 120KB (20+30+70), whereas overlay B needs 130KB (20+30+80), so
with the 10KB driver we can run our assembler in the 150 KB of memory.
SWAPPING PROCESS
A process must be in memory to be executed. However, a process can be swapped
temporarily out of memory to a backing store and then brought back into memory for
continued execution. Swapping is a method to improve main memory utilization. For
example, suppose the main memory consists of 10 processes (assume that is its maximum
capacity) and the CPU is currently executing process P1. In the middle of its execution,
process P1 needs an I/O. Then the CPU switches to another job; process P1 is moved
to a disk, and another process, say P2, is loaded into main memory in the place of
process P1. When process P1 completes its I/O operation, it is moved back
into main memory from the disk. Moving a process from main memory to disk is said to be
"swapping out", and moving it from disk to main memory is said to be "swapping in". This type
of mechanism is said to be "swapping".
Assume a multiprogramming environment with a round-robin CPU scheduling
algorithm. When a time quantum expires, the memory manager will start to swap out the
process that has just finished and to swap another process into the main memory space that has
been freed. In the meantime, the CPU scheduler will allocate a time slice to some other
process in memory. When each process finishes its quantum, it will be swapped with another
process. A variant of this swapping policy is used for priority-based scheduling algorithms. If a
higher priority process arrives and wants service, the memory manager can swap out the lower
priority process and then load and execute the higher priority process. When the higher
priority process finishes, the lower priority process can be swapped back in and continued.
This variant of swapping is sometimes called roll out, roll in. If we want to swap a
process, we must be sure that it is completely idle. Of particular concern is any pending
I/O: a process may be waiting for an I/O operation when we want to swap that process to
free up memory. Swapping requires a backing store. The backing store is commonly a fast
disk. It must be large enough to accommodate copies of all memory images for all users, and
it must provide direct access to these memory images. The system maintains a ready queue
consisting of all processes whose memory images are on the backing store or in main
memory and are ready to run.
CONTIGUOUS MEMORY ALLOCATION
The main memory must accommodate both the operating system and the various user processes.
The memory is usually divided into two partitions: one for the resident operating system and
one for the user processes. The operating system is placed in either low memory or high
memory. Since the interrupt vector is in low memory, programmers usually place the
operating system in low memory.
We usually want several user processes to reside in memory at the same time. In contiguous
memory allocation, each process is contained in a single contiguous section of memory. Several
methods are available for memory allocation, such as:
a) Single partition memory allocation method.
b) Multiple partition memory allocation method:
   - Fixed equal multiple partition memory allocation method.
   - Fixed variable multiple partition memory allocation method.
c) Dynamic partition memory allocation method.
Single partition memory allocation method
In this memory allocation method, the operating system resides in low memory and the
remaining memory is treated as a single partition. This single partition is available for user
space; only one job can be loaded in this user space at a time.
The CPU scheduler selects a job from the ready queue for execution; the dispatcher
(which switches from one process to another) loads that job into main memory and
connects the CPU to that job. The main memory holds only one process at a time,
because the user space is treated as a single partition.
Let the total memory space available be 1MB (which is equal to 1024KB). Out of the
total memory space available, 200KB is allocated for the operating system in low
memory, and the rest of the memory, that is 824KB, is allocated as user space for
executing user processes in high memory. Assume the memory space
occupied by the processes is: 124KB for J6, 112KB for J2, 202KB for J9, 100KB for
J1 and 225KB for J3. In this case multiple jobs can be executed, but not simultaneously;
only one after another, because the total user space forms a single partition. When the
first job, J6, resides in the user space, it reserves only 124KB, and the rest of the user
space is wasted; this wasted memory is called internal fragmentation.
Advantages:
The main advantage of single partition memory allocation is its simplicity. It does not
require great expertise to understand or to use such a system.
Disadvantages:
The main disadvantages of single partition memory allocation are:
1) The main memory is not utilized fully or properly; a lot of memory is wasted in this
scheme.
2) Poor utilization of the processor because of waiting for input-output operations.
3) Poor utilization of memory.
Multiple Partition Memory Allocation Method
This method can be implemented in two ways, such as:
Fixed equal multiple partition memory allocation
In this memory allocation scheme, the operating system occupies low memory and the
rest of main memory is available as user space. The user space is divided into fixed
partitions. The partition sizes depend on the operating system.
For example: the total available main memory size is 6MB. 1MB is occupied by the
operating system. The remaining 5MB is partitioned into 5 equal fixed partitions (5*1MB),
and J1, J2, J3, J4, J5 are the 5 jobs to be loaded into main memory. Their sizes are:
JOB          SIZE
JOB-1 (J1)   450KB
JOB-2 (J2)   1000KB
JOB-3 (J3)   1024KB
JOB-4 (J4)   1500KB
JOB-5 (J5)   500KB
Depending upon the size of each job (mentioned in the above table), a job is allocated
memory space (user space) only when its size is less than or equal to the size of a
partition. If the size of a job exceeds the size of the available memory space of every
partition, then that job is not executed by the operating system and does not reside within
any partition of the user space area.
Advantages:
The main advantages of fixed equal multiple partition memory allocation are:
1) This scheme supports multiprogramming.
2) Efficient utilization of the processor as well as input and output devices.
3) Simple and easy to implement.
Disadvantages:
The main disadvantages of fixed equal multiple partition memory allocation are:
1) This scheme suffers from both internal as well as external fragmentation.
2) The free area of each partition may not be large enough for another job, and free
areas of different partitions cannot be combined.
3) A job's size is limited to the size of the largest partition of physical memory.
4) Wastage of a large amount of memory space.
INTERNAL AND EXTERNAL FRAGMENTATION
Memory wasted within a partition is said to be internal fragmentation (or) the total amount
of memory wasted within a partition after allocating a job to it is called internal
fragmentation. The wastage of the total amount of memory of an entire partition is
said to be external fragmentation. In the fixed equal multiple partition memory
allocation method above, job-1 is loaded into partition-1; the maximum size of partition-1
is 1024KB (1MB) and the size of job-1 is 450KB, so 1024-450=574KB of space is wasted; this
wasted memory is said to be "internal fragmentation". There is no partition large enough to
load job-4, because the size of job-4 is greater than every partition, so an entire partition
(here counted as partition-5) is wasted. This wasted memory is said to be "external
fragmentation".
Therefore, the total internal fragmentation is:
Internal fragmentation = {(1024-450)+(1024-1000)+(1024-500)}KB
= (574KB+24KB+524KB) = 1122KB
External fragmentation = 1024KB (that is partition-5, because that entire partition is wasted
here)
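A hedged C sketch reproducing the fragmentation arithmetic above for the fixed equal
partition example (the partition and job sizes come from the text; job-4, which fits nowhere,
is the cause of the empty partition):

    #include <stdio.h>

    int main(void) {
        int partition = 1024;                  /* each fixed partition is 1024KB */
        int loaded[] = {450, 1000, 1024, 500}; /* J1, J2, J3 and J5 are loaded; J4 (1500KB) fits nowhere */
        int internal = 0;
        for (int i = 0; i < 4; i++)
            internal += partition - loaded[i]; /* waste inside each occupied partition */
        printf("internal fragmentation: %dKB\n", internal);  /* 574+24+0+524 = 1122KB */
        printf("external fragmentation: %dKB\n", partition); /* one whole empty partition */
        return 0;
    }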
Fixed variable multiple partition memory allocation method
In this scheme, the user space of main memory is divided into a number of partitions, but the
partitions are of different sizes. The operating system keeps a table indicating
which partitions of memory are available and which are occupied. When a process arrives
and needs memory, we search for a partition large enough for this process. If we find one, we
allocate the partition to that process.
For example: assume that we have 4000KB of main memory available and the operating
system occupies 500KB. The remaining 3500KB is left for user processes.
JOB QUEUE:
Job          Size      Arrival time        Partition   Size
JOB-1 (J1)   825KB     10MS                P1          700KB
JOB-2 (J2)   600KB     5MS                 P2          400KB
JOB-3 (J3)   1200KB    20MS                P3          525KB
JOB-4 (J4)   450KB     30MS                P4          900KB
JOB-5 (J5)   650KB     15MS                P5          350KB
                                           P6          625KB
Ready queue (as per arrival time): J2, J1, J5, J3, J4
As per the list above, the jobs are placed into the partitions according to their sizes, as
described below.
[Figure: jobs allocated to partitions P1-P6 of the user space]
Description of the diagram:
Out of the 5 jobs, J2 arrives first (5ms); the size of J2 is 600KB, so we search all partitions
from low memory to high memory for a large enough partition. P1 is greater than J2, so J2 is
loaded into P1. Of the remaining jobs, J1 arrives next (10ms). Its size is 825KB, so we search
for a large enough partition; P4 is large enough, so J1 is loaded into P4. J5 arrives next; the
size of this job is 650KB and there is no large enough free partition to load it, so J5 has to
wait until a large enough partition is available. J3 arrives next; the size of this job is
1200KB and there is no large enough partition to load it. J4 arrives last; its size is 450KB
and partition P3 is large enough to hold it, so J4 is loaded into P3. Partitions P2, P5 and P6
are totally free; there is no process in these partitions. This wasted memory is said to be
"external fragmentation". The total external fragmentation is 1375KB (400KB+350KB+625KB).
The total internal fragmentation is {(700-600)KB + (525-450)KB + (900-825)KB} = 250KB.
From the above figure, the size of J2 is 600KB and it is loaded in partition P1, so the
internal fragmentation is 100KB; but if it were loaded in P6, the internal fragmentation would
be only 25KB. Why is it loaded in P1? Three algorithms are available to answer this question
(a code sketch follows the list):
First-Fit: Allocate the first partition that is big enough; searching can start either from low
memory or high memory. We can stop searching as soon as we find a free partition that is
large enough.
Best-Fit: Allocate the smallest partition that is big enough, i.e. select the partition which
gives the least internal fragmentation.
Worst-Fit: Search all the partitions and select the largest partition that can hold the job,
i.e. the partition which gives the maximum internal fragmentation.
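The three placement policies can be sketched in a few lines of C. This is a minimal sketch,
not taken from the text: the partition table reuses the six partition sizes of the example
above, while the array names and the free-flag representation are illustrative assumptions.
Each function returns the index of the chosen partition, or -1 if none fits.

#include <stdio.h>

#define NPART 6

/* Illustrative partition table: sizes in KB, 1 = free. */
int part_size[NPART] = {700, 400, 525, 900, 350, 625};
int part_free[NPART] = {1, 1, 1, 1, 1, 1};

/* First-fit: the first free partition large enough for the job. */
int first_fit(int job) {
    for (int i = 0; i < NPART; i++)
        if (part_free[i] && part_size[i] >= job)
            return i;
    return -1;
}

/* Best-fit: the free partition giving the least internal fragmentation. */
int best_fit(int job) {
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (part_free[i] && part_size[i] >= job &&
            (best == -1 || part_size[i] < part_size[best]))
            best = i;
    return best;
}

/* Worst-fit: the largest free partition that can hold the job. */
int worst_fit(int job) {
    int worst = -1;
    for (int i = 0; i < NPART; i++)
        if (part_free[i] && part_size[i] >= job &&
            (worst == -1 || part_size[i] > part_size[worst]))
            worst = i;
    return worst;
}

int main(void) {
    int j2 = 600;                                    /* job J2 from the example */
    printf("first-fit : P%d\n", first_fit(j2) + 1);  /* P1, 100KB wasted */
    printf("best-fit  : P%d\n", best_fit(j2) + 1);   /* P6,  25KB wasted */
    printf("worst-fit : P%d\n", worst_fit(j2) + 1);  /* P4, 300KB wasted */
    return 0;
}

Running the sketch for job J2 (600KB) picks P1 under first-fit, P6 under best-fit and P4
under worst-fit, matching the discussion above.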
Advantages:
Some of the advantages of the fixed variable multiple partition memory allocation are:
1) This scheme supports multiprogramming.
2) Efficient processor utilization and memory utilization are possible.
3) It is simple and easy to implement.
Disadvantages:
Some of the disadvantages of the fixed variable multiple partition memory allocation are:
1) This scheme suffers from internal and external fragmentation.
2) The external fragmentation can be large.
Dynamic Partition
In this method partitions are created dynamically, so that each process is loaded into a
partition of exactly the same size as that process. In this scheme, the entire user space is
treated as one "big hole". The boundaries of the partitions change dynamically, and the
boundaries depend on the sizes of the processes.
Job          Size      Arrival time
JOB-1 (J1)   825KB     10ms
JOB-2 (J2)   600KB     5ms
JOB-3 (J3)   1200KB    20ms
JOB-4 (J4)   450KB     30ms
JOB-5 (J5)   650KB     15ms

J4 J3 J5 J1 J2
Ready Queue (As per the arrival time period)


Description Of the figure:


Job J2 arrives first, so J2 is loaded into memory. Next J1 arrives and is loaded into
memory; then J5, J3 and lastly J4 are loaded into memory one after another. In the above
figures (a), (b), (c), (d), jobs J2, J1, J5, J3 are loaded respectively. The last job is J4 and
the size of J4 is 450KB, but the available memory is only 225KB, which is not enough to load
J4, so job J4 has to wait until memory becomes available. Assume that, in the meanwhile, J5
has finished and has released its memory. After that, the available memory is 225+650=875KB,
which is enough to load Job-4.
Advantages:
Some of the advantages of dynamic memory allocation are:
1) In this scheme, partitions are changed dynamically, so the scheme does not suffer from
internal fragmentation.
2) Efficient memory and processor utilization.
Disadvantages:
This scheme suffers from external fragmentation.

NON-CONTIGUOUS MEMORY ALLOCATION


In the contiguous allocation model, files are stored statically; that is, the user must specify
the size of the file a priori. This causes fragmentation and creates difficulty for users. To
avoid this, we have another, dynamic approach to disk space allocation called the
non-contiguous model. The following two approaches are used to implement the non-contiguous
model:
Linked allocation
In this model, a file is represented by the linked list of blocks associated with it. Each
block contains two fields: a data field, which holds the file data, and a control info(rmation)
field, which contains the address of the next disk block associated with that particular file.

This arrangement is simple to implement and involves low overheads for allocation and
de-allocation of disk blocks. Also, this model does not suffer from external fragmentation,
and users need not specify the size of the file in advance. It supports sequential
organization.
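As an illustration, a disk block under linked allocation can be modelled as a C struct with a
data field and a next-block pointer. This is a sketch under simplifying assumptions: a real
file system stores a block number on disk rather than an in-memory pointer, and the 508-byte
data size is purely illustrative.

#include <stdio.h>

/* One disk block under linked allocation: a data field plus a control
   info field.  Here the control field is an in-memory pointer; on a
   real disk it would be the block number of the next block. */
struct block {
    char data[508];          /* data field (508 bytes is illustrative) */
    struct block *next;      /* address of the next block of the file  */
};

/* Sequential access is natural: follow the chain block by block. */
void read_file(const struct block *first) {
    for (const struct block *b = first; b != NULL; b = b->next)
        printf("%s", b->data);
}

int main(void) {
    struct block b2 = { "world\n", NULL };
    struct block b1 = { "hello ", &b2 };
    read_file(&b1);          /* prints "hello world" via the pointers */
    return 0;
}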
Disadvantages:
Some of the disadvantages of linked allocation are:
1) Slow direct access: to reach a particular block of a file, all the blocks before it must be
traversed until the desired block is reached.
2) More space is required to keep the pointers (on average 39% of disk space is required to
store pointers).
3) Reliability is also poor, since corruption of a single pointer in a disk block may lead to
loss of data in the entire file.
Index allocation
The main problem with the linked allocation model is that it does not support direct access to
files, since the blocks are scattered over the whole disk. To overcome this, we have another
approach called indexed allocation, in which all the pointers pointing to the disk blocks
associated with a file are kept in an index block. The operating system uses this block to
access a block directly. Here, the directory entry of a file contains just the address of the
index block.


Multilevel indexing
In the above explanation, if the size of a block is 512 bytes and a single pointer takes 4
bytes, then an index block may contain a maximum of 512/4 = 128 pointers, that is, 128 block
addresses. This means that the largest file in the system can have a size of 128 blocks
(128*512 bytes), but files may exceed this limit. This is solved by using multiple levels of
indexing. Here, the operating system uses the first-level index to reach the second (or next)
level index block, and from there it reaches the desired data block, that is, the record. This
scheme could be shown as:

NOTE:
1) If needed, 3rd and 4th level indexing is also possible; the scheme is not restricted to two
levels.
2) But the space needed to keep the pointers in this model is large compared to keeping them
in linked allocation.
3) Also, for a small file having only two or three pointers, using an entire index block causes
wastage of space, which is why some systems support a combination of linked and indexed
allocation. (A small capacity calculation follows.)
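To make the capacity figures concrete, the following small C program computes the maximum file
size under single-level and two-level indexing for the 512-byte blocks and 4-byte pointers
assumed above. This is a back-of-the-envelope sketch, not code from the text.

#include <stdio.h>

int main(void) {
    int block = 512, ptr = 4;
    int per_block = block / ptr;                  /* 128 pointers per index block */
    long one_level = (long)per_block * block;               /* 128*512B = 64KB    */
    long two_level = (long)per_block * per_block * block;   /* 128*128*512B = 8MB */
    printf("single-level max file: %ld bytes\n", one_level);
    printf("two-level max file   : %ld bytes\n", two_level);
    return 0;
}

With these numbers the limit grows from 128*512 = 65536 bytes (64KB) for a single-level index
to 128*128*512 = 8388608 bytes (8MB) for two levels.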
PAGING TECHNIQUE
Paging is a non-contiguous memory allocation method. In contiguous memory allocation (the
single, multiple and dynamic partition methods) the entire process is loaded into one
partition, but in paging the process is divided into small parts, and these parts can be loaded
anywhere in the main memory. The basic idea of paging is that the main memory (physical
address space) is divided into fixed-sized blocks called frames, and the logical address space
(user job) is divided into fixed-sized blocks called pages; the page size and frame size must
be equal. The size of a frame or a page depends on the operating system. In this scheme, the
operating system maintains a data structure called the page table, which is used for mapping.
The page table provides useful information: it tells which frames are allocated, which frames
are available, how many total frames there are, and so on. The page table consists of two
fields: one is the page number and the other is the frame number. Each operating system has
its own method for storing page tables; most allocate a page table for each process. Every
address generated by the CPU is divided into two parts: the "page number" and the "page
offset/displacement". The page number is used as an index into the page table. The logical
address space (the CPU-generated address space) is divided into pages, each page having a page
number (P) and a displacement (D). The pages are loaded into available free frames in the
physical memory.

The mapping between page number and frame number is done by the page map table. The page map
table specifies which page is loaded in which frame; the displacement is carried over
unchanged.
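The translation itself is a divide and a modulo. The following C sketch shows it for an
assumed 4KB page size and an illustrative page table; the table contents and the names are
invented for the example, not taken from the text.

#include <stdio.h>

#define PAGE_SIZE 4096                 /* assumed 4KB page/frame size */

/* Illustrative page table: page number -> frame number. */
int page_table[4] = {5, 9, 3, 7};

/* Split a logical address into page number P and displacement D,
   then map P through the page table: physical = frame*PAGE_SIZE + D. */
unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;
    unsigned offset = logical % PAGE_SIZE;   /* D is carried unchanged */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    unsigned la = 2 * PAGE_SIZE + 100;       /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", la, translate(la));
    /* page 2 is in frame 3 here, so this prints 3*4096+100 = 12388 */
    return 0;
}

Since the page size is a power of two, real hardware performs the divide and modulo as a
simple bit-field split of the address.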
For example: there are 2 jobs in the ready queue, the job sizes are 16KB and 24KB
respectively, and the page size is 4KB. The available main memory size is 72KB (that is,
72/4 = 18 frames). So Job1 is divided into 4 pages and Job2 is divided into 6 pages. Each
process maintains its own page table.


The four pages of Job1 are loaded into different locations in main memory. The operating
system provides a page table for each process; the page table specifies the locations in main
memory. The capacity of main memory is 18 frames, but the two jobs present have only 10 pages
between them, so the remaining 8 frames are free. The scheduler can use these free frames for
other jobs.
Advantages:
Some of the advantages of the paging technique are:
1) It supports time-sharing systems.
2) It does not suffer from external fragmentation.
3) It supports virtual memory.
Disadvantage:
This scheme may suffer from "page break". For example, suppose the logical address space is
17KB but the page size is 4KB. This job then requires 5 frames, but the fifth frame holds only
1KB, so the remaining 3KB of that frame is wasted. This is said to be a page break.
DEMAND PAGING TECHNIQUE
It is an application of virtual memory. It is the combination of the paging and swapping
techniques. The criterion of this scheme is: "a page is not loaded into the main memory from
secondary memory until it is needed". A page is loaded into the main memory on demand, so this
scheme is said to be "demand paging".
For example: assume that the logical address space is 72KB and the page and frame size is 8KB,
so the logical address space is divided into 9 pages, numbered from 0 to 8. The available main
memory is 40KB, that is, 5 frames are available; the remaining four pages are kept on the
secondary storage device. Whenever those pages are required, the operating system swaps those
pages into main memory.

The mapping between pages and frames is done by the page map table. In demand paging, the page
map table consists of 3 fields: the page number, the frame number, and a valid/invalid bit. A
page that is present in main memory has its bit set to valid, and a page that resides only on
the secondary storage device (not in main memory) has its bit set to invalid. In the above
figure, pages 1, 3, 4 and 6 are kept in secondary memory, so their bits are set to invalid; all
the remaining pages reside in main memory, so their bits are set to valid. There are 5 free
frames in main memory, so 5 pages are loaded; the remaining frames are used by other processes
(UBOP).
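A demand-paging page-table entry can be sketched in C as a frame number plus a valid bit; the
lookup below reports a page fault for an invalid entry. The table contents mirror the example
above (pages 1, 3, 4, 6 invalid), while the struct layout and names are illustrative
assumptions.

#include <stdio.h>

/* Demand-paging page-table entry: a frame number plus a valid bit.
   valid = 1 means the page is in main memory; valid = 0 means it
   still resides on the secondary storage device.  (Layout assumed.) */
struct pte { int frame; int valid; };

/* Pages 1, 3, 4 and 6 are on disk, as in the example above. */
struct pte page_table[9] = {
    {0,1}, {-1,0}, {1,1}, {-1,0}, {-1,0}, {2,1}, {-1,0}, {3,1}, {4,1}
};

int access_page(int p) {
    if (!page_table[p].valid) {              /* invalid bit set */
        printf("page %d -> PAGE FAULT: swap in from disk\n", p);
        return -1;  /* a real OS would now run a replacement algorithm */
    }
    return page_table[p].frame;
}

int main(void) {
    printf("page 2 -> frame %d\n", access_page(2));  /* valid page   */
    access_page(3);                                  /* invalid page */
    return 0;
}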
PAGE FAULT: When the processor needs to execute a particular page and that page is not
available in main memory, the situation is said to be a "page fault". When a page fault
happens, a page replacement will be needed. Page replacement means selecting a victim page in
the main memory and replacing that page with the required page from the disk. The student may
have a doubt: how is the victim page selected? The answer is simple: by a "page replacement
algorithm". The page replacement algorithms select the victim pages.
PAGE REPLACEMENT ALGORITHM
It is common in paging systems for all page frames to be in use. In this case the operating
system's storage management routines must decide which page in primary storage to displace to
make room for an incoming page. We consider each of the following page replacement algorithms:
a) The principle of optimality, b) Random page replacement,
c) First-In-First-Out (FIFO), d) Least-Recently-Used (LRU),
e) Least-Frequently-Used (LFU), f) Not-Used-Recently (NUR),
g) Second chance, h) Clock, i) Page fault frequency, j) Working set.


FIRST-IN-FIRST-OUT (FIFO)
In first-in-first-out (FIFO) replacement, the operating system keeps track of all the pages in
memory in a queue, with the most recent arrival at the back and the earliest arrival at the
front. When a page needs to be replaced, the page at the front of the queue (the oldest page)
is selected. The FIFO replacement algorithm was used by the VAX/VMS operating system.
First-in-first-out page replacement can be implemented with a simple FIFO queue: as each page
arrives, it is placed at the tail of the queue, and pages are replaced from the head of the
queue. Belady, Nelson, and Shedler discovered that under FIFO page replacement, certain page
reference patterns actually cause more page faults when the number of page frames allocated to
a process is increased. This phenomenon is called the FIFO Anomaly or Belady's Anomaly.
For example:
A B C D A B E A B C D E
<……………………Page reference string………………………………>

FIFO page replacement with 4 frames available

Reference   Frame-1   Frame-2   Frame-3   Frame-4   Result
A           A         -         -         -         Fault
B           B         A         -         -         Fault
C           C         B         A         -         Fault
D           D         C         B         A         Fault
A           D         C         B         A         No fault
B           D         C         B         A         No fault
E           E         D         C         B         Fault
A           A         E         D         C         Fault
B           B         A         E         D         Fault
C           C         B         A         E         Fault
D           D         C         B         A         Fault
E           E         D         C         B         Fault

Two "No page fault" entries, Figure (A)


FIFO page replacement with 3 frames available

Reference   Frame-1   Frame-2   Frame-3   Result
A           A         -         -         Fault
B           B         A         -         Fault
C           C         B         A         Fault
D           D         C         B         Fault
A           A         D         C         Fault
B           B         A         D         Fault
E           E         B         A         Fault
A           E         B         A         No fault
B           E         B         A         No fault
C           C         E         B         Fault
D           D         C         E         Fault
E           D         C         E         No fault

Three "No page fault" entries, Figure (B)

(The FIFO Anomaly)
Page fault ratio = (number of page faults) / (number of references in the reference string).
From figure (A), page fault ratio = 10/12 = 83.33%.
From figure (B), page fault ratio = 9/12 = 75%.
In each table, the leftmost column shows the page reference pattern of the process. The first
table shows how this reference pattern causes pages to be loaded into storage and replaced
under FIFO when four page frames are allocated to the process. The second table shows how the
same process behaves in the same circumstances but with only three page frames allocated to
it. Next to each reference, the table indicates whether the new page reference causes a page
fault or not. When the process runs with four page frames, it actually experiences one more
page fault than when it runs with three page frames. If the required page already resides in
main memory, page replacement is not required, and the situation is said to be "no page
fault".
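Belady's anomaly is easy to reproduce in code. The short C program below simulates FIFO
replacement over the reference string of the figures and prints 9 faults for three frames but
10 for four; the function name and array sizes are illustrative choices, not part of any
standard API.

#include <stdio.h>
#include <string.h>

/* Count FIFO page faults for a reference string with nframes frames.
   Frames fill in arrival order; on a fault with no free frame, the
   oldest resident page (tracked by the head index) is replaced. */
int fifo_faults(const char *refs, int nframes) {
    char frame[8];
    int used = 0, head = 0, faults = 0;
    memset(frame, 0, sizeof frame);
    for (int i = 0; refs[i]; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frame[f] == refs[i]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < nframes)
            frame[used++] = refs[i];        /* free frame available    */
        else {
            frame[head] = refs[i];          /* replace the oldest page */
            head = (head + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    const char *refs = "ABCDABEABCDE";      /* the string of the figures */
    printf("3 frames: %d faults\n", fifo_faults(refs, 3));  /* prints 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 4));  /* prints 10 */
    return 0;
}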

OPTIMAL PAGE REPLACEMENT ALGORITHM


The optimal page replacement algorithm has the lowest page fault rate of all algorithms. The
criterion of this algorithm is: "replace the page that will not be used for the longest
period of time". To illustrate this algorithm, consider the page reference string:
1 2 3 2 5 6 3 4 6 3 7 3

1 5 3 6 3 4 2 4 3 4 5 1

Optimal behavior (√ = page fault, × = no page fault):

Reference    1   2   3   2   5   6   3   4
Frame-0      1   1   1   1   1   1   1   1
Frame-1          2   2   2   2   6   6   6
Frame-2              3   3   3   3   3   3
Frame-3                      5   5   5   4
Page fault   √   √   √   ×   √   √   ×   √

Reference    6   3   7   3   1   5   3   6
Frame-0      1   1   1   1   1   1   1   1
Frame-1      6   6   6   6   6   6   6   6
Frame-2      3   3   3   3   3   3   3   3
Frame-3      4   4   7   7   7   5   5   5
Page fault   ×   ×   √   ×   ×   √   ×   ×

Reference    3   4   2   4   3   4   5   1
Frame-0      1   1   2   2   2   2   2   1
Frame-1      6   4   4   4   4   4   4   4
Frame-2      3   3   3   3   3   3   3   3
Frame-3      5   5   5   5   5   5   5   5
Page fault   ×   √   √   ×   ×   ×   ×   √

Therefore, the page fault ratio = 11/24 = 45.83%.

The optimal page replacement algorithm is difficult to implement, because it requires future
knowledge of the reference string, so this algorithm is used mainly for comparison studies.
LEAST RECENTLY USED ALGORITHM (LRU)
The criterion of this algorithm is: "replace the page that has not been used for the longest
period of time". This strategy is the optimal page replacement algorithm looking backward in
time, rather than forward. To illustrate the algorithm, consider the reference string:
0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
With 3 main frames.
Reference    0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
Frame-0      0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
Frame-1        1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
Frame-2          2 2 2 1 1 1 0 0 0 3 3 3 6 6
Page fault   √ √ √ √ √ √ √ √ √ √ √ √ √ √ √ √

Therefore, the page fault ratio= 16/16=100%
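An LRU simulator needs one extra piece of state per frame: the time of last use. The C sketch
below uses timestamp-based victim selection (real hardware approximates this with reference
bits); all names are illustrative. It reproduces the 16/16 fault rate of the table above.

#include <stdio.h>

/* LRU: on a fault, replace the resident page whose last use lies
   furthest in the past (one last-use timestamp per frame). */
int lru_faults(const int *refs, int n, int nframes) {
    int page[16], last[16], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (page[f] == refs[t]) hit = f;
        if (hit >= 0) { last[hit] = t; continue; }   /* refresh timestamp */
        faults++;
        if (used < nframes) { page[used] = refs[t]; last[used++] = t; }
        else {
            int victim = 0;                  /* least recently used frame */
            for (int f = 1; f < nframes; f++)
                if (last[f] < last[victim]) victim = f;
            page[victim] = refs[t]; last[victim] = t;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {0,1,2,3,0,1,2,3,0,1,2,3,4,5,6,7};
    int n = sizeof refs / sizeof refs[0];
    printf("LRU, 3 frames: %d/%d faults\n", lru_faults(refs, n, 3), n); /* 16/16 */
    return 0;
}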


UNIT-3
Process concept, process state transition diagram, process control block, process scheduling,
schedulers, CPU scheduling, CPU/I-O burst cycle, CPU scheduling algorithms: FCFS (first come
first serve), SJF (shortest job first), priority method, round-robin method, deadlock,
resource allocation graph, deadlock prevention, detection and recovery.

Process Concept
A process is a program at the time of execution, or: a process is the period of activity
during the execution of a program, or: the mechanism through which the operating system
initiates the execution of a program is called a process. It includes the program counter, the
process stack and the contents of the processor's registers. The purpose of the process stack
is to store data temporarily. A program is a static object that exists in a file, but a
process is a dynamic object that exists during the period of program execution. A program is a
sequence of instructions, but a process is a sequence of instructions under execution. A
program is stored on a secondary storage device, but a process is loaded into the primary
storage device. The time span of a program is unlimited, but the time span of a process is
limited. A program is a passive entity, but a process is an active entity.
Process creation:
When a user initiates the execution of a program, the operating system creates a process to
represent the execution of that program.

Source module --(Translator)--> Relocatable object module --(Link editor)--> Absolute program
--(Loader)--> Executable program --(Processor)--> Process


From the above diagram, the source module is translated into the object module with the help
of a translator. The relocatable object module is converted into an absolute program through
the link editor. The absolute program is converted into a corresponding executable program
through a loader, and finally the executable program becomes a process when run on the
processor. These are the general steps by which an operating system creates a process.
Concurrent Process:
Two or more processes are said to be concurrent when they are not serial, i.e. their execution
may overlap in time.

Serial Process
Two or more processes are said to be serial when the execution of the second one starts only
after the completion of the first/previous one, and so on.
Spawning Process
The method for creating the new process(s) is said to be spawning process. From the
mentioned diagram, the first process said to be parent process and the process(s) within the
parent process is said to be child process. A child process may be also a parent process if it
has some of child process(s). The number of the creating a process by the parent process
depends upon the execution or complexity of the program.

Process termination
A process may terminate when the execution of its program is completed; after the completion
of the program it is removed from the system automatically. But in some cases a process may
also terminate in the middle of execution, due to one of the following reasons:
Time slot expires: When the process's execution is not completed within the quantum or given
time period, the process is removed from the running state.
Memory boundary violation: If the process needs more memory than the available memory, the
process is terminated from the running state.
Invalid instruction: If a process contains an illegal instruction and the CPU fails to execute
that instruction, the process is terminated.
Parent process termination: When the parent process terminates, the child process also
terminates automatically, even in the middle of the execution of its program.
Parent request: If the parent process requests the termination of a child process, the child
process terminates automatically.
Unavailability of resources: The execution of a program requires a sufficient amount of
resources; if, during execution, a process faces a scarcity of resources, it is automatically
terminated.

Process state transition diagram

As a process executes, it changes state. The state of a process is determined by the current
activity of that process. Each process may be in one of the following states:
New/Born state: the process is being created.
Running state: the process is being executed by a processor.
Waiting state: the process is waiting for some event to occur.
Ready state: the process is waiting to be assigned to a processor, i.e. it is waiting for the
CPU's attention for execution.
Terminated state: the process has finished its execution.
Of all these states, only one process can be running on any processor at any time, but more
than one process may be in the ready or waiting state. The ready processes always reside in
the ready queue, ordered according to some scheduling technique.

The above diagram indicates the life cycle of a process through its different states:

New -> Ready state: The operating system creates a process and prepares it for execution; it
then moves the process into the ready queue.

Ready -> Running state: The operating system selects one of the jobs from the ready queue and
moves that process from the ready state to the running state.

Running -> Terminated state: When the execution of a process has completed, the operating
system removes that process from the running state.

Running -> Ready state: When the time slot of a process has expired, or the processor receives
an interrupt signal, the operating system moves the process from the running state to the
ready state.

Running -> Waiting state: A process is put into the waiting state if it requires a resource
that is currently unavailable; the operating system transfers the process to the waiting state
so that the processor does not stay idle.

Waiting -> Ready state: A process in the blocked or waiting state is moved to the ready state
when the event for which it has been waiting occurs.
Process Control Block (PCB)
Each process is represented in the operating system by its own PCB. A process control block is
a data block consisting of all the information about a specific process. The PCB is a central
store of information that allows the operating system to locate all the key information about
the process. The operating system uses this information to perform operations on the process;
the operations include suspending a process, resuming a process, changing the priority of a
process, dispatching a process, naming a process, and so on. The operating system groups all
the information that it needs about a particular process into a data structure called a
"process descriptor" or "process control block". Whenever a process is created, the operating
system creates a corresponding process control block to serve as its runtime description
during the lifetime (execution period) of the process. When the process terminates, its PCB is
returned to the pool of free cells from which new PCBs are drawn. A process becomes known to
the operating system, and thus eligible to compete for system resources, only when it has an
active PCB associated with it. The PCB is a data structure with fields for recording various
aspects of process execution and resource usage. The information stored in a PCB typically
includes some or all of the following:
1. Process name, 2. Priority, 3. Hardware state, 4. State of the process (running, ready,
suspended, etc.), 5. I/O status, 6. Accounting information, 7. Scheduling information and
usage statistics, 8. File management information.
(A struct sketch follows the list.)
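A PCB maps naturally onto a C struct. The sketch below is illustrative only: the field names
and sizes are invented for the example, and a real kernel (for instance Linux's task_struct)
holds far more fields.

/* A simplified PCB as a C struct (illustrative layout). */
enum pstate { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int          pid;            /* process name / identifier         */
    int          priority;       /* scheduling priority               */
    enum pstate  state;          /* state of the process              */
    unsigned     pc;             /* saved program counter (hw state)  */
    unsigned     regs[16];       /* saved CPU registers (hw state)    */
    void        *page_table;     /* memory-management information     */
    int          open_files[16]; /* I/O status / file management      */
    long         cpu_time_used;  /* accounting information            */
    struct pcb  *next;           /* link field for the ready queue    */
};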

CPU SCHEDULING PROCESS


CPU scheduling refers to the set of policies and mechanisms built into the operating system
that governs the order in which the work to be done by the computer system is completed. In
other words, a scheduler is the mechanism through which the operating system runs a number of
processes one after another. The scheduler is an operating system module that selects the next
job to be admitted into the system and the next process to run. Almost all computer resources
are scheduled before use. The CPU is one of the primary resources, so the CPU too is scheduled
before use. The CPU scheduling algorithm determines how the CPU will be allocated to
processes. The scheduler selects from among the processes in memory that are ready to execute
and allocates the CPU to one of them. The ready queue is not necessarily a first-in-first-out
(FIFO) queue; a ready queue may be implemented as a FIFO queue, a priority queue, a tree or
simply an unordered linked list. All the processes in the ready queue are lined up waiting for
a chance to run on the CPU. A CPU scheduling decision may take place under the following 4
circumstances:
a) When a process switches from the running state to the waiting state.
b) When a process switches from the running state to the ready state.
c) When a process switches from the waiting state to the ready state.
d) When a process switches from the running state to the terminated state.
In circumstances 'a' and 'd' there is no choice in terms of scheduling: a new process (if one
exists in the ready queue) must be selected for execution. However, there is a choice for
scheduling in circumstances 'b' and 'c'. When scheduling takes place only under circumstances
'a' and 'd', we say the scheduling scheme is non-preemptive; otherwise the scheduling scheme
is preemptive. CPU scheduling algorithms are thus of two types: non-preemptive CPU scheduling
and preemptive CPU scheduling.
NON-PREEMPTIVE CPU SCHEDULING:
In non-preemptive CPU scheduling, the CPU is assigned to a process and is not released until
the completion of the current process. The CPU is assigned to another job only after the
completion of the previous process. Under non-preemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases it, either by terminating
or by switching to the waiting state.
PREEMPTIVE CPU SCHEDULING:
In preemptive scheduling, the CPU can be taken away from a process even in the middle of its
execution.

CPU SCHEDULING CRITERIA


Different CPU scheduling algorithms have different properties and may favor one class of
processes over another. Many criteria have been suggested for comparing CPU scheduling
algorithms. The characteristics used for comparison can make a substantial difference in the
determination of the best algorithm. The criteria include the following:
CPU UTILIZATION: This is the percentage of time that the CPU is busy. The range of CPU
utilization is from 0% to 100%.
THROUGHPUT: Throughput is the measure through which the performance of the CPU is judged:
it is the number of jobs completed by the CPU per unit of time.
TURNAROUND TIME: This is the time interval between the submission of a process and its
completion. Turnaround time is the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O. Thus:
Turnaround time = (waiting time in the ready queue) + (execution time on the CPU) + (time
spent doing I/O).
WAITING TIME: Waiting time is the sum of the periods a process spends waiting in the ready
queue.
RESPONSE TIME: Response time is the time from the submission of a request until the first
response is produced. This measure is the amount of time it takes to start responding, not the
time it takes to output the response. The turnaround time is generally limited by the speed of
the output device. We want to maximize CPU utilization and throughput and to minimize
turnaround time, waiting time and response time. However, in some circumstances we want to
optimize the minimum or maximum values rather than the average.
For example: to guarantee that all users get good service, we may want to minimize the maximum
response time.

CPU SCHEDULING ALGORITHMS/METHODOLOGY


A CPU scheduling algorithm decides which of the processes in the ready queue is to be
allocated the CPU. Different types of CPU scheduling algorithms are available, such as the
FCFS method, the priority method, the SJF method and the round-robin method.
FIRST COME FIRST SERVE (FCFS) METHOD
In the FCFS method the processes are stored in the ready queue one after another, and the
processor (CPU) selects the process that was inserted into the ready queue first. It is
totally independent of the execution time of each process. The total time taken by the CPU to
execute a particular process is called the CPU burst time, which is measured in milliseconds.
The operating system maintains a data structure, the ready queue. Perhaps the simplest
scheduling discipline is first come first serve. Processes are dispatched according to their
arrival time in the ready queue. Once a process has the CPU (is in the running state), it runs
to completion. FCFS is a non-preemptive discipline; it is fair in the formal sense, but
somewhat unfair in that long jobs make short jobs wait. The code for FCFS scheduling is simple
to write and understand. The average waiting time under the FCFS policy, however, is often
quite long. Consider the following set of processes that arrive at time 0, with the length of
the CPU burst time given in milliseconds:

Process       CPU burst time (ms)
P1            24
P2            03
P3            03
If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the
result shown in the following Gantt chart:

P1 P2 P3
0 24 27 30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 and 27
milliseconds for process P3. Thus the average waiting time is ((0+24+27)/3) milliseconds = 17
milliseconds. If the processes arrive in the order P2, P3, P1, however, the result will be as
shown in the following Gantt chart:
P2 P3 P1
0 3 6 30
The average waiting time is now ((0+3+6)/3) milliseconds = 3 milliseconds.
Thus the average waiting time under the FCFS policy is generally not minimal and may vary
substantially if the CPU burst times vary greatly.
Advantages of the FCFS method:
Some of the advantages of the FCFS method are:
1) It is easy to implement.
2) It is very simple.
Disadvantages of the FCFS method:
Some of the disadvantages of the FCFS method are:
1) The average waiting time is very high.
2) It is particularly troublesome for time-sharing systems.
3) It is troublesome for a process that arrives last but has a minimal CPU burst (execution)
time.
Formulas used in the problems:
a) Turnaround time = finishing time – arrival time.
b) Waiting time = starting time – arrival time.
c) Response time = first response time – arrival time.
Note:
In a problem, if the arrival time is not given, it is assumed to be zero. The CPU burst time
is the total time taken by the CPU to execute a particular process; it is measured in
milliseconds.
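These three formulas are mechanical enough to check by machine. The C sketch below computes
the FCFS averages for the burst times of the problem that follows (all arrivals zero); the
array names are illustrative, and for a non-preemptive discipline the response time simply
equals the waiting time.

#include <stdio.h>

int main(void) {
    int burst[]   = {5, 24, 16, 10, 3};    /* P1..P5 of the next problem */
    int arrival[] = {0, 0, 0, 0, 0};
    int n = 5, clock = 0;
    double tat = 0, wait = 0, resp = 0;
    for (int i = 0; i < n; i++) {          /* processes run in queue order */
        int start  = clock;                /* first response time          */
        int finish = clock + burst[i];
        tat  += finish - arrival[i];       /* turnaround time              */
        wait += start  - arrival[i];       /* waiting time                 */
        resp += start  - arrival[i];       /* response = waiting here      */
        clock = finish;
    }
    printf("avg turnaround %.1f, waiting %.1f, response %.1f ms\n",
           tat / n, wait / n, resp / n);   /* prints 38.4, 26.8, 26.8      */
    return 0;
}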
Problem (when the arrival time is not given):

Process       CPU burst time (ms)
P1            5
P2            24
P3            16
P4            10
P5            03
From the above data, calculate the average turnaround time, average waiting time, and average
response time using the FCFS methodology.
Solution: -
Gantt chart

P1 P2 P3 P4 P5

0 5 29 45 55 58

Average Turn around time


Turn around time is the time interval in between the finishing time period of a process and
arrival time period of that process.
Turn around time = finishing time period – arrival time period
Average turn around time is the ratio in between the sum of turn around time period of each
process and the total number of processes.
Turn around time period for the process P1 = 5 – 0 = 5 millisecond
Turn around time period for the process P2 = 29 – 0 = 29 millisecond
Turn around time period for the process P3 = 45 – 0 = 45 millisecond
Turn around time period for the process P4 = 55 – 0 = 55 millisecond
Turn around time period for the process P5 = 58 – 0 = 58 millisecond
Therefore, average turn around time = (P1+ P2+ P3+ P4 +P5)/5
= (5+29+45+55+58)/5
= 38.4 millisecond
Average waiting time period
Waiting time is the time interval in between the starting time period of a process and arrival
time period of that particular process.
Waiting time = starting time period – arrival time period
Average waiting time is the ratio between the sum of the waiting times of all the processes
and the total number of processes.
Waiting time period for the process P1 = 0 – 0 = 0 millisecond
Waiting time period for the process P2 = 5– 0 = 5 millisecond
Waiting time period for the process P3 = 29– 0 = 29 millisecond
Waiting time period for the process P4 = 45 – 0 = 45 millisecond
Waiting time period for the process P5 = 55 – 0 = 55 millisecond
Therefore, average Waiting time = (P1+ P2+ P3+ P4 +P5)/5
= (0+5+29+45+55)/5
= 26.8 millisecond
Average response time period
Response time is the time interval in between the first response time period of a process and
arrival time period of that particular process.
Response time = First response time period – arrival time period
Average response time is the ratio in between the sum of total time period of each process
and the total number of processes.
Response time period for the process P1 = 0 – 0 = 0 millisecond
Response time period for the process P2 = 5– 0 = 5 millisecond
Response time period for the process P3 = 29– 0 = 29 millisecond
Response time period for the process P4 = 45 – 0 = 45 millisecond
Response time period for the process P5 = 55 – 0 = 55 millisecond
Therefore, average Response time = (P1+ P2+ P3+ P4 +P5)/5
= (0+5+29+45+55)/5
= 26.8 millisecond
From the above calculation, the average waiting time is equal to the average response time;
this is characteristic of a non-preemptive CPU scheduling method.
Problem (when the arrival time is given):

Process       CPU burst time (ms)    Arrival time (ms)
P1            03                     00
P2            06                     02
P3            04                     04
P4            05                     06
P5            02                     08

From the above data, calculate the average turnaround time, average relative delay, and
average response time using the FCFS methodology.

Solution: -
Gantt chart

P1 P2 P3 P4 P5

0 3 9 13 18 20

Average Turn around time


Turn around time is the time interval in between the finishing time period of a process and
arrival time period of that process.
Turn around time = finishing time period – arrival time period
Average turn around time is the ratio in between the sum of turn around time period of each
process and the total number of processes.
Turn around time period for the process P1 = 3 – 0 = 3 millisecond
Turn around time period for the process P2 = 9 – 2 = 7 millisecond
Turn around time period for the process P3 = 13 – 4 = 9 millisecond
Turn around time period for the process P4 = 18 – 6 = 12 millisecond
Turn around time period for the process P5 = 20 – 8 = 12 millisecond
Therefore, average turn around time = (P1+ P2+ P3+ P4 +P5)/5
= (3+7+9+12+12)/5
= 8.6 millisecond
Average response time period
Response time is the time interval in between the first response time period of a process and
arrival time period of that particular process.
Response time = First response time period – arrival time period
Average response time is the ratio in between the sum of total time period of each process
and the total number of processes.
Response time period for the process P1 = 0 – 0 = 0 millisecond
Response time period for the process P2 = 3– 2 = 1 millisecond
Response time period for the process P3 = 9– 4 = 5 millisecond
Response time period for the process P4 = 13 – 6 = 7 millisecond
Response time period for the process P5 = 18 – 8 = 10 millisecond
Therefore, average Response time = (P1+ P2+ P3+ P4 +P5)/5
= (0+1+5+7+10)/5
= 4.6 millisecond
Average relative delay
Relative delay is the ratio between the turnaround time of a process and the CPU burst time of
that particular process:
Relative delay = (turnaround time)/(CPU burst time)
Average relative delay is the sum of the relative delays of all the processes divided by the
number of processes. Being a ratio, it has no unit.
The relative delay for process P1 = 3/3 = 1.
The relative delay for process P2 = 7/6 = 1.16.
The relative delay for process P3 = 9/4 = 2.25.
The relative delay for process P4 = 12/5 = 2.4.
The relative delay for process P5 = 12/2 = 6.
Therefore, the average relative delay = (1+1.16+2.25+2.4+6)/5
= 12.81/5 = 2.562
Note that FCFS remains a non-preemptive CPU scheduling method; the relative delay simply shows
how much each process was slowed down relative to its own burst time.
SHORTEST JOB FIRST (SJF) METHOD
A different approach to CPU scheduling is the shortest job first (SJF) scheduling algorithm.
This algorithm associates with each process the length of the process's next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst. If
the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
Note that a more appropriate term for this scheduling method would be the shortest next CPU
burst algorithm, because scheduling depends on the length of the next CPU burst of the process
rather than its total length. SJF selects the job for service in a manner that ensures the
next job will be completed and leave the system as soon as possible. This tends to reduce the
number of jobs waiting, and also reduces the number of jobs waiting behind a large job. As a
result, SJF can minimize the average waiting time of jobs as they pass through the system.
For example: consider the following set of processes, with the length of the CPU burst given
in milliseconds:

CPU brust time


Process
(millisecond)
P1 06
P2 08
P3 07
P4 03

Using SJF scheduling, we would schedule these processes according to the following Gantt
chart:

P4 P1 P3 P2
0 3 9 16 24

The waiting time is 3 milliseconds for the process P1, 16 milliseconds for process P2, 9
milliseconds for the process P3 and 0 milliseconds for the process P4. Thus the average
waiting time is: (3+16+9+0)/4=7 milliseconds. By comparison, if we were using the FCFS
scheduling scheme/method, the average waiting time would be 10.25 milliseconds. The SJF
scheduling algorithm is probably optimal, in that it gives the minimum average waiting time
62
Material is Prepared By: Mr.Ajit Kumar Mahapatra, Lect. In Computer Science (Contact
Number: 9853277844)
for a given set of processes. Moving a short process before long one decrease the waiting
time of the shorter processes more than it increase the waiting time of the long process
consequently the average waiting time decreases. The obvious problem with SJF is that it
requires precise knowledge of how long a job or process will run and this information is not
usually available. The best SJF can do, is to rely a user estimate of run times. In production
environment whether same jobs run regularly it may be possible to provide reasonable
estimates. But in development environment user really know how long their programs will
execute.
Advantages:
This algorithm gives the least average waiting time, average response time and average
turnaround time.
Disadvantages:
Some of the disadvantages of the SJF method are:
1) Knowing the length of the next CPU burst is difficult.
2) Being an optimal algorithm, it cannot be implemented exactly at the level of short-term
CPU scheduling.
3) Starvation is another problem: big jobs may wait a very long time for the CPU (aging is the
usual remedy). A code sketch of the non-preemptive selection rule follows.
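A minimal sketch of non-preemptive SJF in C: at every completion, scan the not-yet-finished
processes that have already arrived and pick the smallest burst (FCFS breaks ties by taking
the first such index). The burst values reuse the four-process example above; the names are
illustrative, and idle gaps between arrivals are not handled.

#include <stdio.h>

int main(void) {
    int burst[]   = {6, 8, 7, 3};           /* P1..P4 from the example */
    int arrival[] = {0, 0, 0, 0};
    int done[4]   = {0}, n = 4, clock = 0;
    double wait = 0;
    for (int k = 0; k < n; k++) {
        int pick = -1;                       /* shortest arrived, unfinished */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        wait += clock - arrival[pick];
        printf("t=%2d run P%d\n", clock, pick + 1);
        clock += burst[pick];
        done[pick] = 1;
    }
    printf("average waiting time %.2f ms\n", wait / n);  /* prints 7.00 */
    return 0;
}

Running it reproduces the Gantt chart above (P4, P1, P3, P2) and the 7-millisecond average
waiting time.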

Problem by using SJF technique/Method


Problem (when the arrival time is not given):

Process       CPU burst time (ms)
P1            5
P2            24
P3            16
P4            10
P5            03
From the above data, calculate the average turnaround time, average waiting time, and average
response time using the SJF methodology.
Solution: -
Gantt chart

P5 P1 P4 P3 P2

0 3 8 18 34 58

Average Turn around time


Turn around time is the time interval in between the finishing time period of a process and
arrival time period of that process.
Turn around time = finishing time period – arrival time period
Average turn around time is the ratio in between the sum of turn around time period of each
process and the total number of processes.
Turn around time period for the process P1 = 8 – 0 = 8 millisecond
Turn around time period for the process P2 = 58 – 0 = 58 millisecond
Turn around time period for the process P3 = 34 – 0 =34 millisecond
Turn around time period for the process P4 = 18 – 0 = 18 millisecond
Turn around time period for the process P5 = 3 – 0 = 3 millisecond
Therefore, average turn around time = (P1+ P2+ P3+ P4 +P5)/5
= (8+58+34+18+3)/5
= 24.2 millisecond
Average waiting time period
Waiting time is the time interval in between the starting time period of a process and arrival
time period of that particular process.
Waiting time = starting time period – arrival time period
Average waiting time is the ratio between the sum of the waiting times of all the processes
and the total number of processes.
Waiting time period for the process P1 = 0 – 0 = 0 millisecond
Waiting time period for the process P2 = 3– 0 = 3 millisecond
Waiting time period for the process P3 = 8– 0 = 8 millisecond
Waiting time period for the process P4 = 18– 0 = 18 millisecond
Waiting time period for the process P5 = 34 – 0 = 34 millisecond
Therefore, average Waiting time = (P1+ P2+ P3+ P4 +P5)/5
= (0+3+8+18+34)/5
= 12.6 millisecond
Average response time period
Response time is the time interval in between the first response time period of a process and
arrival time period of that particular process.
Response time = First response time period – arrival time period
Average response time is the ratio in between the sum of total time period of each process
and the total number of processes.
Response time period for the process P1 = 0 – 0 = 0 millisecond
Response time period for the process P2 = 3– 0 = 3 millisecond
Response time period for the process P3 = 8– 0 = 8 millisecond
Response time period for the process P4 = 18 – 0 = 18 millisecond
Response time period for the process P5 = 34 – 0 = 34 millisecond
Therefore, average Response time = (P1+ P2+ P3+ P4 +P5)/5
= (0+3+8+18+34)/5
= 12.6 millisecond
From the above calculation, the average waiting time is equal to the average response time;
this is characteristic of a non-preemptive CPU scheduling method.
Problem (when the arrival time is given):

Process       CPU burst time (ms)    Arrival time (ms)
P1            03                     00
P2            06                     02
P3            04                     04
P4            05                     06
P5            02                     08
From the above data, calculate the average turnaround time, average relative delay, and
average response time using the SJF methodology.
Solution: -
Gantt chart (non-preemptive SJF: at each completion, the shortest job among the processes that
have already arrived is selected)

P1 P2 P5 P3 P4

0 3 9 11 15 20

At time 0 only P1 has arrived, so P1 runs (0-3). At time 3 only P2 is waiting, so P2 runs
(3-9). At time 9, processes P3, P4 and P5 have all arrived; the shortest is P5 (9-11), then P3
(11-15) and finally P4 (15-20).

Average turnaround time

Turnaround time is the time interval between the finishing time of a process and the arrival
time of that process.
Turnaround time = finishing time – arrival time
Average turnaround time is the ratio between the sum of the turnaround times of all the
processes and the total number of processes.
Turnaround time for process P1 = 3 – 0 = 3 milliseconds
Turnaround time for process P2 = 9 – 2 = 7 milliseconds
Turnaround time for process P3 = 15 – 4 = 11 milliseconds
Turnaround time for process P4 = 20 – 6 = 14 milliseconds
Turnaround time for process P5 = 11 – 8 = 3 milliseconds
Therefore, average turnaround time = (P1+ P2+ P3+ P4 +P5)/5
= (3+7+11+14+3)/5
= 7.6 milliseconds
Average response time
Response time is the time interval between the first response time of a process and the
arrival time of that particular process.
Response time = first response time – arrival time
Average response time is the ratio between the sum of the response times of all the processes
and the total number of processes.
Response time for process P1 = 0 – 0 = 0 milliseconds
Response time for process P2 = 3 – 2 = 1 millisecond
Response time for process P3 = 11 – 4 = 7 milliseconds
Response time for process P4 = 15 – 6 = 9 milliseconds
Response time for process P5 = 9 – 8 = 1 millisecond
Therefore, average response time = (P1+ P2+ P3+ P4 +P5)/5
= (0+1+7+9+1)/5
= 3.6 milliseconds
Average relative delay
Relative delay is the ratio between the turnaround time of a process and the CPU burst time of
that particular process.
Relative delay = (turnaround time)/(CPU burst time)
Average relative delay is the sum of the relative delays of all the processes divided by the
number of processes.
The relative delay for process P1 = 3/3 = 1.
The relative delay for process P2 = 7/6 = 1.17.
The relative delay for process P3 = 11/4 = 2.75.
The relative delay for process P4 = 14/5 = 2.8.
The relative delay for process P5 = 3/2 = 1.5.
Therefore, the average relative delay = (1+1.17+2.75+2.8+1.5)/5
= 9.22/5 = 1.844
This is the non-preemptive version of SJF; the preemptive version (shortest remaining time
first) would preempt the running process whenever a newly arrived process has a shorter
remaining burst.
Problem

Process       CPU burst time (ms)
P1            03
P2            02
P3            08
P4            04
P5            03
P6            04
P7            01

From the above data, calculate the average turnaround time, average waiting time, and average
response time using the SJF methodology.

PRIORITY CPU SCHEDULING METHOD:


The SJF algorithm is a special case of the general priority scheduling algorithm. A priority
is associated with each process, and the CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a
priority algorithm where the priority is the inverse of the next CPU burst: the larger the CPU
burst, the lower the priority, and vice versa.
For example: consider the following set of processes, assumed to have arrived at time 0 in the
order P1, P2, P3, P4, P5, with the length of the CPU burst given in milliseconds:

Process       CPU burst time (ms)    Priority
P1            10                     03
P2            01                     01
P3            02                     04
P4            01                     05
P5            05                     02

Using priority scheduling, we would schedule these processes according to the following
Gantt chart.

P2 P5 P1 P3 P4

0 1 6 16 18 19

The average waiting time is 8.2 milliseconds. Priority scheduling can be either preemptive or
non-preemptive. When a new process arrives at the ready queue, its priority is compared with
the priority of the currently running process. A preemptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived process is higher than the priority of
the currently running process; a non-preemptive priority scheduling algorithm will simply put
the new process at the head of the ready queue. A major problem with the priority scheduling
algorithm is indefinite blocking, or starvation. A process that is ready to run but waiting
for the CPU can be considered blocked, and a priority scheduling algorithm can leave some
low-priority processes waiting indefinitely. A solution to the problem of indefinite blocking
of low-priority processes is "aging". Aging is a technique of gradually increasing the
priority of processes that wait in the system for a long time. The basic idea is
straightforward: each process is assigned a priority, and the runnable process with the
highest priority is allowed to run.
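The sketch below combines non-preemptive priority selection with a crude form of aging in C:
each time the scheduler passes over a waiting process, that process's priority number is
decremented (smaller number = higher priority). All names and the aging rule itself are
illustrative assumptions; with the burst/priority values of the example, the resulting order
matches the Gantt chart above.

#include <stdio.h>

int main(void) {
    int burst[] = {10, 1, 2, 1, 5};      /* P1..P5 from the example */
    int prio[]  = {3, 1, 4, 5, 2};       /* smaller number = higher priority */
    int done[5] = {0}, n = 5, clock = 0;
    for (int k = 0; k < n; k++) {
        int pick = -1;                   /* highest-priority unfinished job */
        for (int i = 0; i < n; i++)
            if (!done[i] && (pick == -1 || prio[i] < prio[pick]))
                pick = i;
        for (int i = 0; i < n; i++)      /* aging: waiters gain priority */
            if (!done[i] && i != pick && prio[i] > 0) prio[i]--;
        printf("t=%2d run P%d\n", clock, pick + 1);
        clock += burst[pick];
        done[pick] = 1;
    }
    return 0;
}

With these inputs the program prints the order P2, P5, P1, P3, P4, i.e. the same schedule as
the Gantt chart; aging only changes the outcome when some process would otherwise wait through
many scheduling rounds.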
Problem (when the arrival time is not given):

Process       CPU burst time (ms)    Priority
P1            10                     3
P2            01                     1
P3            02                     4
P4            01                     5
P5            05                     2
From the above data, calculate the average turnaround time, average waiting time, and average
response time using the priority CPU scheduling methodology.
Solution: -
Gantt chart

P2 P5 P1 P3 P4

0 1 6 16 18 19

Average Turn around time

Turn around time is the time interval in between the finishing time period of a process and
arrival time period of that process.
Turn around time = finishing time period – arrival time period
Average turn around time is the ratio in between the sum of turn around time period of each
process and the total number of processes.
Turn around time period for the process P1 = 16 – 0 = 16 millisecond
Turn around time period for the process P2 = 1 – 0 = 1 millisecond
Turn around time period for the process P3 = 18 – 0 =18 millisecond
Turn around time period for the process P4 = 19 – 0 = 19 millisecond
Turn around time period for the process P5 = 6 – 0 = 6 millisecond
Therefore, average turn around time = (P1+ P2+ P3+ P4 +P5)/5
= (16+1+18+19+6)/5
= 12 millisecond
Average waiting time period
Waiting time is the time interval in between the starting time period of a process and arrival
time period of that particular process.
Waiting time = starting time period – arrival time period
Average waiting time is the ratio between the sum of the waiting times of all the processes
and the total number of processes.
Waiting time period for the process P1 = 6– 0 = 6 millisecond
Waiting time period for the process P2 = 0– 0 = 0 millisecond
Waiting time period for the process P3 = 16– 0 = 16 millisecond
Waiting time period for the process P4 = 18– 0 = 18 millisecond
Waiting time period for the process P5 = 1 – 0 = 1 millisecond
Therefore, average Waiting time = (P1+ P2+ P3+ P4 +P5)/5
= (6+0+16+18+1)/5
= 8.2 millisecond
Average response time period
Response time is the time interval in between the first response time period of a process and
arrival time period of that particular process.
Response time = First response time period – arrival time period
Average response time is the ratio in between the sum of total time period of each process
and the total number of processes.
Response time period for the process P1 = 6 – 0 = 6 millisecond
Response time period for the process P2 = 0– 0 = 0 millisecond
Response time period for the process P3 = 16– 0 = 16 millisecond
Response time period for the process P4 = 18 – 0 = 18 millisecond
Response time period for the process P5 = 1 – 0 = 1 millisecond
Therefore, average Response time = (P1+ P2+ P3+ P4 +P5)/5
= (6+0+16+18+1)/5
= 8.2 millisecond
From the above calculation, the average waiting time is equal to the average response time;
this is characteristic of a non-preemptive CPU scheduling method.
Problem (when the arrival time is not given):

Process       CPU burst time (ms)    Priority
P1            2                      2
P2            5                      4
P3            1                      1
P4            4                      3
P5            5                      5
P6            2                      1
P7            3                      3

From the above data, calculate the average turnaround time, average waiting time, and average
response time using the priority CPU scheduling methodology.
Problem (when the arrival time is given):

Process       CPU burst time (ms)    Arrival time (ms)    Priority
P1            03                     3                    2
P2            02                     2                    3
P3            01                     0                    1
P4            05                     6                    4

From the above data, calculate the average turnaround time, average relative delay, and
average response time using the priority CPU scheduling methodology.
ROUND-ROBIN METHOD
One of the oldest, simplest, fairest and most widely used algorithms is the round-robin
method. Each process is assigned a time interval, called its quantum, during which it is
allowed to run. If the process is still running at the end of the quantum, the CPU is
preempted and given to another process. If the process has blocked or finished before the
quantum has elapsed, the CPU switch is done when the process blocks. Round-robin is easy to
implement: all the scheduler needs to do is maintain a list of runnable processes, as shown in
the figure.


In round-robin scheduling, processes are dispatched FIFO but are given a limited amount of CPU
time called a time slot, time slice or quantum. If a process does not complete before its CPU
time expires, the CPU is preempted and given to the next waiting process; the preempted
process is then placed at the back of the ready queue.

In a variant of this scheme, as processes enter the system they first reside in a holding
queue until their priorities reach the level of the processes in an active queue. The average
waiting time under the round-robin policy is often long. Consider the following set of
processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process       CPU burst time (ms)
P1            24
P2            3
P3            3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
Since it requires another 20 milliseconds, it is preempted after the first time quantum and
the CPU is given to the next process in the queue, process P2. Since process P2 does not need
4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next
process, process P3. Once each process has received one time quantum, the CPU is

returned to process P1 for an additional time quantum. The resulting round robin scheduling
is:

P1  P2  P3  P1  P1  P1  P1  P1

0   4   7   10  14  18  22  26  30

The average waiting time is (17/3) = 5.66 milliseconds. In the round-robin scheduling
algorithm, no process is allocated the CPU for more than one time quantum in a row (unless it
is the only process in the ready queue). If a process's CPU burst exceeds one time quantum,
that process is preempted and put back in the ready queue. The performance of the round-robin
algorithm depends heavily on the size of the time quantum: if the time quantum is extremely
large, the round-robin policy is the same as the FCFS policy; if the time quantum is extremely
small (say 1 millisecond), the round-robin approach is called processor sharing. The
round-robin CPU scheduling algorithm may behave as a preemptive or a non-preemptive CPU
scheduling algorithm. Whether the round-robin policy is preemptive or non-preemptive depends
upon 2 factors:
a) the CPU burst time of each process, and
b) the fixed time slot (time quantum) set by the processor (CPU).
If the CPU burst time is less than or equal to the time quantum, the round-robin policy
behaves as a non-preemptive CPU scheduling algorithm; but if the CPU burst time is greater
than the fixed time slot, the round-robin policy behaves as a preemptive CPU scheduling
algorithm.
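Round-robin is also a few lines of C to simulate. The sketch below sweeps over the processes
repeatedly, giving each unfinished one at most q milliseconds per turn; with all arrivals at
time 0 this index-order sweep visits processes in the same order as the textbook ready queue.
The burst values reuse the example above, and the names are illustrative.

#include <stdio.h>

int main(void) {
    int left[] = {24, 3, 3};              /* remaining burst, P1..P3 */
    int n = 3, q = 4, clock = 0, remaining = n;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (left[i] == 0) continue;   /* already finished */
            int slice = left[i] < q ? left[i] : q;   /* at most one quantum */
            printf("t=%2d..%2d  P%d\n", clock, clock + slice, i + 1);
            clock   += slice;
            left[i] -= slice;
            if (left[i] == 0) {
                remaining--;
                printf("P%d finishes at t=%d (turnaround %d ms)\n",
                       i + 1, clock, clock);   /* arrival assumed to be 0 */
            }
        }
    }
    return 0;
}

The output reproduces the Gantt chart above: P2 finishes at 7, P3 at 10 and P1 at 30.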
Problem (when the arrival time is not given):

Process       CPU burst time (ms)
P1            24
P2            03
P3            03

From the above data, calculate the average turnaround time, average waiting time, and average
response time using the round-robin CPU scheduling methodology, where the time quantum is 4
milliseconds.
Solution: -
Gantt chart

P1  P2  P3  P1  P1  P1  P1  P1

0   4   7   10  14  18  22  26  30

Average turnaround time

Turnaround time is the time interval between the finishing time of a process and the arrival time of that process.
Turnaround time = finishing time – arrival time
Average turnaround time is the ratio between the sum of the turnaround times of all processes and the total number of processes.
Turnaround time for process P1 = 30 – 0 = 30 milliseconds
Turnaround time for process P2 = 7 – 0 = 7 milliseconds
Turnaround time for process P3 = 10 – 0 = 10 milliseconds
Therefore, average turnaround time = (P1 + P2 + P3)/3
= (30 + 7 + 10)/3
= 15.66 milliseconds
Average waiting time
Waiting time is the total time a process spends waiting in the ready queue; it equals the turnaround time minus the CPU burst time, or the sum of the intervals during which the process sits in the queue.
Average waiting time is the ratio between the sum of the waiting times of all processes and the total number of processes.
Waiting time for process P1 = (0 – 0) + (10 – 4) = 6 milliseconds
Waiting time for process P2 = 4 – 0 = 4 milliseconds
Waiting time for process P3 = 7 – 0 = 7 milliseconds
Therefore, average waiting time = (P1 + P2 + P3)/3
= (6 + 4 + 7)/3
= 5.66 milliseconds
Average response time
Response time is the time interval between the arrival time of a process and the moment of its first response, that is, the first time it is given the CPU.
Response time = first response time – arrival time
Average response time is the ratio between the sum of the response times of all processes and the total number of processes.
Response time for process P1 = 0 – 0 = 0 milliseconds
Response time for process P2 = 4 – 0 = 4 milliseconds
Response time for process P3 = 7 – 0 = 7 milliseconds
Therefore, average response time = (P1 + P2 + P3)/3
= (0 + 4 + 7)/3
= 3.66 milliseconds
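The schedule and the three averages can be reproduced with a short simulation. The following is a minimal sketch in Python, not part of the original material: the round_robin function and its naming are illustrative choices, and every process is assumed to arrive at time 0, as in these problems.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict mapping process name -> CPU burst time (ms), arrival time 0.
    Returns (turnaround, waiting, response) dictionaries."""
    remaining = dict(bursts)
    ready = deque(bursts)                  # ready queue, in arrival order
    clock = 0
    first_run, finish = {}, {}
    while ready:
        p = ready.popleft()
        first_run.setdefault(p, clock)     # first time p gets the CPU
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)                # preempted: back of the ready queue
        else:
            finish[p] = clock              # burst complete
    turnaround = {p: finish[p] for p in bursts}                # finish - arrival(0)
    waiting = {p: turnaround[p] - bursts[p] for p in bursts}   # turnaround - burst
    response = {p: first_run[p] for p in bursts}               # first run - arrival(0)
    return turnaround, waiting, response

tat, wt, rt = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(tat)   # {'P1': 30, 'P2': 7, 'P3': 10}  -> average 15.66 ms
print(wt)    # {'P1': 6, 'P2': 4, 'P3': 7}    -> average 5.66 ms
print(rt)    # {'P1': 0, 'P2': 4, 'P3': 7}    -> average 3.66 ms

Running the same function on the two practice problems below (quantum 5 with bursts 30, 6, 8, and quantum 4 with bursts 15, 10, 12, 13) reproduces the answers stated there.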
Problem (when the arrival time is not given):

Process    CPU burst time (millisecond)
P1         30
P2         06
P3         08

From the above data, calculate the average turnaround time, average waiting time and average response time using the Round Robin CPU scheduling technique, where the time quantum is 5 milliseconds.
(Ans: Average turnaround time = 29.66 milliseconds, Average waiting time = 15 milliseconds, Average response time = 5 milliseconds)

Problem (when the arrival time is not given):

Process    CPU burst time (millisecond)
P1         15
P2         10
P3         12
P4         13

From the above data, calculate the average turnaround time, average waiting time and average response time using the Round Robin CPU scheduling technique, where the time quantum is 4 milliseconds.
(Ans: Average turnaround time = 44.75 milliseconds, Average waiting time = 32.25 milliseconds, Average response time = 6 milliseconds)

TYPES OF SCHEDULERS

In general, there are three different types of schedulers, which may coexist in a complex operating system. They are:
a) Long term CPU scheduling.
b) Medium term CPU scheduling.
c) Short term CPU scheduling.
A) Long term CPU scheduling
The function of the long term scheduler is to select a job from the pool of jobs and load that job into the main memory of the computer. A ready queue is a type of data structure with two ends, a front end and a rear end. Jobs are inserted into the ready queue through the rear end and deleted from the ready queue through the front end.

FRONT END----- DELETION----

73
INSERTION---- REAR END---
Material is Prepared By: Mr.Ajit Kumar Mahapatra, Lect. In Computer Science (Contact
Number: 9853277844)
P6 P5 P4 P3 P2 P1
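A minimal sketch of this FIFO behaviour, using Python's collections.deque (the job names are illustrative):

from collections import deque

ready_queue = deque()
for job in ["P1", "P2", "P3"]:
    ready_queue.append(job)        # insertion at the rear end

first = ready_queue.popleft()      # deletion from the front end
print(first)                       # P1 -- the oldest job leaves first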

The long term scheduler, when present, works with the batch queue and selects the next batch job to be executed. Batch jobs contain all the necessary data and commands for their execution. Batch jobs usually also contain programmer-assigned or system-assigned estimates of their resource needs, such as memory size, expected execution time and device requirements.
B) Medium term scheduling
If a process requests I/O in the middle of its execution, the process is removed from the main memory and placed in the waiting queue. When the I/O operation completes, the job is moved from the waiting queue back to the ready queue. After executing for a while, a running process may become suspended by making an I/O request or by issuing a system call. Given that suspended processes cannot make any progress towards completion until the related suspending condition is removed, it is sometimes beneficial to remove them from main memory to make room (space) for other processes. In practice, the main memory capacity may impose a limit on the number of active processes in the system. In systems with no support for virtual memory, this problem may be alleviated by moving suspended processes to secondary storage. Moving a suspended process to secondary storage is called swapping; such a process is said to be swapped out or rolled out.
C) Short term scheduling
The function of the short term scheduler is to select a job from the ready queue and give control of the CPU to that process with the help of the dispatcher. A dispatcher is a module that connects the CPU to the process selected by the short term scheduler. The main function of the dispatcher is switching the CPU from one process to another; it jumps to the proper location in the user program to restart its execution. The time it takes for the dispatcher to stop one process and start another running is known as dispatch latency.

DEADLOCK
A deadlock is a situation where a group of processes is permanently blocked as a result of each process having acquired a subset of the resources needed for its completion while waiting for the release of the remaining resources held by others in the same group, thus making it impossible for any of the processes to proceed. A process requests resources; if the resources are not available at that time, the process enters the waiting state. If the requested resources are held by other waiting processes, then all of them remain in the waiting state, no work makes progress, and that situation is called a deadlock.

From the above diagram, P1 and P2 are the processes and R1 and R2 are the two resources. Suppose process P1 requests the resource R1, but R1 is held by process P2. Similarly, process P2 requests the resource R2, whereas R2 is held by process P1. Then both enter the waiting state; as a result there is no work in progress for either process, and the arising situation is called a deadlock. A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each of which consists of some number of identical instances; memory space, CPU cycles, files and I/O devices are examples of resource types. A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. Obviously, the number of resources requested may not exceed the total number of resources available in the system; in other words, a process cannot request two printers if the system has only one. Under the normal mode of operation, a process may utilize a resource only in the following sequence:
a) Request: If the request cannot be granted immediately (for example, the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
b) Use: The process can operate on the resource.
c) Release: The process releases the resource. The request and release of resources are system calls.
RESOURCE ALLOCATION GRAPH
A resource allocation graph is a pictorial or diagrammatic representation of a deadlock situation, depending upon the number of processes as well as the total number of resources. In other words, a resource allocation graph is a directed graph used to represent a deadlock situation. To draw the graph, we need symbols such as circles, squares and arrow marks. The graph consists of two types of nodes: one for processes, represented by circles, and one for resources, represented by squares. Similarly, the graph consists of two types of edges: the request edge and the assignment edge. An edge from a process to a resource is called a request edge, and an edge from a resource to a process is said to be an assignment edge. If a system is in a deadlock state, the resource allocation graph must contain a cycle.
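Detecting such a cycle is a standard depth-first search over the directed graph. The sketch below is illustrative and not from the text: the dictionary encoding and the example edges are assumptions. (When every resource type has a single instance, a cycle implies deadlock; with multiple instances per type, a cycle is necessary but not sufficient.)

def has_cycle(graph):
    """graph: dict node -> list of successors (request and assignment edges)."""
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on current path / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: a cycle exists
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

# P1 -> R1 is a request edge; R1 -> P2 is an assignment edge, and so on.
g = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(g))    # True: P1 and P2 wait on each other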

CAUSES OF DEADLOCK

The four necessary conditions for deadlock are:
a) Mutual Exclusion
b) Hold and Wait
c) No preemption
d) Circular Wait.
a) Mutual Exclusion: Mutual exclusion means resources are in non-sharable mode; only one process at a time can use a resource. If some other process requests that resource, the requesting process must wait until the resource has been released.
b) Hold and Wait: Each process in a deadlock state must hold at least one resource while waiting for additional resources which are currently held by other processes.
c) No preemption: No preemption means resources are not forcibly taken from any process in the middle of its execution; they are released only after the process has completed its task or job.
d) Circular wait: A circular chain of processes exists in which each process holds one or more resources that are requested by the next process in the chain.

In the above diagram/figure, each process holds at least one resource while waiting for a resource held by the next process in the chain: P1 waits for a resource held by P2, P2 waits for a resource held by P3, and P3 waits for a resource held by P1, so the chain closes on itself.

Therefore, it is said to be a circular wait.


DEADLOCK PREVENTION
By far the most frequent approach used by designers in dealing with the problem of deadlock is deadlock prevention: ensuring in advance that at least one of the four necessary conditions can never hold. The various methods of deadlock prevention, and their effects on both users and the system, are considered below.
Mutual Exclusion
Mutual exclusion means only one process can use a resource at a time, that is, resources are not shared by a number of processes at a time. We can deny this condition with the simple protocol, "Convert all non-sharable resources to sharable resources". If this condition can be denied, then we can prevent deadlock.
For example: A printer cannot be shared by a number of processes at a time, so we cannot convert the printer from non-sharable to sharable mode; hence we cannot apply this prevention method in every case.
Hold and Wait
Hold and wait means each process in a deadlock state holds at least one resource while waiting for the remaining resources. We can deny this condition with the simple protocol, "A process may request resources only when it holds none". A second protocol is, "Each process must request and be allocated all its resources before the beginning of its execution".
No Preemption
No preemption means resources are not released in the middle of processing. We can deny this condition with the simple protocol, "When a process requests some resources, first check whether they are available". If they are available, we allocate them. If they are not available, we check whether they are allocated to some other process that is itself waiting for additional resources; if so, we preempt the desired resources from that waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, then the requesting process must wait, and while it waits its own resources may be preempted and used by other processes.
Circular Wait
The fourth and final condition for deadlock is the circular wait condition. One way to ensure that this condition never holds is to impose a total ordering on all resource types and to require that each process requests resources in an increasing order of enumeration (meaning the numbers assigned to the resources, 1, 2, 3, ..., are in ascending order).
Let R = {R1, R2, R3, R4, R5, ..., Rn}
be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and determine whether one precedes another in our ordering. Formally, we define a one-to-one function F: R -> N, where N is the set of natural numbers. For example, if the set of resource types R includes tape drives, disk drives and printers, then the function F might be defined as follows:
F (tape drive) = 1
F (disk drive) = 2
F (printer) = 12
We can now consider the following protocol to prevent deadlocks: each process can request resources only in an increasing order of enumeration. That is, a process can initially request any number of instances of a resource type, say Ri (the initial request). After that, the process can request instances of resource type Rj (the next request) if and only if F(Rj) > F(Ri). If several instances of the same resource type are needed, a single request for all of them must be issued. For example, using the function defined previously, a process that wants to use the tape drive and the printer at the same time must first request the tape drive and then request the printer. Alternatively, we can require that whenever a process requests an instance of resource type Rj, it must have released any resource Ri such that F(Ri) >= F(Rj). If one of these two protocols is used, then the circular wait condition cannot hold.
DEADLOCK AVOIDANCE
Deadlock avoidance is a method of dynamically (online) escaping from deadlocks. In this scheme, when a process requests resources, the avoidance algorithm checks the state of the system before allocating the resources. If the state would remain safe, the system allocates the resources to the requesting process; otherwise it does not allocate them. A safe state means "no deadlock will happen, even if we allocate the resources to the requesting process". An unsafe state means a deadlock may happen if we grant the requested resources.
BANKER’S ALGORITHM
The banker's algorithm is a deadlock avoidance algorithm. It was named so because a bank never allocates its available cash randomly to customers; before allocating cash to a customer, the banker first consults a data structure built from factors like Available, Allocation, Max and Need/Requirement.
Data structures:
Available: A vector of length m indicates the number of available resources of each type. If Available[j] = K, there are K instances of resource type Rj available.
For example, as per the example mentioned below:
Available[A] = 10 - (2+3+2) = 3, Available[B] = 5 - (1+1) = 3, Available[C] = 7 - (2+1+2) = 2, where A, B and C are the three resources.
Max: An n*m matrix (here n refers to the rows, that is the processes, and m refers to the columns, that is the resources) defines the maximum demand of each process. If Max[i,j] = K, then process Pi may request at most K instances of resource type Rj. Here i refers to the process and j refers to the resource.
For example, as per the example mentioned below: Max[P0, A] = 7, Max[P3, C] = 2, etc.
Allocation:
An n*m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = K, then Pi is currently allocated K instances of resource type Rj.
For example, as per the example mentioned below: Allocation[P0, B] = 1, Allocation[P3, C] = 1, etc.
Need:
An n*m matrix indicates the remaining resource need of each process. If Need[i,j] = K, then process Pi may need K more instances of resource type Rj to complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j]
As per the formula: Need[P0, A] = Max[P0, A] - Allocation[P0, A] = 7 - 0 = 7, etc.
Let x and y be vectors of length m. We say that x <= y if and only if x[i] <= y[i] for all i = 1, 2, 3, ..., m. For example, if x = (1,7,3,2) and y = (0,3,2,1), then y <= x (and y < x if y <= x and y != x). We can treat each row of the matrices Allocation and Need as a vector and refer to them as Allocation[i] and Need[i] respectively. The vector Allocation[i] specifies the resources currently allocated to process Pi, and the vector Need[i] specifies the additional resources that process Pi may still request to complete its task.
For example: Consider a system with 5 processes P0, P1, P2, P3, P4 and 3 resources A, B and C. Resource type A has 10 instances, resource type B has 5 instances and resource type C has 7 instances. Suppose that the following snapshot of the system has been taken:
Process    Allocation (A B C)    Max (A B C)
P0         0 1 0                 7 4 3
P1         2 0 0                 3 2 2
P2         3 0 2                 7 0 2
P3         2 1 1                 2 2 2
P4         0 0 2                 4 3 3

Now we calculate the Available vector of the resources.

Available resources = Total - Allocated
Therefore, as per the formula:
Available[A] = 10 - (2+3+2) = 10 - 7 = 3, Available[B] = 5 - (1+1) = 5 - 2 = 3, Available[C] = 7 - (2+1+2) = 7 - 5 = 2
The Available vector is (3, 3, 2).
Now we calculate the Need matrix of the resources.
Need = Max - Allocation

Process    Max - Allocation (A B C)    Need (A B C)
P0         7-0  4-1  3-0               7 3 3
P1         3-2  2-0  2-0               1 2 2
P2         7-3  0-0  2-2               4 0 0
P3         2-2  2-1  2-1               0 1 1
P4         4-0  3-0  3-2               4 3 1

Search the Need matrix from P0 to P4 for a process whose required resources are less than or equal to the Available vector; here (1,2,2) <= (3,3,2), so the requested resources are allocated to P1. Now process P1's allocation is (2,0,0) + (1,2,2) = (3,2,2). After process P1 completes execution, the resources allocated to P1 are freed, so the available resources become (3,3,2) + (2,0,0) = (5,3,2). Search the Need matrix again for a need that is less than or equal to the available resources: (0,1,1) <= (5,3,2), so the resources are allocated to process P3. After P3 completes, it releases all its resources, then:
Available resources (current) = Available resources (previous) + Allocation (resources released by the process after it completes execution)
That is, (5,3,2) + (2,1,1) = (7,4,3). Continue this process until we find a safe sequence, otherwise wait. A safe sequence for this problem is <P1, P3, P4, P2, P0>.
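The search just described is the standard safety algorithm, and the example can be checked mechanically. The sketch below is illustrative (the function name and data layout are our own):

def safe_sequence(available, max_need, allocation):
    """Return a safe sequence of process names, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)                 # resources currently free
    finished = [False] * n
    order = []
    while len(order) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion, then releases its allocation
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                order.append("P%d" % i)
                break
        else:
            return None                    # no process can proceed: unsafe
    return order

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 4, 3], [3, 2, 2], [7, 0, 2], [2, 2, 2], [4, 3, 3]]
print(safe_sequence([3, 3, 2], max_need, allocation))
# ['P1', 'P2', 'P0', 'P3', 'P4'] -- also safe; <P1, P3, P4, P2, P0> from the
# text is another valid order, since safe sequences need not be unique.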

SAFE STATE
A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid a deadlock. In other words, a system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, P3, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i. In this situation, if the resources that process Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its needed resources, complete its designated task, return all its allocated resources and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. If no such sequence exists, then the state is said to be unsafe.
A safe state is not a deadlock state; conversely, a deadlock state is an unsafe state. However, not all unsafe states are deadlocks: an unsafe state may merely lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe (and deadlock) states. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs.
For example: Consider a system with 12 resources and 3 processes P0, P1 and P2. Process P0 requires 10 resources, process P1 may need as many as 4 resources and process P2 requires up to 9 resources. Suppose that, at time T0, process P0 is holding 5 resources, process P1 is holding 2 and process P2 is holding 2 resources. Thus the available resources (total - allocated, that is 12 - (5+2+2) = 12 - 9) number 3.

Process    Maximum Need    Currently Holding    Need/Requirement (Max - Allocation)
P0         10              05                   10 - 5 = 05
P1         04              02                   04 - 02 = 02
P2         09              02                   09 - 02 = 07
At time T0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition (the available resources are greater than or equal to the required resources at each step), since process P1 can immediately be allocated all its resources and then return them, after which the system will have 5 available resources (Available resources (current) = Available resources (previous) + Allocation (resources released by the process after it completes execution)). Then process P0 can get all its resources and return them, after which the system will have 10 available resources. Finally, process P2 can get all its resources and return them, after which the system will have all 12 resources.
Suppose that, at time T1, process P2 requests and is allocated 1 more resource. The system is no longer in a safe state. The available resources (total - allocated, that is 12 - (5+2+3) = 12 - 10) now number 2.

Process    Maximum Need    Currently Holding    Need/Requirement (Max - Allocation)
P0         10              05                   10 - 5 = 05
P1         04              02                   04 - 02 = 02
P2         09              03                   09 - 03 = 06

At this point, only process P1 can be allocated all its resources. When it returns them, the system will have only 4 available resources. Since process P0 is allocated 5 resources but has a maximum of 10, it may then request 5 more resources and have to wait. Similarly, process P2 may request an additional 6 resources and have to wait, resulting in a deadlock. This illustrates the concept of an unsafe state.
Initially, the system is in a safe state. Whenever a process requests resources that are currently available, the system must decide whether the resources can be allocated immediately or whether the process must wait. The request is granted only if the allocation leaves the system in a safe state.
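Both snapshots can be checked with the same safety test specialized to a single resource type; a minimal sketch (the function name is our own):

def is_safe(available, max_need, holding):
    """Single resource type: available is an int; the others are int lists."""
    need = [m - h for m, h in zip(max_need, holding)]
    work, done = available, [False] * len(holding)
    for _ in holding:
        for i, n in enumerate(need):
            if not done[i] and n <= work:
                work += holding[i]         # Pi finishes and returns its resources
                done[i] = True
                break
        else:
            return False                   # no process can proceed: unsafe
    return True

print(is_safe(3, [10, 4, 9], [5, 2, 2]))   # time T0: True  (safe)
print(is_safe(2, [10, 4, 9], [5, 2, 3]))   # time T1: False (unsafe)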

THRASHING
As the number of processes submitted to the CPU for execution increases, the CPU utilization, or throughput, also increases. But if processes keep being added, then beyond a certain point the CPU utilization falls sharply, sometimes reaching zero. This situation is called thrashing.
A process is thrashing if it is spending more time paging than executing. For example, suppose the main memory initially holds 5 jobs and the CPU utilization is 0.6; after adding 5 more jobs to memory, the utilization increases to 0.8; after adding another 5 jobs, the utilization suddenly drops to 0.1 or 0.2, and may even approach zero. This unexpected collapse is thrashing.

RECOVERY FROM DEADLOCK

When a detection algorithm determines that a deadlock exists, several alternatives are available. There are two options for breaking a deadlock:
1) One solution is simply to abort (remove) one or more processes to break the circular wait.
2) The second option is to preempt some resources from one or more of the deadlocked processes.
PROCESS TERMINATION
To eliminate a deadlock by aborting (removing or terminating) processes, we use one of two methods. In both methods the system reclaims all resources allocated to the terminated processes.
a) Abort all deadlocked processes
This means releasing all the processes in the deadlock state and restarting the allocation from the starting point. This method clearly breaks the deadlock cycle, but at great expense; these processes may have computed for a long time, and the results of these partial computations must be discarded.
b) Abort one process at a time until the deadlock cycle is eliminated
In this method, first abort one of the processes in the deadlock state and allocate its resources to some other process in the deadlock state, then check whether the deadlock has been broken. If not, abort another process from the deadlock state. Continue this process until we recover from the deadlock. Many factors may determine which process is chosen, including:
1) What the priority of the process is.
2) How many more resources the process needs in order to complete.
3) How many processes will need to be terminated.
4) How long the process has computed, and how much longer the process will compute before completing its designated task.
5) How many and what type of resources the process has used.
RESOURCE PREEMPTION
To eliminate a deadlock using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. If preemption is required to deal with deadlock, then three issues need to be addressed.
a) Selecting a victim
Which resources and which processes are to be preempted? We select a victim resource from the deadlock state and preempt it. As with process termination, we must determine the order of preemption so as to minimize cost. Cost factors may include such parameters as the number of resources a deadlocked process is holding and the amount of time the deadlocked process has so far consumed during its execution.
b) Rollback
Roll the processes and resources back to some safe state (a state is safe if the system can allocate resources to each process, up to its maximum, in some order and still avoid a deadlock) and restart from that state. This method requires the system to keep extra information about the state of all running processes.
c) Starvation
If the same process is always picked as a victim, it may never complete its task; this is starvation. We must ensure that a process can be picked as a victim only a finite number of times. The most common solution is to include the number of rollbacks in the cost factor.
CPU I/O BURST CYCLE
The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate the execution of the process, as shown in the figure below:

The durations of CPU bursts have been measured extensively. Although they vary greatly from process to process and from computer to computer, they are generally characterized as an exponential or hyperexponential distribution, with:
1) a large number of short CPU bursts, and
2) a small number of long CPU bursts.
A CPU-bound program might have a few long CPU bursts, while an I/O-bound program typically has many short CPU bursts. This distribution matters when selecting a CPU scheduling algorithm.
