Process State :
As a process executes, it changes state. The state of a process is defined by its current activity. A process may be in one of the following states : new, ready, running, waiting or terminated.
PCB ( Process Control Block ) :
Each process is represented in the operating system by a Process Control Block ( PCB ). It contains the following information.
i) Process state
ii) Program counter
iii) CPU registers
iv) CPU-scheduling information
v) Memory-management information
vi) I/O status information
vii) Accounting information
Process No. : It contains the identification of the process.
Process State : It contains information about the state of the process. The entry in this field may be new / ready / running / waiting / terminated.
Pointers or CPU-Scheduling Information : This information contains pointers to various scheduling queues, pointers to other processes, the process priority etc.
Program Counter : It contains the address of the next instruction to be executed in the process.
Registers : Each process uses various registers such as the accumulator, index registers, stack pointers and general-purpose registers. The contents of these registers are saved in the PCB during process switching. The following diagram shows the concept of process switching and the usage of registers.
I/O Information : This information includes the list of I/O devices allocated to this process, the list of open files etc.
Accounting Information : This information includes the amount of CPU time used by the process, the memory required for the process etc.
Scheduling Queues :
Whenever processes enter the system, they are put into the Job Queue. The processes are connected as a linked list in the job queue. The job queue contains all the processes in the system.
The processes that are placed in main memory, and are ready and waiting for allocation of the CPU ( execution ), are said to be in the Ready Queue. This queue is also maintained as a linked list.
Device Queues :
There are several device queues in the system, one for each I/O device. The list of processes waiting for a particular device is called that device's Device Queue.
Queuing Diagram :
A newly created process is initially put in the job queue and then placed in the ready queue. One process is selected from the ready queue and the CPU is allotted to it. While executing the process, one of the following events may occur.
i) The process issues an I/O request. In this case, the process is placed in the appropriate I/O ( device ) queue.
ii) The process may be removed forcibly from the CPU, for example when its time slice expires, and put back in the ready queue.
iii) The process may create a new process ( child process ) and wait for the completion of the child.
In the first and third cases, the process switches from the running state to the waiting state; once the awaited event completes, it is put back in the ready queue. Finally, the process is terminated and it is removed from all the queues ( ready and device queues ).
Schedulers :
In the multiprogramming environment, processes move from one queue to another throughout their lifetime. The scheduler decides the movement of a particular process i.e., the operating system must select jobs from the queues and decide their destination. This selection is done by the scheduler.
There are two types of schedulers. They are a) Short Term and b) Long Term.
In multiprogramming, all the processes which are created are placed in the job queue on the disk. The Long Term Scheduler ( Job Scheduler ) selects some processes from the job queue and places them in main memory.
Now more than one process is located in main memory ( in the ready queue ). The Short Term Scheduler ( CPU Scheduler ) selects one of these processes and the CPU is allocated to it.
The main difference between the two schedulers lies in the frequency of their execution. The short term scheduler must select a process for the CPU very frequently, so the time required to select a job must be very small.
The long term scheduler must select a process from the job queue i.e., from the disk, and place it in main memory. The time required to select a job from the disk is somewhat greater than the time required by the short term scheduler.
The long term scheduler is invoked only when there is empty space in memory i.e., after a job is terminated. But the short term scheduler is invoked many times, as long as there are jobs in main memory.
The long term scheduler must select jobs so as to get a good mix of CPU-bound and I/O-bound processes, so that the resources are used efficiently.
Some processes are I/O bound and others are CPU bound. An I/O-bound process spends most of its time with I/O devices. A CPU-bound process spends most of its time with the CPU. If main memory is full of I/O-bound processes, then all of them wait in I/O device queues and the CPU sits idle. If main memory is full of CPU-bound processes, then all the device queues are empty while the CPU is busy. Both cases are inefficient. So, we require a process mix i.e., some CPU-bound processes and some I/O-bound processes together in main memory.
The idea behind the medium term scheduler is : “Sometimes it is advantageous to remove a process from main memory to reduce the degree of multiprogramming and continue with the remaining processes”. After some time, the removed process is placed in main memory again and its execution resumes from where it stopped previously.
The following diagram shows how the medium term scheduler works.
Context Switch :
“When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process.”
In multiprogramming, the CPU switches from one process to another. Suppose process P0 is executing and the CPU is to be switched to process P1. The state of P0 is saved into its PCB, and the saved state of P1 is loaded from its PCB; during this switching time neither process performs useful work. This switching is known as a Context Switch.
Operations on Processes :
We know that processes are executed concurrently. They must be created and deleted dynamically. Sometimes processes communicate with other processes. The operating system must provide mechanisms for these operations.
Process Creation :
A process may create several new processes during its execution by using a create-process system call. The creating process is called the Parent Process. The newly created processes are called Child Processes. While the child processes execute, they may in turn create other new processes, and so on. This forms a Process Tree structure.
We know that each process is assigned a specific task and it requires some resources to achieve its goal. Generally each process needs resources like CPU time, memory, files, I/O devices etc. A child process may obtain its resources as a subset of the resources allotted to its parent, or it may get its resources directly from the operating system. The parent may partition its resources among its children, or it may share some of its resources with its children while they execute concurrently.
When a process creates a new process, two possibilities exist in terms of execution :
i) The parent continues to execute concurrently with its children.
ii) The parent waits until some or all of its children terminate.
There are also two possibilities in terms of the address space of the new process : the child may be a duplicate of the parent, or the child may have a new program loaded into it.
In UNIX, a new process is created with the fork system call. The return code of fork is zero for the child process and the non-zero process identifier of the child for the parent process. The execlp system call may be used after the fork instruction : it loads a binary file into the memory of the calling process, destroying that process's previous memory image, and then starts executing it. After the child completes, the parent process continues its execution.
Page 8
The following C program shows how these UNIX system calls are used.
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                     /* create a child process */
    if (pid < 0)
        exit(-1);                           /* fork failed */
    else if (pid == 0)
        execlp("/bin/ls", "ls", NULL);      /* child loads a new program */
    else {
        wait(NULL);                         /* parent waits for the child */
        exit(0);
    }
}
In the above program, the parent creates a child by using the fork system call. The pid returned is zero for the child and a positive integer ( the process identifier of the child ) for the parent. Now we have two processes. The execlp system call loads the specified file into the memory space of the child process and executes it. The parent waits until the child completes its execution.
Process Termination :
The process terminates when it finishes executing its final instruction and asks the operating system to delete it by using the exit ( ) system call. When the process is terminated, it returns some status data to its parent through the wait ( ) system call. All the resources which were allotted to the process are then deallocated.
Sometimes a process is terminated by another process through a system call such as abort ( ), usually invoked by its parent. A parent may invoke this system call when :
a) The child has exceeded its usage of some of the resources allotted to it.
b) The task which was assigned to the child is no longer required.
c) The parent itself is terminating, and the operating system does not allow a child to continue after its parent terminates.
Cascading :
If a process terminates ( either normally or abnormally ), then all its children are also terminated. This phenomenon is known as Cascading Termination.
Cooperating Processes :
Independent Process :
A process which is not affected by any other process and which does not affect any other process.
Co-Operative Process :
A process which is affected by other processes or which affects other processes.
Process cooperation is provided so that a large task can be divided into subtasks, each of which is executed in parallel; the subtasks communicate with one another and combine their results. Equivalently, a program may be divided into several processes such that each process is assigned a subtask and they are executed in parallel; they are able to communicate with one another and produce the result. The Operating System must provide such an environment.
For example, a print program produces characters and they are consumed by the printer driver.
The buffer used may be either a bounded buffer or an unbounded buffer. In the case of the unbounded buffer, there is no limit on the size of the buffer : the consumer may have to wait for an item, but the producer can always produce items. In the case of the bounded buffer, there is a limit on the size of the buffer.
The producer waits if the buffer is full and the consumer waits if the buffer is empty. The buffer may be provided by the operating system through the Inter Process Communication ( IPC ) facility. Sometimes the buffer is explicitly created by the user program; in this case, the buffer has limited size and is sharable.
The following program shows a solution to the shared bounded-buffer producer and consumer problem. The producer and consumer processes share the following variables. The solution is correct, but it can use only BUFFER_SIZE - 1 elements of the buffer.
Shared data :
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Producer process :
item nextProduced;
while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                       /* do nothing -- buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer process :
item nextConsumed;
while (1) {
    while (in == out)
        ;                       /* do nothing -- buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
The shared buffer is implemented as a circular array with two logical pointers, in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when (( in + 1 ) % BUFFER_SIZE ) == out.
The producer has a local variable nextProduced and the consumer has a local variable nextConsumed.
In the message passing system, we have two important operations : send and receive. The messages sent may be of either fixed or variable size. If fixed-size messages are used, the system-level implementation becomes easy but the task of programming becomes more difficult. If variable-size messages are used, the system-level implementation becomes more difficult but the programmer's task becomes easier.
Here we are not discussing the hardware implementation; we discuss the logical implementation of the communication link. There are several methods for the logical implementation of a communication link.
a) Direct or Indirect communication
b) Symmetric or Asymmetric communication
Direct Communication :
In the message passing system, each process must have a unique name. In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the two operations are defined as :
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
This scheme has the property of Symmetry i.e., both the sender and the receiver processes must name the other process participating in the communication.
In the Asymmetric scheme, only the sender names the recipient; the recipient is not required to name the sender. Here the operations are defined as :
send(P, message) – send a message to process P
receive(id, message) – receive a message from any other process
The variable ‘id’ is dynamically set to the name of the process with which communication has taken place.
Indirect Communication :
In Indirect Communication, messages are sent to and received from a mail box ( or port ). A mail box is an object in which messages can be placed and from which messages can be removed. Each mail box has a unique identification. In this scheme, a process can communicate with other processes through a number of different mail boxes. Two processes can communicate only if they share a mail box.
Suppose we have three processes P1, P2 and P3, and all of them share a mail box A. Suppose P1 sends a message to A; the other processes are able to receive the message from A. Now the question is : which process will receive the message sent by P1 from A ? This depends upon which of the following schemes is used.
a) Allow a link to be associated with at most two processes.
b) Allow at most one process at a time to execute a receive operation.
c) Allow the system to select arbitrarily which process will receive the message; the sender is then notified of who received it.
A mail box may be owned either by a process or by the operating system. If a mail box is owned by a process, it is a part of the address space of that process.
Each mail box is associated with an owner and users. The owner of the mail box can receive messages through the mail box, and the users of the mail box can send messages through it.
Since each mail box has a unique owner, there is no confusion about who should receive a message sent to the mail box.
If the mail box is owned by the operating system, it is independent of any particular process, and the OS must provide mechanisms for :
a) creating a new mail box
b) sending and receiving messages through the mail box
c) deleting a mail box
Synchronization :
Message passing between two processes may be Synchronous or Asynchronous, also called Blocking or Non-Blocking.
a) Blocking Send : The sending process is blocked until the message is received by the receiving process or by the mail box.
b) Non-Blocking Send : The sending process sends the message and continues its execution.
c) Blocking Receive : The receiving process is blocked until a message is available to it.
d) Non-Blocking Receive : The receiving process retrieves either a valid message or a null.
Buffering :
Whether the communication is direct or indirect, the messages that are transmitted are stored in a buffer, which is a temporary queue. The following are three ways to implement such a queue.
a) Zero Capacity : The queue cannot store any messages. The sender must block until the recipient receives the message.
b) Bounded Capacity : The queue has finite length, say n i.e., the buffer is able to store n messages in it. In this case, as long as the queue is not full, the sender can send messages and they are stored in the buffer. If the queue is full, the sender must block until space is available in the queue.
c) Unbounded Capacity : The sender never blocks. There is no restriction on the length of the queue, so the sender can send any number of messages.
The zero-capacity case is sometimes known as a message passing system with no buffering. The other two cases are known as message passing systems with automatic buffering.