
Chapter 4: Processes

A process is a program in execution; it is an active entity. The program counter always
specifies the address of the next instruction to be executed in that process. Sometimes many
processes are associated with one program.

Process State :
As a process executes, it changes state. The state of a process is defined in part by its
current activity. The following are the various states of a process.

i) new: The process is being created.

ii) running: Instructions are being executed.

iii) waiting: The process is waiting for some event to occur.

iv) ready: The process is waiting to be assigned to a processor.

v) terminated: The process has finished execution.

Process Control Block (PCB) :


Each process is represented in the operating system by a process control block (PCB), also
called a task control block. The PCB contains the following information associated with the process.

i) Process state

ii) Program counter


iii) CPU registers

iv) CPU scheduling information

v) Memory-management information

vi) Accounting information

vii) I/O status information

The following is the block diagram of the PCB.
Process No. : It contains the identification of the process.

Process State : It contains information about the state of the process. The entry in this field
may be new / ready / running / waiting / terminated.

Pointers or CPU Scheduling Information : This field contains pointers to the various
scheduling queues, pointers to other processes, the process priority, etc.

Program Counter : It contains the address of the next instruction to be executed in the process.

Registers : Each process uses various registers such as the accumulator, index registers,
stack pointers and general-purpose registers. The contents of these registers are saved here
during process switching. The following diagram shows the concept of process switching and
the usage of registers.

I/O Information : This field lists the various I/O devices allocated to this process, the list of
open files, etc.

Accounting Information : This field includes the amount of CPU time used by the
process, the memory required for the process, etc.
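The fields listed above can be sketched as a C structure. This is only a simplified, hypothetical layout for illustration (field names such as program_counter and mem_base are invented here; real kernels keep far more state, e.g. Linux's task_struct):

```c
#include <assert.h>

/* Hypothetical process states, matching the five states listed earlier. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A simplified, illustrative PCB; all field names are made up for this sketch. */
struct pcb {
    int             pid;               /* process number (identification)      */
    enum proc_state state;             /* new / ready / running / ...          */
    unsigned long   program_counter;   /* address of next instruction          */
    unsigned long   registers[8];      /* saved general-purpose registers      */
    int             priority;          /* CPU scheduling information           */
    struct pcb     *next;              /* link into a scheduling queue         */
    unsigned long   mem_base, mem_limit; /* memory-management information      */
    unsigned long   cpu_time_used;     /* accounting information               */
    int             open_files[16];    /* I/O status information               */
};

/* Initialize a PCB for a newly created process (state starts as NEW). */
static struct pcb pcb_new(int pid)
{
    struct pcb p = {0};
    p.pid = pid;
    p.state = NEW;
    return p;
}
```

As the process moves through its life cycle, the operating system updates the state field and, on every context switch, saves the register contents into this structure.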

Scheduling Queues :
Whenever processes enter the system, they are put into the Job Queue.
The processes are connected as a linked list in the job queue, which contains all the
processes in the system.

The processes that are placed in main memory, ready and waiting for the CPU to be
allocated to them, are said to be in the Ready Queue. This queue is also maintained as a
linked list.

Device Queues :

There are several device queues in the system, one for each I/O device.

In a multiprogramming system, once a process is allocated the CPU, it executes for some
time and may then require I/O. If the I/O device is free, it serves the request, and in that
time the CPU switches to another process. While the second process is executing, suppose it
also requires the same I/O device, which is still held by the first process. The operating
system therefore maintains a queue for each I/O device, consisting of all the requesting
processes.

The list of processes waiting for a particular device is called a Device Queue.

Ready Queue And Various I/O Device Queues :

Queuing Diagram :

A newly created process is first put in the job queue and then placed in the ready queue.
One process is selected from the ready queue and the CPU is allotted to it. While the
process executes, the following events may occur.

i) The process issues an I/O request. In this case, the process is placed in an I/O

queue.

ii) The process is forcibly removed from the CPU because of an interrupt.

iii) The process may create a new process ( child process ); only after completion of the

child process will it continue.

In all these cases, the process switches from the running state to the waiting state, and it is
eventually put back in the ready queue. Finally, the process terminates and is removed from
all the queues ( ready and device queues ).

Schedulers :

In a multiprogramming environment, processes move from one queue to another
throughout their lifetime. The scheduler decides the movement of each process,
i.e., the operating system must select jobs from the queues and decide their destinations.
This selection is done by the scheduler.

There are two types of schedulers: a) Short Term and b) Long Term.

In multiprogramming, all newly created processes are placed in the job queue on the disk.
The Long Term Scheduler ( Job Scheduler ) selects some processes from the job queue and
places them in main memory.

Now more than one process is located in main memory ( in the ready queue ). The
Short Term Scheduler ( CPU Scheduler ) selects one of these processes, and the CPU is
allocated to it.

The main difference between the two schedulers is the frequency of their execution. The
short term scheduler must select a process for the CPU very frequently, so the time required
to select a job must be very small.

The long term scheduler selects a process from the job queue, i.e., from the disk, and
places it in main memory. The time required to select a job from the disk is somewhat
greater than that required by the short term scheduler.

The long term scheduler is invoked only when there is empty space in memory, i.e.,
after a job terminates. But the short term scheduler is invoked many times, as long as there
are jobs in main memory.

The long term scheduler must select a mix of CPU-bound and I/O-bound processes to
get a good process mix, so that we can use our resources efficiently.

Some processes are I/O bound and others are CPU bound. An I/O-bound process spends
most of its time with I/O devices, while a CPU-bound process spends most of its time with
the CPU. If main memory is full of I/O-bound processes, they all wait in the I/O device
queues and the CPU sits idle. If main memory is full of CPU-bound processes, all the device
queues are empty while the CPU stays busy. Both of these cases are inefficient. So, we
require a process mix, i.e., some CPU-bound and some I/O-bound processes together in main
memory.

In Time Sharing operating systems, there is an additional, intermediate level of scheduler
known as the Medium Term Scheduler.

The idea behind the medium term scheduler is: “Sometimes it is advantageous to remove a
process from main memory to reduce the degree of multiprogramming and continue with
the remaining processes.” After some time, the removed process is placed in main memory
again and its execution resumes from where it previously stopped.

The following diagram shows how the medium term scheduler works.

Context Switch :
“When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process.”

In multiprogramming, the CPU switches from one process to another. When the CPU
switches away from a process P0, the state of P0 is saved in its PCB, and the saved state
of the next process P1 is loaded from its PCB; while P0 is executing, P1 is idle, and vice
versa. This saving and reloading of process state is known as a Context Switch.

Operations on Processes :

We know that processes execute concurrently. They must be created and deleted
dynamically, and sometimes they communicate with other processes. The operating
system must provide mechanisms for all of these.

Process Creation :

A process may create several new processes by using the create-process system call during
its execution. The creating process is called the Parent Process, and the newly created
processes are called Child Processes. While the child processes execute, they may create
other new processes, and so on. This forms a Process Tree structure.

We know that each process is assigned a specific task and requires some resources to
achieve its goal. Generally each process needs resources such as CPU time, memory, files,
I/O devices, etc. A child process may obtain a subset of the resources allotted to its parent,
or it may get resources directly from the operating system. The parent may partition its
resources among its children, or it may share some resources with its children while they
execute concurrently.

When a process creates a new process, there are two possibilities in terms of execution:

i) The parent continues to execute concurrently with its children.

ii) The parent waits until the child terminates.

There are also two possibilities in terms of memory space:

a) The child process occupies the same memory space as the parent.

b) The child process occupies some other part of memory.

In the UNIX operating system, each process is identified by a process identifier ( pid ),
which is a unique integer. A new process is created by the fork system call. The new process
consists of a copy of the address space of the original process, and the parent is able to
communicate with the child.

The return code of fork is zero for the child process and a non-zero value ( the child's pid )
for the parent process. The execlp system call ( a member of the exec family ) is typically
used after fork: it loads a binary file into memory, destroying the memory image of the
process that calls it, and starts executing that file. After the child completes, the parent
process continues its execution.

The following C program shows how these UNIX system calls are used.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pid;

    pid = fork();

    if (pid < 0) {            /* error indication */
        fprintf(stderr, "fork failed\n");
        exit(1);
    }
    if (pid == 0) {           /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    if (pid > 0) {            /* parent process */
        wait(NULL);
        printf("Child is completed.......\n");
    }
    exit(0);
}

In the above program, the parent creates a child using the fork system call. pid is zero for
the child and a positive integer ( the child's pid ) for the parent. Now we have two processes.
The execlp system call loads the specified file into the child's memory space and executes it.
The parent waits until the child completes its execution.

Processes Tree on a UNIX System :

Process Termination :

A process terminates when it finishes executing its final instruction and requests the
operating system to delete it using the exit ( ) system call. When the process terminates, it
may return some status data to its parent, which collects it through the wait ( ) system call.
All the resources allotted to the process are then deallocated.

Sometimes a process is terminated through the abort ( ) system call invoked by its parent.
A parent may invoke this call when:

a) The child has exceeded its usage of the resources allotted to it.

b) The task assigned to the child is no longer required by the parent.

c) The parent itself is terminating, and the operating system does not allow a child

process to continue executing if its parent terminates.
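The exit ( ) / wait ( ) interaction can be seen in a small sketch using the POSIX calls. The helper name collect_child_status and the status value passed in are invented for this example; the child hands a status to _exit, and the parent recovers it with wait:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with the given status, and return the status
 * the parent collects through wait(). Returns -1 on failure. */
int collect_child_status(int child_status)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                 /* fork failed                      */
    if (pid == 0)
        _exit(child_status);       /* child: terminate immediately     */

    int status;
    wait(&status);                 /* parent: block until child exits  */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A call such as collect_child_status(7) returns 7 in the parent, showing how the status data flows from the terminating child back to the waiting parent.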

Cascading :

If a process terminates, either normally or abnormally, all of its children are also killed.
This phenomenon is known as Cascading Termination.

Cooperating Processes :

In multiprogramming, several processes execute concurrently, and each process is either an
independent process or a cooperating process.

Independent Process :

A process that is not affected by any other process and does not affect any other process.

Co-Operative Process :

A process that can be affected by other processes or can affect other processes.

The following are the advantages of cooperating processes.

a) Information Sharing : The operating system must provide an environment in

which multiple processes can share information.

b) Computation Speedup : To complete a task faster, we divide it into subtasks,

each of which executes in parallel. Finally they communicate with one another

and give the result. The operating system must provide such an environment.

c) Modularity : We may construct the system in a modular fashion, i.e., the

program is divided into several processes such that each process is assigned a

task and they execute in parallel. They are able to communicate with one

another and give the result. The operating system provides such an environment.

d) Convenience : The operating system provides an environment in which several

processes are able to communicate with one another in a convenient way.

Producer and Consumer Problem :

It is an example of cooperating processes. A producer produces information that is
consumed by a consumer.

For example, a print program produces characters that are consumed by
the printer driver.

A compiler produces assembly code that is consumed by the assembler.

The assembler produces object code that is consumed by the loader.

To allow the producer and consumer processes to run concurrently, we require a
buffer that can be filled by the producer and emptied by the consumer. The producer fills
the buffer by producing items, and the consumer empties it by taking items from it. The
two processes must be synchronized, so that the consumer does not try to consume an item
that has not yet been produced, i.e., the consumer must wait until the required item is
produced.

The buffer used may be either bounded or unbounded. In the case of an unbounded
buffer, there is no limit on the size of the buffer: the consumer may have to wait for an
item, but the producer can always produce new items. In the case of a bounded buffer,
there is a limit on the size of the buffer.

The producer waits if the buffer is full, and the consumer waits if the buffer is empty. The
buffer may be provided by the operating system through the Inter Process Communication
( IPC ) facility, or it may be explicitly created by the user program. In the latter case, the
buffer has a limited size and is shared.

The following code shows a solution to the shared bounded-buffer producer-consumer
problem. The producer and consumer processes share the variables below. The solution is
correct, but it can only use BUFFER_SIZE - 1 elements of the buffer.

Shared data :

#define BUFFER_SIZE 10

typedef struct {

...

} item;

item buffer[BUFFER_SIZE];

int in = 0;

int out = 0;

Bounded-Buffer – Producer Process :

item nextProduced;

while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded-Buffer – Consumer Process :

item nextConsumed;

while (1) {
    while (in == out)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}

The shared buffer is implemented as a circular queue with two logical pointers, in and
out. The variable in points to the next free position in the buffer; out points to the first full
position in the buffer. The buffer is empty when in == out; the buffer is full when
(( in + 1 ) % BUFFER_SIZE ) == out.

The producer has a local variable nextProduced and the consumer has a local variable
nextConsumed.
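The in / out arithmetic above can be exercised in a single-threaded sketch. The function names buf_put and buf_get are invented for this illustration; the point is to show the full and empty tests, and why only BUFFER_SIZE - 1 slots are usable:

```c
#include <stdbool.h>

#define BUFFER_SIZE 10

static int buffer[BUFFER_SIZE];
static int in = 0;    /* next free position  */
static int out = 0;   /* first full position */

/* Producer step: returns false (would block) when the buffer is full. */
static bool buf_put(int item)
{
    if ((in + 1) % BUFFER_SIZE == out)
        return false;              /* full: only BUFFER_SIZE-1 slots usable */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return true;
}

/* Consumer step: returns false (would block) when the buffer is empty. */
static bool buf_get(int *item)
{
    if (in == out)
        return false;              /* empty */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return true;
}
```

Starting from an empty buffer, exactly BUFFER_SIZE - 1 consecutive buf_put calls succeed before the full condition (( in + 1 ) % BUFFER_SIZE ) == out holds, and items come back out in FIFO order.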

Inter Process Communication :

In the above method of cooperation, the processes share a common buffer, and code
implementing the buffer must be written in the application program. Another way to
achieve the same effect is for the buffer to be provided by the operating system by means of
the Inter Process Communication ( IPC ) facility. IPC provides an environment in which
processes can communicate with one another without writing buffer code in the application
program. IPC is provided by a message passing system, in which messages are transmitted
between a sender and a receiver.

A message passing system has two important operations, send and receive. The messages
sent may be of either fixed or variable size. If fixed-size messages are used, the system-level
implementation becomes easy, but the task of programming becomes more difficult.

If variable-size messages are used, the system-level implementation becomes more
difficult, but the programmer's task becomes easier.

If process P and process Q want to communicate, the OS establishes a communication
link between them. The link can be implemented in both hardware and software. We will
not discuss the hardware implementation here, but consider the logical implementation of
the communication link. There are several methods for the logical implementation of a
communication link:

a) Direct or Indirect

b) Symmetric or Asymmetric

c) Automatic or Explicit buffering

d) Sent by copy or reference

Direct communication :

In the message passing system, each process must have a unique name. In direct
communication, each process that wants to communicate must explicitly name the recipient
or sender of the communication. In this scheme, the two operations are defined as:

send (P, message) – send a message to process P

receive (Q, message) – receive a message from process Q

The communication link in this scheme has the following properties.

a) Links are established automatically.

b) A link is associated with exactly one pair of communicating processes.

c) Between each pair there exists exactly one link.

d) The link may be unidirectional, but is usually bi-directional.

This scheme exhibits Symmetry in addressing, i.e., both the sender and receiver
processes must name the other process participating in the communication.

In Asymmetric addressing, only the sender names the recipient; the recipient is not
required to name the sender.

The following two operations are defined in asymmetric communication:

send (P, message) – send a message to process P

receive (id, message) – receive a message from any process

The variable id is dynamically set to the name of the process with which communication
has taken place.
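The send / receive pair can be approximated with a POSIX pipe between a parent and its child. This is only a sketch of direct, link-based communication under the assumption of a UNIX-like system; the helper name pipe_roundtrip and the message contents are invented here. The parent "sends" a message down one pipe, and the child echoes it back through a second pipe so the result can be checked:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent "sends" msg to the child; the child "receives" it and echoes it
 * back through a second pipe. Returns the number of bytes echoed back
 * into buf, or -1 on error. */
ssize_t pipe_roundtrip(const char *msg, char *buf, size_t bufsz)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) < 0 || pipe(to_parent) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                       /* child: receive, then send back */
        char tmp[256];
        ssize_t n = read(to_child[0], tmp, sizeof tmp);
        if (n > 0)
            write(to_parent[1], tmp, (size_t)n);
        _exit(0);
    }

    /* parent: send the message (including its NUL), then read the echo */
    write(to_child[1], msg, strlen(msg) + 1);
    close(to_child[1]);
    ssize_t n = read(to_parent[0], buf, bufsz);
    wait(NULL);
    return n;
}
```

Note that each pipe is unidirectional, which is why two pipes are needed for the round trip; this mirrors the property that a direct link may be unidirectional even though it is usually treated as bidirectional.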

Indirect Communication :

The disadvantage of both schemes ( symmetric and asymmetric ) is that if we change the
name of one process, we need to examine the definitions of all other processes and make
changes where necessary.

In Indirect Communication, messages are sent to and received from a mailbox ( or port ).
A mailbox is an object in which messages can be stored, and each mailbox has a unique
identification. In this scheme, a process can communicate with other processes through
different mailboxes; two processes can communicate only if they share a mailbox.

Two operations in this scheme are given by

send(A, message) – send a message to mailbox A

receive(A, message) – receive a message from mailbox A

In this scheme, the communication link has the following properties.

a) A link is established only if the processes share a common mailbox.

b) A link may be associated with many processes.

c) Each pair of processes may share several communication links.

d) Link may be unidirectional or bi-directional.
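Indirect communication can be illustrated with a tiny in-memory mailbox. This is a conceptual sketch only: the names mbox_send and mbox_recv are invented, there is no real concurrency, and a real OS mailbox (e.g. a POSIX message queue) would live outside any single process's address space:

```c
#include <stdbool.h>
#include <string.h>

#define MBOX_CAP 8    /* capacity of the mailbox queue   */
#define MSG_LEN  32   /* maximum message length          */

/* A mailbox: messages queue up FIFO until some process receives them. */
struct mailbox {
    char msgs[MBOX_CAP][MSG_LEN];
    int head, count;
};

/* send(A, message): append a message to mailbox A; false if A is full. */
static bool mbox_send(struct mailbox *a, const char *msg)
{
    if (a->count == MBOX_CAP)
        return false;
    int tail = (a->head + a->count) % MBOX_CAP;
    strncpy(a->msgs[tail], msg, MSG_LEN - 1);
    a->msgs[tail][MSG_LEN - 1] = '\0';
    a->count++;
    return true;
}

/* receive(A, message): take the oldest message from A; false if empty. */
static bool mbox_recv(struct mailbox *a, char *out)
{
    if (a->count == 0)
        return false;
    strcpy(out, a->msgs[a->head]);
    a->head = (a->head + 1) % MBOX_CAP;
    a->count--;
    return true;
}
```

Any process with access to the same struct mailbox can call mbox_send or mbox_recv, which is the sense in which processes "share" the mailbox: the sender and receiver never name each other, only the mailbox.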

Suppose we have three processes P1, P2 and P3, all sharing a mailbox A. Suppose P1 sends a
message to A while the other processes try to receive from A. Now the question is: who will
receive the message sent by P1? The answer depends on one of the following choices.

a) Allow a link to be associated with at most two processes.

b) Allow only one process at a time to execute a receive operation.

c) Allow the system to select the receiver arbitrarily. The sender is then notified

who the receiver was.

A mailbox is owned either by a process or by the OS. If a mailbox is owned by a process, it
is part of the address space of that process.

Every mailbox is associated with an owner and users. The owner of the mailbox can
receive messages through the mailbox, while the users of the mailbox can send messages
through it.

Since each mailbox has a unique owner, the sending process has no confusion about who
receives the message from the mailbox.

Suppose mailbox A is owned by process P; if P terminates, A immediately disappears.

If the mailbox is owned by the OS, it is said to be independent, and the OS must provide
mechanisms to:

a) Create a new mailbox

b) Send and receive messages through the mailbox

c) Destroy a mailbox when it is no longer required

Synchronization :

Message passing between two processes may be Synchronous or Asynchronous, also
called Blocking or Non-Blocking.

a) Blocking Send : The sending process is blocked until the message is received by

the receiving process or mailbox.

b) Non-Blocking Send : The sending process sends the message and continues its

execution.

c) Blocking Receive : The receiving process is blocked until a message is available

to it.

d) Non-Blocking Receive : The receiving process receives either a null message or

a valid message and continues its execution.

Buffering :

Whether the communication is direct or indirect, messages in transit are stored in a
buffer, which is a temporary queue. The following are three ways to implement such a
queue.

a) Zero Capacity : The queue does not store any messages. The sender must block

until the receiver receives the message.

b) Bounded Capacity : The queue has a finite length, say n, i.e., the buffer is able to

store n messages. In this case, as long as the queue is not full, the sender sends

messages and they are stored in the buffer. If the queue is full, the sender

must block until the receiver receives at least one message.

c) Unbounded Capacity : The sender never blocks. There is no restriction on the

length of the queue, so the sender can send any number of messages.

The zero-capacity case is sometimes known as a message passing system with no
buffering. The other two cases are known as message passing systems with automatic
buffering.

************ End of Chapter 4 ************
