
Process Management

• A process can be thought of as a program in execution.
• A process will need certain resources—such as CPU time, memory, files, and I/O devices—to accomplish its task.
• These resources are allocated to the process by the
OS either when it is created or while it is executing.
• Although traditionally a process contained a single thread of control as it ran, most modern operating systems now support processes that have multiple threads.
• The operating system is responsible for the following
activities in connection with process and thread
management:
• the creation and deletion of both user and system
processes
• the scheduling of processes
• the provision of mechanisms for synchronization,
communication
• deadlock handling for processes.
• A process is more than the program code, which is
sometimes known as the text section.
• It also includes the current activity, as represented by
the value of the program counter and the contents
of the processor’s registers.
• A process generally also includes the process stack,
which contains temporary data.
• A process may also include a heap, which is memory that is dynamically allocated during process run time.
• Process State:
• As a process executes, it changes state.
• The state of a process is defined in part by the current
activity of that process. Each process may be in one of
the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur
(such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
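The five states above, and the transitions between them described in these slides (e.g., running to waiting on an I/O request, waiting to ready on I/O completion), can be sketched as a small table. This is an illustrative model only, not any particular kernel's implementation:

```python
from enum import Enum

class ProcessState(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the simplified model described in the slides
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},            # admitted by the OS
    ProcessState.READY: {ProcessState.RUNNING},        # scheduler dispatch
    ProcessState.RUNNING: {ProcessState.READY,         # interrupt / preemption
                           ProcessState.WAITING,       # I/O or event wait
                           ProcessState.TERMINATED},   # exit
    ProcessState.WAITING: {ProcessState.READY},        # I/O or event completion
    ProcessState.TERMINATED: set(),                    # final state
}
```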
Process Control Block:
• Each process is represented in the operating system by a process control block (PCB)—also called a task control block.
• It contains many pieces of information associated with a specific process, including these:
• CPU-scheduling information.
• Memory-management information.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the
list of I/O devices allocated to the process, a list of
open files, and so on.
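As a sketch, the PCB fields listed above (plus the program counter and registers mentioned earlier as part of a process's context) might be grouped into a record like the following. The field names are hypothetical and not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block (field names are hypothetical)."""
    pid: int
    state: str = "new"                  # new / ready / running / waiting / terminated
    program_counter: int = 0            # saved PC for the next instruction
    registers: dict = field(default_factory=dict)        # saved CPU registers
    scheduling_info: dict = field(default_factory=dict)  # CPU-scheduling information
    memory_info: dict = field(default_factory=dict)      # memory-management information
    accounting: dict = field(default_factory=dict)       # CPU/real time used, limits
    open_files: list = field(default_factory=list)       # I/O status information
```

Each PCB gets its own fresh dictionaries and list (via `default_factory`), so per-process information is never shared accidentally.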
• Process Scheduling
• The objective of time sharing is to switch the CPU
among processes so frequently that users can
interact with each program while it is running.
• To meet these objectives, the process scheduler
selects an available process (possibly from a set of
several available processes) for program execution on
the CPU.
• For a single-processor system, there will never be
more than one running process.
• If there are more processes, the rest will have to wait
until the CPU is free and can be rescheduled.
• Schedulers
• The operating system must select, for scheduling purposes, processes from the scheduling queues in some fashion. The selection process is carried out by the appropriate scheduler.
• The long-term scheduler, or job scheduler, selects processes from the pool of processes waiting on mass storage and loads them into memory for execution.
• The short-term scheduler, or CPU scheduler,
selects from among the processes that are ready to
execute and allocates the CPU to one of them.
• Context Switch
• Interrupts cause the operating system to change the CPU from its current task and to run a kernel routine.
• When an interrupt occurs, the system needs to save the current context of the process running on the CPU, so that it can restore that context when its processing is done, essentially suspending the process and then resuming it.
• The context is represented in the PCB of the process.
• Generically, we perform a state save of the current state of the CPU, be it in kernel or user mode, and then a state restore to resume operations.
• Switching the CPU to another process requires
performing a state save of the current process and a
state restore of a different process.
• This task is known as a context switch.
• When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
• Context-switch time is pure overhead, because the
system does no useful work while switching.
Inter-process Communication
• Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
• A process is independent if it cannot affect or be
affected by the other processes executing in the system.
• Any process that does not share data with any other
process is independent.
• A process is cooperating if it can affect or be affected by
the other processes executing in the system.
• Clearly, any process that shares data with other
processes is a cooperating process.
• Why should processes co-operate?
• There are several reasons for providing an
environment that allows process cooperation:
• Information sharing.
• Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which can execute in parallel with the others (such a speedup is possible only if the computer has multiple processing cores).
• Modularity: We may want to construct the system in
a modular fashion, dividing the system functions into
separate processes.
• Convenience.
• Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.
• There are two fundamental models of interprocess
communication:
• shared memory
• message passing
• In the shared-memory model, a region of memory
that is shared by cooperating processes is
established.
• Processes can then exchange information by reading
and writing data to the shared region.
• In the message passing model, communication takes
place by means of messages exchanged between the
cooperating processes.
• Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be
avoided.
• Shared memory allows maximum speed and
convenience of communication.
• Shared-Memory Systems
• Interprocess communication using shared memory
requires communicating processes to establish a region
of shared memory.
• Typically, a shared-memory region resides in the address
space of the process creating the shared-memory
segment.
• Other processes that wish to communicate using this
shared-memory segment must attach it to their address
space.
• They can then exchange information by reading and
writing data in the shared areas.
• The processes are also responsible for ensuring that they
are not writing to the same location simultaneously.
• A producer process produces information that is consumed
by a consumer process.
• To allow producer and consumer processes to run
concurrently, we must have available a buffer of items that
can be filled by the producer and emptied by the consumer.
• This buffer will reside in a region of memory that is shared
by the producer and consumer processes.
• A producer can produce one item while the consumer is
consuming another item.
• The producer and consumer must be synchronized, so that
the consumer does not try to consume an item that
has not yet been produced.
The shared-memory segments used this way are known as buffers.
• An unbounded buffer places no practical limit on the size of the buffer.
• The consumer may have to wait for new items, but the producer can always produce new items.
• The bounded buffer assumes a fixed buffer size.
• In this case, the consumer must wait if the buffer is
empty, and the producer must wait if the buffer is
full.
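The bounded-buffer behavior described above (the consumer waits when the buffer is empty, the producer waits when it is full) can be sketched with a fixed-capacity queue shared by two threads; the threads stand in for two cooperating processes sharing a memory region:

```python
import queue
import threading

BUFFER_SIZE = 3                            # fixed capacity: a bounded buffer
buf = queue.Queue(maxsize=BUFFER_SIZE)     # put() blocks when full, get() when empty
consumed = []

def producer(items):
    for item in items:
        buf.put(item)                      # producer waits here if the buffer is full

def consumer(count):
    for _ in range(count):
        consumed.append(buf.get())         # consumer waits here if the buffer is empty

items = list(range(10))
t_prod = threading.Thread(target=producer, args=(items,))
t_cons = threading.Thread(target=consumer, args=(len(items),))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()               # with one producer and one consumer, order is FIFO
```

The queue also handles the synchronization requirement above: the consumer can never consume an item that has not yet been produced.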
• Message-Passing Systems
• Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.
• A message-passing facility provides at least two operations:
send(message) and receive(message).
• Messages sent by a process can be of either fixed or variable size.
• For two processes to communicate, a communication link must exist between them. Such a link can be implemented in several ways:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
• With indirect communication, the messages are sent
to and received from mailboxes, or ports.
• A mailbox can be viewed abstractly as an object into
which messages can be placed by processes and
from which messages can be removed.
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from
mailbox A.
• A mailbox may be owned either by a process or by
the operating system.
• A process that creates a mailbox to receive the
messages is the owner.
• The process which puts messages into the mailbox is
the user.
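A minimal sketch of indirect communication through a mailbox, using an in-process queue to stand in for an OS-managed port (the function names below are illustrative, not a real OS API):

```python
import queue

# Mailboxes modeled as named message queues (a stand-in for OS-managed ports)
mailboxes = {}

def create_mailbox(name):
    mailboxes[name] = queue.Queue()

def send(mailbox, message):          # send(A, message) from the slides
    mailboxes[mailbox].put(message)

def receive(mailbox):                # receive(A, message) from the slides
    return mailboxes[mailbox].get()  # blocks until a message is available

create_mailbox("A")
send("A", "hello")
msg = receive("A")
```

Because senders and receivers name the mailbox rather than each other, many processes can share mailbox A without knowing one another's identities.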
Any questions??
CPU Scheduling
• CPU scheduling is the basis of multiprogrammed operating systems.
• By switching the CPU among processes, the
operating system can make the computer more
productive.
• CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to
the waiting state.
2. When a process switches from the running state to
the ready state.
3. When a process switches from the waiting state to
the ready state.
4. When a process terminates.
• When scheduling takes place only under
circumstances 1 and 4, we say that the scheduling
scheme is nonpreemptive or cooperative.
• Otherwise, it is preemptive.
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Another component involved in the CPU-scheduling
function is the dispatcher.
• The dispatcher is the module that gives control of the
CPU to the process selected by the short-term
scheduler.
• Scheduling Criteria
1. CPU utilization: In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
2. Throughput: One measure of work is the number of processes that are completed per time unit, called throughput.
3. Turnaround time: The interval from the time of submission of a process to the time of completion. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time a process spends waiting. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time: The time from the submission of a request until the first response is produced.
• It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
• Scheduling Algorithms
• First-Come, First-Served Scheduling
• Shortest-Job-First Scheduling
• Priority Scheduling
• Round-Robin Scheduling
First-Come, First-Served Scheduling (non-preemptive)
• With this scheme, the process that requests the CPU
first is allocated the CPU first.
• The implementation of the FCFS policy is easily
managed with a FIFO queue.
• Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process Burst Time
P1 24
P2 3
P3 3

• If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart.

P1 P2 P3
24msec 3msec 3msec

• The waiting time for P1 is 0 msec.
• The waiting time for P2 is 24 msec.
• The waiting time for P3 is 27 msec.
• The average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
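The FCFS waiting times above can be reproduced with a few lines of code, assuming the burst lengths (24, 3, and 3 msec) implied by the stated waiting times:

```python
def fcfs_waiting(bursts):
    """bursts: list of (name, burst_ms) in arrival order; all arrive at time 0."""
    wait, elapsed = {}, 0
    for name, burst in bursts:
        wait[name] = elapsed          # each process waits for all bursts queued before it
        elapsed += burst
    return wait

w = fcfs_waiting([("P1", 24), ("P2", 3), ("P3", 3)])
avg = sum(w.values()) / len(w)        # (0 + 24 + 27) / 3 = 17 msec
```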
• Shortest-Job-First Scheduling (non-preemptive)
• This algorithm associates with each process the
length of the process’s next CPU burst.
• When the CPU is available, it is assigned to the
process that has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same,
FCFS scheduling is used to break the tie.
• With the same process scenario as above, the Gantt chart is as below:

P2 P3 P1
3msec 3msec 24msec

• The waiting time for P1 is 6 msec.
• The waiting time for P2 is 0 msec.
• The waiting time for P3 is 3 msec.
• The average waiting time is (6 + 0 + 3)/3 = 3 milliseconds.
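Non-preemptive SJF is just FCFS applied after sorting by burst length; a stable sort preserves FCFS order for equal bursts, which is exactly the tie-breaking rule stated above. Using the same bursts (24, 3, 3 msec) as the FCFS example:

```python
def sjf_waiting(bursts):
    """Non-preemptive SJF. bursts: list of (name, burst_ms), all arriving at time 0.
    Python's sort is stable, so equal bursts keep their FCFS (arrival) order."""
    order = sorted(bursts, key=lambda p: p[1])   # shortest next CPU burst first
    wait, elapsed = {}, 0
    for name, burst in order:
        wait[name] = elapsed
        elapsed += burst
    return wait

w = sjf_waiting([("P1", 24), ("P2", 3), ("P3", 3)])
avg = sum(w.values()) / len(w)                   # (6 + 0 + 3) / 3 = 3 msec
```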
Any questions??
• Priority Scheduling (non-preemptive)
• A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Consider processes P1–P5, all at time 0, with burst times of 10, 1, 2, 1, and 5 msec respectively, and priorities ordered P2 (highest), P5, P1, P3, P4 (lowest).
• The Gantt chart is as follows:

P2 P5 P1 P3 P4
1msec 5msec 10msec 2msec 1msec

• The waiting time for P1 is 6 msec.
• The waiting time for P2 is 0 msec.
• The waiting time for P3 is 16 msec.
• The waiting time for P4 is 18 msec.
• The waiting time for P5 is 1 msec.
• The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
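A sketch of non-preemptive priority scheduling for this example. The numeric priorities below (3, 1, 4, 5, 2) are illustrative values consistent with the ordering implied by the waiting times above; only their ordering matters, and a smaller number means higher priority:

```python
def priority_waiting(procs):
    """procs: list of (name, burst_ms, priority); all arrive at time 0.
    Lower priority number = higher priority; the stable sort keeps FCFS order on ties."""
    order = sorted(procs, key=lambda p: p[2])
    wait, elapsed = {}, 0
    for name, burst, _ in order:
        wait[name] = elapsed
        elapsed += burst
    return wait

# Illustrative priority numbers (only their ordering is given by the slides)
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
w = priority_waiting(procs)
avg = sum(w.values()) / len(w)        # (6 + 0 + 16 + 18 + 1) / 5 = 8.2 msec
```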
• Priority scheduling can be either preemptive or non
preemptive. When a process arrives at the ready
queue, its priority is compared with the priority of
the currently running process.
• A preemptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived
process is higher than the priority of the currently
running process.
• For the following processes in the ready queue, calculate the average waiting time and TAT with preemptive priority scheduling. The ready queue holds P1–P5 with burst times of 10, 1, 2, 1, and 5 sec respectively, with priorities ordered P2 (highest), P5, P1, P3, P4 (lowest), as in the previous example. Note that at time t = 4 sec, another process P6, with priority 0 and a burst time of 2 sec, joins the ready queue.

P2 P5 P6 P5 P1 P3 P4
1sec 3sec 2sec 2sec 10sec 2sec 1sec

• Waiting time P1 = 8sec
• Waiting time P2 = 0sec
• Waiting time P3 = 18sec
• Waiting time P4 = 20sec
• Waiting time P5 = 1sec + 2sec = 3sec
• Waiting time P6 = 0sec
• Average waiting time = (8 + 0 + 18 + 20 + 3 + 0)/6 = 8.167sec
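The preemptive variant can be checked with a tick-by-tick simulation. The bursts and priorities below are reconstructed from the Gantt chart above (the numeric priority values are illustrative; a lower number means higher priority), and P6 arrives at t = 4 with priority 0:

```python
def preemptive_priority(procs):
    """procs: {name: (arrival, burst, priority)}; lower number = higher priority.
    Simulates one time unit per tick; returns each process's waiting time."""
    remaining = {n: b for n, (a, b, pr) in procs.items()}
    waited = {n: 0 for n in procs}
    time = 0
    while any(remaining.values()):
        ready = [n for n in procs
                 if procs[n][0] <= time and remaining[n] > 0]
        if not ready:                      # nothing has arrived yet
            time += 1
            continue
        # highest priority runs; FCFS (earlier arrival) breaks ties
        run = min(ready, key=lambda n: (procs[n][2], procs[n][0]))
        for n in ready:
            if n != run:
                waited[n] += 1             # everyone else in the ready queue waits
        remaining[run] -= 1
        time += 1
    return waited

# Values reconstructed from the Gantt chart (priority numbers illustrative)
procs = {"P1": (0, 10, 3), "P2": (0, 1, 1), "P3": (0, 2, 4),
         "P4": (0, 1, 5), "P5": (0, 5, 2), "P6": (4, 2, 0)}
w = preemptive_priority(procs)
avg = sum(w.values()) / len(w)             # 49/6 ≈ 8.167 sec
```

Note how P6 preempts P5 at t = 4: P5's waiting time of 3 sec is 1 sec before its first run plus 2 sec while preempted.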
• Pre-emptive Round-Robin
• A small unit of time, called a time quantum or time
slice, is defined.
• A time quantum is generally from 10 to 100
milliseconds in length.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time interval
of up to 1 time quantum.
• If the remaining CPU burst of the process is less than 1 time quantum, the process releases the CPU voluntarily and the next process is scheduled.
• If the remaining CPU burst is longer than 1 time quantum, the process is pre-empted and the next process is scheduled.
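The round-robin rules above can be sketched with a circular (FIFO) ready queue. As an illustration, running the earlier FCFS example (bursts of 24, 3, and 3 msec) with an assumed quantum of 4 msec gives waiting times of 6, 4, and 7 msec:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_ms}, all arriving at time 0, in queue order.
    Returns each process's waiting time under RR with the given quantum."""
    remaining = dict(bursts)
    ready = deque(bursts)                  # circular FIFO ready queue
    time, finish = 0, {}
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])   # run for up to one time quantum
        time += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)                # pre-empted: go to the back of the queue
        else:
            finish[p] = time               # released the CPU voluntarily
    # waiting time = completion time - burst time (all arrived at 0)
    return {p: finish[p] - bursts[p] for p in bursts}

w = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
avg = sum(w.values()) / len(w)             # (6 + 4 + 7) / 3 ≈ 5.67 msec
```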
The following processes arrive in the ready queue: P1, P2, and P3, with CPU bursts of 8 msec, 4 msec, and 1 msec respectively. Use non-preemptive scheduling to calculate the average waiting time and turn-around time for all the processes with the FCFS and SJF scheduling algorithms. Also draw the Gantt chart. Assume that the CPU is idle for the initial 1.0 sec.
• FCFS

P1 P2 P3
8msec 4msec 1msec

• The waiting time for P1 = 0msec
• The waiting time for P2 = 8msec
• The waiting time for P3 = 12msec
• Average waiting time = (0 + 8 + 12)/3 = 6.67msec
• The turnaround time for
• P1 = 8msec
• P2 = 12msec
• P3 = 13msec
• Avg turnaround time = (8 + 12 + 13)/3 = 11msec
• SJF
P3 P2 P1
1msec 4msec 8msec
• The waiting time for P1 = 5msec
• The waiting time for P2 = 1msec
• The waiting time for P3 = 0msec
• Average waiting time = (5+1+0)/3 = 2msec
• The turnaround time for
• P1 = 13msec
• P2 = 5msec
• P3 = 1msec
• Avg turnaround time = (13 + 5 + 1)/3 = 6.33msec
Extension of the problem
• Since the processes are already in the ready queue, the arrival time of each process is maintained separately in its TCB/PCB.
• The scheduling decision is simply taken after the initial 1 sec of idle time, which does not affect the order of scheduling.
• Therefore the solution remains the same.
• Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process Burst Time
P1 10
P2 1
P3 2
P4 1
P5 5

(priorities ordered P2 highest, then P5, P1, P3, P4 lowest)

• The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
• Draw four Gantt charts that illustrate the execution of
these processes using the following scheduling
algorithms:
• FCFS, SJF, non-preemptive priority (a smaller priority
number implies a higher priority), and RR (quantum = 1).
FCFS

P1 P2 P3 P4 P5
10msec 1msec 2msec 1msec 5msec
SJF

P2 P4 P3 P5 P1
1msec 1msec 2msec 5msec 10msec
Non-preemptive priority

P2 P5 P1 P3 P4
1msec 5msec 10msec 2msec 1msec
