
Operating System
Chapter 2: Processes and Process Management

 Process Concept
 Thread concept
 Process Scheduling
 Inter-process Communication
 Deadlock

Process Concept

 An operating system executes a variety of programs:


 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost
interchangeably.
 Process – a program in execution; process execution must
progress in sequential fashion.
 A process will need certain resources—such as CPU time,
memory, files, and I/O devices—to accomplish its task.

Operating System Concepts


 Let us examine two concepts: uniprogramming and
multiprogramming.

 Uniprogramming: only one process at a time.

 Typical example: DOS.


 Problem: users often wish to perform more than one activity at a time
(load a remote file while editing a program, for example), and
uniprogramming does not allow this.

 Multiprogramming: multiple processes at a time.

 Typical of UNIX plus all currently envisioned new operating systems.


 Allows system to separate out activities cleanly.



Process State
 As a process executes, it changes state

 new: The process is being created.


 running: Instructions are being executed.
 waiting: The process is waiting for some event to occur.
 ready: The process is waiting to be assigned to a processor.
 terminated: The process has finished execution.

 It is important to realize that only one process can be running on


any processor core at any instant.
 However, many processes may be ready and waiting.



Diagram of Process State



State Transitions in Five-State Process Model

 new → ready
 Admitted to the ready queue; can now be considered by the CPU scheduler
 ready → running
 The CPU scheduler chooses the process to execute next, according to
some scheduling algorithm
 running → ready
 The process has used up its current time slice
 running → blocked (waiting)
 The process is waiting for some event to occur (for an I/O operation to
complete, etc.)
 blocked → ready
 Whatever event the process was waiting on has occurred
 running → terminated
 The process has completed execution
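The legal moves in the five-state model can be captured in a small lookup table. A minimal Python sketch (an illustration of mine, not slide material):

```python
# Five-state process model: each state maps to the states it may enter next.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready"},
    "terminated": set(),
}

def can_transition(src, dst):
    """Return True if the five-state model permits the src -> dst move."""
    return dst in ALLOWED.get(src, set())
```

Note that there is deliberately no blocked → running edge: a blocked process must first become ready and be chosen by the scheduler again.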




2.2 The threads concept
 Why would anyone want to have a kind of process within a
process (thread)?

 The main reason for having threads is that in many applications,


multiple activities are going on at once.
 Some of these may block from time to time.
 By decomposing such an application into multiple sequential
threads that run in quasi-parallel, the programming model
becomes simpler.

 A second argument for having threads is that since they are lighter
weight than processes, they are easier (i.e., faster) to create and
destroy than processes.
 A third reason for having threads is also a performance argument:
when there is substantial computing and also substantial I/O, threads
allow these activities to overlap, speeding up the application.



 Thread Example

 Suppose that the word processor is written as a two-threaded
program. One thread interacts with the user and the other handles
reformatting in the background.

 As soon as a sentence is deleted from page 1, the interactive
thread tells the reformatting thread to reformat the whole book.

 Meanwhile, the interactive thread continues to listen to the keyboard


and mouse and responds to simple commands like scrolling page 1
while the other thread is computing madly in the background.



 While we are at it, why not add a third thread?
 Many word processors have a feature of automatically saving the
entire file to disk every few minutes to protect the user against
losing a day's work in the event of a program crash, system crash,
or power failure.

 The third thread can handle the disk backups without interfering
with the other two.
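As a rough sketch of this three-thread structure, the following Python uses one thread per activity. The function names, the Event used for signalling, and the shared `done` list are illustrative assumptions of mine, not part of the example:

```python
import threading

reformat_needed = threading.Event()
done = []   # records which activities completed (list.append is atomic under the GIL)

def interactive():
    # ...user deletes a sentence on page 1...
    reformat_needed.set()        # tell the reformatting thread to rerun
    done.append("interactive")

def reformatter():
    reformat_needed.wait()       # blocks until the interactive thread signals
    done.append("reformat")

def autosave():
    done.append("autosave")      # would periodically write the file to disk

threads = [threading.Thread(target=f) for f in (interactive, reformatter, autosave)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All three threads share the process's address space, which is what makes the signalling here a single cheap in-memory operation.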



 Processes vs. Threads

 Process = unit of resource ownership


 sometimes called a heavyweight process.

 Thread = unit of scheduling


 A thread (sometimes called a lightweight process) is a single sequential
execution stream within a process.



Key Difference Between Process and Thread



2.3 Processor Scheduling
 Back in the old days of batch systems, with input in the form of card
images on a magnetic tape, the scheduling algorithm was simple:
just run the next job on the tape.

 With multiprogramming systems, the scheduling algorithm became


more complex because there were generally multiple users waiting
for service.

 When a computer is multi-programmed, it frequently has multiple


processes or threads competing for the CPU at the same time.

 This situation occurs whenever two or more of them are


simultaneously in the ready state.



 Scheduling refers to a set of policies and mechanisms to control the
order of work to be performed by a computer system.

 Processor scheduling is the means by which the OS allocates
processor time to processes.
 Each CPU core can run one process at a time.
 For a system with a single CPU core, there will never be more than
one process running at a time
 whereas a multicore system can run multiple processes at one time.
 If there are more processes than cores, excess processes will have to
wait until a core is free and can be rescheduled.
 The number of processes currently in memory is known as the degree
of multiprogramming.




 When to schedule?

 First, when a new process is created, a scheduling decision is
needed: whether to run the parent process or the child process.


 Second, a scheduling decision must be made when a process exits.
That process can no longer run (since it no longer exists), so some
other process must be chosen from the set of ready processes.
 Third, when a process blocks on I/O, or for some other reason,
another process has to be selected to run. Sometimes the reason for
blocking may play a role in the choice.
 Fourth, when an I/O interrupt occurs, a scheduling decision may be
made. If the interrupt came from an I/O device that has now
completed its work, some process that was blocked waiting for the
I/O may now be ready to run.



 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.



Scheduling Criteria (How to evaluate scheduling algorithm?)

 Many criteria have been suggested for comparing CPU-scheduling


algorithms.

 CPU utilization – keep the CPU as busy as possible

 Throughput – the number of processes that are completed per time


unit

 Turnaround time – amount of time to execute a particular process


 Mean time from submission to completion of process.
 the sum of the periods spent waiting in the ready queue, executing on
the CPU, and doing I/O.



 Waiting time – amount of time a process has been waiting in the
ready queue
 Amount of time spent ready to run but not running.

 Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for
time-sharing environments)
 Time between submission of requests and first response to the
request.
 the time it takes to start responding, not the time it takes to output the
response.

 It is desirable to maximize CPU utilization and throughput and to


minimize turnaround time, waiting time, and response time.



Scheduling Algorithms

1. First-Come, First-Served Scheduling


 By far the simplest CPU-scheduling algorithm is the
first-come, first-served (FCFS) scheduling algorithm.
 easily managed with a FIFO queue.
 The negative side is, the average waiting time under the
FCFS policy is often quite long.



Example:
 Consider the performance of the FCFS algorithm for three compute-bound
processes: P1 (burst of 24 seconds), P2 (3 seconds), and P3 (3 seconds).
 Case i. Suppose the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |

 Waiting time for P1 = 0; P2 = 24; P3 = 27

 Turnaround time (Tr): P1 = 24, P2 = 27, P3 = 30

 Tr/Ts: P1 = 1, P2 = 9, P3 = 10

 Average waiting time: (0 + 24 + 27)/3 = 17

 Average turnaround time: (24 + 27 + 30)/3 = 27

 Throughput = 3/30 = 1/10
 Case ii. Suppose that the processes arrive in the order
P2, P3, P1.
The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 This reduction is substantial.
 Thus, the average waiting time under an FCFS policy is generally not
minimal and may vary substantially.
 the FCFS scheduling algorithm is non-preemptive.
 Once the CPU has been allocated to a process, that process keeps
the CPU until it releases the CPU, either by terminating or by
requesting I/O.
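Since all three processes arrive at time 0, the FCFS waiting times can be checked with a short Python sketch (the helper name is my own):

```python
def fcfs_waiting_times(bursts):
    """Waiting times under FCFS when every process arrives at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits until all earlier ones finish
        clock += burst
    return waits

# Case i: service order P1(24), P2(3), P3(3)
case_i = fcfs_waiting_times([24, 3, 3])    # [0, 24, 27], average 17
# Case ii: service order P2(3), P3(3), P1(24)
case_ii = fcfs_waiting_times([3, 3, 24])   # [0, 3, 6], average 3
```

Putting the long process last is exactly what produces the large improvement described above.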



2. Shortest-Job-First (SJF) Scheduling

 Associates with each process the length of its next CPU burst.
 Use these lengths to schedule the process with the shortest time.
 When the CPU is available, it is assigned to the process that has
the smallest next CPU burst.
 If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.



 Example: consider the following set of processes, with the length
of the CPU burst given in milliseconds:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3
 The waiting time is P1 = 3, P2 =16, P3 = 9, and P4 = 0.


 Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7
milliseconds.
 By comparison, if we were using the FCFS scheduling
scheme, the average waiting time would be 10.25.
 The SJF scheduling algorithm is provably optimal, in that
it gives the minimum average waiting time for a given set
of processes.
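A quick way to verify the figures is to sort by burst length and accumulate, as in this Python sketch (burst times 6, 8, 7, 3 for P1..P4 are the values consistent with the waiting times quoted above):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at time 0."""
    # Shortest burst first; Python's stable sort breaks ties in FCFS order.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # the process starts once shorter jobs finish
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # P1..P4 -> [3, 16, 9, 0]
average = sum(waits) / len(waits)         # 7.0 milliseconds
```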
 SJF has Two schemes:

 Non-preemptive – once CPU given to the process it cannot be


preempted until completes its CPU burst.

 Preemptive – if a new process arrives with CPU burst length less


than remaining time of current executing process, preempt.
 This scheme is known as Shortest-Remaining-Time-First
(SRTF).



 Example: consider the following four processes, with the length of
the CPU burst given in milliseconds:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

 Process P1 is started at time 0, since it is the only process in the


queue. Process P2 arrives at time 1.
 The remaining time for process P1 (7 milliseconds) is larger than the
time required by process P2 (4 milliseconds), so process P1 is
preempted, and process P2 is scheduled.
 The average waiting time for this example is [(10 − 1) + (1 − 1) + (17
− 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds.
 Non-preemptive SJF scheduling would result in an average waiting
time of 7.75 milliseconds.
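The SRTF schedule can be replayed with a unit-time simulation; the arrival times (0, 1, 2, 3) and bursts (8, 4, 9, 5) below are the values consistent with the numbers quoted above:

```python
def srtf_waiting_times(arrival, burst):
    """Preemptive SJF (SRTF): at every tick, run the shortest remaining job."""
    n = len(burst)
    remaining = list(burst)
    finish = [0] * n
    clock = 0
    while any(remaining):
        # among arrived, unfinished processes pick the shortest remaining time
        ready = [i for i in range(n) if arrival[i] <= clock and remaining[i] > 0]
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    # waiting time = turnaround time - burst time
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]

waits = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])   # [9, 0, 15, 2]
average = sum(waits) / len(waits)                        # 6.5 milliseconds
```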
3. Round-Robin scheduling

 similar to FCFS scheduling, but preemption is added to enable the


system to switch between processes.
 the ready queue is implemented as a FIFO queue of processes.
 New processes are added to the tail of the ready queue.
 The CPU scheduler picks the first process from the ready queue,
sets a timer to interrupt after 1 time quantum, and dispatches the
process.
 Each process gets a small unit of CPU time (time quantum), usually
10-100 milliseconds.
 After this time has elapsed, the process is preempted and added to
the end of the ready queue.
 The average waiting time under the RR policy is often long.



3. Round-Robin scheduling (cont…)
 Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

 If we use a time quantum of 4 milliseconds, then process P1 gets the


first 4 milliseconds.
 Since it requires another 20 milliseconds, it is preempted after the
first time quantum, and the CPU is given to the next process in the
queue, process P2.
 Process P2 does not need 4 milliseconds, so it quits before its time
quantum expires.
 The CPU is then given to the next process, process P3.
 Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum.
 The resulting RR schedule is as follows:
| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–30) |

 P1 waits for 6 milliseconds (10 − 4),


 P2 waits for 4 milliseconds, and
 P3 waits for 7 milliseconds.
 Thus, the average waiting time is
 (6+4+7)/3 = 17/3 = 5.66 milliseconds.



4. Priority Scheduling

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority).
 Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority
(p) is the inverse of the (predicted) next CPU burst.
 The larger the CPU burst, the lower the priority, and vice versa.



4. Priority Scheduling
 As an example, consider the following set of processes,
assumed to have arrived at time 0 in the order P1, P2, · · ·, P5,
with the length of the CPU burst given in milliseconds:

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
 The average waiting time is 8.2 milliseconds.
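The 8.2 ms figure can be reproduced with a short sketch; the burst/priority pairs below (P1 = 10/3, P2 = 1/1, P3 = 2/4, P4 = 1/5, P5 = 5/2) are the values consistent with that average:

```python
def priority_waiting_times(bursts, priorities):
    """Non-preemptive priority scheduling, all processes arriving at time 0."""
    # Smaller number = higher priority; stable sort keeps FCFS order on ties.
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
average = sum(waits) / len(waits)   # (6 + 0 + 16 + 18 + 1) / 5 = 8.2
```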



4. Priority Scheduling
 Priority scheduling can be either preemptive or non-preemptive.
 Preemptive
 When a process arrives at the ready queue, its priority is compared
with the priority of the currently running process.
 A preemptive priority scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of
the currently running process.
 Non-preemptive
 A non-preemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue.
 A major problem with priority scheduling algorithms is indefinite
blocking, or starvation.
 A priority scheduling algorithm can leave some low priority
processes waiting indefinitely.
 A solution to the problem of indefinite blockage of low-priority
processes is aging.
 Aging involves gradually increasing the priority of processes
that wait in the system for a long time.
2.4 Inter-process Communication (IPC)

 Mechanism for processes to communicate and to synchronize their


actions.

 Processes frequently need to communicate with other processes.

 Very briefly, there are three issues here:


 The first is how one process can pass information to another.
 The second has to do with making sure two or more processes do not
get in each other's way,
 for example, two processes in an airline reservation system each
trying to grab the last seat on a plane for a different customer.
 The third concerns proper sequencing when dependencies are present:
 if process A produces data and process B prints them, B has to wait
until A has produced some data before starting to print.



2.4.1 Race condition

 A situation where several processes access and manipulate the


same data concurrently and the outcome of the execution depends
on the particular order in which the access takes place, is called a
race condition.
 In some operating systems, processes that are working together
may share some common storage that each one can read and write.

 The shared storage may be in main memory or it may be a shared
file; the location of the shared memory does not change the nature
of the communication or the problems that arise.
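The danger can be re-enacted deterministically. In the sketch below, two "processes" both read the shared value before either writes it back, so one update is lost; the interleaving is written out by hand, purely for illustration:

```python
shared = 0   # value in shared storage, e.g. a seat counter

# An unlucky interleaving: both A and B read before either writes.
a_read = shared          # process A reads 0
b_read = shared          # process B reads 0 (a context switch happened)
shared = a_read + 1      # A writes back 1
shared = b_read + 1      # B overwrites with 1 -- A's update is lost
```

With atomic updates the final value would be 2; the outcome here depends entirely on the order of the reads and writes, which is the defining mark of a race condition.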



2.4.2 Critical Region

How do we avoid race conditions?

 The key to preventing trouble here and in many other situations


involving shared memory, shared files, and shared everything else is
to find some way to prohibit more than one process from reading and
writing the shared data at the same time.
 Put in other words, what we need is mutual exclusion, that is, some
way of making sure that if one process is using a shared variable or
file, the other processes will be excluded from doing the same thing.
 The part of the program where the shared memory is accessed is
called the critical region or critical section.
 when one process is executing in its critical section, no other
process is allowed to execute in its critical section.
 That is, no two processes are executing in their critical sections
at the same time.



2.4.3 Semaphores

 In 1965, E. W. Dijkstra suggested using an integer variable to
count the number of wakeups saved for future use.
 In his proposal, a new variable type, which he called a semaphore,
was introduced.
 A semaphore could have the value 0, indicating that no wakeups were
saved, or some positive value if one or more wakeups were pending.
 All modifications to the integer value of the semaphore must be
executed atomically.
 When one process modifies the semaphore value, no other process
can simultaneously modify that same semaphore value.
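In Python, `threading.Semaphore` provides exactly this atomically updated counter. A sketch (the "two slots" scenario is my own illustration, not slide material):

```python
import threading

slots = threading.Semaphore(2)   # counting semaphore, initial value 2
guard = threading.Lock()         # protects the bookkeeping counters below
inside = 0                       # threads currently holding a slot
peak = 0                         # highest value 'inside' ever reached

def worker():
    global inside, peak
    with slots:                  # down / P: atomically decrements, may block
        with guard:
            inside += 1
            peak = max(peak, inside)
        with guard:
            inside -= 1
    # leaving the 'with slots' block is up / V: atomically increments

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore's initial value of 2
```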



2.4.4 Mutexes

 When the semaphore's ability to count is not needed, a simplified


version of the semaphore, called a mutex, is sometimes used.
 Mutexes are good only for managing mutual exclusion to some shared
resource or piece of code.
 They are easy and efficient to implement, which makes them
especially useful in thread packages that are implemented entirely in
user space.
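A minimal sketch of mutex-based mutual exclusion using Python's `threading.Lock` (the counter workload is illustrative):

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with mutex:          # critical section: acquire, update, release
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40_000: with the mutex held, no increments are lost
```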



2.4.5 Message passing
 Message system – processes communicate with each other without
resorting to shared variables.

 IPC facility provides two operations:


 send(destination, &message) – message size fixed or variable
 receive(source, &message)

 If P and Q wish to communicate, they need to:


 establish a communication link between them
 exchange messages via send/receive
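The two operations can be sketched with Python's thread-safe `queue.Queue` standing in for the kernel's message facility (an implementation choice of mine, not the slide API):

```python
import queue
import threading

mailbox = queue.Queue()          # the communication link between P and Q

def producer():                  # process P
    for i in range(3):
        mailbox.put(f"msg-{i}")  # send(destination, &message)

def consumer(received):          # process Q
    for _ in range(3):
        received.append(mailbox.get())  # receive(source, &message): blocks

received = []
p = threading.Thread(target=producer)
q = threading.Thread(target=consumer, args=(received,))
p.start(); q.start()
p.join(); q.join()
# received holds msg-0, msg-1, msg-2 in order; no shared variables were touched
```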



Process Control Block (PCB)

Information associated with each process.


 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information





2.5 Deadlock

A set of blocked processes each holding a resource and waiting to


acquire a resource held by another process in the set.

Example

System has 2 tape drives.


P1 and P2 each hold one tape drive and each needs another one.



Deadlock Characterization

 Deadlock can arise if four conditions hold simultaneously.

 Mutual exclusion: only one process at a time can use a resource.


 Hold and wait: a process holding at least one resource is waiting to
acquire additional resources held by other processes.
 No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes
such that P0 is waiting for a resource that is held by P1, P1 is waiting
for a resource that is held by P2, …, Pn–1 is waiting for a resource that
is held by Pn, and Pn is waiting for a resource that is held by P0.
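Circular wait can be checked mechanically by looking for a cycle in a wait-for graph. A sketch (the graph representation is my own choice):

```python
def has_cycle(wait_for):
    """Depth-first search for a cycle in a wait-for graph.

    wait_for maps each process to the processes it is waiting on.
    """
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, []):
            if q in on_stack or (q not in visited and dfs(q)):
                return True
        on_stack.discard(p)
        return False

    return any(p not in visited and dfs(p) for p in wait_for)

# The tape-drive example: P1 and P2 each wait on the drive the other holds.
deadlocked = has_cycle({"P1": ["P2"], "P2": ["P1"]})   # True
safe = has_cycle({"P1": ["P2"], "P2": []})             # False
```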



Methods for Handling Deadlocks

 Ensure that the system will never enter a deadlock state.

 Allow the system to enter a deadlock state and then recover.

 Ignore the problem and pretend that deadlocks never occur in the
system; used by most operating systems, including UNIX.



Deadlock Prevention

 Mutual Exclusion – not required for sharable resources; must hold


for nonsharable resources.
 Hold and Wait – must guarantee that whenever a process requests
a resource, it does not hold any other resources.
 Require process to request and be allocated all its resources before it
begins execution, or
 allow process to request resources only when the process has
none.
 Low resource utilization; starvation possible.

