
 Process Concept

 Process scheduling
 Operation on Processes
 Cooperating Processes
 Threads
 Interprocess communication
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Process – a program in execution.
 A process includes:
 program counter
 stack
 data section
 A process is an abstraction that supports running programs
 Different processes may run several instances of the same program
 In most systems, processes form a tree, with the root being the
first process to be created
 At a minimum, the following resources are required:
 Memory to contain the program code and data
 A set of CPU registers to support execution
• Multiprogramming of four programs
• Conceptual model
-- 4 independent, sequential processes
-- Processes run one at a time on the CPU
• Only one program active at any instant!
-- That instant can be very short…
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
 Each process has a process control block (PCB)
 describes its components and other relevant data
 allows efficient and centralized access to all
information related to a process
 Scheduling queues usually hold pointers to PCBs
 Naming: Each process has a process identifier
 must be unique
 must be reused with extreme care after a process terminates
 PCB contains many pieces of
information associated with each
process, including:
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information

 Scheduling Queues

 Job queue – set of all processes in the system.

 Ready queue – set of all processes residing in main memory, ready and waiting for execution.

 Device queues – set of processes waiting for an I/O device.

 Process may migrate between the various queues

depending on the requirements.

 Types of Schedulers
 Long term ( job ) scheduler

 Short term ( CPU ) scheduler

 Intermediate level ( Medium-term ) scheduler

 Long term ( job ) scheduler

 Loads processes from disk to memory

 Controls process mix

 CPU bound vs. I/O bound

 Controls the degree of multiprogramming (number of

processes in memory)
 Infrequent execution (minutes)

 Slow
 Short term ( CPU ) scheduler

 Moves processes between states

 Frequent execution (milliseconds)

 Fast

 If 10 ms are needed to decide who will get the next

100 ms:

 10 / (10+100) = 9% overhead !!!

 Intermediate level ( Medium-term ) scheduler
 Swaps processes in and out of memory (adjust
multiprogramming level)
 Controls process mix

 Dispatcher
 Switching context
 Switching to user mode
 Jumping to proper location in program
 Invoked during every process switch - Dispatcher should be
as fast as possible
 Dispatch latency
 Time that the Dispatcher takes to stop one process and
start another running.
 Save registers of old process
 General purpose registers
 Memory management related information
 Stack pointer
 Save PSW of old process
 Put PCB on relevant queue
 Save PC of old process
 Mark PCB of new process as running
 Load PSW of new process
 Load PC of new process
 New process continues to run from where interrupted
 Context switching time is overhead, not useful work
 Penalties of CS
 Explicit:
 Cost of loading and storing registers from/into
main memory
 Implicit:
 In a pipelined CPU, must wait until the pipeline is drained
 If a CPU uses memory caches, the process that
is switched in usually incurs a large number of
cache misses when it runs, until its working set is
loaded from memory
 Context switching overhead is a big factor in the
efficiency of an operating system, and its cost
continues to increase with faster CPUs
 Parent process creates children processes, which, in turn
create other processes, forming a tree of processes.

 Resource sharing possibilities:

 Parent and children share all resources.

 Children share subset of parent's resources.

 Parent and child share no resources.

 Execution possibilities:

 Parent and children execute concurrently.

 Parent waits until children terminate.

 Address space

 Child duplicate of parent.

 Child has a program loaded into it.

 UNIX examples

 fork system call creates new process

 execve system call used after a fork to replace the process

memory space with a new program.

main( )
{
    int pid;
    if ((pid = fork()) == 0)
    {   /* child code */
        execve( );   /* overlay */
    }
    else /* pid>0 */
    {   /* parent code - wait */
        waitpid(pid );
    }
}
 Process executes last statement and asks the operating
system to delete it (exit).
 Output data from child (exit) to parent (wait).
 Process resources are deallocated by operating system.
 Parent may terminate execution of children processes
 Child has exceeded allocated resources.
 Task assigned to child is no longer required.
 Parent is exiting.
 Operating system does not allow child to continue
if its parent terminates.
 Independent processes cannot be affected by the
execution of another process
 Cooperating processes can affect or be affected by
the execution of another process
 What for?

 Information sharing

 Computing Speedup
 overlap CPU and I/O
 use several processors

 Modularity

 Convenience
 Classical models

 Producer-Consumer – Producer process produces
information that is consumed by the Consumer process.
Two versions:

 Unbounded-buffer – no practical limit on the buffer size

 Bounded-buffer – assumes a fixed buffer size

 Reader/Writer
Shared data
    type item = … ;
    var buffer: array [0..n–1] of item;
        in, out: 0..n–1;  (both initially 0)

 Producer process
    repeat
        … produce an item in nextp …
        while (in + 1) mod n == out do no-op;
        buffer[in] := nextp;
        in := (in + 1) mod n;
    until false;

 Consumer process
    repeat
        while in == out do no-op;
        nextc := buffer[out];
        out := (out + 1) mod n;
        … consume the item in nextc …
    until false;

Solution is correct, but can only use n–1 of the n buffer slots.

 A thread (or lightweight process) is a basic unit of CPU
utilization; it consists of:
 program counter
 register set
 stack space
 A thread shares with its peer threads its:
 code section
 data section
 operating-system resources, collectively known as a task.
 A traditional or heavyweight process is equal to a task
with one thread
 Multithreading:

 The OS supports multiple threads of execution within a

single process

 Single threading:

 The OS does not recognize the separate concept of thread

 MS-DOS supports a single user process and a single thread
 Traditional UNIX supports multiple user processes but
only one thread per process
 Solaris and Windows 2000 support multiple threads
In a Multithreaded environment, Processes Have:

 A virtual address space which holds the process image

 Protected access to processors, other processes

(inter-process communication), files, and other I/O
While Threads...

 Have execution state (running, ready, etc.)

 Save thread context (e.g. program counter) when not running
 Have private storage for local variables and execution stack
 Have shared access to the address space and resources
(files etc.) of their process
 when one thread alters (non-private) data, all other threads
(of the process) can see this
 threads communicate via shared variables
 a file opened by one thread is available to others
Thread Control Block contains a register image, thread priority and
thread state information
 A thread:
  has no data segment or heap
  cannot live on its own; it must live within a process
  there can be more than one thread in a process; the first thread calls main & has the process's stack
  inexpensive creation and inexpensive context switching
  if a thread dies, its stack is reclaimed

 A process:
  has code/data/heap & other segments
  must contain at least one thread
  its threads share code/data/heap and I/O, but each has its own stack & registers
  expensive creation and expensive context switching
  if a process dies, its resources are reclaimed & all threads die
 Three key states: Running, Ready, Blocked

 No Suspend state, because all threads within the same
process share the same address space (same process image)

 Suspending implies swapping out the whole process, thus
suspending all threads in the process

 Termination of a process terminates all threads within the
process

 Because the process is the environment the thread runs in.

 Three most popular threading models
 User-level threads
 Kernel-level threads
 Combination of user- and kernel-level threads
 Kernel not aware of the
existence of threads
 Thread management
handled by thread library in
user space
 No mode switch (kernel not involved)
 But I/O in one thread could
block the entire process!

“Many-to-One” model
 Contains code for:
 creating and destroying threads
 passing messages and data between threads
 scheduling thread execution
 passing control from one thread to another
 saving and restoring thread contexts

 ULTs can be implemented on any Operating

System, because no kernel services are required to
support them
 The kernel is not aware of thread activity
 it only manages processes
 If a thread makes an I/O call, the whole process is blocked
 Note: in the thread library that thread is still in
running state, and will resume execution when the
I/O is complete
 So thread states are independent of process states
 Advantages
  Thread switching does not involve the kernel: no mode switching, therefore fast
  Scheduling can be application specific: choose the best algorithm for the situation
  Can run on any OS; we only need a thread library

 Disadvantages
  Most system calls are blocking for processes, so all threads within a process will be implicitly blocked
  The kernel can only assign processors to processes; two threads within the same process cannot run simultaneously on two processors
 All thread management is
done by kernel
 No thread library; instead an API to the kernel thread facility
 Kernel maintains context
information for the process
and the threads
 Switching between threads
requires the kernel
 Kernel does Scheduling on a
thread basis
“One-to-One” model
 Advantages  Disadvantages
 The kernel can schedule  Thread switching always
involves the kernel. This
multiple threads of the means 2 mode switches
same process on multiple per thread switch
processors  So it is slower compared
 Blocking at thread level, to User Level Threads
not process level  (But
faster than a full
process switch)
 Ifa thread blocks, the CPU
can be assigned to
another thread in the same
 Even the kernel routines
can be multithreaded
 Thread creation done in the user space
 Bulk of thread scheduling and synchronization done in user space
 ULTs mapped onto KLTs
 The programmer may adjust the number of KLTs
 KLTs may be assigned to individual processors
 Combines the best of both approaches
“Many-to-Many” model
 User-Level  Kernel-Level
 Managed by application  Managed by kernel
 Kernel is not aware of  Consumes kernel
thread resources
 Context switching done by  Context switching done
application (cheap) by kernel (expensive)
 Can create as many as the  Number limited by kernel
application needs resources
 Must be used with care  Simpler to use

 Key issue: kernel threads provide virtual processors to user-level

threads, but if all kernel threads block, then all user-level threads will
block even if the program logic allows them to proceed
 Process includes the user's address space, stack, and
process control block
 User-level threads (threads library)
 invisible to the OS
 are the interface for application parallelism
 Kernel threads
 the unit that can be dispatched on a processor
 Lightweight processes (LWP)
 each LWP supports one or more ULTs and maps to
exactly one KLT

 Task 2 is equivalent to a pure ULT approach

 Tasks 1 and 3 map one or more ULTs onto a fixed number of LWPs
 Note how task 3 maps a single ULT to a single LWP bound to a CPU
 Only objects scheduled within the system

 May be multiplexed on the CPUs or tied to a specific CPU

 Each LWP is tied to a kernel level thread

 Share the execution environment of the task

 Same address space, instructions, data, and files (if any thread opens a file, all threads can read it).

 Can be tied to a LWP or multiplexed over multiple LWPs

 Represented by data structures in address space of the

task but kernel knows about them indirectly via LWPs

 We can use ULTs when logical parallelism does not need

to be supported by hardware parallelism

 Ex: Multiple windows but only one is active at any one time

 Note versatility of SOLARIS that can operate like

Windows-NT or like conventional Unix

 A UNIX process consists mainly of an address space and

a set of LWPs that share the address space

 Each LWP is like a virtual CPU and the kernel schedules

the LWP via the KLT that it is attached to

 Run-time library (RTL) ties together

 Multiple threads handled by RTL

 If ONE thread makes a system call, its LWP makes the call; the LWP will block, and all threads tied to that LWP will block

 Any other thread in same task will not block.

 Mechanism for processes to communicate and to synchronize

their actions.

 Message system – processes communicate with each other

without resorting to shared variables.

 IPC facility provides two operations:

 send(message) – message size fixed or variable

 receive(message)
 If processes P and Q wish to communicate, they need to:

 establish a communication link between them

 exchange messages via send/receive

 Implementation of communication link

 physical (e.g., shared memory, hardware bus)

 logical (e.g., logical properties)

 How are links established?

 Can a link be associated with more than two processes?

 How many links can there be between every pair of
communicating processes?
 What is the capacity of a link?

 Is the size of a message that the link can accommodate

fixed or variable?
 Is a link unidirectional or bi-directional?
 Processes must name each other explicitly:
 send (P, message) send a message to process P
 receive(Q, message) receive a message from process Q
 Properties of communication link
 Links are established automatically

 A link is associated with exactly one pair of

communicating processes
 Between each pair there exists exactly one link

 The link may be unidirectional, but is usually bi-directional

 Messages are sent to and received from mailboxes
(also referred to as ports)
 Each mailbox has a unique id

 Processes can communicate only if they share a mailbox

 Properties of communication link
 Link established only if processes share a common mailbox
 A link may be associated with many processes

 Each pair of processes may share several

communication links
 Link may be unidirectional or bi-directional
 Operations

 create a new mailbox

 send and receive messages through mailbox

 destroy a mailbox

 Primitives are defined as:

 send(A, message) send a message to mailbox A

 receive(A, message) receive a message from mailbox A

 Mailbox sharing problem
 P1, P2 and P3 share mailbox A

 P1 sends; P2 and P3 receive

 Who gets the message?

 Solutions
 Allow a link to be associated with at most two processes

 Allow only one process at a time to execute a receive operation


 Allow the system to select arbitrarily the receiver.

 Sender is notified who the receiver was.

 Message passing may be either blocking or non-blocking
 Blocking is considered synchronous
 Blocking send has the sender block until the message is received
 Blocking receive has the receiver block until a message
is available
 Non-blocking is considered asynchronous
 Non-blocking send has the sender send the message and continue
 Non-blocking receive has the receiver receive a valid
message or null
 Queue of messages attached to the link.

Implemented in one of three ways

1. Zero capacity

 0 messages
Sender must wait for receiver (rendezvous)

2. Bounded capacity
 finite length of n messages
Sender must wait if the link is full

3. Unbounded capacity
 infinite length
Sender never waits