
 Process Concept

 Process Scheduling
 Operations on Processes
 Interprocess Communication
 An operating system executes a variety of programs:
◦ Batch systems – jobs
◦ Time-shared systems – user programs or tasks
 The textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process
execution must progress in sequential fashion
 A process includes:
◦ program counter and processor registers
◦ stack and heap
◦ text section
◦ data section
 As a process executes, it changes state (an illustrative enum follows this list)
◦ new: The process is being created
◦ running: Instructions are being executed
◦ waiting: The process is waiting for some event to occur
(such as an I/O completion or reception of a signal)
◦ ready: The process is waiting to be assigned to a
processor
◦ terminated: The process has finished execution
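An operating system typically records this state as a field in each process's control block. A minimal C sketch, with purely illustrative names (not taken from any particular kernel):

enum proc_state {
    PROC_NEW,          /* being created                          */
    PROC_READY,        /* waiting to be assigned to a processor  */
    PROC_RUNNING,      /* instructions are being executed        */
    PROC_WAITING,      /* blocked on an event (I/O, signal, ...) */
    PROC_TERMINATED    /* has finished execution                 */
};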
 Each process is represented in the operating
system by a process control block (PCB) also
called a task control block.
 The PCB is the data structure the operating system uses to group together all the information it needs about a particular process (a struct sketch follows the list below).
Information associated with each
process
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information
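Gathering the items above, a PCB can be pictured as a plain C structure. The layout below is only a sketch with assumed field names and sizes; real kernels (e.g., Linux's task_struct) are far more elaborate.

#include <stdint.h>

struct pcb {
    int            pid;               /* process identifier                        */
    int            state;             /* new, ready, running, waiting, terminated  */
    uintptr_t      program_counter;   /* where to resume execution                 */
    uintptr_t      registers[16];     /* saved CPU registers                       */
    int            priority;          /* CPU-scheduling information                */
    void          *page_table;        /* memory-management information             */
    unsigned long  cpu_time_used;     /* accounting information                    */
    int            open_files[16];    /* I/O status information                    */
};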
 The process control block includes CPU-scheduling, I/O-management, and file-management information, among other things. The PCB serves as the repository for any information that can vary from process to process. The loader/linker sets flags and registers when a process is created. If that process is later suspended, the contents of the registers are saved on a stack and a pointer to the particular stack frame is stored in the PCB. With this technique, the hardware state can be restored so that the process can be scheduled to run again.
 When the CPU switches to another process (because of an interrupt), the system must save the state of the old process and load the saved state of the new process
 Context-switch time is overhead; the
system does no useful work while switching
 Time dependent on hardware support
 A context switch is the mechanism for storing and restoring the state (context) of the CPU in the process control block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
 When the scheduler switches the CPU from one process to another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is thus represented in its process control block. Context-switch time is pure overhead, and context switching can significantly affect performance, since modern computers have many general-purpose and status registers to save.
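Real context switches are performed inside the kernel, largely in assembly, but the save-then-restore idea can be sketched in user space. The program below is only an illustrative analogue using the POSIX ucontext API (available on Linux/glibc): swapcontext() saves the current register state into one context structure and loads another, much as a kernel saves into and restores from PCBs.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;    /* stand-ins for two PCBs' saved contexts */
static char task_stack[64 * 1024];       /* private stack for the second "process" */

static void task(void)
{
    printf("task: running, yielding back to main\n");
    swapcontext(&task_ctx, &main_ctx);   /* save our registers, restore main's */
    printf("task: resumed exactly where it left off\n");
}

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;        /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* "context switch" main -> task */
    printf("main: switching to task again\n");
    swapcontext(&main_ctx, &task_ctx);   /* task resumes after its swapcontext() */
    printf("main: done\n");
    return 0;
}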
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in
main memory, ready and waiting to execute
 Device queues – set of processes waiting for an
I/O device
 Processes migrate among the various queues
 Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
 Scheduling queues are queues of processes or devices. When a process enters the system, it is put into the job queue, which consists of all processes in the system. The operating system also maintains other queues, such as device queues; a device queue holds the processes waiting for a particular I/O device.
 This figure shows the queuing diagram of
process scheduling.
- Each queue is represented by a rectangular box.
- The circles represent the resources that serve the queues.
- The arrows indicate the flow of processes through the system.
 Queues are of two types
◦ Ready queue
◦ Device queue
 A newly arrived process is put in the ready queue. Processes wait in the ready queue until the CPU is allocated to them. Once the CPU is assigned to a process, that process executes. While it is executing, any one of the following events can occur:
◦ The process could issue an I/O request and then be placed in an I/O queue.
◦ The process could create a new subprocess and wait for its termination.
◦ The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
 Long-term scheduler (or job scheduler)
– selects which processes should be
brought into the ready queue
 Short-term scheduler (or CPU scheduler)
– selects which process should be
executed next and allocates CPU
 Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
 Long Term Scheduler
 Short Term Scheduler
 Medium Term Scheduler
 Also called the job scheduler, the long-term scheduler determines which programs are admitted to the system for processing. The job scheduler selects processes from the queue and loads them into memory for execution; once loaded, a process becomes eligible for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.
 Also called the CPU scheduler, its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
 The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next.
 Medium-term scheduling is part of swapping: it removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
 Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
 Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
 The long-term scheduler controls the degree of multiprogramming (in some systems it also acts as a medium-term scheduler)
 Processes can be described as either:
◦ I/O-bound process – spends more time doing I/O
than computations, many short CPU bursts
◦ CPU-bound process – spends more time doing
computations; few very long CPU bursts
 Some systems have no long-term scheduler
◦ Every new process is loaded into memory
◦ System stability then depends on physical limitations and the self-adjusting nature of human users
 A parent process creates child processes, which, in turn, create other processes, forming a tree of processes
 Resource sharing options
◦ Parent and children share all resources
◦ Children share subset of parent’s resources
◦ Parent and child share no resources
 Execution options
◦ Parent and children execute concurrently
◦ Parent waits until some or all of its children
have terminated
 Address space options
◦ Child process is a duplicate of the parent process (same program and data)
◦ Child process has a new program loaded into
it
 UNIX example
◦ fork() system call creates a new process
◦ exec() system call is used after a fork() to replace the memory space of the process with a new program (an execlp() sketch follows the code below)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t processID;

    processID = fork();  // Create a process

    if (processID < 0)
    {   // Error occurred
        fprintf(stderr, "Fork failed");
        exit(-1);
    }   // End if
    else if (processID == 0)
    {   // Inside the child process (no execlp() this time)
        printf("(Child) I am doing something");
    }   // End else if
    else
    {   // Inside the parent process
        printf("(Parent) I am waiting for PID#%d to finish\n", processID);
        wait(NULL);
        printf("\n(Parent) I have finished waiting; the child is done");
        exit(0);
    }   // End else
    return 0;
}   // End main
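Since the child above deliberately skips execlp(), here is a companion sketch of the fork()-then-exec() pattern just described; the program being exec'ed ("ls -l") is an arbitrary illustrative choice.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                       // Create a new process

    if (pid < 0)
    {   // Error occurred
        fprintf(stderr, "Fork failed");
        exit(1);
    }
    else if (pid == 0)
    {   // Child: replace its memory image with a new program
        execlp("ls", "ls", "-l", (char *) NULL);
        perror("execlp failed");              // Reached only if execlp() fails
        exit(1);
    }
    else
    {   // Parent: wait for the child to finish
        wait(NULL);
        printf("Child complete\n");
    }
    return 0;
}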
 A process executes its last statement and asks the operating system to terminate it (via exit())
◦ The exit (or return) status value from the child is received by the parent (via wait()) – illustrated in the sketch below
◦ The process's resources are deallocated by the operating system
 A parent may terminate execution of its child processes (via the kill() function) when:
◦ The child has exceeded its allocated resources
◦ The task assigned to the child is no longer required
◦ The parent itself is exiting
 Some operating systems do not allow a child to continue if its parent terminates
 All children are then terminated – cascading termination
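A minimal sketch of the exit()/wait() handshake mentioned above; the child's status value 7 is arbitrary, and WIFEXITED/WEXITSTATUS are the standard POSIX macros for decoding the status the parent receives.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0)
    {   // Error occurred
        perror("fork");
        exit(1);
    }
    else if (pid == 0)
    {   // Child: terminate with an (arbitrary) status value
        exit(7);
    }
    else
    {   // Parent: receive the child's exit status via wait()
        int status;
        wait(&status);
        if (WIFEXITED(status))
            printf("Child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}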
 An independent process is one that cannot affect or
be affected by the execution of another process
 A cooperating process can affect or be affected by
the execution of another process in the system
 Advantages of process cooperation
◦ Information sharing (of the same piece of data)
◦ Computation speed-up (break a task into smaller subtasks)
◦ Modularity (dividing up the system functions)
◦ Convenience (to do multiple tasks simultaneously)
 Two fundamental models of interprocess communication
◦ Shared memory (a region of memory is shared)
◦ Message passing (exchange of messages between processes)
(Figure: the two communication models – message passing and shared memory)
 Shared memory requires communicating
processes to establish a region of shared
memory
 Information is exchanged by reading and
writing data in the shared memory
 A common paradigm for cooperating
processes is the producer-consumer
problem
 A producer process produces information that is consumed by a consumer process (a shared-memory sketch follows this list)
◦ an unbounded buffer places no practical limit on the size of the buffer
◦ a bounded buffer assumes that there is a fixed buffer size
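A minimal bounded-buffer sketch, assuming Linux: the parent produces integers into a buffer placed in an anonymous shared mapping, and the forked child consumes them. The busy-wait loops mirror the classic textbook algorithm; a real implementation would use semaphores or other synchronization instead.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>

#define BUFFER_SIZE 8                      /* fixed size: this is the bounded buffer */

struct shared {
    int buffer[BUFFER_SIZE];
    volatile int in;                       /* next free slot (written by producer) */
    volatile int out;                      /* next full slot (read by consumer)    */
};

int main(void)
{
    /* Region of shared memory, inherited by the child across fork() */
    struct shared *shm = mmap(NULL, sizeof *shm, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED) { perror("mmap"); exit(1); }
    shm->in = shm->out = 0;

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                        /* consumer */
        for (int i = 0; i < 20; i++) {
            while (shm->in == shm->out)    /* buffer empty: busy-wait (sketch only) */
                ;
            int item = shm->buffer[shm->out];
            shm->out = (shm->out + 1) % BUFFER_SIZE;
            printf("consumed %d\n", item);
        }
        exit(0);
    }

    for (int i = 0; i < 20; i++) {         /* producer */
        while ((shm->in + 1) % BUFFER_SIZE == shm->out)   /* buffer full */
            ;
        shm->buffer[shm->in] = i;
        shm->in = (shm->in + 1) % BUFFER_SIZE;
    }
    wait(NULL);
    return 0;
}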
 Mechanism that allows processes to communicate and to synchronize their actions
 No address space needs to be shared; this is particularly useful in a distributed processing environment (e.g., a chat program)
 A message-passing facility provides two operations:
◦ send(message) – message size can be fixed or variable
◦ receive(message)
 If P and Q wish to communicate, they need to:
◦ establish a communication link between them
◦ exchange messages via send/receive (a pipe-based sketch follows this list)
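One hedged illustration of send()/receive() over a link with no shared address space: an ordinary UNIX pipe between parent P and child Q, where write() plays the role of send() and read() the role of receive(). The message text is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                              /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                         /* process Q: receive(message) */
        char msg[64];
        close(fd[1]);                       /* Q only reads */
        ssize_t n = read(fd[0], msg, sizeof msg - 1);
        if (n > 0) {
            msg[n] = '\0';
            printf("Q received: %s\n", msg);
        }
        exit(0);
    }

    const char *text = "hello from P";      /* process P: send(message) */
    close(fd[0]);                           /* P only writes */
    write(fd[1], text, strlen(text));
    close(fd[1]);
    wait(NULL);
    return 0;
}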
 Logical implementation of communication link
◦ Direct or indirect communication
◦ Synchronous or asynchronous communication
◦ Automatic or explicit buffering
 How are links established?
 Can a link be associated with more than
two processes?
 How many links can there be between
every pair of communicating processes?
 What is the capacity of a link?
 Is the size of a message that the link can
accommodate fixed or variable?
 Is a link unidirectional or bi-directional?
 Processes must name each other explicitly:
◦ send (P, message) – send a message to process P
◦ receive(Q, message) – receive a message from
process Q
 Properties of communication link
◦ Links are established automatically between every
pair of processes that want to communicate
◦ A link is associated with exactly one pair of
communicating processes
◦ Between each pair there exists exactly one link
◦ The link may be unidirectional, but is usually bi-
directional
 Disadvantages
◦ Limited modularity of the resulting process definitions
◦ Hard-coding of identifiers is less desirable than indirection techniques
 Messages are directed to and received from mailboxes (also referred to as ports)
◦ Each mailbox has a unique id
◦ Processes can communicate only if they share a mailbox
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
 Properties of communication link
◦ Link is established between a pair of processes
only if both have a shared mailbox
◦ A link may be associated with more than two
processes
◦ Between each pair of processes, there may be
many different links, with each link corresponding
to one mailbox
 For a shared mailbox, messages are
received based on the following methods:
◦ Allow a link to be associated with at most two
processes
◦ Allow at most one process at a time to execute
a receive() operation
◦ Allow the system to select arbitrarily which process will receive the message (e.g., a round-robin approach)
 Mechanisms provided by the operating system (a POSIX-based sketch follows this list)
◦ Create a new mailbox
◦ Send and receive messages through the mailbox
◦ Destroy a mailbox
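As a hedged example, POSIX message queues provide exactly these mailbox-style mechanisms on Linux (compile with -lrt): mq_open() creates the mailbox, mq_send()/mq_receive() exchange messages through it, and mq_unlink() destroys it. The queue name "/demo_q" and the message size are arbitrary choices.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* Create the mailbox (message queue) named "/demo_q" */
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t) -1) { perror("mq_open"); exit(1); }

    if (fork() == 0) {                      /* child: receive(A, message) */
        char msg[128];
        if (mq_receive(q, msg, sizeof msg, NULL) >= 0)
            printf("received from mailbox: %s\n", msg);
        exit(0);
    }

    const char *text = "hello via mailbox"; /* parent: send(A, message) */
    mq_send(q, text, strlen(text) + 1, 0);

    wait(NULL);
    mq_close(q);
    mq_unlink("/demo_q");                   /* destroy the mailbox */
    return 0;
}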
 Message passing may be either blocking or
non-blocking
 Blocking is considered synchronous
◦ Blocking send has the sender block until the
message is received
◦ Blocking receive has the receiver block until a
message is available
 Non-blocking is considered asynchronous
◦ Non-blocking send has the sender send the
message and continue
◦ Non-blocking receive has the receiver receive a
valid message or null
 When both send() and receive() are blocking,
we have a rendezvous between the sender and
the receiver
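For example (again assuming POSIX message queues as the underlying facility), opening a queue with O_NONBLOCK turns receive() into a non-blocking operation: when no message is available, mq_receive() returns immediately with errno set to EAGAIN instead of blocking.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* O_NONBLOCK: mq_receive() will not block when the queue is empty */
    mqd_t q = mq_open("/demo_nb_q", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, &attr);
    if (q == (mqd_t) -1) { perror("mq_open"); exit(1); }

    char msg[128];
    if (mq_receive(q, msg, sizeof msg, NULL) == -1 && errno == EAGAIN)
        printf("non-blocking receive: no message available, continuing\n");

    mq_close(q);
    mq_unlink("/demo_nb_q");
    return 0;
}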
 Whether communication is direct or indirect,
messages exchanged by communicating
processes reside in a temporary queue
 These queues can be implemented in three
ways:
1. Zero capacity – the queue has a maximum length
of zero
- Sender must block until the recipient receives the
message
2. Bounded capacity – the queue has a finite length
of n
- Sender must wait if queue is full
3. Unbounded capacity – the queue length is unlimited
- Sender never blocks
