
3. PROCESSES

Introduction

 Process Management is concerned with the management of the physical
processors, specifically the assignment of processors to processes.
 A process can be thought of as a program in execution. A process will need certain
resources – such as CPU time, memory, files, and I/O devices – to accomplish the
task. These resources are allocated to the process either when it is created or while it is
executing.
 A process is the unit of work in most systems. Such a system consists of a collection
of processes. Operating-system processes execute system code, and user processes
execute user code. All these processes may execute concurrently.
 The foreground process is the one that accepts input from the keyboard, mouse, or
other input device.
 Background processes cannot accept interactive input from a user, but they can
access data stored on a disk and write data to the video display.
 The operating system is responsible for the following activities in connection with
process and thread management: the creation and deletion of both user and system
processes; the scheduling of processes; and the provision of mechanisms for
synchronization, communication, and deadlock handling for processes.
Process Concept
 A process defines the fundamental unit of computation for the computer.
 The OS performs the following actions after creating a process:

1. Create a process control block (PCB) for the process.
2. Assign a process id and priority.
3. Allocate memory and other resources to the process.
4. Set up the process environment.
5. Initialize resource accounting information for the process.
1. The Process
 A process is a program in execution.
 A process is more than the program code, which is sometimes known as the text
section.
 It also includes the current activity, as represented by the value of the program
counter and the content of the processor’s registers.
 A process generally includes the process stack, which contains temporary data (such as
method parameters, return addresses, and local variables), and a data section, which
contains global variables.
 A program by itself is not a process; a program is a passive entity, such as the
contents of a file stored on disk, whereas a process is an active entity, with a program
counter specifying the next instruction to execute and a set of associated resources.
2. Process State
 The state of a process is defined in part by the current activity of that process. Each
process may be in one of the following states:
 New: The process is being created.
 Running: Instructions are being executed.
 Blocked (or Waiting): The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
 These state names are arbitrary, and they vary across operating systems. Only one
process can be running on any processor at any instant, although many processes may
be ready and waiting.

 The state diagram corresponding to these states is presented in Figure 3.1.

Figure 3.1: Diagram of process state

3. Process Control Block (PCB)


 The OS maintains the information about each process in a record or data
structure called the PCB, or task control block. Each user process has a PCB. It is
created when a user creates a process and removed from the system when the process
is killed. All these PCBs are kept in the memory reserved for the OS.
 A PCB is shown in Figure 3.2. It contains important information about the specific
process, including the following (a C sketch of such a structure follows the list):

Figure 3.2: Process Control Block (PCB)

 Process State: The state may be new, ready, running, waiting, halted, and so on.
 Program Counter: The counter indicates the address of the next instruction to be
executed for this process.
 CPU Registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program
counter, this state information must be saved when an interrupt occurs.
 CPU-Scheduling Information: This information includes a process priority, pointers
to scheduling queues, and any other scheduling parameters.
 Memory-Management Information: This information may include such information
as the value of the base and limit registers, the page tables, or the segment tables,
depending on the memory system used by the operating system.
 Accounting Information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
 I/O Status Information: The information includes the list of I/O devices allocated to
this process, a list of open files, and so on.
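
 As an illustration only, the C sketch below groups the PCB fields listed above into a single structure. The field names and sizes are assumptions for teaching purposes, not the layout used by any real kernel.

/* Illustrative PCB structure; field names and sizes are assumptions,
   not the layout used by any real operating system. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

#define MAX_OPEN_FILES 16

struct pcb {
    int             pid;              /* process identifier                */
    enum proc_state state;            /* new, ready, running, waiting, ... */
    uint64_t        program_counter;  /* address of the next instruction   */
    uint64_t        registers[16];    /* saved CPU registers               */

    /* CPU-scheduling information */
    int             priority;
    struct pcb     *next;             /* link into a scheduling queue      */

    /* Memory-management information */
    uint64_t        base_register;
    uint64_t        limit_register;

    /* Accounting information */
    uint64_t        cpu_time_used;

    /* I/O status information */
    int             open_files[MAX_OPEN_FILES];
};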
4. Threads
 A thread is a lightweight process with a reduced state.
 A thread is a single sequential stream of execution within a process. Because threads
have some of the properties of processes, they are sometimes called lightweight processes.
 Threads allow multiple streams of execution within a single process.
 A thread can be in any of several states (running, blocked, ready, or terminated).
Each thread has its own stack.
 A thread consists of a program counter (PC), a register set, and a stack space.
 Threads are not independent of one another in the way processes are; a thread shares
with the other threads of its process the code section, data section, and OS resources
(collectively known as a task), such as open files and signals.
Process Scheduling
 The act of determining which process in the ready state should be moved
to the running state is known as Process Scheduling.
 Process Scheduling is an essential part of the Multiprogramming operating systems.
 The prime aim of the process scheduling system is to keep the CPU busy all the time
and to deliver minimum response time for all programs.
 A uniprocessor system can have only one running process. If more processes exist, the
rest must wait until the CPU is free and can be rescheduled.

1. Scheduling Queues
 The scheduling queues in a system are: (1) the Job Queue, (2) the Ready Queue, and
(3) the Device Queue.
 When processes enter the system, they are put into a Job Queue. This queue
consists of all processes in the system.
 The processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the Ready Queue. This queue is generally stored as a linked
list (sketched below); a ready-queue header contains pointers to the first and final PCBs in the list.
 The list of processes waiting for a particular I/O device is called a Device Queue.
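
 A minimal sketch of the ready queue as a linked list of PCBs, with head and tail pointers as described above, is given below; the two-field PCB is a simplification for illustration.

/* Sketch of a ready queue as a linked list of PCBs with pointers to the
   first and last entries. Simplified for illustration. */
#include <stddef.h>
#include <stdio.h>

struct pcb {
    int pid;
    struct pcb *next;           /* link to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head;           /* first PCB (next to be dispatched) */
    struct pcb *tail;           /* last PCB (most recently enqueued) */
};

/* Append a PCB at the tail of the ready queue. */
static void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;            /* queue was empty */
    q->tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
static struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct ready_queue rq = { NULL, NULL };
    struct pcb a = { 1, NULL }, b = { 2, NULL };

    enqueue(&rq, &a);
    enqueue(&rq, &b);
    printf("dispatch pid %d\n", dequeue(&rq)->pid);   /* prints 1 */
    return 0;
}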
 A common representation of process scheduling is a Queuing Diagram, shown in
Figure 3.3.

Figure 3.3: Queuing-diagram representation of process scheduling

 In the figure, each rectangular box represents a queue. The circles represent the
resources that serve the queues. The arrows indicate the flow of processes in the
system.

 A new process is initially put in the ready queue. It waits in the ready queue until it is
selected for execution (or dispatched). Once the process is assigned to the CPU and is
executing, one of several events could occur:

 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.

2. Schedulers
 A Scheduler is an operating system module that selects the next job to be
admitted into the system and the next process to run.
 Schedulers are of three types.
i) Long-Term Scheduler
ii) Short-Term Scheduler
iii) Medium-Term Scheduler
i) Long-Term Scheduler
 The Long-Term Scheduler, or Job Scheduler, selects processes from the job pool
and loads them into memory for execution.
 The long-term scheduler executes much less frequently than the short-term scheduler.
 The long-term scheduler controls the degree of Multiprogramming – the number
of processes in memory.
 The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and CPU bound.
 An I/O-bound process spends more of its time doing I/O than it spends doing
computations.
 A CPU-bound process, on the other hand, generates I/O requests infrequently,
using more of its time doing computation than an I/O-bound process does.
 On some systems, the long-term scheduler may be absent or minimal. For
example, time-sharing systems such as UNIX often have no long-term scheduler.
When a process changes state from new to ready, that transition is performed by the
long-term scheduler.
ii) Short-Term Scheduler
 The Short-Term Scheduler, or CPU Scheduler, selects from among the
processes that are ready to execute, and allocates the CPU to one of them.
 The short-term scheduler must select a new process for the CPU frequently. A
process may execute for only a few milliseconds before waiting for an I/O
request.
 The short-term scheduler executes at least once every 100 milliseconds.
 The short-term scheduler must be fast.

iii) Medium-Term Scheduler


 The Medium-Term Scheduler removes processes from memory, and thus
reduces the degree of multiprogramming.
 At some later time, the process can be reintroduced into memory and its execution
can be continued where it left off. This scheme is called Swapping.
 Saving the image of a suspended process in secondary storage is called swapping.
 The process is swapped out, and is later swapped in, by the medium-term
scheduler.
 Swapping may be necessary to improve the process mix.
 The medium-term scheduler is shown in Figure 3.4.

Figure 3.4: Addition of medium-term scheduling to the queuing diagram

3. Context Switch
 Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context
Switch.
 The context of a process is represented in the PCB of a process; it includes the value of
the CPU registers, the process state, and the memory-management information.
 When a context switch occurs, the kernel saves the context of the old process in its
PCB and loads the saved context of the new process scheduled to run.
 Context switching can significantly affect performance, since modern computers have
many general-purpose and status registers that must be saved.
 Context-switch times are highly dependent on hardware support.

 On systems that provide multiple register sets, a context switch simply requires
changing the pointer to the current register set. Also, the more complex the operating
system, the more work must be done during a context switch.
Operations on Processes
 The operations on processes carried out by an operating system are primarily of two
types:
1. Process Creation
2. Process Termination
1) Process Creation
 Process Creation is the task of creating a new process. There are different ways
to create a new process.
 A new process can be created at the time of initialization of the operating system or
when a system call such as fork() is issued by another process.
 The process that creates a new process using a system call is called the parent
process, while the new process that is created is called the child process. Child
processes can in turn create new processes using system calls.
 A new process can also be created by an operating system based on the request
received from the user.
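
 On UNIX-like systems the fork() system call creates a new process; the sketch below assumes a POSIX environment. fork() returns 0 in the child and the child's process id in the parent, and the child may optionally replace its program with exec.

/* Process creation with fork() on a POSIX system: fork() returns 0 in
   the child and the child's process id in the parent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* create a child process */

    if (pid < 0) {
        perror("fork failed");
        return 1;
    } else if (pid == 0) {
        /* Child: runs with its own copy of the parent's address space. */
        printf("child:  pid=%d  parent=%d\n", (int)getpid(), (int)getppid());
        /* The child could load a new program here, for example:
           execlp("ls", "ls", "-l", (char *)NULL);                       */
    } else {
        /* Parent: fork() returned the child's pid. */
        printf("parent: created child %d\n", (int)pid);
        wait(NULL);                         /* wait for the child to finish */
    }
    return 0;
}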
2) Process Termination
 Process Termination is an operation in which a process is terminated after
the execution of its last instruction. This operation is used to terminate or end
any process.
 When a process is terminated, the resources that were being utilized by the
process are released by the operating system.
 When a child process terminates, it returns status information to its parent
process. The child process can also be terminated by the parent process if the task
performed by the child is no longer needed.
 When a parent process terminates, it has to terminate its child processes as well,
because a child process cannot run when its parent process has been terminated.
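
 A minimal POSIX sketch of termination is given below: the child terminates with exit(), and the parent collects the status with waitpid(). A parent that no longer needs a child could instead terminate it with kill().

/* Process termination on POSIX: the child exits with a status code and
   the parent retrieves it with waitpid(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: finish its work and terminate with status 42. */
        exit(42);
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);           /* parent waits for the child */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        /* A child that is no longer needed could instead be terminated
           with: kill(pid, SIGTERM);                                      */
    } else {
        perror("fork failed");
        return 1;
    }
    return 0;
}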

Interprocess Communication


 Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other process is
independent.
 A process is cooperating if it can affect or be affected by the other processes
executing in the system. Any process that shares data with other processes is a
cooperating process.
 There are several reasons for providing an environment that allows process
cooperation:
 Information Sharing. Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to such information.
 Computation Speedup. If we want a particular task to run faster, we must break
it into subtasks, each of which will be executing in parallel with the others. Notice
that such a speedup can be achieved only if the computer has multiple processing
cores.
 Modularity. We may want to construct the system in a modular fashion, dividing
the system functions into separate processes or threads.
 Convenience. Even an individual user may work on many tasks at the same time.
For instance, a user may be editing, listening to music, and compiling in parallel.
 Cooperating processes require an interprocess communication (IPC) mechanism
that will allow them to exchange data and information.
 There are two fundamental models of interprocess communication: shared memory
and message passing.
 In the shared-memory model, a region of memory that is shared by cooperating
processes is established. Processes can then exchange information by reading and
writing data to the shared region.

 In the message-passing model, communication takes place by means of messages
exchanged between the cooperating processes. The two communication models are
contrasted in Figure 3.5.

Figure 3.5: Communications models. (a) Message passing. (b) Shared memory.
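
 As a concrete illustration of the shared-memory model, the sketch below uses POSIX shared memory (shm_open and mmap); the object name /demo_shm is arbitrary, error checking is omitted for brevity, and on some systems the program must be linked with -lrt.

/* Shared-memory IPC sketch using POSIX shared memory: the parent creates
   and maps a shared region, the child writes into it, and the parent reads
   the result. The name "/demo_shm" is arbitrary; error checking is omitted
   for brevity; link with -lrt on some systems. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"
#define SHM_SIZE 4096

int main(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, SHM_SIZE);                    /* size the shared region */
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);     /* map it into the address space */

    if (fork() == 0) {
        strcpy(region, "hello from the child"); /* child writes to the region */
        return 0;
    }

    wait(NULL);                                 /* wait for the child to finish */
    printf("parent read: %s\n", region);        /* parent reads what the child wrote */

    munmap(region, SHM_SIZE);
    shm_unlink(SHM_NAME);                       /* remove the shared-memory object */
    return 0;
}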

 Interprocess Communication (IPC) is a mechanism that allows the exchange of
data between processes.
 IPC is particularly useful in a distributed environment where the communicating
processes may reside on different computers connected with a network.
 IPC is best provided by a message-passing system, and message systems can be
defined in many ways.
1. Message-Passing System
 Message passing is used as a method of communication in microkernels.
 Communication among the user processes is accomplished through the passing of
messages.
 An IPC facility provides at least two operations: send(message) and
receive(message). A pipe-based sketch of these primitives is given after the list of
implementation options below.
 Messages sent by a process can be of either fixed size or of variable size.
 If only fixed-size messages can be sent, the system-level implementation is
straightforward. On the other hand, variable-sized messages require a more complex
system-level implementation, but the programming task becomes simpler.

 If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link
can be implemented in a variety of ways.
 There are several methods for logically implementing a link and the send/receive
operations:
 Direct or indirect communication
 Symmetric or asymmetric communication
 Automatic or explicit buffering
 Send by copy or send by reference
 Fixed-sized or variable-sized messages
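
 One simple way to realize the send and receive primitives on a POSIX system is a pipe, as sketched below: write() plays the role of send and read() the role of receive, and the read blocks until a message arrives.

/* Message-passing sketch using a UNIX pipe: write() acts as send and
   read() acts as receive. Assumes a POSIX system. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                           /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {
        /* Child: "send" a fixed-size message to the parent. */
        char msg[32] = "ping from child";
        close(fd[0]);
        write(fd[1], msg, sizeof msg);
        close(fd[1]);
        return 0;
    }

    /* Parent: "receive" the message (read blocks until data arrives). */
    char buf[32];
    close(fd[1]);
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}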
2. Naming
 The various schemes for specifying processes in send and receive primitives are of
two types:
 Direct Communication
 Indirect Communication
1. Direct Communication
 With direct communication, each process that wants to communicate must explicitly
name the recipient or sender of the communication.
 The send and receive primitives are defined as:
send(P, message) – Send a message to process P.
receive(Q, message) – Receive a message from process Q.
 A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want
to communicate.
 A link is associated with exactly two processes.
 This scheme exhibits symmetry in addressing; that is, both the sender and
receiver processes must name each other to communicate.
 A variant of this scheme employs asymmetry in addressing: only the sender names
the recipient; the recipient is not required to name the sender.

 The send and receive primitives in this scheme are defined as follows:
send(P, message) – Send a message to process P.
receive(id, message) – Receive a message from any process; the variable id is set to
the name of the process with which communication has taken place.
2. Indirect Communication
 With indirect communication, messages are sent to and received from mailboxes,
or ports.
 A mailbox may be owned either by a process or by the operating system.
 A process can communicate with some other process via a number of different
mailboxes.
 The send and receive primitives are defined as follows (a POSIX message-queue
sketch is given after the list of link properties below):
send(A, message) – Send a message to mailbox A.
receive(A, message) – Receive a message from mailbox A.
 In this scheme, a communication link has the following properties:
 A link is established between a pair of processes only if both members of the pair have
a shared mailbox.
 A link may be associated with more than two processes.
 A number of different links may exist between each pair of communicating processes,
with each link corresponding to one mailbox.
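
 Indirect communication through a named mailbox can be sketched with POSIX message queues, as below; the mailbox name /demo_mailbox is arbitrary, and on Linux the program is linked with -lrt.

/* Indirect communication sketch: a POSIX message queue acts as the
   mailbox. The name "/demo_mailbox" is arbitrary; link with -lrt on Linux. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAILBOX "/demo_mailbox"

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open(MAILBOX, O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {
        /* Child: send a message to the mailbox, not to a named process. */
        const char *msg = "report ready";
        mq_send(mq, msg, strlen(msg) + 1, 0);   /* priority 0 */
        return 0;
    }

    /* Parent: receive from the same mailbox. */
    char buf[64];
    mq_receive(mq, buf, sizeof buf, NULL);      /* blocks until a message arrives */
    printf("received from mailbox: %s\n", buf);

    wait(NULL);
    mq_close(mq);
    mq_unlink(MAILBOX);                         /* remove the mailbox */
    return 0;
}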
3. Synchronization
 In the context of message passing, synchronization refers to the way in which the
send and receive operations coordinate the communicating processes.
 Message passing may be either blocking or non-blocking, also known as
synchronous and asynchronous, respectively. A sketch of a non-blocking receive is
given after the list below.
 Blocking send: The sending process is blocked until the message is received by
the receiving process or by the mailbox.
 Non-blocking send: The sending process sends the message and resumes
operation.
 Blocking receive: The receiver blocks until a message is available.
 Non-blocking receive: The receiver retrieves either a valid message or a null.
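
 The sketch below illustrates a non-blocking receive using the same POSIX message-queue API as above: opening the queue with O_NONBLOCK makes mq_receive return immediately with EAGAIN when no message is available, instead of blocking the caller. The queue name is arbitrary.

/* Non-blocking receive sketch: with O_NONBLOCK, mq_receive returns -1 and
   sets errno to EAGAIN when no message is available, instead of blocking.
   Queue name is arbitrary; link with -lrt on Linux. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_nb", O_CREAT | O_RDWR | O_NONBLOCK, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    char buf[64];
    ssize_t n = mq_receive(mq, buf, sizeof buf, NULL);  /* does not block */
    if (n < 0 && errno == EAGAIN)
        printf("no message available, continuing with other work\n");
    else if (n >= 0)
        printf("got a message: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_nb");
    return 0;
}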

4. Buffering
 The buffer is an area in the main memory that is used to store or hold the
data temporarily. The act of storing data temporarily in the buffer is called
buffering.
 Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Such queues can be
implemented in three ways:
1. Zero Capacity
2. Bounded Capacity
3. Unbounded Capacity
 Zero Capacity: The maximum length of the queue is zero, so the communication
link cannot have any messages waiting in it. In this case, the sender must
block until the recipient receives the message.
 Bounded Capacity: The queue has finite length n; thus at most n messages
can reside in it. If the queue is not full when a new message is sent, the message
is placed in the queue and the sender can continue execution without waiting.
If the queue is full, the sender must block until space becomes available
(see the sketch after this list).
 Unbounded Capacity: The queue has potentially infinite length; thus, any
number of messages can wait in it. The sender never blocks.
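
 Bounded capacity can be sketched with the same POSIX message-queue API: the mq_maxmsg attribute fixes the queue length, and once the queue is full a blocking sender waits, while a non-blocking sender fails with EAGAIN, as below. The names and sizes are arbitrary.

/* Bounded-capacity sketch: the queue holds at most 2 messages
   (mq_maxmsg = 2). With O_NONBLOCK, the third mq_send fails with EAGAIN
   instead of blocking; without it, the sender would block until space is
   available. Link with -lrt on Linux. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 2, .mq_msgsize = 32 };
    mqd_t mq = mq_open("/demo_bounded", O_CREAT | O_RDWR | O_NONBLOCK,
                       0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    for (int i = 0; i < 3; i++) {
        char msg[32];
        snprintf(msg, sizeof msg, "message %d", i);
        if (mq_send(mq, msg, strlen(msg) + 1, 0) == 0)
            printf("queued: %s\n", msg);
        else if (errno == EAGAIN)
            printf("queue full, sender would block here\n");
    }

    mq_close(mq);
    mq_unlink("/demo_bounded");
    return 0;
}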
Exercises:

1. What do you mean by Process Management?


2. Define Process.
3. What are Foreground and Background Processes?
4. What are the actions performed by the Operating System after creating a process?
5. Distinguish between a Program and a Process.
6. Discuss briefly about the State of a Process.
7. What is PCB?
8. Explain about Process Scheduling.
9. What are Scheduling Queues?

10. What are Schedulers? Discuss the different types of Schedulers in OS.
11. What do you mean by Long-Term Scheduler? What are its functions?
12. Write short notes on Short-Term Scheduler.
13. Discuss briefly about Medium-Term Scheduler.
14. What is meant by Context Switch?
15. What are the various operations that can be performed on a Process?
16. What are Cooperating Processes? What are its advantages?
17. Explain about Inter-Process Communication.
18. Differentiate between Direct and Indirect Communication.
19. What is meant by Synchronization?
20. What do you mean by Buffering? Discuss its types.

*******************************
