
CHAPTER TWO

Processes and process management

Operating System
CoSc 3023

Muluken E.
(MSc)
Email: mulukenemb@gmail.com
Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost
interchangeably.
 An instance of a computer program that is being executed (on a real or virtual processor) is known as a process.
 A program becomes a process when it begins its execution.
 Process – a program in execution

 Process execution must progress in sequential fashion.


 Multiprogramming on a single CPU gives only the illusion of parallelism (pseudo-parallelism).
 True parallelism requires a multi-processor.
Process Concept
 A Process consists of
Execution context
Code
Data
Stack
 The execution context holds the CPU state of the process:
 program counter (PC)
 stack pointer (SP)
 data registers
Process concept…
 A process in memory is divided into the following sections:
 1. Stack – contains the temporary data such as method/function parameters, return addresses and local variables.
 2. Heap – memory that is dynamically allocated to the process during its run time.
 3. Text – includes the current activity, represented by the value of the program counter and the contents of the processor's registers.
 4. Data – contains the global and static variables.
Process concept…
 Multiprogramming of four programs – conceptual model:
 From the CPU's point of view there is a single program counter; from the process point of view each process has its own logical program counter.
 4 independent processes, each running sequentially.
 Only one program is active at any instant!
 That instant can be very short…
[Figure: four programs A–D multiplexed over time on one CPU]
Process concept…
 The rate at which a process performs its computation is not uniform.
 Thus processes must not be programmed with built-in assumptions about timing.
 The difference between a program and a process is subtle but crucial.
 The key idea is that a process is an activity of some kind.
 It has:
 a program
 input/output
 a state
Operation on Process
The OS is responsible for providing mechanisms to create and destroy processes dynamically:
 Process Creation
 Process Termination
Process Creation…
 Parent process creates children processes.
 The child processes can also create other processes forming a tree of process.
 Generally, process identified and managed via a process identifier (pid).
Address space:
 Child is a duplicate of the parent.
 Child has a new program loaded into it.
Resource sharing
 Parent and children share all resources.
 Children share subset of parent’s resources.
 Parent and child share no resources.
Execution
 Parent and children execute concurrently.
 Parent waits until some or all of its children terminate.
Process creation…
 fork(): The fork() system call is used to create a separate, duplicate process.
 This call creates an exact clone of the calling process.
 Parent and child process may share:
 the same memory image
 the same environment strings
 the same open files

 exec(): When an exec() system call is invoked, the program specified in the parameter to exec() will replace the entire process – including all threads.
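 A minimal sketch of fork() followed by exec() on a POSIX system; the program run by the child (/bin/ls) is only an illustrative choice:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>     /* fork(), execlp(), getpid() */

int main(void)
{
    pid_t pid = fork();          /* create a duplicate of the calling process */

    if (pid < 0) {               /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {       /* child: replace the memory image with a new program */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* reached only if exec fails */
        exit(1);
    } else {                     /* parent: continues concurrently with the child */
        printf("parent %d created child %d\n", getpid(), pid);
    }
    return 0;
}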
Process Creation…
 But in Windows:
 Windows has no concept of a process hierarchy.

 All processes are created equal.

 A single Win32 function call, CreateProcess(), handles both:

 process creation
 loading the correct program

 This call has 10 parameters.

 BOOL CreateProcess(
    LPCTSTR lpApplicationName,          // specifies the executable program
    LPTSTR lpCommandLine,               // specifies the command line arguments
    LPSECURITY_ATTRIBUTES lpsaProcess,  // points to the process security attributes
    LPSECURITY_ATTRIBUTES lpsaThread,   // points to the thread security attributes
    BOOL bInheritHandles,               // indicates whether the new process inherits copies of the calling process's handles
    DWORD dwCreationFlags,              // combines several flags
    LPVOID lpEnvironment,               // points to an environment block for the new process
    LPCTSTR lpCurDir,                   // specifies the drive and directory for the new process
    LPSTARTUPINFO lpStartupInfo,        // specifies the main window appearance and standard device handles for the new process
    LPPROCESS_INFORMATION lpProcInfo    // receives a handle and the identifiers of the newly created process and its thread
 )
Return: TRUE only if the process and thread are successfully created.
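 A minimal usage sketch of CreateProcess(); notepad.exe is only an illustrative command line, and error handling is kept to the essentials:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    TCHAR cmd[] = TEXT("notepad.exe");   /* illustrative command line (must be writable) */

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* creates the process and loads the program in one call */
    if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("CreateProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);  /* parent waits for the child */
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}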
Process Termination
 Process executes its last statement and asks the OS to delete it (exit()).
 Output data is returned from the child to its parent (via wait()).
 Process resources are deallocated by OS.
 Parent may terminate execution of children processes
(abort()).
 Child has exceeded allocation of resources.
 Task assigned to child is no longer required.
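 A minimal sketch of a parent collecting its child's exit status on a POSIX system:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>   /* wait(), WIFEXITED, WEXITSTATUS */
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                       /* child: terminate and pass status to parent */
    } else if (pid > 0) {
        int status;
        wait(&status);                  /* parent: block until the child terminates */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}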
Process Termination…
 Conditions that terminate processes can be
 Voluntary
 Involuntary
 Voluntary
 Normal exit
 Error exit
 Involuntary
 Fatal error (only sort of involuntary)
 Killed by another process
Process Termination
 Normal exit (Voluntary)
 When a compiler finishes its work, it executes a system call to tell the OS it is done:
 exit() – UNIX
 ExitProcess() – Windows
 Screen-oriented programs also support voluntary exit.

 Error exit (Voluntary)

 If a user types the command cc foo.c and no such file exists, the compiler simply exits.
Process Termination…
 Fatal Error (involuntary)

 Errors caused by a program bug


 Illegal instruction
 Referencing non-existent memory
 Dividing by zero

 Killed by another process (involuntary)

 A process issues a system call telling the OS to kill some other process:

 kill() – UNIX
 TerminateProcess() – Windows
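 A minimal sketch on UNIX, assuming the target process ID is already known (1234 is only a placeholder):

#include <signal.h>     /* kill(), SIGKILL */
#include <sys/types.h>
#include <stdio.h>

int main(void)
{
    pid_t victim = 1234;                 /* placeholder PID of the process to terminate */

    if (kill(victim, SIGKILL) == -1)     /* ask the OS to terminate the victim */
        perror("kill");
    return 0;
}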
Process State
 Two-State Process Model
 States: Not Running, Running
 Transitions:
 enter: (new process) → Not Running
 dispatch: Not Running → Running
 pause: Running → Not Running
 exit: Running → (terminated)
Process States

 Possible process states


 running
 blocked
 ready
Process State
 As a process executes, it changes state
 new: The process is being created.
 running: Instructions are being
executed.
 waiting: The process is waiting for
some event to occur.
 ready: The process is waiting to be
assigned to a processor.
 terminated: The process has finished
execution.

Process state transition
 Transition 1 – occurs when the process discovers that it cannot continue
 This state transition is:
Block(processName): Running → Blocked

 Transition 2 – occurs when the scheduler decides that the time allotted to the process has expired
 This state transition is:
Time-Run-Out(processName): Running → Ready
 Transition 3 – occurs when all other processes have had their share and it is time for the first process to run again
 This state transition is:
Dispatch(processName): Ready → Running
Process state transition…
 Transition 4 – occurs when the external event for which a process was waiting happens
 This state transition is:
Wakeup(processName): Blocked → Ready

 Transition 5 – occurs when the process is created
 This state transition is:
Admitted(processName): New → Ready

 Transition 6 – occurs when the process has finished execution
 This state transition is:
Exit(processName): Running → Terminated
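 A minimal sketch of these states and transitions as code; the enum and helper names are illustrative, not from any particular OS:

#include <stdio.h>

/* five-state process model */
enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

/* illustrative transition helpers mirroring the transitions above */
enum proc_state admit(enum proc_state s)    { return s == NEW     ? READY      : s; }
enum proc_state dispatch(enum proc_state s) { return s == READY   ? RUNNING    : s; }
enum proc_state block(enum proc_state s)    { return s == RUNNING ? BLOCKED    : s; }
enum proc_state timeout(enum proc_state s)  { return s == RUNNING ? READY      : s; }
enum proc_state wakeup(enum proc_state s)   { return s == BLOCKED ? READY      : s; }
enum proc_state finish(enum proc_state s)   { return s == RUNNING ? TERMINATED : s; }

int main(void)
{
    enum proc_state s = NEW;
    s = admit(s);        /* New → Ready       */
    s = dispatch(s);     /* Ready → Running   */
    s = block(s);        /* Running → Blocked */
    s = wakeup(s);       /* Blocked → Ready   */
    printf("state = %d\n", s);   /* prints 1 (READY) */
    return 0;
}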
Process management
 Information maintained by OS for process management
o process context

o process control block

 OS virtualization of CPU for each process.


o Context switching

o Dispatching loop
Implementation…process context
 Contains all the state necessary to run a program.
 The information the process
needs to do the job: code,
data, stack, heap.
 This is known as User level
context.
Implementation…PCB
 To implement the process model, the OS maintains a table (an array of structures) called the process table.
 Each entry in the table is a process control block (PCB).
 There is one entry for each process.
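 A minimal sketch of what one process-table entry (PCB) might contain; the field names and sizes are illustrative, real kernels differ:

#include <stdio.h>

/* illustrative process control block (not from a real kernel) */
struct pcb {
    int            pid;            /* process identifier */
    int            state;          /* READY, RUNNING, BLOCKED, ... */
    unsigned long  program_counter;
    unsigned long  stack_pointer;
    unsigned long  registers[16];  /* saved general-purpose registers */
    void          *address_space;  /* memory-management information */
    int            open_files[16]; /* file descriptors / I/O status */
    int            priority;       /* scheduling information */
    struct pcb    *parent;         /* link to the creating process */
};

/* the process table: one entry per process */
struct pcb process_table[64];

int main(void)
{
    printf("each PCB entry is %zu bytes\n", sizeof(struct pcb));
    return 0;
}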
Inter-process Communication
 Processes executing concurrently in the operating system may
be either independent process or cooperating processes.
 Independent process cannot affect or be affected by the execution of
another process.
 Cooperating process can affect or be affected by the execution of
another process
 Advantages of process cooperation:
 Information sharing
 Computation speed-up – a task is divided into subtasks that run in parallel to achieve the required result.
 Modularity – dividing system functions into separate processes.
 Convenience – a single user may work on the same data in different tasks at once; the cooperation mechanism helps avoid conflicts.
Cooperating Processes
Cooperating processes require an inter-process communication (IPC) mechanism to exchange data and information.
The exchange of data between two or more separate, independent processes is called inter-process communication.
Processes frequently need to communicate with other processes/threads to share:
• Messages
• Semaphores
• Shared memory

The operating system provides facilities for IPC.

There are two fundamental models of inter-process communication:
 Shared memory model
 Message passing model


Cooperating Processes
 (a) Message passing
 In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
 Useful for small amounts of data.
 Easier to implement than shared memory.
 Requires system calls and thus intervention of the kernel.
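 For illustration, a minimal message-passing sketch on a POSIX system, using a pipe between parent and child (the kernel copies each message):

#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* pipe(), fork(), read(), write() */

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    pipe(fd);
    if (fork() == 0) {              /* child: send a message */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
    } else {                        /* parent: receive the message via the kernel */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
    }
    return 0;
}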

 (b) Shared memory
 In the shared-memory model, a region of memory that is shared by the cooperating processes is established.
 Processes can then exchange information by reading and writing data to the shared region.
 Maximum speed (speed of memory) and convenience.
 System calls are required only to establish the shared memory region; further accesses do not require the kernel.
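 And a minimal shared-memory sketch using the POSIX shm_open()/mmap() interface; the name "/demo_shm" is only illustrative, some platforms need -lrt at link time, and error checking is omitted for brevity:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <sys/mman.h>   /* shm_open(), mmap() */
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* establish the shared region (the only step that needs system calls) */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                      /* child: write directly to memory */
        strcpy(region, "hello via shared memory");
        return 0;
    }
    wait(NULL);                             /* parent: wait, then read the region */
    printf("parent read: %s\n", region);
    shm_unlink("/demo_shm");                /* clean up the shared object */
    return 0;
}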


Message passing and shared memory
IPC – unicast and multicast
 In distributed computing, two or more processes engage in IPC
using a protocol agreed upon by the processes.
 A process may be a sender at some points during a protocol, a

receiver at other points.


Inter-process communication can be divided into two kinds:
i. Unicast inter-process communication – communication from one process to a single other process.
ii. Multicast inter-process communication – communication from one process to a group of processes.
[Figure: unicast (one process to one, e.g. socket communication) vs. multicast (one process to a group, e.g. publish/subscribe message model)]

IPC – Event Diagram
[Figure: event diagram for a protocol between Process A and Process B over time – request/response messages (inter-process communication), each process's execution flow, and intervals where a process is blocked]
Why we need to study-IPC
First – how one process can pass information to another.

 APIs in distributed computing

Second – conflict resolution when processes engage in critical activities.

 Protocols and related issues

Third – proper sequencing when dependencies are present.

 Data production and printing

The last two issues apply equally well to threads; the first is even simpler for threads, since they share an address space.

 Same problems exist – same solutions apply
Race Condition
 A race condition occurs when two or more processes are reading or writing the same shared data/shared memory and the final result depends on exactly when each one runs.

 Race conditions are also possible inside the OS itself (see the sketch below).
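 A minimal demonstration sketch, assuming a Linux-style MAP_ANONYMOUS shared mapping; two processes increment the same counter without mutual exclusion, so updates can be lost:

#include <stdio.h>
#include <sys/mman.h>   /* mmap(), MAP_SHARED | MAP_ANONYMOUS */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* a counter placed in memory shared by parent and child */
    int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *counter = 0;

    pid_t pid = fork();
    for (int i = 0; i < 1000000; i++)
        (*counter)++;                      /* unsynchronized read-modify-write: the race */

    if (pid == 0)
        return 0;                          /* child exits */
    wait(NULL);
    printf("counter = %d\n", *counter);    /* often less than 2000000 */
    return 0;
}

 The critical-section discipline described next is exactly what removes this nondeterminism.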


Critical Section
That part of the program where the shared memory is accessed by various processes is called the critical section (CS).

If we could arrange matters such that no two processes were ever in their critical sections at the same time, we could avoid race conditions.

Find some way to prohibit more than one process from reading and writing the shared data at the same time (mutual exclusion).

Critical Section….
Mutual Exclusion
 A way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing.

 Note that mutual exclusion needs to be enforced only when processes access shared modifiable data.

 For non-shared data, mutual exclusion is not needed; rather, processes should be allowed to run concurrently.
Mutual Exclusion Conditions

1. No two processes may be simultaneously inside their critical sections.

2. No assumptions are made about relative speeds of processes or the number of CPUs.

3. No process running outside its critical section should block other processes.

4. No process should have to wait arbitrarily long to enter its critical section.
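 A minimal sketch of how a lock can enforce mutual exclusion, using a POSIX mutex (POSIX threads are covered later in this chapter); the lock protects the same kind of shared counter:

#include <pthread.h>    /* compile with -pthread */
#include <stdio.h>

static long counter = 0;                          /* shared modifiable data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);                /* enter critical section */
        counter++;                                /* only one thread at a time here */
        pthread_mutex_unlock(&lock);              /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           /* always 2000000 with the lock */
    return 0;
}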
Threads
 A thread is a basic unit of CPU utilization.

 A process can be divided into threads of execution.

 A thread (lightweight process) is a path of execution within a process.

 A thread is a way for a process to split itself into two or more simultaneously running tasks.
 All threads share the resources of the process; a thread does not have separate resources of its own.
 A thread executes its own piece of code, independently from other threads.
 A thread is the unit of execution within a process, consisting of a thread ID, a program counter, a register set, and a stack.
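 A minimal sketch of creating a thread with POSIX threads (pthreads), assuming compilation with -pthread:

#include <pthread.h>
#include <stdio.h>

/* code executed by the new thread, sharing the process's address space */
static void *say_hello(void *arg)
{
    printf("hello from thread, arg = %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;                                   /* thread ID */
    pthread_create(&tid, NULL, say_hello, "demo");   /* start the new thread */
    pthread_join(tid, NULL);                         /* wait for it to finish */
    printf("main thread done\n");
    return 0;
}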
Threads

 In traditional OS (heavyweight process), each process has an


address space and a single thread of control.
 It shares with other threads belonging to the same process its
code section, data section, and other OS resources, such as
open files and signals.
 If a process has multiple threads of control, it can perform more than one task at a time.
Thread stack

 In a multithreading system, each thread has its own stack, registers and program counter.
Process Vs Threads
 As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:
 Similarities
o Like processes, threads share the CPU, and only one thread is active (running) at a time.
o Like processes, threads within a process execute sequentially.
o Like processes, threads can create children.
o And like processes, if one thread is blocked, another thread can run.
 Differences
o Unlike processes, threads are not independent of one another.
o Unlike processes, all threads can access every address in the task.
o Unlike processes, threads are designed to assist one another.
o Note that processes might or might not assist one another, because processes may originate from different users.
Advantage of threads over multiple processes
 Sharing of resources
 Threads allow the sharing of many resources that cannot be shared between processes, e.g. the code section, data section and open files.
Disadvantage of threads over multiple processes
 Blocking – the major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
Disadvantage of threads over multiple processes…

 Security
 Since there is extensive sharing among threads, there is a potential problem of security.
 Think through the potential pitfalls that come with resource sharing, most importantly the sharing of the memory image/address space.
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness
 Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation.
2. Resource sharing
 Threads allow an application to have several different threads of activity within the same address space.
Benefits of multithreaded programming…

3. Economy
 Allocating memory and resources for process creation is costly; since threads share the resources of their process, it is more economical to create and context-switch threads.

4. Utilization of multiprocessor architectures
 The benefit of multithreading can be greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors. A single-threaded process can only run on one CPU, no matter how many processors are available.
 Multithreading on a multi-CPU machine increases concurrency.
Thread state
Just like a process, a thread can be in one of the following states:
 Running
 Blocked
 Terminated
 Ready

 Thread state transitions are the same as process state transitions.


Thread implementation

 Three kinds of thread implementation


 User level threads

 Kernel level threads

 Hybrid implementation
User level threads
 Implemented in user-level libraries, rather than via system calls.
 Thread switching does not need to call the operating system or cause an interrupt to the kernel.
 In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
User level threads-Advantage
 Require no modification to the operating system.
 Some OSes do not natively support a thread package.
 Simple representation
 Each thread is represented simply by a PC, registers, stack and a small control block, all stored in the user process's address space in a table called the thread table.
 Simple management
 Creating a thread, switching between threads and synchronizing between threads are all done in user space.
 Allows each process to have its own customized scheduling.
 Fast and efficient
 Thread switching is not much more expensive than a procedure call.
User level threads- Disadvantage
 Lack of coordination between threads and the OS kernel.
 User-level threads require non-blocking system calls.
 E.g. a blocking read() on the keyboard before any character is buffered would block the whole process; making read() non-blocking changes its semantics.
 A UNIX alternative: wrap calls in jacket/wrapper code that uses the select() system call to check whether a read() would block before issuing it.
 A page fault blocks the entire process, even if other threads are runnable.
Kernel-Level Threads
 In this method, the kernel knows about
and manages the threads.
 No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system.
 In addition, the kernel also maintains the traditional process table to keep track of processes.
 The OS kernel provides system calls to create and manage threads.
Kernel-Level Threads- Advantages
 Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
 Kernel-level threads are especially good for applications that frequently block.
 Kernel-level threads do not require any new, non-blocking system calls.
 Thread recycling is possible.

Kernel-Level Threads- Disadvantages


 Kernel-level threads are slow and inefficient.
 For instance, thread operations are hundreds of times slower than those of user-level threads.
 They require a full thread control block (TCB) for each thread to maintain information about threads.
 As a result there is significant overhead and increased kernel complexity.
Implementing threads
[Figure: user-level threads – a run-time system and thread table inside each process, the kernel sees only the process table; kernel-level threads – the kernel holds both the process table and the thread table]

User-level threads
+ No need for kernel support
- May be slower than kernel threads
- Harder to do non-blocking I/O

Kernel-level threads
+ More flexible scheduling
+ Non-blocking I/O
- Not portable
Hybrid implementation
 Combines the advantages of user-level threads with those of kernel-level threads.
