By:Dr. P.S.Tanwar
O.S.
Definition
Types of O.S.
Definition
An operating system is system software that manages the computer hardware.
Layers: User → Application S/W → OS → H/W
Components of OS
End User → Application Program → O.S. → H/W
Goals of O.S.
Make the computer system convenient for the user and use the hardware efficiently.
Definition
O.S. is system software which is used as an interface between the user and the hardware of the computer, so that it is convenient for the user and the hardware is used efficiently.
O.S.
The O.S. kernel always resides in main memory.
Bootstrap Program
For a computer to start running, it needs an initial program to run; that initial program is known as the bootstrap program.
Functions of Operating
System
1. Process Management
2. Memory Management
5. File Management
Functions of Operating
System
7. Job switching
Early Systems (Without batch processing)
Early Systems
Drawbacks
Low throughput
Batch Processing OS
Hardware is expensive, so CPU idle time must be minimized.
Batch Processing OS
Jobs with similar requirements were
batched together and run through the
computer as a group.
Batch Processing OS
• Job Cards
• Loader
• Job Sequencing
Batch Processing OS
Main Memory
Resident Monitor: Loader, Job sequencing, Control card interpreter
User program area
Batch Processing OS
Drawbacks
• Inefficient use of CPU
• Speed mismatch between fast CPU and slow I/O devices
Efficiency Measures of O.S.
Throughput: number of processes completed per unit time
Waiting time: total waiting time in the Ready Queue before being executed by the CPU
Response time: time interval between job submission and the first response given by the CPU
Types of O.S.
Batch processing O.S.
Multiprogramming O.S.
Distributed O.S.
Network O.S.
SPOOLing
Simultaneous Peripheral Operation On-Line
Multiprogramming
More than one job resides in the memory at the same time.
Memory layout: OS | Job1 (wait or blocked) | Job2 (running) | Job3 (ready)
Multiprogramming
States of Process
New job → Ready (queue) → Running → exit
Running → Block (Wait) → Ready
Multiprogramming OS
CPU Scheduling
Job Scheduling
Memory Mgmt.
Multiprogramming
Advantages
Increased Throughput
Time Sharing OS
Multitasking OS
The CPU switches among processes in small time slices (t):
P1 P2 P3 P1 P2 P3 P1 P3 P3 P3
(t) (t) (t) (t) (t) (t) (t) (t) (t) (t)
Time Sharing
States of Process
New → Ready → Running → Terminated
Running → Block (Waiting) → Ready
Time Sharing OS
Advantages
Increased Throughput
Parallel Systems
Tightly coupled Systems
Two or more processors are in close communication, sharing the computer bus, the clock, and sometimes the memory and peripheral devices.
Parallel Systems
Types of Parallel System
Symmetric multiprocessing OS
Asymmetric multiprocessing OS
Symmetric Multiprocessing
Systems
Each processor runs an identical copy
of the OS and these copies
communicate with one another as
needed.
Asymmetric Multiprocessing Systems
Each processor is assigned a specific task.
Example: SunOS Version 4
Distributed OS
Loosely coupled Systems
The processors are not in close communication; they do not share memory or a clock.
Distributed OS
Loosely coupled Systems
Processors may vary in size and function.
Distributed OS
Node1, Node2, Node3, and Node4 connected through a Network.
Distributed OS
Functions
Resource Sharing
Computation Speedup
Reliability
Communication
Real time OS
Rigid time requirements on the operation of a processor or the flow of data.
Examples: weapon systems, flight simulators.
Real time OS
Types
Soft Real time O.S.:
• Less restrictive type of OS
Computer System
Storage device Hierarchy
Dual Mode Operation
To protect the OS from malfunction, hardware mode protection is provided by many OSs: User Mode and Monitor Mode.
Dual Mode Operation
Mode bit is added to the hardware of the
computer to indicate the mode
Monitor mode (0)
• Executing on behalf of OS
User mode (1)
• Executing on behalf of user
Dual Mode Operation
At boot time the hardware starts in monitor mode; then the OS is loaded.
System Calls
Process
Program in execution is called Process
Kernel Mode
• Direct access to resources like memory or
hardware
System Call
A process (program in execution) requests OS services through system calls; the OS kernel carries them out on the hardware.
In user mode the process calls the system call and later returns from it; in kernel mode the kernel executes the system call.
Types of System Calls
A. Process control (load, execute)
B. File manipulation (open, close)
C. Device manipulation
D. Information Maintenance
E. Communication
Application Program
System program
O.S.
H/W
Status Information
File Modification
Communication
Application Programs
• Compilers
• Assemblers
• Interpreters
• Relocatable loaders
• Linkage editors
• Overlay editors
• Text formatters
• Plotting packages
• Database systems
• Spreadsheets
• Games
Process Scheduling
Operations on Processes
Interprocess Communication
Process state diagram:
new → (admitted) → ready → (scheduler dispatch) → running → (exit) → terminated
running → (interrupt) → ready
running → (I/O or event wait) → waiting → (I/O or event completion) → ready
Processes (e.g., p1, p2, p3) wait in the Ready Queue for the CPU; blocked processes wait in device queues such as I/O Queue 1.
Process Control Block (PCB) contains:
Program counter
CPU registers
Memory-management information
Accounting information
CPU-bound process – spends more time doing computations; few, very long CPU bursts
Resource sharing
Parent and children share all resources
Execution
Parent and children execute concurrently
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the process’ memory space with a
new program
If the parent is exiting, some operating systems do not allow a child to continue: all children are terminated (cascading termination).
Computation speedup
Modularity
Convenience
Message passing
Communications Models
(a) Message passing. (b) Shared memory.
Cooperating Processes
Independent process cannot affect or be affected by the
execution of another process
Computation speed-up
Modularity
Convenience
Interprocess Communication –
Shared Memory
An area of memory shared among the
processes that wish to communicate
receive(message)
Overview
Threads are lightweight processes.
Each thread has its own stack.
Resource Sharing
Economy
Scalability
Balance
Data splitting
Data dependency
Win32 threads
Java threads
Kernel Threads
Supported by the Kernel
Examples
Windows XP/2000
Solaris
Linux
Tru64 UNIX
Mac OS X
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Examples
Windows NT/XP/2000
Linux
Examples
IRIX
HP-UX
Tru64 UNIX
Signal handling
Thread pools
Thread-specific data
Scheduler activations
• A signal is: 1. generated by an event, 2. delivered to a process, 3. handled
Options:
• Deliver the signal to the thread to which the signal applies
Advantages:
• Usually slightly faster to service a request with an existing thread
than create a new thread
Linux Threads
Register set
The register set, stacks, and private storage area are known as the context of the thread.
The primary data structures of a thread include:
ETHREAD (executive thread block)
Throughput
number of processes per unit time
Waiting time
Total Waiting time in the Ready Queue to be executed by CPU
Response time
The time interval between Job submission and first response time given by CPU
CPU scheduling decisions take place when a process:
1. Switches from running to waiting
2. Switches from running to ready
3. Switches from waiting to ready
4. Terminates
switching context
Max throughput
Priority scheduling
Multilevel queue
Gantt Chart
P1 P2 P3
0 24 27 30
a) Throughput
Throughput = 3 processes / 30 unit time = 0.10
b) Turnaround Time
Turnaround time for P1 = 24
Turnaround time for P2 = 27
Turnaround time for P3 = 30
Average Turnaround time = (24 + 27 + 30)/3 = 81/3 = 27
c) Waiting Time
Waiting time for P1 = 0
Waiting time for P2 = 24
Waiting time for P3 = 27
Average Waiting time = (0 + 24 + 27)/3 = 51/3 = 17
d) Response Time
Response time for P1 = 0
Response time for P2 = 24
Response time for P3 = 27
Average Response time = (0 + 24 + 27)/3 = 51/3 = 17
Ex2:
Suppose that the processes arrive in the
order: P2 , P3 , P1.
Process Burst Time
P1 24 Arrival time=0
P2 3
P3 3
P2 P3 P1
0 3 6 30
First-Come, First-Served (FCFS)
Scheduling
Average Turnaround time = (30 + 3 + 6)/3 = 39/3 = 13
First-Come, First-Served (FCFS)
Scheduling
Process Burst Time: P1 = 24, P2 = 3, P3 = 3
Gantt Chart
P2 P3 P1
0 3 6 30
Waiting Time
Waiting time for P1 = 6
Waiting time for P2 = 0
Waiting time for P3 = 3
Average Waiting time = (6 + 0 + 3)/3 = 3 unit time
Response Time
Response time for P1 = 6
Response time for P2 = 0
Response time for P3 = 3
Average Response time = (6 + 0 + 3)/3 = 3 unit time
Shortest-Job-First (SJF)
Scheduling
Associate with each process the length of its
next CPU burst. Use these lengths to schedule
the process with the shortest time.
P4 P1 P3 P2
0 3 9 16 24
Turnaround Time
=Job Completion Time - Job Submission Time
P1 P2 P4 P1 P3
0 1 2 3 5 10 17 26
Remaining burst times: at time 5: P1 = 7, P3 = 9, P4 = 5. At time 10: P1 = 7, P3 = 9. At time 17: P3 = 9.
SJF Pre-emptive
P1 P2 P4 P1 P3
0 1 2 3 5 10 17 26
Pre-emptive SJF
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt Chart
P1 P2 P4 P1 P3
0 1 2 3 5 10 17 26
a) Throughput
Throughput = 4/26 = 0.15
b) Turnaround Time
Turnaround time for P1 = 17 - 0 = 17
Turnaround time for P2 = 5 - 1 = 4
Turnaround time for P3 = 26 - 2 = 24
Turnaround time for P4 = 10 - 3 = 7
c) Waiting Time
Waiting time for P1 = 17 - 8 = 9
Waiting time for P2 = 4 - 4 = 0
Waiting time for P3 = 24 - 9 = 15
Waiting time for P4 = 7 - 5 = 2
d) Response Time
Response time for P1 = 0
Response time for P2 = 0
Response time for P3 = 17 - 2 = 15
Response time for P4 = 5 - 3 = 2
The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
Preemptive
Non-preemptive
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt Chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Priority Scheduling
a) Throughput
Throughput = 5/19 = 0.26
b) Turnaround Time
Turnaround time for P1 = 16, P2 = 1, P3 = 18, P4 = 19, P5 = 6
Average Turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 60/5 = 12
c) Waiting Time
Waiting time for P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
Average Waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2
d) Response Time
Response time for P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
Average Response time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2
Q/A
Q: Waiting time for P1 (according to Priority Scheduling)?
A. 0
B. 1
C. 6
D. 3
Answer: C. 6 (in the Gantt chart P2 P5 P1 P3 P4 with ticks 0 1 6 16 18 19, P1 starts at time 6)
Turnaround Time
Turnaround time for P1 = (16 - 0) = 16
Turnaround time for P2 = (1 - 0) = 1
Turnaround time for P3 = (18 - 0) = 18
Turnaround time for P4 = (19 - 0) = 19
Turnaround time for P5 = (6 - 0) = 6
Average Turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 60/5 = 12 unit time
Priority Scheduling
Gantt Chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Waiting Time
Waiting time for P1 = (6 - 0) = 6
Waiting time for P2 = (0 - 0) = 0
Waiting time for P3 = (16 - 0) = 16
Waiting time for P4 = (18 - 0) = 18
Waiting time for P5 = (1 - 0) = 1
Average Waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 unit time
Response Time
Response time for P1 = (6 - 0) = 6
Response time for P2 = (0 - 0) = 0
Response time for P3 = (16 - 0) = 16
Response time for P4 = (18 - 0) = 18
Response time for P5 = (1 - 0) = 1
Average Response time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 unit time
Performance (Round Robin, time quantum = 4)
Process Burst Time: P1 = 24, P2 = 3, P3 = 3
Gantt Chart
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Waiting time for P1 = (10 - 4) = 6
Waiting time for P2 = 4
Waiting time for P3 = 7
Average Waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 unit time
a) Throughput
Throughput = 3/30 = 0.1
b) Turnaround Time
Turnaround time for P1 = 30, P2 = 7, P3 = 10
c) Waiting Time
Waiting time for P1 = (10 - 4) = 6, P2 = 4, P3 = 7
d) Response Time
Response time for P1 = 0, P2 = 4, P3 = 7
P1 P2 P3 P1 P2 P3 P1 P2 P3 P1 P1 P1 P1 P1 P1 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1
16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
foreground (interactive)
background (batch)
foreground – RR
background – FCFS
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
hard affinity
Solaris scheduling
Windows XP scheduling
Linux scheduling
Implementation
Evaluation of CPU Schedulers
by Simulation
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: Process Synchronization
Background
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Mutex Locks
Semaphores
Classic Problems of Synchronization
Monitors
Synchronization Examples
Alternative Approaches
Objectives
To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data
Background
Producer
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Critical Section Problem
Consider system of n processes {p0, p1, … pn-1}
Each process must ask permission to enter critical section in entry section, may
follow critical section with exit section, then remainder section
Critical Section
General structure of process pi is:
do {
   entry section
      critical section
   exit section
      remainder section
} while (true);
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that
wish to enter their critical section, then the selection of the processes that will enter the critical
section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
Peterson's Solution
Good algorithmic description of solving the problem
Two process solution
Assume that the load and store instructions are atomic; that is, cannot be
interrupted
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);
Provable that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Synchronization Hardware
Many systems provide hardware support for critical section code
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
test_and_set Instruction
Definition:
boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}
(Executed atomically: it returns the old value of target and sets target to true.)
Solution using test_and_set()
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
compare_and_swap Instruction
Definition:
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}
(Executed atomically: it returns the original value, swapping only if value equals expected.)
Solution using compare_and_swap
Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key
Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);
Mutex Locks
Previous solutions are complicated and generally inaccessible to application
programmers
OS designers build software tools to solve critical section problem
Simplest is mutex lock
Protect critical regions with it by first calling acquire() on the lock, then release()
Boolean variable indicating if lock is available or not
acquire() and release()
acquire() {
while (!available)
; /* busy wait */
available = false;
}
release() {
available = true;
}
do {
acquire lock
critical section
release lock
remainder section
} while (true);
Semaphore
Synchronization tool that does not require busy waiting
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Originally called P() and V()
Can only be accessed via two indivisible (atomic) operations
Original definitions of wait() and signal() proposed by Dijkstra
Busy waiting version
Semaphore Usage
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1
Also known as a mutex lock
Can implement a counting semaphore S as a binary semaphore
Can solve various synchronization problems
Consider P1 and P2 that require S1 to happen before S2
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal ()
on the same semaphore at the same time
Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and therefore
this is not a good solution
Semaphore Implementation
with no Busy waiting
Two operations:
block – place the process invoking the operation on the appropriate
waiting queue
wakeup – remove one of processes in the waiting queue and place it in
the ready queue
Semaphore Implementation with
no Busy waiting (Cont.)
typedef struct{
int value;
struct process *list;
} semaphore;
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
. .
signal(S); signal(Q);
signal(Q); signal(S);
Classical Problems of Synchronization
Classical problems used to test newly-proposed synchronization schemes
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Operating System Concepts – 9th Edition 5.27 Silberschatz, Galvin and Gagne ©2013
Bounded-Buffer Problem
n buffers, each can hold one item
Bounded Buffer Problem (Cont.)
The structure of the producer process
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
The structure of the consumer process
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write
Problem – allow multiple readers to read at the same time
Only one single writer can access the shared data at the same time
Several variations of how readers and writers are treated – all involve priorities
Shared Data
Data set
Semaphore rw_mutex initialized to 1
Semaphore mutex initialized to 1
Integer read_count initialized to 0
Readers-Writers Problem (Cont.)
The structure of a writer process
do {
   wait(rw_mutex);
   ...
   /* writing is performed */
   ...
   signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)
The structure of a reader process
do {
   wait(mutex);
   read_count++;
   if (read_count == 1)
      wait(rw_mutex);
   signal(mutex);
   ...
   /* reading is performed */
   ...
   wait(mutex);
   read_count--;
   if (read_count == 0)
      signal(rw_mutex);
   signal(mutex);
} while (true);
Readers-Writers Problem Variations
First variation – no reader kept waiting unless writer has permission to use shared
object
Dining-Philosophers Problem
Dining-Philosophers Problem Algorithm
The structure of Philosopher i:
do {
   wait(chopstick[i]);
   wait(chopstick[(i + 1) % 5]);
   // eat
   signal(chopstick[i]);
   signal(chopstick[(i + 1) % 5]);
   // think
} while (TRUE);
Problems with Semaphores
Incorrect use of semaphore operations:
Monitors
A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
Abstract data type, internal variables only accessible by code within the procedure
Only one process may be active within the monitor at a time
But not powerful enough to model some synchronization schemes
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
Schematic view of a Monitor
Condition Variables
condition x, y;
Monitor with Condition Variables
Condition Variables Choices
If process P invokes x.signal (), with Q in x.wait () state, what should happen
next?
If Q is resumed, then P must wait
Options include
Signal and wait – P waits until Q leaves monitor or waits for another
condition
Signal and continue – Q waits until P leaves the monitor or waits for
another condition
Both have pros and cons – language implementer can decide
Monitors implemented in Concurrent Pascal compromise
P executing signal immediately leaves the monitor, Q is resumed
Implemented in other languages including Mesa, C#, Java
Solution to Dining Philosophers
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self [5];
Solution to Dining Philosophers (Cont.)
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Solution to Dining Philosophers (Cont.)
Each philosopher i invokes the operations pickup() and putdown() in the following
sequence:
DiningPhilosophers.pickup (i);
EAT
DiningPhilosophers.putdown (i);
Monitor Implementation Using Semaphores
Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;
wait(mutex);
…
body of F;
…
if (next_count > 0)
signal(next);
else
signal(mutex);
Resuming Processes within a Monitor
If several processes queued on condition x, and x.signal() executed, which should be
resumed?
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}
void release() {
busy = FALSE;
x.signal();
}
initialization code() {
busy = FALSE;
}
}
Chapter 7: Deadlocks!
■ Process: Pi
■ Resource type Rj with 4 instances
■ Pi requests an instance of Rj: edge Pi → Rj
■ Pi is holding an instance of Rj: edge Rj → Pi
■ Deadlock Prevention
■ Deadlock Avoidance
■ Deadlock Detection
■ No Preemption –
● If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
● Preempted resources are added to the list of resources for which the process is waiting.
● Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
■ Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Claim Edge
■ Banker's Algorithm
■ Resource-Request Algorithm
■ Safety Algorithm
■ Multiple instances.
■ Each process must a priori claim maximum use.
■ When a process requests a resource it may have to wait.
■ When a process gets all its resources it must return them in a finite amount of time.
End of Chapter 7!
Deadlock
Banker’s Algorithm
Need[i,j] = Max[i,j]-Allocation[i,j]
Process  Allocation  Max    Available  Need
         A B C       A B C  A B C      A B C
P0       0 1 0       7 5 3  3 3 2      7 4 3
P1       2 0 0       3 2 2             1 2 2
P2       3 0 2       9 0 2             6 0 0
P3       2 1 1       2 2 2             0 1 1
P4       0 0 2       4 3 3             4 3 1
Safety check: with Available = (3 3 2), P1 can finish (Need 1 2 2 fits); it releases its allocation, so Available = (3+2, 3+0, 2+0) = (5 3 2). P3 can then finish: Available = (5+2, 3+1, 2+1) = (7 4 3). P4 can finish: Available = (7+0, 4+0, 3+2) = (7 4 5). Now both P0 (Need 7 4 3) and P2 (Need 6 0 0) fit, so the system is in a safe state with safe sequence <P1, P3, P4, P0, P2>.