CHAPTER 2
PROCESS MANAGEMENT
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchronization
Deadlocks
OBJECTIVES
Identify the separate components of a process and illustrate how they are represented and
scheduled in an operating system.
Describe how processes are created and terminated in an operating system, including
developing programs using the appropriate system calls that perform these operations.
Identify the basic components of a thread, and contrast threads and processes
Describe the benefits and challenges of designing multithreaded applications
Describe various CPU scheduling algorithms
Describe the critical-section problem and illustrate a race condition
Illustrate solutions to the critical-section problem using "busy waiting" solutions and
"sleep and wakeup" solutions.
3
PROCESS CONCEPT
4
PROCESS CONCEPT (CONT.)
Execution of program started via GUI mouse clicks, command line entry of its name,
etc.
One program can be several processes
Consider multiple users executing the same program
5
PROGRAM VS PROCESS
6
PROGRAM VS PROCESS
7
PROCESS IN MEMORY
8
MEMORY LAYOUT OF A C PROGRAM
9
PROCESS STATE
10
PROCESS STATES EXAMPLES
12
PROCESS CONTROL BLOCK (PCB)
13
PROCESS SCHEDULING
Process scheduler selects among available processes for next execution on CPU
core
Goal -- Maximize CPU use, quickly switch processes onto CPU core
Maintains scheduling queues of processes
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Wait queues – set of processes waiting for an event (i.e., I/O)
Processes migrate among the various queues
14
READY AND WAIT QUEUES
15
REPRESENTATION OF PROCESS SCHEDULING
16
PROCESS SCHEDULING QUEUES
A new process is initially put in the ready queue. It waits there until it is selected for
execution. Once the process is allocated the CPU and is executing, one of several
events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new child process and wait for the child’s termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in
the ready queue.
17
CPU SWITCH FROM PROCESS TO PROCESS
18
PROCESS SCHEDULING QUEUES
19
CONTEXT SWITCH
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is pure overhead; the system does no useful work while switching
The more complex the OS and the PCB the longer the context switch
20
CONTEXT SWITCH
21
MULTITASKING IN MOBILE SYSTEMS
Some mobile systems (e.g., early versions of iOS) allow only one process to run; others are
suspended
Due to screen real estate and user-interface limits, iOS provides for a
Single foreground process – controlled via user interface
Multiple background processes– in memory, running, but not on the display, and with limits
Limits include single, short task, receiving notification of events, specific long-running tasks like audio
playback
Android runs foreground and background, with fewer limits
Background process uses a service to perform tasks
Service can keep running even if background process is suspended
Service has no user interface, small memory use
22
PROCESS CREATION
Parent process creates children processes, which, in turn create other processes,
forming a tree of processes.
Generally, process identified and managed via a process identifier (pid)
Resource sharing options
Parent and children share all resources.
Children share subset of parent’s resources.
Parent and child share no resources.
Execution options
Parent and children execute concurrently.
Parent waits until children terminate.
23
PROCESS CREATION (CONT.)
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork() system call creates new process
exec() system call used after a fork() to replace the process’ memory space with a new
program
Parent process calls wait(), waiting for the child to terminate
24
A TREE OF PROCESSES ON A TYPICAL UNIX SYSTEM
25
UNIX PROCESS CREATION
#include <stdio.h>
#include <unistd.h>
int main()
{
int ret;
printf("before fork\n");
ret = fork();
/* Creates a child process. fork() now returns in two processes:
   > 0 (the child's PID) in the parent, == 0 in the child. */
if (ret > 0)
{
printf("\nA Parent");
printf("\nMy PID is %d", getpid());
}
if (ret == 0)
{
printf("\nA CHILD");
printf("\nMy PID is %d", getpid());
printf("\nMy Parent PID is %d", getppid());
}
printf("\ncommon Work\n");
return 0;
}
26
UNIX PROCESS CREATION
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>
int main() {
pid_t pid;
pid = fork(); /* fork a child process */
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
return 1;
}
else if (pid == 0) { /* child process */
execlp("/bin/ls","ls",NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
}
return 0;
}
27
EXAMPLE
29
PROCESS CREATION (CONT.)
On UNIX systems, we can obtain a listing of processes by using the ps command. For
example, the command ps –el will list complete information for all processes
currently active in the system
pstree is a UNIX command that shows the running processes as a tree
gnome-system-monitor: a GUI tool for viewing and managing processes
30
UNISTD C LIBRARY
getpid()
getppid()
fork()
wait()
31
PROCESS TERMINATION
Process executes last statement and asks the operating system to delete
it (exit).
Output data from child to parent (via wait).
Process’ resources are deallocated by operating system.
Parent may terminate execution of children processes (abort).
Child has exceeded allocated resources.
Task assigned to child is no longer required.
Parent is exiting. Operating system does not allow child to continue if its parent
terminates.
32
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchonization
Deadlocks
33
SCHEDULERS
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue; controls the degree of multiprogramming (may be slow)
Medium-term scheduler – selects which processes should be swapped in and swapped out
34
LONG-TERM SCHEDULER VS SHORT-TERM SCHEDULER
Ready queue
Waiting queue
35
ADDITION OF MEDIUM TERM SCHEDULING
36
SCHEDULING ALGORITHMS
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.
The selection process is carried out by the short-term scheduler, or CPU
scheduler
37
HISTOGRAM OF CPU-BURST TIMES
38
SCHEDULER
Short-term scheduler selects from among the processes in ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
5. New process to ready state
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
39
PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING
Non-preemptive
The running process keeps the CPU until it voluntarily gives up the CPU
process exits
switches to blocked state
Preemptive:
The running process can be interrupted and must release the CPU (can be forced to give up CPU)
40
SCHEDULING CRITERIA
CPU utilization: in a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily loaded system).
Throughput: the number of processes that are completed per time unit
Turnaround time: the interval from the time of submission of a process to the time of
completion = waiting to get into memory + waiting in the ready queue + executing on the
CPU + doing I/O.
Waiting time: the sum of the periods spent waiting in the ready queue.
Response time: the time from the submission of a request until the first response is
produced
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting
time, and response time.
41
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED
43
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED (CONT)
44
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED(CONT)
45
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST (SJF)
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Estimate
46
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST(CONT)
47
SHORTEST-JOB-FIRST(NON PREEMPTIVE) WITH ARRIVAL TIME
48
SHORTEST-JOB-FIRST(NON PREEMPTIVE) WITH ARRIVAL TIME
Non-preemptive scheduling
The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
For long-term (job) scheduling in a batch system, we can use the process time limit that a user
specifies when submitting the job
SJF scheduling is used frequently in long-term scheduling.
One approach to this problem is to try to approximate SJF scheduling. We may not know the length
of the next CPU burst, but we may be able to predict its value.
50
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST(CONT)
51
EXAMPLE OF SHORTEST-REMAINING-TIME-FIRST (SJF PREEMPTIVE)
Now we add the concepts of varying arrival times and preemption to the analysis
53
SHORTEST-REMAINING-TIME-FIRST (SJF PREEMPTIVE)
Process  Arrival time  Burst time
P1       0             11
P2       3             7
P3       5             4
P4       5             2
P5       2             8
Gantt chart (start): P1 runs from 0 to 2, when P5 arrives with a shorter remaining
time (8 vs. 9) and preempts it. (The slide animates the remaining times, the ready
queue, and the running process.)
SCHEDULING ALGORITHMS: ROUND-ROBIN
Preemptive scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
Each process gets a small unit of time, called a time quantum (q) or time slice. A time
quantum is generally from 10 to 100 milliseconds in length.
After this time has elapsed, the process is preempted and added to the end of the ready queue.
The ready queue is treated as a circular queue.
Timer interrupts every quantum to schedule the next process
Performance
q large ⇒ behaves like FIFO
q small ⇒ q must still be large with respect to context-switch time, otherwise overhead is too high
55
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a
row (unless it is the only runnable process)
The performance of the RR algorithm depends heavily on the size of the time quantum.
At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy;
if the time quantum is extremely small, the RR approach can result in a large number of context switches
56
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
57
SHORTEST-REMAINING-TIME-FIRST (SJF PREEMPTIVE)
Process Arrival time Burst time
P1 0 11
P2 3 7
P3 5 4
P4 5 2
P5 2 8
58
ROUND ROBIN WITH ARRIVAL TIME
Q = 3
Process  Arrival time  Burst time
P1       0             11
P2       3             7
P3       5             4
P4       5             2
P5       2             8
Gantt chart:
| P1 | P5 | P2 | P1 | P3 | P4 | P5 | P2 | P1 | P3 | P5 | P2 | P1 |
0    3    6    9    12   15   17   20   23   26   27   29   30   32
Waiting times:
Tp1 = 0 + (9−3) + (23−12) + (30−26) = 21
Tp2 = (6−3) + (20−9) + (29−23) = 20
Tp3 = (12−5) + (26−15) = 18
Tp4 = 15 − 5 = 10
Tp5 = (3−2) + (17−6) + (27−20) = 19
The average waiting time: 88/5 = 17.6 milliseconds
The average turnaround time: 17.6 + 32/5 = 24 milliseconds. 59
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
60
SCHEDULING ALGORITHMS: PRIORITY
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095.
Some systems use low numbers to represent low priority; others use low numbers for high priority
In this text, we assume that low numbers represent high priority
Preemptive
Nonpreemptive
61
EXAMPLE OF PRIORITY SCHEDULING
62
SCHEDULING ALGORITHMS: PRIORITY(CONT)
64
PRIORITY WITH ARRIVAL TIME PREEMPTIVE
65
PRIORITY WITH ARRIVAL TIME PREEMPTIVE
Process  Arrival time  Burst time  Priority
P1       0             11          2
P2       3             7           3
P3       5             4           1
P4       5             2           3
P5       2             8           2
(lower number = higher priority)
Gantt chart:
| P1 | P3 | P5 | P1 | P2 | P4 |
0    5    9    17   23   30   32
69
SCHEDULING ALGORITHMS: MULTILEVEL QUEUE
70
SCHEDULING ALGORITHMS (CONT)
71
SCHEDULING ALGORITHMS: MULTILEVEL FEEDBACK QUEUE
A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service
72
EXAMPLE OF MULTILEVEL FEEDBACK QUEUE
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved to queue Q1
At Q1 job is again served FCFS and receives 16 additional
milliseconds
If it still does not complete, it is preempted and moved to queue
Q2
73
SCHEDULING ALGORITHMS: LOTTERY SCHEDULING WORKS
74
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchronization
Deadlocks
75
COOPERATING PROCESSES - INTERPROCESS COMMUNICATION
76
COOPERATING PROCESSES - COMMUNICATIONS MODELS
message passing
77
PRODUCER-CONSUMER PROBLEM
Two variations:
unbounded buffer: places no practical limit on the size of the buffer
Producer never waits
Consumer waits if there is nothing to consume
bounded buffer: assumes a fixed buffer size
Producer waits if the buffer is full
Consumer waits if the buffer is empty
78
IPC – SHARED MEMORY
79
BOUNDED-BUFFER – SHARED-MEMORY SOLUTION
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
80
WHAT ABOUT FILLING ALL THE BUFFERS?
81
PRODUCER PROCESS – SHARED MEMORY
83
IMPLEMENTATION QUESTIONS
84
MESSAGE PASSING SYSTEM
Direct Communication
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
85
INDIRECT COMMUNICATION
Messages are sent to and received from mailboxes (also referred to as ports).
Each mailbox has a unique id.
Processes can communicate only if they share a mailbox.
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes.
Each pair of processes may share several communication links.
Link may be unidirectional or bi-directional.
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
86
INDIRECT COMMUNICATION (CONTINUED)
Mailbox sharing
P1, P2, and P3 share mailbox A.
P1, sends; P2 and P3 receive.
Who gets the message?
Solutions
Allow a link to be associated with at most two processes.
Allow only one process at a time to execute a receive operation.
Allow the system to select arbitrarily the receiver. Sender is notified who the receiver
was.
87
PRODUCER-CONSUMER: MESSAGE PASSING
Producer
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
Consumer
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next_consumed */
}
88
BUFFERING
89
SOCKET COMMUNICATION
90
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchronization
Deadlocks
91
THREADS
A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
program counter
register set
stack space
A thread shares with its peer threads its:
code section
data section
operating-system resources
collectively known as a task.
A traditional or heavyweight process is equal to a task with one thread
93
THREADS (CONT.)
Benefits:
Responsiveness
Resource Sharing
Economy
Scalability
94
SINGLE AND MULTITHREADED PROCESSES
95
MULTITHREADED SERVER ARCHITECTURE
96
PROCESS VS THREAD
97
THREADS (CONT.)
98
MULTITHREADING MODELS
Many-to-one model
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one may be
in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
99
MULTITHREADING MODELS
One-to-one model.
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples
Windows
Linux
100
MULTITHREADING MODELS
Many-to-many model.
101
THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing
threads.
Three main thread libraries are in use today:
POSIX Pthreads
Windows
Java.
102
THREAD LIBRARIES
103
PTHREADS
May be provided either as user-level or kernel-level
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
Specification, not implementation
API specifies behavior of the thread library; implementation is up to the developers of
the library
Common in UNIX operating systems (Linux & Mac OS X)
104
PTHREADS EXAMPLE
105
#include <pthread.h>
#include <stdio.h>
int sum=0; /* this data is shared by the thread(s) */
void* runner(void *param); /* threads call this function */
int amount1 = 5, amount2 = 8;
int main(int argc, char *argv[]) {
pthread_t tid1; /* the thread identifier */
pthread_t tid2; /* the thread identifier */
pthread_attr_t attr; /* set of thread attributes */
pthread_attr_init(&attr); /* get the default attributes */
pthread_create(&tid1,&attr,runner,&amount1); /* create the first thread */
pthread_create(&tid2,&attr,runner,&amount2); /* create the second thread */
pthread_join(tid1,NULL); /* wait for the thread to exit */
pthread_join(tid2,NULL); /* wait for the thread to exit */
printf("sum = %d\n",sum);
}
void* runner(void* arg) { /* Thread will begin control in this function */
int *param_ptr=(int *) arg;
int param =*param_ptr;
for (int i = 1; i <= param; i++)
sum += i;
pthread_exit(0);
}
WINDOWS THREADS
The technique for creating threads using the Windows thread library is similar to the Pthreads
technique in several ways.
Threads are created in the Windows API using the CreateThread() function, and, just as in Pthreads, a
set of attributes for the thread is passed to this function
Once the summation thread is created, the parent must wait for it to complete before outputting the
value of sum, as the value is set by the summation thread
The Windows API uses the WaitForSingleObject() function to wait for the summation thread
107
#include <Windows.h>
#include <process.h>
#include <stdio.h>

int sum1 = 0, sum2 = 0;

void mythread1(void* data)
{
    printf("mythread1 ID %d \n", GetCurrentThreadId());
    for (int i = 0; i < 10; i++)
        if (i % 2 == 0) {
            sum1 = sum1 + i;
        }
    printf("sum= %d \n", sum1);
}

void mythread2(void* data)
{
    for (int i = 0; i < 10; i++)
        if (i % 2 != 0) {
            sum2 = sum2 + i;
        }
    printf("sum= %d \n", sum2);
    printf("mythread2 ID %d \n", GetCurrentThreadId());
}

int main(int argc, char* argv[])
{
    HANDLE myhandle1, myhandle2;
    myhandle1 = (HANDLE)_beginthread(&mythread1, 0, 0);
    myhandle2 = (HANDLE)_beginthread(&mythread2, 0, 0);
    WaitForSingleObject(myhandle1, INFINITE); // join
    WaitForSingleObject(myhandle2, INFINITE); // join
    return 0;
}
JAVA THREADS
Threads are the fundamental model of program execution in a Java program, and the Java
language and its API provide a rich set of features for the creation and management of threads
All Java programs comprise at least a single thread of control—even a simple Java program
consisting of only a main() method runs as a single thread in the JVM
Java threads may be created by:
Extending Thread class
Implementing the Runnable interface
109
public class Main {
public static void main(String[] args) {
System.out.println("Start");
Thread t1 = new Thread (new Runnable(){
@Override
public void run(){
for (int i=0;i<10;i++){
System.out.println("Thread 1>"+i);
}
}
});
Thread t2 = new Thread (new Runnable(){
@Override
public void run(){
for (int i=0;i<10;i++){
System.out.println("Thread 2>"+i);
}
}
});
t1.start();
t2.start();
System.out.println("Finish");
}}
IMPLICIT THREADING
Implicit Threading: strategy for designing multithreaded programs that can take
advantage of multicore processors
Thread Pools
OpenMP (Open Multiple Processing)
GDC (Grand Central Dispatch)
111
THREAD POOLS
The general idea behind a thread pool is to create a number of threads at process startup and place
them into a pool, where they sit and wait for work
112
THREAD POOLS
113
THREAD POOLS
114
OPENMP
115
#include <omp.h>
#include <stdio.h>
int main(int argc, char *argv[]){
/* sequential code */
#pragma omp parallel
{
printf("I am a parallel region.");
}
/* sequential code */
return 0;
}
#pragma omp parallel for
for (i = 0; i < N; i++) {
c[i] = a[i] + b[i];
}
MULTIPLE THREADS WITHIN A TASK
117
THREADS SUPPORT IN SOLARIS 2
Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, symmetric
multiprocessing, and real-time scheduling.
LWP – intermediate level between user-level threads and kernel-level threads.
Resource needs of thread types:
Kernel thread: small data structure and a stack; thread switching does not require changing memory access
information – relatively fast.
LWP: PCB with register data, accounting and memory information; switching between LWPs is relatively slow.
User-level thread: only needs a stack and program counter; no kernel involvement means fast switching. Kernel
only sees the LWPs that support user-level threads.
118
SOLARIS 2 THREADS
119
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchronization
Deadlocks
120
PROCESS SYNCHRONIZATION
121
BOUNDED-BUFFER
122
BOUNDED-BUFFER (CONT.)
Consumer process
repeat
while counter = 0 do no-op;
nextc := buffer[out];
out := (out + 1) mod n;
counter := counter – 1;
…
consume the item in nextc
…
until false;
The statements:
counter := counter + 1;
counter := counter – 1;
must be executed atomically.
123
RACE CONDITION
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
124
THE CRITICAL-SECTION PROBLEM
126
SOLUTION TO CRITICAL-SECTION PROBLEM
127
“BUSY – WAITING” SOLUTION
Synchronization software
Lock variable: while (no permission) do nothing; then enter the critical section
Peterson's solution
Synchronization hardware
Acquire permission with a non-interruptible (atomic) instruction
Test-and-Set instruction
128
INITIAL ATTEMPTS TO SOLVE PROBLEM
while (true) {
while (lock == 1) do no-op; /* wait until lock = 0 */
lock = 1; // set lock = 1 to keep other processes out
critical section
lock = 0; // finished with CS: release the lock (unlock)
remainder section
}
130
ALGORITHM 2
Process Pi
do {
while (turn == j) do no-op; /* wait while it is Pj's turn */
critical section
turn = j;
remainder section
} while (1);
Satisfies mutual exclusion, but not progress, because of strict alternation
131
ALGORITHM FOR PROCESS PI (CONT.)
Satisfies mutual exclusion, but not progress, because of strict
alternation 132
PETERSON’S SOLUTION
The variable turn indicates whose turn it is to enter the critical section
The interested array is used to indicate whether a process is ready to enter the critical
section.
interested[i] = true implies that process Pi is ready!
133
PETERSON SOLUTIONS
134
PETERSON SOLUTIONS FOR PROCESS PI (CONT.)
135
CORRECTNESS OF PETERSON’S SOLUTION
136
SYNCHRONIZATION HARDWARE
137
MUTUAL EXCLUSION WITH TEST-AND-SET
138
“SLEEP & WAKE UP” SOLUTIONS
while(true){
if(busy){
++blocked;
If (not have permission) Sleep(); sleep();}
else busy =1;//set busy =1 to prevent
Critical section; other
critical section
Wakeup (somebody); busy =0;// set busy=0
if(blocked)//check any waited process?
{ wakeup(process);// wakup other
blocked --;}
remainder section
}
139
SEMAPHORE
signal(S) //wakeup
{ S ++; }
140
EXAMPLE: CRITICAL SECTION OF N PROCESSES
Shared variables
var mutex : semaphore
initially mutex = 1
Process Pi
repeat
wait(mutex);
critical section
signal(mutex);
remainder section
until false;
141
SEMAPHORE IMPLEMENTATION
142
IMPLEMENTATION (CONT.)
Semaphore operations now defined as
typedef struct {
int value;
struct process *ptr;
} Semaphore;
Producer:
do {
{create new product}
if (counter == size) block();
{place new product into Buffer}
wait(Mutex);
counter++;
signal(Mutex);
if (counter == 1)
wakeup(Consumer);
} while (1);
Consumer:
do {
if (counter == 0) block();
{take 1 product from Buffer}
wait(Mutex);
counter--;
signal(Mutex);
if (counter == size - 1)
wakeup(Producer);
{do something with retrieved product}
} while (1);
144
DEADLOCK AND STARVATION
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by
only one of the waiting processes.
Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
signal(S); signal(Q);
signal(Q); signal(S);
Starvation – indefinite blocking. A process may never be removed from the semaphore queue
in which it is suspended.
145
TWO TYPES OF SEMAPHORES
146
IMPLEMENTING S AS A BINARY SEMAPHORE
Data structures:
var S1: binary-semaphore;
S2: binary-semaphore;
S3: binary-semaphore;
C: integer;
Initialization:
S1 = S3 = 1
S2 = 0
C = initial value of semaphore S
147
IMPLEMENTING S (CONT.)
wait operation
wait(S3);
wait(S1);
C := C – 1;
if C < 0
then begin
signal(S1);
wait(S2);
end
else signal(S1);
signal(S3);
signal operation
wait(S1);
C := C + 1;
if C ≤ 0 then signal(S2);
signal(S1);
148
OUTLINE
Overview
Scheduling Algorithms
Interprocess Communication
Thread
Process Synchronization
Deadlocks
149
DEADLOCK CHARACTERIZATION
Deadlock can arise if four conditions hold simultaneously.