Process Synchronization - 1
Communications Models
(a) Shared memory. (b) Message passing.
Producer-Consumer Problem
▪ Paradigm for cooperating processes:
• producer process produces information that is consumed by a consumer process
▪ Two variations:
• unbounded-buffer places no practical limit on the size of the buffer:
Producer never waits
Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed buffer size:
Producer must wait if all buffers are full
Consumer waits if there is no buffer to consume
IPC – Shared Memory
Bounded-Buffer – Shared-Memory Solution
▪ Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Producer Process – Shared Memory
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Consumer Process – Shared Memory
item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next consumed */
}
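As a hypothetical illustration of the index arithmetic in the two loops above, the sketch below replays produce and consume single-threaded in Java (the class and method names are invented for illustration, and item is reduced to an int):

```java
// Single-threaded sketch of the shared-memory circular buffer.
// produce() applies the producer's full-check and index update;
// consume() applies the consumer's empty-check and index update.
public class RingBufferDemo {
    static final int BUFFER_SIZE = 10;
    static int[] buffer = new int[BUFFER_SIZE];
    static int in = 0;   // next free slot
    static int out = 0;  // next full slot

    static boolean produce(int item) {
        if ((in + 1) % BUFFER_SIZE == out)
            return false;            // buffer full: producer would wait here
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        return true;
    }

    static Integer consume() {
        if (in == out)
            return null;             // buffer empty: consumer would wait here
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 9; i++)
            produce(i);              // fills 9 slots
        System.out.println(produce(99)); // false: one slot is kept empty
        System.out.println(consume());   // 0: items come out in FIFO order
    }
}
```

Note the design consequence visible here: with only in and out, "full" is signaled by leaving one slot unused, so this solution can hold at most BUFFER_SIZE - 1 items. That limitation motivates the counter variable introduced next.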
What about Filling all the Buffers?
▪ Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers.
▪ We can do so by having an integer counter that keeps track of the number of full buffers.
▪ Initially, counter is set to 0.
▪ The integer counter is incremented by the producer after it produces a new buffer.
▪ The integer counter is decremented by the consumer after it consumes a buffer.
Producer
while (true) {
/* produce an item in next produced */
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Consumer
while (true) {
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next consumed */
}
Race Condition
▪ counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
▪ counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Threads
Motivation
Single and Multithreaded Processes
Multithreaded Server Architecture
Benefits
Concurrency vs. Parallelism
▪ Parallelism implies a system can perform more than one task simultaneously
▪ Concurrency supports more than one task making progress
▪ Single processor / core, scheduler providing concurrency
▪ Concurrent execution on single-core system:
User Threads and Kernel Threads
User and Kernel Threads
Multithreading Models
▪ Many-to-One
▪ One-to-One
▪ Many-to-Many
Many-to-One
▪ Many user-level threads mapped to single kernel thread
▪ One thread blocking causes all to block
▪ Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
▪ Few systems currently use this model
One-to-One
Many-to-Many Model
▪ Allows many user-level threads to be mapped to many kernel threads
▪ Allows the operating system to create a sufficient number of kernel threads
▪ Windows with the ThreadFiber package
▪ Otherwise not very common
Thread Libraries
Java Threads
Java Threads
Implementing Runnable interface:
Creating a thread:
Waiting on a thread:
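A minimal sketch of the three steps listed above, using the standard java.lang.Thread API (the class name and the toy computation are invented for illustration):

```java
// Implementing Runnable, creating a Thread, and waiting with join().
public class JavaThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Implementing the Runnable interface (as a lambda here);
        // the single-element array is a simple way to return a result.
        final int[] result = new int[1];
        Runnable task = () -> result[0] = 21 + 21;

        // Creating a thread: construct a Thread from the Runnable
        // and call start() to begin its execution.
        Thread worker = new Thread(task);
        worker.start();

        // Waiting on a thread: join() blocks until the worker finishes,
        // so the result is guaranteed to be visible afterwards.
        worker.join();
        System.out.println(result[0]); // prints 42
    }
}
```

Runnable can equally be implemented by a named class with a run() method; the lambda form is just the most compact way to show the same interface.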