
Chapter 3: Processes

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Multiprocess Architecture – Chrome Browser

▪ Many web browsers ran as a single process (some still do)
• If one web site causes trouble, the entire browser can hang or crash
▪ Google Chrome Browser is multiprocess with 3 different types of
processes:
• Browser process manages user interface, disk and network I/O
• Renderer process renders web pages, deals with HTML and
JavaScript. A new renderer process is created for each website opened
 Runs in a sandbox restricting disk and network I/O, minimizing the effect
of security exploits
 A sandbox is a security mechanism for separating running
programs, usually in an effort to keep system failures and/or
software vulnerabilities from spreading
• Plug-in process for each type of plug-in
 A plug-in is a software component that adds a specific feature to an existing computer program

Interprocess Communication
▪ Processes within a system may be independent or cooperating
 A process is independent if it does not share data with any other
processes executing in the system
 A process is cooperating if it can affect or be affected by the other
processes executing in the system (share data with other processes)
▪ Reasons for cooperating processes:
• Information sharing
 Access to the same files concurrently, e.g., copying and pasting
• Computation speedup
 If we want a particular task to run faster, we must break it into subtasks, each
of which will be executing in parallel with the others
• Modularity (complex tasks into smaller subtasks)
 To construct the system in a modular fashion, dividing the system functions
into separate processes or threads
• Convenience (e.g., printing, compiling, editing; it is more convenient if these
activities can be managed through cooperating processes)
▪ Cooperating processes need interprocess communication (IPC)
Interprocess Communication
▪ Two models of IPC
• Shared memory
 A region of memory that is shared by the cooperating processes is
established. Processes can then exchange information by reading and
writing data to the shared region
 Shared memory can be faster than message passing
 System calls are required only to establish shared memory regions
 All accesses are treated as routine memory accesses, and no assistance
from the kernel is required
• Message passing
 Communication takes place by means of messages exchanged between the
cooperating processes
 Message passing is useful for exchanging smaller amounts of data,
because no conflicts need be avoided
 Implemented using system calls and thus requires the more time-
consuming task of kernel intervention
• Both models are common in operating systems, and many systems
implement both
Communications Models
(a) Shared memory. (b) Message passing.

IPC in Shared Memory System
▪ Inter-process communication using shared memory requires communicating
processes to establish a region of shared memory
▪ Typically, a shared-memory region resides in the address space of the
process creating the shared-memory segment
▪ Other processes that wish to communicate using this shared-memory
segment must attach it to their address space
▪ Recall that, normally, the operating system tries to prevent one process
from accessing another process’s memory
▪ Shared memory requires that two or more processes agree to remove this
restriction
▪ They can then exchange information by reading and writing data in the
shared areas
▪ The form of the data and the location are determined by these processes and
are not under the operating system’s control
▪ The processes are also responsible for ensuring that they are not writing to
the same location simultaneously
▪ A major issue is to provide a mechanism that will allow the user processes to
synchronize their actions when they access shared memory
Producer-Consumer Problem
▪ Paradigm for cooperating processes:
• producer process produces information that is consumed by a consumer
process
• For example, a compiler may produce assembly code that is consumed by an
assembler
• The assembler, in turn, may produce object modules that are consumed by the
loader
▪ A useful metaphor for the client–server paradigm
• For example, a web server produces web content such as HTML files and
images, which are consumed by the client web browser requesting the
resource
▪ One solution to the producer–consumer problem uses shared memory
▪ To allow producer and consumer processes to run concurrently, we must
have available a buffer of items that can be filled by the producer and
emptied by the consumer
▪ This buffer will reside in a region of memory that is shared by the producer
and consumer processes

Producer-Consumer Problem
▪ A producer can produce one item while the consumer is consuming another item
▪ The producer and consumer must be synchronized, so that the consumer
does not try to consume an item that has not yet been produced
▪ Two variations:
• unbounded-buffer places no practical limit on the size of the buffer:
 Producer never waits
 Consumer waits if there is no item to consume
• bounded-buffer assumes that there is a fixed buffer size
 Producer must wait if all buffers are full
 Consumer waits if there is no item to consume

Bounded-Buffer – Shared-Memory Solution
▪ Shared data: The following variables reside in a region of memory shared by the
producer and consumer processes

#define BUFFER_SIZE 10

typedef struct {
. . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

1. The buffer is implemented as a circular array
2. Two logical pointers: in & out
3. in points to the next free position in the buffer
4. out points to the first full position in the buffer
5. The buffer is empty when in == out
6. The buffer is full when ((in + 1) % BUFFER_SIZE) == out

▪ Solution is correct
• but can only use BUFFER_SIZE-1 elements

Producer Process – Shared Memory
The new item to be produced is stored in next_produced:

item next_produced;

while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}

Consumer Process – Shared Memory
The new item to be consumed is stored in next_consumed:
item next_consumed;

while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;

/* consume the item in next consumed */


}
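
Putting the two fragments together: the following is a minimal, hypothetical sketch (not from the slides) that runs the producer and consumer as parent and child processes sharing the buffer through an anonymous shared mapping. Items are plain ints for brevity, and the busy-wait loops with volatile are simplifications; a real solution would use the synchronization tools of Chapter 6.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUFFER_SIZE 10

typedef struct {
    int buffer[BUFFER_SIZE];
    volatile int in;   /* next free position  */
    volatile int out;  /* first full position */
} shared_t;

int main(void) {
    /* shared region created before fork(), so both processes see it */
    shared_t *s = mmap(NULL, sizeof(shared_t), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED) { perror("mmap"); exit(1); }
    s->in = s->out = 0;

    pid_t pid = fork();
    if (pid == 0) {                            /* child: consumer */
        for (int i = 0; i < 20; i++) {
            while (s->in == s->out)
                ;                              /* buffer empty: busy-wait */
            int item = s->buffer[s->out];
            s->out = (s->out + 1) % BUFFER_SIZE;
            printf("consumed %d\n", item);
        }
        _exit(0);
    } else {                                   /* parent: producer */
        for (int i = 0; i < 20; i++) {
            while (((s->in + 1) % BUFFER_SIZE) == s->out)
                ;                              /* buffer full: busy-wait */
            s->buffer[s->in] = i;              /* produce item i */
            s->in = (s->in + 1) % BUFFER_SIZE;
        }
        wait(NULL);
    }
    return 0;
}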

What about Filling all the Buffers?
▪ Suppose that we wanted to provide a solution to the producer-
consumer problem that fills all the buffers.
▪ We can do so by having an integer counter that keeps track
of the number of full buffers.
▪ Initially, counter is set to 0.
▪ The integer counter is incremented by the producer after it
produces a new buffer.
▪ The integer counter is decremented by the consumer
after it consumes a buffer.

Producer

while (true) {
/* produce an item in next produced */

while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}

Consumer

while (true) {
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next consumed */
}

Race Condition
▪ counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
▪ counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2

▪ Consider this execution interleaving with “counter = 5” initially:


S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
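
As a hypothetical illustration (not from the slides), the following pthreads sketch makes the race visible: two threads repeatedly perform the unsynchronized counter++ and counter-- shown above, and the final value is usually not 0 because the three-instruction sequences interleave. Compile with -pthread (and without optimization, so the load/modify/store steps are preserved).

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;                  /* shared, unsynchronized */

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                /* load, add 1, store */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                /* load, subtract 1, store */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);  /* usually nonzero */
    return 0;
}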

Race Condition (Cont.)
▪ Question – why was there no race condition
in the first solution (where at most N – 1
buffers can be filled)?
▪ More in Chapter 6.

IPC – Message Passing
▪ Processes communicate with each other without resorting to shared variables
▪ Provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing memory
▪ Useful in a distributed environment
• For example, an Internet chat program could be designed so that chat participants
communicate with one another by exchanging messages
▪ IPC facility provides two operations:
• send(message)
• receive(message)

▪ The message size is either fixed or variable


 Fixed-sized messages make the system-level implementation
straightforward but make the task of programming more difficult
 Variable-sized messages make the system-level implementation more
difficult but make the task of programming simpler

Message Passing (Cont.)
▪ If processes P and Q wish to communicate, they need to:
• Establish a communication link between them
• Exchange messages via send/receive
▪ For communication links, we are concerned here not with the link’s physical
implementation but with its logical implementation:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
▪ Implementation issues:
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating
processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate fixed or variable?
• Is a link unidirectional or bi-directional?
Implementation of Communication Link

▪ Physical:
• Shared memory
• Hardware bus
• Network
▪ Logical:
• Direct or indirect
• Synchronous or asynchronous
• Automatic or explicit buffering

Direct Communication
▪ Naming: processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
▪ Properties of communication link
• Links are established automatically
• A link is associated with exactly two processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
▪ This scheme exhibits symmetry in addressing; that is, both the sender process
and the receiver process must name the other to communicate
▪ With asymmetry in addressing, only the sender names the recipient; the recipient
is not required to name the sender:
▪ send(P, message)—Send a message to process P.
▪ receive(id, message)—Receive a message from any process. The variable id
is set to the name of the process with which communication has taken
place
Indirect Communication
▪ Disadvantage of Direct Communication:
• Changing the identifier of a process may necessitate examining all other
process definitions
• All references to the old identifier must be found, so that they can be modified
to the new identifier
▪ Messages are sent to and received from mailboxes (also referred to as ports)
• Each mailbox has a unique id
 For example, POSIX message queues use an integer value to identify a
mailbox
• Processes can communicate only if they share a mailbox
▪ Properties of communication link
• Link established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links (each link
corresponding to one mailbox)
• Link may be unidirectional or bi-directional
Indirect Communication (Cont.)

▪ Operations
• Create a new mailbox (port)
• Send and receive messages through mailbox
• Delete a mailbox
▪ Primitives are defined as:
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from mailbox A
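
As a hypothetical sketch (not part of the slides), these mailbox operations map naturally onto the POSIX message queues mentioned on the previous slide: mq_open() creates or opens the mailbox, mq_send()/mq_receive() play the role of send(A, message) and receive(A, message), and mq_unlink() deletes it. The mailbox name "/mbox_A" is made up for illustration; on Linux, link with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };

    /* create (or open) mailbox A */
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0666, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, message) */
    const char *msg = "hello";
    mq_send(mbox, msg, strlen(msg) + 1, 0);

    /* receive(A, message) */
    char buf[64];
    if (mq_receive(mbox, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    /* delete mailbox A */
    mq_close(mbox);
    mq_unlink("/mbox_A");
    return 0;
}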

Indirect Communication (Cont.)
▪ Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?
▪ Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive operation
• Allow the system to arbitrarily select the receiver; the sender is notified
who the receiver was
• Define an algorithm for selecting the receiver (e.g., round robin)
▪ A mailbox may be owned either by a process or by the operating
system
▪ If the mailbox is owned by a process, then we distinguish between the
owner and the user
▪ Each mailbox has a unique owner

Synchronization
▪ Communication between processes takes place through calls to send() and
receive() primitives
▪ Message passing may be either blocking or non-blocking
▪ Blocking is considered synchronous
• Blocking send -- the sender is blocked until the message is received
• Blocking receive -- the receiver is blocked until a message is available
▪ Non-blocking is considered asynchronous
• Non-blocking send -- the sender sends the message and continues
• Non-blocking receive -- the receiver receives:
 A valid message, or
 Null message
▪ Different combinations possible
• If both send and receive are blocking, we have a rendezvous
• The solution to the producer–consumer problem becomes trivial when
we use blocking send() and receive() statements

Producer-Consumer: Message Passing

▪ Producer
message next_produced;
while (true) {
/* produce an item in next_produced */

send(next_produced);
}

▪ Consumer
message next_consumed;
while (true) {
receive(next_consumed);

/* consume the item in next_consumed */


}
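
A hypothetical sketch (not in the slides) of how this pseudocode could be realized with a POSIX message queue: because mq_send() blocks when the queue is full and mq_receive() blocks when it is empty, the blocking send()/receive() behaviour described on the previous slide comes for free. The queue name "/pc_queue" and the item count are made up for illustration; on Linux, link with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = sizeof(int), .mq_curmsgs = 0 };
    mqd_t mq = mq_open("/pc_queue", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                       /* child: consumer */
        int item;
        for (int i = 0; i < 5; i++) {
            mq_receive(mq, (char *)&item, sizeof(int), NULL); /* blocks if empty */
            printf("consumed %d\n", item);
        }
        _exit(0);
    }

    for (int item = 0; item < 5; item++)     /* parent: producer */
        mq_send(mq, (const char *)&item, sizeof(int), 0);     /* blocks if full */

    wait(NULL);
    mq_close(mq);
    mq_unlink("/pc_queue");
    return 0;
}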

Buffering
▪ Whether communication is direct or indirect,
• messages exchanged by communicating processes reside in a temporary
queue
▪ Such a queue of messages is attached to the link
▪ Implemented in one of three ways:
1. Zero capacity – no messages are queued on the link.
Sender must wait (block) for the receiver (rendezvous)
 Referred to as no buffering
2. Bounded capacity – finite length of n messages
Sender must wait if the link is full; otherwise the sender can continue
3. Unbounded capacity – infinite length
Sender never waits
 Cases 2 and 3 are referred to as automatic buffering

Examples of IPC Systems - POSIX
▪ POSIX shared memory is organized using memory-mapped files, which
associate the region of shared memory with a file
▪ POSIX Shared Memory
• Process first creates a shared-memory object using the shm_open() system call:
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
 name identifies the shared-memory object, O_CREAT | O_RDWR creates the
object (if necessary) and opens it for reading and writing, and 0666 sets the
file-access permissions
 The same call is also used to open an existing segment
• Set the size of the object:
ftruncate(shm_fd, 4096);
• Use mmap() to memory-map a file pointer to the shared-memory object
• Reading and writing to shared memory is done by using the pointer returned
by mmap()

IPC POSIX Producer
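
The original slide shows the producer program as a figure that is not reproduced here. The following is a minimal sketch of such a producer, assuming the object name "OS" and the messages "Hello" and "World!" (these specifics are illustrative, not taken from the missing figure); error checking is omitted for brevity.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const int SIZE = 4096;            /* size of the shared-memory object */
    const char *name = "OS";          /* name of the shared-memory object */
    const char *message_0 = "Hello";
    const char *message_1 = "World!";
    int shm_fd;
    char *ptr;

    /* create the shared-memory object */
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    /* configure the size of the shared-memory object */
    ftruncate(shm_fd, SIZE);
    /* memory-map the shared-memory object */
    ptr = (char *) mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);
    /* write to the shared-memory object */
    sprintf(ptr, "%s", message_0);
    ptr += strlen(message_0);
    sprintf(ptr, "%s", message_1);
    ptr += strlen(message_1);
    return 0;
}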

IPC POSIX Consumer
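
Likewise, the consumer slide's figure is not reproduced here. This sketch, assuming the same illustrative object name "OS", opens the existing object, maps it read-only, prints its contents, and removes it with shm_unlink(); error checking is again omitted.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const int SIZE = 4096;            /* size of the shared-memory object */
    const char *name = "OS";          /* name of the shared-memory object */
    int shm_fd;
    char *ptr;

    /* open the existing shared-memory object */
    shm_fd = shm_open(name, O_RDONLY, 0666);
    /* memory-map the shared-memory object, read-only */
    ptr = (char *) mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);
    /* read from the shared-memory object */
    printf("%s", (char *) ptr);
    /* remove the shared-memory object */
    shm_unlink(name);
    return 0;
}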

End of Chapter 3

