
Introduction to Operating Systems

Lecture 4

Lecture 4 contents
1. Inter-Process Communication
2. Process Synchronization
   - Race condition and the producer-consumer problem
   - Solutions: Peterson's solution, semaphore operations
3. Deadlock
   - Problem
   - Solutions: prevention, avoidance

1. Inter-Process Communication

Inter-process Communication

Inter-process communication: the OS provides the means for cooperating processes to communicate with each other via an inter-process communication (IPC) facility. Processes within a system may be independent or cooperating:
- An independent process cannot affect or be affected by the execution of another process.
- A cooperating process can affect or be affected by other processes, for example by sharing data.
Cooperating processes need inter-process communication. There are two models of IPC: message passing and shared memory.

Direct Communication

Direct communication: each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:
send(P, message) - send a message to process P.
receive(Q, message) - receive a message from process Q.
A communication link in this scheme has the following properties:
- A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
- A link is associated with exactly two processes.
- Exactly one link exists between each pair of processes.
Disadvantage: the names of processes must be known, and they cannot easily be changed, since they are explicitly named in the send and receive calls.

Indirect Communication

Indirect communication: messages are sent to and received from mailboxes (also called ports). Each mailbox has a unique identification. In this scheme, a process can communicate with another process via a number of different mailboxes; two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:
send(A, message) - send a message to mailbox A.
receive(A, message) - receive a message from mailbox A.
In this scheme, a communication link has the following properties:
- A link is established between a pair of processes only if both members of the pair share a mailbox.
- A link may be associated with more than two processes.
- A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.
Disadvantage: may cause confusion with multiple receivers - if several processes have outstanding receives on a mailbox, which one gets a message?
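Below is a minimal sketch of mailbox-style (indirect) communication using POSIX message queues; this particular API is not named in the slides, and the mailbox name "/mbox_A" and the message text are made up for illustration.

/* Indirect communication sketch: both processes name a mailbox, not each other.
   Assumes a POSIX system; on Linux, link with -lrt. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Open (or create) mailbox A; any process that knows the name may use it. */
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0644, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, message) */
    const char *msg = "hello via mailbox A";
    mq_send(mbox, msg, strlen(msg) + 1, 0);

    /* receive(A, message): the buffer must be at least mq_msgsize bytes. */
    char buf[64];
    if (mq_receive(mbox, buf, sizeof buf, NULL) >= 0)
        printf("got: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mbox_A");
    return 0;
}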

Blocking and Non-Blocking

Message passing may be either blocking or non-blocking.
Blocking is considered synchronous:
- A blocking send blocks the sender until the message is received.
- A blocking receive blocks the receiver until a message is available.
Non-blocking is considered asynchronous:
- A non-blocking send lets the sender send the message and continue.
- A non-blocking receive returns to the receiver either a valid message or null.
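As a small illustration of the difference, the sketch below reuses the hypothetical mailbox from the previous example: opening it with O_NONBLOCK makes mq_receive return immediately (with errno set to EAGAIN) when the mailbox is empty, while the plain open blocks the caller until a message arrives.

/* Blocking vs. non-blocking receive on the same POSIX mailbox (sketch).
   Assumes the queue "/mbox_A" was created with a 64-byte message size. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

void non_blocking_receive(void) {
    /* O_NONBLOCK: receive returns at once with EAGAIN if no message is queued. */
    mqd_t m = mq_open("/mbox_A", O_RDONLY | O_NONBLOCK);
    char buf[64];
    if (mq_receive(m, buf, sizeof buf, NULL) == -1 && errno == EAGAIN)
        printf("no message yet, continuing with other work\n");
    mq_close(m);
}

void blocking_receive(void) {
    /* Without O_NONBLOCK, the caller sleeps until a message is available. */
    mqd_t m = mq_open("/mbox_A", O_RDONLY);
    char buf[64];
    mq_receive(m, buf, sizeof buf, NULL);   /* blocks here */
    mq_close(m);
}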

Local and Remote message passing

Local procedure call (LPC): works only between processes on the same system. It uses ports (like mailboxes) to establish and maintain communication channels. Communication works as follows:
- The client opens a handle to the subsystem's connection port object.
- The client sends a connection request.
- The server creates two private communication ports and returns the handle to one of them to the client.
- The client and server use the corresponding port handle to send messages or callbacks and to listen for replies.
Remote procedure call (RPC): abstracts procedure calls between processes on networked systems. It uses sockets in addition to ports. A socket is defined as an endpoint for communication, identified by the concatenation of an IP address and a port: the socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8. Communication takes place between a pair of sockets.
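The sketch below shows what such an endpoint looks like in code: a TCP client socket connecting to the example endpoint 161.25.19.8:1625 from the slide. It only illustrates the socket-as-endpoint idea; real RPC systems hide this behind generated stubs.

/* A socket is an endpoint: IP address + port. Sketch of a TCP client
   connecting to 161.25.19.8:1625 (the example endpoint from the slide). */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);      /* create the endpoint */

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);                        /* port 1625    */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);    /* host address */

    /* Communication takes place between this socket and the server's socket. */
    if (connect(fd, (struct sockaddr *)&server, sizeof server) == 0)
        write(fd, "request", 7);

    close(fd);
    return 0;
}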

Local and Remote message passing


[Figures: Local Message Passing and Remote Message Passing diagrams]

2. Process Synchronization


Race Condition


Race Condition Problem


Fact of life 1: concurrent access to shared data may result in data inconsistency.
Fact of life 2: maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Race condition: the situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.
Critical section: the section (segment) of code where the shared data is accessed by the n competing processes.
Entry section: code that requests permission to enter the critical section.
Exit section: code that runs after exiting the critical section.


Producer-Consumer Problem


Concurrent access to shared data may result in data inconsistency

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
An example that shows this kind of concurrent access is a producer procedure and a consumer procedure sharing data in a buffer.

Producer-Consumer Problem


Suppose we want a solution to the producer-consumer problem in which all the buffer slots can be used. We keep an integer count that tracks the number of full buffer slots. Initially, count is set to 0. The producer increments count after producing an item into the buffer; the consumer decrements count after consuming an item from the buffer.

[Code figures: the Producer loop and the Consumer loop]
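The figures themselves are not reproduced here; the following is a sketch of the standard textbook loops they typically show, with a circular buffer and the shared counter count. The helpers produce_item and consume_item are placeholders, not part of the original slides.

/* Sketch of the usual producer/consumer loops: a shared circular buffer
   plus the shared counter `count` described on the slide. */
#define BUFFER_SIZE 10

int  produce_item(void);        /* placeholder: creates the next item  */
void consume_item(int item);    /* placeholder: uses the consumed item */

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
int count = 0;                  /* number of full slots, shared by both */

void producer(void) {
    while (1) {
        int item = produce_item();
        while (count == BUFFER_SIZE) ;      /* buffer full: wait */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        count++;                            /* not atomic: source of the race */
    }
}

void consumer(void) {
    while (1) {
        while (count == 0) ;                /* buffer empty: wait */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;                            /* not atomic: source of the race */
        consume_item(item);
    }
}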


Producer- Consumer Problem

count++ could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2

These register sequences form the critical section; interleaving them produces a race condition. Possible execution (with count = 5 initially):
S0: producer executes register1 = count          {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = count          {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes count = register1          {count = 6}
S5: consumer executes count = register2          {count = 4}

Shared counters among threads

Possible result: lost update!

hits = 0 initially; time runs downward:

    T1                      T2
    read hits (0)
                            read hits (0)
                            hits = 0 + 1
    hits = 0 + 1

Final value: hits = 1 - one of the two updates is lost.

One other possible result: everything works. Because the outcome varies from run to run, this is difficult to debug. It is called a race condition.
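The same lost update can be reproduced with a small pthreads program (not part of the slides); the loop bound is arbitrary. With two unsynchronized threads incrementing the shared counter, the printed total is usually less than 2,000,000 and changes between runs.

/* Demonstrating the lost update: two threads increment a shared counter
   without synchronization. Compile with: cc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

long hits = 0;                      /* shared counter */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        hits++;                     /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000; the race usually makes it smaller, and it varies per run. */
    printf("hits = %ld\n", hits);
    return 0;
}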

Solution to Critical-Section Problem


1. Mutual Exclusion - if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting - a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Assume that each process executes at a nonzero speed. No assumption is made concerning the relative speed of the N processes.

Solution to Critical-Section Problem


[Figure: Mutual Exclusion]

Solution to Critical-Section Problem

To solve the critical-section problem, we will look at two approaches:
1. Peterson's solution: works for two processes only.
2. Semaphores: may suffer from deadlock.


Peterson's Solution

Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The two processes P0 and P1 share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section: if turn == i, then Pi is allowed to execute in its critical section. The flag array indicates whether a process is ready to enter its critical section: if flag[i] is true, then Pi is ready to enter its critical section.

Peterson's Solution

Process Pi:
repeat
    flag[i] := true;               // I want in: Pi flags its need to enter the critical section
    turn := j;                     // but you can go first: give the turn to Pj
    while (flag[j] && turn == j);  // wait (keep testing) until Pj is not interested or Pj gives the turn back to i
    CS                             // when the turn is changed back to i, Pi enters the critical section
    flag[i] := false;              // I'm done: Pi flags that it has finished its critical section
    RS
forever
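Below is a minimal C sketch of the same algorithm, written as enter/exit functions for thread i (with j = 1 - i). It follows the simple textbook form on the slide; on modern hardware a faithful implementation would also need atomic operations or memory fences, which are omitted here.

#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* flag[i]: Pi wants to enter */
volatile int  turn = 0;                    /* whose turn it is to enter  */

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = true;                 /* I want in                               */
    turn = j;                       /* but you can go first                    */
    while (flag[j] && turn == j)    /* wait while Pj wants in and has the turn */
        ;
}

void exit_critical_section(int i) {
    flag[i] = false;                /* I'm done */
}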


Peterson's Solution

Process P0:
repeat
    flag[0] := true;               // 0 wants in
    turn := 1;                     // 0 gives a chance to 1
    while (flag[1] && turn == 1);
    CS
    flag[0] := false;              // 0 is done
    RS
forever

Process P1:
repeat
    flag[1] := true;               // 1 wants in
    turn := 0;                     // 1 gives a chance to 0
    while (flag[0] && turn == 0);
    CS
    flag[1] := false;              // 1 is done
    RS
forever

Peterson's Solution

Peterson's solution meets all three requirements for solving the critical-section problem:

Mutual exclusion: P0 and P1 cannot both have successfully passed their while tests at the same time. Each Pi enters its critical section only if either flag[j] == false or turn == i. If both processes were in their critical sections simultaneously, then flag[0] == flag[1] == true; but turn can be only 0 or 1, so one of the two could not have passed its while test - a contradiction.

Progress: a ready process Pi (one with flag[i] == true) enters the critical section when process Pj is done with its critical section and sets the turn value (or is not interested at all).

Bounded waiting: Pi enters the critical section after at most one entry by Pj.

This solution solves the critical-section problem for two processes only.

Semaphores Solution

A semaphore is an integer variable S whose value is 0 (in use) or 1 (free). The semaphore S is initially 1 (free). The critical section has a semaphore and a queue of waiting processes. When a process P tries to access the critical section it checks S: if the value is 0, P waits until the value becomes greater than zero. When S is 1 the lock is released, which indicates that the critical section is free to be accessed by the next waiting process; P then enters the critical section and sets S to 0. After P finishes working in the critical section, it signals the release of the lock by setting S back to 1.

Shared: semaphore S;
Init:   S = 1;

wait(S) {
    while (S <= 0);   // busy wait
    S--;
}

signal(S) {
    S++;
}

Each process Pi from the queue:
do {
    wait(S);
    // critical section
    signal(S);
} while (1);

Semaphores Solution
Shared data:
int Balance;
semaphore mutex;       // initially mutex = 1

Process A:
    ...
    wait(mutex);
    Balance = Balance - 100;
    signal(mutex);

Process B:
    ...
    wait(mutex);
    Balance = Balance + 200;
    signal(mutex);
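A runnable version of this example is sketched below using POSIX semaphores, with threads standing in for processes A and B; the initial balance of 1000 is an assumption, not from the slide.

/* Sketch of the Balance example with a real semaphore used as a mutex. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int Balance = 1000;        /* shared data (initial value assumed)      */
sem_t mutex;               /* binary semaphore, initialized to 1 below */

void *process_A(void *arg) {
    sem_wait(&mutex);                 /* wait(mutex)   */
    Balance = Balance - 100;
    sem_post(&mutex);                 /* signal(mutex) */
    return NULL;
}

void *process_B(void *arg) {
    sem_wait(&mutex);
    Balance = Balance + 200;
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);           /* initially mutex = 1 */
    pthread_t a, b;
    pthread_create(&a, NULL, process_A, NULL);
    pthread_create(&b, NULL, process_B, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("Balance = %d\n", Balance);
    sem_destroy(&mutex);
    return 0;
}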


Semaphores Solution
How semaphores relate to the three requirements of the critical-section problem:

Mutual exclusion: if multiple processes are blocked on the same semaphore S, only one of them is awakened when another process performs a signal(S) operation.

Progress: the highest-priority queued waiting process Pi enters the critical section when the semaphore is released by the process leaving the critical section.

Bounded waiting: not guaranteed. The implementation of the waiting queue may result in a situation where two processes wait indefinitely for a signal() call that can only come from another waiting process; this is called deadlock. A process may also wait indefinitely for the critical section because its turn or priority for entry never comes; this is called starvation.

This solution suffers from the possibility of deadlock.

Deadlock


Deadlock and starvation

In deadlock, processes halt because they cannot proceed and the resources they hold are never released. In starvation, the system as a whole makes progress using (and reusing) the resources, but particular processes consistently miss out on being granted their resource requests.

Example of starvation: processes P1, P2, P3 and resource R. Each process requires periodic access to R. If P1 and P2 are repeatedly granted access to R, then P3 may be denied access to R indefinitely.

Example of deadlock: processes P1 and P2, resources S1 and S2.
P1 is holding S2 and waiting for S1; P2 is holding S1 and waiting for S2.
P1 will not release S2 until it has S1, and P2 will not release S1 until it has S2.


Deadlock and starvation

Wait-For Graph (WFG)

Nodes: the processes in the system.
Directed edges: the wait-for blocking relation (a process waits for a resource, and a resource is held by a process).

[Figure: wait-for graph in which Process 1 and Process 2 each hold one of Resource 1 and Resource 2 and wait for the other, forming a cycle]

A cycle in the graph represents a deadlock.
Starvation: a process's execution is postponed indefinitely.

Resource allocation graph

[Figures: resource allocation graphs for processes P1, P2, P3 and resources r1, r2 - one with deadlock and one without deadlock]

Deadlock [Dining-Philosophers]
Five philosophers share a circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the two closest chopsticks. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick that is already in use. When done eating, he puts down both of his chopsticks, one after the other.

Shared data:
semaphore chopstick[5];
Initially all semaphore values are 1.
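A sketch of the straightforward solution the slide describes is shown below: each philosopher simply takes the left chopstick and then the right one. This is the version that can deadlock, as discussed on the following slides; indices are 0-based here.

#include <semaphore.h>

#define N 5
sem_t chopstick[N];        /* all initialized to 1 elsewhere */

void philosopher(int i) {
    while (1) {
        sem_wait(&chopstick[i]);            /* pick up left chopstick  */
        sem_wait(&chopstick[(i + 1) % N]);  /* pick up right chopstick */

        /* eat */

        sem_post(&chopstick[i]);            /* put down left  */
        sem_post(&chopstick[(i + 1) % N]);  /* put down right */

        /* think */
    }
    /* If all five grab their left chopstick first, each holds one chopstick
       and waits forever for the right one: deadlock. */
}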

Deadlock [Dining-Philosophers]

Deadlock can be illustrated with two semaphores S and Q, both initialized to 1:

P0:               P1:
wait(S);          wait(Q);
wait(Q);          wait(S);
...               ...
signal(S);        signal(Q);
signal(Q);        signal(S);

If P0 acquires S and P1 acquires Q, each then waits for the semaphore held by the other, and neither can proceed.

The straightforward chopstick solution fails in the same way: it allows the system to reach a deadlock state in which each philosopher has picked up the chopstick to his left and waits for the chopstick to his right to be put down, which never happens.

Deadlock: two or more processes wait indefinitely for an event that can be caused by only one of the waiting processes.
Starvation: indefinite blocking; a process may never be removed from the semaphore queue in which it is suspended.

Deadlock [Dining-Philosophers]

This attempt at a solution fails: it allows the system to reach a deadlock state in which each philosopher has picked up the chopstick to his left and waits for the chopstick to his right to be put down, which never happens, because:
A. Each philosopher's right chopstick is another philosopher's left chopstick, and no philosopher will put down that chopstick until he eats; and
B. No philosopher can eat until he acquires the chopstick to his own right, which has already been picked up by the philosopher to his right, as described in A.


Deadlock Conditions

Mutual exclusion: must hold for non-sharable resources. For example, a printer cannot be simultaneously shared by several processes. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock. In general, however, it is not possible to prevent deadlocks by denying the mutual exclusion condition.

Hold and wait: to ensure that the hold-and-wait condition never occurs in the system, we must guarantee that whenever a process requests a resource it does not hold any other resources. A process may request some resources and use them, but before it can request any further resources it must release all the resources it is currently allocated.

No preemption: the third necessary condition is that resources that have already been allocated are not preempted. To deny this condition: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all the resources it is currently holding are preempted.

Circular wait: one way to ensure that the circular wait condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration.

Strategies for handling deadlocks

Deadlock prevention: prevents deadlocks by restraining requests so that at least one of the four deadlock conditions cannot occur.
Deadlock avoidance: dynamically grants a resource to a process only if the resulting state is safe. A state is safe if there is at least one execution sequence that allows all processes to run to completion.
Deadlock detection and recovery: allows deadlocks to form, then finds and breaks them.

Deadlock Prevention
1. A process acquires all the resources it needs simultaneously, before it begins execution, breaking the hold-and-wait condition. E.g., in the dining-philosophers problem, each philosopher is required to pick up both chopsticks at the same time; if he fails, he must release any chopstick he has already acquired. Drawback: over-cautious.

2. All resources are assigned unique numbers. A process may request a resource numbered i only if it is not holding any resource numbered i or higher, i.e., resources must be requested in increasing order, breaking the circular-wait condition. E.g., in the dining-philosophers problem, each philosopher must request his lower-numbered chopstick before his higher-numbered one: philosopher Pi (for i = 1..4) picks up Fi and then Fi+1, while P5, seated between F5 and F1, must pick up F1 before F5. Drawback: over-cautious. (A sketch of this ordering appears after this list.)

3. Each process is assigned a unique priority number. The priority numbers decide whether process Pi should wait for process Pj, breaking the no-preemption condition. E.g., assume the philosophers' priorities are based on their ids, i.e., Pi has higher priority than Pj if i < j. Then Pi is allowed to wait for Pi+1 for i = 1, 2, 3, 4, but P5 is not allowed to wait for P1; if that case arises, P5 must abort, releasing any chopstick it has acquired. Drawback: starvation; the lower-priority process may always be the one rolled back. A solution is to raise its priority every time it is victimized.
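Below is a sketch of the resource-ordering idea from item 2, using 0-based chopstick numbers. Each philosopher acquires the lower-numbered of his two chopsticks first, so the last philosopher reaches for chopstick 0 before chopstick 4 and the circular wait is broken.

#include <semaphore.h>

#define N 5
sem_t chopstick[N];        /* all initialized to 1 elsewhere */

void philosopher(int i) {
    int left   = i;
    int right  = (i + 1) % N;
    int first  = (left < right) ? left : right;   /* lower-numbered resource  */
    int second = (left < right) ? right : left;   /* higher-numbered resource */

    while (1) {
        sem_wait(&chopstick[first]);    /* always acquire in increasing order */
        sem_wait(&chopstick[second]);

        /* eat */

        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);

        /* think */
    }
}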

Deadlock Avoidance

Resource allocation denial: do not grant an incremental resource request to a process if the allocation might lead to deadlock. The state of the system reflects the current allocation of resources to processes. A safe state is one in which there is at least one sequence of resource allocations to processes that does not result in a deadlock. An unsafe state is a state that is not safe.
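The safety test can be sketched as code. The function below is an illustrative, banker's-algorithm-style check (the slides do not give an implementation): a state is reported safe if some order lets every process obtain its remaining needs from the currently available resources and then release everything it holds.

#include <stdbool.h>

#define NPROC 4
#define NRES  3

/* need[p][r]  : resources process p may still request
   alloc[p][r] : resources currently allocated to p
   available[r]: resources currently free                */
bool is_safe(const int available[NRES],
             const int need[NPROC][NRES],
             const int alloc[NPROC][NRES]) {
    int  work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++) work[r] = available[r];

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                /* p can run to completion and then release what it holds */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                finished[p] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;    /* no process can finish: unsafe  */
    }
    return true;                          /* a full completion order exists */
}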


Determination of a Safe State

[Figures: the state of a system consisting of four processes and three resources, showing the allocations made to the four processes, the amount of existing resources, and the resources available after allocation]

The figures step through P2 running to completion, then P1, then P3. Since such a completion sequence exists, the state defined originally is a safe state.

Determination of an Unsafe State

[Figure: a similar sequence of allocations that leads to a state that is not safe]

Deadlocks in Distributed Systems

Resource deadlock: the most common kind; occurs because a requested resource is not available.

Communication deadlock: a process waits for certain messages before it can proceed; if the waited-for messages can only be sent by processes that are themselves waiting, none can proceed.

Introduction to Operating Systems

Lecture 4
