
NATIONAL UNIVERSITY LAGUNA

College of Computer Studies

CCOPSYSL-OPERATING SYSTEMS
Activity Sheet Week 5 and Week 6
Name: (Last Name, First Name) Date: (Date of Submission)
(1)
(2)
(3)
Program/Section: Instructor: Mr. Reynaldo E. Castillo
Topic: Concurrency: Process Synchronization and Deadlocks
Instructions:
1. Fill out the needed information above.
2. This is an individual submission.
3. Study and perform the tasks below.
4. Perform the embedded instructions and answer the given questions below.
5. Save your activity sheet as a PDF with the prescribed file name.
6. Submit via:
o Student Submission
▪ Week 5_6
• LastName_FirstName (create this folder)
o PDF (Activity Sheet Week 5 and Week 6, submit until May 13)
o PPT – Task 2 output
7. All screenshots should reflect the step you are performing and your student
information. See the sample below:

List of Activities

• Task 1: Process Synchronization


• Task 2: Safe and Unsafe State

At the end of this activity sheet, the student should be able to:

Reynaldo E. Castillo
pecastillo@nu-laguna.edu.ph
• Analyze the tradeoffs inherent in operating system design
• Understand the mechanisms of OS concurrency
• Understand the mechanisms involved in process synchronization and deadlock management

A system typically consists of several (perhaps hundreds or even thousands) of threads running
either concurrently or in parallel. Threads often share user data. Meanwhile, the operating system
continuously updates various data structures to support multiple threads. A race condition exists
when access to shared data is not controlled, possibly resulting in corrupt data values.

Process synchronization involves using tools that control access to shared data to avoid race
conditions. These tools must be used carefully, as their incorrect use can result in poor system
performance, including deadlock.

Synchronization Tools

A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space (that is, both code
and data) or be allowed to share data only through shared memory or message passing.
Concurrent access to shared data may result in data inconsistency, however. In this module,
we discuss various mechanisms to ensure the orderly execution of cooperating processes that
share a logical address space, so that data consistency is maintained.

Race Condition

Race condition - a situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which the
access takes place.

• To guard against the race condition, we need to ensure that only one process at a time can
be manipulating the variable count. To make such a guarantee, we require that the
processes be synchronized in some way.

The Critical-Section Problem

• Consider a system of n processes {p0, p1, … pn-1}


• Each process has a critical section segment of code
• The process may be changing common variables, updating tables, writing files, etc.
• When one process is in its critical section, no other process may be in its critical section
• Critical section problem is to design protocol to solve this

• Each process must ask permission to enter its critical section in the entry section, may
follow the critical section with an exit section, and then executes the remainder section

Requirements for a solution to a critical-section problem

• Mutual Exclusion - If process Pi is executing in its critical section, then no other


processes can be executing in their critical sections
• Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the process that will enter the
critical section next cannot be postponed indefinitely
• Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
o Assume that each process executes at a nonzero speed
o No assumption concerning the relative speed of the n processes

Interrupt-based Solution

• Entry section: disable interrupts


• Exit section: enable interrupts
• Will this solve the problem?
o What if the critical section is code that runs for an hour?
o Can some processes starve and never enter their critical section?
o What if there are two CPUs?

Software Solution 1

• Two process solution


• Assume that the load and store machine-language instructions are atomic; that is, cannot
be interrupted
• The two processes share one variable:
o int turn;
• The variable turn indicates whose turn it is to enter the critical section

Algorithm for Process Pi
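The slide figure showing the algorithm is not reproduced here. The following is a minimal sketch of the strict-alternation (turn-based) algorithm, with C11 atomics standing in for the atomic load/store the slides assume; the names `worker`, `run_turn_demo`, and `ITERS` are illustrative, not from the slides.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int turn = 0;   /* whose turn it is to enter the CS */
static long counter = 0;      /* shared data the CS protects */

#define ITERS 100000

static void *worker(void *arg) {
    int i = *(int *)arg;               /* this process's id: 0 or 1 */
    for (long k = 0; k < ITERS; k++) {
        while (atomic_load(&turn) != i)
            ;                          /* entry section: busy-wait */
        counter++;                     /* critical section */
        atomic_store(&turn, 1 - i);    /* exit section: pass the turn */
    }
    return NULL;
}

long run_turn_demo(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    atomic_store(&turn, 0);
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;                    /* 2 * ITERS if mutual exclusion held */
}
```

Because the two workers perform equal numbers of iterations, strict alternation terminates here; with unequal demand it would not, which is exactly the progress failure discussed below.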

Correctness of the Software Solution


• Mutual exclusion is preserved
Pi enters its critical section only if:
turn == i
and turn cannot be both 0 and 1 at the same time
• What about the Progress requirement?
o No. If turn == j and Pj never wishes to enter its critical section, Pi can never
proceed; strict alternation violates progress.
• What about the Bounded-waiting requirement?

Peterson’s Solution

• Two process solution


• Assume that the load and store machine-language instructions are atomic; that is, cannot
be interrupted
• The two processes share two variables:
o int turn;
o boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical section
• The flag array is used to indicate if a process is ready to enter the critical section.
o flag[i] = true implies that process Pi is ready!

Algorithm for Process Pi
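The slide figure for Peterson's algorithm is not reproduced here. Below is a sketch using the two shared variables described above; C11 sequentially consistent atomics play the role of the memory barriers that the later section says modern hardware needs. Names such as `peterson_worker` and `run_peterson_demo` are illustrative.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];   /* flag[i] = true: Pi is ready to enter */
static atomic_int turn;       /* whose turn it is */
static long counter;          /* shared data protected by the CS */

#define N 100000

static void *peterson_worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (long k = 0; k < N; k++) {
        atomic_store(&flag[i], true);   /* I am ready */
        atomic_store(&turn, j);         /* but let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy-wait */
        counter++;                      /* critical section */
        atomic_store(&flag[i], false);  /* exit section */
    }
    return NULL;
}

long run_peterson_demo(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    pthread_create(&t0, NULL, peterson_worker, &id0);
    pthread_create(&t1, NULL, peterson_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;                     /* 2 * N if mutual exclusion held */
}
```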

Correctness of Peterson’s Solution


• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters its CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

Peterson’s Solution and Modern Architecture

• Although useful for demonstrating an algorithm, Peterson’s Solution is not guaranteed to


work on modern architectures.
o To improve performance, processors and/or compilers may reorder operations that
have no dependencies
• Understanding why it will not work is useful for better understanding race conditions.
• For single-threaded execution this is fine, as the result will always be the same.
• For multithreaded execution the reordering may produce inconsistent or unexpected results!

Memory Barrier

• To ensure that Peterson’s solution will work correctly on modern computer architectures,
we must use a memory barrier.
• A memory model is the set of memory guarantees a computer architecture makes to
application programs.
• Memory models may be either:

• Strongly ordered – where a memory modification of one processor is immediately
visible to all other processors.
• Weakly ordered – where a memory modification of one processor may not be
immediately visible to all other processors.
• A memory barrier is an instruction that forces any change in memory to be propagated
(made visible) to all other processors.

Memory Barrier Instructions

• When a memory barrier instruction is performed, the system ensures that all loads and
stores are completed before any subsequent load or store operations are performed.
• Therefore, even if instructions were reordered, the memory barrier ensures that the store
operations are completed in memory and visible to other processors before future load or
store operations are performed.

Synchronization Hardware

• Many systems provide hardware support for implementing the critical section code.
• Uniprocessors – could disable interrupts
o Currently running code would execute without preemption
o Generally too inefficient on multiprocessor systems
▪ Operating systems using this not broadly scalable
• We will look at three forms of hardware support:
1. Memory barriers
2. Hardware instructions
3. Atomic variables

Hardware Instructions

• Special hardware instructions that allow us to either test-and-modify the content of a
word, or to swap the contents of two words, atomically (uninterruptibly)
• Test-and-Set instruction
• Compare-and-Swap instruction
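The slides do not include definitions of these instructions. Below is a sketch of their usual semantics, expressed with C11 atomics (real hardware provides each as a single instruction); the helper names and the `ts_acquire`/`ts_release` spinlock are illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Test-and-Set: atomically set *target to true and return its old value */
static bool test_and_set(atomic_bool *target) {
    return atomic_exchange(target, true);
}

/* Compare-and-Swap: if *value == expected, set it to new_value;
   always return the original value */
static int compare_and_swap(atomic_int *value, int expected, int new_value) {
    int old = expected;
    atomic_compare_exchange_strong(value, &old, new_value);
    return old;   /* on failure, old was overwritten with the actual value */
}

/* A spinlock built from test_and_set */
static atomic_bool lock_flag;
void ts_acquire(void) { while (test_and_set(&lock_flag)) ; }
void ts_release(void) { atomic_store(&lock_flag, false); }
```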

Atomic Variables

• Typically, instructions such as compare-and-swap are used as building blocks for other
synchronization tools.
• One tool is an atomic variable that provides atomic (uninterruptible) updates on basic
data types such as integers and booleans.
• For example:
o Let sequence be an atomic variable
o Let increment() be operation on the atomic variable sequence
o The Command:

increment(&sequence);

ensures sequence is incremented without interruption:
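The increment(&sequence) idea can be sketched with C11 atomic variables, where fetch-and-add is the uninterruptible update; the function names mirror the slide but the implementation shown is an assumption, not the slide's code.

```c
#include <stdatomic.h>

static atomic_long sequence;   /* the atomic variable from the example */

/* Increment the atomic variable as one uninterruptible read-modify-write */
void increment(atomic_long *v) {
    atomic_fetch_add(v, 1);
}

long current(atomic_long *v) {
    return atomic_load(v);
}
```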

Mutex Locks

• Previous solutions are complicated and generally inaccessible to application programmers
• OS designers build software tools to solve critical section problem
• Simplest is mutex lock
o Boolean variable indicating if lock is available or not
• Protect a critical section by
o First acquire() a lock
o Then release() the lock
• Calls to acquire() and release() must be atomic
o Usually implemented via hardware atomic instructions such as compare-and-
swap.
• But this solution requires busy waiting
o This lock therefore called a spinlock

Solution to CS Problem Using Mutex Locks
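The slide figure for this solution is not reproduced here. A minimal sketch of how acquire() and release() might be built on compare-and-swap, per the bullets above; the variable name `available` is an assumption.

```c
#include <stdatomic.h>

static atomic_int available = 1;   /* 1 = lock free, 0 = held */

void acquire(void) {
    int expected = 1;
    /* spin (busy wait) until we atomically flip available from 1 to 0 */
    while (!atomic_compare_exchange_weak(&available, &expected, 0))
        expected = 1;              /* CAS overwrites expected on failure */
}

void release(void) {
    atomic_store(&available, 1);   /* make the lock available again */
}
```

A process would then bracket its critical section with acquire() and release(); the spinning in acquire() is why this lock is called a spinlock.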

Semaphore

• Synchronization tool that provides more sophisticated ways (than Mutex locks) for
processes to synchronize their activities.
• Semaphore S – integer variable
• Can only be accessed via two indivisible (atomic) operations
o wait() and signal()
▪ Originally called P() and V()
• Definition of the wait() operation

wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

• Definition of the signal() operation

signal(S) {
    S++;
}

• Counting semaphore – integer value can range over an unrestricted domain
• Binary semaphore – integer value can range only between 0 and 1
o Same as a mutex lock
• Can implement a counting semaphore S as a binary semaphore
• With semaphores we can solve various synchronization problems

Semaphore Implementation

• Must guarantee that no two processes can execute the wait() and signal() on the same
semaphore at the same time
• Thus, the implementation becomes the critical section problem where the wait and signal
code are placed in the critical section
• Could now have busy waiting in critical section implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical sections and therefore this is not
a good solution

Semaphore Implementation with no Busy waiting

• With each semaphore there is an associated waiting queue


• Each entry in a waiting queue has two data items:
o Value (of type integer)
o Pointer to next record in the list
• Two operations:
o block – place the process invoking the operation on the appropriate waiting queue
o wakeup – remove one of processes in the waiting queue and place it in the ready
queue
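The block() and wakeup() operations described above are kernel internals; the sketch below substitutes a pthread condition variable to play the role of the waiting queue, so that waiters sleep instead of spinning. All names here are illustrative, not the textbook's implementation.

```c
#include <pthread.h>

/* A no-busy-wait semaphore sketch: pthread_cond_wait acts as block(),
   pthread_cond_signal as wakeup() */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  cv;   /* stands in for the waiting queue */
} sem_sketch;

void sem_init_sketch(sem_sketch *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void sem_wait_sketch(sem_sketch *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->cv, &s->m);   /* block(): sleep, no spin */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sem_signal_sketch(sem_sketch *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);            /* wakeup(): wake one waiter */
    pthread_mutex_unlock(&s->m);
}
```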

Problems with Semaphores

• Incorrect use of semaphore operations:


o signal(mutex) …. wait(mutex)
o wait(mutex) … wait(mutex)
o Omitting of wait (mutex) and/or signal (mutex)
• These – and others – are examples of what can occur when semaphores and other
synchronization tools are used incorrectly.

Monitors

• A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
• Abstract data type, internal variables only accessible by code within the procedure
• Only one process may be active within the monitor at a time
• Pseudocode syntax of a monitor:

monitor monitor-name
{
// shared variable declarations
function P1 (…) { …. }
function P2 (…) { …. }
function Pn (…) {……}
initialization code (…) { … }
}

Single Resource allocation

• Allocate a single resource among competing processes using priority numbers that
specify the maximum time a process plans to use the resource

R.acquire(t);
...
access the resource;
...
R.release();

• Where R is an instance of type ResourceAllocator

Liveness

• Processes may have to wait indefinitely while trying to acquire a synchronization tool
such as a mutex lock or semaphore.
• Waiting indefinitely violates the progress and bounded-waiting criteria discussed at the
beginning of this chapter.
• Liveness refers to a set of properties that a system must satisfy to ensure processes make
progress.
• Indefinite waiting is an example of a liveness failure.

• Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1

P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);

• Consider if P0 executes wait(S) and P1 executes wait(Q). When P0 executes wait(Q), it must
wait until P1 executes signal(Q)
• However, P1 is waiting until P0 executes signal(S).
• Since these signal() operations will never be executed, P0 and P1 are deadlocked.

Other forms of deadlock:

o Starvation – indefinite blocking
▪ A process may never be removed from the semaphore queue in which it is
suspended
o Priority Inversion – scheduling problem when a lower-priority process holds a
lock needed by a higher-priority process
▪ Solved via the priority-inheritance protocol

Classical Problems of Synchronization

• Classical problems used to test newly-proposed synchronization schemes


o Bounded-Buffer Problem
o Readers and Writers Problem
o Dining-Philosophers Problem

Bounded-Buffer Problem

• n buffers, each can hold one item


• Semaphore mutex initialized to the value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value n
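A sketch of the bounded-buffer structure using exactly the three semaphores listed above (POSIX semaphores stand in for the abstract ones; buffer size and function names are illustrative):

```c
#include <semaphore.h>

#define NSLOTS 8
static int buffer[NSLOTS];
static int in = 0, out = 0;
static sem_t mutex_s;   /* binary, init 1: protects buffer indices */
static sem_t full_s;    /* init 0: counts filled slots */
static sem_t empty_s;   /* init NSLOTS: counts empty slots */

void bb_init(void) {
    sem_init(&mutex_s, 0, 1);
    sem_init(&full_s,  0, 0);
    sem_init(&empty_s, 0, NSLOTS);
}

void produce(int item) {
    sem_wait(&empty_s);         /* wait for an empty slot */
    sem_wait(&mutex_s);
    buffer[in] = item;
    in = (in + 1) % NSLOTS;
    sem_post(&mutex_s);
    sem_post(&full_s);          /* one more filled slot */
}

int consume(void) {
    sem_wait(&full_s);          /* wait for a filled slot */
    sem_wait(&mutex_s);
    int item = buffer[out];
    out = (out + 1) % NSLOTS;
    sem_post(&mutex_s);
    sem_post(&full_s == 0 ? &full_s : &empty_s);  /* one more empty slot */
    return item;
}
```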

Readers-Writers Problem

• A data set is shared among a number of concurrent processes


o Readers – only read the data set; they do not perform any updates
o Writers – can both read and write
• Problem – allow multiple readers to read at the same time
o Only one single writer can access the shared data at the same time
• Several variations of how readers and writers are considered – all involve some form of
priorities
• Shared Data
o Data set

o Semaphore rw_mutex initialized to 1
o Semaphore mutex initialized to 1
o Integer read_count initialized to 0
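The classic first readers-writers solution can be sketched with exactly the shared data listed above (rw_mutex, mutex, read_count); the entry/exit function names are illustrative.

```c
#include <semaphore.h>

static sem_t rw_mutex;     /* writer exclusion, init 1 */
static sem_t mutex;        /* protects read_count, init 1 */
static int read_count = 0;

void rw_init(void) {
    sem_init(&rw_mutex, 0, 1);
    sem_init(&mutex, 0, 1);
}

void reader_enter(void) {
    sem_wait(&mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);   /* first reader locks out writers */
    sem_post(&mutex);
}

void reader_exit(void) {
    sem_wait(&mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);   /* last reader lets writers in */
    sem_post(&mutex);
}

void writer_enter(void) { sem_wait(&rw_mutex); }
void writer_exit(void)  { sem_post(&rw_mutex); }
```

Note how this solution favors readers: as long as some reader is active, a waiting writer can be starved, which is the problem the variations below address.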

Readers-Writers Problem Variations

• The solution on the previous slide can result in a situation where a writer process never
writes. It is referred to as the “First readers-writers” problem.
• The “Second readers-writers” problem is a variation of the first that states:
o Once a writer is ready to write, no “newly arrived reader” is allowed to read.
• Both the first and second may result in starvation, leading to even more variations
• Problem is solved on some systems by kernel providing reader-writer locks
• Problem is solved on some systems by kernel providing reader-writer locks

Dining-Philosophers Problem

• N philosophers sit at a round table with a bowl of rice in the middle.


• They spend their lives alternating thinking and eating.
• They do not interact with their neighbors.
• Occasionally try to pick up 2 chopsticks (one at a time) to eat from bowl
▪ Need both to eat, then release both when done
• In the case of 5 philosophers, the shared data
▪ Bowl of rice (data set)
▪ Semaphore chopstick [5] initialized to 1
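A naive sketch of the shared data above, with POSIX semaphores as the chopstick array; picking up left then right can deadlock if all five philosophers sit down at once (a circular wait), which is why this is a classic test problem. Function names are illustrative.

```c
#include <semaphore.h>

#define PHILOS 5
static sem_t chopstick[PHILOS];   /* each initialized to 1 */

void dp_init(void) {
    for (int i = 0; i < PHILOS; i++)
        sem_init(&chopstick[i], 0, 1);
}

/* naive: left chopstick first, then right -- can deadlock */
void pick_up(int i) {
    sem_wait(&chopstick[i]);
    sem_wait(&chopstick[(i + 1) % PHILOS]);
}

void put_down(int i) {
    sem_post(&chopstick[i]);
    sem_post(&chopstick[(i + 1) % PHILOS]);
}
```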

Kernel Synchronization - Windows

• Uses interrupt masks to protect access to global resources on uniprocessor systems


• Uses spinlocks on multiprocessor systems
o Spinlocking-thread will never be preempted
• Also provides dispatcher objects to user-land, which may act as mutexes, semaphores,
events, and timers
o Events
▪ An event acts much like a condition variable
o Timers notify one or more threads when the time has expired
o Dispatcher objects are either in a signaled state (object available) or a
non-signaled state (thread will block)
• Mutex dispatcher object

Linux Synchronization

• Linux:
o Prior to kernel Version 2.6, disables interrupts to implement short critical sections
o Version 2.6 and later, fully preemptive
• Linux provides:
o Semaphores
o Atomic integers
o Spinlocks
o Reader-writer versions of both
• On single-CPU system, spinlocks replaced by enabling and disabling kernel preemption

POSIX Synchronization

• POSIX API provides


o mutex locks
o semaphores
o condition variables
• Widely used on UNIX, Linux, and macOS
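A minimal example of the Pthreads mutex API listed above (the calls are real POSIX names; the counter and function name are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

void safe_increment(void) {
    pthread_mutex_lock(&m);    /* entry section */
    shared++;                  /* critical section */
    pthread_mutex_unlock(&m);  /* exit section */
}
```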

POSIX Semaphores

• POSIX provides two versions – named and unnamed.


• Named semaphores can be used by unrelated processes, unnamed cannot.
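The contrast can be sketched as follows; the calls are real POSIX APIs, but the semaphore name "/demo_sem" and both function names are made up for this illustration.

```c
#include <semaphore.h>
#include <fcntl.h>

/* Unnamed semaphore: lives in this process's memory */
int unnamed_example(void) {
    sem_t s;
    if (sem_init(&s, 0, 1) != 0)   /* pshared=0: threads only */
        return -1;
    sem_wait(&s);
    /* ... critical section ... */
    sem_post(&s);
    return sem_destroy(&s);
}

/* Named semaphore: unrelated processes can open it by name */
int named_example(void) {
    sem_t *s = sem_open("/demo_sem", O_CREAT, 0644, 1);
    if (s == SEM_FAILED)
        return -1;
    sem_wait(s);
    /* ... critical section ... */
    sem_post(s);
    sem_close(s);
    return sem_unlink("/demo_sem");
}
```

To share an unnamed semaphore between processes, it must instead be placed in a region of shared memory with pshared set to 1.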

Java Synchronization

• Java provides rich set of synchronization features:


o Java monitors
o Reentrant locks
o Semaphores
o Condition variables

Java Monitors

• Every Java object has associated with it a single lock.
• If a method is declared as synchronized, a calling thread must own the lock for the
object.
• If the lock is owned by another thread, the calling thread must wait for the lock until it is
released.
• Locks are released when the owning thread exits the synchronized method.

• A thread that tries to acquire an unavailable lock is placed in the object’s entry set:

• Similarly, each object also has a wait set.


• When a thread calls wait():
1. It releases the lock for the object
2. The state of the thread is set to blocked
3. The thread is placed in the wait set for the object

• A thread typically calls wait() when it is waiting for a condition to become true.
• How does a thread get notified?
• When a thread calls notify():

1. An arbitrary thread T is selected from the wait set


2. T is moved from the wait set to the entry set
3. The state of T is set from blocked to runnable.

• T can now compete for the lock to check if the condition it was waiting for is now true.

Alternative Approaches

• Transactional Memory
• OpenMP
• Functional Programming Languages

Transactional Memory

Functional Programming Languages

• Functional programming languages offer a different paradigm than procedural languages
in that they do not maintain state.
• Variables are treated as immutable and cannot change state once they have been assigned
a value.
• There is increasing interest in functional languages such as Erlang and Scala for their
approach in handling data races.

In a multiprogramming environment, several threads may compete for a finite number of
resources. A thread requests resources; if the resources are not available at that time, the thread
enters a waiting state. Sometimes, a waiting thread can never again change state, because the
resources it has requested are held by other waiting threads. This situation is called a deadlock.
We discussed this issue briefly in Module 3.1 as a form of liveness failure. There, we defined

deadlock as a situation in which every process in a set of processes is waiting for an event that
can be caused only by another process in the set.

System Model

• A system consists of resources
• Resource types R1, R2, …, Rm
o CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as follows:
o request
o use
o release

Deadlock with Semaphores

• Data:
o A semaphore S1 initialized to 1
o A semaphore S2 initialized to 1
• Two processes P1 and P2
• P1:
wait(S1)
wait(S2)
• P2:
wait(S2)
wait(S1)

Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously.

• Mutual exclusion: only one process at a time can use a resource


• Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
• No preemption: a resource can be released only voluntarily by the process holding it
after that process has completed its task
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2,
…, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that
is held by P0.

Resource-Allocation Graph

A set of vertices V and a set of edges E.

• V is partitioned into two types:

o P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi

Resource allocation graph example

• One instance of R1
• Two instances of R2
• One instance of R3
• Three instances of R4
• T1 holds one instance of R2 and is waiting for an instance of R1
• T2 holds one instance of R1, one instance of R2, and is waiting for an instance of R3
• T3 holds one instance of R3

Resource Allocation Graph Example

Resource Allocation Graph with a Deadlock

Graph with a Cycle But no Deadlock

Basic Facts

• If the graph contains no cycles ⇒ no deadlock
• If the graph contains a cycle ⇒
o If only one instance per resource type, then deadlock
o If several instances per resource type, possibility of deadlock

Methods for Handling Deadlocks

• Ensure that the system will never enter a deadlock state:


o Deadlock prevention
o Deadlock avoidance
• Allow the system to enter a deadlock state and then recover
• Ignore the problem and pretend that deadlocks never occur in the system.

Deadlock Prevention

Invalidate one of the four necessary conditions for deadlock:

• Mutual Exclusion – not required for sharable resources (e.g., read-only files); must hold
for non-sharable resources
• Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources
o Require process to request and be allocated all its resources before it begins
execution, or allow the process to request resources only when the process has
none allocated to it.
o Low resource utilization; starvation possible
• No Preemption:
o If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released
o Preempted resources are added to the list of resources for which the process is
waiting
o The process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting
• Circular Wait:
o Impose a total ordering of all resource types, and require that each process
requests resources in increasing order of enumeration
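One common realization of the "increasing order of enumeration" rule is to always acquire a pair of locks in a fixed (here, address) order, so no circular wait can form. The helper below is a hypothetical sketch, not from the text.

```c
#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed total order (by address), regardless
   of the order the caller names them -- prevents circular wait */
void lock_pair_ordered(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) {
        pthread_mutex_t *t = a; a = b; b = t;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Two threads calling lock_pair_ordered(&x, &y) and lock_pair_ordered(&y, &x) take the locks in the same underlying order, so the deadlock from the earlier semaphore example cannot occur.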

Task 1: Process Synchronization

Instruction: Define the following key terms. (20 Points)

Note:

• Please refrain from copying definitions directly from the internet. Read and rephrase
your source; then you may include it as your term definition.
• Answer this activity individually.
• One point for each correct answer that is not copied from the internet.

Terms Related to Process Synchronization (1 point per correct answer).

1. Process synchronization
2. Cooperating process
3. Race Condition
4. Critical section problem
5. Mutual Exclusion
6. Progress
7. Bounded Waiting
8. Memory Barrier
9. Memory model
10. Atomic Variables
11. Mutex Locks
12. Semaphore
13. Monitors
14. Liveness
15. Deadlock
16. Starvation
17. Aging
18. Priority Inversion
19. Readers-Writers Problem
20. Dining-Philosophers Problem

Task 2: Safe and Unsafe State

Instruction: Define other possible safe state for Example 1 and Example 2: (40 points)

Example 1:

• No. of Process: 5 (P0 to P4)


• Resources-Instances: A-3, B-14, C-12, D-12

Example 2:

• No. of Process: 5 (P0 to P4)


• Resources-Instances: A-10, B-5, C-7

Instructions:

• Aside from the derived safe state, you should identify other safe states.
• Use Example 1 and Example 2 as given.
• Provide at least one safe state aside from the defined safe state in the slide presentation.
• Prove that the state is a Safe State by providing a detailed PowerPoint Simulation for
Example 1 and Example 2.
• The PowerPoint for submission should contain your simulation for both Example 1 and
Example 2
• Submit PowerPoint Via Week 5_6 under your created folder.
o Use the file name: LASTNAME_FIRSTNAME_Task_2.ppt
• Submit the ppt together with this activity sheet via created folder.
• Make sure to make a detailed simulation
o Step by step process on deriving the Safe State for each given example

Pointing System
20 points - Correct, Complete
18 points - Correct, Incomplete (1 to 2 steps missing)
16 points - Correct, Incomplete (3 to 4 steps missing)
14 points - Correct, Incomplete ( 5 to 6 steps missing)
12 points - Correct, Incomplete (Too many steps missing)
10 points - Incorrect (1 incorrect item)
8 points - Incorrect (2 incorrect items)
6 points - Incorrect (too many incorrect items)
0 points - No submission

LESSON SUMMARY

• A race condition occurs when processes have concurrent access to shared data and the
final result depends on the particular order in which concurrent accesses occur. Race
conditions can result in corrupted values of shared data.
• A critical section is a section of code where shared data may be manipulated and a
possible race condition may occur. The critical-section problem is to design a protocol
whereby processes can synchronize their activity to cooperatively share data.
• A solution to the critical section problem must satisfy the following three requirements:
(1) mutual exclusion, (2) progress, and (3) bounded waiting.
• Mutual exclusion ensures that only one process at a time is active in its critical section.
Progress ensures that programs will cooperatively determine what process will next enter
its critical section. Bounded waiting limits how much time a program will wait before it
can enter its critical section.
• Software solutions to the critical-section problem, such as Peterson’s solution, do not
work well on modern computer architectures.
• Hardware support for the critical-section problem includes memory barriers; hardware
instructions, such as the compare-and-swap instruction; and atomic variables.
• A mutex lock provides mutual exclusion by requiring that a process acquire a lock before
entering a critical section and release the lock on exiting the critical section.
• Semaphores, like mutex locks, can be used to provide mutual exclusion. However,
whereas a mutex lock has a binary value that indicates if the lock is available or not, a
semaphore has an integer value and can, therefore, be used to solve a variety of
synchronization problems.
• A monitor is an abstract data type that provides a high-level form of process
synchronization. A monitor uses condition variables that allow processes to wait for
certain conditions to become true and to signal one another when conditions have been
set to true.
• Solutions to the critical-section problem may suffer from liveness problems, including
deadlock.
• The various tools that can be used to solve the critical-section problem as well as to
synchronize the activity of processes can be evaluated under varying levels of contention.
Some tools work better under certain contention loads than others.
• Classic problems of process synchronization include the bounded-buffer, readers–writers,
and dining-philosophers problems. Solutions to these problems can be developed using
the tools presented, including mutex locks, semaphores, monitors, and condition
variables.
• Windows uses dispatcher objects as well as events to implement process synchronization
tools.
• Linux uses a variety of approaches to protect against race conditions, including atomic
variables, spinlocks, and mutex locks.
• The POSIX API provides mutex locks, semaphores, and condition variables. POSIX
provides two forms of semaphores: named and unnamed. Several unrelated processes can
easily access the same-named semaphore by simply referring to its name. Unnamed
semaphores cannot be shared as easily, and require placing the semaphore in a region of
shared memory.

• Java has a rich library and API for synchronization. Available tools include monitors
(which are provided at the language level) as well as reentrant locks, semaphores, and
condition variables (which are supported by the API).
• Alternative approaches to solving the critical-section problem include transactional
memory, OpenMP, and functional languages. Functional languages are particularly
intriguing, as they offer a different programming paradigm from procedural languages.
Unlike procedural languages, functional languages do not maintain state and therefore are
generally immune from race conditions and critical sections.
