CCOPSYSL-OPERATING SYSTEMS
Activity Sheet Week 5 and Week 6
Name: (Last Name, First Name) Date: (Date of Submission)
(1)
(2)
(3)
Program/Section: Instructor: Mr. Reynaldo E. Castillo
Topic: Concurrency: Process Synchronization and Deadlocks
Instructions:
1. Fill out the needed information above.
2. This is an individual submission.
3. Study and perform the tasks below.
4. Perform the embedded instructions and answer the given questions below.
5. Save your activity sheet as a PDF with the prescribed file name.
6. Submit via:
o Student Submission
▪ Week 5_6
• LastName_FirstName (create this folder)
o PDF (Activity Sheet Week 5 and Week 6, submit until May 13)
o PPT – Task 2 output
7. All screenshots should reflect the step you are performing and your student
information. See the sample below:
List of Activities
At the end of this activity sheet, the student should be able to:
Reynaldo E. Castillo
pecastillo@nu-laguna.edu.ph
• Analyze the tradeoffs inherent in operating system design
• Learn the mechanisms of OS concurrency
• Learn the mechanisms involved in process synchronization and deadlock management
A system typically consists of several (perhaps hundreds or even thousands) of threads running
either concurrently or in parallel. Threads often share user data. Meanwhile, the operating system
continuously updates various data structures to support multiple threads. A race condition exists
when access to shared data is not controlled, possibly resulting in corrupt data values.
Process synchronization involves using tools that control access to shared data to avoid race
conditions. These tools must be used carefully, as their incorrect use can result in poor system
performance, including deadlock.
Synchronization Tools
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space (that is, both code
and data) or be allowed to share data only through shared memory or message passing.
Concurrent access to shared
data may result in data inconsistency, however. In this module, we discuss various mechanisms
to ensure the orderly execution of cooperating processes that share a logical address space, so
that data consistency is maintained.
Race Condition
Race condition - a situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which the
access takes place.
• To guard against the race condition, we need to ensure that only one process at a time can
be manipulating the variable count. To make such a guarantee, we require that the
processes be
synchronized in some way.
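The lost update behind a race condition can be made concrete. The sketch below (in Python, since the slides use pseudocode) simulates two processes executing count++ as its three machine steps — load, add, store — under one unlucky interleaving; the interleaving and function names are illustrative, not real concurrency:

```python
# Single-threaded simulation of the classic lost-update race on a shared
# counter. Each "process" performs count++ as three machine steps.

def interleaved_increments(count):
    """Two processes increment `count`, interleaved at the machine-step level."""
    register1 = count          # P0: load count
    register2 = count          # P1: load count (before P0 stores!)
    register1 = register1 + 1  # P0: add
    register2 = register2 + 1  # P1: add
    count = register1          # P0: store
    count = register2          # P1: store -- overwrites P0's update
    return count

def serialized_increments(count):
    """The same two increments, run one after the other (synchronized)."""
    count = count + 1  # P0 completes entirely
    count = count + 1  # then P1 runs
    return count
```

Starting from count = 5, the interleaved run yields 6 instead of 7: one increment is lost, exactly the corruption the race condition describes.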
• Each process must ask permission to enter its critical section in the entry section, may
follow the critical section with an exit section, and then executes the remainder section
Interrupt-based Solution
Software Solution 1
Algorithm for Process Pi
Peterson’s Solution
Algorithm for Process Pi
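The algorithm itself is not reproduced on this sheet, so here is a sketch of Peterson's entry and exit sections in Python. The boolean return value stands in for the busy-wait loop of the real algorithm; treat this as an illustration of the flag/turn protocol for two processes, not production code:

```python
# Sketch of Peterson's solution for two processes i and j = 1 - i.
# flag[i] means process i is ready to enter its critical section;
# turn breaks ties when both are ready.

flag = [False, False]
turn = 0

def enter_region(i):
    """Entry section for process i; True means it may enter immediately.
    Real code busy-waits instead: while flag[j] and turn == j: pass"""
    global turn
    j = 1 - i
    flag[i] = True   # I want to enter
    turn = j         # but yield priority if the other also wants to
    return not (flag[j] and turn == j)

def leave_region(i):
    """Exit section for process i."""
    flag[i] = False
```

With no contention a process enters at once; if both are ready, the process favored by turn enters and the other waits, which is how mutual exclusion and bounded waiting are achieved.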
Memory Barrier
• To ensure that Peterson’s solution will work correctly on modern computer architectures,
we must use a memory barrier.
• A memory model is the set of memory guarantees a computer architecture makes to
application programs.
• Memory models may be either:
• Strongly ordered – where a memory modification of one processor is immediately
visible to all other processors.
• Weakly ordered – where a memory modification of one processor may not be
immediately visible to all other processors.
• A memory barrier is an instruction that forces any change in memory to be propagated
(made visible) to all other processors.
• When a memory barrier instruction is performed, the system ensures that all loads and
stores are completed before any subsequent load or store operations are performed.
• Therefore, even if instructions were reordered, the memory barrier ensures that the store
operations are completed in memory and visible to other processors before future load or
store operations are performed.
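The effect of a missing barrier can be illustrated with a hypothetical simulation (the function names and the dictionary standing in for shared memory are ours). It shows a reader observing the flag before the data when the two stores are reordered, which is exactly what a barrier between them prevents:

```python
# Simulation of publishing data with a flag on a weakly ordered machine.
# If the stores are reordered, a reader can see flag == True with stale data.

shared = {"data": 0, "flag": False}

def writer_with_barrier(value):
    shared["data"] = value   # store data
    # a memory barrier here forces data to be visible before the flag
    shared["flag"] = True    # store flag

def writer_reordered(value):
    shared["flag"] = True    # stores reordered: the flag becomes visible first
    snapshot = reader()      # another processor happens to read right now
    shared["data"] = value
    return snapshot

def reader():
    """Return the data only if the flag says it is ready."""
    return shared["data"] if shared["flag"] else None
```

The reordered writer lets the reader obtain stale data (the old value 0) even though the flag was already set; with the barrier in place the data is always visible first.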
Synchronization Hardware
• Many systems provide hardware support for implementing the critical section code.
• Uniprocessors – could disable interrupts
o Currently running code would execute without preemption
o Generally too inefficient on multiprocessor systems
▪ Operating systems using this not broadly scalable
• We will look at three forms of hardware support:
1. Memory barriers
2. Hardware instructions
3. Atomic variables
Hardware Instructions
Atomic Variables
• Typically, instructions such as compare-and-swap are used as building blocks for other
synchronization tools.
• One tool is an atomic variable that provides atomic (uninterruptible) updates on basic
data types such as integers and booleans.
• For example:
o Let sequence be an atomic variable
o Let increment() be operation on the atomic variable sequence
o The Command:
increment(&sequence);
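A minimal atomic-variable sketch in Python, with threading.Lock standing in for the hardware's atomic instruction (the class name AtomicInteger is our own, not a library API):

```python
# Sketch of an atomic variable in the spirit of increment(&sequence).

import threading

class AtomicInteger:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for compare-and-swap

    def increment(self):
        """Atomically add 1 and return the new value."""
        with self._lock:
            self._value += 1
            return self._value

    def get(self):
        with self._lock:
            return self._value

def demo(threads=4, per_thread=1000):
    """Increment a shared AtomicInteger from several threads; return the final value."""
    seq = AtomicInteger()
    workers = [threading.Thread(target=lambda: [seq.increment() for _ in range(per_thread)])
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return seq.get()
```

Because every update is uninterruptible, four threads doing 1000 increments each always end at exactly 4000 — no lost updates.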
Mutex Locks
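No mutex implementation appears on this sheet, so here is a sketch of a spinlock-style mutex built on a simulated test_and_set. In real hardware test_and_set is a single atomic instruction; the plain Python method below only illustrates its logic:

```python
# Sketch of a mutex (spinlock) built from a simulated test_and_set instruction.
# acquire() spins until the old value of the lock flag was False.

class SpinLock:
    def __init__(self):
        self.locked = False

    def _test_and_set(self):
        """Return the old value and set the flag (atomic in real hardware)."""
        old = self.locked
        self.locked = True
        return old

    def acquire(self):
        while self._test_and_set():
            pass  # busy wait (spin) until the lock was free

    def release(self):
        self.locked = False
```

A process calls acquire() before its critical section and release() after, which is the mutex-lock protocol the summary describes.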
Semaphore
• Synchronization tool that provides more sophisticated ways (than Mutex locks) for
processes to synchronize their activities.
• Semaphore S – integer variable
• Can only be accessed via two indivisible (atomic) operations
o wait() and signal()
▪ Originally called P() and V()
• Definition of the wait() operation
wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}
• Definition of the signal() operation
signal(S) {
    S++;
}
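The busy wait above can be avoided by suspending waiters on a list, as the implementation discussion below notes. A sketch of that queued variant, where process names and the string return values are our own simulation devices for block() and wakeup():

```python
# Sketch of wait()/signal() with a waiting queue instead of a busy wait:
# a process whose wait() finds the value non-positive is suspended and
# placed on a list, and signal() wakes one suspended process.

from collections import deque

class Semaphore:
    def __init__(self, value):
        self.value = value
        self.waiting = deque()   # suspended processes, FIFO

    def wait(self, process):
        self.value -= 1
        if self.value < 0:
            self.waiting.append(process)   # block(): suspend the caller
            return "blocked"
        return "running"

    def signal(self):
        self.value += 1
        if self.waiting:
            return self.waiting.popleft()  # wakeup(P): resume one waiter
        return None
```

A negative value records how many processes are suspended, so signal() knows whether to wake someone or simply raise the count.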
Semaphore Implementation
• Must guarantee that no two processes can execute the wait() and signal() on the same
semaphore at the same time
• Thus, the implementation becomes the critical section problem where the wait and signal
code are placed in the critical section
• Could now have busy waiting in critical section implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical sections and therefore this is not
a good solution
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
• Abstract data type, internal variables only accessible by code within the procedure
• Only one process may be active within the monitor at a time
• Pseudocode syntax of a monitor:
monitor monitor-name
{
    // shared variable declarations
    function P1 (…) { … }
    function P2 (…) { … }
    function Pn (…) { … }
    initialization code (…) { … }
}
• Allocate a single resource among competing processes using priority numbers that
specify the maximum time a process plans to use the resource
R.acquire(t);
...
access the resource;
...
R.release();
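The acquire/release pattern above can be sketched as a monitor-like class. The waiting structure, the names, and the rule that release hands the resource to the waiter with the smallest declared time t are our illustrative reading of the allocator described above:

```python
# Sketch of the single-resource allocator monitor: acquire(t) admits the
# caller if the resource is free, otherwise records it as waiting with its
# declared usage time t; release() hands the resource to the waiter that
# declared the smallest t.

import heapq

class ResourceAllocator:
    def __init__(self):
        self.busy = False
        self._waiting = []   # priority queue of (t, process) pairs

    def acquire(self, t, process):
        """True if the caller got the resource; False means it must wait."""
        if self.busy:
            heapq.heappush(self._waiting, (t, process))  # x.wait(t)
            return False
        self.busy = True
        return True

    def release(self):
        """Pass the resource to the smallest-t waiter, or mark it free."""
        if self._waiting:
            t, process = heapq.heappop(self._waiting)    # x.signal()
            return process
        self.busy = False
        return None
```

Note the monitor discipline this models: only one caller manipulates busy and the queue at a time, and the resource passes directly from releaser to the chosen waiter.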
Liveness
• Processes may have to wait indefinitely while trying to acquire a synchronization tool
such as a mutex lock or semaphore.
• Waiting indefinitely violates the progress and bounded-waiting criteria discussed at the
beginning of this chapter.
• Liveness refers to a set of properties that a system must satisfy to ensure processes make
progress.
• Indefinite waiting is an example of a liveness failure.
• Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);
• Consider if P0 executes wait(S) and P1 executes wait(Q). When P0 executes wait(Q), it
must wait until P1 executes signal(Q)
• However, P1 is waiting until P0 executes signal(S).
• Since these signal() operations will never be executed, P0 and P1 are deadlocked.
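The deadlock above can be reproduced in a small simulation. The round-robin scheduler, step budget, and return convention below are our own devices; the point is that opposite acquisition orders leave both processes blocked forever, while a common order lets both finish:

```python
# Simulation of P0 and P1 acquiring semaphores S and Q in given orders.
# A process that has acquired everything enters its critical section and
# then signals (frees) all it holds; a process whose wait() finds the
# semaphore taken simply retries next round.

def run(order_p0, order_p1):
    """Return the set of processes that never finish (the deadlocked set)."""
    free = {"S": True, "Q": True}
    plans = {"P0": list(order_p0), "P1": list(order_p1)}
    held = {"P0": [], "P1": []}
    done = set()
    for _ in range(10):                     # plenty of scheduling rounds
        for p in ("P0", "P1"):
            if p in done:
                continue
            if not plans[p]:                # holds everything it needs:
                for sem in held[p]:         # critical section, then signal()
                    free[sem] = True
                held[p] = []
                done.add(p)
                continue
            sem = plans[p][0]
            if free[sem]:                   # wait() succeeds
                free[sem] = False
                plans[p].pop(0)
                held[p].append(sem)
            # else: wait() blocks; retry next round
    return {"P0", "P1"} - done
```

With the orders from the slide, P0 holds S waiting for Q while P1 holds Q waiting for S, so both remain in the returned set; making both acquire S before Q empties it.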
Bounded-Buffer Problem
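The slides for this problem are not reproduced here, so the classic structure is sketched below: empty and full counting semaphores around a ring buffer of n slots. The semaphores are modeled as plain counters, and a False/None return stands in for an actual blocking wait:

```python
# Sketch of the bounded-buffer (producer-consumer) problem.
# empty counts free slots (initialized to n); full counts items (initially 0).

class BoundedBuffer:
    def __init__(self, n):
        self.n = n
        self.buffer = [None] * n
        self.empty = n      # semaphore empty, initialized to n
        self.full = 0       # semaphore full, initialized to 0
        self.inp = 0        # next slot to fill
        self.out = 0        # next slot to drain

    def produce(self, item):
        if self.empty == 0:
            return False            # producer would block on wait(empty)
        self.empty -= 1             # wait(empty)
        self.buffer[self.inp] = item
        self.inp = (self.inp + 1) % self.n
        self.full += 1              # signal(full)
        return True

    def consume(self):
        if self.full == 0:
            return None             # consumer would block on wait(full)
        self.full -= 1              # wait(full)
        item = self.buffer[self.out]
        self.out = (self.out + 1) % self.n
        self.empty += 1             # signal(empty)
        return item
```

The two counters keep the producer from overrunning a full buffer and the consumer from draining an empty one, which is the whole point of the problem.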
Readers-Writers Problem
o Semaphore rw_mutex initialized to 1
o Semaphore mutex initialized to 1
o Integer read_count initialized to 0
• The solution on the previous slide can result in a situation where a writer process never
writes. This is referred to as the “first readers-writers” problem.
• The “second readers-writers” problem is a variation of the first that states:
o Once a writer is ready to write, no “newly arrived reader” is allowed to read.
• Both the first and second versions may result in starvation, leading to even more variations.
• The problem is solved on some systems by the kernel providing reader-writer locks.
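The first readers-writers solution referred to above can be sketched with those variables. Counters stand in for the semaphores and a False return stands in for blocking; the structure follows the classic code, where the first reader locks rw_mutex and the last reader releases it:

```python
# Sketch of the first readers-writers solution using rw_mutex and read_count.
# The mutex protecting read_count is implicit in this single-threaded model.

class ReadersWriters:
    def __init__(self):
        self.rw_mutex = 1    # held by a writer, or by the reader group
        self.read_count = 0

    def start_read(self):
        self.read_count += 1
        if self.read_count == 1:       # first reader takes rw_mutex
            if self.rw_mutex == 0:
                self.read_count -= 1
                return False           # a writer is active: reader waits
            self.rw_mutex -= 1         # wait(rw_mutex)
        return True                    # later readers enter freely

    def end_read(self):
        self.read_count -= 1
        if self.read_count == 0:       # last reader out
            self.rw_mutex += 1         # signal(rw_mutex)

    def start_write(self):
        if self.rw_mutex == 0:
            return False               # readers or a writer active: wait
        self.rw_mutex -= 1             # wait(rw_mutex)
        return True

    def end_write(self):
        self.rw_mutex += 1             # signal(rw_mutex)
```

Because later readers never touch rw_mutex, a steady stream of readers can keep a writer waiting forever — the starvation that motivates the second variation.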
Dining-Philosophers Problem
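One standard deadlock-free policy for this problem is sketched below: each philosopher picks up the lower-numbered chopstick first, which breaks the circular wait because the last philosopher reverses direction. The function names and table size are illustrative:

```python
# Chopstick acquisition orders for n philosophers around a table.
# Philosopher i needs chopsticks i and (i + 1) % n.

def pickup_order(i, n=5):
    """Deadlock-free policy: take the lower-numbered chopstick first."""
    left, right = i, (i + 1) % n
    return (min(left, right), max(left, right))

def naive_order(i, n=5):
    """Deadlock-prone policy: always take the left chopstick first."""
    return (i, (i + 1) % n)
```

Under the naive policy every philosopher can grab a left chopstick simultaneously and starve waiting for the right one; under the ordered policy philosophers 0 and n-1 contend for chopstick 0 first, so at least one philosopher can always eat.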
Linux Synchronization
• Linux:
o Prior to kernel version 2.6, disabled interrupts to implement short critical sections
o Version 2.6 and later, fully preemptive
• Linux provides:
o Semaphores
o Atomic integers
o Spinlocks
o Reader-writer versions of both
• On single-CPU system, spinlocks replaced by enabling and disabling kernel preemption
POSIX Synchronization
POSIX Semaphores
Java Synchronization
Java Monitors
• Every Java object has associated with it a single lock.
• If a method is declared as synchronized, a calling thread must own the lock for the
object.
• If the lock is owned by another thread, the calling thread must wait for the lock until it is
released.
• Locks are released when the owning thread exits the synchronized method.
• A thread that tries to acquire an unavailable lock is placed in the object’s entry set:
• A thread typically calls wait() when it is waiting for a condition to become true.
• How does a thread get notified?
• When a thread calls notify(), an arbitrary thread T is selected from the wait set and
moved to the entry set.
• T can now compete for the lock to check if the condition it was waiting for is now true.
Alternative Approaches
• Transactional Memory
• OpenMP
• Functional Programming Languages
Transactional Memory
Deadlock is a situation in which every process in a set of processes is waiting for an event that
can be caused only by another process in the set.
System Model
• Data:
o A semaphore S1 initialized to 1
o A semaphore S2 initialized to 1
• Two processes P1 and P2
• P1:
wait(S1)
wait(S2)
• P2:
wait(S2)
wait(S1)
Deadlock Characterization
Resource-Allocation Graph
o P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi
• One instance of R1
• Two instances of R2
• One instance of R3
• Three instances of R4
• T1 holds one instance of R2 and is waiting for an instance of R1
• T2 holds one instance of R1, one instance of R2, and is waiting for an instance of R3
• T3 holds one instance of R3
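With single-instance resources, a cycle in this graph implies deadlock, and that can be checked mechanically. The sketch below derives wait-for edges from the T1/T2/T3 snapshot above (our reading of it: T1 waits for T2 via R1, T2 waits for T3 via R3) and runs a depth-first search:

```python
# Cycle detection in a wait-for graph: graph maps each thread to the
# threads it is waiting for. A back edge found during DFS means a cycle.

def has_cycle(graph):
    """True if any cycle exists in the directed graph."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:
                return True            # back edge: cycle found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# Wait-for edges from the snapshot in the text: no cycle, so no deadlock.
snapshot = {"T1": ["T2"], "T2": ["T3"], "T3": []}
```

If T3 were also waiting on a resource held by T1, the added edge T3 → T1 would close a cycle and the same check would report a deadlock.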
Resource Allocation Graph with a Deadlock
Basic Facts
Deadlock Prevention
• Mutual Exclusion – not required for sharable resources (e.g., read-only files); must hold
for non-sharable resources
• Hold and Wait – must guarantee that whenever a process requests a resource, it does not
hold any other resources
o Require process to request and be allocated all its resources before it begins
execution, or allow the process to request resources only when the process has
none allocated to it.
o Low resource utilization; starvation possible
• No Preemption:
o If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released
o Preempted resources are added to the list of resources for which the process is
waiting
o The process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting
• Circular Wait:
o Impose a total ordering of all resource types, and require that each process
requests resources in increasing order of enumeration
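The total-ordering rule can be sketched as a helper that refuses out-of-order requests. The enumeration F and the resource names below are hypothetical:

```python
# Circular-wait prevention by total ordering: each resource type gets a
# number F(r), and a process may request r only if F(r) exceeds the number
# of every resource it already holds.

F = {"mutex_a": 1, "disk": 3, "printer": 5}   # hypothetical enumeration

def request_in_order(held, resource):
    """Grant `resource` only if it keeps the held set in increasing order."""
    if any(F[r] >= F[resource] for r in held):
        return False        # would violate the increasing-order rule
    held.append(resource)
    return True
```

Because every process climbs the same ordering, no cycle of processes can each hold a higher-numbered resource while waiting for a lower-numbered one, so circular wait is impossible.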
Task 1: Process Synchronization
Note:
• Please refrain from copying the definition directly from the internet. Read and rephrase
the source of your answer. Then you may include it as your term definition.
• Answer this activity individually.
• One point for each correct answer that is not copied from the internet.
1. Process synchronization
2. Cooperating process
3. Race Condition
4. Critical section problem
5. Mutual Exclusion
6. Progress
7. Bounded Waiting
8. Memory Barrier
9. Memory model
10. Atomic Variables
11. Mutex Locks
12. Semaphore
13. Monitors
14. Liveness
15. Deadlock
16. Starvation
17. Aging
18. Priority Inversion
19. Readers-Writers Problem
20. Dining-Philosophers Problem
Task 2: Safe and Unsafe State
Instruction: Define other possible safe states for Example 1 and Example 2. (40 points)
Example 1:
Example 2:
Instructions:
• Aside from the derived safe state, you should identify other safe states.
• Use Example 1 and Example 2 as given.
• Provide at least one safe state aside from the defined safe state in the slide presentation.
• Prove that the state is a Safe State by providing a detailed PowerPoint simulation for
Example 1 and Example 2.
• The PowerPoint for submission should contain your simulation for both Example 1 and
Example 2
• Submit PowerPoint Via Week 5_6 under your created folder.
o Use the file name: LASTNAME_FIRSTNAME_Task_2.ppt
• Submit the ppt together with this activity sheet via created folder.
• Make sure to make a detailed simulation
o Step by step process on deriving the Safe State for each given example
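For reference while building your simulation, the general safety algorithm can be sketched as follows. The state below is a small hypothetical one, not Example 1 or Example 2 from the slides — you still need to derive those safe states yourself:

```python
# Safety algorithm sketch: a state is safe if some sequence lets every
# process finish. Work starts as the available vector; a process whose
# need fits in work can finish and return its allocation to work.

def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, _ in enumerate(allocation):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None
```

Your PowerPoint simulation should show exactly this reasoning step by step: pick a process whose need fits the available vector, finish it, add its allocation back, and repeat until all processes are done.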
Pointing System
20 points - Correct, Complete
18 points - Correct, Incomplete (1 to 2 steps missing)
16 points - Correct, Incomplete (3 to 4 steps missing)
14 points - Correct, Incomplete (5 to 6 steps missing)
12 points - Correct, Incomplete (Too many steps missing)
10 points - Incorrect (1 incorrect item)
8 points - Incorrect (2 incorrect items)
6 points - Incorrect (too many incorrect items)
0 points - No submission
LESSON SUMMARY
• A race condition occurs when processes have concurrent access to shared data and the
final result depends on the particular order in which concurrent accesses occur. Race
conditions can result in corrupted values of shared data.
• A critical section is a section of code where shared data may be manipulated and a
possible race condition may occur. The critical-section problem is to design a protocol
whereby processes can synchronize their activity to cooperatively share data.
• A solution to the critical section problem must satisfy the following three requirements:
(1) mutual exclusion, (2) progress, and (3) bounded waiting.
• Mutual exclusion ensures that only one process at a time is active in its critical section.
Progress ensures that programs will cooperatively determine what process will next enter
its critical section. Bounded waiting limits how much time a program will wait before it
can enter its critical section.
• Software solutions to the critical-section problem, such as Peterson’s solution, do not
work well on modern computer architectures.
• Hardware support for the critical-section problem includes memory barriers; hardware
instructions, such as the compare-and-swap instruction; and atomic variables.
• A mutex lock provides mutual exclusion by requiring that a process acquire a lock before
entering a critical section and release the lock on exiting the critical section.
• Semaphores, like mutex locks, can be used to provide mutual exclusion. However,
whereas a mutex lock has a binary value that indicates if the lock is available or not, a
semaphore has an integer value and can, therefore, be used to solve a variety of
synchronization problems.
• A monitor is an abstract data type that provides a high-level form of process
synchronization. A monitor uses condition variables that allow processes to wait for
certain conditions to become true and to signal one another when conditions have been
set to true.
• Solutions to the critical-section problem may suffer from liveness problems, including
deadlock.
• The various tools that can be used to solve the critical-section problem as well as to
synchronize the activity of processes can be evaluated under varying levels of contention.
Some tools work better under certain contention loads than others.
• Classic problems of process synchronization include the bounded-buffer, readers–writers,
and dining-philosophers problems. Solutions to these problems can be developed using
the tools presented, including mutex locks, semaphores, monitors, and condition
variables.
• Windows uses dispatcher objects as well as events to implement process synchronization
tools.
• Linux uses a variety of approaches to protect against race conditions, including atomic
variables, spinlocks, and mutex locks.
• The POSIX API provides mutex locks, semaphores, and condition variables. POSIX
provides two forms of semaphores: named and unnamed. Several unrelated processes can
easily access the same-named semaphore by simply referring to its name. Unnamed
semaphores cannot be shared as easily, and require placing the semaphore in a region of
shared memory.
• Java has a rich library and API for synchronization. Available tools include monitors
(which are provided at the language level) as well as reentrant locks, semaphores, and
condition variables (which are supported by the API).
• Alternative approaches to solving the critical-section problem include transactional
memory, OpenMP, and functional languages. Functional languages are particularly
intriguing, as they offer a different programming paradigm from procedural languages.
Unlike procedural languages, functional languages do not maintain state and therefore are
generally immune from race conditions and critical sections.