
Chapter 1: Process Synchronization

1.1 Process Synchronization

In a multi-process or multi-threaded environment, it is crucial to ensure proper synchronization
among processes to avoid race conditions and data inconsistencies. Process synchronization involves
coordinating the execution of processes or threads to ensure mutually exclusive access to shared
resources. In this chapter, we will discuss process synchronization techniques and explore the critical
section problem and its solutions.

1.2 The Critical Section Problem

The critical section problem arises when multiple processes or threads compete for shared resources
or variables. The critical section refers to the portion of the code where the shared resource is
accessed. To prevent race conditions and maintain data integrity, we need to enforce mutual exclusion,
ensuring that only one process can execute its critical section at a time while others wait.

1.3 The Producer Process Code

The producer process is responsible for producing or generating data that will be consumed by other
processes. Here is an example code snippet for a producer process:

while True:
    # Produce an item
    item = produce_item()

    # Acquire the lock before entering the critical section
    acquire_lock()

    # Append the item to the shared buffer
    append_item(item)

    # Release the lock after leaving the critical section
    release_lock()

    # Continue producing more items

1.4 The Consumer Process Code

The consumer process is responsible for consuming or using the data produced by the producer
process. Here is an example code snippet for a consumer process:

while True:
    # Acquire the lock before entering the critical section
    acquire_lock()

    # Retrieve an item from the shared buffer
    item = retrieve_item()

    # Release the lock after leaving the critical section
    release_lock()

    # Consume the item
    consume_item(item)

    # Continue consuming more items
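Putting the two loops together, here is a minimal runnable sketch of the same pattern using Python's threading module. Since produce_item, acquire_lock, append_item, retrieve_item, and consume_item are placeholder names from the snippets above, this sketch substitutes concrete stand-ins: a threading.Lock, a deque as the shared buffer, and integer items. The loops are also bounded so the program terminates.

```python
import threading
from collections import deque

NUM_ITEMS = 100
buffer = deque()                 # shared buffer
buffer_lock = threading.Lock()   # guards every access to the buffer
consumed = []

def producer():
    for i in range(NUM_ITEMS):
        item = i                     # stand-in for produce_item()
        with buffer_lock:            # acquire_lock() ... release_lock()
            buffer.append(item)      # append_item(item)

def consumer():
    while len(consumed) < NUM_ITEMS:
        item = None
        with buffer_lock:
            if buffer:
                item = buffer.popleft()  # retrieve_item()
        if item is not None:
            consumed.append(item)        # consume_item(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(NUM_ITEMS)))  # True
```

Note that, as in the snippet above, the consumer releases the lock before consuming the item: only the buffer access itself needs to be inside the critical section.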

1.5 Solution to the Critical Section Problem

To solve the critical section problem, we need to enforce mutual exclusion, progress, and bounded
waiting. Several algorithms and synchronization primitives have been developed to achieve these
properties. In this section, we will discuss three two-process solutions to the critical section problem.

1.5.1 Algorithm 1: Peterson's Solution

Peterson's solution is a simple and widely used algorithm for two-process synchronization. It
employs two shared variables, turn and flag, to coordinate the execution of processes. The
algorithm is as follows:

Process 0:
while True:
    flag[0] = True
    turn = 1
    while flag[1] and turn == 1:
        pass  # Wait

    # Critical section
    # Perform operations on shared resources

    flag[0] = False

    # Remainder section
    # Perform non-critical operations

Process 1:
while True:
    flag[1] = True
    turn = 0
    while flag[0] and turn == 0:
        pass  # Wait

    # Critical section
    # Perform operations on shared resources

    flag[1] = False

    # Remainder section
    # Perform non-critical operations
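Peterson's algorithm can be exercised directly with two Python threads. The sketch below is illustrative rather than production code: it relies on CPython's global interpreter lock to make the individual reads and writes of flag and turn appear sequentially consistent; on real hardware you would additionally need memory barriers or atomic types.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the busy-waits resolve quickly

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # which process yields when both want to enter
counter = 0            # the shared resource
ITERS = 500

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(ITERS):
        # Entry section
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass  # busy-wait

        # Critical section: without protection, this read-modify-write could lose updates
        counter += 1

        # Exit section
        flag[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 1000: every increment survived
```

Busy-waiting like this wastes CPU time, which is why practical systems prefer blocking primitives such as the semaphores discussed later in this chapter.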

1.5.2 Algorithm 2: Dekker's Solution

Dekker's solution is another algorithm for two-process synchronization. It uses the turn
variable and a flag array to achieve mutual exclusion. The algorithm is as follows:

Process 0:
while True:
    flag[0] = True
    while flag[1]:
        if turn == 1:
            flag[0] = False
            while turn == 1:
                pass  # Wait until it is process 0's turn
            flag[0] = True

    # Critical section
    # Perform operations on shared resources

    turn = 1
    flag[0] = False

    # Remainder section
    # Perform non-critical operations

Process 1 is symmetric: swap the roles of flag[0] and flag[1], and exchange the turn values 0 and 1.

1.5.3 Algorithm 3: Bakery Algorithm


In the Bakery Algorithm, each process sets an entry in the choosing array to indicate that it is in
the middle of picking a ticket, and the number array holds the ticket currently assigned to each
process. Processes enter the critical section in order of their tickets; since two processes can
draw the same ticket, ties are broken in favor of the lower process index. In the two-process
listings below, N = 2.

Process 0:
while True:
    choosing[0] = True
    number[0] = 1 + max(number[0], number[1])
    choosing[0] = False

    for j in range(N):
        while choosing[j]:
            pass  # Wait until process j has chosen its number
        while number[j] != 0 and (number[j] < number[0] or (number[j] == number[0] and j < 0)):
            pass  # Wait if process j has a lower number, or the same number and a lower index

    # Critical section
    # Perform operations on shared resources

    number[0] = 0

    # Remainder section
    # Perform non-critical operations

Process 1:
while True:
    choosing[1] = True
    number[1] = 1 + max(number[0], number[1])
    choosing[1] = False

    for j in range(N):
        while choosing[j]:
            pass  # Wait until process j has chosen its number
        while number[j] != 0 and (number[j] < number[1] or (number[j] == number[1] and j < 1)):
            pass  # Wait if process j has a lower number, or the same number and a lower index

    # Critical section
    # Perform operations on shared resources

    number[1] = 0

    # Remainder section
    # Perform non-critical operations

1.6 N-Process Critical Section Problem


So far, we have discussed the critical section problem and its solutions for two processes. However, in
real-world scenarios, we often encounter systems with more than two processes. The N-process
critical section problem extends the challenge of achieving mutual exclusion, progress, and bounded
waiting to multiple processes.
In the N-process critical section problem, each process must acquire a shared resource while ensuring
that no two processes can access the critical section simultaneously. Solving this problem requires
synchronization techniques that go beyond the two-process solutions discussed earlier.
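One software solution that does scale is Lamport's bakery algorithm, which generalizes directly from the two-process version shown earlier. The sketch below simulates it for N = 4 threads in Python; it relies on CPython's global interpreter lock for the atomicity of individual list reads and writes, and the names bakery_lock and bakery_unlock are chosen here for illustration.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # frequent thread switches keep the busy-waits short

N = 4
choosing = [False] * N
number = [0] * N
counter = 0        # the shared resource
ITERS = 100

def bakery_lock(i):
    # Doorway: announce that we are choosing, then take a ticket
    # one greater than any ticket currently held.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N):
        while choosing[j]:
            pass  # wait until process j has finished choosing
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass  # defer to processes with a smaller (ticket, index) pair

def bakery_unlock(i):
    number[i] = 0

def worker(i):
    global counter
    for _ in range(ITERS):
        bakery_lock(i)
        counter += 1  # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400 = N * ITERS
```

Comparing tuples (number[j], j) < (number[i], i) expresses the lexicographic ordering in one step: lower ticket first, and lower index on a tie.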

1.7 Hardware Solutions for the Critical Section Problem

In addition to software-based solutions, hardware support can play a crucial role in achieving
efficient and reliable mutual exclusion. Hardware solutions leverage specific features and
mechanisms provided by the underlying architecture. Let's explore some of the hardware
solutions commonly employed:

1.7.1 Atomic Instructions

Atomic instructions are low-level hardware instructions that guarantee the atomicity of certain
operations. They allow read-modify-write operations on shared variables, ensuring that no other
process can intervene between the read and write phases. Examples of atomic instructions
include test-and-set, compare-and-swap, fetch-and-add, and load-linked/store-conditional
instructions. Atomic instructions are powerful building blocks for implementing synchronization
primitives efficiently.
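As a rough illustration of how test-and-set yields a lock, the sketch below simulates the instruction in Python. Python has no hardware test-and-set primitive, so a small internal threading.Lock stands in for the atomicity that a real CPU provides in a single instruction; the class and method names are chosen here for illustration.

```python
import threading

class TestAndSetLock:
    """Spinlock built on a (simulated) test-and-set instruction."""

    def __init__(self):
        self._flag = False
        # Real hardware performs test-and-set atomically in one instruction;
        # this internal lock merely stands in for that guarantee.
        self._atomic = threading.Lock()

    def _test_and_set(self):
        with self._atomic:
            old = self._flag
            self._flag = True
            return old          # True means the lock was already held

    def acquire(self):
        while self._test_and_set():
            pass                # spin until we observe the flag as False

    def release(self):
        self._flag = False

# Usage: protect a shared counter with the spinlock
counter = 0
spinlock = TestAndSetLock()

def worker():
    global counter
    for _ in range(1000):
        spinlock.acquire()
        counter += 1            # critical section
        spinlock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every increment was protected
```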

1.7.2 Mutex Hardware Support

Some modern processors provide dedicated hardware support for mutexes, which are synchronization
objects used to protect critical sections. This hardware assistance can significantly enhance
performance by reducing the overhead associated with acquiring and releasing locks. Mutex hardware
support often includes specialized instructions or cache-coherence protocols that enable efficient
mutual exclusion and synchronization.

1.7.3 Barrier Instructions

Barrier instructions are hardware instructions that allow processes to synchronize at specific points in
their execution. These instructions ensure that all processes reach a designated synchronization point
before proceeding further. Barrier instructions are particularly useful in scenarios where multiple
processes need to coordinate their actions collectively. They are commonly employed in parallel
computing and multi-threaded systems.
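Python exposes this idea at the library level as threading.Barrier: each call to wait() blocks until the agreed number of threads has arrived. A short sketch:

```python
import threading

results = []
barrier = threading.Barrier(3)  # releases only once all three threads arrive

def worker(i):
    results.append(f"phase1-{i}")  # pre-barrier work
    barrier.wait()                 # block until every thread reaches this point
    results.append(f"phase2-{i}")  # runs only after all phase-1 work is done

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase-1 entry precedes every phase-2 entry
print(all(r.startswith("phase1") for r in results[:3]))  # True
```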

1.7.4 Inter-Processor Interrupts (IPIs)

Inter-Processor Interrupts (IPIs) enable communication and coordination between processors in a
multiprocessor system. IPIs can be used to send interrupts or signals to other processors, allowing for
inter-processor synchronization. This mechanism is beneficial when processes on different processors
need to coordinate their execution, especially in scenarios where critical sections span multiple
processors.

1.8 Semaphores

Semaphores are synchronization primitives commonly used in operating systems to manage critical
sections and coordinate access to shared resources. Semaphores can be implemented using software or
hardware. They provide a mechanism for processes to signal and block each other to control the
execution flow. Semaphores can be used to enforce mutual exclusion or allow a certain number of
processes to access a shared resource simultaneously.
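For example, Python's threading.Semaphore can cap how many threads use a resource at once. In the sketch below, a counting semaphore initialized to 2 admits at most two of six workers at a time; the bookkeeping variables active and max_active are added only to demonstrate the bound.

```python
import threading
import time

pool = threading.Semaphore(2)   # at most two threads hold the resource at once
guard = threading.Lock()        # protects the bookkeeping counters
active = 0
max_active = 0

def worker():
    global active, max_active
    with pool:                   # wait (P): blocks while two threads are inside
        with guard:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)         # simulate using the shared resource
        with guard:
            active -= 1
        # signal (V) happens automatically when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_active <= 2)  # True: the semaphore never admitted a third thread
```

A semaphore initialized to 1 (a binary semaphore) behaves like a mutex and enforces plain mutual exclusion.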

1.9 Problems with Semaphores

Although semaphores are widely used and powerful synchronization primitives, improper usage
can introduce new problems. The two most common issues are deadlocks and starvation.

1.9.1 Deadlocks

Deadlocks occur when processes or threads enter a state of permanent blocking, unable to proceed
because they are waiting for resources that will never become available. Deadlocks can arise when
multiple processes or threads acquire resources and hold them indefinitely, waiting for other resources
that are currently being held by other processes.

Deadlocks typically involve the following four conditions:


1. Mutual Exclusion: Resources cannot be simultaneously used by multiple processes.
2. Hold and Wait: Processes hold at least one resource while waiting for others.
3. No Preemption: Resources cannot be forcefully taken away from a process.
4. Circular Wait: A circular chain of processes exists, where each process holds a resource that
the next process in the chain is waiting for.
To prevent deadlocks, strategies such as resource allocation ordering, deadlock detection algorithms,
and resource preemption can be employed.
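Of the four conditions, circular wait is often the easiest to break in application code: if every thread acquires locks in the same global order, no cycle of waiting threads can form. A small illustrative sketch (the lock names and thread bodies are hypothetical):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
log = []

# Both threads acquire lock_a before lock_b. Because the order is
# global, the circular-wait condition can never hold, so no deadlock.
def transfer(name):
    with lock_a:        # first lock in the global order
        with lock_b:    # second lock in the global order
            log.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['t1', 't2']: both threads complete
```

Had one thread taken lock_b before lock_a while the other did the reverse, each could end up holding one lock and waiting forever for the other.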

1.9.2 Starvation

Starvation occurs when a process is perpetually denied access to resources or execution due to the
scheduling or resource allocation policies. In a multi-process system, certain processes may be
continuously prioritized over others, causing some processes to be starved and unable to make
progress.
Starvation can be mitigated by employing fair scheduling algorithms and resource allocation policies
that ensure all processes have a fair chance of accessing shared resources.

1.10 Violation of Mutual Exclusion

Violation of mutual exclusion refers to situations where multiple processes or threads simultaneously
access and modify shared resources without proper synchronization. When mutual exclusion is not
enforced, race conditions can occur, leading to data inconsistencies and unexpected results.
To ensure mutual exclusion, synchronization mechanisms such as locks, semaphores, or atomic
operations should be used to allow only one process or thread to access a critical section or shared
resource at a time. By enforcing mutual exclusion, conflicts and data corruption can be avoided.
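As a concrete illustration, the classic lost-update bug is several threads executing counter += 1 concurrently: the statement is a separate read, add, and write, so interleavings can drop increments. Guarding the update with a lock restores mutual exclusion. The names below are illustrative:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with counter_lock:   # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: no updates were lost
```

Removing the with counter_lock: line reintroduces the race condition, and the final count may then fall short of 200000 depending on how the threads interleave.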
