
Concurrent Programming

LECTURES 12 TO 14 NOTES

Compositional Linearizability, the


Non-blocking Property, and Progress
Conditions
● Compositional Linearizability

■ Compositional linearizability is a correctness condition for concurrent objects that


exploits the semantics of abstract data types. It is a stronger condition than
sequential consistency, but weaker than linearizability.

■ In a compositional linearizable system, the behavior of a complex object can be


inferred from the behavior of its constituent objects. This means that if two objects
are compositionally linearizable, then their composition is also compositionally
linearizable.

■ Compositional linearizability is a useful property for concurrent systems because it


allows them to be verified more easily. If a system is compositionally linearizable,
then it is sufficient to verify the correctness of its constituent objects. This can be
done using a variety of techniques, such as model checking or theorem proving.

■ There are a few ways to achieve compositional linearizability:

○ One way is to use locks or transactions to ensure that concurrent accesses


to objects are serialized.

○ Another way is to use a technique called ‘abstract interpretation’ to reason


about the behavior of concurrent objects.

■ Compositional linearizability is a promising approach to the verification of


concurrent systems. It is a relatively weak condition that is still strong enough to
ensure correctness in many cases. It is also a relatively easy condition to verify,
which makes it a practical choice for many systems.

■ Here are some of the advantages of using compositional linearizability:

○ It can be used to verify the correctness of complex concurrent systems.

○ It can be used to reason about the behavior of concurrent systems.

○ It can be used to improve the performance of concurrent systems.

■ Here are some of the disadvantages of using compositional linearizability:

○ It can be difficult to implement compositional linearizability in some cases.

○ It can be difficult to reason about the behavior of concurrent systems that


are not compositionally linearizable.

■ Overall, compositional linearizability is a useful tool for the verification of concurrent systems.
■ Compositional linearizability is a concept in concurrent programming and
distributed systems that defines a property of concurrent objects, allowing for
modular reasoning about the correctness and behavior of complex systems. It's a
middle ground between the stronger linearizability and the weaker sequential
consistency. This property allows you to reason about the behavior of composite
(composed of multiple) objects based on the behaviors of their constituent objects.

■ Here's a more detailed explanation of compositional linearizability:

○ Modular Reasoning: Compositional linearizability is centered around the


idea that the behavior of a complex object can be understood and reasoned
about by examining the behaviors of its constituent objects in isolation.

○ Object Composition: In a system where objects can be composed of simpler


objects, compositional linearizability ensures that if each object satisfies
linearizability, the composed object also satisfies a form of linearizability
about its constituent operations.

○ Correctness Inference: Instead of analyzing the entire system's


interactions, compositional linearizability enables you to focus on verifying
the correctness of each object's behavior. This simplifies the verification
process and reduces the complexity of reasoning about complex systems.

○ Trade-Off Between Guarantees: Compositional linearizability provides


stronger guarantees than sequential consistency but relaxes some of the
stringent requirements of full linearizability. This trade-off allows for more
efficient implementations and verifications in practice.

○ Applicability: Compositional linearizability is particularly useful in


situations where the correctness of individual objects can be well-defined
and verified, and you want to ensure that the interactions of these objects
maintain a certain level of consistency.

○ Verification Techniques: Techniques such as model checking, abstract


interpretation, and theorem proving can be employed to verify the
compositional linearizability of individual objects and, consequently, the
composed system.

○ Ease of Verification: The key advantage of compositional linearizability is


that it simplifies the verification process by allowing you to focus on
smaller, isolated parts of the system, which can lead to more efficient and
manageable verification efforts.

○ Practical Use: Many real-world systems benefit from compositional


linearizability due to the ease of reasoning about the correctness of
individual components while still achieving meaningful guarantees for the
entire system.

■ Compositional linearizability is a concept related to the correctness and


consistency of concurrent and distributed systems. It combines the notions of
‘linearizability’ and ‘compositionality’ to ensure the correctness of complex
systems built from smaller, independently functioning components.

○ Linearizability: Linearizability is a property that guarantees that each


operation in a distributed system appears to take effect instantaneously at a
single point in time between its invocation and response, and it behaves as if
it occurred at that exact point in time. This property ensures that the
system's behavior is consistent with that of a single, sequentially executing
process.

○ Compositionality: Compositionality is the principle that states that the


behavior of a complex system can be understood and verified by analyzing
the behaviors of its components and how they interact with each other. In
the context of concurrent and distributed systems, compositionality means
that the correctness of the overall system can be deduced from the
correctness of its components.

■ Compositional Linearizability: Combines the above two concepts by providing a


framework to reason about the correctness of a larger distributed system by
considering the correctness of its components. It ensures that the global behavior
of the composed system is consistent with the behaviors of its components, taking
into account the linearizability property.

■ When dealing with complex systems that consist of multiple interacting components, compositional linearizability allows you to analyze and reason about the correctness of the system in a more modular and structured manner. It provides a way to verify that the interactions between components preserve the linearizability property, thereby ensuring the correctness of the entire system.

■ In summary, compositional linearizability is a powerful approach to designing and


verifying correct concurrent and distributed systems by combining the concepts of
linearizability and compositionality. It helps ensure that the interactions between
components maintain the desired correctness properties, ultimately leading to a
more reliable and robust system.

■ Compositionality is important because it allows concurrent systems to be designed


and constructed in a modular fashion; linearizable objects can be implemented,
verified, and executed independently.
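
■ To make per-object reasoning concrete, here is a minimal, illustrative Java sketch (added to these notes, not from the original lecture). It assumes java.util.concurrent.ConcurrentLinkedQueue as a linearizable building block and uses a hypothetical composite class TwoQueues: because linearizability is compositional, each queue can be implemented and verified independently, and the composite needs no extra synchronization for its per-object guarantees to carry over.

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical composite object built from two independently linearizable queues.
    // Each method below is a linearizable operation on exactly one component object,
    // so correctness can be argued one queue at a time.
    public class TwoQueues<T> {
        private final ConcurrentLinkedQueue<T> highPriority = new ConcurrentLinkedQueue<>();
        private final ConcurrentLinkedQueue<T> lowPriority = new ConcurrentLinkedQueue<>();

        public void enqHigh(T x) { highPriority.add(x); }  // linearizable on highPriority
        public void enqLow(T x) { lowPriority.add(x); }    // linearizable on lowPriority

        public T deqHigh() { return highPriority.poll(); } // linearizable on highPriority
        public T deqLow() { return lowPriority.poll(); }   // linearizable on lowPriority
    }

Note that an operation touching both queues at once (for example, atomically moving an element from one to the other) would be a new operation on the composite and would need its own argument or synchronization; compositionality only says that the per-object histories remain linearizable when the objects are used together.
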
● Linearizability is Compositional:

Theorem: H is linearizable if, and only if, for each object x, H|x is linearizable.

(Figure: The pending enq(x) method call must take effect early to justify the deq() call that
returns x.)

■ Proof: The ‘only if’ part is left as an exercise.

■ For each object x, pick a linearization of H|x. Let Rx be the set of responses appended to H|x to construct that linearization, and let →x be the corresponding linearization order. Let H' be the history constructed by appending to H each response in Rx.

■ We argue by induction on the number of method calls in H'. For the base case, if H' contains only one method call, we are done. Otherwise, assume the claim for every history containing fewer than k > 1 method calls. For each object x, consider the last method call in H'|x. One of these calls m must be maximal with respect to →H': that is, there is no m' such that m →H' m'. Let G' be the history defined by removing m from H'. Because m is maximal, H' is equivalent to G' · m. By the induction hypothesis, G' is linearizable to a sequential history S', and both G' · m and H' are linearizable to S' · m. ∎

○ Base Case: You start by considering the base case where H' contains only one method call. In this case, that single method call is already a linearization, and you are done.

○ Inductive Hypothesis: You assume that the claim holds for any history containing fewer than k > 1 method calls.

○ Linearization for Each Object: For each object x in H, you pick a linearization of the history of method calls on that specific object, denoted as H|x. This linearization includes the appended responses and a linearization order →x.

○ Rx and →x: You define Rx as the set of responses appended to H|x to construct the linearization, and →x as the corresponding linearization order.

○ Constructing History H': You create a new history H' by appending to H each response in Rx. This essentially constructs H' by adding the responses from each object's linearization.

○ Maximal Method Call: You consider the last method call m in H'|x for each object x. You argue that one of these calls m must be maximal with respect to →H', meaning there is no other method call m' in H' such that m precedes m' in →H'.

○ Removing the Maximal Call: You create a new history G' by removing the maximal call m from H'. Because m is maximal, H' is equivalent to G' followed by m.

○ Induction Hypothesis Application: By the induction hypothesis (the claim for histories with fewer method calls), G' can be linearized to a sequential history S'.

○ Linearizability of G' · m and H': You conclude that both G' · m and H' can be linearized to S' followed by the maximal call m.
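
○ Illustrative example (added for intuition, not from the original notes): consider a history H over two objects, a queue p and a register q, in which thread A invokes p.enq(1), thread B invokes and completes q.write(7), A's enq(1) then returns, and finally A invokes and completes p.deq(), which returns 1. The projection H|p (enq(1) followed by deq() returning 1) and the projection H|q (a single write(7)) are each linearizable, so the theorem guarantees that H itself is linearizable: placing the write of q anywhere consistent with its real-time interval yields a legal sequential history for the whole system.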

● The Nonblocking Property

■ In computer science, an algorithm is called non-blocking if the failure or


suspension of any thread cannot cause the failure or suspension of another thread;
for some operations, these algorithms provide a useful alternative to traditional
blocking implementations.

■ A non-blocking algorithm is lock-free if there is guaranteed system-wide progress,


and wait-free if there is also guaranteed per-thread progress. ‘Non-blocking’ was
used as a synonym for ‘lock-free’ in the literature until the introduction of
obstruction-freedom in 2003.

■ The word ‘non-blocking’ was traditionally used to describe telecommunications


networks that could route a connection through a set of relays ‘without having to
re-arrange existing calls’. Also, if the telephone exchange ‘is not defective, it can
always make the connection’ (see nonblocking minimal spanning switch).

■ Introduction

○ The traditional approach to multi-threaded programming is to use locks to


synchronize access to shared resources. Synchronization primitives such as
mutexes, semaphores, and critical sections are all mechanisms by which a
programmer can ensure that certain sections of code do not execute
concurrently if doing so would corrupt shared memory structures. If one
thread attempts to acquire a lock that is already held by another thread, the
thread will block until the lock is free.
○ Blocking a thread can be undesirable for many reasons. An obvious reason
is that while the thread is blocked, it cannot accomplish anything: if the
blocked thread had been performing a high-priority or real-time task, it
would be highly undesirable to halt its progress.

○ Other problems are less obvious. For example, certain interactions between
locks can lead to error conditions such as deadlock, livelock, and priority
inversion. Using locks also involves a trade-off between coarse-grained
locking, which can significantly reduce opportunities for parallelism, and
fine-grained locking, which requires more careful design, increases locking
overhead, and is more prone to bugs.

○ Unlike blocking algorithms, non-blocking algorithms do not suffer from


these downsides, and in addition are safe for use in interrupt handlers: even
though the preempted thread cannot be resumed, progress is still possible
without it.

○ In contrast, global data structures protected by mutual exclusion cannot


safely be accessed in an interrupt handler, as the preempted thread may be
the one holding the lock—but this can be rectified easily by masking the
interrupt request during the critical section.

○ A lock-free data structure can be used to improve performance. A lock-free


data structure increases the amount of time spent in parallel execution
rather than serial execution, improving performance on a multi-core
processor, because access to the shared data structure does not need to be
serialized to stay coherent.

○ With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide, the most notable of which is compare-and-swap (CAS). Critical sections are almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). A minimal CAS-based sketch is given after this list.

○ In the 1990s all non-blocking algorithms had to be written ‘natively’ with the
underlying primitives to achieve acceptable performance. However, the
emerging field of software transactional memory promises standard
abstractions for writing efficient non-blocking code.

○ Much research has also been done in providing basic data structures such
as stacks, queues, sets, and hash tables. These allow programs to easily
exchange data between threads asynchronously.

○ Additionally, some non-blocking data structures are weak enough to be


implemented without special atomic primitives. These exceptions include:
➢ a single-reader single-writer ring buffer FIFO, with a size that evenly
divides the overflow of one of the available unsigned integer types,
can unconditionally be implemented safely using only a memory
barrier

➢ Read-copy-update with a single writer and any number of readers.


(The readers are wait-free; the writer is usually lock-free until it
recovers memory).

➢ Read-copy-update with multiple writers and any number of readers.


(The readers are wait-free; multiple writers generally serialize with a
lock and are not obstruction-free).

○ Several libraries internally use lock-free techniques, but it is difficult to


write lock-free code that is correct.

○ Non-blocking algorithms generally involve a series of read,


read-modify-write, and write instructions in a carefully designed order.
Optimizing compilers can aggressively re-arrange operations. Even when
they don't, many modern CPUs often re-arrange such operations (they have
a ‘weak consistency model’), unless a memory barrier is used to tell the CPU
not to reorder.
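
○ The following is a minimal sketch (added here, not from the original notes) of the CAS-based read-modify-write pattern mentioned above, written in Java with java.util.concurrent.atomic.AtomicInteger. The retry loop reads the current value, computes the new value, and tries to install it atomically; if another thread won the race, the CAS fails and the loop retries.

    import java.util.concurrent.atomic.AtomicInteger;

    // Lock-free counter built on compare-and-swap (CAS).
    public class CasCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        public int increment() {
            while (true) {
                int current = value.get();                  // read
                int next = current + 1;                     // modify
                if (value.compareAndSet(current, next)) {   // write (atomic)
                    return next;                            // CAS succeeded
                }
                // CAS failed: another thread updated the value first; retry.
                // Some thread always succeeds (lock-free), but an unlucky thread
                // may retry indefinitely under contention, so this is not wait-free.
            }
        }
    }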

■ Wait-Freedom

○ Wait freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes. This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high. (A small counter sketch illustrating this bound appears after this list.)

○ It was shown in the 1980s that all algorithms can be implemented wait-free,
and many transformations from serial code, called universal constructions,
have been demonstrated. However, the resulting performance does not in
general match even naïve blocking designs. Several papers have since
improved the performance of universal constructions, but still, their
performance is far below blocking designs.

○ Several papers have investigated the difficulty of creating wait-free


algorithms. For example, it has been shown that the widely available atomic
conditional primitives, CAS and LL/SC, cannot provide starvation-free
implementations of many common data structures without memory costs
growing linearly in the number of threads.

○ But in practice, these lower bounds do not present a real barrier, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in shared memory is not considered too costly for practical systems. Typically the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required is greater.

○ Wait-free algorithms were rare until 2011, both in research and in practice.
However, in 2011 Kogan and Petrank presented a wait-free queue building
on the CAS primitive, generally available on common hardware. Their
construction expanded the lock-free queue of Michael and Scott, which is
an efficient queue often used in practice.

○ A follow-up paper by Kogan and Petrank provided a method for making


wait-free algorithms fast and used this method to make the wait-free queue
practically as fast as its lock-free counterpart. A subsequent paper by
Timnat and Petrank provided an automatic mechanism for generating
wait-free data structures from lock-free ones. Thus, wait-free
implementations are now available for many data structures.
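
○ As a small illustration of the bounded-steps definition above (an added sketch, not from the original notes): if the platform provides an atomic fetch-and-add, as mainstream x86 JVMs do for AtomicLong.getAndIncrement(), then every increment completes in a bounded number of steps regardless of what other threads do. If the JVM instead falls back to a CAS retry loop, the same code is only lock-free, which shows how the progress guarantee can depend on the underlying primitive.

    import java.util.concurrent.atomic.AtomicLong;

    // Counter whose increment is wait-free *assuming* getAndIncrement() maps to
    // a single atomic fetch-and-add instruction; otherwise it is merely lock-free.
    public class WaitFreeCounter {
        private final AtomicLong value = new AtomicLong();

        public long increment() {
            return value.getAndIncrement() + 1; // no explicit retry loop in this sketch
        }

        public long read() {
            return value.get(); // a single atomic read is trivially wait-free
        }
    }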

■ Lock-Freedom

○ Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free. (A lock-free stack sketch is given after this list.)

○ In particular, if one thread is suspended, then a lock-free algorithm


guarantees that the remaining threads can still make progress. Hence, if
two threads can contend for the same mutex lock or spinlock, then the
algorithm is not lock-free. (If we suspend one thread that holds the lock,
then the second thread will block.)

○ An algorithm is lock-free if, infinitely often, an operation by some processor succeeds in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processors will succeed in finishing the operation in a finite number of steps, and others might fail and retry on failure. The difference between wait-free and lock-free is that a wait-free operation by each process is guaranteed to succeed in a finite number of steps, regardless of the other processes.

○ In general, a lock-free algorithm can run in four phases: completing one's


operation, assisting an obstructing operation, aborting an obstructing
operation, and waiting. Completing one's operation is complicated by the
possibility of concurrent assistance and abortion, but is invariably the
fastest path to completion.
○ The decision about when to assist, abort, or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput or lower the latency of prioritized operations.

○ Correct concurrent assistance is typically the most complex part of a


lock-free algorithm, and often very costly to execute: not only does the
assisting thread slow down, but thanks to the mechanics of shared memory,
the thread being assisted will be slowed, too, if it is still running.
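
○ A classic lock-free data structure matching the definition above is the Treiber stack. The sketch below (added for illustration, not from the original notes) uses AtomicReference and the same CAS retry pattern: some thread always succeeds in updating the top pointer, but an individual thread may have to retry.

    import java.util.concurrent.atomic.AtomicReference;

    // Lock-free (Treiber) stack. Relying on the garbage collector sidesteps the
    // ABA problem that a manually memory-managed version would have to handle.
    public class LockFreeStack<T> {
        private static final class Node<T> {
            final T item;
            Node<T> next;
            Node(T item) { this.item = item; }
        }

        private final AtomicReference<Node<T>> top = new AtomicReference<>();

        public void push(T item) {
            Node<T> newHead = new Node<>(item);
            Node<T> oldHead;
            do {
                oldHead = top.get();
                newHead.next = oldHead;
            } while (!top.compareAndSet(oldHead, newHead)); // retry if top changed
        }

        public T pop() {
            Node<T> oldHead;
            Node<T> newHead;
            do {
                oldHead = top.get();
                if (oldHead == null) return null;           // empty stack
                newHead = oldHead.next;
            } while (!top.compareAndSet(oldHead, newHead)); // retry if top changed
            return oldHead.item;
        }
    }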

■ Obstruction-Freedom

○ Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if, at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation. All lock-free algorithms are obstruction-free.

○ Obstruction-freedom demands only that any partially completed operation


can be aborted and the changes made rolled back. Dropping concurrent
assistance can often result in much simpler algorithms that are easier to
validate. Preventing the system from continually live-locking is the task of a
contention manager.

○ Some obstruction-free algorithms use a pair of ‘consistency markers’ in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again. (A small sketch of this marker pattern appears after this list.)

○ The nonblocking property, often referred to as nonblocking synchronization


or nonblocking algorithms, is a concept in concurrent programming and
parallel computing. It describes an approach to designing concurrent
systems where threads or processes can make progress and perform
operations without getting stuck or blocked by other threads' actions.

○ In nonblocking systems, operations can be executed concurrently without


relying heavily on traditional locking mechanisms, such as mutexes or
semaphores, that can lead to contention and blocking. Nonblocking
algorithms aim to avoid situations where a thread's progress is hindered by
the actions of other threads, which can help improve system
responsiveness, scalability, and performance.
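
○ The ‘pair of consistency markers’ pattern described under Obstruction-Freedom above can be sketched as follows (an illustrative, seqlock-style Java example added to these notes; it assumes a single writer). The writer bumps a version counter before and after updating the data; a reader snapshots the data between two reads of the counter and retries if the counter changed or a write was in progress. Readers never block the writer, and a reader that runs in isolation finishes immediately.

    import java.util.concurrent.atomic.AtomicLong;

    // Single-writer data holder read with a pair of consistency markers.
    public class VersionedPair {
        private final AtomicLong version = new AtomicLong(0); // even = stable, odd = write in progress
        private volatile long x, y;

        public void write(long newX, long newY) {   // single writer assumed
            version.incrementAndGet();              // marker 1: version becomes odd
            x = newX;
            y = newY;
            version.incrementAndGet();              // marker 2: version back to even
        }

        public long[] read() {
            while (true) {
                long v1 = version.get();
                long rx = x;
                long ry = y;
                long v2 = version.get();
                if (v1 == v2 && (v1 & 1) == 0) {    // markers identical and no write in progress
                    return new long[] { rx, ry };
                }
                // Markers differ: the snapshot may be torn; discard and retry.
            }
        }
    }
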
■ Key Characteristics of Nonblocking Systems:

○ Progress Guarantee: The primary goal of nonblocking systems is to ensure


that at least one thread can make progress at any given time, even in the
presence of contention or delays caused by other threads.

○ Lock-Free and Wait-Free: Nonblocking algorithms are often categorized as


lock-free or wait-free. A lock-free algorithm guarantees that in the presence
of contention, some thread will complete its operation in a finite number of
steps. A wait-free algorithm ensures that every thread will complete its
operation in a finite number of steps, regardless of contention.

○ Avoiding Deadlocks and Contentions: Nonblocking designs help prevent


common problems like deadlocks (where threads wait indefinitely for each
other) and excessive contention (where many threads try to access the
same resource, leading to performance bottlenecks).

○ Scalability: Nonblocking systems tend to scale better as the number of


threads or processors increases because they reduce contention and
blocking that can hinder performance.

○ Complexity and Overhead: While nonblocking algorithms can provide


benefits, they can also be more complex to design and implement compared
to traditional locking-based approaches. Ensuring correctness and handling
edge cases can require careful consideration.

○ Suitability for Certain Scenarios: Nonblocking techniques are particularly


useful in scenarios where highly concurrent access to shared resources is
required, and traditional locking mechanisms would become a bottleneck.

○ Amdahl's Law: Nonblocking systems can help mitigate the impact of Amdahl's Law, which states that the speedup of a parallel program is limited by the sequential portion of the program. By allowing nonblocking parallelism, more parts of the program can run concurrently. (The formula is recalled after this list.)

○ Memory Consistency: Nonblocking algorithms often need to consider


memory consistency and visibility issues due to the lack of strong
synchronization primitives such as locks. Techniques such as memory
barriers and atomic operations play a role in ensuring correctness.

○ Overall, the nonblocking property is a powerful concept in concurrent


programming that focuses on ensuring progress and reducing contention
among threads or processes. It's an approach that aims to improve
responsiveness, scalability, and performance in concurrent systems by
allowing threads to execute independently without excessive blocking.
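
○ For reference (formula added to these notes): Amdahl's Law bounds the speedup of a program on N processors as speedup ≤ 1 / ((1 − p) + p/N), where p is the fraction of the work that can run in parallel. Reducing blocking and contention effectively raises p, which pushes the achievable speedup closer to 1 / (1 − p) as N grows.
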
● Progress Conditions

■ What is a Progress Condition?

■ A progress condition in concurrent programs is a property that guarantees that a thread (or the system as a whole) will eventually make progress, even if other threads are accessing the same shared resources.

■ There are a few different types of progress conditions, each with its strengths and
weaknesses.

○ Wait freedom is the strongest progress condition. A wait-free program guarantees that every thread will eventually make progress, no matter what other threads are doing: every thread completes each operation in a finite number of its own steps, even if other threads are delayed or halted. Wait-free programs are typically the most difficult to implement, but they offer the best guarantees of progress.

○ Lock-freedom is a weaker progress condition than wait-freedom. A lock-free program guarantees that some thread will always make progress in a finite number of steps, although an individual thread may starve if it repeatedly loses to others. Lock-free programs are typically easier to implement than wait-free programs, but they offer slightly weaker guarantees of progress.

○ Obstruction-freedom is a weaker progress condition than lock-freedom. An obstruction-free program guarantees that a thread will complete its operation whenever it eventually runs in isolation (without interference from other threads) for long enough. Obstruction-free programs are typically even easier to implement than lock-free programs, but they offer the weakest guarantees of progress.

○ Starvation-freedom is a property that is orthogonal to wait-freedom,


lock-freedom, and obstruction-freedom. A starvation-free program
guarantees that no thread will be indefinitely prevented from making
progress. This means that even if a thread is blocked by other threads, it will
eventually be unblocked and allowed to make progress. Starvation freedom
can be achieved by using a variety of techniques, such as priority
scheduling or fairness policies.

■ The specific needs of a program determine which progress condition to use. If the
program must guarantee that every thread will eventually make progress, then
wait freedom is the best choice. However, if the program can tolerate some delays,
then lock-freedom or obstruction-freedom may be sufficient. If the program is not
concerned about starvation, then any of the three conditions may be used.

■ Here are some of the advantages of using progress conditions in concurrent


programs:
○ They can help to ensure that the program will eventually terminate.

○ They can help to prevent deadlocks and livelocks.

○ They can help to improve the performance of the program.

■ Here are some of the disadvantages of using progress conditions in concurrent


programs:

○ They can make the program more complex to implement.

○ They can make the program less efficient.

○ They may not be suitable for all applications.

■ Overall, progress conditions are a useful tool for ensuring the correctness and
performance of concurrent programs. The choice of which progress condition to
use depends on the specific needs of the program.

■ Here are some common progress conditions in concurrent programming:

○ Lock Freedom: A system or algorithm is considered lock-free if at least one


thread is guaranteed to make progress in a finite number of steps,
regardless of the contention or delays caused by other threads. Lock
freedom ensures that threads don't get stuck waiting indefinitely.

○ Wait Freedom: Wait freedom is a stronger condition than lock freedom. It


guarantees that every thread will complete its operation in a finite number
of steps, regardless of the contention. In other words, no thread will be
stuck waiting indefinitely.

○ Obstruction Freedom: An algorithm or system is said to be obstruction-free if any thread that runs in isolation for a bounded number of steps completes its operation, provided there is no contention during that time. This means that threads are guaranteed to make progress as long as there's no contention.

○ Starvation Freedom: This condition ensures that no thread is starved or


deprived of the opportunity to execute its operation indefinitely. In other
words, a thread's request to operate is eventually granted, even if other
threads are frequently requesting operations.

○ Livelock Freedom: Livelock freedom guarantees that threads won't get


stuck in a state where they are actively trying to resolve contention issues
but can't make any real progress. Livelocks are different from deadlocks,
where threads are truly stuck.
● Dependent Progress Conditions

■ The wait-free and lock-free nonblocking progress conditions guarantee that the computation as a whole makes progress, independently of how the system schedules threads. The two progress conditions for blocking implementations are the deadlock-free and starvation-free properties. These are dependent progress conditions: progress occurs only if the underlying platform (i.e., the operating system) provides certain guarantees.

■ In principle, the deadlock-free and starvation-free properties are useful when the operating system guarantees that every thread eventually leaves every critical section. In practice, these properties are useful when the operating system guarantees that every thread eventually leaves every critical section in a timely manner. Classes whose methods rely on lock-based synchronization can guarantee, at best, dependent progress properties.
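
■ As a contrast to the nonblocking counters sketched earlier, here is a blocking, starvation-free counter (an illustrative Java sketch added to these notes, using java.util.concurrent.locks.ReentrantLock in fair mode). The fair lock hands out the lock in roughly FIFO order, so no thread is bypassed forever, but the guarantee is dependent: it holds only if the scheduler keeps running whichever thread holds the lock so that it eventually leaves the critical section.

    import java.util.concurrent.locks.ReentrantLock;

    // Blocking counter with a dependent (starvation-free) progress guarantee.
    public class FairLockedCounter {
        private final ReentrantLock lock = new ReentrantLock(true); // fair (FIFO) mode
        private long value = 0;

        public long increment() {
            lock.lock();        // may suspend the calling thread
            try {
                return ++value; // critical section
            } finally {
                lock.unlock();
            }
        }
    }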

■ Does this observation mean that lock-based algorithms should be avoided? Not necessarily. If preemption in the middle of a critical section is sufficiently rare, then dependent blocking progress conditions are effectively indistinguishable from their nonblocking counterparts. If preemption is common enough to cause concern, or if the cost of preemption-based delay is sufficiently high, then it is sensible to consider nonblocking progress conditions. There is also a dependent nonblocking progress condition: the obstruction-free property. We say that a method call executes in isolation if no other threads take steps.

■ Definition: A method is obstruction-free if, from any point after which it executes in
isolation, it finishes in a finite number of steps.

● Progress of a Process

■ When several processes execute simultaneously, the order in which statements in the critical section are executed can affect the final state of the shared data. This is a race condition and gives rise to inconsistencies in the code. Race conditions are removed with the help of mutual exclusion, but mutual exclusion may in turn starve other processes; if this waiting extends indefinitely, it can lead to deadlock.

■ Hence, mutual exclusion alone cannot guarantee the simultaneous execution of


processes without any problems—a second condition known as progress is
required to ensure no deadlock occurs during such execution.

■ A formal definition of progress is stated by Galvin as

○ “If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter its critical section next, and this selection cannot be postponed
indefinitely.”

○ Let’s use an example to see how valid our statement is. Suppose in the
clothes section of a departmental store, a boy A and a girl B want to use the
changing room.

○ Boy A decides to use the changing room first, but cannot decide as to how
many clothes to take inside with him. As a result, even though the changing
room is empty, girl B (who has decided how many clothes to try out) cannot
enter the changing room as she is obstructed by boy A.

○ In other words, boy A prevents girl B from using the changing room even
though he doesn’t need to use it. This is what the concept of progress was
made to prevent.

■ According to the main definition of progress, the only processes that can participate in the decision-making as to who can enter the critical section are those that are about to enter the critical section or are executing some code before entering the critical section. Processes that are in their remainder section, which is the section succeeding the critical section, are not allowed to participate in this decision-making process.
■ The main job of progress is to ensure that, whenever processes are waiting to enter, some process is selected to execute in the critical section (so that useful work is always being done). This decision cannot be ‘postponed indefinitely’; in other words, it should take a limited amount of time to select which process is allowed to enter the critical section. If this decision cannot be taken in finite time, it leads to a deadlock.
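
■ A classic algorithm that satisfies both mutual exclusion and the progress condition quoted above is Peterson's two-thread lock. The sketch below is an illustrative Java version added to these notes; atomic fields stand in for the sequentially consistent shared variables the algorithm assumes. If no thread is in the critical section, the decision about who enters next is made only by the threads currently trying to enter, and it is made in a finite number of steps.

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Peterson's mutual-exclusion algorithm for two threads (ids 0 and 1).
    public class PetersonLock {
        private final AtomicIntegerArray flag = new AtomicIntegerArray(2); // flag[i] == 1: thread i wants in
        private final AtomicInteger victim = new AtomicInteger(0);         // whose turn it is to wait

        public void lock(int me) {
            int other = 1 - me;
            flag.set(me, 1);      // announce interest
            victim.set(me);       // politely let the other thread go first
            while (flag.get(other) == 1 && victim.get() == me) {
                // spin: the other thread is interested and it is my turn to wait
            }
        }

        public void unlock(int me) {
            flag.set(me, 0);      // no longer interested
        }
    }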
