LECTURE 12 to 14 NOTES
Theorem 1. H is linearizable if, and only if, for each object x, H|x is linearizable.
(Figure: The pending enq(x) method call must take effect early to justify the deq() call that
returns x.)
■ For each object x, pick a linearization of H|x. Let Rx be the set of responses
appended to H|x to construct that linearization, and let →x be the corresponding
linearization order. Let H′ be the history constructed by appending to H each
response in Rx.
■ We argue by induction on the number of method calls in H′. For the base case, if H′
contains only one method call, we are done. Otherwise, suppose H′ contains k > 1
method calls, and assume the claim for every history containing fewer than k
method calls. For each object x, consider the last method call in H′|x. One of
these calls m must be maximal with respect to →H: that is, there is no m′ such
that m →H m′. Let G′ be the history defined by removing m from H′. Because m is
maximal, H′ is equivalent to G′ · m. By the induction hypothesis, G′ is
linearizable to a sequential history S′, and both G′ and H′ are linearizable to
S′ · m. ∎
○ Base Case: You start with the base case where H′ contains only one
method call, on some object x. In this case, you already have a
linearization for that single method call, and you are done.
○ Inductive Hypothesis: You assume that the claim holds for any history
containing fewer than k > 1 method calls.
○ Linearization for Each Object: For each object x in H′, you pick a
linearization of the history of method calls on that specific object, denoted
H|x. This linearization includes the appended responses Rx and a linearization
order →x.
○ Maximal Method Call: As you did in the previous part of the argument, you
consider the last method call m in H′|x for each object x. You argue that one
of these calls m must be maximal with respect to →H, meaning there is no other
method call m′ in H′ such that m precedes m′ in H.
○ Removing the Maximal Call: Similar to the previous argument, you create a new
history G′ by removing the maximal call m from H′. Because m is maximal,
H′ is equivalent to G′ followed by m.
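As a small illustrative instance of the theorem (our own example, not from the lecture), take two FIFO queues p and q and the complete history

```latex
\begin{aligned}
H = \langle\; & A{:}\ p.\mathrm{enq}(1),\; A{:}\ p{:}\ \mathrm{ok},\\
              & B{:}\ q.\mathrm{enq}(2),\; B{:}\ q{:}\ \mathrm{ok},\\
              & A{:}\ q.\mathrm{deq}(),\; A{:}\ q{:}\ 2 \;\rangle .
\end{aligned}
```

Here H|p and H|q are each already sequential and legal, hence linearizable, so the theorem guarantees that H itself is linearizable, e.g., to p.enq(1) · q.enq(2) · q.deq().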
■ Introduction
○ Other problems are less obvious. For example, certain interactions between
locks can lead to error conditions such as deadlock, livelock, and priority
inversion. Using locks also involves a trade-off between coarse-grained
locking, which can significantly reduce opportunities for parallelism, and
fine-grained locking, which requires more careful design, increases locking
overhead, and is more prone to bugs.
○ In the 1990s all non-blocking algorithms had to be written ‘natively’ with the
underlying primitives to achieve acceptable performance. However, the
emerging field of software transactional memory promises standard
abstractions for writing efficient non-blocking code.
○ Much research has also been done in providing basic data structures such
as stacks, queues, sets, and hash tables. These allow programs to easily
exchange data between threads asynchronously.
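A minimal sketch of such an exchange (class and method names are ours): two threads share data through java.util.concurrent.ConcurrentLinkedQueue, whose implementation is based on the Michael and Scott non-blocking queue algorithm.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch: a producer thread enqueues values and the calling
// thread drains them, with no locks involved.
public class QueueExchange {
    static int exchange() {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                queue.offer(i); // non-blocking enqueue
            }
        });
        producer.start();
        try {
            producer.join(); // wait so the drain below sees all five items
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        int sum = 0;
        Integer item;
        while ((item = queue.poll()) != null) { // non-blocking dequeue
            sum += item;
        }
        return sum; // 0 + 1 + 2 + 3 + 4
    }
}
```

In a real program the consumer would typically poll concurrently with the producer rather than after a join; the join here only makes the example deterministic.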
■ Wait-Freedom
○ It was shown in the 1980s that all algorithms can be implemented wait-free,
and many transformations from serial code, called universal constructions,
have been demonstrated. However, the resulting performance does not in
general match even naïve blocking designs. Several papers have since
improved the performance of universal constructions, but their performance
still falls far short of blocking designs.
○ Wait-free algorithms were rare until 2011, both in research and in practice.
However, in 2011 Kogan and Petrank presented a wait-free queue building
on the CAS primitive, generally available on common hardware. Their
construction expanded the lock-free queue of Michael and Scott, which is
an efficient queue often used in practice.
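The Michael–Scott and Kogan–Petrank queues are too long to reproduce here, but the CAS retry pattern they build on can be shown with the simpler Treiber lock-free stack (our own illustration, not the queue from the lecture):

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber's lock-free stack: every operation retries a CAS until it
// succeeds. A thread's CAS can fail only because another thread's CAS
// succeeded, so the system as a whole always makes progress (lock-free),
// even though an individual thread may retry indefinitely (not wait-free).
public class TreiberStack<E> {
    private static class Node<E> {
        final E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    public void push(E value) {
        Node<E> node = new Node<>(value);
        while (true) {
            Node<E> oldTop = top.get();
            node.next = oldTop;
            if (top.compareAndSet(oldTop, node)) return; // CAS succeeded
            // otherwise another thread changed top: retry
        }
    }

    public E pop() {
        while (true) {
            Node<E> oldTop = top.get();
            if (oldTop == null) return null; // empty stack
            if (top.compareAndSet(oldTop, oldTop.next)) return oldTop.value;
        }
    }
}
```

The queue constructions mentioned above use the same compareAndSet primitive but maintain two pointers (head and tail), which is what makes them substantially more involved.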
■ Lock-Freedom
○ A method is lock-free if some method call always finishes in a finite
number of steps: individual threads may starve, but the system as a whole
keeps making progress.
■ Obstruction-Freedom
○ A method is obstruction-free if it finishes in a finite number of steps
whenever it runs in isolation; this is the weakest of the nonblocking
conditions and is defined formally below.
■ There are a few different types of progress conditions, each with its strengths and
weaknesses.
■ The specific needs of a program determine which progress condition to use. If the
program must guarantee that every thread will eventually make progress, then
wait-freedom is the best choice. However, if the program can tolerate some delays,
then lock-freedom or obstruction-freedom may be sufficient. If the program is not
concerned about starvation, then any of the three conditions may be used.
■ Overall, progress conditions are a useful tool for ensuring the correctness and
performance of concurrent programs. The choice of which progress condition to
use depends on the specific needs of the program.
■ The wait-free and lock-free nonblocking progress conditions guarantee that the
computation as a whole makes progress, independently of how the system
schedules threads. The two progress conditions for blocking implementations are
the deadlock-free and starvation-free properties. These are dependent progress
conditions: progress occurs only if the underlying platform (i.e., the operating
system) provides certain guarantees.
■ In principle, the deadlock-free and starvation-free properties are useful when the
operating system guarantees that every thread eventually leaves every critical
section. In practice, these properties are useful when the operating system
guarantees that every thread eventually leaves every critical section in a timely manner.
Classes whose methods rely on lock-based synchronization can guarantee, at best,
dependent progress properties.
■ Does this observation mean that lock-based algorithms should be avoided? Not
necessarily. If preemption in the middle of a critical section is sufficiently rare, then
dependent blocking progress conditions are effectively indistinguishable from
their nonblocking counterparts. If preemption is common enough to cause concern,
or if the cost of preemption-based delay is sufficiently high, then it is sensible to
consider nonblocking progress conditions. There is also a dependent nonblocking
progress condition: the obstruction-free property. We say that a method call
executes in isolation if no other threads take steps.
■ Definition: A method is obstruction-free if, from any point after which it executes in
isolation, it finishes in a finite number of steps.
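This definition can be made concrete with a small sketch (our own illustration): a CAS-based increment. From any point after which the caller runs in isolation, its very next CAS succeeds, so the call finishes in a bounded number of steps.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an operation satisfying obstruction-freedom: once the caller
// runs in isolation (no other thread takes steps), the next CAS succeeds
// and the call completes in a constant number of steps. In fact this
// retry loop is lock-free, a strictly stronger condition, since a failed
// CAS implies some other thread's increment succeeded.
public class ObstructionFreeCounter {
    private final AtomicLong value = new AtomicLong();

    public long increment() {
        while (true) {
            long current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return current + 1; // succeeded; done on the first try in isolation
            }
            // CAS failed: another thread moved the counter; retry
        }
    }

    public long get() { return value.get(); }
}
```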
● Progress of a Process
○ “If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not
executing in their remainder section can participate in deciding which will
enter its critical section next, and this selection cannot be postponed
indefinitely.”
○ Let’s use an example to see how valid our statement is. Suppose in the
clothes section of a department store, a boy A and a girl B want to use the
changing room.
○ Boy A decides to use the changing room first, but cannot decide as to how
many clothes to take inside with him. As a result, even though the changing
room is empty, girl B (who has decided how many clothes to try out) cannot
enter the changing room as she is obstructed by boy A.
○ In other words, boy A prevents girl B from using the changing room even
though he doesn’t need to use it. This is what the concept of progress was
made to prevent.
■ According to the main definition of progress, the only processes that can
participate in the decision-making as to who can enter the critical section are those
that are about to enter the critical section or are executing some code before
entering the critical section. Processes that are in their remainder section, which is
the section succeeding the critical section, are not allowed to participate in this
decision-making process.
■ The main job of progress is to ensure that, whenever one or more processes wish to
enter the critical section, some process is eventually allowed in (so that useful
work is always being done by the processor). This decision cannot be ‘postponed
indefinitely’; in other words, it should take only a finite amount of time to
select which process is allowed to enter the critical section. If this decision
cannot be made in finite time, the system can end up deadlocked.
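The requirement above can be sketched with Peterson's classic two-thread lock (our own illustration; field names are ours). Only the two threads currently trying to enter participate in the decision, via flag and victim, and the decision is never postponed indefinitely: victim holds a single value, so the spin condition cannot hold for both contenders at once.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Peterson's two-thread lock, sketched with AtomicBoolean so that the flag
// reads and writes have volatile visibility (plain Java array elements do
// not). A process in its remainder section has flag set to false and so
// plays no part in choosing who enters next, exactly as the progress
// condition demands.
public class PetersonLock {
    private final AtomicBoolean[] flag =
        { new AtomicBoolean(false), new AtomicBoolean(false) };
    private volatile int victim;

    public void lock(int me) {              // me is 0 or 1
        int other = 1 - me;
        flag[me].set(true);                 // I want to enter
        victim = me;                        // but you go first if we tie
        while (flag[other].get() && victim == me) { } // spin only under contention
    }

    public void unlock(int me) {
        flag[me].set(false);                // back to the remainder section
    }
}
```

An uncontended lock(i) never spins, since the other thread's flag is false; this is the bounded decision time the definition asks for.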