Fault-Tolerant Message-Passing Distributed Systems
An Algorithmic Approach

Michel Raynal
IRISA-ISTIC Université de Rennes 1
Institut Universitaire de France
Rennes, France
Parts of this work are based on the books “Fault-Tolerant Agreement in Synchronous Message-Passing Systems” and “Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems”, author Michel Raynal, © 2010 Morgan & Claypool Publishers (www.morganclaypool.com). Used with permission.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
French: Je suis arrivé au jour où je ne me souviens plus quand j’ai cessé d’être immortel.
English: I have reached the day when I no longer remember when I ceased to be immortal.
In Livro de Crónicas, António Lobo Antunes (1942)
French: Mais j’ai déjà fourni une vaste carrière, il est temps de dételer les chevaux tout fumants.
English: But now I have traveled a very long way, and the time has come to unyoke my steaming horses.
What is distributed computing? Distributed computing was born in the late 1970s, when researchers and practitioners started taking into account the intrinsic characteristics of physically distributed systems. The field then emerged as a specialized research area distinct from networking, operating systems, and parallel computing.
Distributed computing arises when one has to solve a problem in terms of distributed entities
(usually called processors, nodes, processes, actors, agents, sensors, peers, etc.) such that each entity
has only a partial knowledge of the many parameters involved in the problem that has to be solved.
While parallel computing and real-time computing can be characterized, respectively, by the terms
efficiency and on-time computing, distributed computing can be characterized by the term uncertainty.
This uncertainty is created by asynchrony, multiplicity of control flows, absence of shared memory
and global time, failure, dynamicity, mobility, etc. Mastering one form or another of uncertainty is
pervasive in all distributed computing problems. A main difficulty in designing distributed algorithms comes from the fact that no entity cooperating in the achievement of a common goal can have instantaneous knowledge of the current state of the other entities; it can only know their past local states.
Although distributed algorithms are often made up of a few lines, their behavior can be difficult
to understand and their properties hard to state and prove. Hence, distributed computing is not only
a fundamental topic but also a challenging topic where simplicity, elegance, and beauty are first-class
citizens.
Why this book? In the book “Distributed algorithms for message-passing systems” (Springer, 2013),
I addressed distributed computing in failure-free message-passing systems, where the computing entities (processes) have to cooperate in the presence of asynchrony. In contrast, in my book “Concurrent programming: algorithms, principles and foundations” (Springer, 2013), I addressed distributed computing where the computing entities (processes) communicate through a read/write shared memory (e.g., a multicore machine), and the main adversary lies in the net effect of asynchrony and process crashes (unexpected definitive stops).
The present book considers synchronous and asynchronous message-passing systems, in which processes can commit crash failures or Byzantine failures (arbitrary behavior). Its aim is to present in a comprehensive way the basic notions, concepts, and algorithms in the context of these systems. The main difficulty comes from the uncertainty created by the adversaries managing the environment (mainly asynchrony and failures), which, by its very nature, is not under the control of the system.
A quick look at the content of the book The book is composed of four parts: the first two are on communication abstractions, the other two on agreement abstractions. These are the most important abstractions that distributed applications rely on in asynchronous and synchronous message-passing systems where processes may crash or commit Byzantine failures. The book addresses what can be done and what cannot be done in the presence of such adversaries. It consequently presents both impossibility results and distributed algorithms. All impossibility results are proved, and all algorithms are described in a simple algorithmic notation and proved correct.
• Parts on communication abstractions.
– Part I is on the reliable broadcast abstraction.
– Part II is on the read/write register abstraction.
• Parts on agreement.
– Part III is on agreement in synchronous systems.
– Part IV is on agreement in asynchronous systems.
On the presentation style When known, the names of the authors of a theorem, or of an algorithm, are indicated together with the date of the associated publication. Moreover, each chapter has a bibliographical section, where a short historical perspective and references related to that chapter are given.
Each chapter terminates with a few exercises and problems, whose solutions can be found in the
article cited at the end of the corresponding exercise/problem.
From a vocabulary point of view, the following terms are used: an object implements an abstrac-
tion, defined by a set of properties, which allows a problem to be solved. Moreover, each algorithm
is first presented intuitively with words, and then proved correct. Understanding an algorithm is a
two-step process:
• First have a good intuition of its underlying principles, and its possible behaviors. This is nec-
essary, but remains informal.
• Then prove the algorithm is correct in the model for which it was designed. The proof consists of logical reasoning based on the properties provided by (i) the underlying model and (ii) the statements (code) of the algorithm. More precisely, each property defining the abstraction the algorithm is assumed to implement must be satisfied in all its executions.
Only when these two steps have been completed can we say that we understand the algorithm.
Audience This book has been written primarily for people who are not familiar with the topic and
the concepts that are presented. These include mainly:
• Senior-level undergraduate students and graduate students in informatics or computing engineering, who are interested in the principles and algorithmic foundations of fault-tolerant distributed computing.
• Practitioners and engineers who want to be aware of the state-of-the-art concepts, basic principles, mechanisms, and techniques encountered in fault-tolerant distributed computing.
Prerequisites for this book include undergraduate courses on algorithms, basic knowledge of operating systems, and notions of concurrency in failure-free distributed computing. One-semester courses based on this book are suggested in the section titled “How to Use This Book” in the Afterword.
Origin of the book and acknowledgments This book has two complementary origins:
• The first is a set of lectures for undergraduate and graduate courses on distributed computing I
gave at the University of Rennes (France), the Hong Kong Polytechnic University, and, as an
invited professor, at several universities all over the world.
Hence, I want to thank the numerous students for their questions that, in one way or another,
contributed to this book.
• The second is the two monographs I wrote in 2010 on fault-tolerant distributed computing, titled “Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems” and “Fault-Tolerant Agreement in Synchronous Message-Passing Systems”.
I also want to thank my colleagues (in no particular order) A. Mostéfaoui, D. Imbs, S. Rajsbaum,
V. Gramoli, C. Delporte, H. Fauconnier, F. Taı̈ani, M. Perrin, A. Castañeda, M. Larrea, and Z. Bouzid,
with whom I collaborated in recent years. I also thank the Hong Kong Polytechnic University (PolyU), and more particularly Professor Jiannong Cao, for hosting me while I was writing parts of this book. My thanks also go to Ronan Nugent (Springer) for his support and his help in putting it all together.
Last but not least (and maybe most importantly), I thank all the researchers whose results are presented in this book. Without their work, this book would not exist. (Finally, since I typeset the entire text myself – LaTeX2e for the text and xfig for the figures – any typesetting or technical errors that remain are my responsibility.)
Academia Europaea
Institut Universitaire de France
Professor IRISA-ISTIC, Université de Rennes 1, France
Chair Professor, Hong Kong Polytechnic University
June–December 2017
Rennes, Saint-Grégoire, Douelle, Saint-Philibert, Hong Kong,
Vienna (DISC’17), Washington D.C. (PODC’17), Mexico City (UNAM)
Contents
I Introductory Chapter 1
8 A Broadcast Abstraction Suited to the Family of Read/Write Implementable Objects 131
8.1 The SCD-broadcast Communication Abstraction . . . . . . . . . . . . . . . . . . . . . 132
8.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.1.2 Implementing SCD-broadcast in CAMP n,t [t < n/2] . . . . . . . . . . . . . . 133
8.1.3 Cost and Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 135
8.1.4 An SCD-broadcast-based Communication Pattern . . . . . . . . . . . . . . . . 139
8.2 From SCD-broadcast to an MWMR Register . . . . . . . . . . . . . . . . . . . . . . . 139
8.2.1 Building an MWMR Atomic Register in CAMP n,t [SCD-broadcast] . . . . . . 139
8.2.2 Cost and Proof of Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.2.3 From Atomicity to Sequential Consistency . . . . . . . . . . . . . . . . . . . 142
8.2.4 From MWMR Registers to an Atomic Snapshot Object . . . . . . . . . . . . . 143
8.3 From SCD-broadcast to an Atomic Counter . . . . . . . . . . . . . . . . . . . . . . . . 144
8.3.1 Counter Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.3.2 Implementation of an Atomic Counter Object . . . . . . . . . . . . . . . . . . 145
8.3.3 Implementation of a Sequentially Consistent Counter Object . . . . . . . . . . 146
8.4 From SCD-broadcast to Lattice Agreement . . . . . . . . . . . . . . . . . . . . . . . . 147
8.4.1 The Lattice Agreement Task . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.4.2 Lattice Agreement from SCD-broadcast . . . . . . . . . . . . . . . . . . . . . 148
8.5 From SWMR Atomic Registers to SCD-broadcast . . . . . . . . . . . . . . . . . . . . 148
8.5.1 From Snapshot to SCD-broadcast . . . . . . . . . . . . . . . . . . . . . . . . 148
8.5.2 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.7 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.8 Exercises and Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
11 Expediting Decision in Synchronous Systems with Process Crash Failures 189
11.1 Early Deciding and Stopping Interactive Consistency . . . . . . . . . . . . . . . . . . . 189
11.1.1 Early Deciding vs Early Stopping . . . . . . . . . . . . . . . . . . . . . . . . 189
11.1.2 An Early Decision Predicate . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
11.1.3 An Early Deciding and Stopping Algorithm . . . . . . . . . . . . . . . . . . . 191
11.1.4 Correctness Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
11.1.5 On Early Decision Predicates . . . . . . . . . . . . . . . . . . . . . . . . . . 194
11.1.6 Early Deciding and Stopping Consensus . . . . . . . . . . . . . . . . . . . . . 195
11.2 An Unbeatable Binary Consensus Algorithm . . . . . . . . . . . . . . . . . . . . . . . 196
11.2.1 A Knowledge-Based Unbeatable Predicate . . . . . . . . . . . . . . . . . . . 196
11.2.2 PREF0() with Respect to DIFF() . . . . . . . . . . . . . . . . . . . . . . . . 197
11.2.3 An Algorithm Based on the Predicate PREF0(): CGM . . . . . . . . . . . . . 197
11.2.4 On the Unbeatability of the Predicate PREF0() . . . . . . . . . . . . . . . . . 200
11.3 The Synchronous Condition-based Approach . . . . . . . . . . . . . . . . . . . . . . . 200
16 Consensus: Power and Implementability Limit in Crash-Prone Asynchronous Systems 287
16.1 The Total Order Broadcast Communication Abstraction . . . . . . . . . . . . . . . . . 287
16.1.1 Total Order Broadcast: Definition . . . . . . . . . . . . . . . . . . . . . . . . 287
16.1.2 A Map of Communication Abstractions . . . . . . . . . . . . . . . . . . . . . 288
16.2 From Consensus to TO-broadcast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
16.2.1 Structure of the Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
16.2.2 Description of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
16.2.3 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
16.3 Consensus and TO-broadcast Are Equivalent . . . . . . . . . . . . . . . . . . . . . . . 292
16.4 The State Machine Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
16.4.1 State Machine Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
16.4.2 Sequentially-Defined Abstractions (Objects) . . . . . . . . . . . . . . . . . . 294
16.5 A Simple Consensus-based Universal Construction . . . . . . . . . . . . . . . . . . . . 295
16.6 Agreement vs Mutual Exclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
16.7 Ledger Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
16.7.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
16.7.2 Implementation of a Ledger in CAMP n,t [TO-broadcast] . . . . . . . . . . . . 299
16.8 Consensus Impossibility in the Presence of Crashes and Asynchrony . . . . . . . . . . 300
16.8.1 The Intuition That Underlies the Impossibility . . . . . . . . . . . . . . . . . . 300
16.8.2 Refining the Definition of CAMP n,t [∅] . . . . . . . . . . . . . . . . . . . . . 301
16.8.3 Notion of Valence of a Global State . . . . . . . . . . . . . . . . . . . . . . . 303
16.8.4 Consensus Is Impossible in CAMP n,1 [∅] . . . . . . . . . . . . . . . . . . . . 304
16.9 The Frontier Between Read/Write Registers and Consensus . . . . . . . . . . . . . . . 309
16.9.1 The Main Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
16.9.2 The Notion of Consensus Number in Read/Write Systems . . . . . . . . . . . 310
16.9.3 An Illustration of Herlihy’s Hierarchy . . . . . . . . . . . . . . . . . . . . . . 310
16.9.4 The Consensus Number of a Ledger . . . . . . . . . . . . . . . . . . . . . . . 313
16.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
16.11 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
16.12 Exercises and Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
17.4 Enriching CAMP n,t [t < n/2] with an Eventual Leader . . . . . . . . . . . . . . . . . 323
17.4.1 The Weakest Failure Detector to Implement Consensus . . . . . . . . . . . . . 323
17.4.2 Implementing Consensus in CAMP n,t [t < n/2, Ω] . . . . . . . . . . . . . . 324
17.4.3 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
17.4.4 Consensus Versus Eventual Leader Failure Detector . . . . . . . . . . . . . . 329
17.4.5 Notions of Indulgence and Zero-degradation . . . . . . . . . . . . . . . . . . 329
17.4.6 Saving Broadcast Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
17.5 Enriching CAMP n,t [t < n/2] with Randomization . . . . . . . . . . . . . . . . . . . 330
17.5.1 Asynchronous Randomized Models . . . . . . . . . . . . . . . . . . . . . . . 330
17.5.2 Randomized Consensus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
17.5.3 Randomized Binary Consensus in CAMP n,t [t < n/2, LC] . . . . . . . . . . . 331
17.5.4 Randomized Binary Consensus in CAMP n,t [t < n/2, CC] . . . . . . . . . . . 334
17.6 Enriching CAMP n,t [t < n/2] with a Hybrid Approach . . . . . . . . . . . . . . . . . 337
17.6.1 The Hybrid Approach: Failure Detector and Randomization . . . . . . . . . . 337
17.6.2 A Hybrid Binary Consensus Algorithm . . . . . . . . . . . . . . . . . . . . . 338
17.7 A Paxos-inspired Consensus Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 339
17.7.1 The Alpha Communication Abstraction . . . . . . . . . . . . . . . . . . . . . 340
17.7.2 Consensus Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
17.7.3 An Implementation of Alpha in CAMP n,t [t < n/2] . . . . . . . . . . . . . . 341
17.8 From Binary to Multivalued Consensus . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.8.1 A Reduction Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.8.2 Proof of the Reduction Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 345
17.9 Consensus in One Communication Step . . . . . . . . . . . . . . . . . . . . . . . . . . 346
17.9.1 Aim and Model Assumption on t . . . . . . . . . . . . . . . . . . . . . . . . 346
17.9.2 A One Communication Step Algorithm . . . . . . . . . . . . . . . . . . . . . 346
17.9.3 Proof of the Early Deciding Algorithm . . . . . . . . . . . . . . . . . . . . . 347
17.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
17.11 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
17.12 Exercises and Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
18 Implementing Oracles in Asynchronous Systems with Process Crash Failures 353
18.1 The Two Facets of Failure Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
18.1.1 The Programming Point of View: Modular Building Block . . . . . . . . . . . 354
18.1.2 The Computability Point of View: Abstraction Ranking . . . . . . . . . . . . 354
18.2 Ω in CAMP n,t [∅]: a Direct Impossibility Proof . . . . . . . . . . . . . . . . . . . . . . 355
18.3 Constructing a Perfect Failure Detector (Class P ) . . . . . . . . . . . . . . . . . . . . 356
18.3.1 Reminder: Definition of the Class P of Perfect Failure Detectors . . . . . . . . 356
18.3.2 Use of an Underlying Synchronous System . . . . . . . . . . . . . . . . . . . 357
18.3.3 Applications Generating a Fair Communication Pattern . . . . . . . . . . . . . 358
18.3.4 The Theta Assumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
18.4 Constructing an Eventually Perfect Failure Detector (Class 3P ) . . . . . . . . . . . . . 361
18.4.1 Reminder: Definition of an Eventually Perfect Failure Detector . . . . . . . . 361
18.4.2 From Perpetual to Eventual Properties . . . . . . . . . . . . . . . . . . . . . . 361
18.4.3 Eventually Synchronous Systems . . . . . . . . . . . . . . . . . . . . . . . . 361
18.5 On the Efficient Monitoring of a Process by Another Process . . . . . . . . . . . . . . 363
18.5.1 Motivation and System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 363
18.5.2 A Monitoring Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
18.6 An Adaptive Monitoring-based Algorithm Building 3P . . . . . . . . . . . . . . . . . 366
18.6.1 Motivation and Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
18.6.2 A Monitoring-Based Adaptive Algorithm for the Failure Detector Class 3P . . 366
18.6.3 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
18.7 From the t-Source Assumption to an Ω Eventual Leader . . . . . . . . . . . . . . . . . 369
18.7.1 The 3t-Source Assumption and the Model CAMP n,t [3t-SOURCE] . . . . . 369
18.7.2 Electing an Eventual Leader in CAMP n,t [3t-SOURCE] . . . . . . . . . . . . 370
18.7.3 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
18.8 Electing an Eventual Leader in CAMP n,t [3t-MS PAT] . . . . . . . . . . . . . . . . . 372
18.8.1 A Query/Response Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
18.8.2 Electing an Eventual Leader in CAMP n,t [3t-MS PAT] . . . . . . . . . . . . 374
18.8.3 Proof of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
18.9 Building Ω in a Hybrid Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
18.10 Construction of a Biased Common Coin from Local Coins . . . . . . . . . . . . . . . . 377
18.10.1 Definition of a Biased Common Coin . . . . . . . . . . . . . . . . . . . . . . 377
18.10.2 The CORE Communication Abstraction . . . . . . . . . . . . . . . . . . . . . 377
18.10.3 Construction of a Common Coin with a Constant Bias . . . . . . . . . . . . . 380
18.10.4 On the Use of a Biased Common Coin . . . . . . . . . . . . . . . . . . . . . . 381
18.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
18.12 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
18.13 Exercises and Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
VI Appendix 409
Afterword 425
Bibliography 431
Index 453
Notation
Symbols
The notation broadcast TYPE(m), where TYPE is a message type and m a message content, is used
as a shortcut for “for each j ∈ {1, · · · , n} do send TYPE(m) to pj end for”. Hence, if it is not faulty
during its execution, pi sends the message TYPE(m) to each process, including itself. Otherwise there
is no guarantee on the reception of TYPE(m).
(In Chap. 1 only, j ∈ {1, · · · , n} is replaced by j ∈ neighborsi .)
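As a minimal sketch of how this shorthand expands (in Python, using a toy in-memory Network class whose names are illustrative assumptions, not part of the book's algorithms):

```python
# A minimal sketch of the "broadcast TYPE(m)" notation, assuming a toy
# in-memory network; the Network class and its method names are
# illustrative, not taken from the book.

class Network:
    """Point-to-point message passing among processes p1..pn."""
    def __init__(self, n):
        self.n = n
        self.inboxes = {j: [] for j in range(1, n + 1)}

    def send(self, msg_type, content, dest):
        # Deliver one point-to-point message to process p_dest.
        self.inboxes[dest].append((msg_type, content))

def broadcast(net, msg_type, content):
    # "broadcast TYPE(m)" is shorthand for:
    #   for each j in {1, ..., n} do send TYPE(m) to p_j end for
    # so the invoking process sends the message to itself as well.
    for j in range(1, net.n + 1):
        net.send(msg_type, content, j)

net = Network(3)
broadcast(net, "EST", 0)
print(net.inboxes[2])  # → [('EST', 0)]
```

Note that this sketch models only the failure-free case; the point of the notation's caveat is precisely that a process which crashes mid-broadcast may reach only an arbitrary subset of the processes.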
Acronyms (1)
CO Causal order
FIFO First in first out
TO Total order
SCD Set-constrained delivery
FC Fair channel
CRDT Conflict-free replicated data type
MS PAT Message pattern

Acronyms (2)
ADV Adversary
FD Failure detector
HB Heartbeat
MS PAT Message pattern
SO Send omission
GO General omission
MS Message scheduling assumption
LC Local coin
CC Common coin
BCCB Binary common coin with bias
List of Figures and Algorithms

3.1 Uniform reliable broadcast in CAMP n,t [- FC, t < n/2] (code for pi ) . . . . . . . . 45
3.2 Building Θ in CAMP n,t [- FC, t < n/2] (code for pi ) . . . . . . . . . . . . . . . . 50
3.3 Quiescent uniform reliable broadcast in CAMP n,t [- FC, Θ, P ] (code for pi ) . . . . 53
3.4 Quiescent uniform reliable broadcast in CAMP n,t [- FC, Θ, HB ] (code for pi ) . . . 56
3.5 An example of a network with fair paths . . . . . . . . . . . . . . . . . . . . . . . . 60
7.1 Building a failure detector of the class Σ in CAMP n,t [t < n/2] . . . . . . . . . . . 120
7.2 An algorithm for an atomic SWSR register in CAMP n,t [Σ] . . . . . . . . . . . . . 121
7.3 Extracting Σ from a register D-based algorithm A . . . . . . . . . . . . . . . . . . 122
7.4 Extracting Σ from a failure detector-based register algorithm A (code for pi ) . . . . 124
7.5 From atomic registers to URB-broadcast (code for pi ) . . . . . . . . . . . . . . . . 127
7.6 From the failure detector class Σ to the URB abstraction (1 ≤ t < n) . . . . . . . . 128
7.7 Two examples of the hybrid communication model . . . . . . . . . . . . . . . . . . 129
8.1 An implementation of SCD-broadcast in CAMP n,t [t < n/2] (code for pi ) . . . . . 134
8.2 Message pattern introduced in Lemma 16 . . . . . . . . . . . . . . . . . . . . . . . 137
8.3 SCD-broadcast-based communication pattern (code for pi ) . . . . . . . . . . . . . . 139
8.4 Construction of an MWMR atomic register in CAMP n,t [SCD-broadcast] (code for pi ) . . . 140
8.5 Construction of an MWMR sequentially consistent register in CAMP n,t [SCD-broadcast] (code for pi ) . . . 143
8.6 Example of a run of an MWMR atomic snapshot object . . . . . . . . . . . . . . . 143
8.7 Construction of an MWMR atomic snapshot object in CAMP n,t [SCD-broadcast] . . 144
8.8 Construction of an atomic counter in CAMP n,t [SCD-broadcast] (code for pi ) . . . . 145
10.1 A simple (unfair) t-resilient consensus algorithm in CSMP n,t [∅] (code for pi ) . . . . 175
10.2 A simple (fair) t-resilient consensus algorithm in CSMP n,t [∅] (code for pi ) . . . . . 176
10.3 The second case of the agreement property (with t = 3 crashes) . . . . . . . . . . . 177
10.4 A t-resilient interactive consistency algorithm in CSMP n,t [∅] (code for pi ) . . . . . 179
10.5 Three possible one-round extensions from Et−1 . . . . . . . . . . . . . . . . . . . . 183
10.6 Extending the k-round execution Ek . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.7 Extending two (k + 1)-round executions . . . . . . . . . . . . . . . . . . . . . . . 185
10.8 Extending again two (k + 1)-round executions . . . . . . . . . . . . . . . . . . . . 185
13.1 A consensus-based NBAC algorithm in CSMP n,t [∅] (code for pi ) . . . . . . . . . . 232
13.2 Impossibility of having both fast commit and fast abort when t ≥ 3 (E3) . . . . . . . 234
13.3 Impossibility of having both fast commit and fast abort when t ≥ 3 (E4, E5) . . . . 235
13.4 Fast commit and weak fast abort NBAC in CSMP n,t [3 ≤ t < n] (code for pi ) . . . . 237
13.5 Fast abort and weak fast commit NBAC in CSMP n,t [3 ≤ t < n] (code for pi ) . . . . 242
13.6 Fast commit and fast abort NBAC in the system model CSMP n,t [t ≤ 2] (code for pi ) 243
14.1 Interactive consistency for four processes despite one Byzantine process (code for pi ) 248
14.2 Proof of the interactive consistency algorithm in BSMP n,t [t = 1, n = 4] . . . . . . 249
14.3 Communication graph (left) and behavior of the t Byzantine processes (right) . . . . 251
14.4 EIG tree for n = 4 and t = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
14.5 Byzantine EIG consensus algorithm for BSMP n,t [t < n/3] . . . . . . . . . . . . . 253
14.6 EIG trees of the correct processes at the end of the first round . . . . . . . . . . . . 254
14.7 EIG tree tree2 at the end of the second round . . . . . . . . . . . . . . . . . . . . . 255
14.8 Constant message size Byzantine consensus in BSMP n,t [t < n/4] . . . . . . . . . . 258
14.9 From binary to multivalued Byzantine consensus in BSMP n,t [t < n/3] (code for pi ) 260
14.10 Proof of Property PR2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
14.11 Deterministic vs non-deterministic scenarios . . . . . . . . . . . . . . . . . . . . . 263
14.12 A Byzantine signature-based consensus algorithm in BSMP n,t [SIG; t < n/2] (code for pi ) . . . 265
15.1 Stacking of abstraction layers for distributed renaming in CAMP n,t [t < n/2] . . . . 273
15.2 A simple snapshot-based size-adaptive (2p − 1)-renaming algorithm (code for pi ) . 274
15.3 A simple snapshot-based approximate algorithm (code for pi ) . . . . . . . . . . . . 277
15.4 What is captured by Lemma 62 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
15.5 Safe agreement in CAMP n,t [t < n/2] (code for process pi ) . . . . . . . . . . . . . 281
16.1 Adding total order message delivery to various URB abstractions . . . . . . . . . . 288
16.2 Adding total order message delivery to the URB abstraction . . . . . . . . . . . . . 289
16.3 Building the TO-broadcast abstraction in CAMP n,t [CONS] (code for pi ) . . . . . . 290
16.4 Building the consensus abstraction in CAMP n,t [TO-broadcast] (code for pi ) . . . . 293
16.5 A TO-broadcast-based universal construction (code for pi ) . . . . . . . . . . . . . . 295
16.6 A state machine does not allow us to retrieve the past . . . . . . . . . . . . . . . . . 298
16.7 Building the consensus abstraction in CAMP n,t [LEDGER] (code for pi ) . . . . . . 298
16.8 A TO-broadcast-based ledger construction (code for pi ) . . . . . . . . . . . . . . . 299
16.9 Synchrony rules out uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
16.10 To wait or not to wait in presence of asynchrony and failures? . . . . . . . . . . . . 301
16.11 Bivalent vs univalent global states . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
16.12 There is a bivalent initial configuration . . . . . . . . . . . . . . . . . . . . . . . . 305
16.13 Illustrating the sets S1 and S2 used in Lemma 70 . . . . . . . . . . . . . . . . . . . 306
16.14 Σ2 contains 0-valent and 1-valent global states . . . . . . . . . . . . . . . . . . . . 307
16.15 Valence contradiction when i = i′ . . . . . . . . . . . . . . . . . . . . . . . . . . 307
16.16 Valence contradiction when i ≠ i′ . . . . . . . . . . . . . . . . . . . . . . . . . . 308
16.17 k-sliding window register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
16.18 Solving consensus for k processes from a k-sliding window (code for pi ) . . . . . . 311
16.19 Schedule illustration: case 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
16.20 Schedule illustration: case 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
16.21 Building the TO-broadcast abstraction in CAMP n,t [- FC, CONS] (code for pi ) . . . 316
17.1 Binary consensus in CAMP n,t [t < n/2, MS] (code for pi ) . . . . . . . . . . . . . 319
17.2 A coordinator-based consensus algorithm for CAMP n,t [P ] (code for pi ) . . . . . . 322
17.3 Ω is a consensus computability lower bound . . . . . . . . . . . . . . . . . . . . . . 325
17.4 An algorithm implementing consensus in CAMP n,t [t < n/2, Ω] (code for pi ) . . . 326
17.5 The second phase for AS n,t [t < n/3, Ω] (code for pi ) . . . . . . . . . . . . . . . . 330
17.6 A randomized binary consensus algorithm for CAMP n,t [t < n/2, LC] (code for pi ) 332
17.7 What is broken by a random oracle . . . . . . . . . . . . . . . . . . . . . . . . . . 333
17.8 A randomized binary consensus algorithm for CAMP n,t [t < n/2, CC] (code for pi ) 336
17.9 A hybrid binary consensus algorithm for CAMP n,t [t < n/2, Ω, LC] (code for pi ) . . 338
17.10 An Alpha-based consensus algorithm in CAMP n,t [t < n/2, Ω] (code for pi ) . . . . 340
17.11 An algorithm implementing Alpha in CAMP n,t [t < n/2] . . . . . . . . . . . . . . 342
17.12 A reduction of multivalued to binary consensus in CAMP n,t [BC] (code for pi ) . . . 344
17.13 Consensus in one communication step in CAMP n,t [t < n/3, CONS] (code for pi ) . 347
17.14 Is this consensus algorithm for CAMP n,t [t < n/2, AΩ] correct? (code for pi ) . . . 351
19.1 Binary consensus in BAMP n,t [t < n/3, TMS] (code for pi ) . . . . . . . . . . . . . 387
19.2 An algorithm implementing BV-broadcast in BAMP n,t [t < n/3] (code for pi ) . . . 390
19.3 A BV-broadcast-based binary consensus algorithm for the model BAMP n,t [n >
3t, CC] (code for pi ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
19.4 From multivalued to binary Byzantine consensus in BAMP n,t [t < n/3, BBC]
(code of pi ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
19.5 VBB-broadcast on top of reliable broadcast in BAMP n,t [t < n/3] (code of pi ) . . . 400
19.6 From multivalued to binary consensus in BAMP n,t [t < n/3, BBC] (code for pi ) . . 403
19.7 Local blockchain representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
14.1 Upper bounds on the number of faulty processes for consensus . . . . . . . . . . . . . 245
Part I
Introductory Chapter
BLACK CURRANT JAM AND MARMALADE.
No fruit jellies so easily as black currants when they are ripe; and
their juice is so rich and thick that it will bear the addition of a very
small quantity of water sometimes, without causing the preserve to
mould. When the currants have been very dusty, we have
occasionally had them washed and drained before they were used,
without any injurious effects. Jam boiled down in the usual manner
with this fruit is often very dry. It may be greatly improved by taking
out nearly half the currants when it is ready to be potted, pressing
them well against the side of the preserving-pan to extract the juice:
this leaves the remainder far more liquid and refreshing than when
the skins are all retained. Another mode of making fine black currant
jam—as well as that of any other fruit—is to add one pound at least
of juice, extracted as for jelly, to two pounds of the berries, and to
allow sugar for it in the same proportion as directed for each pound
of them.
For marmalade or paste, which is most useful in affections of the
throat and chest, the currants must be stewed tender in their own
juice, and then rubbed through a sieve. After ten minutes’ boiling,
sugar in fine powder must be stirred gradually to the pulp, off the fire,
until it is dissolved: a few minutes more of boiling will then suffice to
render the preserve thick, and it will become quite firm when cold.
More or less sugar can be added to the taste, but it is not generally
liked very sweet.
Best black currant jam.—Currants, 4 lbs.; juice of currants, 2 lbs.:
15 to 20 minutes’ gentle boiling. Sugar, 3 to 4 lbs.: 10 minutes.
Marmalade, or paste of black currants.—Fruit, 4 lbs.: stewed in its
own juice 15 minutes, or until quite soft. Pulp boiled 10 minutes.
Sugar, from 7 to 9 oz. to the lb.: 10 to 14 minutes.
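The proportions above reduce to simple arithmetic. As a small illustrative sketch in modern notation (the function name and its handling of ounces are my own devising, not part of the receipt), the sugar wanted for the marmalade, at the stated 7 to 9 oz. to the pound of pulp, may be reckoned for any weight of fruit:

```python
def sugar_range_lbs(fruit_lbs, low_oz=7, high_oz=9):
    """Pounds of sugar for the black currant marmalade, at the
    stated 7 to 9 oz. of sugar per pound of pulp (16 oz. = 1 lb.)."""
    return (fruit_lbs * low_oz / 16, fruit_lbs * high_oz / 16)

# For the 4 lbs. of fruit named in the receipt:
print(sugar_range_lbs(4))  # (1.75, 2.25)
```

So 4 lbs. of stewed pulp call for between 1 lb. 12 oz. and 2 lbs. 4 oz. of sugar, according to taste.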
Obs.—The following are the receipts originally inserted in this
work, and which we leave unaltered.
To six pounds of the fruit, stripped carefully from the stalks, add
four pounds and a half of sugar. Let them heat gently, but as soon as
the sugar is dissolved boil the preserve rapidly for fifteen minutes. A
more common kind of jam may be made by boiling the fruit by itself
from ten to fifteen minutes, and for ten minutes after half its weight of
sugar has been added to it.
Black currants, 6 lbs.; sugar, 4-1/2 lbs.: 15 minutes. Or: fruit, 6 lbs.:
10 to 15 minutes. Sugar, 3 lbs.: 10 minutes.
Obs.—There are few preparations of fruit so refreshing and so
useful in illness as those of black currants, and it is therefore
advisable always to have a store of them, and to have them well and
carefully made.
NURSERY PRESERVE.
(A New Receipt.)
The market-price of our English pines is generally too high to
permit their being very commonly used for preserve; and though
some of those imported from the West Indies are sufficiently well-
flavoured to make excellent jam, they must be selected with
judgment for the purpose, or they will possibly not answer for it. They
should be fully ripe, but perfectly sound: should the stalk end appear
mouldy or discoloured, the fruit should be rejected. The degree of
flavour which it possesses may be ascertained with tolerable
accuracy by its odour; for if of good quality, and fit for use, it will be
very fragrant. After the rinds have been pared off, and every dark
speck taken from the flesh, the pines may be rasped on a fine and
delicately clean grater, or sliced thin, cut up quickly into dice, and
pounded in a stone or marble mortar; or a portion may be grated,
and the remainder reduced to pulp in the mortar. Weigh, and then
heat and boil it gently for ten minutes; draw it from the fire, and stir to
it by degrees fourteen ounces of sugar to the pound of fruit; boil it
until it thickens and becomes very transparent, which it will be in
about fifteen minutes, should the quantity be small: it will require a
rather longer time if it be large. The sugar ought to be of the best
quality and beaten quite to powder; and for this, as well as for every
other kind of preserve, it should be dry. A remarkably fine
marmalade may be compounded of English pines only, or even with
one English pine of superior growth, and two or three of the West
Indian mixed with it; but all when used should be fully ripe, without at
all verging on decay; for in no other state will their delicious flavour
be in its perfection.
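The proportion given above, fourteen ounces of sugar to the pound of fruit, may likewise be worked for any batch; a brief sketch (the helper and its pounds-and-ounces result are my own, not the author's):

```python
def pine_sugar(fruit_lbs):
    """Sugar for the pine preserve: 14 oz. to each pound of
    weighed pulp, returned as (whole pounds, remaining ounces)."""
    total_oz = fruit_lbs * 14
    return divmod(total_oz, 16)

print(pine_sugar(3))  # (2, 10), i.e. 2 lbs. 10 oz. of sugar
```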
In making the jam always avoid placing the preserving-pan flat
upon the fire, as this of itself will often convert what would otherwise
be excellent preserve, into a strange sort of compound, for which it is
difficult to find a name, and which results from the sugar being
subjected—when in combination with the acid of the fruit—to a
degree of heat which converts it into caramel or highly-boiled barley-
sugar. When there is no regular preserving-stove, a flat trivet should
be securely placed across the fire of the kitchen-range to raise the
pan from immediate contact with the burning coals, or charcoal. It is
better to grate down, than to pound the fruit for the present receipt
should any parts of it be ever so slightly tough; and it should then be
slowly stewed until quite tender before any sugar is added to it; or
with only a very small quantity stirred in should it become too dry. A
superior marmalade even to this, might probably be made by adding
to the rasped pines a little juice drawn by a gentle heat, or expressed
cold, from inferior portions of the fruit; but this is only supposition.
A FINE PRESERVE OF THE GREEN ORANGE PLUM.
When the plums are thoroughly ripe, take off the skins, stone,
weigh, and boil them quickly without sugar for fifty minutes, keeping
them well stirred; then to every four pounds add three of good sugar
reduced quite to powder, boil the preserve from five to eight minutes
longer, and clear off the scum perfectly before it is poured into the
jars. When the flesh of the fruit will not separate easily from the
stones, weigh and throw the plums whole into the preserving-pan,
boil them to a pulp, pass them through a sieve, and deduct the
weight of the stones from them when apportioning the sugar to the
jam. The Orleans plum may be substituted for greengages in this
receipt.
Greengages, stoned and skinned, 6 lbs.: 50 minutes. Sugar, 4-1/2
lbs.: 5 to 8 minutes.
PRESERVE OF THE MAGNUM BONUM, OR MOGUL PLUM.
Prepare, weigh, and boil the plums for forty minutes; stir to them
half their weight of good sugar beaten fine, and when it is dissolved
continue the boiling for ten additional minutes, and skim the preserve
carefully during the time. This is an excellent marmalade, but it may
be rendered richer by increasing the proportion of sugar. The
blanched kernels of a portion of the fruit stones will much improve its
flavour, but they should be mixed with it only two or three minutes
before it is taken from the fire. When the plums are not entirely ripe,
it is difficult to free them from the stones and skins: they should then
be boiled down and pressed through a sieve, as directed for
greengages, in the receipt above.
Mogul plums, skinned and stoned, 6 lbs.: 40 minutes. Sugar, 3
lbs.: 5 to 8 minutes.
TO DRY OR PRESERVE MOGUL PLUMS IN SYRUP.
Pare the plums, but do not remove the stalks or stones; take their
weight of dry sifted sugar, lay them into a deep dish or bowl, and
strew it over them; let them remain thus for a night, then pour them
gently into a preserving-pan with all the sugar, heat them slowly, and
let them just simmer for five minutes; in two days repeat the process,
and do so again and again at an interval of two or three days, until
the fruit is tender and very clear; put it then into jars, and keep it in
the syrup, or drain and dry the plums very gradually, as directed for
other fruit. When they are not sufficiently ripe for the skin to part from
them readily, they must be covered with spring water, placed over a
slow fire, and just scalded until it can be stripped from them easily.
They may also be entirely prepared by the receipt for dried apricots
which follows, a page or two from this.
MUSSEL PLUM CHEESE AND JELLY.
Fill large stone jars with the fruit, which should be ripe, dry, and
sound; set them into an oven from which the bread has been drawn
several hours, and let them remain all night; or, if this cannot
conveniently be done, place them in pans of water, and boil them
gently until the plums are tender, and have yielded their juice to the
utmost. Pour this from them, strain it through a jelly bag, weigh, and
then boil it rapidly for twenty-five minutes. Have ready, broken small,
three pounds of sugar for four of the juice, stir them together until it is
dissolved, and then continue the boiling quickly for ten minutes
longer, and be careful to remove all the scum. Pour the preserve into
small moulds or pans, and turn it out when it is wanted for table: it
will be very fine, both in colour and in flavour.
Juice of plums, 4 lbs.: 25 minutes. Sugar, 3 lbs.: 10 minutes.
The cheese.—Skin and stone the plums from which the juice has
been poured, and after having weighed, boil them an hour and a
quarter over a brisk fire, and stir them constantly; then to three
pounds of fruit add one of sugar, beaten to powder; boil the preserve
for another half hour, and press it into shallow pans or moulds.
Plums, 3 lbs.: 1-1/4 hour. Sugar, 1 lb.: 30 minutes.
APRICOT MARMALADE.
(French Receipt.)
Take apricots which have attained their full growth and colour, but
before they begin to soften; weigh, and wipe them lightly; make a
small incision across the top of each plum, pass the point of a knife
through the stalk end, and gently push out the stones without
breaking the fruit; next, put the apricots into a preserving-pan, with
sufficient cold water to float them easily; place it over a moderate
fire, and when it begins to boil, should the apricots be quite tender,
lift them out and throw them into more cold water, but simmer them,
otherwise, until they are so. Take the same weight of sugar that there
was of the fruit before it was stoned, and boil it for ten minutes with a
quart of water to the four pounds; skim the syrup carefully, throw in
the apricots (which should previously be well drained on a soft cloth,
or on a sieve), simmer them for one minute, and set them by in it
until the following day, then drain it from them, boil it for ten minutes,
and pour it on them the instant it is taken from the fire; in forty-eight
hours repeat the process, and when the syrup has boiled ten
minutes, put in the apricots, and simmer them from two to four
minutes, or until they look quite clear. They may be stored in the
syrup until wanted for drying, or drained from it, laid separately on
slates or dishes, and dried very gradually: the blanched kernels may
be put inside the fruit, or added to the syrup.
Apricots, 4 lbs., scalded until tender; sugar 4 lbs.; water, 1 quart:
10 minutes. Apricots, in syrup, 1 minute; left 24 hours. Syrup, boiled
again, 10 minutes, and poured on fruit: stand 2 days. Syrup, boiled
again, 10 minutes, and apricots 2 to 4 minutes, or until clear.
Obs.—The syrup should be quite thick when the apricots are put in
for the last time; but both fruit and sugar vary so much in quality and
in the degree of boiling which they require, that no invariable rule can
be given for the latter. The apricot syrup strained very clear, and
mixed with twice its measure of pale French brandy, makes an
agreeable liqueur, which is much improved by infusing in it for a few
days half an ounce of the fruit-kernels, blanched and bruised, to the
quart of liquor.
We have found that cherries prepared by either of the receipts
which we have given for preserving them with sugar, if thrown into
the apricot syrup when partially dried, just scalded in it, and left for a
fortnight, then drained and dried as usual, become a delicious
sweetmeat. Mussel, imperatrice, or any other plums, when quite ripe,
if simmered in it very gently until they are tender, and left for a few
days to imbibe its flavour, then drained and finished as usual, are
likewise excellent.
PEACH JAM, OR MARMALADE.
The fruit for this preserve, which is a very delicious one, should be
finely flavoured, and quite ripe, though perfectly sound. Pare, stone,
weigh, and boil it quickly for three-quarters of an hour, and do not fail
to stir it often during the time; draw it from the fire, and mix with it ten
ounces of well-refined sugar, rolled or beaten to powder, for each
pound of the peaches; clear it carefully from scum, and boil it briskly
for five minutes; throw in the strained juice of one or two good
lemons; continue the boiling for three minutes only, and pour out the
marmalade. Two minutes after the sugar is stirred to the fruit, add
the blanched kernels of part of the peaches.
Peaches, stoned and pared, 4 lbs.; 3/4 hour. Sugar, 2-1/2 lbs.: 2
minutes. Blanched peach-kernels: 3 minutes. Juice of 2 small
lemons: 3 minutes.
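The summary figures agree with the proportion named in the text, ten ounces of sugar to each pound of peaches; a small sketch confirming the arithmetic (the function is my own illustration, not part of the receipt):

```python
def peach_sugar_lbs(peach_lbs):
    """Ten ounces of sugar per pound of stoned, pared peaches."""
    return peach_lbs * 10 / 16

# 4 lbs. of peaches call for 2-1/2 lbs. of sugar,
# matching the summary above.
print(peach_sugar_lbs(4))  # 2.5
```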
Obs.—This jam, like most others, is improved by pressing the fruit
through a sieve after it has been partially boiled. Nothing can be finer
than its flavour, which would be injured by adding the sugar at first;
and a larger proportion renders it cloyingly sweet. Nectarines and
peaches mixed, make an admirable preserve.
DAMSON JAM.
The fruit for this jam should be freshly gathered and quite ripe.
Split, stone, weigh, and boil it quickly for forty minutes; then stir in
half its weight of good sugar roughly powdered, and when it is
dissolved, give the preserve fifteen minutes additional boiling,
keeping it stirred, and thoroughly skimmed.
Damsons, stoned, 6 lbs.: 40 minutes. Sugar, 3 lbs.: 15 minutes.
Obs.—A more refined preserve is made by pressing the fruit
through a sieve after it is boiled tender; but the jam is excellent
without.