A Concurrent ML Library in Concurrent Haskell
Avik Chaudhuri
University of Maryland, College Park
avik@cs.umd.edu
Abstract

In Concurrent ML, synchronization abstractions can be defined and passed as values, much like functions in ML. This mechanism admits a powerful, modular style of concurrent programming, called higher-order concurrent programming. Unfortunately, it is not clear whether this style of programming is possible in languages such as Concurrent Haskell, that support only first-order message passing. Indeed, the implementation of synchronization abstractions in Concurrent ML relies on fairly low-level, language-specific details.

In this paper we show, constructively, that synchronization abstractions can be supported in a language that supports only first-order message passing. Specifically, we implement a library that makes Concurrent ML-style programming possible in Concurrent Haskell. We begin with a core, formal implementation of synchronization abstractions in the π-calculus. Then, we extend this implementation to encode all of Concurrent ML's concurrency primitives (and more!) in Concurrent Haskell.

Our implementation is surprisingly efficient, even without possible optimizations. Preliminary experiments suggest that our library can consistently outperform OCaml's standard library of Concurrent ML-style primitives.

At the heart of our implementation is a new distributed synchronization protocol that we prove correct. Unlike several previous translations of synchronization abstractions in concurrent languages, we remain faithful to the standard semantics for Concurrent ML's concurrency primitives. For example, we retain the symmetry of choose, which can express selective communication. As a corollary, we establish that implementing selective communication on distributed machines is no harder than implementing first-order message passing on such machines.

1. Introduction

As famously argued by Reppy (1999), there is a fundamental conflict between selective communication (Hoare 1978) and abstraction in concurrent programs. For example, consider a protocol run between a client and a pair of servers. In this protocol, selective communication may be necessary for liveness: if one of the servers is down, the client should be able to interact with the other. This may require some details of the protocol to be exposed. At the same time, abstraction may be necessary for safety: the client should not be able to interact with a server in an unexpected way. This may in turn require those details to be hidden.

An elegant way of resolving this conflict, proposed by Reppy (1992), is to separate the process of synchronization from the mechanism for describing synchronization protocols. More precisely, Reppy introduces a new type constructor, event, to type synchronous operations in much the same way as -> ("arrow") types functional values. A synchronous operation, or event, describes a synchronization protocol whose execution is delayed until it is explicitly synchronized. Thus, roughly, an event is analogous to a function abstraction, and event synchronization is analogous to function application.

This abstraction mechanism is the essence of a powerful, modular style of concurrent programming, called higher-order concurrent programming. In particular, programmers can describe sophisticated synchronization protocols as event values, and compose them modularly. Complex event values can be constructed from simpler ones by applying suitable combinators. For instance, selective communication can be expressed as a choice among event values, and programmer-defined abstractions can be used in such communication without breaking those abstractions (Reppy 1992).

Reppy implements events, as well as a collection of such suitable combinators, in an extension of ML called Concurrent ML (CML) (Reppy 1999). We review these primitives informally in Section 2; their formal semantics can be found in (Reppy 1992). The implementation of these primitives in CML relies on fairly low-level, language-specific details, such as support for continuations and signals (Reppy 1999). In turn, these primitives immediately support higher-order concurrent programming in CML.

Other languages, such as Concurrent Haskell (Jones et al. 1996), seem to be more modest in their design. Following the π-calculus (Milner et al. 1992), such languages support only first-order message passing. While functions for first-order message passing can be encoded in CML, it is unclear whether, conversely, the concurrency primitives of CML can be expressed in those languages.

Contributions In this paper, we show that CML-style concurrency primitives can in fact be implemented as a library, in a language that already supports first-order message passing. Such a library makes higher-order concurrent programming possible in a language such as Concurrent Haskell. Going further, our implementation has other interesting consequences. For instance, the designers of Concurrent Haskell deliberately avoid a CML-style choice primitive (Jones et al. 1996, Section 5), partly concerned that such a primitive may complicate a distributed implementation of Concurrent Haskell. By showing that such a primitive can be encoded in Concurrent Haskell itself, we eliminate that concern.
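Concurrent Haskell's first-order substrate is visible in a few lines. The sketch below is ours, not from the paper; it uses the standard Control.Concurrent primitives forkIO and MVar to pass a single message between two threads, which is the kind of first-order message passing the library is built from.

```haskell
-- First-order message passing in Concurrent Haskell: an MVar is a
-- one-place buffer, and takeMVar/putMVar are the communication
-- primitives.  This is the substrate a CML-style library builds on.
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  box <- newEmptyMVar
  -- A child thread sends a single first-order message.
  _ <- forkIO (putMVar box "hello from the child thread")
  -- The parent blocks until the message arrives.
  msg <- takeMVar box
  putStrLn msg
```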

At the heart of our implementation is a new distributed protocol for synchronization of events. Our protocol is carefully designed to ensure safety, progress, and fairness. In Section 3, we formalize this protocol as an abstract state machine, and prove its correctness. Then, in Section 4, we describe a concrete implementation of this protocol in the π-calculus, and prove its correctness as well. This implementation can serve as a foundation for other implementations in related languages. Building on this implementation, in Sections 5, 6, and 7, we show how to encode all of CML's concurrency primitives, and more, in Concurrent Haskell. Our implementation is very concise, requiring less than 150 lines of code; in contrast, a previous implementation in Haskell requires more than 1300.

In Section 8, we compare the performance of our library against OCaml's standard library of CML-style primitives. Our implementation consistently outperforms the latter, even without possible optimizations.

1
2009/3/2

Finally, unlike several previous implementations of CML-style primitives in other languages, we remain faithful to the standard semantics for those primitives (Reppy 1999). For example, we retain the symmetry of choose, which can express selective communication. Indeed, we seem to be the first to implement a CML library that relies purely on first-order message passing, and preserves the standard semantics. We defer a more detailed discussion on related work to Section 9.

2. Overview of CML

In this section, we present a brief overview of CML's concurrency primitives. (Space constraints prevent us from motivating these primitives any further; the interested reader can find a comprehensive account of these primitives, with several programming examples, in (Reppy 1999).) We provide a small example at the end of this section.

Note that channel and event are polymorphic type constructors in CML, as follows:

• The type channel tau is given to channels that carry values of type tau.
• The type event tau is given to events that return values of type tau on synchronization (see the function sync below).

The combinators receive and transmit build events for synchronous communication.

receive : channel tau -> event tau
transmit : channel tau -> tau -> event ()

• receive c returns an event that, on synchronization, accepts a message M on channel c and returns M. Such an event must synchronize with transmit c M.
• transmit c M returns an event that, on synchronization, sends the message M on channel c and returns () (that is, "unit"). Such an event must synchronize with receive c.
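The types above can be illustrated in Haskell with a deliberately naive sketch, assuming we represent an event as a delayed IO action over a standard Chan. This is our simplification, not the paper's implementation: Chan is buffered rather than synchronous, and this representation cannot support a faithful choose, which is precisely the difficulty the paper's protocol addresses.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- Naive sketch: an event is just a delayed IO action; sync runs it.
newtype Event a = Event (IO a)

receive :: Chan a -> Event a
receive c = Event (readChan c)

transmit :: Chan a -> a -> Event ()
transmit c x = Event (writeChan c x)

sync :: Event a -> IO a
sync (Event io) = io

main :: IO ()
main = do
  c <- newChan
  -- One thread transmits on c; the main thread receives from it.
  _ <- forkIO (sync (transmit c (42 :: Int)))
  n <- sync (receive c)
  print n
```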

Perhaps the most powerful of CML's concurrency primitives is the combinator choose; it can nondeterministically select an event from a list of events, so that the selected event can be synchronized. In particular, choose can express selective communication. Several implementations need to restrict the power of choose in order to tame it (Russell 2001; Reppy and Xiao 2008). Our implementation is designed to avoid such problems (see Section 9).

choose : [event tau] -> event tau

• choose V returns an event that, on synchronization, synchronizes one of the events in list V and "aborts" the other events.

Several events may be aborted on a selection. The combinator wrapabort can specify an action that is spawned if an event is aborted.

wrapabort : (() -> ()) -> event tau -> event tau

• wrapabort f v returns an event that either synchronizes the event v, or, if aborted, spawns a thread that runs the code f ().

The combinators guard and wrap can specify pre- and post-synchronization actions.

guard : (() -> event tau) -> event tau
wrap : event tau -> (tau -> tau') -> event tau'

• guard f returns an event that, on synchronization, synchronizes the event returned by the code f ().
• wrap v f returns an event that, on synchronization, synchronizes the event v and applies the function f to the result.

Finally, the function sync can synchronize an event and return the result.

sync : event tau -> tau
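The wrap and guard combinators can be illustrated on a deliberately naive Haskell representation of events as delayed IO actions. This representation is our assumption, not the paper's implementation, and it ignores choose entirely; note also that in Haskell, pre-synchronization effects naturally live in IO, so guard here takes an IO action rather than CML's pure function.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- Naive sketch (our assumption): an event is a delayed IO action.
newtype Event a = Event (IO a)

sync :: Event a -> IO a
sync (Event io) = io

receive :: Chan a -> Event a
receive c = Event (readChan c)

transmit :: Chan a -> a -> Event ()
transmit c x = Event (writeChan c x)

-- wrap: apply a post-synchronization function to the result.
wrap :: Event a -> (a -> b) -> Event b
wrap (Event io) f = Event (fmap f io)

-- guard: run a pre-synchronization action that builds the event.
guard :: IO (Event a) -> Event a
guard mk = Event (mk >>= sync)

main :: IO ()
main = do
  c <- newChan
  _ <- forkIO (sync (transmit c (21 :: Int)))
  -- The guard action runs first, then the wrapped receive.
  n <- sync (guard (do putStrLn "pre-sync"
                       return (wrap (receive c) (* 2))))
  print n
```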

By design, an event can synchronize only at some "point", where a message is either sent or accepted on a channel. Such a point, called the commit point, may be selected among several other candidates at run time. Furthermore, some code may be run before, and after, synchronization, as specified by guard functions, by wrap functions that enclose the commit point, and by wrapabort functions that do not enclose the commit point.

For example, consider the following value of type event (). (Here, c and c' are values of type channel ().)
val v =
  choose
    [guard (fn () ->
       ...;
       wrapabort ...
         (choose [wrapabort ... (transmit c ());
                  wrap (transmit c' ()) ... ] ) );
     guard (fn () ->
       ...;
       wrap
         (wrapabort ... (receive c))
         ... ) ]
The event v describes a fairly complicated protocol that, on synchronization, selects among the communication events transmit c (), transmit c' (), and receive c, and runs some code (elided by ...s) before and after synchronization. Now, suppose that we run the following ML program.

val _ =
  spawn (fn () -> sync v);
  sync (receive c')

This program spawns sync v in parallel with sync (receive c'). In this case, the event transmit c' () is selected inside v, so that it synchronizes with receive c'. The figure below depicts sync v as a tree. The point marked • is the commit point; this point is selected among the other candidates, marked ◦, at run time. Furthermore, (only) code specified by the combinators marked in boxes is run before and after synchronization, following the semantics outlined above.

[Figure: sync v depicted as a tree of combinators. The root choose branches into the two guards; beneath them, wrap and wrapabort nodes enclose the candidate points, marked ◦, over transmit c () and receive c, and the commit point, marked •, over transmit c' ().]
3. A distributed protocol for synchronization
We now present a distributed protocol for synchronizing events. We focus on events that are built with the combinators receive, transmit, and choose. While the other combinators are important for describing computations, they do not fundamentally affect the nature of the protocol; we consider them later, in Sections 5 and 6.
3.1 A source language
For brevity, we simplify the syntax of the source language. Let c range over channels. We use the following notations: →φ denotes a sequence of the form φ1, ..., φn, where i ∈ 1..n; furthermore, {→φ} denotes the set {φ1, ..., φn}, and [→φ] denotes the list [φ1, ..., φn].
The syntax of the language is as follows.
\u2022Actions\u03b1,\u03b2,...are of the form cor c(input or output on
c). Informally, actions model communication events built with
receiveand transmit.
\u2022Programsare of the formS1| ...| Sm (parallel composition
ofS1 ,...,Sm), where eachSk (k\u22081..m) is either an action
\u03b1, or a selection of actions,select(\u2212\u2192\u03b1i). Informally, a selection
of actions models the synchronization of a choice of events,
following the CML functionselect.
select : [event tau] -> tau
select V = sync (choose V)
Further, we consider only the following local reduction rule:

    c̄ ∈ {→αi}    c ∈ {→βj}
    ---------------------------------------------- (SEL COMM)
    select(→αi) | select(→βj)  →  c̄ | c

This rule models selective communication. We also consider the usual structural rules for parallel composition. However, we ignore reduction of actions at this level of abstraction.
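The (SEL COMM) rule can be modeled as a pure function. The toy Haskell model below is our illustration, not the paper's formalism; names like Action and selComm are assumptions. It represents actions as input or output on a named channel and commits the first complementary pair found across two selects.

```haskell
-- Toy model of (SEL COMM): an action is an input or output on a
-- named channel; two selects communicate on any complementary pair.
data Action = Send String | Recv String deriving (Eq, Show)

-- A Send on c is complementary to a Recv on the same c.
complementary :: Action -> Action -> Bool
complementary (Send c) (Recv d) = c == d
complementary (Recv c) (Send d) = c == d
complementary _ _ = False

-- selComm tries to commit one pair of complementary actions drawn
-- from the two selects, returning the committed pair, if any.
selComm :: [Action] -> [Action] -> Maybe (Action, Action)
selComm as bs =
  case [ (a, b) | a <- as, b <- bs, complementary a b ] of
    []      -> Nothing
    (p : _) -> Just p

main :: IO ()
main = do
  print (selComm [Send "c", Recv "d"] [Recv "c"])
  print (selComm [Send "c"] [Send "c"])
```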

3.2 A distributed abstract state machine for synchronization

Our synchronization protocol is run by a distributed system of principals that include channels, points, and synchronizers. Informally, every action is associated with a point, and every select is associated with a synchronizer.

The reader may draw an analogy between our setting and one of arranging marriages, by viewing points as prospective brides and grooms, channels as matchmakers, and synchronizers as parents whose consents are necessary for marriages.

We formalize our protocol as a distributed abstract state machine that implements the rule (SEL COMM). Let σ range over states of the machine. These states are built by parallel composition |, inaction 0, and name creation ν (Milner et al. 1992) over various states of principals.

States of the machine σ:

    σ ::=               states of the machine
      σ | σ'              parallel composition
      0                   inaction
      (ν →pi) σ           name creation
      ς                   state of principals

The various states of principals are shown in Figure 1. Roughly, principals in specific states react with each other to cause transitions in the machine. The rules that govern these reactions appear later in the section.

Let p and s range over points and synchronizers. A synchronizer can be viewed as a partial function from points to actions; we represent this function as a parallel composition of bindings of the form p ↦ α. Further, we require that each point is associated with a unique synchronizer, that is, for any s and s', s ≠ s' ⇒ dom(s) ∩ dom(s') = ∅.
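A synchronizer's bindings can be pictured as a finite map from points to actions, and the disjointness requirement on domains is then directly checkable. The sketch below is our toy rendition; the type names and the string encoding of actions are assumptions, not the paper's notation.

```haskell
-- Toy model: a synchronizer as a finite map from points to actions.
import qualified Data.Map as Map

type Point = String
type Action = String
type Synchronizer = Map.Map Point Action

-- Two synchronizers satisfy the side condition iff their domains
-- share no point.
disjoint :: Synchronizer -> Synchronizer -> Bool
disjoint s s' = Map.null (Map.intersection s s')

main :: IO ()
main = do
  let s  = Map.fromList [("p", "c!"), ("q", "c?")]
      s' = Map.fromList [("r", "d?")]
  print (disjoint s s')
  print (disjoint s s)
```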

The semantics of the machine is described by the local transition rules in Figure 2 (explained below), plus the usual structural rules for parallel composition, inaction, and name creation as in the π-calculus (Milner et al. 1992).

States of principals ς:

    ς_p ::=             states of a point
      p ↦ α               active
      ♥p                  matched
      α                   released

    ς_c ::=             states of a channel
      ∘c                  free
      ⊕c(p, q)            announced

    ς_s ::=             states of a synchronizer
      ∘s                  open
      •s                  closed
      !s(p)               selected
      ×(p)                refused
      ✓s(p)               confirmed
      ✗s                  canceled

Figure 1.

Operational semantics σ → σ':

    (1)     p ↦ c̄ | q ↦ c | ∘c  →  ♥p | ♥q | ⊕c(p, q) | ∘c

    (2.i)   if p ∈ dom(s):  ♥p | ∘s  →  !s(p) | •s
    (2.ii)  if p ∈ dom(s):  ♥p | •s  →  ×(p) | •s

    (3.i)   !s(p) | !s'(q) | ⊕c(p, q)  →  ✓s(p) | ✓s'(q)
    (3.ii)  !s(p) | ×(q) | ⊕c(p, q)  →  ✗s
    (3.iii) ×(p) | !s(q) | ⊕c(p, q)  →  ✗s
    (3.iv)  ×(p) | ×(q) | ⊕c(p, q)  →  0

    (4.i)   if s(p) = α:  ✓s(p)  →  α
    (4.ii)  ✗s  →  (ν →pi)(∘s | s)   where dom(s) = {→pi}

Figure 2.

Intuitively, the rules in Figure 2 may be read as follows.

(1) Two points p and q, bound to complementary actions on channel c, react with c, so that p and q become matched (♥p and ♥q) and the channel announces their match (⊕c(p, q)).

(2.i–ii) Next, p (and likewise, q) reacts with its synchronizer. If the synchronizer is open (∘s), it now becomes closed (•s), and p is declared selected by s (!s(p)). If the synchronizer is already closed, then p is refused (×(p)).

(3.i–iv) If both p and q are selected, c confirms the selections to both parties (✓s(p) and ✓s'(q)). If only one of them is selected, c cancels that selection (✗s). If both are refused, the announcement is discarded.

(4.i–ii) If the selection of p is confirmed, the action bound to p is released. Otherwise, the synchronizer "reboots" with fresh names for the points in its domain.
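Rules (2.i) and (2.ii) embody the key mutual-exclusion step: the first matched point to reach its synchronizer wins, and every later point is refused. A minimal pure Haskell rendition of just this fragment follows; it is our toy model of the two rules, not the paper's machine.

```haskell
-- Toy model of rules (2.i)-(2.ii): a matched point asks its
-- synchronizer; an open synchronizer selects the point and closes,
-- while a closed synchronizer refuses it.
data SyncState  = Open | Closed      deriving (Eq, Show)
data PointReply = Selected | Refused deriving (Eq, Show)

step :: SyncState -> (PointReply, SyncState)
step Open   = (Selected, Closed)  -- rule (2.i)
step Closed = (Refused,  Closed)  -- rule (2.ii)

main :: IO ()
main = do
  let (r1, s1) = step Open  -- the first matched point is selected
      (r2, _)  = step s1    -- a later point on the same synchronizer
  print (r1, r2)
```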

3.3 Compilation

Next, we show how programs in the source language are compiled on to this machine. Let Π denote indexed parallel composition; using this notation, for example, we can write a program S1 | ... | Sm as Π_{k ∈ 1..m} Sk. Suppose that the set of channels in a program Π_{k ∈ 1..m} Sk is C. We compile this program to the state
