Higher-Order and Symbolic Computation, 12, 47–73 (1999)

© 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Using a Continuation Twice and Its Implications for the Expressive Power of call/cc
HAYO THIELECKE* ht@dcs.qmw.ac.uk
Department of Computer Science, Queen Mary and Westfield College, University of London

Abstract. We study the implications for the expressive power of call/cc of upward continuations, specifically
the idiom of using a continuation twice. Although such control effects were known to Landin and Reynolds when
they invented J and escape, the forebears of call/cc, they still act as a conceptual pitfall for some attempts to
reason about continuations. We use this idiom to refute some recent conjectures about equivalences in a language
with continuations, but no other effects. This shows that first-class continuations as given by call/cc have
greater expressive power than one would expect from goto or exits.

Keywords: call/cc, continuations, upward continuations, expressiveness, program equivalence.

1. Introduction

You can enter a room once, and yet leave it twice.
(Peter Landin)
A common informal explanation of continuations is the comparison with forward goto.
This is in some sense a very apt simile: forward gotos obviously do not give rise to loops,
and continuations, without some reflexive type, do not add this ability either — unlike
backward gotos, or ml exceptions with their general [21] type [19].
Pushed too far, the goto analogy can be harmful. It is intuitively obvious that if all jumps
are forward, one cannot jump to the same label twice. But it would be unsafe to conclude
that, by analogy, one cannot invoke the same continuation twice.
It was realised by Landin [17] (reprinted as a journal paper [18]) that his J (for jump)
operator allows control quite unlike a forward goto.
We may however, observe that, when J is added, the function-producing feature
amounts roughly to the possibility of jumping to a label after leaving the block in
which it was introduced.
Similarly, Reynolds writes about his escape:
For example, it is possible that the evaluation of the body of an escape expression
may not cause the application of the escape function, but may produce the escape
function (or some function that may call the escape function) as its value. [24]
In a more LISP-inspired jargon, “caus[ing] the application of the escape function” is called
a downward use of the captured continuation, whereas “the escape function (or some
* Part of this work was carried out while the author was an EU HCM postdoc at INRIA Sophia-Antipolis, France;
the author is currently funded from EPSRC grant GR/L54639.

function that may call the escape function)” produced as the value is an upward continuation
[9].
The first idiom of upward continuations that Reynolds mentions is escape k in
k, or written with call/cc, (call/cc (lambda (k) k)). This implies a self-
application of the continuation. With another self-application, one can define recursive
functions, as in this obfuscated Scheme looping idiom [31]:
(define cycle-proc
(lambda (proc)
(let ((loop (call/cc (lambda (k) k))))
(begin
(proc)
(loop loop)))))
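For instance, a call such as the following (purely an illustration) never returns; it simply runs the given thunk over and over:

(cycle-proc (lambda () (display "tick ")))   ; loops forever, printing tick tick ...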
In the present paper, we would like to study the expressive power of call/cc in as
simple a setting as possible, without the possibility of using it for recursion, so we will be
concerned with the second idiom mentioned in the quote: the argument of call/cc may
produce some function that may call the escape function as its value. Arguably the simplest
procedure that has this kind of behaviour is the following procedure that calls the escape
function (captured continuation):
(call/cc
(lambda (k)
(lambda (x)
(k (lambda (y) x)))))
This idiom will be our canonical example of a control effect that is quite unlike what one
would expect from a forward goto.
A consequence of the addition of computational effects to a language is that it becomes
more sensitive to details of the evaluation. Consider ((lambda (x) #f) (k #t)).
One cannot dispense with evaluating the argument, even if the result is thrown away. In the
presence of continuations, the context (call/cc (lambda (k) [ ])) can separate

((lambda (x) #f) (k #t)) and #f

giving the answers #t and #f, respectively. This simple use of an exit is perhaps the most
elementary example of the discriminating power of call/cc.
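In Scheme, one can check this directly (a small sketch; the line after each expression is the answer a conforming implementation prints):

(call/cc (lambda (k) ((lambda (x) #f) (k #t))))
#t

(call/cc (lambda (k) #f))
#f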
But there are more subtle “non-equivalences” due to call/cc’s “defiance of the usual
substitution rules” [17] — some of which were in fact conjectured to be equivalences.
Furthermore, these non-equivalences can be shown using the control effects (upward con-
tinuations) mentioned above.
As far as formalisms for continuations are concerned, considerable progress has been
made. At any rate, definitional interpreters or cps transforms have been around for decades.
Yet as we will see the conceptual subtleties remain when one reasons informally (as in:
“continuations without state are too weak an effect to merit interest”), or pre-formally (when
trying to find plausible conjectures, before formally verifying them).
Strachey and Wadsworth [32] put somewhat less emphasis than Landin and Reynolds on
the possibility of passing arguments to continuations. So in that setting, state in the form
of assignable labels is necessary for upward continuations. Furthermore, when upward
continuations are used in the literature, they are commonly used in conjunction with state.
For instance, when “obtaining coroutines with continuations” [15, 23] the local control
state of each coroutine stores its current continuation. This is done to enhance modular-
ity, and should not be taken as an indication that state is somehow necessary for upward
continuations.

1.1. Related Work

The examples that we present show that first-class continuations are good at distinguishing,
or discriminating between, λ-terms, and thus that they are a “powerful” effect in the sense of
Felleisen’s “expressive power” [6], or Boudol’s “discriminating power” [3]. Landin [17, 18]
already used inequivalences to show the additional expressive power of J (though the exact
inequivalences used there are idiosyncrasies of J and do not generalise to call/cc [36]).
The particular idiom of using a continuation twice that we call arg-fc in the sequel
was already used as a counterexample to polymorphic type assignment [14]. Griffin [10]
used a continuation twice for establishing the excluded middle in his formulae-as-types
interpretation of control operators.
The expressive power of call/cc is discussed (and branded unreasonable) by Riecke
[25, 20]; it was further analysed in Felleisen’s expressiveness framework [6]. In both cases,
downward continuations are considered. Here, by contrast, we focus on upward continu-
ations and control effects that are quite different from those of downward continuations.
In Haynes’s paper on “logic continuations” [11], upward failure continuations are used to
implement nondeterminism (already mentioned by Reynolds [24]) by way of backtrack-
ing. We found this connection to backtracking illuminating even for the highly idealised
(stateless) setting of the present paper.

1.2. Outline

We recall the standard idealised language and the semantic setting that we will use, a
call-by-value λ-calculus with call/cc, in Section 2. In Section 3, two functions that
use their current continuation twice are discussed. Such an idiom forms the basis of the
counterexamples in the rest of the paper. Section 4 refutes Filinski’s [7] view of the
total morphisms as effect-free, and as a corollary the “idempotency hypothesis for control
effects”, which was informally conjectured by Amr Sabry. In Section 5, we give further
evidence for the expressive power of call/cc by showing that two λ-terms that cannot
be distinguished in various extensions of “lazy” or weak call-by-name λ-calculus can be
separated with an analogue of the technique we had used in call-by-value. Section 6
concludes and points towards further work.
A reader chiefly interested in call/cc as part of Scheme or Standard ML of New Jersey
may wish to skim much of the formalism, focusing on Section 3 and the Scheme code in
the figures. Conversely, a more theoretically-minded reader could skim Section 3 and trace
definitions and proofs from Section 2 onwards.

1.3. Acknowledgements

We should like to point out that the conjectures that we will study were interesting attempts
at pinning down continuations in terms of global algebraic properties; the fact that they
can be refuted does not detract from that: on the contrary, it should be seen as proof
that continuations are sufficiently interesting and subtle to surprise even the experts. In
that regard, I would like to thank Andrzej Filinski for telling me about the idempotency
hypothesis. Thanks also to Peter Landin, Gérard Boudol, Davide Sangiorgi, Paul Levy
and Jim Laird for discussions. Several people made valuable comments on the draft, in
particular Peter O’Hearn. The anonymous referees spotted bugs and made many suggestions
for improvement.

2. Setting

We will be concerned with the discriminating power that call/cc adds to the call-by-value
λ-calculus, more specifically with the notion of separable terms adapted from the theory of
the λ-calculus [2]. Roughly speaking, two terms are separable if, in some context, they give
different answers, say true and false. If two terms are separable, they are not equivalent in
any reasonable sense. The counterexamples rely on what amounts to a very small subset
of Scheme consisting of lambda and call/cc and some base type constants, like the
booleans #t and #f. Some syntactic sugar, such as let and begin is used for readability,
but could be expanded out. When addressing what we call copyability, we also need cons,
car and cdr. Where the top-level define is used, it could be eliminated in favour of
lambda. With the exception of Section 5, which deals with an untyped λ-calculus, we
stay inside a simply-typed subset.
The classical way of giving the semantics of a language with control operators is via a cps
interpreter [24], or a cps transform. We recall cps transforms, but also give an operational
semantics based on evaluation contexts in the style developed by Felleisen and others, as
this allows calculations on source-level terms that may be helpful for seeing what is going
on.

2.1. The language

Up to minor variations, both the language and its semantics are taken off-the-shelf from the
continuations literature: see the typing of callcc in Standard ML [5, 13] for the language
and its type system, and work by Felleisen and others [6, 29] for the evaluation-context
operational semantics.
We recall the type system for continuations in ML [5]:
Γ, x : τ ⊢ x : τ

Γ, x : σ ⊢ M : τ
----------------
Γ ⊢ λx.M : σ → τ

Γ ⊢ M : σ → τ     Γ ⊢ N : σ
---------------------------
Γ ⊢ M N : τ

Γ ⊢ M : ¬τ → τ
----------------
Γ ⊢ callcc M : τ

Γ ⊢ M : ¬τ     Γ ⊢ N : τ
------------------------
Γ ⊢ throw M N : σ

Γ ⊢ Mi : τi
------------------------
Γ ⊢ (M1 , M2 ) : τ1 × τ2

Γ ⊢ M : τ1 × τ2
----------------
Γ ⊢ πi M : τi
We choose as our idealised language a fragment of ml rather than Scheme, because it
is trivial to move from the typed continuations of an ml fragment to Scheme, while the
other direction may be less obvious. One consequence of the simple typing is that all
programs terminate (in fact, this would hold even if the language were polymorphic [14]);
the phenomena that we study manifest themselves even in such a simple setting (Section 5,
however, is inherently about the untyped λ-calculus).
Some syntactic sugar is added as follows:
let x = N in M  ≝  (λx.M ) N
M ; N  ≝  (λx.N ) M     where x ∉ FV(N )

2.2. The cps transforms

Definition 1. The call-by-value cps transform (−) [24, 22, 5] is defined as follows:

x = λk.kx
λx.M = λk.k(λxk.M k)
M N = λk.M (λm.N (λn.mnk))
callcc M = λk.M (λf.f kk)
throw M N = λk.M (λm.N (λn.mn))
(M, N ) = λk.M (λm.N (λn.k(λp.pmn)))
πj M = λk.M (λm.m(λx1 x2 .kxj ))

The transform for pairs makes use of a “classical” encoding similar to Griffin’s [10].

2.3. Felleisen-style operational semantics

We give an evaluation-context semantics in the style developed by Felleisen [29, 6].

Definition 2. Values V , evaluation contexts E and general terms M are given by the
following BNFs:

V ::= x | λx.M | γx.M | (V, V )

E ::= [ ] | E M | V E | throw E M | throw V E | (E, M ) | (V, E) | πi E

M ::= V | M M | throw M M | callcc M | (M, M ) | πi M

A redex is of the following form:


R ::= (λx.M )V | callcc M | throw (γx.M ) V | πi (V1 , V2 )

A closed term M is either a value, or it can be decomposed as M = E[R] into a redex R
and an evaluation context E; such a decomposition is unique (proof by induction on M ).
The operational semantics is given by decomposing the term into an evaluation context and
a redex.
E[(λx.M )V ] → E[M [x ↦ V ]]
E[callcc M ] → E[M (γx.E[x])]
E[throw (γx.M ) V ] → M [x ↦ V ]
E[πi (V1 , V2 )] → E[Vi ]
γ-abstractions (like exception packets in ml) are not part of the programming language,
but there are terms that reduce to them. The result of a term (program) M is the value V
such that M →∗ V .
This is essentially the same semantics as devised by Felleisen, except that it avoids the
abort operator A. Felleisen’s rules are closer to Scheme in that the continuation is wrapped
into an escape procedure by call/cc:
E[A M ] → M
E[call/cc M ] → E[M (λx.AE[x])]

2.4. Separating terms

Adapting a notion from the λ-calculus [2] to the call-by-value setting, given values T and
F which we regard as distinct, we say that two terms M and N are separable iff there is a
context C such that
C[M ] →∗ T and C[N ] →∗ F
In an idealised calculus, one could always choose the distinct values as the λ-calculus
booleans
T ≝ λxy.x    and    F ≝ λxy.y
In Scheme we cannot directly observe the difference between procedures, so it is more
convenient to pick ground type constants, say the booleans #t and #f.
Note that this notion of separability is stronger than two terms merely not being observa-
tionally equivalent: we can observe them being different, rather than observing termination
in one case and nothing in the other. A consequence of this observability of the difference
is that we can machine-verify it by an evaluator for the operational semantics, i.e., Scheme
or ml. By contrast, if we were to distinguish terms by showing that one yields an answer
and the other diverges, a separate proof of its divergence would be required, as no finite
amount of observation could demonstrate this.
In the sequel, we will regard β-equality of the cps transforms as a strong notion of
equivalence of programs, and separability as a strong notion of inequivalence. (The usual
notion of operational equivalence is properly contained between β-equivalence of the cps
transforms of two terms, and their inseparability. See also Sitaram and Felleisen on full
abstraction [29].)
To make sure that these two notions are not in conflict, we need some fairly standard
lemmas relating the operational semantics and the cps transform. (See also Sitaram and
Felleisen’s adequacy results [29].)

Definition 3. The call-by-value cps transform is extended as follows:


γz.M = λk.k(λz.M (λx.x))
[] = λk.k
EM = λk.E(λf.M (λx.f xk))
VE = λk.V (λf.E(λx.f xk))
throw E M = λk.E(λh.M (λx.hx))
throw V E = λk.V (λh.E(λx.hx))
(E, M ) = λk.E(λx.M (λy.k(λp.pxy)))
(V, E) = λk.V (λx.E(λy.k(λp.pxy)))
πj E = λk.E(λm.m(λx1 x2 .kxj ))

Lemma 1 V (λx.M ) =β M [x ↦ V ]

Lemma 2 E[M ] =β M ◦ E
Proof: Induction on E.

Lemma 3 If M → M′, then M (λx.x) =β M′(λx.x).


Proof: Using Lemmas 1 and 2, one verifies that:
E[(λx.M )V ] =β E[M [x ↦ V ]]
E[πi (V1 , V2 )] =β E[Vi ]
and
E[callcc M ](λx.x) =β E[M (γx.E[x])](λx.x)
E[throw (γx.M ) V ](λx.x) =β M [x ↦ V ](λx.x)

Proposition 1 If M =β N , then M and N are not separable.


Proof: Suppose M →∗ T and N →∗ F. By Lemma 3, F(λx.x) =β T(λx.x). Hence
for any two λ-terms P and Q, P =β Q. Contradiction.

3. arg-fc and twice/cc

We have already seen a term that uses its current continuation twice; in our idealised
notation, it is a one-liner:

callcc(λk.λx.throw k (λy.x))

To emphasise the fact that k is invoked twice, we could add another (technically redundant)
throw immediately after k is captured:

callcc(λk.throw k (λx.throw k (λy.x)))

This term passes a function to its current continuation k. When this function is called with
an argument x, the constant function always returning that argument is passed to k. Hence
the function eventually (on the second call to the current continuation) returned by the term
is the function always returning the argument to the first call.
For example (assuming for the moment that map proceeds over the list left-to-right, rather
than in unspecified order [16]), we have
(map
(call/cc (lambda (k) (lambda (x) (k (lambda (y) x)))))
(list 1 2 3 4))
(1 1 1 1)
The answers printed by the interpreter are written in slanted font. (Some readers may
wonder how this can work without assignment; in Section 4 we will consider in detail
the step-by-step evaluation of a very similar example. However, in the remainder of this
Section, we will pursue the analogy with assignment a little further.)
We can regard this as a solution to the following continuation programming exercise:
Define a function f such that all calls to f return the argument of the first call of f.
Do not use state.
We name this arg-fc, for “argument of first call”.
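With arg-fc as defined in Figure 1 below, this specification can be checked as follows (a sketch; the nested lets force left-to-right evaluation):

(let ((f (arg-fc)))
  (let ((a (f 1)))
    (let ((b (f 2)))
      (list a b))))
(1 1)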
At first sight, it seems hard to see how to write such a function without state: the obvious
solution, after all, uses two variables (or in ml, references): a non-local variable to hold
the argument of the first invocation and a boolean flag to record if the function has been
called before (if not, then the variable needs to be assigned). See Figure 2 for a version of
arg-fc with local state. We can give a better analogue of arg-fc with continuations by
using a function that updates its own definition when it is called (Figure 3). Note that the
last occurrence of f needs to be wrapped or protected by a lambda for this to work (see
“Scheme and the Art of Programming” [30] for a discussion on this and safe-letrec).
The connection to state in Figure 3 gives evidence for a point already made by Landin
about what would now be called upward continuations:

(define arg-fc
(lambda ()
(call/cc
(lambda (k)
(k
(lambda (x)
(k
(lambda (y) x))))))))

Figure 1. arg-fc (in Scheme)

(define arg-fc-save-arg
(lambda ()
(let ((fc #t)
(arg 'anything))
(lambda (x)
(if fc
(begin
(set! fc #f)
(set! arg x)))
arg))))

Figure 2. An analogue of arg-fc with local state for its argument

(define arg-fc-proc
(lambda ()
(letrec ((f (lambda (x)
(begin
(set! f (lambda (y) x))
x))))
(lambda (z) (f z)))))

Figure 3. An analogue of arg-fc with local state for the procedure

This situation is similar to that arising when label assignment or procedure assignments are introduced [. . .]
([17, pp10f], emphasis mine).


The above is meant as an illustration: we hope that code is more concise than a prose
narrative of the behaviour of arg-fc, or at least a helpful addition to it. We do not, however,
claim that the versions of arg-fc with state are equivalent to the one with control; for
instance a context which itself uses state can easily separate them:
(define backtrack?
(lambda (testee)
(let ((ratchet (list 'anything #f #t)))
(let ((f (testee)))
(begin
(set! ratchet (cdr ratchet))
(f 'anything)
(car ratchet))))))

(backtrack? arg-fc)
#t
(backtrack? arg-fc-proc)
#f

A better metaphor than state may be backtracking by way of upward failure continuations
[11]. The function
λx.throw k (λy.x)
which is passed to the current continuation k could be seen as an upward failure continuation
in the sense that once it is applied to an argument x, it fails, by causing backtracking, so that
the constant function λy.x is returned to k instead. Although the very point of backtracking
seems to be to combine it with (local) state, so that a different value can be passed to
the continuation upon each backtrack, this restricted form of backtracking is nonetheless
possible even without state.
In the above, we were concerned with source-level terms. We can also arrive at terms
using their current continuation twice by working at the cps level instead. Consider the
cps transform of the function twice ≝ λf.λx.f (f x).
twice f
= λx.f (f x)
= λk.k(λxh.(λq.f xq)(λy.f yh))
What is relevant here is the idiom

(λq.f xq) (λy.f yh)

for the function composition f ◦f in cps. (We could of course simplify this to f x(λy.f yh),
but for the present purpose it is better to leave the intermediate continuation q named.) The
first call to f is passed an argument x together with a continuation q pointing to the place
where the second call to f is awaiting its argument y; this second call is thus fed the result
of the first call together with the overall result continuation h. We can write, modulo
uncurrying, the same cps term when f is not some function, but the current continuation

(define twice/cc
(lambda (p)
(call/cc (lambda (k)
((lambda (y)
(k (cons y (cdr p))))
(call/cc (lambda (q)
(k (cons (car p) q)))))))))

Figure 4. twice/cc in continuation-grabbing style

(λq.k(x, q)) (λy.k(y, h))

The only slight difficulty in translating this cps term into Scheme or ml lies in implementing
the binding of the continuation q. We do this by λ-abstracting on q and applying call/cc.
In Standard ML of New Jersey this is quite straightforward:

fun twicecc (x,h) = callcc(fn k =>
    (fn y => throw k (y,h))
    (callcc(fn q => throw k (x,q))));

twicecc : 'a * 'a cont -> 'a * 'a cont

A minor difficulty specific to Scheme, however, is that prior to the addition of
call-with-values in the latest (Revised⁵) report [16], one cannot have multi-argument
continuations as in (k x q) by analogy with throw k (x,q) above. In the minimal
Scheme that we use in this paper, one would have to put the arguments into a list, which is
then the single argument to the continuation, i.e., (k (list x q)). (See Figure 4.)
As explained in the present author’s PhD thesis [34], this is an instance of a more general
translation from cps back to the source language, generalising what Sabry calls “Continu-
ation Grabbing Style” [26]. We could also give a more rigorous derivation of twice/cc
from twice [34, 35, 33].
Written as the cps term kx(λy.kyh), twice/cc seems to be the simplest way of using
the current continuation twice. Furthermore, it does so without mixing continuations and
higher-order procedures. Despite being much simpler to write at the source level, arg-fc
is more complicated than twice/cc as a cps term:

λk.k(λxp.k(λyq.qx))

In previous versions of the material presented here [34, 35, 33], we put greater emphasis
on twice/cc. We use arg-fc here instead, as its definition and the separating contexts
are easier to understand, particularly in direct style, rather than cps. Nonetheless, it is
insightful to have another, and quite different, function that gives rise to similar phenomena
by also using a continuation twice.

4. arg-fc is discardable, but not copyable

Filinski [7] developed a framework for continuation semantics on the assumption that
discardable (there called total) terms can be considered effect-free. Here, a term is called
discardable iff evaluating it and then throwing away the result is the same as not evaluating
it at all. Similarly, a term is called copyable if copying the term and copying its value are
the same.

Definition 4. Given a notion of equality on terms, we say a term M is copyable iff
(λx.(x, x))M = (M, M ). M is called discardable iff (λx.y)M = y.

In the sequel, when we prove two terms to be equal, we use the β-equality of their cps
transforms, which is a very strong and robust notion of equality (used, for instance, by cps
compilers for optimization). This implies that the terms are also equal in weaker senses,
such as not being separable or typical notions of contextual equivalence. Dually, when we
show two terms not to be equal, we show they can be separated, in which case they cannot
be considered equal in any reasonable sense.
Exits are evidently not discardable, where by an exit we mean an invocation of a continu-
ation with an argument that does not itself contain the continuation, syntactically throw k V
with k not free in V . As an example, consider

(λx.F )(throw k T ) and F

These can be separated by the context callcc(λk.[ ]):


callcc(λk.(λx.F ) (throw k T )) →∗ T
callcc(λk.F ) →∗ F
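Spelled out with the rules of Section 2.3, the first of these reductions runs as follows (a step-by-step illustration):

callcc(λk.(λx.F ) (throw k T ))
→ (λk.(λx.F ) (throw k T )) (γz.z)
→ (λx.F ) (throw (γz.z) T )
→ T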
Discarding the value of the computation by applying a term λx.M with x not free in M to
it does not amount to discarding the exit. One could therefore hope that the inability to be
discarded this way is somehow characteristic of control effects. This characterisation was
attempted by Filinski [7], in that it was assumed that the subcategory of total maps had finite
products; and this amounts to saying that all discardable terms should also be copyable.

Remark. A term of the form throw M N is copyable.

Proof:
(throw M N, throw M N )
=β λk.(λk.M (λm.N (λn.mn)))(λx.(λk.M (λm.N (λn.mn)))(λy.k(λp.pxy)))
=β λk.M (λm.N (λn.mn))
=β λk.(λk.M (λm.N (λn.mn)))(λx.k(λp.pxx))
=β λk.(λk.k(λxk.(x, x)k))(λf.throw M N (λy.f yk))
≡ (λx.(x, x)) (throw M N )

Remark. A term of the form callcc(λe.M ), where e is used only as an exit from some
evaluation context E, i.e., M = E[throw e V ] with e not free in V , is also copyable. In
fact, one can simply erase the E and throw.

Proof: Let V be a value with e not free in V . Hence V = λk.kN for some N with e and
k not free in N . Using E[M ] =β M ◦ E (Lemma 2), we can simplify:

callcc(λe.E[throw e V ])
≡ λk.(λk.k(λek.E[throw e V ]k))(λf.f kk)
=β λk.(λk.k(λek.[throw e V ](Ek)))(λf.f kk)
=β λk.(λk.k(λek.(λk.eN )(Ek)))(λf.f kk)
=β λk.(λk.k(λek.eN ))(λf.f kk)
=β λk.(λf.f kk)(λek.eN )
=β λk.(λek.eN )kk
=β λk.kN
≡ V
Copyability then follows from the soundness of the β-value law.

So we now turn to a use of callcc that goes beyond the simple exit idiom of the Remark above,
specifically arg-fc. Let

A ≝ callcc(λk.λx.throw k (λy.x))

Lemma 4 A is discardable.
Proof: Straightforward:

(λx.y) (callcc(λk.λx.throw k (λy.x))) =β y
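In Scheme, the same fact can be observed directly (a sketch; the symbol discarded stands in for an arbitrary value y):

((lambda (x) 'discarded)
 (call/cc (lambda (k) (lambda (x) (k (lambda (y) x))))))
discarded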

Let T and F be distinct values. As a first step towards showing the failure of copyability,
we explain how
(A T ; A F ) and (λf.(f T ; f F )) A
give different answers. The point is that the following derivations show the essence of the
use of backtracking to separate terms. The same phenomenon is at work with copyability,
but the larger separating context makes the corresponding derivation somewhat unwieldy
to display. Remember that the semicolon is syntactic sugar, i.e.,

M ; N  ≝  (λx.N ) M     for x ∉ FV(N )

Lemma 5 (A T ; A F ) and (λf.(f T ; f F )) A can be separated.


Proof:
Let E = ([ ] T ; callcc(λk.λx.throw k (λy.x)) F ). The first derivation is
(callcc(λk.λx.throw k (λy.x)) T ; callcc(λk.λx.throw k (λy.x)) F )
≡ E[callcc(λk.λx.throw k (λy.x))]
→ E[(λk.λx.throw k (λy.x)) (γz.E[z])]
→ E[λx.throw (γz.E[z]) (λy.x)]
≡ ((λx.throw (γz.E[z]) (λy.x)) T ; callcc(λk.λx.throw k (λy.x)) F )
→ throw (γz.E[z]) (λy.T ); callcc(λk.λx.throw k (λy.x)) F
→ E[λy.T ]
≡ ((λy.T ) T ; callcc(λk.λx.throw k (λy.x)) F )
→ (T ; callcc(λk.λx.throw k (λy.x)) F )
→ callcc(λk.λx.throw k (λy.x)) F
→ (λk.λx.throw k (λy.x)) (γz.zF ) F
→ (λx.throw (γz.zF ) (λy.x)) F
→ throw (γz.zF ) (λy.F )
→ (λy.F ) F
→ F
But in the other case, the backtracking affects the result. Formally, the backtracking is
manifested in the re-use of the evaluation context E = (λf.(f T ; f F )) [ ].
let f = callcc(λk.λx.throw k (λy.x)) in (f T ; f F )
≡ (λf.(f T ; f F )) (callcc(λk.λx.throw k (λy.x)))
→ E[(λk.λx.throw k (λy.x)) (γz.E[z])]
→ (λf.(f T ; f F )) (λx.throw (γz.E[z]) (λy.x))
→ ((λx.throw (γz.E[z]) (λy.x)) T ; (λx.throw (γz.E[z]) (λy.x)) F )
→ (throw (γz.E[z]) (λy.T ); (λx.throw (γz.E[z]) (λy.x)) F )
→ E[λy.T ]
= (λf.(f T ; f F )) (λy.T )
→ (λy.T ) T ; (λy.T ) F
→ T ; (λy.T ) F
→ (λy.T ) F
→ T

This derivation shows quite clearly the characteristic control behaviour of arg-fc. f is
applied to T ; the result of this call is then completely forgotten, as “;” causes evaluation to
happen, but discards the result. But the subsequent call of f with F has been affected, in
that it will return T , the argument that was passed to the first call.

(define copy-val
(lambda (M)
(let ((x (M)))
(list x x))))

(define copy-comp
(lambda (M)
(let ((x (M)))
(let ((y (M)))
(list x y)))))

(define copy-separator
(lambda (copier)
(let ((funs (copier arg-fc)))
(let ((f (car funs)))
(let ((g (cadr funs)))
(begin
(f #t)
(g #f)))))))

Figure 5. Copying a computation, copying its value, and a separating context

We could write the above in Scheme as follows:


(let ((f (arg-fc)))
(begin
(f #t)
(f #f)))
#t

(begin
((arg-fc) #t)
((arg-fc) #f))
#f

We have argued that A can be understood as generating a function that backtracks as soon
as it is applied to an argument. What is essential, therefore, is whether we have functions
generated from the same copy of A or from separate copies. In the first case the calls will
influence each other; in the second case, they will not. With that understanding, it should be
possible to reduce the failure of copyability to Lemma 5. It is in fact possible by appealing
to the cps semantics, corroborating the intuition we have developed about A.

Proposition 2 Discardability does not imply copyability.


Proof: A is discardable by Lemma 4. To refute copyability, let M = A and let the
separating context be given by

E = (λp.π1 pT ; π2 pF ) [ ]

For a value V , let V′ be the λ-term such that V ≡ λk.kV′. Note that we can simplify
πj pV =β λk.p(λx1 x2 .xj V′ k). Then we have, by expanding out the backtracking of the
first occurrence of A:

(λp.π1 pT ; π2 pF ) (A, A)
=β λk.(λh.A(λm.A(λn.h(λq.qmn))))(λp.p(λx1 x2 .x1 T′ (λz.p(λx1 x2 .x2 F′ k))))
=β λk.A(λm.A(λn.mT′ (λz.nF′ k)))
=β λk.(λm.A(λn.mT′ (λz.nF′ k)))(λyh.hT′ )
=β λk.(λm.mT′ (λz.A(λn.nF′ k)))(λyh.hT′ )
=β λk.A(λm.mT′ (λz.A(λn.nF′ k)))
=β AT ; AF

For the other case, we can appeal to standard equivalences validated by the cps transform:

(λp.π1 pT ; π2 pF )((λx.(x, x))A)


=β (λx.(λp.π1 pT ; π2 pF )(x, x))A
=β (λx.π1 (x, x)T ; π2 (x, x)F )A
=β (λx.xT ; xF )A

By Proposition 1, the above and Lemma 5 imply that:

(λp.π1 pT ; π2 pF )((λx.(x, x))A) →∗ T


(λp.π1 pT ; π2 pF )(A, A) →∗ F

So (λx.(x, x))A and (A, A) can be separated, even though A is discardable.

The Scheme code for separating the two terms is in Figure 5. (Explicit sequencing by
way of let is necessary because the order of evaluation of procedure arguments is left unspecified in Scheme [16].) Evaluation produces:

(copy-separator copy-val)
#t
(copy-separator copy-comp)
#f

4.1. Discardable and copyable are orthogonal

Considering that values are copyable and discardable, whereas jumps (throw) are copyable
but not discardable, we can summarise that copyable and discardable are orthogonal.

Table 1. Copyability and discardability are orthogonal

                    copyable            not copyable
discardable         x, λx.M             (twice/cc a), (arg-fc)
not discardable     throw M N           (call/cc h)

While it is evident that values can be discarded and jumps cannot, the upper-right box
in the table (discardable, but not copyable) was previously thought to be unoccupied, in
that Filinski [7] thought that discardability, separating the top from the bottom row, was
sufficient for separating values from all control effects.
None of the formal proofs in Filinski’s categorical account of continuations are shown
to be actually false. What has been shown to be erroneous is the underlying assumption
that all discardable morphisms are copyable. That is, the developments from the (categor-
ical) axiomatisations may still be sound, but these axioms fail to characterise effect-free
computations in the presence of first-class continuations.
A corollary of this table is that a first-order jump like throw k V with k not free in V ,
is not maximally effectful. When it comes to being effectful, it is self-defeating in that
it forgets the current continuation. This implies that it is copyable, because one jump
(λx.(x, x))(throw k 42) is as good as two

(throw k 42, throw k 42)

The first jump will ignore its continuation containing the second jump, so that it does not
matter whether the latter is present or not. By contrast, the quintessential first-class jump
(call/cc h) (where h is a continuation), is not oblivious to its continuation, as this is
passed as an argument. This makes call/cc sufficiently sensitive to its continuation to
defy copying.
We have argued elsewhere [35, 34] that a better criterion for the effect-freeness of a
program in the presence of continuations is centrality, a kind of insensitivity to evaluation
order. This corresponds roughly to another traditional use of continuations: the use of
breakpoints from which a program may be restarted [30].

4.2. Control is not an idempotent effect

A similar context to the one used for copyability also gives us a counterexample to Amr
Sabry’s informal conjecture that, in Filinski’s words, “control is an idempotent effect”.
Thanks to Andrzej Filinski for pointing out to me that the refutation of the idempotency
hypothesis follows as a corollary from Proposition 2 [Andrzej Filinski, personal communi-
cation]. The idempotency conjecture holds that (λx.(x, x))M should be indistinguishable
from (λx.(λy.(x, y)))M M . We say, somewhat loosely, that a particular language con-
struct is “idempotent” if this equivalence holds even if M uses that construct; for instance,
“jumps are an idempotent effect” is shorthand for saying that the equivalence holds for
terms of the form M ≡ throw P Q.
To explain the “idempotent effect” slogan, consider assignment. An example of an
“idempotent effect” is an assignment like (set! x 42). This expression has an effect,
because the value in the location x is changed, but the effect is idempotent, in the sense that

(define idempotency-separator
(lambda (testee)
(let ((procs (testee)))
(begin
((car procs) #t)
((cdr procs) #f)))))

(define arg-fc-1copy
(lambda ()
(let ((x (arg-fc)))
(cons x x))))

(define arg-fc-2copies
(lambda ()
(let ((x (arg-fc)))
(let ((y (arg-fc)))
(cons x y)))))

Figure 6. Refutation of the idempotency hypothesis

assigning the same value to x twice is as good as doing it once. By contrast, the assignment
(set! x (+ x 1)) is not idempotent, as evaluating it twice increments x by 2. It may
be interesting to compare the “idempotency” of assignment to control transfers: after all, a
jump could be seen as an assignment to the program counter. In fact, jumps are idempotent.
More precisely:
Remark. Let M = throw P Q. Then

(λx.(λy.(x, y)))M M =β (λx.(x, x))M

Because (λx.(λy.(x, y)))M M =β (M, M ), this follows from the copyability of terms of the form throw M N established above.
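The assignment side of the analogy is easily checked in Scheme (a minimal sketch with a local variable x):

(let ((x 0))
  (set! x 42)
  (set! x 42)
  x)
42

(let ((x 0))
  (set! x (+ x 1))
  (set! x (+ x 1))
  x)
2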


Continuing the analogy with assignment, we could compare arg-fc to

(set! x (+ x 1))

in that it both reads and writes to the program counter, so to speak. The analogy becomes
clearer if we write it as

callcc(λk.throw k (λx.throw k (λy.x)))

Here the first throw may be seen as an assignment to the program counter. But k is also
saved in the closure (λx.throw k (λy.x)), as it were. So we would expect arg-fc, like
(set! x (+ x 1)) not to be idempotent — which is indeed the case.
More formally, we have as a consequence of the failure of copyability:

Corollary 1 There is a term M such that

(λx.(x, x))M and (λx.(λy.(x, y)))M M

can be separated.
Proof: Because (λx.(λy.(x, y)))M M =β (M, M ), the result follows from Proposition
2.

With idempotency-separator defined in Figure 6, we have:

(idempotency-separator arg-fc-1copy)
#t
(idempotency-separator arg-fc-2copies)
#f

The conjecture could perhaps be supported by informal arguments about first-order jumps.
These can be copied essentially because they are oblivious to their continuation, so that it
does not matter if another jump follows. Hence the idempotency could be defended for
values, as well as for first-order jumps. What it fails to take into account are terms that do
not simply pass something to the current continuation, but do not ignore it either. There
seems to be an assumption of a kind of excluded middle here, along the lines of: functions
in continuation semantics can return a value, or else they are gotos. To the extent that
such ideological conclusions can be drawn from this example, we should like to argue that
it is misleading to think of first-class control as a form of non-termination due to jumping
(we therefore prefer to use “discardable” rather than “total” [7]).

5. Separating by extending (weak) call-by-name

In this section, we proceed to generalise our separation technique from call-by-value to
separating (weak) call-by-name terms. Specifically, we show that upward continuations
are powerful enough to separate the terms λx.xx and λx.x(λy.xy) when the appropriate
operators are added to a weak (sometimes called “lazy”) call-by-name λ-calculus. The
terms λx.xx and λx.x(λy.xy) are among the canonical examples cited as evidence of the
expressive power of the π-calculus [28]. Here, the key ingredient for making the distinction
is a term that uses its current continuation twice.
We use “call-by-name” in the sense in which it is used in the continuations literature
[22]; though one often finds the term “lazy” following Abramsky [1] — unfortunately
“lazy” itself is often used to mean memoizing. To avoid confusion between this notion and
the “real” λ-calculus (e.g.,[2]), we qualify call-by-name with “weak” to indicate that no
reduction takes place under a λ [3].
In related work, various computational effects have been added to the weak call-by-name
λ-calculus to increase its expressive power: nondeterminism [27], “resources” [3] and
“parallel observers” [4], a mixture of concurrency and call-by-value.

5.1. Extending the weak call-by-name λ-calculus

An evaluation context semantics for the weak call-by-name λ-calculus could be defined as
follows:

E ::= [ ] | E M

and

E[(λx.M )N ] →n E[M [x ↦ N ]]
However, if all evaluation contexts were of the form [ ]M1 . . . Mj , a control operator that
gives access to such contexts would not allow us to get the backtracking behaviour that was
so crucial for arg-fc in call-by-value and whose discriminating power we want to study.
So we find it necessary to extend the calculus with a construct for call-by-value contexts,
which can then be seized by callcc. Nonetheless, λ-abstractions and applications are
call-by-name (albeit “weak”).

Definition 5. Values V , evaluation contexts E and general terms M are given by the
following BNFs:
V ::= λx.M | γx.M
M ::= V | x | M M |
callcc M | throw M M |
letval x = M in M
E ::= [ ] | EM |
throw E M |
letval x = E in M
Reduction →n is defined as follows:
E[(λx.M )N ] →n E[M [x ↦ N ]]
E[callcc M ] →n E[M (γz.E[z])]
E[throw (γz.M ) N ] →n M [z ↦ N ]
E[letval x = V in M ] →n E[M [x ↦ V ]]

Remark. The calculus in Definition 5 could be given a cps semantics by extending the
call-by-name cps transform [22] as follows.
x = λk.xk
λx.M = λk.k(λxk.M k)
MN = λk.M (λm.mN k)
callcc M = λk.M (λf.f (λp.pk)k)
throw M N = λk.M (λm.N (λn.mn))
letval x = N in M = λk.N (λy.(λx.M k)(λp.py))

The letval construct fits naturally into this semantics — not surprisingly so, since weak
call-by-name factors over call-by-value [12].

5.2. Separating λx.xx and λx.x(λy.xy)

In a weak λ-calculus, Ω ≡ (λx.xx)(λx.xx) and λy.Ω are not operationally equivalent.


So the problem arises of distinguishing terms of the form M and λy.M y (y fresh), in
contrast with the ordinary (non-weak) λ-calculus, where separation is considered only up
to η [2]. At first sight, it would seem that a very drastic control effect would be well suited
for the purpose of discriminating x and λy.xy. But again, the disruptive nature of exits
turns out to be self-defeating as far as discriminating is concerned. For suppose we plan to
discriminate x and λy.xy by plugging an exit into x as above. When we then try to move
on to discriminate the slightly more complicated terms xx and x(λy.xy), our separating
context is hoist with its own petard: in both cases, the exit occurs during the first call to
x, leaving no possibility to discriminate its argument, x and λy.xy, respectively. We have
encountered this self-defeating nature of exits before. Again it turns out that not all control
effects are like exits.
The lynch-pin of the development is once again arg-fc. The term we use is the same
as in call-by-value, even though both callcc and λ now have a quite different semantics.
Let A = callcc(λk.λx.throw k (λy.x)).
As in call-by-value, much depends on the fact that A is not overly disruptive; more
precisely, in some contexts, it behaves like the identity in that the backtracking does not
affect the result (compare Lemma 5).

Lemma 6 For all evaluation contexts E and terms M , E[AM ] →∗n E[M ].
Proof:

E[(callcc(λk.λx.throw k (λy.x)))M ]
→n E[(λk.λx.throw k (λy.x))(γz.E[zM ])M ]
→n E[(λx.throw (γz.E[zM ]) (λy.x))M ]
→n E[throw (γz.E[zM ]) (λy.M )]
→n E[(λy.M )M ]
→n E[M ]

Lemma 6 also corroborates that we need to have evaluation contexts other than those of
the form [ ]M1 . . . Mj if we want to use A for separating.

Proposition 3 In the weak call-by-name λ-calculus extended with callcc and letval
as in Definition 5, the terms λx.xx and λx.x(λy.xy) can be separated.
Proof: We show that for each value P and term Q, there exists an evaluation context E such that

E[λx.xx] →∗n P and E[λx.x(λy.xy)] →∗n Q


Let

E = letval f = [ ]A in letval z = f P in f Q
E2 = letval f = [ ] in letval z = f P in f Q

First, consider λx.xx. Using Lemma 6, we have

E[λx.xx]
≡ E2 [(λx.xx)A]
→n E2 [AA]
→∗n E2 [A]
≡ E2 [callcc(λk.λx.throw k (λy.x))]
→n E2 [(λk.λx.throw k (λy.x))(γz.E2 [z])]
→n E2 [λx.throw (γz.E2 [z]) (λy.x)]
→n letval z = (λx.throw (γz.E2 [z]) (λy.x))P in (λx.throw (γz.E2 [z]) (λy.x))Q
→n letval z = throw (γz.E2 [z]) (λy.P ) in (λx.throw (γz.E2 [z]) (λy.x))Q
→n E2 [λy.P ]
≡ letval f = λy.P in letval z = f P in f Q
→n letval z = (λy.P )P in (λy.P )Q
→n letval z = P in (λy.P )Q
→n (λy.P )Q
→n P

By contrast, for λx.x(λy.xy), we have:

E[λx.x(λy.xy)]
≡ E2 [(λx.x(λy.xy))A]
→n E2 [A(λy.Ay)]
→∗n E2 [λy.Ay]
→n letval z = (λy.Ay)P in (λy.Ay)Q
→n letval z = AP in (λy.Ay)Q
→∗n letval z = P in (λy.Ay)Q
→n (λy.Ay)Q
→n AQ
→∗n Q

It may be worthwhile to compare the proof of Proposition 3 with that of Lemma 5. In each
case, we use backtracking to communicate arguments between different calls of the same
function, so as to separate a term from another in which the backtracking is too localised to
affect the outcome. This narrative can be made precise in the re-use of evaluation contexts
or, in cps, continuations.
Sangiorgi [27, Theorem 8.5] proves that the weak call-by-name λ-calculus (there called
“lazy”), even if enriched with confluent operators whose reduction rules conform to the
so-called GV format, cannot distinguish between λx.xx and λx.x(λy.xy). We refer the
reader to Sangiorgi’s account [27] for the details of the GV (Groote/ Vaandrager) format.
We only remark here that this format is quite a general form of introducing new operators
into operational semantics, specifying the result of an operator application in terms of the
results of its subterms. Crucially, one thing the GV format does not permit is for a term to
depend on an evaluation context, as in Felleisen-style semantics for callcc.
Corollary 2 The weak call-by-name λ-calculus extended with callcc and letval
can separate two terms that cannot be discriminated in any extension of the calculus con-
forming to the GV format.
Furthermore, because the letval construct could be defined in the GV format, we can
be sure that the increase in expressive power is due to callcc.

5.3. A generalisation of arg-fc

In the remainder of this section, we give an n-argument generalisation of arg-fc and
sketch its use in a general separation technique called Böhming out.
We recall the following remark on Böhming out in a weak call-by-name calculus by
Boudol and Laneve [3]:
[. . .] what is important is to be able to distinguish the successive appearances of
a given variable in the head position. As shown by Böhm, the λ-calculus provides
part of this ability. But it generally fails distinguishing subterms like xM1 . . . Mk
and λy.xM1 . . . Mk y [. . .]
The first ability is due to the traditional Böhm tupling combinator
Pn ≝ λx1 . . . xn p.px1 . . . xn

In order to make the second kind of distinction (between values and non-values) one uses
additional features, broadly speaking computational effects, that are sensitive to whether
they appear under a λ or not.
We define combinators Unj and Pnj as follows:

Unj ≝ λx1 . . . xn .xj

Pnj ≝ λx1 . . . xj .callcc(λk.λxj+1 .throw k (λy xj+2 . . . xn p.px1 . . . xn ))

Note that Pnj is a hybrid of Pn = λx1 . . . xn p.px1 . . . xn and A. When applied to n
arguments, Pnj behaves like the tupling combinator. But because it backtracks when it
receives the j + 1st argument, it is sensitive to whether it is applied to j arguments, as
in Pnj M1 . . . Mj , or more than j, as in λy.Pnj M1 . . . Mj y. This generalises our use of A
above.
The following Lemma makes precise to what extent Pnj behaves like a tupling combinator
when applied to n > j arguments.

Lemma 7 E[Pnj L1 . . . Ln Unk ] →∗n E[Lk ]


Proof:

E[Pnj L1 . . . Ln Unk ]
≡ E[(λx1 . . . xj .callcc(λk.λxj+1 .throw k (λyxj+2 . . . xn p.px1 . . . xn )))
L1 . . . Ln Unk ]
→jn E[callcc(λk.λxj+1 .throw k (λyxj+2 . . . xn p.pL1 . . . Lj xj+1 . . . xn ))
Lj+1 . . . Ln Unk ]
→2n E[(λxj+1 .throw (γz.E[zLj+1 . . . Ln Unk ]) (λyxj+2 . . . xn p.pL1 . . . Lj xj+1 . . . xn ))
Lj+1 . . . Ln Unk ]
→n E[throw (γz.E[zLj+1 . . . Ln Unk ])
(λyxj+2 . . . xn p.pL1 . . . Lj+1 xj+2 . . . xn )Lj+2 . . . Ln Unk ]
→n E[(λyxj+2 . . . xn p.pL1 . . . Lj+1 xj+2 . . . xn )Lj+1 . . . Ln Unk ]
→∗n E[Unk L1 . . . Ln ]
≡ E[(λx1 . . . xn .xk )L1 . . . Ln ]
→nn E[Lk ]

As an example, consider these terms (where we assume that x is not free in L1 , L2 , L3 , N1
or N2 ):

M1 = xN1 (x(xL1 L2 L3 ))N2


M2 = xN1 (x(λy.xL1 L2 L3 y))N2

We need to separate xL1 L2 L3 and λy.xL1 L2 L3 y, while simultaneously bringing them to


the top level (“Böhming out”). The first occurrence of x should behave like the second
projection; the second occurrence of x like the first projection; while the third occurrence
of x should be effectful, so that xL1 L2 L3 can be separated from λy.xL1 L2 L3 y.
First, let C1 = (λx.[ ])P43 RU42 RRRU41 , where R is an arbitrary term. Note that

C1 [M1 ] →∗n P43 L1 L2 L3 and C1 [M2 ] →∗n λy.P43 L1 L2 L3 y

Applying Lemma 7 twice, we have:

C1 [M1 ]
≡ (λx.xN1 (x(xL1 L2 L3 ))N2 )P43 RU42 RRRU41
→n P43 N1 (P43 (P43 L1 L2 L3 ))N2 RU42 RRRU41
→∗n P43 (P43 L1 L2 L3 )RRRU41
→∗n P43 L1 L2 L3

(In fact, this holds in any evaluation context E.)


Roughly speaking, by plugging in P43 for x and applying the result to the appropriate
projections U42 and U41 with R as padding, we can Böhm out the second argument of the
first occurrence and the first argument of the second occurrence of x, bringing (a substitution
instance of) the subterm xL1 L2 L3 to the top level.
C1 succeeds in bringing the two differing subterms to the top level; it remains to separate
them. To do so, let P and Q be arbitrary terms, and let

E2 = letval f = [ ] in letval z = f P in f Q U44

Combining these contexts, let C = E2 [C1 ]. Then we have

C[M1 ] →∗n P and C[M2 ] →∗n Q

This is only an illustration, and the reader may consult the literature on Böhming out [2, 3].
However, the above encourages the hope that the separation by backtracking as in arg-fc
could be refined to a systematic technique.

6. Conclusions and directions for further work

The examples we have considered, while not “unreasonable” in a formal sense, show
certain pitfalls one may become entrapped in when trying to reason about continuations in
a preformal way, e.g., when thinking by analogy with exceptions or goto. The strand that
runs through all the examples is that they are based on using a continuation twice, so that
some of the pitfalls can be avoided if this simple fact is kept in mind.
Section 5 provides preliminary evidence that call/cc (provided it can seize call-by-
value contexts) could be similarly discriminating as the π-calculus and the lazy λ-calculus
with resources [3]. The construction of a distinguishing context used there could probably
be refined to yield the necessary Böhm-out technique, with a generalisation of arg-fc as
the Böhm-out combinator.
The ability to use a continuation twice appears to be a crucial aspect of the discriminating
power of continuations. The equivalences broken by such multiple use of a continuation
may hence be a good criterion for distinguishing first-class continuations from other forms
of control, such as goto, exceptions and one-shot continuations [8]. Although it may
be obvious to readers who understand exceptions, it may be worth emphasising that the
techniques for discriminating terms by using a continuation twice do not generalise to
exceptions. A naive transliteration of arg-fc with exceptions instead of continuations
could be attempted like this:

fun argfcexn () =
let
exception e of int -> int
in
(raise e (fn x => raise e (fn y => x)))
handle e f => f
end;

However, the function returned by argfcexn does not have the backtracking behaviour
that was so crucial for arg-fc; instead it will raise an uncaught exception when called
outside the dynamic extent of the handler (despite the fact that the scope of the local
exception name e is static). Contrasting the expressive power of call/cc with that of
exceptions is beyond the scope of the present paper. We hope to pursue this in future work.

References

1. S. Abramsky and L. Ong. Full abstraction in the lazy lambda calculus. Information and Computation,
105(2):159–267, 1993.
2. H. Barendregt. The Lambda-Calculus: Its Syntax and Semantics. North-Holland, Amsterdam, 1980.
3. G. Boudol and C. Laneve. The discriminating power of multiplicities in the λ-calculus. Information and
Computation, 126(1):83–102, April 1996.
4. M. Dezani-Ciancaglini, J. Tiuryn, and P. Urzyczyn. Discrimination by parallel observers. In Proceedings,
Twelfth Annual IEEE Symposium on Logic in Computer Science, pages 396–407. IEEE Computer Society
Press, 1997.
5. B. Duba, R. Harper, and D. MacQueen. Typing first-class continuations in ML. In Proc. ACM Symp.
Principles of Programming Languages, pages 163–173, January 1991.
6. M. Felleisen. On the expressive power of programming languages. In Science of Computer Programming,
volume 17, pages 35–75, 1991. Preliminary version in: Proc. European Symposium on Programming,
Lecture Notes in Computer Science, 432. Springer-Verlag (1990), 134–151.
7. A. Filinski. Declarative continuations: an investigation of duality in programming language semantics. In
D. H. Pitt et al., editors, Category Theory and Computer Science, number 389 in Lecture Notes in Computer
Science, pages 224–249. Springer-Verlag, 1989.
8. D.P. Friedman and C.T. Haynes. Constraining control. In Proc. 12th ACM Symposium on Principles of
Programming Languages, pages 245–254, 1985.
9. D.P. Friedman, M. Wand, and C.T. Haynes. Essentials of Programming Languages. MIT Press and McGraw-
Hill, 1992.
10. T.G. Griffin. A formulae-as-types notion of control. In Proc. 17th ACM Symposium on Principles of
Programming Languages, pages 47–58, San Francisco, CA USA, 1990.
11. C.T. Haynes. Logic continuations. Journal of Logic Programming, 4:157–176, 1987.
12. J. Hatcliff and O. Danvy. Thunks and the λ-calculus. Journal of Functional Programming, 7(2):303–319,
1997.
13. R. Harper, B. Duba, and D. MacQueen. Typing first-class continuations in ML. Journal of Functional
Programming, 3(4), October 1993.
14. R. Harper and M. Lillibridge. Operational interpretations of an extension of Fω with control operators.
Journal of Functional Programming, 6(3):393–417, May 1996.
15. C.T. Haynes, D.P. Friedman, and M. Wand. Obtaining coroutines with continuations. Journal of Computer
Languages, 11(3/4):143–153, 1986.
16. R. Kelsey, W. Clinger, and J. Rees, editors. Revised5 report on the algorithmic language Scheme. Higher-
Order and Symbolic Computation, 11(3):7–105, 1998.
17. P.J. Landin. A generalization of jumps and labels. Report, UNIVAC Systems Programming Research,
August 1965.
18. P.J. Landin. A generalization of jumps and labels. Higher-Order and Symbolic Computation, 11(2), 1998.
19. M. Lillibridge. Exceptions are strictly more powerful than Call/CC. Technical Report CMU-CS-95-178,
Carnegie Mellon University, July 1995.
20. A.R. Meyer and J.G. Riecke. Continuations may be unreasonable (preliminary report). In Proceedings of
the 1988 ACM Conference on Lisp and Functional Programming, pages 63–71. ACM, 1988.
21. R. Milner and M. Tofte. Commentary on Standard ML. MIT Press, 1991.
22. G. Plotkin. Call-by-name, call-by-value, and the λ-calculus. Theoretical Computer Science, 1(2):125–159,
1975.
23. J.C. Reynolds. GEDANKEN — A simple typeless language based on the principle of completeness and the
reference concept. Communications of the ACM, 13:308–319, 1970.

24. J.C. Reynolds. Definitional interpreters for higher-order programming languages. In Proceedings of the
25th ACM National Conference, pages 717–740. ACM, August 1972.
25. J.G. Riecke. Should a function continue? Master’s thesis, Massachusetts Institute of Technology, 1989.
Available as technical report MIT/LCS/TR-459 (MIT Laboratory for Computer Science).
26. A. Sabry. Note on axiomatizing the semantics of control operators. Technical Report CIS-TR-96-03,
University of Oregon, 1996.
27. D. Sangiorgi. The lazy lambda calculus in a concurrency scenario. Information and Computation,
111(1):120–153, May 1994.
28. D. Sangiorgi. Lazy functions and mobile processes. Technical Report RR-2515, INRIA-Sophia Antipolis,
1995.
29. D. Sitaram and M. Felleisen. Reasoning with continuations II: full abstraction for models of control. In
M. Wand, editor, Lisp and Functional Programming. ACM, 1990.
30. G. Springer and D.P. Friedman. Scheme and the Art of Programming. MIT Press, 1989.
31. G. L. Steele Jr and G.J. Sussman. Scheme: An interpreter for extended lambda calculus. AI Memo 349,
Massachusetts Institute of Technology Artificial Intelligence Laboratory, Cambridge, Massachusetts, 1975.
32. C. Strachey and C.P. Wadsworth. Continuations: A mathematical semantics for handling full jumps. Tech-
nical Monograph PRG-11, Oxford University Computing Laboratory, January 1974.
33. H. Thielecke. Continuation passing style and self-adjointness. In Proceedings 2nd ACM SIGPLAN Workshop
on Continuations, number NS-96-13 in BRICS Notes Series, December 1996.
34. H. Thielecke. Categorical Structure of Continuation Passing Style. PhD thesis, University of Edinburgh,
1997. Also available as technical report ECS-LFCS-97-376.
35. H. Thielecke. Continuation semantics and self-adjointness. In Proceedings MFPS XIII,
volume 6 of Electronic Notes in Theoretical Computer Science. Elsevier, 1997. URL:
http://www.elsevier.nl/locate/entcs/volume6.html.
36. H. Thielecke. An introduction to Landin’s “A generalization of jumps and labels”. Higher-Order and
Symbolic Computation, 11(2), 1998.
