Pi-calculus

syntax and reduction semantics
Francesco Zappa Nardelli
INRIA Rocquencourt, MOSCOVA research team
francesco.zappa_nardelli@inria.fr
MPRI Concurrency course with:
Pierre-Louis Curien (PPS), Roberto Amadio (PPS), Catuscia Palamidessi (INRIA Futurs)
MPRI - Concurrency October 20, 2006
High-level programming languages
For non-distributed, non-concurrent programming, they are pretty good. We have
ML (SML/OCaml), Haskell, Java, C#, with:
• type safety
• rich concrete types — data types and functions
• abstraction mechanisms for program structuring — ML modules and abstract
types, type classes and monads, classes and objects, ...
But this is only within single executions of single, sequential programs.
What about distributed computation?
Challenges (idiosyncratic survey)
• Local concurrency: π-calculus, Join, Pict, ...
• Mobile computations: JoCaml, Nomadic Pict, ...
• Marshalling: choice of distributed abstractions, and trust assumptions: Acute, HashCaml, ...
• Dynamic (re)binding and evaluation strategies: exchanging values between programs
• Type equality between programs: run-time type names, type-safe and abstraction-safe
interaction (and type equality within programs)
• Typed interaction handles: establishing shared expression-level names between programs
• Version change: type safety in the presence of dynamic linking. Controlling dynamic linking.
Dynamic update
• Semantics for real-world network abstractions, TCP, UDP, Sockets
• Security: security policies, executing untrusted code, protocols, language based
• Module structure again: first-class/recursive/parametric modules. Exposing interfaces to other
programs via communication
Local concurrency
Local: within a single failure domain, within a single trust domain, low-latency
interaction.
• Pure (implicit parallelism or skeletons — parallel map, etc.)
• Shared memory
— mutexes, cvars (incomprehensible, uncomposable, common)
— transactional (Venari, STM Haskell/Java, AtomCaml, ...)
• Message passing
semantic choices: asynchronous/synchronous, different synchronisation styles
(CSP/CCS, Join, ...), input-guarded/general nondeterministic choice, ...
cf Erlang [AVWW96], Telescript, Facile [TLK96,Kna95], Obliq [Car95], CML [Rep99], Pict
[PT00], JoCaml [JoC03], Alice [BRS+05], Esterel [Ber98], ...
In these lectures...
• Simplify by considering just interaction.
• What are the equations of interactions?
• Find a logic for interaction.
• Find new/correct paradigms for programming.
• What about distribution?
• Mobility?
• Security?
Understand some key concepts behind concurrency theory
(from a programming language perspective).
CCS, synchronisation
In CCS, a system evolves when two threads synchronise over the same name:
  b̄.P | b.Q  →  P | Q
We will focus on reductions for the time being (that is, forget about LTSs until
next lecture). Summary...
CCS, reduction semantics
We define reduction, denoted →, by:

  ā.P | a.Q  →  P | Q

  P → P′
  ─────────────────
  P | Q → P′ | Q

  P → P′
  ─────────────────
  (νx)P → (νx)P′

  P ≡ P′     P′ → Q′     Q′ ≡ Q
  ──────────────────────────────
  P → Q

where the structural congruence relation, denoted ≡, is defined as:

  P | Q ≡ Q | P                   (P | Q) | R ≡ P | (Q | R)
  P | 0 ≡ P                       !P ≡ P | !P
  (νa)P | Q ≡ (νa)(P | Q)         if a ∉ fn(Q)

Theorem   P → Q iff P ─τ→ ≡ Q.
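As a quick illustration (not on the original slide) of why structural congruence is part of the definition: the communication axiom only applies to adjacent complementary prefixes, so to reduce b̄.P | (Q | b.R) we first rearrange with ≡ and then close under parallel composition:

  b̄.P | (Q | b.R)  ≡  (b̄.P | b.R) | Q  →  (P | R) | Q  ≡  P | (Q | R)

Restriction, by contrast, blocks the interaction: (νb)(b̄.P) | b.Q has no reduction, because the restricted b is a different name from the free b.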
Value passing
Names can be interpreted as channel names: allow channels to carry values, so
instead of pure outputs ā.P and inputs a.P allow e.g. ā⟨15, 3⟩.P and a(x, y).Q.

Value 6 being sent along channel x:

  x̄⟨6⟩ | x(u).ȳ⟨u⟩  →  (ȳ⟨u⟩){6/u}  =  ȳ⟨6⟩

Restricted names are different from all others:

  x̄⟨5⟩ | (νx)(x̄⟨6⟩ | x(u).ȳ⟨u⟩)      →    x̄⟨5⟩ | (νx)(ȳ⟨6⟩)
          ≡                                        ≡
  x̄⟨5⟩ | (νx′)(x̄′⟨6⟩ | x′(u).ȳ⟨u⟩)   →    x̄⟨5⟩ | (νx′)(ȳ⟨6⟩)

(note that we are working with alpha equivalence classes).
Exercise
Program a server that increments the value it receives.
  !x(u).x̄⟨u + 1⟩
Argh!!! This server exhibits exactly the problems we want to avoid when
programming concurrent systems:
  x̄⟨3⟩.x(u).P | x̄⟨7⟩.x(v).Q | !x(u).x̄⟨u + 1⟩
      →  . . .  →  P{8/u} | Q{4/v} | !x(u).x̄⟨u + 1⟩
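One bad interleaving, spelled out (a reconstruction, not on the original slide): the server answers the first request while the second client is already listening on x, so the replies cross over:

  x̄⟨3⟩.x(u).P | x̄⟨7⟩.x(v).Q | !x(u).x̄⟨u + 1⟩
      →  x(u).P | x̄⟨7⟩.x(v).Q | x̄⟨4⟩ | !x(u).x̄⟨u + 1⟩          (server takes 3)
      →  x(u).P | x(v).Q | x̄⟨4⟩ | x̄⟨8⟩ | !x(u).x̄⟨u + 1⟩         (server takes 7)
      →  P{8/u} | x(v).Q | x̄⟨4⟩ | !x(u).x̄⟨u + 1⟩                (first client reads 8)
      →  P{8/u} | Q{4/v} | !x(u).x̄⟨u + 1⟩                       (second client reads 4)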
Ideas...
Allow those values to include channel names.
A new implementation for the server:
  !x(u, r).r̄⟨u + 1⟩
This server prevents confusion provided that the return channels are distinct.
How can we guarantee that the return channels are distinct?
The restriction operator we have is overly restrictive...
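One natural fix (an illustration anticipating the next slide) is for each client to create its own private return channel and send it along with the request:

  (νr1)(x̄⟨3, r1⟩.r1(u).P) | (νr2)(x̄⟨7, r2⟩.r2(v).Q) | !x(u, r).r̄⟨u + 1⟩
      →  . . .  →  (νr1)(P{4/u}) | (νr2)(Q{8/v}) | !x(u, r).r̄⟨u + 1⟩

Since r1 and r2 are restricted, they are distinct from every other name, so each answer can only reach the client that asked for it. But note that r1 must travel on x outside the scope of (νr1): the scope has to move with the name, which is exactly the scope extrusion that the π-calculus provides.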
The π-calculus
1. A name received on a channel can then be used itself as a channel name for
output or input — here y is received on x and then used to output 7:

  x̄⟨y⟩ | x(u).ū⟨7⟩  →  ȳ⟨7⟩

2. A restricted name can be sent outside its original scope. Here y is sent on
channel x outside the scope of the (νy) binder, which must therefore be moved
(with care, to avoid capture of free instances of y). This is scope extrusion:

  (νy)(x̄⟨y⟩ | y(v).P) | x(u).ū⟨7⟩  →  (νy)(y(v).P | ȳ⟨7⟩)  →  (νy)(P{7/v})
The (simplest) π-calculus
Syntax:

  P, Q ::=  0             nil
         |  P | Q         parallel composition of P and Q
         |  c̄⟨v⟩.P        output v on channel c and resume as P
         |  c(x).P        input from channel c
         |  (νx)P         new channel name creation
         |  !P            replication

Free names (alpha-conversion follows accordingly):

  fn(0) = ∅                        fn(P | Q) = fn(P) ∪ fn(Q)
  fn(c̄⟨v⟩.P) = {c, v} ∪ fn(P)      fn(c(x).P) = (fn(P) \ {x}) ∪ {c}
  fn((νx)P) = fn(P) \ {x}          fn(!P) = fn(P)
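A quick sanity check (not on the original slide):

  fn((νx)(x̄⟨y⟩.0 | x(u).ū⟨z⟩.0))  =  ({x, y} ∪ {x, z}) \ {x}  =  {y, z}

since the binders (νx) and (u) remove x and u, while y and z remain free.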
π-calculus, reduction semantics
Structural congruence:

  P | 0 ≡ P                        P | Q ≡ Q | P
  (P | Q) | R ≡ P | (Q | R)        !P ≡ P | !P
  (νx)(νy)P ≡ (νy)(νx)P
  P | (νx)Q ≡ (νx)(P | Q)          if x ∉ fn(P)

Reduction rules:

  c̄⟨v⟩.P | c(x).Q  →  P | Q{v/x}

  P → P′
  ─────────────────
  P | Q → P′ | Q

  P → P′
  ─────────────────
  (νx)P → (νx)P′

  P ≡ P′     P′ → Q′     Q′ ≡ Q
  ──────────────────────────────
  P → Q
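As an illustration (a reconstruction), these rules and structural congruence together produce the scope extrusion example from the slide "The π-calculus", assuming y ∉ fn(x(u).ū⟨7⟩), i.e. y ≠ x:

  (νy)(x̄⟨y⟩ | y(v).P) | x(u).ū⟨7⟩
      ≡  (νy)((x̄⟨y⟩ | x(u).ū⟨7⟩) | y(v).P)        (move the receiver under the binder, rearrange)
      →  (νy)((0 | ȳ⟨7⟩) | y(v).P)                 (communication on x: ū⟨7⟩{y/u} = ȳ⟨7⟩)
      ≡  (νy)(y(v).P | ȳ⟨7⟩)
      →  (νy)(P{7/v} | 0)  ≡  (νy)(P{7/v})         (communication on y)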
Expressiveness
A small calculus (and the semantics only involves name-for-name substitution,
not term-for-variable substitution), but very expressive:
• encoding data structures
• encoding functions as processes (Milner, Sangiorgi)
• encoding higher-order π (Sangiorgi)
• encoding synchronous communication with asynchronous (Honda/Tokoro,
Boudol)
• encoding polyadic communication with monadic (Quaglia, Walker)
• encoding choice (or not) (Nestmann, Palamidessi)
• ...
Example: polyadic with monadic
Let us extend our notion of monadic channels, which carry exactly one name, to
polyadic channels, which carry a vector of names, i.e.

  P ::=  x̄⟨y1, ..., yn⟩.P      output
      |  x(y1, ..., yn).P      input

with the main reduction rule being:

  x̄⟨y1, ..., yn⟩.P | x(z1, ..., zn).Q  →  P | Q{y1, ..., yn / z1, ..., zn}

Is there an encoding from polyadic to monadic channels?
Polyadic with monadic, ctd.
We might try:

  [[x̄⟨y1, ..., yn⟩.P]]  =  x̄⟨y1⟩. ... .x̄⟨yn⟩.[[P]]
  [[x(y1, ..., yn).P]]  =  x(y1). ... .x(yn).[[P]]

but this is broken! Why?
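A hint at the answer (not spelled out on the slide): the translated components of different tuples can interleave on x. For instance, encoding x̄⟨a, b⟩ | x̄⟨c, d⟩ | x(y1, y2).Q yields

  x̄⟨a⟩.x̄⟨b⟩ | x̄⟨c⟩.x̄⟨d⟩ | x(y1).x(y2).[[Q]]

which can reduce so that the receiver gets a from the first sender and then c from the second, a "tuple" that neither sender emitted; the stray outputs x̄⟨b⟩ and x̄⟨d⟩ are left behind to pollute later exchanges on x.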
The right approach is to use new binding:

  [[x̄⟨y1, ..., yn⟩.P]]  =  (νz)(x̄⟨z⟩.z̄⟨y1⟩. ... .z̄⟨yn⟩.[[P]])
  [[x(y1, ..., yn).P]]  =  x(z).z(y1). ... .z(yn).[[P]]

where z ∉ fn(P) (why?). (We also need some well-sortedness assumptions.)
Recursion
Alternative to replication: recursive definition of processes.
Recursive definition:
  K = (x̃).P

Constant application:

  K⌊ã⌋

Reduction rule:

  K = (x̃).P
  ──────────────────
  K⌊ã⌋  →  P{ã/x̃}
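A small instance (an illustration): with the definition B = (i, o).i(u).ō⟨u⟩.B⌊i, o⌋, the rule gives B⌊a, b⌋ → a(u).b̄⟨u⟩.B⌊a, b⌋, a one-place forwarder from a to b that unfolds back to the same shape after each exchange.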
Recursion vs. Replication
Theorem   Any process involving recursive definitions is representable using
replication, and conversely replication is redundant in the presence of recursion.

The proof requires some techniques we have not seen, but...

Intuition: given

  F = (x̃).P

where P may contain recursive calls to F of the form F⌊z̃⌋, we may replace the
RHS with the following process abstraction, containing no mention of F:

  (x̃).(νf)(f̄⟨x̃⟩ | !f(x̃).P′)

where P′ is obtained by replacing every occurrence of F⌊z̃⌋ by f̄⟨z̃⟩ in P, and f
is fresh for P.
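A tiny instance of the translation (a reconstruction): for F = (x).x(u).F⌊x⌋ we get

  (x).(νf)(f̄⟨x⟩ | !f(x).x(u).f̄⟨x⟩)

so F⌊a⌋ becomes (νf)(f̄⟨a⟩ | !f(x).x(u).f̄⟨x⟩): each output on the fresh name f spawns one copy of the body, which performs one input on its argument and then re-issues the recursive call as an output on f.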
Data as processes: booleans
Consider the truth-values {True, False}. Consider the abstractions:

  T = (x).x(t, f).t̄⟨⟩        and        F = (x).x(t, f).f̄⟨⟩

These represent a located copy of a truth-value at x. The process

  R = (νt)(νf)b̄⟨t, f⟩.(t().P | f().Q)

where t, f ∉ fn(P, Q), can test for a truth-value located at b and behave
accordingly as P or Q:

  R | T⌊b⌋  →  →  P | (νt, f)f().Q

The term obtained behaves as P because the thread (νt, f)f().Q is deadlocked.
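Spelling out the two steps (a reconstruction): R | T⌊b⌋, i.e. (νt)(νf)(b̄⟨t, f⟩.(t().P | f().Q)) | b(t, f).t̄⟨⟩, reduces as

      →  (νt)(νf)((t().P | f().Q) | t̄⟨⟩)           (communication on b, after extruding t and f)
      →  (νt)(νf)(P | f().Q)  ≡  P | (νt, f)f().Q   (communication on t; t, f ∉ fn(P))

The case of F⌊b⌋ is symmetric and releases Q instead.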
Data as processes: integers
Using a unary representation:

  [[k]] = (x).x(z, o).(ō⟨⟩)^k.z̄⟨⟩

where (ō⟨⟩)^k abbreviates ō⟨⟩.ō⟨⟩. ... .ō⟨⟩ (k occurrences).

Operations on integers can be expressed as processes. For instance,

  succ = (x, y).!x(z, o).ō⟨⟩.ȳ⟨z, o⟩

What is the role of the final output on z? (Hint: omit it, and try to define the test for zero.)
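As an illustration (a reconstruction, not on the slide) of how succ composes with a located number: with the numeral 2 located at y, i.e. [[2]]⌊y⌋ = y(z, o).ō⟨⟩.ō⟨⟩.z̄⟨⟩, the process

  (νy)([[2]]⌊y⌋ | succ⌊x, y⌋)

answers a request x̄⟨z, o⟩ by emitting one ō⟨⟩ itself and then forwarding ⟨z, o⟩ to y, which contributes two more ō⟨⟩ and the final z̄⟨⟩: three o-signals followed by one z-signal in total, i.e. the behaviour of [[3]] located at x.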
Another representation for integers
type Nat = zero | succ Nat
Define:
  [[zero]] = (x).!x(z, s).z̄⟨⟩
  [[succ]] = (x, y).!x(z, s).s̄⟨y⟩

and for each e of type Nat:

  [[succ e]] = (x).(νy)([[succ]]⌊x, y⌋ | [[e]]⌊y⌋)

This approach generalises to arbitrary datatypes.
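For instance (just unfolding the definitions), the numeral 1 is represented by

  [[succ zero]] = (x).(νy)([[succ]]⌊x, y⌋ | [[zero]]⌊y⌋)
                = (x).(νy)(!x(z, s).s̄⟨y⟩ | !y(z, s).z̄⟨⟩)

a located value that, when queried at x, answers on s with a pointer y to the representation of its predecessor, which in turn answers queries at y on the z channel (signalling zero).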
A step backward: defining a language
Recipe:
1. define the syntax of the language (that is, specify what a program is);
2. define its reduction semantics (that is, specify how programs are executed);
3. define when two terms are equivalent (that is, hum...?!).
Share and enjoy the new language...
Equivalent?
Suppose that P and Q are equivalent (in symbols: P ≃ Q).

Which properties do we expect?

Preservation under contexts   For all contexts C[−], we have C[P] ≃ C[Q];

Same observations   If P ↓ x then Q ↓ x, where P ↓ x means that we can
observe x at P (or P can do x);

Preservation of reductions   P and Q must mimic each other's reduction steps (that is,
they realise the same nondeterministic choices).
Formally
A relation R between processes is

preserved by contexts: P R Q implies C[P] R C[Q] for all contexts C[−];

barb preserving: P R Q and P ↓ x imply Q ⇓ x, where P ⇓ x holds if there
exists P′ such that P →* P′ and P′ ↓ x, and P ↓ x holds if

  P ≡ (νñ)(x̄⟨y⟩.P′ | P″)   or   P ≡ (νñ)(x(u).P′ | P″)   for x ∉ ñ;

reduction closed: P R Q and P → P′ imply that there is a Q′ such that
Q →* Q′ and P′ R Q′
(→* is the reflexive and transitive closure of →).
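Some concrete barbs (an illustrative aside): x̄⟨y⟩ ↓ x and a(u).ū⟨⟩ ↓ a, whereas (νx)x̄⟨y⟩ has no barb at x, since here x ∈ ñ in the definition above. The process (νz)(z̄⟨⟩ | z().x̄⟨y⟩) does not satisfy ↓ x, but it does satisfy ⇓ x, reaching the barb after one reduction.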
Reduction-closed barbed congruence
Let reduction-closed barbed congruence, denoted ≃, be the largest symmetric
relation over processes that is preserved by contexts, barb preserving, and
reduction closed.

Remark: reduction-closed barbed congruence is a weak equivalence: the number of
internal reduction steps is not important in the bisimulation game imposed by
“reduction closed”.
Some equivalences (?)
Compare the processes:

1. P = x̄⟨y⟩ and Q = 0
2. P = ā⟨x⟩ and Q = ā⟨z⟩
3. P = (νx)x̄⟨⟩.R and Q = 0
4. P = (νx)(x̄⟨y⟩.R1 | x(z).R2) and Q = (νx)(R1 | R2{y/z})

Argh... we need other proof techniques to show that processes are equivalent!

Remark: we can reformulate barb preservation as “if P R Q and P ⇓ x imply
Q ⇓ x”. This is sometimes useful...
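For instance (sketching the kind of argument needed for item 1): P = x̄⟨y⟩ satisfies P ↓ x, while Q = 0 has no reductions and no barbs, so Q ⇓ x fails; a relation containing the pair (P, Q) therefore cannot be barb preserving, and P and Q are not reduction-closed barbed congruent. The other items call for well-chosen contexts; item 4 asks whether the single deterministic internal step separating P from Q can be observed at all.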
Example: local names are different from global names
Show that in general

  (νx)!P  ≄  !(νx)P

Intuition: the copies of P in (νx)!P can interact over x, while the copies of (νx)P cannot.

We need a process that interacts with another copy of itself over x, but that cannot interact with
itself over x. Take

  P = x̄ ⊕ x().b̄

where Q1 ⊕ Q2 = (νw)(w̄ | w().Q1 | w().Q2).

We have that (νx)!P ⇓ b, while it is not the case that !(νx)P ⇓ b.
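To see the positive half concretely (a sketch): (νx)!P ≡ (νx)(P | P | !P); let the first copy resolve its ⊕ to x̄ and the second to x().b̄ (two reductions on their private w channels); one further reduction on x then yields a state containing b̄, which is a barb at b, so (νx)!P ⇓ b. In !(νx)P each copy has its own private x, so the resolved ⊕ can never find a partner for a communication on x, and b̄ is never released.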
Exercises
1. Compare the transitions of F⌊u, v⌋, where F = (x, y).x(y).F⌊y, x⌋, to those of its
encoding in the recursion-free calculus (use replication).

2. Consider the pair of mutually recursive definitions

  G = (u, v).(u().H⌊u, v⌋ | k().H⌊u, v⌋)
  H = (u, v).v().G⌊u, v⌋

Write the process G⌊x, y⌋ in terms of replication (you have to invent the technique to translate
mutually recursive definitions yourself).

3. Implement a process that negates at location a the truth-value found at location b. Implement
a process that sums two integers (using both the representations we have seen).

4. Design a representation for lists using π-calculus processes. Implement list append.
References
Books
• Robin Milner, Communicating and mobile systems: the π-calculus. (CUP, 1999).
• Robin Milner, Communication and concurrency. (Prentice Hall, 1989).
• Davide Sangiorgi, David Walker, The π-calculus: a theory of mobile processes. (CUP, 2001).
Tutorials available online:
• Robin Milner, The polyadic pi-calculus: a tutorial. Technical Report ECS-LFCS-91-180,
University of Edinburgh.
• Joachim Parrow, An introduction to the pi-calculus. http://user.it.uu.se/~joachim/intro.ps
• Peter Sewell. Applied pi — a brief tutorial. Technical Report 498, University of Cambridge.
http://www.cl.cam.ac.uk/users/pes20/apppi.ps