
Computational Higher Type Theory (CHTT)

Robert Harper
Lecture Notes of Week 3 by Michael Coblenz and Ryan Kavanagh

1 Setting the Scene

Last week, we discovered the need for Kripke logical relations, which are related to the notion
of pre-sheaves from category theory. The problem, we discovered, is that we need to allocate
fresh variables, but when we add them to the context, we have to worry about whether all
the things we proved in the previous context still hold with the addition a new variable. We
will establish an ordering on worlds ∆ and show that properties that hold one world hold in
future worlds too.
When we write HN∆A (M):

• ∆ is the “world” of indeterminates, that is, variables and their types;


• A is the type at which to consider M.

We define hereditary normalization as follows:

HN∆b (M) ≜ M normβ

HN∆A1→A2 (M) ≜ ∀∆′ ≥ ∆. if HN∆′A1 (M1) then HN∆′A2 (M M1)

That is, a function is hereditarily normalizing if, when applied to a hereditarily normalizing
argument, the application is hereditarily normalizing.
Review from last time:
Lemma 1 (Head expansion). If HN∆A (M′) and M 7→ M′ then HN∆A (M).

Lemma 2 (Workhorse).

1. If HN∆A (M) then M normβ.

2. If E normβ then HN∆A (E{x}) where ∆ ` x : C and ∆ ` E : C ⇝ A.
Theorem 3 (Fundamental theorem of logical relations (FTLR)). If Γ ` M : A and
γ : ∆ → Γ, then HN∆Γ (γ) implies HN∆A (γ̂(M)).

Corollary 4. If Γ ` M : A then M normβ; i.e., well-typed terms are normalizing.

It is important to note that these considerations are behavioral, that is, they pertain to
the behavior of terms. This is much deeper than structural considerations. However, the
important thing about the setup here is that the structure of a term results in specific
behavior. The point is that we can know that a term normalizes (a behavioral question) by
knowing only that it has a type (a structural question).
In order to prove Theorem 3, we need to show: if ∆ ` x : A, then HN∆A (x).
Informally, we need to show that xM is hereditarily normalizing whenever M is hereditarily
normalizing. We also need to show that HN∆A1→A2 (M) implies M normβ (Lemma 9 from
last week). These two proofs rely on each other via mutual induction.


The problem is that in the function case, we need an argument for M , which we don’t have,
because in our negative formulation we can only reason about M by applying it to something.
The question is, what do we apply it to? Our idea is to allocate a fresh variable. We have
to do so at type smaller than A1 → A2 so that the induction hypothesis will hold, and
conveniently, A1 , the argument type of M , is such a type. Consider M x, which is of type
A2 , and note that the induction hypothesis applies! Then we can conclude that M x normβ ,
so therefore M normβ by head expansion.
In order to allocate a variable, we need to move from ∆ (which does not include the fresh
variable) to ∆′ = ∆, x : A1 (which does). But now we have a problem, stability: do facts
from the old world ∆ still hold in the new world ∆′? What we need to know is that, for a
given proposition P, HN∆A (P) implies HN∆′A (P) for all ∆′ ≥ ∆.
These worlds are ordered by a relation ≥. We also need ≥ to be transitive so that once a fact is
established in a given world, it is also established in all future worlds. Furthermore, if ∆′ ≥ ∆,
then HN∆A (M) implies HN∆′A (M); this property is called “functoriality” or “monotonicity”.
In our context, this is going to mean that we can allocate a fresh variable without altering
the behavior of a given program.
Exercise 5 (Adding products). Add “negative” products to the language and prove termi-
nation and normalization. The products look like this:
⟨M1, M2⟩ value
M · 1, M · 2 projections
⟨M1, M2⟩ · 1 7→ M1
⟨M1, M2⟩ · 2 7→ M2
Define:

1. HTA1×A2 (M), and reprove termination.

2. HN∆A1×A2 (M), and reprove normalization.

Consider this for negative and positive types. The negative formulation takes the approach
that if one projects from it, it is well-behaved; that is, one uses a negative pair by projecting
from it. The positive formulation would be defined in terms of pattern matching:

• let ⟨⟩ be M in N
• let ⟨x, y⟩ be M in Nx,y
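To make the two readings concrete, here is an informal Python sketch (the names pair and let_pair are ours, not from the notes): a negative pair is used by projecting from it, while a positive pair is a constructed value consumed by pattern matching.

```python
# Negative products: a pair is no more than its two projections,
# so we can model it as a function awaiting a projection index.
def pair(m1, m2):
    return lambda i: m1 if i == 1 else m2   # pair(..)(1) plays the role of M . 1

# Positive products: a pair is a constructed value, and the eliminator
# pattern-matches:  let <x, y> be M in N_{x,y}
def let_pair(m, body):
    x, y = m            # the match exposes both components at once
    return body(x, y)
```

Here pair(1, 2)(1) projects the first component, while let_pair((3, 4), lambda x, y: x + y) is the positive elimination.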

2 Booleans

This brings us to showing hereditary termination and hereditary normalization for the
booleans. We can think of booleans as the prime example of positive types.

• true, false are the values, otherwise known as the introduction or canonical forms.
• if(M ; P ; Q) is the elimination form.
• We extend the evaluation contexts: E ::= . . . | if(E; P ; Q)

• Dynamic semantics:
– if(true; P ; Q) 7→ P
– if(false; P ; Q) 7→ Q


Γ ` true : bool Γ ` false : bool

Γ ` M : bool Γ`P :A Γ`Q:A


Γ ` if A (M ; P ; Q) : A

The subscript A in if is a bit of syntax for the formalism, but upon type erasure, it would go
away.
Note that if is a recursor: it witnesses the fact that bool is inductively defined by true and
false. Importantly, if has nothing to do with logical implication! It merely witnesses that
these are the two booleans. Perhaps a better name for if would have been boolcase, because
it is the thing that case-analyzes the two booleans, but the name if has stuck for historical
reasons.

2.1 Termination for closed interpretations

In this section, we prove termination for closed interpretations in the presence of booleans.
The key idea here is our definition of hereditary termination:

HTbool (M) ≜ M 7→∗ true or M 7→∗ false

This is a positive interpretation because bool is itself positive. We have not formulated if in a
style of “if you were to do something with an if, this is how it would behave. . . ”.
By our updated definition of hereditary termination, if HTbool (M ) then M termβ (since bool
is a base type). Head expansion is also straightforward because of the way we defined the
dynamic semantics; one can always back evaluation up a step.
The proof of the fundamental theorem of logical relations needs to be updated for bool, but
this is easy. We need to show that if Γ ` M : A and HTΓ (γ) then HTA (γ̂(M )). Specifically:

if Γ ` M : bool and HTΓ (γ) then HTbool (γ̂(M )).

By case analysis on the typing judgement, we see that either M = true or M = false. In
either case, M contains no variables, so γ̂(M) = M. By definition of the dynamic semantics,
M is a value.
A corollary: if M : A then M termβ .
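As an illustration, here is a small-step evaluator for the boolean fragment, sketched in Python (the encoding and the names ite, step, normalize, ht_bool are ours, not part of the formal development):

```python
# Terms are tuples: ("true",), ("false",), ("if", M, P, Q).

TRUE, FALSE = ("true",), ("false",)

def ite(m, p, q):
    """The elimination form if(M; P; Q)."""
    return ("if", m, p, q)

def step(t):
    """One step of the dynamics; None if no step applies."""
    if t[0] == "if":
        _, m, p, q = t
        if m == TRUE:          # if(true; P; Q) |-> P
            return p
        if m == FALSE:         # if(false; P; Q) |-> Q
            return q
        m2 = step(m)           # search rule, via E ::= if(E; P; Q)
        return None if m2 is None else ("if", m2, p, q)
    return None

def normalize(t):
    """Iterate |-> to a normal form (this fragment always terminates)."""
    while (t2 := step(t)) is not None:
        t = t2
    return t

def ht_bool(m):
    """HT_bool(M): does M |->* true or M |->* false?"""
    return normalize(m) in (TRUE, FALSE)
```

For closed, well-typed terms of this fragment, ht_bool always returns True, which is exactly what hereditary termination at bool asserts.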
Exercise 6. Show hereditary termination for positive products.

HT1 (M) ≜ M 7→∗ ⟨⟩
HTA1×A2 (M) ≜ M 7→∗ ⟨M1, M2⟩ such that HTA1 (M1) and HTA2 (M2)

As an aside, is it possible to give a negative formulation of bool? It would look like this:

HTbool (M) ≜ if 1. A is a type, 2. HTA (P), and 3. HTA (Q), then HTA (ifA (M; P; Q))


But now we would be in trouble in the proof: A is not smaller than bool, so the proof doesn’t
go through!
Now, observe that the negative formulation of booleans corresponds to the type:

bool = ∀A.A → A → A

These are also called Church booleans! This is exactly how one encodes the booleans in
lambda calculus. Presumably Church observed this correspondence directly when inventing
this representation of the booleans.
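The correspondence can be seen directly in any language with first-class functions; here is a Python sketch (the names true, false, boolcase, neg, conj are ours):

```python
# Church booleans: an inhabitant of forall A. A -> A -> A picks one of
# two branches; a boolean is its own case analysis.

true  = lambda t: lambda f: t
false = lambda t: lambda f: f

def boolcase(m, p, q):
    """The eliminator: for Church booleans, if(M; P; Q) is just application."""
    return m(p)(q)

# Operations defined with no built-in conditional at all:
neg  = lambda b: b(false)(true)
conj = lambda a: lambda b: a(b)(false)
```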

2.2 Open terms

We need to show normalization for open terms in the presence of booleans. The fundamental
theorem of logical relations says:

If Γ ` M : A and γ : ∆ → Γ, then HN∆Γ (γ) implies HN∆A (γ̂(M)).

Now we need to define HN∆bool (M) so that we are able to prove the above. How should we
do it?
A first attempt:

HN∆bool (M) ≜ M 7→∗ true or M 7→∗ false

But this makes no sense because M is open. M may well be (for example) x, or xN , or even
if(xN ; P ; Q).
This shows a constraint regarding our definition. We must validate:
Lemma 7 (HN for booleans).

1. HN∆bool (true) and HN∆bool (false)

2. If HN∆bool (M̂) and HN∆A (P̂) and HN∆A (Q̂), then HN∆A (if(M̂; P̂; Q̂)).

If we choose a definition of hereditary normalization that meets these criteria, we will be done.
But what definition should that be? Let us discover it.
By the hypothesis to Lemma 7.2 and the workhorse lemma, we know that M̂ normβ. The
key fact is that for any M such that M normβ, you have that M 7→∗ N for some N that is
head irreducible, that is, some N that is in head normal form. In our proof of the workhorse
lemma, we will want to proceed by case analysis on the head normal form of M̂, and so it is
sufficient for us to require that HN∆bool (M) hold if and only if M normβ.

By the workhorse lemma, we conclude P̂ normβ and Q̂ normβ (since we assumed that
HN∆A (P̂) and HN∆A (Q̂)). As a reminder, the workhorse lemma is:
Lemma (Workhorse lemma, duplicated for reference).

1. If HN∆A (M) then M normβ.

2. If E normβ then HN∆A (E{x}) where ∆ ` x : C and ∆ ` E : C ⇝ A.

Proof. We consider the cases for N. It will be easy to satisfy part 1 of Lemma 7, but we
must choose carefully so that we can show part 2: HN∆A (if(M̂; P̂; Q̂)).
case: N = true. We will need to choose HN∆A (·) so that we can show HN∆A (if(M̂; P̂; Q̂)).
Since M̂ 7→∗ true, we know that the whole if expression reduces to P̂, where HN∆A (P̂) (by
assumption). Therefore, we have HN∆A (if(M̂; P̂; Q̂)) by head expansion.


case: N = false. This case is similar to the case for N = true but with Q̂ instead of P̂.
case: otherwise. What do we know about N? N does not reduce, yet N ≠ true and N ≠ false.
How can this be, given that ∆ ` N : bool? One possibility is that N = x. This is possible if
∆ ` x : bool. Another case is N = xR; this is possible if ∆ ` x : A → bool and ∆ ` R : A.
In general, N can be any evaluation context that starts with a variable (otherwise it would
have been head-reducible)! So N = E{x}. We will need to require that E normβ.
By part 2 of the workhorse lemma, since E normβ, we conclude:

HN∆bool (E{x})

What we are really interested in is showing:

HN∆A (if(E{x}; P̂; Q̂))

Observe that if(E; P̂; Q̂) is itself an evaluation context, and it is normβ because E, P̂, and
Q̂ all are. So part 2 of the workhorse lemma also gives HN∆A (if(E{x}; P̂; Q̂)), and since
if(M̂; P̂; Q̂) 7→∗ if(E{x}; P̂; Q̂), we have the result we need by applying the head expansion
lemma.
We conclude that the following definition of hereditary normalization is the right one:

HN∆bool (M) ≜ M normβ.

3 Adding Sum Types

Having wet our feet with booleans, we extend our development to handle more interesting sum
types. We begin by extending our grammar of types to include the nullary sum—void—and
binary sums:
A, B ::= · · · | void | A + B.
In the formal setting, these come with a collection of typing rules. The introduction rules
are:
Γ ` M : A
------------------ (Sum-L-I)
Γ ` 1 · M : A + B

Γ ` M : B
------------------ (Sum-R-I)
Γ ` 2 · M : A + B

Note that there is no introduction rule for void. The elimination rules are:

Γ ` M : void
------------------- (void-E)
Γ ` caseA M { } : A

and

Γ ` M : A + B    Γ, x : A ` P : C    Γ, x : B ` Q : C
----------------------------------------------------- (Sum-E)
Γ ` caseC M {1 · x ,→ P | 2 · x ,→ Q} : C
Strictly speaking, 1 · M should be thought of as shorthand for something like in[1]{A, B}(M),
but we will dispense with such pedantry. We equip the corresponding terms with the following
dynamics:

M 7→ M′
------------------------------------------------------------------------ (Case-S)
caseC M {1 · x ,→ P | 2 · x ,→ Q} 7→ caseC M′ {1 · x ,→ P | 2 · x ,→ Q}

------------------------------------------------ (Case-L)
caseC 1 · M {1 · x ,→ P | 2 · x ,→ Q} 7→ [M/x]P

------------------------------------------------ (Case-R)
caseC 2 · M {1 · x ,→ P | 2 · x ,→ Q} 7→ [M/x]Q


3.1 In a closed setting

The closed setting is again relatively straightforward. We define the corresponding hereditary
termination predicates:

HTvoid (M) ≜ (never)

HTA+B (M) ≜ either M 7→∗ 1 · N and HTA (N), or M 7→∗ 2 · N and HTB (N).

In particular, for any choice of M, HTvoid (M) is a contradiction. We remark that this is a
positive formulation. Suppose we were to attempt a negative formulation, perhaps something
along the lines of:

HTA+B (M) if and only if, for every type C and all terms P and Q: if

1. for all N, if HTA (N), then HTC ([N/x]P), and

2. for all N, if HTB (N), then HTC ([N/x]Q),

then HTC (caseC M {1 · x ,→ P | 2 · x ,→ Q}).

The inductive character of our enterprise would then break down, and we would face the
same problems as with a negative formulation of bool. We could, however, read off a Church
encoding for sums:
∀C.(A → C) → (B → C) → C.
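Reading that encoding off directly: a Church sum is a function that, given a handler for each branch, applies the right one to its payload. A Python sketch (the names inl, inr, case_of are ours):

```python
# Church-encoded sums, following forall C. (A -> C) -> (B -> C) -> C.
# An injection remembers which handler should receive its payload.

inl = lambda a: lambda l: lambda r: l(a)   # the injection 1 . a
inr = lambda b: lambda l: lambda r: r(b)   # the injection 2 . b

def case_of(m, on_l, on_r):
    """case M {1.x -> P | 2.x -> Q} is, again, just application."""
    return m(on_l)(on_r)

# void = forall C. C has no closed inhabitants, matching its empty
# case analysis: there is nothing to encode.
```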

We expand the proof of the head-expansion lemma to account for these new clauses:
Lemma 8 (Head expansion with sums). If HTT (M′) and M 7→ M′, then HTT (M).

Proof. The proof is by induction on the type. We consider the following new cases for T :

• void. It is never the case that HTvoid (M′), and so the result follows ex falso.

• A + B. Our positive formulation of HTA+B (M′) tells us that, without loss of generality,
M′ 7→∗ 1 · N′ for some N′ such that HTA (N′). Because M 7→ M′, it follows that
M 7→∗ 1 · N′. From this, we conclude HTA+B (M).

We now prove the additional cases for the FTLR:

Theorem 9 (FTLR with sums). If Γ ` M : A and the closing substitution γ : · → Γ is such
that HTΓ (γ), then HTA (γ̂(M)).

Proof. The proof is by induction on the typing judgment. We consider the following new
typing cases, remarking that there are no introduction rules for void.

• (Sum-L-I). By the induction hypothesis, we know that HTA (γ̂(M )). We must show
that HTA+B (γ̂(1 · M )). But this is immediate from the fact that γ̂(1 · M ) = 1 · γ̂(M )
and the fact that 1 · γ̂(M ) 7→∗ 1 · γ̂(M ).
• (Sum-R-I). Follows by a symmetric argument to the previous case.
• (void-E). By the induction hypothesis, we know that HTvoid (γ̂(M )). But this is a
contradiction, and so we are done.

• (Sum-E). We want to show that HTC (γ̂(caseC M {1 · x ,→ P | 2 · x ,→ Q})), where
γ̂(caseC M {1 · x ,→ P | 2 · x ,→ Q}) = caseC γ̂(M) {1 · x ,→ γ̂(P) | 2 · x ,→ γ̂(Q)}. By
the induction hypothesis, we know the following:


– for any γ : · → Γ satisfying HTΓ (γ), that HTA+B (γ̂(M)),
– for any γA : · → Γ, x : A satisfying HTΓ,x:A (γA), that HTC (γ̂A (P)), and
– for any γB : · → Γ, x : B satisfying HTΓ,x:B (γB), that HTC (γ̂B (Q)).
From HTA+B (γ̂(M)), without loss of generality, we know that γ̂(M) 7→∗ 1 · N for some N
such that HTA (N). Let γA = γ[x 7→ N]. It follows that HTΓ,x:A (γA), so HTC (γ̂A (P)).
But γ̂A (P) = [N/x]γ̂(P) and γ̂(caseC M {1 · x ,→ P | 2 · x ,→ Q}) 7→∗ [N/x]γ̂(P), so
the result follows by head expansion (Lemma 8).

3.2 In an open setting

In the open setting, things are a bit trickier. We begin by extending the grammar of
evaluation contexts and the associated judgments to cope with sums. First, we introduce
the new evaluation context forms:
E ::= · · · | caseC E { } | caseC E {1 · x ,→ P | 2 · x ,→ Q}.
Next, we introduce the typing rules for the new evaluation contexts:

E : (Γ B D) ⇝ (Γ′ B void)
--------------------------------- (E-void)
caseC E { } : (Γ B D) ⇝ (Γ′ B C)

E : (Γ B D) ⇝ (Γ′ B A + B)    Γ′, x : A ` P : C    Γ′, x : B ` Q : C
---------------------------------------------------------------------- (E-Sum)
caseC E {1 · x ,→ P | 2 · x ,→ Q} : (Γ B D) ⇝ (Γ′ B C)
Finally, we add rules to specify when such contexts are normalizing:

E normβ
------------------ (E-void-Norm)
caseC E { } normβ

E normβ    P normβ    Q normβ
---------------------------------------- (E-Sum-Norm)
caseC E {1 · x ,→ P | 2 · x ,→ Q} normβ

We must now turn our attention to what the definition of hereditary normalization for sums
ought to be. Consider the rule (Sum-E) and the induction hypothesis it would give us when
proving the FTLR:

1. HNΓA+B (M), and

2. if HNΓA (N), then HNΓC ([N/x]P), and

3. if HNΓB (N), then HNΓC ([N/x]Q).

Further considering how the proof might go, we see that we would proceed by case analysis
on the reduction γ̂(M) 7→∗ N nfβ: either N is an injection, or it is in head normal form,
that is, of the form E{z} for some variable z. We are then led to the following definition:
HNΓvoid (M) ≜ M normβ

HNΓA+B (M) ≜ 1. M normβ, and
             2. if M 7→∗ 1 · N, then HNΓA (N), and
             3. if M 7→∗ 2 · N, then HNΓB (N).

(Before proceeding, think about where the proof(s) will break down if, instead of requiring
M normβ , we had a definition analogous to HTvoid (M ) and said that HNΓvoid (M ) never held.)
We again begin by completing the proof of the head expansion lemma to deal with sums.


Lemma 10 (Head expansion with sums). If HNΓT (M′) and M 7→ M′, then HNΓT (M).

Proof. The proof is by induction on the structure of T . We add the missing cases:

• void. By HNΓvoid (M′), we have M′ normβ. Because M 7→ M′, it then follows that
M normβ, and so we conclude HNΓvoid (M).

• A + B. By HNΓA+B (M′), we have M′ normβ. Because M 7→ M′, it then follows that
M normβ. Assume, without loss of generality, that M 7→∗ 1 · N. Then because the 7→
relation is deterministic, it must be that M 7→ M′ 7→∗ 1 · N. By the hypothesis
that HNΓA+B (M′), we know that HNΓA (N). From this, we are justified in concluding
that HNΓA+B (M).

We turn our attention to expanding the proof of the workhorse lemma (Lemma 2).
Lemma 11 (Workhorse lemma).

1. If E : (Γ B C) ⇝ (Γ B D) is an evaluation context such that E normβ and Γ ` x : C,
then HNΓD (E{x}).

2. If HNΓA (M), then M normβ.

Proof. The proof is by induction on A. We add the missing cases:

• void. We show the two parts in turn.
1. An induction on E gives us that E{x} normβ, from which we conclude the result.
2. Immediate by definition of HNΓvoid (M).
• A + B. We show the two parts in turn.
1. We proceed by induction on the typing of E.
– If E = · : (Γ B A + B) ⇝ (Γ B A + B), then the result is immediate from the
fact that x nfβ and that neither x 7→∗ 1 · N nor x 7→∗ 2 · N.
– If E = E′ M : (Γ B C) ⇝ (Γ B A + B) because Γ ` M : A1 and E′ :
(Γ B C) ⇝ (Γ B A1 → A + B), then we know by the induction hypothesis that
HNΓA1→A+B (E′{x}). By applying the induction hypothesis to this, we know
that E′{x} normβ. Let N be the normal form of E′{x}, that is to say, let N
be such that E′{x} 7→∗ N nfβ. We know that x must be in head position of
N. By the hypothesis that E = E′ M normβ, we know that M normβ, and so let
N′ be its normal form. Because x is in head position of N, we know N N′ nfβ
and that N N′ is not an injection. But E{x} 7→∗ N N′, so E{x} normβ. From
this, we conclude that HNΓA+B (E{x}).
– If E = caseA+B E′ { } : (Γ B C) ⇝ (Γ B A + B) by (E-void), then we know
E′ : (Γ B C) ⇝ (Γ B void), and by the induction hypothesis, that HNΓvoid (E′{x}).
By definition of HNΓvoid (·), we know that E′{x} normβ. Let N be its normal
form; then E{x} 7→∗ caseA+B N { } nfβ. We remark that this normal form is
not an injection and conclude HNΓA+B (E{x}).
– If E = caseD E′ {1 · y ,→ P | 2 · y ,→ Q} : (Γ B C) ⇝ (Γ B A + B) by
(E-Sum), then for some F and G, we have E′ : (Γ B C) ⇝ (Γ B F + G),
Γ, y : F ` P : A + B, and Γ, y : G ` Q : A + B. By the induction
hypothesis, we know that HNΓF+G (E′{x}), and by definition of HNΓF+G (·),
that E′{x} normβ. Because the variable x occupies the head position of
E′{x}, we know that E′{x} 7→∗ N nfβ and that N is not of the form 1 · N′


or 2 · N′. Observe that E{x} = caseD E′{x} {1 · y ,→ P | 2 · y ,→ Q}, and so
E{x} 7→∗ caseD N {1 · y ,→ P | 2 · y ,→ Q} nfβ. Thus, E{x} normβ and E{x}
does not normalize to an injection. From this, we conclude HNΓA+B (E{x}).
2. Immediate by definition of HNΓA+B (M).

Finally, we are ready to expand our proof of the fundamental theorem (Theorem 3) to handle
sums:
Theorem 12 (FTLR). If Γ ` M : A and HN∆Γ (γ), then HN∆A (γ̂(M)).

Proof. The proof is by induction on the derivation of Γ ` M : A. We add the missing cases:

• (Sum-L-I). We consider Γ ` 1 · M : A + B. The injection γ̂(1 · M) = 1 · γ̂(M) is a normal
form, and so 1 · γ̂(M) normβ. We have γ̂(1 · M) 7→∗ 1 · γ̂(M) reflexively, and we know
HN∆A (γ̂(M)) by the induction hypothesis. From this, we conclude HN∆A+B (γ̂(1 · M)).

• (Sum-R-I). Symmetric to the case (Sum-L-I).

• (void-E). We consider Γ ` caseA M { } : A. By the induction hypothesis, we know
HN∆void (γ̂(M)). By definition of HN∆void (·), this implies that γ̂(M) normβ. But the only
normal forms of type void are of the form E′{x} for some E′ : (∆ B C) ⇝ (∆ B void) and
∆ ` x : C. Let E = caseA E′ { } : (∆ B C) ⇝ (∆ B A). Then γ̂(caseA M { }) 7→∗ E{x} nfβ.
By the workhorse lemma (Lemma 11), we know HN∆A (E{x}), and so we conclude the
result by head expansion (Lemma 10).
• (Sum-E). We now consider Γ ` caseC M {1 · x ,→ P | 2 · x ,→ Q} : C. By the induction
hypothesis, we know:
– for any γ satisfying HN∆Γ (γ), that HN∆A+B (γ̂(M)),
– for any world ∆′ and γ′ : ∆′ → Γ, x : A satisfying HN∆′Γ,x:A (γ′), that HN∆′C (γ̂′(P)), and
– for any world ∆′ and γ′ : ∆′ → Γ, x : B satisfying HN∆′Γ,x:B (γ′), that HN∆′C (γ̂′(Q)).

Let γ such that HN∆Γ (γ) be arbitrary. Then we know by the workhorse lemma
(Lemma 11) that γ̂(M) normβ, and so we fall into one of the following three subcases:

1. γ̂(M) 7→∗ 1 · N. Because HN∆A+B (γ̂(M)), we know that HN∆A (N). It then follows
that γ̂(caseC M {1 · x ,→ P | 2 · x ,→ Q}) 7→∗ [N/x]γ̂(P). Because HN∆A (N) and
HN∆Γ (γ), we have HN∆Γ,x:A (γ′), where γ′ = [γ | x 7→ N]. Then by the induction
hypothesis, we get HN∆C (γ̂′(P)), and we conclude the result by head expansion
(Lemma 10).
2. γ̂(M) 7→∗ 2 · N. This case is symmetric to the previous one.
3. γ̂(M) 7→∗ E′{z} for some evaluation context E′ and variable z : W ∈ ∆. Let
E = caseC E′ {1 · x ,→ γ̂(P) | 2 · x ,→ γ̂(Q)}. If we could show that E normβ,
then by the workhorse lemma (Lemma 11), we would know that HN∆C (E{z}).
From this, we could use the fact that γ̂(caseC M {1 · x ,→ P | 2 · x ,→ Q}) 7→∗ E{z}
and conclude the result by head expansion. To show that E normβ, it is
sufficient to show that γ̂(P) normβ and γ̂(Q) normβ, because we already know
that E′ normβ. Let γ′ = [γ | x 7→ x]; then γ′(y) = γ(y) for all y ∈ dom(γ) and
γ′(x) = x. We claim HN∆,x:AΓ,x:A (γ′). Indeed, for all y ∈ dom(γ′) such that y ≠ x,
we know that HN∆,x:AΓ(y) (γ′(y)) by functoriality and the hypothesis
that HN∆Γ (γ). As for γ′(x), we know that HN∆,x:AA (x) by the workhorse lemma
(Lemma 11) using the evaluation context · : (∆, x : A B A) ⇝ (∆, x : A B A).
So applying the induction hypothesis to γ′ and P, we get that HN∆,x:AC (γ̂′(P)).


By the workhorse lemma, we then get that γ̂′(P) normβ. But γ̂′(P) = γ̂(P), so
γ̂(P) normβ. An analogous argument gives us that γ̂(Q) normβ. This gives us
that E normβ, so we are done.
Remark 13. We are now in a perfect position to show the above results for a positive
formulation of products:

let ⟨⟩ be M in N

let ⟨x, y⟩ be M in Nx,y

4 Natural numbers

We now consider a fragment of Gödel’s System T; we refer the reader to [Harper, 2016,
Chapter 9] for the full system. Gödel proved that normalization of System T is equivalent
to the consistency of Peano Arithmetic. The types and terms are given by the following
grammar:

A ::= nat | A1 → A2
M ::= z | s(M ) | recA {P ; x.Q}(M ).

The statics of this fragment are:

---------------- (Var)
Γ, x : A ` x : A

------------ (nat-z-I)
Γ ` z : nat

Γ ` M : nat
--------------- (nat-s-I)
Γ ` s(M) : nat

Γ ` M : nat    Γ ` P : A    Γ, x : A ` Q : A
--------------------------------------------- (nat-E)
Γ ` recA {P ; x.Q}(M) : A
The dynamics of this fragment are:

-------- (z-Val)
z value

M value
------------ (s-Val)
s(M) value

M 7→ M′
---------------- (s-Step)
s(M) 7→ s(M′)

M 7→ M′
------------------------------------------ (rec-Step)
recA {P ; x.Q}(M) 7→ recA {P ; x.Q}(M′)

------------------------ (rec-z)
recA {P ; x.Q}(z) 7→ P

s(M) value
------------------------------------------------------- (rec-s)
recA {P ; x.Q}(s(M)) 7→ [recA {P ; x.Q}(M)/x]Q
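The recursor can be read off as iteration. A hedged Python sketch (the names rec, plus, double are ours; numerals are modeled by Python ints standing in for z, s(z), s(s(z)), . . .):

```python
# rec{P; x.Q}(s(M)) |-> [rec{P; x.Q}(M)/x]Q unfolds Q once per
# successor, bottoming out at rec{P; x.Q}(z) |-> P.

def rec(p, q, m):
    acc = p                  # the z case
    for _ in range(m):
        acc = q(acc)         # one unfolding of Q per successor
    return acc

def plus(m, n):
    """Addition in System T: plus(m, n) = rec{n; x.s(x)}(m)."""
    return rec(n, lambda x: x + 1, m)

def double(m):
    """Doubling: rec{z; x.s(s(x))}(m)."""
    return rec(0, lambda x: x + 2, m)
```

For example, plus(3, 4) unfolds the successor function three times starting from 4.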

As before, our goal is to show termination in the closed setting and normalization in the
open setting. Explicitly, we want to show:

1. If M : A, then M termβ .
2. If Γ ` M : A, then M normβ .

4.1 In a closed setting

We are going to show a property stronger than item 1, namely hereditary termination.
What we will show is that:

If Γ ` M : A and HTΓ (γ), then HTA (γ̂(M)).


Our key desideratum is that HTnat (M) imply M termβ. Seeing that nat is a positive type,
we do the obvious thing:

HTnat (M) ≜ either M 7→∗ z, or M 7→∗ s(M′) and HTnat (M′).

We should be sceptical of this definition! Indeed, this definition is circular, and not all
circular definitions make sense, so we need to be careful.
A potentially useful way of thinking about the definition of HTA (M ) is that it is recursively
defined along two axes. It has a “vertical” inductive structure on the type A, and when A is
nat, it has a “horizontal” structure at that level.
We can formally define HTnat (·) to be the strongest predicate P on terms such that:

1. if M 7→∗ z, then P(M), and

2. if M 7→∗ s(N) and P(N), then P(M).

Equivalently, one could define the corresponding functional F and define

HTnat = ⋃i≥0 F(i)(∅),

where F(0)(∅) = ∅ and F(k+1)(∅) = F(F(k)(∅)).
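To make this construction concrete, the following sketch (our encoding, not part of the notes) iterates the functional F to its least fixed point over a small, finite universe of closed nat terms:

```python
# Terms are "z", ("s", N), or "omega", a stand-in for a stuck term.
# Every term in this toy universe is already normal, so M |->* z
# holds exactly when M == "z", and similarly for successors.

def f_nat(s, universe):
    """One application of the functional F to the approximation s."""
    out = set()
    for m in universe:
        if m == "z":                                    # M |->* z
            out.add(m)
        elif isinstance(m, tuple) and m[0] == "s" and m[1] in s:
            out.add(m)                                  # M |->* s(N), N in s
    return out

def ht_nat(universe):
    """Least fixed point: the union of F^(i)(empty set) for i >= 0."""
    s = set()
    while (s2 := f_nat(s, universe)) != s:
        s = s2
    return s

two = ("s", ("s", "z"))
universe = {"z", ("s", "z"), two, ("s", "omega"), "omega"}
```

Each iteration admits numerals one successor deeper, while the stuck term (and anything built on it) is never admitted; the "horizontal" structure is exactly this layered admission.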


We remark that this definition is not mathematical induction! We are reasoning about a
program’s computation, not natural numbers.
In showing the result at the beginning of this subsection, you will need to curry out the M
in the rule (nat-E) to get:

Γ ` P : A    Γ, x : A ` Q : A
----------------------------------- (nat-E′)
Γ, z : nat ` recA {P ; x.Q}(z) : A

This case will then be proven using the horizontal induction principle.

4.2 In an open setting

One follows the same development, using the definition

HNΓnat (M) ≜ 1. M normβ, and
             2. if M 7→∗ z, then (true), and
             3. if M 7→∗ s(N), then HNΓnat (N).

References

Kurt Gödel. Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes.
Dialectica, 12(3-4):280–287, 1958. doi: 10.1111/j.1746-8361.1958.tb01464.x.
Robert Harper. Practical Foundations for Programming Languages. Cambridge University
Press, 2nd edition, 2016.
