
Hattendorf’s Theorem

Let
\[
L = v^{T_x} b_{T_x} - \int_0^{T_x} v^t \pi_t \, dt
\]
be the future loss (prospective loss) random variable in a general fully continuous life insurance
model.

Theorem: \(\operatorname{Var}[L] = E\big[\,[v^{T_x}(b_{T_x} - {}_{T_x}V)]^2\,\big]\).
Proof: It follows from the following form of Thiele's differential equation,
\[
v^t \pi_t \, dt = d(v^t\, {}_tV) + v^t (b_t - {}_tV)\,\mu_{x+t}\, dt ,
\]
that
\[
L = v^{T_x} b_{T_x} - \int_0^{T_x} d(v^t\, {}_tV) - \int_0^{T_x} v^t (b_t - {}_tV)\,\mu_{x+t}\, dt ,
\]
or, since \(\int_0^{T_x} d(v^t\, {}_tV) = v^{T_x}\, {}_{T_x}V - {}_0V\),
\[
L - {}_0V = v^{T_x}(b_{T_x} - {}_{T_x}V) - \int_0^{T_x} v^t (b_t - {}_tV)\,\mu_{x+t}\, dt .
\]

Because \({}_0V = E[L]\), the theorem is proved if we can show that
\[
E\Big[\big[v^{T_x}(b_{T_x} - {}_{T_x}V) - \int_0^{T_x} v^t (b_t - {}_tV)\,\mu_{x+t}\, dt\big]^2\Big]
= E\big[\,[v^{T_x}(b_{T_x} - {}_{T_x}V)]^2\,\big],
\]
or that
\[
E\Big[\big[\int_0^{T_x} v^t (b_t - {}_tV)\,\mu_{x+t}\, dt\big]^2\Big]
= 2\,E\Big[v^{T_x}(b_{T_x} - {}_{T_x}V) \times \int_0^{T_x} v^t (b_t - {}_tV)\,\mu_{x+t}\, dt\Big], \tag{*}
\]
which we now show by means of integration by parts. By the law of the unconscious statistician
and because \(f_{T_x}(s)\,ds = -\,d\,{}_sp_x\), the LHS of (*) is
\[
-\int_{s=0}^{s=\infty} \Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]^2 d\,{}_sp_x
\]
\[
= -\Bigg\{\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]^2 \times {}_sp_x\,\Bigg|_{s=0}^{s=\infty}
- \int_{s=0}^{s=\infty} {}_sp_x \times d\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]^2\Bigg\}
\]
\[
= \int_{s=0}^{s=\infty} {}_sp_x \times d\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]^2
\]
(the boundary term vanishes because the inner integral is zero at \(s=0\) and \({}_sp_x \to 0\) as \(s\to\infty\))
\[
= \int_{s=0}^{s=\infty} {}_sp_x \times 2\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]\, d\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big]
\]
\[
= 2\int_{s=0}^{s=\infty} {}_sp_x\,\Big[\int_0^{s} v^t (b_t - {}_tV)\,\mu_{x+t}\,dt\Big] \times v^s (b_s - {}_sV)\,\mu_{x+s}\,ds,
\]
which is the RHS of (*) as \({}_sp_x \times \mu_{x+s} = f_{T_x}(s)\). Q.E.D.
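
As an aside, the theorem is easy to check by simulation. The Python sketch below assumes a
constant force of mortality mu, force of interest delta, unit death benefit, and an arbitrary
level premium rate pi (so that the reserve, and hence {}_0V, is not zero); the numerical values
are illustrative assumptions only, not part of the notes.

import numpy as np

rng = np.random.default_rng(2024)

mu, delta = 0.04, 0.06   # assumed constant force of mortality and force of interest
pi = 0.05                # assumed level premium rate (not the net premium, so 0V != 0)
n = 1_000_000

# Under constant force, Abar_{x+t} = mu/(mu+delta) and abar_{x+t} = 1/(mu+delta),
# so the prospective reserve is constant in t: tV = (mu - pi)/(mu + delta).
V = (mu - pi) / (mu + delta)

T = rng.exponential(scale=1.0 / mu, size=n)   # future lifetime T_x ~ Exponential(mu)
vT = np.exp(-delta * T)                       # v^{T_x}

# Future loss: unit death benefit at death minus premiums paid continuously at rate pi
L = vT - pi * (1.0 - vT) / delta

lhs = L.var()                         # Var[L]
rhs = np.mean((vT * (1.0 - V)) ** 2)  # E[(v^{T_x}(b_{T_x} - T_xV))^2] with b_{T_x} = 1

print(lhs, rhs)   # the two estimates should agree up to Monte Carlo error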


Because we do not need to assume {}_0V to be zero, we obtain the following corollary.
The variance of the time-t future loss (prospective loss) r.v.,
\[
{}_tL = v^{T_{x+t}} b_{t+T_{x+t}} - \int_0^{T_{x+t}} v^s \pi_{t+s}\, ds ,
\]
is
\[
\operatorname{Var}[\,{}_tL\,] = E\Big[\big(v^{T_{x+t}}(b_{t+T_{x+t}} - {}_{t+T_{x+t}}V)\big)^2\Big].
\]

Applications: AM Exercise 8.24 and Exercise 6.6. For Exercise 6.6, you first note that if \(\mu_{x+\tau} = \mu\) for
all \(\tau > 0\), then \({}_tV(\bar{A}_x) = 0\) for all t. Hence, by Hattendorf's Theorem,
\[
\operatorname{Var}[L] = E\big[\,[v^{T_x}(1 - 0)]^2\,\big] = {}^2\bar{A}_x = \frac{\mu}{\mu + 2\delta}.
\]
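
As a quick symbolic cross-check (an illustration added here, not part of the exercise), the direct
formula Var[L] = (1 + P/delta)^2( {}^2Abar_x - Abar_x^2 ) with the net premium rate P = mu reduces
to mu/(mu + 2 delta) under the constant-force assumption:

import sympy as sp

mu, delta = sp.symbols('mu delta', positive=True)

A = mu / (mu + delta)        # Abar_x under constant force mu
A2 = mu / (mu + 2 * delta)   # second-moment insurance ^2Abar_x
P = mu                       # net level premium rate Abar_x / abar_x = mu

# Direct variance of L = v^T - P*abar_{T|} = (1 + P/delta) v^T - P/delta
var_L = (1 + P / delta) ** 2 * (A2 - A ** 2)

# Prints 0, confirming Var[L] = ^2Abar_x = mu / (mu + 2*delta)
print(sp.simplify(var_L - A2))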

Remarks (i) Hattendorf's Theorem for the fully discrete case has an extra factor:
\[
\operatorname{Var}\Big[v^{K_x+1} b_{K_x+1} - \sum_{j=0}^{K_x} v^j \pi_j\Big]
= E\Big[\big(v^{K_x+1}(b_{K_x+1} - {}_{K_x+1}V)\big)^2 \times p_{x+K_x}\Big].
\]
Again, it is not necessary to assume {}_0V to be zero. You may want to try to derive the formula by
summation by parts; it is not easy.
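
Although the derivation is not easy, the formula is easy to check numerically. The Python sketch
below uses an assumed three-year mortality table, unit death benefit, and an arbitrary level
premium (all illustrative assumptions) and compares both sides by enumerating over K_x:

import numpy as np

v = 0.95                         # assumed discount factor
q = np.array([0.10, 0.20, 1.0])  # assumed q_x, q_{x+1}, q_{x+2}; death is certain within 3 years
p = 1.0 - q
b = np.array([1.0, 1.0, 1.0])    # death benefit paid at the end of the year of death
pi = 0.30                        # assumed level annual premium (not necessarily the net premium)

n = len(q)

# Prospective reserves by backward recursion: kV = v*(q_{x+k}*b_{k+1} + p_{x+k}*(k+1)V) - pi_k
V = np.zeros(n + 1)
for k in range(n - 1, -1, -1):
    V[k] = v * (q[k] * b[k] + p[k] * V[k + 1]) - pi

# Distribution of K_x and the loss L for each value K_x = k
prob_K = np.array([np.prod(p[:k]) * q[k] for k in range(n)])
L = np.array([v ** (k + 1) * b[k] - pi * sum(v ** j for j in range(k + 1)) for k in range(n)])

EL = np.sum(prob_K * L)                  # equals 0V = V[0]
var_L = np.sum(prob_K * (L - EL) ** 2)   # left-hand side: Var[L]

# Right-hand side: E[(v^{K+1}(b_{K+1} - (K+1)V))^2 * p_{x+K}]
rhs = np.sum(prob_K * (v ** np.arange(1, n + 1) * (b - V[1:])) ** 2 * p)

print(var_L, rhs)   # the two numbers coincide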

(ii) Consider the last part of Exercise 6.12 of AM. Here, K_x is a geometric random variable.
Then p_{x+k} = r for all integers k, and {}_jV_x = 0 for all integers j. Hence,
\[
\operatorname{Var}[L] = E\big[\,[v^{K_x+1}(1 - 0)]^2 \times r\,\big] = {}^2A_x \times r ,
\]
which immediately gives the solution to HW Set 6, Problem 1(ii).
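
For a concrete check of this special case (with assumed values v = 0.95 and r = 0.9), the
enumeration below truncates the geometric distribution at a large horizon, uses the net annual
premium v(1 - r), and compares Var[L] against {}^2A_x times r:

import numpy as np

v, r = 0.95, 0.90    # assumed discount factor and common survival probability p_{x+k} = r
N = 2000             # truncation horizon; the geometric tail beyond N is negligible

k = np.arange(N)
prob_K = (1.0 - r) * r ** k             # P(K_x = k) = r^k (1 - r)

P = v * (1.0 - r)                       # net annual premium: A_x / adue_x = v(1 - r)
L = v ** (k + 1) - P * (1.0 - v ** (k + 1)) / (1.0 - v)   # v^{K+1} - P * adue_{K+1|}

EL = np.sum(prob_K * L)                 # ~ 0, since the premium is the net premium
var_L = np.sum(prob_K * (L - EL) ** 2)

A2 = v ** 2 * (1.0 - r) / (1.0 - v ** 2 * r)   # ^2A_x for geometric K_x
print(var_L, A2 * r)                    # the two numbers agree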

(iii) Professor Hans U. Gerber would illustrate the decomposition,
\[
v^t \pi_t \, dt = d(v^t\, {}_tV) + v^t (b_t - {}_tV)\,\mu_{x+t}\, dt ,
\]
as follows:
