
Subject CS2
Revision Notes
For the 2019 exams

Markov jump processes


Booklet 3

covering

Chapter 4 Time-homogeneous Markov jump processes


Chapter 5 Time-inhomogeneous Markov jump processes

The Actuarial Education Company



CONTENTS

Contents

Links to the Course Notes and Syllabus
Overview
Core Reading
Past Exam Questions
Solutions to Past Exam Questions
Factsheet

Copyright agreement

All of this material is copyright. The copyright belongs to Institute and


Faculty Education Ltd, a subsidiary of the Institute and Faculty of Actuaries.
The material is sold to you for your own exclusive use. You may not hire
out, lend, give, sell, transmit electronically, store electronically or photocopy
any part of it. You must take care of your material to ensure it is not used or
copied by anyone at any time.

Legal action will be taken if these terms are infringed. In addition, we may
seek to take disciplinary action through the profession or through your
employer.

These conditions remain in force after you have finished using the course.


LINKS TO THE COURSE NOTES AND SYLLABUS

Material covered in this booklet

Chapter 4 Time-homogeneous Markov jump processes


Chapter 5 Time-inhomogeneous Markov jump processes

These chapter numbers refer to the 2019 edition of the ActEd course notes.

Syllabus objectives covered in this booklet

The numbering of the syllabus items is the same as that used by the Institute
and Faculty of Actuaries.

3.3 Define and apply a Markov process.

3.3.1 State the essential features of a Markov process model.


3.3.2 Define a Poisson process, derive the distribution of the
number of events in a given time interval, derive the
distribution of inter-event times, and apply these results.
3.3.3 Derive the Kolmogorov equations for a Markov process
with time independent and time/age dependent transition
intensities.
3.3.4 Solve the Kolmogorov equations in simple cases.
3.3.5 Describe simple survival models, sickness models and
marriage models in terms of Markov processes and
describe other simple applications.
3.3.6 State the Kolmogorov equations for a model where the
transition intensities depend not only on age/time, but also
on the duration of stay in one or more states.
3.3.7 Describe sickness and marriage models in terms of
duration dependent Markov processes and describe other
simple applications.
3.3.8 Demonstrate how Markov jump processes can be used as
a tool for modelling and how they can be simulated.


4.3 Derive maximum likelihood estimators for transition intensities.

4.3.1 Describe an observational plan in respect of a finite


number of individuals observed during a finite period of
time, and define the resulting statistics, including the
waiting times.
4.3.2 Derive the likelihood function for constant transition
intensities in a Markov model of transfers between states
given the statistics in 4.3.1.
4.3.3 Derive maximum likelihood estimators for the transition
intensities in 4.3.2 and state their asymptotic joint
distribution.


OVERVIEW

This booklet covers Syllabus objectives 3.3 and 4.3.1-4.3.3, relating to


Markov jump processes.

A Markov jump process is a stochastic process with a continuous time set


and a discrete state space that satisfies the Markov property. The simplest
type of Markov jump process is the Poisson process, and that is the first
model that is covered in this part of the course.

We then go on to discuss Markov jump processes where the transition rates


do not vary over time. Such processes are known as time-homogeneous.

We also consider time-inhomogeneous Markov jump processes, where the


transition rates vary over time.

Examples of Markov jump processes include the two-state (alive-dead)


model, the health-sickness-death model and the marriage model.

We finally consider how to estimate the parameters of a Markov jump


process, and how such a process can be simulated.

Exam questions on this topic can ask you to calculate transition probabilities
or derive expressions for such probabilities in terms of forces of transition.
To do this, we might write down and solve differential equations or integral
equations. Exam questions may be based around examples that are not
described in the Core Reading, eg machines breaking down or spaces
available in a car park. Try not to be put off by this – the same principles
apply in every case.


CORE READING

All of the Core Reading for the topics covered in this booklet is contained in
this section.

We have inserted paragraph numbers in some places, such as 1, 2, 3 …, to


help break up the text. These numbers do not form part of the Core
Reading.

The text given in Arial Bold font is Core Reading.

The text given in Arial Bold Italic font is additional Core Reading that is not
directly related to the topic being discussed.
____________

Chapter 4 – Time-homogeneous Markov jump processes

Notation

Different authors tend to use different notation for the same quantities,
and the Markov model is an example of this. Actuaries have often use
notations derived from the standardised International Actuarial
Notation, in which the ‘transition rate’ is the force of mortality m x , and
the corresponding probabilities are the life table probabilities t px and
t qx . Moreover the index x is generally understood to indicate age
(eg age now, or age at policy inception) and the index t indicates
duration since age x . Probabilists and statisticians tend to use
different notations.

The non-homogeneous Markov model offers particularly rich, and


potentially confusing, opportunities to invent different notations for the
same quantities. To try to limit any such confusion, we make the
following remarks.

1. We have written $p_{ij}(s,t)$ to mean the probability of the process
being in state $j$ at time $t$, conditional on being in state $i$ at
time $s \le t$.


The traditional actuarial notation would reserve the symbol $t$ for
duration since time $s$, in which case the above probability would
be expressed $p_{ij}(s, s+t)$. Just as likely, the life table symbol ${}_tp_s$
would be adapted, so that $p_{ij}(s, s+t)$ would be written as ${}_tp_s^{ij}$.
Other variants, such as ${}_tp_{ij}(s)$, may be encountered.

2. We have written $\mu_{ij}(s)$ to mean the transition rate from state $i$ to
state $j$ at time $s$. Following the actuarial tradition, the time (or
age) may be indicated by a subscript, so that the same rate may be
written $\mu_s^{ij}$.

While a standard international actuarial notation was adopted for the


life table and its derived quantities, the same is not true for the richer
models needed to represent insurance contracts that depend on more
than just being alive or dead. The actuarial reader must always be
prepared to assimilate the notation that each particular author decides
to use.

The Poisson process

The Poisson process is the simplest example of a Markov jump


process in continuous time. In studying the Poisson process we shall
encounter features which are of general applicability in this chapter.
____________

1 The standard time-homogeneous Poisson process is a counting
process in continuous time, $\{N_t, t \ge 0\}$, where $N_t$ records the number
of occurrences of some type of event within the time interval from 0
to $t$. The events of interest occur singly and may occur at any time.

The probability that an event occurs during the short time interval from
time $t$ to time $t+h$ is approximately equal to $\lambda h$ for small $h$; the
parameter $\lambda$ is called the rate of the Poisson process.

The Poisson process is very commonly used to model the occurrence


of unpredictable incidents, such as car accidents or arrival of claims at
an office.
____________


The above definition should be made more precise if it is to be used for


calculations.
____________

2 Formally, an integer-valued process $\{N_t, t \ge 0\}$, with filtration
$\{F_t, t \ge 0\}$, is a Poisson process if:

$$P[N_{t+h} - N_t = 1 \mid F_t] = \lambda h + o(h)$$

$$P[N_{t+h} - N_t = 0 \mid F_t] = 1 - \lambda h + o(h) \qquad (4.1)$$

$$P[N_{t+h} - N_t \ne 0, 1 \mid F_t] = o(h)$$

where the statement that $f(h) = o(h)$ as $h \to 0$ means $\lim_{h \to 0} \dfrac{f(h)}{h} = 0$.
____________

3 As may be seen from the definition, the increment $N_{t+h} - N_t$ of the
Poisson process is independent of past values of the process and has
a distribution which does not depend on $t$. It therefore follows that the
Poisson process is a process with stationary, independent increments
and, in addition, satisfies the Markov property.
____________

It is far from obvious that the process defined above coincides with the
Poisson process characterised in Booklet 1 as having independent
stationary Poisson-distributed increments. That is one of the
properties that we shall prove.
____________

4 $N_t$ is a Poisson random variable with mean $\lambda t$.

More generally, $N_{t+s} - N_s$ is a Poisson random variable with mean $\lambda t$,
independent of anything that has occurred before time $s$.
____________


5 Proof

Define $p_j(t) = P(N_t = j)$, the probability that there have been
exactly $j$ events by time $t$. The proof will be complete if we can verify
that, for each $j \ge 0$:

$$p_j(t) = \frac{e^{-\lambda t}(\lambda t)^j}{j!} \qquad (4.2)$$

For any $j > 0$, and for small positive $h$:

$$\begin{aligned} p_j(t+h) &= P(N_{t+h} = j) \\ &= P(N_t = j \text{ and } N_{t+h} = N_t) + P(N_t = j-1 \text{ and } N_{t+h} = N_t + 1) + o(h) \\ &= p_j(t)(1 - \lambda h) + p_{j-1}(t)\,\lambda h + o(h) \end{aligned}$$

Rearranging this equation, and letting $h \to 0$, we obtain, again
for $j > 0$:

$$\frac{dp_j}{dt} = -\lambda p_j(t) + \lambda p_{j-1}(t) \qquad (4.3)$$

with initial condition $p_j(0) = 0$. The same analysis yields, in the
case $j = 0$:

$$\frac{dp_0}{dt} = -\lambda p_0(t) \qquad (4.4)$$

with $p_0(0) = 1$.

It is now straightforward to verify that the suggested solution (4.2)


satisfies both the differential equations (4.3) and (4.4) as well as the
initial conditions.
____________


6 In view of the fact that the increments of $N$ are stationary and are
independent of the past, this result may be generalised to a statement
that $N_{t+s} - N_s$ is a Poisson random variable with mean $\lambda t$,
independent of anything that has occurred before time $s$.
____________

Inter-event times

7 Since the Poisson process $N_t$ changes only by unit upward jumps, its
sample paths are fully characterised by the times at which the jumps
take place. Denote by $T_0, T_1, T_2, \ldots$ the successive inter-event times (or
holding times), a sequence of random variables.

[Figure: a sample path of the process, starting at 0 and jumping up by 1 at the end of each of the successive holding times $T_0, T_1, T_2, \ldots$]

Note that we choose (by convention) the sample paths of $X_t$ to be
right-continuous so that $X_{T_0} = 1$, $X_{T_0 + T_1} = 2$, … .
____________


8 $T_0, T_1, T_2, \ldots$ is a sequence of independent exponential random
variables, each with parameter $\lambda$.
____________

Proof

9 $P(T_0 > t)$ is the probability that no events occur between time 0 and
time $t$, which is also equal to $P(N_t = 0) = p_0(t) = e^{-\lambda t}$.
____________

10 Now the distribution function of $T_0$ is $F(t) = P(T_0 \le t) = 1 - e^{-\lambda t}$, $t > 0$,
implying that $T_0$ is exponentially distributed.
____________

11 Consider now the conditional distribution of $T_1$ given the value of $T_0$.

$$\begin{aligned} P[T_1 > t \mid T_0 = s] &= P[N_{t+s} = 1 \mid T_0 = s] \\ &= P[N_{t+s} - N_s = 0 \mid T_0 = s] \\ &= P[N_{t+s} - N_s = 0] \\ &= p_0(t) \\ &= e^{-\lambda t} \end{aligned}$$

where the third equality reflects the independence of the increment
$N_{t+s} - N_s$ from the past of the process (up to and including time $s$).
____________

12 The above calculation proves two results at once: $T_1$ is independent
of $T_0$ and has the same exponential distribution. The calculation can
be repeated for $T_2, T_3, \ldots$.
____________
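
The construction above lends itself to a short simulation. The following R sketch (R being the package language mentioned later in this booklet for multiple state models) generates a Poisson process path from independent exponential inter-event times; the rate and time horizon are illustrative values, not taken from the Core Reading.

```r
# Simulate one path of a Poisson process with rate lambda on [0, t.max]
# by generating independent Exp(lambda) inter-event (holding) times.
simulate.poisson.path <- function(lambda, t.max) {
  jump.times <- numeric(0)
  t <- 0
  repeat {
    t <- t + rexp(1, rate = lambda)     # next holding time
    if (t > t.max) break
    jump.times <- c(jump.times, t)
  }
  jump.times                            # N_t = number of jump times <= t
}

set.seed(1)
path <- simulate.poisson.path(lambda = 0.5, t.max = 20)
length(path)             # observed value of N_20
mean(diff(c(0, path)))   # sample mean inter-event time, roughly 1 / lambda
```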


Time-homogeneous Markov jump processes

13 A continuous-time Markov process $\{X_t, t \ge 0\}$ with a discrete (ie finite
or countable) state space $S$ is called a Markov jump process.
____________

14 In this section consideration will be given to the time-homogeneous
case, where probabilities $P(X_t = j \mid X_s = i)$ depend only on the length
of the time interval, $t - s$.
____________

15 The transition probabilities of the Markov jump process:

$$p_{ij}(t) = P(X_t = j \mid X_0 = i)$$

obey the Chapman-Kolmogorov equations:

$$p_{ij}(t+s) = \sum_{k \in S} p_{ik}(s)\, p_{kj}(t) \quad \text{for all } s, t > 0 \qquad (4.5)$$
____________

The derivation of the Chapman-Kolmogorov equations in continuous


time is identical to the derivation in discrete time. See Booklet 2.
____________

16 Denoting by $P(t)$ the matrix with entries $p_{ij}(t)$, known as the
transition matrix, Equation (4.5) reads:

$$P(t+s) = P(s)\,P(t) \quad \text{for all } s, t > 0$$


____________


17 If we know the transition matrix $P(t)$ and the initial probability
distribution $q_i = P(X_0 = i)$, we can find general probabilities involving
the process $X_t$ by using the Markov property.

For instance when $0 < t_1 < t_2 < \cdots < t_n$:

$$P[X_0 = i, X_{t_1} = j_1, X_{t_2} = j_2, \ldots, X_{t_n} = j_n] = q_i\, p_{ij_1}(t_1)\, p_{j_1 j_2}(t_2 - t_1) \cdots p_{j_{n-1} j_n}(t_n - t_{n-1})$$

and:

$$P[X_{t_1} = j_1, X_{t_2} = j_2, \ldots, X_{t_n} = j_n] = \sum_{i \in S} q_i\, p_{ij_1}(t_1)\, p_{j_1 j_2}(t_2 - t_1) \cdots p_{j_{n-1} j_n}(t_n - t_{n-1})$$
____________

We will assume that the functions pij (t ) are continuously


differentiable. This is a large assumption to make; indeed, the full
theory of Markov jump processes permits transition probabilities that
do not satisfy this requirement. Such processes are called irregular.
They are of little use in practical modelling, however, and the loss
involved in restricting our attention to regular Markov processes is not
significant for the purposes of this course.
____________

18 Noting that:

$$p_{ij}(0) = \delta_{ij} = \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j \end{cases} \qquad (4.6)$$

the assumption of differentiability implies the existence of the
following quantities:

$$\mu_{ij} = \frac{d}{dt}\, p_{ij}(t)\bigg|_{t=0} = \lim_{h \to 0} \frac{p_{ij}(h) - \delta_{ij}}{h}$$
____________


19 Equivalently, the following relations hold as $h \to 0$ ($h > 0$):

$$p_{ij}(h) = \begin{cases} h\,\mu_{ij} + o(h) & \text{if } i \ne j \\ 1 + h\,\mu_{ii} + o(h) & \text{if } i = j \end{cases} \qquad (4.7)$$
____________

20 The interpretation of the first line of (4.7) is simply that the probability
of a transition from $i$ to $j$ during any short time interval $[s, s+h]$ is
proportional to $h$; hence the name transition rate given to $\mu_{ij}$.
____________

21 Note finally that, as a result of (4.7), $\mu_{ij} \ge 0$ for $i \ne j$, but $\mu_{ii} \le 0$. In
fact differentiating the identity $\sum_{j \in S} p_{ij}(t) = 1$ with respect to $t$ at $t = 0$
yields:

$$\mu_{ii} = -\sum_{j \ne i} \mu_{ij}$$
____________

22 Hence each row of the matrix A has zero sum.


____________

Kolmogorov’s forward differential equations

23 Transition rates are of fundamental importance in that they
characterise fully the distribution of Markov jump processes. In order
to see this, substitute $t = h$ and $s = t$ in (4.5):

$$p_{ij}(t+h) = \sum_{k \in S} p_{ik}(t)\, p_{kj}(h) = p_{ij}(t) + h \sum_{k \in S} p_{ik}(t)\,\mu_{kj} + o(h)$$
____________

24 This leads to the differential equation:

$$\frac{d}{dt}\, p_{ij}(t) = \sum_{k \in S} p_{ik}(t)\,\mu_{kj} \quad \text{for all } i, j \qquad (4.8)$$
____________


25 These are known as Kolmogorov’s forward equations; they can be
written in compact form as:

$$\frac{d}{dt} P(t) = P(t)\,A$$

where $A$ is the matrix with entries $\mu_{kj}$.


____________

Recall that $A$ is often called the generator matrix of the Markov jump
process.

If the state space $S$ is finite, (4.8) gives for each fixed $i$ a finite linear
system of differential equations (in fact the index $i$ enters only through
the initial condition (4.6)). Accordingly, for given transition rates $\mu_{ij}$,
Equation (4.8) has a unique solution compatible with (4.6). For this
reason Markov models are normally formulated simply by specifying
their transition rates $\mu_{ij}$.
____________

Kolmogorov’s backward differential equations

26 Substituting $s = h$ in (4.5) and proceeding as before, we obtain a
different set of equations:

$$\frac{d}{dt} P(t) = A\,P(t)$$

known as Kolmogorov’s backward equations.


____________

Under ‘normal’ circumstances, the forward and the backward systems
are equivalent; this is so in particular when the transition rates are
bounded:

$$\sup_{i,j} \mu_{ij} < \infty$$

However, when this condition fails, the backward system is of more
fundamental importance.


The forward equations are more useful in numerical work for actuarial
applications because we usually have an initial condition, such as
knowing that a policyholder is healthy when a policy is sold, so we
want equations that we can solve forwards in time from that starting
point.
____________

The Poisson process revisited

27 Consider the Markov jump process with state space $S = \{0, 1, 2, \ldots\}$ and
transition rates:

$$\mu_{ij} = \begin{cases} -\lambda & \text{if } j = i \\ \lambda & \text{if } j = i+1 \\ 0 & \text{otherwise} \end{cases}$$

The diagram representation is:

[Transition diagram: the states $0, 1, 2, 3, \ldots$ in a line, with a transition from each state $i$ to state $i+1$ at rate $\lambda$.]

____________

28 The generator matrix $A$ in Kolmogorov's equations is:

$$A = \begin{pmatrix} -\lambda & \lambda & & & 0 \\ & -\lambda & \lambda & & \\ & & -\lambda & \lambda & \\ & & & \ddots & \ddots \\ 0 & & & & \ddots \end{pmatrix}$$
____________


29 This leads to the forward equations:

$$p'_{i0}(t) = -\lambda\, p_{i0}(t)$$

$$p'_{ij}(t) = \lambda\, p_{i,j-1}(t) - \lambda\, p_{ij}(t), \qquad j > 0$$

essentially identical to (4.3) and (4.4).


____________

30 It is interesting also to consider the backward equations:

$$p'_{ij}(t) = -\lambda\, p_{ij}(t) + \lambda\, p_{i+1,j}(t)$$

which of course have the same solution as the forward equations


despite looking dissimilar.
____________

Holding times

31 The exponential character of the holding times of the Poisson process


is no accident: the memoryless property:

P [T > t + u | T > t ] = P [T > u ]

which characterises exponentially distributed random variables is a


necessary requirement for the holding times of time-homogeneous
Markov processes.
____________

32 Consider the first holding time $T_0 = \inf\{t : X_t \ne X_0\}$. The first holding
time of a time-homogeneous Markov jump process with transition
rates $\mu_{ij}$ is exponentially distributed with parameter:

$$\lambda_i = -\mu_{ii} = \sum_{j \ne i} \mu_{ij}$$

In other words:

$$P[T_0 > t \mid X_0 = i] = e^{-\lambda_i t}$$
____________

The proof of this result is beyond the syllabus.

In the Poisson process the timing of the jumps is everything. However,


in the more general setting of a Markov jump process, we must also
characterise the state to which the process jumps; this is remarkably
simple.
____________

33 The jump takes place from $X_0 = i$ to $X_{T_0} = j$ with probability
proportional to the transition rate $\mu_{ij}$ and moreover the destination of
the jump $X_{T_0}$ is independent of the holding time $T_0$.
____________

34 In order to see this consider for $j \ne i$:

$$\begin{aligned} P[X_{t+h} = j,\; t < T_0 \le t+h \mid X_0 = i] &= P[X_{t+h} = j,\; T_0 > t \mid X_0 = i] \\ &= P[X_{t+h} = j \mid X_0 = i,\; T_0 > t]\; P[T_0 > t \mid X_0 = i] \\ &= P[X_{t+h} = j \mid X_s = i,\; 0 \le s \le t]\; e^{-\lambda_i t} \\ &= p_{ij}(h)\, e^{-\lambda_i t} \end{aligned}$$

Now, divide by $h$ and let $h \to 0$: the joint probability
distribution/density of $X_{T_0}$ and $T_0$ is, conditionally on $X_0 = i$, equal to:

$$\mu_{ij}\, e^{-\lambda_i t}$$

So it is the product of the density of the holding time $\lambda_i e^{-\lambda_i t}$ and
of $\dfrac{\mu_{ij}}{\lambda_i}$.


This proves two results at once: the probability that the jump out of $i$
is to state $j$ is:

$$P[X_{T_0} = j \mid X_0 = i] = \frac{\mu_{ij}}{\lambda_i} \qquad (j \ne i)$$

and moreover $X_{T_0}$ is independent of $T_0$.


____________

35 As a result of the Markov property, the pattern is identical for
successive jumps: after some state $j$ is entered, the process stays
there for an exponentially distributed time with parameter $\lambda_j$. It then
jumps to state $k$ with probability $\dfrac{\mu_{jk}}{\lambda_j}$.
____________

36 Note that the mean holding time of state $j$ is $\dfrac{1}{\lambda_j}$; this is an important
thing to remember when assigning numerical values to the transition
rates.
____________

The jump chain

37 If a Markov jump process is examined only at the times of its
transitions, the resulting process, denoted $\{\hat X_n : n = 0, 1, \ldots\}$, where $\hat X_0$
is the initial state, and for $n \ge 1$:

$$\hat X_n = X_{T_0 + T_1 + \cdots + T_{n-1}}$$

is called the jump chain associated with $X$.


____________


38 The foregoing analysis shows that $\hat X_n$ is independent of
$T_0 + T_1 + \cdots + T_{n-1}$, ie of the time of the $n$th transition, and is also
independent of anything that happened prior to the $(n-1)$th transition:
in fact, the distribution of $\hat X_n$ depends only on $\hat X_{n-1}$. In other words,
the jump chain possesses the Markov property and is a Markov chain
in its own right.
____________

39 The only way in which the jump chain differs from a standard Markov
chain is when the jump process {X t , t ≥ 0} encounters an absorbing
state. From that time on it makes no further transitions, implying that
time stops for the jump chain. In order to deal with the jump chain
entirely within the framework of Markov chains it is permissible to treat
the absorbing state in the same way as for a Markov chain, so that
transitions continue to occur but the chain remains in the same state
after the transition.
____________

Questions dealing solely with the sequence of states visited by the


Markov jump process, such as ‘What is the probability that it visits
state i0 before it reaches the absorbing state?’ or ‘Is state j visited
infinitely often?’, can be answered equally well with reference to the
jump chain, since the two processes take identical paths through the
state space. The theory of Markov chains can therefore be employed
to arrive at solutions to such questions. Questions dealing with the
time taken to visit a state, however, are likely to have very different
answers in the two cases and are only accessible using the theory of
Markov jump processes.
____________


Solutions of the Kolmogorov equations in elementary cases

40 In certain elementary cases, the solutions of the Kolmogorov equations


can simply be written down, and the two-state model is often an
intuitive guide. For example, consider the two-decrement model in
which the transition rates are constant:

[Transition diagram: state 0 = ACTIVE, with a transition to state 1 = DEAD at rate $\mu_{01}$ and a transition to state 2 = RETIRED at rate $\mu_{02}$.]

We have:

$$p_{01}(x, x+t) = \frac{\mu_{01}}{\mu_{01} + \mu_{02}}\left[1 - e^{-(\mu_{01}+\mu_{02})t}\right]$$

$$p_{02}(x, x+t) = \frac{\mu_{02}}{\mu_{01} + \mu_{02}}\left[1 - e^{-(\mu_{01}+\mu_{02})t}\right]$$

These probability formulae are easily interpreted – the term in brackets


is the probability of having left the active state, and the fraction gives
the conditional probability of each decrement having occurred, given
that one of them has occurred.
____________
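
As a quick numerical check of these formulae, here is a minimal R sketch that evaluates $p_{01}$ and $p_{02}$ for illustrative constant rates ($\mu_{01} = 0.01$ and $\mu_{02} = 0.03$ are invented values):

```r
# Occupancy probabilities for the constant-rate two-decrement model
p.decrement <- function(mu01, mu02, t) {
  total <- mu01 + mu02
  left  <- 1 - exp(-total * t)          # probability of having left the active state
  c(p00 = exp(-total * t),
    p01 = mu01 / total * left,          # dead
    p02 = mu02 / total * left)          # retired
}

p.decrement(mu01 = 0.01, mu02 = 0.03, t = 10)   # the three probabilities sum to 1
```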

41 However, in practice we do not always work with such simple models,


or with constant transition intensities, and it is not possible to rely on
solving the equations explicitly. Fortunately this does not matter; the
Kolmogorov equations are quite simple to solve using ordinary
numerical techniques.
____________
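
To illustrate the kind of ordinary numerical technique meant here, the sketch below applies a simple Euler step to the forward equations $P'(t) = P(t)A$ for the two-decrement model above, with the same invented rates; a production implementation would more likely use an ODE solver such as the one in the R package deSolve.

```r
# Euler solution of the Kolmogorov forward equations P'(t) = P(t) %*% A
solve.forward <- function(A, t, h = 0.001) {
  P <- diag(nrow(A))                    # initial condition P(0) = I
  for (i in seq_len(round(t / h))) {
    P <- P + h * (P %*% A)
  }
  P
}

# Generator for the two-decrement model with mu01 = 0.01, mu02 = 0.03
A <- matrix(c(-0.04, 0.01, 0.03,
               0,    0,    0,
               0,    0,    0), nrow = 3, byrow = TRUE)

solve.forward(A, t = 10)   # row 1 should agree with the closed-form probabilities
```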


The maximum likelihood estimator in the general model

42 As we saw earlier in this chapter, the two-state model can be extended


to any number of states, with arbitrary transitions between them,
including increments and repeated transitions. Consider again the
illness-death model, which has three states: healthy (H), sick (S) and
dead (D):

[Transition diagram: H (Healthy) and S (Sick), with transitions H $\to$ S at rate $\sigma(t)$ and S $\to$ H at rate $\rho(t)$; transitions H $\to$ D (Dead) at rate $\mu(t)$ and S $\to$ D at rate $\nu(t)$.]

The observations in respect of a single life are now:

(a) the times between successive transitions; and

(b) the numbers of transitions of each type.

If the transition intensities are constant, each spell of length $t$ in the
able or ill states contributes a factor of the form $e^{-(\mu+\sigma)t}$ or $e^{-(\nu+\rho)t}$
respectively to the likelihood, so it suffices to record the total waiting
time spent in each state.
____________


43 Define:

$V_i$ = waiting time of the $i$th life in the healthy state
$W_i$ = waiting time of the $i$th life in the sick state
$S_i$ = number of transitions healthy $\to$ sick by the $i$th life
$R_i$ = number of transitions sick $\to$ healthy by the $i$th life
$D_i$ = number of transitions healthy $\to$ dead by the $i$th life
$U_i$ = number of transitions sick $\to$ dead by the $i$th life

We also need to define totals $V = \sum_{i=1}^{N} V_i$ (and so on).
____________

44 Using lower case symbols for the observed samples as usual, it is
easily shown that the likelihood for the four parameters $\mu$, $\nu$, $\sigma$, $\rho$,
given the data, is proportional to:

$$L(\mu, \nu, \sigma, \rho) = e^{-(\mu+\sigma)v}\, e^{-(\nu+\rho)w}\, \mu^d\, \nu^u\, \sigma^s\, \rho^r$$
____________

45 The likelihood factorises into functions of each parameter of the
form $e^{-\mu v}\mu^d$. The maximum likelihood estimators are:

$$\hat\mu = \frac{D}{V}, \qquad \hat\nu = \frac{U}{W}, \qquad \hat\sigma = \frac{S}{V}, \qquad \hat\rho = \frac{R}{W}$$
____________
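
A minimal R sketch of these estimators, together with an approximate 95% confidence interval based on the asymptotic distribution given in Paragraph 48; all of the observed totals below are invented for the illustration.

```r
# Invented observed totals
v <- 300   # total waiting time in the healthy state
w <- 120   # total waiting time in the sick state
d <- 12    # healthy -> dead transitions
u <- 25    # sick -> dead transitions
s <- 30    # healthy -> sick transitions
r <- 18    # sick -> healthy transitions

mu.hat    <- d / v
nu.hat    <- u / w
sigma.hat <- s / v
rho.hat   <- r / w

# Approximate 95% confidence interval for mu, using the asymptotic
# Normal(mu, mu / E[V]) distribution with E[V] estimated by v
mu.hat + c(-1.96, 1.96) * sqrt(mu.hat / v)
```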

46 The asymptotic properties of these estimators follow from results
similar to equations (3.2) and (3.3) in Booklet 2, and the fact that the
random variables $(D_i - \mu V_i)$, $(U_i - \nu W_i)$, $(S_i - \sigma V_i)$, $(R_i - \rho W_i)$ are
uncorrelated, that is:

$$E[(D_i - \mu V_i)(U_i - \nu W_i)] = 0 \quad \text{etc}$$


____________


47 The estimators are not independent: $D_i$ and $U_i$ are both 0 or 1,
but $D_i U_i \ne 1$, while (assuming that the $i$th life starts in the able
state) $S_i = R_i$ or $R_i + 1$. The estimators are, however, asymptotically
independent.
____________

48 The same argument as in the two-state model shows that:

• the vector $(\hat\mu, \hat\nu, \hat\sigma, \hat\rho)$ has an asymptotic multivariate normal
distribution;

• each component has a marginal asymptotic distribution of the
same form as before:

$$\hat\mu \sim \text{Normal}\left(\mu, \frac{\mu}{E[V]}\right) \quad \text{etc}$$

• asymptotically, the components are uncorrelated and so
independent (being normal).
____________

49 The calculation of the estimates $\hat\mu$, etc, requires the total waiting time
to be computed. This can be done exactly in some circumstances, but,
if the exposure data are in census form, the simple census formulae in
Booklet 5 provide estimates. Multiple state models are, therefore,
especially well suited to the data available in many actuarial
investigations.
____________

A range of packages have been written in R to implement multiple state


models. Several of these are described, with illustrative code, in
Beyersmann, J., Schumacher, M. and Allignol, A. Competing Risks and
Multistate Models with R (London, Springer, 2012).
____________


Chapter 5 – Time-inhomogeneous Markov jump processes

50 The more general continuous-time Markov process $\{X_t, t \ge 0\}$ has
transition probabilities:

$$p_{ij}(s,t) = P[X_t = j \mid X_s = i] \qquad (s \le t)$$

which obey a version of the Chapman-Kolmogorov equations, written
in matrix form as:

$$P(s,t) = P(s,u)\,P(u,t) \quad \text{for all } s < u < t$$


____________

51 Proceeding as in the time-homogeneous case, we obtain:

$$p_{ij}(s, s+h) = \begin{cases} h\,\mu_{ij}(s) + o(h) & \text{if } i \ne j \\ 1 + h\,\mu_{ii}(s) + o(h) & \text{if } i = j \end{cases}$$
____________

52 We see that the only difference between this case and the
time-homogeneous case studied earlier is that the transition rates
$\mu_{ij}(s)$ are allowed to change over time.
____________

Kolmogorov’s forward equations

53 Kolmogorov’s forward equations can be derived.

Written in matrix form they are:

$$\frac{\partial}{\partial t} P(s,t) = P(s,t)\,A(t)$$

where $A(t)$ is the matrix with entries $\mu_{ij}(t)$.


____________


Kolmogorov’s backward equations

54 The matrix form of Kolmogorov’s backward equations is:

$$\frac{\partial}{\partial s} P(s,t) = -A(s)\,P(s,t)$$
____________

It is still the case that:

$$\mu_{ii}(s) = -\sum_{j \ne i} \mu_{ij}(s)$$

Hence each row of the matrix $A(s)$ has zero sum.

The general theory of time-inhomogeneous Markov jump processes is


rather too complicated to fall within the scope of the current syllabus,
but the methods used can be illustrated by means of a number of
practical examples.
____________

55 A two-state model

Consider the following survival model: transition from the alive state A
to the dead state D takes place at rate $\mu_{AD}(t)$, which has been
abbreviated to $\mu(t)$ here, since it is the only transition in the model.

[Transition diagram: A $\to$ D at rate $\mu(t)$.]

In other words $\mu(t)$ is the force of mortality.


____________


56 Since:

$$A(t) = \begin{pmatrix} -\mu(t) & \mu(t) \\ 0 & 0 \end{pmatrix}$$

the forward equations give:

$$\frac{\partial}{\partial t}\, p_{AA}(s,t) = -\mu(t)\, p_{AA}(s,t)$$
____________

57 The solution corresponding to the initial condition $p_{AA}(s,s) = 1$ is:

$$p_{AA}(s,t) = \exp\left(-\int_s^t \mu(x)\,dx\right)$$
____________

Equivalently the probability for an individual aged $s$ to survive for a
further period of length at least $w$ is:

$${}_wp_s = p_{AA}(s, s+w) = \exp\left(-\int_s^{s+w} \mu(x)\,dx\right) = \exp\left(-\int_0^w \mu(s+y)\,dy\right) \qquad (5.1)$$

This illustrates the need for time-dependent rates in mortality and
many other actuarial models: a constant force of mortality $\mu$ would
give rise to an age-independent survival probability ${}_wp_s$, an absurd
result.
____________
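
As an illustration of (5.1), the R sketch below evaluates a survival probability by numerically integrating an assumed Gompertz-type force of mortality; the functional form and parameter values are invented for the example.

```r
# Survival probability p_AA(s, s + w) = exp(-integral of mu over [s, s + w])
mu <- function(x) 0.0001 * exp(0.1 * x)   # assumed increasing force of mortality

survival.prob <- function(s, w) {
  int <- integrate(mu, lower = s, upper = s + w)$value
  exp(-int)
}

survival.prob(s = 40, w = 10)   # probability a life aged 40 survives to 50
survival.prob(s = 60, w = 10)   # lower, since mu increases with age
```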

Residual holding times

As it stands, (5.1) is peculiar to the specific survival model under


consideration; however, if properly reinterpreted it is but an instance of
a general formula.
____________


58 For a general Markov jump process, $X_t$, define the residual holding
time $R_s$ as the (random) amount of time between $s$ and the next jump:

$$\{R_s > w, X_s = i\} = \{X_u = i,\; s \le u \le s+w\}$$
____________

Formula (5.1) gives the probability that Rs > w given that the state at
time s is A.
____________

59 In general one can prove:

$$P[R_s > w \mid X_s = i] = \exp\left(-\int_s^{s+w} \lambda_i(t)\,dt\right) \qquad (5.2)$$

____________

60 Moreover, the characterisation of the state:

$$X_{s^+} = X_{s + R_s}$$

to which the jump takes place is similar to the time-homogeneous
case:

$$P[X_{s^+} = j \mid X_s = i, R_s = w] = \frac{\mu_{ij}(s+w)}{\lambda_i(s+w)} \qquad (5.3)$$
____________


Integrated form of the Kolmogorov equations

61 The above is more than a neat picture for the behaviour of Markov
jump processes: it is also a powerful computational tool. Indeed,
conditioning on $R_s$ and $X_{s^+}$ we have, using the law of total probability:

$$p_{ij}(s,t) = P[X_t = j \mid X_s = i] = \sum_{l \ne i} \int_0^{t-s} e^{-\int_s^{s+w} \lambda_i(u)\,du}\, \mu_{il}(s+w)\, P[X_t = j \mid X_s = i, R_s = w, X_{s^+} = l]\,dw$$

and therefore:

$$p_{ij}(s,t) = \sum_{l \ne i} \int_0^{t-s} e^{-\int_s^{s+w} \lambda_i(u)\,du}\, \mu_{il}(s+w)\, p_{lj}(s+w, t)\,dw \qquad (5.4)$$

This is the integrated form of the backward equation, as can be


checked by differentiation with respect to s .
____________

62 The formula may look intimidating but it conforms to intuition: since
$j \ne i$, the process must jump out of $i$ at some stage. By (5.2), the first
jump after time $s$ takes place at $s+w$ with probability density:

$$\lambda_i(s+w)\exp\left(-\int_s^{s+w} \lambda_i(u)\,du\right)$$

By (5.3) the process jumps to $l$ over the time period $[s, s+w]$ with
probability $\dfrac{\mu_{il}(s+w)}{\lambda_i(s+w)}$. It then remains to effect a transition from $l$ to $j$
over the remaining time period $[s+w, t]$:

state: $i$ (at time $s$) $\to$ $l$ (at time $s+w$) $\to$ $j$ (at time $t$)

____________


63 When $i = j$ there is an additional term $\exp\left(-\int_s^t \lambda_i(u)\,du\right)$ because the
process can remain in state $i$ throughout $[s, t]$.
____________

If instead of considering the first jump after s one focuses on the last
jump before t , one can obtain an intuitive derivation of the integrated
form of the forward equations.
____________

64 The forward equation when $i \ne j$ is:

$$p_{ij}(s,t) = \sum_{k \ne j} \int_0^{t-s} p_{ik}(s, t-w)\, \mu_{kj}(t-w)\, e^{-\int_{t-w}^{t} \lambda_j(u)\,du}\,dw \qquad (5.5)$$
____________

For a full justification of this equation one needs to appeal to the
properties of the current holding time $C_t$, namely the time between the
last jump and $t$:

$$\{C_t \ge w, X_t = j\} = \{X_u = j,\; t-w \le u \le t\}$$
____________


Applications

Marriage

65 Describe the marital status of an individual as one of the following:
bachelor (never married) (B), married (M), widowed (W), divorced (D),
dead ($\Delta$). We can define a Markov jump process on the state space
{B, M, W, D, $\Delta$} as illustrated below:

[Transition diagram: B $\to$ M at rate $\alpha(t)$; M $\to$ W at rate $\nu(t)$ and M $\to$ D at rate $\delta(t)$; W $\to$ M and D $\to$ M at the re-marriage rate $\rho(t)$; each of B, M, W and D has a transition to the dead state $\Delta$ at rate $\mu(t)$.]
____________

66 In the above the death rate $\mu(t)$ has been taken to be independent of
the marital status for simplicity.
____________


67 The probability of being married at time $t$ and of having been so for at
least $w$, given that you are a bachelor at time $s$, is (assuming
$w < t - s$):

$$P[X_t = M, C_t > w \mid X_s = B] = \int_w^{t-s} \big[p_{BB}(s, t-v)\,\alpha(t-v) + p_{BW}(s, t-v)\,\rho(t-v) + p_{BD}(s, t-v)\,\rho(t-v)\big]\, e^{-\int_{t-v}^{t}(\mu(u)+\nu(u)+\delta(u))\,du}\,dv$$

state: B (at time $s$) $\to$ $k$ $\to$ M (from time $t-v$, and still M from time $t-w$ to time $t$)

where $k$ is any of the states leading to M, namely B, W and D.


____________

Sickness and death

68 Describe the state of a person as ‘healthy’, ‘sick’ or ‘dead’. For given


time-dependent (ie age-dependent) transition rates, we can construct a
Markov jump process with state space {H, S, D}:

[Transition diagram: H (Healthy) $\to$ S (Sick) at rate $\sigma(t)$, S $\to$ H at rate $\rho(t)$, H $\to$ D (Dead) at rate $\mu(t)$ and S $\to$ D at rate $\nu(t)$.]

____________


69 The matrix $A(t)$ in Kolmogorov’s equations is:

$$A(t) = \begin{pmatrix} -\sigma(t) - \mu(t) & \sigma(t) & \mu(t) \\ \rho(t) & -\rho(t) - \nu(t) & \nu(t) \\ 0 & 0 & 0 \end{pmatrix}$$
____________

70 In particular:

$$\lambda_H(t) = \sigma(t) + \mu(t), \qquad \lambda_S(t) = \rho(t) + \nu(t) \qquad \text{and} \qquad \lambda_D = 0$$
____________

71 The easiest probabilities to calculate are those of remaining
continuously healthy or continuously sick over $[s, t]$. Using (5.2) these
are:

$$P[R_s > t - s \mid X_s = H] = \exp\left(-\int_s^t \big(\sigma(u) + \mu(u)\big)\,du\right) \qquad (5.6)$$

and:

$$P[R_s > t - s \mid X_s = S] = \exp\left(-\int_s^t \big(\rho(u) + \nu(u)\big)\,du\right)$$
____________


72 Transition probabilities can be related to each other as in (5.4) and
(5.5). For instance:

$$p_{HS}(s,t) = \int_0^{t-s} p_{SS}(s+w, t)\,\sigma(s+w)\, e^{-\int_s^{s+w}(\sigma(u)+\mu(u))\,du}\,dw$$

state: H (at time $s$) $\to$ S (at time $s+w$) $\to$ S (at time $t$)

____________

73 Extra conditions on residual or current holding times can be handled


without difficulty. Consider for instance the probability of being sick at
time t and of having been so for at least w , given that you are healthy
at time s . This is:

$$P[X_t = S, C_t > w \mid X_s = H] = \int_w^{t-s} p_{HH}(s, t-v)\,\sigma(t-v)\, e^{-\int_{t-v}^{t}(\rho(u)+\nu(u))\,du}\,dv$$
____________

Sickness and death with duration dependence

74 The Markov property implies that:

$$P[X_t = H \mid X_s = S, C_s = w] = P[X_t = H \mid X_s = S]$$
____________

75 In other words, the duration of your current illness has no bearing on


your future health prospects.
____________


76 In order to remove this undesirable feature, we modify the model by
allowing the rates of transition out of S to depend on the current
holding time $C_t$:

[Transition diagram: as before, except that the rates out of the Sick state now depend on the current holding time: S $\to$ H at rate $\rho(t, C_t)$ and S $\to$ D at rate $\nu(t, C_t)$, while H $\to$ S at rate $\sigma(t)$ and H $\to$ D at rate $\mu(t)$ are unchanged.]

____________

77 This appears to take us outside the scope of this unit, as the value
of Ct must now be incorporated into the state, so that the state space
is not countable (ie discrete as opposed to continuous) any more.

However, the framework from the start of this section can still be used
provided that there is careful conditioning on the relevant current
holding time.

In fact, since the transition rates $\sigma$ and $\mu$ do not depend on $C_t$, the
probability of remaining continuously healthy during $[s, t]$ is given by
(5.6) as before.

On the other hand, to calculate the probability of remaining
continuously sick during $[s, t]$ given a current illness period $[s-w, s]$,
one needs to update the values of $\rho$ and $\nu$ as the illness progresses:

state: H up to time $s-w$, then S from time $s-w$ onwards; at time $u$ (with $s \le u \le t$) the current duration of sickness is $w - s + u$
____________


78 $$P[X_t = S, R_s > t-s \mid X_s = S, C_s = w] = e^{-\int_s^t (\rho(u,\, w-s+u) + \nu(u,\, w-s+u))\,du}$$

____________

79 As a final example, the probability of being healthy at time $t$ given that
you are sick at time $s$ with current illness duration $w$ can be written
as:

$$p_{S_wH}(s,t) = P[X_t = H \mid X_s = S, C_s = w] = \int_s^t e^{-\int_s^v (\rho(u,\, w-s+u) + \nu(u,\, w-s+u))\,du}\, \rho(v,\, w-s+v)\, p_{HH}(v, t)\,dv$$

state: S from time $s-w$ (so sick with duration $w - s + u$ at time $u$) until recovery at time $v$, then H from time $v$ to time $t$
____________

Modelling and simulation

80 Time-homogeneous Poisson process models

A time-homogeneous Poisson process has a single parameter $\lambda$. The
estimation of this parameter given a collection of data is
straightforward.

Example

An insurance office observes that $m$ claims arrive in a total of $T$ time
units. If the company decides that a Poisson process model is
appropriate, the most suitable estimate for $\lambda$ would appear to
be $\hat\lambda = m/T$.
____________

This intuitive estimate is confirmed by more formal procedures such as


maximum likelihood estimation. Having estimated the parameter, all
that remains is to test goodness of fit.
____________


81 Divide the total time $T$ into $k$ equal intervals. If the Poisson process
model fits, the number of claims arriving in the $k$ intervals should
form a sequence of independent Poisson variates, each with
mean $\lambda T / k$. There are two things to test here:

• whether the distribution is Poisson and

• whether the observations are independent.

A standard $\chi^2$ goodness of fit test can be employed to determine
whether the Poisson distribution fits.
____________
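
A sketch of such a test in R, using invented claim counts for $k = 12$ intervals; the expected cell frequencies come from the fitted Poisson distribution, and one further degree of freedom is deducted for the estimated parameter.

```r
# Hypothetical numbers of claims in k = 12 equal intervals
counts <- c(3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 6, 2)
lambda.hat <- mean(counts)               # fitted Poisson mean per interval

# Group possible values into cells 0-2, 3-4, 5+ so expected counts are not too small
obs <- c(sum(counts <= 2), sum(counts %in% 3:4), sum(counts >= 5))
p   <- c(ppois(2, lambda.hat),
         ppois(4, lambda.hat) - ppois(2, lambda.hat),
         1 - ppois(4, lambda.hat))
exp.counts <- length(counts) * p

test.stat <- sum((obs - exp.counts)^2 / exp.counts)
df <- length(obs) - 1 - 1                # cells - 1, less one estimated parameter
1 - pchisq(test.stat, df)                # p-value of the goodness of fit test
```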

Assuming that the fit is adequate, independence is probably best


tested against the alternative that there is some form of serial
dependence. Tests for serial correlation are covered in Booklet 6.

____________

82 Time-inhomogeneous Poisson process models

In some classes of business, such as insurance against storm damage,
the intensity of arrival of claims may vary predictably with time, in the
sense that the insurer can tell in advance that some time intervals will
have more claim arrivals than other intervals of equal length.
A suitable model here is the time-inhomogeneous Poisson process, for
which the arrival rate of claims is a function $\lambda(t)$. In the given
example $\lambda$ will be periodic, with a period of one year.
____________

83 It is impractical to attempt to estimate the value of $\lambda(t)$ separately for
each value of $t$. A common procedure is to divide the whole time
period up into pieces of a suitable size and to estimate the arrival rate
separately for each piece.
____________


Thus data for a whole year may be divided into months, giving 12
estimated claim arrival rates. Tests of goodness of fit should be
carried out for each month separately, but tests for serial correlation
should use the whole data set at once.
____________

Time-homogeneous Markov models

84 The structural analysis of Paragraphs 31 to 36 is of paramount


importance when it comes to modelling continuous-time Markov jump
processes.
____________

85 Recall that each visit to any given state $i$ is of exponential duration
with mean $1/\lambda_i$ and is independent of the durations of previous visits
to that state and of the destination after the next jump. Further, the
probability that the next transition is to state $j$ is $\mu_{ij}/\lambda_i$.
____________

86 This suggests that it is feasible to separate the two estimation
procedures.

• First the $\lambda_i$ may be estimated: look at the data for the durations of
visits to state $i$ and let $1/\hat\lambda_i$ be equal to the sample mean of this
collection.

• Next proceed as in Booklet 2: let $n_i$ be the number of completed
visits to state $i$, $n_{ij}$ the number of direct transitions from state $i$
to state $j$, and set $\hat p_{ij} = n_{ij}/n_i$. Since $p_{ij}$ is equal to $\mu_{ij}/\lambda_i$, a
sensible estimator for $\mu_{ij}$ is $\hat\mu_{ij} = \hat\lambda_i\, \hat p_{ij}$.


____________
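
A minimal sketch of this two-step estimation in R; the visit durations, destination states and state labels are all invented for the illustration.

```r
# Invented data on completed visits to state H
durations    <- c(2.3, 0.8, 1.5, 3.1, 0.6, 1.9)           # visit lengths
destinations <- c("S", "S", "D", "S", "D", "S")           # state entered at the next jump

lambda.hat <- 1 / mean(durations)                         # estimated rate of leaving H
p.hat      <- table(destinations) / length(destinations)  # jump-chain probabilities
mu.hat     <- lambda.hat * p.hat                          # estimated transition rates out of H

mu.hat
```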

87 Tests for goodness of fit are more problematical, if only because there
is a vast collection of possible alternative hypotheses. It is reasonable
to test whether the visits to a given state really are exponentially
distributed: a $\chi^2$ goodness of fit test will do this.
____________


It is also reasonable to test whether the jump chain really does exhibit
the Markov property: see Booklet 2 for a discussion. But there are
other implications of the Markov structure that should be tested and
the procedure is not always clear.
____________

88 For example, to derive a formal test as to whether the destination of a


jump is independent of the duration of the previous holding time we
would need to do something like this:

• look at all visits to state $i$ and classify them as long-duration,
medium-duration or short-duration

• for each duration category, estimate the transition probabilities of
the jump chain separately, giving estimates $\hat p_{ij}^{(L)}$, $\hat p_{ij}^{(M)}$ and $\hat p_{ij}^{(S)}$

• determine whether the differences between the sets of estimated
transition probabilities are significant.

However, it is by no means clear what test statistic could be employed


or what its distribution might be. In practice the investigation of this
question would be accomplished graphically: for each visit to state i ,
plot a point on a graph whose x -coordinate represents the duration of
the visit, y -coordinate the destination of the next jump. If a pattern
appears, reject the assumption of independence.
____________
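
The graphical check just described takes only a few lines in R; the durations and destinations below are again invented data for visits to a single state.

```r
# Invented data: durations of visits to a state and the destination of the next jump
durations    <- c(2.3, 0.8, 1.5, 3.1, 0.6, 1.9, 2.7, 0.4, 1.1, 2.0)
destinations <- factor(c("S", "S", "D", "S", "D", "S", "S", "D", "S", "S"))

# Plot duration against destination; a visible pattern would cast doubt on the
# assumption that the destination is independent of the holding time
plot(durations, as.numeric(destinations),
     xlab = "duration of visit to state i",
     ylab = "destination of next jump",
     yaxt = "n")
axis(2, at = seq_along(levels(destinations)), labels = levels(destinations))
```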

Other tests, such as testing whether the first visit to a given state is
significantly longer than subsequent visits, are also best treated
graphically.
____________

Time-inhomogeneous Markov models

89 The structural decomposition of the time-homogeneous Markov model


does not apply to the time-inhomogeneous case. The estimation of
time-dependent transition rates, such as the force of mortality or
age-dependent rate of recovery from sickness, is best treated within
the context of the particular model being studied.
____________


Simulation

There are two approaches to the task of simulating a


time-homogeneous Markov jump process. The first is an approximate
method and the second exact.
____________

90 Approximate method

Divide time into very short intervals of width $h$, say, where $h\,\mu_{ij}$ is
much smaller than 1 for each $i$ and $j$. The transition matrix $P(h)$ of
the Markov chain has entries approximately given by:

$$p^*_{ij}(h) = \delta_{ij} + h\,\mu_{ij}$$

Using the techniques of Booklet 2 we may simulate a discrete-time
Markov chain $\{Y_n, n \ge 0\}$ with these transition probabilities, then
write $X_t = Y_{[t/h]}$.
____________
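
A sketch of the approximate method in R for a healthy–sick–dead model with invented constant transition rates; the step width $h$ is chosen so that each $h\,\mu_{ij}$ is small.

```r
# Approximate simulation on a grid of width h, using P*(h) = I + h A
states <- c("H", "S", "D")
A <- matrix(c(-0.3,  0.2,  0.1,      # invented constant transition rates
               0.4, -0.5,  0.1,
               0.0,  0.0,  0.0),
            nrow = 3, byrow = TRUE, dimnames = list(states, states))

h <- 0.01
P.h <- diag(3) + h * A               # approximate one-step transition matrix
dimnames(P.h) <- dimnames(A)

set.seed(2)
n.steps <- 1000                      # simulate over [0, 10] in steps of h
path <- character(n.steps + 1)
path[1] <- "H"
for (k in 1:n.steps) {
  path[k + 1] <- sample(states, 1, prob = P.h[path[k], ])
}
table(path)                          # time (in steps of h) spent in each state
```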

This simplistic method is not very satisfactory, as its long-term


distribution may differ significantly from that of the process being
modelled.

An improved version of this method is available, which uses the exact
transition probabilities $p_{ij}(h)$ instead of $p^*_{ij}(h)$, but this naturally
requires that the exact probabilities be calculated in advance. General
techniques for such calculations, where not covered by this booklet,
are beyond the scope of the syllabus.
____________


91 Exact method

This takes advantage of the structural decomposition of the jump
process. First simulate the jump chain of the process as a Markov
chain with transition probabilities $p_{ij} = \mu_{ij}/\lambda_i$. Once the path
$\{\hat X_n : n = 0, 1, \ldots\}$ has been generated, the holding times $\{T_n : n = 0, 1, \ldots\}$
are a sequence of independent exponential random variables, $T_n$
having rate parameter given by $\lambda_{\hat X_n}$.

____________
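
A sketch of the exact method in R, again with an invented generator: the jump chain is simulated first and exponential holding times are then attached.

```r
# Exact simulation of a time-homogeneous Markov jump process from its generator A
states <- c("H", "S", "D")
A <- matrix(c(-0.3,  0.2,  0.1,      # invented constant transition rates
               0.4, -0.5,  0.1,
               0.0,  0.0,  0.0),
            nrow = 3, byrow = TRUE, dimnames = list(states, states))

simulate.exact <- function(A, start, t.max) {
  states <- rownames(A)
  lambda <- -diag(A)                        # rates of leaving each state
  names(lambda) <- states
  state <- start
  t <- 0
  path <- start
  jump.times <- 0
  while (lambda[state] > 0) {
    t <- t + rexp(1, rate = lambda[state])  # exponential holding time in current state
    if (t > t.max) break
    probs <- A[state, ] / lambda[state]     # jump-chain probabilities mu_ij / lambda_i
    probs[state] <- 0
    state <- sample(states, 1, prob = probs)
    path <- c(path, state)
    jump.times <- c(jump.times, t)
  }
  data.frame(state = path, entry.time = jump.times)
}

set.seed(3)
simulate.exact(A, start = "H", t.max = 10)
```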

Time-inhomogeneous processes

Given the transition rates of a time-inhomogeneous Markov chain and


given the state X t at time t , it is in principle possible to determine the
density function of the time until the next transition and the destination
of the next jump: see Paragraphs 65 to 79 for examples. This means
that standard simulation techniques can be deployed to generate an
exact simulation of the process.
____________

92 In practice, however, such a procedure is cumbersome in the extreme,
unless the number of states is very small, and a more usual approach
is to use the approximate method outlined above. The exact transition
probabilities $p_{ij}(t, t+h)$ will seldom be to hand, meaning that the less
satisfactory approximate values $p^*_{ij}(t, t+h) = \delta_{ij} + h\,\mu_{ij}(t)$ must be used
instead. The method is acceptable for short-term simulations but is
unreliable in the long term.
____________
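
A minimal sketch of this approach in R for a time-inhomogeneous two-state (alive/dead) model with an assumed increasing force of mortality; the same caveat about long-term reliability applies.

```r
# Approximate simulation of a time-inhomogeneous two-state (alive/dead) model
mu <- function(t) 0.01 + 0.002 * t     # assumed increasing force of mortality (invented)
h  <- 0.01

set.seed(4)
state <- "A"
t <- 0
while (state == "A" && t < 50) {
  # approximate probability of dying in (t, t + h] is h * mu(t)
  if (runif(1) < h * mu(t)) state <- "D"
  t <- t + h
}
list(final.state = state, time = t)
```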


PAST EXAM QUESTIONS

This section contains all the relevant exam questions from 2008 to 2017 that
are related to the topics covered in this booklet.

Solutions are given after the questions. These give enough information for
you to check your answer, including working, and also show you what an
outline examination answer should look like. Further information may be
available in the Examiners’ Report, ASET or Course Notes. (ASET can be
ordered from ActEd.)

We first provide you with a cross-reference grid that indicates the main
subject areas of each exam question. You can use this, if you wish, to
select the questions that relate just to those aspects of the topic that you
may be particularly interested in reviewing.

Alternatively, you can choose to ignore the grid, and attempt each question
without having any clues as to its content.


Cross-reference grid

[The grid maps each of the 33 questions to the main subject areas it covers: definition of an MJP / formulation as an MJP, Markov property, Poisson process, transition diagram, generator matrix, differential equations, integral equations, occupancy probabilities, waiting times, jump chain and MLEs, with a final column for recording the questions you have attempted.]


1 Subject CT4 April 2008 Question 10

An internet service provider (ISP) is modelling the capacity requirements for


its network. It assumes that if a customer is not currently connected to the
internet (‘offline’) the probability of connecting in the short time interval
[t , t + dt ] is 0.2dt + o(dt ) . If the customer is connected to the internet
(‘online’) then it assumes the probability of disconnecting in the time interval
is given by 0.8dt + o(dt ) .

The probabilities that the customer is online and offline at time t are PON (t )
and POFF (t ) respectively.

(i) Explain why the status of an individual customer can be considered as a


Markov Jump Process. [2]

(ii) Write down Kolmogorov's forward equation for $P'_{OFF}(t)$. [2]

(iii) Solve the equation in part (ii) to obtain a formula for the probability that a
customer is offline at time t , given that they were offline at time 0. [3]

(iv) Calculate the expected proportion of time spent online over the period
[0, t ] . [HINT: Consider the expected value of an indicator function
which takes the value 1 if offline and 0 otherwise.] [4]

(v) (a) Sketch a graph of your answer to (iv) above.

(b) Explain its shape. [3]


[Total 14]


2 Subject CT4 April 2008 Question 11

An investigation was carried out into the relationship between sickness and
mortality in an historical population of working class men. The investigation
used a three-state model with the states:

1 Healthy
2 Sick
3 Dead

Let the probability that a person in state $i$ at time $x$ will be in state $j$ at
time $x+t$ be ${}_tp_x^{ij}$. Let the transition intensity at time $x+t$ between any two
states $i$ and $j$ be $\mu_{x+t}^{ij}$.

(i) Draw a diagram showing the three states and the possible transitions
between them. [2]

(ii) Show from first principles that:

$$\frac{\partial}{\partial t}\,{}_tp_x^{23} = {}_tp_x^{21}\,\mu_{x+t}^{13} + {}_tp_x^{22}\,\mu_{x+t}^{23} \qquad [5]$$

(iii) Write down the likelihood of the data in the investigation in terms of the
transition rates and the waiting times in the Healthy and Sick states,
under the assumption that the transition rates are constant. [3]

The investigation collected the following data:


 man-years in Healthy state 265
 man-years in Sick state 140
 number of transitions from Healthy to Sick 20
 number of transitions from Sick to Dead 40

(iv) Derive the maximum likelihood estimator of the transition rate from Sick
to Dead. [3]

(v) Hence estimate:

(a) the value of the constant transition rate from Sick to Dead

(b) 95 per cent confidence intervals around this transition rate. [4]
[Total 17]


3 Subject CT4 September 2008 Question 4

In the village of Selborne in Southern England in the year 1637 the number
of babies born each month was as follows:

January 2 July 5
February 1 August 1
March 1 September 0
April 2 October 2
May 1 November 0
June 2 December 3

Data show that over the 20 years before 1637 there was an average of 1.5
births per month. You may assume that births in the village historically
follow a Poisson process.

An historian has suggested that the large number of births in July 1637 is
unusual.

(i) Carry out a test of the historian’s suggestion, stating your conclusion. [4]

(ii) Comment on the assumption that births follow a Poisson process. [1]
[Total 5]

4 Subject CT4 September 2008 Question 9

A company pension scheme, with a compulsory scheme retirement age of


65, is modelled using a multiple state model with the following categories:
1 currently employed by the company
2 no longer employed by the company, but not yet receiving a pension
3 pension in payment, pension commenced early due to ill health
retirement
4 pension in payment, pension commenced at scheme retirement age
5 dead

(i) Describe the nature of the state space and time space for this process.
[2]

(ii) Draw and label a transition diagram indicating appropriate transitions


between the states. [2]


For $i$, $j$ in {1, 2, 3, 4, 5}, let:

${}_tp_x^{1i}$ be the probability that a life is in state $i$ at age $x+t$, given they are in
state 1 at age $x$

$\mu_{x+t}^{ij}$ be the transition intensity from state $i$ to state $j$ at age $x+t$.

(iii) Write down equations which could be used to determine the evolution of
${}_tp_x^{1i}$ (for each $i$) appropriate for:

(a) $x + t < 65$
(b) $x + t = 65$
(c) $x + t > 65$. [6]
[Total 10]

5 Subject CT4 April 2009 Question 8

There is a population of ten cats in a certain neighbourhood. Whenever a


cat which has fleas meets a cat without fleas, there is a 50% probability that
some of the fleas transfer to the other cat such that both cats harbour fleas
thereafter. Contacts between two of the neighbourhood cats occur
according to a Poisson process with rate $\mu$, and these meetings are equally
likely to involve any of the possible pairs of individuals. Assume that once
infected a cat continues to have fleas, and that none of the cats’ owners has
taken any preventative measures.

(i) If the number of cats currently infected is x , explain why the number of
possible pairings of cats which could result in a new flea infection is
x (10 - x ) . [1]

(ii) Show how the number of infected cats at any time, X (t ) , can be
formulated as a Markov jump process, specifying:

(a) the state space

(b) the Kolmogorov differential equations in matrix form. [4]

(iii) State the distribution of the holding times of the Markov jump process. [2]

(iv) Calculate the expected time until all the cats have fleas, starting from a
single flea-infected cat. [2]
[Total 9]


6 Subject CT4 April 2009 Question 11

An investigation into mortality by cause of death used the four-state Markov


model shown below.

[Four-state model diagram: state 1 (Alive), with transition intensities $\mu_{x+t}^{12}$ to state 2 (Dead from heart disease), $\mu_{x+t}^{13}$ to state 3 (Dead from cancer) and $\mu_{x+t}^{14}$ to state 4 (Dead from other causes).]

(i) Show from first principles that:

$$\frac{\partial}{\partial t}\,{}_tp_x^{12} = \mu_{x+t}^{12}\,{}_tp_x^{11} \qquad [5]$$

The investigation was carried out separately for each year of age, and the
transition intensities were assumed to be constant within each single year of
age.

(ii) (a) Write down, defining all the terms you use, the likelihood for the
transition intensities.

(b) Derive the maximum likelihood estimator of the force of mortality


from heart disease for any single year of age. [5]

The investigation produced the following data for persons aged 64 last
birthday:

Total waiting time in the state Alive 1,065 person-years

Number of deaths from heart disease 34


Number of deaths from cancer 36
Number of deaths from other causes 42


(iii) (a) Calculate the maximum likelihood estimate (MLE) of the force of
mortality from heart disease at age 64 last birthday.

(b) Estimate an approximate 95% confidence interval for the MLE of


the force of mortality from heart disease at age 64 last birthday. [3]

(iv) Discuss how you might use this model to analyse the impact of risk
factors on the death rate from heart disease and suggest, giving
reasons, a suitable alternative model. [3]
[Total 16]

7 Subject CT4 September 2009 Question 6

The complaints department of a company has two employees, both of whom


work five days per week.

The company models the arrival of complaints using a Poisson process with
rate 1.25 per working day.

(i) List the assumptions underlying the Poisson process model. [2]

On receipt of a complaint, it is immediately assessed as being


straightforward, of medium difficulty or complicated. 60% of cases are
assessed as straightforward and 10% are assessed as complicated. The
time taken in person-days’ effort to prepare responses is assumed to follow
an exponential distribution, with parameters 2 for straightforward complaints,
1 for medium difficulty complaints and 0.25 for complicated complaints.

(ii) Calculate the average number of person-days’ work expected to be


generated by complaints arriving during a five-day working week. [2]

(iii) Define a state space under which the number of outstanding complaints
can be modelled as a Markov jump process. [2]

The company has a service standard of responding to complaints within a


fixed number of days of receipt. It is considering using this Markov jump
process to model the probability of failing to meet this service standard.

(iv) Discuss the appropriateness of using the model for this purpose, with
reference to the assumptions being made. [3]
[Total 9]


8 Subject CT4 September 2009 Question 8

A researcher is studying a certain incurable disease. The disease can be


fatal, but often sufferers survive with the condition for a number of years.
The researcher wishes to project the number of deaths caused by the
disease by using a multiple state model with state space:

{ H – Healthy, I – Infected, D(from disease) – Dead (caused by the disease),


D(not from disease) – Dead (not caused by the disease)}.

The transition rates, dependent on age x , are as follows:


• a mortality rate from the Healthy state of $\mu(x)$

• a rate of infection with the disease $\sigma(x)$

• a mortality rate from the Infected state of $\upsilon(x)$, of which $\rho(x)$ relates to
deaths caused by the disease.

(i) Draw a transition diagram for the multiple state model. [2]

(ii) Write down Kolmogorov’s forward equations governing the transitions


by specifying the transition matrix. [3]

(iii) Determine integral expressions, in terms of the transition rates and any
expressions previously determined, for:

(a) $P_{HH}(x, x+t)$

(b) $P_{HI}(x, x+t)$

(c) $P_{HD(\text{from disease})}(x, x+t)$ [5]


[Total 10]


9 Subject CT4 April 2010 Question 7

A government has introduced a two-tier driving test system. Once someone


applies for a provisional licence they are considered a Learner driver.
Learner drivers who score 90% or more on the primary examination (which
can be taken at any time) become Qualified. Those who score between
50% and 90% are obliged to sit a secondary examination and are given
driving status Restricted. Those who score 50% or below on the primary
examination remain as Learners. Restricted drivers who pass the secondary
examination become Qualified, but those who fail revert back to Learner
status and are obliged to start again.

(i) Sketch a diagram showing the possible transitions between the states.[2]

(ii) Write down the likelihood of the data, assuming transition rates between
states are constant over time, clearly defining all terms you use. [3]

Figures over the first year of the new system based on those who applied for
a provisional licence during that time in one area showed the following:
 Person-months in Learner State 1,161
 Person-months in Restricted State 1,940
 Number of transitions from Learner to Restricted 382
 Number of transitions from Restricted to Learner 230
 Number of transitions from Restricted to Qualified 110
 Number of transitions from Learner to Qualified 217

(iii) (a) Derive the maximum likelihood estimator of the transition rate from
Restricted to Learner.

(b) Estimate the constant transition rate from Restricted to Learner. [3]
[Total 8]


10 Subject CT4 April 2010 Question 11

A reinsurance policy provides cover in respect of a single occurrence of a


specified catastrophic event. If such an event occurs, future cover is
suspended. However if a reinstatement premium is paid within one time
period of occurrence of the event then the insurance coverage is reinstated.
If a second specified event occurs it is not permitted to reinstate the cover
and the policy will lapse.

The transition rate for the hazard of the specified event is a constant 0.1.
Whilst policies are eligible for reinstatement, the transition rate for
resumption of cover through paying a reinstatement premium is 0.05.

(i) Explain whether a time homogeneous or time inhomogeneous model


would be more appropriate for modelling this situation. [2]

(ii) (a) Explain why a model with state space {Cover In Force, Suspended,
Lapsed} does not possess the Markov property.

(b) Suggest, giving reasons, additional state(s) such that the expanded
system would possess the Markov property. [3]

(iii) Sketch a transition diagram for the expanded system. [2]

(iv) Derive the probability that a policy remains in the Cover In Force state
continuously from time 0 to time t . [2]

(v) Derive the probability that a policy is in the Suspended state at


time t > 1 if it is in state Cover In Force at time 0. [5]
[Total 14]


11 Subject CT4 September 2010 Question 10

A study is undertaken of marriage patterns for women in a country where


bigamy is not permitted. A sample of women is interviewed and asked about
the start and end dates of all their marriages and, where the marriages had
ended, whether this was due to death or divorce (all other reasons can be
ignored). The investigators are interested in estimating the rate of first
marriage for all women and the rate of re-marriage among widows.

(i) Draw a diagram illustrating a multiple-state model which the


investigators could use to make their estimates, using the four states:
‘Never married’, ‘Married’, ‘Widowed’ and ‘Divorced’. [1]

(ii) Derive from first principles the Kolmogorov differential equation for first
marriages. [5]

(iii) Write down the likelihood of the data in terms of the waiting times in
each state, the numbers of transitions of each type, and the transition
intensities, assuming the transition intensities are constant. [3]

(iv) Derive the maximum likelihood estimator of the rate of first marriage. [2]
[Total 11]

12 Subject CT4 September 2010 Question 11

At a certain airport, taxis for the city centre depart from a single terminus.
The taxis are all of the same make and model, and each can seat four
passengers (not including the driver). The terminus is arranged so that
empty taxis queue in a single line, and passengers must join the front taxi in
the line. As soon as it is full, each taxi departs. A strict environmental law
forbids any taxi from departing unless it is full. Taxis are so numerous that
there is always at least one taxi waiting in line.

Customers arrive at the terminus according to a Poisson process with a


rate β per minute.

(i) Explain how the number of passengers waiting in the front taxi can be
modelled as a Markov jump process. [2]

(ii) Write down, for this process:

(a) the generator matrix

(b) Kolmogorov’s forward equations in component form. [4]


(iii) Calculate the expected time a passenger arriving at the terminus will
have to wait until his or her taxi departs. [4]

The four-passenger taxis were highly polluting, and the government


instituted a ‘scrappage’ scheme whereby taxi drivers were given a subsidy to
replace their old four-passenger taxis with new ‘greener’ models. Two such
models were on the market, one of which had a capacity of three
passengers and the other of which had a capacity of five passengers (again,
not including the driver in each case). Half the taxis were replaced with
three-passenger models, and half with five-passenger models.

Assume that, after the replacement, three-passenger and five-passenger


models arrive randomly at the terminus.

(iv) Write down the transition matrix of the Markov jump chain describing the
number of passengers in the front taxi after the vehicle replacement. [2]

(v) Calculate the expected waiting time for a passenger arriving at the
terminus after the vehicle scrappage scheme and compare this with
your answer to part (iii). [3]
[Total 15]

13 Subject CT4 April 2011 Question 9

(i) Define a Markov jump process. [2]

A study of a tropical disease used a three-state Markov process model with


states:

1. Not suffering from the disease


2. Suffering from the disease
3. Dead.

The disease can be fatal, but most sufferers recover. Let ${}_tp_x^{ij}$ be the
probability that a person in state i at age x is in state j at age x + t. Let
$\mu_{x+t}^{ij}$ be the transition intensity from state i to state j at age x + t.

(ii) Show from first principles that:

$\dfrac{\partial}{\partial t}\,{}_tp_x^{13} = {}_tp_x^{11}\mu_{x+t}^{13} + {}_tp_x^{12}\mu_{x+t}^{23}$ [4]


The study revealed that sufferers who contract the disease a second or
subsequent time are more likely to die, and less likely to recover, than first-
time sufferers.

(iii) Draw a diagram showing the states and possible transitions of a model
which allows for this effect yet retains the Markov property. [3]
[Total 9]

14 Subject CT4 September 2011 Question 6

A recording instrument is set up to observe a continuous time process, and


stores the results for the most recent 250 transitions. The data collected are
as follows:

 State i | Total time spent in | Number of transitions to
         | state i (hours)     | State A | State B | State C
 --------|---------------------|---------|---------|--------
    A    |         35          |   n/a   |   60    |   45
    B    |        150          |   50    |   n/a   |   25
    C    |        210          |   55    |   15    |   n/a

It is proposed to fit a Markov jump model using the data.

(i) (a) State all the parameters of the model.

(b) Outline the assumptions underlying the model. [4]

(ii) (a) Estimate the parameters of the model.

(b) Write down the estimated generator matrix of the model. [4]

(iii) Specify the distribution of the number of transitions from state i to


state j , given the number of transitions out of state i . [1]
[Total 9]


15 Subject CT4 September 2011 Question 8

A continuous-time Markov process with states { Able to work ( A ),


Temporarily unable to work ( T ), Permanently unable to work ( P ), Dead
( D ) } is used to model the cost of providing an incapacity benefit when a
person is permanently unable to work.

The generator matrix, with rates expressed per annum, for the process is
estimated as:

            A       T       P       D
   A   ( -0.15    0.1     0.02    0.03 )
   T   (  0.45   -0.6     0.1     0.05 )
   P   (  0       0      -0.2     0.2  )
   D   (  0       0       0       0    )

(i) Draw the transition graph for the process. [2]

(ii) Calculate the probability of a person remaining in state A for at least 5


years continuously. [2]

Define F (i ) to be the probability that a person, currently in state i , will


never be in state P .

(iii) Derive an expression for:

(a) F ( A) by conditioning on the first move out of state A

(b) F (T ) by conditioning on the first move out of state T . [3]

(iv) Calculate F ( A) and F (T ) . [2]

(v) Calculate the expected future duration spent in state P , for a person
currently in state A . [2]
[Total 11]


16 Subject CT4 April 2012 Question 10

An investigation was conducted into the effect marriage has on mortality and
a model was constructed with three states: 1 Single, 2 Married and 3 Dead.
It is assumed that transition rates between states are constant.

(i) Sketch a diagram showing the possible transitions between states. [2]

(ii) Write down an expression for the likelihood of the data in terms of
transition rates and waiting times, defining all the terms you use. [3]

The following data were collected from information on males and females in
their thirties.

Years spent in Married state 40,062


Years spent in Single state 10,298
Number of transitions from Married to Single 1,382
Number of transitions from Single to Dead 12
Number of transitions from Married to Dead 9

(iii) Derive the maximum likelihood estimator of the transition rate from
Single to Dead. [4]

(iv) Estimate the constant transition rate from Single to Dead and its
variance. [2]
[Total 11]


17 Subject CT4 September 2012 Question 7

The volatility of equity prices is classified as being High (H ) or Low (L )


according to whether it is above or below a particular level. The volatility
status is assumed to follow a Markov jump process with constant transition
rates: μ from state L to state H, and ρ from state H to state L.

(i) Write down the generator matrix of the Markov jump process. [1]

(ii) State the distribution of holding times in each state. [1]

A history of equity price volatility is available over a representative time


period.

(iii) Explain how the parameters m and r can be estimated. [2]

Let ${}_tp_s^{ij}$ be the probability that the process is in state j at time s + t given
that it was in state i at time s ($i, j = H, L$), where $t \ge 0$. Let ${}_tp_s^{\overline{ii}}$ be the
probability that the process remains in state i from time s to time s + t.

(iv) Write down Kolmogorov’s forward equations for $\dfrac{\partial}{\partial t}\,{}_tp_s^{LL}$, $\dfrac{\partial}{\partial t}\,{}_tp_s^{\overline{LL}}$
and $\dfrac{\partial}{\partial t}\,{}_tp_s^{LH}$. [2]

Equity price volatility is Low at time zero.

(v) Derive an expression for the time after which there is a greater than
50% chance of having experienced a period of high equity price
volatility. [2]

(vi) Solve the Kolmogorov equation to obtain an expression for ${}_tp_0^{LL}$. [4]
[Total 12]


18 Subject CT4 September 2012 Question 10

On a small distant planet lives a race of aliens. The aliens can die in one of
two ways, either through illness, or by being sacrificed according to the
ancient custom of the planet. Aliens who die from either cause may, some
time later, become zombies.

(i) Draw a multiple-state diagram with four states illustrating the process by
which aliens die and become zombies, labelling the four states and the
possible transitions between them. [2]

(ii) Write down the likelihood of the process in terms of the transition
intensities, the numbers of events observed and the waiting times in the
relevant states, clearly defining all the terms you use. [4]

(iii) Derive the maximum likelihood estimator of the death rate from illness.
[3]

The aliens take censuses of their population every ten years (where the year
is an ‘alien year’, which is the length of time their planet takes to orbit their
sun). On 1 January in alien year 46,567, there were 3,189 live aliens in the
population. On 1 January in alien year 46,577 there were 2,811 live aliens in
the population. During the intervening ten alien years, a total of 3,690 aliens
died from illness and 2,310 were sacrificed, and the annual death rates from
illness and sacrifice were constant and the same for each alien.

(iv) Estimate the annual death rates from illness and from sacrifice over the
ten alien years between alien years 46,567 and 46,577. [2]

The rate at which aliens who have died from either cause become zombies
is 0.1 per alien year.

(v) Calculate the probabilities that an alien alive in alien year 46,567 will,
ten alien years later:

(a) still be alive

(b) be dead but not a zombie. [7]


[Total 18]


19 Subject CT4 April 2013 Question 8

During a football match, the referee can caution players if they commit an
offence by showing them a yellow card. If a player commits a second
offence which the referee deems worthy of a caution, they are shown a red
card, and are sent off the pitch and take no further part in the match. If the
referee considers a particularly serious offence has been committed, he can
show a red card to a player who has not previously been cautioned, and
send the player off immediately.

The football team manager can also decide to substitute one player for
another at any point in the match so that the substituted player takes no
further part in the match. Due to the risk of a player being sent off, the
manager is more likely to substitute a player who has been shown a yellow
card. Experience shows that players who have been shown a yellow card
play more carefully to try to avoid a second offence.

The rate at which uncautioned players are shown a yellow card is 1/10 per
hour.

The rate at which those players who have already been shown a yellow card
are shown a red card is 1/15 per hour.

The rate at which uncautioned players are shown a red card is 1/40 per
hour.

The rate at which players are substituted is 1/10 per hour if they have not
been shown a yellow card, and 1/5 if they have been shown a yellow card.

(i) Sketch a transition graph showing the possible transitions between


states for a given player. [2]

(ii) Write down the compact form of the Kolmogorov forward equations,
specifying the generator matrix. [3]

A football match lasts 1.5 hours.

(iii) Solve the Kolmogorov equation for the probability that a player who
starts the match remains in the game for the whole match without being
shown a yellow card or a red card. [2]

(iv) Calculate the probability that a player who starts the match is sent off
during the match without previously having been cautioned. [3]

Consider a match that continued indefinitely rather than ending after 1.5
hours.


(v) (a) Derive the probability that in this instance a player is sent off without
previously having been cautioned.

(b) Explain your result. [2]


[Total 12]

20 Subject CT4 April 2013 Question 10

(i) State the Markov property. [1]

A certain non-fatal medical condition affects adults. Adults with the condition
suffer frequent episodes of blurred vision. A study was carried out among a
group of adults known to have the condition. The study lasted one year, and
each participant in the study was asked to record the duration of each
episode of blurred vision. All participants remained under observation for
the entire year.

The data from the study were analysed using a two-state Markov model with
states:

1. not suffering from blurred vision


2. suffering from blurred vision.

Let the transition rate from state i to state j at time x + t be $\mu_{x+t}^{ij}$, and let
the probability that a person in state i at time x will be in state j at time
x + t be ${}_tp_x^{ij}$.

(ii) Derive from first principles the Kolmogorov forward equation for the
transition from state 1 to state 2. [5]

The results of the study were as follows:

Participant-days in state 1 21,650


Participant-days in state 2 5,200
Number of transitions from state 1 to state 2 4,330
Number of transitions from state 2 to state 1 4,160

Assume the transition intensities are constant over time.

(iii) Calculate the maximum likelihood estimates of the transition intensities


from state 1 to state 2 and from state 2 to state 1. [1]


(iv) Estimate the probability that an adult with the condition who is presently
not suffering from blurred vision will be suffering from blurred vision in 3
days’ time. [6]
[Total 13]

21 Subject CT4 September 2013 Question 8

Outside an apartment block there is a small car park with three parking
spaces. A prospective purchaser of an apartment in the block is concerned
about how often he would return in his car to find that there was no empty
parking space available. He decides to model the number of parking spaces
free at any time using a time-homogeneous Markov jump process where:
 the probability that a car will arrive seeking a parking space in a short
interval dt is A·dt + o(dt)
 for each car which is currently parked, the probability that its owner
drives the car away in a short interval dt is B·dt + o(dt)

where A, B > 0.

(i) Specify the state space for the above process. [1]

(ii) Draw a transition graph of the process. [2]

(iii) Write down the generator matrix for the process. [2]

(iv) Derive the probability that, given all the parking spaces are full, they will
remain full for at least the next two hours. [2]

(v) Explain what is meant by a jump chain. [1]

(vi) Specify the transition matrix for the jump chain associated with this
process. [2]

Suppose there are currently two empty parking spaces.

(vii) Determine the probability that all the spaces become full before any cars
are driven away. [1]

(viii) Derive the probability that the car park becomes full before the car park
becomes empty. [3]

(ix) Comment on the prospective purchaser’s assumptions regarding the


arrival and departure of cars. [3]
[Total 17]


22 Subject CT4 April 2014 Question 7

A team of medical researchers is interested in assessing the effect of a


certain condition on mortality. The condition is, in itself, non-fatal and is
curable, but is believed to increase the risk of death from heart disease. The
team proposes to use a model with four states: (1) ‘Alive, without condition’,
(2) ‘Alive, with condition’, (3) ‘Dead from heart disease’ and (4) ‘Dead from
other causes’.

(i) Draw a diagram showing the possible transitions between the four
states. [2]

Let the transition intensity between state i and state j at time x + t be
$\mu_{x+t}^{ij}$. Let the probability that a person in state i at time x will be in state j
at time x + t be ${}_tp_x^{ij}$.

(ii) Show, from first principles, that:

$\dfrac{\partial}{\partial t}\left({}_tp_x^{24}\right) = {}_tp_x^{21}\mu_{x+t}^{14} + {}_tp_x^{22}\mu_{x+t}^{24}$ [5]

An empirical investigation using data for persons aged between 60 and 70


years produces the following results:
 Waiting time in state ‘Alive, without condition’ is 2,046 person-years
 Waiting time in state ‘Alive, with condition’ is 1,139 person-years
 10 deaths from heart disease to persons ‘Alive, without condition’
 30 deaths from other causes to persons ‘Alive, without condition’
 25 deaths from heart disease to persons ‘Alive, with condition’
 20 deaths from other causes to persons ‘Alive, with condition’.

(iii) Show that there is a statistically significant difference (at the 95%
confidence level) between the death rates from heart disease for
persons with and without the condition. [5]
[Total 12]


23 Subject CT4 April 2015 Question 10

In a computer game a player starts with three lives. Events in the game
which cause the player to lose a life occur with a probability μ dt + o(dt) in a
small time interval dt. However, the player can also find extra lives. The
probability of finding an extra life in a small time interval dt is λ dt + o(dt).
The game ends when a player runs out of lives.

(i) Outline the state space for the process which describes the number of
lives a player has. [1]

(ii) Draw a transition graph for the process, including the relevant transition
rates. [3]

(iii) Determine the generator matrix for the process. [2]

(iv) Explain what is meant by a Markov jump chain. [1]

(v) Determine the transition matrix for the jump chain associated with the
process. [2]

(vi) Determine the probability that a game ends without the player finding an
extra life. [1]
[Total 10]


24 Subject CT4 April 2015 Question 11

A new disease has been discovered which is transmitted by an airborne


virus. Anyone who contracts the disease suffers a high fever and then in
60% of cases dies within an hour and in 40% of cases recovers. Having
suffered from the disease once, a person builds up antibodies to the disease
and thereafter is immune.

(i) Draw a multiple state diagram illustrating the process, labelling the
states and possible transitions between states. [2]

(ii) Express the likelihood of the process in terms of the transition intensities
and other observable quantities, defining all the terms you use. [4]

(iii) Derive the maximum likelihood estimator of the rate of first time
sickness. [2]

Three years ago medical students visited the island where the disease was
first discovered and found that, of the population of 2,500 people, 860 had
suffered from the disease but recovered. They asked the leaders of the
island to keep records of the occurrence and the outcome of each incidence
of the disease. The students intended to return exactly three years later to
collect the information.

(iv) Derive an expression (in terms of the transition intensities) for the
probability that an islander who has never suffered from the disease will
still be alive in three years’ time. [4]

(v) Set out the information which the students would need when they
returned three years later in order to calculate the rate of sickness from
the disease. [2]
[Total 14]


25 Subject CT4 September 2015 Question 8

(i) Define a Markov jump process. [1]

A company provides phones on contracts under which it is responsible for


repairing or replacing any phones which break down.

When a customer reports a fault with a phone, it is immediately taken to the


company’s repair shop and it is assessed whether it can be fixed (meaning
fixable at reasonable cost). Based on previous experience, it is estimated
that the probability of a phone being fixable is 0.75. If a phone is not fixable
it is discarded and the customer is provided with a new phone.

If a repaired phone breaks again the company, in line with its customer
charter, will not attempt to repair it again, and so discards the phone and
replaces it with a new one.

The status of a phone is to be modelled as a Markov jump process with state


space {Never Broken (NB), Repaired (R), Discarded (D)}.

The company considers the rate at which phones break down to vary
according to whether a phone has previously been repaired as follows:

 Status        | Probability of breakdown in small interval of time dt
 Never Broken  | 0.1 dt + o(dt)
 Repaired      | 0.2 dt + o(dt)

(ii) Draw a transition diagram for the possible transitions between the
states, including the associated transition rates. [2]

Let PNB (t ) , PR (t ) and PD (t ) be the probabilities that a phone is in each


state after time t since it was provided as a new phone.

(iii) Determine Kolmogorov’s forward equations in component form for


PNB (t ) , PR (t ) and PD (t ) . [2]

(iv) Solve the equations in part (iii) to obtain PNB (t ) and PR (t ) . [4]

(v) Calculate the probability that a phone has not been discarded by time t .
[1]
[Total 10]


26 Subject CT4 September 2015 Question 9

Doctors at a health centre are carrying out an investigation to see if obesity


affects the likelihood of dying from heart disease. They propose to use a
model with four states:
1. Obese
2. Not obese
3. Dead due to heart disease
4. Dead due to other causes.

(i) Write down, defining all the terms you use, the likelihood for the
transition intensities. [3]

(ii) Derive the maximum likelihood estimator of the force of mortality from
heart disease for Obese people. [3]

The investigation has followed several thousand people aged 50-59 years
for five years and has the following data:

Waiting time in state Obese (in person-years) 14,392


Waiting time in state Not obese (in person-years) 18,109

Number of deaths due to heart disease for those


persons who are Obese 178

Number of deaths due to heart disease for those


persons who are Not obese 190

Number of deaths due to other causes for those


persons who are Obese 89

Number of deaths due to other causes for those


persons who are Not obese 53

The doctors want to promote healthy living and therefore wish to claim that
Obese people have a much higher chance, statistically, of dying from heart
disease than do people who are Not obese.

(iii) Test whether this claim is true at the 90% confidence level. [5]
[Total 11]


27 Subject CT4 April 2016 Question 7

(i) State the condition needed for a Markov Jump Process to be time
inhomogeneous. [1]

(ii) Describe the principal difficulties in modelling using a Markov Jump


Process with time inhomogeneous rates. [2]

A multi-tasking worker at a children’s nursery observes whether children are


being ‘Good’ or ‘Naughty’ at all times. Her observations suggest that the
probability of a child moving between the two states varies with the time, t ,
since the child arrived at the nursery in the morning. She estimates that the
transition rates are:

From Good to Naughty: 0.2 + 0.04t


From Naughty to Good: 0.4 - 0.04t

where t is measured in hours from the time the child arrived in the morning,
0 ≤ t ≤ 8.

A child is in the ‘Good’ state when he arrives at the nursery at 9am.

(iii) Calculate the probability that the child is Good for all the time up until
time t . [3]

(iv) Calculate the time by which there is at least a 50% chance of the child
having been Naughty at some point. [2]

Let PG (t ) be the probability that the child is Good at time t .

(v) Derive a differential equation just involving PG (t ) which could be used


to determine the probability that the child is Good on leaving the nursery
at 5pm. [2]
[Total 10]


28 Subject CT4 April 2016 Question 9

Orange trees are susceptible to the disease Citrus Greening. There is no


known cure for this disease and, although trees often survive for some time
with the disease, it can ultimately be fatal.

A researcher decides to model the progression of the disease using a


time-homogeneous continuous-time Markov model with the following state
space:

{Healthy (ie not infected with Citrus Greening);


Infected with Citrus Greening;
Dead (caused by Citrus Greening);
Dead (other causes)}.

The researcher chooses to label the transition rate parameters as follows:

 a mortality rate from the Healthy state, μ
 a rate of infection with Citrus Greening, σ
 a total mortality rate from the Infected state, ρ
 a mortality rate caused by Citrus Greening, τ.

(i) Draw a transition diagram for the chosen model, including the transition
rates. [2]

(ii) Determine Kolmogorov’s forward equations governing the transitions,


specifying the generator matrix. [3]

Infected trees display clear symptoms of the disease. This has enabled the
researcher to record the following data on trees in the area of his study:

Tree-months in Healthy State 1,200


Tree-months in Infected State 600
Total number of deaths of trees 40
Number of deaths of Healthy trees 10
Number of deaths from Citrus Greening 30

(iii) Give the likelihood of these data. [3]

(iv) Derive the maximum likelihood estimator of the mortality rate caused by
Citrus Greening, τ. [3]

(v) Estimate τ. [1]


[Total 12]


29 Subject CT4 September 2016 Question 3

Describe the similarities and differences between the following processes:


 Markov chain
 Markov jump chain
 Markov jump process. [4]

30 Subject CT4 September 2016 Question 4

An insurance company’s business consists only of policies covering a


specified event and which pay a sum assured of £ Z immediately on
occurrence of this event. Claims on the portfolio of policies are considered
to occur in accordance with a Poisson process with annual rate λ.

The insurance company currently has assets of £S (£Z > £S). It charges


a premium which is to be 50% more than the expected outgo. Premiums
can be assumed to be received continuously. The insurance company’s
expenses are small and can be ignored.

(i) Derive the total annual premium charged by the insurance company on
the portfolio. [1]

(ii) Show that the probability that the insurance company has insufficient
assets to pay the next claim made is given by:

$1 - \exp\left[-\dfrac{1}{1.5}\left(1 - \dfrac{S}{Z}\right)\right]$ [3]
[Total 4]


31 Subject CT4 September 2016 Question 9

(i) Describe how transition rates can be estimated under multiple state
models with constant transition rates, including a statement of the data
required. [3]

A specialist insurance policy provides cover only for the theft of valuable
items (such as jewellery) stored in safety deposit boxes in a bank vault. The
premium for cover for an item worth £C is paid in advance. If a claim is
made, the cover ceases.

Claims are modelled using a two-state model as follows, where μ is a
constant transition rate:

[Diagram: Active policy → Theft claim, at rate μ]

(ii) Give Kolmogorov’s forward equations for this process. [1]

(iii) Determine the expected cost of claims incurred by time T . [2]

If the item is no longer stored in the safety deposit box (for example, if the
item is sold) then the insurance cover lapses. The transition rate for lapses
of such policies is a constant λ.

(iv) Draw a transition diagram for a revised process allowing for lapses. [2]

(v) Derive the revised expected cost of claims incurred by time T . [3]
[Total 11]


32 Subject CT4 April 2017 Question 11 (part)

Only parts (i)-(iii) of this question are given here. Part (iv) involves exposed
to risk and is included in Booklet 5.

A large company operates a health benefits scheme which pays a sickness


benefit to any employee who is unable to work through ill-health and a death
benefit to any employee who dies.

(i) Draw a transition diagram with three states which could be used to
analyse data from this scheme. [2]

(ii) Give the likelihood of the data, defining all the terms you use. [4]

(iii) Derive the maximum likelihood estimator of the rate of falling sick. [3]
[Total 9]

Subject CT4 September 2017 Question 3 is about a Poisson process and is


included in Booklet 1.

33 Subject CT4 September 2017 Question 7

The following diagram shows the transitions under a Healthy-Sick-Dead


multiple state model under which:
 transition rates are dependent on time, t.

 transitions out of the Sick state are dependent on the duration, Ct , a


person has been in the Sick state as well as on time.


(i) Show from first principles that, if $p_{ij}(x,t)$ is the probability of being in
state j at time t conditional on being in state i at time x, then:

$\dfrac{\partial}{\partial t}p_{HH}(x,t) = p_{HH}(x,t)\left(-\sigma(t) - \mu(t)\right) + p_{HS}(x,t)\,\rho(t, C_t)$ [5]

(ii) Determine the probability that a life is in the Healthy state throughout the
period 0 to t if the life is in the Healthy state at time 0. [2]

(iii) Describe how integrated Kolmogorov equations can be constructed by


conditioning on the first or the last jump, illustrating your answer with a
diagram. [3]

(iv) Explain the difference in approach between deriving forward and


backward integrated Kolmogorov equations. [1]

An actuarial student suggests the following integrated Kolmogorov equation


for this model:

$P\left[X_t = H \mid X_s = S, C_s = w\right] = \int_0^t e^{-\int_0^y \left(\rho(u,\,w-s+u) + \nu(u,\,w-s+u)\right)du}\,\nu(y,\,w-s+y)\,p_{HH}(y,t)\,dy$

(v) Identify TWO errors in this equation. [2]


[Total 13]


SOLUTIONS TO PAST EXAM QUESTIONS

The solutions presented here are just outline solutions for you to use to
check your answers. See ASET for full solutions.

1 Subject CT4 April 2008 Question 10

(i) Why the status of a customer can be considered as a Markov jump


process

There are 2 states, offline and online, so the process has a discrete state
space.

Transitions between states are possible at any time. So the time set is
continuous.

The probability of a customer connecting/disconnecting in the short time


interval [t , t + dt ] depends only on the current state and is independent of
the past history of the process. So the process has the Markov property.

Hence the status of a customer can be considered as a Markov jump


process.

(ii) Kolmogorov’s forward differential equation

The required forward equation is:

$P'_{OFF}(t) = 0.8P_{ON}(t) - 0.2P_{OFF}(t)$

(iii) Solution of the differential equation

First note that, since there are only 2 states:

PON (t ) = 1 - POFF (t )

So we can write the equation in terms of $P_{OFF}(t)$ only:

$P'_{OFF}(t) = 0.8\left(1 - P_{OFF}(t)\right) - 0.2P_{OFF}(t) = 0.8 - P_{OFF}(t)$


We then move all the $P_{OFF}(t)$ terms over to the LHS so that we can use the
integrating factor method:

$P'_{OFF}(t) + P_{OFF}(t) = 0.8$

The integrating factor is $e^t$. Multiplying through by the integrating factor, we get:

$e^t P'_{OFF}(t) + e^t P_{OFF}(t) = 0.8e^t$

Now integrating both sides with respect to t:

$e^t P_{OFF}(t) = 0.8e^t + C$   (*)

where C is a constant of integration.

We are told in the question that the customer was offline at time 0. So:

POFF (0) = 1

Substituting t = 0 into (*) gives:

1 = 0.8 + C

ie:

C = 0.2

So:

$e^t P_{OFF}(t) = 0.8e^t + 0.2$

and:

$P_{OFF}(t) = 0.8 + 0.2e^{-t}$
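
As a quick numerical check of this answer (the sketch below is not part of the original solution and assumes the numpy and scipy packages are available), we can integrate the forward equation in Python and compare it with the closed form:

# Numerical check: integrate P'_OFF(t) = 0.8 - P_OFF(t) with P_OFF(0) = 1
# and compare with the closed form 0.8 + 0.2*exp(-t).
import numpy as np
from scipy.integrate import solve_ivp

ts = np.linspace(0, 10, 101)
numeric = solve_ivp(lambda t, p: 0.8 - p, (0, 10), [1.0], t_eval=ts, rtol=1e-8).y[0]
closed_form = 0.8 + 0.2 * np.exp(-ts)
print(np.max(np.abs(numeric - closed_form)))   # effectively zero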


(iv) Expected proportion of time spent online

Let $I_s$ denote the indicator function such that:

$I_s = \begin{cases} 1 & \text{if the customer is offline at time } s \\ 0 & \text{if the customer is online at time } s \end{cases}$

Then:

$E(I_s) = 1 \times P_{OFF}(s) + 0 \times P_{ON}(s) = 0.8 + 0.2e^{-s}$

and the expected amount of time spent offline over the period [0, t] is:

$\int_0^t E(I_s)\,ds = \int_0^t \left(0.8 + 0.2e^{-s}\right)ds = \left[0.8s - 0.2e^{-s}\right]_0^t = 0.8t - 0.2e^{-t} + 0.2$

The expected amount of time spent online over the period [0, t] is:

$t - \left(0.8t - 0.2e^{-t} + 0.2\right) = 0.2t + 0.2e^{-t} - 0.2$

and the expected proportion of time spent online over the period [0, t] is:

$\dfrac{0.2t + 0.2e^{-t} - 0.2}{t} = \dfrac{0.2\left(t + e^{-t} - 1\right)}{t}$

(v)(a) Graph

To help you draw the graph, think about the series expansion of $e^{-t}$:

$e^{-t} = 1 - t + \dfrac{t^2}{2!} - \dfrac{t^3}{3!} + \cdots$

You should then see that $f(0) = 0$ and $f(t) = 0.2\left(1 + \dfrac{e^{-t}}{t} - \dfrac{1}{t}\right) \to 0.2$ as $t \to \infty$.


The graph of f(t), which represents the expected proportion of time spent
online over the period [0, t], is shown below:

[Graph: f(t) plotted against t for 0 ≤ t ≤ 20, vertical axis from 0 to 0.25; the curve rises from 0 and levels off towards 0.2.]

(v)(b) Explanation of the shape of the graph

The graph starts at 0 as expected, since we’re assuming that the customer is
offline at time 0. As time goes on, the expected proportion of time spent
online increases and tends to 0.2 as $t \to \infty$. Again, this is as expected, since:

$\dfrac{\mu_{01}}{\mu_{01} + \mu_{10}} = \dfrac{0.2}{0.2 + 0.8} = 0.2$

where 0 = offline and 1 = online.


2 Subject CT4 April 2008 Question 11

(i) Transition diagram

The transition diagram is shown below:

[Diagram: 1: Healthy ⇄ 2: Sick, with rates $\mu_x^{12}$ (1→2) and $\mu_x^{21}$ (2→1); 1: Healthy → 3: Dead at rate $\mu_x^{13}$; 2: Sick → 3: Dead at rate $\mu_x^{23}$.]

(ii) Derivation

Consider the interval from time x to time x + t + h, where h is small.
Assuming that the Markov property holds, we have:

${}_{t+h}p_x^{23} = {}_tp_x^{21}\,{}_hp_{x+t}^{13} + {}_tp_x^{22}\,{}_hp_{x+t}^{23} + {}_tp_x^{23}\,{}_hp_{x+t}^{33}$

Since state 3 is the dead state, ${}_hp_{x+t}^{33} = 1$. We now assume that, for $i \ne j$:

${}_hp_{x+t}^{ij} = h\mu_{x+t}^{ij} + o(h)$

So:

${}_{t+h}p_x^{23} = {}_tp_x^{21}\left(h\mu_{x+t}^{13} + o(h)\right) + {}_tp_x^{22}\left(h\mu_{x+t}^{23} + o(h)\right) + {}_tp_x^{23}$

Rearranging gives:

$\dfrac{{}_{t+h}p_x^{23} - {}_tp_x^{23}}{h} = {}_tp_x^{21}\mu_{x+t}^{13} + {}_tp_x^{22}\mu_{x+t}^{23} + \dfrac{o(h)}{h}$


Letting $h \to 0$, we get:

$\dfrac{\partial}{\partial t}\,{}_tp_x^{23} = {}_tp_x^{21}\mu_{x+t}^{13} + {}_tp_x^{22}\mu_{x+t}^{23}$

(iii) Likelihood

The likelihood function is:

$L = C\,e^{-t_1\left(\mu^{12} + \mu^{13}\right)}\,e^{-t_2\left(\mu^{21} + \mu^{23}\right)}\left(\mu^{12}\right)^{n_{12}}\left(\mu^{21}\right)^{n_{21}}\left(\mu^{13}\right)^{n_{13}}\left(\mu^{23}\right)^{n_{23}}$

where:

$C$ is a constant

$t_i$ is the total observed waiting time in state i

$n_{ij}$ is the observed number of transitions from state i to state j

(iv) Maximum likelihood estimator of $\mu^{23}$

The log-likelihood is:

$\ln L = \ln C - t_1\left(\mu^{12} + \mu^{13}\right) - t_2\left(\mu^{21} + \mu^{23}\right) + n_{12}\ln\mu^{12} + n_{21}\ln\mu^{21} + n_{13}\ln\mu^{13} + n_{23}\ln\mu^{23}$

Differentiating with respect to $\mu^{23}$ gives:

$\dfrac{\partial \ln L}{\partial \mu^{23}} = -t_2 + \dfrac{n_{23}}{\mu^{23}}$

Setting the derivative equal to 0, we obtain:

$\hat{\mu}^{23} = \dfrac{n_{23}}{t_2}$


Checking the second derivative:

$\dfrac{\partial^2 \ln L}{\partial (\mu^{23})^2} = -\dfrac{n_{23}}{(\mu^{23})^2} < 0 \;\Rightarrow\; \text{maximum}$

So the maximum likelihood estimate of $\mu^{23}$ is $\hat{\mu}^{23} = \dfrac{n_{23}}{t_2}$, and the
corresponding maximum likelihood estimator is $\tilde{\mu}^{23} = \dfrac{N_{23}}{T_2}$, where $N_{23}$ is
the random variable representing the number of transitions from sick to dead and $T_2$ is
the random variable representing the waiting time in the sick state.

(v)(a) Estimated value of $\mu^{23}$

From the given data, we have:

$\hat{\mu}^{23} = \dfrac{n_{23}}{t_2} = \dfrac{40}{140} = \dfrac{2}{7} = 0.28571$

(v)(b) 95% confidence interval for $\mu^{23}$

A 95% confidence interval for $\mu^{23}$ is given by:

$\hat{\mu}^{23} \pm 1.96\sqrt{\widehat{\mathrm{var}}\left(\tilde{\mu}^{23}\right)}$

The variance is estimated by:

$\dfrac{\left(\hat{\mu}^{23}\right)^2}{n_{23}} = \dfrac{(2/7)^2}{40} = \dfrac{1}{490} = 0.0020408$

Hence a 95% confidence interval for $\mu^{23}$ is:

$\dfrac{2}{7} \pm 1.96\sqrt{\dfrac{1}{490}} = (0.19717,\ 0.37426)$
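
These figures can be reproduced with a minimal Python sketch (assuming numpy is available; the data values are those given in the question):

# Point estimate and approximate 95% confidence interval for mu^23.
import numpy as np

n23, t2 = 40, 140                  # transitions sick -> dead, waiting time in the sick state
mu_hat = n23 / t2                  # maximum likelihood estimate, 2/7
se = np.sqrt(mu_hat ** 2 / n23)    # estimated standard error
print(mu_hat, (mu_hat - 1.96 * se, mu_hat + 1.96 * se))   # 0.28571, (0.19717, 0.37426)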


3 Subject CT4 September 2008 Question 4

(i) Test to see if July 1637 had an unusually high number of births

First we can look at the single observation of 5 births in July 1637. This
represents an increment of the process, and so we need to test the following
hypotheses:
H0 : births per month follow a Poisson distribution with parameter 1.5
H1 : births per month do not follow a Poisson distribution with parameter 1.5

Then, according to H0 :

$\Pr(X \ge 5) = 1 - \Pr(X \le 4) = 1 - 0.98142 = 0.01858$

This value is sufficiently small (considerably less than 5%) for us to reject
H0 . Therefore, on the basis of this one observation in isolation, we would
conclude that the July 1637 value is atypically large.

However, this ignores the other information that we have, ie that there are 11
other increments all of smaller value than this. So we can see that 8% (one
in twelve) of our observed increments have values of 5 or more, compared
with an expected frequency of 2%, which makes the observation much more
reasonable.

We can also examine whether the data as a whole conform to sampling from
a Poisson distribution with mean 1.5 per month, using a chi-squared
goodness-of-fit test. To do this, we would need to group the data, so that
the expected number in each cell was at least 5, so as to make the test
valid. We obtain the following, according to H0 :

Group Actual Expected


Jan-Apr 6 6
May-Aug 9 6
Sep-Dec 5 6
Total 20 18

The chi-squared statistic is:

$\dfrac{(6-6)^2}{6} + \dfrac{(9-6)^2}{6} + \dfrac{(5-6)^2}{6} = 1.667$


From the Tables we find that:

$P\left(\chi_3^2 \ge 1.869\right) = 0.60$ and $P\left(\chi_3^2 \ge 1.424\right) = 0.70$

As our value is between 1.424 and 1.869, then there is more than a 60%
probability of obtaining data like these according to H0 , so there is very little
evidence in support of rejecting this hypothesis. Therefore, in the context of
the year’s experience as a whole, the July 1637 observation does seem
perfectly reasonable.
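
If you want to reproduce the chi-squared figures above, a short Python sketch (assuming scipy is available, and following the solution in using 3 degrees of freedom) is:

# Chi-squared goodness-of-fit calculation for the grouped birth data.
from scipy.stats import chi2

observed = [6, 9, 5]                  # Jan-Apr, May-Aug, Sep-Dec
expected = [6, 6, 6]                  # 4 months x 1.5 births per month
statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2.sf(statistic, df=3)    # 3 degrees of freedom, as in the solution
print(statistic, p_value)             # 1.667 and a p-value of roughly 0.64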

(ii) Do births follow a Poisson process?

One requirement for a Poisson process is that no more than one event can
occur at a single moment in time: however, multiple births do occur, which
invalidates this assumption.

For births to follow a (time-homogeneous) Poisson process, they would also


have to occur uniformly over time, ie they would have to occur at a constant
(expected) rate over time.

There may be a number of reasons, especially in the past, why conception


(and hence births) would not necessarily occur uniformly over calendar time,
in which case the process would not be time-homogeneous.

There may be seasonal reasons for this (ie differences according to month
of year, which may be largely related to climate), and also there may be
different birth rates by calendar year (eg as a result of significant
demographic changes caused by epidemics, food shortages, war, etc, or by
progressive changes over time, such as from medical improvements to
survival rates feeding back into lower fertility rates, etc).

It may, however, be possible to model births as a time-inhomogeneous


Poisson process.

A larger sample than the one here would be necessary in order to test these
possibilities.


4 Subject CT4 September 2008 Question 9

(i) State space and time space

The state space consists entirely of discrete states.

Transitions occur at any point in continuous time both before and after
age 65.

However, at exact age 65, all lives in any state other than dead transfer into
the normal retired state. This occurs at a fixed point in time, and therefore is
a discrete time transition.

The process is therefore defined as being of mixed type, having a


combination of continuous and discrete time transitions.

(ii) Transition diagram

[Transition diagram: arrows between the states, marked with zero to three cross-lines as described below.]

 The transition with no cross-line can occur at any age.


 Transitions with one cross-line only occur before age 65.
 Transitions with two cross-lines occur only at exact age 65.
 The transition with three cross-lines occurs only after age 65.

It is not clear from the question whether persons in State 2 will be allowed to
start an ill-health pension before age 65 should they become sick, as we are
not informed of the scheme rules in this regard. We have assumed that this
is possible, and so transition from State 2 to State 3 is included in the above
diagram and is allowed for as appropriate in all the formulae that follow.


(iii) Equations for the evolution of transition probabilities from State 1

(a) x + t < 65

$\dfrac{\partial}{\partial t}\,{}_tp_x^{11} = -\,{}_tp_x^{11}\left(\mu_{x+t}^{12} + \mu_{x+t}^{13} + \mu_{x+t}^{15}\right)$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{12} = {}_tp_x^{11}\mu_{x+t}^{12} - {}_tp_x^{12}\left(\mu_{x+t}^{23} + \mu_{x+t}^{25}\right)$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{13} = {}_tp_x^{11}\mu_{x+t}^{13} + {}_tp_x^{12}\mu_{x+t}^{23} - {}_tp_x^{13}\mu_{x+t}^{35}$

${}_tp_x^{14} = 0$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{15} = {}_tp_x^{11}\mu_{x+t}^{15} + {}_tp_x^{12}\mu_{x+t}^{25} + {}_tp_x^{13}\mu_{x+t}^{35}$

(b) x + t = 65

${}_tp_x^{11} = {}_tp_x^{12} = 0$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{13} = -\,{}_tp_x^{13}\mu_{x+t}^{35}$

${}_tp_x^{14} = {}_{t^-}p_x^{11} + {}_{t^-}p_x^{12}$   ($x + t^-$ indicates the instant immediately preceding age 65)

$\dfrac{\partial}{\partial t}\,{}_tp_x^{15} = {}_tp_x^{13}\mu_{x+t}^{35} + {}_tp_x^{14}\mu_{x+t}^{45}$

(c) x + t > 65 (and x < 65)

${}_tp_x^{11} = {}_tp_x^{12} = 0$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{13} = -\,{}_tp_x^{13}\mu_{x+t}^{35}$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{14} = -\,{}_tp_x^{14}\mu_{x+t}^{45}$

$\dfrac{\partial}{\partial t}\,{}_tp_x^{15} = {}_tp_x^{13}\mu_{x+t}^{35} + {}_tp_x^{14}\mu_{x+t}^{45}$

Where $x \ge 65$, ${}_tp_x^{1i} = 0$ for all $i$.
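
These forward equations cannot be solved explicitly without specifying the age-dependent transition intensities. Purely as an illustration of how a system of the form dP/dt = P(t)A(t) can be integrated numerically, the Python sketch below uses invented constant intensities (they are not taken from the question); the discrete transfer into State 4 at exact age 65 would have to be applied separately at that time point:

# Illustrative only: numerically integrating forward equations dP/dt = P(t) A(t)
# for a 5-state model. The constant intensities below are made up for the example.
import numpy as np
from scipy.integrate import solve_ivp

def generator(t):
    A = np.zeros((5, 5))
    A[0, 1], A[0, 2], A[0, 4] = 0.05, 0.02, 0.01   # state 1 -> states 2, 3, 5
    A[1, 2], A[1, 4] = 0.03, 0.01                  # state 2 -> states 3, 5
    A[2, 4] = 0.04                                 # state 3 -> state 5
    A[3, 4] = 0.04                                 # state 4 -> state 5
    for i in range(5):
        A[i, i] = -A[i].sum()                      # each row of a generator sums to zero
    return A

def forward(t, p_flat):
    P = p_flat.reshape(5, 5)
    return (P @ generator(t)).reshape(-1)

P0 = np.eye(5).reshape(-1)                         # P(0) is the identity matrix
sol = solve_ivp(forward, (0, 10), P0, t_eval=[10])
print(sol.y[:, -1].reshape(5, 5).round(4))         # transition probabilities over 10 years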


5 Subject CT4 April 2009 Question 8

(i) Number of possible pairings

For the possibility of a new infection to arise, an infected cat would have to
meet with an uninfected cat. There are $\binom{x}{1} = x$ ways of choosing one cat
out of the x infected cats and $\binom{10-x}{1} = 10 - x$ ways of choosing one
uninfected cat out of the 10 − x uninfected cats. So the number of possible
pairings that could result in a new infection is $x(10 - x)$.

(ii) How the number of infected cats can be formulated as a Markov jump
process

A Markov jump process is a stochastic process with a continuous time set


and a discrete state space for which the Markov property is satisfied.

X (t ) is the number of infected cats at any time t . So we have a continuous


time set.

(ii)(a) State space

The state space is {1, 2,...,10} , assuming that we have to start with at least
one infected cat. So the state space is discrete.

The Markov property holds because meetings occur according to a Poisson


process and the probability that any given meeting results in a new infection
depends only on the number of cats that currently have fleas and not on the
history of the process.

Hence X (t ) is a Markov jump process.

(ii)(b) Kolmogorov differential equations

The matrix form of the Kolmogorov forward differential equations is:

$\dfrac{d}{dt}P(t) = P(t)\,A$

where $P(t)$ is the matrix of transition probabilities, $p_{ij}(t)$, and $A$ is the
matrix of transition rates, $\mu_{ij}$.


For this process, we have:

$\mu_{ij} = \begin{cases} \dfrac{i(10-i)}{90}\mu & \text{for } j = i + 1,\; i = 1, 2, \ldots, 9 \\[2mm] -\dfrac{i(10-i)}{90}\mu & \text{for } j = i,\; i = 1, 2, \ldots, 9 \\[2mm] 0 & \text{otherwise} \end{cases}$

Alternatively, you could have given the backward differential equation:

$\dfrac{d}{dt}P(t) = A\,P(t)$

(iii) Distribution of the holding times

For i = 1, 2, ..., 9, the holding time in State i is exponentially distributed with
parameter $\dfrac{i(10-i)}{90}\mu$. The holding time in State 10 is infinite.

(iv) Expected time until all the cats have fleas

Let $T_i$ denote the holding time in State i, i = 1, 2, ..., 9. Then the expected
time until all the cats have fleas, starting from a single infected cat, is:

$E(T_1) + E(T_2) + \cdots + E(T_9) = \dfrac{90}{\mu}\left(\dfrac{1}{9} + \dfrac{1}{16} + \dfrac{1}{21} + \dfrac{1}{24} + \dfrac{1}{25} + \dfrac{1}{24} + \dfrac{1}{21} + \dfrac{1}{16} + \dfrac{1}{9}\right) = \dfrac{50.92}{\mu}$
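
A one-line Python check of the bracketed sum (for illustration only):

# Check of the sum: 90 * sum over i = 1..9 of 1/(i*(10-i)).
print(90 * sum(1 / (i * (10 - i)) for i in range(1, 10)))   # about 50.92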


6 Subject CT4 April 2009 Question 11

(i) Derivation of differential equation

Consider a time interval of length t + h that starts at age x. By the Markov
assumption, and conditioning on the state occupied at time t:

${}_{t+h}p_x^{12} = {}_tp_x^{11}\,{}_hp_{x+t}^{12} + {}_tp_x^{12}\,{}_hp_{x+t}^{22}$

Since it is impossible to leave State 2:

${}_hp_{x+t}^{22} = 1$

So we have:

${}_{t+h}p_x^{12} = {}_tp_x^{11}\,{}_hp_{x+t}^{12} + {}_tp_x^{12}$

Now, for small h, we assume that:

${}_hp_{x+t}^{12} = h\mu_{x+t}^{12} + o(h)$

So:

${}_{t+h}p_x^{12} = {}_tp_x^{11}\,h\mu_{x+t}^{12} + {}_tp_x^{12} + o(h)$

We can rearrange this to get:

$\dfrac{{}_{t+h}p_x^{12} - {}_tp_x^{12}}{h} = {}_tp_x^{11}\mu_{x+t}^{12} + \dfrac{o(h)}{h}$

Then, taking the limit as $h \to 0$ gives:

$\dfrac{\partial}{\partial t}\,{}_tp_x^{12} = {}_tp_x^{11}\mu_{x+t}^{12}$

since $\lim_{h \to 0} \dfrac{o(h)}{h} = 0$.


(ii)(a) Likelihood function

The likelihood function is:

$L = C\,e^{-t_1\left(\mu^{12} + \mu^{13} + \mu^{14}\right)}\left(\mu^{12}\right)^{n_{12}}\left(\mu^{13}\right)^{n_{13}}\left(\mu^{14}\right)^{n_{14}}$

where:
 $C$ is a constant
 $t_1$ is the total observed waiting time in State 1 for all the lives in the investigation
 $n_{1j}$ is the total number of transitions from State 1 to State j, j = 2, 3, 4, observed from the lives in the investigation.

(ii)(b) Derivation of maximum likelihood estimator

The log-likelihood function is:

$\ln L = \ln C - t_1\left(\mu^{12} + \mu^{13} + \mu^{14}\right) + n_{12}\ln\mu^{12} + n_{13}\ln\mu^{13} + n_{14}\ln\mu^{14}$

Differentiating with respect to $\mu^{12}$ gives:

$\dfrac{\partial}{\partial \mu^{12}}\ln L = -t_1 + \dfrac{n_{12}}{\mu^{12}}$

Setting this equal to 0 gives:

$\mu^{12} = \dfrac{n_{12}}{t_1}$

So the maximum likelihood estimate of $\mu^{12}$ is $\hat{\mu}^{12} = \dfrac{n_{12}}{t_1}$ and the
corresponding maximum likelihood estimator is $\tilde{\mu}^{12} = \dfrac{N_{12}}{T_1}$, where $N_{12}$ is
the random variable representing the number of transitions from State 1 to
State 2 and $T_1$ is the random variable representing the waiting time in
State 1.


(iii)(a) Calculation of maximum likelihood estimate

The maximum likelihood estimate of the force of mortality from heart disease
at age 64 last birthday is:

$\dfrac{34}{1{,}065} = 0.031925$

(iii)(b) Approximate 95% confidence interval

Asymptotically, the variance of the maximum likelihood estimator $\tilde{\mu}^{12}$
is $\dfrac{\mu^{12}}{E(T_1)}$. This can be approximated by:

$\dfrac{\hat{\mu}^{12}}{t_1} = \dfrac{34}{1{,}065^2} = 2.9976 \times 10^{-5}$

Alternatively, the variance can be calculated using the Cramér-Rao lower
bound, which is given on Page 23 of the Tables. We approximate the CRLB by:

$\dfrac{\left(\hat{\mu}^{12}\right)^2}{n_{12}} = \dfrac{(34/1{,}065)^2}{34} = \dfrac{34}{1{,}065^2}$

Since maximum likelihood estimators are asymptotically normally distributed,
an approximate 95% confidence interval for $\mu^{12}$ is:

$\hat{\mu}^{12} \pm 1.96\sqrt{\widehat{\mathrm{var}}\left(\tilde{\mu}^{12}\right)} = \dfrac{34}{1{,}065} \pm 1.96\sqrt{\dfrac{34}{1{,}065^2}} = (0.0212,\ 0.0427)$

(iv) Analysing the impact of risk factors

To analyse the impact of different risk factors on the death rate from heart
disease, we would need to subdivide our observed lives into homogeneous
groups, ie groups that have the same characteristics. We would then
estimate the transition rates for each group separately. We could compare
the estimated death rate from heart disease for the different groups to
assess the impact of risk factors on this mortality rate.


However, if we subdivide our group, we end up with smaller sample sizes


and our results are less reliable. One way to get round this problem is to
use a Cox regression model where the covariates are the risk factors that we
are investigating and death from heart disease is the decrement of interest.
Deaths from other causes would be treated as censored observations in this
Cox model.

7 Subject CT4 September 2009 Question 6

(i) Poisson process model

A Poisson process with rate λ is a continuous-time, integer-valued process
N(t), t ≥ 0, with the following properties:

1. N(0) = 0

2. N(t) has independent increments

3. N(t) has Poisson distributed stationary increments, ie:

   $P\left[N(t) - N(s) = n\right] = \dfrac{e^{-\lambda(t-s)}\left[\lambda(t-s)\right]^n}{n!}$

   for $0 \le s < t$ and n = 0, 1, 2, ... .

(ii) Average number of person-days’ work

The expected number of complaints arriving in one day is 1.25. So the


expected number of complaints arriving in 5 days is:

5 ¥ 1.25 = 6.25

For the 3 types of complaints we have the following information:

 Type of complaint                        | Straightforward | Medium | Complicated
 Probability                              |       0.6       |  0.3   |     0.1
 Expected response time (in person-days)  |       1/2       |   1    |      4


So the expected response time for a single complaint is:

$0.6 \times \tfrac{1}{2} + 0.3 \times 1 + 0.1 \times 4 = 1$ person-day

and the expected number of person-days’ work generated by complaints
arriving during a 5-day working week is:

$6.25 \times 1 = 6.25$ person-days
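
The same arithmetic as a small Python sketch, should you want to vary the assumptions:

# Expected person-days of work from complaints arriving in a 5-day week.
arrival_rate, days = 1.25, 5
probs = [0.6, 0.3, 0.1]                  # straightforward, medium, complicated
mean_times = [1 / 2, 1 / 1, 1 / 0.25]    # means of the exponential response times
per_complaint = sum(p * m for p, m in zip(probs, mean_times))
print(arrival_rate * days * per_complaint)   # 6.25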

(iii) State space

The number of outstanding complaints of each type must be a non-negative


integer. Since the response time depends on the type of complaint, the
state space must be of the form (s, m, c ) , where each of s , m and c takes
one of the values 0, 1, 2, ... . The values of s , m and c represent the
number of outstanding complaints of types straightforward, medium and
complicated, respectively.

(iv) Appropriateness of the model

If the company wants to make sure that it responds to each complaint within
a certain number of days, the state space of the model needs to include a
duration component, which measures the time since each complaint arrived.
The model in part (iii) does not include this feature and so it is not
appropriate for this task.

Other reasons that the model is inappropriate include the following:


 The model assumes that complaints arrive at a constant rate of 1.25 per
working day. In practice, it is likely that the arrival rate varies over time.
For example, the company might be affected by external events such as
a postal strike. So a time-inhomogeneous model might be preferable.
 The model assumes that the process has independent increments.
However, if the company experiences a quality control issue, there may
be a cluster of complaints all relating to the same issue.
 The probabilities of being straightforward, medium and complicated may
be different from those stated.
 The response times are unlikely to be exponentially distributed, because
the mode of an exponential distribution is 0, and this is not going to be
the most common response time. Also, the expected response time for
each category may be different to those stated.


 The model makes no allowance for the employees taking holiday or


sickness leave.
 The model takes no account of taking on extra staff.
 Experience of staff is likely to be a factor. If a staff member has dealt
with a similar complaint in the past, he is likely to take less time to deal
with the current one. So the memoryless property of the exponential
distribution is unlikely to be realistic in this context.

8 Subject CT4 September 2009 Question 8

(i) Transition diagram

The transition diagram is:

[Diagram: Healthy → Infected at rate σ(x); Healthy → Dead (not from disease) at rate μ(x); Infected → Dead (from disease) at rate ρ(x); Infected → Dead (not from disease) at rate ν(x) − ρ(x).]


(ii) Kolmogorov forward equations

The matrix form of the Kolmogorov forward differential equations is:

$\dfrac{\partial}{\partial t}P(s,t) = P(s,t)\,A(t)$

where $P(s,t)$ is the matrix of transition probabilities over the interval $(s,t)$ and:

$A(t) = \begin{pmatrix} -(\sigma(t)+\mu(t)) & \sigma(t) & 0 & \mu(t) \\ 0 & -\nu(t) & \rho(t) & \nu(t)-\rho(t) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

$A(t)$ is the generator matrix at time t, assuming that the states are in the
order given in the question, ie Healthy, Infected, Dead (from disease), Dead
(not from disease).

(iii)(a) Integral expression for $P_{HH}(x, x+t)$

An integral expression for $P_{HH}(x, x+t)$ is:

$P_{HH}(x, x+t) = \exp\left[-\int_0^t \left(\sigma(x+s) + \mu(x+s)\right)ds\right]$

(iii)(b) Integral expression for $P_{HI}(x, x+t)$

An integral expression for $P_{HI}(x, x+t)$ is:

$P_{HI}(x, x+t) = \int_0^t P_{HH}(x, x+s)\,\sigma(x+s)\,P_{II}(x+s, x+t)\,ds$

where:

$P_{II}(x+s, x+t) = \exp\left[-\int_s^t \nu(x+u)\,du\right]$


(iii)(c) Integral expression for $P_{HD(\text{from disease})}(x, x+t)$

The backward integral equation for $P_{HD(\text{from disease})}(x, x+t)$ is:

$P_{HD(\text{from disease})}(x, x+t) = \int_0^t P_{HH}(x, x+s)\,\sigma(x+s)\,P_{ID(\text{from disease})}(x+s, x+t)\,ds$

where:

$P_{ID(\text{from disease})}(x+s, x+t) = \int_s^t P_{II}(x+s, x+u)\,\rho(x+u)\,du$

Alternatively, we could have given the forward integral equation:

$P_{HD(\text{from disease})}(x, x+t) = \int_0^t P_{HI}(x, x+s)\,\rho(x+s)\,ds$

9 Subject CT4 April 2010 Question 7

(i) Sketch a diagram

Working from the description given, we see that the model looks like this:

[Diagram: Learner (L) → Restricted (R) at rate σ; R → L at rate ρ; L → Qualified (Q) at rate μ; R → Q at rate ν.]

Here ‘L’ denotes a learner, ‘R’ denotes a restricted driver and ‘Q’ denotes a
qualified driver.


(ii) Likelihood

By analogy with the HSD model (with the H, S and D states replaced with
L, R and Q), we can label the transition rates using the usual Greek letters.
The likelihood function is then:

$L(\sigma, \rho, \mu, \nu) = K\,e^{-(\sigma+\mu)T_L} \times e^{-(\rho+\nu)T_R} \times \sigma^{n_{LR}} \times \rho^{n_{RL}} \times \mu^{n_{LQ}} \times \nu^{n_{RQ}}$

where:
 $T_i$ is the total number of person-months spent in state i (i = L or R)
 $n_{ij}$ is the total number of transitions from state i to state j for the permitted transitions
 $K$ is a constant of proportionality.

(iii)(a) Derive the maximum likelihood estimator

The log-likelihood function is:

$\log L(\sigma, \rho, \mu, \nu) = -(\sigma+\mu)T_L - (\rho+\nu)T_R + n_{LR}\log\sigma + n_{RL}\log\rho + n_{LQ}\log\mu + n_{RQ}\log\nu + \log K$

Differentiating with respect to the parameter $\rho$ and equating to zero to find a maximum gives:

$\dfrac{\partial}{\partial \rho}\log L = -T_R + \dfrac{n_{RL}}{\rho} = 0$

So the MLE of $\rho$ is:

$\hat{\rho} = \dfrac{n_{RL}}{T_R}$

We can confirm that this does give a maximum value (as a function of the
parameter $\rho$) by showing that the second derivative is negative:

$\dfrac{\partial^2}{\partial \rho^2}\log L = -\dfrac{n_{RL}}{\rho^2} < 0$  (provided that $n_{RL} > 0$)


(iii)(b) Estimating the transition rate

The estimate of the transition rate from Restricted to Learner is the MLE
of $\rho$, which equals:

$\hat{\rho} = \dfrac{230}{1{,}940} = 0.1186$

10 Subject CT4 April 2010 Question 11

(i) Time-homogeneous?

The transition rates for this process take constant values. So, in general, the
probability that the process will be in state j at time t , given that it is in
state i at time s , depends only on the length of the time interval t - s .

However, there is an issue with the suspended state. This has a


discontinuity when the policy has been suspended for one year because the
transition rate for reinstatement then suddenly drops to zero.

So, for calculations that do not involve the suspended state, for example
calculating the expected time until the first catastrophic event, it would be
appropriate to model the situation using a time-homogeneous approach.
However, for calculations where the suspended state may be involved, we
need to allow for the sudden change in the rate.

Additional clarification:

The situation described here is really ‘duration-inhomogeneous’, as the


transition rate in the suspended state depends on how long the policy has
been in that state. It’s unclear whether this counts as time-homogeneous or
time-inhomogeneous.

To illustrate the point, suppose we let pSS (s, t ) denote the probability that
the policy remains in the suspended state until time t , given that it was in
the suspended state at time s .

Now consider the probabilities pSS (0.25,0.75) and pSS (0.75,1.25) , which
both have t - s = 0.5 , so they are both probabilities of remaining suspended
for another 6 months.


Because 0.75 < 1, for the first probability, the one-year limit cannot have
taken effect yet, so $p_{SS}(0.25, 0.75)$ would equal $e^{-0.05 \times 0.5}$, ie $e^{-0.05(t-s)}$.

The second probability $p_{SS}(0.75, 1.25)$ would equal $e^{-0.05 \times 0.5}$, ie
$e^{-0.05(t-s)}$, provided the policy became suspended after time 0.25 (and so
we haven’t hit the one-year limit yet), but it would equal zero if it became
suspended before time 0.25 (because time 1.25 is more than one year later).
So, even when we know the values of s and t , the second probability is not
fully defined without knowing when the policy became suspended (or
equivalently, the duration since suspension occurred).

The Examiners’ Report gives this numerical illustration and concludes that
the model is time-inhomogeneous because some probabilities do not just
depend on the value of t - s .

Note, however, that the question just asks which approach would be more
appropriate for modelling the situation, which makes the question more
open-ended. So, provided you argued your case satisfactorily, you should
have picked up some marks whether you said it was time-homogeneous or
time-inhomogeneous.

(ii)(a) Why a 3-state model would not be Markov

This model would not be Markov because risks that are in the Cover In
Force state include both risks that have not experienced a catastrophic
event and those that have, but were subsequently reinstated. The future
probabilities for these two types are not the same, since the first type still has
the option for future reinstatement, whereas the second type does not.

(ii)(b) Expanded model

To preserve the Markov property, we need to split the Cover In Force state
into the two different types:

– C = Cover in force and no catastrophic event has occurred

– R = Cover in force as a result of a previous reinstatement of cover.


(iii) Sketch the transition diagram

If we label the suspended state as S and the lapsed state as L, the model
looks like this:

[Transition diagram: states C and R (cover in force), S (suspended) and L (lapsed). The rate from C to S is 0.1, the rate from S to R is 0.05 (applicable only up to duration 1 in the suspended state), and there is a rate of 0.1 out of state R.]

(iv) Derive the probability (cover in force)

The probability of remaining in state C continuously between time 0 and time $t$ is equal to:

$$p_{CC}(t) = \exp\left(\int_0^t \mu_{CC}\,ds\right) = \exp\left(\int_0^t (-0.1)\,ds\right) = e^{-0.1t}$$

(v) Derive the probability (suspended)

This situation will arise if a catastrophic event occurred at some point during
the time period (t - 1, t ) and the risk has not been reinstated by time t .

[Timeline: the risk remains in state C from time 0 until the catastrophic event occurs at time s, where t − 1 ≤ s < t, and then remains in state S from time s until time t.]

Suppose the catastrophic event occurred during the short time interval
(s, s + h ) , where t - 1 £ s < t .


The sequence of events required to create this situation must have been:
 the risk remained in state C continuously until time s

 during the interval (s, s + h ) the risk ‘jumped’ to state S

 the risk then remained in state S continuously until time t .

We can write this probability as an integral:

$$\int_{t-1}^{t} p_{CC}(s) \times 0.1 \times p_{SS}(t-s)\,ds$$

where $p_{CC}(s)$ is the probability of remaining in state C until time $s$, the factor 0.1 corresponds to the jump to state S during the short period $(s, s+h)$, and $p_{SS}(t-s)$ is the probability of remaining in state S until time $t$.

From part (iv), we know that pCC (s ) = e -0.1s .

Using a similar method, we see that pSS (t - s ) = e -0.05(t - s ) .

So the integral expression becomes:

$$0.1\int_{t-1}^{t} e^{-0.1s}\,e^{-0.05(t-s)}\,ds = 0.1e^{-0.05t}\int_{t-1}^{t} e^{-0.05s}\,ds = 0.1e^{-0.05t}\left[-\frac{e^{-0.05s}}{0.05}\right]_{t-1}^{t}$$

$$= 2e^{-0.05t}\left[e^{-0.05(t-1)} - e^{-0.05t}\right] = 2e^{-0.1t}\left[e^{0.05} - 1\right] = 0.1025e^{-0.1t}$$
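As a numerical check, the integral can be evaluated by quadrature and compared with the closed form. This is an illustrative sketch using SciPy; the value t = 3 is an arbitrary choice (any t ≥ 1 works).

```python
from scipy.integrate import quad
import numpy as np

t = 3.0  # any time t >= 1, so the full one-year window applies

# integrand: remain in C until s, jump to S, then remain in S until t
integrand = lambda s: np.exp(-0.1 * s) * 0.1 * np.exp(-0.05 * (t - s))

numeric, _ = quad(integrand, t - 1, t)
closed_form = 2 * np.exp(-0.1 * t) * (np.exp(0.05) - 1)   # = 0.1025 e^{-0.1t}

print(numeric, closed_form)   # both approximately 0.0760 for t = 3
```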


11 Subject CT4 September 2010 Question 10

(i) Diagram

Widowed

Never married Married

Divorced

(ii) Kolmogorov differential equation for first marriages

This is first marriage so that we are considering the simplified model:

Widowed

Never married Married

Divorced

 N = never married
 M = married
 W = widowed
 D = divorced
ij
 t px = probability that a woman is in state j at age x + t given that she
is in state i at age x .
ij
 for i π j and small h , h px + t = h m xij + t + o(h )


Working from age $x$ to age $x+t+h$, using the Markov assumption, where $h$ is small:

$$_{t+h}p_x^{NM} = {}_tp_x^{NN}\,{}_hp_{x+t}^{NM} + {}_tp_x^{NM}\,{}_hp_{x+t}^{MM} + {}_tp_x^{NW}\,{}_hp_{x+t}^{WM} + {}_tp_x^{ND}\,{}_hp_{x+t}^{DM}$$
$$= {}_tp_x^{NN}\,{}_hp_{x+t}^{NM} + {}_tp_x^{NM}\,{}_hp_{x+t}^{MM}$$
$$= {}_tp_x^{NN}\,{}_hp_{x+t}^{NM} + {}_tp_x^{NM}\left(1 - {}_hp_{x+t}^{MW} - {}_hp_{x+t}^{MD}\right)$$
$$= {}_tp_x^{NN}\,h\mu_{x+t}^{NM} + {}_tp_x^{NM}\left(1 - h\mu_{x+t}^{MW} - h\mu_{x+t}^{MD}\right) + o(h)$$

Rearranging gives:

$$\frac{{}_{t+h}p_x^{NM} - {}_tp_x^{NM}}{h} = {}_tp_x^{NN}\,\mu_{x+t}^{NM} - {}_tp_x^{NM}\left(\mu_{x+t}^{MW} + \mu_{x+t}^{MD}\right) + \frac{o(h)}{h}$$

Since $\dfrac{o(h)}{h} \to 0$ as $h \to 0$, we have:

$$\frac{\partial}{\partial t}\,{}_tp_x^{NM} = {}_tp_x^{NN}\,\mu_{x+t}^{NM} - {}_tp_x^{NM}\left(\mu_{x+t}^{MW} + \mu_{x+t}^{MD}\right)$$
(iii) Likelihood function

$$L = C\,e^{-t_N\mu^{NM}}\,e^{-t_M(\mu^{MW} + \mu^{MD})}\,e^{-t_W\mu^{WM}}\,e^{-t_D\mu^{DM}} \times \left(\mu^{NM}\right)^{n_{NM}}\left(\mu^{MW}\right)^{n_{MW}}\left(\mu^{MD}\right)^{n_{MD}}\left(\mu^{WM}\right)^{n_{WM}}\left(\mu^{DM}\right)^{n_{DM}}$$

where:

 C is a constant

 ti is the total observed waiting time in state i

 m ij is the constant force of transition from state i to state j

 nij is the observed number of transitions from state i to state j .


(iv) Maximum likelihood estimator

The log-likelihood function is:

$$\ln L = \ln C - t_N\mu^{NM} - t_M(\mu^{MW} + \mu^{MD}) - t_W\mu^{WM} - t_D\mu^{DM} + n_{NM}\ln\mu^{NM} + n_{MW}\ln\mu^{MW} + n_{MD}\ln\mu^{MD} + n_{WM}\ln\mu^{WM} + n_{DM}\ln\mu^{DM}$$

Differentiating this with respect to $\mu^{NM}$ and setting the derivative equal to 0 gives:

$$\mu^{NM} = \frac{n_{NM}}{t_N}$$

Differentiating twice gives:

$$\frac{\partial^2 \ln L}{\partial\left(\mu^{NM}\right)^2} = -\frac{n_{NM}}{\left(\mu^{NM}\right)^2} < 0 \;\Rightarrow\; \text{maximum}$$

So the MLE is $\hat{\mu}^{NM} = \dfrac{n_{NM}}{t_N}$, and the corresponding maximum likelihood estimator is $\tilde{\mu}^{NM} = \dfrac{N_{NM}}{T_N}$, where $N_{NM}$ and $T_N$ are random variables.

12 Subject CT4 September 2010 Question 11

(i) How the situation can be modelled as a Markov jump process

Let X t = number of passengers waiting in the front taxi at time t .

 The taxi leaves as soon as it has 4 passengers, so the state space of


the process X t is {0,1, 2, 3} , which is discrete.

 X t has a continuous time set with t ≥ 0 .

 The Markov property holds because the number of passengers waiting


in the front taxi at any given time in the future depends only on the
number currently waiting, and not on the past history of the process.


Alternatively, we could say that the Markov property holds because arrivals
occur according to a Poisson process and Poisson processes are Markov
because they have independent increments.

(ii)(a) Generator matrix

The transition diagram for this process is as follows:


[Transition diagram: the states 0, 1, 2 and 3 form a cycle, with transitions 0 → 1 → 2 → 3 → 0, each occurring at rate β.]

and the generator matrix is:

$$A = \begin{pmatrix} -\beta & \beta & 0 & 0 \\ 0 & -\beta & \beta & 0 \\ 0 & 0 & -\beta & \beta \\ \beta & 0 & 0 & -\beta \end{pmatrix}$$

(ii)(b) Kolmogorov’s forward differential equations

The matrix form of the forward equations is:

d
P (t ) = P (t ) A
dt

where:

Ê p00 (t ) p01(t ) p02 (t ) p03 (t )ˆ


Á p (t ) p11(t ) p12 (t ) p13 (t )˜
P (t ) = Á 10 ˜
Á p20 (t ) p21(t ) p22 (t ) p23 (t )˜
Á ˜
Ë p30 (t ) p31(t ) p32 (t ) p33 (t )¯


pij (t ) is the probability of going from State i to State j in a time period t, and
A is the generator matrix from part (ii)(a).

In component form, the forward differential equations are:

d
pij (t ) = pi , j -1(t ) b - pij (t )b for i = 0,1, 2, 3 and j = 1,2,3
dt

d
and pi 0 (t ) = pi 3 (t ) b - pi 0 (t )b for i = 0,1, 2, 3
dt

(iii) Expected waiting time

Let:

 $T_j$ denote the holding time in state $j$. Then $T_j \sim \text{Exp}(\beta)$ and $E(T_j) = \dfrac{1}{\beta}$.

 $m_j$ = the expected waiting time for a newly arrived passenger, given that there are $j$ passengers (including the new arrival) waiting in the front taxi once they have arrived.

Then the total expected waiting time for a new passenger is:

$$\pi_0 m_1 + \pi_1 m_2 + \pi_2 m_3$$

where:

$$\pi_j = \tfrac{1}{4} = \text{the probability that there are } j \text{ passengers waiting in the front taxi when the new passenger arrives}$$

Now:

$$m_1 = E(T_1) + E(T_2) + E(T_3) = \frac{3}{\beta}, \qquad m_2 = E(T_2) + E(T_3) = \frac{2}{\beta}, \qquad m_3 = E(T_3) = \frac{1}{\beta}$$

So, the expected waiting time for a passenger arriving at the terminus is:

$$\frac{1}{4}\left(\frac{3}{\beta} + \frac{2}{\beta} + \frac{1}{\beta}\right) = \frac{6}{4\beta} = \frac{3}{2\beta} \text{ minutes}$$
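The 3/(2β) result can be sanity-checked by simulation. The sketch below assumes, as in the solution, that a new passenger is equally likely to find 0, 1, 2 or 3 passengers already waiting, and uses an arbitrary value β = 2 per minute.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0           # arrival rate per minute (arbitrary choice for the check)
n_sims = 200_000
waits = np.empty(n_sims)

for i in range(n_sims):
    j = rng.integers(0, 4)          # passengers already waiting: 0, 1, 2 or 3 (equally likely)
    needed = 4 - (j + 1)            # further arrivals needed before the taxi departs
    # waiting time = sum of `needed` independent Exp(beta) inter-arrival times
    waits[i] = rng.exponential(1 / beta, size=needed).sum()

print(waits.mean(), 3 / (2 * beta))   # both approximately 0.75 minutes
```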

(iv) Transition matrix of the Markov jump chain

The transition diagram for the number of passengers waiting in the front taxi
now looks like this:


[Transition diagram: states 0 → 1 → 2; from state 2 the process jumps to state 0 with probability ½ (a 3-seater taxi departs) or to state 3 with probability ½; then 3 → 4 → 0.]

The jump chain is the sequence of states that the process occupies. Its transition matrix is:

$$\begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ \tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}$$


(v) Expected waiting time after the vehicle scrappage scheme

If the front taxi is a 3-seater, the total expected waiting time is:

$$\pi_0 m_1 + \pi_1 m_2$$

where:

$$\pi_0 = \pi_1 = \pi_2 = \frac{1}{3}, \qquad m_1 = \frac{2}{\beta}, \qquad m_2 = \frac{1}{\beta}$$

So the expected waiting time is $\dfrac{1}{3}\left(\dfrac{2}{\beta} + \dfrac{1}{\beta}\right) = \dfrac{3}{3\beta} = \dfrac{1}{\beta}$ minutes.

If the front taxi is a 5-seater, the total expected waiting time is:

$$\pi_0 m_1 + \pi_1 m_2 + \pi_2 m_3 + \pi_3 m_4$$

where:

$$\pi_0 = \pi_1 = \pi_2 = \pi_3 = \pi_4 = \frac{1}{5}, \qquad m_1 = \frac{4}{\beta}, \quad m_2 = \frac{3}{\beta}, \quad m_3 = \frac{2}{\beta}, \quad m_4 = \frac{1}{\beta}$$

So the expected waiting time is $\dfrac{1}{5}\left(\dfrac{4}{\beta} + \dfrac{3}{\beta} + \dfrac{2}{\beta} + \dfrac{1}{\beta}\right) = \dfrac{10}{5\beta} = \dfrac{2}{\beta}$ minutes.

We know that:
 The arrival rate is the same for 3-passenger and 5-passenger taxis.
 The expected waiting time at the front of the queue is $\dfrac{3}{\beta}$ for a 3-passenger taxi.
 The expected waiting time at the front of the queue is $\dfrac{5}{\beta}$ for a 5-passenger taxi.
 So, the probability that the front taxi takes 3 passengers when a new person arrives at the terminus is $\dfrac{3}{8}$, and the probability that it takes 5 passengers is $\dfrac{5}{8}$.

So the total expected waiting time for a passenger arriving at the terminus is:

$$E = \frac{1}{\beta} \times \frac{3}{8} + \frac{2}{\beta} \times \frac{5}{8} = \frac{13}{8\beta} \text{ minutes}$$

Alternatively, we could say that when a passenger arrives there are 8 equally likely possibilities: a 3-seater with 0 passengers, a 3-seater with 1 passenger, ..., a 5-seater with 4 passengers. The expected waiting times before the taxi leaves are $\frac{2}{\beta}, \frac{1}{\beta}, 0, \frac{4}{\beta}, \frac{3}{\beta}, \frac{2}{\beta}, \frac{1}{\beta}, 0$, respectively. So the overall expected waiting time is:

$$\frac{1}{8}\left(\frac{2}{\beta} + \frac{1}{\beta} + 0 + \frac{4}{\beta} + \frac{3}{\beta} + \frac{2}{\beta} + \frac{1}{\beta} + 0\right) = \frac{13}{8\beta}$$

as before.

This is longer than the expected waiting time before the scrappage scheme ($\frac{3}{2\beta}$ minutes). So, based on this evidence alone, the service has deteriorated following the scrappage scheme.


13 Subject CT4 April 2011 Question 9

(i) Markov jump process

A Markov jump process is a stochastic process with a discrete state space


that operates in continuous time and has the Markov property, ie the future
probabilities are fully determined by the current state.

(ii) Differential equation

The diagram for this process looks like this:

1=H 2=S

3=D

In this diagram:
 ‘H’ denotes a healthy person not suffering from the disease
 ‘S’ denotes a sick person suffering from the disease
 ‘D’ denotes a dead person.

Starting from the definition of a derivative:

$$\frac{\partial}{\partial t}\,{}_tp_x^{13} = \lim_{h \to 0}\frac{{}_{t+h}p_x^{13} - {}_tp_x^{13}}{h}$$

Because this is a Markov process, we can 'split' the probability $_{t+h}p_x^{13}$ at time $t$ using the Chapman-Kolmogorov equation:

$$_{t+h}p_x^{13} = {}_tp_x^{11}\,{}_hp_{x+t}^{13} + {}_tp_x^{12}\,{}_hp_{x+t}^{23} + {}_tp_x^{13}\,{}_hp_{x+t}^{33}$$

The transition rates $\mu_{x+t}^{ij}$ are defined by:

$$_hp_{x+t}^{ij} = h\mu_{x+t}^{ij} + o(h), \qquad i \neq j$$

So the three probabilities on the RHS of this equation that relate to the short time period of length $h$ are:

$$_hp_{x+t}^{13} = h\mu_{x+t}^{13} + o(h), \qquad {}_hp_{x+t}^{23} = h\mu_{x+t}^{23} + o(h), \qquad {}_hp_{x+t}^{33} = 1$$

Substituting these results into the original equation gives:

$$\frac{\partial}{\partial t}\,{}_tp_x^{13} = \lim_{h \to 0}\frac{{}_tp_x^{11}\,{}_hp_{x+t}^{13} + {}_tp_x^{12}\,{}_hp_{x+t}^{23} + {}_tp_x^{13}\,{}_hp_{x+t}^{33} - {}_tp_x^{13}}{h}$$

$$= \lim_{h \to 0}\frac{{}_tp_x^{11}\left(h\mu_{x+t}^{13} + o(h)\right) + {}_tp_x^{12}\left(h\mu_{x+t}^{23} + o(h)\right) + {}_tp_x^{13} \times 1 - {}_tp_x^{13}}{h}$$

We can now cancel the $_tp_x^{13}$'s at the end, cancel the $h$'s in the numerator and denominator, and use the property that $\lim_{h \to 0}\frac{o(h)}{h} = 0$ to get:

$$\frac{\partial}{\partial t}\,{}_tp_x^{13} = \lim_{h \to 0}\frac{{}_tp_x^{11}\,h\mu_{x+t}^{13} + {}_tp_x^{12}\,h\mu_{x+t}^{23} + o(h)}{h} = {}_tp_x^{11}\,\mu_{x+t}^{13} + {}_tp_x^{12}\,\mu_{x+t}^{23}$$


(iii) Diagram

We now know that individuals who are suffering from the disease have
different transition rates, depending on whether they have had the disease
before or not. So to preserve the Markov property, we need to have
separate states for those who are suffering from the disease for the first time
and those who have had it before. This also means that we need to split out
the ‘healthy’ state too, so that we can distinguish between those who have
never had the disease and those who have had it before.

This leads to the following diagram:

1=H0 2=S1 4=H1+ 5=S2+

3=D

In this diagram the numbers in the names of the states indicate how many
times the person has had the disease up to that point. So:
 ‘H0’ denotes a healthy person who has never suffered from the disease
 ‘S1’ denotes a sick person suffering from the disease for the first time
 ‘H1+’ denotes a person who is currently healthy but has suffered from
the disease at least once
 ‘S2+’ denotes a sick person suffering from the disease for the second or
subsequent time
 ‘D’ denotes a dead person.


14 Subject CT4 September 2011 Question 6

(i)(a) Parameters of the model

The parameters of the model are the transition rates mij between the states,
pij ( h )
where mij = lim , iπ j.
hÆ0 h

So here there are six parameters: m AB , m AC , mBA , mBC , mCA and mCB .

(i)(b) Assumptions underlying the model

The model assumes that the transition rates have constant values.

The probability of a transition occurring between state i and state j during


any short time interval (t , t + h ) is pij (h ) = h mij + o( h ) .

The model is Markov, so that the transition rates depend only on the current
state.

The model also assumes that transitions occur independently.

(ii)(a) Estimate the parameters

nij
The MLE of the transition rate from state i to state j is mˆij = , where nij
ti
is the number of transitions from state i to state j , and ti is the total
waiting time in state i .


So here the parameter estimates (which are rates ‘per hour’) are:

$$\hat{\mu}_{AB} = \frac{60}{35} = \frac{12}{7} = 1.714 \qquad\qquad \hat{\mu}_{AC} = \frac{45}{35} = \frac{9}{7} = 1.286$$

$$\hat{\mu}_{BA} = \frac{50}{150} = \frac{1}{3} = 0.333 \qquad\qquad \hat{\mu}_{BC} = \frac{25}{150} = \frac{1}{6} = 0.167$$

$$\hat{\mu}_{CA} = \frac{55}{210} = \frac{11}{42} = 0.262 \qquad\qquad \hat{\mu}_{CB} = \frac{15}{210} = \frac{1}{14} = 0.071$$

(ii)(b) Generator matrix

The estimated generator matrix (with the rows and columns in the order A, B, C) is:

$$\begin{pmatrix} -3 & \tfrac{12}{7} & \tfrac{9}{7} \\[2pt] \tfrac{1}{3} & -\tfrac{1}{2} & \tfrac{1}{6} \\[2pt] \tfrac{11}{42} & \tfrac{1}{14} & -\tfrac{1}{3} \end{pmatrix} \qquad\text{or}\qquad \begin{pmatrix} -3 & 1.714 & 1.286 \\ 0.333 & -0.5 & 0.167 \\ 0.262 & 0.071 & -0.333 \end{pmatrix}$$
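The estimates and the estimated generator matrix can be assembled directly from the transition counts and waiting times. A minimal sketch in Python (NumPy), using the figures quoted in the question:

```python
import numpy as np

states = ["A", "B", "C"]
# observed numbers of transitions n_ij (rows = from, cols = to)
n = np.array([[0, 60, 45],
              [50, 0, 25],
              [55, 15, 0]], dtype=float)
# total waiting time (hours) spent in each state
t = np.array([35.0, 150.0, 210.0])

rates = n / t[:, None]                    # MLEs  mu_ij = n_ij / t_i
Q = rates.copy()
np.fill_diagonal(Q, -rates.sum(axis=1))   # diagonal makes each row sum to zero

print(np.round(Q, 3))
# [[-3.     1.714  1.286]
#  [ 0.333 -0.5    0.167]
#  [ 0.262  0.071 -0.333]]
```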

(iii) Distribution of the number of transitions

If the current state is $i$, the jump probability to state $j$ is $p_{ij} = \dfrac{\mu_{ij}}{\lambda_i}$, where $\lambda_i$ is the sum of the transition rates leaving state $i$. So, if there are $n_i$ transitions out of state $i$, the number of transitions to state $j$ will have a $\text{Binomial}(n_i, \hat{p}_{ij})$ distribution, where $\hat{p}_{ij} = \dfrac{\hat{\mu}_{ij}}{\hat{\lambda}_i}$.


15 Subject CT4 September 2011 Question 8

(i) Transition diagram

The transition diagram for this process looks like this:

[Transition diagram: A → T at rate 0.1, A → P at rate 0.02, A → D at rate 0.03; T → A at rate 0.45, T → P at rate 0.1, T → D at rate 0.05; P → D at rate 0.2.]

(ii) Probability of a person remaining in state A for 5 years

The probability of remaining in state A for at least 5 years is:

pAA (5) = e -0.15 ¥ 5 = e -0.75 = 0.4724

(iii) Expressions for F ( A) and F (T )

$F(A)$ relates to a person currently in state A. When the person leaves state A they must move to state T, state P or state D. The probability of the next move being to state T equals the ratio of the transition rates, ie $\frac{0.1}{0.15}$. If this move occurs, the person will then be in state T and will have a probability of $F(T)$ of never being in state P. Considering the other two states similarly, and summing, leads to the equation:

$$F(A) = \tfrac{0.1}{0.15}F(T) + \tfrac{0.02}{0.15}\underbrace{F(P)}_{=0} + \tfrac{0.03}{0.15}\underbrace{F(D)}_{=1}$$

However, $F(P) = 0$, since if the person is currently in state P it is impossible for them never to visit state P. Also, $F(D) = 1$, since state D is an absorbing state and it is certain that a person in this state will never visit state P (or any other state).

This equation simplifies to:

$$F(A) = \tfrac{2}{3}F(T) + \tfrac{1}{5}$$

We can derive an equation for $F(T)$ in a similar way:

$$F(T) = \tfrac{0.45}{0.6}F(A) + \tfrac{0.1}{0.6}\underbrace{F(P)}_{=0} + \tfrac{0.05}{0.6}\underbrace{F(D)}_{=1}$$

$$\Rightarrow\quad F(T) = \tfrac{3}{4}F(A) + \tfrac{1}{12}$$

(iv) Calculate F ( A) and F (T )

We can find $F(A)$ and $F(T)$ by solving the simultaneous equations derived in part (iii). Substituting the equation for $F(T)$ into the equation for $F(A)$ gives:

$$F(A) = \tfrac{2}{3}F(T) + \tfrac{1}{5} = \tfrac{2}{3}\left\{\tfrac{3}{4}F(A) + \tfrac{1}{12}\right\} + \tfrac{1}{5}$$

Simplifying the fractions and rearranging:

$$F(A) = \tfrac{1}{2}F(A) + \tfrac{1}{18} + \tfrac{1}{5} \quad\Rightarrow\quad \tfrac{1}{2}F(A) = \tfrac{23}{90} \quad\Rightarrow\quad F(A) = \tfrac{23}{45} = 0.5111$$

We can then use the equation for $F(T)$ to find $F(T)$:

$$F(T) = \tfrac{3}{4}F(A) + \tfrac{1}{12} = \tfrac{3}{4} \times \tfrac{23}{45} + \tfrac{1}{12} = \tfrac{7}{15} = 0.4667$$


(v) Expected future duration in state P

Looking at the diagram, we can see that a person in state A can visit
state P at most once (since, once they left state P they would have to go to
the absorbing state D ).

We have already established that the probability of a person in state A never visiting state P in the future is $F(A) = \frac{23}{45}$. So the probability that a person in state A will visit state P is $1 - F(A) = 1 - \frac{23}{45} = \frac{22}{45}$.

If they do visit state P, the time spent there will have an exponential distribution with parameter $\lambda_P = 0.2$, so that the mean waiting time in state P will be $\frac{1}{\lambda_P} = \frac{1}{0.2} = 5$.

So the expected future duration spent in state P for a person currently in state A is:

$$\tfrac{22}{45} \times 5 = \tfrac{22}{9} = 2.444 \text{ years}$$
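Both F(A) and the expected future duration in state P can be checked by simulating the process. This is an illustrative sketch using the transition rates from the diagram in part (i):

```python
import numpy as np

rng = np.random.default_rng(1)

# transition rates out of each state (target: rate); D is absorbing
rates = {"A": {"T": 0.1, "P": 0.02, "D": 0.03},
         "T": {"A": 0.45, "P": 0.1, "D": 0.05},
         "P": {"D": 0.2}}

n_sims = 100_000
visited_P = 0
time_in_P = 0.0

for _ in range(n_sims):
    state = "A"
    while state != "D":
        targets, outs = zip(*rates[state].items())
        total = sum(outs)
        if state == "P":
            visited_P += 1
            time_in_P += rng.exponential(1 / total)   # Exp(0.2) holding time in P
        state = rng.choice(targets, p=np.array(outs) / total)

print(visited_P / n_sims)     # approx 1 - F(A) = 22/45 = 0.489
print(time_in_P / n_sims)     # approx 22/9 = 2.444 years
```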

16 Subject CT4 April 2012 Question 10

(i) Diagram

The diagram for this model looks like this:

12

1 (Single) 2 (Married)
21

13 23
3 (Dead)


(ii) Likelihood

The likelihood is:

L μ e -( m12 + m13 )t1 e -( m21 + m23 )t2 m12n12 m13n13 m21n21 m23n23

where:
 mij is the transition rate from state i to state j

 ti is the total waiting time spent in state i

 nij is the number of transitions observed from state i to state j .

(iii) Derive the MLE

The log-likelihood is:

log L = -( m12 + m13 )t1 -  + n13 log m13 + 

where ‘  ‘ denotes terms that do not involve m13 .

Differentiating with respect to m13 :

∂ n
log L = -t1 + 13
∂m13 m13

Setting this equal to zero to find the maximum likelihood estimator gives:

n13 n13
-t1 + = 0 fi m13 =
m13 t1

We can check that this will be a maximum by considering the sign of the
second derivative:

∂2 n13
log L = - <0
2
∂m13 m132

Since this is negative, we have indeed found a maximum value.


(iv) Estimate the transition rate and its variance

Substituting the observed values of $n_{13}$ and $t_1$ given in the question, we get the following estimate of $\mu_{13}$:

$$\hat{\mu}_{13} = \frac{12}{10{,}298} = 0.001165$$

The variance of the estimator of $\mu_{13}$ is:

$$\text{CRLB} = -1\bigg/\frac{\partial^2}{\partial\mu_{13}^2}\log L = -1\bigg/\left(-\frac{n_{13}}{\mu_{13}^2}\right) = \frac{\mu_{13}^2}{n_{13}}$$

So, substituting the numerical values gives a variance of:

$$\frac{0.001165^2}{12} = 1.132 \times 10^{-7} \quad\text{or}\quad (0.000336)^2$$
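A two-line check of the point estimate and its asymptotic variance, using the figures from the question:

```python
n13, t1 = 12, 10_298          # transitions 1 -> 3 and total waiting time in state 1

mu13_hat = n13 / t1           # MLE of the transition rate
var_hat = mu13_hat**2 / n13   # asymptotic variance (Cramer-Rao lower bound)

print(mu13_hat, var_hat, var_hat**0.5)
# approx 0.001165, 1.13e-07, 0.000336
```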
12

17 Subject CT4 September 2012 Question 7

(i) Generator matrix

The generator matrix (with the states in the order L, H) is:

$$A = \begin{pmatrix} -\mu & \mu \\ \rho & -\rho \end{pmatrix}$$

(ii) Distribution of holding times in each state

Let $T_i$ be the holding time in State $i$. We have:

$$T_L \sim \text{Exp}(\mu) \qquad\text{and}\qquad T_H \sim \text{Exp}(\rho)$$


(iii) How the parameters can be estimated

We would need to decide on the set of n equities to be included in the


investigation and decide what is meant by saying that equity price volatility is
‘high’.

We would record the total time spent in each state by the n equities over
the investigation, tL and tH .

We would also record the number of transitions from one state to the other
over the investigation, nLH and nHL .

The maximum likelihood estimates are:

nLH nHL
m= and r=
tL tH

(iv) Kolmogorov’s forward differential equations

The required equations are:

$$\frac{\partial}{\partial t}\,{}_tp_s^{LL} = {}_tp_s^{LH}\,\rho - {}_tp_s^{LL}\,\mu$$

$$\frac{\partial}{\partial t}\,{}_tp_s^{LH} = {}_tp_s^{LL}\,\mu - {}_tp_s^{LH}\,\rho$$

$$\frac{\partial}{\partial t}\,{}_tp_s^{\overline{LL}} = -\,{}_tp_s^{\overline{LL}}\,\mu$$

(The third equation relates to the probability of remaining in state L throughout the period, which is the form used in part (v).)

(v) Expression for the time after which there is a greater than 50% chance
of having experienced a period of high equity price volatility

We know from part (ii) that TL ~ Exp    . So, we require the minimum
time t for which the following is true:

P ÎÈTL > t ˚˘ < 0.5


Using the formula for the distribution function of an exponential distribution


from page 11 of the Tables, we get:

e - m t < 0.5

¤ ln 0.5 > - m t

- ln 0.5 ln 2
¤ t> =
m m

ln 2
So, the required time is .
m

(vi) Solve the Kolmogorov equation to obtain an expression for t p0LL

The Kolmogorov differential equation is:

$$\frac{\partial}{\partial t}\,{}_tp_0^{LL} = {}_tp_0^{LH}\,\rho - {}_tp_0^{LL}\,\mu$$

Now, since there are only two states, we have:

$$_tp_0^{LH} = 1 - {}_tp_0^{LL}$$

So the differential equation can be written as:

$$\frac{\partial}{\partial t}\,{}_tp_0^{LL} = \left(1 - {}_tp_0^{LL}\right)\rho - {}_tp_0^{LL}\,\mu = \rho - (\rho + \mu)\,{}_tp_0^{LL}$$

or equivalently:

$$\frac{\partial}{\partial t}\,{}_tp_0^{LL} + (\rho + \mu)\,{}_tp_0^{LL} = \rho$$

The integrating factor in this case is $e^{(\rho+\mu)t}$. Multiplying every term in the previous equation by the integrating factor gives:

$$\left[\frac{\partial}{\partial t}\,{}_tp_0^{LL}\right]e^{(\rho+\mu)t} + (\rho + \mu)\,{}_tp_0^{LL}\,e^{(\rho+\mu)t} = \rho\,e^{(\rho+\mu)t}$$

This simplifies to:

$$\frac{\partial}{\partial t}\left({}_tp_0^{LL}\,e^{(\rho+\mu)t}\right) = \rho\,e^{(\rho+\mu)t}$$

$$\Leftrightarrow\quad \int_0^n \frac{\partial}{\partial t}\left({}_tp_0^{LL}\,e^{(\rho+\mu)t}\right)dt = \int_0^n \rho\,e^{(\rho+\mu)t}\,dt$$

$$\Leftrightarrow\quad \left[{}_tp_0^{LL}\,e^{(\rho+\mu)t}\right]_0^n = \frac{\rho}{\rho+\mu}\left[e^{(\rho+\mu)t}\right]_0^n$$

$$\Leftrightarrow\quad {}_np_0^{LL}\,e^{(\rho+\mu)n} - 1 = \frac{\rho}{\rho+\mu}\left(e^{(\rho+\mu)n} - 1\right)$$

Substituting $t$ for $n$ and rearranging:

$$_tp_0^{LL} - e^{-(\rho+\mu)t} = \frac{\rho}{\rho+\mu}\left(1 - e^{-(\rho+\mu)t}\right)$$

$$\Leftrightarrow\quad {}_tp_0^{LL} = \frac{\rho}{\rho+\mu}\left(1 - e^{-(\rho+\mu)t}\right) + e^{-(\rho+\mu)t} = \frac{\rho}{\rho+\mu} + \frac{\mu\,e^{-(\rho+\mu)t}}{\rho+\mu}$$
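The closed form can be checked against a numerical solution of the forward equation. A small sketch using SciPy's solve_ivp; the parameter values μ = 0.3 and ρ = 0.5 are arbitrary choices for the check:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, rho = 0.3, 0.5      # illustrative L -> H and H -> L transition rates (arbitrary)

# forward equation for p = P(in state L at time t | in state L at time 0):
#   p'(t) = rho * (1 - p) - mu * p,   p(0) = 1
sol = solve_ivp(lambda t, p: [rho * (1 - p[0]) - mu * p[0]],
                (0.0, 10.0), [1.0], dense_output=True, rtol=1e-8, atol=1e-10)

t = np.linspace(0, 10, 6)
closed = rho / (rho + mu) + mu / (rho + mu) * np.exp(-(rho + mu) * t)

print(np.round(sol.sol(t)[0], 6))
print(np.round(closed, 6))          # the two rows agree
```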


18 Subject CT4 September 2012 Question 10

(i) Diagram with four states

Let’s use the notation:


● A = Alive
● I = Dead from illness
● D = Dead from sacrifice
● Z = Zombie

The transition diagram for this process is as follows:

A D

I Z

(ii) Likelihood of the process

We need some notation. Let:


● mij = force of transition from state i to state j ,

● nij = observed number of transitions from state i to state j and

● ti = observed waiting time in state i .

Then, the likelihood function is:

- t A ( m AI + m AD )
L μe m AI nAI m AD nAD e -tI mIZ mIZ nIZ e -tD mDZ mDZ nDZ


(iii) Maximum likelihood estimator

The log-likelihood function is:

ln L = -t A ( m AI + m AD ) + nAI ln m AI + nAD ln m AD
-tI mIZ + nIZ ln mIZ - tD mDZ + nDZ ln mDZ + c

Differentiating this with respect to m AI , we get:

∂ ln L n
= -t A + AI
∂m AI m AI

and setting this equal to 0 gives:

nAI
m AI =
tA

Differentiating again:

∂ 2 ln L nAI
=- <0 fi max
∂ ( m AI ) ( mAI )2
2

nAI
So the maximum likelihood estimate of m AI is mˆ AI = and the
tA
N AI
corresponding maximum likelihood estimator is m AI = (where the upper
TA
case N and T are random variables).

(iv) Annual death rates from illness and sacrifice

It’s a 10-year investigation so the number of transitions per annum is:

nAI = 3,690 / 10 = 369

nAD = 2,310 / 10 = 231


Assuming that the population varies linearly between census dates, we can
estimate the total time spent in state A over one year:

1
tA =
2
(3,189 + 2,811) = 3,000

So, from this, the annual death rates are:

nAI 369
mˆ AI = = = 0.123
tA 3,000

nAD 231
mˆ AD = = = 0.077
tA 3,000

(v)(a) Probability an alien alive in year 46,567 will still be alive in 46,577

We have:

$$p_{AA}(10) = \bar{p}_{AA}(10) = e^{-10(\hat{\mu}_{AI} + \hat{\mu}_{AD})} = e^{-10(0.123 + 0.077)} = e^{-2} = 0.135335$$

(v)(b) Probability an alien alive in year 46,567 will be dead but not a zombie in 46,577

We can construct an integral equation for the probability that a life goes from state A to state Z over 10 years:

$$p_{AZ}(10) = \int_0^{10} p_{AA}(t)\left\{\mu_{AI}\,p_{IZ}(10-t) + \mu_{AD}\,p_{DZ}(10-t)\right\}dt$$

The rate at which aliens who have died from either cause become zombies is 0.1 per alien year. This means that $p_{IZ}(t) = p_{DZ}(t)$, and we have:

$$p_{AZ}(10) = \int_0^{10} 0.2\,e^{-0.2t}\,p_{IZ}(10-t)\,dt$$

Now, the probability of going from state I to state Z and the probability of staying in state I together sum to 1. So:

$$p_{IZ}(10-t) = 1 - p_{II}(10-t) = 1 - e^{-0.1(10-t)}$$

Substituting this into the above integral, we have:

$$p_{AZ}(10) = \int_0^{10} 0.2\,e^{-0.2t}\left(1 - e^{-0.1(10-t)}\right)dt = 0.2\int_0^{10} e^{-0.2t}\,dt - 0.2e^{-1}\int_0^{10} e^{-0.1t}\,dt$$

$$= \left[-e^{-0.2t}\right]_0^{10} + 2e^{-1}\left[e^{-0.1t}\right]_0^{10} = \left(1 - e^{-2}\right) - 2e^{-1}\left(1 - e^{-1}\right) = 0.39958$$

So, the required probability in the question is:

$$1 - p_{AZ}(10) - p_{AA}(10) = 1 - 0.39958 - 0.135335 = 0.4651$$
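A numerical check of these probabilities, using the rates estimated in part (iv) and the zombification rate of 0.1 given in the question:

```python
import numpy as np
from scipy.integrate import quad

mu_AI, mu_AD, mu_Z = 0.123, 0.077, 0.1

p_AA = np.exp(-10 * (mu_AI + mu_AD))          # still alive after 10 years

# leave state A (by either cause of death) at time t, then become a zombie by time 10
integrand = lambda t: (mu_AI + mu_AD) * np.exp(-(mu_AI + mu_AD) * t) \
                      * (1 - np.exp(-mu_Z * (10 - t)))
p_AZ, _ = quad(integrand, 0, 10)

print(p_AA, p_AZ, 1 - p_AA - p_AZ)   # approx 0.1353, 0.3996, 0.4651
```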


19 Subject CT4 April 2013 Question 8

(i) Transition graph

The transition graph looks like this:

[Transition graph: All OK → Yellow at rate 1/10, All OK → Sent off at rate 1/40, All OK → Substituted at rate 1/10; Yellow → Sent off at rate 1/15, Yellow → Substituted at rate 1/5.]

(ii) Kolmogorov forward equations and generator matrix

If P (t ) denotes the matrix of probabilities for movements between states


over a time period of length t , and A denotes the generator matrix, the
Kolmogorov forward equations in matrix form are:

P ¢(t ) = P (t ) A

The generator matrix for this model (with the states in the order 'All OK', 'Yellow', 'Sent off', 'Substituted') is:

$$\begin{pmatrix} -\tfrac{9}{40} & \tfrac{1}{10} & \tfrac{1}{40} & \tfrac{1}{10} \\[2pt] 0 & -\tfrac{4}{15} & \tfrac{1}{15} & \tfrac{1}{5} \\[2pt] 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$


(iii) Solve the Kolmogorov equation

If we number the states 1 to 4, the required probability is $P_{11}(1.5)$. We can see from the first column of the generator matrix that the Kolmogorov forward equation is:

$$P_{11}'(t) = -\tfrac{9}{40}P_{11}(t)$$

If we divide both sides by $P_{11}(t)$, we get:

$$\frac{P_{11}'(t)}{P_{11}(t)} = -\tfrac{9}{40}, \qquad\text{ie}\qquad \frac{d}{dt}\log P_{11}(t) = -\tfrac{9}{40}$$

Integrating both sides:

$$\log P_{11}(t) = -\tfrac{9}{40}t + c$$

To find the constant of integration $c$, we note that $P_{11}(0) = 1$. When $t = 0$, we have:

$$\log \underbrace{P_{11}(0)}_{=1} = -\tfrac{9}{40} \times 0 + c \quad\Rightarrow\quad c = 0$$

So:

$$\log P_{11}(t) = -\tfrac{9}{40}t \qquad\text{and}\qquad P_{11}(t) = \exp\left(-\tfrac{9}{40}t\right)$$

So the required probability is:

$$P_{11}(1.5) = \exp\left(-\tfrac{9}{40} \times 1.5\right) = 0.7136$$

(iv) Probability calculation

The required probability is:

$$\int_0^{1.5} P_{11}(s)\,\mu_{13}\,ds$$

Using the formulae from part (iii), we can evaluate this as:

$$\int_0^{1.5} P_{11}(s)\,\mu_{13}\,ds = \int_0^{1.5} \exp\left(-\tfrac{9}{40}s\right) \times \tfrac{1}{40}\,ds = \tfrac{1}{40}\left[-\tfrac{40}{9}\exp\left(-\tfrac{9}{40}s\right)\right]_0^{1.5} = \tfrac{1}{9}\left[1 - \exp\left(-\tfrac{27}{80}\right)\right] = 0.0318$$

(v)(a) Probability calculation in an indefinite match

If play continued indefinitely, the probability in part (iv) would become:

$$\int_0^{\infty} P_{11}(s)\,\mu_{13}\,ds = \int_0^{\infty} \exp\left(-\tfrac{9}{40}s\right) \times \tfrac{1}{40}\,ds = \tfrac{1}{40}\left[-\tfrac{40}{9}\exp\left(-\tfrac{9}{40}s\right)\right]_0^{\infty} = \tfrac{1}{9}\left[1 - 0\right] = \tfrac{1}{9}$$

(v)(b) Explain the result

This answer makes sense because there are three modes of exit from state 1 (playing without having received a caution), namely: being sent off, getting a yellow card or being substituted. Based on the rates of transition, the probabilities of each of these occurring are in the ratio $\tfrac{1}{40} : \tfrac{1}{10} : \tfrac{1}{10}$, ie $1 : 4 : 4$, giving probabilities of $\tfrac{1}{9}$, $\tfrac{4}{9}$ and $\tfrac{4}{9}$.


20 Subject CT4 April 2013 Question 10

(i) Markov property

A stochastic process has the Markov property if knowledge of just the


current state fully determines the future probabilities for the process.

(ii) Derive the Kolmogorov forward equation

The Kolmogorov forward differential equation for the transition from state 1

to state 2 is an equation for p12 . This can be defined as:
∂t t x
12
∂ 12 t + h px - t p12
x
t px = lim (1)
∂t hÆ0 h

12
Since this is a Markov process, we can ‘split up’ the probability t + h px using
the Chapman-Kolmogorov equation to get:

12
t + h px = t p11 12 12 22
x h px + t + t px h p x + t

From the definition of the transition rates, we have:

12
h px + t = h m 12
x + t + o( h )

We also have:

22
h px + t ( )
= 1 - h px21+ t = 1 - h m x21+ t + o(h ) = 1 - h m x21+ t + o(h )

Substituting these expressions into equation (1) gives:

11 12 12 22 12
∂ 12 t p x h p x + t + t px h px + t - t p x
t px = lim
∂t h Æ0 h

= lim
11
t px (h m 12
x +t )
+ o(h ) + t p12 ( 21
) 12
x 1 - h m x + t + o( h ) - t px
h Æ0 h


We can cancel the t p12


x ’s in the numerator to get:


p12
= lim
11 12
( )
12 21
(
t px h m x + t + o( h ) + t px - h m x + t + o( h ) )
t x
∂t h Æ0 h

= lim
(p11 12
t x mx +t - t p12 21
)
x m x + t h + o( h )
h Æ0 h

o( h )
Since lim = 0 and the remaining h ’s cancel, we get:
h Æ0 h

∂ 12 11 12 12 21
t px = t px m x + t - t px m x + t
∂t

(iii) Maximum likelihood estimates

If we assume that the transition rates are constant, this is a


time-homogeneous jump process and the transition rates can be estimated
nij
as mˆij = . So:
ti

n12 4, 330 n 4,160


mˆ12 = = = 0.2 and mˆ21 = 21 = = 0.8
t1 21, 650 t2 5, 200

(iv) Probability of suffering in 3 days’ time

The probability that a person who is not currently suffering from the condition will have blurred vision in exactly 3 days' time is $_3p_x^{12}$.

If we assume that the transition rates are constant and equal to the values estimated in part (iii), we have:

$$\frac{\partial}{\partial t}\,{}_tp_x^{12} = 0.2\,{}_tp_x^{11} - 0.8\,{}_tp_x^{12}$$

Since $_tp_x^{11} + {}_tp_x^{12} = 1$, this can be written as:

$$\frac{\partial}{\partial t}\,{}_tp_x^{12} = 0.2\left(1 - {}_tp_x^{12}\right) - 0.8\,{}_tp_x^{12} = 0.2 - {}_tp_x^{12}$$

We can solve this using an integrating factor by first rearranging it in the form:

$$\frac{\partial}{\partial t}\,{}_tp_x^{12} + {}_tp_x^{12} = 0.2$$

The integrating factor is $\exp\left(\int 1\,dt\right) = e^t$. Multiplying through by the integrating factor gives:

$$e^t\,\frac{\partial}{\partial t}\,{}_tp_x^{12} + e^t\,{}_tp_x^{12} = 0.2e^t$$

The LHS can be expressed as the derivative of a product:

$$\frac{\partial}{\partial t}\left(e^t\,{}_tp_x^{12}\right) = 0.2e^t$$

We can now integrate this equation with respect to $t$ to obtain:

$$e^t\,{}_tp_x^{12} = 0.2e^t + c$$

We can use the initial condition $_0p_x^{12} = 0$ to find the constant $c$:

$$e^0\underbrace{{}_0p_x^{12}}_{=0} = 0.2e^0 + c \quad\Rightarrow\quad c = -0.2$$

So:

$$e^t\,{}_tp_x^{12} = 0.2e^t - 0.2 \qquad\text{and}\qquad {}_tp_x^{12} = 0.2\left(1 - e^{-t}\right)$$

So the required probability is:

$$_3p_x^{12} = 0.2\left(1 - e^{-3}\right) = 0.1900$$


21 Subject CT4 September 2013 Question 8

(i) State space

The state space is {0,1,2,3}.

(ii) Transition graph

The transition graph is as follows:

3B 2B B

0 1 2 3

A A A
(full) (empty)

(iii) Generator matrix

The generator matrix (with the states in the order 0, 1, 2, 3) is:

$$\begin{pmatrix} -3B & 3B & 0 & 0 \\ A & -(A+2B) & 2B & 0 \\ 0 & A & -(A+B) & B \\ 0 & 0 & A & -A \end{pmatrix}$$

(iv) Probability the car park will remain full for two hours

The probability that the car park will remain full for the next two hours is:

$$p_{00}(2) = e^{-3B \times 2} = e^{-6B}$$

(v) Jump chain

If a Markov jump process is examined only at the times of the transitions, the
resulting process is the jump chain associated with the jump process.

In other words, the jump chain is the sequence of states that the process is
observed to take. The time set comprises the times at which the transitions
occur.


(vi) Transition matrix for the jump chain

The transition matrix for the jump chain (with the states in the order 0, 1, 2, 3) is:

$$\begin{pmatrix} 0 & 1 & 0 & 0 \\[2pt] \dfrac{A}{A+2B} & 0 & \dfrac{2B}{A+2B} & 0 \\[6pt] 0 & \dfrac{A}{A+B} & 0 & \dfrac{B}{A+B} \\[6pt] 0 & 0 & 1 & 0 \end{pmatrix}$$

(vii) Probability that all spaces become full before any cars drive away

The probability that all the spaces become full before any cars are driven away is:

$$\frac{A}{A+B} \times \frac{A}{A+2B} = \frac{A^2}{(A+B)(A+2B)}$$

(viii) Probability that the car park becomes full before it becomes empty

Let pi be the probability that the car park becomes full before it becomes
empty, given that there are currently i empty spaces (ie the process is in
state i ).

We have:

$$p_2 = \underbrace{\left(\frac{A}{A+B}\right)}_{\text{go from 2 to 1}}\underbrace{\left(\frac{A}{A+2B}\right)}_{\text{go from 1 to 0 (full)}} + \underbrace{\left(\frac{A}{A+B}\right)}_{\text{go from 2 to 1}}\underbrace{\left(\frac{2B}{A+2B}\right)}_{\text{go from 1 to 2}}p_2$$

Rearranging:

$$p_2\left[1 - \frac{2AB}{(A+B)(A+2B)}\right] = \frac{A^2}{(A+B)(A+2B)}$$

Now multiplying by $(A+B)(A+2B)$ to get rid of the fractions:

$$p_2\left[(A+B)(A+2B) - 2AB\right] = A^2$$

So:

$$p_2 = \frac{A^2}{(A+B)(A+2B) - 2AB} = \frac{A^2}{A^2 + 3AB + 2B^2 - 2AB} = \frac{A^2}{A^2 + AB + 2B^2}$$
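The first-passage probability can also be estimated by simulating the embedded jump chain, which provides a check on the algebra. An illustrative sketch with arbitrary values A = 1.5 and B = 1:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = 1.5, 1.0      # arbitrary arrival and departure rates for the check

def full_before_empty(start=2, n_sims=200_000):
    hits = 0
    for _ in range(n_sims):
        state = start                     # number of empty spaces
        while state not in (0, 3):
            if state == 1:
                down = A / (A + 2 * B)    # probability next jump is 1 -> 0
            else:                         # state == 2
                down = A / (A + B)        # probability next jump is 2 -> 1
            state += -1 if rng.random() < down else 1
        if state == 0:
            hits += 1
    return hits / n_sims

print(full_before_empty())
print(A**2 / (A**2 + A * B + 2 * B**2))   # closed form, approx 0.391 for these values
```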

(ix) Comment

This model is very unrealistic for a number of reasons.

The rates of arrival and departure will not be constant during the day. For
example, there is likely to be very little activity at night.

The rates may be different on weekdays and at weekends, eg visitors may


come round at the weekend.

The last two points suggest that a time-inhomogeneous model might be


more suitable.

The events will not be random because the car owners will probably have
regular times at which they go to work and come home.

The process may not be Markov either if the car owners time their
movements based on past experience of when they think there is likely to be
a space or if they park nearby when it is full and watch out of their windows
until one of the cars leaves.

The model also assumes that vehicles are always parked in a single space,
and never over two spaces.


22 Subject CT4 April 2014 Question 7

(i) Diagram

The diagram for this process looks like this:

1 2

3 4

(ii) Derive the Kolmogorov forward equation

Starting from the definition of a derivative, we can write:

24
∂ 24 t + h px - t px24
t px = lim (1)
∂t h Æ0 h

Assuming that this is a Markov process, we can ‘split up’ the


24
probability t + h px using the Chapman-Kolmogorov equation to get:

24
t + h px = t px21 h p14 22 24 23 34 24 44
x +t + t p x h px +t + t px h p x +t + t px h p x +t

From the definition of the transition rates, we have:

14
h p x +t = h m14
x +t + o(h ) and
24
h p x +t = h m x24+t + o(h)

We can see from the diagram in part (i) that transitions from state 3 to
34
state 4 are not possible. So h p x +t = 0.

We can also see from the diagram that state 4 is an absorbing state.
44
So h p x +t = 1.


Substituting these expressions into equation (1) gives:

21 14 22 24 23 24 24
∂ 24 t px h p x + t + t px h p x + t + t px ¥ 0 + t px ¥ 1 - t px
t px = lim
∂t h Æ0 h

= lim
21
t px (h m 14
x +t ) ( )
+ o(h ) + t px22 h m x24+ t + o(h )
h Æ0 h

o(h )
Since lim = 0 and the remaining h ’s cancel, we get:
hÆ0 h


p 24 = t px21m 14 22 24
x + t + t px m x + t
∂t t x

(iii) Statistically significant difference between the death rates

The MLE of the transition rate from state $i$ to state $j$ is $\hat{\mu}_{ij} = \dfrac{n_{ij}}{t_i}$, where $n_{ij}$ is the number of transitions from state $i$ to state $j$ and $t_i$ is the waiting time in state $i$.

We are testing the hypotheses:

$$H_0: \mu_{23} = \mu_{13} \qquad\text{versus}\qquad H_1: \mu_{23} \neq \mu_{13}$$

The approximate distributions of $\tilde{\mu}_{13}$ and $\tilde{\mu}_{23}$, the estimators of $\mu_{13}$ and $\mu_{23}$, for large samples are:

$$\tilde{\mu}_{13} \sim N\left(\mu_{13}, \frac{\mu_{13}}{t_1}\right) \qquad\text{and}\qquad \tilde{\mu}_{23} \sim N\left(\mu_{23}, \frac{\mu_{23}}{t_2}\right)$$

The distribution of the difference between these estimators is therefore:

$$\tilde{\mu}_{23} - \tilde{\mu}_{13} \sim N\left(\mu_{23} - \mu_{13},\; \frac{\mu_{23}}{t_2} + \frac{\mu_{13}}{t_1}\right)$$

Under $H_0$, $\mu_{23} - \mu_{13} = 0$, so we would have:

$$\tilde{\mu}_{23} - \tilde{\mu}_{13} \sim N\left(0,\; \frac{\mu_{23}}{t_2} + \frac{\mu_{13}}{t_1}\right)$$

Standardising this gives:

$$Z = \frac{\tilde{\mu}_{23} - \tilde{\mu}_{13}}{\sqrt{\dfrac{\mu_{23}}{t_2} + \dfrac{\mu_{13}}{t_1}}} \sim N(0, 1)$$

The numerical estimates of $\mu_{13}$ and $\mu_{23}$ are:

$$\hat{\mu}_{13} = \frac{n_{13}}{t_1} = \frac{10}{2{,}046} = 0.004888 \qquad\text{and}\qquad \hat{\mu}_{23} = \frac{n_{23}}{t_2} = \frac{25}{1{,}139} = 0.021949$$

Substituting these into the result above gives:

$$z = \frac{0.021949 - 0.004888}{\sqrt{\dfrac{0.021949}{1{,}139} + \dfrac{0.004888}{2{,}046}}} = 3.67$$

As this far exceeds 1.96, the 97.5th percentile of the $N(0,1)$ distribution, we can confidently reject $H_0$ and conclude that there is a difference between the death rates from heart disease for those with and without the condition.
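The test statistic (and a two-sided p-value) is quick to reproduce. A minimal sketch using SciPy:

```python
from math import sqrt
from scipy.stats import norm

n13, t1 = 10, 2_046       # deaths from heart disease / waiting time, without the condition
n23, t2 = 25, 1_139       # deaths from heart disease / waiting time, with the condition

mu13, mu23 = n13 / t1, n23 / t2
z = (mu23 - mu13) / sqrt(mu23 / t2 + mu13 / t1)
p_value = 2 * (1 - norm.cdf(z))           # two-sided test

print(round(z, 2), p_value)               # z approx 3.67, p-value approx 0.0002
```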


23 Subject CT4 April 2015 Question 10

(i) State space

The state space for this process is {0,1, 2, 3, 4, } .

(ii) Diagram

The diagram for this process looks like this:

   
0 1 2 3 4
    

(iii) Generator matrix

With the states in the order 0, 1, 2, …, the generator matrix for this process
is:

$$\begin{pmatrix} 0 & 0 & 0 & 0 & \cdots \\ \mu & -(\lambda+\mu) & \lambda & 0 & \cdots \\ 0 & \mu & -(\lambda+\mu) & \lambda & \cdots \\ 0 & 0 & \mu & -(\lambda+\mu) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

(iv) Jump chain

A jump chain is a Markov chain derived from a Markov jump process where
we model the successive states the process can visit and the associated
probabilities.


(v) Transition matrix for jump chain

The transition matrix for the jump chain (with the states in the order 0, 1, 2, 3, ...) is:

$$\begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ \dfrac{\mu}{\lambda+\mu} & 0 & \dfrac{\lambda}{\lambda+\mu} & 0 & \cdots \\ 0 & \dfrac{\mu}{\lambda+\mu} & 0 & \dfrac{\lambda}{\lambda+\mu} & \cdots \\ 0 & 0 & \dfrac{\mu}{\lambda+\mu} & 0 & \ddots \\ \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix}$$

(vi) Probability

The probability that a game ends without the player finding an extra life is the probability of the jump chain following the path $3 \to 2 \to 1 \to 0$. This is:

$$\frac{\mu}{\lambda+\mu} \times \frac{\mu}{\lambda+\mu} \times \frac{\mu}{\lambda+\mu} = \left(\frac{\mu}{\lambda+\mu}\right)^3$$

24 Subject CT4 April 2015 Question 11

(i) Diagram

The effect of the disease on an individual can be described by the following


model:

0.4
H S R
0.6


D


The states for this model are:


 H = Healthy
 S = Sick from the disease
 R = Recovered from the disease
 D = Dead.

We have shown State S using a dotted line as this is a transient state with a
one-hour window. Sufferers of the disease pass through this state during
the first hour of sickness before moving on to State R or State D.

The transition rates are:


 s , the rate at which healthy people contract the disease
 m , the mortality rate for people who have never had the disease

 n , the mortality rate for people who have recovered from the disease.

(ii) Likelihood

The likelihood is:

L = e -(s + m )tH ¥ e -n tR ¥ (0.4s )nHSR ¥ (0.6s )nHSD ¥ m nHD ¥ n nRD


= e -(s + m )tH ¥ e -n tR ¥ s nHSR + nHSD ¥ m nHD ¥ n nRD ¥ constant

where:

 tH denotes the total waiting time for lives in the healthy state H

 tR denotes the total waiting time for lives in the recovered state R

 nHSR and nHSD denote the number of transitions observed from the
healthy state H to the recovered state R and to the dead state D (via the
sick state S)

 nHD denotes the number of transitions observed from the healthy state
H to the dead state D (not passing through state S)

 nRD denotes the number of transitions observed from the recovered


state R to the dead state D.


(iii) Maximum likelihood estimator

The log-likelihood is:

ln L = -(s + m )tH - n tR + (nHSR + nHSD ) ln s + nHD ln m + nRD lnn


+constant

Differentiating with respect to s to find the MLE gives:

∂ n + nHSD
ln L = -tH + HSR
∂s s

Equating to zero and rearranging gives the maximum likelihood estimate:

nHSR + nHSD
sˆ =
tH

The second derivative of the log-likelihood is:

∂2 nHSR + nHSD
ln L = -
∂s 2 s2

This must take a negative value, irrespective of the value of s . So the


value we have estimated for s is indeed a maximum.

(iv) Survival probability for 3 years

An inhabitant of the island who has never suffered from the disease will still
be alive in 3 years’ time if they are in either state H or state R at time 3. So
the required probability is pHH (3) + pHR (3) .

pHH (3)

We can derive a formula for pHH (3) by first deriving the Kolmogorov
differential equation for pHH (t ) .

Consider the short time interval (t , t + h ) . From the definition of a derivative:

pHH (t + h ) - pHH (t )
¢ (t ) = lim
pHH
h Æ0 h


Using Chapman-Kolmogorov, we know that:

$$p_{HH}(t+h) = p_{HH}(t)\,p_{HH}(h)$$

From the definition of the transition rates, we also know that:

$$p_{HH}(h) = 1 - \sigma h - \mu h + o(h) = 1 - (\sigma + \mu)h + o(h)$$

where $\lim_{h \to 0}\dfrac{o(h)}{h} = 0$.

So:

$$p_{HH}'(t) = \lim_{h \to 0}\frac{p_{HH}(t)\left[1 - (\sigma+\mu)h + o(h)\right] - p_{HH}(t)}{h} = -(\sigma+\mu)\,p_{HH}(t)$$

We can now solve this differential equation:

$$\frac{d}{dt}\ln p_{HH}(t) = \frac{p_{HH}'(t)}{p_{HH}(t)} = -(\sigma+\mu) \quad\Rightarrow\quad \ln p_{HH}(t) = -(\sigma+\mu)t + c$$

Since $p_{HH}(0) = 1$, $c = 0$. So:

$$p_{HH}(t) = e^{-(\sigma+\mu)t} \quad\Rightarrow\quad p_{HH}(3) = e^{-3(\sigma+\mu)}$$

pHR (3)

We can derive a Kolmogorov integrated equation for this probability by considering the time of the first jump during the 3 years. This could be either to state R or to state D (either directly or via state S). So we have:

$$p_{HR}(3) = \int_0^3 p_{HH}(w)\left\{0.4\sigma\,p_{RR}(3-w) + 0.6\sigma\,p_{DR}(3-w) + \mu\,p_{DR}(3-w)\right\}dw = \int_0^3 p_{HH}(w)\,0.4\sigma\,p_{RR}(3-w)\,dw$$

Writing the integrand in terms of the forces of transition, we obtain:

$$p_{HR}(3) = 0.4\sigma\int_0^3 e^{-(\sigma+\mu)w}\,e^{-\nu(3-w)}\,dw = 0.4\sigma\,e^{-3\nu}\int_0^3 e^{-(\sigma+\mu-\nu)w}\,dw = 0.4\sigma\,e^{-3\nu}\left(\frac{1 - e^{-3(\sigma+\mu-\nu)}}{\sigma+\mu-\nu}\right) = 0.4\sigma\left(\frac{e^{-3\nu} - e^{-3(\sigma+\mu)}}{\sigma+\mu-\nu}\right)$$

So the probability that an islander who has never suffered from the disease will still be alive in 3 years' time is:

$$p_{HH}(3) + p_{HR}(3) = e^{-3(\sigma+\mu)} + 0.4\sigma\left(\frac{e^{-3\nu} - e^{-3(\sigma+\mu)}}{\sigma+\mu-\nu}\right)$$

(v) Information required

The students would need to be able to calculate the following statistics for
the 3-year period:
 the total waiting time for lives in the healthy state H ( tH )

 the number of transitions observed from the healthy state H to the


recovered state R via the sick state S ( nHSR )

 the number of transitions observed from the healthy state H to the dead
state D via the sick state S ( nHSD ).

To do this, they would need the following data:


 date of birth of all births in the last three years
 date of death of all ‘healthy’ deaths in the last three years
 dates of any other immigration or emigration of healthy people in the last
three years
 date at which each person who contracted the disease fell ill.


25 Subject CT4 September 2015 Question 8

(i) Define a Markov jump process

A Markov jump process is a stochastic process with a discrete state space


that operates in continuous time and has the Markov property, ie the
probabilities for future states depend only on the current state (not on the
previous history).

(ii) Diagram

The transition rate from state R to state D is 0.2.

The total transition rate out of state NB is 0.1. We need to split this in
proportion to the probabilities of 0.75 going to state R (so the transition rate
is 0.75 ¥ 0.1 = 0.075 ) and 0.25 going to state D (so the transition rate is
0.25 ¥ 0.1 = 0.025 ).

So the diagram for this process looks like this:

0.075
NB R

0.2
0.025
D

(iii) Determine the Kolmogorov forward equations

The generator matrix for this process (with the states in the order NB, R, D) is:

$$\begin{pmatrix} -0.1 & 0.075 & 0.025 \\ 0 & -0.2 & 0.2 \\ 0 & 0 & 0 \end{pmatrix}$$


Since this is a time-homogeneous process, we can determine the


Kolmogorov forward differential equations by adapting the general formula
pij¢ (t ) = Â pik (t ) mkj to the notation used in the question.
k

¢ (t ) = Â Pk (t ) mk ,NB = PNB (t ) mNB,NB + PR (t ) mR,NB + PD (t ) mD,NB


So: PNB
k
= PNB (t ) ¥ ( -0.1) + PR (t ) ¥ 0 + PD (t ) ¥ 0
ie ¢ (t ) = -0.1PNB (t )
PNB

Also:

PR¢ (t ) = Â Pk (t ) mk ,R = PNB (t ) mNB,R + PR (t ) mR,R + PD (t ) mD,R


k
= PNB (t ) ¥ 0.075 + PR (t ) ¥ ( -0.2) + PD (t ) ¥ 0

ie PR¢ (t ) = 0.075PNB (t ) - 0.2PR (t )

and:

PD¢ (t ) = Â Pk (t ) mk ,D = PNB (t ) mNB,D + PR (t ) mR,D + PD (t ) mD,D


k
= PNB (t ) ¥ 0.025 + PR (t ) ¥ 0.2 + PD (t ) ¥ 0

ie PD¢ (t ) = 0.025PNB (t ) + 0.2PR (t )

(iv) Solve the differential equations

The boundary condition for the first equation, $P_{NB}'(t) = -0.1P_{NB}(t)$, is $P_{NB}(0) = 1$, since we know with certainty that the phone is not broken initially.

This equation can be rearranged in the form:

$$\frac{P_{NB}'(t)}{P_{NB}(t)} = \frac{d}{dt}\log P_{NB}(t) = -0.1$$

Integrating gives:

$$\log P_{NB}(t) = -0.1t + c$$

When $t = 0$, this is:

$$\log \underbrace{P_{NB}(0)}_{=1} = 0 + c \quad\Rightarrow\quad c = 0$$

So:

$$\log P_{NB}(t) = -0.1t \qquad\text{and}\qquad P_{NB}(t) = e^{-0.1t}$$

The boundary condition for the second equation, $P_R'(t) = 0.075P_{NB}(t) - 0.2P_R(t)$, is $P_R(0) = 0$, since the phone cannot be in the Repaired state initially.

Substituting the formula we have just found for $P_{NB}(t)$ gives:

$$P_R'(t) = 0.075e^{-0.1t} - 0.2P_R(t)$$

We can solve this using an integrating factor. We first rearrange it in the form:

$$P_R'(t) + 0.2P_R(t) = 0.075e^{-0.1t}$$

The integrating factor required is $\exp\left(\int 0.2\,dt\right) = e^{0.2t}$. Multiplying through by the integrating factor gives:

$$e^{0.2t}P_R'(t) + 0.2e^{0.2t}P_R(t) = 0.075e^{0.1t}, \qquad\text{ie}\qquad \frac{d}{dt}\left[e^{0.2t}P_R(t)\right] = 0.075e^{0.1t}$$

Integrating gives:

$$e^{0.2t}P_R(t) = 0.075\,\frac{e^{0.1t}}{0.1} + c = 0.75e^{0.1t} + c$$

When $t = 0$, this is:

$$e^0\underbrace{P_R(0)}_{=0} = 0.75 + c \quad\Rightarrow\quad c = -0.75$$

So:

$$e^{0.2t}P_R(t) = 0.75e^{0.1t} - 0.75 \qquad\text{and}\qquad P_R(t) = 0.75e^{-0.1t} - 0.75e^{-0.2t} = 0.75\left(e^{-0.1t} - e^{-0.2t}\right)$$
(v) Calculate the probability that a phone has not been discarded

Since the process must be in one of the three states at time $t$:

$$P_{NB}(t) + P_R(t) + P_D(t) = 1$$

So the probability that the phone has not been discarded by time $t$ is:

$$1 - P_D(t) = P_{NB}(t) + P_R(t) = e^{-0.1t} + 0.75\left(e^{-0.1t} - e^{-0.2t}\right) = 1.75e^{-0.1t} - 0.75e^{-0.2t}$$
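Because the transition rates are constant, the whole matrix of transition probabilities can also be obtained as a matrix exponential of the generator, which checks all of these formulae at once. A sketch using SciPy, with t = 5 chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.1, 0.075, 0.025],    # states: NB, R, D
              [ 0.0, -0.2,  0.2  ],
              [ 0.0,  0.0,  0.0  ]])

t = 5.0
P = expm(Q * t)                        # P[i, j] = probability of being in j at time t, starting from i

p_nb, p_r = P[0, 0], P[0, 1]
print(p_nb, np.exp(-0.1 * t))                                   # both approx 0.6065
print(p_r, 0.75 * (np.exp(-0.1 * t) - np.exp(-0.2 * t)))        # both approx 0.1789
print(p_nb + p_r, 1.75 * np.exp(-0.1 * t) - 0.75 * np.exp(-0.2 * t))
```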

26 Subject CT4 September 2015 Question 9

(i) Likelihood

The likelihood function for this process is:

$$L \propto e^{-(\mu_{12}+\mu_{13}+\mu_{14})t_1}\,e^{-(\mu_{21}+\mu_{23}+\mu_{24})t_2}\,\mu_{12}^{n_{12}}\,\mu_{13}^{n_{13}}\,\mu_{14}^{n_{14}}\,\mu_{21}^{n_{21}}\,\mu_{23}^{n_{23}}\,\mu_{24}^{n_{24}}$$

where:
 mij denotes the transition rate from state i to state j
 nij denotes the observed number of transitions from state i to state j
 ti denotes the observed waiting time in state i .


(ii) Derive the maximum likelihood estimator

The log-likelihood is:

log L = -( m12 + m13 + m14 )t1 + n13logm13 + terms not involving m13

Differentiating with respect to m13 gives:


∂ n
log L = -t1 + 13
∂m13 m13

To obtain the MLE we set this equal to zero and rearrange to get:

n13
mˆ13 =
t1

To verify that this does indeed give a maximum, we can check that the
second derivative is negative:

∂2 n13
2
log L = - 2
<0
∂m13 m13

(iii) Testing the significance of the difference between the death rates

We are testing the hypotheses:

H0 : m13 = m23 versus H1 : m13 > m23

The approximate distributions of m13 and m 23 , the estimators of m13 and


m23 , for large samples are:

Ê m13 ˆ Ê m ˆ
m13 ~ N Á m13 , and m 23 ~ N Á m23 , 23 ˜
Ë t1 ˜¯ Ë t2 ¯

The distribution of the difference between these estimators is therefore:

Ê m13 m23 ˆ
m13 - m 23 ~ N Á m13 - m23 , +
Ë t1 t2 ¯˜


Under H0 , m13 - m23 = 0 , so we would have:

Ê m13 m23 ˆ
m13 - m 23 ~ N Á 0, +
Ë t1 t2 ¯˜

Standardising this gives:

m13 - m 23
Z= ~ N (0,1)
m13 m23
+
t1 t2

The numerical estimates of $\mu_{13}$ and $\mu_{23}$ are:

$$\hat{\mu}_{13} = \frac{n_{13}}{t_1} = \frac{178}{14{,}392} = 0.012368 \qquad\text{and}\qquad \hat{\mu}_{23} = \frac{n_{23}}{t_2} = \frac{190}{18{,}109} = 0.010492$$

Substituting these into the result above gives:

$$z = \frac{0.012368 - 0.010492}{\sqrt{\dfrac{0.012368}{14{,}392} + \dfrac{0.010492}{18{,}109}}} = 1.564$$

As this exceeds 1.2816, the 90th percentile of the N (0,1) distribution, we


reject H0 at the 10% level (ie with 90% confidence). We conclude that
obese people do experience a higher mortality rate for heart disease than
non-obese people.


27 Subject CT4 April 2016 Question 7

(i) Condition for a time-inhomogeneous process

A Markov jump process is time-inhomogeneous if the transition rates do not


remain constant over time.

(ii) Difficulties with time-inhomogeneous rates

Difficulties include:
 Estimating the transition rates is more difficult, as we have to estimate a
function for each type of transition, rather than just a single value.
 We will also need more data to estimate these functions accurately.
 Solving the equations for the process, eg the Kolmogorov differential
equations, will be more difficult algebraically.
 If the equations cannot be solved directly, it may be necessary to
employ Monte-Carlo techniques to apply the model.

(iii) Probability that the child remains Good

The probability of remaining in the Good state up to time $t$ is:

$$p_{GG}(t) = \exp\left(-\int_0^t \mu_{GN}(s)\,ds\right) = \exp\left(-\int_0^t (0.2 + 0.04s)\,ds\right) = \exp\left(-0.2t - 0.02t^2\right)$$
(iv) Time for a 50% chance of being naughty

This is the median time spent in the Good state, which will satisfy the equation:

$$p_{GG}(t) = 0.5 \quad\Rightarrow\quad \exp\left(-0.2t - 0.02t^2\right) = 0.5$$

Taking logs:

$$-0.2t - 0.02t^2 = \ln 0.5 \quad\Rightarrow\quad 0.02t^2 + 0.2t + \ln 0.5 = 0$$

Applying the quadratic formula gives:

$$t = \frac{-0.2 \pm \sqrt{(0.2)^2 - 4(0.02)\ln 0.5}}{2(0.02)} = \frac{-0.2 \pm 0.309}{0.04} = 2.72 \;(\text{or } -12.72)$$

So the required time is 2.72 hours, or 2 hours 43 minutes.
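The quadratic can be solved directly, for example with NumPy:

```python
import numpy as np

# 0.02 t^2 + 0.2 t + ln(0.5) = 0
roots = np.roots([0.02, 0.2, np.log(0.5)])
t = roots[roots > 0][0]

print(t)                                        # approx 2.72 hours
print(np.exp(-0.2 * t - 0.02 * t**2))           # check: approx 0.5
```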

(v) Differential equation

Although this part of the question says ‘Derive’ (which usually means ‘Derive
from first principles’), it is only worth 2 marks, which is too few to suggest
that we have to give a full derivation from first principles.

This is a time-inhomogeneous model since the transition rates vary over


time. The probability PG (t ) given in the question is the same as the
probability PGG (0, t ) .

For a time-inhomogeneous Markov jump process, the matrix form of the


Kolmogorov forward differential equations is:


P (s, t ) = P (s, t ) A(t )
∂t

where P (s, t ) denotes the matrix of transition probabilities over the interval
(s, t ) and A(t ) is the generator matrix at time t .

For this model, we have:

$$\frac{\partial}{\partial t}\begin{pmatrix} p_{GG}(s,t) & p_{GN}(s,t) \\ p_{NG}(s,t) & p_{NN}(s,t) \end{pmatrix} = \begin{pmatrix} p_{GG}(s,t) & p_{GN}(s,t) \\ p_{NG}(s,t) & p_{NN}(s,t) \end{pmatrix}\begin{pmatrix} -(0.2 + 0.04t) & 0.2 + 0.04t \\ 0.4 - 0.04t & -(0.4 - 0.04t) \end{pmatrix}$$

and hence:

∂ ∂
pG (t ) = pGG (0, t )
∂t ∂t
= pGG (0, t ) ¥ ÈÎ -(0.2 + 0.04t )˘˚ + pGN (0, t ) ¥ (0.4 - 0.04t )


Also:

PGG (0, t ) + PGN (0, t ) = 1 fi PGN (0, t ) = 1 - PGG (0, t )

So:


p (t ) = pGG (0, t ) ¥ ÈÎ -(0.2 + 0.04t )˘˚ + (1 - pGG (0, t )) ¥ (0.4 - 0.04t )
∂t G
= pGG (0, t ) ¥ ÈÎ -(0.2 + 0.04t ) - (0.4 - 0.04t )˘˚ + 0.4 - 0.04t

= -0.6 pG (t ) + 0.4 - 0.04t

28 Subject CT4 April 2016 Question 9

(i) Transition diagram

If we abbreviate the states to H , I , DC and DO (in the same order they are
given in the question), the transition diagram for this process is:


H I

 
–

DO DC

(ii) Kolmogorov’s forward equations

If P (t ) denotes the matrix of transition probabilities for a time period of


length t , the Kolmogorov forward differential equations can be written in
matrix form as:

P ¢(t ) = P (t )A


Here $A$ is the generator matrix (with the states in the order H, I, DO, DC):

$$A = \begin{pmatrix} -\sigma-\mu & \sigma & \mu & 0 \\ 0 & -\rho & \rho-\tau & \tau \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

(iii) Likelihood

If ti denotes the observed waiting time in state i and nij denotes the
observed number of transitions from state i to state j , the likelihood is:

$$L(\mu, \rho, \sigma, \tau) = e^{-(\mu+\sigma)t_H}\,e^{-\rho t_I}\,\sigma^{n_{HI}}\,\mu^{n_{HDO}}\,(\rho-\tau)^{n_{IDO}}\,\tau^{n_{IDC}} \times C \qquad (1)$$

where $C$ is a constant.

If we substitute the data values, this becomes:

$$L(\mu, \rho, \sigma, \tau) = e^{-1{,}200(\mu+\sigma)}\,e^{-600\rho}\,\sigma^{n_{HI}}\,\mu^{10}\,(\rho-\tau)^0\,\tau^{30} \times C = e^{-1{,}200(\mu+\sigma)}\,e^{-600\rho}\,\sigma^{n_{HI}}\,\mu^{10}\,\tau^{30} \times C$$

Note that nIDO = 0 because all the 40 deaths are accounted for by the 10
deaths of healthy trees and the 30 deaths from citrus greening.

We are not told the number of trees that became affected during the study,
so the value of nHI is not known. But, since we are not interested in the
infection rate, this doesn’t affect our calculations here.

(iv) Maximum likelihood estimator

To derive the maximum likelihood estimator for t , we start from


Equation (1) and take logs to find the log-likelihood:

ln L = -( m + s )tH - r tI + nHI ln s + nHDO ln m + nIDO ln( r - t )


+nIDC ln t + ln C


Differentiating with respect to the parameter we wish to estimate gives:

$$\frac{\partial}{\partial\tau}\ln L = -\frac{n_{IDO}}{\rho - \tau} + \frac{n_{IDC}}{\tau} \qquad (2)$$

Equating this to zero gives:

$$\frac{n_{IDO}}{\rho-\tau} = \frac{n_{IDC}}{\tau} \;\Rightarrow\; n_{IDO}\,\tau = n_{IDC}(\rho - \tau) \;\Rightarrow\; (n_{IDO} + n_{IDC})\tau = n_{IDC}\,\rho \;\Rightarrow\; \tau = \frac{n_{IDC}}{n_{IDO} + n_{IDC}}\,\rho \qquad (3)$$

Since this expression for $\tau$ involves the unknown value of the parameter $\rho$, we also need to maximise with respect to $\rho$ by setting the derivative with respect to $\rho$ equal to zero:

$$\frac{\partial}{\partial\rho}\ln L = -t_I + \frac{n_{IDO}}{\rho - \tau} = 0 \;\Rightarrow\; (\rho - \tau)t_I = n_{IDO} \;\Rightarrow\; \rho = \frac{n_{IDO}}{t_I} + \tau \qquad (4)$$

Solving Equations (3) and (4) simultaneously, by substituting Equation (4) into Equation (3) and rearranging, gives:

$$\tau = \frac{n_{IDC}}{n_{IDO} + n_{IDC}}\,\rho = \frac{n_{IDC}}{n_{IDO} + n_{IDC}}\left(\frac{n_{IDO}}{t_I} + \tau\right) \;\Rightarrow\; (n_{IDO} + n_{IDC})\tau = n_{IDC}\left(\frac{n_{IDO}}{t_I} + \tau\right)$$

Cancelling the $n_{IDC}\,\tau$ term on each side then gives:

$$n_{IDO}\,\tau = n_{IDC}\,\frac{n_{IDO}}{t_I} \;\Rightarrow\; \tau = \frac{n_{IDC}}{t_I}$$


So the maximum likelihood estimate for $\tau$ is $\hat{\tau} = \dfrac{n_{IDC}}{t_I}$, and the corresponding estimator is $\dfrac{N_{IDC}}{T_I}$, where $N_{IDC}$ and $T_I$ are random variables representing the number of transitions from state I to state DC and the waiting time in state I.

We can check this is consistent with a maximum by evaluating the second derivative:

$$\frac{\partial^2}{\partial\tau^2}\ln L = -\frac{n_{IDO}}{(\rho-\tau)^2} - \frac{n_{IDC}}{\tau^2} < 0$$

So this is indeed a maximum.

(v) Estimate t

Using the formula derived in part (iv), or by general reasoning, we have:

30
tˆ = = 0.05 (in units of tree-months)
600

29 Subject CT4 September 2016 Question 3

All three types of process share the Markov property, ie the probabilities for
future states depend only on the current state (not on the previous history).

Markov chains and Markov jump chains both have a discrete time set
whereas Markov jump processes have a continuous time set.

All three types of process have a discrete state space.

Markov chains and Markov jump chains are both specified in terms of
probabilities whereas Markov jump processes are specified in terms of
transition rates.

The diagram for a Markov chain may contain ‘loops’ (movements to the
same state) whereas Markov jump processes cannot. Markov jump chains
only contain loops where there is an absorbing state.


Markov chains and Markov jump processes can be either


time-homogeneous or time-inhomogeneous whereas Markov jump chains
provide no information about the timing of the transitions.

30 Subject CT4 September 2016 Question 4

(i) Total annual premium

The annual number of claims has a Poisson( l ) distribution. So the


expected number of claims will be l . Allowing for the 50% premium
loading, the total annual premium will be:

Annual premium = 1.5 l Z

(ii) Probability the insurance company can’t pay the next claim

If the first claim occurs at time t , the insurance company’s assets


immediately before the claim has been made will be:

$$S + 1.5\lambda Z t$$

The insurer will have insufficient funds to pay this claim (which will be for an amount $Z$) if:

$$S + 1.5\lambda Z t < Z, \qquad\text{ie}\qquad t < \frac{Z - S}{1.5\lambda Z} = \frac{1}{1.5\lambda}\left(1 - \frac{S}{Z}\right)$$

So the insurer will have insufficient funds to pay the claim if the first claim occurs before time $t = \frac{1}{1.5\lambda}\left(1 - \frac{S}{Z}\right)$. The probability is:

$$P\left[T < \frac{1}{1.5\lambda}\left(1 - \frac{S}{Z}\right)\right] = F_T\left[\frac{1}{1.5\lambda}\left(1 - \frac{S}{Z}\right)\right] = 1 - \exp\left[-\lambda\,\frac{1}{1.5\lambda}\left(1 - \frac{S}{Z}\right)\right] = 1 - \exp\left[-\frac{1}{1.5}\left(1 - \frac{S}{Z}\right)\right]$$


31 Subject CT4 September 2016 Question 9

(i) Estimating the transition rates

The transition rate from state i to state j can be estimated using the
maximum likelihood method, which gives:

nij
mˆij =
ti

Here:
 nij denotes the total number of transitions from state i to state j
observed during the observation period
 ti denotes the total waiting time in state i , ie the total time spent by all
individuals in state i during the observation period.

(ii) Kolmogorov equations

The generator matrix for this process is:

$$\begin{pmatrix} -\mu & \mu \\ 0 & 0 \end{pmatrix}$$

where the rows and columns are in the order A, T, and A and T denote the Active Policy and Theft Claim states respectively.

So the Kolmogorov forward equations are:

$$P_{AA}'(t) = -\mu P_{AA}(t) \qquad\text{and}\qquad P_{AT}'(t) = \mu P_{AA}(t)$$

(iii) Expected cost of claims

By time T the number of claims will be either 0 or 1. So the expected cost


is:

0 ¥ P (0 claims) + C ¥ P (1 claim) = C PAT (T )


To find $P_{AT}(t)$, we can solve the corresponding Kolmogorov equation in part (ii), using the fact that $P_{AA}(t) + P_{AT}(t) = 1$. So:

$$P_{AT}'(t) = \mu P_{AA}(t) = \mu\left[1 - P_{AT}(t)\right] \qquad\text{or}\qquad P_{AT}'(t) + \mu P_{AT}(t) = \mu$$

The integrating factor here is $\exp\left(\int \mu\,dt\right) = e^{\mu t}$. So:

$$e^{\mu t}P_{AT}'(t) + \mu e^{\mu t}P_{AT}(t) = \mu e^{\mu t}, \qquad\text{ie}\qquad \frac{d}{dt}\left[e^{\mu t}P_{AT}(t)\right] = \mu e^{\mu t}$$

Integrating from time 0 to time $T$:

$$\left[e^{\mu t}P_{AT}(t)\right]_0^T = \int_0^T \mu e^{\mu t}\,dt \quad\Rightarrow\quad e^{\mu T}P_{AT}(T) - \underbrace{P_{AT}(0)}_{=0} = \left[e^{\mu t}\right]_0^T \quad\Rightarrow\quad e^{\mu T}P_{AT}(T) = e^{\mu T} - 1$$

Dividing by the integrating factor then gives:

$$P_{AT}(T) = 1 - e^{-\mu T}$$

So the expected cost of claims by time $T$ is:

$$C\left(1 - e^{-\mu T}\right)$$


(iv) Transition diagram for the revised process

If we denote the Lapsed state by L and the lapse rate by l , the revised
transition diagram is:


Active Theft
policy claim


Lapse

(v) Revised expected cost of claims

The revised generator matrix (with the states in the order A, T, L) is:

$$\begin{pmatrix} -\mu-\lambda & \mu & \lambda \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

The revised Kolmogorov forward equations are:

$$P_{AA}'(t) = -(\mu+\lambda)P_{AA}(t), \qquad P_{AT}'(t) = \mu P_{AA}(t) \qquad\text{and}\qquad P_{AL}'(t) = \lambda P_{AA}(t)$$

From the first equation:

$$\frac{P_{AA}'(t)}{P_{AA}(t)} = -(\mu+\lambda), \qquad\text{ie}\qquad \frac{d}{dt}\ln P_{AA}(t) = -(\mu+\lambda)$$

Integrating from time 0 to time $T$:

$$\left[\ln P_{AA}(t)\right]_0^T = -\int_0^T (\mu+\lambda)\,dt \quad\Rightarrow\quad \ln P_{AA}(T) - \ln\underbrace{P_{AA}(0)}_{=1} = -(\mu+\lambda)T$$

So:

$$P_{AA}(T) = \exp\left[-(\mu+\lambda)T\right]$$

From the second Kolmogorov equation, we then have:

$$P_{AT}'(t) = \mu P_{AA}(t) = \mu\exp\left[-(\mu+\lambda)t\right]$$

Integrating from time 0 to time $T$:

$$\left[P_{AT}(t)\right]_0^T = \int_0^T \mu\exp\left[-(\mu+\lambda)t\right]dt \quad\Rightarrow\quad P_{AT}(T) - \underbrace{P_{AT}(0)}_{=0} = \left[-\frac{\mu}{\mu+\lambda}\exp\left[-(\mu+\lambda)t\right]\right]_0^T$$

$$\Rightarrow\quad P_{AT}(T) = \frac{\mu}{\mu+\lambda}\left[1 - \exp\left[-(\mu+\lambda)T\right]\right]$$

So the revised expected cost of claims by time $T$ is:

$$C\,\frac{\mu}{\mu+\lambda}\left[1 - \exp\left[-(\mu+\lambda)T\right]\right]$$
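The integral in the final step can be confirmed symbolically, for example with SymPy:

```python
import sympy as sp

t, T, mu, lam = sp.symbols("t T mu lambda", positive=True)

p_AT = sp.integrate(mu * sp.exp(-(mu + lam) * t), (t, 0, T))
diff = p_AT - mu / (mu + lam) * (1 - sp.exp(-(mu + lam) * T))

print(sp.simplify(diff))   # 0, confirming the expression for P_AT(T)
```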


32 Subject CT4 April 2017 Question 11 (part)

(i) Transition diagram

The transition diagram looks like this:


H S

 

The states used here are:


State H : Employees who are able to work (= Healthy)
State S : Employees who are unable to work due to ill-health (= Sick)
State D : Employees who are dead.

(ii) Likelihood

If $t_i$ denotes the observed waiting time in state $i$ and $n_{ij}$ denotes the observed number of transitions from state $i$ to state $j$, the likelihood is:

$$L(\mu, \sigma, \nu, \rho) = e^{-(\mu + \sigma) t_H}\, e^{-(\nu + \rho) t_S}\, \mu^{n_{HD}}\, \sigma^{n_{HS}}\, \nu^{n_{SD}}\, \rho^{n_{SH}} \times C$$

where $C$ is a constant.

(iii) Maximum likelihood estimator

To derive the maximum likelihood estimator for $\sigma$, we start from the likelihood and take logs to find the log-likelihood:

$$\ln L = -(\mu + \sigma) t_H - (\nu + \rho) t_S + n_{HD} \ln \mu + n_{HS} \ln \sigma + n_{SD} \ln \nu + n_{SH} \ln \rho + \ln C$$


Differentiating with respect to the parameter we wish to estimate gives:

$$\frac{\partial}{\partial \sigma} \ln L = -t_H + \frac{n_{HS}}{\sigma}$$

Equating this to zero and rearranging gives:

$$\sigma = \frac{n_{HS}}{t_H}$$

We can check this is consistent with a maximum by evaluating the second derivative:

$$\frac{\partial^2}{\partial \sigma^2} \ln L = -\frac{n_{HS}}{\sigma^2} < 0$$

So this is indeed a maximum.

Hence the maximum likelihood estimate of $\sigma$ is $\hat{\sigma} = \dfrac{n_{HS}}{t_H}$. The corresponding maximum likelihood estimator is $\dfrac{N_{HS}}{T_H}$, where $N_{HS}$ is the random variable representing the number of transitions from State H to State S and $T_H$ is the random variable representing the total waiting time in State H.
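The estimates are simple ratios, so they are easy to compute once the transition counts and waiting times have been tabulated. The sketch below uses made-up counts and waiting times purely for illustration; they are not taken from the question.

    # Sketch: MLEs mu_ij_hat = n_ij / t_i for the health-sickness-death model.
    # All counts and waiting times are illustrative.
    n_HS, n_HD, n_SH, n_SD = 28, 5, 20, 3   # observed transition counts
    t_H, t_S = 652.0, 44.5                  # total waiting times in H and S (years)

    sigma_hat = n_HS / t_H    # H -> S
    mu_hat    = n_HD / t_H    # H -> D
    rho_hat   = n_SH / t_S    # S -> H
    nu_hat    = n_SD / t_S    # S -> D

    print(sigma_hat, mu_hat, rho_hat, nu_hat)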

33 Subject CT4 September 2017 Question 7

We can start from the definition of the derivative:

$$\frac{\partial}{\partial t} p_{HH}(x, t) = \lim_{h \to 0} \frac{p_{HH}(x, t+h) - p_{HH}(x, t)}{h}$$

By conditioning on the possible states at time $t$, we have:

$$p_{HH}(x, t+h) = p_{HH}(x, t)\, p_{HH}(t, t+h) + p_{HS}(x, t)\, p_{SH}(t, t+h)$$

We have excluded the third term $p_{HD}(x, t)\, p_{DH}(t, t+h)$ because $p_{DH}(t, t+h) = 0$.


From the definitions of the transition rates and the law of total probability:

$$p_{SH}(t, t+h) = h\, \rho(t, C_t) + o(h)$$

and:

$$p_{HH}(t, t+h) = 1 - h\, \sigma(t) - h\, \mu(t) + o(h)$$

where $o(h)$ represents a function for which $\lim_{h \to 0} \dfrac{o(h)}{h} = 0$.

Substituting these expressions and simplifying gives:

$$\frac{\partial}{\partial t} p_{HH}(x, t) = \lim_{h \to 0} \frac{p_{HH}(x, t)\, p_{HH}(t, t+h) + p_{HS}(x, t)\, p_{SH}(t, t+h) - p_{HH}(x, t)}{h}$$

$$= \lim_{h \to 0} \frac{1}{h} \Big\{ p_{HH}(x, t)\left[1 - h\sigma(t) - h\mu(t) + o(h)\right] + p_{HS}(x, t)\left[h\, \rho(t, C_t) + o(h)\right] - p_{HH}(x, t) \Big\}$$

$$= \lim_{h \to 0} \left\{ p_{HH}(x, t)\left[-\sigma(t) - \mu(t)\right] + p_{HS}(x, t)\, \rho(t, C_t) + \frac{o(h)}{h} \right\}$$

$$= p_{HH}(x, t)\left[-\sigma(t) - \mu(t)\right] + p_{HS}(x, t)\, \rho(t, C_t)$$

(ii) Probability of being in the Healthy state throughout

The total of the transition rates out of the Healthy state at time $t$ equals $\sigma(t) + \mu(t)$. The probability of remaining in the Healthy state from time 0 to time $t$ can be found by integrating the negative of this quantity over the time interval $(0, t)$ and applying an exponential function. This gives:

$$p_{HH}(0, t) = \exp\left(-\int_0^t \left\{\sigma(s) + \mu(s)\right\} ds\right)$$

(iii) How to construct integrated Kolmogorov equations

As an example, consider the derivation of the backward form of the


integrated Kolmogorov equation for the probability pHS (0, t ) .


In order to start in state H at time 0 and finish in state S at time t , there must
be at least one jump between time 0 and time t .

Consider the first such jump, and assume that it occurs during the short time
interval (w , w + dw ) . Here, this jump can only be from state H to state S.

The sequence of events must therefore be:

•  remain in state H from time 0 to time $w$

•  jump from state H to state S during the short period $(w, w + dw)$

•  start and finish in state S over the remaining period between time $w$ and time $t$ (possibly not moving at all).

This is illustrated in the diagram below:

[Timeline: wait in State H from time 0 to time $w$; first jump H → S at time $w$; then go from State S to State S between times $w$ and $t$.]

To finish, we note that the first jump could occur at any time between 0 and $t$. So we need to integrate over all possible values of $w$ to obtain the final equation:

$$p_{HS}(0, t) = \int_0^t p_{HH}(0, w)\, \sigma(w)\, p_{SS}(w, t)\, dw = \int_0^t \exp\left(-\int_0^w \left(\sigma(u) + \mu(u)\right) du\right) \sigma(w)\, p_{SS}(w, t)\, dw$$
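When all the transition rates are constant, the integrated equation can be evaluated numerically and checked against the matrix exponential of the generator. The sketch below does this for illustrative rates $\sigma$, $\mu$, $\rho$, $\nu$ and horizon $t$; these values are assumptions, not part of the question.

    # Sketch: evaluate p_HS(0,t) from the integrated equation (constant rates)
    # and compare with the matrix exponential. All numerical values are illustrative.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad

    sigma, mu, rho, nu, t = 0.3, 0.05, 0.6, 0.1, 2.0

    Q = np.array([[-(sigma + mu), sigma,       mu ],   # generator for states (H, S, D)
                  [rho,           -(rho + nu), nu ],
                  [0.0,           0.0,         0.0]])

    def p_SS(w):
        return expm(Q * (t - w))[1, 1]   # probability of being in S at t given S at w

    # p_HS(0,t) = int_0^t exp(-(sigma+mu)*w) * sigma * p_SS(w) dw
    value, _ = quad(lambda w: np.exp(-(sigma + mu) * w) * sigma * p_SS(w), 0.0, t)

    print(value, expm(Q * t)[0, 1])   # the two numbers should agree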

(iv) Difference between forward and backward equations

In the backward form we condition on the first jump during the time period,
whereas in the forward form we condition on the last jump.


(v) Two errors

The correct form of this equation is:

$$P\left[X_t = H \mid X_s = S,\, C_s = w\right] = \int_s^t e^{-\int_s^y \left(\rho(u,\, w-s+u) + \nu(u,\, w-s+u)\right) du}\, \rho(y,\, w-s+y)\, p_{HH}(y, t)\, dy$$

Comparing this with the version in the question reveals that it contains some errors:

•  the lower limits for both the outer and inner integrals should be $s$ rather than 0

•  the transition rate for the jump should be $\rho$ rather than $\nu$.


FACTSHEET

This factsheet summarises the main methods, formulae and information


required for tackling questions on the topics discussed in this booklet.

Definition of Markov jump process

A Markov process with a continuous time set and discrete state space is
called a Markov jump process.

Poisson processes

The Poisson process is one of the simplest examples of a Markov jump process. An integer-valued process $\{N_t,\, t \ge 0\}$, with filtration $\{F_t,\, t \ge 0\}$, is a Poisson process if any of the following four equivalent conditions hold:

•  $\{N_t,\, t \ge 0\}$ has stationary, independent increments, and for each $t$, $N_t$ has a Poisson distribution with parameter $\lambda t$.

•  $\{N_t,\, t \ge 0\}$ is a Markov jump process with independent increments and transition probabilities over a short time period of length $h$ given by:

$$P\left[N_{t+h} - N_t = 1 \mid F_t\right] = \lambda h + o(h)$$
$$P\left[N_{t+h} - N_t = 0 \mid F_t\right] = 1 - \lambda h + o(h)$$
$$P\left[N_{t+h} - N_t \ne 0, 1 \mid F_t\right] = o(h)$$

•  The holding times $T_0, T_1, \dots$ of $\{N_t,\, t \ge 0\}$ are independent $\text{Exp}(\lambda)$ random variables and $N_{T_0 + T_1 + \cdots + T_{n-1}} = n$.

•  $\{N_t,\, t \ge 0\}$ is a Markov jump process with independent increments and transition rates given by:

$$\mu_{ij} = \begin{cases} -\lambda & \text{if } j = i \\ \lambda & \text{if } j = i + 1 \\ 0 & \text{otherwise} \end{cases}$$
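The third condition gives a direct way of simulating a Poisson process: generate independent $\text{Exp}(\lambda)$ holding times and count how many arrivals fall before time $t$. A minimal simulation sketch follows; the rate, horizon and number of simulations are illustrative choices.

    # Sketch: simulate N_t by summing Exp(lambda) holding times and check that
    # the mean and variance of the counts are close to lambda * t.
    import numpy as np

    rng = np.random.default_rng(1)
    lam, t, n_sims = 2.0, 5.0, 20_000

    counts = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        total, n = 0.0, 0
        while True:
            total += rng.exponential(1 / lam)   # next holding time
            if total > t:
                break
            n += 1
        counts[s] = n

    print(counts.mean(), counts.var())   # both should be close to lam * t = 10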


Thinning a Poisson process

When events in a Poisson process are of different types and each type
occurs at random with a fixed probability, then events of each particular type
form a thinned Poisson process. These thinned processes are independent
of one another and for each thinned process the rate is equal to the original
rate multiplied by the probability of getting that type of event.
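A quick way to see the thinning result numerically is to mark each event of a Poisson count over $(0, t)$ as a particular type with probability $p$ and check that the thinned counts have mean $\lambda p t$. The probability $p$, rate $\lambda$ and horizon $t$ below are illustrative.

    # Sketch: thinning a Poisson process. Type-A events should form a
    # Poisson process of rate lam * p. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    lam, p, t, n_sims = 3.0, 0.25, 10.0, 20_000

    type_a = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        n_events = rng.poisson(lam * t)            # total events by time t
        type_a[s] = rng.binomial(n_events, p)      # each event is type A with prob p

    print(type_a.mean(), type_a.var(), lam * p * t)   # all close to 7.5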

Fitting a Poisson process model

Fitting a Poisson process model involves estimating the parameter $\lambda$. Testing the model involves checking that the waiting times are exponential and checking that the numbers of events are consistent with the rate $\lambda$.

Transition probabilities

For a Markov jump process $X_t$, the transition probabilities $p_{ij}(s, t)$ are defined by:

$$p_{ij}(s, t) = P\left(X_t = j \mid X_s = i\right)$$

If the process is time-homogeneous, these probabilities depend only on the length of the interval $t - s$.

Chapman-Kolmogorov equations

The Chapman-Kolmogorov equations are:

$$p_{ij}(s, t) = \sum_k p_{ik}(s, u)\, p_{kj}(u, t)$$

Here $u$ is any intermediate time between $s$ and $t$ (possibly equal to $s$ or $t$) that is convenient for the calculation.
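For a time-homogeneous process the transition matrix over an interval of length $u$ is $e^{Qu}$, so the Chapman-Kolmogorov equations reduce to a matrix product. A small numerical check, with an illustrative two-state generator, is sketched below.

    # Sketch: Chapman-Kolmogorov check P(s,t) = P(s,u) P(u,t) for a
    # time-homogeneous two-state process with illustrative rates.
    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-0.3, 0.3],
                  [0.1, -0.1]])

    s, u, t = 0.0, 1.5, 4.0
    P_su, P_ut, P_st = expm(Q * (u - s)), expm(Q * (t - u)), expm(Q * (t - s))

    print(np.allclose(P_su @ P_ut, P_st))   # True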

Transition rates

The transition rates of a Markov jump process are:

$$\mu_{ij}(t) = \left.\frac{\partial}{\partial t} p_{ij}(s, t)\right|_{s = t} = \lim_{h \to 0^+} \frac{p_{ij}(t, t+h) - p_{ij}(t, t)}{h}$$

For $i \ne j$ this reduces to $\lim_{h \to 0^+} p_{ij}(t, t+h) / h$, since $p_{ij}(t, t) = 0$.


Sometimes the notation $\sigma_{ij}(t)$ is used instead of $\mu_{ij}(t)$ to denote the transition rate from State $i$ to State $j$ at time $t$. This is the notation used in the formulae given in the Tables.

The transition rates are constant for a time-homogeneous process.

Generator matrix

The generator matrix of a Markov jump process is the matrix of transition rates. The rows of the generator matrix sum to 0 since $\mu_{ii}(t) = -\sum_{j \ne i} \mu_{ij}(t)$.

Differential equations

For a time-homogeneous Markov jump process, Kolmogorov’s forward differential equation is:

$$\frac{d}{dt} p_{ij}(t) = \sum_k p_{ik}(t)\, \mu_{kj}$$

For a time-homogeneous Markov jump process, Kolmogorov’s backward differential equation is:

$$\frac{d}{dt} p_{ij}(t) = \sum_k \mu_{ik}\, p_{kj}(t)$$

For a time-inhomogeneous Markov jump process, Kolmogorov’s forward differential equation is:

$$\frac{\partial}{\partial t} p_{ij}(s, t) = \sum_k p_{ik}(s, t)\, \mu_{kj}(t)$$

For a time-inhomogeneous Markov jump process, Kolmogorov’s backward differential equation is:

$$\frac{\partial}{\partial s} p_{ij}(s, t) = -\sum_k \mu_{ik}(s)\, p_{kj}(s, t)$$

Note: this is the only one of the Kolmogorov differential equations that contains a minus sign.
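For small state spaces these equations can be integrated numerically. The sketch below solves the time-homogeneous forward equation $\frac{d}{dt}P(t) = P(t)\,Q$ with an ODE solver and compares the result with $e^{Qt}$; the three-state generator and time horizon are illustrative.

    # Sketch: integrate the Kolmogorov forward equation numerically and
    # compare with expm(Q t). The generator is illustrative.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import expm

    Q = np.array([[-0.5, 0.4, 0.1],
                  [0.3, -0.6, 0.3],
                  [0.0,  0.0, 0.0]])

    def forward(t, y):
        P = y.reshape(3, 3)
        return (P @ Q).ravel()          # dP/dt = P(t) Q

    t_end = 2.0
    sol = solve_ivp(forward, (0.0, t_end), np.eye(3).ravel(), rtol=1e-8, atol=1e-10)
    P_numeric = sol.y[:, -1].reshape(3, 3)

    print(np.allclose(P_numeric, expm(Q * t_end), atol=1e-6))   # True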


Waiting times and occupancy probabilities

For a time-homogeneous Markov jump process, the probability of remaining in State $i$ throughout an interval of length $t$ is:

$$p_{ii}(t) = e^{-\lambda_i t}$$

where $\lambda_i = \sum_{k \ne i} \mu_{ik} = -\mu_{ii}$ is the total force of transition out of State $i$. So the holding time in State $i$ follows an $\text{Exp}(\lambda_i)$ distribution. The expected holding time in State $i$ is $1 / \lambda_i$.

For a time-inhomogeneous Markov jump process, the probability of remaining in State $i$ throughout the interval $(s, t)$ is:

$$p_{ii}(s, t) = \exp\left(-\int_s^t \lambda_i(u)\, du\right)$$

Given that a Markov jump process is in State $i$ at time $s$ and stays there until time $s + w$, the probability that it goes into State $j$ when it leaves State $i$ is:

$$\frac{\mu_{ij}(s + w)}{\lambda_i(s + w)} = \frac{\text{force of transition from State } i \text{ to State } j \text{ at time } s + w}{\text{total force of transition out of State } i \text{ at time } s + w}$$

Expected time to reach State k

To find the expected time to reach a given state, State $k$, starting from State $i$, apply the following formula recursively:

$$m_i = \frac{1}{\lambda_i} + \sum_{j \ne i, k} \frac{\mu_{ij}}{\lambda_i}\, m_j$$

This formula is on page 38 of the Tables. It only works for processes that are time-homogeneous.
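In practice the recursion is just a small system of linear equations in the unknown expected times $m_i$. The sketch below applies it to an illustrative healthy-sick-dead model to find the expected time to reach the Dead state; the rates are assumptions chosen for the example.

    # Sketch: expected time to reach state D from states H and S using
    # m_i = 1/lambda_i + sum_{j != i,k} (mu_ij / lambda_i) m_j. Rates are illustrative.
    import numpy as np

    sigma, mu, rho, nu = 0.3, 0.05, 0.6, 0.1    # H->S, H->D, S->H, S->D
    lam_H, lam_S = sigma + mu, rho + nu

    # m_H = 1/lam_H + (sigma/lam_H) * m_S
    # m_S = 1/lam_S + (rho/lam_S)   * m_H
    A = np.array([[1.0,          -sigma / lam_H],
                  [-rho / lam_S,  1.0          ]])
    b = np.array([1.0 / lam_H, 1.0 / lam_S])
    m_H, m_S = np.linalg.solve(A, b)

    print(m_H, m_S)   # expected times to reach D, starting from H and from S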

Jump chain

The jump chain of a Markov process is the sequence of states that the
process enters. The time spent in each state is irrelevant.


Estimating transition rates

For a time-homogeneous Markov jump process, the maximum likelihood estimate of the transition rate $\mu_{ij}$, $i \ne j$, is:

$$\hat{\mu}_{ij} = \frac{n_{ij}}{t_i} = \frac{\text{number of observed transitions from State } i \text{ to State } j}{\text{total observed waiting time in State } i}$$

The maximum likelihood estimator of $\mu_{ij}$ has the following properties:

•  it is asymptotically normally distributed

•  it is asymptotically unbiased (ie if $\hat{\theta}$ is an estimator of $\theta$, then $E(\hat{\theta}) \to \theta$ as the amount of data increases)

•  asymptotically, its variance is given by the Cramér-Rao lower bound (CRLB). The CRLB is given on page 23 of the Tables.

$\mu_{ii}$ can be estimated by $\hat{\mu}_{ii} = -\sum_{j \ne i} \hat{\mu}_{ij}$.

Alternatively, the estimation can be done in two stages as follows (a short numerical sketch follows the steps):

(a) Estimate $\lambda_i$ by setting $1 / \hat{\lambda}_i$ equal to the sample mean of the duration of the visits to State $i$.

(b) Set $\hat{p}_{ij} = \dfrac{n_{ij}}{\sum_k n_{ik}}$ where $n_{ij}$ is the number of direct transitions from State $i$ to State $j$. Then estimate $\mu_{ij}$ by $\hat{\mu}_{ij} = \hat{\lambda}_i\, \hat{p}_{ij}$.
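When every observed visit to State $i$ ends with a transition, the two-stage route gives exactly the same answer as the direct estimate $n_{ij} / t_i$. The sketch below illustrates this with made-up visit durations and destinations for a single state; all of the data values are assumptions.

    # Sketch: direct MLE versus the two-stage estimate for one state i.
    # The visit durations and destination labels are illustrative only.
    import numpy as np

    durations_i = np.array([0.8, 2.1, 0.4, 1.3, 0.9])     # observed visit lengths in state i
    dest = np.array(["j", "j", "other", "j", "other"])    # state entered at the end of each visit

    # Direct MLE: mu_ij_hat = n_ij / t_i
    t_i = durations_i.sum()
    n_ij = np.sum(dest == "j")
    mu_ij_direct = n_ij / t_i

    # Two-stage: lambda_i from the mean visit length, then split using p_ij_hat
    lam_i_hat = 1.0 / durations_i.mean()
    p_ij_hat = n_ij / len(dest)
    mu_ij_two_stage = lam_i_hat * p_ij_hat

    print(mu_ij_direct, mu_ij_two_stage)   # the same estimate (up to rounding)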

Integral equations

For a time-inhomogeneous Markov jump process, Kolmogorov’s forward integral equation is:

$$p_{ij}(s, t) = \sum_{k \ne j} \int_0^{t-s} p_{ik}(s, t - w)\, \mu_{kj}(t - w)\, p_{jj}(t - w, t)\, dw \qquad i \ne j$$

If $i = j$, the probability $p_{ii}(s, t)$ of remaining in State $i$ throughout $(s, t)$ must be added on.


For a time-inhomogeneous Markov jump process, Kolmogorov’s backward integral equation is:

$$p_{ij}(s, t) = \sum_{k \ne i} \int_0^{t-s} p_{ii}(s, s + w)\, \mu_{ik}(s + w)\, p_{kj}(s + w, t)\, dw \qquad i \ne j$$

If $i = j$, the probability $p_{ii}(s, t)$ of remaining in State $i$ throughout $(s, t)$ must be added on.

Testing a model

A model can be tested by carrying out a number of possible tests, eg a test


to see whether the holding times in each state are exponentially distributed,
or a test to see whether the underlying jump chain really does satisfy the
Markov property.
