Difference Equations

Weijie Chen
Department of Political and Economics Studies, University of Helsinki
16 August 2011

Contents

1 First Order Difference Equation
  1.1 Iterative Method
  1.2 General Method
    1.2.1 One Example
2 Second-Order Difference Equation
  2.1 Complementary Solution
  2.2 Particular Solutions
  2.3 One Example
3 pth-Order Difference Equation
  3.1 Iterative Method
  3.2 Analytical Solution
    3.2.1 Distinct Real Eigenvalues
    3.2.2 Distinct Complex Eigenvalues
    3.2.3 Repeated Eigenvalues
4 Lag Operator
  4.1 pth-order difference equation with lag operator
5 Appendix
  5.1 MATLAB code


Abstract

Difference equations are close cousins of differential equations; they share a remarkable similarity, as you will soon find out. So if you have learned differential equations, you have a rather nice head start. Conventionally we study differential equations first and difference equations second, not simply because it is better to study them in that order, but mainly because difference equations have a naturally stronger bond with computational science: sometimes we even need to turn a differential equation into its discrete version, a difference equation, so that it can be simulated in MATLAB. Difference equations are always the first lesson of any advanced time series course, where they largely overshadow their econometric sibling, the lag operator. That is because a difference equation can be expressed in matrix form, which tremendously increases its power; you will see its strikingly powerful application in state-space models and the Kalman filter.

1 First Order Difference Equation

Difference equations emerge because we need to deal with discrete-time models, which are more realistic when we study econometrics: time series datasets are inherently discrete. First we will discuss the iterative method, which is the topic of the first chapter of nearly every time series textbook. In the preface of Enders (2004) [5]: 'In my experience, this material (difference equations) and a knowledge of regression analysis is sufficient to bring students to the point where they are able to read the professional journals and embark on a serious applied study.' Although I do not fully agree with his optimism, I do concur that knowledge of difference equations is the key to all further study of time series and advanced macroeconomic theory. A simple difference equation is a dynamic model describing a time path of evolution, and it highly resembles a differential equation; its solution should be a function of $t$, completely free of terms such as $y_{t+1}$ or $y_t$, so that a time instant $t$ exactly locates the value of the variable.

1.1 Iterative Method

This method is also called recursive substitution; basically it means that if you know $y_0$, you know $y_1$, and the rest of the $y_t$ can be expressed by a recursive relation. We start with a simple example which appears in nearly every time series textbook,

$$y_t = a y_{t-1} + w_t \tag{1}$$


where

$$w = \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_t \end{bmatrix}$$

$w$ is a deterministic vector; later we will drop this assumption when we study stochastic difference equations, but for the time being, for simplicity, we assume $w$ is nonstochastic. Notice that

$$\begin{aligned} y_0 &= a y_{-1} + w_0 \\ y_1 &= a y_0 + w_1 \\ y_2 &= a y_1 + w_2 \\ &\;\;\vdots \\ y_t &= a y_{t-1} + w_t \end{aligned}$$

Assume that we know $y_{-1}$ and $w$. Thus, substitute $y_0$ into $y_1$,

$$y_1 = a(a y_{-1} + w_0) + w_1 = a^2 y_{-1} + a w_0 + w_1$$

Now we have $y_1$; following this procedure, substitute $y_1$ into $y_2$,

$$y_2 = a(a^2 y_{-1} + a w_0 + w_1) + w_2 = a^3 y_{-1} + a^2 w_0 + a w_1 + w_2$$

Again, substitute $y_2$ into $y_3$,

$$y_3 = a(a^3 y_{-1} + a^2 w_0 + a w_1 + w_2) + w_3 = a^4 y_{-1} + a^3 w_0 + a^2 w_1 + a w_2 + w_3$$

I trust you are able to perceive the pattern of the dynamics,

$$y_t = a^{t+1} y_{-1} + \sum_{i=0}^{t} a^i w_{t-i}$$
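The pattern is easy to check numerically. A minimal sketch (the coefficient $a$, the initial value, and the $w$ sequence below are made up purely for illustration, not taken from the notes): iterate the recursion and compare with the closed form.

```python
# Iterate y_t = a*y_{t-1} + w_t and compare with the closed form
# y_t = a^(t+1)*y_{-1} + sum_{i=0}^{t} a^i * w_{t-i}.
a = 0.5
y_minus1 = 2.0
w = [0.3, -0.1, 0.7, 0.2, 0.5]      # deterministic w_0 .. w_4 (made up)

# Recursive substitution
y = y_minus1
path = []
for wt in w:
    y = a * y + wt
    path.append(y)                  # path[t] is y_t

# Closed form at t = 4
t = 4
closed = a ** (t + 1) * y_minus1 + sum(a ** i * w[t - i] for i in range(t + 1))
assert abs(path[t] - closed) < 1e-12
```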

Actually, what we have done is not the simplest version of a difference equation. To show you the outrageously simple version, let us run through the next example at lightning speed. A homogeneous difference equation,¹

$$m y_t - n y_{t-1} = 0$$

¹ If this is the first time you have heard of this, refer to my notes on differential equations [2].


To rearrange it in a familiar manner,

$$y_t = \frac{n}{m} y_{t-1}$$

As the recursive method implies, if we have an initial value $y_0$,² then

$$y_1 = \frac{n}{m} y_0, \qquad y_2 = \frac{n}{m} y_1, \qquad \ldots, \qquad y_t = \frac{n}{m} y_{t-1}$$

Substitute $y_1$ into $y_2$,

$$y_2 = \frac{n}{m}\left(\frac{n}{m} y_0\right) = \left(\frac{n}{m}\right)^2 y_0$$

Then substitute $y_2$ into $y_3$,

$$y_3 = \frac{n}{m}\left(\frac{n}{m}\right)^2 y_0 = \left(\frac{n}{m}\right)^3 y_0$$

Well, the pattern is clear, and the solution is

$$y_t = \left(\frac{n}{m}\right)^t y_0$$

If we denote $(n/m)^t$ as $b^t$ and $y_0$ as $A$, it becomes $A b^t$; this is the counterpart of the solution $A e^{rt}$ of a first-order differential equation. Both play a fundamental role in solving differential or difference equations. Notice that the solution is a function of $t$; here only $t$ is a variable, while $y_0$ is an initial value, a known constant.

1.2 General Method

As you have no doubt guessed, the general solution of a difference equation will also be the counterpart of its differential version,

$$y = y_p + y_c$$

where $y_p$ is the particular solution and $y_c$ is the complementary solution.³

² For the sake of mathematical convenience, we assume an initial value $y_0$ rather than $y_{-1}$ this time, but the essence is identical.
³ All this terminology is fully explained in my notes on differential equations.

The process will be clear with the help of an example,

$$y_{t+1} + a y_t = c$$

We follow the standard procedure: find the complementary solution first, which is the solution of the corresponding homogeneous difference equation,

$$y_{t+1} + a y_t = 0$$

With the knowledge of the last section, we can try a solution of the form $A b^t$. When I say 'try', it does not mean we randomly guess: we do not need to perform the crude iterative method every time, since we can make use of the solution of the previously solved equation. So

$$y_{t+1} + a y_t = A b^{t+1} + a A b^t = 0$$

Cancel the common factor $A b^t$,

$$b + a = 0 \quad\Rightarrow\quad b = -a$$

If $b = -a$, this solution works, so the complementary solution is

$$y_c = A b^t = A(-a)^t$$

Next step: find the particular solution of $y_{t+1} + a y_t = c$. The most intriguing part comes now: to make the equation hold, we can choose any $y_t$ that satisfies it. Perhaps to your surprise, we can even choose $y_t = k$ for $-\infty < t < \infty$, which is just a constant time series; every period we have the same value $k$. So

$$k + a k = c \quad\Rightarrow\quad k = y_p = \frac{c}{1+a}$$

You can easily notice that for this solution to work we need $a \neq -1$. The question then is simply: what if $a = -1$? Of course $y_p$ is not defined in that case; we have to change the form of the solution. Here we use $y_t = kt$, similar to the trick we used for differential equations,

$$k(t+1) + a k t = c \quad\Rightarrow\quad k = \frac{c}{t + 1 + at}$$


and because $a = -1$, the denominator is $t + 1 - t = 1$, so $k = c$.

But this time $y_p = kt$, so $y_p = ct$; it is still a function of $t$. Adding $y_p$ and $y_c$ together,

$$y_t = A(-a)^t + \frac{c}{1+a} \qquad \text{if } a \neq -1 \tag{2}$$

$$y_t = A(-a)^t + ct \qquad \text{if } a = -1 \tag{3}$$

Last, of course, you can solve for $A$ if you have an initial condition, say $y_0$. If $a \neq -1$,

$$y_0 = A + \frac{c}{1+a} \quad\Rightarrow\quad A = y_0 - \frac{c}{1+a}$$

If $a = -1$,

$$y_0 = A(-a)^0 + 0 = A$$

Then just plug them back into the corresponding solution, (2) or (3).

1.2.1 One Example

Solve

$$y_{t+1} - 2y_t = 2$$

First solve the complementary equation $y_{t+1} - 2y_t = 0$. Using $A b^t$,

$$A b^{t+1} - 2 A b^t = 0 \quad\Rightarrow\quad b - 2 = 0 \quad\Rightarrow\quad b = 2$$

So the complementary solution is

$$y_c = A(2)^t$$

To find the particular solution, let $y_t = k$ for $-\infty < t < \infty$,

$$k - 2k = 2 \quad\Rightarrow\quad k = y_p = -2$$

So the general solution is

$$y_t = y_p + y_c = -2 + 2^t A$$

If we are given an initial value, $y_0 = 4$,

$$4 = -2 + 2^0 A \quad\Rightarrow\quad A = 6$$

then the definite solution is

$$y_t = -2 + 6 \cdot 2^t$$
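A quick numerical sketch of this example (illustrative, not part of the original notes): iterate $y_{t+1} = 2y_t + 2$ from $y_0 = 4$ and confirm it agrees with the definite solution at every step.

```python
# Check that the iteration y_{t+1} = 2*y_t + 2, y_0 = 4 matches
# the definite solution y_t = -2 + 6*2**t.
y = 4
for t in range(10):
    assert y == -2 + 6 * 2 ** t   # closed form matches the iteration
    y = 2 * y + 2                 # advance one period
```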

2 Second-Order Difference Equation

A second-order difference equation is an equation involving $\Delta^2 y_t$, which is $\Delta(\Delta y_t) = \Delta(y_{t+1} - y_t)$:

$$\begin{aligned} \Delta^2 y_t &= \Delta y_{t+1} - \Delta y_t \\ &= (y_{t+2} - y_{t+1}) - (y_{t+1} - y_t) \\ &= y_{t+2} - 2y_{t+1} + y_t \end{aligned}$$

We define a linear second-order difference equation as

$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = c$$

But we had better not attack it with the iterative method; trust me, it is more confusing than illuminating, so we come straight to the general solution. As we have studied so far, the general solution is the sum of the complementary solution and a particular solution; we discuss them next.

2.1 Complementary Solution

As with differential equations, we have several cases to discuss. First we try to solve the complementary equation (some textbooks call this the reduced equation); it is simply the homogeneous version of the original equation,

$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = 0$$

We learned from the first-order case that a homogeneous difference equation has a solution of the form $y_t = A b^t$; we try it,

$$A b^{t+2} + a_1 A b^{t+1} + a_2 A b^t = 0$$

$$(b^2 + a_1 b + a_2) A b^t = 0$$

We assume that $A b^t$ is nonzero, so

$$b^2 + a_1 b + a_2 = 0$$

This is our characteristic equation, the same as we saw for differential equations; high-school math can sometimes bring us a little fun,

$$b = \frac{-a_1 \pm \sqrt{a_1^2 - 4a_2}}{2}$$

I guess you are more familiar with it in textbook form: if we have a quadratic equation $ax^2 + bx + c = 0$,

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

As with second-order differential equations, we have three cases to discuss.

Case I: $a_1^2 - 4a_2 > 0$. Then we have two distinct real roots, and the complementary solution will be

$$y_c = A_1 b_1^t + A_2 b_2^t$$

Note that $A_1 b_1^t$ and $A_2 b_2^t$ are linearly independent; we cannot use only one of them to represent the complementary solution, because we need two constants $A_1$ and $A_2$.

Case II: $a_1^2 - 4a_2 = 0$. Only one real root is available to us,

$$b = b_1 = b_2 = -\frac{a_1}{2}$$

Then the complementary solution collapses,

$$y_c = A_1 b^t + A_2 b^t = (A_1 + A_2) b^t$$

We just need another term to fill the position, say $A_4 t b^t$,

$$y_c = A_3 b^t + A_4 t b^t$$

where $A_3 = A_1 + A_2$, and $t b^t$ is the same old trick we used in differential equations with $t e^{rt}$.

Case III: $a_1^2 - 4a_2 < 0$. Complex numbers are our old friends; we need to make use of them again here,

$$b_1 = \alpha + i\beta, \qquad b_2 = \alpha - i\beta$$

where $\alpha = -\dfrac{a_1}{2}$ and $\beta = \dfrac{\sqrt{4a_2 - a_1^2}}{2}$. Thus,

$$y_c = A_1 (\alpha + i\beta)^t + A_2 (\alpha - i\beta)^t$$

Here we simply need to make use of De Moivre's theorem,

$$y_c = A_1 |b_1|^t [\cos(t\theta) + i\sin(t\theta)] + A_2 |b_2|^t [\cos(t\theta) - i\sin(t\theta)]$$

where $|b_1| = |b_2| = \sqrt{\alpha^2 + \beta^2}$. Thus,

$$\begin{aligned} y_c &= A_1 |b|^t [\cos(t\theta) + i\sin(t\theta)] + A_2 |b|^t [\cos(t\theta) - i\sin(t\theta)] \\ &= |b|^t \{A_1[\cos(t\theta) + i\sin(t\theta)] + A_2[\cos(t\theta) - i\sin(t\theta)]\} \\ &= |b|^t [(A_1 + A_2)\cos(t\theta) + (A_1 - A_2)\, i \sin(t\theta)] \\ &= |b|^t [A_5 \cos(t\theta) + A_6 \sin(t\theta)] \end{aligned}$$

2.2 Particular Solutions

We pick any $y_p$ to satisfy

$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = c$$

The simplest choice is $y_{t+2} = y_{t+1} = y_t = k$; thus $k + a_1 k + a_2 k = c$, and

$$k = \frac{c}{1 + a_1 + a_2}$$

But we have to make sure that $a_1 + a_2 \neq -1$. If $a_1 + a_2 = -1$, we choose $y_p = kt$ instead, and following this pattern there are $kt^2$, $kt^3$, etc. to choose from, depending on the situation at hand.

2.3 One Example

Solve

$$y_{t+2} - 4y_{t+1} + 4y_t = 7$$

First, the complementary solution. Its characteristic equation is

$$b^2 - 4b + 4 = 0 \quad\Rightarrow\quad (b - 2)^2 = 0$$

so we have a repeated real root $b = 2$ (Case II above). The complementary solution is therefore

$$y_c = A_1 2^t + A_2\, t\, 2^t$$

For the particular solution, we try $y_{t+2} = y_{t+1} = y_t = k$; then $k - 4k + 4k = 7$, which gives $k = y_p = 7$. The general solution is

$$y = y_c + y_p = A_1 2^t + A_2\, t\, 2^t + 7$$

And we have two initial conditions, $y_0 = 1$ and $y_1 = 3$,

$$y_0 = A_1 + 7 = 1$$

$$y_1 = 2A_1 + 2A_2 + 7 = 3$$

Solving gives $A_1 = -6$ and $A_2 = 4$. The definite solution is

$$y = (4t - 6)\, 2^t + 7$$
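A quick check (illustrative, not part of the original notes): the characteristic root $b = 2$ of $b^2 - 4b + 4 = 0$ is repeated, and with $y_0 = 1$, $y_1 = 3$ the closed form $y_t = (4t - 6)\,2^t + 7$ reproduces the recursion exactly.

```python
# Iterate y_{t+2} = 4*y_{t+1} - 4*y_t + 7 from y_0 = 1, y_1 = 3
# and compare with the repeated-root solution y_t = (4*t - 6)*2**t + 7.
seq = [1, 3]
for t in range(2, 10):
    seq.append(4 * seq[-1] - 4 * seq[-2] + 7)

for t, yt in enumerate(seq):
    assert yt == (4 * t - 6) * 2 ** t + 7
```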

3 pth-Order Difference Equation

The previous sections simply teach you to solve low-order difference equations, without much insight or difficulty. From this section on the difficulty increases significantly; for those of you whose linear algebra is not well prepared, it might not be a good idea to study this section in a hurry. Mathematics is built piece upon piece; it is not really possible to progress in jumps. One warning: as the mathematics gets deeper, the notation also gets more complicated. However, don't be dismayed; this is not rocket science. We generalize our difference equation to pth order,

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} + \omega_t \tag{4}$$

This is a generalization of (1); we write $\omega$ here in order to prepare for stochastic difference equations, but for now we still take it as deterministic. It is conventional to difference backwards when dealing with high orders. We cannot handle this difference equation in its present form, since it does not leave us much room to perform any useful algebraic operation. Even if you try the old method, its characteristic equation will be

$$b^t - a_1 b^{t-1} - a_2 b^{t-2} - \cdots - a_p b^{t-p} = 0;$$

factor out $b^{t-p}$,

$$b^p - a_1 b^{p-1} - a_2 b^{p-2} - \cdots - a_p = 0$$

If $p$ is high enough to give you a headache, it means this is not the right way to advance. Everything will be reshaped into matrix form. Define

$$\psi_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{bmatrix}$$

which is a $p \times 1$ vector. And define

$$F = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix}$$

Finally, define

$$v_t = \begin{bmatrix} \omega_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

Putting them together, we have a new first-order difference equation in vector form,

$$\psi_t = F \psi_{t-1} + v_t$$

You can see what $\psi_{t-1}$ is in the explicit form,

$$\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \\ y_{t-3} \\ \vdots \\ y_{t-p} \end{bmatrix} + \begin{bmatrix} \omega_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

The first equation of the system is

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} + \omega_t,$$

which is exactly (4). And it is quite obvious that the second through pth equations are simply of the form $y_i = y_i$. The reason for writing it like this is not obvious until now, but one payoff is that we reduce the system to a first-order difference equation: although it is in matrix form, we can use our old methods to analyse it.
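The companion-matrix construction is easy to sketch in code (a hypothetical third-order example with made-up coefficients, not from the notes): the first entry of $\psi_t$ reproduces the scalar recursion exactly.

```python
import numpy as np

# Companion matrix F for y_t = 0.5*y_{t-1} + 0.3*y_{t-2} - 0.2*y_{t-3} + w_t
a = [0.5, 0.3, -0.2]                    # made-up coefficients
p = len(a)
F = np.zeros((p, p))
F[0, :] = a                             # first row carries the coefficients
F[1:, :-1] = np.eye(p - 1)              # subdiagonal ones shift the lags down

psi = np.array([1.0, 0.5, -0.3])        # psi_{-1} = (y_{-1}, y_{-2}, y_{-3})
w = [0.1, 0.0, -0.4, 0.2]
for wt in w:
    v = np.zeros(p)
    v[0] = wt
    psi = F @ psi + v                   # psi_t = F psi_{t-1} + v_t

# Same path from the scalar recursion
hist = [-0.3, 0.5, 1.0]                 # y_{-3}, y_{-2}, y_{-1}
for wt in w:
    hist.append(a[0]*hist[-1] + a[1]*hist[-2] + a[2]*hist[-3] + wt)
assert abs(psi[0] - hist[-1]) < 1e-12
```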

3.1 Iterative Method

We list the evolution of the difference equation as follows,

$$\begin{aligned} \psi_0 &= F \psi_{-1} + v_0 \\ \psi_1 &= F \psi_0 + v_1 \\ \psi_2 &= F \psi_1 + v_2 \\ &\;\;\vdots \\ \psi_t &= F \psi_{t-1} + v_t \end{aligned}$$

And we need to assume that $\psi_{-1}$ and $v_0$ are known. Then the old trick of recursive substitution,

$$\psi_1 = F(F\psi_{-1} + v_0) + v_1 = F^2 \psi_{-1} + F v_0 + v_1$$

Again,

$$\psi_2 = F\psi_1 + v_2 = F(F^2 \psi_{-1} + F v_0 + v_1) + v_2 = F^3 \psi_{-1} + F^2 v_0 + F v_1 + v_2$$

Up to step $t$,

$$\psi_t = F^{t+1} \psi_{-1} + F^t v_0 + F^{t-1} v_1 + \ldots + F v_{t-1} + v_t = F^{t+1} \psi_{-1} + \sum_{i=0}^{t} F^i v_{t-i}$$

which has the explicit form

$$\begin{bmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{bmatrix} = F^{t+1} \begin{bmatrix} y_{-1} \\ y_{-2} \\ \vdots \\ y_{-p} \end{bmatrix} + F^t \begin{bmatrix} \omega_0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + F^{t-1} \begin{bmatrix} \omega_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + \cdots + F \begin{bmatrix} \omega_{t-1} \\ 0 \\ \vdots \\ 0 \end{bmatrix} + \begin{bmatrix} \omega_t \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

Note how we use the $v_i$ here. Unfortunately, more notation is needed. We denote the $(1,1)$ element of $F^t$ as $f_{11}^{(t)}$, and so on. We introduce this notation in order to extract the first equation from the unwieldy system above,

$$y_t = f_{11}^{(t+1)} y_{-1} + f_{12}^{(t+1)} y_{-2} + f_{13}^{(t+1)} y_{-3} + \ldots + f_{1p}^{(t+1)} y_{-p} + f_{11}^{(t)} \omega_0 + f_{11}^{(t-1)} \omega_1 + \ldots + f_{11}^{(1)} \omega_{t-1} + \omega_t$$

I have to admit, most of the time mathematics looks more difficult than it really is; notation is evil, and trust me, more evil stuff is yet to come. However, keep telling yourself that this is just the first equation of the system, nothing more. The idea is the same as in the scalar case: we are told that we know $\psi_{-1}$ and the $v_i$ as initial data, and here we just write them out explicitly. $y_t$ is a function of the initial values $y_{-1}$ through $y_{-p}$ and of the sequence of $\omega_i$. A scalar first-order difference equation needs one initial value, and a pth-order one needs $p$ initial values; but once we turn it into vector form, it needs only one initial value again, namely $\psi_{-1}$.

If you want, you can even take a partial derivative to find the dynamic multiplier,

$$\frac{\partial y_t}{\partial \omega_0} = f_{11}^{(t)}$$

This measures the effect on $y_t$ of a one-unit increase in $\omega_0$, and it is given by $f_{11}^{(t)}$.

3.2 Analytical Solution

We need to study the matrix $F$ further, because it is the core of the system; all information is hidden in this matrix. We had better use a small-scale $F$ to study the general pattern. Let us set

$$F = \begin{bmatrix} a_1 & a_2 \\ 1 & 0 \end{bmatrix}$$

From the first equation of the system we can write $y_t = a_1 y_{t-1} + a_2 y_{t-2} + \omega_t$, or, if you prefer,

$$y_t - a_1 y_{t-1} - a_2 y_{t-2} = \omega_t$$

This is your familiar form, a nonhomogeneous second-order difference equation. Now calculate the eigenvalues of $F$,

$$\begin{vmatrix} a_1 - \lambda & a_2 \\ 1 & -\lambda \end{vmatrix} = 0$$

Writing the determinant in algebraic form,

$$\lambda^2 - a_1 \lambda - a_2 = 0$$

We finally reveal the mystery of why we always solve a characteristic equation first: we are finding eigenvalues. In general, the characteristic equation of a pth-order difference equation is

$$\lambda^p - a_1 \lambda^{p-1} - a_2 \lambda^{p-2} - \cdots - a_p = 0 \tag{5}$$

We could of course prove this using $|F - \lambda I| = 0$, but the process looks rather messy; the basic idea is to perform row operations to turn it into an upper triangular matrix, whose determinant is just the product of the diagonal entries. Not much insight comes from it, so we omit the process here. For second-order differential or difference equations we usually discuss three cases: two distinct roots (now you know they are eigenvalues), one repeated root, and complex roots. We have the discriminant $b^2 - 4ac$ to categorize them; however, that only works for second order, and for higher orders there is nothing like it. But that actually makes high orders more interesting than low ones, because we can use linear algebra.

3.2.1 Distinct Real Eigenvalues

The distinct-eigenvalue case here corresponds to two distinct real roots in second order. The following content is becoming hardcore; be sure you are well prepared in linear algebra. If $F$ has $p$ distinct eigenvalues, you should be happy, because diagonalization is waving at you. Recall that $A = PDP^{-1}$, where $P$ is a nonsingular matrix, because distinct eigenvalues assure us of linearly independent eigenvectors. We make a cosmetic change to suit our needs,

$$F = P \Lambda P^{-1}$$

We use a capital $\Lambda$ to indicate that all eigenvalues sit on the principal diagonal. You should respond naturally that $F^t = P \Lambda^t P^{-1}$, simply because

$$F^2 = P\Lambda P^{-1} P \Lambda P^{-1} = P \Lambda \Lambda P^{-1} = P \Lambda^2 P^{-1}$$

The diagonal matrix shows its convenience,

$$\Lambda^t = \begin{bmatrix} \lambda_1^t & 0 & \cdots & 0 \\ 0 & \lambda_2^t & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_p^t \end{bmatrix}$$

And, unfortunately again, we need more notation: denote by $t_{ij}$ the entry of $P$ in row $i$, column $j$, and by $t^{ij}$ the corresponding entry of $P^{-1}$. We write $F^t$ in explicit matrix form,

$$F^t = \begin{bmatrix} t_{11} & t_{12} & \cdots & t_{1p} \\ t_{21} & t_{22} & \cdots & t_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ t_{p1} & t_{p2} & \cdots & t_{pp} \end{bmatrix} \begin{bmatrix} \lambda_1^t & 0 & \cdots & 0 \\ 0 & \lambda_2^t & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_p^t \end{bmatrix} \begin{bmatrix} t^{11} & t^{12} & \cdots & t^{1p} \\ t^{21} & t^{22} & \cdots & t^{2p} \\ \vdots & \vdots & \ddots & \vdots \\ t^{p1} & t^{p2} & \cdots & t^{pp} \end{bmatrix} = \begin{bmatrix} t_{11}\lambda_1^t & t_{12}\lambda_2^t & \cdots & t_{1p}\lambda_p^t \\ t_{21}\lambda_1^t & t_{22}\lambda_2^t & \cdots & t_{2p}\lambda_p^t \\ \vdots & \vdots & \ddots & \vdots \\ t_{p1}\lambda_1^t & t_{p2}\lambda_2^t & \cdots & t_{pp}\lambda_p^t \end{bmatrix} \begin{bmatrix} t^{11} & t^{12} & \cdots & t^{1p} \\ t^{21} & t^{22} & \cdots & t^{2p} \\ \vdots & \vdots & \ddots & \vdots \\ t^{p1} & t^{p2} & \cdots & t^{pp} \end{bmatrix}$$

Then $f_{11}^{(t)}$ is

$$f_{11}^{(t)} = t_{11} t^{11} \lambda_1^t + t_{12} t^{21} \lambda_2^t + \cdots + t_{1p} t^{p1} \lambda_p^t$$

If we denote $c_i = t_{1i} t^{i1}$, then

$$f_{11}^{(t)} = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_p \lambda_p^t$$

Obviously, if you pay attention to $c_1 + c_2 + \ldots + c_p = t_{11}t^{11} + t_{12}t^{21} + \ldots + t_{1p}t^{p1}$, you will realize it is a scalar product of the first row of $P$ and the first column of $P^{-1}$; in other words, it is the $(1,1)$ element of $PP^{-1}$. Magically, $PP^{-1}$ is an identity matrix. Thus,

$$c_1 + c_2 + \ldots + c_p = 1$$

This time, if you want to calculate the dynamic multiplier,

$$\frac{\partial y_t}{\partial \omega_0} = f_{11}^{(t)} = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_p \lambda_p^t$$

The dynamic multiplier is a weighted average of all the eigenvalues raised to the $t$th power. One problem must be solved before we move on: how can we find the $c_i$? Or is it just some expression without closed form, so that we must stop here? What we do next might look very strange, but don't stop; finish it and you will get a sense of what we are preparing. Set $t_i$ to be the candidate eigenvector

$$t_i = \begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix}$$

Then,

$$F t_i = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix} = \begin{bmatrix} a_1\lambda_i^{p-1} + a_2\lambda_i^{p-2} + \ldots + a_{p-1}\lambda_i + a_p \\ \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \end{bmatrix}$$

Recall the characteristic equation of pth order, (5),

$$\lambda^p - a_1 \lambda^{p-1} - a_2 \lambda^{p-2} - \cdots - a_{p-1}\lambda - a_p = 0$$

Rearrange,

$$\lambda^p = a_1 \lambda^{p-1} + a_2 \lambda^{p-2} + \cdots + a_{p-1}\lambda + a_p \tag{6}$$

Interestingly, we get what we want here: the right-hand side of the last equation is just the first element of $F t_i$, so

$$F t_i = \begin{bmatrix} \lambda_i^p \\ \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \end{bmatrix}$$

We factor out $\lambda_i$,

$$F t_i = \lambda_i \begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix} = \lambda_i t_i$$

We have successfully shown that $t_i$ is an eigenvector of $F$. Now we can set up $P$, whose every column is an eigenvector,

$$P = \begin{bmatrix} \lambda_1^{p-1} & \lambda_2^{p-1} & \cdots & \lambda_p^{p-1} \\ \lambda_1^{p-2} & \lambda_2^{p-2} & \cdots & \lambda_p^{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1 & \lambda_2 & \cdots & \lambda_p \\ 1 & 1 & \cdots & 1 \end{bmatrix}$$

This is actually a transposed Vandermonde matrix, which is mainly used in signal processing and polynomial interpolation. Since we do not know $P^{-1}$ yet,⁴ we use the notation $t^{ij}$: postmultiply $P$ by the first column of $P^{-1}$,

$$\begin{bmatrix} \lambda_1^{p-1} & \lambda_2^{p-1} & \cdots & \lambda_p^{p-1} \\ \lambda_1^{p-2} & \lambda_2^{p-2} & \cdots & \lambda_p^{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1 & \lambda_2 & \cdots & \lambda_p \\ 1 & 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} t^{11} \\ t^{21} \\ \vdots \\ t^{p1} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

⁴ Don't ever try to calculate the inverse matrix by the adjugate matrix or Gauss-Jordan elimination by hand; it is very inaccurate and time-consuming.

This is a linear equation system, whose solution looks like

$$t^{11} = \frac{1}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_p)}$$

$$t^{21} = \frac{1}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)\cdots(\lambda_2 - \lambda_p)}$$

$$\vdots$$

$$t^{p1} = \frac{1}{(\lambda_p - \lambda_1)(\lambda_p - \lambda_2)\cdots(\lambda_p - \lambda_{p-1})}$$

Thus,

$$c_1 = t_{11} t^{11} = \frac{\lambda_1^{p-1}}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_p)}$$

$$c_2 = t_{12} t^{21} = \frac{\lambda_2^{p-1}}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)\cdots(\lambda_2 - \lambda_p)}$$

$$\vdots$$

$$c_p = t_{1p} t^{p1} = \frac{\lambda_p^{p-1}}{(\lambda_p - \lambda_1)(\lambda_p - \lambda_2)\cdots(\lambda_p - \lambda_{p-1})}$$

I know all this must be quite uncomfortable if you don't fancy mathematics too much. We had better go through a small example to get familiar with this artillery. Let's look at a second-order difference equation, $y_t = 0.4 y_{t-1} + 0.7 y_{t-2} + \omega_t$, or, as I guess you would prefer it, $y_{t+2} - 0.4 y_{t+1} - 0.7 y_t = \omega_{t+2}$. Calculate its characteristic equation from $|F - \lambda I| = 0$, where

$$F - \lambda I = \begin{bmatrix} 0.4 & 0.7 \\ 1 & 0 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} 0.4 - \lambda & 0.7 \\ 1 & -\lambda \end{bmatrix}$$

Now calculate its determinant,

$$|F - \lambda I| = \lambda^2 - 0.4\lambda - 0.7$$

Use the root formula,

$$\lambda_1 = \frac{0.4 + \sqrt{(-0.4)^2 - 4(-0.7)}}{2} = 1.0602, \qquad \lambda_2 = \frac{0.4 - \sqrt{(-0.4)^2 - 4(-0.7)}}{2} = -0.6602$$

For the $c_i$,

$$c_1 = \frac{\lambda_1}{\lambda_1 - \lambda_2} = \frac{1.0602}{1.0602 + 0.6602} = 0.6163, \qquad c_2 = \frac{\lambda_2}{\lambda_2 - \lambda_1} = \frac{-0.6602}{-0.6602 - 1.0602} = 0.3837$$

And note that $0.6163 + 0.3837 = 1$. The dynamic multiplier is

$$\frac{\partial y_t}{\partial \omega_0} = c_1 \lambda_1^t + c_2 \lambda_2^t = 0.6163 \cdot 1.0602^t + 0.3837 \cdot (-0.6602)^t$$

Figure 1 shows the dynamic multiplier as a function of $t$; the MATLAB code is in the appendix. The dynamic multiplier explodes as $t \to \infty$, because we have an eigenvalue $\lambda_1 > 1$.

[Figure 1: Dynamic multiplier as a function of t.]
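The worked example can be confirmed numerically (an illustrative sketch using NumPy, not part of the original notes): the eigenvalues of the companion matrix, the weights $c_i$, and the identity $f_{11}^{(t)} = c_1\lambda_1^t + c_2\lambda_2^t$.

```python
import numpy as np

# Companion matrix for y_t = 0.4*y_{t-1} + 0.7*y_{t-2} + w_t
F = np.array([[0.4, 0.7],
              [1.0, 0.0]])
l1, l2 = np.linalg.eigvals(F)            # approx 1.0602 and -0.6602
c1 = l1 / (l1 - l2)
c2 = l2 / (l2 - l1)
assert abs(c1 + c2 - 1) < 1e-12          # the weights sum to one

# The (1,1) entry of F^t equals the weighted eigenvalue powers
t = 8
f11_t = np.linalg.matrix_power(F, t)[0, 0]
assert abs(f11_t - (c1 * l1**t + c2 * l2**t)) < 1e-9
```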

3.2.2 Distinct Complex Eigenvalues

We also need to discuss distinct complex eigenvalues, and the example will again resort to a second-order difference equation, because we have a handy root formula there. The pace might be a little fast, but it is easily understandable. Suppose we have two complex eigenvalues,

$$\lambda_1 = \alpha + i\beta, \qquad \lambda_2 = \alpha - i\beta$$

The modulus of the conjugate pair is the same, $|\lambda| = \sqrt{\alpha^2 + \beta^2}$. Then rewrite the conjugate pair as

$$\lambda_1 = |\lambda|(\cos\theta + i\sin\theta), \qquad \lambda_2 = |\lambda|(\cos\theta - i\sin\theta)$$

According to De Moivre's theorem, raising a complex number to a power gives

$$\lambda_1^t = |\lambda|^t(\cos t\theta + i\sin t\theta), \qquad \lambda_2^t = |\lambda|^t(\cos t\theta - i\sin t\theta)$$

Back to the dynamic multiplier,

$$\frac{\partial y_t}{\partial \omega_0} = c_1 \lambda_1^t + c_2 \lambda_2^t = c_1 |\lambda|^t(\cos t\theta + i\sin t\theta) + c_2 |\lambda|^t(\cos t\theta - i\sin t\theta)$$

Rearranging, we have

$$= (c_1 + c_2)|\lambda|^t \cos t\theta + i(c_1 - c_2)|\lambda|^t \sin t\theta$$

However, the $c_i$ are calculated from the $\lambda_i$; since the eigenvalues are complex numbers, so are the $c_i$. We can denote the conjugate pair as

$$c_1 = \gamma + i\delta, \qquad c_2 = \gamma - i\delta$$

Thus,

$$\begin{aligned} c_1 \lambda_1^t + c_2 \lambda_2^t &= [(\gamma + i\delta) + (\gamma - i\delta)]\,|\lambda|^t \cos t\theta + i[(\gamma + i\delta) - (\gamma - i\delta)]\,|\lambda|^t \sin t\theta \\ &= 2\gamma|\lambda|^t \cos t\theta + i(2i\delta)|\lambda|^t \sin t\theta \\ &= 2\gamma|\lambda|^t \cos t\theta - 2\delta|\lambda|^t \sin t\theta \end{aligned}$$

It turns out to be a real number again. And the time path is determined by the modulus $|\lambda|$: if $|\lambda| > 1$, the dynamic multiplier explodes; if $|\lambda| < 1$, it follows either an oscillating or a nonoscillating decaying pattern.
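A numerical sketch of the complex case (the coefficients below are made up for illustration): the multiplier $c_1\lambda_1^t + c_2\lambda_2^t$ is real at every $t$ even though both factors are complex, and it decays because the modulus is below one.

```python
import numpy as np

# y_t = 0.6*y_{t-1} - 0.5*y_{t-2} + w_t has complex eigenvalues
# 0.3 +/- 0.6403i with modulus sqrt(0.5) < 1 (made-up coefficients).
F = np.array([[0.6, -0.5],
              [1.0,  0.0]])
l1, l2 = np.linalg.eigvals(F)            # a complex-conjugate pair
c1, c2 = l1 / (l1 - l2), l2 / (l2 - l1)

mods = []
for t in range(1, 12):
    m = c1 * l1**t + c2 * l2**t
    assert abs(m.imag) < 1e-9            # imaginary parts cancel
    mods.append(abs(m.real))
assert abs(l1) < 1                       # modulus below one: the path dies out
```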

3.2.3 Repeated Eigenvalues

If we have repeated eigenvalues, diagonalization may not be an option, since we might encounter a singular $P$. But we have enough mathematical artillery: we just switch to a more general kind of decomposition, called the Jordan decomposition. Let us give a definition first: an $h \times h$ Jordan block matrix is defined as

$$J_h(\lambda) = \begin{bmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ 0 & 0 & \lambda & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 0 & 0 & 0 & \cdots & \lambda \end{bmatrix}$$

This is a Jordan canonical form.⁵ The Jordan decomposition says that if we have an $m \times m$ matrix $A$, then there must be a nonsingular matrix $B$ such that

$$B^{-1} A B = J = \begin{bmatrix} J_{h_1}(\lambda_1) & (0) & \cdots & (0) \\ (0) & J_{h_2}(\lambda_2) & \cdots & (0) \\ \vdots & \vdots & \ddots & \vdots \\ (0) & (0) & \cdots & J_{h_r}(\lambda_r) \end{bmatrix}$$

Cosmetically changed to our notation,

$$F = M J M^{-1}$$

As you can imagine,

$$F^t = M J^t M^{-1}$$

and

$$J^t = \begin{bmatrix} J_{h_1}^t(\lambda_1) & (0) & \cdots & (0) \\ (0) & J_{h_2}^t(\lambda_2) & \cdots & (0) \\ \vdots & \vdots & \ddots & \vdots \\ (0) & (0) & \cdots & J_{h_r}^t(\lambda_r) \end{bmatrix}$$

What you need to do is just calculate each $J_{h_i}^t(\lambda_i)$ one by one; of course we should leave this to the computer. Then you can calculate the dynamic multiplier as usual: once you figure out $F^t$, pick the $(1,1)$ entry $f_{11}^{(t)}$ and the rest will be done.
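For a $2 \times 2$ Jordan block, $J^t$ has a simple closed form that can be checked directly (an illustrative sketch, with $\lambda = 2$ chosen arbitrarily); the $t\lambda^{t-1}$ term is the matrix analogue of the $t b^t$ term in the repeated-root case above.

```python
import numpy as np

# For J = [[l, 1], [0, l]], the power J^t is [[l^t, t*l^(t-1)], [0, l^t]].
l = 2.0
J = np.array([[l, 1.0],
              [0.0, l]])
for t in range(1, 8):
    Jt = np.linalg.matrix_power(J, t)
    expected = np.array([[l**t, t * l**(t - 1)],
                         [0.0,  l**t]])
    assert np.allclose(Jt, expected)
```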

4 Lag Operator

In econometrics, the convention is to use the lag operator to do what we did with difference equations. Essentially, they are identical. But there

⁵ Study my notes of Linear Algebra and Matrix Analysis II [1].

is still a subtle conceptual discrepancy: a difference equation is a kind of equation, a balanced expression on both sides, whereas the lag operator represents a kind of operation, no different from addition, subtraction, or multiplication. The lag operator turns a time series into a difference equation. We conventionally use $L$ to represent the lag operator; for instance,

$$L x_t = x_{t-1}, \qquad L^2 x_t = x_{t-2}$$

4.1 pth-order difference equation with lag operator

Basically, this section reproduces the crucial results of what we did with difference equations; we are trying to show that we can reach the same goal with the help of either difference equations or lag operators. Turn the pth-order difference equation into lag operator form:

$$y_t - a_1 y_{t-1} - a_2 y_{t-2} - \cdots - a_p y_{t-p} = \omega_t$$

becomes

$$(1 - a_1 L - a_2 L^2 - \cdots - a_p L^p) y_t = \omega_t$$

Because $L$ is an operation in the equation above, some steps below might not be strictly appropriate, but if we switch to the polynomial $(1 - a_1 z - a_2 z^2 - \cdots - a_p z^p)$ in an ordinary variable $z$, we can use algebraic operations to analyse it. Multiply by $z^{-p}$,

$$z^{-p}(1 - a_1 z - a_2 z^2 - \cdots - a_p z^p) = z^{-p} - a_1 z^{1-p} - a_2 z^{2-p} - \cdots - a_p$$

Define $\lambda = z^{-1}$ and set the polynomial to zero,

$$\lambda^p - a_1 \lambda^{p-1} - a_2 \lambda^{p-2} - \cdots - a_{p-1}\lambda - a_p = 0$$

This is the characteristic equation, a reproduction of equation (5). To be finished! Actually, the rest of the reproduction offers no further insight; it is simply technical manipulation to reproduce the same results as the difference equation.
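The equivalence can be sketched numerically (illustrative, using the coefficients from the section 3.2.1 example): the roots of the characteristic polynomial coincide with the eigenvalues of the companion matrix $F$.

```python
import numpy as np

# Roots of l^2 - 0.4*l - 0.7 = 0 versus eigenvalues of the companion matrix
a = [0.4, 0.7]
poly = [1.0] + [-ai for ai in a]             # coefficients of l^2 - 0.4 l - 0.7
roots = np.sort(np.roots(poly))

F = np.array([[0.4, 0.7],
              [1.0, 0.0]])
eig = np.sort(np.linalg.eigvals(F))
assert np.allclose(roots, eig)               # same spectrum either way
```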

5 Appendix

5.1 MATLAB code

x = 1:20;
y = 0.6163*1.0602.^x + 0.3837*(-0.6602).^x;
bar(x,y)


References

[1] Chen, W. (2011): Linear Algebra and Matrix Analysis II, study notes.
[2] Chen, W. (2011): Differential Equations, study notes.
[3] Chiang, A. C. and Wainwright, K. (2005): Fundamental Methods of Mathematical Economics, McGraw-Hill.
[4] Simon, C. and Blume, L. (1994): Mathematics for Economists, W. W. Norton & Company.
[5] Enders, W. (2004): Applied Econometric Time Series, John Wiley & Sons.
