Chapter 13
Time Series:
Descriptive Analyses, Models, &
Forecasting
Descriptive Analysis:
Index Numbers
[Figure: annual index numbers, 1990–2006; vertical axis 0–250]

[Figure: monthly index numbers, Jan 2005 – Dec 2006; vertical axis 0–80]
$$ I_t = \frac{\sum_{i=1}^{k} Q_{i t_0} P_{i t}}{\sum_{i=1}^{k} Q_{i t_0} P_{i t_0}} \times 100 $$
© 2011 Pearson Education, Inc
Laspeyres Index Number
Example
The table shows the closing stock prices on
1/31/2005 and 12/29/2006 for Daimler–
Chrysler, Ford, and GM. On 1/31/2005 an
investor purchased the indicated number of
shares of each stock. Construct the Laspeyres
Index using 1/31/2005 as the base period.
                   Daimler–Chrysler   Ford    GM
Shares Purchased         100           500    200
1/31/2005 Price         45.51         13.17  36.81
12/29/2006 Price        61.41          7.51  30.72
Laspeyres Index Solution
Weighted total for base period (1/31/2005):

$$ \sum_{i=1}^{k} Q_{i t_0} P_{i t_0} = 100(45.51) + 500(13.17) + 200(36.81) = 18{,}498 $$

Weighted total for 12/29/2006:

$$ \sum_{i=1}^{k} Q_{i t_0} P_{i t} = 100(61.41) + 500(7.51) + 200(30.72) = 16{,}040 $$
Laspeyres Index Solution
$$ I_{12/29/06} = \frac{\sum_{i=1}^{k} Q_{i,1/31/05}\, P_{i,12/29/06}}{\sum_{i=1}^{k} Q_{i,1/31/05}\, P_{i,1/31/05}} \times 100 = \frac{16{,}040}{18{,}498} \times 100 = 86.7 $$

Indicates the portfolio value decreased by 13.3% (100 − 86.7) between 1/31/2005 and 12/29/2006.
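The calculation above can be sketched in Python using the share counts and prices from the example (the function name `laspeyres_index` is my own):

```python
def laspeyres_index(base_qty, base_prices, current_prices):
    """Laspeyres index: base-period quantities weight prices in every period.
    Returns the index with the base period scaled to 100."""
    base_total = sum(q * p for q, p in zip(base_qty, base_prices))
    current_total = sum(q * p for q, p in zip(base_qty, current_prices))
    return current_total / base_total * 100

# Shares purchased on 1/31/2005 and the two closing-price vectors.
shares = [100, 500, 200]
prices_2005 = [45.51, 13.17, 36.81]   # 1/31/2005 (base period)
prices_2006 = [61.41, 7.51, 30.72]    # 12/29/2006

print(round(laspeyres_index(shares, prices_2005, prices_2006), 1))  # 86.7
```

The weighted totals inside the function reproduce the 18,498 and 16,040 computed above.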
Paasche Index
• Uses quantities for each period as weights
– Appropriate when quantities change over time
• Compares current prices to base-period prices at current purchase levels
• Disadvantages
– Must know purchase quantities for each time period
– Difficult to interpret a change in index when base
period is not used
$$ I_{1/31/05} = \frac{\sum_{i=1}^{k} Q_{i,1/31/05}\, P_{i,1/31/05}}{\sum_{i=1}^{k} Q_{i,1/31/05}\, P_{i,1/31/05}} \times 100 = 100 $$

$$ I_{12/29/06} = \frac{\sum_{i=1}^{k} Q_{i,12/29/06}\, P_{i,12/29/06}}{\sum_{i=1}^{k} Q_{i,12/29/06}\, P_{i,1/31/05}} \times 100 $$
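A sketch of the Paasche calculation. The prices are from the earlier stock example, but the current-period quantities below are hypothetical, since the example does not give holdings for 12/29/2006:

```python
def paasche_index(current_qty, base_prices, current_prices):
    """Paasche index: current-period quantities weight both price vectors."""
    num = sum(q * p for q, p in zip(current_qty, current_prices))
    den = sum(q * p for q, p in zip(current_qty, base_prices))
    return num / den * 100

base_prices = [45.51, 13.17, 36.81]     # 1/31/2005
current_prices = [61.41, 7.51, 30.72]   # 12/29/2006
qty_2006 = [150, 400, 300]              # hypothetical holdings on 12/29/2006

print(round(paasche_index(qty_2006, base_prices, current_prices), 1))  # 92.6
```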
Descriptive Analysis:
Exponential Smoothing
Et = wYt + (1 – w)Et–1
Exponential Smoothing
Example
The closing stock prices on the last
day of the month for Daimler–
Chrysler in 2005 and 2006 are
given in the table. Create an
exponentially smoothed series
using w = .2.
[Figure: Daimler–Chrysler actual monthly closing prices with the exponentially smoothed series (w = .2), Jan 2005 – Dec 2006; vertical axis 10–60]
[Figure: actual series with two exponentially smoothed series (w = .2 and w = .8), Jan 2005 – Dec 2006; vertical axis 10–60]
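The smoothing recursion E_t = wY_t + (1 − w)E_{t−1} can be sketched as follows; the price list here is hypothetical (the slide's data table is not reproduced), and the series is started with E_1 = Y_1:

```python
def exp_smooth(series, w):
    """Exponentially smoothed series: E_1 = Y_1, E_t = w*Y_t + (1 - w)*E_{t-1}."""
    smoothed = [series[0]]
    for y in series[1:]:
        smoothed.append(w * y + (1 - w) * smoothed[-1])
    return smoothed

# Hypothetical monthly closing prices for illustration.
prices = [45.51, 46.10, 44.73, 42.50, 41.88, 40.20]
print([round(e, 2) for e in exp_smooth(prices, 0.2)])
```

A larger w gives more weight to the current observation, which is why the w = .8 series in the figure tracks the actual series more closely than the w = .2 series.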
Forecasting:
Exponential Smoothing
Forecasting Trends:
Holt’s Method
[Figure: actual and Holt-smoothed price, Jan 2005 – Nov 2006; vertical axis: Price, 30–60; horizontal axis: Date]
The forecast for 2/28/2007 is two steps ahead:
F2/28/07 = E12/29/06 + 2T12/29/06
= 61.39 + 2(3.00) = 67.39
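The Holt recursion and the k-steps-ahead forecast can be sketched as follows. The final level E = 61.39 and trend T = 3.00 are taken from the slide, and the initialization matches the E2 = Y2, T2 = Y2 − Y1 convention used in this chapter:

```python
def holt(series, w, v):
    """Holt's method. Initialization: E_2 = Y_2, T_2 = Y_2 - Y_1.
    Then E_t = w*Y_t + (1 - w)*(E_{t-1} + T_{t-1})
         T_t = v*(E_t - E_{t-1}) + (1 - v)*T_{t-1}.
    Returns the final level and trend components."""
    E, T = series[1], series[1] - series[0]
    for y in series[2:]:
        prev_E = E
        E = w * y + (1 - w) * (E + T)
        T = v * (E - prev_E) + (1 - v) * T
    return E, T

def holt_forecast(E, T, k):
    """k-steps-ahead forecast: F_{t+k} = E_t + k*T_t."""
    return E + k * T

# Final components for 12/29/2006 from the slide: E = 61.39, T = 3.00.
print(round(holt_forecast(61.39, 3.00, 2), 2))  # 67.39

# Initialization check against the tuition example: E2 = 9206, T2 = 406.
print(holt([8800, 9206], 0.7, 0.5))  # (9206, 406)
```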
Holt Thinking Challenge
The data shows the
average undergraduate
tuition at all 4–year
institutions for the years
1996–2004 (Source: U.S.
Dept. of Education).
Calculate the Holt–
Winters components
using w = .7 and v = .5.
Holt Solution
w = .7 v = .5
E2 = Y2 and T2 = Y2 – Y1
E2 = 9206 and T2 = 9206 – 8800 = 406
[Figure: actual and smoothed tuition, 1995–2004; vertical axis: Tuition, $8,000–$14,000; horizontal axis: Year]
$$ \mathrm{MAD} = \frac{\sum_{t=n+1}^{n+m} \lvert Y_t - F_t \rvert}{m} $$

Model I:

$$ \mathrm{RMSE}_{\mathrm{I}} = \sqrt{\frac{\sum_{t=n+1}^{n+m} (Y_t - F_t)^2}{4}} = 6.06 $$
Forecasting Accuracy
Example
Model II
$$ \mathrm{MAD}_{\mathrm{II}} = \frac{\lvert -2.82 \rvert + 4.15 + 5.50 + 8.63}{4} = 5.28 $$

$$ \mathrm{RMSE}_{\mathrm{II}} = \sqrt{\frac{(-2.82)^2 + (4.15)^2 + (5.50)^2 + (8.63)^2}{4}} = 5.70 $$
Forecasting Accuracy
Example
Model III
$$ \mathrm{MAD}_{\mathrm{III}} = \frac{\lvert -3.45 \rvert + 2.42 + 2.67 + 4.71}{4} = 3.31 $$

$$ \mathrm{RMSE}_{\mathrm{III}} = \sqrt{\frac{(-3.45)^2 + (2.42)^2 + (2.67)^2 + (4.71)^2}{4}} = 3.44 $$
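Both accuracy measures are easy to compute directly; the error lists below are the Model II and Model III forecast errors from the slides. Small last-digit differences from the slide values can occur because the tabulated errors are themselves rounded:

```python
import math

def mad(errors):
    """Mean absolute deviation of the m forecast errors (Y_t - F_t)."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared error of the forecast errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

errors_II = [-2.82, 4.15, 5.50, 8.63]    # Model II errors from the slide
errors_III = [-3.45, 2.42, 2.67, 4.71]   # Model III errors from the slide

print(round(mad(errors_II), 2), round(rmse(errors_II), 2))
print(round(mad(errors_III), 2), round(rmse(errors_III), 2))
```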
13.7
Forecasting Trends:
Simple Linear Regression
$$ \hat{Y}_t = 7997.533 + 528.158\,t $$

[Figure: tuition vs year, 1994–2005, with the fitted trend line; vertical axis: Tuition, $8,000–$14,000]
$$ \hat{y} \pm t_{\alpha/2}\, s \sqrt{1 + \frac{1}{n} + \frac{(t_p - \bar{t})^2}{SS_{tt}}} $$

With $t_p = 11$ and $\bar{t} = 5.5$:

$$ 13{,}006.21 \le y_{11} \le 14{,}608.33 $$
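A quick check of the point forecast implied by the fitted trend line; the coefficients are from the slide, and t = 11 is one period beyond the sample:

```python
# Coefficients of the fitted trend line from the slide.
b0, b1 = 7997.533, 528.158

def trend_forecast(t):
    """Point forecast for period t from the straight-line trend model."""
    return b0 + b1 * t

# The point forecast for t = 11 sits at the midpoint of the slide's
# prediction interval (13006.21, 14608.33).
print(round(trend_forecast(11), 2))  # 13807.27
```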
13.8 Autocorrelation and the Durbin–Watson Test
[Figure: residual plot, $\hat{R}_t$ vs $t$ for $t$ = 0–12; residuals range roughly from −400 to 400]
Durbin–Watson Test
• H0: No first–order autocorrelation of residuals
• Ha: Positive first–order autocorrelation of
residuals
• Test Statistic
$$ d = \frac{\sum_{t=2}^{n} \left( \hat{R}_t - \hat{R}_{t-1} \right)^2}{\sum_{t=1}^{n} \hat{R}_t^2} $$
Interpretation of Durbin–Watson d-Statistic

$$ d = \frac{\sum_{t=2}^{n} \left( \hat{R}_t - \hat{R}_{t-1} \right)^2}{\sum_{t=1}^{n} \hat{R}_t^2}, \qquad \text{Range of } d:\; 0 \le d \le 4 $$

1. If the residuals are uncorrelated, then d ≈ 2.
2. If the residuals are positively autocorrelated, then d < 2, and if the autocorrelation is very strong, d ≈ 0.
3. If the residuals are negatively autocorrelated, then d > 2, and if the autocorrelation is very strong, d ≈ 4.
Rejection Region for the Durbin–
Watson d Test
[Number line for d from 0 to 4:
  d < dL — rejection region: evidence of positive autocorrelation;
  dL ≤ d ≤ dU — possibly significant autocorrelation;
  d > dU — nonrejection region: insufficient evidence of positive autocorrelation]
Durbin–Watson d-Test for
Autocorrelation
One-tailed Test
H0: No first–order autocorrelation of residuals
Ha: Positive first–order autocorrelation of
residuals
(or Ha: Negative first–order autocorrelation)
Test Statistic:

$$ d = \frac{\sum_{t=2}^{n} \left( \hat{R}_t - \hat{R}_{t-1} \right)^2}{\sum_{t=1}^{n} \hat{R}_t^2} $$
Durbin–Watson d-Test for
Autocorrelation
Rejection Region:

$d < d_{L,\alpha}$ [or $(4 - d) < d_{L,\alpha}$ if Ha: Negative first-order autocorrelation]

where $d_{L,\alpha}$ is the lower tabled value corresponding to k independent variables and n observations. The corresponding upper value $d_{U,\alpha}$ defines a "possibly significant" region between $d_{L,\alpha}$ and $d_{U,\alpha}$.
Durbin–Watson d-Test for
Autocorrelation
Two-tailed Test
H0: No first–order autocorrelation of residuals
Ha: Positive or Negative first–order
autocorrelation of residuals
Test Statistic:

$$ d = \frac{\sum_{t=2}^{n} \left( \hat{R}_t - \hat{R}_{t-1} \right)^2}{\sum_{t=1}^{n} \hat{R}_t^2} $$
Durbin–Watson d-Test for
Autocorrelation
Rejection Region: $d < d_{L,\alpha/2}$ or $(4 - d) < d_{L,\alpha/2}$

[Number line for d from 0 to 4; tabled values .88 and 1.32 marked]
Durbin–Watson Solution
Test Statistic
$$ d = \frac{\sum_{t=2}^{n} \left( \hat{R}_t - \hat{R}_{t-1} \right)^2}{\sum_{t=1}^{n} \hat{R}_t^2} $$
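The d statistic can be computed directly from a residual series. The two toy series below illustrate the interpretation rules: a smooth positive run pushes d toward 0, while alternating signs push it toward 4:

```python
def durbin_watson(residuals):
    """d = sum_{t=2}^n (R_t - R_{t-1})^2 / sum_{t=1}^n R_t^2, so 0 <= d <= 4."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(r * r for r in residuals)
    return num / den

# Positive run: consecutive residuals are similar, so d is small.
print(round(durbin_watson([1, 1, 1, -1, -1, -1]), 2))  # 0.67
# Alternating signs: consecutive residuals differ greatly, so d is large.
print(round(durbin_watson([1, -1, 1, -1, 1, -1]), 2))  # 3.33
```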