
A Simple Moving Average Example
Moving Average Forecast = (Σ demand in previous n periods) / n

where n is the number of periods in the moving average.
Example: The demand for Gingerbread Men is shown in the table. Forecast the demand for month 7.

Month   Demand   Calculation                       Forecast
1       1500
2       2200
3       2700
4       4200     (1500 + 2200 + 2700) / 3 = 2133   2133
5       7800
6       5400
7
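A quick Python sketch of the n-period moving average (the code and function name are my own, not part of the handout):

```python
def moving_average_forecast(demand, n=3):
    """Forecast the next period as the mean of the last n observed demands."""
    if len(demand) < n:
        raise ValueError("need at least n observations")
    return sum(demand[-n:]) / n

demand = [1500, 2200, 2700, 4200, 7800, 5400]   # months 1-6

# Month 4 forecast uses months 1-3: (1500 + 2200 + 2700) / 3
print(round(moving_average_forecast(demand[:3])))   # 2133
# Month 7 forecast uses months 4-6: (4200 + 7800 + 5400) / 3
print(round(moving_average_forecast(demand)))       # 5800
```

The month-7 answer the worksheet asks for comes out to 5800.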

[Figure: Bakery Gingerbread Man Sales - Demand and Forecast series (Sales vs Period, periods 1-7)]

A Weighted Moving Average Example

Weighted Moving Average Forecast = Σ(weight for period n × demand in period n) / Σ weights

Example: The demand for Gingerbread Men is shown in the table. Forecast the demand for month 7, weighting the past three months as follows: last month 3, two months ago 2, three months ago 1.

Period             Weight
Last Month         3
Two Months Ago     2
Three Months Ago   1
Sum of Weights     6

Month   Demand   Calculation                                         Forecast
1       1500
2       2200
3       2700
4       4200     ((1500 × 1) + (2200 × 2) + (2700 × 3)) / 6 = 2333   2333
5       7800
6       5400
7
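The same weighted-average bookkeeping in Python (a sketch; the function name and weight ordering convention are mine):

```python
def weighted_moving_average(demand, weights):
    """weights[0] applies to the most recent period, weights[1] to the one before, etc."""
    recent = demand[-len(weights):]        # last n periods, oldest first
    aligned = list(reversed(weights))      # re-order weights oldest-to-newest
    return sum(d * w for d, w in zip(recent, aligned)) / sum(weights)

demand = [1500, 2200, 2700, 4200, 7800, 5400]

# Month 4 forecast from months 1-3 with weights 3, 2, 1 (most recent first):
print(round(weighted_moving_average(demand[:3], [3, 2, 1])))   # 2333
# Month 7 forecast from months 4-6:
print(round(weighted_moving_average(demand, [3, 2, 1])))       # 6000
```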

[Figure: Bakery Gingerbread Man Sales - two series (Sales vs Period, periods 1-7)]

An Exponential Smoothing Example

Exponential Smoothing Forecast: Ft = Ft-1 + α(At-1 - Ft-1)

where:
Ft   = new forecast
Ft-1 = previous forecast
α    = smoothing constant (0 ≤ α ≤ 1)
At-1 = previous period's actual demand

Example: The demand for Gingerbread Men is shown in the table. Forecast the demand for month 7 using a smoothing constant of 0.4. The forecast for month 1 is 1500 units.

Month   Demand   Calculation               Forecast
1       1500                               1500
2       2200     1500 + 0.4(1500 - 1500)   1500
3       2700     1500 + 0.4(2200 - 1500)   1780
4       4200
5       7800
6       5400
7
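The recursion is easy to run in code. A minimal sketch (function name is mine), which fills in the whole forecast column including month 7:

```python
def exponential_smoothing(demand, alpha, f1):
    """Return forecasts F1..F(n+1), where Ft = Ft-1 + alpha * (At-1 - Ft-1)."""
    forecasts = [f1]
    for actual in demand:
        prev = forecasts[-1]
        forecasts.append(prev + alpha * (actual - prev))
    return forecasts

demand = [1500, 2200, 2700, 4200, 7800, 5400]
f = exponential_smoothing(demand, alpha=0.4, f1=1500)
print([round(x) for x in f])   # [1500, 1500, 1780, 2148, 2969, 4901, 5101]
```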

[Figure: Bakery Gingerbread Man Sales - Demand and Forecast series (Sales vs Period, periods 1-7)]

A Mean Absolute Deviation Example

MAD = Σ|forecast errors| / n

Example: Calculate the MAD for the value of α used in the Exponential Smoothing Example.

Month   Demand   Forecast   Error               |Error|
1       1500     1500       1500 - 1500 = 0     0
2       2200     1500       2200 - 1500 = 700   700
3       2700     1780
4       4200     2148
5       7800     2969
6       5400     4901
                            Total:
                            MAD:
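A short Python check of the MAD worksheet, using the rounded forecasts from the table (the code is my own sketch, not part of the handout):

```python
demand   = [1500, 2200, 2700, 4200, 7800, 5400]
forecast = [1500, 1500, 1780, 2148, 2969, 4901]   # rounded forecasts from the table

errors = [d - f for d, f in zip(demand, forecast)]
mad = sum(abs(e) for e in errors) / len(errors)
print(round(mad))   # 1500
```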

A Mean Squared Error Example

MSE = Σ(forecast errors)² / n

Example: Calculate the MSE for the value of α used in the Exponential Smoothing Example.

Month   Demand   Forecast   Error               Error²
1       1500     1500       1500 - 1500 = 0     0
2       2200     1500       2200 - 1500 = 700   490000
3       2700     1780
4       4200     2148
5       7800     2969
6       5400     4901
                            Total:
                            MSE:

An Exponential Smoothing With Trend Adjustment Example

Forecast Including Trend: FITt = Ft + Tt
Forecast: Ft = α(At-1) + (1 - α)(Ft-1 + Tt-1)
Trend:    Tt = β(Ft - Ft-1) + (1 - β)Tt-1

where:
Ft = exponentially smoothed forecast for period t
Tt = exponentially smoothed trend for period t
At = actual demand for period t
α  = smoothing constant for the average (0 ≤ α ≤ 1)
β  = smoothing constant for the trend (0 ≤ β ≤ 1)

Example: The demand for Gingerbread Men is shown in the table. Forecast the demand for month 7, using a smoothing constant for the average of 0.4 and a smoothing constant for the trend of 0.2. The forecast for month 1 is 1500 units and the trend for month 1 is 200 units.

Month   Demand   Forecast                       Trend                                 FIT
1       1500     1500                           200                                   1700
2       2200     0.4(1500) + 0.6(1700) = 1620   0.2(1620 - 1500) + 0.8(200) = 184     1804
3       2700
4       4200
5       7800
6       5400
7
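The trend-adjusted recursion can be sketched in Python (function name and return convention are mine, not the handout's):

```python
def trend_adjusted_forecast(demand, alpha, beta, f1, t1):
    """Exponential smoothing with trend adjustment; FITt = Ft + Tt."""
    F, T = [f1], [t1]
    for actual in demand:   # each pass produces the next period's F and T
        f_new = alpha * actual + (1 - alpha) * (F[-1] + T[-1])
        t_new = beta * (f_new - F[-1]) + (1 - beta) * T[-1]
        F.append(f_new)
        T.append(t_new)
    FIT = [f + t for f, t in zip(F, T)]
    return F, T, FIT

demand = [1500, 2200, 2700, 4200, 7800, 5400]
F, T, FIT = trend_adjusted_forecast(demand, alpha=0.4, beta=0.2, f1=1500, t1=200)
print(round(F[1]), round(T[1]), round(FIT[1]))   # 1620 184 1804
```

FIT[6] is the month-7 forecast the worksheet asks for.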

[Figure: Bakery Gingerbread Man Sales - Demand, Forecast, Trend, and FIT series (Sales vs Period, periods 1-7)]

Least Squares Trend Projection Example

ŷ = a + bx

where:
ŷ = computed value of the variable to be predicted (i.e. the dependent variable)
a = y-axis intercept
b = slope of the regression line
x = independent variable

We can determine a and b with the equations:

b = (Σxy - n·x̄·ȳ) / (Σx² - n·x̄²)
a = ȳ - b·x̄

Example: The demand for Gingerbread Men is shown in the table. Forecast demand for period 7 by fitting a straight-line trend to the data.

Month (x)   Demand (y)   x²   xy
1           1500         1    (1)(1500) = 1500
2           2200         4    (2)(2200) = 4400
3           2700
4           4200
5           7800
6           5400
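As a numeric check on the least-squares arithmetic, here is a short Python sketch (the code and function name are mine, not the handout's):

```python
def least_squares(x, y):
    """Return intercept a and slope b for y-hat = a + b*x."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    b = (sum(xi * yi for xi, yi in zip(x, y)) - n * x_bar * y_bar) / \
        (sum(xi ** 2 for xi in x) - n * x_bar ** 2)
    a = y_bar - b * x_bar
    return a, b

months = [1, 2, 3, 4, 5, 6]
demand = [1500, 2200, 2700, 4200, 7800, 5400]
a, b = least_squares(months, demand)
print(round(a), round(b))   # 187 1080
print(round(a + b * 7))     # 7747  (trend forecast for month 7)
```

With these data the slope works out to exactly 1080, so the month-7 trend projection is about 7747 units.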

Σx =        Σy =        Σx² =        Σxy =

x̄ = Σx / n =        ȳ = Σy / n =

b = (Σxy - n·x̄·ȳ) / (Σx² - n·x̄²) =

a = ȳ - b·x̄ =

Thus, our trend equation is:  ŷ = ____ + ____ x

To calculate the forecast for month 7, set x = 7:  ŷ = ____

Seasonal Forecast Example

Average Monthly Demand = Σ Average Annual Demand / 12 months

Seasonal Index = Average Annual Demand / Average Monthly Demand

Example: The demand for gingerbread men over the past three years is shown in the table. If we expect the total yearly demand in 2002 to be 45,000 units, what will be our forecasted monthly demands in 2002?

Month   1999   2000   2001   Average Annual Demand
1       1100   1300   1500   (1100 + 1300 + 1500) / 3 = 1300
2       1800   2000   2200
3       2300   2500   2700
4       3800   4000   4200
5       4500   4700   4900
6       5000   5200   5400
7       5500   5700   5900
8       4800   5000   5200

9       3000   3200   3400
10      2200   2400   2600
11      1500   1700   1900
12      1200   1400   1600

Average Monthly Demand = ____

Seasonal Index for January = 1300 / ____ = ____

Forecast for January 2002 = (45000 / 12) × ____ = ____

Regression Analysis Example

ŷ = a + bx

b = (Σxy - n·x̄·ȳ) / (Σx² - n·x̄²)
a = ȳ - b·x̄

Example: We think that there may be a relationship between park attendance and the number of gingerbread men sold. Data for the first six months are shown in the table. Forecast the number of gingerbread men that will be sold in month 7 if monthly park attendance is forecast as 25,000 people.

Month   Attendance (x) ('000)   Sales (y)   x²   xy
1       8                       1500        64   (8)(1500) = 12000
2       12                      2200
3       14                      2700
4       18                      4200
5       19                      7800
6       22                      5400
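Before moving on, the seasonal-index bookkeeping above can be checked with a short Python sketch (the data layout and names are my own choices):

```python
# Monthly gingerbread demand for 1999, 2000, and 2001, keyed by month number.
history = {
    1: (1100, 1300, 1500),  2: (1800, 2000, 2200),  3: (2300, 2500, 2700),
    4: (3800, 4000, 4200),  5: (4500, 4700, 4900),  6: (5000, 5200, 5400),
    7: (5500, 5700, 5900),  8: (4800, 5000, 5200),  9: (3000, 3200, 3400),
    10: (2200, 2400, 2600), 11: (1500, 1700, 1900), 12: (1200, 1400, 1600),
}

avg_demand = {m: sum(v) / len(v) for m, v in history.items()}    # per-month average
avg_monthly = sum(avg_demand.values()) / 12                      # overall monthly average
seasonal_index = {m: d / avg_monthly for m, d in avg_demand.items()}

expected_2002 = 45000
forecast = {m: (expected_2002 / 12) * idx for m, idx in seasonal_index.items()}
print(round(avg_demand[1]))    # 1300
print(round(forecast[1]))      # 1496
```

With the handout's data the overall average monthly demand is about 3258, giving a January index of about 0.40 and a January 2002 forecast of about 1496 units.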

Σx =        Σy =        Σx² =        Σxy =

x̄ = Σx / n =        ȳ = Σy / n =

b = (Σxy - n·x̄·ȳ) / (Σx² - n·x̄²) =

a = ȳ - b·x̄ =

Thus, our regression equation is:  ŷ = ____ + ____ x

To calculate the forecast for month 7:  ŷ = ____

Standard Error of Estimate Example

S(y,x) = √[ (Σy² - aΣy - bΣxy) / (n - 2) ]

Example: Compute the Standard Error of the Estimate for our regression analysis example.

Correlation Coefficient Example

r = (nΣxy - Σx·Σy) / √[ (nΣx² - (Σx)²)(nΣy² - (Σy)²) ]

Example: Compute the Correlation Coefficient of the data in our regression analysis example.
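All three worksheets (regression line, standard error of estimate, correlation coefficient) can be checked together with one Python sketch (the code and variable names are mine, not the handout's):

```python
import math

attendance = [8, 12, 14, 18, 19, 22]                 # x, in thousands
sales      = [1500, 2200, 2700, 4200, 7800, 5400]    # y

n = len(attendance)
sx, sy = sum(attendance), sum(sales)
sxx = sum(x * x for x in attendance)
syy = sum(y * y for y in sales)
sxy = sum(x * y for x, y in zip(attendance, sales))
x_bar, y_bar = sx / n, sy / n

b = (sxy - n * x_bar * y_bar) / (sxx - n * x_bar ** 2)
a = y_bar - b * x_bar

s_yx = math.sqrt((syy - a * sy - b * sxy) / (n - 2))    # standard error of estimate
r = (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

print(round(a + b * 25))   # 7572  (forecast sales at 25,000 attendance)
print(round(r, 2))         # 0.83
```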

Introduction to Linear Regression

Author(s): David M. Lane
Prerequisites: Measures of Variability, Describing Bivariate Data

Learning Objectives
1. Define linear regression
2. Identify errors of prediction in a scatter plot with a regression line

In simple linear regression, we predict scores on one variable from the scores on a second variable. The variable we are predicting is called the criterion variable and is referred to as Y. The variable we are basing our predictions on is called the predictor variable and is referred to as X. When there is only one predictor variable, the prediction method is called simple regression. In simple linear regression, the topic of this section, the predictions of Y when plotted as a function of X form a straight line.

The example data in Table 1 are plotted in Figure 1. You can see that there is a positive relationship between X and Y. If you were going to predict Y from X, the higher the value of X, the higher your prediction of Y.

Table 1. Example data.

X      Y
1.00   1.00
2.00   2.00
3.00   1.30
4.00   3.75
5.00   2.25

Figure 1. A scatter plot of the example data.

Linear regression consists of finding the best-fitting straight line through the points. The best-fitting line is called a regression line. The black diagonal line in Figure 2 is the regression line and consists of the predicted score on Y for each possible value of X. The vertical lines from the points to the regression line represent the errors of prediction. As you can see, the red point is very near the regression line; its error of

prediction is small. By contrast, the yellow point is much higher than the regression line and therefore its error of prediction is large.

Figure 2. A scatter plot of the example data. The black line consists of the predictions, the points are the actual data, and the vertical lines between the points and the black line represent errors of prediction.

The error of prediction for a point is the value of the point minus the predicted value (the value on the line). Table 2 shows the predicted values (Y') and the errors of prediction (Y - Y'). For example, the first point has a Y of 1.00 and a predicted Y (called Y') of 1.21. Therefore, its error of prediction is -0.21.

Table 2. Example data.

X      Y      Y'      Y - Y'   (Y - Y')²
1.00   1.00   1.210   -0.210   0.044
2.00   2.00   1.635    0.365   0.133
3.00   1.30   2.060   -0.760   0.578
4.00   3.75   2.485    1.265   1.600
5.00   2.25   2.910   -0.660   0.436
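The Y' column of Table 2 can be reproduced with a few lines of Python (my sketch; the slope and intercept formulas are the standard least-squares ones derived later in this section):

```python
# Fit the least-squares line to the Table 1 data and reproduce Table 2.
X = [1.00, 2.00, 3.00, 4.00, 5.00]
Y = [1.00, 2.00, 1.30, 3.75, 2.25]

n = len(X)
mx, my = sum(X) / n, sum(Y) / n
b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
A = my - b * mx

predicted = [b * x + A for x in X]                  # the Y' column
errors = [y - yp for y, yp in zip(Y, predicted)]    # the Y - Y' column
sse = sum(e ** 2 for e in errors)                   # sum of squared errors

print(round(b, 3), round(A, 3))   # 0.425 0.785
print(round(sse, 3))              # 2.791
```

It recovers the line Y' = 0.425X + 0.785 and the Table 2 sum of squared errors, 2.791.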

You may have noticed that we did not specify what is meant by "best-fitting line." By far the most commonly used criterion for the best-fitting line is the line that minimizes the sum of the squared errors of prediction. That is the criterion that was used to find the line in Figure 2. The last column in Table 2 shows the squared errors of prediction. The sum of the squared errors of prediction shown in Table 2 is lower than it would be for any other regression line.

The formula for a regression line is

Y' = bX + A

where Y' is the predicted score, b is the slope of the line, and A is the Y intercept. The equation for the line in Figure 2 is

Y' = 0.425X + 0.785

For X = 1, Y' = (0.425)(1) + 0.785 = 1.21. For X = 2, Y' = (0.425)(2) + 0.785 = 1.635.

COMPUTING THE REGRESSION LINE

In the age of computers, the regression line is typically computed with statistical software. However, the calculations are relatively easy, and are given here for anyone who is interested. The calculations are based on the statistics shown in Table 3. MX is the mean of X, MY is the mean of Y, sX is the standard deviation of X, sY is the standard deviation of Y, and r is the correlation between X and Y.

Table 3. Statistics for computing the regression line.

MX   MY     sX      sY      r
3    2.06   1.581   1.072   0.627

The slope (b) can be calculated as follows:

b = (0.072)/1.bMX. You can see from the figure that there is a strong positive relationship.785 Note that the calculations have all been shown in terms of sample statistics rather than population parameters. The regression equation is University GPA' = (0. A REAL EXAMPLE The case study "SAT and College GPA" contains high school and university grades for 105 computer science majors at a local state school. We now consider how we could predict a student's university GPA if we knew his or her high school GPA. a student with a high school GPA of 3 would be predicted to have a university GPA of . and the correlation.(0.78. The correlation is 0.14 b = r sY/sX and the intercept (A) can be calculated as A = MY .675)(High School GPA) + 1.097 Therefore. standard deviations. STANDARDIZED VARIABLES The regression equation is simpler if variables are standardized so that their means are equal to 0 and standard deviations are equal to 1. Figure 3 shows a scatter plot of University GPA as a function of High School GPA.581 = 0. The formulas are the same.06 .425 A = 2.627)(1. This makes the regression line: ZY' = (r)(ZX) where ZY' is the predicted standard score for Y. for then b = r and A = 0.425)(3) = 0. For these data. Note that the slope of the regression equation for standardized variables is r. simply use the parameter values for means. r is the correlation. and ZX is the standardized score for X.

University GPA' = (0.675)(3) + 1.097 = 3.12.

Figure 3. University GPA as a function of High School GPA.

ASSUMPTIONS