
Regression In Excel

Contents

 Regression Model
 Regression Analysis in Excel
 Simple Linear Regression
 Correlation
 How To Do A Regression in Excel
 Slope
 Intercept
 ANOVA
 References
Regression Model

 A multiple regression model is:

y = β1 + β2x2 + β3x3 + u

such that:
 y is the dependent variable
 x2 and x3 are independent variables
 β1 is the constant (intercept)
 β2 and β3 are regression coefficients

 It is assumed that the error u is independent with constant variance.

 We wish to estimate the regression line:

ŷ = b1 + b2x2 + b3x3
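The least-squares estimates b1, b2, b3 that Excel's Regression tool produces come from solving the normal equations XᵀXb = Xᵀy. As a language-agnostic illustration (not part of the original slides), here is a minimal Python sketch of that computation; the data are made up so that the points lie exactly on a plane and the coefficients are recovered exactly.

```python
def fit_multiple_regression(x2, x3, y):
    """Least-squares estimates (b1, b2, b3) for y = b1 + b2*x2 + b3*x3 + u,
    found by solving the normal equations X'X b = X'y."""
    X = [[1.0, a, b] for a, b in zip(x2, x3)]
    # Build X'X (3x3) and X'y (3x1)
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    A = [row[:] + [t] for row, t in zip(XtX, Xty)]
    n = 3
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    # Back substitution
    b = [0.0] * n
    for r in range(n - 1, -1, -1):
        b[r] = (A[r][n] - sum(A[r][c] * b[c] for c in range(r + 1, n))) / A[r][r]
    return b

# Hypothetical data lying exactly on y = 2 + 3*x2 + 0.5*x3
x2 = [1, 2, 3, 4, 5]
x3 = [2, 1, 4, 3, 5]
y = [2 + 3 * a + 0.5 * b for a, b in zip(x2, x3)]
b1, b2, b3 = fit_multiple_regression(x2, x3, y)
```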
Regression Analysis in Excel
 We do this using the Data Analysis add-in (Analysis ToolPak) and its Regression tool.

 Example:

[Screenshots: the example data and the Regression dialog, followed by the resulting regression output]

 The regression output has three components:


 Regression statistics table
 ANOVA table
 Regression coefficients table.
Interpreting Regression Statistics Table
Regression Statistics

 The standard error here refers to the estimated standard deviation of the error term u.
 It is sometimes called the standard error of the regression. It equals sqrt(SSE/(n−k)).
 It is not to be confused with the standard error of y itself (from descriptive statistics) or with the standard errors of the regression coefficients given below.
 R2 = 0.8025 means that 80.25% of the variation of yi around its mean is explained by the regressors x2i and x3i.

 The regression output of most interest is the following table of coefficients and associated output:

 Let βj denote the population coefficient of the jth regressor (intercept, HH SIZE and CUBED HH SIZE). Then:

 Column "Coefficient" gives the least squares estimates of βj.
 Column "Standard error" gives the standard errors (i.e., the estimated standard deviations) of the least squares estimates bj of βj.
 Column "t Stat" gives the computed t-statistic for H0: βj = 0 against Ha: βj ≠ 0. This is the coefficient divided by the standard error. It is compared to a t with (n−k) degrees of freedom, where here n = 5 and k = 3.
 Column "P-value" gives the p-value for the test of H0: βj = 0 against Ha: βj ≠ 0. This equals Pr{|t| > t-Stat}, where t is a t-distributed random variable with n−k degrees of freedom and t-Stat is the computed value of the t-statistic given in the previous column. Note that this p-value is for a two-sided test. For a one-sided test, divide this p-value by 2 (also checking the sign of the t-Stat).
 Columns "Lower 95%" and "Upper 95%" define a 95% confidence interval for βj.

 A simple summary of the previous output is that the fitted line is:

y = 0.8966 + 0.3365x + 0.0021z
Regression and Correlation
Techniques that are used to establish whether there is a mathematical relationship between two or more variables, so that the behavior of one variable can be used to predict the behavior of others. Applicable to "variables" (continuous) data only.
• "Regression" provides a functional relationship (Y = f(X)) between the variables; the function represents the "average" relationship.
• "Correlation" tells us the direction and the strength of the relationship.

The analysis starts with a scatter plot of Y vs. X.
Simple Linear Regression
 What is it?
Determines if Y depends on X and provides a math equation for the relationship (continuous data).

 Examples:
Process conditions and product properties
Sales and advertising budget

[Figure: scatter of y vs. x with several candidate lines — does Y depend on X? Which line is correct?]
Simple Linear Regression
[Figure: a line on X–Y axes, showing rise, run, and the intercept b]

m = slope = rise / run

b = Y intercept = the Y value at the point where the line intersects the Y axis.

A simple linear relationship can be described mathematically by
Y = mX + b
Simple Linear Regression
[Figure: a line plotted on axes with X from 0 to 10]

slope = rise / run = (6 − 3) / (10 − 4) = 1/2
intercept = 1

Y = 0.5X + 1
Simple Regression Example

 An agent for a residential real estate company in a large city would like to predict the monthly rental cost for apartments based on the size of the apartment as defined by square footage. A sample of 25 apartments in a particular residential neighborhood was selected to gather the information.

 The data on size and rent for the 25 apartments will be analyzed in EXCEL.

Size Rent
850 950
1450 1600
1085 1200
1232 1500
718 950
1485 1700
1136 1650
726 935
700 875
956 1150
1100 1400
1285 1650
1985 2300
1369 1800
1175 1400
1225 1450
1245 1100
1259 1700
1150 1200
896 1150
1361 1600
1040 1650
755 1200
1000 800
1200 1750
Scatter Plot
[Scatter plot: Rent (y-axis, 500–2500) vs. Size (x-axis, 500–2100)]
Scatter plot suggests that there is a ‘linear’ relationship between Rent and Size
Interpreting EXCEL output

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

Regression Equation
Rent = 177.121+1.065*Size
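The coefficients Excel reports can be reproduced outside Excel from the same 25 size/rent pairs. A minimal Python sketch (standard library only), using the closed-form least-squares formulas b = Sxy/Sxx and a = ȳ − b·x̄:

```python
from statistics import mean

# The 25 apartments from the data slide (Size in sq ft, Rent in $)
sizes = [850, 1450, 1085, 1232, 718, 1485, 1136, 726, 700, 956, 1100, 1285,
         1985, 1369, 1175, 1225, 1245, 1259, 1150, 896, 1361, 1040, 755, 1000, 1200]
rents = [950, 1600, 1200, 1500, 950, 1700, 1650, 935, 875, 1150, 1400, 1650,
         2300, 1800, 1400, 1450, 1100, 1700, 1200, 1150, 1600, 1650, 1200, 800, 1750]

xbar, ybar = mean(sizes), mean(rents)
# Least-squares slope b = S_xy / S_xx, intercept a = ybar - b * xbar
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(sizes, rents))
sxx = sum((x - xbar) ** 2 for x in sizes)
slope = sxy / sxx
intercept = ybar - slope * xbar
# Prediction for a 1000 sq ft apartment (used on a later slide)
predicted_rent_1000 = intercept + slope * 1000
```

The slope and intercept come out as roughly 1.065 and 177.1, matching the Excel output, and the prediction for 1000 square feet is about $1242.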
Interpretation of the Regression Coefficient

 What does the coefficient of Size mean?

For every additional square foot, Rent goes up by $1.065 (on average).
Using Regression for Prediction

 Predict monthly rent when apartment size is 1000 square feet:

Regression Equation:
Rent = 177.121 + 1.065*Size

Thus, when Size = 1000:
Rent = 177.121 + 1.065*1000 = $1242 (rounded)
Using Regression for Prediction – Caution!

 The regression equation is valid only over the range over which it was estimated!
 We should interpolate:
 Do not use the equation to predict Y when the X values are not within the range of data used to develop the equation.
 Extrapolation can be risky:
 Thus, we should not use the equation to predict rent for an apartment whose size is 500 square feet, since this value is not in the range of size values used to create the regression equation.
Why Extrapolation is Risky

[Figure: sample data on the interval 2.5–4.0 with the fitted line, the extrapolated relationship, and the true (nonlinear) relationship diverging outside the sample]

In this figure, we fit our regression model using sample data, but the linear relation implicit in our regression model does not hold outside our sample! By extrapolating, we are making erroneous estimates!
Correlation (r)

 The "correlation coefficient", r, is a measure of the strength and the direction of the relationship between two variables. Values of r range from +1 (very strong direct relationship), through 0 (no relationship), to −1 (very strong inverse relationship). It measures the degree of scatter of the points around the "least squares" regression line.
Coefficient of Correlation from EXCEL

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

The sign of r is the same as that of the coefficient of X (Size) in the regression equation (in our case the sign is positive). Also, if you look at the scatter plot, you will note that the sign should be positive.

r = 0.85 suggests a fairly 'strong' correlation between size and rent.

Coefficient of Determination (r2)

 The "coefficient of determination", r-squared (sometimes R-squared), defines the amount of the variation in Y that is attributable to variation in X.
Getting r2 from EXCEL

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

It is important to remember that r-squared is always positive. It is the square of the correlation coefficient r. In our case, r2 = 0.72 suggests that 72% of the variation in Rent is explained by the variation in Size. The higher the value of r2, the better the simple regression model.
Standard Error (SE)
 Standard error measures the variability or scatter of the observed values around the regression line.

[Scatter plot: Rent ($) vs. Size (square feet), with the fitted regression line]
Getting the Standard Error (SE) from EXCEL

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

In our example, the standard error associated with estimating rent is $194.60.
Is the Simple Regression Model Statistically Valid?

 It is important to test whether the regression model developed from sample data is statistically valid.
 For simple regression, we can use 2 approaches to test whether the coefficient of X is equal to zero:
1. using the t-test
2. using ANOVA
Is the coefficient of X equal to zero?

 In both cases, the hypothesis we test is:

H0: Slope = 0
H1: Slope ≠ 0

What could we say about the linear relationship between X and Y if the slope were zero?
Using Coefficient Information for Testing if Slope = 0
SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%
Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

(Note: P-value = 7.52E-08 = 7.52×10^-8 = 0.0000000752)

t-stat = 7.740 and P-value = 7.52E-08. The P-value is very small. If it is smaller than our α level, then we reject the null; not otherwise. If α = 0.05, we would reject the null and conclude that the slope is not zero. The same result holds at α = 0.01 because the P-value is smaller than 0.01. Thus, at the 0.05 (or 0.01) level, we conclude that the slope is NOT zero, implying that our model is statistically valid.
Using ANOVA for testing if slope=0 in EXCEL

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

F = 59.91376 and P-value = 7.51833E-08. The P-value is again very small. If it is smaller than our α level, then we reject the null; not otherwise. Thus, at the 0.05 (or 0.01) level, the slope is NOT zero, implying that our model is statistically valid. This is the same conclusion we reached using the t-test.
Confidence Interval for the Slope of Size

SUMMARY OUTPUT

Regression Statistics
Multiple R 0.85
R Square 0.72
Adjusted R Square 0.71
Standard Error 194.60
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 2268776.545 2268776.545 59.91376452 7.51833E-08
Residual 23 870949.4547 37867.3676
Total 24 3139726

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 177.121 161.004 1.100 0.282669853 -155.942 510.184
Size 1.065 0.138 7.740 7.51833E-08 0.780 1.350

The 95% CI tells us that for every 1 square foot increase in apartment Size, Rent will increase by between $0.78 and $1.35.
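The interval endpoints come from b ± t*·SE(b), where t* is the 97.5th percentile of the t distribution with n − 2 = 23 degrees of freedom (about 2.069, a standard table value). A small sketch, taking the coefficient and its standard error from the Excel output above:

```python
b, se_b = 1.065, 0.138   # slope and its standard error, from the Excel output
t_crit = 2.069           # t(0.975, df=23), from a t table

lower = b - t_crit * se_b  # about 0.78
upper = b + t_crit * se_b  # about 1.35
```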
How To Do A Regression In Excel
1. Enter the data into a spreadsheet.
2. Tools / Data Analysis / Regression.
3. Enter the dependent variable in the "y" column and the independent variable (or variables) in the "x" columns.
4. Indicate where the output should go (the 1st cell in the range works).
5. The basic regression is done. (You may need to widen columns.)
1. Construct a scatterplot to see if the data look linear: click Chart Wizard.
2. Select Scatter and the markers-only chart sub-type.
3. Highlight only the cells that contain the x and y values.
4. Enter a chart title and label the x and y axes.
5. Store the chart on a new worksheet and name the worksheet.
6. Clean up the chart: click on the grey background, on any horizontal gridline, and on the legend, pressing Delete for each.
7. Right mouse click on any data point, select "Add Trendline", and select Linear from the trendline options.
8. The plot looks linear, so return to the original worksheet.
9. Go to the Tools menu, select Data Analysis, then select Regression.
10. Highlight the cells of the y-variable, then the cells of the x-variable. Check Labels (if the first row has labels). Check Confidence Level for other than 95% intervals (and change the %). Click New Worksheet Ply and give it a name.

[Screenshot: the output worksheet, annotated with r, r2, adj r2, s, n, SSR, SSE, SSTOTAL, the 95% and 99% confidence intervals for β1, and the fitted line ŷ = 46486.49 + 52.56757x; the low p-value indicates the linear model is OK]
Slope

 Returns the slope of the linear regression line through data points in known_y's and known_x's. The slope is the vertical distance divided by the horizontal distance between any two points on the line, which is the rate of change along the regression line.
 Syntax
 SLOPE(known_y's, known_x's)
 Known_y's is an array or cell range of numeric dependent data points.
 Known_x's is the set of independent data points.
INTERCEPT

 Calculates the point at which a line will intersect the y-axis by using existing x-values and y-values. The intercept point is based on a best-fit regression line plotted through the known x-values and known y-values. Use the INTERCEPT function when you want to determine the value of the dependent variable when the independent variable is 0 (zero). For example, you can use the INTERCEPT function to predict a metal's electrical resistance at 0°C when your data points were taken at room temperature and higher.
 Syntax
 INTERCEPT(known_y's, known_x's)
 Known_y's is the dependent set of observations or data.
 Known_x's is the independent set of observations or data.
The basic ANOVA situation

 Two variables: 1 categorical, 1 quantitative.

 Main question: Does the mean of the quantitative variable depend on which group (given by the categorical variable) the individual is in?

 If the categorical variable has only 2 values:
• 2-sample t-test

 ANOVA allows for 3 or more groups.

An example ANOVA situation

 Subjects: 25 patients with blisters
 Treatments: Treatment A, Treatment B, Placebo
 Measurement: # of days until blisters heal

 Data [and means]:
• A: 5, 6, 6, 7, 7, 8, 9, 10 [7.25]
• B: 7, 7, 8, 9, 9, 10, 10, 11 [8.875]
• P: 7, 9, 9, 10, 10, 10, 11, 12, 13 [10.11]

 Are these differences significant?
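The F test that ANOVA performs on these data can be reproduced step by step. A Python sketch (standard library only) computing SSG, SSE, and F for the blister data, following the between/within formulas developed in the later slides:

```python
from statistics import mean

groups = {
    "A": [5, 6, 6, 7, 7, 8, 9, 10],
    "B": [7, 7, 8, 9, 9, 10, 10, 11],
    "P": [7, 9, 9, 10, 10, 10, 11, 12, 13],
}
all_values = [v for g in groups.values() for v in g]
grand_mean = mean(all_values)

# Between-group (SSG) and within-group (SSE) sums of squares
ssg = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
sse = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)

dfg = len(groups) - 1                  # I - 1
dfe = len(all_values) - len(groups)    # n - I
f_stat = (ssg / dfg) / (sse / dfe)     # MSG / MSE
```

The results match the ANOVA output shown later in the deck: SSG ≈ 34.74, SSE ≈ 59.26, F ≈ 6.45.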


Informal Investigation

 Graphical investigation:
• side-by-side box plots
• multiple histograms

 Whether the differences between the groups are significant depends on:
• the difference in the means
• the standard deviations of each group
• the sample sizes

 ANOVA determines the P-value from the F statistic.


Side by Side Boxplots
[Side-by-side boxplots of days (y-axis, roughly 5–13) by treatment (A, B, P)]
What does ANOVA do?
At its simplest (there are extensions), ANOVA tests the following hypotheses:

 H0: The means of all the groups are equal.
 Ha: Not all the means are equal.

 ANOVA doesn't say how or which ones differ.
 We can follow up with "multiple comparisons".

 Note: we usually refer to the sub-populations as "groups" when doing ANOVA.
Assumptions of ANOVA

 Each group is approximately normal
 check this by looking at histograms and/or normal quantile plots, or use assumptions
 can handle some non-normality, but not severe outliers
 Standard deviations of each group are approximately equal
 rule of thumb: ratio of largest to smallest sample st. dev. must be less than 2:1
Normality Check

 We should check for normality using:
• assumptions about the population
• histograms for each group
• a normal quantile plot for each group

 With such small data sets, there really isn't a good way to check normality from the data, but we make the common assumption that physical measurements of people tend to be normally distributed.
Standard Deviation Check

Variable treatment N Mean   Median StDev
days     A         8 7.250  7.000  1.669
         B         8 8.875  9.000  1.458
         P         9 10.111 10.000 1.764

 Compare the largest and smallest standard deviations:
• largest: 1.764
• smallest: 1.458
• 1.458 × 2 = 2.916 > 1.764, so the 2:1 rule of thumb is satisfied.

 Note: a variance ratio of 4:1 is the equivalent check.

Notation for ANOVA

• n = number of individuals all together
• I = number of groups
• x̄ = mean for the entire data set

Group i has:
• ni = # of individuals in group i
• xij = value for individual j in group i
• x̄i = mean for group i
• si = standard deviation for group i
How ANOVA works (outline)

 ANOVA measures two sources of variation in the data and compares their relative sizes:

• variation BETWEEN groups: for each data value, look at the difference between its group mean and the overall mean, (x̄i − x̄)²

• variation WITHIN groups: for each data value, look at the difference between that value and the mean of its group, (xij − x̄i)²

The ANOVA F-statistic is the ratio of the between-group variation to the within-group variation:

F = Between / Within = MSG / MSE

 A large F is evidence against H0, since it indicates that there is more difference between groups than within groups.
How are These Computations Made?

 We want to measure the amount of variation due to BETWEEN group variation and WITHIN group variation.

 For each data value, we calculate its contribution to:
• BETWEEN group variation: (x̄i − x̄)²
• WITHIN group variation: (xij − x̄i)²
An Even Smaller Example

 Suppose we have three groups:
• Group 1: 5.3, 6.0, 6.7
• Group 2: 5.5, 6.2, 6.4, 5.7
• Group 3: 7.5, 7.2, 7.9

We get the following statistics:

SUMMARY
Groups   Count Sum  Average  Variance
Column 1 3     18   6        0.49
Column 2 4     23.8 5.95     0.176667
Column 3 3     22.6 7.533333 0.123333
Excel ANOVA Output

ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 5.127333 2 2.563667 10.21575 0.008394 4.737416
Within Groups 1.756667 7 0.250952

Total 6.884 9

 df annotations: Between Groups df = 1 less than the number of groups; Within Groups df = number of data values − number of groups (equals the df for each group added together); Total df = 1 less than the number of individuals (just like other situations).
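The same step-by-step computation reproduces Excel's numbers for this three-group example. A Python sketch (standard library only):

```python
from statistics import mean

groups = [[5.3, 6.0, 6.7], [5.5, 6.2, 6.4, 5.7], [7.5, 7.2, 7.9]]
values = [v for g in groups for v in g]
grand = mean(values)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)

df_between = len(groups) - 1        # 2
df_within = len(values) - len(groups)  # 7
f = (ss_between / df_between) / (ss_within / df_within)
```

`ss_between`, `ss_within`, and `f` match Excel's 5.127333, 1.756667, and 10.21575.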
Computing ANOVA F statistic
WITHIN difference: data − group mean | BETWEEN difference: group mean − overall mean
data  group  group mean  plain  squared  |  plain  squared
5.3 1 6.00 -0.70 0.490 -0.4 0.194
6.0 1 6.00 0.00 0.000 -0.4 0.194
6.7 1 6.00 0.70 0.490 -0.4 0.194
5.5 2 5.95 -0.45 0.203 -0.5 0.240
6.2 2 5.95 0.25 0.063 -0.5 0.240
6.4 2 5.95 0.45 0.203 -0.5 0.240
5.7 2 5.95 -0.25 0.063 -0.5 0.240
7.5 3 7.53 -0.03 0.001 1.1 1.188
7.2 3 7.53 -0.33 0.109 1.1 1.188
7.9 3 7.53 0.37 0.137 1.1 1.188
TOTAL 1.757 5.106
TOTAL/df 0.25095714 2.55275

 overall mean: 6.44;  F = 2.55275 / 0.25096 ≈ 10.17 (Excel's F = 10.21575 differs slightly because the group means were rounded in this hand computation)


ANOVA Output
Analysis of Variance for days
Source DF SS MS F P
treatment 2 34.74 17.37 6.45 0.006
Error 22 59.26 2.69
Total 24 94.00

 DF annotations: treatment DF = 1 less than the # of groups; Error DF = # of data values − # of groups (equals the df for each group added together); Total DF = 1 less than the # of individuals (just like other situations).
ANOVA Output
Analysis of Variance for days
Source DF SS MS F P
treatment 2 34.74 17.37 6.45 0.006
Error 22 59.26 2.69
Total 24 94.00

SS stands for sum of squares. ANOVA splits the total variation into parts:

SST = Σobs (xij − x̄)²   (Total)
SSG = Σobs (x̄i − x̄)²   (treatment, between groups)
SSE = Σobs (xij − x̄i)²  (Error, within groups)
ANOVA Output
Analysis of Variance for days
Source DF SS MS F P
treatment 2 34.74 17.37 6.45 0.006
Error 22 59.26 2.69
Total 24 94.00

 MSG = SSG / DFG
 MSE = SSE / DFE
 F = MSG / MSE; the P-value comes from the F(DFG, DFE) distribution.

 (P-values for the F statistic are in Table E)
So How Big is F?

 Since F is Mean Square Between / Mean Square Within = MSG / MSE, a large value of F indicates relatively more difference between groups than within groups (evidence against H0).

 To get the P-value, we compare to the F(I−1, n−I) distribution:
• I − 1 degrees of freedom in the numerator (# groups − 1)
• n − I degrees of freedom in the denominator (the rest of the df)
Connections between SST, MST, and Standard Deviation

 If we ignore the groups for a moment and just compute the standard deviation of the entire data set, we see

s² = Σ(xij − x̄)² / (n − 1) = SST / DFT = MST

 So SST = (n − 1)s², and MST = s². That is, SST and MST measure the TOTAL variation in the data set.
Connections between SSE, MSE, and Standard Deviation

 Remember:

si² = Σj (xij − x̄i)² / (ni − 1) = SS[Within Group i] / dfi

 So SS[Within Group i] = (si²)(dfi).

 This means that we can compute SSE from the standard deviations and sizes (df) of each group:

SSE = SS[Within] = Σ SS[Within Group i] = Σ si²(ni − 1) = Σ si²(dfi)
Pooled Estimate for Standard Deviation
One of the ANOVA assumptions is that all groups have the same standard deviation. We can estimate this with a weighted average:

sp² = [(n1 − 1)s1² + (n2 − 1)s2² + … + (nI − 1)sI²] / (n − I)

sp² = [(df1)s1² + (df2)s2² + … + (dfI)sI²] / (df1 + df2 + … + dfI)

sp² = SSE / DFE = MSE, so MSE is the pooled estimate of variance.
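This identity can be checked numerically on the blister data. A Python sketch (standard library only) computing the pooled variance both ways:

```python
from statistics import mean, variance

groups = [[5, 6, 6, 7, 7, 8, 9, 10],
          [7, 7, 8, 9, 9, 10, 10, 11],
          [7, 9, 9, 10, 10, 10, 11, 12, 13]]

# Weighted average of the group sample variances (weights = df_i = n_i - 1)
num = sum((len(g) - 1) * variance(g) for g in groups)
den = sum(len(g) - 1 for g in groups)   # DFE = n - I
pooled_var = num / den

# The same quantity computed as SSE / DFE
sse = sum((v - mean(g)) ** 2 for g in groups for v in g)
mse = sse / den
```

`pooled_var` and `mse` agree exactly (about 2.69, matching the MS Error in the ANOVA output).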

In Summary

SST = Σobs (xij − x̄)² = s²(DFT)

SSE = Σobs (xij − x̄i)² = Σgroups si²(dfi)

SSG = Σobs (x̄i − x̄)² = Σgroups ni (x̄i − x̄)²

SSE + SSG = SST;  MS = SS / DF;  F = MSG / MSE
R2 Statistic

 R² gives the percent of variance due to between-group variation:

R² = SS[Between] / SS[Total] = SSG / SST

 We will see R² again when we study regression.
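For the blister data, R² = SSG/SST = 34.74/94.00 ≈ 0.37, i.e. about 37% of the variation in healing days is between-group. A Python sketch (standard library only):

```python
from statistics import mean

groups = [[5, 6, 6, 7, 7, 8, 9, 10],
          [7, 7, 8, 9, 9, 10, 10, 11],
          [7, 9, 9, 10, 10, 10, 11, 12, 13]]
values = [v for g in groups for v in g]
grand = mean(values)

ssg = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # between-group SS
sst = sum((v - grand) ** 2 for v in values)                 # total SS
r_squared = ssg / sst
```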


Where’s the Difference?

Once ANOVA indicates that the groups do not all appear to have the same means, what do we do?

Analysis of Variance for days
Source    DF SS    MS    F    P
treatment 2  34.74 17.37 6.45 0.006
Error     22 59.26 2.69
Total     24 94.00

Individual 95% CIs For Mean, Based on Pooled StDev
Level N Mean   StDev  ----------+---------+---------+------
A     8 7.250  1.669  (-------*-------)
B     8 8.875  1.458          (-------*-------)
P     9 10.111 1.764                  (------*-------)
                      ----------+---------+---------+------
Pooled StDev = 1.641            7.5       9.0       10.5

 Clearest difference: P is worse than A (the CIs don't overlap).

Multiple Comparisons

 Once ANOVA indicates that the groups do not all have the same means, we can compare them two by two using the 2-sample t test.

• We need to adjust our p-value threshold because we are doing multiple tests with the same data.
• There are several methods for doing this.
• If we really just want to test the difference between one pair of treatments, we should set the study up that way.
Tukey’s Pairwise Comparisons

Tukey's pairwise comparisons

95% confidence
Family error rate = 0.0500
Individual error rate = 0.0199
Critical value = 3.55
(Use alpha = 0.0199 for each test; these give 98.01% CIs for each pairwise difference.)

Intervals for (column level mean) − (row level mean)

        A        B
B   -3.685
     0.435

P   -4.863   -3.238
    -0.859    0.766

Only P vs A is significant (both endpoints of its interval have the same sign). The 98% CI for A − P is (−4.86, −0.86).
Tukey’s Method in R

 Tukey multiple comparisons of means
 95% family-wise confidence level

      diff      lwr     upr
B-A 1.6250 -0.43650 3.6865
P-A 2.8611  0.85769 4.8645
P-B 1.2361 -0.76731 3.2395
Forecasting: Basic Time Series Decomposition in Excel

 Forecast method 1 – Guess
 Forecast method 2 – Linear Regression
 Forecast method 3 – Time Series Decomposition (TSD)
References

 http://www.wikihow.com/Run-Regression-Analysis-in-Microsoft-Excel
 http://office.microsoft.com/en-001/excel-help/slope-HP005209264.aspx
 http://office.microsoft.com/en-in/excel-help/intercept-HP005209143.aspx
 http://capacitas.wordpress.com/2013/01/14/forecasting-basic-time-series-decomposition-in-excel/
THANK YOU