ABSTRACT
by
Susan J. Canavan
EVALUATION OF SALES FORECASTING METHODS:
A CASE STUDY
by
Susan J. Canavan
A Thesis
Submitted to the Faculty of
New Jersey Institute of Technology
in Partial Fulfillment of the Requirements for the Degree of
Master of Science in Management
May 1997
APPROVAL PAGE
Susan J. Canavan
Undergraduate Education:
• Master of Science in Management, New Jersey Institute of Technology, Newark, New Jersey, 1997
• Bachelor of Science in Business Administration, Ramapo College of New Jersey, Mahwah, New Jersey, 1995
Major: Management
This thesis is dedicated to my wonderful parents for their
continuous encouragement and support.
ACKNOWLEDGEMENT
TABLE OF CONTENTS
Chapter
1 INTRODUCTION
2 LITERATURE REVIEW
3 OBJECTIVE
4 RESEARCH METHODOLOGY
5.4 Summary
8 REFERENCES
LIST OF TABLES
CHAPTER 1
INTRODUCTION
future demand for products and services and the resources that
the actual need for the product. They also are concerned with
CHAPTER 2
LITERATURE REVIEW
They agree that there is no one best method that can predict
Technique Description
Qualitative Methods
with the level of accuracy they can achieve and the validity
causal.
Granger (1974) offers support for this view. They also noted
concluded that Gross and Ray's results were only valid when
forecast error.
found that the naive approach was most popular with 30.6
the most reliable results for analysis of data with low noise
difficult to understand.
methods when using micro data. Although the opposite was found
was used. Dalrymple (1987) reported that the use of the sales
objective methods.
occurring.
two models was 7.2 percent. When five models were used the
al. (1986) stated that the simple average was less time
method.
1984).
the mean percentage error (MPE), the mean error (ME), mean
been mixed when these two methods are compared. For example,
CHAPTER 3
OBJECTIVE
Moriarty and Adams 1984, Moriarty 1985, Zbib and Savoie 1989).
individual products.
CHAPTER 4
RESEARCH METHODOLOGY
Hypothesis 1 (H01):
Combination of forecasts from several quantitative methods
Hypothesis 2 (H02):
In general, objective forecasting methods are superior to
series methods.
Hypothesis 3 (H03):
Combination of forecasts from quantitative and subjective
The data used in the study consist of actual monthly sales for
three products. For each product, thirty data sets were used
trend.
4.4 Forecasting Methods
2)/Gardner's(4-2).
Simple Average:
by Pegels (1969).
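The smoothing methods cited in this section (Pegels' classification and Gardner's extensions) all build forecasts from weighted averages of past observations. As an illustrative sketch only, not the models fitted in this study, simple exponential smoothing can be written in Python as:

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the next-period forecast is a
    weighted blend of the latest observation and the previous forecast."""
    level = series[0]                  # initialize with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level                       # one-step-ahead forecast

# Made-up monthly sales, not data from this study:
print(round(ses_forecast([100, 120, 110], alpha=0.5), 1))  # 110.0
```

Pegels' and Gardner's variants extend this recursion with trend and seasonal terms; the smoothing constant alpha here is an arbitrary illustrative choice.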
CHAPTER 5
findings.
their overall performance for the three time series using all
Acc.        Model
Measure     M2     (R)    M3     (R)    M5     (R)
Data Set 1
  MAE       6833    2     6308    1     7015    3
  MAPE      15.8    2     14.4    1     16.36   3
  RMSE      9610    2     9295    1     9614    3
Data Set 2
  MAE       3841    2     3495    1     4121    3
  MAPE      20.1    2     17.8    1     21.4    3
  RMSE      5213    2     4921    1     5374    3
Data Set 3
  MAE       4912    2     4561    1     5924    3
  MAPE      15.8    2     14.7    1     19.2    3
  RMSE      6812    2     6530    1     8048    3
Mean Ranks          2             1             3
Note: R = Rank
MAE = Mean Absolute Error
MAPE = Mean Absolute Percentage Error
RMSE = Root Mean Squared Error
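The three accuracy measures used throughout these tables are standard and easy to compute directly. A minimal Python sketch, with illustrative numbers rather than the study's data:

```python
import math

def mae(actual, forecast):
    """Mean Absolute Error: average of the absolute forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent of actual sales."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root Mean Squared Error: penalizes large errors more than MAE."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

actual = [100, 120, 90, 110]    # made-up monthly sales
forecast = [95, 125, 100, 105]  # made-up forecasts
print(round(mae(actual, forecast), 2))  # 6.25
```

Because the tables rank each model on all three measures, a model must do well on both average error (MAE, MAPE) and large-error sensitivity (RMSE) to earn a low mean rank.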
performance. Presented are the MAE, MAPE, and RMSE scores for each technique.
Acc.        Model
Measure     M2     (R)    M3     (R)    M5     (R)    C1     (R)
Data Set 1
  MAE       6833    3     6308    2     7015    4     6095    1
  MAPE      15.8    3     14.4    2     16.4    4     13.0    1
  RMSE      9610    3     9295    2     9614    4     5592    1
Data Set 2
  MAE       3841    3     3495    2     4121    4     2975    1
  MAPE      20.1    3     17.8    2     21.4    4     12.0    1
  RMSE      5213    3     4921    2     5374    4     3864    1
Data Set 3
  MAE       4912    3     4561    2     5924    4     3134    1
  MAPE      15.8    3     14.7    2     19.2    4     11.0    1
  RMSE      6812    3     6530    2     8048    4     3589    1
Mean Ranks          3             2             4             1
Note: R = Rank
MAE = Mean Absolute Error
MAPE = Mean Absolute Percentage Error
RMSE = Root Mean Squared Error
C1 = Combination 1 (combining all three models)
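Combination 1 takes an equal-weight average of the three models' forecasts in each period, and the combined series is then scored with the same accuracy measures. A minimal sketch, with made-up forecast series rather than the study's:

```python
def combine(*model_forecasts):
    """Equal-weight (simple average) combination of several models'
    forecasts, period by period."""
    return [sum(vals) / len(vals) for vals in zip(*model_forecasts)]

# Hypothetical three-period forecasts from three models:
m2 = [100.0, 110.0, 120.0]
m3 = [ 98.0, 112.0, 118.0]
m5 = [105.0, 108.0, 125.0]
c1 = combine(m2, m3, m5)
print(c1)  # [101.0, 110.0, 121.0]
```

The simple average is the least demanding combination scheme; weighted schemes are possible, but the literature reviewed above suggests equal weights are hard to beat in practice.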
generated by time series models. The MAE, MAPE and RMSE from
Acc.        Model
Measure     M2     (R)    M3     (R)    M5     (R)    S      (R)
Data Set 1
  MAE       6833    2     6308    1     7015    3     13157   4
  MAPE      15.8    2     14.4    1     16.4    3     28.4    4
  RMSE      9610    2     9295    1     9614    3     45232   4
Data Set 2
  MAE       3841    2     3495    1     4121    3     6953    4
  MAPE      20.1    2     17.8    1     21.4    3     32.2    4
  RMSE      5213    2     4921    1     5374    3     76716   4
Data Set 3
  MAE       4912    2     4561    1     5924    3     17621   4
  MAPE      15.8    2     14.7    1     19.2    3     51.4    4
  RMSE      6812    2     6530    1     8048    3     45818   4
Mean Ranks          2             1             3             4
Note: R = Rank
MAE = Mean Absolute Error
MAPE = Mean Absolute Percentage Error
RMSE = Root Mean Squared Error
S = Subjective Forecasts
Acc.        Model
Measure     M3     (R)    S      (R)    C2     (R)
Data Set 1
  MAE       6308    1     13157   3     8619    2
  MAPE      14.4    1     28.4    3     18.0    2
  RMSE      9295    1     45232   3     31809   2
Data Set 2
  MAE       3495    1     6953    3     4770    2
  MAPE      17.8    1     32.2    3     20.6    2
  RMSE      4921    1     76716   3     5329    2
Data Set 3
  MAE       4561    1     17621   3     8363    2
  MAPE      14.7    1     51.4    3     31.0    2
  RMSE      6530    1     45818   3     42982   2
Mean Ranks          1             3             2
Note: R = Rank
MAE = Mean Absolute Error
MAPE = Mean Absolute Percentage Error
RMSE = Root Mean Squared Error
M3 = Model 3 (Quantitative)
S = Subjective Forecasts
C2 = Combination 2 (M3 and S)
that the subjective method was much less accurate than Model
5.4 Summary
firm or industry.
products.
The results of this study also show the benefits that can
forecasts.
three.
forecasting.
the results. The fact that the findings of this study are
characteristics are.
literature.
APPENDIX A
DATASET1.CGEL RAW DATA
APPENDIX B
DATASET1.CGEL MODEL M2
APPENDIX C
DATASET1.CGEL MODEL M3
APPENDIX D
DATASET1.CGEL MODEL M5
APPENDIX E
DATASET2.BUFF RAW DATA
APPENDIX F
DATASET2.BUFF MODEL M2
APPENDIX G
DATASET2.BUFF MODEL M3
APPENDIX H
DATASET2.BUFF MODEL M5
APPENDIX I
DATASET4.CBRM RAW DATA
APPENDIX J
DATASET4.CBRM MODEL M2
APPENDIX K
DATASET4.CBRM MODEL M3
APPENDIX L
DATASET4.CBRM MODEL M5
REFERENCES