
Appendix 4 / Answers to Review Questions

The importance of the weighting is twofold. First, petroleum products are all
made from crude oil and are to some extent substitutable as far as the producer
is concerned. An index should reflect the average price of every gallon of petroleum product purchased. Only the weighted index does this. Second, products
such as car petrol, of which large quantities are purchased, will have a bigger
effect on the general public than products, such as kerosene, for which only
small amounts are purchased. Again, the index should reflect this.
(d) A differently constructed index would use different weightings. The price part of
the calculation could be changed by using price relatives but this would have
little effect since the prices are close together.
The different weightings that could be used are:
(i) Most recent year quantity weighting. But this would imply a change in his-
torical index values every year.
(ii) Average quantity for all years’ weighting. This would not necessarily mean
historical changes every year. The average quantities for 2012–14 could be
used in the future. This has the advantage that it guards against the chosen
base year being untypical in any way.

Marking Scheme (out of 20) Marks


(a) Simple index (1 mark off for calculating error) 5
(b) Weighted index (1 mark off for calculating error) 5
(c) Which to use 5
(d) Other indices
– Not changing the price element 1
– Sensible variations in the weighting method 4
Total 20

Module 6

Review Questions
6.1 The correct answer is C. Sampling is necessary because it is quicker and easier than
measuring the whole population while little accuracy is lost. Statement A is not true
because it is not always impossible to take population measurements, although it is
usually difficult. Statement B is untrue because sampling is always less accurate since
fewer observations/measurements are made.
6.2 The correct answer is A. Many sampling methods are based on random selection for
two reasons. First, it helps to make the sample more representative (although it is
unlikely to make it totally representative). Second, it enables the use of statistical
procedures to calculate the range of accuracy of any estimates made from the
sample. A is therefore a correct reason, while B, C and D are incorrect.
6.3 The correct answer is B. Starting top left, the first number is 5; therefore, the first
region chosen is SE England. Moving across the row, the second number is 8 and

Quantitative Methods Edinburgh Business School A4/31



the corresponding region is Scotland. The third number is 5, which is ignored, since
SE England is already included. In this situation, having the same region repeated in
the sample would not make sense. Consequently, we sample without replacement.
The fourth number is 0 and is also ignored since it does not correspond to any
region. The fifth number is 4 and so London completes the sample.
6.4 The correct answer is B, C. Multi-stage sampling has two advantages over simple
random sampling. The population is divided into groups, then each group into
subgroups, then each subgroup into subsubgroups, etc. A random sample of groups
is taken, then for each group selected a random sample of its subgroups is selected
and so on. Therefore, it is not necessary to list the whole population and advantage
B is valid. Since the observations/measurements/interviews of sample elements are
restricted to a few sectors (often geographical) of the population, time and effort
can be saved, as, for example, in opinion polls. Advantage C is therefore also valid.
Multi-stage sampling is solely a way of collecting the sample. Once collected, the
sample is treated as if it were a simple random sample. Its accuracy and the observa-
tions required are therefore just the same as for simple random sampling. Reasons A
and D are false.
6.5 The correct answer is A, C. In stratified sampling the population is split into
sections on the basis of some characteristic (e.g. management status in the absentee-
ism survey). The sample has to have the same sections in the same proportions as
the population (e.g. if there are 23 per cent skilled workers in the population, the
sample has to have 23 per cent skilled workers). In respect of management status,
therefore, the sample is as representative as it can be. In respect of other characteris-
tics (e.g. length of service with the company) the stratified sample is in the same
position as the simple random sample (i.e. its representativeness is left to chance).
Thus a stratified sample is usually more representative than a simple random one
but not necessarily so. Statement A is true.
A cluster sample can also be stratified by making each cluster have the same
proportions of the stratifying characteristics as the population. Statement B is
untrue.
If a total sample of 100 is required, stratification will probably mean that more than
100 elements must be selected. Suppose 23 per cent skilled workers are required and
that, by the time the sample has grown to 70, 23 skilled workers have already been
selected. Any further skilled workers chosen cannot be used. To get a sample of
100, therefore, more than 100 elements will have had to be selected. Only if the cost
of selection is negligible will a stratified sample be as cheap as a simple random
sample. Statement C is true.
6.6 The correct answer is B. Use of a variable sampling fraction means that one section
of the population is deliberately over-represented in the sample. This is done when
the section in question is of great importance. It is over-sampled to minimise the
likelihood of there being error because only a few items from this section have been
measured. Incidentally, D describes weighted sampling.
6.7 The correct answer is A, B, C. Non-random sampling is used in all three situations.




Convenience sampling may be used because random sampling is impossible (e.g. with a blood sample). Systematic sampling and convenience sampling both sometimes give a sample that is as good as random (e.g. every tenth name from an alphabetical list). Quota sampling is used to overcome interviewer bias.
6.8 The correct answer is C. The essential difference between stratification and quota sampling is that the strata in a stratified sample are intended to mimic the population, whereas the quota sizes can be set according to any criteria deemed suitable.
While quota sampling is usually seen in connection with interviewing, this is not an
essential characteristic. Reason A, therefore, does not always apply. Similarly, B does
not name an essential difference. For example, an interviewer could have used a
random method to make selections within the quota.
6.9 The correct answer is A, C, F. The forest was split into geographical areas because
of the difficulty of listing each tree individually. A random sample of areas was
taken: area sampling was therefore used. Since the areas were classified as sloping or level, and since the final sample had 20 per cent sloping areas and 80 per cent level, just like the population, stratification was also used. Since sampling was at two levels, areas and trees, multi-stage sampling was used.
6.10 The correct answer is A. Accuracy is proportional to the square root of the sample size. In the first case, the sample size is 25 (√25 = 5); in the second case, the sample size is 400 (√400 = 20). In the second case the accuracy is 20/5 = 4 times greater. The average height is thus measured to ±12/4 = ±3 cm.

Case Study 6.1: Business School Alumni


1 A question about the randomness of a sample can only have meaning after the
population it is supposed to represent has been clearly defined. In this case, the
population is the set of all graduates of the school. Do the 1200 usable replies
constitute a random sample of this population?
The sample is not random. Two alphabetical lists are sampled systematically. Every
twentieth name is taken from each after a random start. Although this selection is
random, non-randomness comes in the following ways:
(a) The two lists do not constitute the entire population. There are other graduates
whose addresses are not known. The reasons for their addresses being unknown
are many. Perhaps they did not like the school and did not want to maintain
contact, or they have moved around a lot, or they feel their careers have not
been as successful as those of their peers. The omission of these will bias the
sample.
(b) Not everyone who was sent a questionnaire replied. Only 1200 replies were
received. (We do not know how many were sent but questionnaires often have
response rates as low as 10 or 20 per cent.) What are the reasons for not reply-
ing? Again, they could be many, including a reluctance to report failure, or
relative failure. The omission of these non-replying graduates from the sample is
likely to result in bias.




Marking Scheme (out of 10) Marks


Marks awarded for answering
– that the selection procedure is random 2
– that the whole population has not been sampled 2
– that the omission of graduates whose addresses are not known could lead to bias 2
– that non-response results in non-randomness 2
– that the reasons for non-response mean the sample is likely to be biased 2
Total 10

Case Study 6.2: Clearing Bank


1
(a) Even when all the records are computerised, it is still time-consuming and
expensive to analyse the accounts of several million customers. In addition, with
14 regions each with its own computer, a substantial amount of work is needed
to bring the results of the regions together in compatible form. Sampling cuts
down the accounts to be analysed and the regions to be dealt with.
(b) Two factors affect sample size: the accuracy required and the cost of collecting
the sample. One needs to look at the purposes to which the information is to be
put and try to establish minimum accuracy levels, e.g. ±£75 for average account
profitability, ±£40 transactions for average transaction volume, etc. Each accu-
racy implies a sample size. The largest of these would be the minimum sample
size. The cost of collecting this sample is then estimated and tested against the
budget. By iterative procedures, one hopes to arrive at a sample size through a
consensus of those involved that balances accuracy and expense.
(c) Multi-stage sampling is recommended. Regions are sampled, then branches, then
customers. The regions are chosen at random. At the stage of choosing branch-
es, stratified sampling is used so that the sample contains the same percentage of
each branch size as the population. Systematic sampling can be used for the
account holders. Since they are listed in chronological order, a systematic sample
will in effect be random. The sampling scheme would be as follows:
(i) By a simple random method select three of the 14 regions. The choice of
three is arbitrary to some extent. However, one or two would probably
give an insufficient geographical coverage, whereas more than three
means that one is perhaps dealing with more regions than necessary.
(ii) For each of the three regions, list its branches in three groups according
to size. Select by a simple random or systematic method, two large
branches, two medium and two small. The sample now has similar pro-
portions to the population as shown below.




Size
Small Medium Large
Sample % 33 33 33
Population % 34 34 32

In total we have a sample of 18 branches. As with choosing three regions, the six branches per region are chosen to give representation without being unwieldy.
(iii) For each of the 18 branches, obtain a computer listing of chequebook ac-
counts. If a sample of accounts was taken such that an equal number,
2000/18 = 111, was taken from each of the 18 lists, then large branch
customers would be under-represented (since there are more customers at
large branches than small). To overcome this problem, the sample taken
from each size of branch must be in proportion to the number of ac-
counts at that size of branch. Since large branches average 5000 accounts,
medium 2500 and small 1000, accounts from large branches must be
5000/(5000 + 2500 + 1000) of the total (i.e. 10/17). Similarly, medium
branches should be 5/17, and small 2/17. Since 2000/3 = 667 accounts
come from each region:
10/17 × 667 = 392 should be from large branches (i.e. 196 per branch);
5/17 × 667 = 196 should be from medium branches (i.e. 98 per branch);
2/17 × 667 = 78 should be from small branches (i.e. 39 per branch).
Because of rounding, the total sample size is now 1998. The choice of
2000 for the total sample size was not sufficiently precise for this discrep-
ancy to matter.
From the computer list for each branch, select by systematic sampling a
sample of the appropriate size. For example, if a large branch had 6000
accounts in all, then, after a random start, every thirtieth name would be
taken to give a sample of the right size.
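The proportional allocation in step (iii) can be sketched as follows (a Python illustration; the account numbers per branch size come from the case text, and the function name is hypothetical):

```python
def allocate_sample(region_total, accounts_per_branch_size):
    """Share a regional sample among branch sizes in proportion
    to the number of accounts held at each size of branch."""
    total = sum(accounts_per_branch_size.values())
    return {size: round(region_total * n / total)
            for size, n in accounts_per_branch_size.items()}

# Average accounts per branch: large 5000, medium 2500, small 1000.
sizes = {"large": 5000, "medium": 2500, "small": 1000}
print(allocate_sample(667, sizes))  # {'large': 392, 'medium': 196, 'small': 78}
```

Dividing each figure by the two branches of that size per region reproduces the 196, 98 and 39 accounts per branch quoted above.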
(d) To compare the profitability of the accounts of customers from different
socioeconomic groups requires the stratification of the customers at stage (iii)
(above) of the procedure. Suppose 10 per cent of the bank’s chequebook cus-
tomers are known to be of the AB socioeconomic group: 10 per cent of each
sample at branch level must be from this group (i.e. 20 of the large branch sample,
ten of the medium and four of the small). The sampling would continue until
each socioeconomic category had the correct number of accounts.
In order to do stratified sampling of this type, two sets of information are re-
quired:
(i) The percentages of customers in each socioeconomic group (e.g. ten
per cent of all the bank’s customers are from the AB group, etc.). A
further sampling exercise might provide this information.
(ii) Knowledge of the occupation of each customer. Socioeconomic classi-
fication can be done from other information, but in any case the ability
to carry out this last part of the exercise does depend upon the availa-
bility of some personal details for each customer.
(e) The major practical problems are likely to be:




(i) Some small percentage of accounts will have been closed between the
time of compilation of the computer listings and use of the infor-
mation, thus reducing the sample size and the accuracy. The original
sample may have to be increased to allow for this.
(ii) Some accounts will be dormant (i.e. open but rarely, or never, used). A
decision to include or exclude these accounts must be made. It is usual-
ly made after consideration of the purposes to which the information is
being put.
(iii) To know the occupation of customers requires visiting branches, since
this information may not be computerised. This is time-consuming and
requires the establishment of a working relationship with the bank
manager. It also requires permission to breach the confidentiality be-
tween customers and manager.
(iv) The personal details may well be out of date, since such information is
only occasionally updated.
(v) It may not be possible to classify some account holders into a socioec-
onomic group. For example, the customer may have been classified as a
schoolchild seven years ago. This is a problem of non-response. Omit-
ting such customers from the sample will lead to bias. Extra work must
be done to find the necessary details.

Marking Scheme (out of 20) Marks


(a) Reasons for taking sample
– Reducing effort/computer time 1
– Reducing complication of dealing with all regions 1
(b) Factors influencing sample size
– Accuracy required by user 1
– Cost of sampling 1
(c) For suggesting
– Multi-stage sampling (first regions, then branches, then customers) 2
– Stratification of branches 2
– Systematic sampling of accounts (1 mark if simple random suggested) 2
For getting the numbers right
– 2/4 regions, 10/30 branches 1
– Different sample size from different size of branch 1
(d) For suggesting stratification by socioeconomic group 2
For realising extra information is required
– % customers in each group 1
– Occupation of account holder 1
(e) Practical difficulties
– Need to allow for closed or dormant accounts 1




– Additional order of difficulty in establishing socioeconomic groups (i.e. branches have to be visited) 1
– Out-of-date personal details 1
– Non-response problem where account holders are classified into 1
socioeconomic groups
Total 20

Module 7

Review Questions
7.1 The correct answer is B. The probabilities are calculated from the formula:
Probability = frequency of the event/total number of observations
This is the relative frequency method.


7.2 The correct answer is A. Even though the situation that generates the data appears
to be normal, the distribution is still observed because the probabilities were
calculated from frequencies. Had the data been used to estimate parameters from
which probabilities were to be found in conjunction with statistical tables, the
resulting distribution would have been normal.
7.3 The correct answer is False. The amounts owed are not strictly a continuous
variable since they are measured in currency units. They are not measured, for
example, in tiny fractions of pennies.
7.4 The correct answer is A. The addition rule for mutually exclusive events gives:
P(A or B) = P(A) + P(B)
7.5 The correct answer is C. The multiplication rule gives:
P(A and B) = P(A) × P(B)
7.6 The correct answer is C. The basic multiplication rule as given relates to
independent events. They may not be independent. For example, spells of bad
weather are likely to prevent patients attending. The fact that there were no cancella-
tions on day 1 might indicate a spell of good weather and a higher probability of no
cancellations on the following day.
7.7 The correct answer is D. The number of ways of choosing three objects from eight is 8!/(3! × 5!) = 56.
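The count can be verified directly with Python's standard library:

```python
import math

# 8C3: the number of ways of choosing three objects from eight.
print(math.comb(8, 3))  # 56
```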

7.8 The correct answer is B. Knowledge about standard distributions (e.g. for the
normal ±2 standard deviations covers 95 per cent of the distribution) is available
rather than having to be calculated directly from data.




A is not correct since some small amount of data has to be collected to check that
the standard distribution is applicable and to calculate parameters.
C is not necessarily correct. Standard distributions are approximations to actual
situations and may not lead to greater accuracy.
7.9 The correct answer is B. The population is split into two types: watchers and non-
watchers. A random sample of 100 is taken from this population. The number of
watchers per sample therefore has a binomial distribution.
7.10 The correct answer is True. Since the programme is described as being popular, the
proportion of people viewing (p) is likely to be sufficiently high (perhaps about 0.3) so that np and n(1 − p) are both greater than 5. The normal approximation to the
binomial can therefore be applied.
7.11 The correct answer is D. The population can be split into two types: those that have
heard and those that have not. A random sample of five is taken from this population. The underlying distribution is therefore binomial with p = 0.4 and n = 5. The binomial formula is:
P(r) = nCr × p^r × (1 − p)^(n−r)
Thus the required probability is found by substituting the appropriate value of r.

Or, Table A1.1 could have been used.
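The full set of binomial probabilities for n = 5 and p = 0.4 can also be generated rather than read from Table A1.1 (a Python sketch; the function name is illustrative):

```python
import math

def binom_pmf(r, n=5, p=0.4):
    """Probability of exactly r successes in n independent trials."""
    return math.comb(n, r) * p**r * (1 - p) ** (n - r)

# Probabilities for r = 0, 1, ..., 5; they sum to 1.
distribution = {r: round(binom_pmf(r), 4) for r in range(6)}
print(distribution)
```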


7.12 The correct answer is A. The average per clerk per day is 190. There are 12 clerks.
The total per day is therefore 190 × 12 = 2280.
7.13 The correct answer is C. Since the distribution is normal, 68 per cent of clerks will
clear a number of dockets in the range:
Mean ±1 standard deviation
= 190 ±25
= 165 to 215
32 per cent of clerks will clear a number of dockets outside this range. Since a
normal distribution is symmetrical, half of these (16 per cent) will clear fewer than
165. Likewise, 16 per cent will clear more than 215.
16% of 12 = 1.92
Approximately two clerks will clear more than 215 dockets per day.
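The 16 per cent figure comes from the one-standard-deviation rule of thumb; the exact tail area can be checked with Python's standard library (`statistics.NormalDist`, available from Python 3.8):

```python
from statistics import NormalDist

dockets = NormalDist(mu=190, sigma=25)

# Fraction of clerks clearing more than 215 dockets
# (one standard deviation above the mean); roughly 16 per cent.
tail = 1 - dockets.cdf(215)
print(round(tail * 12, 2))  # expected number of clerks out of 12
```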
7.14 The correct answer is A. There is a 95 per cent probability that the number of
dockets cleared by any clerk on any day will lie within the range covered by 95 per
cent of the distribution. This range is, for a normal distribution, the mean ±2
standard deviations.
The range is 190 ± 2 × 25 = 140 to 240.




Case Study 7.1: Examination Grades


1 Since the normal distribution is continuous, a grade of 85 is represented by the
range 84.5–85.5. We are therefore looking for the probability of a mark exceeding
85.5.
(a) This is 15.7 away from the mean (85.5 − 69.8). In terms of standard deviations, this is:
z = 15.7/11.6
= 1.35
From Table A1.2 and Figure A4.6, for z = 1.35 the shaded area A is 0.4115.
Area B = 0.5 − 0.4115
= 0.0885
Therefore 8.85 per cent of students should exceed a mark of 85 per cent.


Figure A4.6 Examination grades


(b) 40 (represented by 39.5–40.5) is 30.3 (69.8 − 39.5) away from the mean, so:
z = 30.3/11.6
= 2.61
Referring to the normal curve Table A1.2 in Appendix 1, the area between the mean and z = 2.61 is 0.4955. Therefore 0.5 − 0.4955 = 0.0045, or 0.45 per cent of students should get less than 40.
(c) 50 (represented by 49.5–50.5) is 20.3 (69.8 − 49.5) away from the mean, so:
z = 20.3/11.6
= 1.75
Referring to the normal curve Table A1.2 in Appendix 1, the area between the mean and z = 1.75 is 0.4599. The proportion of students failing is 0.5 − 0.4599 = 0.0401. In a class of 180, this means 0.0401 × 180 = 7 students.
(d) If 8 per cent of students are awarded distinctions, then the lowest distinction
mark corresponds to the z value associated with the shaded area in Figure A4.7 being equal to 0.42. From Table A1.2 in Appendix 1, z = 1.40 corresponds to an area = 0.4192 and z = 1.41 corresponds to an area = 0.4207. The z value being sought can therefore be estimated at 1.405.
The lowest distinction mark is 69.8 + 1.405 × 11.6 = 86.1. Since a mark of 86
per cent is in the range 85.5–86.5, if no more than 8 per cent of the class are to
be awarded distinctions, the lowest distinction mark is 87 per cent.
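The table look-ups in parts (a) to (c) can be reproduced with `statistics.NormalDist`; small differences from the figures above arise because Table A1.2 rounds z to two decimal places (a sketch):

```python
from statistics import NormalDist

marks = NormalDist(mu=69.8, sigma=11.6)

above_85 = 1 - marks.cdf(85.5)     # (a) proportion scoring above 85
below_40 = marks.cdf(39.5)         # (b) proportion scoring below 40
failures = marks.cdf(49.5) * 180   # (c) expected failures in a class of 180

print(round(above_85, 4), round(below_40, 4), round(failures))
```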





Figure A4.7 Examination grades z value for distinction students

Marking Scheme (out of 20) Marks


(a) Method 4
Correct calculation 1
(b) Method 4
Correct calculation 1
(c) Method 4
Correct calculation 1
(d) Method 4
Correct calculation 1
Total 20

Case Study 7.2: Car Components


1
(a) The population from which samples of size 6 are being taken is the entire
production of these components. The population is split into two types, defec-
tive and non-defective. The binomial distribution is likely to apply to this
situation. The procedure is to assume that the process is operating with 10 per
cent defectives and to ascertain whether the evidence is consistent with this as-
sumption. Thus:
Binomial distribution applies with n = 6 (the sample size) and p = 0.1 (the assumed defective rate).
From Table A1.1 the theoretical probabilities of 0, 1, 2, … defectives per sample are found.




To see whether the results are consistent with an overall 10 per cent defective
rate, the observed results from the 100 samples are compared with the theoreti-
cally expected results calculated above.

Number of defectives 0 1 2 3 4 5 6
Observed no. of samples 52 34 10 4 0 0 0
Theoretical no. of samples 53 35 10 1 0 0 0

There is a close correspondence. The results are consistent with a process defec-
tive rate of 10 per cent. Note that because of rounding the theoretical numbers
of samples do not add up to 100.
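The theoretical row of the table can be regenerated from the binomial formula; rounding to whole samples reproduces the 53, 35, 10, 1 pattern (a Python sketch, with an illustrative function name):

```python
import math

def theoretical_counts(samples, n, p):
    """Expected number of samples containing r defectives, for r = 0..n."""
    return [round(samples * math.comb(n, r) * p**r * (1 - p) ** (n - r))
            for r in range(n + 1)]

# 100 samples of size 6, assuming a 10 per cent defective rate.
print(theoretical_counts(100, 6, 0.1))  # [53, 35, 10, 1, 0, 0, 0]
```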
(b) A first reservation concerning this conclusion is whether the samples were taken
at random. If the samples were only taken at particular times, say at the start of a
shift, it might be that starting-up problems meant that the defective rate at this
time was high. The results would then suggest the overall rate was higher than it
actually is.
Second, if the samples that contain defectives were mostly towards the end of
the time period during which the samples were collected, this might indicate that
the process used to have a defective rate less than 10 per cent but had deteriorat-
ed.
Third, the fact that there are more samples with three defectives than expected,
and fewer with zero and one defective, suggests greater variability in the process
than expected. This might occur because p is not constant at 10 per cent but
varies throughout the shift. The antidote to this problem is either to split the
shift into distinct time periods and take samples from each or to use a more
sophisticated distribution called beta-binomial, which allows for variability in p and which will be described in a later module.

Marking Scheme (out of 20) Marks


(a) Use of correct parameters 3
Correct use of table 4
Comparison of observed/theoretical 4
(b) Random samples 3
Time differences 3
Variability in p 3
Total 20




Case Study 7.3: Credit Card Accounts


1 Expenditure is normally distributed with mean £280 and standard deviation £90.
(a) For £200, z = (200 − 280)/90 = −0.89.
From Table A1.2 in Appendix 1, the area corresponding to z = 0.89 is 0.3133.
Therefore, the area in the lower tail
= 0.5 − 0.3133
= 0.1867
Consequently, approximately 19 per cent of clients are likely to spend less than
£200 per month.
(b) For £300, z = (300 − 280)/90 = 0.22.
From Table A1.2 in Appendix 1, the area corresponding to z = 0.22 is 0.0871.
The range of expenditure £200 to £300 straddles the mean, with z = −0.89 on one side and z = 0.22 on the other. Thus, there is an area on either side of the mean and they must be added together.
Therefore, the required area
= 0.3133 + 0.0871
= 0.4004
Approximately 40 per cent of clients are likely to spend between £200 and £300
per month.
(c) For £400, z = (400 − 280)/90 = 1.33.
From Table A1.2, Appendix 1, the area corresponding to z = 1.33 is 0.4082. This area includes part of the previous expenditure class, that between £280 and £300.
Therefore, the required area
= 0.4082 − 0.0871
= 0.3211
Approximately 32 per cent of clients are likely to spend between £300 and £400
per month.
(d) The area from the mean up to £400 has already been found to be 0.4082.
Therefore, the area in the upper tail
= 0.5 − 0.4082
= 0.0918
Approximately 9 per cent of clients are likely to spend more than £400 per
month.
As a check, the percentages associated with the four expenditure classes should sum
to 100.
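The four percentages, and the check that they sum to 100, can be confirmed with `statistics.NormalDist` (exact values differ slightly from the table-based figures because of rounding in Table A1.2):

```python
from statistics import NormalDist

spend = NormalDist(mu=280, sigma=90)

# The four expenditure classes partition all customers.
shares = {
    "under 200":  spend.cdf(200),
    "200 to 300": spend.cdf(300) - spend.cdf(200),
    "300 to 400": spend.cdf(400) - spend.cdf(300),
    "over 400":   1 - spend.cdf(400),
}
print({k: round(v * 100) for k, v in shares.items()})
```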




Marking Scheme (out of 20) Marks


(a) Method 3
Calculation 2
(b) Method 3
Calculation 2
(c) Method 3
Calculation 2
(d) Method 3
Calculation 2
Total 20

Case Study 7.4: Breakfast Cereals


1 The binomial distribution applies since random samples are being taken from a
population split into two types. The parameters are n = 20 and p = 0.45. From the
binomial probability Table A1.1 in Appendix 1, the probability of there being any
given number of regular users in a sample can be found.

The subtotals of the binomial table (n = 20, p = 0.45) show that the probability of four or fewer regular users is 0.0188 and the probability of 14 or more is 0.0214.
From this table and using the subtotals:
P(5 to 13 regular users) = 1 − 0.0188 − 0.0214 = 0.9598




Therefore, there is a 96 per cent probability that a sample will contain from five to
13 regular users of the breakfast cereal. If many samples are taken, it is likely that 96
per cent of them will contain five to 13 users. Because consumers are counted in
whole numbers, there is no range of users equivalent to the 95 per cent requested in
the question.
The question has been answered, but it has been a lengthy process. Since np (= 9) and n(1 − p) (= 11) are both greater than five, the binomial can be approximated by the normal. The parameters are:
Mean = np = 20 × 0.45 = 9
Standard deviation = √(np(1 − p)) = √(20 × 0.45 × 0.55) = 2.22
For a normal distribution, 95 per cent of it lies between ±2 standard deviations.


Therefore, 95 per cent of samples are likely to be between:
9 (2 × 2.22) and 9 + (2 × 2.22)
4.56 and 13.44
Recall that, because a discrete distribution is being approximated by a continuous
one, a whole number with the binomial is equivalent to a range with the normal. For
example, five users with the binomial corresponds to the range 4.5 to 5.5 with the
normal. In the above calculations, the range 4.56 to 13.44 covers almost (but not
quite) the range five to 13 users.
Approximately, therefore, 95 per cent of the samples are likely to have between five
and 13 users inclusive. This is the same result as obtained by the lengthier binomial
procedure.
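The normal approximation can be sketched in Python; note that this version applies the continuity correction directly (interval 4.5 to 13.5) rather than the ±2 standard deviation shortcut, but both give roughly the same 95–96 per cent:

```python
import math
from statistics import NormalDist

n, p = 20, 0.45
approx = NormalDist(mu=n * p, sigma=math.sqrt(n * p * (1 - p)))

# "5 to 13 users" becomes the interval 4.5 to 13.5 under the
# continuity correction for a discrete distribution.
prob = approx.cdf(13.5) - approx.cdf(4.5)
print(round(prob, 3))
```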

Marking Scheme (out of 20)


If binomial used Marks
– Correct parameters 4
– Use of table 6
– 95% range 6
– Know that normal is an alternative 4
Total 20

If normal approximation used Marks


– Know basic situation is binomial 4
– Applying rule 3
– Calculation of parameters 6
– Continuity correction 3
– 95% range 4
Total 20




Module 8

Review Questions
8.1 The correct answer is C. Statistical inference uses sample information to make
statements about populations. The statements are in the form of estimates or
hypotheses.
8.2 The correct answer is C. Inference is based on sample information. Even though a
sample is random, it may not be representative, and therefore there is some chance
that the inference may be incorrect. The other statements are true but they are not
the reasons for using confidence levels.
8.3 The correct answer is A. The mean of the sample = (7 + 4 + 9 + 2 + 8 + 6 + 8 + 1 + 9)/9 = 6
The variance = [(7 − 6)² + (4 − 6)² + (9 − 6)² + (2 − 6)² + (8 − 6)² + (6 − 6)² + (8 − 6)² + (1 − 6)² + (9 − 6)²]/(9 − 1) = 9
The standard deviation is √9 = 3.
The standard deviation of the distribution of means of sample size 9 is 3/√9 = 1.
8.4 The correct answer is B. The point estimate of the population mean is simply the
sample mean.
8.5 The correct answer is B. The point estimate of the mean is 6. The 90 per cent
confidence limits are (from normal curve tables) 1.645 standard errors either side of
the point estimate. The limits are 6 ± 1.645 × 1 = 4.4 to 7.6 (approximately).
8.6 The correct answer is A. The 95 per cent confidence limits cover a range of 2
standard errors on either side of the mean. A standard error is 150/ where is
the sample size. Thus

8.7 The correct answer is False. Sample evidence does not prove a hypothesis. Because
it is from a sample, it merely shows whether the evidence is statistically significant or
not.
8.8 The correct answer is A. The tester decides on the significance level. He or she may
choose whatever value is thought suitable but 5 per cent has come to be accepted as
the convention. The other statements are true but only after 5 per cent has been
chosen as the significance level.
8.9 The correct answer is True. Critical values are an alternative approach to
significance tests and can be used in both one- and two-tailed tests.
8.10 The correct answer is A. The standard error of the sampling distribution is 6 (= 48/√64). There is no suggestion that any deviation from the hypothesised mean of 0




could be in one direction only. Therefore the test is two-tailed. At the 5 per cent level the critical values are 2 standard errors either side of the mean (i.e. at −12 and 12). Since the observed sample mean is 9.87, at the 5 per cent level the hypothesis is accepted. At the 10 per cent level the critical values are 1.645 standard errors from the mean (i.e. at −9.87 and 9.87). At the 10 per cent level the test is inconclusive.
8.11 The correct answer is C. Since the test is one-tailed at the 5 per cent level, the
critical value is 1.645 standard errors away from the null hypothesis mean. The
critical value is therefore 9.87. For the alternative hypothesis the z value of 9.87 is −1.69 (= (9.87 − 20)/6). The corresponding area in normal curve tables is 0.4545. Since the null hypothesis will be accepted (and the alternative rejected even if true) when the observed sample mean is less than 9.87, the probability of a type 2 error is 0.0455 (= 0.5 − 0.4545; i.e. 4.55 per cent).
8.12 The correct answer is D. The power of the test is the probability of accepting the
alternative hypothesis when it is true. The power is therefore:

Power = 1 − p(type 2 error) = 1 − 0.0455 = 0.9545 (i.e. 95.45 per cent)
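As a numerical cross-check of 8.10–8.12 (a sketch only, using the values stated above: null mean 0, alternative mean 20, standard error 6 = 48/√64), the type 2 error and power can be computed with a normal CDF built from the error function:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

se = 48 / math.sqrt(64)      # standard error = 6
crit = 1.645 * se            # one-tailed 5% critical value above the null mean of 0

# Type 2 error: the null is accepted (sample mean below the critical value)
# even though the alternative hypothesis mean of 20 actually applies.
beta = phi((crit - 20) / se)
power = 1 - beta

print(round(crit, 2), round(beta, 3), round(power, 3))
```

The exact CDF gives a type 2 error a touch above the table-based 0.0455, because the text rounds the z value to 1.69 before the table lookup.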
8.13 The correct answer is C. The test is to determine whether the plea has met its
objective by bringing about an increase of £2500 per month (i.e. whether the
average increase in turnover per branch is £2500 per month). This is equal to £7500
over a three-month period.
8.14 The correct answer is B. The samples being compared are the turnovers before and
after the plea. They are paired in that the same 100 branches are involved. Each
turnover in the first period is paired with the turnover of the same branch in the
second period.
The test is one-tailed. In the circumstances that the plea was well exceeded the
hypothesis would not be rejected. One would only say that the plea had not
succeeded if the observed increase was significantly less than £7500, but not if
significantly more. Therefore only one tail should be considered. The test should
thus be one-tailed, based on paired samples.
8.15 The correct answer is False. The procedure described relates to an unpaired sample
test, not a paired test. A paired test requires a new sample formed from the differ-
ences in each pair of observations.

Case Study 8.1: Food Store


1 The distribution of the means of samples of size 250 is a normal distribution. In
spite of the fact that the distribution of individual amounts due appears (from the
range) to be skewed, the central limit theorem states that the sampling distribution
will be normal. Therefore, at the 95 per cent confidence level, the mean of the one
sample collected must lie within two standard errors of the true population mean
(the average amount outstanding for all overdue accounts). The standard error of
this sampling distribution is £6 (the sample standard deviation given in the case
divided by √250).
The mean of the one sample (£186) must lie within 12 (= 2 × 6) of the true
population mean at the 95 per cent confidence level. Consequently, the true
population mean must be in the range 186 ± 12 = £174 to £198.
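The interval calculation can be sketched in a few lines (values taken from the answer: sample mean £186, standard error £6, 95 per cent limits at ±2 standard errors):

```python
def confidence_interval(sample_mean, standard_error, z=2.0):
    """Return (lower, upper) limits: mean ± z standard errors."""
    return (sample_mean - z * standard_error,
            sample_mean + z * standard_error)

# Sample mean £186 and standard error £6, as in the answer above.
lower, upper = confidence_interval(186, 6)
print(lower, upper)  # 174.0 198.0
```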

Marking Scheme (out of 10) Marks


Central limit theorem 2
Standard error 2
Calculation of standard error 2
95 per cent confidence limits = ±2 standard errors 2
Calculation of final result 2
Total 10

Case Study 8.2: Management Association


1 This is a test of the hypothesis that the new course is no different from the old and
will produce an overall average test score of 242, just as before.
The sample size is 16, but the usually required 30 is not necessary in this case.
First, since the individual distribution of scores was (and is assumed still to be)
normal, the sampling distribution of the mean will be normal whatever the sample
size. Also, since the standard deviation is assumed still to be 52, unchanged from
before, it is not being estimated from the sample. Therefore, the second reason for
needing a sample size greater than 30 does not apply.
The test is two-tailed because the new test could differ from the old by being either
higher or lower.
The significance test follows the usual steps:
(a) The hypothesis is that the new course will still give an overall average test score
of 242.
(b) The sample evidence is the 16 people who have undergone the computer-based
course and achieved an average score of 261.
(c) Choose the conventional significance level of 5 per cent.
(d) The sampling distribution of the mean is normal, with the mean assumed to be
242 (the hypothesis) and the standard deviation equal to 52/√16 = 13.
The z value for the sample result of an average score of 261 is thus:
z = (261 − 242)/13
= 19/13
= 1.46
From the normal curve table given in Appendix 1 (Table A1.2), the associated
area under the curve is 0.4279. The hypothesis is that the new course would not
change the test score. The possibility that the new course could have led to an
improvement or a deterioration was recognised. The probability of the sample
result must therefore be seen as the probability of a result as far from the mean
as z = 1.46 in either direction (a two-tailed test). This probability is
2 × (0.5 − 0.4279) = 0.1442 (i.e. 14.42 per cent).
(e) This result is larger than the significance level of 5 per cent and the hypothesis
must be accepted. There is insufficient evidence to suggest that the new course
makes a significant difference to the test scores at the 5 per cent level.
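The two-tailed test can be reproduced directly (a sketch; the exact normal CDF replaces the table lookup, so the probability differs slightly from the rounded table value):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu0, sigma, n, xbar = 242, 52, 16, 261
se = sigma / math.sqrt(n)          # standard error = 13
z = (xbar - mu0) / se              # 19/13 ~ 1.46
p_two_tailed = 2 * (1 - phi(z))    # both tails, as the course could raise or lower scores

print(round(z, 2), round(p_two_tailed, 4))
```

Since the probability comfortably exceeds 0.05, the hypothesis is accepted, as in the answer.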

Marking Scheme (out of 10) Marks


Correct type of test, correct hypothesis 1
Two-tailed test 2
Knowing that a sample of 16 is adequate and the reasons 3
Standard error formula and calculation 1
Probability of sample evidence 2
Correct conclusion 1
Total 10

Case Study 8.3: Textile Company


1 The hypothesis is that the supplier is sending yarns of an acceptable mean tensile
strength and therefore that the sample of 50 has come from a population of mean
12 kg.
The test is one-tailed since only the possibility of under-strength yarns is of concern
to the management and needs to be considered.
The distribution of strengths is not known, but since the sample size exceeds 30 the
sampling distribution of the mean is normal, by the central limit theorem.
The significance test follows the usual steps:
(a) The hypothesis is that the sample comes from a population of mean 12 kg.
(b) The evidence is the sample of 50 yarns with mean 11.61 kg and standard
deviation 1.48 kg.
(c) Choose the conventional 5 per cent significance level.
(d) The standard error of the sampling distribution of the mean is:

1.48/√50 = 0.21

The z value of the observed sample mean is (11.61 − 12)/0.21 = −1.86.
From the normal curve table in Appendix 1 (Table A1.2), the corresponding area
is 0.4686. The probability of the sample evidence is therefore 0.0314 (= 0.5 −
0.4686) (i.e. 3.14 per cent).
(e) The probability of the sample evidence is less than the 5 per cent significance
level and the hypothesis must be rejected. It does appear that the supplier is
sending yarn of a significantly lower tensile strength.
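The one-tailed test can be checked numerically (a sketch; only the lower tail matters, since only under-strength yarn concerns the management):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu0, s, n, xbar = 12.0, 1.48, 50, 11.61
se = s / math.sqrt(n)        # ~0.21
z = (xbar - mu0) / se        # ~-1.86
p_one_tailed = phi(z)        # lower tail only

print(round(se, 2), round(z, 2), round(p_one_tailed, 4))
```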
The text suggested six possible reservations about significance tests. Only some of
them apply to this case. First, the test result is close to the accept/reject border and
is not fully convincing. It should serve as a warning to check the situation further
rather than to be the basis for contract action. Second, the assumptions underlying
the test must be met. In this case this means that the sample should have been
selected in a truly random fashion. If not, the whole basis of the test is undermined.
Third, a tensile strength of slightly less than 12 kg might be adequate for the cloth
concerned, and it might not be economic to go to the expense of ensuring the
contract is kept to the letter. On the other hand, a tensile strength of, say, 11 kg or
less might have been more serious because the quality of the cloth was noticeably
reduced. This might suggest the adoption of a test that had 11 kg as the alternative
hypothesis.

Marking Scheme (out of 15) Marks


Correct type of test and hypothesis 1
One-tailed test 2
Use of central limit theorem 2
Calculation of standard error 2
Probability of sample evidence 2
Correct conclusion 1
Reservations 5
Total 15

Case Study 8.4: Titan Insurance Company


1
(a) The data are in the form of two paired samples (each output is paired with an
output for that same salesperson in the other sample). The test should be a
paired significance test, based on a new sample formed from the difference in
output for each salesperson.
The hypothesis is that the new scheme has not increased output (i.e. the new
sample has come from a distribution of mean 0).
Because the sample is 30, the central limit theorem suggests that the sampling
distribution of the mean will be normal.
It is assumed that, since the new scheme increases incentives, only an increase in
output is possible. The test is thus one-tailed.
(b) Form the new sample by subtracting the old output from the new as in Table A4.6.

Table A4.6 Titan Insurance: paired sample test

Salesperson   Difference (d)   d − 4   (d − 4)²
1             5                1       1
2             19               15      225
3             −5               −9      81
4             7                3       9
5             0                −4      16
6             13               9       81
7             −3               −7      49
8             −6               −10     100
9             −6               −10     100
10            25               21      441
11            17               13      169
12            21               17      289
13            21               17      289
14            −14              −18     324
15            −7               −11     121
16            19               15      225
17            −7               −11     121
18            −34              −38     1444
19            −7               −11     121
20            13               9       81
21            13               9       81
22            9                5       25
23            −11              −15     225
24            11               7       49
25            18               14      196
26            −19              −23     529
27            8                4       16
28            −7               −11     121
29            9                5       25
30            18               14      196
Total         120              0       5750

Conduct the significance test in five stages:


(i) The hypothesis is that the new sample comes from a population of mean 0.
(ii) The evidence is the new sample of mean 4 and standard deviation 14.08.
(iii) The significance level is 5 per cent.
(iv) The standard error of the sampling distribution of the mean is:

14.08/√30 = 2.57

The z value of the observed sample mean is:

z = 4/2.57 = 1.56

(v) From the normal curve table in Appendix 1 (Table A1.2), the corresponding
area is 0.4406. The probability of such a value is therefore 0.0594 or
5.94 per cent.


This percentage is slightly higher than the significance level. The hypothesis is
accepted (but only just). The new scheme does not give rise to a significant
increase in output.
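The paired test can be checked directly from the differences in Table A4.6 (a sketch; the signs of the differences are as implied by the squared deviations in that table):

```python
import math

# Differences (new output - old output) for the 30 salespeople.
d = [5, 19, -5, 7, 0, 13, -3, -6, -6, 25, 17, 21, 21, -14, -7,
     19, -7, -34, -7, 13, 13, 9, -11, 11, 18, -19, 8, -7, 9, 18]

n = len(d)
mean = sum(d) / n                                           # 4
sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))   # ~14.08
se = sd / math.sqrt(n)                                      # ~2.57
z = mean / se                                               # ~1.56

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p_one_tailed = 1 - phi(z)   # one tail: only an increase in output is envisaged
print(round(mean, 2), round(sd, 2), round(se, 2), round(z, 2), round(p_one_tailed, 4))
```

The probability is just above 5 per cent, matching the marginal acceptance of the hypothesis.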
(c) The six possible reservations listed in the text suggest:
(i) The sample should be collected at random. This means a true random-
based sampling method should have been used to select the salespeople
covering all grades, areas of the country, etc. It also means that the months
used should not be likely to show variations other than those stemming
from the new scheme. For instance, allowance should be made for seasonal
variations in sales.
(ii) Checks should be made that the structure of the test is right. For instance,
does a simple measure like the total sum assured reflect the profitability of
the company? Profitability may have more to do with the mix of types of
policy than total sum assured.
(iii) The potential cost/profit to the company of taking the right decision in
regard to the incentive scheme suggests that more effort could be put into
the significance test. In particular, a larger sample could be taken.
(iv) The balance between the two types of error should be right. It is more im-
portant to know whether the scheme is profitable than to know whether it
gives a significant increase. The test should have sufficient power to distin-
guish between null and alternative hypotheses. This is discussed below.
(d) If the alternative hypothesis is a mean increase of £5000:
(i) Working in £000s, the null hypothesis mean is 0, the alternative hypothesis
mean is 5 and the standard error remains 2.57.
(ii) The critical value of the one-tailed test is 1.645 standard errors from the
mean. The critical value is 4.23 (= 1.645 × 2.57). For the alternative hy-
pothesis, the z value of 4.23 is −0.30 (= (4.23 − 5)/2.57) (see Figure A4.8).
From the normal curve table in Appendix 1 (Table A1.2), the correspond-
ing area is 0.1179. The null hypothesis is accepted (and the alternative
hypothesis is rejected) if the observed sample mean is less than the critical
value, 4.23. A type 2 error is the acceptance of the null hypothesis when the
alternative hypothesis truly applies. Therefore:

p(type 2 error) = 0.5 − 0.1179 = 0.3821

Figure A4.8 Average output increase: type 2 error
[The figure shows the null (mean 0) and alternative (mean 5000) distributions with
the critical value at 4230; the shaded area, p(type 2 error), is 0.3821, and the 5 per
cent tail of the null distribution lies beyond the critical value.]


(iii) The power of the test is the probability of accepting the alternative hypoth-
esis when it truly does apply. This is the complement of p(type 2 error):

Power = 1 − 0.3821 = 0.6179 (i.e. 61.79 per cent)

(e) Note that, although the null hypothesis was accepted, the alternative was more
likely to apply. Under the null hypothesis, p(sample evidence) = 5.94 per cent;
under the alternative hypothesis the z value of the observed sample mean is
−0.39 (= (4 − 5)/2.57) and p(sample evidence) = 34.83 per cent. The problem is
that the power of the test is low. There is a much higher probability of a type 2
error than a type 1 error.
To balance the situation, if p(type 1 error) = p(type 2 error), then the
critical value must be halfway between the means of the null hypothesis distribu-
tion and the alternative hypothesis distribution and also 1.645 standard errors
from both means. The critical value must therefore be at £2500 and (working in
thousands):

1.645 × 14.08/√n = 2.5
√n = 9.26
n = 86 (approximately)

(Note that the original estimate of the standard deviation, 14.08, is still used.)
A sample size of 86 would be able to discriminate between hypotheses in a more
balanced way.
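The sample-size calculation can be sketched as:

```python
import math

# Balanced-error sample size: the critical value sits halfway between the
# null mean (0) and alternative mean (5, working in £000s), 1.645 standard
# errors from each, so 1.645 * sd / sqrt(n) = 2.5.
sd, half_gap, z = 14.08, 2.5, 1.645
n = (z * sd / half_gap) ** 2
print(math.ceil(n))  # 86
```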

Marking Scheme (out of 25) Marks


For each of parts (a) to (e) 5
Total 25


Module 9

Review Questions
9.1 The correct answer is B. A ‘natural’ measurement such as height is a typical example
of a normal distribution. Many small genetic factors presumably cause the varia-
tions. This is highly typical of the sort of situation on which the normal is defined.
9.2 The correct answer is B, C. The binomial formula, with its factorials and powers, is
more difficult to use than the Poisson. Binomial tables extend to many more pages
than the Poisson because the former has virtually one table for each sample size.
A is not a correct reason. If the situation is truly binomial but the Poisson is used,
some accuracy will be lost but the loss will be small if the rule of thumb applies.
9.3 The correct answer is B. The situation looks to be Poisson. Assume this to be the
case. The parameter is equal to the average number of accidents per month: 36/12
= 3. From the Poisson probability table (see Appendix 1, Table A1.3):

Therefore:

Therefore:

9.4 The correct answer is B. The variance is estimated in essentially the same way as the
standard deviation, which is merely the square root of the variance. The same
reasoning that leads to the standard deviation having n − 1 degrees of freedom leads
to the variance having n − 1 degrees of freedom.
9.5 The correct answer is False. In addition to the conditions made in the statement, the
distribution from which the sample is taken must also be normal if the sampling
distribution of the mean is to be a t-distribution.
9.6 The correct answer is D.


9.7 The correct answer is D.

9.8 The correct answer is B. For this test there are 17 degrees of freedom. The test is
one-tailed since it is supposed that the fitness of the executives can only have
improved after the course. The table value is therefore in the row for 17 degrees of
freedom and the column headed 0.05.
9.9 The correct answer is B. The observed t value is greater than the 5 per cent
significance value. Hence it falls within the 5 per cent tail and the probability of the
sample evidence is less than 5 per cent. The fitness of the executives does show a
significant improvement.
9.10 The correct answer is True. Chi-squared is essentially the ratio between a sample
variance and the variance of the population from which it was drawn. It can
therefore be used to test hypotheses such as that described in the question, provided
the population is normal and the sample is selected at random.
9.11 The correct answer is E. The correct value is taken from the row referring to 18
degrees of freedom, and the column referring to an upper tail area 0.10. The critical
chi-squared value is 25.989.
9.12 The correct answer is False. The chi-squared distribution, not the t-distribution, is
applicable in such circumstances.
9.13 The correct answer is D. Find the entry for row 11 and column 8 in the table. The
upper entry refers to the 5 per cent tail and the lower to the 1 per cent tail. The
correct answer is 2.95.
9.14 The correct answer is True. The observed F ratio is 96/24 = 4.0. The 1 per cent
critical F ratio is 3.80, taken from the row corresponding to 14 degrees of freedom
in the denominator and the column corresponding to 12 degrees of freedom in the
numerator. The observed F therefore does exceed the 1 per cent critical value.
9.15 The correct answer is C. Essentially the distribution is binomial. For such a low
value of p and large value of n, the distribution can and would be approximated by
the Poisson, which is easier to use in practice than the binomial.
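The quality of the Poisson approximation can be illustrated numerically. The question's own n and p are not reproduced above, so the values n = 200 and p = 0.02 below are illustrative assumptions only:

```python
import math

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

n, p = 200, 0.02        # illustrative low p, large n
lam = n * p             # matching Poisson mean = 4
for k in range(5):
    print(k, round(binom_pmf(k, n, p), 4), round(poisson_pmf(k, lam), 4))
```

For each k the two probabilities agree to within a few thousandths, which is why the simpler Poisson formula is preferred.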

Case Study 9.1: Aircraft Accidents


1
(a) The situation looks to be Poisson. The sample is the 400-day period. The events
are the incidents. It is possible to count how many incidents have taken place,
but not how many incidents might have taken place but did not. There is thus a
good a priori case for supposing that the Poisson will apply.
To test this, the observed results from 100 aircraft must be compared with what
would be expected theoretically under the assumption of a Poisson distribution.
In order to use the Poisson Table (see Appendix 1, Table A1.3) the parameter of
the distribution (the average number of incidents per 400 days) has to be calculated:

Mean = (0 × 23 + 1 × 33 + 2 × 23 + 3 × 11 + 4 × 5 + 5 × 3 + 6 × 1 + 7 × 1)/100
= 160/100 = 1.6

On average, therefore, an aircraft was involved in 1.6 incidents over the 400
days. From the column headed 1.6 in the table, the theoretical frequencies of
incidents can be found. For example, the probability of 0 incidents is 0.2019.
Out of 100 aircraft, one would thus expect 20 to be involved in 0 incidents. The
comparison between theoretical and observed is:

No. incidents (x) 0 1 2 3 4 5 6 7
Observed 23 33 23 11 5 3 1 1
Theoretical Poisson 20 32 26 14 6 2 0 0

There is a good correspondence between theoretical and observed, the discrepancies
being small. The two criteria for judging whether a Poisson distribution
fits the data are both satisfied. First, there is a good a priori case for supposing
the situation is Poisson. Second, the observed data are very close to what is an-
ticipated theoretically.
(b) Assuming the distribution of incidents is Poisson, the probabilities of five or
more incidents are, from the table:

p(5) = 0.0176
p(6) = 0.0047
p(7) = 0.0011
p(8) = 0.0002
p(9 or more) = 0

giving:

p(5 or more) = 0.0236

A proportion of 0.0236 (or 2.36 per cent) of the 800 would therefore be ex-
pected to be involved in five or more incidents over a 400-day period (i.e. 19
aircraft).
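Parts (a) and (b) can be checked with a short calculation from the observed frequencies (a sketch; the exact Poisson formula replaces the table, so the tail probability differs in the fourth decimal place):

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

observed = {0: 23, 1: 33, 2: 23, 3: 11, 4: 5, 5: 3, 6: 1, 7: 1}
n_aircraft = sum(observed.values())                            # 100
mean = sum(k * f for k, f in observed.items()) / n_aircraft    # 1.6

# Theoretical frequencies out of 100 aircraft, for comparison with observed.
expected = {k: round(n_aircraft * poisson_pmf(k, mean)) for k in observed}
p_five_or_more = 1 - sum(poisson_pmf(k, mean) for k in range(5))

print(mean)
print(expected)
print(round(p_five_or_more, 4))
```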
(c) Reservations about the conclusion are principally to do with whether the
incidents are random. It may be that for this aircraft certain routes/flights/ pi-
lots/times of the year are more prone to accident. If so, the average incident rate
differs from one part of the population to another and a uniform value (1.6)
covering the whole population should not be used. In this case it may be neces-
sary to treat each section of the population differently or to move to a more
sophisticated distribution, possibly the negative binomial.
Another problem may be that the sample is not representative of the population.
Not all routes may be included; not all airlines may be included; there may be a
learning effect, with perhaps fewer errors later in the 400-day period than earlier;
or perhaps the pilots are doubly careful when they first fly a new aircraft. The
data should be investigated to search for gaps and biases such as these. If the
insurance is to cover all aircraft/routes/airlines then the sample data should be
representative of this population.
Lastly, the data are all about incidents; the insurance companies become involved
financially only when an accident takes place. The former may not be a good
surrogate for the latter. If possible, past records should be used to establish a
relationship between the two and to test just how good a basis the analysis of
incidents is for deciding on accident insurance.
(d) If a check of the data reveals missing routes or airlines then the gaps should be
filled if possible. The data should be split into subsections and the analysis re-
peated to find if there has been a learning effect or if there are different patterns
in different parts of the data. There could be differences on account of seasonali-
ty, routes, airlines, type of flight (charter or scheduled). If differences are
observed then the insurance premium would be weighted accordingly.
Data from the introduction of other makes of aircraft could serve to indicate
learning effects and also the future pattern of incidents.
The large amounts of money at stake in a situation like this would make the extra
statistical work suggested here worthwhile.

Marking Scheme (out of 20) Marks


(a) General method: comparing observed with theoretical 2
A priori justification of Poisson 2
Calculation of parameter 2
Correct use of table 2
(b) Calculation 2
(c) Noticing larger tail than anticipated 2
Suggesting reasons for possible non-randomness 2
Difference between incidents and accidents 2
(d) Possible new data 2
Further analyses 2
Total 20

Case Study 9.2: Police Vehicles


1
(a) Before conducting any significance test, the mean and standard deviation of the
sample have to be calculated.

Car       1  2  3  4  5  6  7  8  9  10 11 12 13 14
mpg (x)   21 24 22 24 29 18 21 26 25 19 22 20 28 23
x − 23    −2 1  −1 1  6  −5 −2 3  2  −4 −1 −3 5  0
(x − 23)² 4  1  1  1  36 25 4  9  4  16 1  9  25 0


The five stages of a significance test can now be followed:


(i) The hypothesis is that the tyres have made no difference and that the petrol
consumptions come from a population with a mean of 22.4.
(ii) The evidence is the sample of 14 cars’ petrol consumptions with a mean of
23 and a standard deviation of:

√(136/13) = 3.23
(iii) The significance level is 5 per cent.


(iv) The sample size is less than 30, the standard deviation has been estimated
from the sample and the underlying individual distribution is normal. All
the conditions attached to the t-distribution are present. The observed t
value is:

t = (23 − 22.4)/(3.23/√14) = 0.6/0.86 = 0.69

(v) The test is one-tailed, assuming the tyres could bring about only an
improvement, not a deterioration, in petrol consumption; the degrees of
freedom are 13. The t value corresponding to the 5 per cent level is thus
taken from the row for 13 degrees of freedom and the column for 0.05. The
value is 1.771. The observed t value is less than this and therefore the
hypothesis is accepted. The tyres do not make a significant difference.
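The mean, standard deviation and observed t value can be checked as follows (a sketch; the t-table lookup giving the 1.771 critical value is kept as a constant):

```python
import math

mpg = [21, 24, 22, 24, 29, 18, 21, 26, 25, 19, 22, 20, 28, 23]
mu0 = 22.4                       # mean consumption before the new tyres

n = len(mpg)
mean = sum(mpg) / n                                          # 23
sd = math.sqrt(sum((x - mean) ** 2 for x in mpg) / (n - 1))  # ~3.23
t = (mean - mu0) / (sd / math.sqrt(n))                       # ~0.69

print(round(mean, 1), round(sd, 2), round(t, 2))
# t(13) at the one-tailed 5% level is 1.771 (from tables), so the
# hypothesis of no difference is accepted.
```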
(b) On the other hand, the alternative hypothesis is that the tyres result in an
improvement in petrol consumption of 1.5 mpg. Under this hypothesis the sam-
ple would have come from a distribution of mean 23.9 (= 22.4 + 1.5). The
observed t value is:

t = (23 − 23.9)/0.86 = −1.04

Ignoring the negative sign, this observed value (under the alternative hypothe-
sis) is lower than the critical value of 1.771, just as was the previous observed
value (under the null hypothesis). The sample evidence would therefore be insuf-
ficient to reject either the null or the alternative hypothesis. Clearly the sample
size is too small to discriminate properly between the two hypotheses.
If the probabilities of type 1 and type 2 errors are to be equal then the critical
value should be equidistant from both hypotheses (i.e. halfway between
them at 23.15). A sample size larger than 14 is evidently required to do this. As-
suming that the sample size needed is greater than 30 (and therefore the t-
distribution can be approximated to the normal):

1.645 × 3.23/√n = 0.75
√n = 7.08
n = 50 (approximately)

A sample size of 50 is needed if the test is to discriminate equally between the no
change hypothesis and the hypothesis based on the manufacturer’s claim. To
achieve such a sample size would presumably involve using cars of a wider age
span than six to nine months.
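The arithmetic for the required sample size can be sketched as:

```python
# Balanced-error sample size: the critical value at 23.15 lies 0.75 mpg
# (half the claimed 1.5 mpg improvement) and 1.645 standard errors from
# each hypothesis mean, so 1.645 * sd / sqrt(n) = 0.75.
sd, half_gap, z = 3.23, 0.75, 1.645
n = (z * sd / half_gap) ** 2
print(round(n))  # ~50
```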
(c) Many factors affect a car’s petrol consumption. A well-designed significance test
should exclude or minimise the effect of all factors except the one of interest, in
this case, the tyres. Major influences on consumption are the type of car and its
age. By comparing like with like in respect of these factors, their effect is elimi-
nated. Other factors cannot be controlled in this way. Very little can be done
about the type of usage, total mileage, the style of the drivers and the quality of
maintenance. It is hoped that these factors will balance out over the sample of
cars and the time period.
(d) The principal argument in favour of the officer’s suggestion is that it may
eliminate the effect of different maintenance methods from the test so that ob-
served differences are accounted for by the tyres, not the maintenance methods.
The arguments against his proposal are stronger. First, the maintenance methods
are unlikely to be identical in the two forces. The procedures laid down may be
the same but the interpretation of them by different sets of mechanics of differ-
ent levels of skill will almost certainly mean that there are still differences.
Second, his proposals create some new difficulties not present in the original
significance test. Some factors affecting petrol consumption that were eliminated
by the first test are now reintroduced. The geography of the territories served by
the forces will differ; the drivers of the cars will be different; the roles of the cars
may be different. All these factors will cause different fuel consumptions in the
two samples of cars, which may well disguise or overwhelm the influence of the
tyres.
While the officer’s test could certainly be carried out, the new variables his test
introduces would put a question mark over any conclusions drawn. On the
whole, the officer’s suggestion should be rejected but without blunting his en-
thusiasm for using analytical methods to help in decision taking.

Marking Scheme (out of 20) Marks


(a) Calculation of mean and standard deviation 2
Standard error 1
Observed value 2
Table value 2
Drawing the right conclusion 1
(b) Method of finding sample size 3
Correct calculations 3
(c) Reasons for eliminating other influences 3
(d) For and against officer’s proposal 3
Total 20


Module 10

Review Questions
10.1 The correct answer is A, C. Analysis of variance tests the hypothesis that the
samples come from populations with equal means or that they come from a
common population. In the former case, B is an assumption. In both cases, D is an
assumption. B and D are therefore not hypotheses but assumptions underlying the
testing of the hypotheses by analysis of variance.
10.2 The correct answer is A.

10.3 The correct answer is D.

10.4 The correct answer is D.

10.5 The correct answer is True. The hypothesis tested by the analysis of variance is that
the treatments come from populations with equal means (i.e. the treatments have no
effect). But since the observed value exceeds the critical value, the hypothesis
must be rejected. The treatments do have a significant effect.
10.6 The correct answer is A, D. It is hypothesised that B is an attribute of the
populations from which the samples are taken; C is an assumed attribute of the
populations. Since the samples are selected at random, it would be virtually impossi-
ble for the samples to have these attributes.
10.7 The correct answer is B. The grand mean is 4. Total SS is calculated by finding the
deviation of each observation from the grand mean, then squaring and summing.
Taking each row in turn:

10.8 The correct answer is A.

where r = number of rows (observations or blocks).


10.9 The correct answer is D.

where c = number of columns (treatments).

10.10 The correct answer is False. A balanced design is one in which all the treatment
groups are the same size (i.e. they all have an equal number of observations).

Case Study 10.1: Washing Powder


1 A one-way analysis of variance will test the hypothesis that the means of the
irritation index for each brand come from populations with the same mean. If the
hypothesis is accepted, the conclusion will be that the powders do not cause
different levels of skin irritation; if rejected, the conclusion will be that the powders
do cause different levels of irritation.
The systematic way of carrying out the test is to base it on an ANOVA table, as
shown in Table A4.7. The details of the calculations are shown below.

Table A4.7 ANOVA for washing powders

Variation                 Degrees of   Sums of       Mean square             F
                          freedom      squares
Explained by treatments   5            SST = 490.0   MST = 490/5 = 98        MST/MSE = 98/10.9 = 8.99
(between columns)
Error or unexplained      54           SSE = 588.4   MSE = 588.4/54 = 10.90
(within columns)
Total                     59           SS = 1078.4

The first column of Table A4.7 describes the sources of variation. The second relates
to degrees of freedom, always given by k − 1 for treatments and n − k for error in a
one-way analysis of variance (here k = 6 brands and n = 60 observations, giving 5
and 54).
The third column requires the calculation of the sums of squares. SST deals with the
‘between’ sums of squares and is concerned with the group means and their
deviations from the grand mean.

SSE deals with (within) sums of squares and is concerned with the individual
observations and the deviations between them and their group means.


Brand 1

Brand 2

Brand 3

Brand 4

Brand 5

Brand 6

Total SS is of course the total sums of squares. It is concerned with all observations
and their deviations from the grand mean. Going through the observations, each
row in turn:

It was not strictly necessary to calculate all three sums of squares since:

Total SS = SST + SSE

Calculating all three provided a check. Table A4.7 shows that the equality is
satisfied: 1078.4 = 490.0 + 588.4.
Next, the mean squares are calculated by dividing the sums of squares by the
associated degrees of freedom (column 4 in Table A4.7). The ratio of the mean
squares is the observed value of the F variable and is calculated in the final column.
To finish the test, the critical value for (5, 54) degrees of freedom at the 5 per cent
level is found from the table of the F-distribution. In this case, the value is 2.38. The
observed value, 8.99, greatly exceeds 2.38. The hypothesis is rejected at the 5 per
cent significance level. There is a significant difference in the levels of irritability
caused.
At the 1 per cent level the hypothesis is also rejected. The critical value for (5, 54)
degrees of freedom is 3.37 at the 1 per cent level. The observed value exceeds this
also. The evidence that the powders do not cause the same level of skin irritation is
strong.
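The F arithmetic can be verified from the sums of squares (a sketch; the critical F values 2.38 and 3.37 come from tables, as in the text):

```python
sst, sse = 490.0, 588.4      # between- and within-group sums of squares
df_t, df_e = 5, 54           # k - 1 treatments df and n - k error df

mst = sst / df_t             # 98.0
mse = sse / df_e             # ~10.90
f = mst / mse                # ~8.99

print(round(mst, 1), round(mse, 2), round(f, 2))
# Observed F exceeds both critical values, F(5, 54) = 2.38 at 5% and
# 3.37 at 1%, so the hypothesis of equal means is rejected.
```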


Qualifications
The reservations that should attach to the results of the test are to do with both
statistics and common sense.
(a) An F test assumes that the populations from which the samples are drawn are
normally distributed. In this case, it must be assumed that the distribution of
observations for each brand is normal. This may not be true, especially since the
sample size (10) is too small for the central limit theorem to have any real effect.
(b) An F test also assumes that the populations from which the samples are drawn
have equal variances. Again, this may not be true although statistical research has
indicated that variances would have to be very different before the results of the
test were distorted.
(c) Since skin irritation is very much a subjective problem and one that is hard to
quantify, there must also be doubts about the validity of the data (i.e. does the
index measure accurately what it is supposed to measure?). The tester should
look carefully at the ways in which the index has been validated by the research-
ers.
(d) The data must also come into question for more fundamental reasons. The
design of the experiment gives rise to the following doubts:
(i) How were the households chosen? Are they a representative group?
(ii) Do the households do their washing any differently because they are being
monitored by the tester?
(iii) How representative are the batches of washing?
(iv) How ‘standard’ are the batches of washing?
(v) Are there factors that make some people more prone to skin irritation and
that should therefore be built into the test?
(vi) Are the data independent (e.g. is there any cumulative effect in the testing)?
Does any brand suffer a higher index because of the effect of brands tested
earlier?

Marking Scheme (out of 20) Marks


Calculating degrees of freedom 2
Calculating total SS, SST, SSE (2 marks each) 6
Calculating mean squares 1
Calculating observed F 1
Finding critical F 2
Use of ANOVA table 2
Qualifications
– Statistical assumptions about normality and variance 2
– Quality of data 4
Total 20


Case Study 10.2: Hypermarkets


1 To test whether there is any difference between responses at different locations, a
two-way analysis of variance is needed. The treatments are the store locations and
the blocks are the days of the week.
Table A4.8 is Table 10.14 with row and column means and the grand mean added.

Table A4.8 Positive responses to ‘courteous service’ attribute


Store
B C D G L M N S Average
Monday 71 73 66 69 58 60 70 61 66
Tuesday 71 78 81 89 78 85 90 84 82
Wednesday 73 78 76 86 74 80 81 76 78
Thursday 73 75 73 80 75 71 73 72 74
Friday 62 66 69 81 60 64 61 57 65

Average 70 74 73 81 69 72 75 70 Grand mean = 73

After calculating the means, the next step is to construct a two-way analysis of
variance (ANOVA) table as shown in Table A4.9.

Table A4.9 Two-way ANOVA table for hypermarket survey


Variation Degrees of freedom Sums of squares Mean square F
Explained by treatments (between columns) 7 SST = 520 MST = 520/7 = 74.3 MST/MSE = 74.3/19.7 = 3.77
Explained by blocks (between rows) 4 SSB = 1760 MSB = 1760/4 = 440 MSB/MSE = 440/19.7 = 22.3
Error or unexplained (within columns) 28 SSE = 552 MSE = 552/28 = 19.7
Total 39 SS = 2832

The block sum of squares is calculated from the block (row) means:
SSB = 8 × [(66 − 73)² + (82 − 73)² + (78 − 73)² + (74 − 73)² + (65 − 73)²] = 8 × 220 = 1760

The error sum of squares (SSE) is calculated by first determining the total sum of
squares (Total SS). The contribution of each day (row) to Total SS, i.e. the sum of
squared deviations of its eight observations from the grand mean of 73, is:

Monday 616
Tuesday 928
Wednesday 326
Thursday 62
Friday 900

Total SS = 2832, and so SSE = Total SS − SST − SSB = 2832 − 520 − 1760 = 552.

(a) Do the locations of the store have different effects on the responses? To test the
hypothesis that the location (column) means come from the same population,
the observed F value relating to treatments must be compared with a critical F
value. If the significance level is chosen to be 5 per cent then the appropriate
critical F value, relating to (7, 28) degrees of freedom, is found from the F table
to be 2.36. The observed F value is 3.77; therefore the hypothesis should be
rejected. At the 5 per cent significance level, location does appear to affect
responses.
(b) It is important to neutralise the effect of the days of the week in a test such as
this. Intuitively there is a likelihood that people’s attitudes will vary between the
beginning of the week and the end, when the weekend approaches. This factor
may affect customers and staff alike.
(c) If the effect of days of the week had not been neutralised, the appropriate test
would have been a one-way analysis of variance as shown in Table A4.10. Total
SS and SST are calculated just as in the two-way case, but SSE is obtained from
the relationship:
SSE = Total SS − SST = 2832 − 520 = 2312
The critical F value at the 5 per cent level and for (7, 32) degrees of freedom is
2.32. The observed F value, 1.03, is less than the critical. The hypothesis should
be accepted. Location does not appear to affect responses. When the effects of
days of the week are not allowed for, the result of the test is the opposite of
when they are allowed for.


Table A4.10 One-way ANOVA table for hypermarket survey


Variation Degrees of freedom Sums of squares Mean square F
Explained by treatments (between columns) 7 SST = 520 MST = 520/7 = 74.3 MST/MSE = 74.3/72.2 = 1.03
Error or unexplained (within columns) 32 SSE = 2312 MSE = 2312/32 = 72.2
Total 39 SS = 2832
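Both the two-way and the one-way ANOVA figures above can be reproduced with a short sketch in plain Python (no libraries; the data are those of Table A4.8, and the variable names are illustrative):

```python
# Sketch: reproducing the two-way and one-way ANOVA figures for Table A4.8.
data = {
    "Monday":    [71, 73, 66, 69, 58, 60, 70, 61],
    "Tuesday":   [71, 78, 81, 89, 78, 85, 90, 84],
    "Wednesday": [73, 78, 76, 86, 74, 80, 81, 76],
    "Thursday":  [73, 75, 73, 80, 75, 71, 73, 72],
    "Friday":    [62, 66, 69, 81, 60, 64, 61, 57],
}
rows = list(data.values())
r, c = len(rows), len(rows[0])            # 5 days (blocks), 8 stores (treatments)
grand = sum(sum(row) for row in rows) / (r * c)

col_means = [sum(row[j] for row in rows) / r for j in range(c)]
row_means = [sum(row) / c for row in rows]

total_ss = sum((x - grand) ** 2 for row in rows for x in row)
sst = r * sum((m - grand) ** 2 for m in col_means)   # treatments (stores)
ssb = c * sum((m - grand) ** 2 for m in row_means)   # blocks (days)
sse = total_ss - sst - ssb

# Two-way mean squares and F ratios
mst, msb = sst / (c - 1), ssb / (r - 1)
mse = sse / ((c - 1) * (r - 1))
f_treat, f_block = mst / mse, msb / mse               # 3.77 and 22.3

# One-way version: days ignored, so their variation joins the error term
sse_one = total_ss - sst
f_one = mst / (sse_one / (r * c - c))                 # 1.03
print(total_ss, sst, ssb, sse, round(f_treat, 2), round(f_block, 1), round(f_one, 2))
```

Running this confirms Total SS = 2832, SST = 520, SSB = 1760, SSE = 552 and the three F ratios quoted in the tables.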

(d) Referring back to Table A4.9, the effect of days of the week on responses can
also be tested. This time the observed F value is the ratio MSB/MSE, equal to
22.3. The critical F value at the 5 per cent level for (4, 28) degrees of freedom is
2.71. The observed F far exceeds this amount. Days of the week have a highly
significant effect on the responses.
(e) Should the analysis of variance be taken further by looking into the possibility of
an interaction effect? The usefulness of such an extension to the study depends
on how much it is thought days of the week and locations have independent
effects on responses. If it were thought that the ‘Monday’ and ‘Friday’ effects
were more marked in some parts of the country than others then an interaction
variable would permit the inclusion of this influence in the analysis of variance.
Intuitively it does not seem likely that people feel particularly worse about Mon-
days (better about Fridays) in some cities than in others. In any case, since the
effect of location has already been demonstrated to have a significant bearing on
responses, the inclusion of a significant interaction term could only make the
effect more marked (by decreasing the SSE while SST remains the same). Over-
all it does not seem worthwhile to extend the analysis to include an interaction
term.

Marking Scheme (out of 30) Marks


Two-way analysis of variance
– Calculating degrees of freedom 2
– Calculating Total SS, SST, SSB, SSE (2 marks each) 8
– Calculating mean squares 2
– Calculating observed F 1
– Finding critical F 1
– Use of ANOVA table 4
– Reasons for including blocks 2
One-way analysis of variance
– Calculating sums of squares 2
– ANOVA table 4
Discussion of interaction
– Likelihood of need for interaction variable 2

– Predicted effect on outcome of test 2


Total 30

Module 11

Review Questions
11.1 The correct answer is C, D. A is untrue because regression is specifying the
relationship between variables; correlation is measuring the strength of the relation-
ship. B is untrue because regression and correlation cannot be applied to unpaired
sets of data. C is true, by definition, and D is true, because if the data were plotted in
a scatter diagram, they would lie approximately along a straight line with a negative
slope.
11.2 The correct answer is B. A is untrue because residuals are measured vertically, not at
right angles to the line. B is true, by definition. C is untrue because actual points
below the line have negative residuals, and D is untrue because residuals are all zero
only when the points all lie exactly on the line (i.e. when there is perfect correlation).
11.3 The correct answer is B.

x y (x − x̄) (x − x̄)² (y − ȳ) (y − ȳ)² (x − x̄)(y − ȳ)
4 2 −4 16 −3 9 12
6 4 −2 4 −1 1 2
9 4 1 1 −1 1 −1
10 7 2 4 2 4 4
11 8 3 9 3 9 9
Totals 40 25 34 24 26 (x̄ = 8, ȳ = 5)
r = 26/√(34 × 24) = 0.91
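The calculation in the table can be checked with a short sketch (plain Python; variable names are illustrative):

```python
# Sketch: the correlation coefficient behind Question 11.3, computed from
# the five (x, y) pairs in the table above.
import math

xs = [4, 6, 9, 10, 11]
ys = [2, 4, 4, 7, 8]
x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)   # 8 and 5

sxx = sum((x - x_bar) ** 2 for x in xs)               # 34
syy = sum((y - y_bar) ** 2 for y in ys)               # 24
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))  # 26

r = sxy / math.sqrt(sxx * syy)
print(round(r, 2))   # 0.91
```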

11.4 The correct answer is C.


11.5 The correct answer is A.

11.6 The correct answer is A.

11.7 The correct answer is A. The evidence of everyday life is that husbands and wives
tend to be of about the same age, with only a few exceptions. One would therefore
expect strong positive correlation between the variables.
11.8 The correct answer is A. If data are truly represented by a straight line, the residuals
should exhibit no pattern. They should be random. Randomness implies that each
residual should not be linked with the previous (i.e. there should be no serial
correlation). Randomness also implies that the residuals should have constant
variance across the range of x values (i.e. heteroscedasticity should not be present).
11.9 The correct answer is False. The strong correlation indicates association, not
causality. In any case, it is more likely that if causal effects are present, they work in
the opposite direction (i.e. a longer life means a patient has more time in which to
visit his doctor).
11.10 The correct answer is C. The prediction of sales volume for advertising expenditure
of 5 is:

11.11 The correct answer is B. Unexplained variation = Sum of squared residuals = 900

11.12 The correct answer is A. The difference between a regression of y on x and one of x
on y is that x and y are interchanged in the regression and correlation formulae.
Since the correlation coefficient formula is unchanged if x and y are swapped round,
the correlation coefficients are the same in both cases. Since the slope and intercept
formulae are changed if x and y are swapped round, then these two quantities are
different in the two cases (unless by a fluke).
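This symmetry can be demonstrated with a short sketch (plain Python), reusing the data of Question 11.3 purely as an illustration:

```python
# Sketch: swapping x and y leaves r unchanged but alters slope and intercept
# (Question 11.12), shown on the data of Question 11.3.
xs = [4, 6, 9, 10, 11]
ys = [2, 4, 4, 7, 8]

def fit(u, v):
    """Least-squares slope and intercept for v regressed on u."""
    u_bar, v_bar = sum(u) / len(u), sum(v) / len(v)
    suv = sum((a - u_bar) * (b - v_bar) for a, b in zip(u, v))
    suu = sum((a - u_bar) ** 2 for a in u)
    slope = suv / suu
    return slope, v_bar - slope * u_bar

b_yx, a_yx = fit(xs, ys)   # y on x: slope 26/34
b_xy, a_xy = fit(ys, xs)   # x on y: slope 26/24 (different)
# r squared equals the product of the two slopes, whichever way round
r_squared = b_yx * b_xy
print(b_yx, b_xy, round(r_squared, 2))
```

The two slopes differ, but their product reproduces r² (here about 0.83, i.e. r = 0.91), which is the same whichever variable is treated as dependent.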


Case Study 11.1: Railway Booking Offices


1

[Figure: scatter diagram of the six (x, y) data points; y axis 0 to 20, x axis 0 to 8]
Figure A4.9 Railway booking transactions


(a) The scatter diagram is given in Figure A4.9. High x values tend to correspond to
high y values. The underlying relationship could be linear but there is a lot of
scatter.

x y (x − x̄) (x − x̄)² (y − ȳ) (y − ȳ)² (x − x̄)(y − ȳ)
3 11 −1 1 −3 9 3
1 7 −3 9 −7 49 21
3 12 −1 1 −2 4 2
4 17 0 0 3 9 0
6 19 2 4 5 25 10
7 18 3 9 4 16 12
Totals 24 84 24 112 48 (x̄ = 4, ȳ = 14)

The correlation coefficient, r = 48/√(24 × 112) = 0.93, is high, confirming the
visual evidence of the scatter diagram that the relationship is linear.
(b) Line (i)
The line goes through the points (1,7) and (6,19). Therefore, the line has slope =
(19 − 7)/(6 − 1) = 12/5 = 2.4 (i.e. the line is y = a + 2.4x). Since the line goes
through the point (1,7):
7 = a + 2.4, so a = 4.6
The line is y = 4.6 + 2.4x
Line (ii)
The line goes through the points (1,7) and (7,18). Therefore, the line has slope =
(18 − 7)/(7 − 1) = 11/6 = 1.8 (i.e. the line is y = a + 1.8x). Since the line goes
through (1,7):
7 = a + 1.8, so a = 5.2
The line is y = 5.2 + 1.8x
Line (iii)
The regression line is found from the regression formulae:
slope = 48/24 = 2 (the ratio of the sums in the table above)
intercept = ȳ − slope × x̄ = 14 − (2 × 4) = 6
The line is y = 6 + 2x

Points Line (i) Line (ii) Line (iii)
x y Fitted Residual Fitted Residual Fitted Residual
3 11 11.8 −0.8 10.6 0.4 12.0 −1.0
1 7 7.0 0 7.0 0 8.0 −1.0
3 12 11.8 0.2 10.6 1.4 12.0 0
4 17 14.2 2.8 12.4 4.6 14.0 3.0
6 19 19.0 0 16.0 3.0 18.0 1.0
7 18 21.4 −3.4 17.8 0.2 20.0 −2.0
MAD = 1.2 MAD = 1.6 MAD = 1.3
Variance = 4.0 Variance = 6.5 Variance = 3.2

The residuals are calculated as actual minus fitted values. For example, for line
(i) and the point (3,11), the residual is:
11 − 11.8 = −0.8
The MADs are calculated as the average of the absolute values of the residuals.
For example, for line (i):
(0.8 + 0 + 0.2 + 2.8 + 0 + 3.4)/6 = 1.2
The variances are calculated as the average of the squared residuals (but with a
divisor of 5, not 6, as in the formula for the variance). For example, for line (i):
(0.64 + 0 + 0.04 + 7.84 + 0 + 11.56)/5 = 4.0
The mean absolute deviation shows that line (i), connecting the extreme values,
has the smallest residual scatter. On the MAD criterion, line (i) is the best.


The variance shows that line (iii), the regression line, has the smallest residual
scatter. On the variance criterion (equivalent to least squares), line (iii) is the best.
This has to be the case since the regression line is the line that minimises the
sum of squared residuals.
Clearly different, but equally plausible criteria (minimising the MAD and mini-
mising the variance of the residuals) give different ‘best fit’ lines. Even when one
keeps to one criterion the margin between the ‘best’ line and the others is small
(in terms of the criterion). Yet the three lines (i), (ii) and (iii) differ markedly
from one another and would give distinctly different results if used to forecast.
The conclusion is that, while regression analysis is a very useful concept, it
should be used with caution. A regression line is best only in a particular way
and, even then, only by a small margin.
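The comparison of the three lines can be reproduced with a short sketch (plain Python; the line coefficients are the ones derived above):

```python
# Sketch: comparing the three candidate lines for the booking-office data on
# the MAD and variance criteria.
pts = [(3, 11), (1, 7), (3, 12), (4, 17), (6, 19), (7, 18)]
lines = {
    "i":   (4.6, 2.4),   # through the extreme points (1,7) and (6,19)
    "ii":  (5.2, 1.8),   # through (1,7) and (7,18)
    "iii": (6.0, 2.0),   # the least-squares regression line
}
results = {}
for name, (a, b) in lines.items():
    residuals = [y - (a + b * x) for x, y in pts]
    mad = sum(abs(e) for e in residuals) / len(residuals)
    # divisor of 5 (n - 1), matching the variance calculation in the answer
    variance = sum(e ** 2 for e in residuals) / (len(residuals) - 1)
    results[name] = (round(mad, 2), round(variance, 1))
print(results)
```

The output reproduces the table: line (i) wins on MAD (1.2) while the regression line (iii) wins on variance (3.2).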

Marking Scheme (out of 20) Marks


(a) Scatter diagram 2
Correlation coefficient 2
(b) Equations of lines (i) and (ii) 3
Equation of line (iii) 3
(c) Residuals of lines 2
MADs of lines 2
Variances of lines 2
(d) Conclusions as to which lines are best 2
Conclusion on viewing regression with caution 2
Total 20

Case Study 11.2: Department Store Chain


1
(a) The regression equation, recoverable from the fitted values in Table A4.11, is
approximately:
Sales = 35.35 + 0.178 × Income
(sales in £000s). For average disposable family income = £221, the point
forecast is £74 668 per week.
(b) The goodness of fit can be checked by considering the correlation coefficient
and the residuals. The correlation coefficient is 0.92. This is high, suggesting a
good fit.
The next step is to check the residuals for randomness. They must first be calcu-
lated using the regression equation (see Table A4.11).


Table A4.11 Department stores: regression results


Store Average sales per Average disposable Fitted Residual
number week (£000s) family income
(coded)

1 90 301 88.9 1.1
2 87 267 82.9 4.1
3 86 297 88.2 −2.2
4 84 227 75.7 8.3
5 82 273 83.9 −1.9
6 80 253 80.4 −0.4
7 78 203 71.5 6.5
8 75 263 82.2 −7.2
9 70 190 69.1 0.9
10 68 212 73.1 −5.1
11 64 157 63.2 0.8
12 61 141 60.4 0.6
13 58 119 56.5 1.5
14 52 133 59.0 −7.0
Mean 74 217

A visual inspection of the residuals does not suggest any particular pattern. First,
there is no tendency for the positives and negatives to be grouped together (e.g.
for the positive residuals to refer to the smaller stores and the negatives to the
larger, or vice versa). In other words, there is no obvious evidence of serial cor-
relation. Second, there is no tendency for the residuals to be of different sizes at
different parts of the range (e.g. for the residuals to be, in general, larger for
larger stores and smaller for smaller stores). In short, there is no evidence of
heteroscedasticity.
Visually, the residuals appear to be random. Taken with the high correlation
coefficient, this indicates that there is a linear relationship between sales and
family disposable income.
(c) The scatter of the residuals about the regression line is measured through the
residual standard error. If the residuals are normally distributed, 95 per cent of
them will lie within 2 standard errors of the line. For a point forecast (given by
the line) it may be anticipated, if the future is like the past, that the actual value
will also lie within 2 standard errors of the point forecast.
If residual error were the only source of error, 95 per cent confidence limits for
the forecast could be defined as, in the example given above:
£74 668 ± 2 × £4720
i.e. £65 228 to £84 108


However, there are other sources of error (see Module 12) and therefore the
above confidence interval must be regarded as the best accuracy that could be
achieved.
(d) The linear relationship between sales and family disposable income appears to
pass the statistical tests. Further, since it must be a reasonable supposition that
sales are affected to some degree by the economic wealth of the catchment area,
the model has common sense on its side.
On the other hand, there are many influences on a store’s sales besides family
income. These are not included in the forecasting method. Ideally, a method that
can include other variables would be preferable.
A second reservation is concerned with the quality of the data. While store sales
are probably fairly easy to measure and readily available, this is unlikely to be the
case with the disposable family income. If these data are not available, an expen-
sive survey would be required to make estimates. Even then, the data are not
likely to carry a high degree of accuracy.
Last, the catchment area will be difficult to define in many if not all cases, adding
further to the inaccuracy of the data.

Marking Scheme (out of 20) Marks


(a) Specifying equation 2
Making forecast 2
(b) Commenting on correlation coefficient 2
Calculating residuals 3
Discussing randomness 1
including
– serial correlation 1
– heteroscedasticity 1
(c) Knowing meaning of residual standard error 2
and its relationship to forecasting accuracy 2
(d) Non-statistical reservations 4
Total 20

Module 12

Review Questions
12.1 The correct answer is B, C. B and C give synonyms for a right-hand-side variable.
Another synonym is an independent variable, the opposite of A. D is incorrect,
there being no such thing as a residual variable.
12.2 The correct answer is False. The statement on simple regression is correct, but the
statement on multiple regression should be altered to ‘one variable is related to
several variables’.


12.3 The correct answer is A. The coefficients have standard errors because they are
calculated from a set of observations that is deemed to be a sample and therefore
the coefficients are estimates. Possible variations in the coefficients are calculated
via their standard errors, which are in turn estimated from variation in the residuals.
B is incorrect since, although there may be data errors, this is not what the standard
errors measure. The standard errors are used to calculate t values that are used in
multiple regression, but this is not why they arise. Therefore, C and D are incorrect.
12.4 The correct answer is C. The t values are found by dividing the coefficient estimate
by the standard error. Thus:
Variable 1: 5.0/1.0 = 5.0
Variable 2: 0.3/0.2 = 1.5
Variable 3: 22/4 = 5.5
12.5 The correct answer is D. The degrees of freedom = No. observations − No.
variables − 1. Thus, number of observations = 32 + 3 + 1 = 36.
12.6 The correct answer is C. The elimination of variables is based on a significance test
for each variable. The t value for each variable is compared with the critical value
for the relevant degrees of freedom. In this case, the number of observations
exceeds 30; therefore, the normal distribution applies and the critical value is 1.96.
12.7 The correct answer is True. The formula for R-bar-squared has been adjusted to
take degrees of freedom into account. Since each variable reduces the degrees of
freedom by 1, the number of variables included is allowed for.
12.8 The correct answer is A. Sums of squares (regression) have as many degrees of
freedom as there are right-hand-side variables (i.e. 3).
12.9 The correct answer is C.

12.10 The correct answer is D. The degrees of freedom for sums of squares (residuals)
= No. observations − No. variables − 1.
12.11 The correct answer is C. The critical F ratio is for (3,34) degrees of freedom and for
the 5 per cent level. From tables this is found to be 2.88. Since observed F exceeds
critical F, there is a significant linear relationship.
12.12 The correct answer is D. The independent variables are the right-hand-side
variables. Only x (and x²) appear on the right-hand side, but in curvilinear regression
squared terms are treated as additional variables. Therefore, the independent
variables are x and x².
12.13 The correct answer is False. A transformation is used not to approximate a curved
relationship to a linear one but to put the relationship in a different form so that the
technique of linear regression can be applied to it.


12.14 The correct answer is B. To carry out a regression analysis on the exponential
function y = ae^(bx), the equation is first transformed by taking logarithms (to the base
e) of either side to obtain eventually:
ln y = ln a + bx
This is a linear equation between ln y and x. Hence, a linear regression can be
carried out on the variables ln y and x.
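The log transformation of 12.14 can be sketched in plain Python. The data points below are made up for illustration (they lie exactly on an exponential curve, so the regression recovers the parameters exactly):

```python
# Sketch: fitting y = a*e^(b*x) by regressing ln y on x (Question 12.14).
import math

a_true, b_true = 2.0, 0.5
xs = [0, 1, 2, 3, 4]
ys = [a_true * math.exp(b_true * x) for x in xs]   # illustrative data

# Ordinary least squares of ln y on x
ly = [math.log(y) for y in ys]
n = len(xs)
x_bar, ly_bar = sum(xs) / n, sum(ly) / n
b = sum((x - x_bar) * (l - ly_bar) for x, l in zip(xs, ly)) / \
    sum((x - x_bar) ** 2 for x in xs)
ln_a = ly_bar - b * x_bar
a = math.exp(ln_a)          # back-transform the intercept: a = e^(ln a)
print(round(a, 3), round(b, 3))
```

The slope of the transformed regression estimates b directly; the intercept estimates ln a and must be back-transformed with the exponential function.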
12.15 The correct answer is E. In its linear form the equation is:

The coefficient of is thus and the constant is . Therefore:

Case Study 12.1: CD Marketing


1
(a) The equation is:
Revenue = 138 + (10 × News advertising) − (5 × TV advertising)
(b) The values of the residuals are found by first calculating the fitted values from
the regression equation. The fitted values are then subtracted from the actual.
(See Table A4.12.) The residuals are not especially unusual. A visual inspection
suggests that they are random, although it is of course difficult to detect patterns
from so few observations.

Table A4.12 CD advertising: regression results


Week Gross revenue (£000) News advertising (£000) TV advertising (£000) Fitted Residuals
1 180 5 1 183 (= 138 + (10 × 5) − (5 × 1)) −3 (180 − 183)
2 165 3 2 158 (= 138 + (10 × 3) − (5 × 2)) 7 (165 − 158)
3 150 3 3 153 (= 138 + (10 × 3) − (5 × 3)) −3 (150 − 153)
4 150 3 3 153 (= 138 + (10 × 3) − (5 × 3)) −3 (150 − 153)
5 185 5 2 178 (= 138 + (10 × 5) − (5 × 2)) 7 (185 − 178)
6 170 4 1 173 (= 138 + (10 × 4) − (5 × 1)) −3 (170 − 173)
7 190 6 1 193 (= 138 + (10 × 6) − (5 × 1)) −3 (190 − 193)
8 200 6 0 198 (= 138 + (10 × 6) − (5 × 0)) 2 (200 − 198)

(c) The correlation coefficient is high at 0.92. But it would be expected to be high
where there are so few observations and two right-hand-side variables. The F test
would be a more precise check. An ANOVA table should be drawn up as in
Table A4.13.
The observed F value is 37.3. The critical F value for (2,5) degrees of freedom at
the 5 per cent significance level is, from the table, 5.79. Since observed F
exceeds the critical F, it can be concluded that there is a significant linear
relationship.

Table A4.13 CD advertising: ANOVA


Variation Degrees of freedom Sums of squares Mean square F
Explained by regression 2 SSR = 2191 MSR = 2191/2 = 1095 MSR/MSE = 1095/29.4 = 37.3
Error or unexplained (residuals) 5 SSE = 147 MSE = 147/5 = 29.4
Total 7 SS = 2338

(d) Two additional pieces of information would be useful. The correlation matrix
would help to check on the possible collinearity of the two variables. Calcula-
tions of SE(Pred) would help to determine whether the predictions produced by
the model were of sufficient accuracy to use in decision making.
(e) The model could be used to forecast revenue, provided that conditions do not
change. In particular, this means that the ways in which decisions are taken re-
main the same.
A seeming paradox of the model is the negative coefficient for television adver-
tising expenditure. Does this mean that television advertising causes sales to be
reduced? The answer is almost certainly no. The reason is this: television adver-
tising is only used when sales are disappointing. Consequently, high television
advertising expenditure is always associated with low revenue (but not as low as
it might have been). The causality works in an unexpected way: from sales to
advertising and not the other way around.
Provided decisions about when to use the two types of advertising conform to
the past, the model could be used for predictions. If, however, it was decided to
experiment in advertising policy and make expenditures in different circum-
stances to those that have applied in the past, the model could not be expected
to predict gross revenue.
(f) The prime improvement would be extra data. Eight observations is an unsatis-
factory base for a regression analysis. This is not a statistical point. It is simply
that common sense suggests that too little information will be contained in those
eight observations for the decision at hand, whatever it is.

Marking Scheme (out of 20) Marks


(a) Specifying regression equation 2
(b) Calculating residuals 3
Commenting on their randomness 1
(c) F test with ANOVA table 6


(d) Additional information


– correlation matrix 2
– SE(Pred) 2
(e) Use for prediction, noticing negative coefficient 2
(f) Improvements to model 2
Total 20

Case Study 12.2: Scrap Metal Processing I


1
(a)
(i) The scatter diagram is shown in Figure A4.10. The relationship between
unit costs and capacity is curved. A linear regression analysis will not be the
best model for these data. However, the R-bar-squared is high at 0.73 and,
furthermore, the F test below will show a significant relationship. This
illustrates that a regression equation, although significant, may not
necessarily be adequate.

[Figure: scatter diagram of unit costs (£/tonne), 40 to 90, against capacity (tonnes/week), 100 to 400]

Figure A4.10 Scrap metal plants: scatter diagram


(ii) The runs test is as follows. The 12 residuals comprise four positive and
eight negative. There are five runs. In Appendix 1, Table A1.7 shows that
the lower critical value in these circumstances is 3. The runs test therefore
contradicts (but only just) the clear visual evidence of the scatter diagram
that the residuals are not random.
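The run-counting step of the test can be sketched in plain Python. The actual residual series is not listed in this answer, so the sample sequence below is only an illustration consistent with the reported counts (four positives, eight negatives, five runs):

```python
# Sketch: counting runs of signs in a residual series, as used in the runs
# test. The sample sequence is illustrative, not the case-study data.
def count_runs(signs):
    """Number of maximal blocks of consecutive identical signs."""
    runs = 0
    previous = None
    for s in signs:
        if s != previous:
            runs += 1
            previous = s
    return runs

sample = ["+", "+", "-", "-", "-", "+", "-", "-", "-", "-", "-", "+"]
print(count_runs(sample),                       # 5 runs
      sample.count("+"), sample.count("-"))     # 4 positive, 8 negative
```

The observed run count is then compared with the critical values in the runs-test table (here a lower critical value of 3), exactly as in the text.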
(iii) The computer printout has shown the observed F value to be 31.0. The
critical F value has (1,10) degrees of freedom, there being one right-hand-
side variable and 12 observations. From Table A1.6 in Appendix 1, the crit-
ical F value at the 5 per cent significance level is 4.96. Observed F exceeds
critical F; thus the relationship is significant.
(b)
(i) The scatter diagram shows the clear curve in the data, which should be
incorporated into the regression model. There are several possible ways of
doing this, including the taking of logarithms of one or both variables.


However, economic theory suggests a better alternative. The law of econo-
mies of scale would imply that unit costs are related to the reciprocal of
capacity. Since this transformation has sound reasoning behind it, this
should be tried first. Only if it fails will there be the need to resort to trial
and error searches for transformations.
(ii) It could well be the case that there has been a learning curve effect with
respect to the plants. It is plausible to suppose that the company has been
able to improve both design and operations at the later plants in the light of
experiences at the earlier ones. If this is the case, it can be tested by intro-
ducing a second right-hand-side variable: the age of the plant. This would
be done by using information about the dates of construction of the plants.
In 2018 a plant built in 2010 has age 8, one built in 2014 has age 4, and so
on.

Marking Scheme (out of 20) Marks


(a) (i) Scatter diagram – seeing curvature in data 2
(ii) Runs test 4
(iii) F test 4
(b) (i) Handling curvature through reciprocal – quoting 5
reasons for using this transformation (3 marks if
logarithms or trial and error suggested)
(ii) Suggesting addition of age variable and giving reasons 5
Total 20

Case Study 12.3: Scrap Metal Processing II


1
(a) All the diagnostic checks for a good regression model indicate that the third
model (relating unit costs with 1/Capacity and Age) is the best. The reasons can
be considered within the structure of the regression analysis steps given at the
end of the module. The steps concerned are as follows:
(i) The model satisfies the common-sense test. There are sound reasons of
prior knowledge for the inclusion of both variables and for the transfor-
mation used.
(ii) The closeness of fit is the best. It has the highest R-bar-squared and the
highest F value. Note, however, that the F values are significant in all three
cases.
(iii) The residuals appear to be random. By inspection, the residuals are:
(the sequence of residual signs: five positive and seven negative)
There seems to be no pattern in them. In this case, the runs test confirms the
visual evidence. The series above has nine runs. For five positives and seven
negatives, the tables in Appendix 1 show that the lower and upper critical
values are 3 and 11. The observed number of runs is within the ‘random’
range. The residuals also appear to be random for the second model (relating

unit costs to 1/Capacity alone). They do not appear to be random in the first
model as described in Case Study 12.2.
(iv) The third model is the only one of the three that is a multiple regression
model. Both of its right-hand-side variables have been included correctly.
Their t values are 23.8 (for 1/Capacity) and 7.3 (for Age), well in excess of
the critical value at the 5 per cent level.
(v) Collinearity is largely absent from the third model. Using the formula given
in Module 11, the correlation coefficient between 1/Capacity and Age is
0.44. Squaring this to obtain R-squared, the answer is 0.19. This is a low
value (and an F test shows the relationship is not significant).
(vi) The third model has the lowest SE(Pred), at 1.7, compared with 4.0 for the
second model. It is more than twice as accurate as the next best model.
(b)
(i) Although the value of age in making a prediction for a 2018 plant is zero,
age nevertheless has had an effect on the prediction. Age was allowed for in
constructing the regression model. All coefficients were affected by the
presence of age in the regression. One could say that the regression has
separated the effect of capacity from that of age. The (pure) effect of capac-
ity can now be used in predicting for a modern plant.
(ii) The 95 per cent forecast intervals must be based on the t-distribution since
the number of observations is less than 30. For 9 degrees of freedom
(12 − 2 − 1), t is 2.26. The intervals are:
(iii) SE(Pred) takes into account a number of sources of error. One of these is
in the measurement of the variable coefficients. Any prediction involves
multiplying these coefficients by the values of the right-hand-side variables
on which the predictions are based. Therefore, the amount of the error will
vary as these prediction values vary. SE(Pred) will thus be different for dif-
ferent predictions.
(iv) R-squared measures variation explained; SE(Pred) deals with unexplained
variation plus other errors. Although, therefore, the two are linked, the
relationship is not a simple or exact one. An increase in R-squared from
0.93 to 0.99 in one way appears a small increase. From another point of
view it reflects a great reduction in the unexplained variation, which is
reflected in the substantial improvement in prediction accuracy.

Marking Scheme (out of 20) Marks


(a) Dimensions along which to judge the best model
(i) Sensibleness 2
(ii) Closeness of fit 2
(iii) Random residuals 2
(iv) t-test on variable coefficients 2
(v) Collinearity 2

(vi) Accuracy of prediction 2
(b) (i) Influence of age 2
(ii) Forecast intervals 2
(iii) Reason for variation in SE(Pred) 2
(iv) Relationship between R-squared and SE(Pred) 2
Total 20

Module 13

Review Questions
13.1 The correct answer is False. The techniques are classified into qualitative, causal
modelling and time series; the applications are classified into short-, medium- and
long-term.
13.2 The correct answer is B, D. A is false because time series methods make predictions
from the historical record of the forecast variable only, and do not involve other
variables. B is true because a short time horizon does not give time for conditions to
change and disrupt the structure of the model. C is false since time series methods
work by projecting past patterns into the future and therefore are usually unable to
predict turning points. D is true because some time series methods are able to
provide cheap, automatic forecasts.
13.3 The correct answer is A, C. A is the definition of causal modelling. B is false since
there is no reason why causal modelling cannot be applied to time series as well as
cross-sectional data. C is true because causal modelling tries to identify all the
underlying causes of a variable’s movements and can therefore potentially predict
turning points. D is false since causal modelling can be used for short-term fore-
casts, but its expense often rules it out.
13.4 The correct answer is False. Causal modelling is the approach of relating one
variable to others; least squares regression analysis is a technique for defining the
relationship. There are other ways of establishing the relationship besides least
squares regression analysis.
13.5 The correct answer is True. Qualitative forecasting does not work statistically from a
long data series, as the quantitative techniques tend to. However, in forming and
collecting judgements, numerical data may be used. For example, a judgement may
be expressed in the form of, say, a future exchange rate between the US dollar and
the euro.
13.6 The correct answer is A, B, C, D. The situations A to D are the usual occasions
when qualitative forecasting is used.
13.7 The correct answer is A. A is correct, although ‘expert’ needs careful definition. It
would be better to say that the participants were people with some involvement in
or connection with the forecast. B is not true since the participants are not allowed


to communicate with one another at all. C is not true because the chairman passes
on a summary of the forecasts, not the individual forecasts. D is not true. The
chairman should bring the process to a stop as soon as there is no further move-
ment in the forecasts, even though a consensus has not been reached.
13.8 The correct answer is True. This is a definition of scenario writing. Each view of the
future is a scenario.
13.9 The correct answer is C, D. A and B are not true since the technique of the cross-
impact matrix does not apply to forecasts or probabilities of particular variables,
whether sales or not. C is true since the technique is based on the full range of
future events or developments and they therefore need to be fully listed. D is true,
being a description of what the technique does.
13.10 The correct answer is False. The essence of an analogy variable is that it should
represent the broad pattern of development expected for the forecast variable. It
does not have to be exactly the same at each point. They could, for example, differ
by a multiplicative factor of ten.
13.11 The correct answer is A. Catastrophe theory applies to ‘jumps’ in the behaviour of a
variable rather than smooth changes, however steep or unfortunate.
13.12 The correct answer is C. C gives the formula for a partial relevance number.

Case Study 13.1: Automobile Design


1 Recall the way in which relevance trees work. The technique starts with a broad
objective, breaks this down into sub-objectives and then further breaks down the
sub-objectives through perhaps several different levels finally to come to specific
technological developments. The elements of the tree are then given ‘relevance
weights’, from which it is possible to calculate the overall relevance of the techno-
logical developments that are at the lowest level of the tree. The outcome of the
technique is a list of those developments that are most important or relevant to the
achievement of the higher-level objective and sub-objectives.
Seven steps in the application of relevance trees were described in the text. For the
design of an automobile, the steps might be as shown below. Bear in mind that here
it is the structure of the answer that matters, not the detail of the answers. In a
practical application of the technique, the details would be important but substantial
assistance from several key people in the automobile industry would be needed to
get them right. Even then there would be considerable disagreement. In other
words, the technique is one for which there is no single right answer.
(a) The relevance tree is given in Figure A4.11.


Objective: Design successful automobile
Level 1: Accommodation, Control, Information (on performance)
Level 2: Passengers, Baggage, Direction, Speed, Communication, Visibility
Level 3: Comfort, Protection, Instruments
Levels 4 and 5: further breakdown into specific technological developments (not shown)

Figure A4.11 Relevance tree for automobile design


(b) In this case the criteria might be:

A Performance
B Passenger comfort
C Safety
D Running costs
E Capital costs
Weight the importance of each criterion relative to the others. This is done by
asking which criteria are most relevant to the basic objective of designing a suc-
cessful automobile. The weights might be assigned as follows:
Weight
A Performance 0.30
B Passenger comfort 0.20
C Safety 0.10
D Running costs 0.15
E Capital costs 0.25
Total 1.00

(c) Weight the sub-objectives at each level (the elements of the tree) according to
their importance in meeting each criterion. In this case, the result might be as in
Table A4.14.

Table A4.14 Element weights


Criteria Perfor- Com- Safety Run- Capital
mance fort ning costs
costs
Criterion weight 0.30 0.20 0.10 0.15 0.25
Elements at level 1 Element weights
Accommodation 0.10 0.75 0.60 0.05 0.15
Control 0.65 0.15 0.30 0.80 0.75
Information 0.25 0.10 0.10 0.15 0.10

1.00 1.00 1.00 1.00 1.00
Elements at level 2 Element weights
Passengers 0.05 0.70 0.55 0.05 0.10
Baggage 0.05 0.05 0.05 0.00 0.05
Direction 0.05 0.00 0.10 0.00 0.05
Speed 0.45 0.10 0.10 0.75 0.60
Communication 0.10 0.00 0.05 0.00 0.05
Instruments 0.15 0.05 0.05 0.15 0.10
Visibility 0.15 0.10 0.10 0.05 0.05
1.00 1.00 1.00 1.00 1.00

The first column shows the assessed relevance of the three elements at level 1 to
the criterion of performance. Accommodation is weighted 10 per cent, control
65 per cent and information 25 per cent. Since the table gives the relative rele-
vance of the elements at each level to the criteria, this part of each column must
sum to 1. The process of assessing relevance weights is carried out in a similar
way for the second level of the tree.
(d) Each element has a partial relevance number (PRN) for each criterion. It is
calculated:
PRN = Criterion weight × Element weight
It is a measure of the relevance of that element with respect only to that criteri-
on. For this case the partial relevance numbers are shown in Table A4.15.
For instance, at level 2 the PRN for direction with respect to capital costs is 0.05
× 0.25 = 0.0125.
PRNs are calculated for each element at each level for each criterion.
(e) The LRN for each element is the sum of the PRNs for that element (see Ta-
ble A4.16). It is a measure of the importance of that element relative to others at
the same level in achieving the highest-level objective. For example, at level 2 the
LRN for direction is 0.0375 (= 0.0150 + 0 + 0.0100 + 0 + 0.0125). There is one
LRN for each element at each level.
(f) There is one CRN for each element. They are calculated by multiplying the LRN
of an element by the LRNs of each associated element at a higher level (see Ta-
ble A4.17). This gives each element an absolute measure of its relevance.
For example, for passengers at level 2:
CRN = LRN × LRN of higher element = 0.2425 × 0.2850 = 0.069
The CRNs at the second level show the comparative importance with respect to
the overall objective of the elements at that level. Thus, speed is the most im-
portant (0.240) and baggage the least important (0.012).
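The level-1 arithmetic above can be sketched in a few lines of Python (the weights are those of Table A4.14; the function and variable names are our own, not part of the case):

```python
# Sketch of the PRN/LRN arithmetic for the level-1 elements (weights from Table A4.14).
criterion_weights = [0.30, 0.20, 0.10, 0.15, 0.25]  # performance, comfort, safety, running, capital
element_weights = {
    "Accommodation": [0.10, 0.75, 0.60, 0.05, 0.15],
    "Control":       [0.65, 0.15, 0.30, 0.80, 0.75],
    "Information":   [0.25, 0.10, 0.10, 0.15, 0.10],
}

lrn = {}
for element, weights in element_weights.items():
    # partial relevance number = criterion weight x element weight
    prns = [cw * ew for cw, ew in zip(criterion_weights, weights)]
    # local relevance number = sum of the element's PRNs across the criteria
    lrn[element] = round(sum(prns), 4)

print(lrn)   # {'Accommodation': 0.285, 'Control': 0.5625, 'Information': 0.1525}
```

The same loop applied to the level-2 weights reproduces the level-2 LRNs of Table A4.16.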


Recall that by this process the bottom row of elements (specific technological
requirements) will have overall measures of their relevance in achieving the ob-
jective at the highest level of the tree. This should lead to decisions about the
importance, timing, resource allocation, etc. of the tasks ahead.

Table A4.15 Partial relevance numbers


Criteria Perfor- Com- Safety Running Capital
mance fort costs costs
Criterion weight 0.30 0.20 0.10 0.15 0.25
Elements at level 1 Element weights
Accommodation 0.10 0.75 0.60 0.05 0.15
Control 0.65 0.15 0.30 0.80 0.75
Information 0.25 0.10 0.10 0.15 0.10
1.00 1.00 1.00 1.00 1.00
Partial relevance numbers
Accommodation 0.0300 0.1500 0.0600 0.0075 0.0375
Control 0.1950 0.0300 0.0300 0.1200 0.1875
Information 0.0750 0.0200 0.0100 0.0225 0.0250
Elements at level 2 Element weights
Passengers 0.05 0.70 0.55 0.05 0.10
Baggage 0.05 0.05 0.05 0.00 0.05
Direction 0.05 0.00 0.10 0.00 0.05
Speed 0.45 0.10 0.10 0.75 0.60
Communication 0.10 0.00 0.05 0.00 0.05
Instruments 0.15 0.05 0.05 0.15 0.10
Visibility 0.15 0.10 0.10 0.05 0.05
1.00 1.00 1.00 1.00 1.00
Partial relevance numbers
Passengers 0.0150 0.1400 0.0550 0.0075 0.0250
Baggage 0.0150 0.0100 0.0050 0.0000 0.0125
Direction 0.0150 0.0000 0.0100 0.0000 0.0125
Speed 0.1350 0.0200 0.0100 0.1125 0.1500
Communication 0.0300 0.0000 0.0050 0.0000 0.0125
Instruments 0.0450 0.0100 0.0050 0.0225 0.0250
Visibility 0.0450 0.0200 0.0100 0.0075 0.0125


Table A4.16 Local relevance numbers


Criteria Performance Com- Safety Running Capital LRN
fort costs costs
Level 1 Partial relevance numbers
Accommodation 0.0300 0.1500 0.0600 0.0075 0.0375 0.2850
Control 0.1950 0.0300 0.0300 0.1200 0.1875 0.5625
Information 0.0750 0.0200 0.0100 0.0225 0.0250 0.1525
1.0000
Level 2 Partial relevance numbers
Passengers 0.0150 0.1400 0.0550 0.0075 0.0250 0.2425
Baggage 0.0150 0.0100 0.0050 0.0000 0.0125 0.0425
Direction 0.0150 0.0000 0.0100 0.0000 0.0125 0.0375
Speed 0.1350 0.0200 0.0100 0.1125 0.1500 0.4275
Communication 0.0300 0.0000 0.0050 0.0000 0.0125 0.0475
Instruments 0.0450 0.0100 0.0050 0.0225 0.0250 0.1075
Visibility 0.0450 0.0200 0.0100 0.0075 0.0125 0.0950
1.0000

Table A4.17 Cumulative relevance numbers


Element LRN LRN of higher element CRN
Passengers 0.2425 0.2850 0.069
Baggage 0.0425 0.2850 0.012
Direction 0.0375 0.5625 0.021
Speed 0.4275 0.5625 0.240
Communication 0.0475 0.5625 0.027
Instruments 0.1075 0.5625 }
Instruments 0.1075 0.1525 } 0.077 (summed over both higher elements)
Visibility 0.0950 0.1525 0.014


Marking Scheme (out of 20) Marks


Based on two levels of elements
(a) Relevance tree 4
(b) Criteria with weights 2
(c) Element weights 2
(d) Partial relevance numbers 4
(e) Local relevance numbers 4
(f) Cumulative relevance numbers 4
Total 20

Module 14

Review Questions
14.1 The correct answer is C. C is the definition of time series forecasting. A is untrue
because TS methods work for stationary and non-stationary series. Decomposition
is the only method that uses regression even to a small extent. Therefore, B is
untrue. D is partly true. Some, but not all, TS methods are automatic and need no
intervention once set up.
14.2 The correct answer is A, D. TS methods analyse the patterns of the past and project
them into the future. Where conditions are not changing, the historical record is a
reliable guide to the future. TS methods are therefore good in the short term when
conditions have insufficient time to change (situation A) and in stable situations (D).
For the same reason, they are not good at predicting turning points (situation B). In
order to analyse the past accurately, a long data series is needed, thus situation C is
unlikely to be one in which TS methods are used.
14.3 The correct answer is D. A stationary series has no trend and constant variance.
Homoscedastic means ‘with constant variance’. Thus, it is only D that fully defines a
stationary series.
14.4 The correct answer is False. The part of the statement referring to MA is true; the
part referring to ES is false. ES gives unequal weights to past values, but they are
not completely determined by the forecaster. They are partly chosen by the
forecaster in that a smoothing constant, α, is selected.
14.5 The correct answer is A. The three-point moving average for period 9 is the average
of the values for periods 6, 7 and 8:

14.6 The correct answer is A.


Value Old smoothed New smoothed


7 – 7.0
9 7.0 7.8 (= 0.6 × 7 + 0.4 × 9)
8 7.8 7.9 (= 0.6 × 7.8 + 0.4 × 8)
10 7.9 8.7 (= 0.6 × 7.9 + 0.4 × 10)
12 8.7 10.0 (= 0.6 × 8.7 + 0.4 × 12)
11 10.0 10.4 (= 0.6 × 10 + 0.4 × 11)
7 10.4 9.0 (= 0.6 × 10.4 + 0.4 × 7)
10 9.0 9.4 (= 0.6 × 9 + 0.4 × 10)

The forecast for period 9 is 9.4.
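The table can be reproduced with a short routine (a sketch; the smoothing constant implied by the table is α = 0.4, and rounding to one decimal place at each step mirrors the hand calculation):

```python
def exponential_smoothing(values, alpha):
    # new smoothed = (1 - alpha) x old smoothed + alpha x new actual,
    # started off with the first actual value
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(round((1 - alpha) * smoothed[-1] + alpha * v, 1))
    return smoothed

series = [7, 9, 8, 10, 12, 11, 7, 10]
result = exponential_smoothing(series, alpha=0.4)
print(result)       # [7, 7.8, 7.9, 8.7, 10.0, 10.4, 9.0, 9.4]
print(result[-1])   # 9.4, the forecast for period 9
```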


14.7 The correct answer is D. The two smoothing constants are chosen independently
with a view only to ensuring the best possible accuracy.
14.8 The correct answer is D. The ratio between an actual and a smoothed value is a
measure of seasonality.
14.9 The correct answer is C. The trend at any point is a + bt, where a and b are the
regression coefficients. Therefore, the trend at t = 6 is 12.2 + (6 × 3.1) = 30.8.
14.10 The correct answer is C. Smoothing should eliminate random and seasonal
fluctuations. Thus, the smoothed value should contain the cycle and the trend.
Dividing by the trend, the cycle remains:
Smoothed value/Trend = (Trend × Cycle)/Trend = Cycle

14.11 The correct answer is D. The length of a cycle is the time it takes to repeat itself. In
this case the time is 20 quarters (i.e. five years).
14.12 The correct answer is A. A forecast at time t is given by:

Case Study 14.1: Interior Furnishings


1
(a) The forecast for future months is 2442 – see Table A4.18.

Table A4.18 Moving average forecast


Demand Three-month moving average
2016 Oct. 2000
Nov. 1350 1767 = (2000 + 1350 + 1950)/3 (rounded)
Dec. 1950 1758 = (1350 + 1950 + 1975)/3 (rounded)
2017 Jan. 1975 2342 etc.
Feb. 3100 2275
Mar. 1750 2133

Apr. 1550 1533
May 1300 1683
June 2200 2092
July 2775 2442
Aug. 2350 2442

(b) The forecast for future months is 2056 – see Table A4.19.

Table A4.19 Exponential smoothing forecast


Demand Exponential smoothing (α = 0.1) Old smoothed Calculations for the smoothed values
2016 Oct. 2000 2000 –
Nov. 1350 1935 2000 1935 = 0.9 × 2000 + 0.1 × 1350
Dec. 1950 1936 1935 1936 = 0.9 × 1935 + 0.1 × 1950
2017 Jan. 1975 1940 1936 1940 = 0.9 × 1936 + 0.1 × 1975
Feb. 3100 2056 1940 2056 = 0.9 × 1940 + 0.1 × 3100
Mar. 1750 2025 2056 2025 = 0.9 × 2056 + 0.1 × 1750
Apr. 1550 1978 2025 1978 = 0.9 × 2025 + 0.1 × 1550
May 1300 1910 1978 1910 = 0.9 × 1978 + 0.1 × 1300
June 2200 1939 1910 1939 = 0.9 × 1910 + 0.1 × 2200
July 2775 2023 1939 2023 = 0.9 × 1939 + 0.1 × 2775
Aug. 2350 2056 2023 2056 = 0.9 × 2023 + 0.1 × 2350

(c) In both cases it is assumed that the series are stationary. In other words, there is
no trend in the data and they have constant variance through time.
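Both forecasts can be checked with a short script (a sketch; the rounding conventions are chosen to match the tables, and the variable names are our own):

```python
demand = [2000, 1350, 1950, 1975, 3100, 1750, 1550, 1300, 2200, 2775, 2350]

# three-month moving average, each value placed against the middle month
ma = [round(sum(demand[i - 1:i + 2]) / 3) for i in range(1, len(demand) - 1)]
print(ma[-1])    # 2442, the moving average forecast for future months

# exponential smoothing with alpha = 0.1, started off with the first actual value
smoothed = demand[0]
for d in demand[1:]:
    smoothed = round(0.9 * smoothed + 0.1 * d)
print(smoothed)  # 2056, the exponential smoothing forecast
```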

Marking Scheme (out of 10) Marks


(a) Moving average
– Basic calculation 1
– Continuing calculation through series 1
– Forecast 1

(b) Exponential smoothing
– Starting off with actual value 1
– Basic calculation 2
– Continuing calculation through series 1
– Forecast 1
(c) Assumptions
– No trend 1
– Constant variance 1
Total 10

Case Study 14.2: Garden Machinery Manufacture


1 Holt’s Method is based on the following formulae:
New smoothed series = (1 − α) × (Previous smoothed series + Previous trend) + α × Actual value
New smoothed trend = (1 − β) × Previous trend + β × (New smoothed series − Previous smoothed series)
The forecasts for quarterly demand in 2018 are calculated in Table A4.20. The total
forecast is therefore 215.7 + 223.9 + 232.1 + 240.3 = 912.0.

Table A4.20 Holt’s Method forecast


α = 0.2, β = 0.3
Period Actual Smoothed series Smoothed trend
2016 1 140 140
2 155 155 15.0
3 155 167 = 0.8(155 + 15) + 0.2(155) 14.1 = 0.7(15) + 0.3(167 − 155)
4 170 178.9 = 0.8(167 + 14.1) + 0.2(170) 13.4 = 0.7(14.1) + 0.3(178.9 − 167)
2017 1 180 189.8 = 0.8(178.9 + 13.4) + 0.2(180) 12.7 = 0.7(13.4) + 0.3(189.8 − 178.9)
2 170 196.0 = 0.8(189.8 + 12.7) + 0.2(170) 10.8 = 0.7(12.7) + 0.3(196 − 189.8)
3 185 202.4 = 0.8(196 + 10.8) + 0.2(185) 9.5 = 0.7(10.8) + 0.3(202.4 − 196)
4 190 207.5 = 0.8(202.4 + 9.5) + 0.2(190) 8.2 = 0.7(9.5) + 0.3(207.5 − 202.4)
Forecasts
2018 1 207.5 + 8.2 = 215.7
2 207.5 + (2 × 8.2) = 223.9
3 207.5 + (3 × 8.2) = 232.1

4 207.5 + (4 × 8.2) = 240.3
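The table can be reproduced exactly with a short script; the `decimal` module is used so the one-decimal rounding matches the half-up rounding of the hand calculation (a sketch, with our own variable names):

```python
from decimal import Decimal, ROUND_HALF_UP

def r1(x):
    # round to one decimal place, half up, as in the hand calculation
    return x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

alpha, beta = Decimal("0.2"), Decimal("0.3")
actuals = [Decimal(a) for a in [140, 155, 155, 170, 180, 170, 185, 190]]

s, t = actuals[1], actuals[1] - actuals[0]   # starting values: 155 and 15
for a in actuals[2:]:
    s_new = r1((1 - alpha) * (s + t) + alpha * a)
    t = r1((1 - beta) * t + beta * (s_new - s))
    s = s_new

forecasts = [s + k * t for k in range(1, 5)]   # 2018 quarters 1 to 4
print(forecasts, sum(forecasts))   # 215.7, 223.9, 232.1, 240.3; total 912.0
```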

Marking Scheme (out of 10) Marks


Series:
– Starting off 1
– Basic calculation 2
– Continuing calculation 1
Trend:
– Starting off 1
– Basic calculation 2
– Continuing calculations 1
Forecast:
– Individual quarters 1
– Total 1
Total 10

Case Study 14.3: McClune and Sons


1
(a) Keith Scott, assistant to the director of finance at McClune and Sons, is to
develop a forecast of Scotch whisky industry sales for each of the 12 months of
2015 as an input to some financial statements. The historical data available (Ex-
hibits 1 and 2) show an obvious seasonal pattern as well as a trend. This suggests
that forecasts could be made using the technique called classical decomposition.
The basis of this method is the assumption that the series has four component
parts: trend, cycle, seasonality and randomness. The technique involves isolating
the components, which can then be reassembled to make forecasts for future
time periods.

Since the random component is by definition unpredictable and has an average
of 0, it must be omitted from the forecast.
(i) Isolating the trend. If the trend is linear (a straight line), linear regression
can be used to identify it. This process involves regressing the historical
sales data against the time index (a variable taking the values 1, 2, 3, 4, 5, …
for the time periods). The fitted values of this regression constitute the
trend element of the decomposition analysis. Results are:
Trend = a + bt (a ≈ 8.83 and b ≈ 0.055, recovered from the fitted values in column 3 of Exhibit 3)
where Trend = the trend or fitted value for time period t, and t = 1, 2, 3, … for
succeeding time periods.


Exhibit 3 shows the historical data in column 1, the time index in column 2
and the trend in column 3.
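The trend-fitting step can be sketched as follows (a generic least-squares routine, not the exact code behind the exhibit; the data in the example are made up so the fitted trend of 3 + 2t can be checked by eye):

```python
def linear_trend(y):
    # least squares fit of y against a time index t = 1, 2, 3, ...
    # returns (a, b) such that the trend at time t is a + b * t
    n = len(y)
    t = list(range(1, n + 1))
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    b = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) \
        / sum((ti - t_bar) ** 2 for ti in t)
    a = y_bar - b * t_bar
    return a, b

# illustrative data lying exactly on the line 3 + 2t
a, b = linear_trend([5, 7, 9, 11, 13])
print(round(a, 6), round(b, 6))   # 3.0 2.0
```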

Exhibit 3
(1) (2) (3) (4) (5) (6)
Year Month Sales Time Trend Mov. av. Cycle Season
2006 1 8.12 1 8.89
2006 2 7.76 2 8.94
2006 3 7.97 3 9.00
2006 4 7.88 4 9.05
2006 5 8.45 5 9.11
2006 6 8.68 6 9.16 9.80 1.07 0.89
2006 7 6.77 7 9.22 9.74 1.06 0.70
2006 8 6.60 8 9.27 9.70 1.05 0.68
2006 9 8.39 9 9.33 9.76 1.05 0.86
2006 10 11.88 10 9.38 9.87 1.05 1.20
2006 11 15.58 11 9.44 10.05 1.06 1.55
2006 12 19.50 12 9.49 10.09 1.06 1.93
2007 1 7.43 13 9.55 10.25 1.07 0.72
2007 2 7.26 14 9.60 10.07 1.05 0.72
2007 3 8.67 15 9.66 10.13 1.05 0.86
2007 4 9.26 16 9.71 10.08 1.04 0.92
2007 5 10.55 17 9.77 10.05 1.03 1.05
2007 6 9.17 18 9.82 9.93 1.01 0.92
2007 7 8.66 19 9.88 9.80 0.99 0.88
2007 8 4.45 20 9.93 9.71 0.98 0.46
2007 9 9.10 21 9.99 9.68 0.97 0.94
2007 10 11.32 22 10.04 9.65 0.96 1.17
2007 11 15.23 23 10.10 9.53 0.94 1.60
2007 12 18.02 24 10.15 9.59 0.94 1.88
2008 1 5.87 25 10.21 9.39 0.92 0.62
2008 2 6.19 26 10.26 9.35 0.91 0.66
2008 3 8.34 27 10.32 9.20 0.89 0.91
2008 4 8.91 28 10.37 9.35 0.90 0.95
2008 5 9.05 29 10.43 9.41 0.90 0.96
2008 6 9.98 30 10.48 9.82 0.94 1.02
2008 7 6.26 31 10.54 9.98 0.95 0.63
2008 8 3.98 32 10.59 10.12 0.96 0.39
2008 9 7.24 33 10.65 10.12 0.95 0.72

2008 10 13.18 34 10.70 10.17 0.95 1.30
2008 11 15.88 35 10.76 10.23 0.95 1.55
2008 12 22.90 36 10.81 10.25 0.95 2.23
2009 1 7.88 37 10.87 10.50 0.97 0.75
2009 2 7.81 38 10.92 10.52 0.96 0.74
2009 3 8.40 39 10.98 10.83 0.99 0.78
2009 4 9.43 40 11.03 10.79 0.98 0.87
2009 5 9.76 41 11.09 11.22 1.01 0.87
2009 6 10.28 42 11.14 11.32 1.02 0.91
2009 7 9.27 43 11.20 11.43 1.02 0.81
2009 8 4.16 44 11.25 11.54 1.03 0.36
2009 9 10.98 45 11.31 11.52 1.02 0.95
2009 10 12.73 46 11.36 11.52 1.01 1.11
2009 11 21.03 47 11.42 11.55 1.01 1.82
2009 12 24.08 48 11.47 11.63 1.01 2.07
2010 1 9.19 49 11.53 11.63 1.01 0.79
2010 2 9.21 50 11.58 11.64 1.01 0.79
2010 3 8.09 51 11.64 11.71 1.01 0.69
2010 4 9.45 52 11.69 11.92 1.02 0.79
2010 5 10.14 53 11.75 12.08 1.03 0.84
2010 6 11.17 54 11.80 12.46 1.06 0.90
2010 7 9.29 55 11.86 12.45 1.05 0.75
2010 8 4.36 56 11.91 12.52 1.05 0.34
2010 9 11.75 57 11.97 12.79 1.07 0.92
2010 10 15.31 58 12.02 12.91 1.07 1.19
2010 11 22.94 59 12.08 13.11 1.09 1.75
2010 12 28.67 60 12.13 13.17 1.09 2.18
2011 1 9.04 61 12.19 13.14 1.08 0.69
2011 2 10.01 62 12.24 13.16 1.07 0.76
2011 3 11.41 63 12.30 13.28 1.08 0.86
2011 4 10.82 64 12.35 13.45 1.09 0.80
2011 5 12.57 65 12.41 13.82 1.11 0.91
2011 6 11.83 66 12.46 14.36 1.15 0.82
2011 7 8.91 67 12.52 14.16 1.13 0.63
2011 8 4.61 68 12.57 13.94 1.11 0.33
2011 9 13.21 69 12.63 13.70 1.08 0.96

2011 10 17.39 70 12.68 13.59 1.07 1.28
2011 11 27.33 71 12.74 13.16 1.03 2.08
2011 12 35.21 72 12.79 13.01 1.02 2.71
2012 1 6.68 73 12.85 13.16 1.02 0.51
2012 2 7.33 74 12.90 13.14 1.02 0.56
2012 3 8.53 75 12.96 13.14 1.01 0.65
2012 4 9.46 76 13.01 13.05 1.00 0.73
2012 5 7.41 77 13.07 12.84 0.98 0.58
2012 6 10.08 78 13.12 12.67 0.97 0.80
2012 7 10.67 79 13.18 12.94 0.98 0.82
2012 8 4.40 80 13.23 13.00 0.98 0.34
2012 9 13.21 81 13.29 13.19 0.99 1.00
2012 10 16.25 82 13.34 13.39 1.00 1.21
2012 11 24.90 83 13.40 13.82 1.03 1.80
2012 12 33.08 84 13.46 14.01 1.04 2.36
2013 1 9.95 85 13.51 14.10 1.04 0.71
2013 2 8.00 86 13.57 14.08 1.04 0.57
2013 3 10.84 87 13.62 14.24 1.05 0.76
2013 4 11.83 88 13.68 14.35 1.05 0.82
2013 5 12.68 89 13.73 14.36 1.05 0.88
2013 6 12.33 90 13.79 14.27 1.04 0.86
2013 7 11.72 91 13.84 14.36 1.04 0.82
2013 8 4.20 92 13.90 14.44 1.04 0.29
2013 9 15.06 93 13.95 14.50 1.04 1.04
2013 10 17.66 94 14.01 14.53 1.04 1.22
2013 11 24.92 95 14.06 14.45 1.03 1.73
2013 12 32.06 96 14.12 14.54 1.03 2.21
2014 1 11.00 97 14.17 14.47 1.02 0.76
2014 2 9.02 98 14.23 14.42 1.01 0.63
2014 3 11.58 99 14.28 14.40 1.01 0.80
2014 4 12.11 100 14.34
2014 5 11.68 101 14.39
2014 6 13.44 102 14.45
2014 7 10.87 103 14.50
2014 8 3.62 104 14.56

2014 9 14.87 105 14.61
(ii) Isolating the cycle. For monthly data, a 12-point moving average can be
expected to smooth away both random fluctuations and seasonal variations.
The moving average thus contains the remaining two elements, the trend
and the cycle:
Moving average = Trend × Cycle
Consequently, the ratio Moving average/Trend is the cycle.


The relevant calculations are also in Exhibit 3. Column 4 shows the 12-point
moving average; column 3 shows the trend; column 5 shows the cycle, calcu-
lated as column 4 divided by column 3.
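The smoothing can be illustrated with the first year of sales from Exhibit 3 (a sketch; the exhibit appears to place each 12-month average against the sixth of the twelve months, which is what this simple version does):

```python
def twelve_point_ma(series):
    # average of each run of 12 consecutive observations, rounded to 2 d.p.
    return [round(sum(series[i:i + 12]) / 12, 2) for i in range(len(series) - 11)]

sales_2006 = [8.12, 7.76, 7.97, 7.88, 8.45, 8.68,
              6.77, 6.60, 8.39, 11.88, 15.58, 19.50]
print(twelve_point_ma(sales_2006))   # [9.8], the first entry in column 4 of Exhibit 3
```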

Exhibit 4: Cyclical effect (the cycle, ranging from about 0.88 to 1.16, plotted against time, periods 0 to 120)


Exhibit 4 shows the cycle as a graph plotted against time. In the early part of
the series, up to period 70, there is a clear cyclical pattern. After this the pat-
tern is not so clear and seems to be disappearing.
(iii) Isolating the seasonality. The seasonality is calculated from the moving
average and the actual values:
Actual value = Trend × Cycle × Seasonality × Randomness
Moving average = Trend × Cycle
Consequently, the ratio:
Actual value/Moving average = Seasonality × Randomness
Since the seasonality is calculated several times for each month (e.g. for Janu-
ary, an estimate of seasonality is made for 2006, 2007, 2008, …), the


randomness can, it is hoped, be eliminated by averaging the estimates for a
particular month.
In Exhibit 3, the individual (unaveraged) seasonal factors are shown in col-
umn 6 (calculated as column 1 divided by column 4). In Exhibit 5 these
individual seasonal factors are averaged to give the true seasonality.

Exhibit 5
2006 2007 2008 2009 2010 2011 2012 2013 2014 Average
Jan. 0.72 0.62 0.75 0.79 0.69 0.51 0.71 0.76 0.69
Feb. 0.72 0.66 0.74 0.79 0.76 0.56 0.57 0.63 0.68
Mar. 0.86 0.91 0.78 0.69 0.86 0.65 0.76 0.80 0.79
Apr. 0.92 0.95 0.87 0.79 0.80 0.73 0.82 0.84
May 1.05 0.96 0.87 0.84 0.91 0.58 0.88 0.87
June 0.89 0.92 1.02 0.91 0.90 0.82 0.80 0.86 0.89
July 0.70 0.88 0.63 0.81 0.75 0.63 0.82 0.82 0.75
Aug. 0.68 0.46 0.39 0.36 0.34 0.33 0.34 0.29 0.40
Sep. 0.86 0.94 0.72 0.95 0.92 0.96 1.00 1.04 0.92
Oct. 1.20 1.17 1.30 1.11 1.19 1.28 1.21 1.22 1.21
Nov. 1.55 1.60 1.55 1.82 1.75 2.08 1.80 1.73 1.73
Dec. 1.93 1.88 2.23 2.07 2.18 2.71 2.36 2.21 2.20
Total 11.98
Average 1.00

A further adjustment to seasonality has sometimes to be made. If the overall
effect of the calculated seasonality is to increase or decrease the series, as well
as to rearrange the distribution within the year, then this former effect must
be removed. The indication that this effect is present is that the seasonal fac-
tors for each month have an average value different from 1.0. In Exhibit 5 it
can be seen that the seasonal factors do have an average of 1.0 and therefore
the adjustment does not have to be made in this case.
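The averaging of a month's individual factors can be checked directly; here are the eight January factors from Exhibit 5 (a sketch with our own variable names):

```python
# individual January seasonal factors, 2007-2014, from Exhibit 5
jan_factors = [0.72, 0.62, 0.75, 0.79, 0.69, 0.51, 0.71, 0.76]
jan_seasonal = round(sum(jan_factors) / len(jan_factors), 2)
print(jan_seasonal)   # 0.69, the January average in Exhibit 5
```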
(iv) To forecast the 2015 Scotch whisky sales, the components have to be esti-
mated for each month of the year and these estimates multiplied together.
The forecasts are shown in Exhibit 6. Note that totals may not work out ex-
actly because of rounding. The build-up of the forecast for January 2015 is
explained here as an example.
January 2015 is the 109th time period (t = 109). Thus, the trend for January
2015 is 14.83 (see the trend column of Exhibit 6).
The cycle for January 2015 has to be estimated from column 5 of Exhibit 3.
Since the pattern is not a convincing one and it does seem to peter out, it is


assumed that there is no definite cycle. The cyclical factor for January 2015
and the other months is taken to be 1.0.
Seasonality for January 2015, taken from Exhibit 5, is 0.69.
The forecast for January 2015 is calculated by multiplying the three components together:
Forecast = Trend × Cycle × Seasonality = 14.83 × 1.0 × 0.69 = 10.29 (calculated with unrounded seasonal factors)
The forecasts for the whole of 2015 are shown in Exhibit 6.

Exhibit 6: Forecasts for 2015


Year Month Time Trend Cycle Season Forecast
2015 1 109 14.83 1 0.69 10.29
2015 2 110 14.89 1 0.68 10.10
2015 3 111 14.94 1 0.79 11.77
2015 4 112 15.00 1 0.84 12.62
2015 5 113 15.05 1 0.87 13.09
2015 6 114 15.11 1 0.89 13.44
2015 7 115 15.16 1 0.75 11.43
2015 8 116 15.22 1 0.40 6.09
2015 9 117 15.27 1 0.92 14.11
2015 10 118 15.33 1 1.21 18.53
2015 11 119 15.38 1 1.73 26.68
2015 12 120 15.44 1 2.20 33.89

(b) There are a number of assumptions and limitations in the use of these forecasts.
These reservations do not mean of course that the forecasts cannot be used, but
they do mean that they should only be used in full awareness of the problems
involved. The reservations are:
(i) The decomposition method does not allow the accuracy of the forecasts to
be measured.
(ii) Other forecasting methods could be used in such a situation, for example, the
Holt–Winters Method. Keith Scott should try other methods and compare
their accuracy.
(iii) Keith Scott should ensure he discusses the forecasts thoroughly with man-
agement before formalising the method by incorporating the forecasts in pro
formas and the like.

Marking Scheme (out of 30) Marks


(a) The forecasts
(i) Measuring the trend
– method 3
– calculations 2
(ii) Measuring the cycle
– method 3

– calculations 2
– conclusion 2
(iii) Measuring seasonality
– method for basic seasonality 3
– calculations 2
– adjustment 2
(iv) Combining into a forecast
– method 3
– calculations 2
(b) Comments on the approach
(i) Lack of a measure of the forecasting accuracy 2
(ii) Possibility of using other techniques 2
(iii) Need to communicate with management colleagues 2
Total 30

Module 15

Review Questions
15.1 The correct answer is False. Both manager and expert should view forecasting as a
system. They differ in the focus of their skills. The expert knows more about
techniques; the manager knows more about the wider issues.
15.2 The correct answer is A. The people who are to use the system should have primary
responsibility for its development. They will then have confidence in it and see it as
‘their’ system.
15.3 The correct answer is D. Analysing the decision-making process may reveal
fundamental flaws in the system or organisational structure which must be ad-
dressed before any forecasts can hope to be effective.
15.4 The correct answer is False. The conceptual modelling stage includes consideration
of possible causal variables but has wider objectives. The stage should be concerned
with all influences on the forecast variable. Time patterns and qualitative variables
also come into the reckoning.
15.5 The correct answer is D. The smoothed value calculated in period 5 from the actual
values for periods 3–5 is 16.0. This is the forecast for the next period ahead, period
6.
15.6 The correct answer is A. The one-step-ahead forecast error for period 6 is the
difference between the actual value and the one-step-ahead forecast for that period:


15.7 The correct answer is C. The MAD is the average of the errors, ignoring the sign:
MAD = (Sum of the absolute errors)/(Number of errors)
15.8 The correct answer is B. The MSE is the average of the squared errors:
MSE = (Sum of the squared errors)/(Number of errors)
15.9 The correct answer is E. MAD and MSE are different measures of scatter. In
comparing forecasting methods they may occasionally give different answers,
suggesting different methods as being superior.
MAD measures the average error. The method for which it is lower is therefore
more accurate on average. The MSE squares the errors. This can penalise heavily a
method that leaves large residuals. Forecasting method 2 may be better on average
than method 1 (i.e. have a lower MAD), but contained within the MAD for method
2 there may be some large errors that cause the MSE of method 2 to be higher than
that of method 1.
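The point can be made concrete with invented error series (a sketch, not data from the module):

```python
def mad(errors):
    # mean absolute deviation: average error, ignoring the sign
    return sum(abs(e) for e in errors) / len(errors)

def mse(errors):
    # mean squared error: average of the squared errors
    return sum(e * e for e in errors) / len(errors)

method_1 = [4, -4, 4, -4]    # steady, moderate errors
method_2 = [1, -1, 1, -9]    # smaller on average, but with one large error

print(mad(method_1), mad(method_2))   # 4.0 3.0 : method 2 looks better
print(mse(method_1), mse(method_2))   # 16.0 21.0 : method 1 looks better
```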
15.10 The correct answer is C. Other measures of closeness of fit (e.g. the correlation
coefficient) are based on the same data as were used to calculate the model parame-
ters. This method keeps the two sets of data apart. A and B are true but not the
reasons why the test is described as an independent one.
15.11 The correct answer is A, B, C. A, B and C summarise the reasons why step 7,
incorporating judgements, is an important part of the system.
15.12 The correct answer is False. A consensus on what the problems are can be just as
difficult to obtain as a consensus on the solutions.

Case Study 15.1: Interior Furnishings


1
(a) Using a three-point moving average (MA), the forecast is as in Table A4.21.

Table A4.21 Moving average forecast error


Time Demand Three- Forecast Error
point MA
2016 Oct. 2000
Nov. 1350 1767
Dec. 1950 1758
2017 Jan. 1975 2342 1767 208
Feb. 3100 2275 1758 1342
Mar. 1750 2133 2342 592
Apr. 1550 1533 2275 725
May 1300 1683 2133 833
June 2200 2092 1533 667
July 2775 2442 1683 1092

Aug. 2350 2442 2092 258

(b) Using exponential smoothing (ES) with = 0.1, the forecast is as in Table A4.22.

Table A4.22 Exponential smoothing forecast error


Time Demand ES Forecast Error
2016 Oct. 2000 2000
Nov. 1350 1935
Dec. 1950 1936
2017 Jan. 1975 1940 1936 39
Feb. 3100 2056 1940 1160
Mar. 1750 2025 2056 306
Apr. 1550 1978 2025 475
May 1300 1910 1978 678
June 2200 1939 1910 290
July 2775 2023 1939 836
Aug. 2350 2056 2023 327

(c) Calculating the MSE for the exponential smoothing:
MSE = (39² + 1160² + 306² + 475² + 678² + 290² + 836² + 327²)/8 = 376,999
Calculating the MSE for the moving average:
MSE = (208² + 1342² + 592² + 725² + 833² + 667² + 1092² + 258²)/8 = 639,765
(d) Exponential smoothing has the lower MSE and therefore performs better over
the historical time series. The forecast for September 2017 is the exponential
smoothing forecast. The most recent smoothed value is the forecast for the next
period ahead. Thus, the forecast for September 2017 = 2056.
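The comparison can be verified from the one-step-ahead errors in Table A4.21 and Table A4.22 (a sketch):

```python
errors_ma = [208, 1342, 592, 725, 833, 667, 1092, 258]   # moving average errors
errors_es = [39, 1160, 306, 475, 678, 290, 836, 327]     # exponential smoothing errors

def mse(errors):
    # mean squared error over the one-step-ahead errors
    return sum(e * e for e in errors) / len(errors)

print(round(mse(errors_ma)))   # 639765
print(round(mse(errors_es)))   # 376999
# exponential smoothing has the lower MSE, so its forecast (2056) is preferred
```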
(e) The reservations about the forecast are:
(i) Exponential smoothing assumes the series is stationary. This has not been
checked (there are insufficient data to do so).
(ii) The possibility of a seasonal effect has been ignored (it would be impossible
to measure seasonality from less than one year’s data).
(iii) There is much volatility in the series, as seen by considering the data and
the MSE. The exponential smoothing forecast, although better than the
moving average, is not particularly good. It may be that different types of
forecasting methods should be used. Perhaps a causal model should be
tried.


Marking Scheme (out of 20) Marks


(a) Calculating the MA smoothed values 2
Translating smoothed values into forecasts 2
(b) Calculating the ES smoothed values 2
Translating smoothed values into forecasts 2
(c) Correct method for calculating
– one-step-ahead errors 1
– MSE 2
Calculating the MSE for the MA forecasts 2
Calculating the MSE for the ES forecasts 2
(d) Correct choice of forecast for September 2017 2
(e) Reservations
– Stationarity 1
– Seasonality 1
– Other methods 1
Total 20

Case Study 15.2: Theatre Company


1 There are nine steps in the guidelines.

Step 1: Analyse the Decision-Making System


In planning future productions, many interlinked decisions must be taken. All
depend in some way on the forecasts:
(a) length of run
(b) seat prices
(c) discounts
(d) promotions
(e) advertising
(f) costs, etc.
The timing of these decisions, the people who take them and how decisions are
altered or reviewed should be mapped out. In the light of this information it may be
necessary to revise the decision-making process.

Step 2: Define Forecasts Needed


Having completed step 1, it should be possible to make a detailed list of the full
range of forecasts required, the levels of accuracy, their timings, their time horizons
and by whom they will be used.

Step 3: Prepare a Conceptual Model


The factors affecting the demand for tickets should be considered and listed. The
main factors are:


(a) Internal
(i) reputation of play
(ii) reputation of actors
(iii) presence of star names
(iv) ticket prices
(v) advertising/promotional expenditures
(vi) demand for previous, similar productions
(b) External
(i) personal disposable income
(ii) weather
(iii) time of year
(iv) day of week
(v) competition
These are the factors that it is hoped a forecasting model could take into account.

Step 4: Data Available


It is likely that data on all the factors listed in step 3 above will not be available. In
particular, while attendance figures will be available, demand data will not. This is
especially important because of ‘house full’ productions, where demand may well
have been three or four times the attendance. Since it is demand that is to be
forecast, some subjective assessments may have to be made or some surveys may
have to be carried out. Similarly, advertising expenditure may not be available for
individual productions. Expenditures may have to be broken down subjectively.

Step 5: Decide on the Technique


In this case, the technique would probably be a causal model relating demand to the
factors listed under step 3.

Step 6: Test the Accuracy


The accuracy would be tested through the independent test, the second of the two
presented in this module. Data from perhaps two productions would be put to one
side. The coefficients of the forecasting model would be estimated from data on
other productions and this model used to forecast demand for tickets at the two
productions. The performance of the model would be quantified by a measure of
scatter (MSE and MAD). This would be used to choose between different models
and to decide whether the accuracy was sufficient for the decisions to be taken.

Step 7: Decide How to Incorporate Judgement


Situations will arise where special circumstances warrant the making of adjustments
to a statistical forecast. Unusual publicity prior to the opening of a play or the
withdrawal of a famous actor would be examples of such circumstances in this case.
Judgement would be needed to modify the forecasts. A system, perhaps regular
monthly meetings, should be instituted to discuss and make alterations. It would be
essential that changes be carefully considered by all involved rather than hasty or
unilateral decisions be taken. Those attending the meetings would be accountable
for any alterations made.

Step 8: Implement
The manager of the forecasting system should establish and gain agreement on what
the implementation problems are and how they can be solved. The problems would
depend on the individuals involved, but it is likely that they would centre on the
replacement of purely judgement-based methods by a more scientific one. The
manager should also follow the first set of forecasts through the decision process.

Step 9: Monitor
Implementation refers to just before and after the first use of the forecasts. In
monitoring, the performance of the system is watched, but not in such detail, every
time it is used. The accuracy of the forecasted demand for each production should
be recorded and reasons for deviations explored. In the light of this information, the
usefulness of the system can be assessed and changes to it recommended. The track
records of those making judgemental adjustments to the forecasts should also be
monitored. In this situation, where judgement must play an important part, this
aspect of monitoring will take on particular significance.

Marking Scheme (out of 25) Marks


There are nine steps but some are more difficult than others:
Step 1 4
Step 2 2
Step 3 5
Step 4 3
Step 5 3
Step 6 2
Step 7 2
Step 8 2
Step 9 2
Total 25

Case Study 15.3: Brewery


1 The questions posed at the end of the case correspond to the nine stages of the
guidelines for developing a forecasting system.

Step 1: Analyse the Decision-Making System


The purpose of this analysis is to ensure that the forecasts really do serve the
decision process as intended rather than being unconnected and peripheral. Im-
portant side benefits often accrue from such analyses. Inconsistencies in the way
decisions are taken and in the organisational structure may come to light. A whole


forecasting project may have to be postponed pending the resolution of these wider
issues.
A thorough analysis of a decision-making system involves systems theory. A lower-
level approach of listing the decisions directly and indirectly affected by the fore-
casts is described here. The list would be determined from an exhaustive study of
the organisational structure and the job descriptions associated with relevant parts
of it. Here is a brief description of the main decisions.

Main decisions                           Related decisions
Production levels for one year ahead     Materials requirements
(monthly)                                Manpower requirements
                                         Pre-production operating activities
                                         Machinery maintenance
                                         Warehousing requirements
                                         Short-term financial planning
Distribution quantities (quarterly)      Warehousing requirements
                                         Transport needs
                                         Short-term financial planning
Marketing action (quarterly)             Advertising
                                         Promotions

The list forms the input information for step 2, determining the forecasts required.

Step 2: Determine the Forecasts Required


The decisions all have a time horizon of, at most, one year. The forecasts have
therefore to be for one year ahead.
The production decisions require monthly forecasts, the distribution and marketing
decisions quarterly forecasts. This suggests that the system should produce monthly
forecasts, aggregating in the case of the latter. The timing of the decisions within the
month or quarter dictates the time at which the forecasts should reach the decision
makers.

Step 3: Conceptualise
At this step consideration is given to the factors that influence the forecast variable.
No thought is given to data availability at this step. An ideal situation is envisaged.
An alcoholic beverage is not strictly essential to the maintenance of life. It is a
luxury product. Therefore, its consumption will be affected by economic circum-
stances. It would be strange if advertising and promotions did not result in changes
in demand. In addition, the variability of the production, advertising and promo-
tions of the different competitors must have an effect. In particular, the launch of a
new product changes the state of the market. It is not just competing beer products
that are important. Competition in the form of other products, such as lager and
possibly wines and spirits, must also have an influence.


The data record in Table 15.4 makes it clear that there is a seasonal effect. In other
words, the time of year and, perhaps, the weather are relevant factors. The occur-
rence of special events may have had a temporary influence. A change in excise duty
as a result of a national budget is an obvious example. More tentatively, national
success in sporting tournaments is rumoured to have an effect on the consumption
of alcoholic beverages.
There are also reasons for taking a purely time series approach to the forecast. First,
the seasonal effect will be handled easily by such methods. Second, the time horizon
for the forecasts is short: less than one year. Within such a period there is little time
for other variables to bring their influence to bear. To some extent, therefore, sales
volume could have a momentum of its own. Third, a time series approach will give a
forecast on the basis ‘if the future is like the past’. Such a forecast would be the
starting point for judging the effect of changing circumstances.

Step 4: Data Availability


The ideals which were the subject of step 3 will be restricted by the availability (or
lack of availability) of data. The search for data records relating to the factors of step
3 will reveal the restrictions. They might be:
(a) The organisation’s data on advertising and promotions might be aggregated and
difficult to separate.
(b) Competitors’ data are likely to be available for some factors (production levels
and new products, for example) but unavailable for others (advertising, promo-
tions). Where competitive data are available, they may not be available
sufficiently early to be used. They may not become available within the one-year
time horizon of the forecasts.
Data that are both available and usable are the organisation’s aggregated data on
advertising and promotions and nationally available data on the economic climate.
In the case of the former, only quarterly data are obtainable, even though monthly
forecasts are wanted. It must therefore, regrettably, be decided to produce only
quarterly forecasts.
Consideration of Table 15.4 suggests that data earlier than 2009 should not be
included. The growth in sales volume in the early 2000s is distinctly higher than
subsequently. Whatever the reasons for this, there is no point in requiring the
forecasting techniques to account for it, especially since the post-2008 data record is
sufficiently long for most purposes.

Step 5: Decide Which Techniques to Use


There are good reasons for employing both a time series and a causal modelling
approach to the problem. Both should be used and then judgement made between
them on the basis of their apparent accuracies.
A time series technique will have to have a number of characteristics. It will have to
handle the trend that is evident from Table 15.4. It will also have to deal with the
seasonal effect. There may or may not be a cycle in the data. However, the decision
to restrict the data record to the period 2009–17 means that it will be difficult to


determine any cyclical effect (these effects are often five or seven years in length).
To summarise, what is required is a technique that can handle trend and seasonality
but not cycles. The obvious choice is the Holt–Winters technique.
The causal modelling technique should be multiple regression analysis with two
independent variables. The first independent variable is the gross domestic product
(GDP) of the UK, as a measure of the economic climate. Other economic variables
can also be used in this role, but GDP is the most usual. The second independent
variable is the sum of advertising and promotional expenditures (ADV/PRO) of the
organisation. Scatter diagrams relating the dependent variable with each independ-
ent variable in turn can verify that it is reasonable to consider GDP and ADV/PRO
as independent variables.
Other potential independent variables will have to be ignored for reasons of the
non-availability of data.

Step 6: Testing the Accuracy


Because two approaches (time series and causal modelling) are being employed,
there are two stages in testing the accuracy of the forecasts. First, it has to be
determined which Holt–Winters model (i.e. what values for the smoothing con-
stants) is the best and whether one or both of the independent variables should be
included in the regression model. Second, the best Holt–Winters model has to be
compared with the best regression model.

Which Holt–Winters Model?


The Holt–Winters model works with three smoothing constants, one each for the
main series, the trend and the seasonality. To decide on the values for the constants,
some experimentation is needed. Several different sets of values should be tried and
the forecasting accuracy of each compared. The accuracy is measured via the mean
square error or the mean absolute deviation of the one-step-ahead forecast errors.
A brief reminder of this method is given. For any set of parameters, the data record
should be followed time period by time period, and at each a forecast for one period
ahead calculated. The forecast error is found by subtracting the forecast from the
actual. The scatter of these errors for the whole series is measured by calculating the
mean square error (the average squared error) or the mean absolute deviation (the
average of the absolute values of the errors). This is repeated with other parameter
sets. The set for which one or both of these statistics is the lowest is chosen.
It is best to try a wide range of parameter sets and ‘home in’ on the one that seems
the best. Table A4.23 shows the results of this process for the data of Table 15.4.

Table A4.23  Which Holt–Winters forecast?

Smoothing constants        MAD    MSE
Series  Trend  Season
 0.5     0.2    0.2          5     33
 0.4     0.2    0.2          4     25
 0.3     0.2    0.2          4     21
 0.2     0.2    0.2          4     20
 0.1     0.2    0.2          4     30
 0.2     0.4    0.2          3     17
 0.2     0.3    0.2          3     18
 0.2     0.1    0.2          4     26
 0.2     0.5    0.2          3     16
 0.2     0.6    0.2          3     16
 0.2     0.4    0.5          2     12*
 0.2     0.4    0.4          3     13
 0.2     0.4    0.3          3     15
 0.2     0.4    0.1          4     21
* Best parameter set.

The table shows the parameter sets in three groups. For the first group, the
smoothing constant for the main series has been varied; for the second, that for the
trend has been varied while the series constant is held at its ‘best’ level; finally, the
constant for seasonality has been varied while the other two are held at their ‘best’ levels. The
parameter set with the lowest MAD and MSE is (0.2, 0.4, 0.5). The Holt–Winters
model with these parameters would appear to be the best. Note that the procedure
for finding these parameter values is an approximate one. There is no guarantee that
the truly optimum set has been found. To ensure that this had been done would
have required an exhaustive comparison of all possible parameter sets.
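The ‘home in’ search described above can be sketched as a simple grid search. Since Table 15.4 itself is not reproduced here, the sketch runs on a small hypothetical quarterly series; the additive form of Holt–Winters and the crude initialisation are assumptions, and the point is the search logic rather than the numbers.

```python
import itertools

def holt_winters_mad(series, alpha, beta, gamma, period=4):
    """One-step-ahead MAD for additive Holt-Winters with given constants."""
    # Crude initialisation from the first two seasons.
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    seasonal = [x - level for x in series[:period]]
    abs_errors = []
    for t in range(period, len(series)):
        forecast = level + trend + seasonal[t % period]
        abs_errors.append(abs(series[t] - forecast))
        new_level = (alpha * (series[t] - seasonal[t % period])
                     + (1 - alpha) * (level + trend))
        trend = beta * (new_level - level) + (1 - beta) * trend
        seasonal[t % period] = (gamma * (series[t] - new_level)
                                + (1 - gamma) * seasonal[t % period])
        level = new_level
    return sum(abs_errors) / len(abs_errors)

# Hypothetical quarterly sales with an upward trend and a Q3 peak.
sales = [30, 42, 55, 40, 32, 45, 58, 42, 35, 47, 61, 45, 37, 50, 64, 47]

# Try every combination on a coarse grid and keep the set with lowest MAD.
grid = [0.1, 0.2, 0.3, 0.4, 0.5]
best = min(itertools.product(grid, grid, grid),
           key=lambda p: holt_winters_mad(sales, *p))
print(best, round(holt_winters_mad(sales, *best), 2))
```

In practice one would coarsen or refine the grid around promising regions rather than enumerate everything, exactly as the answer describes.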

Which Regression Model?


There are three possible regression models:
(a) GDP as independent variable.
(b) ADV/PRO as independent variable.
(c) GDP and ADV/PRO as independent variables.
They should be compared using the following basic criteria:
(a) Sensibleness of models.
(b) Closeness of fit – using R-bar-squared.
(c) Randomness of residuals – statistically or by inspection.
Table A4.24 shows how the three models compare according to these criteria.
The first model, using GDP as the only independent variable, is inadequate. The fit
is not a close one (R-bar-squared = 0.27). Nor are the residuals random. They
exhibit a seasonal pattern in that the residuals for quarter 1 are all negative, for
quarter 3 all positive.
The second model, using ADV/PRO as the independent variable, is good. The fit is
a close one and the residuals are random.


The third model, with GDP and ADV/PRO as independent variables, is slightly
better than the second, having a marginally higher R-bar-squared.

Table A4.24  Output for the regression models

                     Model (independent variable(s))
Criterion            GDP                ADV/PRO    GDP, ADV/PRO
Sensibleness         Yes                Yes        Yes
R-bar-squared        0.27               0.91       0.93
Random residuals     No (seasonality)   Yes        Yes
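The R-bar-squared figures above are the adjusted R², which penalises each additional independent variable. A minimal sketch of the adjustment, using the 36 quarterly observations (2009–17); the raw R² inputs below are hypothetical, chosen only for illustration:

```python
def r_bar_squared(r_squared, n, k):
    """Adjusted R-squared for n observations and k independent variables."""
    return 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

n = 36  # quarterly data, 2009-17
# Hypothetical raw R-squared values for one- and two-variable models.
print(round(r_bar_squared(0.29, n, 1), 2))  # one regressor (e.g. GDP alone)
print(round(r_bar_squared(0.93, n, 2), 2))  # two regressors
```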

Finally, since these are regression models, they should be checked for the existence
of any of the usual reservations: lack of causality, spuriousness, etc. There may
indeed be a problem with causality. The second and third models are superior
because the ADV/PRO variable captures the seasonality, which was a problem in
the first. It is not clear whether it is the seasonality or the expenditure on advertising
and promotion that explains the changes in sales volumes. There will be no difficul-
ty if advertising and promotion expenditures continue to be determined with
seasonal variations as in the past, but if the method of allocation changes then both
models will be inadequate. A new model, dealing with advertising/promotion and
seasonality separately, will have to be tested.
Meanwhile, the model with two independent variables seems to be the best. The
results of this regression analysis are shown in more detail in Table A4.25.

Table A4.25  Output of the regression model linking sales to GDP and
ADV/PRO

Sales volume = –44.3 + 0.49 × GDP + 17.4 × ADV/PRO
R-bar-squared = 0.93

Residuals:
               Quarter
Year      1      2      3      4
2009     0.3    4.2    0.4    0.6
2010     2.4    1.3    1.7    4.6
2011     2.6    4.3    0.5    1.4
2012     1.1    5.3    4.7    1.7
2013     2.9    1.8    0.7    2.5
2014     1.9    2.8    1.3    3.0
2015     2.6    1.2    0.7    0.3
2016     1.4    2.2    0      0.1
2017     1.4    5.0    3.5    0.8
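Given the fitted coefficients in Table A4.25, producing a forecast is a direct evaluation of the equation. The input values below are hypothetical, purely to show the mechanics:

```python
# Forecast from the fitted regression of Table A4.25.
def sales_forecast(gdp, adv_pro):
    return -44.3 + 0.49 * gdp + 17.4 * adv_pro

# Hypothetical quarterly inputs: a GDP index of 160 and ADV/PRO spend of 1.2.
print(round(sales_forecast(160, 1.2), 1))  # 55.0
```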

Best Overall? Time Series or Regression?


The best time series model is the Holt–Winters with smoothing constants 0.2 (for
the series), 0.4 (for the trend) and 0.5 (for the seasonality); the best regression model
relates sales volume to GDP and total expenditure on advertising and promotion.
To choose between these two, an independent test of accuracy should be used. This
means that the latter part of the data (2017) is kept apart and the data up to then
(2009–16) used as the basis for forecasting 2017. The better model is the one that
provides forecasts for 2017 that are closer to the actual sales volumes. There are two
reasons for comparing the models in this way.
First, the test is independent in the sense that the data being forecast (2017) are not
used in establishing the forecasting model. Contrast this with the use of
R-bar-squared. All of the data, 2009–17, are used to calculate the coefficients of the
model; the residuals are then calculated and R-bar-squared measures how close this
model is to the same 2009–17 data. This is not an independent measure of accuracy.
Second, the accuracy of smoothing techniques is usually measured through the
mean square error or mean absolute deviation; the accuracy of regression models is
measured by R-bar-squared. These two types of measures are not directly
comparable. On the other hand, the independent test of accuracy does provide a directly
comparable measure: closeness to the 2017 data.
The details of the test are as follows. The 2009–16 data are used for each of the two
models as the basis of a forecast for each of the four quarters of 2017. The close-
ness of the two sets of forecasts to the actual 2017 data is measured using the mean
square error and the mean absolute deviation. The model for which both these
measures are smaller is chosen as the better to use in practice. Had the two
measures contradicted one another, the interpretation would have been that the model
with the lower MSE avoided the larger individual errors (the MSE penalises extreme
errors heavily), whereas the model with the lower MAD was closer on average across all values.
Table A4.26 shows the results of this test. The Holt–Winters time series is clearly
superior to the regression model. Both measures, MAD and MSE, demonstrate that
it gives the better forecasts for the already known 2017 data. The Holt–Winters
technique, with smoothing constants 0.2, 0.4 and 0.5, should be chosen to make
forecasts. The whole of the data series, including of course 2017, should be used in
doing this. Forecasts for 2018 are shown in Table A4.27.

Table A4.26  Comparing time series and regression models

Quarter              Time series                 Regression
(of 2017)  Actual   Forecast  Error  Error²    Forecast  Error  Error²
1            37        38       1      1          39       2       4
2            49        50       1      1          54       5      25
3            63        63       0      0          60       3       9
4            48        47       1      1          47       1       1
                     MAD = 3/4 = 0.75           MAD = 11/4 = 2.75
                     MSE = 3/4 = 0.75           MSE = 39/4 = 9.75
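The comparison in Table A4.26 can be reproduced directly; a minimal sketch using the held-out 2017 quarters:

```python
# Independent test: 2017 actuals versus forecasts from models fitted on 2009-16.
actual = [37, 49, 63, 48]
hw_forecast = [38, 50, 63, 47]    # Holt-Winters (0.2, 0.4, 0.5)
reg_forecast = [39, 54, 60, 47]   # regression on GDP and ADV/PRO

def mad(forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

print(mad(hw_forecast), mse(hw_forecast))    # 0.75 0.75
print(mad(reg_forecast), mse(reg_forecast))  # 2.75 9.75
```

Both measures agree here, so the Holt–Winters model is chosen without any need to reconcile them.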


Table A4.27  2018 forecasts

Quarter (of 2018)   Forecast
1                      39
2                      52
3                      66
4                      49

Step 7: Incorporating Judgement into the Forecast


To incorporate judgement into a forecast, there are two basic tasks. The first is to
draw all the judgements together and try to form a consensus. This can be done
through one of the qualitative forecasting techniques. The second task is to make
the adjustment to the forecast, if one is to be made. This can be accomplished by
getting those people affected by the forecast to agree to the change and then, most
important of all, to make them accountable for it. It is vital that the accuracy of the
alterations be monitored.
It should be noted that the body of ‘experts’ whose views should be considered
could go outside the organisation and include generally recognised industry experts.
Assembling the judgements of these experts, obtaining a consensus and making
them accountable for their views is a difficult, if not impossible, task. Nevertheless,
the approach should be as outlined above. First, the judgements are brought
together; second, decisions to adjust the statistically derived forecasts are made.
Table A4.28 summarises some of the judgements that might be made about the
future of the product. Sometimes the judgements might be expressed in words
rather than in convenient percentage growth terms. When this happens, a quantita-
tive forecast has to be derived from the words. Table A4.28 shows the source of
each judgement, the verbal forecast where one exists and a forecast in terms of
percentage growth, whether actual or derived.

Table A4.28  Experts’ opinions of brand prospects

Source              Verbal forecasts                               % growth p.a.
Business pages of   1. Slight decline in volume next year          −1 to −2
newspapers          2. Stagnation for two to three years            0
                    3. Continuing as in recent years                4
                    4. No significant changes in the situation      4
Business journals   1. No cause for optimism                        0 (?)
                    2. No improvement in prospects                  3
Industry experts    1. No significant change                        4
                    2. Better than previous years                   5
                    3. Better than previous years                   5
Stockbrokers        1. Unlikely to sustain even recent modest       1
                       growth
                    2. Unchanged situation                          4

To obtain a consensus from these data, a modified version of the Delphi method
might be used. All the experts represented in the table should be approached,
presented with the views of the others and asked if they wish to adjust their opin-
ions. As a result, some of the more extreme views might be altered.
The second stage, the adjustment of the statistical forecasts, is made by people
within the organisation. They should be accountable for any changes made; it is not
of course possible to make the external experts accountable for their views within
the context of an organisation’s forecasts.

Step 8: Implement the Forecasts


There are four questions to be asked and answered in implementing a forecast:
(a) What are the problems?
(b) Do all the participants agree on the problems?
(c) What are the possible solutions to the problems?
(d) Can a consensus on an action plan be obtained?
Implementation in this case commences with the drawing up of a list of everyone
affected by the forecasts. Nearly all the main functional departments – finance,
marketing, production, corporate planning – should be represented on the list.
Those on the list are then interviewed. The purpose is solely to find out what
difficulties they think might block the successful working of the system. The style of
interviewing should be neutral. No attempt should be made to lead the interviewees,
nor to sell the forecasting system to them, nor to encourage them to say nice things
about it. Such approaches might result in the erection of defensive barriers and
prevent the eliciting of real views.
There are likely to be many problems. Only two examples will be described. First,
there might be disagreement about the effect of the launches of other new brands if
any are about to be introduced either by the organisation itself or by competitors.
Second, there may be internal political problems, especially if there have been recent
changes in the organisational structure or in personnel.
The most usual problem is a lack of belief in new techniques and technology on the
part of the users. When present, this is usually because of previous failures in similar
projects.
How might such problems be solved? The problem of the new brands might be
solved by creating two sets of forecasts, each relating to a different set of assump-
tions about the effect of the new brands. In effect, this solution is a form of the
technique of scenario writing. Contingency plans should be prepared so that the
organisation can respond to either scenario.
The problem of political difficulties, its source and its effect, will often go far
beyond the forecasting project. However, the early employment of the user
involvement approach is likely to minimise the effect, since the participants will feel
more like a team than they would otherwise.
Likewise, a lack of belief in a forecasting system will probably have effects that go
far beyond any one particular project, but early user involvement will mitigate the
effects.

Step 9: Monitor Performance


Monitoring is probably the most tedious stage of the nine, being based on soliciting
and recording a considerable amount of information. Thoroughness and persistence
are the virtues required. The performance of this forecasting system should be
monitored through three separate reports.
(a) An annual report should show the accuracy of the forecasts as measured by the
mean absolute deviation (MAD) or the MSE. The MAD achieved should be
compared with the MAD that is expected given the ex post measurement of accu-
racy described in step 6. As well as this indication of the average accuracy,
exceptional inaccuracies should be reported, with suggested reasons for their
occurrence. If some particular reason has appeared consistently, the forecasting
system should be adjusted accordingly. For example, had there been changes in
the trend to which the system had responded too slowly, then the smoothing
constant for the trend equation should be increased. The point is that, although
the smoothing constant as set may have been fine for the historical data, it might
no longer be right if the trend starts to behave in a slightly different way. An
adjustment to it would then have to be considered. If the trend starts to behave
in a radically different way, the whole basis of the system (the use of Holt–
Winters) will have to be re-evaluated.
(b) An annual report should show the performance of the ‘judges’, those who have
proposed qualitative adjustments to the forecasts. This means that their judge-
mental forecasts should be recorded from minutes of meetings and compared
with what has happened. Over time the identity of the good judgemental fore-
casters will begin to emerge. Their views should then carry greater weight in later
forecasts. Many of them may outperform the statistical forecasts. However, be-
fore scrapping the statistical system it will have to be borne in mind that these
judgements are being made as an amendment to the Holt–Winters output. It is
one thing to be able to make marginal improvements to an existing forecast; it is
quite another thing to beat the existing forecast when working in isolation from
it.
(c) After the system has been in operation for a year, a survey of users should be
carried out by an independent party. The users should be asked for their percep-
tions of the system. Exactly how do they use the forecasts? What are the
strengths and weaknesses of the system in their eyes? How can it be improved?
Do they think it has been successful in achieving its objectives? And so on. This
will demonstrate whether the system has credibility among the users and does
support their decisions. Such a survey should be repeated in the future when it is
felt that the whole system is due for review and reappraisal.


Conclusions
It should be emphasised that it is probably only for short-term forecasts that a time
series method will seem to be the best. For medium-term forecasts beyond a year
ahead, a causal model is likely to be better. Even for a short-term forecast, however,
uncertainty and volatility in the UK economic environment will eventually cause
problems, and adjustments will have to be made to the Holt–Winters model. For
important medium-term forecasts on which the expenditure of a lot of money is
justified, it may be worthwhile to use all three approaches to forecasting: causal,
time series and qualitative. If all give similar output, there is mutual confirmation of
the correct forecast; if they give different output, then the process of reconciling
them will be a valuable way for the managers involved to gain a better understand-
ing of the future.
This case solution has covered the important aspects of the case, but not all the
aspects. Among the omissions, for example, are statistical tests of randomness.
Furthermore, techniques other than Holt–Winters and some limited causal model-
ling have not been described, but they should have been considered. The emphasis
has, however, been on the role of a manager, not a statistician. The items included
are, in general, the things a manager would need to be aware of in order to be able
to engage in sensible discussions with forecasting experts.

Marking Scheme (out of 30) Marks


Steps
1. Analyse decision 2
2. Forecasts required 2
3. Conceptualise 4
4. Data availability 2
5. Techniques 2
6. Accuracy
– Best Holt–Winters 4
– Best regression 4
– Holt–Winters vs. regression 4
7. Judgements 2
8. Implementation 2
9. Monitoring 2
Total 30



Index
absolute value 5/15, 11/8 beta-binomial distribution 9/25
accounting data 3/2 binomial distribution 7/15–7/20, 7/22,
special case of 3/13–3/17 7/26–7/29, 9/2–9/4, 9/25
accounting ratios 3/6 characteristics 7/15
accounts 3/13–3/17 deriving 7/16–7/18
accuracy 1/22, 3/4, 3/5, 3/11, 5/2 occurrences 7/15–7/16
in sampling 6/12–6/18, 8/8 parameters 7/19
of business forecasting 12/22, 15/4– tables 7/18
15/8, 15/12–15/13 blocks 10/4, 10/11
addition 2/8, 2/21, 7/9, 7/13 Box–Jenkins Method 14/16–14/17, 15/4
advertising 1/18, 11/2, 12/4–12/7, brackets in algebraic expressions 2/10
12/21 brainstorming 13/8, 15/11
Advertising Standards Authority 1/18 breakeven point 2/33
aggregate index 5/26–5/31 business forecasting 13/1–13/24
algebra 2/5–2/25 accuracy 15/4–15/8, 15/12–15/13
alphabetical order 3/6, 3/12, 4/10 applications 13/4–13/5
alternative hypothesis 8/15–8/19, 8/26 case studies 13/24, 15/24–15/26
ambiguity 1/23 conceptual model 15/5
analogies 13/12 data available 15/5
analysis of variance 10/1–10/22 errors 15/2, 15/4, 15/14–15/16
ANOVA tables 10/8–10/10, 10/13–10/15, 12/24 exploratory approach 13/14
forecasts needed 15/5
applications 10/2–10/5 guidelines for organisation forecasting
case studies 10/21–10/22 system 15/2, 15/4–15/14
extensions 10/14–10/15 implementing the system 15/9–15/12
one-way 10/2–10/10, 10/14 incorporating judgements 15/8–15/9,
two-way 10/2–10/5, 10/10–10/14 15/12
ANOVA tables 10/8–10/10, 10/13–10/15, 12/24 long-term forecasting 13/5
management aspects 15/1–15/26
area sampling 6/10 manager’s role 15/2–15/4
area under the curve 7/20, 7/23 medium-term forecasting 13/4–13/5
arithmetic mean 5/3–5/15, 7/25, 7/26, monitoring performance 15/12–
8/3, 9/8 15/14
distortion by clusters and outliers normative approach 13/14
5/13 problems 15/10–15/11
error by averaging averages 5/14 qualitative techniques 13/2–13/17,
notation 8/7 15/8
pre-eminence 5/13–5/14, 5/19 quantitative techniques 13/6–13/7,
ARMA (autoregressive, moving average) 13/17–13/18
14/16 selection of technique 15/6
Armstrong, J. S. 13/18 short-term forecasting 13/4, 13/5,
autocorrelation coefficients 14/16 14/2, 14/17
averaging of averages 5/14 techniques 13/2–13/3, 15/1, 15/3
base forecast 14/2 time horizons 13/4, 13/5, 15/5
base year 5/24–5/25 time series methods 13/3
bases 2/19–2/24, 5/24–5/26 user involvement 15/2, 15/3, 15/8–
Bayesians 1/4 15/10, 15/14

business reports 3/2 measuring closeness of fit 12/14–12/16
case studies statistical basis 12/13–12/23
analysis of variance 10/21–10/22
business forecasting 13/24, 15/24–15/26 correlation coefficient 11/10–11/14, 11/19, 11/24, 15/7
correlation 11/31–11/32 multiple regression analysis 12/3–12/4
data analysis 4/22–4/24
data communications 3/28 residuals 11/15–11/17
distributions 7/37, 9/34–9/35 cost of living index 5/24, 5/27, 5/28
equations 2/32–2/34 critical values 8/12–8/19, 8/21, 9/23–9/24, 12/16
formulations 2/32
insurance premium 9/34 crop yield 10/3–10/5, 10/11
oil conservation 3/31 cross-impact matrices 13/11–13/12
regression 11/32, 12/33–12/36 cumulative relevance number (CRN)
relevance trees 13/24 13/17
sampling methods 6/24 curvilinear regression 12/7–12/9
statistical inference 8/38–8/40 cycles 14/9–14/16
summary measures 5/39–5/41 data analysis 4/1–4/24
time series forecasting 14/25–14/26 accounting data 4/3
uses and misuses of statistics 1/30– case studies 4/22–4/24
1/35 comparisons 4/9, 4/12, 4/14
wage negotiations 1/31 definition 4/1–4/2
catastrophe theory 13/13–13/14 exceptions 4/8, 4/12
causal modelling 13/3, 15/6 guidelines 4/6–4/12
central limit theorem 8/5, 8/7, 8/8, implications for the producers of
8/11, 9/10, 9/12, 9/15 statistics 4/13–4/17
certain events 1/3, 8/2–8/3 lack of confidence in 4/3
chi-squared distribution 9/15–9/21, 9/26 management information system
characteristics of 9/16 output 4/5, 4/9, 4/17
occurrences of 9/16 market research 4/5
tables 9/17 model building 4/7, 4/12, 4/13
class sizes 1/11 over-complication by experts 4/3, 4/5
cluster sampling 6/8, 6/11 reduction of data 3/5, 4/5, 4/6
clusters 5/13 re-presentation of data 4/6–4/7, 4/9,
coefficient of variation 5/21–5/22 4/13
collinearity 12/4–12/6, 12/24 data communication 3/1–3/31
combinations 7/10–7/14 accounting data 3/13–3/17
common-sense test 1/23, 5/2 case studies 3/31
comparisons 4/9, 4/12, 4/14, 5/12 presentation rules 3/3–3/13
computers 3/18, 3/23, 4/17, 11/8, 12/5, use of graphs 3/17–3/23
12/25 data points 1/6
confidence limits 8/3, 8/6–8/8, 8/10, data presentation rules 3/3–3/17, 3/20–
8/27, 9/10, 10/1–10/2 3/23, 4/3, 4/6–4/7
constants 2/5, 2/8, 2/13, 2/18 clarity of labelling 3/9–3/10, 3/12,
control group 8/20 3/22, 4/7
convenience sampling 6/12 interchange of rows and columns
coordinates 2/2–2/8, 2/12, 2/13 3/6–3/7, 3/12, 3/22, 4/7
correlation 11/1–11/3 minimal use of space and lines 3/8,
case studies 11/31–11/32 3/12, 3/22, 4/7
limitations 11/21–11/24 reordering of numbers 3/5–3/6, 3/12,
3/22

rounding to two effective figures 3/3–3/5, 3/11, 3/22, 4/7 case study 2/32–2/34
dependent 2/16
use of summary measures 3/12, 3/22, inconsistent 2/15, 2/16
4/7 linear 2/11–2/18, 11/15, 11/22
use of verbal summary 3/10–3/12, manipulation 2/8–2/11
3/22, 4/7 simultaneous 2/14–2/18
data reduction 3/5, 4/5, 4/6 error sum of squares (SSE) 10/6, 10/9,
decay functions 2/21 10/11, 10/12
decision making 4/13, 4/17, 8/26, 8/27, errors
15/2, 15/4, 15/9 in business forecasting 15/2, 15/4,
decision support systems 1/2 15/13–15/16
decomposition method 14/9 in sampling 6/13, 6/18
degrees of freedom 9/8–9/9, 9/12, in significance tests 8/15–8/19
9/14–9/22, 10/7, 12/15 logical 1/21
Delphi forecasting 13/6, 13/9–13/10, standard 8/7, 12/5
13/18, 15/8 statistical 1/2, 1/22–1/25, 5/23
dice 7/12–7/14, 7/16–7/18 estimation 8/2, 8/6–8/9, 8/22, 8/27,
distributions, statistical 1/5–1/17, 7/2– 9/1, 9/17
7/38 evidence 1/22
average 1/14 experimental design 10/15
binomial 7/15–7/20, 7/22, 7/26–7/29, 9/2–9/5 exponential functions 2/18–2/25, 12/10
and linear functions 2/24
case studies 7/37 exponential smoothing 14/4–14/6
chi-squared 9/15–9/21, 9/26 exponents 2/19–2/20
continuous 1/9–1/12 extrapolation 11/23
discrete 1/5–1/12, 7/20–7/22 factorial notation 7/11, 7/12
F-distribution 9/21–9/25, 10/7, factors 10/15
12/14 F-distribution 9/21–9/25, 10/7, 12/14
non-symmetrical 9/17 characteristics 9/21
normal 1/12–1/16, 7/14–7/15, 7/20–7/28 occurrences 9/22–9/23
tables 9/23–9/24, 10/13
observed 1/12–1/14, 1/24, 7/2–7/8, feedback 4/17, 15/11
7/14, 7/28–7/29, 9/2 financial analysts 1/21, 3/2
Poisson 9/2–9/8, 9/26 financial budgets 3/4
probability concepts 7/8–7/14 financial data 3/13–3/17
reverse J-shaped 5/11–5/12, 5/32, fitted values 11/6, 11/14–11/17, 12/5
7/2 fixed weight index 5/28
standard 1/12–1/18, 7/2, 7/8, 7/14–7/15, 7/26, 9/2, 9/15 forecast interval 12/22, 12/23
standard deviation 1/14, 1/17, 5/18–5/19, 5/23, 7/21–7/28, 8/21, 8/22 forecasting
methods and applications 13/14
formulations, case study 2/32
frequency histogram 1/7–1/9, 1/31, 7/4
symmetrical 5/9, 5/32, 7/19 frequency table 1/6, 7/2–7/8
U-shaped 5/10, 5/32, 7/2 functions 2/5–2/8
division 2/9, 2/20 Gossett, W. S. 9/10, 9/25
dummy variables 12/6 gradient 2/11
econometric forecasts 13/3, 15/16 graphics 1/19–1/20
effective figures 3/3–3/5 graphs 1/19–1/22, 1/23–1/25, 2/2–2/8,
Ehrenberg, A. S. C. 3/5, 4/5 2/13, 2/23
elasticity 2/8 helpful uses 3/17–3/20
equations 2/5, 2/8–2/25

in data communication 3/2, 3/17–3/23 logarithms 2/21, 2/24, 12/9–12/13
logical errors 1/21
unhelpful uses 3/17–3/20 long-range forecasting 13/18
gridlines 3/8, 4/7 long-term forecasting 13/5
gross domestic product (GDP) 11/3–11/4, 12/4–12/6, 12/21, 15/16 management data 1/18, 3/2, 3/10, 4/2
management information systems 1/2,
growth function 2/18, 2/21 3/2, 3/6, 3/18, 3/23
heteroscedasticity 11/20 output 4/5, 4/9, 4/17
histograms 5/8–5/11, 5/32, 7/16, 7/19 market research 1/2, 4/5, 6/2, 6/12,
Holt’s Method 14/6–14/8 6/18, 7/3, 8/2, 13/9
Holt–Winters Method 14/8 mathematics 2/1–2/25, 11/5–11/7
hypotheses 4/2, 8/2, 8/9–8/21, 8/24, equations 2/8–2/12, 11/5–11/6
9/13 exponential functions 2/18–2/25
testing 8/10 graphical representation 2/2–2/8
impossible events 1/3 linear functions 2/11–2/14
income statement 3/29 simultaneous equations 2/14–2/18
independent events 7/10, 7/12 mean absolute deviation (MAD) 5/15–5/21, 15/6, 15/13
indices 5/24–5/31
fixed weight index 5/28 mean square error (MSE) 15/6, 15/13
Laspeyres Index 5/28 mean squares 10/7, 10/10
Paasche Index 5/28 measures of central tendency 5/5
price relative index 5/27 measures of dispersion 5/15
simple aggregate index 5/26–5/27 measures of location 5/3–5/14, 5/22,
simple index 5/24–5/26 5/31
weighted aggregate index 5/27–5/31 arithmetic mean 5/5–5/15
inference, statistical 6/12, 6/16, 8/1–8/40, 9/1 choice of measure 5/6–5/8
median 5/5–5/6
applications 8/2 median 5/5–5/6
case studies 8/38–8/40 mode 5/6, 5/11–5/14
confidence levels 8/2–8/3 visual approach 5/12
estimation 8/6–8/9 measures of scatter 5/14–5/22, 5/31
sampling distribution of the mean coefficient of variation 5/21–5/22
8/3–8/6 comparison of measures 5/19–5/21
significance tests 8/9–8/29 interquartile range 5/15, 5/21
information collection 6/17 mean absolute deviation (MAD)
interaction variable 10/14 5/15–5/21, 15/6, 15/13
intercept 2/11, 2/21, 11/5, 11/24 range 5/15
internal rate of return (IRR) 3/31 standard deviation 5/18
interquartile range 5/15, 5/21, 6/12 variance 5/16–5/21
interviews 1/20 median 5/5, 5/10, 5/12
invoice checking 6/3, 6/17 medium-term forecasting 13/4
Jenkins, G. 15/4, 15/14 method of steepest descent 14/17
judgement sampling 6/4, 6/11–6/12 microcomputers 11/17–11/21
labelling in tables 3/9, 3/12, 3/22, 4/7 microeconomics 2/14
Laspeyres Index 5/28 mode 5/6, 5/11–5/14
least-squares method 11/7–11/9, 11/23, model building 4/7, 4/12, 4/13, 5/31
12/3, 12/24 modelling, practical experiences with
linear functions 2/6, 2/11–2/18, 2/22, forecasting time series and 15/14
2/24 moving averages 14/3–14/6, 14/13
and exponential functions 2/24 multi-collinearity 12/4
local relevance number (LRN) 13/17 multi-factor situations 10/15

multiple regression analysis 12/2–12/7, 12/24–12/26 percentages 1/21
pictorial representations 1/23
and simple regression 12/2–12/3 point estimate 8/7, 8/8, 8/27, 9/13,
collinearity 12/4–12/6 12/22
correlation coefficient 12/3–12/4 point representation 2/4
dummy variables 12/6 Poisson distribution 9/2–9/8
retention of variables 12/19–12/21 approximation to binomial 9/6–9/7
scatter diagrams 12/3 characteristics 9/2, 9/25
multiplication law 2/5, 2/10–2/11, 2/19, degrees of freedom 9/8–9/9
2/21 derivation 9/4
multiplication law of probability for occurrences 9/3
independent events 7/10, 7/12 parameters 9/6
multi-stage sampling 6/4–6/8, 6/11 tables 9/3–9/6
mutually exclusive events 7/9 t-distribution 9/9–9/15
negative binomial distribution 9/24, 9/25 pooled estimate of the standard error
non-linear regression analysis 12/2, 8/22
12/7–12/13 population mean 8/6–8/8, 9/21
curvilinear 12/7–12/9 population measures 8/7
transformations 12/9–12/13 population split 6/6–6/8
normal curve tables 1/13, 7/21, 7/23– populations 1/2, 1/17, 6/1–6/18, 8/1
7/25, 8/11, 8/17, 8/21, 9/6, 9/10 standard deviation of 8/7
normal distribution 7/20–7/28, 8/6, price relative index 5/27
9/25 probability 1/3–1/5, 7/4–7/14
approximating binomial with normal a priori assessment 1/4, 1/5, 7/8, 9/2
7/27–7/28 binomial 7/16, 7/17, 7/19
characteristics 7/20–7/21 concepts 7/8–7/14
derivation 7/22–7/23 conditional 7/9
normal curve tables 1/13, 7/21, errors 1/24
7/23–7/25, 8/11, 8/17, 8/21, 9/6, measurement 1/3–1/4, 1/10, 7/5
9/10 objective assessment 1/4
occurrences 7/21–7/22 probability distributions 7/2, 9/2
parameters 7/25–7/27 relative frequency assessment 1/4,
null hypothesis 8/9, 8/15–8/17 7/8, 9/2
number picking 4/8 subjective assessment 1/4, 1/5, 7/8
observations 1/6, 7/4, 8/23, 9/8, 10/8, probability histograms 1/8, 7/5, 7/6
11/2, 11/4 probability sampling 6/10
observed distributions 1/12–1/14, 1/24, profit figures 1/21
7/2–7/8, 7/14, 7/28–7/29, 9/2 profit margin 3/6
omissions 1/20 profitability 1/21
opinion polls 1/2, 6/3–6/6, 6/18, 7/16 promotion schemes 8/21–8/26
ordered arrays 1/6, 7/3 purchasing behaviour 5/29
orderings 7/10–7/12 quadrants 2/3
outliers 5/11, 5/13, 5/23–5/24 qualitative forecasting techniques 13/2–13/17, 15/8
Paasche Index 5/28
panel consensus forecasting 13/8 analogies 13/12
parameters 1/14–1/18 brainstorming 13/8
binomial distribution 7/19 catastrophe theory 13/13–13/14
normal distribution 7/25–7/27 cross-impact matrices 13/11–13/12
Poisson distribution 9/6 Delphi forecasting 13/6, 13/9–13/10,
t-distribution 9/15 13/18, 15/8
partial relevance numbers (PRN) 13/16 market research 13/9

panel consensus forecasting 13/8 rounding 3/6, 3/16–3/17, 3/22–3/23, 4/13
relevance trees 13/14–13/17
scenario writing 13/10–13/11 fixed 3/4
visionary forecasting 13/8 to two effective figures 3/3–3/5,
qualitative methods of business 3/11, 3/16, 3/22, 4/7, 4/9
forecasting 13/2–13/3, 13/6–13/8, variable 3/3
13/17 runs test 11/21, 12/16–12/19
quality control 6/3, 9/24 sample bias 1/20, 1/23, 6/12, 6/15
quota sampling 6/12 samples 1/2, 1/20, 6/1–6/3
raising to a power 2/20 paired 8/23–8/24
random numbers 4/8 sampling distribution of the mean 8/3–
tables 6/5–6/6 8/6, 8/11, 8/20
random sampling 6/5–6/11, 6/13, 6/16, sampling frame 6/14
8/26 sampling methods 6/1–6/24
randomness 11/16, 11/21 accuracy of samples 6/12–6/14
statistical tests 12/13, 12/16–12/19 applications of sampling 6/3–6/4
range 5/15 area sampling 6/10
readings 7/4 bias 6/12, 6/15
regression 2/24, 12/36 case studies 6/24
accuracy of predictions 12/22–12/23 cluster sampling 6/8, 6/11
applications 11/3–11/5 convenience sampling 6/12
calculations on microcomputer difficulties in sampling 6/14–6/16
11/17–11/21 judgement sampling 6/4, 6/11–6/12
case studies 11/31–11/32, 12/33–12/36 multi-stage sampling 6/4–6/8, 6/11
non-response 6/14–6/15, 6/18
equation of straight line 11/5–11/6 opinion polls 6/3–6/6, 6/18, 7/16
forecasting 11/3 probability sampling 6/10, 8/10
limitations 11/21–11/24 quota sampling 6/12
mathematics 11/5–11/7 random sampling methods 6/5–6/11, 6/13, 6/16
measuring closeness of fit 12/14–12/16, 12/24 sample size 6/16–6/17, 9/4, 9/8
multiple regression analysis 12/2–12/7, 12/24–12/26 sampling frame 6/14
non-linear regression analysis 12/2, 12/7–12/13 simple random sampling 6/4–6/11, 6/13
regression coefficients 11/19, 12/14 stratified sampling 6/4, 6/8–6/9, 6/13, 6/14
residuals 11/6, 12/24 systematic sampling 6/11
scatter diagrams 11/1–11/10, 11/17–11/21, 12/3–12/5 variable sampling 6/10
simple linear 11/7–11/9, 12/13–12/23 weighting 6/9
spurious 11/22 scatter 5/4, 5/14–5/22, 9/16, 9/18, 9/21
scatter diagrams 11/1–11/9, 11/17–11/21, 12/3–12/4
regression coefficients 11/19, 12/14 scenario writing 13/10–13/11
relevance trees 13/14–13/17 scepticism 1/24
case study 13/24 seasonality 14/8–14/16
representativeness 6/4, 6/8, 6/10, 6/12 sensitivity analysis 3/31
research and development 4/3 sequences 7/10
residuals 11/6, 11/7, 11/15–11/17, serial correlation 11/20
11/19–11/21 short-term forecasting 13/4, 13/5, 14/2,
tests for randomness 12/5, 12/9, 14/17
12/16–12/19

significance level 8/9–8/13, 8/9–8/22, 8/27 logical errors 1/21
significance tests 8/2, 8/9–8/29 misuse 1/2–1/3, 1/18–1/22, 1/23–1/25
assumptions 9/1 omissions 1/20
basic 8/15–8/19 technical errors 1/21
critical values 8/12–8/19 stepwise regression 12/25
difference between paired samples stratified sampling 6/4, 6/8–6/9, 6/12,
8/23–8/24 6/14
difference in means of two samples Student's distribution 9/11
8/19–8/23 subpopulations 6/10
errors 8/15–8/19 subscripts 2/12
limitations 8/25–8/27 subtraction 2/9, 2/21
one-tailed 8/13, 8/14, 9/14 sum of squares between treatments (SST)
power 8/16–8/17 10/6, 10/9, 10/11, 10/12
stages 8/20, 8/21, 12/18, 12/20 summary measures 5/1–5/41
summary 9/26 case studies 5/39–5/41
two-tailed 8/13, 8/14, 9/14, 9/18 in data presentation 3/7, 3/12, 3/22,
type 1 errors 8/15, 8/16, 8/17, 8/26 4/7, 4/13
type 2 errors 8/15, 8/16, 8/17, 8/26 indices 5/24–5/31
simple index 5/24–5/26 kurtosis 5/22
simple random sampling 6/4–6/11, measures of location 5/5–5/14
6/14, 6/17 measures of scatter 5/14–5/22
simultaneous equations 2/14–2/18 outliers 5/23–5/24
algebraic solution 2/16–2/18 skew 5/22
skew 5/22, 7/19, 9/22 usefulness 5/3–5/4, 5/12
slope coefficient 12/11, 12/23 sums of squares 12/14–12/16
slope of the line 2/11, 2/13, 11/5–11/7, 11/24 symbols 2/5
systematic sampling 6/11
smoothing constants 14/6–14/8, 14/18, tables 3/5–3/13, 3/21–3/22, 4/13, 5/4,
15/6 5/12, 5/32
squaring 5/16 t-distribution 9/9–9/15, 9/25, 12/20,
standard deviation 1/14, 1/17, 5/18–5/19, 5/23, 7/21–7/28, 8/7–8/9, 8/21, 8/22 t-distribution 9/9–9/15, 9/25, 12/20, 12/26
characteristics 9/9
estimate 9/8–9/13 normality 9/15, 9/17
standard distributions 1/12–1/18, 7/2, occurrences 9/10
7/8, 7/14–7/15, 7/26, 9/2, 9/15 parameters 9/15
summary 9/26 tables 9/11–9/15
standard error of predicted values (SE(Pred)) 12/22–12/23 technical errors 1/21, 1/23
technological forecasting 13/6
stationary series 14/2–14/6 theoretical distributions 7/2, 9/2
exponential smoothing 14/4–14/6 time series forecasting 14/1–14/26
moving averages 14/3–14/4 advantages 14/2, 14/18, 15/4
statistical gap 4/2 Box–Jenkins method 14/16–14/18
statistical tests 4/2, 12/13, 12/16–12/19 case studies 14/25–14/26
statistical theory 6/16, 8/7, 8/27, 10/8, cycles 14/9–14/16
15/17 decomposition method 14/9
statistics Holt’s Method 14/6–14/9
definition 1/1, 1/18 Holt–Winters Method 14/8–14/9
descriptive 1/2 practical experiences with modelling
inferential 1/2 and 15/14

review of techniques 14/16–14/18 discrete 1/9


seasonality 14/9, 14/10–14/15 related 1/21, 2/2, 2/14, 11/1, 11/4,
series with a trend 14/6–14/9 11/21–11/22
series with trend and seasonality variance 4/14, 5/16–5/21, 5/23, 8/20
14/8–14/9 variance sum theorem 8/20
series with trend, seasonality and variations 7/22
cycles 14/9–14/16 explained 11/12, 11/13
stationary series 14/2–14/6 R-squared 11/12
total sum of squares (Total SS) 10/5 unexplained 11/12, 11/13
transformation 2/24, 12/7, 12/9–12/13, verbal information 5/2, 5/32
12/25 verbal summary 3/10–3/13, 3/22, 4/7,
treatments 10/4–10/10 5/32
trends 14/6–14/13 visionary forecasting 13/8
Twyman’s Law 5/23 wage negotiations, case study 1/31–1/34
unique solution 2/15, 2/16 wage, average weekly 5/2
variables 1/6, 1/12, 1/14, 2/5, 2/18, weighted aggregate index 5/27–5/31
7/2, 7/14–7/15, 7/19 weighting 5/27–5/31, 6/9
associated 11/21–11/22, 15/16 x-axis 2/2–2/13, 7/20
continuous 1/9 y-axis 2/2–2/8, 2/13
