**WEEK 9: TIME SERIES**

• A time series is a graphical representation of observations taken over a long period, at specific times, and usually at equal intervals.
• Examples: the daily closing price of a share on the SE, total monthly sales, hourly temperatures announced by a weather station, population, un/employment figures, the turnover of a firm, prices, quantities, etc.
• It is a function of time, i.e. Y = F(t).
• What factors affect the comparability of the values used in constructing a time series? Calendar, price and population variations.
• See the next slide for details.

ATTRIBUTES OF THE VARIATIONS IN DETAIL

• Calendar-day variation affects time series expressed on a monthly basis, e.g. monthly wages that vary with the number of working days. To adjust for this, we divide the monthly figure by the respective number of days in the month.
• Price changes arise from changes in the value of money. A price index can be used to adjust the values.
• Large population changes affect unit-basis or per capita figures. The adjustment is C/N.
• A change in units, definition, classification or type of product.

CHARACTERISTICS OF A TIME SERIES

• Long-term or secular movements (variations): the general or overall trend of the data, drawn as a dotted line. This trend line can be used to predict future values of the series.
• Seasonal variation: fluctuations attributed to seasonal factors, e.g. the demand for certain products such as toys.
• Other elements: cyclical fluctuations (which repeat themselves) and random or irregular fluctuations.
• See a typical time series on the provided handout.

METHODS OF ESTIMATING THE TREND

• The least squares method, where D = d₁² + d₂² + d₃² + … + dₙ² is a minimum.
• The method of semi-averages: this entails separating the data into two equal parts, which are averaged to give two points used to draw the trend. E.g. 1970–1981 (12 years) is split into 1970–75 and 1976–81. For an odd number of periods the middle one is omitted; e.g. for 1970–1982 (13 years), 1976 is omitted.
• The freehand method, by inspecting the graph.
• The moving average method. This has some disadvantages; list two of them.
• See the example on the next slide.

MOVING AVERAGE EXAMPLES

Computation of the 3-unit moving average of: 23, 21, 22, 23, 27, 25, and 25.

Try: 170, 231, 261, 267, 278, 302, 299, 298, 340, 273, 210, and 158.

Value   Moving Total   Moving Aver.
23      -              -
21      66             22
22      66             22
23      72             24
27      75             25
25      77             25 2/3
25      -              -
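The 3-unit moving average shown above can be reproduced with a short script (Python is used here purely for illustration; the function name is mine, not from the slides):

```python
def moving_average(values, window):
    """Return the moving averages of `values` for the given window size.

    Positions at the ends with no full window get no average,
    mirroring the dashes in the slide's table.
    """
    totals = [sum(values[i:i + window]) for i in range(len(values) - window + 1)]
    return [t / window for t in totals]

print(moving_average([23, 21, 22, 23, 27, 25, 25], 3))
```

Running it on the slide's data gives 22, 22, 24, 25 and 25 2/3, matching the table.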

SAMPLING THEORY

• The theory deals with the relationship between a population and the samples drawn from it.
• Its importance includes:
(i) Estimation of population parameters from known sample statistics.
(ii) Determining whether the difference between two samples is significant or may be due to chance variation. Answers to such questions are obtained through tests of significance and hypothesis tests.

SAMPLING DISTRIBUTION

• This is a distribution in which the variable is a statistic obtained from samples of size N drawn from their parent population.
• The statistic computed will vary from sample to sample, giving a distribution of the statistic.
• Examples: the sampling distributions of means, of medians and of standard deviations.

A SAMPLING DISTRIBUTION OF MEANS

[Figure: a bell-shaped histogram of sample means, centred at µ_s = µ_p, with an SD and an SE.]

An infinite population, or sampling with replacement, is assumed.

SAMPLING DISTRIBUTION FOR MEANS

• For an infinite population (or sampling with replacement) from which all possible samples of size n are drawn, the mean of the sampling distribution, µ_x̄, and that of the population, µ_p, satisfy µ_x̄ = µ_p. Thus the sample mean is an estimator of the population mean.
• Estimates of the SE and SD are given by SE_x̄ = s/√n and SD_x̄ = σ/√n, where s is the SD of one of the samples and σ is the population SD.
• If the population is finite (sampling without replacement), then µ_x̄ = µ_p and SE = σ_x̄ = (σ/√n)·√[(N − n)/(N − 1)].
• Find σ_x̄ for N = 15,000, σ = 20 and n = 1,000. (≈ 0.611)
• What is the finite population correction (FPC) factor?
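The finite-population exercise above can be checked with a small sketch (Python for illustration only; `finite_se` is my name for the formula on the slide):

```python
import math

def finite_se(sigma, n, N):
    # SE of the mean for a finite population of size N,
    # sample size n: (sigma / sqrt(n)) * sqrt((N - n) / (N - 1))
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

print(round(finite_se(20, 1_000, 15_000), 3))  # ~0.611, as quoted on the slide
```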

PRE-REQUISITES FOR THE VERIFICATION OF THE RELATIONS .1

Use of random sampling: this gives every member of the population an equal chance of being selected into the sample.

Combination of samples: for a finite population of size N, the number of distinct samples of size n that can be drawn from it is the number of combinations,
C(N, n) = N! / [n!(N − n)!].

Chance of a sample being selected: each of the possible samples is chosen with the same probability, p = 1/C(N, n).

Self-check: compute the number of distinct samples of size n = 2 that can be drawn from a population of size N = 5. (10)
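The self-check follows directly from the combination formula; a quick confirmation (Python for illustration only):

```python
from math import comb

# Number of distinct samples of size n = 2 from a population of size N = 5,
# and the probability of any one such sample being chosen.
n_samples = comb(5, 2)
print(n_samples, 1 / n_samples)  # 10 samples, each with probability 1/10
```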

PRE-REQUISITES FOR THE VERIFICATION OF THE RELATIONS .2

Probability-based sample statistics

• The mean, µ, or expected value, E(x), is given by µ = Σ xᵢpᵢ, where pᵢ is the probability of occurrence of the value xᵢ; in this case it is the same for all samples.
• The variance is given by σ² = Σ (xᵢ − µ)²pᵢ = Σ pᵢxᵢ² − (Σ pᵢxᵢ)².
• Use these concepts to draw the sampling distribution of the means of random samples of size n = 2 from the population of values: 1, 3, 5, 7 and 9.
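The two formulas above can be sketched as a small helper (Python for illustration; `mean_var` is my name). Applied to the population 1, 3, 5, 7, 9 with equal probabilities it gives µ = 5 and σ² = 8:

```python
# mu = sum(x_i * p_i); sigma^2 = sum(p_i * x_i^2) - (sum(p_i * x_i))^2
def mean_var(values, probs):
    mu = sum(x * p for x, p in zip(values, probs))
    var = sum(p * x * x for x, p in zip(values, probs)) - mu ** 2
    return mu, var

mu, var = mean_var([1, 3, 5, 7, 9], [0.2] * 5)
print(mu, var)  # 5.0 and 8.0
```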

DISTRIBUTION MEANS AND PROBABILITY

• Possible samples: using combination theory, there are 10 possible distinct random samples of size n = 2 that can be drawn from the population 1, 3, 5, 7 and 9. They are: (1,3), (1,5), (1,7), (1,9), (3,5), (3,7), (3,9), (5,7), (5,9), (7,9).
• The means of these samples are, respectively: 2, 3, 4, 5, 4, 5, 6, 6, 7 and 8.
• The probability pᵢ of each sample being selected is 1/10.
• Use these results to construct a table and a histogram of the probability distribution of the sample means.

SAMPLE MEANS PROBABILITIES AND DISTRIBUTION

Tabulation:

µ   Probability (pᵢ)
2   1/10
3   1/10
4   2/10
5   2/10
6   2/10
7   1/10
8   1/10

[Histogram: pᵢ (1/10 or 2/10) plotted against µ = 2, 3, 4, 5, 6, 7, 8.]

DETAILED TABULATION

(Here x̄ᵢ are the sample means and µ = 5 is their mean.)

x̄ᵢ   pᵢ     x̄ᵢpᵢ    (x̄ᵢ − µ)   (x̄ᵢ − µ)²   (x̄ᵢ − µ)²pᵢ
2    1/10   2/10    −3         9           9/10
3    1/10   3/10    −2         4           4/10
4    2/10   8/10    −1         1           2/10
5    2/10   10/10    0         0           0
6    2/10   12/10    1         1           2/10
7    1/10   7/10     2         4           4/10
8    1/10   8/10     3         9           9/10

Σpᵢ = 1,  Σx̄ᵢpᵢ = 5,  Σ(x̄ᵢ − µ)²pᵢ = 3

DEDUCTIONS FROM THE TABULATION

• Means: from the table, µ_x̄ = 5, and the population mean is µ = (1+3+5+7+9)/5 = 5. Thus the sampling distribution of means, which is approximately normal, is centred on the parent population mean, i.e. µ_x̄ = µ.
• Standard deviation: the SD of the sampling distribution of means satisfies σ_x̄ < σ, i.e. √3 < √8. For an infinite population, σ_x̄ = σ/√n. σ_x̄ is called the standard error of the mean. Its value increases with the variability of the population but decreases as the sample size increases. For n = 1, σ_x̄ = σ, and it equals zero when n = N. These facts constitute part of the central limit theorem.
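The deductions can be verified by brute-force enumeration of all 10 samples (Python for illustration; variable names are mine):

```python
from itertools import combinations

# Enumerate the 10 distinct samples of size 2 from 1, 3, 5, 7, 9 and check
# that the mean of the sample means is 5 and their variance is 3
# (so sigma_xbar = sqrt(3) < sigma = sqrt(8)).
population = [1, 3, 5, 7, 9]
means = [sum(s) / 2 for s in combinations(population, 2)]
mu_xbar = sum(means) / len(means)
var_xbar = sum((m - mu_xbar) ** 2 for m in means) / len(means)
print(mu_xbar, var_xbar)  # 5.0 and 3.0
```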

FINITE POPULATION CORRECTION FACTOR

• It is the factor √[(N − n)/(N − 1)].
• The FPC is used to convert the SE formula from the infinite-population case to the finite one.
• The FPC is usually ignored when the sample constitutes 5% or less of the population, for when the sample size is much smaller than the population, the latter is considered effectively infinite in size.
• Find the value of the finite population correction factor for n = 100 and N = 10,000. (≈ 0.995)
• Also find the SE when n = 20, N = 1,200 and σ = 346.55. (≈ 76.874)
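Both exercises on this slide reduce to the same factor; a sketch (Python for illustration; `fpc` is my name for the correction factor):

```python
import math

def fpc(n, N):
    # finite population correction: sqrt((N - n) / (N - 1))
    return math.sqrt((N - n) / (N - 1))

fpc_value = fpc(100, 10_000)                          # ~0.995
se_value = (346.55 / math.sqrt(20)) * fpc(20, 1_200)  # ~76.874
print(fpc_value, se_value)
```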

STANDARD ERROR (S.E.) OF A SAMPLE MEAN

• The S.E. of the means is rarely derived for an infinite population; instead it is estimated from the S.E. of the mean of a single sample. In this case, S.E. = σ_x̄ = s/√n.
• According to Chebyshev's (1821–1894) theorem, the probability of a value lying within k standard deviations of the mean is at least p_k = 1 − 1/k².
• This applies to any set of data, not only normally distributed data. Hence µ − µ_x̄ = k·σ_x̄, or µ = µ_s + k·σ_x̄.
• In addition, the central limit theorem (CLT) provides that for large n the sampling distribution of means approximates a normal curve. As such, µ = µ_s ± z(S.E.) and z = (µ_s − µ)/S.E.
• See the estimation example overleaf.

AN EXAMPLE ON THE ESTIMATION OF A POPULATION MEAN

• For example, for a processing time with µ_x̄ = 72 days, σ = 3 days and n = 80, at the 68.27% confidence level we have S.E. = 3/√80 ≈ 0.3354, so µ ≈ 72 ± 0.34; the range of µ is 71.66 to 72.34. Thus the average processing time lies between these values at the given confidence level.
• At the 99% level, µ = 72 ± 2.58(S.E.) = 72 ± 0.8654, so µ ≈ 72 ± 0.87, giving the wider range 71.13 to 72.87. See other confidence levels overleaf.
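The two interval computations above can be sketched in a few lines (Python for illustration only):

```python
import math

# mu = 72 ± z * SE, with SE = 3 / sqrt(80),
# at the 68.27% (z = 1.00) and 99% (z = 2.58) levels.
se = 3 / math.sqrt(80)
intervals = {z: (72 - z * se, 72 + z * se) for z in (1.00, 2.58)}
for z, (lo, hi) in intervals.items():
    print(f"z = {z}: {lo:.2f} to {hi:.2f}")
```

This prints 71.66 to 72.34 for z = 1.00 and 71.13 to 72.87 for z = 2.58, matching the slide.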

INTERPRETATIONS OF CONFIDENCE INTERVALS (C.I.s)

C.I. (%)   z        C.I. (%)   z
99.73      3.00     80         1.28
99         2.58     68.27      1.00
98         2.33     50         0.6745
96         2.05
95.45      2.00
95         1.96
90         1.645

STANDARD ERRORS SUMMARY

• Means: µ = µ_s ± z(S.E.), S.E. = s/√n.
• Difference of means: d_p = d_s ± z(S.E.), S.E. = √(σ₁²/n₁ + σ₂²/n₂). See Gupta, pp. 12–12.5.
• Variance: σ² = s² ± z(S.E.), S.E. = s²√(2/n).
• Standard deviations: σ = s ± z(S.E.), S.E. = √(s²/2n).
• Difference of s.d.s: σ₁ − σ₂ = d_s ± z(S.E.), S.E. = √(σ₁²/2n₁ + σ₂²/2n₂). The best estimate of σ uses d.f. = n − 1.
• Proportion: π = p ± z(S.E.), S.E. = √(pq/n) (Bernoulli process).
• Difference of proportions: π₁ − π₂ = d_s ± z(S.E.), S.E. = √[(p₁q₁/n₁) + (p₂q₂/n₂)].

See estimator properties (estimation theory) overleaf.

**WEEK 10: ESTIMATOR CONSISTENCY AND SUFFICIENCY**

Consistency refers to the absolute difference between the statistic used to estimate the parameter and the parameter itself tending to zero as the sample size increases (precision). E.g. |µ_s − µ| → 0 as n → N. This is true for the proportion, the mean and the s.d. Give any other suggestions.

Sufficiency has to do with an estimator's formula using all the available data, e.g. the mean, E(x), and the s.d.; unlike the mode and the median, whose formulae use only part of the data or scores.

UNBIASED ESTIMATES

• A statistic is an unbiased estimate when the mean of its sampling distribution equals the corresponding population parameter.
• E.g. the mean and the median of the means of infinitely many samples equal those of the population.
• The correlation coefficient r_xy has this property when the population r_xy is zero.
• The sample variance s², with d.f. = n − 1, is unbiased.
• The conversion formula between the sample variances is s² = n/(n − 1)·σ². Check this with 1, 2, …, 8 and 9. (7.5, 6 2/3)
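The check at the end works out as follows (Python for illustration only; variable names are mine):

```python
# Check s^2 = n/(n-1) * sigma^2 with the values 1, 2, ..., 9.
data = list(range(1, 10))
n = len(data)
mu = sum(data) / n
sigma2 = sum((x - mu) ** 2 for x in data) / n     # population form: 6 2/3
s2 = sum((x - mu) ** 2 for x in data) / (n - 1)   # sample form (d.f. = n - 1): 7.5
print(s2, sigma2)
```

Indeed 7.5 = (9/8) × 6 2/3, as the conversion formula requires.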

ESTIMATOR RELATIVE EFFICIENCY

• When comparing two statistics whose sampling distributions have the same mean, the statistic with the smaller standard error (of estimation) is referred to as the efficient estimator, while the other is the inefficient estimator.
• Considering all possible statistics whose sampling distributions have the same mean, the one with the least standard error is referred to as the most efficient, or best, estimator.
• An efficient estimator gives an efficient estimate of the population parameter.
• E.g. which of these is the more efficient: the mean or the SD?

POINT ESTIMATE AND INTERVAL ESTIMATE

• A point estimate: an estimate of a population parameter given as a single number, e.g. the difference of two population means is 5.28, s ≈ σ, etc.
• An interval estimate: a population estimate given as an interval, e.g. the difference of the SDs of two different populations is 3.25 ± 0.03.

The interval estimate indicates the error of the estimate and is therefore preferable to a point estimate. This error, or precision, indicates its reliability.

CONFIDENCE LEVELS AND LIMITS

• Confidence interval or limits: these are the end values of a population interval estimate. They are sometimes referred to as fiducial limits. The equivalent of "confident of finding" is "expect to find".
• Confidence level: this indicates the level of certainty (%) used in the estimation. The higher the level, the greater the certainty. See the table of confidence intervals and z-values for details. The value of the statistic, from the available tables, that corresponds to the level used is called the confidence coefficient or critical value, e.g. z_c.

EXAMPLES .1

• A poll of 400 randomly selected registered voters in a given constituency shows that 186 intend to vote for candidate X. Construct a 95% C.I. for the proportion of all voters who will vote for candidate X. (0.416 to 0.514)
• During a sale by firm X, 150 units of a given product were sold at a mean price of $1,400 with an s.d. of 120. 200 units of the same product were sold by firm Y at a mean price of $1,200 with an s.d. of 80. Construct a 95% C.I. for the difference of the means of the two sales. (200 ± 22.17, i.e. 177.83 to 222.17)
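Both answers above follow from the standard errors summary; a sketch (Python for illustration only; variable names are mine):

```python
import math

# 95% CI (z = 1.96) for the proportion voting for candidate X.
p = 186 / 400
se_p = math.sqrt(p * (1 - p) / 400)
ci_low, ci_high = p - 1.96 * se_p, p + 1.96 * se_p
print(f"{ci_low:.3f} to {ci_high:.3f}")  # 0.416 to 0.514

# 95% CI for the difference of the two mean sale prices.
se_d = math.sqrt(120 ** 2 / 150 + 80 ** 2 / 200)  # sqrt(128)
half_width = 1.96 * se_d
print(f"200 ± {half_width:.2f}")  # 200 ± 22.17
```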

EXAMPLE. 2

• A random sample of 400 adults and 600 youths listened to a newly launched FM station. 100 adults and half of the youths preferred it. Compute a 95% C.I. for the difference in the proportions of all adults and all youths who prefer the new station. (0.19 to 0.31)
• For samples of size n drawn from a population with a mean of 220 and variance 50, construct a table showing the S.E. and (S.E.)² as n changes from 2 to 4, 8, 16, and ending at 32.
• Give an example of estimators that are: (i) unbiased and efficient, (ii) unbiased and inefficient, (iii) biased and inefficient. (µ, median, σ)
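The first exercise uses the difference-of-proportions standard error; a sketch (Python for illustration only; variable names are mine):

```python
import math

# 95% CI for the difference in proportions (youths minus adults).
p_adults, n_adults = 100 / 400, 400
p_youth, n_youth = 300 / 600, 600
diff = p_youth - p_adults
se = math.sqrt(p_adults * (1 - p_adults) / n_adults
               + p_youth * (1 - p_youth) / n_youth)
print(f"{diff - 1.96 * se:.2f} to {diff + 1.96 * se:.2f}")  # 0.19 to 0.31
```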

