
P(2X + 3 > 10) = P((2X + 3 − 15)/6 > (10 − 15)/6)
               = P(Z > −0.8333)
               = 1 − P(Z > 0.8333)
               = 1 − 0.20327
               = 0.79673

SAMPLING DISTRIBUTIONS

In order to draw inference about a certain phenomenon, sampling is a well accepted tool. The entire population cannot be studied, due to several reasons. In such a situation sampling is the only alternative. A properly drawn sample is much more useful in drawing reliable conclusions. Here, we draw a sample from a probability distribution rather than from a group of objects. Using a suitable sampling technique, a reliable sample is drawn.

Concept of a Sample from a Continuous Distribution
A random sample from a continuous probability distribution f(x, θ) is nothing but the values of independent and identically distributed random variables with the common probability density function f(x, θ).
Definition: If X1, X2, ..., Xn are independent and identically distributed random variables with p.d.f. f(x, θ), then we say that they form a sample from the population with p.d.f. f(x, θ).
For drawing inference, we use the numerical values of X1, X2, ..., Xn.
The joint p.d.f. of X1, X2, ..., Xn is
    Π_{i=1}^{n} f(xi)
STATISTIC AND PARAMETER

Using a random sample X1, X2, ..., Xn we draw conclusions about the unknown probability distribution. However, the probability distribution can be studied only if the parameter θ is known. In other words, the study of the probability distribution reduces to the study of the parameter θ. We use the sampled observations for this purpose. There are various ways of summarizing the sampled observations. The summarized quantity is called a statistic. We define it precisely as follows.
Definition: If X1, X2, ..., Xn is a random sample from a probability distribution f(x, θ), then T = T(X1, X2, ..., Xn), a function of the sample values which does not involve the unknown parameter θ, is called a statistic (or estimator).
Some typical statistics are given below:
(i) Sample mean:
    T = X̄ = (1/n) Σ_{i=1}^{n} Xi is a statistic.
(ii) Sample variance:
    T = T(X1, X2, ..., Xn) = (1/(n − 1)) Σ_{i=1}^{n} (Xi − X̄)² is a statistic.
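As a quick numerical sketch of these two statistics (the sample values below are hypothetical, chosen only for illustration; NumPy is assumed to be available):

```python
import numpy as np

# Hypothetical sample values x1, ..., xn (for illustration only).
x = np.array([4.2, 5.1, 3.8, 4.9, 5.4])

t_mean = x.mean()        # sample mean: (1/n) * sum of Xi
t_var = x.var(ddof=1)    # sample variance: (1/(n - 1)) * sum of (Xi - Xbar)^2

# Both quantities depend only on the sample, not on the unknown parameter,
# so both are statistics.
print(t_mean, t_var)
```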

7.14 CHI-SQUARE DISTRIBUTION

Introduction:
The chi-square (pronounced 'ki', as in 'sky' without the 's') distribution is one of the important distributions in Statistics. It is mainly applied in testing of hypotheses, for testing the independence of attributes, testing the goodness of fit of a model, etc.
The chi-square variable is denoted by χ²_n. Here n is the parameter of the distribution, also called the 'degrees of freedom' (d.f.). The χ²_n variate is defined as the sum of squares of n independent standard normal [N(0, 1)] variables.
Definition: If X1, X2, ..., Xn are n independent N(0, 1) variables, then
    Y = Σ_{i=1}^{n} Xi²
follows the chi-square distribution with n degrees of freedom (d.f.).
Notation: Y → χ²_n (n is a positive integer).
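The defining property can be checked empirically; the simulation below (a sketch assuming NumPy) forms sums of squares of n independent N(0, 1) variables and compares their sample mean and variance with the values n and 2n of the χ²_n distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

# Y = X1^2 + ... + Xn^2 with X1, ..., Xn independent standard normal variables.
y = (rng.standard_normal((reps, n)) ** 2).sum(axis=1)

# A chi-square variate with n d.f. has mean n and variance 2n.
print(y.mean(), y.var())   # approximately 5 and 10
```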

7.15 ADDITIVE PROPERTY


Statement: If Y1 and Y2 are independent χ² variates with n1 and n2 d.f. respectively, then Y1 + Y2 has a χ² distribution with (n1 + n2) degrees of freedom.

7.16 APPLICATIONS OF CHI-SQUARE DISTRIBUTION TO TESTS OF HYPOTHESIS

Meaning of Statistical Hypothesis
We are mainly interested in testing certain claims about population parameters such as the mean, variance or proportion. For example: a particular scooter gives an average of 50 km per litre; the proportion of unemployed persons is the same for two states; etc. These claims, stated in terms of population parameters or a statistical distribution, are called hypotheses.
Definition:
Hypothesis: It is a statement or assertion about the statistical distribution or about an unknown parameter of a statistical distribution. In other words, a hypothesis is a claim to be tested.

7.17 CONCEPTS OF NULL HYPOTHESIS AND ALTERNATIVE HYPOTHESIS

In each problem of test of significance, two hypotheses are to be set. These are set in such a way that if one is rejected, the other is to be accepted. These hypotheses are referred to as the null hypothesis and the alternative hypothesis.
Null Hypothesis: A hypothesis of "no difference" is called the null hypothesis, according to R. A. Fisher. The null hypothesis is denoted by H0.
For Example: H0 : µ = 100. Here the hypothesis states that there is no difference between the population mean and 100. H0 : µ1 = µ2. This hypothesis states that there is no difference between the two population means. While conducting the test, some difference will be observed between the sample value and the hypothesized value. Whether this difference is just due to the chance element is decided in the testing procedure.
Alternative Hypothesis: It is the hypothesis to be accepted in case the null hypothesis is rejected. In other words, a hypothesis complementary to the null hypothesis is called the alternative hypothesis. It is denoted by H1.
For Example: If H0 : µ1 = µ2, then the alternative hypothesis may be H1 : µ1 ≠ µ2 or H1 : µ1 < µ2 or H1 : µ1 > µ2.

7.18 ONE AND TWO TAILED HYPOTHESIS

By considering the nature of the hypothesis, hypotheses are classified as one sided or two sided.
Hypotheses of the type H1 : µ > µ0, H1 : P < 0.5, H1 : µ1 < µ2, H1 : σ1 > σ2 etc. are called one sided. On the other hand, hypotheses of the type H1 : P1 ≠ 0.5, H1 : σ1 ≠ σ2, H1 : µ ≠ µ0 etc. are called two sided.
In this text we will take the null hypothesis to be a hypothesis of equality and the alternative hypothesis to be two sided. The choice of a one sided hypothesis as the null hypothesis is beyond the scope of this book.

7.19 TYPE I AND TYPE II ERRORS

Since the decision of acceptance or rejection of H0 is based on sampling, it is subject to two kinds of errors. For instance, in the inspection of a lot of manufactured items, the inspector will choose a sample of suitable size and accordingly take the decision whether to accept or reject the lot. In this process, two errors are possible, viz. rejection of a good lot and acceptance of a bad lot. In testing of hypothesis these errors are called type I and type II errors.
Type I error: Rejecting H0 when it is true.
Type II error: Accepting H0 when it is false.
These errors can be put in tabular form to remember easily:

Actual Situation | Reject H0        | Accept H0
H0 is true       | Type I error     | Correct decision
H0 is false      | Correct decision | Type II error
7.20 CRITICAL REGION

Let X1, X2, ..., Xn be a random sample taken for testing H0. The set of values of (x1, x2, ..., xn) for which H0 is rejected is called the critical region or rejection region. Many a time, the critical region is expressed with the help of a test statistic, e.g. X̄ ≥ c, where c is a constant. The critical region is denoted by W. The set of all sample observations can be partitioned into two subsets: the critical region (W) and the acceptance region (Wᶜ), as shown in Fig. 7.25.
[Fig. 7.25: Acceptance region and critical region (W)]
7.21 TEST OF HYPOTHESIS

A rule which leads to the decision of acceptance of H0 or rejection of H0 on the basis of the observations in a random sample is called a test of hypothesis.
Statistical inference is that branch of Statistics which is concerned with using probability concepts to deal with uncertainty in decision making. The field of statistical inference has seen fruitful development since the latter half of the 19th century. It refers to the process of selecting and using a sample statistic to draw inference about a population parameter on the basis of a sub-set of the population, namely the sample drawn from it. Statistical inference treats two different classes of problems: (i) Hypothesis testing and (ii) Estimation.
Hypothesis testing begins with an assumption, called a hypothesis, which is made about the population parameter. A hypothesis is a supposition made as a basis for reasoning.
In articles 7.14 to 7.21 we discussed various terms like statistical hypothesis, null hypothesis and alternative hypothesis, critical region, level of significance, etc.
Now we will see the method of testing whether the population mean (µ) equals a specified value (µ0).
Testing Population Mean (µ) Equal to Specified Value (µ0): The test statistic is
    U = (X̄ − µ0) / (σ/√n)
Under H0, U → N(0, 1).
Critical Region: |U| > 1.96 at 5% level of significance.
If the calculated value of U is more than 1.96 or less than −1.96, H0 is rejected; otherwise it is accepted.
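The procedure can be written as a small routine; the sketch below (assuming SciPy) uses the statistic U = (X̄ − µ0)/(σ/√n) and the two-sided 5% critical value, and the numbers passed in the example call are hypothetical.

```python
import math
from scipy.stats import norm

def test_mean(xbar, mu0, sigma, n, alpha=0.05):
    """Two-sided test of H0: mu = mu0 when sigma is known."""
    u = (xbar - mu0) / (sigma / math.sqrt(n))
    crit = norm.ppf(1 - alpha / 2)     # 1.96 when alpha = 0.05
    return u, crit, abs(u) > crit      # last value True means: reject H0

# Hypothetical figures, only to show the call.
print(test_mean(xbar=52.0, mu0=50.0, sigma=8.0, n=64))
```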
7.22 ONE SIDED AND TWO SIDED TESTS

The tests used for testing the null hypothesis are called one sided or two sided tests according as the alternative hypothesis is one sided or two sided.
7.23 TEST STATISTIC

A function of the sample observations which is used to test the null hypothesis H0 is called a test statistic. The distribution of the test statistic is completely known under H0. Hence, it can be used to test H0.
7.24 LEVEL OF SIGNIFICANCE

The probability of rejecting H0 when it is true is called the level of significance. Thus, it is the probability of committing a type I error. It is denoted by α.
The level of significance can be interpreted as the proportion of cases in which H0 is rejected though it is true.
If we try to minimize the level of significance, the probability of a type II error increases. So the level of significance cannot be made zero. However, we can fix it in advance as 0.05 (i.e. 5%) or 0.01 (i.e. 1%). In most cases, it is taken as 5%.
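The link between the 5% level of significance and the critical value 1.96 used above can be verified with the standard normal distribution (a short check assuming SciPy):

```python
from scipy.stats import norm

alpha = 0.05
c = norm.ppf(1 - alpha / 2)   # two-sided critical value: P(|U| > c) = alpha
print(c)                      # about 1.96
print(2 * norm.sf(c))         # doubling the upper-tail probability recovers alpha = 0.05
```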

Test for Goodness of Fit (χ² Distribution):
For a given data set (frequency distribution), we try to fit some probability distribution. Since there are several probability distributions, which distribution fits properly may be a question of interest. In such cases, we want to test the appropriateness of the fit. Hence we desire to test H0: The fitting of the probability distribution to the given data is proper (good). The test based on the χ² distribution used to test this H0 is called the χ² test of goodness of fit.
In this case, we compare the expected and observed frequencies. Thus we can take H0: There is no significant difference between the observed and the (theoretical) expected frequencies. The test is carried out as follows:
Suppose o1, o2, ..., oi, ..., ok is a set of observed frequencies and e1, e2, ..., ei, ..., ek are the corresponding expected frequencies obtained under H0, with
    Σ_{i=1}^{k} oi = N = Σ_{i=1}^{k} ei
Suppose p = number of parameters estimated for fitting the probability distribution.
If H0 is true, then the statistic
    χ²_{k−p−1} = Σ_{i=1}^{k} (oi − ei)²/ei = Σ_{i=1}^{k} (oi²/ei) − N
has a χ² distribution with (k − p − 1) degrees of freedom. The degrees of freedom is a parameter of the χ² distribution. In this case, the critical region at level of significance α is
    χ²_{k−p−1} ≥ χ²_{k−p−1; α}
where χ²_{k−p−1; α} is the table value corresponding to degrees of freedom k − p − 1 and level of significance (l.o.s.) α. We refer to χ²_{k−p−1; α} as the critical value.
It is shown by the shaded region in Fig. 7.26.
Normally, the values of α are taken as 0.05 or 0.01.
Thus, we reject H0 at l.o.s. α if
    χ²_{k−p−1} ≥ χ²_{k−p−1; α}
[Fig. 7.26: Density curve of the χ² distribution; the shaded rejection region (C.R.) for H0 lies to the right of χ²_{k−p−1; α}]
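The steps above (the statistic, the k − p − 1 degrees of freedom and the table value) can be collected into one helper; this is only a sketch of the procedure, with a function name of our own choosing and SciPy assumed.

```python
import numpy as np
from scipy.stats import chi2

def chi_square_gof(observed, expected, p=0, alpha=0.05):
    """Chi-square goodness-of-fit test; returns (statistic, critical value, reject H0?)."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    stat = ((o - e) ** 2 / e).sum()   # sum of (oi - ei)^2 / ei
    df = len(o) - p - 1               # k - p - 1 degrees of freedom
    crit = chi2.ppf(1 - alpha, df)    # table value at level of significance alpha
    return stat, crit, stat > crit
```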
Note:
1. We can apply this test if the expected frequencies are greater than or equal to 5 (i.e. ei ≥ 5) and the total of the cell frequencies is sufficiently large (greater than 50).
2. When the expected frequency of a class is less than 5, the class is merged into a neighbouring class along with its observed and expected frequencies until the total of the expected frequencies becomes ≥ 5. This procedure is called 'pooling the classes' (a small code sketch of this pooling is given after these notes). In this case, k is the number of class frequencies after pooling.
3. It is obvious that if no parameters are estimated while fitting the probability distribution or obtaining the expected frequencies, the value of p is zero.
4. This test is not applicable for testing the goodness of fit of a straight line or of curves such as a second degree curve, an exponential curve, etc.
Remark:
Yates' Correction: If, in a 2 × 2 contingency table, any cell frequency is less than 5, then the test statistic χ² is corrected in a specific way. This correction is due to Yates and hence is known as Yates' correction. It is beyond the scope of this book.
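A minimal sketch of the pooling mentioned in Note 2 is given below; the exact merging order is an assumption, since the text only says that a class with a small expected frequency is merged into a neighbouring class until the expected frequency reaches 5.

```python
def pool_classes(observed, expected, minimum=5.0):
    """Merge any class whose expected frequency is below `minimum` into a neighbouring class."""
    o, e = list(observed), list(expected)
    i = 0
    while i < len(e) and len(e) > 1:
        if e[i] < minimum:
            j = i + 1 if i + 1 < len(e) else i - 1   # neighbouring class to merge into
            o[j] += o[i]
            e[j] += e[i]
            del o[i], e[i]
            if j < i:          # merged backwards, so re-check the merged class
                i = j
        else:
            i += 1
    return o, e

# For the Poisson data of Ex. 2 below, pooling [63, 28, 6, 2, 1, 0] against
# [60.65, 30.33, 7.58, 1.26, 0.16, 0.02] gives [63, 28, 9] and [60.65, 30.33, 9.02].
```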
ILLUSTRATIONS
Ex. 1: A nationalized bank utilizes four teller windows to render fast service to the customers. On a particular day, 800 customers were observed. They were given service at the different windows as follows:

Window Number | Number of Customers
1 | 150
2 | 250
3 | 170
4 | 230

Test whether the customers are uniformly distributed over the windows.
Sol.: Here we want to test H0: Customers are uniformly distributed over the windows, i.e. the numbers of customers at all windows are equal, against H1: They are not equal at all windows.
Under H0, the expected frequencies are:

Window Number | Expected Number of Customers (ei)
1 | 200
2 | 200
3 | 200
4 | 200

The test statistic is
    χ²_{k−p−1} = Σ_{i=1}^{k} (oi − ei)²/ei = (−50)²/200 + (50)²/200 + (−30)²/200 + (30)²/200
Here the number of parameters estimated = p = 0, k = 4.
    χ²_3 = 34 (Calculated value)
    χ²_3 = 34 > χ²_{3; 0.05} = 7.815 (Critical or table value)
We reject H0 at 5% l.o.s.
Conclusion: The customers in the nationalized bank may not be uniformly distributed over the different windows.
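The same numbers can be reproduced with SciPy's chisquare routine (with p = 0 the degrees of freedom are k − 1 = 3); a quick check:

```python
from scipy.stats import chisquare, chi2

observed = [150, 250, 170, 230]
expected = [200, 200, 200, 200]   # 800 customers spread uniformly over 4 windows

stat, pval = chisquare(observed, f_exp=expected)
print(stat)                        # 34.0, the calculated value
print(chi2.ppf(0.95, df=3))        # 7.815, the table value -> reject H0
```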
Ex. 2: One hundred samples were drawn from a production process, one every 5 hours. The number of defectives in these samples were noted. A Poisson distribution, with the parameter m estimated from the data, was fitted to these data. The results obtained are as follows:

Number of Defectives | Number of Samples (observed) | Expected Number of Samples
0 | 63 | 60.65
1 | 28 | 30.33
2 | 6 | 7.58
3 | 2 | 1.26
4 | 1 | 0.16
5 and above | 0 | 0.02

Test the goodness of fit of the Poisson distribution in the above situation. [Use 5% level of significance]
Sol.: We want to test
H0: The fitting of the Poisson distribution is good (proper), against
H1: The fitting of the Poisson distribution is not proper.

Here we pool the expected frequencies until their sum becomes ≥ 5, and also pool the corresponding observed frequencies. The frequencies can then be written as:

Observed Frequencies (oi) | Expected Frequencies (ei)
63 | 60.65
28 | 30.33
9 | 9.02

We use the test statistic
    χ²_{k−p−1} = Σ_{i=1}^{k} (oi²/ei) − N
Here the number of parameters estimated = p = 1, N = 100, k = 3.
    χ²_1 = 100.27009 − 100
    χ²_1 = 0.27009 [Calculated value]
    χ²_1 = 0.27009 < χ²_{1; 0.05} = 3.841 [Critical or Table value]
Hence we accept H0 at 5% l.o.s.
Conclusion: The fitting of the Poisson distribution may be good for the given data.
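A quick check of this result with SciPy after pooling; `ddof=1` accounts for the one estimated Poisson parameter m, so the reference distribution has k − p − 1 = 1 degree of freedom.

```python
from scipy.stats import chisquare, chi2

obs = [63, 28, 9]             # observed frequencies after pooling
exp = [60.65, 30.33, 9.02]    # expected frequencies after pooling (they also sum to 100)

stat, pval = chisquare(obs, f_exp=exp, ddof=1)   # ddof=1: one estimated parameter (m)
print(stat)                                       # about 0.270
print(chi2.ppf(0.95, df=1))                       # 3.841 -> accept H0
```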
Ex. 3: Among 64 offsprings of a certain cross between guinea pigs, 34 were red, 10 were black and 20 were white. According to a genetic model these numbers should be in the ratio 9 : 3 : 4. Are the data consistent with the model at the 5% level?
Sol.: Here H0: The offsprings red, black and white in colour are in the ratio 9 : 3 : 4.
In this problem, N = 64. Hence the observed and expected frequencies are as follows:

Observed Frequencies (oi) | 34 | 10 | 20
Expected Frequencies (ei) | (9/16) × 64 = 36 | (3/16) × 64 = 12 | (4/16) × 64 = 16

To test H0, the test statistic is
    χ²_{k−p−1} = Σ_{i=1}^{k} (oi − ei)²/ei
Here p = 0 and k = 3.
    χ²_2 = 1.444444 (Calculated value)
    χ²_{2; 0.05} = 5.991 (Critical or Table value)
    χ²_2 = 1.444444 < χ²_{2; 0.05} = 5.991
We accept H0 at 5% l.o.s.
Conclusion: The data are consistent with the genetic model that the offsprings red, black and white in colour are in the ratio 9 : 3 : 4.
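Here the expected frequencies follow directly from the ratio 9 : 3 : 4; a short check (assuming SciPy) of the calculated and table values:

```python
from scipy.stats import chi2

observed = [34, 10, 20]
ratio = [9, 3, 4]
N = sum(observed)                                # 64 offsprings in total

expected = [r * N / sum(ratio) for r in ratio]   # [36.0, 12.0, 16.0]
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(stat)                    # about 1.444
print(chi2.ppf(0.95, df=2))    # 5.991 -> accept H0
```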
Ex. 4: The table below gives the number of books issued from a certain library on the various days of a week.

Days | No. of Books Issued (oi) | (oi − ei)²
Mon. | 120 | 0
Wed. | 130 | 100
Thr. | 110 | 100
Fri. | 115 | 25
Sat. | 135 | 225
Sun. | 110 | 100

Test at 5% l.o.s. whether the issuing of books is day dependent.
Sol.: H0: The issuing of the books is not dependent on the day of the week.
The total number of books issued over the six days is 720, so under H0 the expected number for each day is ei = 720/6 = 120.
    χ²_{k−p−1} = χ²_{6−0−1} = χ²_5
    χ²_5 = Σ (oi − ei)²/ei = 550/120 = 4.5833
    χ²_{5; 0.05} = 11.07
    χ²_5 < χ²_{5; 0.05}
Accept H0.
Conclusion: The issuing of books is independent of the day.
Ex. 5: In an experiment on pea breeding, the following frequencies of seeds were obtained:

Round and Green | Wrinkled and Green | Round and Yellow | Wrinkled and Yellow | Total
222 | 120 | 32 | 150 | 524

Theory predicts that the frequencies should be in the proportion 8 : 2 : 2 : 1. Examine the correspondence between theory and experiment.
Sol.: From the given data, the corresponding expected frequencies are:

Expected Frequencies (ei) | (8/13) × 524 = 323 | (2/13) × 524 = 81 | (2/13) × 524 = 81 | (1/13) × 524 = 40

    χ²_{k−p−1} = χ²_3 = Σ (oi − ei)²/ei
    χ²_3 = (222 − 323)²/323 + (120 − 81)²/81 + (32 − 81)²/81 + (150 − 40)²/40
    χ²_3 = 31.5820 + 18.7778 + 29.64198 + 302.5
    χ²_3 = 382.502
    χ²_{3; 0.05} = 7.815
Since the calculated value of χ² is much more than χ²_{3; 0.05}, there is a very low degree of agreement between theory and experiment.

Ex. 6: A set of five similar coins is tossed 210 times and the result is:

No. of Heads | 0 | 1 | 2 | 3 | 4 | 5
Frequency | 2 | 5 | 20 | 60 | 100 | 23

Test the hypothesis that the data follow a binomial distribution. (Dec. 2009)
Sol.: Here k − p − 1 = 6 − 0 − 1 = 5.
    p = Probability of getting a head = 1/2
    q = Probability of getting a tail = 1/2
Hence the theoretical frequencies of getting 0, 1, 2, 3, 4, 5 heads are the successive terms of the binomial expansion 210 (q + p)⁵:
    210 [q⁵ + 5q⁴p + 10q³p² + 10q²p³ + 5qp⁴ + p⁵]
    = 210 [1/32 + 5/32 + 10/32 + 10/32 + 5/32 + 1/32]
    = 7 + 33 + 66 + 66 + 33 + 7 (after rounding)
The theoretical frequencies are 7, 33, 66, 66, 33, 7.
Hence,
    χ²_5 = (2 − 7)²/7 + (5 − 33)²/33 + (20 − 66)²/66 + (60 − 66)²/66 + (100 − 33)²/33 + (23 − 7)²/7
    χ²_5 = 3.57143 + 23.7576 + 32.06061 + 0.5455 + 136.0303 + 36.5714
    χ²_5 = 232.53684
    χ²_{5; 0.05} = 11.070
Since the calculated value of χ² is much greater than χ²_{5; 0.05}, the hypothesis that the data follow the binomial distribution is rejected.
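The binomial expected frequencies can be generated with scipy.stats.binom; rounding them to whole numbers, as the solution does, reproduces 7, 33, 66, 66, 33, 7 and the statistic 232.54 (with unrounded expected frequencies the statistic comes out slightly different, near 238, but the conclusion is the same).

```python
from scipy.stats import binom, chi2

observed = [2, 5, 20, 60, 100, 23]
n_coins, N = 5, sum(observed)      # 5 coins tossed 210 times

# Expected frequencies 210 * C(5, k) * (1/2)^5, rounded as in the solution above.
expected = [round(N * binom.pmf(k, n_coins, 0.5)) for k in range(6)]
print(expected)                    # [7, 33, 66, 66, 33, 7]

stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(stat)                        # about 232.54
print(chi2.ppf(0.95, df=5))        # 11.07 -> reject H0
```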
Ex. 7: The figures given below are (a) the theoretical frequencies of a distribution and (b) the frequencies of the normal distribution having the same mean, standard deviation and total frequency as in (a).

(a) | 1 | 5 | 20 | 28 | 42 | 22 | 15 | 5 | 2
(b) | 1 | 6 | 18 | 25 | 40 | 25 | 18 | 6 | 1

Apply the χ² test of goodness of fit.
Sol.: Since the observed and expected frequencies are less than 10 at the beginning and at the end of the series, we pool those classes and then apply the χ² test.

oi | ei | (oi − ei)² | (oi − ei)²/ei
1 + 5 = 6 | 1 + 6 = 7 | 1 | 0.1429
20 | 18 | 4 | 0.2222
28 | 25 | 9 | 0.36
42 | 40 | 4 | 0.1
22 | 25 | 9 | 0.36
15 | 18 | 9 | 0.5
5 + 2 = 7 | 6 + 1 = 7 | 0 | 0

    χ²_6 = Σ (oi − ei)²/ei = 1.6851 (Calculated value)
We take the table value of χ²_6 because we have 9 classes in total and, after pooling, the number of classes is 7; the degrees of freedom for this test is therefore 7 − 1 = 6.
    χ²_{6; 0.05} = 12.592
Now we have
    χ²_6 < χ²_{6; 0.05}
Accept H0.
Conclusion: The fit is good.
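A quick check of this calculation with the already-pooled classes (assuming SciPy for the table value):

```python
from scipy.stats import chi2

# Classes after pooling: the first two and the last two entries of each series merged.
obs = [6, 20, 28, 42, 22, 15, 7]
exp = [7, 18, 25, 40, 25, 18, 7]

stat = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
print(stat)                    # about 1.685
print(chi2.ppf(0.95, df=6))    # 12.592 -> accept H0, the fit is good
```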

Ex. 8: The demand for a particular spare part in a factory was found to vary from day to day. In a sample study the following information was obtained:

Days | Mon. | Tues. | Wed. | Thurs. | Fri. | Sat.
No. of parts demanded | 1124 | 1125 | 1110 | 1120 | 1126 | 1115

Test the hypothesis that the number of parts demanded does not depend on the day of the week.
Sol.:
H0: The number of parts demanded does not depend on the day of the week.
vs H1: The number of parts demanded depends on the day of the week.
Total number of parts demanded during the six days = 6720.
Expected number of parts to be demanded on each day of the week = 6720/6 = 1120.

Days | oi | ei | (oi − ei)² | (oi − ei)²/ei
Monday | 1124 | 1120 | 16 | 0.01429
Tuesday | 1125 | 1120 | 25 | 0.0223
Wednesday | 1110 | 1120 | 100 | 0.0893
Thursday | 1120 | 1120 | 0 | 0
Friday | 1126 | 1120 | 36 | 0.0321
Saturday | 1115 | 1120 | 25 | 0.0223

Now,
    χ²_5 = Σ (oi − ei)²/ei = 0.1803
The critical value at 5% l.o.s. is χ²_{5; 0.05} = 11.07.
    χ²_5 < χ²_{5; 0.05}
Accept H0.
Conclusion: The number of parts demanded does not depend on the day of the week.
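This is again a test of uniformity, so scipy.stats.chisquare with its default expected frequencies (the mean of the observed counts, 1120 per day) reproduces the result; a quick check:

```python
from scipy.stats import chisquare, chi2

observed = [1124, 1125, 1110, 1120, 1126, 1115]

stat, pval = chisquare(observed)   # default expected frequency: 6720 / 6 = 1120 per day
print(stat)                        # about 0.180
print(chi2.ppf(0.95, df=5))        # 11.07 -> accept H0
```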
STUDENT'S t-DISTRIBUTION
