INTRODUCTION TO BIOSTATISTICS

SECOND EDITION

Robert R. Sokal and F. James Rohlf
State University of New York at Stony Brook

DOVER PUBLICATIONS, INC.
Mineola, New York

Copyright © 1969, 1973, 1981, 1987 by Robert R. Sokal and F. James Rohlf
All rights reserved.

Bibliographical Note

This Dover edition, first published in 2009, is an unabridged republication of the work originally published in 1969 by W. H. Freeman and Company, New York. The authors have prepared a new Preface for this edition.

Library of Congress Cataloging-in-Publication Data

Sokal, Robert R.
  Introduction to Biostatistics / Robert R. Sokal and F. James Rohlf. Dover ed.
    p. cm.
  Originally published: 2nd ed. New York : W.H. Freeman, 1969.
  Includes bibliographical references and index.
  ISBN-13: 978-0-486-46961-4
  ISBN-10: 0-486-46961-1
  1. Biometry. I. Rohlf, F. James, 1936- II. Title.

QH323.5.S633 2009
570.1'5195 dc22
2008048052

Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501

to Julie and Janice

Contents

PREFACE TO THE DOVER EDITION
PREFACE

1. INTRODUCTION
   1.1 Some definitions
   1.2 The development of biostatistics
   1.3 The statistical frame of mind

2. DATA IN BIOSTATISTICS
   2.1 Samples and populations
   2.2 Variables in biostatistics
   2.3 Accuracy and precision of data
   2.4 Derived variables
   2.5 Frequency distributions
   2.6 The handling of data

3. DESCRIPTIVE STATISTICS
   3.1 The arithmetic mean
   3.2 Other means
   3.3 The median
   3.4 The mode
   3.5 The range
   3.6 The standard deviation
   3.7 Sample statistics and parameters
   3.8 Practical methods for computing mean and standard deviation
   3.9 The coefficient of variation

4. INTRODUCTION TO PROBABILITY DISTRIBUTIONS: THE BINOMIAL AND POISSON DISTRIBUTIONS
   4.1 Probability, random sampling, and hypothesis testing
   4.2 The binomial distribution
   4.3 The Poisson distribution

5. THE NORMAL PROBABILITY DISTRIBUTION
   5.1 Frequency distributions of continuous variables
   5.2 Derivation of the normal distribution
   5.3 Properties of the normal distribution
   5.4 Applications of the normal distribution
   5.5 Departures from normality: Graphic methods

6. ESTIMATION AND HYPOTHESIS TESTING
   6.1 Distribution and variance of means
   6.2 Distribution and variance of other statistics
   6.3 Introduction to confidence limits
   6.4 Student's t distribution
   6.5 Confidence limits based on sample statistics
   6.6 The chi-square distribution
   6.7 Confidence limits for variances
   6.8 Introduction to hypothesis testing
   6.9 Tests of simple hypotheses employing the t distribution
   6.10 Testing the hypothesis H₀: σ² = σ₀²

7. INTRODUCTION TO ANALYSIS OF VARIANCE
   7.1 The variances of samples and their means
   7.2 The F distribution
   7.3 The hypothesis H₀: σ₁² = σ₂²
   7.4 Heterogeneity among sample means
   7.5 Partitioning the total sum of squares and degrees of freedom
   7.6 Model I anova
   7.7 Model II anova

8. SINGLE-CLASSIFICATION ANALYSIS OF VARIANCE
   8.1 Computational formulas
   8.2 Equal n
   8.3 Unequal n
   8.4 Two groups
   8.5 Comparisons among means: Planned comparisons
   8.6 Comparisons among means: Unplanned comparisons

9. TWO-WAY ANALYSIS OF VARIANCE
   9.1 Two-way anova with replication
   9.2 Two-way anova: Significance testing
   9.3 Two-way anova without replication

10. ASSUMPTIONS OF ANALYSIS OF VARIANCE
   10.1 The assumptions of anova
   10.2 Transformations
   10.3 Nonparametric methods in lieu of anova

11. REGRESSION
   11.1 Introduction to regression
   11.2 Models in regression
   11.3 The linear regression equation
   11.4 More than one value of Y for each value of X
   11.5 Tests of significance in regression
   11.6 The uses of regression
   11.7 Residuals and transformations in regression
   11.8 A nonparametric test for regression

12. CORRELATION
   12.1 Correlation and regression
   12.2 The product-moment correlation coefficient
   12.3 Significance tests in correlation
   12.4 Applications of correlation
   12.5 Kendall's coefficient of rank correlation

13. ANALYSIS OF FREQUENCIES
   13.1 Tests for goodness of fit: Introduction
   13.2 Single-classification goodness of fit tests
   13.3 Tests of independence: Two-way tables

APPENDIXES
   A1 Mathematical appendix
   A2 Statistical tables

BIBLIOGRAPHY
INDEX

Preface to the Dover Edition

We are pleased and honored to see the re-issue of the second edition of our Introduction to Biostatistics by Dover Publications. On reviewing the copy, we find there is little in it that needs changing for an introductory textbook of biostatistics for an advanced undergraduate or beginning graduate student. The book furnishes an introduction to most of the statistical topics such students are likely to encounter in their courses and readings in the biological and biomedical sciences.

The reader may wonder what we would change if we were to write this book anew. Because of the vast changes that have taken place in modalities of computation in the last twenty years, we would deemphasize computational formulas that were designed for pre-computer desk calculators (an age before spreadsheets and comprehensive statistical computer programs) and refocus the reader's attention to structural formulas that not only explain the nature of a given statistic, but are also less prone to rounding error in calculations performed by computers. In this spirit, we would omit the equation (3.8) on page 39 and draw the readers' attention to equation (3.7) instead. Similarly, we would use structural formulas in Boxes 3.1 and 3.2 on pages 41 and 42, respectively, on page 161 and in Box 8.1 on pages 163/164, as well as in Box 12.1 on pages 278/279.

Secondly, we would put more emphasis on permutation tests and resampling methods. Permutation tests and bootstrap estimates are now quite practical. We have found this approach to be not only easier for students to understand but in many cases preferable to the traditional parametric methods that are emphasized in this book.

Robert R. Sokal
F. James Rohlf
November 2008
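The contrast the preface draws between "structural" and "computational" formulas can be illustrated numerically. The sketch below does not reproduce the book's own equations (3.7) and (3.8); it uses the standard two-pass (structural) and one-pass (computational) formulas for the sample variance, which is the kind of distinction the authors describe, to show the rounding error they mention under ordinary double-precision arithmetic.

```python
# Illustration (not from the book): the one-pass "computational" formula
#   s^2 = (sum(x^2) - (sum(x))^2 / n) / (n - 1)
# subtracts two nearly equal large numbers and can lose all its precision,
# while the "structural" formula
#   s^2 = sum((x - mean)^2) / (n - 1)
# remains accurate.

def variance_structural(xs):
    """Two-pass sample variance from deviations about the mean."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def variance_computational(xs):
    """One-pass desk-calculator formula, prone to catastrophic cancellation."""
    n = len(xs)
    return (sum(x * x for x in xs) - sum(xs) ** 2 / n) / (n - 1)

# Data with a large offset: the true sample variance is 0.01 regardless of offset.
data = [1e8 + d for d in (0.1, 0.2, 0.3)]
print(variance_structural(data))     # close to 0.01
print(variance_computational(data))  # ruined by cancellation
```

With small numbers the two formulas agree; the offset of 10⁸ is what exposes the cancellation in the one-pass version.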

Preface

The favorable reception that the first edition of this book received from teachers and students encouraged us to prepare a second edition. In this revised edition, we provide a thorough foundation in biological statistics for the undergraduate student who has a minimal knowledge of mathematics. We intend Introduction to Biostatistics to be used in comprehensive biostatistics courses, but it can also be adapted for short courses in medical and professional schools; thus, we include examples from the health-related sciences.

We have extracted most of this text from the more-inclusive second edition of our own Biometry. We believe that the proven pedagogic features of that book, such as its informal style, will be valuable here.

We have modified some of the features from Biometry; thus, in Introduction to Biostatistics we provide detailed outlines for statistical computations but we place less emphasis on the computations themselves. Why? Students in many undergraduate courses are not motivated to and have few opportunities to perform lengthy computations with biological research material; also, such computations can easily be made on electronic calculators and microcomputers. Thus, we rely on the course instructor to advise students on the best computational procedures to follow.

We present material in a sequence that progresses from descriptive statistics to fundamental distributions and the testing of elementary statistical hypotheses; we then proceed immediately to the analysis of variance and the familiar t test (which is treated as a special case of the analysis of variance and relegated to several sections of the book). We do this deliberately for two reasons: (1) since today's biologists all need a thorough foundation in the analysis of variance, students should become acquainted with the subject early in the course; and (2) if analysis of variance is understood early, the need to use the t distribution is reduced. (One would still want to use it for the setting of confidence limits and in a few other special situations.) All t tests can be carried out directly as analyses of variance, and the amount of computation of these analyses of variance is generally equivalent to that of t tests.

We call special, double-numbered tables "boxes." They can be used as convenient guides for computation because they show the computational methods for solving various types of biostatistical problems. They usually contain all the steps necessary to solve a problem, from the initial setup to the final result. Thus, students familiar with material in the book can use them as quick summary reminders of a technique.

We found in teaching this course that we wanted students to be able to refer to the material now in these boxes. We discovered that we could not cover even half as much of our subject if we had to put this material on the blackboard during the lecture, and so we made up and distributed boxes and asked students to refer to them during the lecture. Instructors who use this book may wish to use the boxes in a similar manner.

We emphasize the practical applications of statistics to biology in this book; thus, we deliberately keep discussions of statistical theory to a minimum. Derivations are given for some formulas, but these are consigned to Appendix A1, where they should be studied and reworked by the student. Statistical tables to which the reader can refer when working through the methods discussed in this book are found in Appendix A2.

This larger second edition includes the Kolmogorov-Smirnov two-sample test, nonparametric regression, stem-and-leaf diagrams, hanging histograms, and the Bonferroni method of multiple comparisons. We have rewritten the chapter on the analysis of frequencies in terms of the G statistic rather than χ², because the former has been shown to have more desirable statistical properties. Also, because of the availability of logarithm functions on calculators, the computation of the G statistic is now easier than that of the earlier chi-square test. Thus, we reorient the chapter to emphasize log-likelihood-ratio tests. We have also added new homework exercises.

We are grateful to K. R. Gabriel, R. C. Lewontin, and M. Kabay for their extensive comments on the second edition of Biometry and to M. D. Morgan, E. Russek-Cohen, and M. Singh for comments on an early draft of this book. We also appreciate the work of our secretaries, Resa Chapey and Cheryl Daly, with preparing the manuscripts, and of Donna DiGiovanni, Patricia Rohlf, and Barbara Thomson with proofreading.

Robert R. Sokal
F. James Rohlf
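The G statistic the preface refers to is the log-likelihood-ratio statistic for frequency data, G = 2 Σ f ln(f/f̂), computed from observed frequencies f and their expected values f̂. As a rough sketch (the book's worked examples and corrections in Chapter 13 are not reproduced here, and the counts below are invented for illustration), a goodness-of-fit G can be computed and set beside the familiar chi-square statistic:

```python
from math import log

def g_statistic(observed, expected):
    """Log-likelihood-ratio statistic G = 2 * sum(f * ln(f / fhat))."""
    return 2 * sum(f * log(f / fhat) for f, fhat in zip(observed, expected))

def chi_square(observed, expected):
    """Traditional chi-square statistic: sum((f - fhat)^2 / fhat)."""
    return sum((f - fhat) ** 2 / fhat for f, fhat in zip(observed, expected))

# Hypothetical counts in three classes, with equal expected frequencies.
obs = [10, 20, 30]
exp = [20, 20, 20]
print(round(g_statistic(obs, exp), 4))  # 10.4649
print(round(chi_square(obs, exp), 4))   # 10.0
```

Both statistics are referred to the same chi-square distribution (here with 2 degrees of freedom); the point of the comparison is only that G needs a logarithm where chi-square needs squared differences, which is why calculator log functions made G convenient.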

INTRODUCTION TO BIOSTATISTICS

CHAPTER 1

Introduction

This chapter sets the stage for your study of biostatistics. In Section 1.1 we define the field itself. We then cast a necessarily brief glance at its historical development in Section 1.2. Then in Section 1.3 we conclude the chapter with a discussion of the attitudes that the person trained in statistics brings to biological research.

1.1 Some definitions

We shall define biostatistics as the application of statistical methods to the solution of biological problems. The biological problems of this definition are those arising in the basic biological sciences as well as in such applied areas as the health-related sciences and the agricultural sciences. Biostatistics is also called biological statistics or biometry.

The definition of biostatistics leaves us somewhat up in the air: "statistics" has not been defined. Statistics is a science well known by name even to the layman. The number of definitions you can find for it is limited only by the number of books you wish to consult. We might define statistics in its modern sense as the scientific study of numerical data based on natural phenomena. All parts of this definition are important and deserve emphasis:

Scientific study: Statistics must meet the commonly accepted criteria of validity of scientific evidence. We must always be objective in presentation and evaluation of data and adhere to the general ethical code of scientific methodology, or we may find that the old saying that "figures never lie, only statisticians do" applies to us.

Data: Statistics generally deals with populations or groups of individuals; hence it deals with quantities of information, not with a single datum. Thus, the measurement of a single animal or the response from a single biochemical test will generally not be of interest.

Numerical: Unless data of a study can be quantified in one way or another, they will not be amenable to statistical analysis. Numerical data can be measurements (the length or width of a structure or the amount of a chemical in a body fluid, for example) or counts (such as the number of bristles or teeth).

Natural phenomena: We use this term in a wide sense to mean not only all those events in animate and inanimate nature that take place outside the control of human beings, but also those evoked by scientists and partly under their control, as in experiments. Different biologists will concern themselves with different levels of natural phenomena; other kinds of scientists, with yet different ones. But all would agree that the chirping of crickets, the number of peas in a pod, and the age of a woman at menopause are natural phenomena. The heartbeat of rats in response to adrenalin, the mutation rate in maize after irradiation, or the incidence or morbidity in patients treated with a vaccine may still be considered natural, even though scientists have interfered with the phenomenon through their intervention. The qualification "natural phenomena" is included in the definition of statistics mostly to make certain that the phenomena studied are not arbitrary ones that are entirely under the will and control of the researcher, such as the number of animals employed in an experiment. The average biologist would not consider the number of stereo sets bought by persons in different states in a given year to be a natural phenomenon. Sociologists or human ecologists, however, might so consider it and deem it worthy of study.

The word "statistics" is also used in another, though related, way. It can be the plural of the noun statistic, which refers to any one of many computed or estimated statistical quantities, such as the mean, the standard deviation, or the correlation coefficient. Each one of these is a statistic.

1.2 The development of biostatistics

Modern statistics appears to have developed from two sources as far back as the seventeenth century. The first source was political science; a form of statistics developed as a quantitative description of the various aspects of the affairs of a government or state (hence the term "statistics"). This subject also became known as political arithmetic. Taxes and insurance caused people to become interested in problems of censuses, longevity, and mortality. Such considerations assumed increasing importance, especially in England as the country prospered during the development of its empire. John Graunt (1620-1674) and William Petty (1623-1687) were early students of vital statistics, and others followed in their footsteps.

At about the same time, the second source of modern statistics developed: the mathematical theory of probability engendered by the interest in games of chance among the leisure classes of the time. Important contributions to this theory were made by Blaise Pascal (1623-1662) and Pierre de Fermat (1601-1665), both Frenchmen. Jacques Bernoulli (1654-1705), a Swiss, laid the foundation of modern probability theory in Ars Conjectandi. Abraham de Moivre (1667-1754), a Frenchman living in England, was the first to combine the statistics of his day with probability theory in working out annuity values and to approximate the important normal distribution through the expansion of the binomial.

A later stimulus for the development of statistics came from the science of astronomy, in which many individual observations had to be digested into a coherent theory. Many of the famous astronomers and mathematicians of the eighteenth century, such as Pierre Simon Laplace (1749-1827) in France and Karl Friedrich Gauss (1777-1855) in Germany, were among the leaders in this field. The latter's lasting contribution to statistics is the development of the method of least squares.

Perhaps the earliest important figure in biostatistical thought was Adolphe Quetelet (1796-1874), a Belgian astronomer and mathematician, who in his work combined the theory and practical methods of statistics and applied them to problems of biology, medicine, and sociology. Francis Galton (1822-1911), a cousin of Charles Darwin, has been called the father of biostatistics and eugenics. The inadequacy of Darwin's genetic theories stimulated Galton to try to solve the problems of heredity. Galton's major contribution to biology was his application of statistical methodology to the analysis of biological variation, particularly through the analysis of variability and through his study of regression and correlation in biological measurements. His hope of unraveling the laws of genetics through these procedures was in vain. He started with the most difficult material and with the wrong assumptions. However, his methodology has become the foundation for the application of statistics to biology.

Karl Pearson (1857-1936), at University College, London, became interested in the application of statistical methods to biology, particularly in the demonstration of natural selection. Pearson's interest came about through the influence of W. F. R. Weldon (1860-1906), a zoologist at the same institution. Weldon, incidentally, is credited with coining the term "biometry" for the type of studies he and Pearson pursued. Pearson continued in the tradition of Galton and laid the foundation for much of descriptive and correlational statistics.

The dominant figure in statistics and biometry in the twentieth century has been Ronald A. Fisher (1890-1962), whose many contributions to statistical theory will become obvious even to the cursory reader of this book.

Statistics today is a broad and extremely active field whose applications touch almost every science and even the humanities. New applications for statistics are constantly being found, and no one can predict from what branch of statistics new applications to biology will be made.

1.3 The statistical frame of mind

A brief perusal of almost any biological journal reveals how pervasive the use of statistics has become in the biological sciences. Why has there been such a marked increase in the use of statistics in biology? Apparently, because biologists have found that the interplay of biological causal and response variables does not fit the classic mold of nineteenth-century physical science. In that century, biologists such as Robert Mayer, Hermann von Helmholtz, and others tried to demonstrate that biological processes were nothing but physicochemical phenomena. In so doing, they helped create the impression that the experimental methods and natural philosophy that had led to such dramatic progress in the physical sciences should be imitated fully in biology.

Many biologists, even to this day, have retained the tradition of strictly mechanistic and deterministic concepts of thinking (while physicists, interestingly enough, as their science has become more refined, have begun to resort to statistical approaches). In biology, most phenomena are affected by many causal factors, uncontrollable in their variation and often unidentifiable. Statistics is needed to measure such variable phenomena, to determine the error of measurement, and to ascertain the reality of minute but important differences.

A misunderstanding of these principles and relationships has given rise to the attitude of some biologists that if differences induced by an experiment, or observed by nature, are not clear on plain inspection (and therefore are in need of statistical analysis), they are not worth investigating. There are few legitimate fields of inquiry, however, in which, from the nature of the phenomena studied, statistical investigation is unnecessary.

Statistical thinking is not really different from ordinary disciplined scientific thinking, in which we try to quantify our observations. In statistics we express our degree of belief or disbelief as a probability rather than as a vague, general statement. For example, a statement that individuals of species A are larger than those of species B or that women suffer more often from disease X than do men is of a kind commonly made by biological and medical scientists. Such statements can and should be more precisely expressed in quantitative form.

In many ways the human mind is a remarkable statistical machine, absorbing many facts from the outside world, digesting these, and regurgitating them in simple summary form. From our experience we know certain events to occur frequently, others rarely. "Man smoking cigarette" is a frequently observed event; "man slipping on banana peel," rare. We know from experience that Japanese are on the average shorter than Englishmen and that Egyptians are on the average darker than Swedes. We associate thunder with lightning almost always, flies with garbage cans in the summer frequently, but snow with the southern Californian desert extremely rarely. All such knowledge comes to us as a result of experience, both our own and that of others, which we learn about by direct communication or through reading. All these facts have been processed by that remarkable computer, the human brain, which furnishes an abstract. This abstract is constantly under revision, and though occasionally faulty and biased, it is on the whole astonishingly sound; it is our knowledge of the moment.

Although statistics arose to satisfy the needs of scientific research, the development of its methodology in turn affected the sciences in which statistics is applied. Thus, through positive feedback, statistics, created to serve the needs of natural science, has itself affected the content and methods of the biological sciences. To cite an example: Analysis of variance has had a tremendous effect in influencing the types of experiments researchers carry out. The whole field of quantitative genetics, one of whose problems is the separation of environmental from genetic effects, depends upon the analysis of variance for its realization, and many of the concepts of quantitative genetics have been directly built around the designs inherent in the analysis of variance.

CHAPTER 2

Data in Biostatistics

In Section 2.1 we explain the statistical meaning of the terms "sample" and "population," which we shall be using throughout this book. Then, in Section 2.2, we come to the types of observations that we obtain from biological research material; we shall see how these correspond to the different kinds of variables upon which we perform the various computations in the rest of this book. In Section 2.3 we discuss the degree of accuracy necessary for recording data and the procedure for rounding off figures. We shall then be ready to consider in Section 2.4 certain kinds of derived data frequently used in biological science, among them ratios and indices, and the peculiar problems of accuracy and distribution they present us. Frequency distributions, as well as the presentation of numerical data, are discussed in Section 2.5. Knowing how to arrange data in frequency distributions is important because such arrangements give an overall impression of the general pattern of the variation present in a sample and also facilitate further computational procedures. In Section 2.6 we briefly describe the computational handling of data.

where each colony is a basic s a m p l i n g unit. b u t not necessarily. T h e m o r e c o m m o n term employed in general statistics is "variable. a n d the sample of o b s e r v a t i o n s is the t e m p e r a t u r e s for all the colonies considered. defined as a collection of individual observations selected by a specified procedure." H o w e v e r . or variable. o n e rat. It refers to all the individuals of a given species ( p e r h a p s of a given life-history stage or sex) f o u n d in a circumscribed area at a given time. If we m e a s u r e weight in 100 rats. if we h a d studied weight in a single rat over a period of time. then the p o p u l a t i o n f r o m which the sample has been d r a w n represents the leucocyte c o u n t s of all extant males of the species Homo sapiens. existing anywhere in the world or at least within a definitely specified sampling area limited in space and time. If. t h e h u n d r e d rat weights together represent the sample of observations. M o r e t h a n one variable can be measured on each smallest sampling unit. If we wish to m e a s u r e t e m p e r a t u r e in a study of ant colonies.2. p o p u l a t i o n always m e a n s the totality of individual observations about which inferences are to he made. These smallest s a m p l i n g units frequently. on the other hand. T h e y are observations or measurements taken on the smallest sampling unit. However. Next we define population. T h e data in biostatistics are generally based on individual observations. such as live male .1 Samples and populations We shall n o w define a n u m b e r of i m p o r t a n t terms necessary for an unders t a n d i n g of biological d a t a . are also individuals in the o r d i n a r y biological sense. blood p H a n d red cell c o u n t would be the t w o variables studied. In statistics. In this instance. W e have carefully avoided so far specifying what particular variable was being studied. 
T h e actual property m e a s u r e d by the individual o b s e r v a t i o n s is the character. T h e biological definition of this term is well k n o w n . in a g r o u p of 25 mice we might m e a s u r e the blood pH and the e r y t h r o c y t e c o u n t . one individual o b s e r v a t i o n (an item) is based on o n e individual in a biological s e n s e — t h a t is. If you take five men a n d study the n u m b e r of leucocytes in their peripheral blood and you are prepared to d r a w conclusions a b o u t all men from this s a m p l e of five. Each m o u s e (a biological individual) is the smallest sampling unit. because the terms "individual o b s e r v a t i o n " a n d " s a m p l e of observations" as used a b o v e define only the s t r u c t u r e but not the n a t u r e of the d a t a in a study. If we consider an estimate of the D N A c o n t e n t of a single m a m m a l i a n sperm cell to be an individual o b s e r v a t i o n . the ρ Η readings a n d cell c o u n t s are individual observations. you restrict yourself to a m o r e narrowly specified sample. T h u s . O r we might speak of a bivariate sample of 25 observations. then the weight of each rat is an individual observation. the s a m p l e of o b s e r v a t i o n s may be the estimates of D N A c o n t e n t of all the sperm cells studied in o n e individual m a m m a l .1 / SAMPLES AND POPULATIONS 7 2. in biology the word " c h a r a c t e r " is frequently used synonymously. each t e m p e r a t u r e reading for o n e colony is an individual observation. a n d two samples of 25 o b s e r v a t i o n s (on ρ Η a n d on e r y t h r o c y t e c o u n t ) would result. the s a m p l e of individual o b s e r v a t i o n s w o u l d be the weights recorded on one rat at successive times. each referring to a />H reading paired with an e r y t h r o c y t e c o u n t .

A population may represent variables of a concrete collection of objects or creatures, such as the tail lengths of all the white mice in the world, the leucocyte counts of all the Chinese men in the world of age 20, or the DNA content of all the hamster sperm cells in existence; or it may represent the outcomes of experiments, such as all the heartbeat frequencies produced in guinea pigs by injections of adrenalin. In cases of the first kind the population is generally finite. Although in practice it would be impossible to collect, count, and examine all hamster sperm cells, all Chinese men of age 20, or all white mice in the world, these populations are in fact finite. Certain smaller populations, such as all the whooping cranes in North America or all the recorded cases of a rare but easily diagnosed disease X, may well lie within reach of a total census. By contrast, an experiment can be repeated an infinite number of times (at least in theory). A given experiment, such as the administration of adrenalin to guinea pigs, could be repeated as long as the experimenter could obtain material and his or her health and patience held out. The sample of experiments actually performed is a sample from an infinite number that could be performed. Some of the statistical methods to be developed later make a distinction between sampling from finite and from infinite populations. However, though populations are theoretically finite in most applications in biology, they are generally so much larger than samples drawn from them that they can be considered de facto infinite-sized populations. The population in this statistical sense is sometimes referred to as the universe.

A common misuse of statistical methods is to fail to define the statistical population about which inferences can be made. A report on the analysis of a sample from a restricted population should not imply that the results hold in general.

2.2 Variables in biostatistics

Each biological discipline has its own set of variables, which may include conventional morphological measurements; concentrations of chemicals in body fluids; rates of certain biological processes; frequencies of certain events, as in genetics, epidemiology, and radiation biology; physical readings of optical or electronic machinery used in biological research; and many more.

We have already referred to biological variables in a general way, but we have not yet defined them. We shall define a variable as a property with respect to which individuals in a sample differ in some ascertainable way. If the property does not differ within a sample at hand or at least among the samples being studied, it cannot be of statistical interest. Length, height, weight, number of teeth, vitamin C content, and genotypes are examples of variables in ordinary, genetically and phenotypically diverse groups of organisms. Warm-bloodedness in a group of mammals is not, since mammals are all alike in this regard,

although body temperature of individual mammals would, of course, be a variable.

We can divide variables as follows:

Variables
    Measurement variables
        Continuous variables
        Discontinuous variables
    Ranked variables
    Attributes

Measurement variables are those measurements and counts that are expressed numerically. Measurement variables are of two kinds. The first kind consists of continuous variables, which at least theoretically can assume an infinite number of values between any two fixed points. For example, between the two length measurements 1.5 and 1.6 cm there are an infinite number of lengths that could be measured if one were so inclined and had a precise enough method of calibration. Any given reading of a continuous variable, such as a length of 1.57 mm, is therefore an approximation to the exact reading, which in practice is unknowable. Many of the variables studied in biology are continuous variables. Examples are lengths, areas, volumes, weights, angles, temperatures, periods of time, percentages, concentrations, and rates.

Contrasted with continuous variables are the discontinuous variables, also known as meristic or discrete variables. These are variables that have only certain fixed numerical values, with no intermediate values possible in between. Thus the number of segments in a certain insect appendage may be 4 or 5 or 6 but never 5.5 or 4.3. Examples of discontinuous variables are numbers of a given structure (such as segments, bristles, teeth, or glands), numbers of offspring, numbers of colonies of microorganisms or animals, or numbers of plants in a given quadrat.

Some variables cannot be measured but at least can be ordered or ranked by their magnitude. Thus, in an experiment one might record the rank order of emergence of ten pupae without specifying the exact time at which each pupa emerged. In such cases we code the data as a ranked variable, the order of emergence. Special methods for dealing with such variables have been developed, and several are furnished in this book. By expressing a variable as a series of ranks, such as 1, 2, 3, 4, 5, we do not imply that the difference in magnitude between, say, ranks 1 and 2 is identical to or even proportional to the difference between ranks 2 and 3.

Variables that cannot be measured but must be expressed qualitatively are called attributes, or nominal variables. These are all properties, such as black or white, pregnant or not pregnant, dead or alive, male or female. When such attributes are combined with frequencies, they can be treated statistically. Of 80 mice, we may, for instance, state that four were black, two agouti, and the rest gray.

When attributes are combined with frequencies into tables suitable for statistical analysis, they are referred to as enumeration data. Thus the enumeration data on color in mice would be arranged as follows:

Color                     Frequency
Black                          4
Agouti                         2
Gray                          74
Total number of mice          80

In some cases attributes can be changed into measurement variables if this is desired. Thus colors can be changed into wavelengths or color-chart values. Certain other attributes that can be ranked or ordered can be coded to become ranked variables. For example, three attributes referring to a structure as "poorly developed," "well developed," and "hypertrophied" could be coded 1, 2, and 3.

A term that has not yet been explained is variate. In this book we shall use it as a single reading, score, or observation of a given variable. Thus, if we have measurements of the length of the tails of five mice, tail length will be a continuous variable, and each of the five readings of length will be a variate. In this text we identify variables by capital letters, the most common symbol being Y. Thus Y may stand for tail length of mice. A variate will refer to a given length measurement; Yi is the measurement of tail length of the ith mouse, and Y4 is the measurement of tail length of the fourth mouse in our sample.

2.3 Accuracy and precision of data

"Accuracy" and "precision" are used synonymously in everyday speech, but in statistics we define them more rigorously. Accuracy is the closeness of a measured or computed value to its true value. Precision is the closeness of repeated measurements. A biased but sensitive scale might yield inaccurate but precise weight. By chance, an insensitive scale might result in an accurate reading, which would, however, be imprecise, since a repeated weighing would be unlikely to yield an equally accurate weight. Unless there is bias in a measuring instrument, precision will lead to accuracy; we need therefore mainly be concerned with the former. Precise variates are usually, but not necessarily, accurate.

Seemingly, variables are generally measured as exact numbers. When we count four eggs in a nest, there is no doubt about the exact number of eggs in the nest if we have counted correctly; it is 4, not 3 or 5, and clearly it could not be 4 plus or minus a fractional part. Meristic, or discontinuous, variables are generally measured as exact numbers: whole numbers. Continuous variables derived from meristic ones can under certain conditions also be exact numbers, since ratios between exact numbers are themselves also exact. If in a colony of animals there are 18 females and 12 males, the ratio of females to males is 1.5, an exact number.

Most continuous variables, however, are approximate. We mean by this that the exact value of the single measurement, the variate, is unknown and probably unknowable. The last digit of the measurement stated should imply precision; that is, it should indicate the limits on the measurement scale between which we believe the true measurement to lie. Thus, a length measurement of 12.3 mm implies that the true length of the structure lies somewhere between 12.25 and 12.35 mm. Exactly where between these implied limits the real length is we do not know. But where would a true measurement of 12.25 fall? Would it not equally likely fall in either of the two classes 12.2 and 12.3, clearly an unsatisfactory state of affairs? Such an argument is correct, but when we record a number as either 12.2 or 12.3, we imply that the decision whether to put it into the higher or lower class has already been taken. If the scale of measurement is so precise that a value of 12.25 would clearly have been recognized, then the measurement should have been recorded originally to four significant figures. Implied limits, therefore, always carry one more figure beyond the last significant one measured by the observer.

Hence, it follows that if we record the measurement as 12.32, we are implying that the true value lies between 12.315 and 12.325. Unless this is what we mean, there would be no point in adding the last decimal figure to our original measurements. If we do add another figure, we must imply an increase in precision.

Meristic variates, though ordinarily exact, may be recorded approximately when large numbers are involved. Thus when counts are reported to the nearest thousand, a count of 36,000 insects in a cubic meter of soil, for example, implies that the true number varies somewhere from 35,500 to 36,500 insects.

We see, therefore, that accuracy and precision in numbers are not absolute concepts, but are relative. Assuming there is no bias, a number becomes increasingly more accurate as we are able to write more significant figures for it (increase its precision). To illustrate this concept of the relativity of accuracy, consider the following three numbers:

              Implied limits
193           192.5   to 193.5
192.8         192.75  to 192.85
192.76        192.755 to 192.765

We may imagine these numbers to be recorded measurements of the same structure. Let us assume that we had extramundane knowledge that the true length of the given structure was 192.758 units. If that were so, the three measurements would increase in accuracy from the top down, as the interval between their implied limits decreased. You will note that the implied limits of the topmost measurement are wider than those of the one below it, which in turn are wider than those of the third measurement.

To how many significant figures should we record measurements? If we array the sample by order of magnitude from the smallest individual to the largest one, an easy rule to remember is that the number of unit steps from the smallest to the largest measurement in the array should usually be between 30 and 300.
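The implied limits of a recorded measurement follow mechanically from the number of places recorded; a minimal sketch in Python (the function and argument names are ours, not the text's):

```python
def implied_limits(recorded, decimals):
    """Implied limits of a measurement recorded to `decimals` decimal
    places: the true value lies within half a unit step on either side
    of the recorded score. (Illustrative helper, not from the text.)"""
    half_step = 0.5 * 10 ** (-decimals)
    return recorded - half_step, recorded + half_step

# A length recorded as 12.3 mm implies a true length between 12.25 and 12.35 mm.
low, high = implied_limits(12.3, 1)
# A negative `decimals` covers counts reported to the nearest thousand:
# implied_limits(36000, -3) spans 35,500 to 36,500.
```

The same half-unit-step reasoning applies whether the unit step is 0.1 mm or 1000 insects.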

Thus, if we are measuring a series of shells to the nearest millimeter and the largest is 8 mm and the smallest is 4 mm wide, there are only four unit steps between the largest and the smallest measurement. Hence, we should measure our shells to one more significant decimal place. Then the two extreme measurements might be 8.2 mm and 4.1 mm, with 41 unit steps between them (counting the last significant digit as the unit); this would be an adequate number of unit steps. The reason for such a rule is that an error of 1 in the last significant digit of a reading of 4 mm would constitute an inadmissible error of 25%, but an error of 1 in the last digit of 4.1 is less than 2.5%. Similarly, if we measured the height of the tallest of a series of plants as 173.2 cm and that of the shortest of these plants as 26.6 cm, the difference between these limits would comprise 1466 unit steps (of 0.1 cm), which are far too many. It would therefore be advisable to record the heights to the nearest centimeter, as follows: 173 cm for the tallest and 27 cm for the shortest. This would yield 146 unit steps. Using the rule we have stated for the number of unit steps, we shall record two or three digits for most measurements.

The last digit should always be significant; that is, it should imply a range for the true measurement of from half a "unit step" below to half a "unit step" above the recorded score, as illustrated earlier. This applies to all digits, zero included. Zeros should therefore not be written at the end of approximate numbers to the right of the decimal point unless they are meant to be significant digits. Thus 7.80 must imply the limits 7.795 to 7.805. If 7.75 to 7.85 is implied, the measurement should be recorded as 7.8.

When the number of significant digits is to be reduced, we carry out the process of rounding off numbers. The rules for rounding off are very simple. A digit to be rounded off is not changed if it is followed by a digit less than 5. If the digit to be rounded off is followed by a digit greater than 5 or by a 5 followed by other nonzero digits, it is increased by 1. When the digit to be rounded off is followed by a 5 standing alone or a 5 followed by zeros, it is unchanged if it is even but increased by 1 if it is odd. The reason for this last rule is that when such numbers are summed in a long series, we should have, on the average, as many digits raised as are being lowered; these changes should therefore balance out. Practice the above rules by rounding off the following numbers to the indicated number of significant digits:

Number        Significant digits desired    Answer
26.58                    2                   27
133.7137                 5                   133.71
0.03725                  3                   0.0372
0.03715                  3                   0.0372
18,316                   2                   18,000
17.3476                  3                   17.3

Most pocket calculators or larger computers round off their displays using a different rule: they increase the preceding digit when the following digit is a 5 standing alone or with trailing zeros. However, since most of the machines usable for statistics also retain eight or ten significant figures internally, the accumulation of rounding errors is minimized. Incidentally, if two calculators give answers with slight differences in the final (least significant) digits, suspect a different number of significant digits in memory as a cause of the disagreement.

2.4 Derived variables

The majority of variables in biometric work are observations recorded as direct measurements or counts of biological material or as readings that are the output of various types of instruments. However, there is an important class of variables in biological research that we may call the derived or computed variables. These are generally based on two or more independently measured variables whose relations are expressed in a certain way. We are referring to ratios, percentages, concentrations, indices, rates, and the like.

A ratio expresses as a single value the relation that two variables have, one to the other. In its simplest form, a ratio is expressed as in 64:24, which may represent the number of wild-type versus mutant individuals, the number of males versus females, a count of parasitized individuals versus those not parasitized, and so on. These examples imply ratios based on counts. A ratio based on a continuous variable might be similarly expressed as 1.2:1.8, which may represent the ratio of width to length in a sclerite of an insect or the ratio between the concentrations of two minerals contained in water or soil. Ratios may also be expressed as fractions; thus, the two ratios above could be expressed as 64/24 and 1.2/1.8. However, for computational purposes it is more useful to express the ratio as a quotient; the two ratios cited would therefore be 2.666... and 0.666..., respectively. These are pure numbers, not expressed in measurement units of any kind. It is this form for ratios that we shall consider further. Percentages are also a type of ratio.

An index is the ratio of the value of one variable to the value of a so-called standard one. A well-known example of an index in this sense is the cephalic index in physical anthropology. Conceived in the wide sense, an index could be the average of two measurements, either simply, such as (1/2)(length of A + length of B), or in weighted fashion, such as (1/3)[(2 × length of A) + length of B].

Rates are important in many experimental fields of biology. The amount of a substance liberated per unit weight or volume of biological material, weight gain per unit time, reproductive rates per unit population size and time (birth rates), and death rates would fall in this category.

The use of ratios and percentages is deeply ingrained in scientific thought, and they are widely used and generally familiar. Often ratios may be the only meaningful way to interpret and understand certain types of biological problems. If the biological process being investigated operates on the ratio of the variables studied, one must examine this ratio to understand the process.
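The simple and weighted index formulas mentioned above translate directly into code; a small sketch (the function names are ours):

```python
def simple_index(length_a, length_b):
    """Unweighted index: the plain average of two measurements."""
    return (length_a + length_b) / 2

def weighted_index(length_a, length_b):
    """Weighted index giving measurement A twice the weight of B,
    i.e. (2 * A + B) / 3, as in the text's example."""
    return (2 * length_a + length_b) / 3
```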

Thus, Sinnott and Hammond (1935) found that inheritance of the shapes of squashes of the species Cucurbita pepo could be interpreted through a form index based on a length-width ratio, but not through the independent dimensions of shape. By similar methods of investigation, we should be able to find selection affecting body proportions to exist in the evolution of almost any organism.

There are several disadvantages to using ratios. First, they are relatively inaccurate. Let us return to the ratio 1.2:1.8 mentioned above and recall from the previous section that a measurement of 1.2 implies a true range of measurement of the variable from 1.15 to 1.25; similarly, a measurement of 1.8 implies a range from 1.75 to 1.85. We realize, therefore, that the true ratio may vary anywhere from 1.15/1.85 to 1.25/1.75, or from 0.622 to 0.714. Furthermore, the best estimate of a ratio is not usually the midpoint between its possible ranges. Thus, in our example the midpoint between the implied limits is 0.668, while the ratio based on 1.2/1.8 is 0.666...; although this is only a slight difference, the discrepancy may be greater in other instances. We note a possible maximal error of 4.2% if 1.2 is an original measurement: (1.25 - 1.2)/1.2. The corresponding maximal error for the ratio is 7.0%: (0.714 - 0.667)/0.667.

A second disadvantage to ratios and percentages is that they may not be approximately normally distributed (see Chapter 5) as required by many statistical tests. This difficulty can frequently be overcome by transformation of the variable (as discussed in Chapter 10). A third disadvantage of ratios is that in using them one loses information about the relationships between the two variables except for the information about the ratio itself.

2.5 Frequency distributions

If we were to sample a population of birth weights of infants, we could represent each sampled measurement by a point along an axis denoting magnitude of birth weight. This is illustrated in Figure 2.1A, for a sample of 25 birth weights. If we sample repeatedly from the population and obtain 100 birth weights, we shall probably have to place some of these points on top of other points in order to record them all correctly (Figure 2.1B). As we continue sampling additional hundreds and thousands of birth weights (Figure 2.1C and D), the assemblage of points will continue to increase in size but will assume a fairly definite shape. The outline of the mound of points approximates the distribution of the variable. Remember that a continuous variable such as birth weight can assume an infinity of values between any two points on the abscissa. The refinement of our measurements will determine how fine the number of recorded divisions between any two points along the axis will be.

The distribution of a variable is of considerable biological interest. If we find that the distribution is asymmetrical and drawn out in one direction, it tells us that there is, perhaps, selection that causes organisms to fall preferentially in one of the tails of the distribution, or possibly that the scale of measurement chosen is such as to bring about a distortion of the distribution.
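The range computed above for the ratio 1.2/1.8 can be reproduced from the implied limits of its terms; a minimal Python sketch (the function and variable names are ours):

```python
def ratio_limits(numerator, denominator, half_step=0.05):
    """Extreme values a ratio can take when each measurement is known
    only to within +/- half_step (its implied limits)."""
    lowest = (numerator - half_step) / (denominator + half_step)
    highest = (numerator + half_step) / (denominator - half_step)
    return lowest, highest

low, high = ratio_limits(1.2, 1.8)   # 1.15/1.85 and 1.25/1.75
midpoint = (low + high) / 2          # about 0.668, not the quotient 0.666...
```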

[Figure 2.1: dot diagrams of four samples.]
FIGURE 2.1
Sampling from a population of birth weights of infants (a continuous variable). A. A sample of 25. B. A sample of 100. C. A sample of 500. D. A sample of 2000. Abscissa in all graphs: birth weight (oz), from 70 to 160.

L-shaped d i s t r i b u t i o n s as in Figure 2. 6. the variates 7. 9.1). This m e a n s that different species or races m a y have become intermingled in o u r sample. all of which impart significant i n f o r m a t i o n a b o u t the relationships they represent. 5. for our example. 7. 6. T h e r e are several characteristic shapes of frequency distributions. to list a frequency for each of the recurring variates. a time-saving device might immediately have occurred to you namely.16 200 - CHAPTER 2 / DATA IN BIOSTATISTICS £ 150 - JJ _ FIGURE 2 . 6. A simple a r r a n g e m e n t would be an array of the d a t a by o r d e r of m a g n i t u d e . they must a r r a n g e the d a t a in a form suitable for c o m p u t a t i o n a n d interpretation. 7. 4. 4. this would indicate that the p o p u l a t i o n is d i m o r p h i c . O r the d i m o r p h i s m could have arisen f r o m the presence of b o t h sexes or of different instars. 7. 6. a frequency distribution is stated in t a b u l a r form. for example. 7. 0 1 2 3 4 5 (i 7 S N u m b e r of p l a n t s q u a d r a t chosen is such as to bring a b o u t a distortion of the distribution. a n d others. 8. in a s a m p l e of i m m a t u r e insects. W h e r e there are some variates of the same value. 6(3 χ ). 7.2. 8. 6. which is simply an a r r a n g e m e n t of the classes of variates with the frequency of each class indicated. C o n v e n t i o n a l l y . We m a y a s s u m e that variates arc r a n d o m l y ordered initially or are in the o r d e r in which the m e a s u r e m e n t s have been taken. T h u s . this is d o n e as follows: . 7(4 χ ). We shall have m o r e to say a b o u t the implications of various types of distributions in later c h a p t e r s a n d sections.2. F r e q u e n c y of t h e sedge Car ex in 500 q u a d r a t s . Such a s h o r t h a n d n o t a t i o n is o n e way to represent a frcqucncy distribution. thus: 9. orginally f r o m A r c h i b a l d (1950). 7. 
U-shaped distributions. which is the s h a p e of the n o r m a l frequency distribution discussed in C h a p t e r 5. T h e r e a r e also skewed d i s t r i b u t i o n s (drawn out m o r e at o n e tail than the other). we discover that the m e a s u r e m e n t s are b i m o d a l l y distributed (with t w o peaks). 5. 7 could be arrayed in o r d e r of decreasing m a g n i t u d e as follows: 9. 8. 2 2 £ I '"" flacca B a r d i a g r a m . 6. 5. If. T h e most c o m m o n is the symmetrical bell shape ( a p p r o x i m a t e d by the b o t t o m g r a p h in Figure 2. such as the 6's a n d 7's in this Fictitious example. After researchers have obtained d a t a in a given study. 4. D a t a f r o m T a b l e 2.

Variable    Frequency
Y           f
9           1
8           1
7           4
6           3
5           1
4           1

The above is an example of a quantitative frequency distribution, since Y is clearly a measurement variable. However, arrays and frequency distributions need not be limited to such variables. We can also make frequency distributions of attributes, called qualitative frequency distributions. In these, the various classes are listed in some logical or arbitrary order. For example, in genetics we might have a qualitative frequency distribution as follows:

Phenotype    f
A-           86
aa           32

This tells us that there are two classes of individuals: those identified by the A phenotype, of which 86 were found, and those comprising the homozygote recessive aa, of which 32 were seen in the sample.

An example of a more extensive qualitative frequency distribution is given in Table 2.1, which shows the distribution of melanoma (a type of skin cancer) over body regions in men and women. This table tells us that the trunk and limbs are the most frequent sites for melanomas and that the buccal cavity, the rest of the gastrointestinal tract, and the genital tract are rarely afflicted by this type of cancer.

TABLE 2.1
Two qualitative frequency distributions. Number of cases of skin cancer (melanoma) distributed over body regions of 4599 men and 4786 women.

                                    Observed frequency
Anatomic site                       Men      Women
Head and neck                        949       645
Trunk and limbs                     3243      3645
Buccal cavity                          8        11
Rest of gastrointestinal tract         5        21
Genital tract                         12        93
Eye                                  382       371
Total cases                         4599      4786

Source: Data from Lee.
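A tabular frequency distribution like the quantitative one above can be produced mechanically; here is a sketch using Python's standard library, with the fictitious variates from the text:

```python
from collections import Counter

variates = [9, 6, 7, 6, 6, 7, 4, 7, 8, 5, 7]   # unordered, as obtained
freq = Counter(variates)

# List the classes in order of decreasing magnitude, as in the table above.
for y in sorted(freq, reverse=True):
    print(y, freq[y])
```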

We often encounter other examples of qualitative frequency distributions in ecology in the form of tables, or species lists, of the inhabitants of a sampled ecological area. Such tables catalog the inhabitants by species or at a higher taxonomic level and record the number of specimens observed for each. The arrangement of such tables is usually alphabetical, or it may follow a special convention, as in some botanical species lists.

A quantitative frequency distribution based on meristic variates is shown in Table 2.2. This is an example from plant ecology: the number of plants per quadrat sampled is listed at the left in the variable column; the observed frequency is shown at the right.

TABLE 2.2
A meristic frequency distribution. Number of plants of the sedge Carex flacca found in 500 quadrats.

No. of plants     Observed
per quadrat       frequency
Y                 f
0                 181
1                 118
2                  97
3                  54
4                  32
5                   9
6                   5
7                   3
8                   1
Total             500

Source: Data from Archibald (1950).

Quantitative frequency distributions based on a continuous variable are the most commonly employed frequency distributions; you should become thoroughly familiar with them. An example is shown in Box 2.1. It is based on 25 femur lengths measured in an aphid population. The 25 readings are shown at the top of Box 2.1 in the order in which they were obtained as measurements. (They could have been arrayed according to their magnitude.) The data are next set up in a frequency distribution. The variates increase in magnitude by unit steps of 0.1. The frequency distribution is prepared by entering each variate in turn on the scale and indicating a count by a conventional tally mark. When all of the items have been tallied in the corresponding class, the tallies are converted into numerals indicating frequencies in the next column. Their sum is indicated by Σf.

What have we achieved in summarizing our data? The original 25 variates are now represented by only 15 classes. We find that a few of the variates have markedly higher frequencies than the rest; however, we also note that there are several classes that are not represented by a single aphid. This gives the entire frequency distribution a drawn-out and scattered appearance.

The reason for this is that we have only 25 aphids, too few to put into a frequency distribution with 15 classes. To obtain a more cohesive and smooth-looking distribution, we have to condense our data into fewer classes. This process is known as grouping of classes of frequency distributions; it is illustrated in Box 2.1 and described in the following paragraphs.

We should realize that grouping individual variates into classes of wider range is only an extension of the same process that took place when we obtained the initial measurement. Thus, as we have seen in Section 2.3, when we measure an aphid and record its femur length as 3.3 units, we imply thereby that the true measurement lies between 3.25 and 3.35 units, but that we were unable to measure to the second decimal place. In recording the measurement initially as 3.3 units, we estimated that it fell within this range. Had we estimated that it exceeded the value of 3.35, we would have given it the next higher score, 3.4. Therefore, all the measurements between 3.25 and 3.35 were in fact grouped into the class identified by the class mark 3.3. Our class interval was 0.1 units. If we now wish to make wider class intervals, we are doing nothing but extending the range within which measurements are placed into one class.

Reference to Box 2.1 will make this process clear. We group the data twice in order to impress upon the reader the flexibility of the process. In the first example of grouping, the class interval has been doubled in width; that is, it has been made to equal 0.2 units. If we start at the lower end, the implied class limits will now be from 3.25 to 3.45, the limits for the next class from 3.45 to 3.65, and so forth.

Our next task is to find the class marks. This was quite simple in the frequency distribution shown at the left side of Box 2.1, in which the original measurements were used as class marks. However, now we are using a class interval twice as wide as before, and the class marks are calculated by taking the midpoint of the new class intervals. Thus, to find the class mark of the first class, we take the midpoint between 3.25 and 3.45, which turns out to be 3.35. We note that the class mark has one more decimal place than the original measurements. We should not now be led to believe that we have suddenly achieved greater precision. Whenever we designate a class interval whose last significant digit is even (0.2 in this case), the class mark will carry one more decimal place than the original measurements. Once the implied class limits and the class mark for the first class have been correctly found, the others can be written down by inspection without any special computation: simply add the class interval repeatedly to each of the values. Thus, starting with the lower limit 3.25, by adding 0.2 we obtain 3.45, 3.65, 3.85, and so forth; similarly, for the class marks, we obtain 3.35, 3.55, 3.75, and so forth.

On the right side of the table in Box 2.1 the data are grouped once again, using a class interval of 0.3 units. Because of the odd last significant digit, the class mark now shows as many decimal places as the original variates, the midpoint between 3.25 and 3.55 being 3.4. It should be obvious that the wider the class intervals, the more compact the data become but also the less precise.

BOX 2.1
Preparation of frequency distribution, and grouping into fewer classes with wider class intervals. Twenty-five femur lengths of the aphid Pemphigus. Measurements are in mm × 10⁻¹. The box lists the original measurements and shows, for the original frequency distribution, for a grouping into 8 classes of interval 0.2, and for a grouping into 5 classes of interval 0.3, the implied limits, class marks, and tally marks of each class. (Source: Data from R. R. Sokal.)

[Illustration: histogram of the original frequency distribution shown above and of the grouped distribution with 5 classes. Shaded bars represent the original frequency distribution; hollow bars represent the grouped distribution. The line below the abscissa shows class marks for the grouped frequency distribution. Y = femur length, in units of 0.1 mm.] For a detailed account of the process of grouping, see Section 2.5.
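The arithmetic of implied limits and class marks lends itself to a short program. The sketch below is ours (the function name and printed layout are not from the text); it reproduces the rule described above: each class spans one interval above its lower implied limit, and the class mark is the midpoint of the two limits.

```python
# Sketch of the class-limit and class-mark arithmetic used in Box 2.1.
def class_marks(lower_limit, interval, n_classes):
    """Return (implied lower limit, implied upper limit, class mark)
    for each class, starting at lower_limit and stepping by interval."""
    classes = []
    lo = lower_limit
    for _ in range(n_classes):
        hi = lo + interval
        classes.append((round(lo, 2), round(hi, 2), round((lo + hi) / 2, 2)))
        lo = hi
    return classes

# Interval 0.2 starting at the implied limit 3.25, as in the first grouping:
for lo, hi, mark in class_marks(3.25, 0.2, 3):
    print(f"{lo:.2f}-{hi:.2f}  class mark {mark}")
```

With an interval of 0.3 the same function gives the first class as 3.25 to 3.55 with class mark 3.4, matching the second grouping in the box.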

In setting up frequency distributions, from 12 to 20 classes should be established. This rule need not be slavishly adhered to, but it should be employed with some of the common sense that comes from experience in handling statistical data. The number of classes depends largely on the size of the sample studied. Samples of less than 40 or 50 should rarely be given as many as 12 classes, since that would provide too few frequencies per class. On the other hand, samples of several thousand may profitably be grouped into more than 20 classes. If the aphid data of Box 2.1 need to be grouped, they should probably not be grouped into more than 6 classes.

If the original data provide us with fewer classes than we think we should have, then nothing can be done if the variable is meristic, since this is the nature of the data in question. However, with a continuous variable a scarcity of classes would indicate that we probably had not made our measurements with sufficient precision. If we had followed the rules on number of significant digits for measurements stated in Section 2.3, this could not have happened.

Whenever we come up with more than the desired number of classes, grouping should be undertaken. Looking at the frequency distribution of aphid femur lengths in Box 2.1, we notice that the initial rather chaotic structure is being simplified by grouping. When we group the frequency distribution into five classes with a class interval of 0.3 units, it becomes notably bimodal (that is, it possesses two peaks of frequencies).

With many meristic variables, such as a bristle number varying from a low of 13 to a high of 81, it would probably be wise to group the variates into classes, each containing several counts. This can best be done by using an odd number as a class interval so that the class mark representing the data will be a whole rather than a fractional number. Thus, if we were to group the bristle numbers 13, 14, 15, and 16 into one class, the class mark would have to be 14.5, a meaningless value in terms of bristle number. It would therefore be better to use a class ranging over 3 bristles or 5 bristles, giving the integral value 14 or 15 as a class mark. When the data are meristic, the implied limits used for continuous variables are meaningless.

Grouping data into frequency distributions was necessary when computations were done by pencil and paper. Nowadays even thousands of variates can be processed efficiently by computer without prior grouping. However, frequency distributions are still extremely useful as a tool for data analysis. This is especially true in an age in which it is all too easy for a researcher to obtain a numerical result from a computer program without ever really examining the data for outliers or for other ways in which the sample may not conform to the assumptions of the statistical methods.

Rather than using tally marks to set up a frequency distribution, as was done in Box 2.1, we can employ Tukey's stem-and-leaf display. This technique is an improvement, since it not only results in a frequency distribution of the variates of a sample but also permits easy checking of the variates and ordering them into an array (neither of which is possible with tally marks). It will therefore be useful in computing the median of a sample (see Section 3.3) and in computing various tests that require ordered arrays of the sample variates.

To learn how to construct a stem-and-leaf display, let us look ahead to Table 3.1 in the next chapter, which lists 15 blood neutrophil counts. The unordered measurements begin 4.9, 4.6, 5.5, … To prepare a stem-and-leaf display, we scan the variates in the sample to discover the lowest and highest leading digit or digits. Next, we write down the entire range of leading digits in unit increments to the left of a vertical line (the "stem"), as shown in the accompanying illustration. We then put the next digit of the first variate (a "leaf") at that level of the stem corresponding to its leading digit(s). The first observation in our sample is 4.9; we therefore place a 9 next to the leading digit 4. The next variate is 4.6. It is entered by finding the stem level for the leading digit 4 and recording a 6 next to the 9 that is already there. Similarly, for the third variate, 5.5, we record a 5 next to the leading digit 5. We continue in this way until all 15 variates have been entered (as "leaves") in sequence along the appropriate leading digits of the stem.

[Illustration: successive stages of the stem-and-leaf display: the stem of leading digits running from 2 to 18 after the first variate has been entered (Step 1), after the second (Step 2), and the completed array with all 15 leaves.]

The completed array is the equivalent of a frequency distribution and has the appearance of a histogram or bar diagram (see the illustration). Moreover, it permits the efficient ordering of the variates: from the completed array it becomes obvious what the appropriate ordering of the 15 variates is. The median can easily be read off the stem-and-leaf display; it is clearly 6.4. For very large samples, stem-and-leaf displays may become awkward. In such cases a conventional frequency distribution, as in Box 2.1, would be preferable.

When the shape of a frequency distribution is of particular interest, we may wish to present the distribution in graphic form when discussing the results. This is generally done by means of frequency diagrams, of which there are two common types. For a distribution of meristic data we employ a bar diagram.
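The stem-and-leaf construction described above is easy to automate. In the sketch below, the helper function is ours, and the counts are illustrative stand-ins in the same range as the neutrophil data (the exact 15 values appear in Table 3.1); stems are the leading digits, leaves the next digit, entered in order of observation.

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Build a stem-and-leaf display: stems are the integer (leading) digits,
    leaves are the first decimal digits, recorded in order of entry."""
    leaves = defaultdict(list)
    for v in values:
        stem = int(v)                    # leading digit(s)
        leaf = int(round(v * 10)) % 10   # the next digit becomes the leaf
        leaves[stem].append(leaf)
    lines = []
    for stem in range(min(leaves), max(leaves) + 1):
        lines.append(f"{stem:2d} | " + "".join(str(d) for d in leaves.get(stem, [])))
    return lines

# Illustrative counts, not the exact data of Table 3.1:
counts = [4.9, 4.6, 5.5, 9.1, 16.3, 12.7, 7.1, 2.3, 3.6, 18.0,
          3.7, 7.3, 4.4, 9.8, 7.8]
for line in stem_and_leaf(counts):
    print(line)
```

Reading the leaves off each stem in sorted order yields the ordered array, which is why the display is convenient for finding the median.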

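The tally behind a bar diagram for meristic data is just a count of each distinct value. A minimal sketch (the quadrat counts below are invented for illustration):

```python
from collections import Counter

# Invented counts of plants per quadrat (a meristic variable):
plants_per_quadrat = [2, 3, 3, 1, 4, 2, 3, 5, 2, 3, 4, 2]

freq = Counter(plants_per_quadrat)
for value in sorted(freq):
    # Print each value's frequency as a separate row of asterisks;
    # for a meristic variable the bars should not touch.
    print(f"{value}: {'*' * freq[value]}")
```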
The important point about such a diagram is that the bars do not touch each other, which indicates that the variable is not continuous. The abscissa represents the variable (in our case, the number of plants per quadrat), and the ordinate represents the frequencies.

By contrast, continuous variables, such as the frequency distribution of the femur lengths of aphid stem mothers, are graphed as a histogram. In a histogram the width of each bar along the abscissa represents a class interval of the frequency distribution, and the bars touch each other to show that the actual limits of the classes are contiguous. The midpoint of the bar corresponds to the class mark, and the height of each bar represents the frequency of the corresponding class. At the bottom of Box 2.1 are shown histograms of the frequency distribution of the aphid data, ungrouped and grouped.

Occasionally the class intervals of a grouped continuous frequency distribution are unequal. For instance, in a frequency distribution of ages we might have more detail on the different ages of young individuals and less accurate identification of the ages of old individuals. In such cases, the class intervals for the older age groups would be wider, those for the younger age groups, narrower. In representing such data, the bars of the histogram are drawn with different widths.

To illustrate that histograms are appropriate approximations to the continuous distributions found in nature, we may take a histogram and make the class intervals more narrow, producing more classes. The histogram would then clearly have a closer fit to a continuous distribution. We can continue this process until the class intervals become infinitesimal in width. At this point the histogram becomes the continuous distribution of the variable.

Figure 2.3 shows another graphical mode of representation of a frequency distribution of a continuous variable (in this case, birth weight in infants): the frequency polygon. As we shall see later, the shapes of distributions seen in such frequency polygons can reveal much about the biological situations affecting the given variable.

FIGURE 2.3
Frequency polygon. Birth weights of 9465 male infants. Chinese third-class patients in Singapore, 1950 and 1951. Data from Millis and Seng (1954).

2.6 The handling of data

Data must be handled skillfully and expeditiously so that statistics can be practiced successfully. Readers should therefore acquaint themselves with the various kinds of computing equipment available.

The once-standard electrically driven mechanical desk calculator has completely disappeared; many new electronic devices, from small pocket calculators to larger desk-top computers, have replaced it. We can use many new types of equipment to perform statistical computations, many more than we could have when Introduction to Biostatistics was first published. Such devices are so diverse that we will not try to survey the field here. Even if we did, the rate of advance in this area would be so rapid that whatever we might say would soon become obsolete.

We cannot really draw the line between the more sophisticated electronic calculators, including programmable small calculators, on the one hand, and digital computers on the other. There is no abrupt increase in capabilities between the more versatile programmable calculators and the simpler microcomputers, just as there is none as we progress from microcomputers to minicomputers and so on up to the large computers that one associates with the central computation center of a large university or research laboratory. All can perform computations automatically and be controlled by a set of detailed instructions prepared by the user. Most of these devices are adequate for all of the computations described in this book, even for large sets of data.

In this book we ignore "pencil-and-paper" short-cut methods for computations, found in earlier textbooks of statistics, since we assume that the student has access to a calculator or a computer. Some statistical methods are very easy to use because special tables exist that provide answers for standard statistical problems. An example is Finney's table, used for the test of independence in a 2-by-2 contingency table containing small frequencies (Pearson and Hartley, 1958, Table 38). For small problems, Finney's table can be used in place of Fisher's method of finding exact probabilities, which is very tedious. Other statistical techniques are so easy to carry out that no mechanical aids are needed. Some are inherently simple, such as the sign test (Section 10.3); almost no computation is involved. Other methods are only approximate but can often serve the purpose adequately. For example, we may sometimes substitute an easy-to-evaluate median (defined in Section 3.3) for the mean (described in Sections 3.1 and 3.2), which requires computation.

The use of modern data processing procedures has one inherent danger. One can all too easily either feed in erroneous data or choose an inappropriate program. When using a program for the first time, one should test it using data from textbooks with which one is familiar. The material in this book consists of relatively standard statistical computations that are available in many statistical programs. BIOMstat* is a statistical software package that includes most of the statistical methods covered in this book. Users must select programs carefully to ensure that those programs perform the desired computations, give numerically reliable results, and are as free from error as possible.

* For information or to order, contact Exeter Software. Website: http://www.exetersoftware.com. E-mail: sales@exetersoftware.com. These programs are compatible with Windows XP and Vista.

Some programs

are notorious because the programmer has failed to guard against excessive rounding errors or other problems. Users of a program should carefully check the data being analyzed so that typing errors are not present. In addition, programs should help users identify and remove bad data values and should provide them with transformations so that they can make sure that their data satisfy the assumptions of various analyses.

Exercises

2.1 Round the following numbers to three significant figures: 106.55, 0.06819, 3.0495, 7815, 20.1500, and 0.9149. What are the implied limits before and after rounding? Round these same numbers to one decimal place. ANS. For the first value: 107; its implied limits before rounding are 106.545 and 106.555.

2.2 For the data of Tables 2.1 and 2.2 identify the individual observations, samples, populations, and variables.

2.3 Given 200 measurements ranging from 1.32 to 2.95 mm, how would you group them into a frequency distribution? Give class limits as well as class marks.

2.4 Group the following 40 measurements of interorbital width of a sample of domestic pigeons into a frequency distribution and draw its histogram (data from Olson and Miller, 1958). Measurements are in millimeters. [Data table: the 40 interorbital widths.]

2.5 Transform the 40 measurements in Exercise 2.4 into common logarithms (use a table or calculator) and make a frequency distribution of these transformed variates. Comment on the resulting change in the pattern of the frequency distribution from that found before.

2.6 Differentiate between the following pairs of terms and give an example of each. (a) Statistical and biological populations. (b) Variate and individual. (c) Accuracy and precision (repeatability). (d) Class interval and class mark. (e) Bar diagram and histogram. (f) Abscissa and ordinate.

2.7 The distribution of ages of striped bass captured by hook and line from the East River and the Hudson River during 1980 was reported as follows (Young, 1981):

Age        1   2   3   4   5   6
Frequency  13  49  96  28  16  [?]

Show this distribution in the form of a bar diagram.

2.8 Make a stem-and-leaf display of the data given in Exercise 2.4.

2.9 How precisely should you measure the wing length of a species of mosquitoes in a study of geographic variation if the smallest specimen has a length of about 2.8 mm and the largest a length of about 3.5 mm?

CHAPTER 3

Descriptive Statistics

An early and fundamental stage in any science is the descriptive stage. Until phenomena can be accurately described, an analysis of their causes is premature. The question "What?" comes before "How?" Unless we know something about the usual distribution of the sugar content of blood in a population of guinea pigs, as well as its fluctuations from day to day and within days, we shall be unable to ascertain the effect of a given dose of a drug upon this variable. In a sizable sample it would be tedious to obtain our knowledge of the material by contemplating each individual observation. We need some form of summary to permit us to deal with the data in manageable form, as well as to be able to share our findings with others in scientific talks and publications. A histogram or bar diagram of the frequency distribution would be one type of summary. However, for most purposes, a numerical summary is needed to describe concisely, yet accurately, the properties of the observed frequency distribution. Quantities providing such a summary are called descriptive statistics. This chapter will introduce you to some of them and show how they are computed. Two kinds of descriptive statistics will be discussed in this chapter: statistics of location and statistics of dispersion. The statistics of location (also known as


measures of central tendency) describe the position of a sample along a given dimension representing a variable. For example, after we measure the length of the animals within a sample, we will then want to know whether the animals are closer, say, to 2 cm or to 20 cm. To express a representative value for the sample of observations, here the length of the animals, we use a statistic of location. But statistics of location will not describe the shape of a frequency distribution. The shape may be long or very narrow, may be humped or U-shaped, may contain two humps, or may be markedly asymmetrical. Quantitative measures of such aspects of frequency distributions are required. To this end we need to define and study the statistics of dispersion. The arithmetic mean, described in Section 3.1, is undoubtedly the most important single statistic of location, but others (the geometric mean, the harmonic mean, the median, and the mode) are briefly mentioned in Sections 3.2, 3.3, and 3.4. A simple statistic of dispersion (the range) is briefly noted in Section 3.5, and the standard deviation, the most common statistic for describing dispersion, is explained in Section 3.6. Our first encounter with contrasts between sample statistics and population parameters occurs in Section 3.7, in connection with statistics of location and dispersion. In Section 3.8 there is a description of practical methods for computing the mean and standard deviation. The coefficient of variation (a statistic that permits us to compare the relative amount of dispersion in different samples) is explained in the last section (Section 3.9).
The techniques that will be at your disposal after you have mastered this chapter will not be very powerful in solving biological problems, but they will be indispensable tools for any further work in biostatistics. Other descriptive statistics, of both location and dispersion, will be taken up in later chapters. An important note: We shall first encounter the use of logarithms in this chapter. To avoid confusion, common logarithms have been consistently abbreviated as log, and natural logarithms as ln. Thus, log x means log_10 x and ln x means log_e x.

3.1 The arithmetic mean

The most common statistic of location is familiar to everyone. It is the arithmetic mean, commonly called the mean or average. The mean is calculated by summing all the individual observations or items of a sample and dividing this sum by the number of items in the sample. For instance, as the result of a gas analysis in a respirometer an investigator obtains the following four readings of oxygen percentages and sums them:

14.9
10.8
12.3
23.3
Sum = 61.3

The investigator calculates the mean oxygen percentage as the sum of the four items divided by the number of items. Thus the average oxygen percentage is

Mean = 61.3/4 = 15.325%
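The same computation takes only a line or two of code, using the four readings above:

```python
readings = [14.9, 10.8, 12.3, 23.3]  # oxygen percentages from the respirometer

total = sum(readings)
mean = total / len(readings)
print(round(total, 2), round(mean, 3))  # 61.3 15.325
```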

Calculating a mean presents us with the opportunity for learning statistical symbolism. We have already seen (Section 2.2) that an individual observation is symbolized by Y_i, which stands for the ith observation in the sample. Four observations could be written symbolically as follows:
Y_1, Y_2, Y_3, Y_4

We shall define n, the sample size, as the number of items in a sample. In this particular instance, the sample size n is 4. Thus, in a large sample, we can symbolize the array from the first to the nth item as follows:
Y_1, Y_2, ..., Y_n

When we wish to sum items, we use the following notation:

Σ_{i=1}^{i=n} Y_i = Y_1 + Y_2 + ··· + Y_n

The capital Greek sigma, Σ, simply means the sum of the items indicated. The i = 1 means that the items should be summed, starting with the first one and ending with the nth one, as indicated by the i = n above the Σ. The subscript and superscript are necessary to indicate how many items should be summed. The "i =" in the superscript is usually omitted as superfluous. For instance, if we had wished to sum only the first three items, we would have written Σ_{i=1}^{3} Y_i. On the other hand, had we wished to sum all of them except the first one, we would have written Σ_{i=2}^{n} Y_i. With some exceptions (which will appear in later chapters), it is desirable to omit subscripts and superscripts, which generally add to the apparent complexity of the formula and, when they are unnecessary, distract the student's attention from the important relations expressed by the formula. Below are seen increasing simplifications of the complete summation notation shown at the extreme left:

Σ_{i=1}^{i=n} Y_i = Σ_{i=1}^{n} Y_i = Σ_i Y_i = Σ^n Y = Σ Y

The third symbol might be interpreted as meaning, "Sum the Y_i's over all available values of i." This is a frequently used notation, although we shall not employ it in this book. The next, with n as a superscript, tells us to sum n items of Y; note that the i subscript of the Y has been dropped as unnecessary. Finally, the simplest notation is shown at the right. It merely says sum the Y's. This will be the form we shall use most frequently: if a summation sign precedes a variable, the summation will be understood to be over n items (all the items in the sample) unless subscripts or superscripts specifically tell us otherwise.

We shall use the symbol Ȳ for the arithmetic mean of the variable Y. Its formula is written as follows:

Ȳ = (Σ Y) / n    (3.1)

This formula tells us, "Sum all the (n) items and divide the sum by n." The mean of a sample is the center of gravity of the observations in the sample. If you were to draw a histogram of an observed frequency distribution on a sheet of cardboard and then cut out the histogram and lay it flat against a blackboard, supporting it with a pencil beneath, chances are that it would be out of balance, toppling to either the left or the right. If you moved the supporting pencil point to a position about which the histogram would exactly balance, this point of balance would correspond to the arithmetic mean.

We often must compute averages of means or of other statistics that may differ in their reliabilities because they are based on different sample sizes. At other times we may wish the individual items to be averaged to have different weights or amounts of influence. In all such cases we compute a weighted average. A general formula for calculating the weighted average of a set of values Y_i is as follows:

Ȳ_w = (Σ w_i Y_i) / (Σ w_i)    (3.2)

where n variates, each weighted by a factor w_i, are being averaged. The values of Y_i in such cases are unlikely to represent variates. They are more likely to be sample means Ȳ_i or some other statistics of different reliabilities. The simplest case in which this arises is when the Y_i are not individual variates but are means. Thus, if the following three means are based on differing sample sizes, as shown,

Ȳ_i     n_i
3.85    12
5.21    25
4.70     8

their weighted average will be

Ȳ_w = [(12)(3.85) + (25)(5.21) + (8)(4.70)] / (12 + 25 + 8) = 214.05 / 45 = 4.76

Note that in this example, computation of the weighted mean is exactly equivalent to adding up all the original measurements and dividing the sum by the total number of the measurements. Thus, the sample with 25 observations, having the highest mean, will influence the weighted average in proportion to its size.
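Expression (3.2) can be checked directly in a few lines (the variable names are ours):

```python
means = [3.85, 5.21, 4.70]   # the Y_i (here, sample means)
sizes = [12, 25, 8]          # the weights w_i (here, sample sizes)

weighted_sum = sum(w * y for w, y in zip(sizes, means))
weighted_mean = weighted_sum / sum(sizes)
print(round(weighted_sum, 2), round(weighted_mean, 2))  # 214.05 4.76
```

The result, 4.76, equals the grand mean that would be obtained by pooling all 45 original measurements, as the text notes.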

3.2 Other means

We shall see in Chapters 10 and 11 that variables are sometimes transformed into their logarithms or reciprocals. If we calculate the means of such transformed variables and then change the means back into the original scale, these means will not be the same as if we had computed the arithmetic means of the original variables. The resulting means have received special names in statistics. The back-transformed mean of the logarithmically transformed variables is called the geometric mean. It is computed as follows:

GM_Y = antilog [(1/n) Σ log Y]    (3.3)

which indicates that the geometric mean GM_Y is the antilogarithm of the mean of the logarithms of variable Y. Since addition of logarithms is equivalent to multiplication of their antilogarithms, there is another way of representing this quantity; it is

GM_Y = (Y_1 Y_2 Y_3 ··· Y_n)^(1/n)    (3.4)

The geometric mean permits us to become familiar with another operator symbol: capital pi, Π, which may be read as "product." Just as Σ symbolizes summation of the items that follow it, so Π symbolizes the multiplication of the items that follow it. The subscripts and superscripts have exactly the same meaning as in the summation case. Thus, Expression (3.4) for the geometric mean can be rewritten more compactly as follows:

GM_Y = ⁿ√(Π[i=1 to n] Yᵢ)        (3.4a)

The computation of the geometric mean by Expression (3.4a) is quite tedious. In practice, the geometric mean has to be computed by transforming the variates into logarithms.

The reciprocal of the arithmetic mean of reciprocals is called the harmonic mean. If we symbolize it by H_Y, the formula for the harmonic mean can be written in concise form (without subscripts and superscripts) as

1/H_Y = (1/n) Σ (1/Y)

You may wish to convince yourself that the geometric mean and the harmonic mean of the four oxygen percentages are 14.65% and 14.09%, respectively. Unless the individual items do not vary, the geometric mean is always less than the arithmetic mean, and the harmonic mean is always less than the geometric mean.

Some beginners in statistics have difficulty in accepting the fact that measures of location or central tendency other than the arithmetic mean are permissible or even desirable. They feel that the arithmetic mean is the "logical"
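Both back-transformed means can be checked numerically; the sketch below (Python) uses the four oxygen percentage readings of Section 3.1.

```python
import math

def geometric_mean(data):
    # Antilog of the mean of the logarithms, as in Expression (3.3)
    return math.exp(sum(math.log(y) for y in data) / len(data))

def harmonic_mean(data):
    # Reciprocal of the arithmetic mean of reciprocals
    return len(data) / sum(1 / y for y in data)

oxygen = [14.9, 10.8, 12.3, 23.3]  # percentages from Section 3.1
print(round(geometric_mean(oxygen), 2))  # 14.65
print(round(harmonic_mean(oxygen), 2))   # 14.09
```

Note that the harmonic mean falls below the geometric mean, which in turn falls below the arithmetic mean (15.325), as stated above.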


average, and that any other mean would be a distortion. This whole problem relates to the proper scale of measurement for representing data; this scale is not always the linear scale familiar to everyone, but is sometimes by preference a logarithmic or reciprocal scale. If you have doubts about this question, we shall try to allay them in Chapter 10, where we discuss the reasons for transforming variables.

3.3 The median

The median M is a statistic of location occasionally useful in biological research. It is defined as that value of the variable (in an ordered array) that has an equal number of items on either side of it. Thus, the median divides a frequency distribution into two halves. In the following sample of five measurements,

14, 15, 16, 19, 23

M = 16, since the third observation has an equal number of observations on both sides of it. We can visualize the median easily if we think of an array from largest to smallest, for example, a row of men lined up by their heights. The median individual will then be that man having an equal number of men on his right and left sides. His height will be the median height of the sample considered. This quantity is easily evaluated from a sample array with an odd number of individuals. When the number in the sample is even, the median is conventionally calculated as the midpoint between the (n/2)th and the [(n/2) + 1]th variate. Thus, for the sample of four measurements

14, 15, 16, 19

the median would be the midpoint between the second and third items, or 15.5.

Whenever any one value of a variate occurs more than once, problems may develop in locating the median. Computation of the median item becomes more involved because all the members of a given class in which the median item is located will have the same class mark. The median then is the (n/2)th variate in the frequency distribution. It is usually computed as that point between the class limits of the median class where the median individual would be located (assuming the individuals in the class were evenly distributed).

The median is just one of a family of statistics dividing a frequency distribution into equal areas.
It divides the distribution into two halves. The three quartiles cut the distribution at the 25, 50, and 75% points, that is, at points dividing the distribution into first, second, third, and fourth quarters by area (and frequencies). The second quartile is, of course, the median. (There are also quintiles, deciles, and percentiles, dividing the distribution into 5, 10, and 100 equal portions, respectively.)

Medians are most often used for distributions that do not conform to the standard probability models, so that nonparametric methods (see Chapter 10) must be used. Sometimes the median is a more representative measure of location than the arithmetic mean. Such instances almost always involve asymmetric distributions. An often quoted example from economics would be a suitable measure of location for the "typical" salary of an employee of a corporation. The very high salaries of the few senior executives would shift the arithmetic mean, the center of gravity, toward a completely unrepresentative value. The median, on the other hand, would be little affected by a few high salaries; it would give the particular point on the salary scale above which lie 50% of the salaries in the corporation, the other half being lower than this figure. In biology an example of the preferred application of a median over the arithmetic mean may be in populations showing skewed distributions, such as weights. Thus a median weight of American males 50 years old may be a more meaningful statistic than the average weight.

The median is also of importance in cases where it may be difficult or impossible to obtain and measure all the items of a sample. For example, suppose an animal behaviorist is studying the time it takes for a sample of animals to perform a certain behavioral step. The variable he is measuring is the time from the beginning of the experiment until each individual has performed. What he wants to obtain is an average time of performance. Such an average time, however, can be calculated only after records have been obtained on all the individuals. It may take a long time for the slowest animals to complete their performance, longer than the observer wishes to spend. (Some of them may never respond appropriately, making the computation of a mean impossible.) Therefore, a convenient statistic of location to describe these animals may be the median time of performance.
Thus, so long as the observer knows what the total sample size is, he need not have measurements for the right-hand tail of his distribution. Similar examples would be the responses to a drug or poison in a group of individuals (the median lethal or effective dose, LD50 or ED50) or the median time for a mutation to appear in a number of lines of a species.
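The conventions for odd and even sample sizes described above can be sketched as follows (Python):

```python
def median(data):
    """Middle variate of the ordered array; for an even number of
    items, the midpoint between the (n/2)th and [(n/2)+1]th variates."""
    ordered = sorted(data)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([14, 15, 16, 19, 23]))  # 16
print(median([14, 15, 16, 19]))      # 15.5
```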

3.4 The mode
The mode refers to the value represented by the greatest number of individuals.

When seen on a frequency distribution, the mode is the value of the variable at which the curve peaks. In grouped frequency distributions the mode as a point has little meaning. It usually suffices to identify the modal class. In biology, the mode does not have many applications.

Distributions having two peaks (equal or unequal in height) are called bimodal; those with more than two peaks are multimodal. In those rare distributions that are U-shaped, we refer to the low point at the middle of the distribution as an antimode.
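Locating the mode (or modes) of a sample amounts to counting repeated values; a sketch with small hypothetical data sets:

```python
from collections import Counter

def modes(data):
    """All values represented by the greatest number of individuals;
    a bimodal sample returns two values, a multimodal one more."""
    counts = Counter(data)
    peak = max(counts.values())
    return sorted(value for value, c in counts.items() if c == peak)

print(modes([5, 7, 7, 8, 7, 9, 5]))  # [7]
print(modes([5, 5, 7, 8, 8, 9]))     # [5, 8] (bimodal)
```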

In evaluating the relative merits of the arithmetic mean, the median, and the mode, a number of considerations have to be kept in mind. The mean is generally preferred in statistics, since it has a smaller standard error than other statistics of location (see Section 6.2), it is easier to work with mathematically, and it has an additional desirable property (explained in Section 6.1): it will tend to be normally distributed even if the original data are not. The mean is

markedly affected by outlying observations; the median and mode are not. The mean is generally more sensitive to changes in the shape of a frequency distribution, and if it is desired to have a statistic reflecting such changes, the mean may be preferred.

In symmetrical, unimodal distributions the mean, the median, and the mode are all identical. A prime example of this is the well-known normal distribution of Chapter 5. In a typical asymmetrical distribution, such as the one shown in Figure 3.1, the relative positions of the mode, median, and mean are generally these: the mean is closest to the drawn-out tail of the distribution, the mode is farthest, and the median is between these. An easy way to remember this sequence is to recall that they occur in alphabetical order from the longer tail of the distribution.

FIGURE 3.1 An asymmetrical frequency distribution (skewed to the right) showing location of the mean, median, and mode. Percent butterfat in 120 samples of milk (from a Canadian cattle breeders' record book).

3.1) is Range = 4.3..5 / THE RANGE 35 10 8 6 4 2 0 10 8 Uh £ 6 α. 2 I T h r e e frequency d i s t r i b u t i o n s h a v i n g identical m e a n s a n d s a m p l e si/. which is defined as the difference between the largest and the smallest items in a sample.1 m m Since the range is a m e a s u r e of the s p a n of the variates a l o n g the scale of the variable.3 = 1.7 .5".es but differing in dispersion pattern.3 .1) is R a n g e = 23. . O n e simple m e a s u r e of dispersion is the range. T h e range is clearly affected by even a single outlying value a n d for this reason is only a rnuoh estimate of the dtsriersion of all the items in the samtnle. a n d the range of the a p h i d femur lengths (Box 2.8 = 12. it is in the same units as the original m e a s u r e m e n t s . αϊ 4 2 0 10 8 (i 1 0 FIGURE 3 .10.4 units of 0. the range of the four oxygen percentages listed earlier (Section 3. Thus.

3.6 The standard deviation

We desire that a measure of dispersion take all items of a distribution into consideration, weighting each item by its distance from the center of the distribution. We shall now try to construct such a statistic. In Table 3.1 we show a sample of 15 blood neutrophil counts from patients with tumors. Column (1) shows the variates in the order in which they were reported. The computation of the mean is shown below the table. The mean neutrophil count turns out to be 7.713.

The distance of each variate from the mean is computed as the following deviation:

y = Y - Ȳ

Each individual deviation, or deviate, is by convention computed as the individual observation minus the mean, Y - Ȳ, rather than the reverse, Ȳ - Y. Deviates are symbolized by lowercase letters corresponding to the capital letters of the variables. Column (2) in Table 3.1 gives the deviates computed in this manner.

TABLE 3.1
The standard deviation. Long method, not recommended for hand or calculator computations but shown here to illustrate the meaning of the standard deviation. The data are blood neutrophil counts (divided by 1000) per microliter, in 15 patients with nonhematological tumors.

(1) Y        (2) y = Y - Ȳ        (3) y²
  4.9            -2.81             7.9148
  4.6            -3.11             9.6928
  5.5            -2.21             4.8988
  9.1             1.39             1.9228
 16.3             8.59            73.7308
 12.7             4.99            24.8668
  6.4            -1.31             1.7248
  7.1            -0.61             0.3762
  2.3            -5.41            29.3042
  3.6            -4.11            16.9195
 18.0            10.29           105.8155
  3.7            -4.01            16.1068
  7.3            -0.41             0.1708
  4.4            -3.31            10.9782
  9.8             2.09             4.3542

Total 115.7       0.05           308.7770

Mean Ȳ = ΣY/n = 115.7/15 = 7.713

We now wish to calculate an average deviation that will sum all the deviates and divide them by the number of deviates in the sample. But note that when we sum our deviates, negative and positive deviates cancel out, as is shown by the sum at the bottom of column (2); this sum appears to be unequal to zero only because of a rounding error. Deviations from the arithmetic mean always sum to zero because the mean is the center of gravity. Consequently, an average based on the sum of deviations would also always equal zero. (For example, the true mean of the four oxygen percentage readings in Section 3.1 is 15.325, and the four deviations from this mean sum to exactly zero.) You are urged to study Appendix A1.1, which demonstrates that the sum of deviations around the mean of a sample is equal to zero.

Squaring the deviates gives us column (3) of Table 3.1 and enables us to reach a result other than zero. (Squaring the deviates also holds other mathematical advantages, which we shall take up in Sections 7.5 and 11.3.) The sum of the squared deviates (in this case, 308.7770) is a very important quantity in statistics. It is called the sum of squares and is identified symbolically as Σy². Another common symbol for the sum of squares is SS.

The next step is to obtain the average of the n squared deviations. The resulting quantity is known as the variance, or the mean square:

Variance = Σy²/n = 308.7770/15 = 20.5851

The variance is a measure of fundamental importance in statistics, and we shall employ it throughout this book. At the moment, we need only remember that because of the squaring of the deviations, the variance is expressed in squared units. To undo the effect of the squaring, we now take the positive square root of the variance and obtain the standard deviation:

Standard deviation = √20.5851 = 4.537

Thus, standard deviation is again expressed in the original units of measurement, since it is a square root of the squared units of the variance. The standard deviation of the 15 neutrophil counts is 4.537.

The observant reader may have noticed that we have avoided assigning any symbol to either the variance or the standard deviation. We shall explain why in the next section.

An important note: The technique just learned and illustrated in Table 3.1 is not the simplest for direct computation of a variance and standard deviation. Alternative and simpler computational methods are given in Section 3.8.

3.7 Sample statistics and parameters

Up to now we have calculated statistics from samples without giving too much thought to what these statistics represent. When correctly calculated, a mean and standard deviation will always be absolutely true measures of location and dispersion for the samples on which they are based.
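The long method of Table 3.1 can be replayed in a few lines (Python); the fifteen neutrophil counts are those of the table.

```python
# "Long method": deviates from the mean, their sum (zero, apart from
# rounding), the sum of squares, and the standard deviation over n.
counts = [4.9, 4.6, 5.5, 9.1, 16.3, 12.7, 6.4, 7.1,
          2.3, 3.6, 18.0, 3.7, 7.3, 4.4, 9.8]

n = len(counts)
mean = sum(counts) / n
deviates = [y - mean for y in counts]           # y = Y - mean
sum_of_squares = sum(y * y for y in deviates)   # sum of y squared

print(round(mean, 3))                          # 7.713
print(abs(sum(deviates)) < 1e-9)               # True: deviates cancel
print(round(sum_of_squares, 4))                # 308.7773
print(round((sum_of_squares / n) ** 0.5, 3))   # 4.537
```

The exact sum of squares, 308.7773, differs in the last decimal from the total of column (3), which was obtained by summing rounded squares.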

However, only rarely in biology are we interested in measures of location and dispersion only as descriptive summaries of the samples we have studied. Almost always we are interested in the populations from which the samples have been taken. Thus, we would like to know the true mean neutrophil count of the population of patients with nonhematological tumors, not merely the mean of the 15 individuals measured. Similarly, what we want to know is not the mean of the particular four oxygen percentages, but rather the true oxygen percentage of the universe of readings from which the four readings have been sampled. Who would be able to collect all the patients with this particular disease and measure their neutrophil counts? Thus we need to use sample statistics as estimators of population statistics or parameters. These population statistics, however, are unknown and (generally speaking) are unknowable.

It is conventional in statistics to use Greek letters for population parameters and Roman letters for sample statistics. Thus, the sample mean Ȳ estimates μ, the parametric mean of the population. Similarly, a sample variance, symbolized by s², estimates a parametric variance, symbolized by σ². Such estimators should be unbiased. By this we mean that samples (regardless of the sample size) taken from a population with a known parameter should give sample statistics that, when averaged, will give the parametric value. An estimator that does not do so is called biased.

The sample mean Ȳ is an unbiased estimator of the parametric mean μ. However, the sample variance as computed in Section 3.6 is not unbiased. On the average, it will underestimate the magnitude of the population variance σ². To overcome this bias, mathematical statisticians have shown that when sums of squares are divided by n - 1 rather than by n the resulting sample variances will be unbiased estimators of the population variance. For this reason, it is customary to compute variances by dividing the sum of squares by n - 1. The quantity n - 1 is generally referred to as the degrees of freedom, and it may be assumed that when the symbol s² is encountered, it refers to a variance obtained by division of the sum of squares by the degrees of freedom. The formula for the standard deviation is therefore customarily given as follows:

s = √(Σy²/(n - 1))        (3.6)

In the neutrophil-count data the standard deviation would thus be computed as

s = √(308.7770/14) = 4.696

We note that this value is slightly larger than our previous estimate of 4.537. Obviously, the greater the sample size, the less difference there will be between division by n and by n - 1. However, regardless of sample size, it is good practice to divide a sum of squares by n - 1 when computing a variance or standard deviation. When studying dispersion we generally wish to learn the true standard deviations of the populations and not those of the samples.
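The two divisors can be compared with the standard library's two estimators (Python's statistics module), again on the neutrophil counts of Table 3.1:

```python
import statistics

counts = [4.9, 4.6, 5.5, 9.1, 16.3, 12.7, 6.4, 7.1,
          2.3, 3.6, 18.0, 3.7, 7.3, 4.4, 9.8]

sd_biased = statistics.pstdev(counts)   # divisor n: describes this sample only
sd_unbiased = statistics.stdev(counts)  # divisor n - 1: estimates sigma

print(round(sd_biased, 3))    # 4.537
print(round(sd_unbiased, 3))  # 4.696
```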

Division of the sum of squares by n is appropriate only when the interest of the investigator is limited to the sample at hand and to its variance and standard deviation as descriptive statistics of the sample. This would be in contrast to using these as estimates of the population parameters. There are also the rare cases in which the investigator possesses data on the entire population; in such cases division by n is perfectly justified, because then the investigator is not estimating a parameter but is in fact evaluating it. Thus the variance of the wing lengths of all adult whooping cranes would be a parametric value; similarly, if the heights of all winners of the Nobel Prize in physics had been measured, their variance would be a parameter since it would be based on the entire population.

3.8 Practical methods for computing mean and standard deviation

Three steps are necessary for computing the standard deviation: (1) find Σy², the sum of squares; (2) divide by n - 1 to give the variance; and (3) take the square root of the variance to obtain the standard deviation. The procedure used to compute the sum of squares in Section 3.6 can be expressed by the following formula:

Σy² = Σ(Y - Ȳ)²        (3.7)

This formulation explains most clearly the meaning of the sum of squares, although it may be inconvenient for computation by hand or calculator, since one must first compute the mean before one can square and sum the deviations. A quicker computational formula for this quantity is

Σy² = ΣY² - (ΣY)²/n        (3.8)

Let us see exactly what this formula represents. The first term on the right side of the equation, ΣY², is the sum of all individual Y's, each squared, as follows:

ΣY² = Y₁² + Y₂² + Y₃² + · · · + Yₙ²

When referred to by name, ΣY² should be called the "sum of Y squared" and should be carefully distinguished from Σy², "the sum of squares of Y." These names are unfortunate, but they are too well established to think of amending them. The other quantity in Expression (3.8) is (ΣY)²/n. It is often called the correction term (CT). The numerator of this term is the square of the sum of the Y's; that is, all the Y's are first summed, and this sum is then squared. In general, this quantity is different from ΣY², which first squares the Y's and then sums them. These two terms are identical only if all the Y's are equal. If you are not certain about this, you can convince yourself of this fact by calculating these two quantities for a few numbers.

Why is Expression (3.8) identical with Expression (3.7)? The proof of this identity is very simple and is given in Appendix A1.2. You are urged to work through it to build up your confidence in handling statistical symbols and formulas.

The disadvantage of Expression (3.8) is that the quantities ΣY² and (ΣY)²/n may both be quite large, so that accuracy may be lost in computing their difference unless one takes the precaution of carrying sufficient significant figures. For this reason the direct computation of Expression (3.7) is often used in computer programs, where accuracy of computations is an important consideration.
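The identity of Expressions (3.7) and (3.8) is easy to check numerically; the five values below are arbitrary.

```python
def ss_deviates(data):
    # Expression (3.7): sum of squared deviations from the mean
    mean = sum(data) / len(data)
    return sum((y - mean) ** 2 for y in data)

def ss_quick(data):
    # Expression (3.8): sum of Y squared minus the correction term
    return sum(y * y for y in data) - sum(data) ** 2 / len(data)

data = [4.9, 4.6, 5.5, 9.1, 16.3]
print(abs(ss_deviates(data) - ss_quick(data)) < 1e-8)  # True
```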

It is sometimes possible to simplify computations by recoding variates into simpler form. We shall use the term additive coding for the addition or subtraction of a constant (since subtraction is only addition of a negative number). We shall similarly use multiplicative coding to refer to the multiplication or division by a constant (since division is multiplication by the reciprocal of the divisor). We shall use the term combination coding to mean the application of both additive and multiplicative coding to the same set of data. In Appendix A1.3 we examine the consequences of the three types of coding in the computation of means, variances, and standard deviations.

For the case of means, the formula for combination coding and decoding is the most generally applicable one. If the coded variable is Yc = D(Y + C), then

Ȳ = Ȳc/D - C

where C is an additive code and D is a multiplicative code.

On considering the effects of coding variates on the values of variances and standard deviations, we find that additive codes have no effect on the sums of squares, variances, or standard deviations. The mathematical proof is given in Appendix A1.3, but we can see this intuitively, because an additive code has no effect on the distance of an item from its mean. The distance from an item of 15 to its mean of 10 would be 5. If we were to code the variates by subtracting a constant of 10, the item would now be 5 and the mean zero. The difference between them would still be 5. Thus, if only additive coding is employed, the only statistic in need of decoding is the mean. But multiplicative coding does have an effect on sums of squares, variances, and standard deviations, because these are squared terms and the multiplicative factor becomes squared during the operations. The sums of squares or variances have to be divided by the multiplicative codes squared; the standard deviations have to be divided by the multiplicative code. In combination coding the additive code can be ignored when decoding sums of squares, variances, and standard deviations; only the multiplicative code needs to be removed.

When the data are unordered, the computation of the mean and standard deviation proceeds as in Box 3.1, which is based on the unordered neutrophil-count data shown in Table 3.1. We chose not to apply coding to these data, since it would not have simplified the computations appreciably.

When the data are arrayed in a frequency distribution, the computations can be made much simpler, and you can often avoid the need for manual entry of large numbers of individual variates if you first set up a frequency distribution. Sometimes the data will come to you already in the form of a frequency distribution, having been grouped by the researcher. The computation of Ȳ and s from a frequency distribution is illustrated in Box 3.2. The data are the birth weights of male Chinese children, first encountered in Figure 2.3. The calculation is simplified by coding to remove the awkward class marks. This is done by subtracting 59.5, the lowest class mark of the array, and then dividing the differences by 8.
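The decoding rules can be verified on a few of the Box 3.2 class marks; the sketch below assumes the combination code of that box (subtract 59.5, then divide by 8).

```python
import statistics

C, D = -59.5, 1 / 8                       # Yc = D(Y + C)
raw = [59.5, 67.5, 75.5, 83.5, 91.5]      # a few class marks
coded = [D * (y + C) for y in raw]        # 0.0, 1.0, 2.0, 3.0, 4.0

mean = statistics.mean(coded) / D - C     # the mean needs both codes undone
sd = statistics.stdev(coded) / D          # s needs only the multiplicative code

print(mean == statistics.mean(raw))             # True
print(abs(sd - statistics.stdev(raw)) < 1e-9)   # True
```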

BOX 3.1
Calculation of Ȳ and s from unordered data. Neutrophil counts, unordered, as shown in Table 3.1.

Computation
n = 15
ΣY = 115.7
Ȳ = ΣY/n = 115.7/15 = 7.713
ΣY² = 1201.21
Σy² = ΣY² - (ΣY)²/n = 1201.21 - (115.7)²/15 = 308.7773
s² = Σy²/(n - 1) = 308.7773/14 = 22.056
s = √22.056 = 4.696

The resulting class marks are values such as 0, 8, 16, 24, 32, and so on. They are then divided by 8, which changes them to 0, 1, 2, 3, 4, and so on, which is the desired format. The details of the computation can be learned from the box.

When checking the results of calculations, it is frequently useful to have an approximate method for estimating statistics so that gross errors in computation can be detected. A simple method for estimating the mean is to average the largest and smallest observation to obtain the so-called midrange. For the neutrophil counts of Table 3.1, this value is (2.3 + 18.0)/2 = 10.15 (not a very good estimate). Standard deviations can be estimated from ranges by appropriate division of the range, as follows:

For samples of        Divide the range by
      10                      3
      30                      4
     100                      5
     500                      6
    1000                      6
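The quick checks just described can be sketched on the neutrophil counts of Table 3.1:

```python
counts = [4.9, 4.6, 5.5, 9.1, 16.3, 12.7, 6.4, 7.1,
          2.3, 3.6, 18.0, 3.7, 7.3, 4.4, 9.8]

midrange = (max(counts) + min(counts)) / 2      # rough estimate of the mean
sd_estimate = (max(counts) - min(counts)) / 4   # range over 4, as in the text

print(round(midrange, 2))     # 10.15
print(round(sd_estimate, 3))  # 3.925
```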

BOX 3.2
Calculation of Ȳ, s, and V from a frequency distribution. Birth weights of male Chinese in ounces.

(1) Class mark Y    (2) Coded class mark Yc    (3) f
      59.5                    0                     2
      67.5                    1                     6
      75.5                    2                    39
      83.5                    3                   385
      91.5                    4                   888
      99.5                    5                  1729
     107.5                    6                  2240
     115.5                    7                  2007
     123.5                    8                  1233
     131.5                    9                   641
     139.5                   10                   201
     147.5                   11                    74
     155.5                   12                    14
     163.5                   13                     5
     171.5                   14                     1
                                            9465 = n

Source: Millis and Seng (1954).

Computation with coding and decoding
Code: Yc = (Y - 59.5)/8
ΣfYc = 59,629
Ȳc = ΣfYc/n = 6.300
ΣfYc² = 402,987
CT = (ΣfYc)²/n = 375,659.550
Σfy²c = ΣfYc² - CT = 27,327.450
s²c = Σfy²c/(n - 1) = 2.888
sc = 1.6991

To decode Ȳc:  Ȳ = 8Ȳc + 59.5 = 50.4 + 59.5 = 109.9 oz
To decode sc:  s = 8sc = 13.593 oz
V = (s × 100)/Ȳ = (13.593 × 100)/109.9 = 12.369%
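Box 3.2 can be recomputed without coding, directly on the original scale; a sketch using the class marks and frequencies of the box:

```python
# Mean, standard deviation, and V from a frequency distribution:
# each class mark is weighted by its class frequency.
marks = [59.5 + 8 * i for i in range(15)]   # 59.5, 67.5, ..., 171.5
freqs = [2, 6, 39, 385, 888, 1729, 2240, 2007,
         1233, 641, 201, 74, 14, 5, 1]

n = sum(freqs)                                              # 9465
mean = sum(f * y for f, y in zip(freqs, marks)) / n
ss = sum(f * (y - mean) ** 2 for f, y in zip(freqs, marks))
s = (ss / (n - 1)) ** 0.5
v = 100 * s / mean

print(round(mean, 1), round(s, 2), round(v, 2))  # 109.9 13.59 12.37
```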

The range of the neutrophil counts of Table 3.1 is 15.7. When this value is divided by 4, we obtain an estimate for the standard deviation of 3.925, which compares with the calculated value of 4.696 in Box 3.1. Similarly, when we estimate the mean and standard deviation of the aphid femur lengths of Box 2.1 in this manner, we obtain 4.0 and 0.35, respectively. These are good estimates of the actual values of 4.004 and 0.3656, the sample mean and standard deviation.

3.9 The coefficient of variation

Having obtained the standard deviation as a measure of the amount of variation in the data, you may be led to ask, "Now what?" At this stage in our comprehension of statistical theory, the only use that we might have for the standard deviation is as an estimate of the amount of variation in a population. So far, nothing really useful comes of the computations we have carried out. However, the skills just learned are basic to all later statistical work.

Often, we may wish to compare the magnitudes of the standard deviations of similar populations and see whether population A is more or less variable than population B. When populations differ appreciably in their means, the direct comparison of their variances or standard deviations is less useful, since larger organisms usually vary more than smaller ones. For instance, the standard deviation of the tail lengths of elephants is obviously much greater than the entire tail length of a mouse. To compare the relative amounts of variation in populations having different means, the coefficient of variation, symbolized by V (or occasionally CV), has been developed. This is simply the standard deviation expressed as a percentage of the mean. Its formula is

V = (s × 100)/Ȳ

The coefficient of variation is independent of the unit of measurement and is expressed as a percentage. For example, the coefficient of variation of the birth weights in Box 3.2 is 12.37%, as shown at the bottom of that box.

Coefficients of variation are used when one wishes to compare the variation of two populations without considering the magnitude of their means. (It is probably of little interest to discover whether the birth weights of the Chinese children are more or less variable than the femur lengths of the aphid stem mothers.) Often, we shall wish to test whether a given biological sample is more variable for one character than for another. Thus, for a sample of rats, is body weight more variable than blood sugar content? A second, frequent type of comparison, especially in systematics, is among different populations for the same character. Thus, we may have measured wing length in samples of birds from several localities. We wish to know whether any one of these populations is more variable than the others. An answer to this question can be obtained by examining the coefficients of variation of wing length in these samples. For the aphid femur lengths of Box 2.1, V = (0.3656 × 100)/4.004 = 9.13%; comparison with the value of 12.37% for the birth weights would suggest that the birth weights are more variable.
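Because V divides out the mean, samples that differ only in scale of measurement have the same coefficient of variation; a sketch with hypothetical measurements:

```python
import statistics

def coefficient_of_variation(data):
    # V = (s * 100) / mean
    return 100 * statistics.stdev(data) / statistics.mean(data)

wing_mm = [14.9, 10.8, 12.3, 23.3, 15.1, 16.2]   # hypothetical, in mm
wing_tenths = [149, 108, 123, 233, 151, 162]     # same lengths in 0.1 mm

va = coefficient_of_variation(wing_mm)
vb = coefficient_of_variation(wing_tenths)
print(round(va, 2) == round(vb, 2))  # True
```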

Exercises

3.1 Find the mean, standard deviation, and coefficient of variation for the pigeon data given in Exercise 2.4. Group the data into ten classes, recompute Ȳ and s, and compare them with the results obtained from ungrouped data. Compute the median for the grouped data.

3.2 Find Ȳ, s, V, and the median for the following data (mg of glycine per mg of creatinine in the urine of 37 chimpanzees; from Gartler, Firschein, and Dobzhansky, 1956):

.008 .018 .011 .019 .025 .026 .036 .043 .050 .052 .055 .056 .060
.070 .077 .080 .100 .100 .100 .100 .100 .100 .100 .110 .110 .110
.116 .120 .120 .133 .135 .155 .300 .300 .350 .370 .440

ANS. Ȳ = 0.115, s = 0.10404.

3.3 The following are percentages of butterfat from 120 registered three-year-old Ayrshire cows selected at random from a Canadian stock record book:

4.32 4.24 4.29 4.00 3.96 4.48 3.89 4.02 3.74 4.42 4.20 3.87 4.10
4.00 4.33 3.81 4.33 4.16 3.88 4.81 4.23 4.67 3.74 4.25 4.28 4.03
4.42 4.09 4.15 4.29 4.27 4.38 4.49 4.05 3.97 4.32 4.67 4.11 4.24
5.00 4.60 4.38 3.72 3.99 4.00 4.46 4.82 3.91 4.71 3.96 4.09 4.88
4.58 3.89 4.66 4.60 4.38 4.14 4.66 3.97 4.22 3.47 3.58 4.12 4.06
4.10 3.97 4.40 4.70 4.38 3.56 3.22 3.86 4.58 4.89 4.97 5.02 4.34
3.70 3.77 5.06 4.44 4.05 3.94 3.89 4.07 4.75 4.99 4.17 4.06 4.17
3.66 4.70 3.97 3.97 4.20 4.41 4.31 3.70 3.83 4.24 4.30 4.17 3.97
4.20 4.51 4.80 4.36 3.72 3.70 4.38 4.91 4.66 4.10 3.80 4.24 4.05
4.42 3.66 4.50 4.92

(a) Calculate Ȳ, s, and V directly from the data. (b) Group the data in a frequency distribution and again calculate Ȳ, s, and V. Compare the results with those of (a). How much precision has been lost by grouping? Also calculate the median.

3.4 What effect would adding a constant 5.2 to all observations have upon the numerical values of the following statistics: Ȳ, s, V, average deviation, median, mode, range? What would be the effect of adding 5.2 and then multiplying the sums by 8.0? Would it make any difference in the above statistics if we multiplied by 8.0 first and then added 5.2?
3.5 Estimate μ and σ using the midrange and the range (see Section 3.8) for the data in Exercises 3.1, 3.2, and 3.3. How well do these estimates agree with the estimates given by Ȳ and s? ANS. Estimates of μ and σ for Exercise 3.2 are 0.224 and 0.1014.

3.6 Show that the equation for the variance can also be written as

s² = (ΣY² − nȲ²)/(n − 1)

3.7 Using the striped bass age distribution given in Exercise 2.9, compute the following statistics: Ȳ, s², s, V, median, and mode. ANS. Ȳ = 3.043, s² = 1.2661, s = 1.125, V = 36.98%, median = 2.948, mode = 3.

3.8 Use a calculator and compare the results of using Equations 3.7 and 3.8 to compute s² for the following artificial data sets:
(a) 1, 2, 3, 4, 5
(b) 9001, 9002, 9003, 9004, 9005
(c) 90001, 90002, 90003, 90004, 90005
(d) 900001, 900002, 900003, 900004, 900005
Compare your results with those of one or more computer programs. What is the correct answer? Explain your results.
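The point of Exercise 3.8, the numerical instability of the shortcut formula of Exercise 3.6, is easy to demonstrate. A sketch (ours, not the book's) comparing the two-pass deviation formula with the ΣY² − nȲ² shortcut; single precision is used to make the round-off failure visible on the book's data sets, whose true variance is 2.5 in every case:

```python
import numpy as np

def var_deviations(y):
    # Sum of squared deviations from the mean, divided by n - 1
    ybar = y.mean()
    return ((y - ybar) ** 2).sum() / (y.size - 1)

def var_shortcut(y):
    # Computational formula: (sum of Y^2 minus n * Ybar^2) / (n - 1)
    n = y.size
    return (np.sum(y * y) - n * y.mean() ** 2) / (n - 1)

for base in (0, 9000, 90000, 900000):
    y = np.arange(1.0, 6.0) + base        # e.g. 900001 ... 900005
    y32 = y.astype(np.float32)
    print(base, var_deviations(y32), var_shortcut(y32), var_shortcut(y))
```

The two-pass formula stays at 2.5 even in single precision, while the shortcut subtracts two nearly equal large numbers and loses all significant digits for the larger data sets; in double precision both survive these particular sets, which is why the exercise asks you to compare calculators and computer programs.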

CHAPTER 4

Introduction to Probability Distributions: The Binomial and Poisson Distributions

In Section 2.5 we first encountered frequency distributions. For example, Table 2.2 shows a distribution for a meristic, or discrete (discontinuous), variable, the number of sedge plants per quadrat. Examples of distributions for continuous variables are the femur lengths of aphids in Box 2.1 and the human birth weights in Box 3.2. Each of these distributions informs us about the absolute frequency of any given class and permits us to compute the relative frequencies of any class of variable. Thus, most of the quadrats contained either no sedges or one or two plants. Similarly, in the 139.5-oz class of birth weights, we find only 201 out of the total of 9465 babies recorded; that is, approximately 2.1% of the infants are in that birth weight class.

We realize, of course, that these frequency distributions are only samples from given populations. The birth weights, for example, represent a population of male Chinese infants from a given geographical area. But if we knew our sample to be representative of that population, we could make all sorts of predictions based upon the sample frequency distribution. For instance, we could say that approximately 2.1% of male Chinese babies born in this population should weigh between 135.5 and 143.5 oz at birth. Similarly, we might say that
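The relative frequency just quoted is simply the class count over the sample total; a one-line check in Python, using the two counts given above:

```python
# 201 of the 9465 recorded birth weights fall in the 139.5-oz class
rel_freq = 201 / 9465
print(round(100 * rel_freq, 1))  # 2.1 (percent)
```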

the probability that the weight at birth of any one baby in this population will be in the 139.5-oz class would be very low indeed, only 2.1%. If all of the 9465 weights were mixed up in a hat and a single one pulled out, the probability that we would pull out one of the 201 in the 139.5-oz birth class is quite low. It would be much more probable that we would sample an infant of 107.5 or 115.5 oz, since the infants in these classes are represented by frequencies 2240 and 2007, respectively. Finally, if we were to sample from an unknown population of babies and find that the very first individual sampled had a birth weight of 170 oz, we would probably reject any hypothesis that the unknown population was the same as that sampled in Box 3.2. We would arrive at this conclusion because in the distribution in Box 3.2 only one out of almost 10,000 infants had a birth weight that high. Though it is possible that we could have sampled from the population of male Chinese babies and obtained a birth weight of 170 oz, the probability that the first individual sampled would have such a value is very low indeed. It seems much more reasonable to suppose that the unknown population from which we are sampling has a larger mean than the one sampled in Box 3.2.

We have used this empirical frequency distribution to make certain predictions (with what frequency a given event will occur) or to make judgments and decisions (is it likely that an infant of a given birth weight belongs to this population?). This is a common use of frequency distributions in biology. Often, however, we shall wish to make such predictions not from empirical distributions, but on the basis of theoretical considerations that in our judgment are pertinent. We may feel that the data should be distributed in a certain way because of basic assumptions about the nature of the forces acting on the example at hand. If our actually observed data do not conform sufficiently to the values expected on the basis of these assumptions, we shall have serious doubts about our assumptions. We shall thus make use of probability theory to test our assumptions about the laws of occurrence of certain biological phenomena. The assumptions being tested generally lead to a theoretical frequency distribution, known also as a probability distribution. This may be a simple two-valued distribution, such as the 3:1 ratio in a Mendelian cross, or it may be a more complicated function, as it would be if we were trying to predict the number of plants in a quadrat. If we find that the observed data do not fit the expectations on the basis of theory, we are often led to the discovery of some biological mechanism causing this deviation from expectation. The phenomena of linkage in genetics, of preferential mating between different phenotypes in animal behavior, of congregation of animals at certain favored places or, conversely, their territorial dispersion are cases in point.

We should point out to the reader, however, that probability theory underlies the entire structure of statistics, though, owing to the nonmathematical orientation of this book, this may not be entirely obvious. In this chapter we shall first discuss probability, in Section 4.1, but only to the extent necessary for comprehension of the sections that follow at the intended level of mathematical sophistication. Next, in Section 4.2, we shall take up the

binomial frequency distribution, which is not only important in certain types of studies, such as genetics, but also fundamental to an understanding of the various kinds of probability distributions to be discussed in this book. The Poisson distribution, which follows in Section 4.3, is of wide applicability in biology, especially for tests of randomness of occurrence of certain events. Both the binomial and the Poisson distributions are discrete probability distributions. The most common continuous probability distribution is the normal frequency distribution, discussed in Chapter 5.

4.1 Probability, random sampling, and hypothesis testing

We shall start this discussion with an example that is not biometrical or biological in the strict sense. We have often found it pedagogically effective to introduce new concepts through situations thoroughly familiar to the student, even if the example is not relevant to the general subject matter of biostatistics.

Let us betake ourselves to Matchless University, a state institution somewhere between the Appalachians and the Rockies. Looking at its enrollment figures, we notice the following breakdown of the student body: 70% of the students are American undergraduates (AU) and 26% are American graduate students (AG); the remaining 4% are from abroad. Of these, 1% are foreign undergraduates (FU) and 3% are foreign graduate students (FG). In much of our work we shall use proportions rather than percentages as a useful convention. Thus the enrollment consists of 0.70 AU's, 0.26 AG's, 0.01 FU's, and 0.03 FG's. The total student body, corresponding to 100%, is therefore represented by the figure 1.0.

If we were to assemble all the students and sample 100 of them at random, we would intuitively expect that, on the average, 3 would be foreign graduate students. The actual outcome might vary: there might not be a single FG student among the 100 sampled, or there might be quite a few more than 3. The ratio of the number of foreign graduate students sampled divided by the total number of students sampled might therefore vary from zero to considerably greater than 0.03. If we increased our sample size to 500 or 1000, it is less likely that the ratio would fluctuate widely around 0.03. The greater the sample taken, the closer the ratio of FG students sampled to the total students sampled will approach 0.03. In fact, the probability of sampling a foreign student can be defined as the limit, as sample size keeps increasing, of the ratio of foreign students to the total number of students sampled. Thus, we may formally summarize the situation by stating that the probability that a student at Matchless University will be a foreign graduate student is P[FG] = 0.03. Similarly, the probability of sampling a foreign undergraduate is P[FU] = 0.01, that of sampling an American undergraduate is P[AU] = 0.70, and that for American graduate students, P[AG] = 0.26.

Now let us imagine the following experiment: we try to sample a student at random from among the student body at Matchless University. This is not as easy a task as might be imagined. If we wanted to do this operation physically,
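The limiting-ratio idea is easy to illustrate by simulation. A sketch assuming the Matchless University proportions given above; the use of random.choices and the fixed seed are our choices, not the book's:

```python
import random

probs = {"AU": 0.70, "AG": 0.26, "FU": 0.01, "FG": 0.03}

random.seed(1)
for n in (100, 1000, 100_000):
    sample = random.choices(list(probs), weights=probs.values(), k=n)
    # The ratio of FG students sampled should settle near 0.03
    print(n, sample.count("FG") / n)
```

With a sample of 100 the FG ratio can easily be 0 or 0.06; by 100,000 draws it fluctuates only in the third decimal place, which is the sense in which the probability is the limit of the ratio.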

we would have to set up a collection or trapping station somewhere on campus. We should try to locate our trap at some station where each student had an equal probability of passing. Few, if any, such places can be found in a university. The student union facilities are likely to be frequented more by independent and foreign students, less by those living in organized houses and dormitories. Fewer foreign and graduate students might be found along fraternity row. Clearly, we would not wish to place our trap near the International Club or House, because our probability of sampling a foreign student would be greatly enhanced. In front of the bursar's window we might sample students paying tuition. But those on scholarships might not be found there, and we do not know whether the proportion of scholarships among foreign or graduate students is the same as or different from that among the American or undergraduate students. Athletic events, political rallies, dances, and the like would all draw a differential spectrum of the student body. The time of sampling is equally important, in the seasonal as well as the diurnal cycle. To make certain that the sample was truly random with respect to the entire student population, we would have to know the ecology of students on campus very thoroughly; indeed, no easy solution seems in sight.

Those among the readers who are interested in sampling organisms from nature will already have perceived parallel problems in their work. If we were to sample only students wearing turbans or saris, their probability of being foreign students would be almost 1. We could no longer speak of a random sample. In the familiar ecosystem of the university these violations of proper sampling procedure are obvious to all of us, but they are not nearly so obvious in real biological instances where we are unfamiliar with the true nature of the environment. How should we proceed to obtain a random sample of leaves from a tree, of insects from a field, or of mutations in a culture? In sampling at random, we are attempting to permit the frequencies of various events occurring in nature to be reproduced unalteredly in our records; that is, we hope that on the average the frequencies of these events in our sample will be the same as they are in the natural situation. Another way of saying this is that in a random sample every individual in the population being sampled has an equal probability of being included in the sample.

We might go about obtaining a random sample by using records representing the student body, such as the student directory, selecting a page from it at random and a name at random from the page. Or we could assign an arbitrary number to each student, write each on a chip or disk, put these in a large container, stir well, and then pull out a number.

Imagine now that we sample a single student physically by the trapping method, after carefully planning the placement of the trap in such a way as to make sampling random. What are the possible outcomes? Clearly, the student could be either an AU, AG, FU, or FG. The set of these four possible outcomes exhausts the possibilities of this experiment. This set, which we can represent as {AU, AG, FU, FG}, is called the sample space.

Any single trial of the experiment described above would result in only one of the four possible outcomes (elements) in the set. A single element in a sample space is called a simple event. It is distinguished from an event, which is any subset of the sample space. Thus, in the sample space defined above, {AU}, {AG}, {FU}, and {FG} are each simple events. The following sampling results are some of the possible events: {AU, AG, FU}, {AU, AG, FG}, {AG, FG}, {AU, FG}, and so forth. By the definition of "event," simple events as well as the entire sample space are also events. The meaning of these events should be clarified. Thus {AU, AG} encompasses all possible outcomes in the space yielding an American student, {AU, AG, FU} implies being either an American or an undergraduate, and the event {AG, FG} summarizes the possibilities for obtaining a graduate student.

Why are we concerned with defining sample spaces and events? Because these concepts lead us to useful definitions and operations regarding the probability of various outcomes. If we can assign a number p, where 0 ≤ p ≤ 1, to each simple event in a sample space such that the sum of these p's over all simple events in the space equals unity, then the space becomes a (finite) probability space. In our example above, the following numbers were associated with the appropriate simple events in the sample space:

{AU, AG, FU, FG}
{0.70, 0.26, 0.01, 0.03}

Given this probability space, we are now able to make statements regarding the probability of given events. For example, what is the probability that a student sampled at random will be an American graduate student? Clearly, it is P[{AG}] = 0.26.

We may also define events that are unions of two other events in the sample space. Thus A ∪ B indicates that A or B or both A and B occur. If A is the event {AU, AG} (an American student) and B is the event {AG, FG} (a graduate student), A ∪ B would describe all students who are either American students, graduate students, or American graduate students, that is, {AU, AG, FG}. The intersection of events A and B, written A ∩ B, describes only those events that are shared by A and B. Thus A ∩ B is that event in the sample space giving rise to the sampling of an American graduate student. Clearly only AG qualifies, as can be seen below:

A = {AU, AG}
B = {AG, FG}

When the intersection of two events is empty, as in B ∩ C, where C = {AU, FU}, events B and C are mutually exclusive; there is no common element in these two events in the sampling space.

What is the probability that a student is either American or a graduate student? In terms of the events defined earlier, this is

P[A ∪ B] = P[{AU, AG}] + P[{AG, FG}] − P[{AG}]
         = 0.96 + 0.29 − 0.26 = 0.99
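These set operations translate directly into code. A short sketch assuming the probability space above (the helper function P is ours, not the book's):

```python
space = {"AU": 0.70, "AG": 0.26, "FU": 0.01, "FG": 0.03}

def P(event):
    # An event is any subset of the sample space; its probability
    # is the sum of the probabilities of its simple events.
    return sum(space[e] for e in event)

A = {"AU", "AG"}   # an American student
B = {"AG", "FG"}   # a graduate student

# P[A union B] = P[A] + P[B] - P[A intersect B]
print(round(P(A | B), 2), round(P(A) + P(B) - P(A & B), 2))  # 0.99 0.99
```

Python's set operators `|` and `&` mirror the union and intersection of events, so the inclusion-exclusion identity can be verified mechanically.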

We subtract P[{AG}] from the sum on the right-hand side of the equation because if we did not do so it would be included twice, once in P[A] and once in P[B], and would lead to the absurd result of a probability greater than 1.

Now let us assume that we have sampled our single student from the student body of Matchless University and that student turns out to be a foreign graduate student. What can we conclude from this? By chance alone, this result would happen 0.03, or 3%, of the time, not very frequently. It is obvious that we could have chanced upon an FG as the very first one to be sampled; however, it is not very likely, since the probability is 0.97 that a single student sampled will be a non-FG. If we accept the hypothesis of random sampling, the outcome of the experiment is improbable, so the assumption that we have sampled at random should probably be rejected. Please note that we said improbable, not impossible. If we could be certain that our sampling method was random (as when drawing student numbers out of a container), we would have to decide that an improbable event has occurred. The decisions of this paragraph are all based on our definite knowledge that the proportion of students at Matchless University is indeed as specified by the probability space. If we were uncertain about this, we would be led to assume a higher proportion of foreign graduate students as a consequence of the outcome of our sampling experiment.

We shall now extend our experiment and sample two students rather than just one. What are the possible outcomes of this sampling experiment? The new sampling space can best be depicted by a diagram (Figure 4.1) that shows the set of the 16 possible simple events as points in a lattice.

FIGURE 4.1 Sample space for sampling two students from Matchless University. (The type of the first student is shown along one axis and that of the second student along the other; each of the 16 lattice points is labeled with the product of the two single-student probabilities, from 0.4900 for two AU's down to 0.0009 for two FG's.)

The simple events are the following possible combinations. Ignoring which student was sampled first, they are (AU, AU), (AU, AG), (AU, FU), (AU, FG), (AG, AG), (AG, FU), (AG, FG), (FU, FU), (FU, FG), and (FG, FG).
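Under random sampling with replacement, the 16 probabilities of Figure 4.1 are simply products of the single-student probabilities; a quick enumeration (our code, not the book's) confirms that they form a probability space:

```python
from itertools import product

space = {"AU": 0.70, "AG": 0.26, "FU": 0.01, "FG": 0.03}

# One lattice point per ordered pair (first student, second student)
lattice = {pair: space[pair[0]] * space[pair[1]]
           for pair in product(space, repeat=2)}

print(len(lattice))                      # 16
print(round(sum(lattice.values()), 6))   # 1.0
print(round(lattice[("FG", "FG")], 4))   # 0.0009
```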

What are the expected probabilities of these outcomes? We know the expected outcomes for sampling one student from the former probability space, but what will be the probability space corresponding to the new sampling space of 16 elements? Now the nature of the sampling procedure becomes quite important. We may sample with or without replacement: we may return the first student sampled to the population (that is, replace the first student), or we may keep him or her out of the pool of the individuals to be sampled. This is easily seen. Let us assume that Matchless University has 10,000 students. Since 3% are foreign graduate students, there must be 300 FG students at the university. If we do not replace the first individual sampled, the probability of sampling a foreign graduate student will no longer be exactly 0.03. After sampling a foreign graduate student first, this number is reduced to 299 out of 9999 students. Consequently, the probability of sampling an FG student now becomes 299/9999 = 0.0299, a slightly lower probability than the value of 0.03 for sampling the first FG student. If, on the other hand, we return the original foreign student to the student population, give the student a chance to lose him- or herself among the campus crowd, and make certain that the population is thoroughly randomized before being sampled again (that is, replace the first student and mix up the disks with the numbers on them), the probability of sampling a second FG student is the same as before, 0.03. Thus, if we keep on replacing the sampled individuals in the original population, we can sample from it as though it were an infinite-sized population. Biological populations are, of course, finite, but they are frequently so large that for purposes of sampling experiments we can consider them effectively infinite whether we replace sampled individuals or not. In the case of the students, even in this relatively small population of 10,000, the probability of sampling a second foreign graduate student (without replacement) is only minutely different from 0.03. For the rest of this section we shall consider sampling to be with replacement, so that the probability level of obtaining a foreign student does not change.

There is a second potential source of difficulty in this design. We have to assume not only that the probability of sampling a second foreign student is equal to that of the first, but also that it is independent of it. By independence of events we mean that the probability that one event will occur is not affected by whether or not another event has or has not occurred. If we have sampled students on campus, it is quite likely that the events are not independent; that is, if we have sampled one foreign student, is it more or less likely that a second student sampled in the same manner will also be a foreign student? Independence of the events may depend on where we sample the students or on the method of sampling. Since foreign students tend to congregate, at Matchless University the probability that a student walking with a foreign graduate student is also an FG will be greater than 0.03. Thus, if one foreign student has been sampled, the probability that the second student will be foreign is increased.
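The with/without-replacement contrast above amounts to one line of arithmetic each; a sketch using the 10,000-student figure assumed in the text:

```python
N, n_fg = 10_000, 300   # 3% of 10,000 students are FG

p_first = n_fg / N                      # 0.03
p_next_without = (n_fg - 1) / (N - 1)   # one FG already removed
p_next_with = n_fg / N                  # replacement restores 0.03

print(p_first, round(p_next_without, 4), p_next_with)  # 0.03 0.0299 0.03
```

The gap between 0.0299 and 0.03 shrinks as N grows, which is the precise sense in which a large finite population behaves as if it were infinite.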

Events D and E in a sample space will be defined as independent whenever P[D ∩ E] = P[D]P[E]. The probability values assigned to the sixteen points in the sample-space lattice of Figure 4.1 have been computed to satisfy this condition. Thus, letting P[D] equal the probability that the first student will be an AU, that is, P[{AU₁AU₂, AU₁AG₂, AU₁FU₂, AU₁FG₂}], and letting P[E] equal the probability that the second student will be an FG, that is, P[{AU₁FG₂, AG₁FG₂, FU₁FG₂, FG₁FG₂}], we note that the intersection D ∩ E is {AU₁FG₂}. This has a value of 0.0210 in the probability space of Figure 4.1. We find that this value is the product P[{AU}]P[{FG}] = 0.70 × 0.03 = 0.0210. These mutually independent relations have been deliberately imposed upon all points in the probability space. Therefore, if the sampling probabilities for the second student are independent of the type of student sampled first, we can compute the probabilities of the outcomes simply as the product of the independent probabilities. Thus the probability of obtaining two FG students is P[{FG}]P[{FG}] = 0.03 × 0.03 = 0.0009.

The probability of obtaining one AU and one FG student in the sample should be the product 0.70 × 0.03 = 0.0210. However, it is in fact twice that probability, 2P[{AU}]P[{FG}] = 2 × 0.70 × 0.03 = 0.0420. It is easy to see why. There is only one way of obtaining two FG students, namely, by sampling first one FG and then again another FG. Similarly, there is only one way to sample two AU students. However, sampling one of each type of student can be done by sampling first an AU and then an FG or by sampling first an FG and then an AU.

If we conducted such an experiment and obtained a sample of two FG students, we would be led to the following conclusions. Only 0.0009 of the samples, less than a tenth of 1%, or 9 out of 10,000 cases, would be expected to consist of two foreign graduate students. It is quite improbable to obtain such a result by chance alone. Given P[{FG}] = 0.03 as a fact, we would therefore suspect that sampling was not random or that the events were not independent (or that both assumptions, random sampling and independence of events, were incorrect).

Random sampling is sometimes confused with randomness in nature. The former is the faithful representation in the sample of the distribution of the events in nature; the latter is the independence of the events in nature. The first of these generally is or should be under the control of the experimenter and is related to the strategy of good sampling. The second generally describes an innate property of the objects being sampled and thus is of greater biological interest. The confusion between random sampling and independence of events arises because lack of either can yield observed frequencies of events differing from expectation. We have already seen how lack of independence in samples of foreign students can be interpreted from both points of view in our illustrative example from Matchless University.

The above account of probability is adequate for our present purposes but far too sketchy to convey an understanding of the field. Readers interested in extending their knowledge of the subject are referred to Mosimann (1968) for a simple introduction.
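The product rule and the factor of two for unlike pairs can be checked directly; a sketch with the same probability space (variable names are ours):

```python
p = {"AU": 0.70, "AG": 0.26, "FU": 0.01, "FG": 0.03}

# Ordered outcomes multiply under independence:
p_au_then_fg = p["AU"] * p["FG"]
# An unordered pair of unlike types arises in two orders:
p_one_of_each = 2 * p["AU"] * p["FG"]
# Two FG students can be obtained in only one way:
p_two_fg = p["FG"] * p["FG"]

print(round(p_au_then_fg, 4), round(p_one_of_each, 4), round(p_two_fg, 4))
# 0.021 0.042 0.0009
```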

4.2 The binomial distribution

For purposes of the discussion to follow we shall simplify our sample space to consist of only two elements, foreign and American students, and ignore whether the students are undergraduates or graduates. If A stands for American and F stands for foreign, we shall represent the sample space by the set {F, A}. Let us symbolize the probability space by {p, q}, where p = P[F], the probability that the student is foreign, and q = P[A], the probability that the student is American. By definition, p + q = 1; hence q is a function of p: q = 1 − p.

As before, we can compute the probability space of samples of two students as follows:

    {FF,  FA,  AA}
    {p²,  2pq, q²}

If we were to sample three students independently, the probability space of samples of three students would be as follows:

    {FFF, FFA,  FAA,  AAA}
    {p³,  3p²q, 3pq², q³}

Samples of three foreign or three American students can again be obtained in only one way, and their probabilities are p³ and q³, respectively. However, in samples of three there are three ways of obtaining two students of one kind and one student of the other: the sampling sequence can be FFA, FAF, or AFF for two foreign students and one American, and thus the probability of this outcome will be 3p²q. Similarly, the probability for two Americans and one foreign student is 3pq².

A convenient way to summarize these results is by means of the binomial expansion, which is applicable to samples of any size from populations in which objects occur independently in only two classes: students who may be foreign or American, or individuals who may be dead or alive, male or female, black or white, rough or smooth, and so forth. This is accomplished by expanding the binomial term (p + q)^k, where k equals sample size, p equals the probability of occurrence of the first class, and q equals the probability of occurrence of the second class. We shall expand the expression for samples of k from 1 to 3:

    For samples of 1,  (p + q)¹ = p + q
    For samples of 2,  (p + q)² = p² + 2pq + q²
    For samples of 3,  (p + q)³ = p³ + 3p²q + 3pq² + q³

It will be seen that these expressions yield the same probability spaces discussed previously. The coefficients (the numbers before the powers of p and q) express the number of ways a particular outcome is obtained. An easy method for evaluating the coefficients of the expanded terms of the binomial expression is through the use of Pascal's triangle:

    k = 1:   1   1
    k = 2:   1   2   1
    k = 3:   1   3   3   1
    k = 4:   1   4   6   4   1
    k = 5:   1   5  10  10   5   1

Pascal's triangle provides the coefficients of the binomial expression, that is, the number of possible outcomes of the various combinations of events. For k = 1 the coefficients are 1 and 1. For the second line (k = 2), write 1 at the left-hand margin of the line. The 2 in the middle of this line is the sum of the values to the left and right of it in the line above. The line is concluded with a 1. Similarly, the third line begins and ends with 1, and the other numbers are sums of the values to their left and right in the line above; thus 3 is the sum of 1 and 2. This principle continues for every line. The line for k = 6 would consist of the following coefficients: 1, 6, 15, 20, 15, 6, 1. You can work out the coefficients for any size sample in this manner.

The p and q values bear powers in a consistent pattern, which should be easy to imitate for any value of k. We give it here for k = 4:

    p⁴q⁰ + p³q¹ + p²q² + p¹q³ + p⁰q⁴

The power of p decreases from 4 to 0 (k to 0 in the general case) as the power of q increases from 0 to 4 (0 to k in the general case). Since any value to the power 0 is 1 and any term to the power 1 is simply itself, we can simplify this expression as shown below and at the same time provide it with the coefficients from Pascal's triangle for the case k = 4:

    p⁴ + 4p³q + 6p²q² + 4pq³ + q⁴

Thus we are able to write down almost by inspection the expansion of the binomial to any reasonable power.

Let us now practice our newly learned ability to expand the binomial. Suppose we have a population of insects, exactly 40% of which are infected with a given virus X. If we take samples of k = 5 insects each and examine each insect separately for presence of the virus, what distribution of samples could we expect if the probability of infection of each insect in a sample were independent of that of other insects in the sample? In this case p = 0.4, the proportion infected, and q = 0.6, the proportion not infected. It is assumed that the population is so large that the question of whether sampling is with or without replacement is irrelevant for practical purposes. The expected proportions would be the expansion of the binomial:

    (p + q)^k = (0.4 + 0.6)⁵
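The construction rule just described is easy to mechanize. The following sketch (Python; ours, not the text's) builds any line of Pascal's triangle by summing adjacent entries of the line above, exactly as the rule states:

```python
def pascal_row(k):
    """Binomial coefficients for (p + q)**k, built line by line:
    each interior entry is the sum of the two entries above it."""
    row = [1]
    for _ in range(k):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

print(pascal_row(6))  # the k = 6 line: [1, 6, 15, 20, 15, 6, 1]
```
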

36000 0. T h e relative TABLE 4 .16000 0. f o u r infected a n d o n e noninfected insects.4 628.06400 0. T h e reader h a s p r o b a b l y realized by n o w t h a t the terms of t h e b i n o m i a l e x p a n s i o n actually yield a type of f r e q u e n c y distribution for these different o u t c o m e s .02560 0.4) 3 (0.4) 5 = 0.34560 0.4)(0.23040 0.00000 1.00000 2. a n d so on.4) 5 + 5(0.09545 Mean Standard deviation .6) + 10(0.07776 Σ / or Μ Binomial coefficients 1 5 10 10 5 1 £ / ( = « ) Z y (5) Relative expected frequencies (6) Absolute expected frequencies f 24.6) 3 + 5(0.00004 1.6) 5 representing t h e expected p r o p o r t i o n s of samples of five infected insects.60000 0.11934 L 0. A convenient l a y o u t for p r e s e n t a t i o n a n d c o m p u t a t i o n of a b i n o m i a l d i s t r i b u t i o n is s h o w n in T a b l e 4. T h e p r o b a b i l i t y d i s t r i b u t i o n described here is k n o w n as the binomial distribution.07776 1. a n d the b i n o mial e x p a n s i o n yields the expected frequencies of the classes of t h e b i n o m i a l distribution.6) 4 + (0.0 188.00000 (3) Powers of q = 0.01024.25920 0.40000 1.01024 0.1 558.1.21600 0.56 CHAPTER 4 / INTRODUCTION TO PROBABILITY DISTRIBUTIONS With the aid of Pascal's triangle this e x p a n s i o n is {p 5 + 5 p*q + 10p3q2 + 1 0 p V + 5 pq4 + q5} or (0. It describes the expected distribution of o u t c o m e s in r a n d o m s a m p l e s of five insects f r o m a p o p u l a t i o n in which 40% are infected.98721 1. T h e b i n o m i a l coefficients f r o m Pascal's triangle are s h o w n in c o l u m n (4)..8 186.0 4846.12960 0. U) Number of infected insects per sample V 5 4 3 2 1 0 (2) Powers of ρ = 0. 1 Expected frequencies of infected insects in samples of 5 insects sampled from an infinitely large population with an assumed infection rate of 40".00000 0. the second c o l u m n shows decreasing p o w e r s of ρ f r o m p s to p°." 
there is a p r o b a b i l i t y of o c c u r r e n c e — i n this case (0.4) 2 (0. Associated with e a c h o u t c o m e . such as "five infected insects. This is a theoretical frequency d i s t r i b u t i o n o r probability distribution of events that can o c c u r in t w o classes.00000 2.09543 (7) Observed frequencies f 29 197 535 817 643 202 2423 4815 1. a n d the third c o l u m n shows increasing powers of q f r o m q° to q5.4 2423.07680 0.6 1.6) 2 + 10(0. T h e first c o l u m n lists t h e n u m b e r of infected insects per sample.4) 4 (0..1 2.4 0. three infected a n d t w o noninfected insects.3 837.01024 0.
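Columns (5) and (6) of Table 4.1 can be reproduced mechanically. This sketch (Python; ours, with the chapter's values k = 5, p = 0.4, n = 2423) multiplies the powers of p and q by the binomial coefficients:

```python
from math import comb

def binomial_expected(k, p, n):
    """Relative and absolute expected frequencies of a binomial,
    with classes ordered from Y = k down to Y = 0 as in Table 4.1."""
    q = 1 - p
    rel = [comb(k, y) * p**y * q**(k - y) for y in range(k, -1, -1)]
    return rel, [n * f for f in rel]

rel, expected = binomial_expected(5, 0.4, 2423)
# rel begins with 0.01024 (five infected) and ends with 0.07776 (none).
```
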

The relative expected frequencies, which are the probabilities of the various outcomes, are shown in column (5). They are simply the product of columns (2), (3), and (4). Their sum is equal to 1.0, since the events listed in column (1) exhaust the possible outcomes. We label such relative expected frequencies f_rel. To calculate the expected frequencies for this actual example we multiplied the relative frequencies f_rel of column (5) times n = 2423, the number of samples taken. This results in absolute expected frequencies, labeled f̂, shown in column (6).

We shall test whether these predictions hold in an actual experiment.

Experiment 4.1. Simulate the sampling of infected insects by using a table of random numbers such as Table I in Appendix A1. These are randomly chosen one-digit numbers in which each digit 0 through 9 has an equal probability of appearing. The numbers are grouped in blocks of 25 for convenience. Such numbers can also be obtained from random number keys on some pocket calculators and by means of pseudorandom-number-generating algorithms in computer programs. Since there is an equal probability for any one digit to appear, you can let any four digits (say, 0, 1, 2, 3) stand for the infected insects and the remaining digits (4, 5, 6, 7, 8, 9) stand for the noninfected insects. The probability that any one digit selected from the table will represent an infected insect (that is, will be a 0, 1, 2, or 3) is therefore 40%, since these are four of the ten possible digits. Also, successive digits are assumed to be independent of the values of previous digits. Thus the assumptions of the binomial distribution should be met in this experiment. Enter the table of random numbers at an arbitrary point (not always at the beginning!) and look at successive groups of five digits, noting in each group how many of the digits are 0, 1, 2, or 3. Take as many groups of five as you can find time to do, but no fewer than 100 groups.

Column (7) in Table 4.1 shows the results of one such experiment during one year by a biostatistics class. A total of 2423 samples of five numbers were obtained from the table of random numbers; the distribution of the four digits simulating the percentage of infection is shown in this column. The observed frequencies are labeled f. (In fact, this entire experiment can be programmed and performed automatically, even on a small computer.) When we compare the observed frequencies in column (7) with the expected frequencies in column (6), we note general agreement between the two columns of figures. The two distributions are also illustrated in Figure 4.2.

If the observed frequencies did not fit expected frequencies, we might believe that the lack of fit was due to chance alone. Or we might be led to reject one or more of the following hypotheses: (1) that the true proportion of digits 0, 1, 2, and 3 is 0.4 (rejection of this hypothesis would normally not be reasonable, for we may rely on the fact that the proportion of digits 0, 1, 2, and 3 in a table of random numbers is 0.4 or very close to it), (2) that sampling was at random, and (3) that events were independent.

These statements can be reinterpreted in terms of the original infection model with which we started this discussion. If, instead of a sampling experiment of digits by a biostatistics class, this had been a real sampling experiment of insects, we would conclude that the insects had indeed been randomly sampled and that we had no evidence to reject the hypothesis that the proportion of infected insects was 40%.
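The remark that the whole experiment can be programmed is easily borne out. Below is a minimal simulation (Python; ours, not the text's). It uses a pseudorandom generator in place of the printed table of random numbers, and the seed is an arbitrary choice:

```python
import random

def experiment_41(n_samples, seed=1):
    """Draw groups of five random digits and count how many of the
    'infected' digits 0-3 occur in each group (cf. Experiment 4.1)."""
    rng = random.Random(seed)
    freq = [0] * 6  # freq[y]: number of samples containing y infected
    for _ in range(n_samples):
        y = sum(1 for _ in range(5) if rng.randrange(10) <= 3)
        freq[y] += 1
    return freq

observed = experiment_41(2423)  # comparable to column (7) of Table 4.1
```
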

If the observed frequencies had not fitted the expected frequencies, the lack of fit might be attributed to chance, or to the conclusion that the true proportion of infection was not 0.4, or we would have had to reject one or both of the following assumptions: (1) that sampling was at random and (2) that the occurrence of infected insects in these samples was independent.

FIGURE 4.2
Bar diagram of observed and expected frequencies given in Table 4.1. (Frequency f, from 0 to 900, on the ordinate; number of infected insects per sample, from 0 to 5, on the abscissa.)

Experiment 4.1 was designed to yield random samples and independent events. How could we simulate a sampling procedure in which the occurrences of the digits 0, 1, 2, and 3 were not independent? We could, for example, instruct the sampler to sample as indicated previously, but, every time a 3 was found among the first four digits of a sample, to replace the following digit with another one of the four digits standing for infected individuals. Thus, once a 3 was found, the probability would be 1.0 that another one of the indicated digits would be included in the sample. It should be quite clear to the reader that the probability of the second event's occurring would be different from that of the first and dependent on it. After repeated samples, this would result in higher frequencies of samples of two or more indicated digits and in lower frequencies than expected (on the basis of the binomial distribution) of samples of one such digit. A variety of such different sampling schemes could be devised.

How would we interpret a large departure of the observed frequencies from expectation? We have not as yet learned techniques for testing whether observed frequencies differ from those expected by more than can be attributed to chance alone. This will be taken up in Chapter 13. Assume that such a test has been carried out and that it has shown us that our observed frequencies are significantly different from expectation. Two main types of departure from expectation can be characterized: (1) clumping and (2) repulsion, shown in fictitious examples in Table 4.2.

TABLE 4.2
Artificial distributions to illustrate clumping and repulsion. Expected frequencies from Table 4.1.

    (1)         (2)        (3)           (4)          (5)        (6)
    Number of   Absolute   Clumped       Deviation    Repulsed   Deviation
    infected    expected   (contagious)  from         freq.      from
    insects     freq.      freq.         expectation             expectation
    per sample
    Y           f̂          f                          f

    5             24.8       47          +              14       −
    4            186.1      227          +             157       −
    3            558.3      558          0             548       −
    2            837.4      663          −             943       +
    1            628.0      703          +             618       −
    0            188.4      225          +             143       −

    Σf or n     2423.0     2423                       2423
    ΣY          4846.1     4846                       4846
    Mean        2.00004    2.00000                    2.00000
    Standard
    deviation   1.09543    1.20074                    1.01435

In actual examples we would have no a priori notions about the magnitude of p, the probability of one of the two possible outcomes. In such cases it is customary to obtain p from the observed sample and to calculate the expected frequencies, using the sample p. This would mean that the hypothesis that p is a given value cannot be tested, since by design the expected frequencies will have the same p value as the observed frequencies. Therefore, the hypotheses tested are whether the samples are random and the events independent. (Remember that the total number of items must be the same in both observed and expected frequencies in order to make them comparable.)

The clumped frequencies in Table 4.2 have an excess of observations at the tails of the frequency distribution and consequently a shortage of observations at the center. In the repulsed frequency distribution there are more observations than expected at the center of the distribution and fewer at the tails. These discrepancies are most easily seen in columns (4) and (6) of Table 4.2, where the deviations of observed from expected frequencies are shown as plus or minus signs.

What do these phenomena imply? In the clumped frequencies, more samples were entirely infected (or largely infected), and similarly, more samples were entirely noninfected (or largely noninfected), than you would expect if probabilities of infection were independent. Such a distribution is also said to be contagious. This could be due to poor sampling design. If, for example, the investigator in collecting samples of five insects always tended to pick out like ones (infected ones or noninfected ones), then such a result would likely appear. But if the sampling design is sound, the results become more interesting. Clumping would then mean that the samples of five are in some way related, so that if one insect is infected, others in the

same sample are more likely to be infected. This could be true if they come from adjacent locations in a situation in which neighbors are easily infected. Or they could be siblings jointly exposed to a source of infection. Or possibly the infection might spread among members of a sample between the time that the insects are sampled and the time they are examined.

The opposite phenomenon, repulsion, is more difficult to interpret biologically. There are fewer homogeneous groups and more mixed groups in such a distribution. This involves the idea of a compensatory phenomenon: if some of the insects in a sample are infected, the others in the sample are less likely to be. If the infected insects in the sample could in some way transmit immunity to their associates in the sample, such a situation could arise logically, but it is biologically improbable. A more reasonable interpretation of such a finding is that for each sampling unit there were only a limited number of pathogens available; then, once several of the insects have become infected, the others go free of infection, simply because there is no more infectious agent. This is an unlikely situation in microbial infections, but in situations in which a limited number of parasites enter the body of the host, repulsion may be more reasonable.

From the expected and observed frequencies in Table 4.1, we may calculate the mean and standard deviation of the number of infected insects per sample. These values are given at the bottom of columns (5), (6), and (7) in Table 4.1. We note that the means and standard deviations in columns (5) and (6) are almost identical and differ only trivially because of rounding errors. Column (7), being a sample from a population whose parameters are the same as those of the expected frequency distribution in column (5) or (6), differs somewhat: the mean is slightly smaller and the standard deviation is slightly greater than in the expected frequencies.

If we wish to know the mean and standard deviation of expected binomial frequency distributions, we need not go through the computations shown in Table 4.1. The mean and standard deviation of a binomial frequency distribution are, respectively,

    μ = kp        σ = √(kpq)

Substituting the values k = 5, p = 0.4, and q = 0.6 of the above example, we obtain μ = 2.0 and σ = 1.095, which are identical to the values computed from column (5) in Table 4.1. Note that we use the Greek parametric notation here because μ and σ are parameters of an expected frequency distribution, not sample statistics, as are the mean and standard deviation in column (7). The proportions p and q are parametric values also, and strictly speaking, they should be distinguished from sample proportions. In fact, in later chapters we resort to p and q for parametric proportions (rather than π, which conventionally is used as the ratio of the circumference to the diameter of a circle); however, we prefer to keep our notation simple. If we wish to express our variable as a proportion rather than as a count, that is, to indicate mean incidence of infection in the insects as 0.4 rather than as 2 per sample of 5, we can use other formulas for the mean and standard deviation in a binomial distribution:

    μ = p        σ = √(pq/k)
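Both pairs of formulas can be checked against the values at the bottom of Table 4.1. The fragment below (Python; ours, not the text's) evaluates them for the insect example, k = 5 and p = 0.4:

```python
from math import sqrt

def binomial_mu_sigma(k, p):
    """Parametric mean and standard deviation of the binomial count Y."""
    q = 1 - p
    return k * p, sqrt(k * p * q)

def binomial_mu_sigma_prop(k, p):
    """The same parameters when the variable is expressed as a proportion."""
    q = 1 - p
    return p, sqrt(p * q / k)

mu, sigma = binomial_mu_sigma(5, 0.4)           # 2.0 and about 1.095
mu_p, sigma_p = binomial_mu_sigma_prop(5, 0.4)  # 0.4 and about 0.219
```

Note that the proportion form is just the count form rescaled by 1/k, which is why the two standard deviations differ by exactly a factor of k.
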

It is interesting to look at the standard deviations of the clumped and repulsed frequency distributions of Table 4.2. We note that the clumped distribution has a standard deviation greater than expected, and that of the repulsed one is less than expected. Comparison of sample standard deviations with their expected values is a useful measure of dispersion in such instances.

We shall now employ the binomial distribution to solve a biological problem. On the basis of our knowledge of the cytology and biology of species A, we expect the sex ratio among its offspring to be 1:1. The study of a litter in nature reveals that of 17 offspring 14 were females and 3 were males. What conclusions can we draw from this evidence? Assuming that p♀ (the probability of being a female offspring) = 0.5 and that this probability is independent among the members of the sample, the pertinent probability distribution is the binomial for sample size k = 17. Expanding the binomial to the power 17 is a formidable task, which, as we shall see, fortunately need not be done in its entirety. However, we must have the binomial coefficients, which can be obtained either from an expansion of Pascal's triangle (fairly tedious unless once obtained and stored for future use) or by working out the expected frequencies for any given class of Y from the general formula for any term of the binomial distribution

    C(k, Y) p^Y q^(k−Y)        (4.1)

The expression C(k, Y) stands for the number of combinations that can be formed from k items taken Y at a time. This can be evaluated as k!/[Y!(k − Y)!], where ! means "factorial." In mathematics k factorial is the product of all the integers from 1 up to and including k. Thus, 5! = 1 × 2 × 3 × 4 × 5 = 120. By convention, 0! = 1. In working out fractions containing factorials, note that any factorial will always cancel against a higher factorial. Thus 5!/3! = (5 × 4 × 3!)/3! = 5 × 4. For example, the binomial coefficient for the expected frequency of samples of 5 items containing 2 infected insects is C(5, 2) = 5!/(2!3!) = (5 × 4)/2 = 10.

The setup of the example is shown in Table 4.3. Decreasing powers of p♀ from p♀¹⁷ down and increasing powers of q♂ are computed (from power 0 to power 4). Since we require the probability of 14 females, we note that for the purposes of our problem, we need not proceed beyond the term for 13 females and 4 males. Calculating the relative expected frequencies in column (6), we note that the probability of 14 females and 3 males is 0.005,188, a very small value. If we add to this value all "worse" outcomes, that is, all outcomes that are even more unlikely than 14 females and 3 males on the assumption of a 1:1 hypothesis, we obtain a probability of 0.006,363, still a very small value. (In statistics, we often need to calculate the probability of observing a deviation as large as or larger than a given value.)
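The cumulative "as extreme or more extreme" probability in the parenthetical note can be computed directly from expression (4.1). A sketch (Python; ours, not the text's), applied to the 17-offspring litter:

```python
from math import comb

def upper_tail(k, y_min, p=0.5):
    """P(Y >= y_min) for a binomial with sample size k: the probability
    of a deviation at least as large as the one observed, in one
    direction."""
    q = 1 - p
    return sum(comb(k, y) * p**y * q**(k - y) for y in range(y_min, k + 1))

p_one_tailed = upper_tail(17, 14)   # 14 or more females: about 0.006,363
p_two_tailed = 2 * p_one_tailed     # both tails (valid since p = q = 0.5)
```
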

TABLE 4.3
Some expected frequencies of males and females for samples of 17 offspring, on the assumption that the sex ratio is 1:1 [p♀ = 0.5, q♂ = 0.5, (p♀ + q♂)¹⁷].

    (1)   (2)   (3)         (4)       (5)           (6)
    ♀♀    ♂♂    Powers      Powers    Binomial      Relative expected
                of p♀       of q♂     coefficients  frequencies f_rel

    17    0     0.000,008   1.0          1          0.000,008
    16    1     0.000,015   0.5         17          0.000,130
    15    2     0.000,031   0.25       136          0.001,038
    14    3     0.000,061   0.125      680          0.005,188
    13    4     0.000,122   0.0625    2380          0.018,158

On the basis of these findings one or more of the following assumptions is unlikely: (1) that the true sex ratio in species A is 1:1, (2) that we have sampled at random in the sense of obtaining an unbiased sample, or (3) that the sexes of the offspring are independent of one another. Lack of independence of events may mean that although the average sex ratio is 1:1, the individual sibships, or litters, are largely unisexual, so that the offspring from a given mating would tend to be all (or largely) females or all (or largely) males. To confirm this hypothesis, we would need to have more samples and then examine the distribution of samples for clumping, which would indicate a tendency for unisexual sibships.

We must be very precise about the questions we ask of our data. There are really two questions we could ask about the sex ratio. First, are the sexes unequal in frequency so that females will appear more often than males? Second, are the sexes unequal in frequency? It may be that we know from past experience that in this particular group of organisms the males are never more frequent than females; in that case, we need be concerned only with the first of these two questions, and the reasoning followed above is appropriate. However, if we know very little about this group of organisms, and if our question is simply whether the sexes among the offspring are unequal in frequency, then we have to consider both tails of the binomial frequency distribution; departures from the 1:1 ratio could occur in either direction. We should then consider not only the probability of samples with 14 females and 3 males (and all worse cases) but also the probability of samples of 14 males and 3 females (and all worse cases in that direction). Since this probability distribution is symmetrical (because p♀ = q♂ = 0.5), we simply double the cumulative probability of 0.006,363 obtained previously, which results in 0.012,726. This new value is still very small, making it quite unlikely that the true sex ratio is 1:1.

This is your first experience with one of the most important applications of statistics: hypothesis testing. A formal introduction to this field will be deferred

the best estimate of it for the p o p u l a t i o n in S a x o n y was simply o b t a i n e d using the m e a n p r o p o r t i o n of males in these d a t a .4.001 + 0. C o n s e q u e n t l y .4. r a t h e r t h a n a hypothetical value external to the data. This value t u r n s out to be 0.8. tests a n d two-tailed tests.57).1 we h a d a hypothetical p r o p o r tion of infection based on outside knowledge. T h e r e is a distinct c o n t r a s t between the d a t a in T a b l e 4. we used an empirical value of ρ obtained from the data. a hypothetical value of ρ is used. 17 offspring.215)0. respectively.999) 1 0 0 0 . T h e genetic basis for this is not clear. however. T h e expected frequencies were not calculated on the basis of a 1:1 hypothesis.519." Evidence of c l u m p i n g can also be seen f r o m the fact t h a t s2 is m u c h larger t h a n we would expect on the basis of t h e b i n o m i a l distribution (σ 2 = kpq = 12(0.4 r e p r o d u c e s sex ratios of 6115 sibships of 12 children each f r o m t h e m o r e extensive study by Geissler. In the deviations of the observed frequencies f r o m the a b s o l u t e expected frequencies s h o w n in c o l u m n (9) of T a b l e 4. W e m a y simply p o i n t o u t here t h a t the t w o a p p r o a c h e s followed a b o v e are k n o w n a p p r o p r i a t e l y as one-tailed.519. male a n d female). t h e p r o p o r t i o n of females = 0. In the insect infection d a t a of T a b l e 4.480. we notice considerable clumping. T h e r e are m a n y m o r e instances of families with all male or all female children (or nearly so) than i n d e p e n d e n t probabilities would indicate. S t u d e n t s sometimes have difficulty k n o w i n g which of the t w o tests to apply.58) for the 6115 sibships a n d c o n v e r t i n g this i n t o a p r o p o r t i o n . An actual case of this n a t u r e is a classic in t h e literature. 
This can be o b t a i n e d by calculating the average n u m b e r of males per sibship ( F = 6. This is the .785 = 2. All c o l u m n s of the table should by n o w be familiar. S u p p o s e y o u h a d to e x p a n d t h e expression (0.4 we h a d n o such knowledge. Q u i t e frequently. In the sex r a t i o d a t a of T a b l e 4. This is a distinction whose i m p o r t a n c e will b e c o m e a p p a r e n t later. W e have said t h a t a tendency for unisexual sibships w o u l d result in a clumped distribution of observed frequencies.995. T a b l e 4.215. as in m u c h w o r k in M e n d e l i a n genetics.2 / t h e BINoMiAL d i s t r i b u t i o n 63 until Section 6. since it is k n o w n t h a t in h u m a n p o p u l a t i o n s the sex ratio at b i r t h is n o t 1:1.785.4. 4. In f u t u r e examples we shall try to p o i n t out in each case w h y a onetailed or a two-tailed test is being used.1 a n d those in Table 4. infected a n d n o n i n fected. In such cases we are generally interested in o n e tail of the distribution only.3. we study cases in which sample size k is very large a n d o n e of the events (represented by p r o b a b i l i t y q) is very m u c h m o r e f r e q u e n t t h a n the o t h e r (represented by probability p). 12 siblings) in which t w o alternative states occurred at varying frequencies (American a n d foreign. b u t it is evident t h a t there a r e some families which " r u n t o girls" a n d similarly those which " r u n t o boys. In the sex r a t i o d a t a of T a b l e 4. 5 insects.480. the sex r a t i o d a t a o b t a i n e d by Geissler (1889) f r o m hospital records in Saxony. As the sex r a t i o varies in different h u m a n populations. W e have seen t h a t the e x p a n s i o n of the binomial (p + qf is quite tiresome w h e n k is large.3 The Poisson distribution In the typical a p p l i c a t i o n of the binomial we h a d relatively small s a m p l e s (2 students.230.
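The binomial fitting just described can be checked in a few lines of code. The sketch below (not part of the original text) computes the absolute expected frequencies for the Geissler sibships from the empirical proportion of males quoted above:

```python
from math import comb

p_male = 0.519215          # empirical proportion of males (value quoted in the text)
q_female = 1 - p_male
k, n = 12, 6115            # sibship size and number of sibships

# Absolute expected binomial frequencies for 0..12 males per sibship
expected = [n * comb(k, y) * p_male**y * q_female**(k - y) for y in range(k + 1)]
print([round(e, 1) for e in expected])
```

The expected frequencies necessarily sum to the number of sibships, 6115, and peak at 6 males, near the mean of 6.23 males per sibship.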

[TABLE 4.4 Sex ratios in 6115 sibships of 12 children each, after Geissler (1889): for each possible number of males and females per sibship, the observed frequencies, the absolute expected frequencies computed from p♂ = 0.519215, and the deviations of observed from expected frequencies.]

represented by the combinatorial terms discussed in the previous section. The tail of the distribution in question is represented by the terms p⁰qᵏ, C(k, 1)p¹qᵏ⁻¹, C(k, 2)p²qᵏ⁻², C(k, 3)p³qᵏ⁻³, and so forth. The expressions of the form C(k, i) are the binomial coefficients. The first term represents no rare events and k frequent events in a sample of k events; the second term, one rare event and k − 1 frequent events; the third term, two rare events and k − 2 frequent events; and so forth. Although the desired tail of the curve could be computed by this expression, as long as sufficient decimal accuracy is maintained, it is customary in such cases to compute another distribution, the Poisson distribution, which closely approximates the desired results. As a rule of thumb, we may use the Poisson distribution to approximate the binomial distribution when the probability of the rare event p is less than 0.1 and the product kp (sample size × probability) is less than 5.

The Poisson distribution is also a discrete frequency distribution of the number of times a rare event occurs. But, in contrast to the binomial distribution, the Poisson distribution applies to cases where the number of times that an event does not occur is infinitely large. For purposes of our treatment here, a Poisson variable will be studied in samples taken over space or time. An example of the first would be the number of moss plants in a sampling quadrat on a hillside or the number of parasites on an individual host; an example of a temporal sample is the number of mutations occurring in a genetic strain in the time interval of one month, or the reported cases of influenza in one town during one week. The Poisson variable Y will be the number of events per sample. It can assume discrete values from 0 on up.

To be distributed in Poisson fashion, the variable must have two properties: (1) Its mean must be small relative to the maximum possible number of events per sampling unit. Thus the event should be "rare." But this means that our sampling unit of space or time must be large enough to accommodate a potentially substantial number of events. For example, a quadrat in which moss plants are counted must be large enough that a substantial number of moss plants could occur there physically if the biological conditions were such as to favor the development of numerous moss plants in the quadrat. A quadrat consisting of a 1-cm square would be far too small for mosses to be distributed in Poisson fashion. Similarly, a time span of 1 minute would be unrealistic for reporting new influenza cases in a town, but within 1 week a great many such cases could occur. (2) An occurrence of the event must be independent of prior occurrences within the sampling unit. Thus, the presence of one moss plant in a quadrat must not enhance or diminish the probability that other moss plants are developing in the quadrat. Similarly, the fact that one influenza case has been reported must not affect the probability of reporting subsequent influenza cases. Events that meet these conditions (rare and random events) should be distributed in Poisson fashion.

The purpose of fitting a Poisson distribution to numbers of rare events in nature is to test whether the events occur independently with respect to each

other. If they do occur independently, they will follow the Poisson distribution. If the occurrence of one event impedes that of a second such event in the sampling unit, we obtain a repulsed, or spatially or temporally uniform, distribution. If the occurrence of one event enhances the probability of a second such event, we obtain a clumped, or contagious, distribution. The Poisson can be used as a test for randomness or independence of distribution not only spatially but also in time, as some examples below will show.

The Poisson distribution is named after the French mathematician Poisson, who described it in 1837. It is an infinite series whose terms add to 1 (as must be true for any probability distribution). The series can be represented as

    1/e^μ,  μ/1!e^μ,  μ²/2!e^μ,  μ³/3!e^μ,  μ⁴/4!e^μ,  …        (4.2)

where the terms are the relative expected frequencies corresponding to the following counts of the rare event Y: 0, 1, 2, 3, 4, … Thus, the first of these terms represents the relative expected frequency of samples containing no rare event; the second term, one rare event; the third term, two rare events; and so on. The denominator of each term contains e^μ, where e is the base of the natural, or Napierian, logarithms, a constant whose value, accurate to 5 decimal places, is 2.71828. We recognize μ as the parametric mean of the distribution; it is a constant for any given problem. The exclamation mark after the coefficient in the denominator means "factorial," as explained in the previous section.

One way to learn more about the Poisson distribution is to apply it to an actual case. At the top of Box 4.1 is a well-known result from the early statistical literature, based on the distribution of yeast cells in 400 squares of a hemacytometer, a counting chamber such as is used in making counts of blood cells and other microscopic objects suspended in liquid. Column (1) lists the number of yeast cells observed in each hemacytometer square, and column (2) gives the observed frequency, that is, the number of squares containing a given number of yeast cells. We note that 75 squares contained no yeast cells, but that most squares held either 1 or 2 cells. Only 17 squares contained 5 or more yeast cells.

Why would we expect this frequency distribution to be distributed in Poisson fashion? We have here a relatively rare event, the frequency of yeast cells per hemacytometer square, the mean of which has been calculated and found to be 1.8. That is, on the average there are 1.8 cells per square. Relative to the amount of space provided in each square and the number of cells that could have come to rest in any one square, the actual number found is low indeed. We might also expect that the occurrence of individual yeast cells in a square is independent of the occurrence of other yeast cells. This is a commonly encountered class of application of the Poisson distribution.

The mean of the rare event is the only quantity that we need to know to calculate the relative expected frequencies of a Poisson distribution. Since we do not know the parametric mean of the yeast cells in this problem, we employ an estimate (the sample mean) and calculate expected frequencies of a Poisson distribution with μ equal to the mean of the observed frequency distribution of Box 4.1.
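The terms of Expression (4.2) are easy to evaluate directly. The following sketch (a supplement, not part of the original text) uses the yeast-cell mean of 1.8 and confirms that the series, truncated at a generous Y = 24, sums to essentially 1:

```python
from math import exp, factorial

mu = 1.8   # mean number of yeast cells per hemacytometer square (Box 4.1)

# Terms of Expression (4.2): relative expected frequencies for Y = 0, 1, 2, ...
rel_freq = [mu**y / (factorial(y) * exp(mu)) for y in range(25)]
print(round(rel_freq[0], 5))   # relative frequency of squares with no cells
```

Multiplying each relative frequency by n = 400 squares gives the absolute expected frequencies of Box 4.1.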

BOX 4.1
Calculation of expected Poisson frequencies.
Yeast cells in 400 squares of a hemacytometer: Ȳ = 1.8 cells per square; n = 400 squares sampled.

  (1) Number of        (2) Observed       (3) Absolute expected   (4) Deviation from
  cells per square Y   frequencies f      frequencies f̂           expectation f − f̂
  0                         75                 66.12                   +
  1                        103                119.02                   −
  2                        121                107.11                   +
  3                         54                 64.27                   −
  4                         30                 28.92                   +
  5                         13                 10.41                   +
  6                          2                  3.12                   −
  7                          1                  0.80                   +
  8                          0                  0.18                   −
  9 and beyond               1                  0.05                   +
  Total                    400                400.00

(In the printed box the classes from 5 upward, 17 squares observed against about 14.6 expected, are united by a bracket.)

Computational steps
1. Find e^Ȳ in a table of exponentials or compute it using an exponential key: e^Ȳ = e^1.8 = 6.0496.
2. Compute n/e^Ȳ = 400/6.0496 = 66.12. This is f̂₀, the absolute expected frequency for Y = 0.
3. Enter Ȳ as a constant multiplier and compute f̂₁ = Ȳ(n/e^Ȳ) = 1.8(66.12) = 119.02.
4. At each subsequent step, multiply the result of the previous step by Ȳ and then divide by the appropriate integer: f̂₂ = f̂₁(Ȳ/2) = 107.11, f̂₃ = f̂₂(Ȳ/3) = 64.27, f̂₄ = f̂₃(Ȳ/4) = 28.92, and so on; for example, f̂₈ = f̂₇(Ȳ/8) = 0.80(0.225) = 0.18.

Source: "Student" (1907).

It is convenient, for purposes of computation, to rewrite Expression (4.2) as a recursion formula:

    f̂ᵢ = f̂ᵢ₋₁(Ȳ/i)    for i = 1, 2, …,  where f̂₀ = n/e^Ȳ        (4.3)

Note first of all that the parametric mean μ has been replaced by the sample mean Ȳ. Each term developed by this recursion formula is mathematically exactly the same as its corresponding term in Expression (4.2). It is important to make no computational error, since in such a chain multiplication the correctness of each term depends on the accuracy of the term before it. With f̂₀ set to 1/e^Ȳ, Expression (4.3) yields relative expected frequencies. If, as is more usual, absolute expected frequencies are desired, simply set the first term f̂₀ to n/e^Ȳ, where n is the number of samples, and then proceed with the computational steps as before. The actual computation is illustrated in Box 4.1, and the expected frequencies so obtained are listed in column (3) of the frequency distribution.

What have we learned from this computation? When we compare the observed with the expected frequencies, we notice quite a good fit of our observed frequencies to a Poisson distribution of mean 1.8, although we have not as yet learned a statistical test for goodness of fit (this will be covered in Chapter 13). No clear pattern of deviations from expectation is shown. We cannot test a hypothesis about the mean, because the mean of the expected distribution was taken from the sample mean of the observed variates. Clumping or aggregation would indicate that the probability that a second yeast cell will be found in a square is not independent of the presence of the first one, but is higher than the probability for the first cell. This would result in a clumping of the items in the classes at the tails of the distribution, so that there would be some squares with larger numbers of cells than expected, others with fewer numbers. The yeast cells seem to be randomly distributed in the counting chamber, indicating thorough mixing of the suspension. Red blood cells, on the other hand, will often stick together because of an electrical charge unless the proper suspension fluid is used. This so-called rouleaux effect would be indicated by clumping of the observed frequencies.

Note that in Box 4.1, as in the subsequent tables giving examples of the application of the Poisson distribution, we group the low frequencies at one tail of the curve, uniting them by means of a bracket. This tends to simplify the patterns of distribution somewhat. However, the main reason for this grouping is related to the G test for goodness of fit (of observed to expected frequencies), which is discussed in Chapter 13. For purposes of this test, no expected frequency f̂ should be less than 5.

Before we turn to other examples, we need to learn a few more facts about the Poisson distribution. You probably noticed that in computing expected frequencies, we needed to know only one parameter, the mean of the distribution. By comparison, in the binomial distribution we needed two parameters, p and k. Thus, the mean completely defines the shape of a given Poisson distribution. From this it follows that the variance is some function of the mean. In a Poisson distribution, we have a very simple relationship between the two: μ = σ², the variance being equal to the mean. This relationship between variance and mean suggests a rapid test of whether an observed frequency distribution is distributed in Poisson fashion even without fitting expected frequencies to the data. We simply compute a coefficient of dispersion

    CD = s²/Ȳ

This value will be near 1 in distributions that are essentially Poisson distributions, will be > 1 in clumped samples, and will be < 1 in cases of repulsion. The biological interpretation of the dispersion pattern varies with the problem. The variance of the number of yeast cells per square based on the observed frequencies in Box 4.1 equals 1.965, not much larger than the mean of 1.8, indicating again that the yeast cells are distributed in Poisson fashion, hence randomly; CD = 1.092.

The shapes of five Poisson distributions of different means are shown in Figure 4.3 as frequency polygons (a frequency polygon is formed by the line connecting successive midpoints in a bar diagram). We notice that for a low value of the mean the frequency polygon is extremely L-shaped, but with an increase in the value of μ the distributions become humped and eventually nearly symmetrical.

We conclude our study of the Poisson distribution with a consideration of two examples. The first example (Table 4.5) shows the distribution of a number
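Both the recursion formula (4.3) and the coefficient of dispersion can be checked against the Box 4.1 data. The following sketch (a supplement, not part of the original text) reproduces the expected frequencies and the CD quoted above:

```python
from math import exp

y_bar, n = 1.8, 400    # sample mean and number of squares (Box 4.1)

# Recursion formula (4.3): f0 = n / e**Ybar, then fi = f(i-1) * Ybar / i
f_exp = [n / exp(y_bar)]
for i in range(1, 10):
    f_exp.append(f_exp[-1] * y_bar / i)
print([round(v, 2) for v in f_exp])   # 66.12, 119.02, 107.11, 64.27, ...

# Rapid test for agreement with a Poisson: coefficient of dispersion CD = s2 / mean
observed = {0: 75, 1: 103, 2: 121, 3: 54, 4: 30, 5: 13, 6: 2, 7: 1, 8: 0, 9: 1}
n_obs = sum(observed.values())
mean = sum(y * fr for y, fr in observed.items()) / n_obs
s2 = sum(fr * (y - mean) ** 2 for y, fr in observed.items()) / (n_obs - 1)
cd = s2 / mean
print(round(cd, 3))   # near 1: consistent with a random (Poisson) distribution
```

The computed CD of 1.092 matches the value given in the text.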

[FIGURE 4.3 Frequency polygons of the Poisson distribution for various values of the mean; abscissa: number of rare events per sample (0 to 18); ordinate: relative expected frequency.]

of accidents per woman from an accident record of 647 women working in a munitions factory during a five-week period. The sampling unit is one woman during this period. The rare event is the number of accidents that happened to a woman in this period. The coefficient of dispersion is 1.488, and this is clearly reflected in the observed frequencies, which are greater than expected in the tails and less than expected in the center. This relationship is easily seen in the deviations in the last column (observed minus expected frequencies) and shows a characteristic clumped pattern. The model assumes, of course, that the accidents are not fatal or very serious and thus do not remove the individual from further exposure. The noticeable clumping in these data probably arises

TABLE 4.5
Accidents in 5 weeks to 647 women working on high-explosive shells.

  Number of accidents     Observed         Poisson expected
  per woman, Y            frequencies f    frequencies f̂
  0                           447              406.3
  1                           132              189.0
  2                            42               44.0
  3                            21                6.8
  4                             3                0.8
  5+                            2                0.1
  Total                       647              647.0

  Ȳ = 0.4652    s² = 0.692    CD = 1.488

Source: Greenwood and Yule (1920).
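The quantities at the foot of Table 4.5 follow directly from the observed frequencies; a minimal sketch (treating the open class "5+" as 5, an assumption of this illustration):

```python
observed = {0: 447, 1: 132, 2: 42, 3: 21, 4: 3, 5: 2}   # Table 4.5, '5+' taken as 5
n = sum(observed.values())                               # 647 women
mean = sum(y * f for y, f in observed.items()) / n
s2 = sum(f * (y - mean) ** 2 for y, f in observed.items()) / (n - 1)
cd = s2 / mean                                           # coefficient of dispersion
print(round(mean, 4), round(s2, 3), round(cd, 2))
```

A CD well above 1 signals the clumping discussed in the text.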

either because some women are accident-prone or because some women have more dangerous jobs than others. Using only information on the distributions of accidents, one cannot distinguish between the two alternatives, which suggest very different changes that should be made to reduce the numbers of accidents.

The second example (Table 4.6) is extracted from an experimental study of the effects of different densities of the Azuki bean weevil. Larvae of these weevils enter the beans, feed and pupate inside them, and then emerge through an emergence hole. Thus the number of holes per bean is a good measure of the number of adults that have emerged. The rare event in this case is the presence of the weevil in the bean. We note that the distribution is strongly repulsed: there are many more beans containing one weevil than the Poisson distribution would predict. A statistical finding of this sort leads us to investigate the biology of the phenomenon. In this case it was found that the adult female weevils tended to deposit their eggs evenly rather than randomly over the available beans. This prevented the placing of too many eggs on any one bean and precluded heavy competition among the developing larvae on any one bean. A contributing factor was competition among remaining larvae feeding on the same bean, in which generally all but one were killed or driven out. Thus, it is easily understood how the above biological phenomena would give rise to a repulsed distribution.

TABLE 4.6
Azuki bean weevils (Callosobruchus chinensis) emerging from 112 Azuki beans (Phaseolus radiatus).

  Number of weevils      Observed         Poisson expected     Deviation from
  emerging per bean, Y   frequencies f    frequencies f̂        expectation f − f̂
  0                          61               70.4                 −
  1                          50               32.7                 +
  2                           1                7.6                 −
  3                           0                1.2                 −
  4                           0                0.1                 −
  Total                     112              112.0

  Ȳ = 0.4643    s² = 0.269    CD = 0.579

Source: Utida (1943).

Exercises

4.1 The two columns below give fertility of eggs of the CP strain of Drosophila melanogaster raised in 100 vials of 10 eggs each (data from R. R. Sokal). Find the expected frequencies on the assumption of independence of mortality for
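The repulsion in Table 4.6 can be reproduced by fitting a Poisson to the weevil counts with the same recursion used for the yeast cells; this sketch (a supplement, not part of the original text) recovers the expected frequencies in the table:

```python
from math import exp

observed = {0: 61, 1: 50, 2: 1, 3: 0, 4: 0}   # Table 4.6
n = sum(observed.values())                     # 112 beans
mean = sum(y * f for y, f in observed.items()) / n

# Poisson expected frequencies via f0 = n/e**mean, fi = f(i-1) * mean / i
expected = [n / exp(mean)]
for i in range(1, 5):
    expected.append(expected[-1] * mean / i)
print([round(v, 1) for v in expected])         # 70.4, 32.7, 7.6, 1.2, 0.1

# Far more one-weevil beans observed than expected: a repulsed distribution
print(observed[1], "observed vs", round(expected[1], 1), "expected")
```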

each egg in a vial. Use the observed mean. Calculate the expected variance and compare it with the observed variance. Interpret results, knowing that the eggs of each vial are siblings and that the different vials contain descendants from different parent pairs. ANS. σ² = 2.417, s² = 6.628. There is evidence that mortality rates are different for different vials.

  Number of eggs hatched, Y:   0  1  2  3  4  5  6  7  8  9  10
  Number of vials, f:          1  3  8 10  6 15 14 12 13  9   9

4.2 A cross is made in a genetic experiment in Drosophila in which it is expected that ¼ of the progeny will have white eyes and ½ will have the trait called "singed bristles." Assume that the two gene loci segregate independently. (a) What proportion of the progeny should exhibit both traits simultaneously? (b) If four flies are sampled at random, what is the probability that they will all be white-eyed? (c) What is the probability that none of the four flies will have either white eyes or "singed bristles"? (d) If two flies are sampled, what is the probability that at least one of the flies will have either white eyes or "singed bristles" or both traits? ANS. (b) (¼)⁴; (c) [(1 − ¼)(1 − ½)]⁴; (d) 1 − [(1 − ¼)(1 − ½)]².

4.3 The Army Medical Corps is concerned over the intestinal disease X. From previous experience it knows that soldiers suffering from the disease invariably harbor the pathogenic organism in their feces and that to all practical purposes every stool specimen from a diseased person contains the organism. However, the organisms are never abundant, and thus only 20% of all slides prepared by the standard procedure will contain some. (We assume that if an organism is present on a slide it will be seen.) How many slides should laboratory technicians be directed to prepare and examine per stool specimen, so that in case a specimen is positive, it will be erroneously diagnosed negative in fewer than 1% of the cases (on the average)? On the basis of your answer, would you recommend that the Corps attempt to improve its diagnostic methods? ANS. 21 slides.

4.4 Calculate Poisson expected frequencies for the frequency distribution given in Table 2.2 (number of plants of the sedge Carex flacca found in 500 quadrats).

4.5 Those readers who have had a semester or two of calculus may wish to try to prove that Expression (4.1) tends to Expression (4.2) as k becomes indefinitely large (and p becomes infinitesimal, so that μ = kp remains constant). HINT: (1 + x/n)ⁿ → eˣ as n → ∞.

4.6 In human beings the sex ratio of newborn infants is about 100 females : 105 males. Were we to take 10,000 random samples of 6 newborn infants from the total population of such infants for one year, what would be the expected frequency of groups of 6 males, 5 males, 4 males, and so on?

4.7 If the frequency of the gene A is p and the frequency of the gene a is q, what are the expected frequencies of the zygotes AA, Aa, and aa (assuming a diploid zygote represents a random sample of size 2)? What would the expected frequencies be for an autotetraploid (for a locus close to the centromere, a zygote can be thought of as a random sample of size 4)? ANS. For a diploid: P{AA} = p², P{Aa} = 2pq, P{aa} = q². For a tetraploid: P{AAAA} = p⁴, P{AAAa} = 4p³q, P{AAaa} = 6p²q², P{Aaaa} = 4pq³, P{aaaa} = q⁴.

4.8 A population consists of three types of individuals, A₁, A₂, and A₃, with relative frequencies of 0.5, 0.2, and 0.3, respectively. (a) What is the probability of obtaining only individuals of type A₁ in samples of size 1, 2, 3, …, n? (b) What would be the probabilities of obtaining only individuals that were not of type A₁ or A₂ in a sample of size n? (c) What is the probability of obtaining a sample containing at least one representation of each type in samples of size 1, 2, 3, …, n? ANS. (a) 1/2, 1/4, 1/8, …, 1/2ⁿ; (b) (0.3)ⁿ; (c) 0, 0, 0.18, 0.36, …, and for samples of size n the sum of the multinomial terms [n!/(i! j! (n − i − j)!)] (0.5)ⁱ(0.2)ʲ(0.3)ⁿ⁻ⁱ⁻ʲ over all i ≥ 1, j ≥ 1 with i + j ≤ n − 1.

4.9 Summarize and compare the assumptions and parameters on which the binomial and Poisson distributions are based.

4.10 If the average number of weed seeds found in a ¼-ounce sample of grass seed is 1.1429, what would you expect the frequency distribution of weed seeds to be in ninety-eight ¼-ounce samples? (Assume there is random distribution of the weed seeds.)
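As a worked illustration of the approach the sex-ratio exercise calls for (this sketch is a supplement, not an answer key printed in the text), the expected binomial frequencies for samples of 6 newborns can be computed as follows:

```python
from math import comb

p = 105 / 205            # probability that a newborn is male (105 males : 100 females)
n_samples, k = 10_000, 6

# Expected frequencies of 0..6 males among 10,000 samples of 6 newborns
expected = [n_samples * comb(k, y) * p**y * (1 - p)**(k - y) for y in range(k + 1)]
for males, freq in enumerate(expected):
    print(males, "males:", round(freq, 1))
```

The frequencies sum to 10,000 and peak at 3 males, as expected for a per-birth probability only slightly above one-half.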

CHAPTER 5

The Normal Probability Distribution

The theoretical frequency distributions in Chapter 4 were discrete. Their variables assumed values that changed in integral steps (that is, they were meristic variables). Thus, the number of infected insects per sample could be 0 or 1 or 2 but never an intermediate value between these. Similarly, the number of yeast cells per hemacytometer square is a meristic variable and requires a discrete probability function to describe it. However, most variables encountered in biology either are continuous (such as the aphid femur lengths or the infant birth weights used as examples in Chapters 2 and 3) or can be treated as continuous variables for most practical purposes, even though they are inherently meristic (such as the neutrophil counts encountered in the same chapters). Chapter 5 will deal more extensively with the distributions of continuous variables.

Section 5.1 introduces frequency distributions of continuous variables. In Section 5.2 we show one way of deriving the most common such distribution, the normal probability distribution. Then we examine its properties in Section 5.3. A few applications of the normal distribution are illustrated in Section 5.4. A graphic technique for pointing out departures from normality and for estimating mean and standard deviation in approximately normal distributions is given in Section 5.5, as are some of the reasons for departure from normality in observed frequency distributions.

5.1 Frequency distributions of continuous variables

For continuous variables, the theoretical probability distribution, or probability density function, can be represented by a continuous curve, as shown in Figure 5.1. The ordinate of the curve gives the density for a given value of the variable shown along the abscissa. By density we mean the relative concentration of variates along the Y axis (as indicated in Figure 2.1). In order to compare the theoretical with the observed frequency distribution, it is necessary to divide the two into corresponding classes, as shown by the vertical lines in Figure 5.1. Probability density functions are defined so that the expected frequency of observations between two class limits (vertical lines) is given by the area between these limits under the curve. The total area under the curve is therefore equal to the sum of the expected frequencies (1.0 or n, depending on whether relative or absolute expected frequencies have been calculated).

[FIGURE 5.1 A probability distribution of a continuous variable.]

When you form a frequency distribution of observations of a continuous variable, your choice of class limits is arbitrary, because all values of a variable are theoretically possible. In a continuous distribution, one cannot evaluate the probability that the variable will be exactly equal to a given value such as 3 or 3.5. One can only estimate the frequency of observations falling between two limits. This is so because the area of the curve corresponding to any point along the curve is an infinitesimal. Thus, to calculate expected frequencies for a continuous distribution, we have to calculate the area under the curve between the class limits. Later in this chapter we shall see how this is done for the normal frequency distribution.

Continuous frequency distributions may start and terminate at finite points along the Y axis, as shown in Figure 5.1, or one or both ends of the curve may extend indefinitely, as will be seen later in Figure 5.3. The idea of an area under a curve when one or both ends go to infinity may trouble those of you not acquainted with calculus. Fortunately, this is not a great conceptual stumbling block, since in all the cases that we shall encounter, the tail of the curve will approach the Y axis rapidly enough that the portion of the area beyond a certain point will for all practical purposes be zero and the frequencies it represents will be infinitesimal.
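The expected frequency between two class limits is the area under the curve between them; for the normal distribution this area can be obtained from the error function, as in the following sketch (the mean, standard deviation, and class limits here are invented for illustration only):

```python
from math import erf, sqrt

def normal_area(a, b, mu, sigma):
    """Area under a normal curve between class limits a and b."""
    z = lambda x: (x - mu) / (sigma * sqrt(2.0))
    return 0.5 * (erf(z(b)) - erf(z(a)))

# Hypothetical parameters and class limits for an approximately normal variable
mu, sigma = 50.0, 10.0
limits = [20, 30, 40, 50, 60, 70, 80]
probs = [normal_area(a, b, mu, sigma) for a, b in zip(limits, limits[1:])]
print([round(p, 4) for p in probs])   # symmetrical about the mean
```

Multiplying these relative expected frequencies by the sample size n yields absolute expected frequencies, exactly as was done for the discrete distributions of Chapter 4.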

Fortunately, in such cases the tails of the curve approach the Y axis rapidly enough that the portion of the area beyond a certain point will for all practical purposes be zero, and the frequencies it represents will be infinitesimal. Some inherently meristic variables, such as counts of blood cells, range into the thousands. Such variables can, for practical purposes, be treated as though they were continuous. We may also fit continuous frequency distributions to some sets of meristic data (for example, the number of teeth in an organism), because we have reason to believe that underlying biological variables that cause differences in numbers of the structure are really continuous, some genetic, others environmental. We shall now proceed to discuss the most important probability density function in statistics, the normal frequency distribution.

5.2 Derivation of the normal distribution

There are several ways of deriving the normal frequency distribution from elementary assumptions. Most of these require more mathematics than we expect of our readers. We shall therefore use a largely intuitive approach, which we have found of heuristic value.

Let us consider a binomial distribution of the familiar form (p + q)^k in which k becomes indefinitely large. What type of biological situation could give rise to such a binomial distribution? An example might be one in which many factors cooperate additively in producing a biological result. The following hypothetical case is possibly not too far removed from reality. The intensity of skin pigmentation in an animal will be due to the summation of many factors, some genetic, others environmental. As a simplifying assumption, let us state that every factor can occur in two states only: present or absent. When the factor is present, it contributes one unit of pigmentation to skin color, but it contributes nothing to pigmentation when it is absent. Each factor, regardless of its nature or origin, has the identical effect, and the effects are additive: if three out of five possible factors are present in an individual, the pigmentation intensity will be three units, or the sum of three contributions of one unit each. One final assumption: Each factor has an equal probability of being present or absent in a given individual. Thus, p = P[F] = 0.5, the probability that the factor is present, while q = P[f] = 0.5, the probability that the factor is absent.

With only one factor (k = 1), expansion of the binomial (p + q)^1 would yield two pigmentation classes among the animals, as follows:

    {F, f}        pigmentation classes (probability space)
    {0.5, 0.5}    expected frequency
    {1, 0}        pigmentation intensity

Half the animals would have intensity 1, the other half intensity 0.

With k = 2 factors present in the population (the factors are assumed to occur independently of each other), the distribution of pigmentation intensities would be represented by the expansion of the binomial (p + q)^2:

    {FF, Ff, ff}         pigmentation classes (probability space)
    {0.25, 0.5, 0.25}    expected frequency
    {2, 1, 0}            pigmentation intensity

One-fourth of the individuals would have pigmentation intensity 2, one-half would have intensity 1, and the remaining fourth would have intensity 0. The number of classes in the binomial increases with the number of factors, and the expected frequencies at the tails become progressively smaller as k increases. The binomial distribution for k = 10 is graphed as a histogram in Figure 5.2 (rather than as a bar diagram, as it should be drawn). We note that the graph approaches the familiar bell-shaped outline of the normal frequency distribution (seen in Figures 5.3 and 5.4). Were we to expand the expression for k = 20, our histogram would be so close to a normal frequency distribution that we could not show the difference between the two on a graph the size of this page.

FIGURE 5.2  Histogram based on relative expected frequencies resulting from expansion of the binomial (0.5 + 0.5)^10. The Y axis measures the number of pigmentation factors F.

At the beginning of this procedure, we made a number of severe limiting assumptions for the sake of simplicity. What happens when these are removed? First, in a more realistic situation, factors would be permitted to occur in more than two states: one state making a large contribution, a second state a smaller contribution, and so forth. However, it can be shown that the multinomial (p + q + r + ··· + z)^k approaches the normal frequency distribution as k approaches infinity. Second, different factors may be present in different frequencies and may have different quantitative effects. As long as these are additive and independent, normality is still approached as k approaches infinity. Third, the frequency distributions graphed so far are symmetrical because p = q = 0.5; when p ≠ q, the histogram is at first asymmetrical. This is intuitively difficult to see, but it can be shown that the distribution also approaches normality as k approaches infinity, and that when k, p, and q are such that kpq > 3, the normal distribution will be closely approximated. Lifting these restrictions makes the assumptions leading to a normal distribution compatible with innumerable biological situations. It is therefore not surprising that so many biological variables are approximately normally distributed.
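The convergence just described is easy to check numerically. The following sketch is our addition, not part of the original text; it tabulates the expected relative frequencies of the binomial (0.5 + 0.5)^10 of Figure 5.2 next to the normal density with the same mean (kp = 5) and variance (kpq = 2.5):

```python
from math import comb, exp, pi, sqrt

def binomial_probs(k, p=0.5):
    """Expected relative frequencies from expanding (p + q)^k."""
    q = 1 - p
    return [comb(k, f) * p**f * q**(k - f) for f in range(k + 1)]

def normal_density(y, mu, sigma):
    """Height of the normal curve at y, as in Expression (5.1)."""
    return exp(-(y - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

k, p = 10, 0.5
mu, sigma = k * p, sqrt(k * p * (1 - p))   # 5 and sqrt(2.5)
for f, prob in enumerate(binomial_probs(k, p)):
    print(f"{f:2d}  binomial {prob:.4f}   normal approximation {normal_density(f, mu, sigma):.4f}")
```

Even at k = 10 the two columns agree to about two decimal places; at k = 20 (where kpq = 5 > 3) the discrepancy would no longer be visible on a graph of this size, as the text observes.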

Let us summarize the conditions that tend to produce normal frequency distributions: (1) that there be many factors; (2) that these factors be independent in occurrence; (3) that the factors be independent in effect, that is, that their effects be additive; and (4) that they make equal contributions to the variance. The fourth condition we are not yet in a position to discuss; we mention it here only for completeness. It will be discussed in Chapter 7.

5.3 Properties of the normal distribution

Formally, the normal probability density function can be represented by the expression

    Z = [1/(σ√(2π))] e^(−(Y − μ)²/(2σ²))        (5.1)

Here Z indicates the height of the ordinate of the curve, which represents the density of the items. It is the dependent variable in the expression, being a function of the variable Y. There are two constants in the equation: π, well known to be approximately 3.14159 (making 1/√(2π) approximately 0.398), and e, the base of the natural logarithms, whose value approximates 2.71828.

There are two parameters in a normal probability density function. These are the parametric mean μ and the parametric standard deviation σ, which determine the location and shape of the distribution. Thus, there is not just one normal distribution, as might appear to the uninitiated who keep encountering the same bell-shaped image in textbooks. Rather, there are an infinity of such curves, since these parameters can assume an infinity of values. This is illustrated by the three normal curves in Figure 5.3, representing the same total frequencies.

FIGURE 5.3  Illustration of how changes in the two parameters of the normal distribution affect the shape and location of the normal probability density function. (A) μ = 4, σ = 1; (B) μ = 8, σ = 1; (C) μ = 8, σ = 0.5.
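Expression (5.1) can be evaluated directly. The short sketch below is our illustration, not part of the original text; it computes the ordinate Z for the three curves of Figure 5.3 at their means, where the height of the curve is 1/(σ√(2π)):

```python
from math import exp, pi, sqrt

def Z(y, mu, sigma):
    """Normal probability density function, Expression (5.1)."""
    return exp(-(y - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# The three curves of Figure 5.3, evaluated at their means
print(Z(4, mu=4, sigma=1))     # curve A: about 0.3989, i.e. 1/sqrt(2*pi)
print(Z(8, mu=8, sigma=1))     # curve B: same height as A, shifted location
print(Z(8, mu=8, sigma=0.5))   # curve C: half the spread, hence twice the height
```

The printout makes the roles of the two parameters concrete: μ moves the curve along the axis without changing its height, while halving σ doubles the maximum ordinate.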

Curves A and B differ in their locations and hence represent populations with different means. Curves B and C represent populations that have identical means but different standard deviations. Since the standard deviation of curve C is only half that of curve B, it presents a much narrower appearance.

In theory, a normal frequency distribution extends from negative infinity to positive infinity along the axis of the variable (labeled Y, although it is frequently the abscissa). This means that a normally distributed variable can assume any value, however large or small, although values farther from the mean than plus or minus three standard deviations are quite rare, their relative expected frequencies being very small. This can be seen from Expression (5.1). When Y is very large or very small, the term (Y − μ)²/2σ² will necessarily become very large. Hence e raised to the negative power of that term will be very small, and Z will therefore be very small. The curve is symmetrical around the mean. Therefore, the mean, median, and mode of the normal distribution are all at the same point.

The following percentages of items in a normal frequency distribution lie within the indicated limits:

    μ ± σ contains 68.27% of the items
    μ ± 2σ contains 95.45% of the items
    μ ± 3σ contains 99.73% of the items

Conversely,

    50% of the items fall in the range μ ± 0.674σ
    95% of the items fall in the range μ ± 1.960σ
    99% of the items fall in the range μ ± 2.576σ

These relations are shown in Figure 5.4.

How have these percentages been calculated? The direct calculation of any portion of the area under the normal curve requires an integration of the function shown as Expression (5.1). Fortunately, for those of you who do not know calculus (and even for those of you who do), the integration has already been carried out and is presented in an alternative form of the normal distribution: the normal distribution function (the theoretical cumulative distribution function of the normal probability density function), also shown in Figure 5.4. It gives the total frequency from negative infinity up to any point along the abscissa. We can therefore look up directly the probability that an observation will be less than a specified value of Y. For example, Figure 5.4 shows that the total frequency up to the mean is 50.00% and the frequency up to a point one standard deviation below the mean is 15.87%. These frequencies are found, graphically, by raising a vertical line from a point, such as −σ, until it intersects the cumulative distribution curve, and then reading the frequency (15.87%) off the ordinate. The probability that an observation will fall between two arbitrary points can be found by subtracting the probability that an observation will fall below the

chapter 5 / the normal probability distribution

FIGURE 5.4  Areas under the normal probability density function and the cumulative normal distribution function.

lower point from the probability that an observation will fall below the upper point. For example, we can see from Figure 5.4 that the probability that an observation will fall between the mean and a point one standard deviation below the mean is 0.5000 − 0.1587 = 0.3413.

The normal distribution function is tabulated in Table II in Appendix A2, "Areas of the normal curve," where, for convenience in later calculations, 0.5 has been subtracted from all of the entries. This table therefore lists the proportion of the area between the mean and any point a given number of standard deviations above it. Thus, for example, the area between the mean and the point 0.50 standard deviations above the mean is 0.1915 of the total area of the curve. Similarly, the area between the mean and the point 2.64 standard deviations above the mean is 0.4959 of the curve. A point 4.0 standard deviations from the mean includes 0.499,968 of the area between it and the mean. However, since the normal distribution extends from negative to positive infinity, one needs


to go an infinite distance from the mean to reach an area of 0.5. The use of the table of areas of the normal curve will be illustrated in the next section.

A sampling experiment will give you a "feel" for the distribution of items sampled from a normal distribution.

Experiment 5.1. You are asked to sample from two populations. The first one is an approximately normal frequency distribution of 100 wing lengths of houseflies. The second population deviates strongly from normality. It is a frequency distribution of the total annual milk yield of 100 Jersey cows. Both populations are shown in Table 5.1. You are asked to sample from them repeatedly in order to simulate sampling from an infinite population. Obtain samples of 35 items from each of the two populations. This can be done by obtaining two sets of 35 two-digit random numbers from the table of random numbers (Table I), with which you became familiar in Experiment 4.1. Write down the random numbers in blocks of five, and copy next to them the value of Y (for either wing length or milk yield) corresponding to the random number. An example of such a block of five numbers and the computations required for it are shown in the

TABLE 5.1
Populations of wing lengths and milk yields.
Column 1. Rank number.
Column 2. Lengths (in mm × 10⁻¹) of 100 wings of houseflies arrayed in order of magnitude; μ = 45.5, σ² = 15.21, σ = 3.90; distribution approximately normal.
Column 3. Total annual milk yield (in hundreds of pounds) of 100 two-year-old registered Jersey cows arrayed in order of magnitude; μ = 66.61, σ² = 124.4779, σ = 11.1597; distribution departs strongly from normality.

(1) (2) (3)    (1) (2) (3)    (1) (2) (3)    (1) (2) (3)    (1) (2) (3)
01  36  51     21  42  58     41  45  61     61  47  67     81  49  76
02  37  51     22  42  58     42  45  61     62  47  67     82  49  76
03  38  51     23  42  58     43  45  61     63  47  68     83  49  79
04  38  53     24  43  58     44  45  61     64  47  68     84  49  80
05  39  53     25  43  58     45  45  61     65  47  69     85  50  80
06  39  53     26  43  58     46  45  62     66  47  69     86  50  81
07  40  54     27  43  58     47  45  62     67  47  69     87  50  82
08  40  55     28  43  58     48  45  62     68  47  69     88  50  82
09  40  55     29  43  58     49  45  62     69  47  69     89  50  82
10  40  56     30  43  58     50  45  63     70  48  69     90  50  82
11  41  56     31  43  58     51  46  63     71  48  70     91  51  83
12  41  56     32  44  59     52  46  63     72  48  72     92  51  85
13  41  57     33  44  59     53  46  64     73  48  73     93  51  87
14  41  57     34  44  59     54  46  65     74  48  73     94  51  88
15  41  57     35  44  60     55  46  65     75  48  74     95  52  88
16  41  57     36  44  60     56  46  65     76  48  74     96  52  89
17  42  57     37  44  60     57  46  65     77  48  74     97  53  93
18  42  57     38  44  60     58  46  65     78  49  74     98  53  94
19  42  57     39  44  60     59  46  67     79  49  75     99  54  96
20  42  57     40  44  61     60  46  67     80  49  76     00  55  98

Source: Column 2. Data adapted from Sokal and Hunter (1955). Column 3. Data from Canadian government records.


following listing, using the housefly wing lengths as an example:

    Random number    Wing length, Y
    16               41
    59               46
    99               54
    36               44
    21               42

    ΣY  = 227
    ΣY² = 10,413
    Ȳ   = 45.4

Those with ready access to a computer may prefer to program this exercise and take many more samples. These samples and the computations carried out for each sample will be used in subsequent chapters; therefore, preserve your data carefully! In this experiment, consider the 35 variates for each variable as a single sample, rather than breaking them down into groups of five. Since the true mean and standard deviation (μ and σ) of the two distributions are known, you can calculate the expression (Yᵢ − μ)/σ for each variate Yᵢ. Thus, for the first housefly wing length sampled above, you compute

    (41 − 45.5)/3.90 = −1.1538

This means that the first wing length is 1.1538 standard deviations below the true mean of the population. The deviation from the mean measured in standard deviation units is called a standardized deviate or standard deviate. The arguments of Table II, expressing distance from the mean in units of σ, are called standard normal deviates. Group all 35 variates in a frequency distribution; then do the same for milk yields. Since you know the parametric mean and standard deviation, you need not compute each deviate separately, but can simply write down class limits in terms of the actual variable as well as in standard deviation form. The class limits for such a frequency distribution are shown in Table 5.2. Combine the results of your sampling with those of your classmates and study the percentage of the items in the distribution one, two, and three standard deviations to each side of the mean. Note the marked differences in distribution between the housefly wing lengths and the milk yields.
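Readers who take up the authors' suggestion to program this experiment might begin with something like the following sketch (ours, not from the text), which reproduces the computations for the block of five wing lengths listed above, using the parametric values μ = 45.5 and σ = 3.90:

```python
mu, sigma = 45.5, 3.90          # parameters of the housefly wing-length population
sample = [41, 46, 54, 44, 42]   # wing lengths for random numbers 16, 59, 99, 36, 21

total = sum(sample)                     # ΣY  = 227
total_sq = sum(y * y for y in sample)   # ΣY² = 10,413
mean = total / len(sample)              # Ȳ   = 45.4

# Standardized deviates (Y - μ)/σ; the first is -1.1538, as in the text
deviates = [(y - mu) / sigma for y in sample]
print(total, total_sq, mean)
print([round(d, 4) for d in deviates])
```

A full simulation of Experiment 5.1 would draw 35 random ranks, look up the corresponding values in Table 5.1, and tally the deviates into the classes of Table 5.2.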

5.4 Applications of the normal distribution
The normal frequency distribution is the most widely used distribution in statistics, and time and again we shall have recourse to it in a variety of situations. For the moment, we may subdivide its applications as follows.


TABLE 5.2
Table for recording frequency distributions of standard deviates (Yᵢ − μ)/σ for samples of Experiment 5.1.

Wing lengths (μ = 45.5)                      Milk yields (μ = 66.61)

Variates falling
between these limits       Variates     f    Variates     f
−∞  to −3σ
−3σ to −2½σ
−2½σ to −2σ                36, 37            51-55
−2σ to −1½σ                38, 39            56-61
−1½σ to −σ                 40, 41            62-66
−σ  to −½σ                 42, 43            67-72
−½σ to μ                   44, 45            73-77
μ   to ½σ                  46, 47            78-83
½σ  to σ                   48, 49            84-88
σ   to 1½σ                 50, 51            89-94
1½σ to 2σ                  52, 53            95-98
2σ  to 2½σ                 54, 55
2½σ to 3σ
3σ  to +∞

1. We sometimes have to know whether a given sample is normally distributed before we can apply a certain test to it. To test whether a given sample is normally distributed, we have to calculate expected frequencies for a normal curve of the same mean and standard deviation using the table of areas of the normal curve. In this book we shall employ only approximate graphic methods for testing normality. These are featured in the next section.

2. Knowing whether a sample is normally distributed may confirm or reject certain underlying hypotheses about the nature of the factors affecting the phenomenon studied. This is related to the conditions making for normality in a frequency distribution, discussed in Section 5.2. Thus, if we find a given variable to be normally distributed, we have no reason for rejecting the hypothesis that the causal factors affecting the variable are additive and independent and of equal variance. On the other hand, when we find departure from normality, this may indicate certain forces, such as selection, affecting the variable under study. For instance, bimodality may indicate a mixture


of observations from two populations. Skewness of milk yield data may indicate that these are records of selected cows and substandard milk cows have not been included in the record.

3. If we assume a given distribution to be normal, we may make predictions and tests of given hypotheses based upon this assumption. (An example of such an application follows.)

You will recall the birth weights of male Chinese children, illustrated in Box 3.2. The mean of this sample of 9465 birth weights is 109.9 oz, and its standard deviation is 13.593 oz. If you sample at random from the birth records of this population, what is your chance of obtaining a birth weight of 151 oz or heavier? Such a birth weight is considerably above the mean of our sample, the difference being 151 − 109.9 = 41.1 oz. However, we cannot consult the table of areas of the normal curve with a difference in ounces. We must express it in standardized units, that is, divide it by the standard deviation to convert it into a standard deviate. When we divide the difference by the standard deviation, we obtain 41.1/13.593 = 3.02. This means that a birth weight of 151 oz is 3.02 standard deviation units greater than the mean. Assuming that the birth weights are normally distributed, we may consult the table of areas of the normal curve (Table II), where we find a value of 0.4987 for 3.02 standard deviations. This means that 49.87% of the area of the curve lies between the mean and a point 3.02 standard deviations from it.
Conversely, 0.0013, or 0.13%, of the area lies beyond 3.02 standard deviation units above the mean. Thus, assuming a normal distribution of birth weights and a value of σ = 13.593, only 0.13%, or 13 out of 10,000, of the infants would have a birth weight of 151 oz or farther from the mean. It is quite improbable that a single sampled item from that population would deviate by so much from the mean, and if such a random sample of one weight were obtained from the records of an unspecified population, we might be justified in doubting whether the observation did in fact come from the population known to us.

The above probability was calculated from one tail of the distribution. We found the probability that an individual would be greater than the mean by 3.02 or more standard deviations. If we are not concerned whether the individual is heavier or lighter than the mean, but wish to know only how different the individual is from the population mean, an appropriate question would be: Assuming that the individual belongs to the population, what is the probability of observing a birth weight of an individual deviant by a certain amount from the mean in either direction? That probability must be computed by using both tails of the distribution. The previous probability can be simply doubled, since the normal curve is symmetrical. Thus, 2 × 0.0013 = 0.0026. This, too, is so small that we would conclude that a birth weight as deviant as 151 oz is unlikely to have come from the population represented by our sample of male Chinese children.
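The Table II lookups in this example can be reproduced from the error function available in most programming languages. The sketch below is our addition; the book of course intends the printed table:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """P(standard normal deviate < z), built from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = (151 - 109.9) / 13.593                 # 41.1/13.593 = 3.02 standard deviates
area = std_normal_cdf(round(z, 2)) - 0.5   # area between the mean and 3.02σ: 0.4987
one_tail = 0.5 - area                      # area beyond 3.02σ above the mean: 0.0013
two_tail = 2 * one_tail                    # deviation that large in either direction
print(round(z, 2), round(area, 4), round(one_tail, 4))
```

The doubled value comes out near the text's 2 × 0.0013 = 0.0026 (the text doubles the rounded table entry, so the last decimal may differ slightly).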
We can learn one more important point from this example. Our assumption has been that the birth weights are normally distributed. Inspection of the


frequency distribution in Box 3.2, however, shows clearly that the distribution is asymmetrical, tapering to the right. Though there are eight classes above the mean class, there are only six classes below the mean class. In view of this asymmetry, conclusions about one tail of the distribution would not necessarily pertain to the second tail. We calculated that 0.13% of the items would be found beyond 3.02 standard deviations above the mean, which corresponds to 151 oz. In fact, our sample contains 20 items (14 + 5 + 1) beyond the 147.5-oz class, the upper limit of which is 151.5 oz, almost the same as the single birth weight. However, 20 items of the 9465 of the sample is approximately 0.21%, more than the 0.13% expected from the normal frequency distribution. Although it would still be improbable to find a single birth weight as heavy as 151 oz in the sample, conclusions based on the assumption of normality might be in error if the exact probability were critical for a given test. Our statistical conclusions are only as valid as our assumptions about the population from which the samples are drawn.

5.5 Departures from normality: Graphic methods

In many cases an observed frequency distribution will depart obviously from normality. We shall emphasize two types of departure from normality. One is skewness, which is another name for asymmetry; skewness means that one tail of the curve is drawn out more than the other. In such curves the mean and the median will not coincide. Curves are said to be skewed to the right or left, depending upon whether the right or left tail is drawn out. The other type of departure from normality is kurtosis, or "peakedness" of a curve. A leptokurtic curve has more items near the mean and at the tails, with fewer items in the intermediate regions relative to a normal distribution with the same mean and variance. A platykurtic curve has fewer items at the mean and at the tails than the normal curve but has more items in intermediate regions. A bimodal distribution is an extreme platykurtic distribution.

Graphic methods have been developed that examine the shape of an observed distribution for departures from normality. These methods also permit estimates of the mean and standard deviation of the distribution without computation. The graphic methods are based on a cumulative frequency distribution. In Figure 5.4 we saw that a normal frequency distribution graphed in cumulative fashion describes an S-shaped curve, called a sigmoid curve. In Figure 5.5 the ordinate of the sigmoid curve is given as relative frequencies expressed as percentages. The slope of the cumulative curve reflects changes in height of the frequency distribution on which it is based.
Thus the steep middle segment of the cumulative normal curve corresponds to the relatively greater height of the normal curve around its mean. The ordinate in Figures 5.4 and 5.5 is in linear scale, as is the abscissa in Figure 5.4. Another possible scale is the normal probability scale (often simply called probability scale), which can be generated by dropping perpendiculars


FIGURE 5.5  Transformation of cumulative percentages into normal probability scale. (Abscissa: cumulative percent in probability scale, graduated from 1 to 99.)

from the cumulative normal curve, corresponding to given percentages on the ordinate, to the abscissa (as shown in Figure 5.5). The scale represented by the abscissa compensates for the nonlinearity of the cumulative normal curve. It contracts the scale around the median and expands it at the low and high cumulative percentages. This scale can be found on arithmetic or normal probability graph paper (or simply probability graph paper), which is generally available. Such paper usually has the long edge graduated in probability scale, while the short edge is in linear scale. Note that there are no 0% or 100% points on the ordinate. These points cannot be shown, since the normal frequency distribution extends from negative to positive infinity; however long we made our line, we would never reach the limiting values of 0% and 100%.

If we graph a cumulative normal distribution with the ordinate in normal probability scale, it will lie exactly on a straight line. Figure 5.6A shows such a graph drawn on probability paper, while the other parts of Figure 5.6 show a series of frequency distributions variously departing from normality. These are graphed both as ordinary frequency distributions with density on a linear scale (ordinate not shown) and as cumulative distributions as they would appear on

probability paper. They are useful as guidelines for examining the distributions of data on probability paper.

FIGURE 5.6  Examples of some frequency distributions with their cumulative distributions plotted with the ordinate in normal probability scale. (See Box 5.1 for explanation.)

Box 5.1 shows you how to use probability paper to examine a frequency distribution for normality and to obtain graphic estimates of its mean and standard deviation. The method works best for fairly large samples (n > 50). The method does not permit the plotting of the last cumulative frequency, 100%,

BOX 5.1
Graphic test for normality of a frequency distribution and estimate of mean and standard deviation. Use of arithmetic probability paper.

Birth weights of male Chinese in ounces, from Box 3.2.

(1)       (2)          (3)     (4)            (5)
Class     Upper class          Cumulative     Percent cumulative
mark, Y   limit        f       frequencies, F frequencies
 59.5      63.5           2        2            0.02
 67.5      71.5           6        8            0.08
 75.5      79.5          39       47            0.50
 83.5      87.5         385      432            4.6
 91.5      95.5         888     1320           13.9
 99.5     103.5        1729     3049           32.2
107.5     111.5        2240     5289           55.9
115.5     119.5        2007     7296           77.1
123.5     127.5        1233     8529           90.1
131.5     135.5         641     9170           96.9
139.5     143.5         201     9371           99.0
147.5     151.5          74     9445           99.79
155.5     159.5          14     9459           99.94
163.5     167.5           5     9464           99.99
171.5     175.5           1     9465          100.0

Computational steps

1. Prepare a frequency distribution as shown in columns (1), (2), and (3).
2. Form a cumulative frequency distribution as shown in column (4). It is obtained by successive summation of the frequency values.
3. In column (5) express the cumulative frequencies as percentages of total sample size n, which is 9465 in this example. These percentages are 100 times the values of column (4) divided by 9465.
4. Graph the upper class limit of each class along the abscissa (in linear scale) against percent cumulative frequency along the ordinate (in probability scale) on normal probability paper (see Figure 5.7). A straight line is fitted to the points by eye, preferably using a transparent plastic ruler, which permits all the points to be seen as the line is drawn. In drawing the line, most weight should be given to the points between cumulative frequencies of 25% to 75%. This is because a difference of a single item may make appreciable changes in the percentages at the tails. We notice that the upper frequencies deviate to the right of the straight line. This is typical of data that are skewed to the right (see Figure 5.6D).

Such a graph permits the rapid estimation of the mean and standard deviation of a sample. The mean is approximated by a graphic estimation of the median. The more normal the distribution is, the closer the mean will be to the median.

5.5 / departures from normality: graphic methods 89

BOX 5.1 Continued
The median is estimated by dropping a perpendicular from the intersection of the 50% point on the ordinate and the cumulative frequency curve to the abscissa (see Figure 5.7). The estimate of the mean of 110.7 oz is quite close to the computed mean of 109.9 oz.

The standard deviation can be estimated by dropping similar perpendiculars from the intersections of the 15.9% and the 84.1% points with the cumulative curve, respectively. These points enclose the portion of a normal curve represented by μ ± σ. By measuring the difference between these perpendiculars and dividing this by 2, we obtain an estimate of one standard deviation. In this instance the estimate is s = 13.6, since the difference is 27.2 oz divided by 2. This is a close approximation to the computed value of 13.59 oz.

FIGURE 5.7 Graphic analysis of data from Box 5.1. Birth weights of male Chinese (in oz.).
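The graphic procedure of Box 5.1 can also be sketched numerically. The following Python sketch is our own illustration, not from the book (the function names are ours): it reads the 50% point as the estimate of the mean and half the spread between the 15.9% and 84.1% points as the estimate of the standard deviation.

```python
import statistics

def percentile(sorted_xs, p):
    """Linear-interpolated empirical percentile (p in [0, 1])."""
    n = len(sorted_xs)
    k = p * (n - 1)
    lo = int(k)
    hi = min(lo + 1, n - 1)
    return sorted_xs[lo] + (k - lo) * (sorted_xs[hi] - sorted_xs[lo])

def graphic_estimates(xs):
    """Mean estimated by the median (50% point); standard deviation by
    half the distance between the 15.9% and 84.1% points, which bracket
    mu +/- sigma under a normal distribution."""
    xs = sorted(xs)
    est_mean = percentile(xs, 0.50)
    est_sd = (percentile(xs, 0.841) - percentile(xs, 0.159)) / 2
    return est_mean, est_sd
```

For approximately normal data the two estimates track the computed mean and standard deviation closely; for skewed data, as in Box 5.1, the median-based mean estimate drifts away from the arithmetic mean.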


Often it is desirable to compare observed frequency distributions with their expectations without resorting to cumulative frequency distributions. One method of doing so would be to superimpose a normal curve on the histogram of an observed frequency distribution. Fitting a normal distribution as a curve superimposed upon an observed frequency distribution in the form of a histogram is usually done only when graphic facilities (plotters) are available. Ordinates are computed by modifying Expression (5.1) to conform to a frequency distribution:

    Z = (n·i / (σ√(2π))) · e^(−(Y − μ)²/(2σ²))    (5.2)

In this expression n is the sample size and i is the class interval of the frequency distribution. If this needs to be done without a computer program, a table of ordinates of the normal curve is useful. In Figure 5.8A we show the frequency distribution of birth weights of male Chinese from Box 5.1 with the ordinates of the normal curve superimposed. There is an excess of observed frequencies at the right tail due to the skewness of the distribution.

You will probably find it difficult to compare the heights of bars against the arch of a curve. For this reason, John Tukey has suggested that the bars of the histograms be suspended from the curve. Their departures from expectation can then be easily observed against the straight-line abscissa of the graph. Such a hanging histogram is shown in Figure 5.8B for the birth weight data. The excess of observed over expected frequencies in the right tail of the distribution is quite evident. Because important departures are frequently noted in the tails of a curve, it has been suggested that square roots of expected frequencies should be compared with the square roots of observed frequencies. Such a "hanging rootogram" is shown in Figure 5.8C for the Chinese birth weight data. Note the accentuation of the departure from normality.

Finally, one can also use an analogous technique for comparing expected with observed histograms. Figure 5.8D shows the same data plotted in this manner. Square roots of frequencies are again shown. The departure from normality is now much clearer. If you are interested in plotting all observations, you can plot, instead of cumulative frequencies F, the quantity F − ½ expressed as a percentage of n; the final cumulative frequency (100%) cannot itself be plotted, since that corresponds to an infinite distance from the mean.

Exercises

5.1 Using the information given in Box 3.2, what is the probability of obtaining an individual with a negative birth weight? What is this probability if we assume that birth weights are normally distributed? ANS. The empirical estimate is zero. If a normal distribution can be assumed, it is the probability that a standard normal deviate is less than (0 − 109.9)/13.593 = −8.085. This value is beyond the range of most tables, and the probability can be considered zero for practical purposes.

5.2 Perform the following operations on the data of Exercise 2.4. (a) If you have not already done so, make a frequency distribution from the data and graph the results in the form of a histogram. (b) Compute the expected frequencies for each of the classes based on a normal distribution with μ = Ȳ and σ = s. (c) Graph the expected frequencies in the form of a histogram and compare them with the observed frequencies. (d) Comment on the degree of agreement between observed and expected frequencies.
5.3 Carry out the operations listed in Exercise 5.2 on the transformed data generated in Exercise 2.6.
5.4 Perform a graphic analysis of the butterfat data given in Exercise 3.1, using probability paper. In addition, plot the data on probability paper with the abscissa in logarithmic units. Compare the results of the two analyses.
5.5 Assume you know that the petal length of a population of plants of species X is normally distributed with a mean of μ = 3.2 cm and a standard deviation of σ = 1.8 cm. What proportion of the population would be expected to have a petal length (a) greater than 4.5 cm? (b) greater than 1.78 cm? (c) between 2.9 and 3.6 cm? ANS. (a) = 0.2353, (b) = 0.7845, and (c) = 0.1541.
5.6 Let us approximate the observed frequencies in Exercise 2.9 with a normal frequency distribution. Compare the observed frequencies with those expected when a normal distribution is assumed. Compare the two distributions by forming and superimposing the observed and the expected histograms and by using a hanging histogram. ANS. This is clear evidence for skewness in the observed distribution.
5.7 Perform a graphic analysis on the following measurements. Are they consistent with what one would expect in sampling from a normal distribution?
5.8 The following data are total lengths (in cm) of bass from a southern lake. Compute the mean, the standard deviation, and the coefficient of variation. Make a histogram of the data. Do the data seem consistent with a normal distribution on the basis of a graphic analysis? If not, what type of departure is suggested? ANS. There is a suggestion of bimodality.
5.9 Assume that traits A and B are independent and normally distributed with parameters μ_A = 28.6, σ_A = 4.8, μ_B = 16.2, and σ_B = 4.1. You sample two individuals at random. (a) What is the probability of obtaining samples in which both individuals measure less than 20 for the two traits? (b) What is the probability of obtaining an individual that is greater than 30 for at least one of the two traits? ANS. (a) P{A < 20}P{B < 20} = (0.0367)(0.8238) = 0.030; (b) 1 − (P{A < 30})(P{B < 30}) = 1 − (0.6147)(0.9996) = 0.3856.
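The ordinate computation of Expression (5.2) and the rootogram comparison described above can be sketched in a few lines. This Python sketch is our own illustration, not the book's; the constants in the usage below are the Box 5.1 class marks and the computed birth-weight statistics.

```python
import math

def expected_class_frequencies(class_marks, interval, n, mean, sd):
    """Expected frequencies from Expression (5.2):
    Z = (n * i / (sd * sqrt(2*pi))) * exp(-((Y - mean)**2) / (2 * sd**2))"""
    coef = n * interval / (sd * math.sqrt(2 * math.pi))
    return [coef * math.exp(-((y - mean) ** 2) / (2 * sd ** 2))
            for y in class_marks]

def hanging_rootogram_residuals(observed, expected):
    """Tukey-style comparison on the square-root scale: positive values
    mean an excess of observed over expected frequencies."""
    return [math.sqrt(o) - math.sqrt(e) for o, e in zip(observed, expected)]
```

Summing the expected frequencies over all classes recovers approximately the sample size n, which is a quick check that the class interval and constants were entered correctly.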

CHAPTER 6

Estimation and Hypothesis Testing

In this chapter we provide methods to answer two fundamental statistical questions that every biologist must ask repeatedly in the course of his or her work: (1) how reliable are the results I have obtained? and (2) how probable is it that the differences between observed results and those expected on the basis of a hypothesis have been produced by chance alone? The first question, about reliability, is answered through the setting of confidence limits to sample statistics. Confidence limits provide bounds to our estimates of population parameters. The second question leads into hypothesis testing. Both subjects belong to the field of statistical inference. The subject matter in this chapter is fundamental to an understanding of any of the subsequent chapters.

In Section 6.1 we consider the form of the distribution of means and their variance. In Section 6.2 we examine the distributions and variances of statistics other than the mean. This brings us to the general subject of standard errors, which are statistics measuring the reliability of an estimate. We develop the idea of a confidence limit in Section 6.3 and show its application to samples where the true standard deviation is known. However, one usually deals with small, more or less normally distributed samples with unknown standard deviations.

94 chapter 6 / estimation and hypothesis testing

In that case the t distribution must be used. We shall introduce the t distribution in Section 6.4. The application of t to the computation of confidence limits for statistics of small samples with unknown population standard deviations is shown in Section 6.5. Another important distribution, the chi-square distribution, is explained in Section 6.6. Then it is applied to setting confidence limits for the variance in Section 6.7. The theory of hypothesis testing is introduced in Section 6.8 and is applied in Section 6.9 to a variety of cases exhibiting the normal or t distributions. Finally, Section 6.10 illustrates hypothesis testing for variances by means of the chi-square distribution.

6.1 Distribution and variance of means

We commence our study of the distribution and variance of means with a sampling experiment.

Experiment 6.1 You were asked to retain from Experiment 5.1 the means of the seven samples of 5 housefly wing lengths and the seven similar means of milk yields. We can collect these means from every student in a class, possibly adding them to the sampling results of previous classes, and construct a frequency distribution of these means. For each variable we can also obtain the mean of the seven means, which is a mean of a sample of 35 items. Here again we shall make a frequency distribution of these means, although it takes a considerable number of samplers to accumulate a sufficient number of samples of 35 items for a meaningful frequency distribution.

In Table 6.1 we show a frequency distribution of 1400 means of samples of 5 housefly wing lengths. Consider columns (1) and (3) for the time being. Actually, these samples were obtained not by biostatistics classes but by a digital computer, enabling us to collect these values with little effort. Their mean and standard deviation are given at the foot of the table. These values are plotted on probability paper in Figure 6.1. Note that the distribution appears quite normal, as does that of the means based on 200 samples of 35 wing lengths shown in the same figure. Thus, we note that the means of samples from the normally distributed housefly wing lengths are normally distributed whether they are based on 5 or 35 individual readings. This illustrates an important theorem: The means of samples from a normally distributed population are themselves normally distributed regardless of sample size n. Similarly obtained distributions of means of the heavily skewed milk yields, shown in Figure 6.2, appear to be close to normal distributions. However, the means based on five milk yields do not agree with the normal nearly as well as do the means of 35 items. This illustrates another theorem of fundamental importance in statistics: As sample size increases, the means of samples drawn from a population of any distribution will approach the normal distribution. This theorem, when rigorously stated (about sampling from populations with finite variances), is known as the central limit theorem. The importance of this theorem is that if n is large enough, it permits us to use the normal distribution to make statistical inferences about means of populations in which the items are not at all normally distributed.

TABLE 6.1
Frequency distribution of means of 1400 random samples of 5 housefly wing lengths. (Data from Table 5.1.) Class marks chosen to give intervals of ½σ_Ȳ to each side of the parametric mean μ.

(1) Class mark Y    (2) Class mark     (3)
(in mm × 10⁻¹)     (in σ_Ȳ units)      f
39.832              −3¼                  1
40.704              −2¾                 11
41.576              −2¼                 19
42.448              −1¾                 64
43.320              −1¼                128
44.192              −¾                 247
45.064              −¼                 226
45.936               ¼                 259
46.808               ¾                 231
47.680              1¼                 121
48.552              1¾                  61
49.424              2¼                  23
50.296              2¾                   6
51.168              3¼                   3
                                       1400
Ȳ = 45.480    s = 1.778    μ = 45.5    σ_Ȳ = 1.744

The necessary size of n depends upon the distribution. (Skewed populations require larger sample sizes.)

The next fact of importance that we note is that the range of the means is considerably less than that of the original items. Thus, the wing-length means range from 39.4 to 51.6 in samples of 5 and from 43.9 to 47.4 in samples of 35, but the individual wing lengths range from 36 to 55. The milk-yield means range from 54.2 to 89.0 in samples of 5 and from 61.9 to 71.3 in samples of 35, but the individual milk yields range from 51 to 98. Not only do means show less scatter than the items upon which they are based (an easily understood phenomenon if you give some thought to it), but the range of the distribution of the means diminishes as the sample size upon which the means are based increases. The differences in ranges are reflected in differences in the standard deviations of these distributions.
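The central limit theorem and the shrinking scatter of means can be checked with a small simulation. The sketch below is our own illustration, not the book's; the skewed population it draws from is synthetic, standing in loosely for the milk-yield data.

```python
import random
import statistics

def sample_means(population, sample_size, n_samples, seed=1):
    """Means of repeated random samples, mimicking Experiment 6.1."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.choices(population, k=sample_size))
            for _ in range(n_samples)]

# A right-skewed synthetic population spanning roughly 51 to 98.
population = [51 + 47 * (k / 999) ** 3 for k in range(1000)]
```

Comparing `sample_means(population, 5, 400)` with `sample_means(population, 35, 400)` shows both effects discussed above: the means of 35 items scatter far less than the means of 5, and both distributions of means are far more symmetric than the skewed parent population.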

FIGURE 6.1 Graphic analysis of means of 1400 random samples of 5 housefly wing lengths (from Table 6.1) and of means of 200 random samples of 35 housefly wing lengths. (Probability plots; abscissa in σ_Ȳ units.)

.70 r 5 ft) s 0 8 40 X 30 ω 20 1 3 ε 3 "0 5 υ ι οι .2 .2 .„. ..1 0 1 2 3 M i l k y i e l d s in <τν units FIGURE 6.1 .1 0 1 2 3 M i l k y i e l d s in ιτ. 0 0 r a n d o m samples of 35 milk yields.3 .2 .Samples of 5 0.9 99 •S a | £-95 90 1 8 0 1. m G r a p h i e analysis of m e a n s of 1400 r a n d o m s a m p l e s of 5 milk yields a n d of m e a n s of .7 units S a m p l e s of 3 5 99. . „ .

It is possible to work out the expected value of the variance of sample means. Means based on large samples should be close to the parametric mean, and means based on large samples will not vary as much as will means based on small samples. We can visualize the mean as a weighted average of the n independently sampled observations, with each weight wᵢ equal to 1. From Expression (3.2) we obtain Ȳ_w = Σ wᵢYᵢ / Σ wᵢ for the weighted mean. We shall state without proof that the variance of the weighted sum of independent items Σ wᵢYᵢ is

    Var(Σ wᵢYᵢ) = Σ wᵢ²σᵢ²    (6.1)

where σᵢ² is the variance of Yᵢ. Since the weights wᵢ in this case equal 1, Σ wᵢ = n, and we can rewrite the above expression as Var(Σ Yᵢ) = Σ σᵢ². It follows that the variance of the mean Ȳ = (1/n) Σ Yᵢ is σ²_Ȳ = Σ σᵢ² / n².
By expected value we mean the average value to be obtained by infinitely repeated sampling. Thus, if we were to take samples of a means of n items repeatedly and were to calculate the variance of these a means each time, the average of these variances would be the expected value.

If we calculate the standard deviations of the means in the four distributions under consideration, we obtain the following values:

Observed standard deviations of distributions of means

               n = 5     n = 35
Wing lengths   1.778     0.584
Milk yields    5.040     1.799

Note that the standard deviations of the sample means based on 35 items are considerably less than those based on 5 items. This is also intuitively obvious. The variance of means is therefore partly a function of the sample size on which the means are based. It is also a function of the variance of the items in the samples. Thus, in the text table above, the means of milk yields have a much greater standard deviation than means of wing lengths based on comparable sample size simply because the standard deviation of the individual milk yields (11.1597) is considerably greater than that of individual wing lengths (3.90).

If we assume that the variances σᵢ² are all equal to σ², the expected variance of the mean is σ²/n, and consequently the expected standard deviation of means is

    σ_Ȳ = σ / √n    (6.2)

From this formula it is clear that the standard deviation of means is a function of the standard deviation of items as well as of the sample size of means. The greater the sample size, the smaller will be the standard deviation of means. In fact, as sample size increases to a very large number, the standard deviation of means becomes vanishingly small. This makes good sense. Very large sample sizes, averaging many observations, should yield estimates of means closer to the population mean and less variable than those based on a few items.

When working with samples from a population, we do not, of course, know its parametric standard deviation σ, and we can obtain only a sample estimate s of the latter. Also, we would be unlikely to have numerous samples of size n from which to compute the standard deviation of means directly. Customarily, we therefore have to estimate the standard deviation of means from a single sample, substituting s for σ in Expression (6.2):

    s_Ȳ = s / √n    (6.2a)

Thus, from the standard deviation of a single sample, we obtain an estimate of the standard deviation of means we would expect were we to obtain a collection of means based on equal-sized samples of n items from the same population. As we shall see, this estimate of the standard deviation of a mean is a very important and frequently used statistic.

Table 6.2 illustrates some estimates of the standard deviations of means that might be obtained from random samples of the two populations that we have been discussing. The means of 5 samples of wing lengths based on 5 individuals ranged from 43.6 to 46.8, their standard deviations from 1.095 to 4.827, and the estimate of standard deviation of the means from 0.490 to 2.159. Ranges for the other categories of samples in Table 6.2 similarly include the parametric values of these statistics. The estimates of the standard deviations of the means of the milk yields cluster around the expected value, since they are not dependent on normality of the variates. However, in a particular sample in which by chance the sample standard deviation is a poor estimate of the population standard deviation (as in the second sample of 5 milk yields), the estimate of the standard deviation of means is equally wide of the mark.

We should emphasize one point of difference between the standard deviation of items and the standard deviation of sample means. If we estimate a population standard deviation through the standard deviation of a sample, the magnitude of the estimate will not change as we increase our sample size. We may expect that the estimate will improve and will approach the true standard deviation of the population.
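Expression (6.2a) amounts to a one-line computation. A minimal Python sketch (ours, not the book's):

```python
import math
import statistics

def standard_error_of_mean(sample):
    """s_Y = s / sqrt(n), Expression (6.2a): the standard deviation of
    means estimated from a single sample."""
    return statistics.stdev(sample) / math.sqrt(len(sample))
```

For example, a sample of five wing lengths such as 45, 44, 46, 43, 47 has s = 1.581 and therefore a standard error of about 0.707, the kind of value seen in the n = 5 column of Table 6.2.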

TABLE 6.2
Means, standard deviations, and standard deviations of means (standard errors) of five random samples of 5 and 35 housefly wing lengths and Jersey cow milk yields, respectively. (Data from Table 5.1.) Columns: (1) Ȳ, (2) s, (3) s_Ȳ. Parametric values for the statistics are given in the sixth line of each category: for wing lengths, μ = 45.5, σ = 3.90, σ_Ȳ = 1.744 (n = 5) and 0.6595 (n = 35); for milk yields, μ = 66.61, σ = 11.1597, σ_Ȳ = 4.991 (n = 5) and 1.886 (n = 35).

The values of s are closer to σ in the samples based on n = 35 than in the samples of n = 5. Yet the general magnitude is the same in both instances: whether the sample is based on 3, 30, or 3000 individuals, the order of magnitude of s will be the same. The standard deviation of means, however, decreases as sample size increases, as is obvious from Expression (6.2). Thus, means based on 3000 items will have a standard deviation only one-tenth that of means based on 30 items. This can be seen clearly in Table 6.2.

their distribution is never n o r m a l . or a coefficient of variation. II 10. which shows a frequency distrib u t i o n of the variances f r o m t h e 1400 samples of 5 housefly wing lengths. In o t h e r cases the statistics will be distributed n o r m a l l y only if they are based on samples f r o m a n o r m a l l y distributed p o p u l a t i o n . An illustration is given in F i g u r e 6. just as we did for the frequency distribution of means. T h u s .21! 11 . In s o m e instances. a m e d i a n . o r if b o t h these conditions hold. as in variances.6. In m a n y cases t h e statistics a r e n o r m a l l y distributed.lifi 1 S 10 12 II IS f i g u r i : 6.02 2(>.80 II. we would have frequency distributions for these statistics a n d w o u l d be able t o c o m p u t e their s t a n d a r d deviations. with the following exception: it is not c u s t o m a r y to use " s t a n d a r d e r r o r " as a s y n o n y m of " s t a n d a r d d e v i a t i o n " for items in a sample or p o p u l a t i o n . 11 57. which is c h a r a c t e r istic of the distribution of variances.2 Distribution and variance of other statistics Just as we o b t a i n e d a m e a n a n d a s t a n d a r d deviation f r o m e a c h s a m p l e of the wing lengths a n d milk yields. S t a n d a r d e r r o r or s t a n d a r d deviation has to be qualified by referring t o a given statistic. Beginners s o m e t i m e s get confused by a n imagined distinction between s t a n d a r d deviations a n d s t a n d a r d errors.3.S:i 10.3 H i s t o g r a m of v a r i a n c e s b a s e d o n 1400 s a m p l e s of 5 housefly w i n g l e n g t h s f r o m T a b l e 5.01 (vl. such as the s t a n d a r d deviation 100 0 0 :s. T h e s t a n d a r d e r r o r of a statistic such as the m e a n (or V) is the s t a n d a r d deviation of a distribution of m e a n s (or K's) for samples of a given s a m p l e size n. 
Where the statistics are normally distributed, the terms "standard error" and "standard deviation" are used synonymously, with the following exception: it is not customary to use "standard error" as a synonym of "standard deviation" for items in a sample or population. Standard error or standard deviation has to be qualified by referring to a given statistic, such as the standard deviation of V, which is the same as the standard error of V.

The following summary of terms may be helpful:

Standard deviation = s = √(Σy² / (n − 1))
Standard deviation of a statistic St = standard error of a statistic St = s_St
Standard error = standard error of a mean = standard deviation of a mean = s_Ȳ

"Standard deviation" used without qualification generally means the standard deviation of items in a sample or population; used without any qualification, the term "standard error" conventionally implies the standard error of the mean. Thus, when you read that means, standard deviations, standard errors, and coefficients of variation are shown in a table, this signifies that arithmetic means, standard deviations of items in samples, standard deviations of their means (= standard errors of means), and coefficients of variation are displayed.

Standard errors are usually not obtained from a frequency distribution by repeated sampling but are estimated from only a single sample and represent the expected standard deviation of the statistic in case a large number of such samples had been obtained. You will remember that we estimated the standard error of a distribution of means from a single sample in this manner in the previous section. Box 6.1 lists the standard errors of four common statistics.

BOX 6.1
Standard errors for common statistics.

(1) Statistic   (2) Estimate of standard error           (3) df    (4) Comments on applicability
1  Ȳ            s_Ȳ = s/√n = √(s²/n)                     n − 1     True for any population with finite variance
2  Median       s_med ≈ (1.2533)s_Ȳ                      n − 1     Large samples from normal populations
3  s            s_s ≈ (0.7071068)s/√n                    n − 1     Samples from normal populations (n > 15)
4  V            s_V ≈ (V/√(2n))·√(1 + 2(V/100)²)         n − 1     Samples from normal populations;
                (for V < 15, s_V ≈ V/√(2n))                        used when V < 15
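The formulas of Box 6.1 can be collected into one helper. This Python sketch is our own illustration; the median, s, and V entries are the normal-population approximations from the box, not exact results.

```python
import math
import statistics

def standard_errors(sample):
    """Estimated standard errors of four common statistics (Box 6.1 style)."""
    n = len(sample)
    s = statistics.stdev(sample)
    ybar = statistics.fmean(sample)
    se_mean = s / math.sqrt(n)
    se_median = 1.2533 * se_mean          # large normal samples
    se_s = 0.7071068 * s / math.sqrt(n)   # normal samples, n > 15
    v = 100 * s / ybar                    # coefficient of variation
    se_v = v / math.sqrt(2 * n)           # adequate when V < 15
    return {"mean": se_mean, "median": se_median, "s": se_s, "V": se_v}
```

Note how every entry shrinks as 1/√n, the common behavior of standard errors discussed above.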

In Box 6.1, column (3) gives the degrees of freedom on which the standard error is based (their use is explained later in this chapter), and column (4) provides comments on the range of application of the standard error. The uses of these standard errors will be illustrated in subsequent sections.

6.3 Introduction to confidence limits

The various sample statistics we have been obtaining, such as means or standard deviations, are estimates of population parameters μ or σ, respectively. So far we have not discussed the reliability of these estimates. We first of all wish to know whether the sample statistics are unbiased estimators of the population parameters, as discussed in Chapter 3. But knowing, for example, that Ȳ is an unbiased estimate of μ is not enough. We would like to find out how reliable a measure of μ it is. The true values of the parameters will almost always remain unknown, and we commonly estimate the reliability of a sample statistic by setting confidence limits to it.

To begin our discussion of this topic, let us start with the unusual case of a population whose parametric mean and standard deviation are known to be μ and σ, respectively. The mean of a sample of n items is symbolized by Ȳ. The expected standard error of the mean is σ/√n. As we have seen, the sample means will be normally distributed. Therefore, from our knowledge of the normal distribution (Chapter 5), the region from 1.96σ/√n below μ to 1.96σ/√n above μ includes 95% of the sample means of size n.

Another way of stating this is to consider the ratio (Ȳ − μ)/(σ/√n). This is the standard deviate of a sample mean from the parametric mean. Since the sample means are normally distributed, 95% of such standard deviates will lie between −1.96 and +1.96. We can express this statement symbolically as follows:

P{−1.96 ≤ (Ȳ − μ)/(σ/√n) ≤ +1.96} = 0.95

This means that the probability P that a sample mean Ȳ will differ by no more than 1.96 standard errors σ/√n from the parametric mean μ equals 0.95. The expression between the braces is an inequality, all terms of which can be multiplied by σ/√n to yield

P{−1.96σ/√n ≤ (Ȳ − μ) ≤ +1.96σ/√n} = 0.95

We can rewrite this expression as

P{−1.96σ/√n ≤ (μ − Ȳ) ≤ +1.96σ/√n} = 0.95

because −a ≤ b ≤ a implies a ≥ −b ≥ −a, which can be written as −a ≤ −b ≤ a. We can next transfer −Ȳ across the inequality signs, just as in an equation it could be transferred across the equal sign.
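For readers who want to see numerically what a standard error measures, the following sketch (ours, not part of the original text; it uses only Python's standard library and borrows the housefly wing-length parameters μ = 45.5 and σ = 3.90 used in this chapter's examples) compares the standard error estimated from a single sample, s/√n, with the standard deviation actually observed among many sample means:

```python
import math
import random
import statistics

random.seed(42)
MU, SIGMA, N = 45.5, 3.90, 35  # parametric values used in the text's examples

# Estimate the standard error of the mean from ONE sample: s / sqrt(n).
sample = [random.gauss(MU, SIGMA) for _ in range(N)]
se_est = statistics.stdev(sample) / math.sqrt(N)

# Observe the standard deviation of many independent sample means directly.
means = [statistics.fmean(random.gauss(MU, SIGMA) for _ in range(N))
         for _ in range(2000)]
se_empirical = statistics.stdev(means)

print(round(SIGMA / math.sqrt(N), 4))            # expected value sigma/sqrt(n)
print(round(se_est, 4), round(se_empirical, 4))  # both should be close to it
```

Both quantities cluster around σ/√n ≈ 0.6592; the single-sample estimate is noisier, which is precisely why its own reliability must later be taken into account (Section 6.4).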

This yields the final desired expression:

P{Ȳ − 1.96σ/√n ≤ μ ≤ Ȳ + 1.96σ/√n} = 0.95    (6.4)

or

P{Ȳ − 1.96σ_Ȳ ≤ μ ≤ Ȳ + 1.96σ_Ȳ} = 0.95    (6.4a)

This means that the probability P that the term Ȳ − 1.96σ_Ȳ is less than or equal to the parametric mean μ and that the term Ȳ + 1.96σ_Ȳ is greater than or equal to μ is 0.95. The two terms Ȳ − 1.96σ_Ȳ and Ȳ + 1.96σ_Ȳ we shall call L1 and L2, respectively, the lower and upper 95% confidence limits of the mean.

Another way of stating the relationship implied by Expression (6.4a) is that if we repeatedly obtained samples of size n from the population and constructed these limits for each, we could expect 95% of the intervals between these limits to contain the true mean, and only 5% of the intervals would miss μ. The interval from L1 to L2 is called a confidence interval.

If you were not satisfied to have the confidence interval contain the true mean only 95 times out of 100, you might employ 2.576 as a coefficient in place of 1.960. You may remember that 99% of the area of the normal curve lies in the range μ ± 2.576σ. Thus, to calculate 99% confidence limits, compute the two quantities L1 = Ȳ − 2.576σ/√n and L2 = Ȳ + 2.576σ/√n as lower and upper confidence limits, respectively. In this case 99 out of 100 confidence intervals obtained in repeated sampling would contain the true mean. The new confidence interval is wider than the 95% interval (since we have multiplied by a greater coefficient). If you were still not satisfied with the reliability of the confidence limits, you could increase it further, multiplying the standard error of the mean by 3.291 to obtain 99.9% confidence limits. This value could be found by inverse interpolation in a more extensive table of areas of the normal curve or directly in a table of the inverse of the normal probability distribution. The new coefficient would widen the interval still further. Notice that you can construct confidence intervals that will be expected to contain μ an increasingly greater percentage of the time: first you would expect to be right 95 times out of 100, then 99 times out of 100, and finally 999 times out of 1000. But as your confidence increases, your statement becomes vaguer and vaguer, since the confidence interval lengthens.

Let us examine this by way of an actual sample. We obtain a sample of 35 housefly wing lengths from the population of Table 5.1 with known mean (μ = 45.5) and standard deviation (σ = 3.90). We can expect the standard deviation of means based on samples of 35 items to be

σ_Ȳ = σ/√n = 3.90/√35 = 0.6592

Let us assume that the sample mean is 44.8. We compute confidence limits as follows:

The lower limit is L1 = 44.8 − (1.960)(0.6592) = 43.51.
The upper limit is L2 = 44.8 + (1.960)(0.6592) = 46.09.
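These limits, and the wider 99% and 99.9% limits discussed above, can be checked with a short computation (our sketch, not from the book; standard-library Python, with NormalDist supplying the coefficients 1.960, 2.576, and 3.291 that the text reads from a normal table):

```python
import math
from statistics import NormalDist

mu, sigma, n = 45.5, 3.90, 35   # parametric values from Table 5.1
y_bar = 44.8                    # observed sample mean
se = sigma / math.sqrt(n)       # 0.6592

for conf in (0.95, 0.99, 0.999):
    z = NormalDist().inv_cdf((1 + conf) / 2)  # 1.960, 2.576, 3.291
    lo, hi = y_bar - z * se, y_bar + z * se
    print(f"{conf:.1%} limits: L1 = {lo:.2f}, L2 = {hi:.2f}")
```

The three printed intervals reproduce the 43.51 to 46.09, 43.10 to 46.50, and 42.63 to 46.97 limits that appear in this section.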

We could increase the reliability of these limits by going to 99% confidence intervals, replacing 1.960 in the above expression by 2.576 and obtaining L1 = 43.10 and L2 = 46.50. We could have greater confidence that our interval covers the mean, but we would be much less certain about the true value of the mean because of the wider limits. By increasing the degree of confidence still further, say to 99.9%, we could be virtually certain that our confidence limits (L1 = 42.63, L2 = 46.97) contain the population mean, but the bounds enclosing the mean are now so wide as to make our prediction far less useful than previously.

Remember that this is an unusual case in which we happen to know the true mean of the population (μ = 45.5), and hence we know that these particular confidence limits enclose the mean. We expect 95% of such confidence intervals obtained in repeated sampling to include the parametric mean. We tried the experiment on a computer for 200 samples of 35 wing lengths each, computing confidence limits of the parametric mean by employing the parametric standard error of the mean. Of the 200 confidence intervals plotted parallel to the ordinate, 194 (97.0%) cross the parametric mean of the population.

To reduce the width of the confidence interval, we have to reduce the standard error of the mean. Since σ_Ȳ = σ/√n, this can be done only by reducing the standard deviation of the items or by increasing the sample size. The first of these alternatives is not always available. If we are sampling from a population in nature, we ordinarily have no way of reducing its standard deviation. However, in many experimental procedures we may be able to reduce the variance of the data. For example, if we are studying the effect of a drug on heart weight in rats and find that its variance is rather large, we might be able to reduce this variance by taking rats of only one age group, in which the variation of heart weight would be considerably less. Similarly, by keeping temperature or other environmental variables constant in a procedure, we can frequently reduce the variance of our response variable and hence obtain more precise estimates of population parameters. A common way to reduce the standard error is to increase sample size. Obviously from Expression (6.2), as n increases the standard error decreases; hence, as n approaches infinity, the standard error and the lengths of confidence intervals approach zero. This ties in with what we have learned: in samples whose size approaches infinity, the sample mean approaches the parametric mean.

Experiment 6.2. For the seven samples of 5 housefly wing lengths and the seven similar samples of milk yields last worked with in Experiment 6.1 (Section 6.1), compute 95% confidence limits to the parametric mean for each sample and for the total sample based on 35 items. Base the standard errors of the means on the parametric standard deviations of these populations (housefly wing lengths σ = 3.90, milk yields σ = 11.1597). Record how many in each of the four classes of confidence limits (wing lengths and milk yields, n = 5 and n = 35) are correct, that is, contain the parametric mean of the population. Pool your results with those of other class members.
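The repeated-sampling experiment described above is easy to imitate. The sketch below (ours, not the book's; standard-library Python with an arbitrarily chosen seed) draws 200 samples of 35 wing lengths and counts how many intervals Ȳ ± 1.96σ/√n cover the parametric mean:

```python
import math
import random
from statistics import fmean

random.seed(1)
MU, SIGMA, N, TRIALS = 45.5, 3.90, 35, 200
half = 1.960 * SIGMA / math.sqrt(N)  # 1.96 parametric standard errors

covered = sum(
    abs(fmean(random.gauss(MU, SIGMA) for _ in range(N)) - MU) <= half
    for _ in range(TRIALS)
)
print(covered, "of", TRIALS, "intervals contain mu")
```

On average about 190 of the 200 intervals (95%) will contain μ; any one run deviates a little from that expectation, just as the text's computer run produced 194.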

We must guard against a common mistake in expressing the meaning of the confidence limits of a statistic. When we have set lower and upper limits (L1 and L2, respectively) to a statistic, we imply that the probability that this interval covers the mean is, say, 0.95, or, expressed in another way, that on the average 95 out of 100 confidence intervals similarly obtained would cover the mean. We cannot state that there is a probability of 0.95 that the true mean is contained within a given pair of confidence limits, although this may seem to be saying the same thing. The latter statement is incorrect because the true mean is a parameter; hence it is a fixed value, and it is therefore either inside the interval or outside it. It cannot be inside the given interval 95% of the time. It is important, therefore, to learn the correct statement and meaning of confidence limits.

So far we have considered only means based on normally distributed samples with known parametric standard deviations. We can, however, extend the methods just learned to samples from populations where the standard deviation is unknown but where the distribution is known to be normal and the samples are large, say, n ≥ 100. In such cases we use the sample standard deviation for computing the standard error of the mean. However, when the samples are small (n < 100) and we lack knowledge of the parametric standard deviation, we must take into consideration the reliability of our sample standard deviation. To do so, we must make use of the so-called t or Student's distribution. We shall learn how to set confidence limits employing the t distribution in Section 6.5. Before that, however, we shall have to become familiar with this distribution in the next section.

6.4 Student's t distribution

The deviations Ȳ − μ of sample means from the parametric mean of a normal distribution are themselves normally distributed. If these deviations are divided by the parametric standard deviation of the means, σ_Ȳ, the resulting ratios, (Ȳ − μ)/σ_Ȳ, are still normally distributed, with μ = 0 and σ = 1. Subtracting the constant μ from every Ȳ is simply an additive code (Section 3.8) and will not change the form of the distribution of sample means, which is normal (Section 6.1). Dividing each deviation by the constant σ_Ȳ reduces the variance to unity, but proportionately so for the entire distribution, so that its shape is not altered and a previously normal distribution remains so.

If, however, we calculate the variance s²_i of each of the samples and calculate the deviation for each mean Ȳ_i as (Ȳ_i − μ)/s_Ȳi, where s_Ȳi stands for the estimate of the standard error of the mean of the ith sample, we will find the distribution of the deviations wider and more peaked than the normal distribution. This is illustrated in Figure 6.4, which shows the ratio (Ȳ_i − μ)/s_Ȳi for the 1400 samples of five housefly wing lengths of Table 6.1. The new distribution ranges wider than the corresponding normal distribution, because the denominator is the sample standard error rather than the parametric standard error and will sometimes be smaller and sometimes greater than expected. This increased variation will be reflected in the greater variance of the ratio (Ȳ − μ)/s_Ȳ.
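The inflation of this ratio when s_Ȳ replaces σ_Ȳ can be demonstrated by simulation (our sketch, not the book's; standard-library Python). For samples as small as n = 5, far more than 5% of the ratios (Ȳ − μ)/s_Ȳ fall outside ±1.96:

```python
import math
import random
from statistics import fmean, stdev

random.seed(2)
MU, SIGMA, N, REPS = 45.5, 3.90, 5, 4000

def t_ratio():
    """One sample's deviate (Y_bar - mu) / s_Ybar, using the SAMPLE s."""
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    return (fmean(sample) - MU) / (stdev(sample) / math.sqrt(N))

beyond = sum(abs(t_ratio()) > 1.96 for _ in range(REPS)) / REPS
print(f"fraction beyond +/-1.96: {beyond:.3f}")  # well above the normal 0.05
```

With 4 degrees of freedom the true fraction is roughly 12%, more than double what the normal curve would predict, which is exactly the heavier-tailed behavior described next.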

The expected distribution of this ratio is called the t distribution, also known as "Student's" distribution, named after W. S. Gosset, who first described it, publishing under the pseudonym "Student." The t distribution is a function with a complicated mathematical formula that need not be presented here.

The t distribution shares with the normal the properties of being symmetric and of extending from negative to positive infinity. However, it differs from the normal in that it assumes different shapes depending on the number of degrees of freedom. By "degrees of freedom" we mean the quantity n − 1, where n is the sample size upon which a variance has been based. It will be remembered that n − 1 is the divisor in obtaining an unbiased estimate of the variance from a sum of squares. Degrees of freedom (abbreviated df or sometimes ν) can range from 1 to infinity. The number of degrees of freedom pertinent to a given Student's distribution is the same as the number of degrees of freedom of the standard deviation in the ratio (Ȳ − μ)/s_Ȳ. A t distribution for df = 1 deviates most markedly from the normal. As the number of degrees of freedom increases, Student's distribution approaches the shape of the standard normal distribution (μ = 0, σ = 1) ever more closely, and in a graph the size of this page a t distribution for df = 30 is essentially indistinguishable from a normal distribution.

FIGURE 6.4 Distribution of the quantity t_s = (Ȳ − μ)/s_Ȳ (abscissa) computed for 1400 samples of 5 housefly wing lengths, presented as a histogram and as a cumulative frequency distribution. The left-hand ordinate represents frequencies for the histogram; the right-hand ordinate is cumulative frequency in probability scale.

At df = ∞, the t distribution is the normal distribution. Thus, we can think of the t distribution as the general case, considering the normal to be a special case of Student's distribution with df = ∞. Figure 6.5 shows t distributions for 1 and 2 degrees of freedom compared with a normal frequency distribution.

FIGURE 6.5 Frequency curves of t distributions for 1 and 2 degrees of freedom compared with the normal distribution.

We were able to employ a single table for the areas of the normal curve by coding the argument in standard deviation units. However, since the t distributions differ in shape for differing degrees of freedom, it would be necessary to have a separate t table, corresponding in structure to the table of the areas of the normal curve, for each value of df. This would make for very cumbersome and elaborate sets of tables. Conventional t tables are therefore differently arranged. Table III shows degrees of freedom and probability as arguments and the corresponding values of t as functions. The probabilities indicate the percent of the area in both tails of the curve (to the right and left of the mean) beyond the indicated value of t. Only those probabilities generally used are shown in Table III. Thus, looking up the critical value of t at probability P = 0.05 and df = 5, we find t = 2.571 in Table III. Since this is a two-tailed table, the probability of 0.05 means that 0.025 of the area will fall to the left of a t value of −2.571 and 0.025 will fall to the right of t = +2.571. You will recall that the corresponding value for infinite degrees of freedom (for the normal curve) is 1.960.

A fairly conventional symbolism is t_α[ν], meaning the tabled t value for ν degrees of freedom and proportion α in both tails (α/2 in each tail), which is equivalent to the t value for the cumulative probability of 1 − (α/2). This is one of the most important tables to be consulted, and you should become very familiar with looking up t values in it. Try looking up some of these values to become familiar with the table; for example, convince yourself that t_0.05[7] = 2.365, t_0.02[10] = 2.764, and t_0.02[5] = 3.365.

We shall now employ the t distribution for the setting of confidence limits to means of small samples.
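Readers without a printed Table III can obtain the same critical values from an inverse t distribution function. The sketch below (ours, not part of the original; it assumes the scipy library is installed) reproduces the two-tailed convention, in which t_α[ν] corresponds to the cumulative probability 1 − α/2:

```python
# Assumes scipy is available; t.ppf is the inverse CDF of Student's t.
from scipy.stats import t

def t_crit(alpha, df):
    """Two-tailed critical value t_alpha[df], as tabled in Table III."""
    return t.ppf(1 - alpha / 2, df)

print(round(t_crit(0.05, 5), 3))      # 2.571, the value looked up above
print(round(t_crit(0.05, 7), 3))      # 2.365
print(round(t_crit(0.05, 10**9), 3))  # approaches the normal value 1.960
```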

6.5 Confidence limits based on sample statistics

Armed with a knowledge of the t distribution, we are now able to set confidence limits to the means of samples from a normal frequency distribution whose parametric standard deviation is unknown. The limits are computed as L1 = Ȳ − t_α[n−1]·s_Ȳ and L2 = Ȳ + t_α[n−1]·s_Ȳ for confidence limits of probability P = 1 − α. Thus, for 95% confidence limits we use values of t_0.05[n−1]. We can rewrite Expression (6.4a) as

P{L1 ≤ μ ≤ L2} = P{Ȳ − t_α[n−1]·s_Ȳ ≤ μ ≤ Ȳ + t_α[n−1]·s_Ȳ} = 1 − α    (6.5)

An example of the application of this expression is shown in Box 6.2.

BOX 6.2 Confidence limits for μ. Aphid stem mother femur lengths from Box 2.1: Ȳ = 4.004, n = 25, s = 0.366. Values for t_α[n−1] from a two-tailed t table (Table III), where 1 − α is the proportion expressing confidence and n − 1 are the degrees of freedom:

t_0.05[24] = 2.064        t_0.01[24] = 2.797

The standard error of the mean is s_Ȳ = s/√n = 0.366/√25 = 0.0732. The 95% confidence limits for the population mean μ are given by the equations

L1 (lower limit) = Ȳ − t_0.05[24]·s_Ȳ = 4.004 − (2.064)(0.0732) = 4.004 − 0.151 = 3.853
L2 (upper limit) = Ȳ + t_0.05[24]·s_Ȳ = 4.004 + 0.151 = 4.155

The 99% confidence limits are

L1 = Ȳ − t_0.01[24]·s_Ȳ = 4.004 − (2.797)(0.0732) = 4.004 − 0.205 = 3.799
L2 = Ȳ + t_0.01[24]·s_Ȳ = 4.004 + 0.205 = 4.209
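Box 6.2's arithmetic can be mirrored in a few lines (our illustration, not part of the box; the t values are the tabled constants quoted above, not recomputed):

```python
import math

y_bar, s, n = 4.004, 0.366, 25          # aphid femur lengths, Box 6.2
t_table = {"95%": 2.064, "99%": 2.797}  # t_0.05[24] and t_0.01[24] from Table III

se = s / math.sqrt(n)  # 0.0732
for label, t_val in t_table.items():
    lo, hi = y_bar - t_val * se, y_bar + t_val * se
    print(f"{label}: L1 = {lo:.3f}, L2 = {hi:.3f}")
```

The printed limits reproduce the box's 3.853 to 4.155 (95%) and 3.799 to 4.209 (99%).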

We can convince ourselves of the appropriateness of the t distribution for setting confidence limits to means of samples from a normally distributed population with unknown σ through a sampling experiment.

Experiment 6.3. Repeat the computations and procedures of Experiment 6.2 (Section 6.3), but base the standard errors of the means on the standard deviations computed for each sample and use the appropriate t value in place of a standard normal deviate.

Figure 6.6 shows 95% confidence limits of 200 sampled means of 35 housefly wing lengths, computed with t and s_Ȳ rather than with the normal curve and σ_Ȳ. We note that 191 (95.5%) of the 200 confidence intervals cross the parametric mean.

FIGURE 6.6 Ninety-five percent confidence intervals of means of 200 samples of 35 housefly wing lengths, based on sample standard errors s_Ȳ. The heavy horizontal line is the parametric mean μ; the ordinate represents the variable, the abscissa the number of trials.

We can use the same technique for setting confidence limits to any given statistic as long as it follows the normal distribution. This will apply in an approximate way to all the statistics of Box 6.1. Thus, for example, we may set confidence limits to the coefficient of variation of the aphid femur lengths of Box 6.2. These are computed as

P{V − t_α[n−1]·s_V ≤ V_p ≤ V + t_α[n−1]·s_V} = 1 − α

where V_p stands for the parametric value of the coefficient of variation. Since the standard error of the coefficient of variation equals approximately s_V = V/√(2n), we proceed as follows:

V = 100s/Ȳ = 100(0.3656)/4.004 = 9.13
s_V = V/√(2n) = 9.13/√50 = 9.13/7.0711 = 1.29
L1 = V − t_0.05[24]·s_V = 9.13 − (2.064)(1.29) = 9.13 − 2.66 = 6.47
L2 = V + t_0.05[24]·s_V = 9.13 + 2.66 = 11.79

When sample size is very large or when σ is known, the distribution is effectively normal. However, rather than turn to the table of areas of the normal curve, it is convenient to simply use t_α[∞], the t distribution with infinite degrees of freedom.

Although confidence limits are a useful measure of the reliability of a sample statistic, they are not commonly given in scientific publications; the statistic plus or minus its standard error is cited in their place. Thus, you will frequently see column headings such as "Mean ± S.E." This indicates that the reader is free to use the standard error to set confidence limits if so inclined. It should be obvious to you from your study of the t distribution that you cannot set confidence limits to a statistic without knowing the sample size on which it is based, n being necessary to compute the correct degrees of freedom. Thus, the occasional citing of means and standard errors without also stating sample size n is to be strongly deplored.

It is important to state a statistic and its standard error to a sufficient number of decimal places. The following rule of thumb helps: divide the standard error by 3, then note the decimal place of the first nonzero digit of the quotient; give the statistic significant to that decimal place and provide one further decimal for the standard error. This rule is quite simple, as an example will illustrate. If the mean and standard error of a sample are computed as 2.354 ± 0.363, we divide 0.363 by 3, which yields 0.121. Therefore the mean should be reported to one decimal place and the standard error to two decimal places, and we report this result as 2.4 ± 0.36. If, on the other hand, the same mean had a standard error of 0.243, dividing this standard error by 3 would have yielded 0.081, and the first nonzero digit would have been in the second decimal place. Thus the mean should have been reported as 2.35 ± 0.243.

6.6 The chi-square distribution

Another continuous distribution of great importance in statistics is the distribution of χ² (read chi-square). We need to learn it now in connection with the distribution and confidence limits of variances.

The chi-square distribution is a probability density function whose values range from zero to positive infinity. Thus, unlike the normal distribution or t, the function approaches the horizontal axis asymptotically only at the right-hand tail of the curve, not at both tails. The function describing the χ² distribution is complicated and will not be given here. As in t, there is not merely one χ² distribution, but one distribution for each number of degrees of freedom; that is, χ² is a function of ν, the number of degrees of freedom. Figure 6.7 shows probability density functions for the χ² distributions for 1, 2, 3, and 6 degrees of freedom. Notice that the curves are strongly skewed to the right, L-shaped at first, but more or less approaching symmetry for higher degrees of freedom.

We can generate a χ² distribution from a population of standard normal deviates. You will recall that we standardize a variable Y_i by subjecting it to the operation (Y_i − μ)/σ. Let us symbolize a standardized variable as Y'_i = (Y_i − μ)/σ. Now imagine repeated samples of n variates Y_i from a normal population with mean μ and standard deviation σ. For each sample, we transform every variate Y_i to Y'_i, as defined above. The quantities Σ Y'²_i computed for each sample will be distributed as a χ² distribution with n degrees of freedom.

FIGURE 6.7 Frequency curves of the χ² distribution for 1, 2, 3, and 6 degrees of freedom.
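The construction just described, a sum of squares of n standardized normal deviates, is easy to imitate by simulation (our sketch, not the book's; standard-library Python). The average of the simulated values should be close to n, the expected value of a χ² distribution with n degrees of freedom:

```python
import random
from statistics import fmean

random.seed(3)
N, REPS = 5, 20000  # n standardized deviates per simulated sample

# Each simulated value is the sum of squares of n standard normal deviates,
# which follows a chi-square distribution with n degrees of freedom.
chisq = [sum(random.gauss(0, 1) ** 2 for _ in range(N)) for _ in range(REPS)]
print(round(fmean(chisq), 2))  # should be near the expected value, n = 5
```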

Using the definition of Y'_i, we can rewrite Σ Y'²_i as

Σ [(Y_i − μ)²/σ²] = (1/σ²) Σ (Y_i − μ)²    (6.6)

which is simply the sum of squares of the variable divided by a constant, the parametric variance. When we change the parametric mean μ to a sample mean, this expression becomes

(1/σ²) Σ (Y_i − Ȳ)² = (1/σ²) Σ y²    (6.7)

Notice that, although we have samples of n items, we have lost a degree of freedom because we are now employing a sample mean rather than the parametric mean. Expression (6.7), computed for each sample, would yield a χ² distribution with n − 1 degrees of freedom. Another common way of stating this expression is

(n − 1)s²/σ²    (6.8)

Here we have replaced the numerator of Expression (6.7) with n − 1 times the sample variance, which, of course, yields the sum of squares.

If we were to sample repeatedly n items from a normally distributed population, the distribution of the sample variances would serve to illustrate a sample distribution approximating χ². Such a distribution of variances carries a second scale along the abscissa, which is the first scale multiplied by the constant (n − 1)/σ²; this scale converts the sample variances s² of the first scale into Expression (6.8). Since the second scale is proportional to s², the distribution is strongly skewed to the right, as would be expected in a χ² distribution.

Let us learn how to use Table IV. Conventional χ² tables, as shown in Table IV, give the probability levels customarily required and the degrees of freedom as arguments and list the χ² corresponding to the probability and the df as the functions. Each chi-square in Table IV is the value of χ² beyond which the area under the χ² distribution for ν degrees of freedom represents the indicated probability. Just as we used subscripts to indicate the cumulative proportion of the area as well as the degrees of freedom represented by a given value of t, we shall subscript χ² as follows: χ²_α[ν] indicates the χ² value to the right of which is found proportion α of the area under a χ² distribution for ν degrees of freedom. Thus, looking at the distribution of χ²_[2], we note that 90% of all values of χ²_[2] would be to the right of 0.211, but only 5% of all values of χ²_[2] would be greater than 5.991.

It can be shown that the expected value of χ²_[ν] (the mean of a χ² distribution) equals its degrees of freedom ν. Thus the expected value of a χ²_[5] distribution is 5. When we examine the 50% values (the medians) in the χ² table, we notice that they are generally lower than the expected values (the means); for χ²_[5] the 50% point is 4.351. This illustrates the asymmetry of the χ² distribution, the mean being to the right of the median.
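The tabled values cited in this paragraph can be recovered from an inverse χ² function (our sketch, not part of the original; assumes scipy). Remember that the text's subscript marks the area to the right, so χ²_α[ν] equals the quantile at cumulative probability 1 − α:

```python
# Assumes scipy is available; chi2.ppf is the inverse CDF.
from scipy.stats import chi2

print(round(chi2.ppf(0.50, 5), 3))  # 4.351: median of chi-square[5]
print(chi2.mean(5))                 # 5.0: the mean equals the df
print(round(chi2.ppf(0.10, 2), 3))  # 0.211: 90% of chi-square[2] lies beyond it
print(round(chi2.ppf(0.95, 2), 3))  # 5.991: only 5% lies beyond this value
```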

Our first application of the χ² distribution will be in the next section. However, its most extensive use will be in connection with Chapter 13.

6.7 Confidence limits for variances

We saw in the last section that the ratio (n − 1)s²/σ² is distributed as χ² with n − 1 degrees of freedom. We take advantage of this fact in setting confidence limits to variances.

First, we can make the following statement about the ratio (n − 1)s²/σ²:

P{χ²_(1−(α/2))[n−1] ≤ (n − 1)s²/σ² ≤ χ²_(α/2)[n−1]} = 1 − α    (6.9)

This expression is similar to those encountered in Section 6.3 and implies that the probability P that this ratio will be within the indicated boundary values of χ²_[n−1] is 1 − α. Simple algebraic manipulation of the quantities in the inequality within brackets yields

P{(n − 1)s²/χ²_(α/2)[n−1] ≤ σ² ≤ (n − 1)s²/χ²_(1−(α/2))[n−1]} = 1 − α    (6.10)

Since (n − 1)s² = Σy², the middle portion of Expression (6.10) can be simplified to

Σy²/χ²_(α/2)[n−1] ≤ σ² ≤ Σy²/χ²_(1−(α/2))[n−1]

This still looks like a formidable expression, but it simply means that if we divide the sum of squares Σy² by the two values of χ²_[n−1] that cut off tails each amounting to α/2 of the area of the χ²_[n−1] distribution, the two quotients will enclose the true value of the variance σ² with a probability of P = 1 − α.

An actual numerical example will make this clear. Suppose we have a sample of 5 housefly wing lengths with a sample variance of s² = 13.52. If we wish to set 95% confidence limits to the parametric variance, we evaluate Expression (6.10) for this sample. We first calculate the sum of squares: 4 × 13.52 = 54.08. Then we look up the values for χ²_0.025[4] and χ²_0.975[4]; since 95% confidence limits are required, α is in this case equal to 0.05, and these two χ² values span between them 95% of the area under the χ² curve. They correspond to 11.143 and 0.484, respectively, and the limits then become

L1 = 54.08/11.143 = 4.85    and    L2 = 54.08/0.484 = 111.74

This confidence interval is very wide, but we must not forget that the sample variance is, after all, based on only 5 individuals. Note also that the interval is asymmetrical around 13.52, the sample variance.
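The numerical example above can be verified directly (our sketch, not part of the original; it assumes scipy for the Table IV quantiles; scipy's more precise quantiles shift the upper limit slightly from the text's 111.74, which used the rounded 0.484):

```python
# Assumes scipy is available.
from scipy.stats import chi2

n, s2, alpha = 5, 13.52, 0.05  # housefly wing-length sample from the text
ss = (n - 1) * s2              # sum of squares, 54.08

chi_hi = chi2.ppf(1 - alpha / 2, n - 1)  # 11.143
chi_lo = chi2.ppf(alpha / 2, n - 1)      # 0.484
L1, L2 = ss / chi_hi, ss / chi_lo
print(f"L1 = {L1:.2f}, L2 = {L2:.2f}")
```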

This is in contrast to the confidence intervals encountered earlier, which were symmetrical around the sample statistic.

The method described above is called the equal-tails method, because an equal amount of probability is placed in each tail (for example, 2½%). It can be shown that, in view of the skewness of the distribution of variances, this method does not yield the shortest possible confidence intervals. One may wish the confidence interval to be "shortest" in the sense that the ratio L2/L1 be as small as possible. Box 6.3 shows how to obtain these shortest unbiased confidence intervals for σ² using Table VII, based on the method of Tate and Klett (1959). This table gives (n − 1)/χ²p[n−1], where p is an adjusted value of α/2 or 1 − (α/2) designed to yield shortest unbiased confidence intervals.

BOX 6.3
Method of shortest unbiased confidence intervals for σ².
Aphid stem mother femur lengths from Box 2.1: n = 25, s² = 0.1337. The computation is very simple. The factors from Table VII for ν = n − 1 = 24 df and a confidence coefficient of (1 − α) = 0.95 are f1 = 0.5943 and f2 = 1.876; for a confidence coefficient of 0.99 they are f1 = 0.5139 and f2 = 2.351. The 95% confidence limits for the population variance σ² are given by the equations

L1 = (lower limit) = f1s² = 0.5943(0.1337) = 0.0795
L2 = (upper limit) = f2s² = 1.876(0.1337) = 0.2508

The 99% confidence limits are

L1 = 0.5139(0.1337) = 0.0687
L2 = 2.351(0.1337) = 0.3143

6.8 Introduction to hypothesis testing

The most frequent application of statistics in biological research is to test some scientific hypothesis. Statistical methods are important in biology because results of experiments are usually not clear-cut and therefore need statistical tests to support decisions between alternative hypotheses. A statistical test examines a set of sample data and, on the basis of an expected distribution of the data, leads to a decision on whether to accept the hypothesis underlying the expected distribution or to reject that hypothesis and accept an alternative one.
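The arithmetic of Box 6.3 is only a pair of multiplications, but a short sketch makes the structure plain. The factors f1 and f2 below are the Table VII entries quoted in the box; treat them as table lookups, not computed quantities:

```python
# Shortest unbiased confidence limits for sigma^2 (Tate-Klett method).
# f1 and f2 are factors read from Table VII for nu = n - 1 = 24 df.

s2 = 0.1337   # sample variance of the 25 aphid femur lengths

factors = {0.95: (0.5943, 1.876),    # (f1, f2) for 95% confidence
           0.99: (0.5139, 2.351)}    # (f1, f2) for 99% confidence

# The limits are simple multiples of the sample variance.
limits = {coeff: (round(f1 * s2, 4), round(f2 * s2, 4))
          for coeff, (f1, f2) in factors.items()}

print(limits)  # {0.95: (0.0795, 0.2508), 0.99: (0.0687, 0.3143)}
```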

T h e above c o m p u t a t i o n is based on the idea of a one-tailed test. These d a t a were examined for their fit t o the binomial frequency distribution presented in Section 4. and since the distribution is symmetrical. and their analysis was shown in T a b l e 4.188. or we may decide that so deviant a sample is too improbable an event to justify acceptance of the null hypothesis. = 0. but it is not very probable. or 1.012.3% of the time. the probability of obtaining a sample with 14 males a n d 3 females would be 0.116 c h a p t e r 6 /' e s t i m a t i o n a n d h y p o t h e s i s testing one. which is the hypothesis under test.363.3. If we decide to reject the hypothesis under these circumstances. Study the material below very carefully. Either of these decisions may be correct. We may decide that the null hypothesis is in fact true (that is. 14 of which were females a n d 3 of which were males. the null hypothesis. T h e n a t u r e of the tests varies with the d a t a a n d the hypothesis. because it is f u n d a m e n t a l to an u n d e r s t a n d i n g of every subsequent chapter in this b o o k ! W e would like to refresh your m e m o r y on the sample of 17 animals of species A. still a very small value. It is called the null hypothesis because it assumes that there is n o real difference between the true value of ρ in the p o p u l a t i o n from which we sampled and the hypothesized value of ρ = 0. W e learned that it is conventional to include all "worse" o u t c o m e s — t h a t is. If in fact the 1:1 hypothesis is correct.5 is true.5. all those that deviate even m o r e f r o m the o u t c o m e expected on the hypothesis p9 = qs = 0.5.005. the sex ratio is 1:1) and that the sample obtained by us just happened to be one of those in the tail of the distribution.006. the null hypothesis implies that the only reason o u r sample does not exhibit a 1:1 sex ratio is because of sampling error.5.2. 
but the same general philosophy of hypothesis testing is c o m m o n to all tests a n d will be discussed in this section. This requires the probability either of obtaining a sample of 3 females a n d 14 males (and all worse samples) or of obtaining 14 females and 3 males (and all worse samples). Including all worse outcomes.3 that if the sex ratio in the p o p u l a t i o n was 1:1 (ρς = qs = 0. The rejection of a true null hypothesis is called a type I error. W e may therefore decide that the hypothesis that the sex ratio is 1:1 is not true. in which we are interested only in departures f r o m the 1:1 sex ratio that show a prep o n d e r a n c e of females. if in fact the true sex ratio of the pop- . If the null hypothesis p. = 0.5). If we have no preconception a b o u t the direction of the d e p a r t u r e s f r o m expectation. we may m a k e one of two decisions.t = q. then the first decision (to accept the null hypothesis) will be correct. Let us call this hypothesis H0. Thus. we commit an error. Such a test is two-tailed. We concluded f r o m T a b l e 4. m a k i n g it very unlikely that such a result could be obtained by chance alone. then approximately 13 samples out of 1000 will be as deviant as or more deviant than this one in either direction by chance alone. depending u p o n the truth of the matter. If we actually obtain such a sample.t = q . we d o u b l e the previously discussed probability to yield 0. O n the other hand. Applied to the present example. since so deviant an event would occur only a b o u t 13 out of 1000 times. the probability is 0. W h a t does this probability mean? It is our hypothesis that p. we must calculate the probability of obtaining a sample as deviant as 14 females a n d 3 males in either direction f r o m expectation.726. it is quite possible to have arrived at a sample of 14 females and 3 males by chance.
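The tail probabilities quoted above can be checked by direct enumeration of the binomial. A minimal sketch (Python, standard library only; the variable names are ours):

```python
from math import comb

n = 17                      # animals in the sample
# One-tailed: P(14 or more females) under p = q = 0.5,
# i.e. the observed outcome plus all "worse" ones in that direction.
p_one = sum(comb(n, k) for k in range(14, n + 1)) / 2**n

# Two-tailed: deviations this extreme in either direction.  The
# distribution is symmetrical under H0, so we simply double.
p_two = 2 * p_one

print(round(p_one, 6), round(p_two, 6))  # 0.006363 0.012726
```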

Acceptance of a false null hypothesis is called a type II error. Finally, if the 1:1 hypothesis is not true and we do decide to reject it, then we again make the correct decision. Thus, there are two kinds of correct decisions, accepting a true null hypothesis and rejecting a false null hypothesis, and there are two kinds of errors: type I, rejecting a true null hypothesis, and type II, accepting a false null hypothesis. These relationships between hypotheses and decisions can be summarized in the following table:

                            Statistical decision
Actual situation            Null hypothesis accepted    Null hypothesis rejected
Null hypothesis true        Correct decision            Type I error
Null hypothesis false       Type II error               Correct decision

Before we carry out a test, we have to decide what magnitude of type I error (rejection of a true hypothesis) we are going to allow. Even when we sample from a population of known parameters, there will always be some samples that by chance are very deviant. The most deviant of these are likely to mislead us into believing our hypothesis H0 to be untrue. If we permit 5% of samples to lead us into a type I error, then we shall reject 5 out of 100 samples from the population, deciding that these are not samples from the given population. In the distribution under study, this means that we would reject all samples of 17 animals containing 13 of one sex plus 4 of the other sex. This can be seen by referring to column (3) of Table 6.3, where the expected frequencies of the various outcomes on the hypothesis p♀ = q♂ = 0.5 are shown. This table is an extension of the earlier Table 4.3, which showed only a tail of this distribution. Actually, you obtain a type I error slightly less than 5% if you sum relative expected frequencies for both tails starting with the class of 13 of one sex and 4 of the other. From Table 6.3 it can be seen that the relative expected frequency in the two tails will be 2 × 0.024521 = 0.049042. In a discrete frequency distribution, such as the binomial, we cannot calculate errors of exactly 5% as we can in a continuous frequency distribution, where we can measure off exactly 5% of the area. If we decide on an approximate 1% error, we will reject the hypothesis p♀ = q♂ for all samples of 17 animals having 14 or more of one sex. From Table 6.3 we find the relative expected frequency in the tails equals 2 × 0.006363 = 0.012726. Thus, the smaller the type I error we are prepared to accept, the more deviant a sample has to be for us to reject the null hypothesis H0.

Your natural inclination might well be to have as little error as possible. You may decide to work with an extremely small type I error, such as 0.1% or even 0.01%, accepting the null hypothesis unless the sample is extremely deviant. The difficulty with such an approach is that, although guarding against a type I error, you might be falling into a type II error, accepting the null hypothesis when in fact it is not true and an alternative hypothesis H1 is true.
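Because the binomial is discrete, only certain type I error rates are attainable. The "approximately 5%" and "approximately 1%" cutoffs above correspond to the following tail sums (a sketch in Python, standard library only):

```python
from math import comb

n = 17

def upper_tail(k_min):
    """P(k_min or more females) under the 1:1 hypothesis."""
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2**n

# Reject at 13 or more of one sex: attainable alpha just under 5%.
alpha_5 = 2 * upper_tail(13)
# Reject at 14 or more of one sex: attainable alpha about 1.3%.
alpha_1 = 2 * upper_tail(14)

print(round(alpha_5, 6), round(alpha_1, 6))  # 0.049042 0.012726
```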

TABLE 6.3
Relative expected frequencies for samples of 17 animals under two hypotheses. Binomial distribution.

(1)    (2)    (3)                      (4)
♀♀     ♂♂     f_rel under H0           f_rel under H1
17      0     0.0000076                0.0010150
16      1     0.0001297                0.0086272
15      2     0.0010376                0.0345086
14      3     0.0051880                0.0862715
13      4     0.0181580                0.1509752
12      5     0.0472107                0.1962677
11      6     0.0944214                0.1962677
10      7     0.1483765                0.1542104
 9      8     0.1854706                0.0963815
 8      9     0.1854706                0.0481907
 7     10     0.1483765                0.0192763
 6     11     0.0944214                0.0061334
 5     12     0.0472107                0.0015333
 4     13     0.0181580                0.0002949
 3     14     0.0051880                0.0000421
 2     15     0.0010376                0.0000042
 1     16     0.0001297                0.0000002
 0     17     0.0000076                0.0000000
Total         1.0000002                0.9999999

Presently, we shall show how this comes about, but first let us learn some more terminology. Type I error is most frequently expressed as a probability and is symbolized by α. When a type I error is expressed as a percentage, it is also known as the significance level. Thus a type I error of α = 0.05 corresponds to a significance level of 5% for a given test. When we cut off on a frequency distribution those areas proportional to α (the type I error), the portion of the abscissa under the area that has been cut off is called the rejection region or critical region of a test. The portion of the abscissa that would lead to acceptance of the null hypothesis is called the acceptance region. Figure 6.8A is a bar diagram showing the expected distribution of outcomes in the sex ratio example, given H0; the dashed lines separate the rejection regions from the 99% acceptance region.

Now let us take a closer look at the type II error. This is the probability of accepting the null hypothesis when in fact it is false. If you try to evaluate the probability of a type II error, you immediately run into a problem. If the null hypothesis H0 is false, some other hypothesis H1 must be true. But unless you can specify H1, you are not in a position to calculate the type II error. An example will make this clear immediately. Suppose in our sex ratio case we have only two reasonable possibilities: (1) our old hypothesis H0: p♀ = q♂, or (2) an alternative hypothesis H1: p♀ = 2q♂, which states that the sex ratio is 2:1 in favor of females, so that p♀ = ⅔ and q♂ = ⅓.
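Every entry in Table 6.3 is just a binomial probability, so both columns can be regenerated directly. A sketch (Python, standard library only; exact fractions are used internally to avoid rounding drift):

```python
from math import comb
from fractions import Fraction

n = 17

def f_rel(k_females, p_female):
    """Relative expected frequency of k females in a sample of n."""
    p = Fraction(p_female)
    q = 1 - p
    return float(comb(n, k_females) * p**k_females * q**(n - k_females))

# Column (3): H0 with p = 1/2; column (4): H1 with p = 2/3.
# Rows run from 17 females down to 0 females, as in Table 6.3.
col_h0 = [f_rel(k, Fraction(1, 2)) for k in range(n, -1, -1)]
col_h1 = [f_rel(k, Fraction(2, 3)) for k in range(n, -1, -1)]

print(round(col_h0[3], 7), round(col_h1[3], 7))  # row for 14 females, 3 males
```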

We now have to calculate expected frequencies for the binomial distribution (p♀ + q♂)^17 = (⅔ + ⅓)^17 to find the probabilities of the various outcomes under the alternative hypothesis. These are shown graphically in Figure 6.8B and are tabulated and compared with expected frequencies of the earlier distribution in Table 6.3.

Suppose we had decided on a type I error of α ≈ 0.01 (≈ means "approximately equal to"), as shown in Figure 6.8A. At this significance level we would accept the H0 for all samples of 17 having 13 or fewer animals of one sex. Approximately 99% of all samples will fall into this category. However, what if H0 is not true and H1 is true? Clearly, from the population represented by hypothesis H1 we could also obtain outcomes in which one sex was represented 13 or fewer times in samples of 17.

FIGURE 6.8
Expected distributions of outcomes when sampling 17 animals from two hypothetical populations. (A) H0: p♀ = q♂ = ½. (B) H1: p♀ = 2q♂ = ⅔. Dashed lines separate critical regions from the 99% acceptance region of the distribution of part A. Type I error α equals approximately 0.01. Abscissa: number of females in samples of 17 animals.

We have to calculate what proportion of the curve representing hypothesis H1 will overlap the acceptance region of the distribution representing hypothesis H0. In this case we find that 0.8695 of the distribution representing H1 overlaps the acceptance region of H0 (see Figure 6.8B). Thus, if H1 is really true (and H0 correspondingly false), we would erroneously accept the null hypothesis 86.95% of the time. This percentage corresponds to the proportion of samples from H1 that fall within the limits of the acceptance regions of H0. This proportion is called β, the type II error expressed as a proportion. In this example β is quite large. Clearly, a sample of 17 animals is unsatisfactory to discriminate between the two hypotheses. Though 99% of the samples under H0 would fall in the acceptance region, fully 87% would do so under H1. A single sample that falls in the acceptance region would not enable us to reach a decision between the hypotheses with a high degree of reliability. If the sample had 14 or more females, we would conclude that H1 was correct. If it had 3 or fewer females, we might conclude that neither H0 nor H1 was true. As H1 approached H0 (as in H1: p = 0.55, for example), the two distributions would overlap more and more and the magnitude of β would increase, making discrimination between the hypotheses even less likely. Conversely, if H1 represented p♀ = 0.9, the distributions would be much farther apart and type II error β would be reduced.

Clearly, then, the magnitude of β depends, among other things, on the parameters of the alternative hypothesis H1 and cannot be specified without knowledge of the latter. When the alternative hypothesis is fixed, as in the previous example (H1: p♀ = 2q♂), the magnitude of the type I error α we are prepared to tolerate will determine the magnitude of the type II error β. The smaller the rejection region α in the distribution under H0, the greater will be the acceptance region 1 − α in this distribution. The greater 1 − α, however, the greater will be its overlap with the distribution representing H1, and hence the greater will be β. Convince yourself of this in Figure 6.8. By moving the dashed lines outward, we are reducing the critical regions representing type I error α in diagram A. But as the dashed lines move outward, more of the distribution of H1 in diagram B will lie in the acceptance region of the null hypothesis. Thus, by decreasing α, we are increasing β and in a sense defeating our own purposes. In most applications, scientists would wish to keep both of these errors small, since they do not wish to reject a null hypothesis when it is true, nor do they wish to accept it when another hypothesis is correct. We shall see in the following what steps can be taken to decrease β while holding α constant at a preset level.

Although significance levels α can be varied at will, investigators are frequently limited because, for many tests, cumulative probabilities of the appropriate distributions have not been tabulated and so published probability levels must be used. These are commonly 0.05, 0.01, and 0.001, although several others are occasionally encountered. When a null hypothesis has been rejected at a specified level of α, we say that the sample is significantly different from the parametric or hypothetical population at probability P < α.
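The overlap proportion β quoted above can be verified by summing the H1 binomial over the acceptance region of H0 (4 through 13 females). A sketch in Python (standard library only):

```python
from math import comb
from fractions import Fraction

n = 17
p = Fraction(2, 3)            # alternative hypothesis H1: p_female = 2/3

def prob(k):
    """P(k females | H1) for a sample of 17."""
    return float(comb(n, k) * p**k * (1 - p)**(n - k))

# Acceptance region of H0 at alpha ~ 0.01: 4 to 13 females inclusive.
beta = sum(prob(k) for k in range(4, 14))

print(round(beta, 4))  # 0.8695 -- the type II error under this H1
```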

Generally, values of α greater than 0.05 are not considered to be statistically significant. A significance level of 5% (P = 0.05) corresponds to one type I error in 20 trials, a level of 1% (P = 0.01) to one error in 100 trials. Significance levels of 1% or less (P < 0.01) are nearly always adjudged significant; those between 5% and 1% may be considered significant at the discretion of the investigator. Since statistical significance has a special technical meaning (H0 rejected at P < α), we shall use the adjective "significant" only in this sense; its use in any other sense in scientific papers and reports should be discouraged, unless such a technical meaning is clearly implied. For general descriptive purposes synonyms such as important, meaningful, marked, noticeable, and others can serve to underscore differences and effects.

A brief remark on null hypotheses represented by asymmetrical probability distributions is in order here. Suppose our null hypothesis in the sex ratio case had been H0: p♀ = ⅔. The distribution of samples of 17 offspring from such a population is shown in Figure 6.8B. It is clearly asymmetrical, and for this reason the critical regions have to be defined independently. For a given two-tailed test we can either double the probability P of a deviation in the direction of the closer tail and compare 2P with α, or we can compare P with α/2, half the conventional level of significance. In this latter case, 0.025 is the maximum value of P conventionally considered significant.

We shall review what we have learned by means of a second example, this time involving a continuous frequency distribution, the normally distributed housefly wing lengths, of parametric mean μ = 45.5 and variance σ² = 15.21. Means based on 5 items sampled from these will also be normally distributed, as was demonstrated in Table 6.1 and Figure 6.1. Let us assume that someone presents you with a single sample of 5 housefly wing lengths and you wish to test whether they could belong to the specified population. Your null hypothesis will be H0: μ = 45.5 or H0: μ = μ0, where μ is the true mean of the population from which you have sampled and μ0 stands for the hypothetical parametric mean of 45.5. We shall assume for the moment that we have no evidence that the variance of our sample is very much greater or smaller than the parametric variance of the housefly wing lengths. If it were, it would be unreasonable to assume that our sample comes from the specified population. There is a critical test of the assumption about the sample variance, which we shall take up later. The curve at the center of Figure 6.9 represents the expected distribution of means of samples of 5 housefly wing lengths from the specified population. Acceptance and rejection regions for a type I error α = 0.05 are delimited along the abscissa. The boundaries of the critical regions are computed as follows (remember that t∞[∞] is equivalent to the normal distribution):

L1 = μ0 − t0.05[∞]σȲ = 45.5 − (1.96)(1.744) = 42.08
L2 = μ0 + t0.05[∞]σȲ = 45.5 + (1.96)(1.744) = 48.92
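These boundaries are easy to verify. A minimal sketch (Python; the value 1.96 is the two-tailed 5% normal deviate quoted in the text):

```python
from math import sqrt

mu0 = 45.5                 # hypothetical parametric mean
var = 15.21                # parametric variance of wing lengths
n = 5
sigma_mean = sqrt(var) / sqrt(n)   # standard error of the mean, ~1.744

t = 1.96                   # t_0.05[inf], the two-tailed 5% normal value
L1 = mu0 - t * sigma_mean  # lower boundary of the acceptance region
L2 = mu0 + t * sigma_mean  # upper boundary

print(round(sigma_mean, 3), round(L1, 2), round(L2, 2))  # 1.744 42.08 48.92
```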

Thus, we would consider it improbable for means less than 42.08 or greater than 48.92 to have been sampled from this population. For such sample means we would therefore reject the null hypothesis. The test we are proposing is two-tailed because we have no a priori assumption about the possible alternatives to our null hypothesis. If we could assume that the true mean of the population from which the sample was taken could only be equal to or greater than 45.5, the test would be one-tailed.

Now let us examine various alternative hypotheses. One alternative hypothesis might be that the true mean of the population from which our sample stems is 54.0, but that the variance is the same as before. We can express this assumption as H1: μ = 54.0 or H1: μ = μ1, where μ1 stands for the alternative parametric mean 54.0. From Table II ("Areas of the normal curve") and our knowledge of the variance of the means, we can calculate the proportion of the distribution implied by H1 that would overlap the acceptance region implied by H0. We find that 54.0 is 5.08 measurement units from 48.92, the upper boundary of the acceptance region of H0. This corresponds to 5.08/1.744 = 2.91σȲ units. From Table II we find that 0.0018 of the area will lie beyond 2.91σ at one tail of the curve. Thus, under this alternative hypothesis, 0.0018 of the distribution of H1 will overlap the acceptance region of H0. This is β, the type II error under this alternative hypothesis. Actually, this is not entirely correct. Since the left tail of the H1 distribution goes all the way to negative infinity, part of it will leave the acceptance region and cross over into the left-hand rejection region of H0. However, this represents only an infinitesimal amount of the area of H1 (the lower critical boundary of H0, 42.08, is 6.83σȲ units from μ1 = 54.0) and can be ignored.

Our alternative hypothesis H1 specified that μ1 is 8.5 units greater than μ0. However, as said before, we may have no a priori reason to believe that the true mean of our sample is either greater or less than μ0. Therefore, we may simply assume that it is 8.5 measurement units away from 45.5. In such a case we must similarly calculate β for the alternative hypothesis that μ1 = μ0 − 8.5.

FIGURE 6.9
Expected distribution of means of samples of 5 housefly wing lengths from normal populations specified by μ as shown above curves and σȲ = 1.744. Center curve represents null hypothesis, H0: μ = 45.5; curves at sides represent alternative hypotheses, H1: μ = 37 or H1: μ = 54. Vertical lines delimit 5% rejection regions for the null hypothesis (2½% in each tail, shaded). Abscissa: wing length (in units of 0.1 mm).
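The area 0.0018 can be recovered with the normal cumulative distribution, written here via math.erf using the standard identity Φ(z) = ½(1 + erf(z/√2)):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

sigma_mean = 3.9 / sqrt(5)            # 1.744, standard error for n = 5
L1 = 45.5 - 1.96 * sigma_mean         # acceptance region boundaries
L2 = 45.5 + 1.96 * sigma_mean

mu1 = 54.0                            # alternative hypothesis H1
# beta = proportion of the H1 curve falling inside the acceptance region
beta = norm_cdf((L2 - mu1) / sigma_mean) - norm_cdf((L1 - mu1) / sigma_mean)

print(round(beta, 4))  # 0.0018
```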

Thus the alternative hypothesis becomes H1: μ = 54.0 or 37.0, or H1: μ = μ1, where μ1 represents either 54.0 or 37.0, the alternative parametric means. Since the distributions are symmetrical, β is the same for both alternative hypotheses. Type II error for hypothesis H1 is therefore 0.0018, regardless of which of the two alternative hypotheses is correct. If H1 is really true, 18 out of 10,000 samples will lead to an incorrect acceptance of H0, a very low proportion of error. These relations are shown in Figure 6.9.

You may rightly ask what reason we have to believe that the alternative parametric value for the mean is 8.5 measurement units to either side of μ0 = 45.5. It would be quite unusual if we had any justification for such a belief. As a matter of fact, the true mean may just as well be 7.5 or 6.0 or any number of units to either side of μ0. If we draw curves for H1: μ = μ0 ± 7.5, we find that β has increased considerably, the curves for H0 and H1 now being closer together. Thus, the magnitude of β will depend on how far the alternative parametric mean is from the parametric mean of the null hypothesis. As the alternative mean approaches the parametric mean, β increases up to a maximum value of 1 − α, which is the area of the acceptance region under the null hypothesis. At this maximum, the two distributions would be superimposed upon each other. Figure 6.10 illustrates the increase in β as μ1 approaches μ0, starting with the test illustrated in Figure 6.9. To simplify the graph, the alternative distributions are shown for one tail only. Thus, we clearly see that β is not a fixed value but varies with the nature of the alternative hypothesis.

An important concept in connection with hypothesis testing is the power of a test. It is 1 − β, the complement of β, and is the probability of rejecting the null hypothesis when in fact it is false and the alternative hypothesis is correct. Obviously, for any given test we would like the quantity 1 − β to be as large as possible and the quantity β as small as possible. Since we generally cannot specify a given alternative hypothesis, we have to describe β or 1 − β for a continuum of alternative values. When 1 − β is graphed in this manner, the result is called a power curve for the test under consideration. Figure 6.11 shows the power curve for the housefly wing length example just discussed. This figure can be compared with Figure 6.10, from which it is directly derived; Figure 6.10 emphasizes the type II error β, and Figure 6.11 graphs the complement of this value, 1 − β. We note that the power of the test falls off sharply as the alternative hypothesis approaches the null hypothesis. Common sense confirms these conclusions: we can make clear and firm decisions about whether our sample comes from a population of mean 45.5 or 54.0; the power is essentially 1. But if the alternative hypothesis is that μ1 = 45.6, differing only by 0.1 from the value assumed under the null hypothesis, it will be difficult to decide which of these hypotheses is true, and the power will be very low.

To improve the power of a given test (or decrease β) while keeping α constant for a stated null hypothesis, we must increase sample size. If instead of sampling 5 wing lengths we had sampled 35, the distribution of means would be much narrower. Rejection regions for the identical type I error would now commence at 44.21 and 46.79. Although the acceptance and rejection regions have remained the same proportionately, the acceptance region has become much narrower in absolute value.
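The computations behind Figures 6.10 and 6.11 can be sketched as a power function of the alternative mean and the sample size (Python; Φ again via math.erf, and 1.96 as the two-tailed 5% normal value):

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(mu1, n, mu0=45.5, sigma=3.9, z=1.96):
    """Power of the two-tailed test of H0: mu = mu0 when the true mean is mu1."""
    se = sigma / sqrt(n)
    lo, hi = mu0 - z * se, mu0 + z * se      # acceptance region under H0
    beta = norm_cdf((hi - mu1) / se) - norm_cdf((lo - mu1) / se)
    return 1.0 - beta

print(round(power(54.0, 5), 4))   # distant alternative: power near 1
print(round(power(45.6, 5), 4))   # close alternative: power barely above alpha
print(round(power(54.0, 35), 6))  # larger sample: power essentially 1
```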

FIGURE 6.10
Diagram to illustrate increases in type II error β as alternative hypothesis H1 approaches null hypothesis H0 (that is, as μ1 approaches μ0). Shading represents β. Vertical lines mark off 5% critical regions (2½% in each tail) for the null hypothesis. To simplify the graph, the alternative distributions are shown for one tail only. Data identical to those in Figure 6.9. Abscissa: wing length (in units of 0.1 mm).

FIGURE 6.11
Power curves for testing H0: μ = 45.5, H1: μ ≠ 45.5, for n = 5. Abscissa: wing length (in units of 0.1 mm).

Previously, we could not, with confidence, reject the null hypothesis for a sample mean of 48.0. Now, when based on 35 individuals, a mean as deviant as 48.0 would occur only 15 times out of 100,000, and the hypothesis would, therefore, be rejected. What has happened to type II error? Since the distribution curves are not as wide as before, there is less overlap between them; if the alternative hypothesis H1: μ = 54.0 or 37.0 is true, the probability that the null hypothesis could be accepted by mistake (type II error) is infinitesimally small. If we let μ1 approach μ0, β will increase, of course, but it will always be smaller than the corresponding value for sample size n = 5. This comparison is shown in Figure 6.11, where the power for the test with n = 35 is much higher than that for n = 5. If we were to increase our sample size to 100 or 1000, the power would be still further increased. Thus, we reach an important conclusion: if a given test is not sensitive enough, we can increase its sensitivity (= power) by increasing sample size.

There is yet another way of increasing the power of a test. If we cannot increase sample size, the power may be raised by changing the nature of the test. Different statistical techniques testing roughly the same hypothesis may differ substantially both in the actual magnitude and in the slopes of their power curves. Tests that maintain higher power levels over substantial ranges of alternative hypotheses are clearly to be preferred. The popularity of the various nonparametric tests, mentioned in several places in this book, has grown not only because of their computational simplicity but also because their power curves are less affected by failure of assumptions than are those of the parametric methods. However, when all the assumptions of the parametric test are met, it is also true that nonparametric tests have lower overall power than parametric ones.

Let us briefly look at a one-tailed test. The null hypothesis is H0: μ0 = 45.5, as before. However, the alternative hypothesis assumes that we have reason to believe that the parametric mean of the population from which our sample has been taken cannot possibly be less than μ0 = 45.5; if it is different from that value, it can only be greater than 45.5. We might have two grounds for such a hypothesis. First, we might have some biological reason for such a belief. Our parametric flies might be a dwarf population, so that any other population from which our sample could come must be bigger. A second reason might be that we are interested in only one direction of difference. For example, we may be testing the effect of a chemical in the larval food intended to increase the size of the flies in the sample. Therefore, we would expect that μ1 > μ0, and we would not be interested in testing for any μ1 that is less than μ0, because such an effect is the exact opposite of what we expect. Similarly, if we are investigating the effect of a certain drug as a cure for cancer, we might wish to compare the untreated population that has a mean fatality rate θ (from cancer) with the treated population, whose rate is θ1. Our alternative hypothesis will be H1: θ1 < θ. That is, we are not interested in any θ1 that is greater than θ, because if our drug will increase mortality from cancer, it certainly is not much of a prospect for a cure.
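For such a one-tailed test at a 5% type I error, the entire rejection region sits in the upper tail, so the single critical boundary follows from the one-tailed normal value 1.645 (a check in Python):

```python
from math import sqrt

mu0 = 45.5
sigma_mean = 3.9 / sqrt(5)      # 1.744, standard error of the mean for n = 5

z_one_tailed = 1.645            # puts 5% of the normal curve in a single tail
boundary = mu0 + z_one_tailed * sigma_mean

print(round(boundary, 2))  # 48.37: reject H0 only for means above this value
```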

When such a one-tailed test is performed, the rejection region along the abscissa is under only one tail of the curve representing the null hypothesis. Thus, for our housefly data (distribution of means of sample size n = 5), the rejection region will be in one tail of the curve only, and for a 5% type I error it will appear as shown in Figure 6.12. We compute the critical boundary as 45.5 + (1.645)(1.744) = 48.37. The 1.645 is t0.10[∞], which corresponds to the 5% value for a one-tailed test. Compare this rejection region, which rejects the null hypothesis for all means greater than 48.37, with the two rejection regions in Figure 6.10, which reject the null hypothesis for means lower than 42.08 and greater than 48.92. The alternative hypothesis is considered for one tail of the distribution only, and the power curve of the test is not symmetrical but is drawn out with respect to one side of the distribution only.

FIGURE 6.12
One-tailed significance test for the distribution of Figure 6.9. Vertical line now cuts off 5% rejection region from one tail of the distribution (corresponding area of curve has been shaded). Abscissa: wing length (in units of 0.1 mm).

6.9 Tests of simple hypotheses employing the t distribution

We shall proceed to apply our newly won knowledge of hypothesis testing to a simple example involving the t distribution. Government regulations prescribe that the standard dosage in a certain biological preparation should be 600 activity units per cubic centimeter. We prepare 10 samples of this preparation and test each for potency. We find that the mean number of activity units per sample is 592.5 units per cc and the standard deviation of the samples is 11.2. Does our sample conform to the government standard? Stated more precisely, our null hypothesis is H0: μ = μ0. The alternative hypothesis is that the dosage is not equal to 600, or H1: μ ≠ μ0.

We proceed to calculate the significance of the deviation Ȳ − μ0 expressed in standard deviation units. The appropriate standard deviation is that of means (the standard error of the mean), not the standard deviation of items, because the deviation is that of a sample mean around a parametric mean. We therefore calculate sȲ = s/√n = 11.2/√10 = 3.542. We next test the deviation (Ȳ − μ0)/sȲ. We have seen earlier, in Section 6.4, that a deviation divided by an estimated standard deviation will be distributed according to the t distribution with n − 1 degrees of freedom.

We have seen earlier that a deviation divided by an estimated standard deviation will be distributed according to the t distribution with n − 1 degrees of freedom. We therefore write

t_s = (Ȳ − μ0)/s_Ȳ    (6.11)

This indicates that we would expect this deviation to be distributed as a t variate. Note that in Expression (6.11) we wrote t_s. In most textbooks you will find this ratio simply identified as t, but in fact the t distribution is a parametric and theoretical distribution that generally is only approached, but never equaled, by observed, sampled data. To conform with general statistical practice, the t distribution should really have a Greek letter (such as τ), with t serving as the sample statistic. Since this would violate long-standing practice, we prefer to use the subscript s to indicate the sample value, as in t_s. This may seem a minor distinction, but readers should be quite clear that in any hypothesis testing of samples we are only assuming that the distributions of the tested variables follow certain theoretical probability distributions.

The actual test is very simple. We calculate Expression (6.11),

t_s = (592.5 − 600)/3.542 = −7.5/3.542 = −2.12    df = n − 1 = 9

and compare it with the expected values for t at 9 degrees of freedom. Since the t distribution is symmetrical, we shall ignore the sign of t_s and always look up its positive value in Table III. The two values on either side of t_s are t0.05[9] = 2.26 and t0.10[9] = 1.83. These are t values for two-tailed tests, appropriate in this instance because the alternative hypothesis is that μ ≠ 600; that is, it can be smaller or greater. It appears that the significance level of our value of t_s is between 5% and 10%: if the null hypothesis is actually true, the probability of obtaining a deviation as great as or greater than 7.5 is somewhere between 0.05 and 0.10. By customary levels of significance, this is insufficient for declaring the sample mean significantly different from the standard. We consequently accept the null hypothesis. In conventional language, we would report the results of the statistical analysis as follows: "The sample mean is not significantly different from the accepted standard." Such a statement in a scientific report should always be backed up by a probability value, and the proper way of presenting this is to write "0.10 > P > 0.05." This means that the probability of such a deviation is between 0.05 and 0.10. Another way of saying this is that the value of t_s is not significant (frequently abbreviated as ns).

A convention often encountered is the use of asterisks after the computed value of the significance test, as in t_s = 2.86**. The symbols generally represent the following probability ranges:

* = 0.05 > P > 0.01    ** = 0.01 > P > 0.001    *** = P < 0.001

However, since some authors occasionally imply other ranges by these asterisks, the meaning of the symbols has to be specified in each scientific report.
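The arithmetic of Expression (6.11) is easy to verify by machine. Below is a minimal sketch in Python (ours, not the book's), working from the summary statistics quoted above, since the 10 individual potency readings are not given in the text.

```python
import math

# Summary statistics from the biological-preparation example.
n = 10        # number of samples tested
ybar = 592.5  # mean activity units per cc
s = 11.2      # standard deviation of the samples
mu0 = 600.0   # government standard, the parametric mean under H0

s_ybar = s / math.sqrt(n)    # standard error of the mean
t_s = (ybar - mu0) / s_ybar  # Expression (6.11)
df = n - 1

print(round(s_ybar, 3), round(t_s, 2), df)  # 3.542 -2.12 9
```

Since |t_s| = 2.12 lies between t0.10[9] = 1.83 and t0.05[9] = 2.26, the two-tailed probability is between 0.05 and 0.10, as stated above; the critical values themselves must still be taken from Table III.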

It might be argued that in a biological preparation the concern of the tester should not be whether the sample differs significantly from a standard, but whether it is significantly below the standard. This may be one of those biological preparations in which an excess of the active component is of no harm but a shortage would make the preparation ineffective at the conventional dosage. Then the test becomes one-tailed, performed in exactly the same manner except that the critical values of t for a one-tailed test are at half the probabilities of the two-tailed test. Thus 2.26, the former 0.05 value, becomes t0.025[9], and 1.83, the former 0.10 value, becomes t0.05[9], making our observed t_s value of 2.12 "significant at the 5% level" or, more precisely stated, significant at 0.05 > P > 0.025. If we are prepared to accept a 5% significance level, we would consider the preparation significantly below the standard.

You may be surprised that the same example, employing the same data and significance tests, should lead to two different conclusions, and you may begin to wonder whether some of the things you hear about statistics and statisticians are not, after all, correct. The explanation lies in the fact that the two results are answers to different questions. If we test whether our sample is significantly different from the standard in either direction, we must conclude that it is not different enough for us to reject the null hypothesis. If, on the other hand, we exclude from consideration the possibility that the true sample mean μ could be greater than the established standard μ0, the difference as found by us is clearly significant. It is obvious from this example that in any statistical test one must clearly state whether a one-tailed or a two-tailed test has been performed if the nature of the example is such that there could be any doubt about the matter. We should also point out that such a difference in the outcome of the results is not necessarily typical. It is only because the outcome in this case is in a borderline area between clear significance and nonsignificance. Had the difference between sample and standard been 10.5 activity units, the sample would have been unquestionably significantly different from the standard by the one-tailed or the two-tailed test.

The promulgation of a standard mean is generally insufficient for the establishment of a rigid standard for a product. If the variance among the samples is sufficiently large, it will never be possible to establish a significant difference between the standard and the sample mean. Remember that the standard error can be increased in two ways: by lowering sample size or by increasing the standard deviation of the replicates. Both of these are undesirable aspects of any experimental setup.

The test described above for the biological preparation leads us to a general test for the significance of any statistic, that is, for the significance of a deviation of any statistic from a parametric value, which is outlined in Box 6.4. Such a test applies whenever the statistics are expected to be normally distributed. When the standard error is estimated from the sample, the t distribution is used. However, since the normal distribution is just a special case of the t distribution, most statisticians uniformly apply the t distribution with the appropriate degrees of freedom, from 1 to infinity.

BOX 6.4
Testing the significance of a statistic, that is, the significance of a deviation of a sample statistic from a parametric value. For normally distributed statistics.

Computational steps

1. Compute t_s as the following ratio:

   t_s = (St − St_p)/s_St

   where St is a sample statistic, St_p is the parametric value against which the sample statistic is to be tested, and s_St is its estimated standard error, obtained from Box 6.1, or elsewhere in this book.

2. The pertinent hypotheses are

   H0: St = St_p,  H1: St ≠ St_p  for a two-tailed test, and
   H0: St = St_p,  H1: St > St_p  or  H1: St < St_p  for a one-tailed test.

3. In the two-tailed test, look up the critical value of t_α[ν], where α is the type I error agreed upon and ν is the degrees of freedom pertinent to the standard error employed (see Box 6.1). In the one-tailed test, look up the critical value of t_2α[ν] for a significance level of α.

4. Accept or reject the appropriate hypothesis in 2 on the basis of the t_s value in 1 compared with critical values of t in 3.

An example of such a test is the t test for the significance of a regression coefficient shown in step 2 of Box 11.4.

6.10 Testing the hypothesis H0: σ² = σ0²

The method of Box 6.4 can be used only if the statistic is normally distributed. In the case of the variance, this is not so. As we have seen, sums of squares divided by σ² follow the χ² distribution. Therefore, for testing the hypothesis that a sample variance is different from a parametric variance, we must employ the χ² distribution.

Let us use the biological preparation of the last section as an example. We were told that the standard deviation was 11.2 based on 10 samples. Therefore, the variance must have been 125.44. Suppose the government postulates that the variance of samples from the preparation should be no greater than 100.0. Is our sample variance significantly above 100.0? Remembering from Expression (6.8) that (n − 1)s²/σ² is distributed as χ²[n−1], we proceed as follows.

We first calculate

X² = (n − 1)s²/σ0² = (9)(125.44)/100 = 11.290

Note that we call the quantity X² rather than χ². This is done to emphasize that we are obtaining a sample statistic that we shall compare with the parametric distribution. Following the general outline of Box 6.4, we next establish our null and alternative hypotheses, which are H0: σ² = σ0² and H1: σ² > σ0²; that is, we are to perform a one-tailed test. The critical value of χ² is found next as χ²_α[ν], where α is the proportion of the χ² distribution to the right of the critical value and ν is the pertinent degrees of freedom. You see now why we used the symbol α for that portion of the area: it corresponds to the probability of a type I error. For ν = 9 degrees of freedom, we find in Table IV that χ²0.05[9] = 16.919. Our value of X² = 11.290 is less than this critical value. Since χ²0.10[9] = 14.684 and χ²0.50[9] = 8.343, we notice that the probability of getting a χ² as large as 11.290 is, assuming that the null hypothesis is true, lower than 0.50 but higher than 0.10. Thus X² is not significant at the 5% level; we have no basis for rejecting the null hypothesis, and we must conclude that the variance of the 10 samples of the biological preparation may be no greater than the standard permitted by the government.

If we had decided to test whether the variance is different from the standard, permitting it to deviate in either direction, the hypotheses for this two-tailed test would have been H0: σ² = σ0² and H1: σ² ≠ σ0², and a 5% type I error would have yielded the following critical values for the two-tailed test:

χ²0.975[9] = 2.700    χ²0.025[9] = 19.023

The values represent chi-squares at points cutting off 2½% rejection regions at each tail of the χ² distribution. A value of X² < 2.700 or > 19.023 would have been evidence that the sample variance did not belong to this population. Our value of X² = 11.290 would again have led to an acceptance of the null hypothesis.

In the next chapter we shall see that there is another significance test available to test the hypotheses about variances of the present section. This is the mathematically equivalent F test, which is, however, a more general test, allowing us to test the hypothesis that two sample variances come from populations with equal variances.
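The X² computation above is quickly checked by machine. Here is a short Python sketch (ours, not from the book), again working from the quoted summary statistics rather than the individual readings.

```python
# One-tailed test of H0: sigma^2 = 100 against H1: sigma^2 > 100,
# using the summary statistics of the biological preparation.
n = 10
s2 = 11.2 ** 2     # sample variance, 125.44
sigma0_sq = 100.0  # parametric variance postulated by the government

X2 = (n - 1) * s2 / sigma0_sq  # distributed as chi-square with n - 1 df
df = n - 1

print(round(X2, 2), df)  # 11.29 9
```

Because 11.29 falls short of χ²0.05[9] = 16.919 (which must still be looked up in Table IV), the null hypothesis stands, as concluded above.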

Exercises

6.1 Since it is possible to test a statistical hypothesis with any size sample, why are larger sample sizes preferred? ANS. When the null hypothesis is false, the probability of a type II error decreases as n increases.

6.2 Differentiate between type I and type II errors. What do we mean by the power of a statistical test?

6.3 Set 99% confidence limits to the mean, median, coefficient of variation, and variance for the birth weight data given in Box 3.2. ANS. The lower limits are 109.540, 109.060, 12.136, and 178.698, respectively.

6.4 The 95% confidence limits for μ as obtained in a given sample were 4.91 and 5.67 g. Is it correct to say that 95 times out of 100 the population mean, μ, falls inside the interval from 4.91 to 5.67 g? If not, what would the correct statement be?

6.5 In a study of mating calls in the tree toad Hyla ewingi, Littlejohn (1965) found the note duration of the call in a sample of 39 observations from Tasmania to have a mean of 189 msec and a standard deviation of 32 msec. Set 95% confidence intervals to the mean and to the variance. ANS. The 95% confidence limits for the mean are from 178.6 to 199.4. The 95% shortest unbiased limits for the variance are from 679.5 to 1646.6.

6.6 Set 95% confidence limits to the means listed in Table 6.2. Are these limits all correct? (That is, do they contain μ?)

6.7 In direct klinokinetic behavior relating to temperature, animals turn more often in the warm end of a gradient and less often in the colder end, the direction of turning being at random, however. In a computer simulation of such behavior, the following results were found. The mean position along a temperature gradient was found to be −1.352. (The gradient was marked off in units: zero corresponded to the middle of the gradient, the initial starting point of the animals; minus corresponded to the cold end, and plus corresponded to the warmer end.) The standard deviation was 12.267, and n equaled 500 individuals. Test the hypothesis that direct klinokinetic behavior did not result in a tendency toward aggregation in either the warmer or colder end; that is, test the hypothesis that μ, the mean position along the gradient, was zero.

6.8 In a study of bill measurements of the dusky flycatcher, Johnson (1966) found that the bill length for the males had a mean of 8.14 ± 0.021 and a coefficient of variation of 4.67%. On the basis of this information, infer how many specimens must have been used. ANS. Since V = 100s/Ȳ and s_Ȳ = s/√n, we have √n = VȲ/100s_Ȳ = 18.10. Thus n ≈ 328.

6.9 In Section 4.3 the coefficient of dispersion was given as an index of whether or not data agreed with a Poisson distribution. Since in a true Poisson distribution the mean μ equals the parametric variance σ², the coefficient of dispersion is analogous to Expression (6.8). Using the mite data from Table 4.5, test the hypothesis that the true variance is equal to the sample mean, in other words, that we have sampled from a Poisson distribution (in which the coefficient of dispersion should equal unity). Note that in these examples the chi-square table is not adequate, so that approximate critical values must be computed using the method given with Table IV. In Section 7.3 an alternative significance test that avoids this problem will be presented. ANS. X² = (n − 1) × CD = 1308.30, far greater than the approximate 5% critical value of χ² (about 645), so the hypothesis is rejected.

6.10 Using the method described in Exercise 6.9, test the agreement of the observed distribution with a Poisson distribution by testing the hypothesis that the true coefficient of dispersion equals unity for the data of Table 4.

6.11 In an experiment comparing yields of three new varieties of corn, the following results were obtained:

Variety     Ȳ       n
   1      22.86    20
   2      43.21    20
   3      38.56    20

To compare the three varieties the investigator computed a weighted mean of the three means, using the weights 2, −1, −1. Compute the weighted mean and its 95% confidence limits. ANS. Ȳw = −36.05; the 95% confidence limits are −47.555 to −24.545, and the weighted mean is significantly different from zero even at the P < 0.001 level.
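For readers who want to check their arithmetic, here is a short Python sketch (ours, not part of the book) of the point estimates called for in Exercises 6.7 and 6.11; the critical values and confidence limits must still be obtained from the tables.

```python
import math

# Exercise 6.7: test H0: mu = 0 for the mean position along the gradient.
n, ybar, s = 500, -1.352, 12.267
t_s = (ybar - 0.0) / (s / math.sqrt(n))  # Box 6.4 with St_p = 0
print(round(t_s, 2))  # -2.46

# Exercise 6.11: the weighted mean of the three variety means.
means = [22.86, 43.21, 38.56]
weights = [2, -1, -1]
y_w = sum(w * y for w, y in zip(weights, means))
print(round(y_w, 2))  # -36.05
```

Comparing |t_s| = 2.46 with the two-tailed critical values of t for 499 degrees of freedom settles Exercise 6.7; the confidence limits of Ȳw in Exercise 6.11 additionally require its standard error.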

CHAPTER 7

Introduction to Analysis of Variance

We now proceed to a study of the analysis of variance. This method, developed by R. A. Fisher, is fundamental to much of the application of statistics in biology and especially to experimental design. One use of the analysis of variance is to test whether two or more sample means have been obtained from populations with the same parametric mean. Where only two samples are involved, the t test can also be used. However, the analysis of variance is a more general test, which permits testing two samples as well as many, and we are therefore introducing it at this early stage in order to equip you with this powerful weapon for your statistical arsenal. We shall discuss the t test for two samples as a special case in Section 8.4.

In Section 7.1 we shall approach the subject on familiar ground, the sampling experiment of the housefly wing lengths. From these samples we shall obtain two independent estimates of the population variance. We digress in Section 7.2 to introduce yet another continuous distribution, the F distribution, needed for the significance test in analysis of variance. Section 7.3 is another digression; here we show how the F distribution can be used to test whether two samples may reasonably have been drawn from populations with the same variance. We are now ready for Section 7.4, in which we examine the effects of subjecting the samples to different treatments. In Section 7.5, we describe the partitioning of sums of squares and of degrees of freedom, the actual analysis of variance.

The last two sections (7.6 and 7.7) take up in a more formal way the two scientific models for which the analysis of variance is appropriate, the so-called fixed treatment effects model (Model I) and the variance component model (Model II). Except for Section 7.3, the entire chapter is largely theoretical. We shall postpone the practical details of computation to Chapter 8. However, a thorough understanding of the material in Chapter 7 is necessary for working out actual examples of analysis of variance in Chapter 8.

One final comment. In line with our policy of using the simplest possible notation, we shall use J. W. Tukey's acronym "anova" interchangeably with "analysis of variance" throughout the text.

7.1 The variances of samples and their means

We shall approach analysis of variance through the familiar sampling experiment of housefly wing lengths (Experiment 5.1 and Table 5.1), in which we combined seven samples of 5 wing lengths to form samples of 35. We have reproduced one such sample in Table 7.1. The seven samples of 5, here called groups, are listed vertically in the upper half of the table.

Before we proceed to explain Table 7.1 further, we must become familiar with added terminology and symbolism for dealing with this kind of problem. We call our samples groups; they are sometimes called classes or are known by yet other terms we shall learn later. In any analysis of variance we shall have two or more such samples or groups, and we shall use the symbol a for the number of groups. Thus, in the present example a = 7. Each group or sample is based on n items, as before; in Table 7.1, n = 5. The total number of items in the table is a times n, which in this case equals 7 × 5 or 35.

In an anova, summation signs can no longer be as simple as heretofore. We can sum either the items of one group only or the items of the entire table. We therefore have to use superscripts with the summation symbol. Thus, whenever this is not likely to lead to misunderstanding, we shall use Σⁿ Y to indicate the sum of the items of a group and Σᵃⁿ Y to indicate the sum of all the items in the table.

The sums of the items in the respective groups are shown in the first row underneath the horizontal dividing line. The mean of each group, symbolized by Ȳ, is in the next row and is computed simply as Σⁿ Y/n. The remaining two rows in that portion of Table 7.1 list Σⁿ Y² and Σⁿ y², the sum of the squared Y's and the sum of squares of Y, separately for each group. These are the familiar quantities. From the sum of squares for each group we can obtain an estimate of the population variance of housefly wing length.

TABLE 7.1 Seven samples (groups) of n = 5 housefly wing lengths, with the group sums Σⁿ Y, means Ȳ, Σⁿ Y², and sums of squares Σⁿ y² listed below each group, and, at the lower right, the computation of the sum of squares of the means.

Thus, in the first group Σⁿ y² = 29.2, and our estimate of the population variance is s² = 29.2/4 = 7.3, a rather low estimate compared with those obtained in the other samples.

Since we have a sum of squares for each group, we could obtain an estimate of the population variance from each of these. However, it stands to reason that we would get a better estimate if we averaged these separate variance estimates in some way. This is done by computing the weighted average of the variances by Expression (3.2) in Section 3.1. Actually, in this instance a simple average would suffice, since all estimates of the variance are based on samples of the same size. However, we prefer to give the general formula, which works equally well for this case as well as for instances of unequal sample sizes, where the weighted average is necessary. In this case each sample variance sᵢ² is weighted by its degrees of freedom, wᵢ = nᵢ − 1, resulting in a sum of squares (Σ yᵢ²), since (nᵢ − 1)sᵢ² = Σ yᵢ². Thus, the numerator of Expression (3.2) is the sum of the sums of squares, 448.8, and the denominator is Σᵃ(nᵢ − 1) = 7 × 4 = 28, the sum of the degrees of freedom of each group. The average variance, therefore, is s̄² = 448.8/28 = 16.029. This quantity is an estimate of 15.21, the parametric variance of housefly wing lengths. This estimate, based on 7 independent estimates of variances of groups, is called the average variance within groups or simply variance within groups. Note that we use the expression within groups, although in previous chapters we used the term variance of groups. The reason we do this is that the variance estimates used for computing the average variance have so far all come from sums of squares measuring the variation within one column. As we shall see in what follows, one can also compute variances among groups, cutting across group boundaries.

To obtain a second estimate of the population variance, we treat the seven group means Ȳ as though they were a sample of seven observations. The resulting statistics are shown in the lower right part of Table 7.1, headed "Computation of sum of squares of means." There are seven means in this example; in the general case there will be a means. We first compute Σᵃ Ȳ, the sum of the means. Note that this is rather sloppy symbolism. To be entirely proper, we should identify this quantity as Σᵢ₌₁ᵃ Ȳᵢ, summing the means of group 1 through group a. The sum of the seven means is Σᵃ Ȳ = 317.4. The next quantity computed is Ȳ̄, the grand mean of the group means, computed as Ȳ̄ = Σᵃ Ȳ/a, which equals 45.34, a fairly close approximation to the parametric mean μ = 45.5. The sum of squares represents the deviations of the group means from the grand mean, Σᵃ(Ȳ − Ȳ̄)². For this we first need the quantity Σᵃ Ȳ², the sum of the squared means, which equals 14,417.24. The customary computational formula for the sum of squares applied to these means is Σᵃ Ȳ² − [(Σᵃ Ȳ)²/a] = 25.417. From the sum of squares of the means we obtain a variance among the means in the conventional way, as follows: s_Ȳ² = Σᵃ(Ȳ − Ȳ̄)²/(a − 1). We divide by a − 1 rather than n − 1 because the sum of squares was based on a items (means).

Thus, the variance of the means is s_Ȳ² = 25.417/6 = 4.2362. We learned in Chapter 6, Expression (6.1), that when we randomly sample from a single population,

σ_Ȳ² = σ²/n    and hence    σ² = nσ_Ȳ²

Thus, we can estimate a variance of items by multiplying the variance of means by the sample size on which the means are based (assuming we have sampled at random from a common population). When we do this for our present example, we obtain s² = 5 × 4.2362 = 21.181. This is a second estimate of the parametric variance 15.21. It is not as close to the true value as the previous estimate based on the average variance within groups, but this is to be expected, since it is based on only 7 "observations." We need a name describing this variance to distinguish it from the variance of means from which it has been computed, as well as from the variance within groups with which it will be compared. We shall call it the variance among groups; it is n times the variance of means and is an independent estimate of the parametric variance σ² of the housefly wing lengths. It may not be clear at this stage why the two estimates of σ² that we have obtained, the variance within groups and the variance among groups, are independent. We ask you to take on faith that they are.

Let us review what we have done so far by expressing it in a more formal way. Table 7.2 represents a generalized table for data such as the samples of housefly wing lengths. Each individual wing length is represented by Y, subscripted to indicate the position of the quantity in the data table. The wing length of the jth fly from the ith sample or group is given by Yᵢⱼ.

TABLE 7.2 Data arranged for simple analysis of variance, single classification, completely randomized. (a groups of n items Yᵢⱼ, with group sums Σ Y and means Ȳ.)
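The two estimates developed above can be reproduced for any data set with a few lines of code. The Python sketch below (ours, not the book's) uses made-up groups of wing lengths, since the actual values of Table 7.1 are not reproduced here; the helper name is our own.

```python
def sum_of_squares(items):
    """Sum of squared deviations from the group mean."""
    m = sum(items) / len(items)
    return sum((y - m) ** 2 for y in items)

# Hypothetical data: a = 3 groups of n = 5 wing lengths each.
groups = [
    [41, 44, 48, 43, 42],
    [48, 49, 49, 49, 45],
    [40, 50, 44, 48, 50],
]
a, n = len(groups), len(groups[0])

# Average variance within groups: pooled sums of squares over pooled df.
s2_within = sum(sum_of_squares(g) for g in groups) / (a * (n - 1))

# Variance among groups: n times the variance of the group means.
means = [sum(g) / n for g in groups]
s2_among = n * sum_of_squares(means) / (a - 1)

print(s2_within, s2_among)
```

With equal sample sizes, the pooled estimate equals the simple average of the group variances, which is why the text notes that a simple average would suffice here.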

Thus, you will notice that the first subscript changes with each column representing a group in the table, and the second subscript changes with each row representing an individual item. Using this notation, we can compute the variance of sample 1 as

s₁² = [1/(n − 1)] Σⱼ₌₁ⁿ (Y₁ⱼ − Ȳ₁)²

The variance within groups, which is the average variance of the samples, is computed as

s̄² = [1/(a(n − 1))] Σᵢ₌₁ᵃ Σⱼ₌₁ⁿ (Yᵢⱼ − Ȳᵢ)²

Note the double summation. It means that we start with the first group, setting i = 1 (i being the index of the outer Σ). We sum the squared deviations of all items from the mean of the first group, changing index j of the inner Σ from 1 to n in the process. We then return to the outer summation, set i = 2, and sum the squared deviations for group 2 from j = 1 to j = n. This process is continued until i, the index of the outer Σ, is set to a. In other words, we sum all the squared deviations within one group first and add this sum to similar sums from all the other groups. The variance among groups is computed as

s² = [n/(a − 1)] Σᵢ₌₁ᵃ (Ȳᵢ − Ȳ̄)²

Now that we have two independent estimates of the population variance, what shall we do with them? We might wish to find out whether they do in fact estimate the same parameter. To test this hypothesis, we need a statistical test that will evaluate the probability that the two sample variances are from the same population. Such a test employs the F distribution, which is taken up next.

7.2 The F distribution

Let us devise yet another sampling experiment. This is quite a tedious one without the use of computers, so we will not ask you to carry it out. Assume that you are sampling at random from a normally distributed population, such as the housefly wing lengths with mean μ and variance σ². The sampling procedure consists of first sampling n₁ items and calculating their variance s₁², followed by sampling n₂ items and calculating their variance s₂². Sample sizes n₁ and n₂ may or may not be equal to each other, but are fixed for any one sampling experiment. Thus, for example, we might always sample 8 wing lengths for the first sample (n₁) and 6 wing lengths for the second sample (n₂). After each pair of variances (s₁² and s₂²) has been obtained, we calculate

Fₛ = s₁²/s₂²

This will be a ratio near 1, because these variances are estimates of the same quantity. Its actual value will depend on the relative magnitudes of variances s₁² and s₂².
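The sampling experiment just described is tedious by hand but trivial for a computer. Here is a small simulation in Python (ours, not the book's), using the housefly parameters μ = 45.5 and σ² = 15.21 quoted in the text; the number of replicates is an arbitrary choice.

```python
import random
import statistics

random.seed(1)                   # reproducible run
mu, sigma = 45.5, 15.21 ** 0.5   # housefly wing-length parameters
n1, n2 = 8, 6                    # fixed sample sizes for the experiment

ratios = []
for _ in range(10_000):
    s1_sq = statistics.variance([random.gauss(mu, sigma) for _ in range(n1)])
    s2_sq = statistics.variance([random.gauss(mu, sigma) for _ in range(n2)])
    ratios.append(s1_sq / s2_sq)

# The mean ratio should approach (n2 - 1)/(n2 - 3) = 5/3 here,
# close to 1 only when n2 is large.
print(sum(ratios) / len(ratios))
```

A histogram of the ratios would trace out the F distribution with n₁ − 1 = 7 and n₂ − 1 = 5 degrees of freedom.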

For every possible combination of values ν₁ and ν₂, each ν ranging from 1 to infinity, there exists a separate F distribution. The distribution of this statistic is called the F distribution, in honor of R. A. Fisher. This is another distribution described by a complicated mathematical function that need not concern us here. Unlike the t and χ² distributions, the shape of the F distribution is determined by two values for degrees of freedom, ν₁ and ν₂ (corresponding to the degrees of freedom of the variance in the numerator and the variance in the denominator, respectively). Figure 7.1 shows several representative F distributions. For very low degrees of freedom the distribution is L-shaped, but it becomes humped and strongly skewed to the right as both degrees of freedom increase. If we averaged a great many such variance ratios, the average would in fact approach the quantity (n₂ − 1)/(n₂ − 3), which is close to 1.0 when n₂ is large.

Remember that the F distribution is a theoretical probability distribution, like the t distribution and the χ² distribution. Variance ratios s₁²/s₂² based on sample variances are sample statistics that may or may not follow the F distribution. We have therefore distinguished the sample variance ratio by calling it Fₛ, conforming to our convention of separate symbols for sample statistics as distinct from probability distributions (such as tₛ and X² contrasted with t and χ²).

We have discussed how to generate an F distribution by repeatedly taking two samples from the same normal distribution. We could also have generated it by sampling from two separate normal distributions differing in their means but identical in their parametric variances, that is, with μ₁ ≠ μ₂ but σ₁² = σ₂². Thus, we obtain an F distribution whether the samples come from the same normal population or from different ones, so long as their variances are identical. Table V in Appendix A2 shows the cumulative probability distribution of F for three selected probability values.

The values in the table represent F_α[ν₁,ν₂], where α is the proportion of the F distribution to the right of the given F value (in one tail) and ν₁, ν₂ are the degrees of freedom pertaining to the variances in the numerator and the denominator of the ratio, respectively. The table is arranged so that across the top one reads ν₁, the degrees of freedom pertaining to the upper (numerator) variance, and along the left margin one reads ν₂, the degrees of freedom pertaining to the lower (denominator) variance. At each intersection of degree of freedom values we list three values of F, decreasing in magnitude of α. For example, an F distribution with ν₁ = 6, ν₂ = 24 is 2.51 at α = 0.05. By that we mean that 0.05 of the area under the curve lies to the right of F = 2.51. Only 0.01 of the area under the curve lies to the right of F = 3.67. Figure 7.2 illustrates this.

FIGURE 7.2 Frequency curve of the F distribution for 6 and 24 degrees of freedom. Vertical line cuts off the 5% rejection region from one tail of the distribution.

We can now test the two variances obtained in the sampling experiment of Section 7.1 and Table 7.1. The variance among groups based on 7 means was 21.181, and the variance within 7 groups of 5 individuals was 16.029. Our null hypothesis is that the two variances estimate the same parametric variance. Thus, if we have a null hypothesis H₀: σ₁² = σ₂², with the alternative hypothesis H₁: σ₁² > σ₂², we use a one-tailed F test, as illustrated by Figure 7.2. The alternative hypothesis in an anova is always that the parametric variance estimated by the variance among groups is greater than that estimated by the variance within groups. The reason for this restrictive alternative hypothesis, which leads to a one-tailed test, will be explained in Section 7.4. We calculate the variance ratio Fₛ = s₁²/s₂² = 21.181/16.029 = 1.32. Before we can inspect the F table, we have to know the appropriate degrees of freedom for this variance ratio.

Dividing this value by 10 dj\ we obtain 1. and in such cases a 5% type I error means that rejection regions of 2 j % will occur at each tail of the curve. of course.05. Thus.32. There is an i m p o r t a n t relationship between the F distribution and the χ2 distribution.221.ν 2 /σ 2 . the estimate using the variance of their means is expected to yield another estimate of the parametric variance of housefly wing length. = . each of t h e m based on 5 individuals yielding 4 degrees of freedom per variance: a(n — 1) = 7 χ 4 = 28 degrees of freedom. we have to k n o w the a p p r o p r i a t e degrees of freedom for this variance ratio. the upper variance has 6. The upper degrees of freedom arc η — I (the degrees of freedom of the sum of squares or sample variance).. Thus. 5[ΐοι ^ 18.307. We shall learn simple formulas for degrees of freedom in an a n o v a later. respectively.1 / t h e F d i s t r i b u t i o n 141 F table. T h u s 95% of an F distribution with 5 and 24 degrees of freedom lies to the right of 0.53. Vl] For example. If we check Table V for ν 1 = 6 . = 2.32. we obtain an Fs value with η .62. α is clearly >0. T h e u p p e r variance (among groups) was based on the variance of 7 means. to what we knew anyway f r o m o u r sampling experiment. If you divide the n u m e r a t o r of this expression by n — 1. Since these values arc rarely tabulated. hence it should have α — 1 = 6 degrees of freedom.. Whenever the alternative hypothesis is that the two parametric variances are unequal (rather than the restrictive hypothesis Η { . because only on the basis of an infinite n u m b e r of items can we obtain the true.51. the sample variance s j can be smaller as well as greater than s2. χ2^\!ν ~ *]· Wc can convince ourselves of this by inspecting the F and χ2 tables. Therefore. Then F0 4515 241 is the reciprocal of 4. but at the m o m e n t let us reason it out for ourselves. . 
You may remember that the ratio X2 = Σ\>2/σ2 was distributed as a χ2 with η — I degrees of freedom. we first have to find F(1 0 5 1 2 4 = 4. . F r o m the χ2 tabic (Table IV) we find that χ 2. in the left half of the F distribution). respectively).. This leads to a two-tailed test.8307. v 2 = 24. the closest a r g u m e n t s in the table.. σ \ > σ 2 ). we find that F0 0 5 [ 6 24] = 2. F(1 „ 5 ( 5 2 4 . This corresponds. the lower variance 28 degrees of freedom. to have Fs values greater t h a n 1. we may expect m o r e t h a n 5% of all variance ratios of samples based on 6 and 28 degrees of freedom. you obtain the ratio F. by dividing a value of X 2 by η — 1 degrees of freedom. which is a variance ratio with an expected distribution of F. Since the seven samples were taken from the same population. T h e lower degrees of freedom are infinite.1 and co d f . corresponding to the Fs value actually obtained. which equals 0. We have no evidence to reject the null hypothesis and conclude that the two sample variances estimate the same parametric variance. T h e lower variance was based on an average of 7 variances.5 (that is.221. If we wish to obtain F 0 4 5 [ 5 2 4 1 (the F value to the right of which lies 95% of the area of the F distribution with 5 and 24 degrees of freedom. In general. parametric variance of a population.7. F o r F = 1. they can be obtained by using the simple relationship ' I I K)[V2.. respectively.53. In such a case it is necessary to obtain F values for ot > 0.
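The tabled values and relationships above can be checked with any statistical library. The sketch below uses scipy.stats (an assumption on our part; any F and χ² quantile functions would do), with a very large denominator df standing in for infinity:

```python
from scipy.stats import chi2, f

# Right-tail critical values, matching Table V:
print(round(f.ppf(0.95, 6, 24), 2))          # about 2.51
print(round(f.ppf(0.99, 6, 24), 2))          # about 3.67

# Left-tail values via the reciprocal relationship
# F(1-a)[v1,v2] = 1 / Fa[v2,v1]:
left = f.ppf(0.05, 5, 24)                    # F0.95[5,24], about 0.221
assert abs(left - 1 / f.ppf(0.95, 24, 5)) < 1e-6

# chi2[v]/v equals F[v, infinity]; 1e9 denominator df stands in for it:
print(round(chi2.ppf(0.95, 10) / 10, 4))     # 1.8307
assert abs(chi2.ppf(0.95, 10) / 10 - f.ppf(0.95, 10, 1e9)) < 1e-4
```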

Before we return to analysis of variance, we shall first apply our newly won knowledge of the F distribution to testing a hypothesis about two sample variances. This test is of interest in its own right.

7.3 The hypothesis H0: σ1² = σ2²

A test of the null hypothesis that two normal populations represented by two samples have the same variance is illustrated in Box 7.1.

BOX 7.1
Testing the significance of differences between two variances.
Survival in days of the cockroach Blattella vaga when kept without food or water.

              Females      Males
n             10           10
Ȳ             8.5 days     4.8 days
s²            3.6          0.9

Source: Data modified from Willis and Lewis (1957).

H0: σ1² = σ2²        H1: σ1² ≠ σ2²

We have no reason to suppose that one sex should be more variable than the other; the alternative hypothesis is that the two variances are unequal, so this is a two-tailed test. Since only the right tail of the F distribution is tabled extensively in Table V and in most other tables, we calculate Fs as the ratio of the greater variance over the lesser one:

    Fs = s1²/s2² = 3.6/0.9 = 4.00

Because the test is two-tailed, we look up the critical value Fα/2[ν1,ν2], where α is the type I error accepted and ν1 = n1 − 1 and ν2 = n2 − 1 are the degrees of freedom for the upper and lower variance, respectively. Whether we look up Fα/2[ν1,ν2] or Fα/2[ν2,ν1] depends on whether sample 1 or sample 2 has the greater variance and has been placed in the numerator. From Table V we find F0.025[9,9] = 4.03 and F0.05[9,9] = 3.18. Because this is a two-tailed test, we double these probabilities. Thus, the F value of 4.03 represents a probability of α = 0.05, since the right-hand tail area of α = 0.025 is matched by a similar left-hand area to the left of F0.975[9,9] = 1/F0.025[9,9] = 0.248. Therefore, assuming the null hypothesis is true, the probability of observing an F value greater than 4.00 and smaller than 1/4.00 = 0.25 is 0.10 > P > 0.05. Strictly speaking, the two sample variances are not significantly different; the two sexes are equally variable in their duration of survival. However, the outcome is close enough to the 5% significance level to make us suspicious that possibly the variances are in fact different. It would be desirable to repeat this experiment with larger sample sizes in the hope that more decisive results would emerge.
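The two-tailed test of Box 7.1 can be sketched in code. This is our own illustration (scipy.stats is assumed for the F tail area); it reproduces the variance ratio and the doubled tail probability:

```python
from scipy.stats import f

s2_females, s2_males = 3.6, 0.9     # Box 7.1 variances
n1 = n2 = 10

Fs = s2_females / s2_males          # greater variance over the lesser: 4.00
df1, df2 = n1 - 1, n2 - 1

# Double the right-tail area because the test is two-tailed
P = 2 * f.sf(Fs, df1, df2)
print(round(Fs, 2), round(P, 3))
assert 0.05 < P < 0.10              # matches the box's 0.10 > P > 0.05
```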

We will repeatedly have to test whether two samples have the same variance. In systematics we might like to find out whether two local populations are equally variable. In experimental biology we may wish to demonstrate under which of two experimental setups the readings will be more variable. In general, the less variable setup would be preferred; if both setups were equally variable, the experimenter would pursue the one that was simpler or less costly to undertake. In genetics we may need to know whether an offspring generation is more variable for a character than the parent generation. Also, as will be seen later, some tests leading to a decision about whether two samples come from populations with the same mean assume that the population variances are equal.

7.4 Heterogeneity among sample means

We shall now modify the data of Table 7.1, discussed in Section 7.1. Suppose the seven groups of houseflies did not represent random samples from the same population but resulted from the following experiment. Each sample was reared in a separate culture jar, and the medium in each of the culture jars was prepared in a different way. Some had more water added, others more sugar, yet others more solid matter. Let us assume that sample 7 represents the standard medium against which we propose to compare the other samples. The various changes in the medium affect the sizes of the flies that emerge from it; this in turn affects the wing lengths we have been measuring.

We shall assume the following effects resulting from treatment of the medium:

Medium 1: decreases average wing length of a sample by 5 units
Medium 2: decreases average wing length of a sample by 2 units
Medium 3: does not change average wing length of a sample
Medium 4: increases average wing length of a sample by 1 unit
Medium 5: increases average wing length of a sample by 1 unit
Medium 6: increases average wing length of a sample by 5 units
Medium 7: (control) does not change average wing length of a sample

The effect of treatment i is usually symbolized as αi. (Please note that this use of α is not related to its use as a symbol for the probability of a type I error.) Thus, αi assumes the following values for the above treatment effects:

    α1 = −5    α2 = −2    α3 = 0    α4 = 1    α5 = 1    α6 = 5    α7 = 0
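Two properties of these treatment effects can be checked in a couple of lines (our own aside, not part of the text): they cancel out, and their spread around their zero mean is a variance-like quantity that will reappear later in the chapter.

```python
alphas = [-5, -2, 0, 1, 1, 5, 0]      # treatment effects for media 1-7
assert sum(alphas) == 0               # the effects cancel out

# Variance-like spread of the effects around their (zero) mean:
spread = sum(a * a for a in alphas) / (len(alphas) - 1)
print(round(spread, 3))               # 56/6 = 9.333
```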

TABLE 7.3
Data of Table 7.1 with the treatment effects αi added to each sample (arranged identically to Table 7.1).

Note that the αi's have been defined so that Σαi = 0; that is, the effects cancel out. This is a convenient property that is generally postulated, but it is unnecessary for our argument. We can now modify Table 7.1 by adding the appropriate values of αi to each sample. In sample 1 the value of α1 is −5; thus, the first wing length, which was 41 (see Table 7.1), now becomes 36, and the second wing length, formerly 44, becomes 39, and so on. For the second sample, α2 = −2, changing the first wing length from 48 to 46. Where αi = 0, the wing lengths do not change; where αi is positive, they are increased by the magnitude indicated. The changed values can be inspected in Table 7.3, which is arranged identically to Table 7.1.

We now repeat our previous computations. We first calculate the sum of squares of the first sample; if you compare this value with the sum of squares of the first sample in Table 7.1, you find the two values to be identical. Similarly, all other values of Σy², the sums of squares of each group, are identical to their previous values. Why is this so? The effect of adding αi to each group is simply that of an additive code, since αi is constant for any one group. From Appendix A1.2 we can see that additive codes do not affect sums of squares or variances. Therefore, not only is each separate sum of squares the same as before, but the average variance within groups is still 16.029.

Now let us compute the variance of the means. It is 100.617/6 = 16.770, a value much higher than the variance of means found before, 4.236. When we multiply by n = 5 to get an estimate of σ², we obtain the variance of groups, which now is 83.848 and is no longer even close to an estimate of σ². We repeat the F test with the new variances and find that Fs = 83.848/16.029 = 5.23, which is much greater than the closest critical value of F0.05[6,24] = 2.51; in fact, the observed Fs is greater than F0.01[6,24] = 3.67. The two variances are most unlikely to represent the same parametric variance: the upper variance, representing the variance among groups, has become significantly larger.

What has happened? We can easily explain it by means of Table 7.4, which represents Table 7.3 symbolically in the manner that Table 7.2 represented Table 7.1. We note that each group has a constant αi added and that this constant changes the sums of the groups by nαi and the means of these groups by αi. In Section 7.1 we computed the variance within groups as

    s² = ΣᵃΣⁿ(Y − Ȳ)² / [a(n − 1)]

When we try to repeat this, our formula becomes more complicated, because to each Yij and each Ȳi there has now been added αi. We therefore write

    s² = ΣᵃΣⁿ[(Yij + αi) − (Ȳi + αi)]² / [a(n − 1)]

Then αi changes sign and the αi's cancel out, leaving the expression exactly as before, substantiating our earlier observation that the variance within groups does not change despite the treatment effects.

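The additive-code property can be verified numerically. The sketch below is our own illustration (the wing lengths are simulated, not the book's data): fixed treatment effects are added to seven samples, the within-group sums of squares are unchanged, and the variance of the group means is inflated.

```python
import random
import statistics

random.seed(2)
alphas = [-5, -2, 0, 1, 1, 5, 0]   # the chapter's treatment effects
groups = [[random.gauss(45.0, 4.0) for _ in range(5)] for _ in alphas]
treated = [[y + a for y in g] for g, a in zip(groups, alphas)]

def ss(group):
    """Sum of squared deviations from the group mean."""
    m = statistics.mean(group)
    return sum((y - m) ** 2 for y in group)

# An additive code leaves every within-group sum of squares unchanged
for g, t in zip(groups, treated):
    assert abs(ss(g) - ss(t)) < 1e-9

# ...but the variance of the group means is typically much inflated
before = statistics.variance([statistics.mean(g) for g in groups])
after = statistics.variance([statistics.mean(t) for t in treated])
print(round(before, 2), round(after, 2))
```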
TABLE 7.4
Data of Table 7.3 arranged in the manner of Table 7.2. [Each item of group i is Yij + αi; the sum of group i is ΣⁿY + nαi; its mean is Ȳi + αi.]

The variance of means was previously calculated by the formula

    sȲ² = Σᵃ(Ȳ − Ȳ̄)² / (a − 1)

However, from Table 7.4 we see that the new grand mean equals

    (1/a) Σᵃ(Ȳi + αi) = (1/a) ΣᵃȲi + (1/a) Σᵃαi = Ȳ̄ + ᾱ

When we substitute the new values for the group means and the grand mean, the formula appears as

    Σᵃ[(Ȳi + αi) − (Ȳ̄ + ᾱ)]² / (a − 1)

Then we open the parentheses inside the square brackets, which in turn yields

    Σᵃ[(Ȳi − Ȳ̄) + (αi − ᾱ)]² / (a − 1)

Squaring the expression in the square brackets, we obtain the terms

    Σᵃ(Ȳi − Ȳ̄)²/(a − 1) + Σᵃ(αi − ᾱ)²/(a − 1) + 2 Σᵃ(Ȳi − Ȳ̄)(αi − ᾱ)/(a − 1)

The first of these terms we immediately recognize as the previous variance of the means, sȲ². The second is a new quantity, but is familiar by general appearance; it clearly is a variance, or at least a quantity akin to a variance. The third expression is a new type, which we have not yet encountered; it is a so-called covariance. We shall not be concerned with it at this stage except to say that

in cases such as the present one, where the magnitude of the treatment effects αi is assumed to be independent of the Ȳi to which they are added, the expected value of this quantity is zero; hence it does not contribute to the new variance of means.

The independence of the treatment effects and the sample means is an important concept that we must understand clearly. If we had not applied different treatments to the medium jars, but simply treated all jars as controls, we would still have obtained differences among the wing length means. Those are the differences found in Table 7.1 with random sampling from the same population. By chance, some of these means are greater, some are smaller. In our planning of the experiment we had no way of predicting which sample means would be small and which would be large. Therefore, in planning our treatments, we had no way of matching up a large treatment effect, such as that of medium 6, with the mean that by chance would be the greatest, as that for sample 2. Also, the smallest sample mean (sample 4) is not associated with the smallest treatment effect. Only if the magnitude of the treatment effects were deliberately correlated with the sample means (this would be difficult to do in the experiment designed here) would the third term in the expression, the covariance, have an expected value other than zero.

The second term in the expression for the new variance of means is clearly added as a result of the treatment effects. It is analogous to a variance, but it cannot be called a variance, since it is not based on a random variable but rather on deliberately chosen treatments largely under our control. We shall therefore call it the added component due to treatment effects. Since the αi's are arranged so that ᾱ = 0, we can rewrite the middle term as

    Σᵃα² / (a − 1)

By changing the magnitude and nature of the treatments, we can more or less alter this variance-like quantity at will.

In analysis of variance we multiply the variance of the means by n in order to estimate the parametric variance of the items. As you know, we call the quantity so obtained the variance of groups. When we do this for the case in which treatment effects are present, we obtain an estimate of

    σ² + n Σᵃα² / (a − 1)

Thus we see that the estimate of the parametric variance of the population is increased by the quantity n Σα²/(a − 1), which is n times the added component due to treatment effects. We found the variance ratio Fs to be significantly greater than could be reconciled with the null hypothesis. It is now obvious why this is so. We were testing the variance

ratio, expecting to find F approximately equal to σ²/σ² = 1. In fact, however, we have the ratio

    [σ² + n Σᵃα²/(a − 1)] / σ²

It is clear from this formula (deliberately displayed in this lopsided manner) that the F test is sensitive to the presence of the added component due to treatment effects.

At this point, you have an additional insight into the analysis of variance. It permits us to test whether a group of means can simply be considered random samples from the same population, or whether treatments that have affected each group separately have resulted in shifting these means so much that they can no longer be considered samples from the same population. If the latter is so, an added component due to treatment effects will be present and may be detected by an F test in the significance test of the analysis of variance. In such a study, we are generally not interested in the magnitude of n Σα²/(a − 1), but we are interested in the magnitude of the separate values of αi. In our example these are the effects of different formulations of the medium on wing length. If, instead of housefly wing length, we were measuring blood pressure in samples of rats and the different groups had been subjected to different drugs or different doses of the same drug, the quantities αi would represent the effects of drugs on the blood pressure, which is clearly the issue of interest to the investigator. We may also be interested in studying differences of the type α1 − α2, leading us to the question of the significance of the differences between the effects of any two types of medium or any two drugs.

When analysis of variance involves treatment effects of the type just studied, we call it a Model I anova. But we are a little ahead of our story; later in this chapter (Section 7.6), Model I will be defined precisely. There is another model, called a Model II anova, in which the added effects for each group are not fixed treatments but are random effects. By this we mean that we have not deliberately planned or fixed the treatment for any one group, but that the actual effects on each group are random and only partly under our control. Suppose that the seven samples of houseflies in Table 7.3 represented the offspring of seven randomly selected females from a population reared on a uniform medium. There would be genetic differences among these females, and their seven broods would reflect this. The exact nature of these differences is unclear and unpredictable. Before actually measuring them, we have no way of knowing whether brood 1 will have longer wings than brood 2, nor have we any way of controlling this experiment so that brood 1 will in fact grow longer wings. So far as we can ascertain, the genetic factors

for wing length are distributed in an unknown manner in the population of houseflies (we might hope that they are normally distributed), and our sample of seven is a random sample of these factors.

In another example for a Model II anova, suppose that instead of making up our seven cultures from a single batch of medium, we have prepared seven batches separately, one right after the other, and are now analyzing the variation among the batches. We would not be interested in the exact differences from batch to batch; even if these were measured, we would not be in a position to interpret them. Not having deliberately varied batch 3, we have no idea why, for example, it should produce longer wings than batch 2. We would, however, be interested in the magnitude of the variance of the added effects. Thus, if we used seven jars of medium derived from one batch, we could expect the variance of the jar means to be σ²/5, since there were 5 flies per jar. But when the jars are based on different batches of medium, the variance could be expected to be greater, because all the imponderable accidents of formulation and environmental differences during medium preparation that make one batch of medium different from another would come into play. Interest would focus on the added variance component arising from differences among batches. Similarly, in the other example we would be interested in the added variance component arising from genetic differences among the females.

We shall now take a rapid look at the algebraic formulation of the anova in the case of Model II. In Table 7.3 the second row at the head of the data columns shows not only αi, but also Ai, which is the symbol we shall use for a random group effect. We use a capital letter to indicate that the effect is a variable. The algebra of calculating the two estimates of the population variance is the same as in Model I, except that in place of αi we imagine Ai substituted in Table 7.4. The estimate of the variance among means now represents the quantity

    Σᵃ(Ȳ − Ȳ̄)²/(a − 1) + ΣᵃA²/(a − 1) + 2 Σᵃ(Ȳ − Ȳ̄)A/(a − 1)

again displayed lopsidedly. The first term is the variance of means, sȲ². The middle term is a true variance, since Ai is a random variable; we symbolize it by sA² and call it the added variance component among groups. It would represent the added variance component among females or among medium batches. The last term is the covariance between the group means and the random effects Ai, the expected value of which is zero (as before), because the random effects are independent of the magnitude of the means.

The existence of this added variance component is demonstrated by the F test. If the groups are random samples, we may expect F to approximate σ²/σ² = 1; but with an added variance component, the expected ratio, again displayed lopsidedly, is

    (σ² + nσA²) / σ²
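A Model II situation can be sketched by simulation (our own illustration; parameters and seed are arbitrary): each group receives a random effect Ai drawn from a normal distribution with variance σA², and the among-group mean square is inflated by about nσA², so that (MS among − MS within)/n recovers the added variance component.

```python
import random
import statistics

random.seed(3)
a, n = 2000, 5                    # many groups, to steady the estimates
sigma2, sigma2_A = 16.0, 9.0      # within-group variance and added component

groups = []
for _ in range(a):
    A_i = random.gauss(0.0, sigma2_A ** 0.5)      # random group effect
    groups.append([45.0 + A_i + random.gauss(0.0, sigma2 ** 0.5)
                   for _ in range(n)])

ms_within = statistics.mean(statistics.variance(g) for g in groups)
ms_among = n * statistics.variance([statistics.mean(g) for g in groups])

# Estimate of the added variance component: (MS among - MS within)/n
s2_A = (ms_among - ms_within) / n
print(round(ms_within, 1), round(s2_A, 1))
```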

Note that σA² is multiplied by n, since we have to multiply the variance of means by n to obtain an independent estimate of the variance of the population. In a Model II anova we are interested not in the magnitude of any Ai or in differences such as A1 − A2, but in the magnitude of σA² and its relative magnitude with respect to σ², which is generally expressed as the percentage 100 sA²/(s² + sA²). Since the variance among groups estimates σ² + nσA², we can calculate sA² as

    (1/n)(variance among groups − variance within groups)
        = (1/n)[(s² + nsA²) − s²] = (1/n)(nsA²) = sA²

For the present example,

    sA² = (1/5)(83.848 − 16.029) = 13.56

The added variance component among groups is therefore

    100 × 13.56/(16.029 + 13.56) = 1356/29.589 = 45.8%

of the sum of the variances among and within groups. Model II will be formally discussed at the end of this chapter (Section 7.7); the methods of estimating variance components are treated in detail in the next chapter.

7.5 Partitioning the total sum of squares and degrees of freedom

So far we have ignored one other variance that can be computed from the data in Table 7.1. If we remove the classification into groups, we can consider the housefly data to be a single sample of an = 35 wing lengths and calculate the mean and variance of these items in the conventional manner. The various quantities necessary for this computation are shown in the last column at the right in Tables 7.1 and 7.3, headed "Computation of total sum of squares." We obtain a mean of Ȳ = 45.34 for the sample in Table 7.1, which is, of course, the same as the quantity Ȳ̄ computed previously from the seven group means. The sum of squares of the 35 items is 575.886, which gives a variance of 16.938 when divided by 34 degrees of freedom. Repeating these computations for the data in Table 7.3, we obtain Ȳ = 45.34 (the same as in Table 7.1 because Σαi = 0) and s² = 27.997, which is considerably greater than the corresponding variance from Table 7.1. The total variance computed from all an items is another estimate of σ². It is a good estimate in the first case, but in the second sample (Table 7.3), where added components due to treatment effects or added variance components are present, it is a poor estimate of the population variance. However, the purpose of calculating the total variance in an anova is not for using it as yet another estimate of σ², but for introducing an important mathematical relationship between it and the other variances. This is best seen when we arrange our results in a conventional analysis of variance table.
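The variance-component arithmetic just shown amounts to a three-line computation (our own restatement of the example's numbers):

```python
ms_among, ms_within, n = 83.848, 16.029, 5

s2_A = (ms_among - ms_within) / n              # (83.848 - 16.029)/5
percent = 100 * s2_A / (ms_within + s2_A)      # share of the summed variances
print(round(s2_A, 2), round(percent, 1))       # 13.56 45.8
```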

Such a table is divided into four columns. The first identifies the source of variation: among groups, within groups, and total (groups amalgamated to form a single sample). The column headed df gives the degrees of freedom by which the sums of squares pertinent to each source of variation must be divided in order to yield the corresponding variance. The degrees of freedom for variation among groups is a − 1, that for variation within groups is a(n − 1), and that for the total variation is an − 1. The next two columns show sums of squares and variances, respectively. You will note that variances are not referred to by that term in anova, but are generally called mean squares, since, in a Model I anova, they do not estimate a population variance. These quantities are not true mean squares, because the sums of squares are divided by the degrees of freedom rather than by sample size. The sum of squares and mean square are frequently abbreviated SS and MS, respectively.

TABLE 7.5
Anova table for data in Table 7.1.

(1) Source of variation   (2) df   (3) Sum of squares, SS   (4) Mean square, MS
Among groups                  6          127.086                 21.181
Within groups                28          448.800                 16.029
Total                        34          575.886                 16.938

The sums of squares and mean squares in Table 7.5 are the same as those obtained previously, except for minute rounding errors; they have been obtained independently of each other. Notice that the sums of squares entered in the anova table are the sum of squares among groups, the sum of squares within groups, and the sum of squares of the total sample of an items. Note an important property of the sums of squares: when we add the SS among groups to the SS within groups, we obtain the total SS. The sums of squares are additive! Another way of saying this is that we can decompose the total sum of squares into a portion due to variation among groups and another portion due to variation within groups. Observe that the degrees of freedom are also additive: the total of 34 df can be decomposed into 6 df among groups and 28 df within groups. Note, however, that the mean squares are not additive. This is obvious, since generally (a + b)/(c + d) ≠ a/c + b/d. Thus, if we know any two of the sums of squares (and their appropriate degrees of freedom), we can compute the third and complete our analysis of variance.

We shall use the computational formula for sum of squares (Expression (3.8)) to demonstrate why these sums of squares are additive. Although this is an algebraic derivation, it is placed here rather than in the Appendix because these formulas will also lead us to some common computational formulas for analysis of variance. Depending on computational equipment, the formulas we

have used so far to obtain the sums of squares may not be the most rapid procedure.

The sum of squares of means in simplified notation is

    SSmeans = Σᵃ(Ȳ − Ȳ̄)²
            = ΣᵃȲ² − (1/a)(ΣᵃȲ)²
            = Σᵃ[(1/n) ΣⁿY]² − (1/a)[Σᵃ(1/n) ΣⁿY]²
            = (1/n²) Σᵃ(ΣⁿY)² − (1/(an²))(ΣᵃΣⁿY)²

Note that the deviation of means from the grand mean is first rearranged to fit the computational formula (Expression (3.8)), and then each mean is written in terms of its constituent variates. Collection of denominators outside the summation signs yields the final desired form. To obtain the sum of squares of groups, we multiply SSmeans by n, as before. This yields

    SSgroups = n SSmeans = (1/n) Σᵃ(ΣⁿY)² − (1/(an))(ΣᵃΣⁿY)²

Next we evaluate the sum of squares within groups:

    SSwithin = ΣᵃΣⁿ(Y − Ȳ)² = Σᵃ[ΣⁿY² − (1/n)(ΣⁿY)²]
             = ΣᵃΣⁿY² − (1/n) Σᵃ(ΣⁿY)²

The total sum of squares represents

    SStotal = ΣᵃΣⁿ(Y − Ȳ̄)² = ΣᵃΣⁿY² − (1/(an))(ΣᵃΣⁿY)²
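These computational formulas, and the additivity they imply, can be checked with any numbers. The sketch below uses simulated groups (our own illustration, not the book's data):

```python
import random

random.seed(4)
a, n = 7, 5
Y = [[random.gauss(45.0, 4.0) for _ in range(n)] for _ in range(a)]

grand_sum = sum(sum(g) for g in Y)
sum_sq = sum(y * y for g in Y for y in g)
group_sums = [sum(g) for g in Y]

# Computational formulas from the text
ss_groups = sum(s * s for s in group_sums) / n - grand_sum ** 2 / (a * n)
ss_within = sum_sq - sum(s * s for s in group_sums) / n
ss_total = sum_sq - grand_sum ** 2 / (a * n)

# The sums of squares (and the degrees of freedom) are additive
assert abs(ss_groups + ss_within - ss_total) < 1e-6
assert (a - 1) + a * (n - 1) == a * n - 1
print(round(ss_groups, 2), round(ss_within, 2), round(ss_total, 2))
```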

Adding the expression for SSgroups to that for SSwithin, we obtain a quantity that is identical to the one we have just developed as SStotal. This demonstration explains why the sums of squares are additive. We shall not go through any derivation, but simply state that the degrees of freedom pertaining to the sums of squares are also additive: the total degrees of freedom are split up into the degrees of freedom corresponding to variation among groups and those of variation of items within groups.

The additivity relations we have just learned are independent of the presence of added treatment or random effects. We could show this algebraically, but it is simpler to inspect Table 7.6, which summarizes the anova of Table 7.3, in which αi is added to each sample. The additivity relation still holds, although the values for the group SS and the total SS are different from those of Table 7.5.

TABLE 7.6
Anova table for data in Table 7.3.

(1) Source of variation   (2) df   (3) Sum of squares, SS   (4) Mean square, MS
Among groups                  6          503.086                 83.848
Within groups                28          448.800                 16.029
Total                        34          951.886                 27.997

Before we continue, let us review the meaning of the three mean squares in the anova. The total MS is a statistic of dispersion of the 35 (an) items around their mean, the grand mean 45.34. It describes the variance in the entire sample due to all the sundry causes and estimates σ² when there are no added treatment effects or variance components among groups. The within-group MS, also known as the individual or intragroup or error mean square, gives the average dispersion of the 5 (n) items in each group around the group means. If the groups are random samples from a common homogeneous population, the within-group MS should estimate σ². The MS among groups is based on the variance of group means, which describes the dispersion of the 7 (a) group means around the grand mean. If the groups are random samples from a homogeneous population, the expected variance of their means will be σ²/n. Therefore, in order to have all three variances of the same order of magnitude, we multiply the variance of means by n to obtain the variance among groups. If there are no added treatment effects or variance components, the MS among groups is an estimate of σ². Otherwise, it is an estimate of

    σ² + n Σα²/(a − 1)    or    σ² + nσA²

depending on whether the anova at hand is Model I or II.

Another way of looking at the partitioning of the variation is to study the deviation from means in a particular case. Referring to Table 7.3, we can look at the wing length of the first individual in the seventh group, which happens to be 41. Its deviation from its group mean is

Y₇₁ − Ȳ₇ = 41 − 45.4 = −4.4

The deviation of the group mean from the grand mean is

Ȳ₇ − Ȳ̄ = 45.4 − 45.34 = 0.06

and the deviation of the individual wing length from the grand mean is

Y₇₁ − Ȳ̄ = 41 − 45.34 = −4.34

Note that these deviations are additive: the deviation among groups plus the deviation within groups equals the total deviation of items in the analysis of variance, since −4.4 + 0.06 = −4.34. The deviation of the item from the group mean and that of the group mean from the grand mean add to the total deviation of the item from the grand mean. These deviations are stated algebraically as (Y − Ȳ) + (Ȳ − Ȳ̄) = (Y − Ȳ̄). Squaring and summing these deviations for an items will result in

Σᵃⁿ [(Y − Ȳ) + (Ȳ − Ȳ̄)]² = Σᵃⁿ (Y − Ȳ̄)²

Before squaring, the deviations were in the relationship a + b = c. After squaring, we would expect them to take the form a² + b² + 2ab = c². What happened to the cross-product term corresponding to 2ab? This is

2 Σᵃⁿ (Ȳ − Ȳ̄)(Y − Ȳ) = 2 Σᵃ [(Ȳ − Ȳ̄) Σⁿ (Y − Ȳ)]

a covariance-type term that is always zero, since Σⁿ (Y − Ȳ) = 0 for each of the a groups (proof in Appendix A1.1). We identify the deviations represented by each level of variation at the left margins of the tables giving the analysis of variance results (Tables 7.5 and 7.6).
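The single-item partition just worked out, and the vanishing cross-product term, can be checked numerically (a short illustrative sketch; the numbers are those quoted above):

```python
# Checking the partition of a single deviation: the deviation within groups
# plus the deviation among groups equals the total deviation.
y_71, group_mean, grand_mean = 41.0, 45.4, 45.34

within = y_71 - group_mean          # about -4.4
among = group_mean - grand_mean     # about  0.06
total = y_71 - grand_mean           # about -4.34
assert abs((within + among) - total) < 1e-12

# The covariance-type cross-product term is zero for any grouped data,
# because the deviations from each group mean sum to zero within the group.
groups = [[41.0, 45.0, 44.0], [47.0, 49.0, 46.0], [40.0, 42.0, 44.0]]
items = [y for g in groups for y in g]
gm = sum(items) / len(items)
cross = 0.0
for g in groups:
    mean_g = sum(g) / len(g)
    cross += 2 * sum((mean_g - gm) * (y - mean_g) for y in g)
assert abs(cross) < 1e-9
```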
7.6 Model I anova

An important point to remember is that the basic setup of data, as well as the actual computation and significance test, in most cases is the same for both models. The purposes of analysis of variance differ for the two models, however. So do some of the supplementary tests and computations following the initial significance test. Let us now try to resolve the variation found in an analysis of variance case. This will not only lead us to a more formal interpretation of anova but will also give us a deeper understanding of the nature of variation itself. For




purposes of discussion, we return to the housefly wing lengths of Table 7.3. We ask the question, What makes any given housefly wing length assume the value it does? The third wing length of the first sample of flies is recorded as 43 units. How can we explain such a reading? If we knew nothing else about this individual housefly, our best guess of its wing length would be the grand mean of the population, which we know to be μ = 45.5. However, we have additional information about this fly. It is a member of group 1, which has undergone a treatment shifting the mean of the group downward by 5 units. Therefore, α₁ = −5, and we would expect our individual Y₁₃ (the third individual of group 1) to measure 45.5 − 5 = 40.5 units. In fact, however, it is 43 units, which is 2.5 units above this latest expectation. To what can we ascribe this deviation? It is individual variation of the flies within a group because of the variance of individuals in the population (σ² = 15.21). All the genetic and environmental effects that make one housefly different from another housefly come into play to produce this variance. By means of carefully designed experiments, we might learn something about the causation of this variance and attribute it to certain specific genetic or environmental factors. We might also be able to eliminate some of the variance. For instance, by using only full sibs (brothers and sisters) in any one culture jar, we would decrease the genetic variation in individuals, and undoubtedly the variance within groups would be smaller. However, it is hopeless to try to eliminate all variance completely. Even if we could remove all genetic variance, there would still be environmental variance.
And even in the most improbable case in which we could remove both types of variance, measurement error would remain, so that we would never obtain exactly the same reading even on the same individual fly. The within-groups MS always remains as a residual, greater or smaller from experiment to experiment; it is part of the nature of things. This is why the within-groups variance is also called the error variance or error mean square. It is not an error in the sense of our making a mistake, but in the sense of a measure of the variation you have to contend with when trying to estimate significant differences among the groups. The error variance is composed of individual deviations for each individual, symbolized by ε_ij, the random component of the jth individual variate in the ith group. In our case, ε₁₃ = 2.5, since the actual observed value is 2.5 units above its expectation of 40.5. We shall now state this relationship more formally. In a Model I analysis of variance we assume that the differences among group means, if any, are due to the fixed treatment effects determined by the experimenter. The purpose of the analysis of variance is to estimate the true differences among the group means. Any single variate can be decomposed as follows:
Y_ij = μ + α_i + ε_ij    (7.2)

where i = 1, ..., a; j = 1, ..., n; and ε_ij represents an independent, normally distributed variable with mean ε̄ = 0 and variance σ²_ε = σ². Therefore, a given reading is composed of the grand mean μ of the population, a fixed deviation α_i




of the mean of group i from the grand mean μ, and a random deviation ε_ij of the jth individual of group i from its expectation, which is (μ + α_i). Remember that both α_i and ε_ij can be positive as well as negative. The expected value (mean) of the ε_ij's is zero, and their variance is the parametric variance of the population, σ². For all the assumptions of the analysis of variance to hold, the distribution of ε_ij must be normal. In a Model I anova we test for differences of the type α₁ − α₂ among the group means by testing for the presence of an added component due to treatments. If we find that such a component is present, we reject the null hypothesis that the groups come from the same population and accept the alternative hypothesis that at least some of the group means are different from each other, which indicates that at least some of the α_i's are unequal in magnitude. Next, we generally wish to test which α_i's are different from each other. This is done by significance tests, with alternative hypotheses such as H₁: α₁ > α₂ or H₁: ½(α₁ + α₂) > α₃. In words, these test whether the mean of group 1 is greater than the mean of group 2, or whether the mean of group 3 is smaller than the average of the means of groups 1 and 2. Some examples of Model I analyses of variance in various biological disciplines follow. An experiment in which we try the effects of different drugs on batches of animals results in a Model I anova. We are interested in the results of the treatments and the differences between them. The treatments are fixed and determined by the experimenter. This is true also when we test the effects of different doses of a given factor: a chemical, the amount of light to which a plant has been exposed, or temperatures at which culture bottles of insects have been reared.
The treatment does not have to be entirely understood and manipulated by the experimenter. So long as it is fixed and repeatable, Model I will apply. If we wanted to compare the birth weights of the Chinese children in the hospital in Singapore with weights of Chinese children born in a hospital in China, our analysis would also be a Model I anova. The treatment effects then would be "China versus Singapore," which sums up a whole series of different factors, genetic and environmental, some known to us but most of them not understood. However, this is a definite treatment we can describe and also repeat: we can, if we wish, again sample birth weights of infants in Singapore as well as in China. Another example of Model I anova would be a study of body weights for animals of several age groups. The treatments would be the ages, which are fixed. If we find that there are significant differences in weight among the ages, we might proceed with the question of whether there is a difference from age 2 to age 3 or only from age 1 to age 2. To a very large extent, Model I anovas are the result of an experiment and of deliberate manipulation of factors by the experimenter. However, the study of differences such as the comparison of birth weights from two countries, while not an experiment proper, also falls into this category.
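The decomposition of Expression (7.2) can be illustrated with the housefly reading discussed above (a minimal sketch using the numbers quoted in the text: μ = 45.5, α₁ = −5, Y₁₃ = 43):

```python
# Expression (7.2): Y_ij = mu + alpha_i + e_ij, applied to the third
# individual of group 1 in the housefly example.
mu = 45.5          # grand mean of the population
alpha_1 = -5.0     # fixed treatment effect of group 1
y_13 = 43.0        # observed wing length

expectation = mu + alpha_1      # 40.5, the expected reading for group 1
e_13 = y_13 - expectation       # random (error) component of this variate
assert e_13 == 2.5
```

Rearranged this way, the random component ε₁₃ is simply whatever is left of the observation after the grand mean and the fixed group effect are accounted for.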



7.7 Model II anova

The structure of variation in a Model II anova is quite similar to that in Model I:

Y_ij = μ + A_i + ε_ij    (7.3)

where i = 1, ..., a; j = 1, ..., n; ε_ij represents an independent, normally distributed variable with mean ε̄ = 0 and variance σ²_ε = σ²; and A_i represents a normally distributed variable, independent of all ε's, with mean Ā = 0 and variance σ²_A. The main distinction is that in place of fixed-treatment effects α_i, we now consider random effects A_i that differ from group to group. Since the effects are random, it is uninteresting to estimate the magnitude of these random effects on a group, or the differences from group to group. But we can estimate their variance, the added variance component among groups σ²_A. We test for its presence and estimate its magnitude s²_A, as well as its percentage contribution to the variation in a Model II analysis of variance. Some examples will illustrate the applications of Model II anova. Suppose we wish to determine the DNA content of rat liver cells. We take five rats and make three preparations from each of the five livers obtained. The assay readings will be for a = 5 groups with n = 3 readings per group. The five rats presumably are sampled at random from the colony available to the experimenter. They must be different in various ways, genetically and environmentally, but we have no definite information about the nature of the differences. Thus, if we learn that rat 2 has slightly more DNA in its liver cells than rat 3, we can do little with this information, because we are unlikely to have any basis for following up this problem. We will, however, be interested in estimating the variance of the three replicates within any one liver and the variance among the five rats; that is, does variance σ²_A exist among rats in addition to the variance σ² expected on the basis of the three replicates?
The variance among the three preparations presumably arises only from differences in technique and possibly from differences in DNA content in different parts of the liver (unlikely in a homogenate). Added variance among rats, if it existed, might be due to differences in ploidy or related phenomena. The relative amounts of variation among rats and "within" rats (= among preparations) would guide us in designing further studies of this sort. If there was little variance among the preparations and relatively more variation among the rats, we would need fewer preparations and more rats. On the other hand, if the variance among rats was proportionately smaller, we would use fewer rats and more preparations per rat. In a study of the amount of variation in skin pigment in human populations, we might wish to study different families within a homogeneous ethnic or racial group and brothers and sisters within each family. The variance within families would be the error mean square, and we would test for an added variance component among families. We would expect an added variance component σ²_A because there are genetic differences among families that determine amount




of skin pigmentation. We would be especially interested in the relative proportions of the two variances σ² and σ²_A, because they would provide us with important genetic information. From our knowledge of genetic theory, we would expect the variance among families to be greater than the variance among brothers and sisters within a family. The above examples illustrate the two types of problems involving Model II analysis of variance that are most likely to arise in biological work. One is concerned with the general problem of the design of an experiment and the magnitude of the experimental error at different levels of replication, such as error among replicates within rat livers and among rats, error among batches, experiments, and so forth. The other relates to variation among and within families, among and within females, among and within populations, and so forth. Such problems are concerned with the general problem of the relation between genetic and phenotypic variation.
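In a Model II anova with equal sample sizes, E(MS_among) = σ² + nσ²_A and E(MS_within) = σ², so the added variance component is estimated as s²_A = (MS_among − MS_within)/n. A minimal simulation sketch (the group sizes and parameter values here are assumptions for illustration, not from the text):

```python
# Estimating the added variance component s_A^2 in a Model II anova.
# Simulated data: a = 5 "livers", n = 3 "preparations" each.
import random

random.seed(1)
a, n = 5, 3
sigma, sigma_A = 2.0, 4.0          # assumed within- and among-group sd's
groups = []
for _ in range(a):
    A_i = random.gauss(0, sigma_A)                       # random group effect
    groups.append([100 + A_i + random.gauss(0, sigma) for _ in range(n)])

grand_mean = sum(sum(g) for g in groups) / (a * n)
ms_among = n * sum((sum(g) / n - grand_mean) ** 2 for g in groups) / (a - 1)
ms_within = sum((y - sum(g) / n) ** 2 for g in groups for y in g) / (a * (n - 1))

# Added variance component among groups (can be negative by chance,
# in which case it is conventionally set to zero)
s2_A = (ms_among - ms_within) / n
```

With only five groups the estimate s²_A is very imprecise, which is one reason the design question (more rats versus more preparations per rat) matters.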

Exercises
7.1 In a study comparing the chemical composition of the urine of chimpanzees and gorillas (Gartler, Firschein, and Dobzhansky, 1956), the following results were obtained. For 37 chimpanzees the variance for the amount of glutamic acid in milligrams per milligram of creatinine was 0.01069. A similar study based on six gorillas yielded a variance of 0.12442. Is there a significant difference between the variability in chimpanzees and that in gorillas? ANS. Fs = 11.639, F0.025[5,36] ≈ 2.90.

7.2 The following data are from an experiment by Sewall Wright. He crossed Polish and Flemish giant rabbits and obtained 27 F₁ rabbits. These were inbred and 112 F₂ rabbits were obtained. We have extracted the following data on femur length of these rabbits.

         n      Ȳ        s
F₁      27     83.39    1.65
F₂     112     80.5     3.81

Is there a significantly greater amount of variability in femur lengths among the F₂ than among the F₁ rabbits? What well-known genetic phenomenon is illustrated by these data?

7.3 For the following data obtained by a physiologist, estimate σ² (the variance within groups), α_i (the fixed treatment effects), the variance among the groups, and the added component due to treatment Σ α²/(a − 1), and test the hypothesis that the last quantity is zero.
Treatment     A       B       C       D
Ȳ            6.12    4.34    5.12    7.28
s²           2.85    6.70    4.06    2.03
n            10      10      10      10



ANS. s² = 3.91, α̂₁ = 0.405, α̂₂ = −1.375, α̂₃ = −0.595, α̂₄ = 1.565, MS among groups = 124.517, and Fs = 31.846 (which is significant beyond the 0.01 level).

7.4 For the data in Table 7.3, make tables to represent partitioning of the value of each variate into its three components, Ȳ̄, (Ȳ − Ȳ̄), (Y_ij − Ȳ_i). The first table would then consist of 35 values, all equal to the grand mean. In the second table all entries in a given column would be equal to the difference between the mean of that column and the grand mean. And the last table would consist of the deviations of the individual variates from their column means. These tables represent estimates of the individual components of Expression (7.3). Compute the mean and sum of squares for each table.

7.5 A geneticist recorded the following measurements taken on two-week-old mice of a particular strain. Is there evidence that the variance among mice in different litters is larger than one would expect on the basis of the variability found within each litter?
Litters:    1        2        3        4        5        6        7
          19.49    22.94    23.06    15.90    16.72    20.00    21.52
          20.62    22.15    20.05    21.48    19.22    19.79    20.37
          19.51    19.16    21.47    22.48    26.62    21.15    21.93
          18.09    20.98    14.90    18.79    20.74    14.88    20.14
          22.75    23.13    19.72    19.70    21.82    19.79    22.28

ANS. s² = 5.987, MS among = 4.416, s²_A = 0, and Fs = 0.7375, which is clearly not significant at the 5% level.

7.6 Show that it is possible to represent the value of an individual variate as follows: Y_ij = Ȳ̄ + (Ȳ_i − Ȳ̄) + (Y_ij − Ȳ_i). What does each of the terms in parentheses estimate in a Model I anova and in a Model II anova?
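The variance-ratio test of Exercise 7.1 can be checked in a couple of lines (the larger variance goes in the numerator; values as printed in the exercise):

```python
# Variance-ratio (F) test for Exercise 7.1: gorilla vs chimpanzee variability
# in glutamic acid per mg of creatinine.
var_gorillas = 0.12442   # based on 6 gorillas, so 5 df
var_chimps = 0.01069     # based on 37 chimpanzees, so 36 df

Fs = var_gorillas / var_chimps   # larger variance over smaller
assert round(Fs, 3) == 11.639    # matches the printed answer
```

Since Fs = 11.639 far exceeds the critical value F0.025[5,36] ≈ 2.90, the gorillas are significantly more variable.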

CHAPTER 8

Single-Classification Analysis of Variance

We are now ready to study actual cases of analysis of variance in a variety of applications and designs. The present chapter deals with the simplest kind of anova, single-classification analysis of variance. By this we mean an analysis in which the groups (samples) are classified by only a single criterion. Either interpretations of the seven samples of housefly wing lengths (studied in the last chapter), different medium formulations (Model I), or progenies of different females (Model II) would represent a single criterion for classification. Other examples would be different temperatures at which groups of animals were raised or different soils in which samples of plants have been grown. We shall start in Section 8.1 by stating the basic computational formulas for analysis of variance, based on the topics covered in the previous chapter. Section 8.2 gives an example of the common case with equal sample sizes. We shall illustrate this case by means of a Model I anova. Since the basic computations for the analysis of variance are the same in either model, it is not necessary to repeat the illustration with a Model II anova. The latter model is featured in Section 8.3, which shows the minor computational complications resulting from unequal sample sizes, since all groups in the anova need not necessarily have the same sample size. Some computations unique to a Model II anova are also shown; these estimate variance components. Formulas become especially simple for the two-sample case, as explained in Section 8.4. In Model I of this case, the mathematically equivalent t test can be applied as well. When a Model I analysis of variance has been found to be significant, leading to the conclusion that the means are not from the same population, we will usually wish to test the means in a variety of ways to discover which pairs of means are different from each other and whether the means can be divided into groups that are significantly different from each other. To this end, Section 8.5 deals with so-called planned comparisons designed before the test is run; and Section 8.6, with unplanned multiple-comparison tests that suggest themselves to the experimenter as a result of the analysis.

8.1 Computational formulas

We saw in Section 7.5 that the total sum of squares and degrees of freedom can be additively partitioned into those pertaining to variation among groups and those to variation within groups. For the analysis of variance proper, we need only the sum of squares among groups and the sum of squares within groups. But when the computation is not carried out by computer, it is simpler to calculate the total sum of squares and the sum of squares among groups, leaving the sum of squares within groups to be obtained by the subtraction SS_total − SS_groups. However, it is a good idea to compute the individual variances so we can check for heterogeneity among them (see Section 10.1). This will also permit an independent computation of SS_within as a check. In Section 7.5 we arrived at the following computational formulas for the total and among groups sums of squares:

SS_total = Σᵃ Σⁿ Y² − (1/(an)) (Σᵃ Σⁿ Y)²

SS_groups = (1/n) Σᵃ (Σⁿ Y)² − (1/(an)) (Σᵃ Σⁿ Y)²

These formulas assume equal sample size n for each group and will be modified in Section 8.3 for unequal sample sizes. However, they suffice in their present form to illustrate some general points about computational procedures in analysis of variance. We note that the second, subtracted term is the same in both sums of squares. This term can be obtained by summing all the variates in the anova (this is the grand total), squaring the sum, and dividing the result by the total number of variates. It is comparable to the second term in the computational formula for the ordinary sum of squares (Expression (3.8)). This term is often called the correction term (abbreviated CT). The first term for the total sum of squares is simple. It is the sum of all squared variates in the anova table. Thus the total sum of squares, which describes the variation of a single unstructured sample of an items, is simply the familiar sum-of-squares formula of Expression (3.8).

The first term of the sum of squares among groups is obtained by squaring the sum of the items of each group, dividing each square by its sample size, and summing the quotients from this operation for each group. Since the sample size of each group is equal in the above formulas, we can first sum all the squares of the group sums and then divide their sum by the constant n. From the formula for the sum of squares among groups emerges an important computational rule of analysis of variance: To find the sum of squares among any set of groups, square the sum of each group and divide by the sample size of the group; sum the quotients of these operations and subtract from the sum a correction term. To find this correction term, sum all the items in the set, square the sum, and divide it by the number of items on which this sum is based.

8.2 Equal n

We shall illustrate a single-classification anova with equal sample sizes by a Model I example. The data are from an experiment in plant physiology. They are the lengths in coded units of pea sections grown in tissue culture with auxin present. The purpose of the experiment was to test the effects of the addition of various sugars on growth as measured by length. Four experimental groups, representing three different sugars and one mixture of sugars, plus one control without sugar, were used. Ten observations (replicates) were made for each treatment. The term "treatment" already implies a Model I anova. It is obvious that the five groups do not represent random samples from all possible experimental conditions but were deliberately designed to test the effects of certain sugars on the growth rate. We are interested in the effect of the sugars on length, and our null hypothesis will be that there is no added component due to treatment effects among the five groups; that is, the population means are all assumed to be equal.

The computation is illustrated in Box 8.1, which could also serve for a Model II anova with equal sample sizes. General formulas for such a table are shown first; these are followed by a table filled in for the specific example. After quantities 1 through 7 have been calculated, they are entered into an analysis-of-variance table, as shown in the box. Expressions for the expected values of the mean squares are also shown in the first anova table of Box 8.1. They are the expressions you learned in the previous chapter for a Model I anova. We find that the mean square among groups is considerably greater than the error mean square, giving rise to a suspicion that an added component due to treatment effects is present. If the MS_groups is equal to or less than the MS_within, we do not bother going on with the analysis, for we would not have evidence for the presence of an added component. You may wonder how it could be possible for the MS_groups to be less than the MS_within. You must remember that these two are independent estimates. If there is no added component due to treatment or variance component among groups, the estimate of the variance among groups is as likely to be less as it is to be greater than the variance within groups.
BOX 8.1 Single-classification anova with equal sample sizes. The effect of the addition of different sugars on length, in ocular units (×0.114 = mm), of pea sections grown in tissue culture with auxin present: n = 10 (replications per group). This is a Model I anova.

Observations,                         Treatments (a = 5)
i.e., replications   Control   2% Glucose   2% Fructose   1% Glucose +   2% Sucrose
                                added        added         1% Fructose    added
                                                           added
        1              75         57            58             58            62
        2              67         58            61             59            66
        3              70         60            56             58            65
        4              75         59            58             61            63
        5              65         62            57             57            64
        6              71         60            56             56            62
        7              67         60            61             58            65
        8              67         57            60             57            65
        9              76         59            57             57            62
       10              68         61            58             59            67
ΣY                    701        593           582            580           641
Ȳ                    70.1       59.3          58.2           58.0          64.1

Source: Data by W. Purves.

Preliminary computations

1. Grand total = Σᵃ Σⁿ Y = 701 + 593 + 582 + 580 + 641 = 3097
2. Sum of the squared observations = Σᵃ Σⁿ Y² = 75² + 67² + ··· + 67² = 193,151
3. Sum of the squared group totals divided by n = (1/n) Σᵃ (Σⁿ Y)² = (701² + 593² + ··· + 641²)/10 = 1,929,055/10 = 192,905.50
4. Grand total squared and divided by total sample size = correction term CT = (Σᵃ Σⁿ Y)²/(an) = 3097²/(5 × 10) = 9,591,409/50 = 191,828.18

5. SS_total = Σᵃ Σⁿ Y² − CT = quantity 2 − quantity 4 = 193,151 − 191,828.18 = 1322.82
6. SS_groups = (1/n) Σᵃ (Σⁿ Y)² − CT = quantity 3 − quantity 4 = 192,905.50 − 191,828.18 = 1077.32
7. SS_within = SS_total − SS_groups = quantity 5 − quantity 6 = 1322.82 − 1077.32 = 245.50

Source of variation        df          SS           MS                      Expected MS
Ȳ − Ȳ̄  Among groups       a − 1       quantity 6   SS_groups/(a − 1)       σ² + (n/(a − 1)) Σᵃ α²
Y − Ȳ  Within groups      a(n − 1)    quantity 7   SS_within/[a(n − 1)]    σ²
Y − Ȳ̄  Total              an − 1      quantity 5

Substituting the computed values into the above table, we obtain the following:

Anova table

Source of variation                         df      SS         MS        Fs
Ȳ − Ȳ̄  Among groups (among treatments)      4     1077.32    269.33    49.33**
Y − Ȳ  Within groups (error, replicates)   45      245.50      5.46
Y − Ȳ̄  Total                               49     1322.82

F0.05[4,45] = 2.58,  F0.01[4,45] = 3.77
* = 0.01 < P ≤ 0.05; ** = P ≤ 0.01. These conventions will be followed throughout the text and will no longer be explained in subsequent boxes and tables.

Conclusions. There is a highly significant (P ≪ 0.01) added component due to treatment effects in the mean square among groups (treatments). The different sugar treatments clearly have a significant effect on growth of the pea sections. See Sections 8.5 and 8.6 for the completion of a Model I analysis of variance: that is, the method for determining which means are significantly different from each other.
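As a sketch, the whole of Box 8.1 can be recomputed from the raw pea-section data given above (Fs below is the unrounded ratio; Box 8.1 reports 49.33 because it divides the rounded mean squares):

```python
# Recomputing the Box 8.1 quantities from the pea-section data.
control   = [75, 67, 70, 75, 65, 71, 67, 67, 76, 68]
glucose   = [57, 58, 60, 59, 62, 60, 60, 57, 59, 61]
fructose  = [58, 61, 56, 58, 57, 56, 61, 60, 57, 58]
gluc_fruc = [58, 59, 58, 61, 57, 56, 58, 57, 57, 59]
sucrose   = [62, 66, 65, 63, 64, 62, 65, 65, 62, 67]
groups = [control, glucose, fructose, gluc_fruc, sucrose]
a, n = len(groups), len(groups[0])

grand_total = sum(map(sum, groups))                 # quantity 1: 3097
sum_sq = sum(y * y for g in groups for y in g)      # quantity 2: 193,151
group_sq = sum(sum(g) ** 2 for g in groups) / n     # quantity 3: 192,905.50
ct = grand_total ** 2 / (a * n)                     # quantity 4: 191,828.18

ss_total = sum_sq - ct                              # quantity 5: 1322.82
ss_groups = group_sq - ct                           # quantity 6: 1077.32
ss_within = ss_total - ss_groups                    # quantity 7: 245.50

ms_groups = ss_groups / (a - 1)                     # 269.33
ms_within = ss_within / (a * (n - 1))               # 5.46
Fs = ms_groups / ms_within                          # 49.37 unrounded
assert round(ss_groups, 2) == 1077.32 and round(ss_within, 2) == 245.50
```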
Comparison of the observed variance ratio Fs = 49.33 with F0.01[4,40] = 3.83, the conservative critical value (the next tabled F with fewer degrees of freedom), would convince you that the null hypothesis should be rejected. The probability that the five groups differ as much as they do by chance is almost infinitesimally small. Clearly, the sugars produce an added treatment effect, apparently inhibiting growth and consequently reducing the length of the pea sections. At this stage we are not in a position to say whether each treatment is different from every other treatment, or whether the sugars are different from the control but not different from each other. Such tests are necessary to complete a Model I analysis, but we defer their discussion until Sections 8.5 and 8.6. Ordinarily, when confronted with this example, you would not bother working out these values of F. The critical values have been given here only to present a complete record of the analysis. Since v₂ is relatively large, the critical values of F have been computed by harmonic interpolation in Table V (see footnote to Table III for harmonic interpolation).

It may seem that we are carrying an unnecessary number of digits in the computations in Box 8.1. This is often necessary to ensure that the error sum of squares, quantity 7, has sufficient accuracy.

8.3 Unequal n

This time we shall use a Model II analysis of variance for an example. Remember that up to and including the F test for significance, the computations are exactly the same whether the anova is based on Model I or Model II. We shall point out the stage in the computations at which there would be a divergence of operations depending on the model.

The example is shown in Table 8.1. It concerns a series of morphological measurements of the width of the scutum (dorsal shield) of samples of tick larvae obtained from four different host individuals of the cottontail rabbit. These four hosts were obtained at random from one locality. We know nothing about their origins or their genetic constitution. They represent a random sample of the population of host individuals from the given locality. We would not be in a position to interpret differences between larvae from different hosts, since we know nothing of the origins of the individual rabbits. Population biologists are nevertheless interested in such analyses because they provide an answer to the following question: Are the variances of means of larval characters among hosts greater than expected on the basis of variances of the characters within hosts? We can calculate the average variance of width of larval scutum on a host. This will be our "error" term in the analysis of variance. We then test the observed mean square among groups and see if it contains an added component of variance. What would such an added component of variance represent? The mean square within host individuals (that is, of larvae on any one host) represents genetic differences among larvae and differences in environmental experiences of these larvae. Added variance among hosts demonstrates significant differentiation among the larvae, possibly due to differences among the hosts affecting the ticks, or to genetic differences among the larvae, should each host carry a family of ticks, or at least a population whose individuals are more related to each other than they are to tick larvae on other host individuals.
chapter 8 / single-classification analysis of variance

TABLE 8.1
Data and anova table for a single classification anova with unequal sample sizes. Width of scutum (dorsal shield) of larvae of the tick Haemaphysalis leporispalustris in samples from 4 cottontail rabbits. Measurements in microns.

             Hosts (a = 4)
        1      2      3      4
      380    350    354    376
      376    356    360    344
      360    358    362    342
      368    376    352    372
      372    338    366    374
      366    342    372    360
      374    366    362
      382    350    344
             344    342
             364    358
                    351
                    348
                    348

ΣY   2978   3544   4619   2168
n       8     10     13      6
ΣY²  1,108,940  1,257,272  1,642,121  784,536
s²   54.21  142.04  79.56  233.07

Anova table

Source of variation                 df        SS       MS      Fs
Among groups (among hosts)           3    1807.7    602.6    5.26**
Within groups (error;
  among larvae on a host)           33    3778.0    114.5
Total                               36    5585.7

F0.05[3,33] = 2.89    F0.01[3,33] = 4.44

Source: Data by P. A. Thomas.

Conclusion. There is a significant (P < 0.01) added variance component among hosts for width of scutum in larval ticks.

In view of the random choice of hosts this is a clear case of a Model II anova. Because this is a Model II anova, the means for each host have been omitted from Table 8.1; we are not interested in the individual means or possible differences among them. The emphasis in this example is on the magnitudes of the variances. Added variance among hosts would be expected, for example, should each host carry a family of ticks, or at least a population whose individuals are more related to each other than they are to tick larvae on other host individuals.
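The sums of squares of Table 8.1 can be reproduced directly from the raw measurements. The following stdlib-Python sketch (not part of the original text) carries out the unequal-sample-size computation:

```python
# One-way anova with unequal sample sizes for the tick data of Table 8.1.
hosts = [
    [380, 376, 360, 368, 372, 366, 374, 382],                           # host 1
    [350, 356, 358, 376, 338, 342, 366, 350, 344, 364],                 # host 2
    [354, 360, 362, 352, 366, 372, 362, 344, 342, 358, 351, 348, 348],  # host 3
    [376, 344, 342, 372, 374, 360],                                     # host 4
]
a = len(hosts)
N = sum(len(h) for h in hosts)
grand = sum(sum(h) for h in hosts)

ct = grand**2 / N                                    # correction term
ss_total = sum(y**2 for h in hosts for y in h) - ct
ss_among = sum(sum(h)**2 / len(h) for h in hosts) - ct
ss_within = ss_total - ss_among

ms_among = ss_among / (a - 1)
ms_within = ss_within / (N - a)
fs = ms_among / ms_within
print(round(ss_among, 1), round(ss_within, 1), round(fs, 2))  # 1807.7 3778.0 5.26
```

The observed Fs of 5.26 is the value tested against the tabled critical values below the anova table.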

The critical 5% and 1% values of F are shown below the anova table in Table 8.1 (2.89 and 4.44, respectively). You should confirm them for yourself in Table V. Note that the argument v2 = 33 is not given; you therefore have to interpolate between arguments representing 30 and 40 degrees of freedom. The values shown were computed using harmonic interpolation. However, it was not necessary to carry out such an interpolation: the conservative critical value (the next tabled F with fewer degrees of freedom), Fα[3,30], is 2.92 and 4.51 for α = 0.05 and α = 0.01, respectively. The observed value of Fs is 5.26, considerably above the interpolated as well as the conservative value of F0.01. We therefore reject the null hypothesis (H0: σ²A = 0) that there is no added variance component among groups and that the two mean squares estimate the same variance, allowing a type I error of less than 1%. We accept, instead, the alternative hypothesis of the existence of an added variance component σ²A.

What is the biological meaning of this conclusion? For some reason, the ticks on different host individuals differ more from each other than do individual ticks on any one host. This may be due to some modifying influence of individual hosts on the ticks (biochemical differences in blood, differences in the skin, differences in the environment of the host individual, all of them rather unlikely in this case), or it may be due to genetic differences among the ticks. Possibly the ticks on each host represent a sibship (that is, are descendants of a single pair of parents), and the differences in the ticks among host individuals represent genetic differences among families; or perhaps selection has acted differently on the tick populations on each host; or the hosts have migrated to the collection locality from different geographic areas in which the ticks differ in width of scutum. Of these various possibilities, genetic differences among sibships seem most reasonable, in view of the biology of the organism.

The computations up to this point would have been identical in a Model I anova. If this had been Model I, the conclusion would have been that there is a significant treatment effect rather than an added variance component. A possible reason for looking at the means would be at the beginning of the analysis: one might wish to look at the group means to spot outliers, which might represent readings that for a variety of reasons could be in error. The computations follow the outline furnished in Box 8.1. Steps 1, 2, and 4 through 7 are carried out as before, except that the symbol Σⁿ (summation over the n items of a group) now needs to be written Σ^ni, since sample sizes differ for each group. Only step 3 needs to be modified appreciably. It is:

    3. Sum of the squared group totals, each divided by its sample size.

Now, however, we must complete the computations appropriate to a Model II anova. These will include the estimation of the added variance component and the calculation of percentage variation at the two levels.
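As an aside, the harmonically interpolated critical values quoted above can be reproduced in a few lines. This stdlib-Python sketch is ours, not the book's; the four tabled values F0.05[3,30] = 2.92, F0.05[3,40] = 2.84, F0.01[3,30] = 4.51, and F0.01[3,40] = 4.31 are assumed from a standard F table, and the interpolation is linear in 1/v2:

```python
# Harmonic interpolation of F critical values between tabled
# denominator degrees of freedom v2 = 30 and v2 = 40.
def harmonic_interp(f30, f40, v2):
    # Linear interpolation on the reciprocal of the degrees of freedom.
    frac = (1/30 - 1/v2) / (1/30 - 1/40)
    return f30 + frac * (f40 - f30)

f05 = harmonic_interp(2.92, 2.84, 33)   # F0.05[3,33]
f01 = harmonic_interp(4.51, 4.31, 33)   # F0.01[3,33]
print(round(f05, 2), round(f01, 2))     # 2.89 4.44
```

Interpolating on 1/v2 rather than on v2 itself tracks the curvature of the F table much more closely, which is why the footnote to Table III recommends it.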

Since sample size ni differs among groups in this example, we cannot write σ² + nσ²A for the expected MSgroups. It is obvious that no single value of n would be appropriate in the formula. We therefore use an average n; this, however, is not simply n̄, the arithmetic mean of the ni's, but is

    n0 = [1/(a − 1)] [Σni − (Σni²/Σni)]        (8.1)

which is an average usually close to, but always less than, n̄ (unless sample sizes are equal, in which case n0 = n). In this example,

    n0 = (1/3)[(8 + 10 + 13 + 6) − (8² + 10² + 13² + 6²)/(8 + 10 + 13 + 6)] = 9.009

Since the Model II expected MSgroups is σ² + n0σ²A and the expected MSwithin is σ², it is obvious how the variance component among groups σ²A and the error variance σ² are obtained; whenever sample sizes are unequal, the denominator of the estimate becomes n0. Of course, the values that we obtain are sample estimates and therefore are written as s²A and s². The added variance component s²A is estimated as (MSgroups − MSwithin)/n0; in this example, (602.6 − 114.5)/9.009 = 54.2.

We are frequently not so much interested in the actual values of these variance components as in their relative magnitudes. For this purpose we sum the components and express each as a percentage of the resulting sum. Thus s² + s²A = 114.5 + 54.2 = 168.7, and s² and s²A are 67.9% and 32.1% of this sum, respectively; relatively more variation occurs within groups (larvae on a host) than among groups (larvae on different hosts).

8.4 Two groups

A frequent test in statistics is to establish the significance of the difference between two means. This can easily be done by means of an analysis of variance for two groups. Box 8.2 shows this procedure for a Model I anova, the common case.

The example in Box 8.2 concerns the onset of reproductive maturity in water fleas, Daphnia longispina. This is measured as the average age (in days) at beginning of reproduction. Each variate in the table is in fact an average, and a possible flaw in the analysis might be that the averages are not based on equal sample sizes. However, we are not given this information and have to proceed on the assumption that each reading in the table is an equally reliable variate. The two series represent different genetic crosses, and the seven replicates in each series are clones derived from the same genetic cross. This example is clearly a Model I anova, since the question to be answered is whether series I differs from series II in average age at the beginning of reproduction. Inspection of the data shows that the mean age at beginning of reproduction is very similar for the two series.
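Before taking up the two-group example, the Model II quantities just computed can be verified numerically. A stdlib-Python sketch (the mean squares are taken from Table 8.1) computes n0 by Expression (8.1), the added variance component s²A, and the percentage of variation at each level:

```python
# Average sample size n0 of Expression (8.1) and the variance
# components for the tick data of Table 8.1 (Model II anova).
n = [8, 10, 13, 6]            # larvae measured on each of a = 4 hosts
a = len(n)

n0 = (sum(n) - sum(ni**2 for ni in n) / sum(n)) / (a - 1)

ms_groups, ms_within = 602.6, 114.5   # mean squares from Table 8.1
s2_A = (ms_groups - ms_within) / n0   # added variance component among hosts
s2 = ms_within                        # error variance

pct_within = 100 * s2 / (s2 + s2_A)
pct_among = 100 * s2_A / (s2 + s2_A)
print(round(n0, 3), round(s2_A, 1))                # 9.009 54.2
print(round(pct_within, 1), round(pct_among, 1))   # 67.9 32.1
```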

BOX 8.2
Testing the difference in means between two groups; also confidence limits of the difference between two means.

Average age (in days) at beginning of reproduction in Daphnia longispina (each variate is a mean based on approximately similar numbers of females). Two series derived from different genetic crosses and containing seven clones each are compared; n = 7 clones per series. This is a Model I anova.

      Series (a = 2)
        I       II
      7.2      8.8
      7.1      7.5
      9.1      7.7
      7.2      7.6
      7.3      7.4
      7.2      6.7
      7.5      7.2

ΣY   52.6     52.9
n       7        7
Ȳ  7.5143   7.5571
ΣY² 398.28  402.23
s²  0.5047  0.4095

Source: Data by Ordway, from Banta (1939).

Anova table

Source of variation             df       SS        MS       Fs
Between groups (series)          1   0.00643   0.00643   0.0141
Within groups (error;
  clones within series)         12   5.48571   0.45714
Total                           13   5.49214

F0.05[1,12] = 4.75

Conclusions. Since Fs ≪ F0.05[1,12], the null hypothesis is accepted: the means of the two series are not significantly different; that is, the two series do not differ in average age at beginning of reproduction.

A t test of the hypothesis that two sample means come from a population with equal μ; also confidence limits of the difference between two means. This test assumes that the variances in the populations from which the two samples were taken are identical. If in doubt about this hypothesis, test by the method of Box 7.3, Section 7.3.

BOX 8.2 Continued

The appropriate formula for ts is one of the following:

Expression (8.2), when sample sizes are unequal and n1 or n2 or both sample sizes are small (< 30): df = n1 + n2 − 2
Expression (8.3), when sample sizes are identical (regardless of size): df = 2(n − 1)
Expression (8.4), when n1 and n2 are unequal but both are large (> 30): df = n1 + n2 − 2

For the present data, since sample sizes are equal, we choose Expression (8.3):

    ts = [(Ȳ1 − Ȳ2) − (μ1 − μ2)] / √[(s1² + s2²)/n]

We are testing the null hypothesis that μ1 − μ2 = 0, and therefore we replace this quantity by zero in this example. Then

    ts = (7.5143 − 7.5571)/√[(0.5047 + 0.4095)/7] = −0.0428/0.3614 = −0.1184

The degrees of freedom for this example are 2(n − 1) = 2 × 6 = 12. The critical value is t0.05[12] = 2.179. Since the absolute value of our observed ts is less than the critical t value, the means are found to be not significantly different, which is the same result as was obtained by the anova.

Confidence limits of the difference between two means:

    L1 = (Ȳ1 − Ȳ2) − tα[v] s(Ȳ1−Ȳ2) = −0.0428 − (2.179)(0.3614) = −0.8303
    L2 = (Ȳ1 − Ȳ2) + tα[v] s(Ȳ1−Ȳ2) = −0.0428 + (2.179)(0.3614) = 0.7447

The 95% confidence limits contain the zero point (no difference), as was to be expected, since the difference Ȳ1 − Ȳ2 was found to be not significant.

The mean age at beginning of reproduction is very similar for the two series; it would surprise us, therefore, to find that they are significantly different. However, we shall carry out a test anyway. The computations for the analysis of variance are not shown; they would be the same as in Box 8.1. In this case the error mean square, representing the variance within series, is based on s1² and s2², which are very similar for the two series. As you realize by now, one cannot tell from the magnitude of a difference alone whether it is significant; this depends on the magnitude of the error mean square, as computed for the denominator of the t test.
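The numbers in Box 8.2 can be confirmed with a short computation. This stdlib-Python sketch works from the summary statistics of the box (group sums, variances, and the tabled t0.05[12] = 2.179); it also checks the identity ts² = Fs for two groups, which is taken up formally later in this section:

```python
import math

# Box 8.2: Daphnia data summarized as group sums and variances, n = 7 each.
n = 7
sum1, sum2 = 52.6, 52.9          # ΣY for series I and II
s2_1, s2_2 = 0.5047, 0.4095      # within-series variances

# t test, Expression (8.3): equal sample sizes.
diff = sum1/n - sum2/n
se = math.sqrt((s2_1 + s2_2) / n)    # standard error of the difference
ts = diff / se

# Anova on the same data: SSgroups = (ΣY1 - ΣY2)²/2n, MSwithin pooled.
ss_groups = (sum1 - sum2)**2 / (2*n)
ms_within = (s2_1 + s2_2) / 2
fs = ss_groups / ms_within
assert abs(ts**2 - fs) < 1e-9        # ts² equals Fs for two groups

# 95% confidence limits of the difference; t0.05[12] = 2.179 from the t table.
t_crit = 2.179
lims = (diff - t_crit*se, diff + t_crit*se)
print(round(ts, 4), round(fs, 4))              # -0.1186 0.0141
print(round(lims[0], 2), round(lims[1], 2))    # -0.83 0.74
```

The tiny discrepancy from the −0.1184 printed in Box 8.2 comes from the box rounding the standard error to 0.3614 before dividing.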

With equal sample sizes and only two groups, there is one further computational shortcut. Quantity 6, SSgroups, can be directly computed by the following simple formula:

    SSgroups = (ΣY1 − ΣY2)²/2n = (52.6 − 52.9)²/14 = 0.00643

There is only 1 degree of freedom between the two groups.

The critical value of F0.05[1,12] is given underneath the anova table, but it is really not necessary to consult it, because the analysis of variance could not possibly be significant. Inspection of the mean squares in the anova shows that MSgroups is much smaller than MSwithin; therefore the value of Fs is far below unity, and there cannot possibly be an added component due to treatment effects between the series. In cases where MSgroups < MSwithin, we do not usually bother to calculate Fs.

There is another method of solving a Model I two-sample analysis of variance: a t test of the differences between two means. This t test is the traditional method of solving such a problem, and it may already be familiar to you from previous acquaintance with statistical work. It has no real advantage in either ease of computation or understanding, and, as you will see, it is mathematically equivalent to the anova in Box 8.2. It is presented here mainly for the sake of completeness; it would seem too much of a break with tradition not to have the t test in a biostatistics text.

In Section 6.4 we learned about the t distribution and saw that a t distribution of n − 1 degrees of freedom could be obtained from a distribution of the term (Ȳi − μ)/sȲi, where sȲi has n − 1 degrees of freedom and Ȳ is normally distributed. We now learn that the expression

    ts = [(Ȳ1 − Ȳ2) − (μ1 − μ2)] / √{[((n1 − 1)s1² + (n2 − 1)s2²)/(n1 + n2 − 2)] [(n1 + n2)/(n1 n2)]}        (8.2)

is also distributed as t. Expression (8.2) looks complicated, but it really has the same structure as the simpler term for t. The numerator is a deviation, this time not between a single sample mean and the parametric mean, but between a single difference between two sample means, Ȳ1 and Ȳ2, and the true difference between the means of the populations represented by these means. In a test of this sort our null hypothesis is that the two samples come from the same population; that is, they must have the same parametric mean. Thus, the difference μ1 − μ2 is assumed to be zero, and we therefore test the deviation of the difference Ȳ1 − Ȳ2 from zero. The denominator of Expression (8.2) is a standard error, the standard error of the difference between two means s(Ȳ1−Ȳ2). The left portion of the expression under the square root, which is in square brackets, is a weighted average of the variances of the two samples, s1² and s2², computed in the manner of Section 7.1.

The right term under the square root, (n1 + n2)/(n1 n2), is the computationally easier form of (1/n1) + (1/n2), which is the factor by which the average variance within groups must be multiplied in order to convert it into a variance of the difference of means. The analogy with the multiplication of a sample variance s² by 1/n to transform it into the variance of a mean, s²Ȳ, should be obvious.

The test as outlined here assumes equal variances in the two populations sampled. This is also an assumption of the analyses of variance carried out so far, although we have not stressed it. With only two variances, equality may be tested by the procedure in Box 7.3.

When sample sizes are equal in a two-sample test, Expression (8.2) reduces to the simpler form

    ts = [(Ȳ1 − Ȳ2) − (μ1 − μ2)] / √[(s1² + s2²)/n]        (8.3)

which is what is applied in the present example in Box 8.2. When the sample sizes are unequal but rather large, Expression (8.2) simplifies to the expression

    ts = [(Ȳ1 − Ȳ2) − (μ1 − μ2)] / √(s1²/n1 + s2²/n2)        (8.4)

The simplification of Expression (8.2) to Expressions (8.3) and (8.4) is shown in Appendix A1.3. The pertinent degrees of freedom for Expressions (8.2) and (8.4) are n1 + n2 − 2, and for Expression (8.3) df is 2(n − 1). Since t approaches the normal distribution as the degrees of freedom increase, the differences between t[v] and t[∞] are relatively trivial for such large samples.

The test of significance for differences between means using the t test is shown in Box 8.2. The results of this test are identical to those of the anova in the same box: the two means are not significantly different. This is a two-tailed test, because our alternative hypothesis is H1: μ1 ≠ μ2. We can demonstrate the mathematical equivalence of the two tests by squaring the value for ts. Since ts = −0.1184, ts² = 0.0140. Within rounding error, this is equal to the Fs obtained in the anova (Fs = 0.0141); the result should be identical to the Fs value of the corresponding analysis of variance.

Why is this so? We learned that t[v] = (Ȳ − μ)/sȲ, where v is the degrees of freedom of the variance of the mean s²Ȳ; therefore t²[v] = (Ȳ − μ)²/s²Ȳ. This expression can be regarded as a variance ratio. The denominator is clearly a variance with v degrees of freedom. The numerator is also a variance: it is a single deviation squared, which represents a sum of squares possessing 1 rather than zero degrees of freedom (since it is a deviation from the true mean μ rather than from a sample mean), and a sum of squares based on 1 degree of freedom is at the same time a variance. Thus, t² is a variance ratio: t²[v] = F[1,v]. In Appendix A1.4 we demonstrate algebraically that the t² and the Fs values obtained in Box 8.2 are identical quantities.
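The reduction of Expression (8.2) to Expression (8.3) for equal sample sizes can also be checked numerically. A stdlib-Python sketch using the Box 8.2 statistics under the null hypothesis μ1 − μ2 = 0 (the function names are ours):

```python
import math

def ts_82(y1, y2, s2_1, s2_2, n1, n2):
    # Expression (8.2): pooled-variance t statistic for two samples.
    pooled = ((n1 - 1)*s2_1 + (n2 - 1)*s2_2) / (n1 + n2 - 2)
    return (y1 - y2) / math.sqrt(pooled * (n1 + n2) / (n1 * n2))

def ts_83(y1, y2, s2_1, s2_2, n):
    # Expression (8.3): the simplified form for equal sample sizes.
    return (y1 - y2) / math.sqrt((s2_1 + s2_2) / n)

t82 = ts_82(7.5143, 7.5571, 0.5047, 0.4095, 7, 7)
t83 = ts_83(7.5143, 7.5571, 0.5047, 0.4095, 7)
assert abs(t82 - t83) < 1e-12   # identical when n1 = n2
print(round(t83, 4))            # -0.1184
```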

We know that t[∞] is a normal deviate, so that t²[∞] is the square of a normal deviate. We also know (from Section 7.2) that χ²[v]/v = F[v,∞]. Therefore, when v1 = 1 and v2 = ∞, t²[∞] = χ²[1] = F[1,∞] (this can be demonstrated from Tables III, IV, and V):

    t0.05[∞] = 1.960        t²0.05[∞] = 1.960² = 3.8416
    χ²0.05[1] = 3.841
    F0.05[1,∞] = 3.84

The t test for differences between two means is useful when we wish to set confidence limits to such a difference. Box 8.2 shows how to calculate 95% confidence limits to the difference between the series means in the Daphnia example. The appropriate standard error and degrees of freedom depend on whether Expression (8.2), (8.3), or (8.4) is chosen for ts. It does not surprise us to find that the confidence limits of the difference in this case enclose the value of zero, ranging from −0.8303 to +0.7447. This must be so when a difference is found to be not significantly different from zero; we can interpret this by saying that we cannot exclude zero as the true value of the difference between the means of the two series.

Another instance when you might prefer to compute the t test for differences between two means rather than use analysis of variance is when you are lacking the original variates and have only published means and standard errors available for the statistical test. Such an example is furnished in Exercise 8.4.

8.5 Comparisons among means: Planned comparisons

We have seen that after the initial significance test, a Model II analysis of variance is completed by estimation of the added variance components. We usually complete a Model I anova of more than two groups by examining the data in greater detail, testing which means are different from which other ones or which groups of means are different from other such groups or from single means.

Let us look again at the Model I anovas treated so far in this chapter. We can dispose right away of the two-sample case in Box 8.2, the average age of water fleas at beginning of reproduction: there was no significant difference in age between the two genetic series, and even if there had been such a difference, no further tests would be possible. However, the data on length of pea sections given in Box 8.1 show a significant difference among the five treatments (based on 4 degrees of freedom). Although we know that the means are not all equal, we do not know which ones differ from which other ones. This leads us to the subject of tests among pairs and groups of means. Thus, for example, we might test the control against the 4 experimental treatments representing added sugars. The question to be tested would be: Does the addition of sugars have an effect on length of pea sections? We might also test for differences among the sugar treatments. A reasonable test might be pure sugars (glucose, fructose, and sucrose) versus the mixed sugar treatment (1% glucose + 1% fructose).

An important point about such tests is that they are designed and chosen independently of the results of the experiment. They should be planned before the experiment has been carried out and the results obtained. Such comparisons are called planned or a priori comparisons; they are applied regardless of the results of the preliminary overall anova, according to the methods described in the present section. By contrast, after the experiment has been carried out, we might wish to compare certain means that we notice to be markedly different. For instance, sucrose, with a mean of 64.1, appears to have had less of a growth-inhibiting effect than fructose, with a mean of 58.2; we might therefore wish to test whether there is in fact a significant difference between the effects of fructose and sucrose. Such comparisons, which suggest themselves as a result of the completed experiment, are called unplanned or a posteriori comparisons. These tests are performed only if the preliminary overall anova is significant. They include tests of the comparisons between all possible pairs of means; when there are a means, there can, of course, be a(a − 1)/2 possible comparisons between pairs of means. The reason we make this distinction between a priori and a posteriori comparisons is that the tests of significance appropriate for the two kinds of comparisons are different.

A simple example will show why this is so. Let us assume we have sampled from an approximately normal population of heights on men, and we have computed their mean and standard deviation. If we sample two men at a time from this population, we can predict the difference between them on the basis of ordinary statistical theory. Some men will be very similar, others relatively very different. Their differences will be distributed normally with a mean of 0 and an expected variance of 2σ². Thus, if we obtain a large difference between two randomly sampled men, it will have to be a sufficient number of standard deviations greater than zero for us to reject our null hypothesis that the two men come from the specified population. If, on the other hand, we were to look at the heights of the men before sampling them and then take pairs of men who seemed to be very different from each other, it is obvious that we would repeatedly obtain differences within pairs of men that were several standard deviations apart. Such differences would be outliers in the expected frequency distribution of differences, and time and again we would reject our null hypothesis when in fact it was true. The men would be sampled from the same population, but because they were not being sampled at random but being inspected before being sampled, the probability distribution on which our hypothesis testing rested would no longer be valid. It is obvious that the tails in a large sample from a normal distribution will be anywhere from 5 to 7 standard deviations apart; if we deliberately take individuals from each tail and compare them, they will appear to be highly significantly different from each other, even though they belong to the same population. When we compare means differing greatly from each other as the result of some treatment in the analysis of variance, we are doing exactly the same thing as taking the tallest and the shortest men from the frequency distribution of heights.

If we wish to know whether such means are significantly different from each other, we cannot use the ordinary probability distribution on which the analysis of variance rests, but we have to use special tests of significance. These unplanned tests will be discussed in the next section. The present section concerns itself with the carrying out of those comparisons planned before the execution of the experiment.

The general rule for making a planned comparison is extremely simple; it is related to the rule for obtaining the sum of squares for any set of groups (discussed at the end of Section 8.1). To compare k groups of any size ni, take the sum of each group, square it, divide the result by the sample size ni, and sum the k quotients so obtained. From the sum of these quotients, subtract a correction term, which you determine by taking the grand sum of all the groups in this comparison, squaring it, and dividing the result by the number of items in the grand sum. The result is a sum of squares for the comparison between these groups. If the comparison includes all the groups in the anova, the correction term will be the main CT of the study; if, however, the comparison includes only some of the groups of the anova, the CT will be different, being restricted only to these groups.

These rules can best be learned by means of an example. Table 8.2 lists the means, group sums, and sample sizes of the experiment with the pea sections from Box 8.1. You will recall that there were highly significant differences among the groups. We now wish to test whether the mean of the control differs from that of the four treatments representing addition of sugar. There will thus be two groups, one the control group and the other the "sugars" group, the latter with a sum of 2396 and a sample size of 40. We therefore compute

    SS (control versus sugars) = (701)²/10 + (593 + 582 + 580 + 641)²/40 − (701 + 593 + 582 + 580 + 641)²/50
                               = (701)²/10 + (2396)²/40 − (3097)²/50
                               = 832.32

In this case the correction term is the same as for the anova, because the comparison involves all the groups of the study.

TABLE 8.2
Means, group sums, and sample sizes from the data in Box 8.1. Length of pea sections grown in tissue culture (in ocular units).

                                      1% glucose +
       Control  Glucose  Fructose  1% fructose  Sucrose       Σ
Ȳ        70.1     59.3      58.2         58.0     64.1    61.94
ΣY        701      593       582          580      641     3097
n          10       10        10           10       10       50
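The general rule lends itself to a one-line function. A stdlib-Python sketch (the function name is ours, not the book's):

```python
# Sum of squares for a planned comparison among groups:
# sum of (group total)²/(group size), minus a correction term based
# on the grand sum of the groups entering the comparison.
def ss_comparison(sums, sizes):
    ct = sum(sums)**2 / sum(sizes)          # correction term
    return sum(t**2 / n for t, n in zip(sums, sizes)) - ct

# Control (sum 701, n = 10) versus the four sugar groups pooled
# (sum 2396, n = 40), from Table 8.2.
ss = ss_comparison([701, 2396], [10, 40])
print(round(ss, 2))   # 832.32
```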

Since a comparison between two groups has only 1 degree of freedom, the sum of squares is at the same time a mean square. This mean square is tested over the error mean square of the anova to give the following comparison:

    Fs = MS (control versus sugars)/MSwithin = 832.32/5.46 = 152.44
    F0.05[1,45] = 4.05        F0.01[1,45] = 7.23

This Fs is highly significant, showing that the additions of sugars have significantly retarded the growth of the pea sections.

Next we test whether the mixture of sugars is significantly different from the pure sugars. Using the same technique, we calculate

    SS (mixed sugars versus pure sugars) = (593 + 582 + 641)²/30 + (580)²/10 − (593 + 582 + 580 + 641)²/40
                                         = (1816)²/30 + (580)²/10 − (2396)²/40
                                         = 48.13

Here the CT is different, since it is based on the sum of the sugars only. The appropriate test statistic is

    Fs = MS (mixed sugars versus pure sugars)/MSwithin = 48.13/5.46 = 8.82

This is significant in view of the critical values of F[1,45] given in the preceding paragraph.

A final test is among the three pure sugars. This mean square has 2 degrees of freedom, since it is based on three means. Thus we compute

    SS (among pure sugars) = (593)²/10 + (582)²/10 + (641)²/10 − (1816)²/30 = 196.87
    MS (among pure sugars) = 196.87/2 = 98.43
    Fs = MS (among pure sugars)/MSwithin = 98.43/5.46 = 18.03

This Fs is highly significant, since even F0.01[2,40] = 5.18. We conclude that the addition of the three sugars retards growth in the pea sections, that mixed sugars affect the sections differently from pure sugars, and that the pure sugars are significantly different among themselves, probably because the sucrose has a far higher mean. We cannot test the sucrose against the other two, because that would be an unplanned test, which suggests itself to us after we have looked at the results. To carry out such a test, we need the methods of the next section.

13 98. t h e s u m of t h e d e g r e e s of f r e e d o m of t h e s e p a r a t e p l a n n e d tests s h o u l d n o t exceed a — 1. T h u s : df 1 1 2 4 SS ( c o n t r o l versus sugars) SS ( a m o n g p u r e sugars) SS ( a m o n g t r e a t m e n t s ) = = 832.32 T h i s a g a i n illustrates the elegance of analysis of v a r i a n c e .6 / c o m p a r i s o n s a m o n g m e a n s : u n p l a n n e d c o m p a r i s o n s 177 O u r a p r i o r i tests m i g h t h a v e been q u i t e different.33** 152. T h e p a t t e r n a n d n u m b e r of p l a n n e d tests a r e d e t e r m i n e d b y o n e ' s h y p o t h eses a b o u t t h e d a t a .50 1322. t o g e t h e r a d d u p t o t h e s u m of s q u a r e s a m o n g t r e a t m e n t s of t h e original a n a l y s i s of v a r i a n c e based o n 4 degrees of f r e e d o m . Source of I'tiriulioii <H .V ' 1077. F o r e x a m p l e . Since these tests a r e i n d e p e n d e n t .32 48. TAHI.82 MS Treatments Control vs. d e p e n d i n g entirely o n o u r initial h y p o t h e s e s . f o l l o w e d by mixed versus p u r e m o n o s a c c h a r i d e s a n d finally by glucose v e r s u s f r u c t o s e . a n d 3 differed if we h a d a l r e a d y f o u n d t h a t m e a n 1 differed f r o m m e a n 3.87 245. a n d 2 d f . f r u c t o s e . the three s u m s of s q u a r e s we h a v e so far o b t a i n e d . W e c a n present all of these results as a n a n o v a table. T h e t r e a t m e n t s u m s of s q u a r e s can be d e c o m p o s e d i n t o s e p a r a t e p a r t s that are s u m s of s q u a r e s in their o w n right.I.32 48.32 832. sugars Mixed vs.43 5.S. a n d the third the r e m a i n i n g v a r i a t i o n a m o n g the t h r e e s u g a r s . 3 Anova table from Box K. respectively. pure sugars Among pure sugars Within Total 4 1 1 7 45 49 269. since significance of the latter suggests significance of the f o r m e r . 
Our a priori tests might have been quite different, depending entirely on our initial hypotheses. For example, we could have tested control versus sugars initially, followed by disaccharides (sucrose) versus monosaccharides (glucose, fructose, glucose + fructose), followed by mixed versus pure monosaccharides, and finally by glucose versus fructose.
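The decomposition above is easy to verify numerically. The sketch below (Python; the helper `ss_between` and the treatment totals, the sums of the n = 10 readings per treatment in Box 8.1, are our reconstruction, not code from the book) recomputes each planned-comparison SS from group totals and checks that they add up to the treatment SS:

```python
def ss_between(totals, sizes):
    """Sum of squares among groups, computed from group totals:
    SS = sum(T_i^2 / n_i) - (sum T_i)^2 / (sum n_i)."""
    grand = sum(totals)
    n = sum(sizes)
    return sum(t * t / m for t, m in zip(totals, sizes)) - grand * grand / n

# Treatment totals for the pea section data (n = 10 per treatment):
# control 701, glucose 593, fructose 582, glucose+fructose 580, sucrose 641
ss_control_vs_sugars = ss_between([701, 593 + 582 + 580 + 641], [10, 40])
ss_mixed_vs_pure = ss_between([580, 593 + 582 + 641], [10, 30])
ss_among_pure = ss_between([593, 582, 641], [10, 10, 10])
ss_treatments = ss_between([701, 593, 582, 580, 641], [10] * 5)

print(round(ss_control_vs_sugars, 2))  # 832.32
print(round(ss_mixed_vs_pure, 2))      # 48.13
print(round(ss_among_pure, 2))         # 196.87
print(round(ss_treatments, 2))         # 1077.32
```

Because each planned comparison tests an independent relationship among the means, the three components (df = 1, 1, and 2) reconstitute the 4-df treatment SS exactly.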

When the planned comparisons are not independent, and when the number of comparisons planned is less than the total number of comparisons possible between all pairs of means, which is a(a − 1)/2, we carry out the tests as just shown, but we adjust the critical values of the type I error α. In comparisons that are not independent, if the outcome of a single comparison is significant, the outcomes of subsequent comparisons are more likely to be significant as well, so that decisions based on conventional levels of significance might be in doubt. For this reason, we employ a conservative approach, lowering the type I error of the statistic of significance for each comparison so that the probability of making any type I error at all in the entire series of tests does not exceed a predetermined value α. This value is called the experimentwise error rate. Assuming that the investigator plans a number of comparisons, adding up to k degrees of freedom, the appropriate critical values will be obtained if the probability α' is used for any one comparison, where

    α' = α/k

The approach using this relation is called the Bonferroni method; it assures us of an experimentwise error rate ≤ α. Adjusting the critical value is a conservative procedure: individual comparisons using this approach are less likely to be significant.

Applying this approach to the pea section data, let us assume that the investigator has good reason to test the following comparisons between and among treatments, given here in abbreviated form: (C) versus (G, F, S, G + F); (G, F, S) versus (G + F); (G) versus (F) versus (S); as well as (G, F) versus (G + F). The 5 degrees of freedom in these tests require that each individual test be adjusted to a significance level of

    α' = 0.05/5 = 0.01

for an experimentwise critical α = 0.05. Thus, the critical value for the F_s ratios of these comparisons is F_0.01[1,45] or F_0.01[2,45], as appropriate. The first three tests are carried out as shown above. The last test, the average of glucose and fructose versus a mixture of the two, is computed in a similar manner:

    SS (glucose and fructose vs. glucose and fructose mixed)
        = (593 + 582)²/20 + (580)²/10 − (593 + 582 + 580)²/30 = 3.75

In spite of the change in critical value, the conclusions concerning the first three tests are unchanged. The last test is not significant, since F_s = 3.75/5.46 = 0.687.
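The adjustment itself is one line of arithmetic. A minimal sketch (Python; the function names are ours, and the Dunn–Šidák refinement included for comparison is the one mentioned a little further on in the text):

```python
def bonferroni_alpha(alpha, k):
    """Per-comparison significance level giving an experimentwise rate <= alpha."""
    return alpha / k

def dunn_sidak_alpha(alpha, k):
    """Slightly less conservative refinement; exact when comparisons are independent."""
    return 1.0 - (1.0 - alpha) ** (1.0 / k)

# Pea section example: planned comparisons totalling k = 5 degrees of freedom
print(round(bonferroni_alpha(0.05, 5), 4))  # 0.01
print(round(dunn_sidak_alpha(0.05, 5), 4))  # a trifle larger than 0.01
```

For 6 degrees of freedom of planned tests, the same call gives α' = 0.05/6 = 0.0083, the value used later in this section.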
.01 for an experimentwise critical α — 0. if the o u t c o m e of a single c o m p a r i s o n is significant. Assuming that the investigator plans a n u m b e r of c o m p a r i s o n s . since F s = i l l 0. S. given here in abbreviated form: (C) versus (G. the o u t c o m e s of s u b s e q u e n t c o m p a r i s o n s are m o r e likely t o be significant as well. where y 7 k T h e a p p r o a c h using this relation is called the Bonferroni method. S) versus (G t F). a n d (G) versus (F) versus (S). T h e last test is c o m p u t e d in a similar manner: Iaverage of glucose a n d \ fructose vs. F. In c o m p a r i s o n s that are not i n d e p e n d e n t .| > 4 5 ] . the a p p r o p r i a t e critical values will be o b t a i n e d if the probability x' is used for any o n e c o m p a r i s o n . as a p p r o p r i a t e . Applying this a p p r o a c h to the pea section d a t a .05. T h u s . is not significant.

The Bonferroni method generally will not employ the standard, tabled arguments of α for the F distribution. For example, if we were to plan tests involving altogether 6 degrees of freedom, the value of α' would be 0.05/6 = 0.0083. Exact tables of Bonferroni critical values are available for the special case of tests with a single degree of freedom; alternatively, we can compute the desired critical value by means of a computer program. A conservative alternative is to use the next smaller tabled value of α. The Bonferroni method (or a more recent refinement, the Dunn–Šidák method) should also be employed when you are reporting confidence limits for more than one group mean resulting from an analysis of variance. Thus, if you wanted to publish the means and 1 − α confidence limits of all five treatments in the pea section example, you would not set confidence limits to each mean as though it were an independent sample, but you would employ t_α'[ν], where ν is the degrees of freedom of the entire study and α' is the adjusted type I error explained earlier. Details of such a procedure can be learned in Sokal and Rohlf (1981).

8.6 Comparisons among means: Unplanned comparisons

The above tests were a priori comparisons, in which separate sums of squares were computed based on comparisons among means planned before the data were examined. A single-classification anova is said to be significant if

    MS_groups / MS_within > F_α[a−1, a(n−1)]        (8.5)

Since MS_groups / MS_within = SS_groups / [(a − 1) MS_within], we can rewrite Expression (8.5) as

    SS_groups > (a − 1) MS_within F_α[a−1, a(n−1)]        (8.6)

It is therefore possible to compute a critical SS value for a test of significance of an anova. Thus, another way of calculating overall significance would be to see whether the SS_groups is greater than this critical SS. Substituting into Expression (8.6), we obtain

    1077.32 > (5 − 1)(5.46)(2.58) = 56.35        for α = 0.05

so the anova is significant. It is of interest to investigate why the SS_groups is as large as it is and to test for the significance of the various contributions made to this SS by differences among the sample means. This was discussed in the previous section, where a comparison was called significant if its F_s ratio was > F_α[k−1, a(n−1)], where k is the number of means being compared. We can now also state this in terms of sums of squares: an SS is significant if it is greater than (k − 1) MS_within F_α[k−1, a(n−1)]. One procedure for testing a posteriori comparisons would be to set k = a in this last formula, no matter how many means we compare.
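Expression (8.6) can be turned into a one-line computation. A sketch (Python; the function name is ours, and the tabled value F_0.05[4,45] = 2.58 is taken from the text, not computed):

```python
def critical_ss(k, ms_within, f_crit):
    """Critical SS of Expression (8.6): an SS among k means is significant
    if it exceeds (k - 1) * MS_within * F_alpha[k-1, a(n-1)]."""
    return (k - 1) * ms_within * f_crit

# Pea section data: a = 5 treatments, MS_within = 5.46, F_0.05[4,45] = 2.58
crit = critical_ss(5, 5.46, 2.58)
print(round(crit, 2))   # 56.35
print(1077.32 > crit)   # True: SS_groups is significant
```

The same function, with k set to the number of means in any suspicious-looking subset but the F value held at its k = a argument, is the test used for the unplanned comparisons that follow.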

Setting k = a allows for the fact that we choose for testing those differences between group means that appear to be contributing substantially to the significance of the overall anova. Since there are very many possible tests that could be made, the critical value of the SS is larger than in the previous method, making it more difficult to demonstrate the significance of a sample SS.

For an example, let us return to the effects of sugars on growth in pea sections (Box 8.1). We write down the means in ascending order of magnitude: 58.0 (glucose + fructose), 58.2 (fructose), 59.3 (glucose), 64.1 (sucrose), 70.1 (control). We notice that the first three treatments have quite similar means and suspect that they do not differ significantly among themselves and hence do not contribute substantially to the significance of the SS_groups. To test this, we compute the SS among these three means by the usual formula:

    [(593)² + (582)² + (580)²]/10 − (593 + 582 + 580)²/30 = 102,677.3 − 102,667.5 = 9.8

The differences among these means are not significant, because this SS is less than the critical SS (56.35) calculated above. The sucrose mean, however, looks suspiciously different from the means of the other sugars. To test this we compute

    (641)²/10 + (593 + 582 + 580)²/30 − (641 + 593 + 582 + 580)²/40
        = 41,088.1 + 102,667.5 − 143,520.4 = 235.2

which is greater than the critical SS. We conclude, therefore, that sucrose retards growth significantly less than the other sugars tested. We may continue in this fashion, testing all the differences that look suspicious or even testing all possible sets of means, considering them 2, 3, 4, and 5 at a time. This latter approach may require a computer if there are more than 5 means to be compared.

In the SS-STP and in the original anova, the chance of making any type I error at all is α, the probability selected for the critical F value from Table V. By "making any type I error at all" we mean making such an error in the overall test of significance of the anova and in any of the subsidiary comparisons among means or sets of means needed to complete the analysis of the experiment. This probability α therefore is an experimentwise error rate. Note that though the probability of any error at all is α, the probability of error for any particular test of some subset, such as a test of the difference among three or between two means, will always be less than α; therefore, for the test of each subset one is really using a significance level α', which may be much less than the experimentwise α. This procedure was proposed by Gabriel (1964), who called it a sum of squares simultaneous test procedure (SS-STP).
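The two subset tests above can be reproduced with the same totals-based SS formula. A sketch (Python; `ss_subset` is our name for the usual formula, and the critical SS of 56.35 is the value computed earlier in this section):

```python
CRITICAL_SS = 56.35  # (a - 1) * MS_within * F_0.05[4,45], from the text

def ss_subset(totals, sizes):
    """SS among the sets of means represented by these totals."""
    grand = sum(totals)
    big_n = sum(sizes)
    return sum(t * t / m for t, m in zip(totals, sizes)) - grand * grand / big_n

# Treatment totals (n = 10 each): glucose 593, fructose 582,
# glucose + fructose 580, sucrose 641
three_similar = ss_subset([593, 582, 580], [10, 10, 10])
sucrose_vs_rest = ss_subset([641, 593 + 582 + 580], [10, 30])

print(round(three_similar, 1), three_similar > CRITICAL_SS)      # 9.8 False
print(round(sucrose_vs_rest, 1), sucrose_vs_rest > CRITICAL_SS)  # 235.2 True
```

Any subset of the five means can be screened the same way, which is what makes the SS-STP convenient to automate.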

However, the unplanned tests discussed above and the overall anova are not very sensitive to differences between individual means or differences within small subsets. Obviously, not many differences are going to be considered significant if α' is minute, and if there are many means in the anova, this actual error rate α' may be one-tenth, one one-hundredth, or even one one-thousandth of the experimentwise α (Gabriel, 1964). This is the price we pay for not planning our comparisons before we examine the data: if we were to make planned tests, the error rate of each would be greater, hence less conservative. The SS-STP procedure is only one of numerous techniques for multiple unplanned comparisons. It is the most conservative, since it allows a large number of possible comparisons; differences shown to be significant by this method can be reliably reported as significant differences. More sensitive and powerful comparisons exist when the number of possible comparisons is circumscribed by the user. This is a complex subject, to which a more complete introduction is given in Sokal and Rohlf (1981).

Exercises

8.1 The following is an example with easy numbers to help you become familiar with the analysis of variance. A plant ecologist wishes to test the hypothesis that the height of plant species X depends on the type of soil it grows in. He has measured the height of three plants in each of four plots representing different soil types, all four plots being contained in an area of two miles square. His results are tabulated below. (Height is given in centimeters.) Does your analysis support this hypothesis?

    Observation          Plots
    number          1     2     3     4
       1           15    25    17    10
       2            9    21    23    13
       3           14    19    20    16

ANS. Yes, since F_s = 6.95 is larger than F_0.05[3,8] = 4.07.

8.2 The following are measurements (in coded micrometer units) of the thorax length of the aphid Pemphigus populitransversus. The aphids were collected in 28 galls on the cottonwood Populus deltoides. Four alate (winged) aphids were randomly selected from each gall and measured. The alate aphids of each gall are isogenic (identical twins), being descended parthenogenetically from one stem mother, so any variance within galls can be due to environment only. Variance between galls may be due to differences in genotype and also to environmental differences between galls. If this character, thorax length, is affected by genetic variation, significant intergall variance must be present. The converse is not necessarily true: significant variance between galls need not indicate genetic variation; it could as well be due to environmental differences between galls (data by Sokal, 1952). Analyze the variance of thorax length. Is there significant intergall variance present? Give estimates of the added component of intergall variance. What percentage of the variance is controlled by intragall and what percentage by intergall factors? Discuss your results.
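The answer to Exercise 8.1 can be checked with a few lines of code. A sketch (Python; the function name `one_way_anova` is ours, and the grouping of the twelve heights into the four plots follows the tabulation above):

```python
def one_way_anova(groups):
    """Return (F_s, df_among, df_within) for a single-classification anova."""
    all_obs = [y for g in groups for y in g]
    big_n = len(all_obs)
    a = len(groups)
    ct = sum(all_obs) ** 2 / big_n                              # correction term
    ss_total = sum(y * y for y in all_obs) - ct
    ss_groups = sum(sum(g) ** 2 / len(g) for g in groups) - ct
    ss_within = ss_total - ss_groups
    ms_groups = ss_groups / (a - 1)
    ms_within = ss_within / (big_n - a)
    return ms_groups / ms_within, a - 1, big_n - a

plots = [[15, 9, 14], [25, 21, 19], [17, 23, 20], [10, 13, 16]]
f_s, df1, df2 = one_way_anova(plots)
print(round(f_s, 2), df1, df2)  # 6.95 3 8 -> exceeds F_0.05[3,8] = 4.07
```

The same function applies unchanged to the gall data of Exercise 8.2 (28 groups of four variates each).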

24. 13.6.4 6.1.1. 6. 11.5 5. ANS.4. 2. 3. 12.3 5.9.4.0. 6.4 5.4. 5.0 5. 6.3 4. 5. 5. 6. 22. 18.5.1. 6.8. 5. 5.9. 4.8 5.=.1. 27. 6.3 6. 5. 6.3. 5. 6. 5.2. 5. 25. and verify that ff = F s .0 5. 5. 5. 14. 6.9.0.1 4. The data below on first-born and eighth-born infants are extracted from a table of birth weights of male infants of Chinese third-class patients at the K a n d a n g Kerbau Maternity Hospital in Singapore in 1950 and 1951. 9.4. 4.7 7.9 6.7. 6. 4. 6. 5.1.121.9. 5.) Reanalyze.9.4. 5. 26.4.8. 19.3. 6.5.0 5.3.7.2.5 6. 5. 5. 5.6. 7. 6.5. l s ^ 11. 4. 6. 5. 4. 6. 6.4.5. 6. 5.7. 5.0. 6. 5. 5. 5.0.2. 5.1. 4.1. 5.2.9.8.0.1 15.3. 4. 5.7 6.3 6.0.182 c h a p t e r 8 / s i n g l e .0. 8.5 6.7 5.8. 17. 23. 6.6 6.5. using the I test.4.1 3 7 111 267 457 485 363 162 64 6 5 4 5 19 52 55 61 48 39 19 4 1 1932 307 8. 5. 4.6. 5.3 6. 6.9.7.3 VI ill is and Seng (1954) published a study on the relation of birth order to the birth weights οΓ infants.7 5. 4.9. 4. 4. 6. 6. 4. 5. 6.2.7.8.8.3.3.4 Which birth order appears to be accompanied by heavier infants? Is this differ ence significant? Can you conclude that birth order causes differences in birth weight? (Computational note: The variable should be coded as simply as possible.6. 5. 20. 5. 5.3.1 4.5. 5.8.9 4. 6.3. 6.2.7.5 8.2.3. 6.5. 1. 10.3 5.5.1. Birth weight (Ih:: oz ) Birth I order ti 3:0 3:8 4:0 4:8 5:0 5:8 6:0 6:8 7:0 7:8 8:0 8:8 9:0 9:8 10:0 10:8 3: 7 3: 15 4::7 •4:: 15 5::7 5 : 15 6:: 7 6 : 15 7:7 7 : 15 8: 7 8:1 5 9 :7 9 :15 10:7 10:15 . 5.3. 6.2 5. 16.3 3. 21.9.0. 6.6.5.016 and /·. 4.3.4. 4.1. 5. 6. 6. 5. 4.c l a s s i f i c a t i o n a n a l y s i s of variance Gall no. 4. 5.352 " The following cytochrome oxidase assessments of male Pcriplaneta roaches in cubic millimeters per ten minutes per milligram were taken IVom a larger study .0. 5. 5. Gall no. 28. 5.5.2. 6. 5.1. 5.

4 5.8 5. ANS.9 5.1 4.0 5.0 5. The variableis the length from the anterior end of the narial opening to the lip of the bony beak and is recorded in millimeters. MS | S L v s 1 L ) = 2076.9 5.8 5.7 5.9 1.9 6.5 4.9 5. Strain CS SL LL tii 80 8070 69 7291 33 3640 3 "ι Σ Σ γ 2 = 1.5 5.4 5.7 4.8 19.5 5. Analyze and interpret.6 5.4 5.2 4.0 5.1 5.5 Are the two means significantly different? P.0 4.5 5. Perform unplanned tests a m o n g the three means (short vs.3 5.3 5. Set 95% confidence limits to the observed differences of means for which these comparisons are made.8 5.8 4.6 5.6 Note that part of the computation has already been performed for you.2 5.9 6.0 5.7 6.8 4.1 4.4 6.8 5.1 4.3 6.6 5.8 5.9 5.0 .9 5.3 5.5 5.2 6.0 4.9 5.4 4.2 4.6 6.8 4.8 4.2 5. long larval periods and each against the control). and March in Chicago in 1955.8 5.7 5.5.7 4.8 5.1 4. Hunter (1959.5 5.1 5.7 5.0 5.2 4.6697.9 5.1 4. detailed data unpublished) selected two strains of D.0 . These data are measurements of live random samples of domestic· pigeons collected during January.1 4.4 4.3 5.1 5.5 4.2 5.4 5.exercises 183 η y Sy 24 hours after methoxychlor injection Control 5 3 24.9 5. A nonselected control strain (CS) was also maintained.0 5.1 5.1 5. one for short larval period (SL) and one for long larval period (LL).1 5.3 4.0 5.7 0. At generation 42 these data were obtained for the larval period (measured in hours).6 5.4 5.0 4.6 5.7 5. E.8 5.9 4. 1 1 Samples 3 4 s 5.5 5.1 5.1 5.9 6.8 5.8 4.4 3.650 8.4 8.994.5 5.8 5. February. melanoiicisler.9 5. Data from Olson and Miller (1958).9 5.1 4.

8.7 The following data were taken from a study of blood protein variations in deer (Cowan and Johnston, 1962). The variable is the mobility of serum protein fraction II expressed as 10⁻⁵ cm²/volt-seconds. Perform an analysis of variance and a multiple-comparison test, using the sums of squares STP procedure.

                                    Ȳ      s
    Sitka                          2.8    0.07
    California blacktail           2.5    0.05
    Vancouver Island blacktail     2.9    0.05
    Mule deer                      2.5    0.05
    Whitetail                      2.8    0.07

    n = 12 for each mean.

ANS. Maximal nonsignificant sets (at P = 0.05) are samples 1, 3, 5 and 2, 4 (numbered in the order given). MS_within = 0.0416.

8.8 For the data from Exercise 7.3, use the Bonferroni method to test for differences between the following 5 pairs of treatment means:

    A vs. B
    A vs. C
    A vs. D
    A vs. (B + C + D)/3
    B vs. (C + D)/2

CHAPTER 9
Two-Way Analysis of Variance

From the single-classification anova of Chapter 8 we progress to the two-way anova of the present chapter by a single logical step. Individual items may be grouped into classes representing the different possible combinations of two treatments or factors. Thus, the housefly wing lengths studied in earlier chapters, which yielded samples representing different medium formulations, might also be divided into males and females. Suppose we wanted to know not only whether medium 1 induced a different wing length than medium 2 but also whether male houseflies differed in wing length from females. Obviously, each combination of factors should be represented by a sample of flies; thus, for seven media and two sexes we need at least 7 × 2 = 14 samples. Similarly, the experiment testing five sugar treatments on pea sections (Box 8.1) might have been carried out at three different temperatures. This would have resulted in a two-way analysis of variance of the effects of sugars as well as of temperatures. It is the assumption of this two-way method of anova that a given temperature and a given sugar each contribute a certain amount to the growth of a pea section, and that these two contributions add their effects without influencing each other. In Section 9.1 we shall see how departures from this assumption are measured.

The computation of a two-way anova for replicated subclasses (more than one variate per subclass, or factor combination) is shown in Section 9.1, which also contains a discussion of the meaning of interaction as used in statistics. Significance testing in a two-way anova is the subject of Section 9.2. This is followed by Section 9.3, on two-way anova without replication, or with only a single variate per subclass; the well-known method of paired comparisons is a special case of a two-way anova without replication. The two factors in the present design may represent either Model I or Model II effects or one of each, in which case we talk of a mixed model.

9.1 Two-way anova with replication

We illustrate the computation of a two-way anova in a study of oxygen consumption by two species of limpets at three concentrations of seawater. Eight replicate readings were obtained for each combination of species and seawater concentration. The data are featured in Box 9.1. We have continued to call the number of columns a and are calling the number of rows b; the sample size for each cell (row and column combination) of the table is n. Each cell, also called a subgroup or subclass, thus represents eight oxygen consumption readings.

The computational steps labeled Preliminary computations in Box 9.1 provide an efficient procedure for the analysis of variance, but we shall undertake several digressions to ensure that the concepts underlying this design are appreciated by the reader. You will obtain closer insight into the structure of this design as we explain the computations; we shall also consider the expression for decomposing variates in a two-way anova.

We commence by considering the six subclasses as though they were six groups in a single-classification anova. Steps 1 through 3 in Box 9.1 correspond to the identical steps in Box 8.1, although the symbolism has changed slightly. If we had no further classification of these six subgroups by species or salinity, such an anova would test whether there was any variation among the six subgroups over and above the variance within the subgroups. But since we have the subdivision by species and salinity, our only purpose here is to compute some quantities necessary for the further analysis. From these quantities we obtain SS_subgr and SS_within in steps 7 and 12, corresponding to steps 5, 6, and 7 in the layout of Box 8.1, since in place of a groups we now have ab subgroups. The results of this preliminary anova are featured in Table 9.1.

The computation is continued by finding the sums of squares for rows and columns of the table. This is done by the general formula stated at the end of Section 8.3. Thus, for columns, we square the column sums, sum the resulting squares, and divide the result by 24, the number of items per column; this is step 4 in Box 9.1. A similar quantity is computed for rows (step 5). To complete the anova, we also need a correction term, which is labeled step 6.

[BOX 9.1: Two-way anova with replication. Data: oxygen consumption of two species of limpets, A. scabra and A. digitalis, at three seawater concentrations (100%, 75%, and 50%); eight replicate readings per subclass.]

BOX 9.1 Continued

Preliminary computations

1. Grand total = ΣΣΣY = 461.74

2. Sum of the squared observations = ΣΣΣY² = ··· + (12.30)² = 5065.1530

3. Sum of the squared subgroup (cell) totals, divided by the sample size of the subgroups = ΣΣ(ΣY)²/n = [(84.49)² + ··· + (98.61)²]/8 = 4663.6317

4. Sum of the squared column totals divided by the sample size of a column = Σ(ΣΣY)²/bn = [(245.00)² + (216.74)²]/(3 × 8) = 4458.3844

5. Sum of the squared row totals divided by the sample size of a row = Σ(ΣΣY)²/an = [(143.92)² + (121.82)² + (196.00)²]/(2 × 8) = 4623.0674

6. Grand total squared and divided by the total sample size = correction term CT = (ΣΣΣY)²/abn = (461.74)²/(2 × 3 × 8) = 4441.7464

7. SS_subgr = ΣΣ(ΣY)²/n − CT = quantity 3 − quantity 6 = 4663.6317 − 4441.7464 = 221.8853

8. SS_total = ΣΣΣY² − CT = quantity 2 − quantity 6 = 5065.1530 − 4441.7464 = 623.4066

9. SS_A (SS of columns) = quantity 4 − quantity 6 = 4458.3844 − 4441.7464 = 16.6380

10. SS_B (SS of rows) = quantity 5 − quantity 6 = 4623.0674 − 4441.7464 = 181.3210

11. SS_A×B (interaction SS) = SS_subgr − SS_A − SS_B = quantity 7 − quantity 9 − quantity 10 = 221.8853 − 16.6380 − 181.3210 = 23.9263

12. SS_within (within subgroups; error SS) = SS_total − SS_subgr = quantity 8 − quantity 7 = 623.4066 − 221.8853 = 401.5213

As a check on your computations, ascertain that the following relations hold for some of the above quantities: 2 ≥ 3 ≥ 4 ≥ 6, and 3 ≥ 5 ≥ 6.

Explicit formulas for these sums of squares, suitable for computer programs, are as follows:

9a. SS_A = nb Σ(Ȳ_A − Ȳ)²

10a. SS_B = na Σ(Ȳ_B − Ȳ)²

11a. SS_A×B = n ΣΣ(Ȳ_subgr − Ȳ_A − Ȳ_B + Ȳ)²

Now fill in the anova table.

Anova table

Source of variation            df       SS        MS        F_s
A (columns: species)            1     16.6380   16.638    1.740 ns
B (rows: salinities)            2    181.3210   90.660    9.483**
A × B (interaction)             2     23.9263   11.963    1.251 ns
Within subgroups (error)       42    401.5213    9.560
Total                          47    623.4066

F_0.05[2,42] = 3.22        F_0.01[2,42] = 5.15

Since this is a Model I anova, all mean squares are tested over the error MS. For a discussion of significance tests, see Section 9.2. The expected mean squares given first are correct for the fixed-treatment-effects model for both factors; below them are the corresponding expressions for the other models.

Expected MS (Model I):
    A:       σ² + [nb/(a − 1)] Σα²
    B:       σ² + [na/(b − 1)] Σβ²
    A × B:   σ² + {n/[(a − 1)(b − 1)]} ΣΣ(αβ)²
    Within:  σ²

Source of variation    Model II                      Mixed model (A fixed, B random)
A                      σ² + nσ²_AB + nbσ²_A          σ² + nσ²_AB + [nb/(a − 1)] Σα²
B                      σ² + nσ²_AB + naσ²_B          σ² + naσ²_B
A × B                  σ² + nσ²_AB                   σ² + nσ²_AB
Within subgroups       σ²                            σ²

Conclusions. Oxygen consumption does not differ significantly between the two species of limpets but differs with the salinity: at 50% seawater, the O₂ consumption is increased. Salinity appears to affect the two species equally, for there is insufficient evidence of a species × salinity interaction.
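Every quantity in the box can be recomputed from the six subgroup totals alone. A sketch (Python; the cell totals and ΣΣΣY² = 5065.1530 are read from Box 9.1, and the small last-decimal differences from the boxed values reflect rounding in the printed totals):

```python
# Subgroup (cell) totals from Box 9.1: rows = salinities (100%, 75%, 50%),
# columns = species (A. scabra, A. digitalis); n = 8 readings per cell.
cells = [[84.49, 59.43],
         [63.12, 58.70],
         [97.39, 98.61]]
n = 8
b, a = len(cells), len(cells[0])          # 3 rows, 2 columns
sum_y2 = 5065.1530                        # quantity 2, from Box 9.1

grand = sum(sum(row) for row in cells)                          # quantity 1
ct = grand ** 2 / (a * b * n)                                   # quantity 6
ss_subgr = sum(t ** 2 for row in cells for t in row) / n - ct   # step 7
ss_a = sum(sum(row[j] for row in cells) ** 2 for j in range(a)) / (b * n) - ct
ss_b = sum(sum(row) ** 2 for row in cells) / (a * n) - ct
ss_ab = ss_subgr - ss_a - ss_b                                  # step 11
ss_within = sum_y2 - ct - ss_subgr                              # step 12
ms_within = ss_within / (a * b * (n - 1))

print(round(ss_a, 2), round(ss_b, 2), round(ss_ab, 2))  # 16.64 181.32 23.93
print(round(ss_b / (b - 1) / ms_within, 2))             # 9.48, the salinity F_s
```

Compare the printed F with the 9.483** of the box: the row (salinity) effect is the only significant one.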

TABLE 9.1
Preliminary anova of subgroups in two-way anova. Data from Box 9.1.

Source of variation       df       SS        MS
Among subgroups            5    221.8853   44.377**
Within subgroups          42    401.5213    9.560
Total                     47    623.4066

Let us return for a moment to the preliminary analysis of variance in Table 9.1, which divided the total sum of squares into two parts: the sum of squares among the six subgroups, and that within the subgroups, the error sum of squares. The new sums of squares pertaining to row and column effects clearly are not part of the error, but must contribute to the differences that comprise the sum of squares among the six subgroups. We therefore subtract the row SS and the column SS from the subgroup SS. Since the rows and columns are based on equal sample sizes, we do not have to obtain a separate quotient for the square of each row or column sum, but carry out a single division after accumulating the squares of the sums; from these quotients we subtract the correction term, computed as quantity 6. These subtractions are carried out as steps 9 and 10. The row SS is 181.3210, and the column SS is 16.6380. Together they add up to 197.9590, almost but not quite the value of the subgroup sum of squares, which is 221.8853. The difference represents a third sum of squares, called the interaction sum of squares, whose value in this case is 23.9263.

We shall discuss the meaning of this new sum of squares presently. At the moment let us say only that it is almost always present (but not necessarily significant) and that it generally need not be independently computed, but may be obtained, as illustrated above, by the subtraction of the row SS and the column SS from the subgroup SS. This procedure is shown graphically in Figure 9.1, which illustrates the decomposition of the total sum of squares into the subgroup SS and error SS; the former is subdivided into the row SS, column SS, and interaction SS. The relative magnitudes of these sums of squares will differ from experiment to experiment. In Figure 9.1 they are not shown proportional to their actual values in the limpet experiment; otherwise the area representing the row SS would have to be about 11 times that allotted to the column SS.

Before we can intelligently test for significance in this anova we must understand the meaning of interaction. We can best explain interaction in a two-way anova by means of an artificial illustration based on the limpet data we have just studied. If we interchange the readings for 75% and 50% seawater of Acmaea digitalis only, we obtain the data table shown in Table 9.2. Only the sums of the subgroups, rows, and columns are given.

so". Species Seawater concentration A. we d o find s o m e differences. t h e t o t a l for t h a t c o l u m n w a s n o t altered a n d c o n s e q u e n t l y t h e c o l u m n SS did n o t c h a n g e .638 ns 5.. All t h a t we h a v e d o n e is t o interc h a n g e the c o n t e n t s of t h e l o w e r t w o cells in t h e r i g h t . a n d 50% s e a w a t e r c o n c e n t r a t i o n s of Acmaea digitalis in Box 9.w a y a n o v a w i t h r f .9.73 156.39 245.1 D i a g r a m m a t i c r e p r e s e n t a t i o n of the p a r t i t i o n i n g of the total s u m s of s q u a r e s in a t w o .61 58. W e n o t e t h a t the SS b e t w e e n species (between c o l u m n s ) is u n c h a n g e d .25 "S Column SS = 10.74 143.70 216.3566 194.3210 T o t a l SS = 77.49 63. Σ Completed anova Sintrce of variation 84. H o w e v e r ..1 have been i n t e r c h a n g e d . p i r ation 193 R o w SS = 181.09 461/74 df SS MS Species Salinities Sp χ Sal Error Total 1 2 2 42 47 16. T h e r e a d i n g s for 75'7. T h e a r e a s of the subdivisions are not s h o w n p r o p o r t i o n a l to the m a g n i t u d e s of the s u m s of squares.1 / t w o .8803 I n t e r a c t i o n S'.8907 401.w a y o r t h o g o n a l a n o v a .5213 623.560 . O n l y s u b g r o u p a n d marginal totals are given below.4066 16.570.00 59.445** 9. 2 An artificial example to illustrate the meaning of interaction.92 161.. 9 . since we a r e u s i n g the s a m e d a t a .02(53 E r r o r AS = 401.43 98.5213 FIGURE 9.h a n d c o l u m n of the table.12 97.F.6380 10.S* = 23.6380 • S u b g r o u p SS = 211. W h e n we p a r t i t i o n t h e s u b g r o u p SS. 75". s u r p r i s i n g . Since the c h a n g e we m a d e w a s w i t h i n o n e c o l u m n . scahra A digitalis £ 100".178 m 97. t h e s u m s TABl.
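The subtraction procedure described above is easy to mechanize. The sketch below (Python with NumPy; the array layout and function name are ours, not the book's) partitions the total sum of squares of any balanced two-way design in the manner of Figure 9.1:

```python
import numpy as np

def two_way_ss(y):
    """Partition the total SS of a balanced two-way anova.

    y has shape (a, b, n): a rows (e.g. salinities), b columns
    (e.g. species), and n replicates per subgroup.
    """
    a, b, n = y.shape
    grand = y.mean()
    total = ((y - grand) ** 2).sum()
    cell = y.mean(axis=2)                                  # subgroup means
    subgroup = n * ((cell - grand) ** 2).sum()
    error = total - subgroup
    rows = b * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
    cols = a * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
    interaction = subgroup - rows - cols                   # obtained by subtraction
    return {"total": total, "subgroup": subgroup, "error": error,
            "rows": rows, "cols": cols, "interaction": interaction}
```

With any balanced array, rows + columns + interaction reproduces the subgroup SS, and subgroup + error reproduces the total SS, just as in the diagram.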

However, the sums of the second and third rows have been altered appreciably as a result of the interchange of the readings for 75% and 50% salinity in A. digitalis. The sum for 75% salinity is now very close to that for 50% salinity, and the difference between the salinities, previously quite marked, is now no longer so. By contrast, the interaction SS, obtained by subtracting the sums of squares of rows and columns from the subgroup SS, is now a large quantity. Remember that the subgroup SS is the same in the two examples. In the first example we subtracted sums of squares due to the effects of both species and salinities, leaving only a tiny residual representing the interaction. In the second example these two main effects account for only a little of the subgroup sum of squares, leaving the interaction sum of squares as a substantial residual. What is the essential difference between these two examples?

In Table 9.3 we have shown the subgroup and marginal means for the original data from Box 9.1 and for the altered data of Table 9.2.

TABLE 9.3
Comparison of means of the data in Box 9.1 and Table 9.2. The upper half gives the means of the original data, the lower half the means of the artificial data. (The individual entries are garbled in this copy and are omitted.)

The original results are quite clear: at 75% salinity, oxygen consumption is lower than at the other two salinities, and this is true for both species. We note further that A. scabra consumes more oxygen than A. digitalis, and this holds at each of the salinities. Thus our statements about differences due to species or to salinity can be made largely independently of each other. However, if we had to interpret the artificial data (lower half of Table 9.3), we would note that although A. scabra still consumes more oxygen than A. digitalis (since column sums have not changed), this difference depends greatly on the salinity. At 100% and 50%, A. scabra consumes considerably more oxygen than A. digitalis, but at 75% this relationship is reversed. We are thus no longer able to make an unequivocal statement about the amount of oxygen taken up by the two species; we have to qualify our statement by the seawater concentration at which they are kept.

Similarly, if we examine the effects of salinity in the artificial example, we notice a mild increase in oxygen consumption at 75%. However, again we have to qualify this statement by the species of the consuming limpet: scabra consumes least at 75%, while digitalis consumes most at this concentration. In terms of the subgroup means, at 100% and 50% salinity Ȳscabra > Ȳdigitalis, but at 75%, Ȳscabra < Ȳdigitalis. If the artificial data of Table 9.2 were real, it would be of little value to state that 75% salinity led to slightly greater consumption of oxygen. This statement would cover up the important differences in the data, which are that scabra consumes least at this concentration, while digitalis consumes most.

This dependence of the effect of one factor on the level of another factor is called interaction. It is a common and fundamental scientific idea. It indicates that the effects of the two factors are not simply additive, but that any given combination of levels of factors, such as salinity combined with any one species, contributes a positive or negative increment to the level of expression of the variable. (Note that "levels" in anova is customarily used in a loose sense to include not only continuous factors, such as the salinity in the present example, but also qualitative factors, such as the two species of limpets.) In common biological terminology, a large positive increment of this sort is called synergism. When drugs act synergistically, the result of the interaction of the two drugs may be above and beyond the sum of the separate effects of each drug. When levels of two factors in combination inhibit each other's effects, we call it interference. Synergism and interference will both tend to magnify the interaction SS.

Testing for interaction is an important procedure in analysis of variance. We are now able to write an expression symbolizing the decomposition of a single variate in a two-way analysis of variance in the manner of Expression (7.2) for single-classification anova:

    Y_ijk = μ + α_i + β_j + (αβ)_ij + ε_ijk        (9.1)

where μ equals the parametric mean of the population; α_i is the fixed treatment effect for the ith group of treatment A; β_j is the fixed treatment effect of the jth group of treatment B; (αβ)_ij is the interaction effect in the subgroup representing the ith group of factor A and the jth group of factor B; and ε_ijk is the error term of the kth item in subgroup ij. Variate Y_ijk is thus the kth item in the subgroup representing the ith group of treatment A and the jth group of treatment B. We make the usual assumption that ε_ijk is normally distributed with a mean of 0 and a variance of σ². The expression above assumes that both factors represent fixed treatment effects, Model I. This would seem reasonable, since species as well as salinity are fixed treatments. If one or both of the factors represent Model II effects, we replace the α_i and/or β_j in the formula by A_i and/or B_j.

What actual deviations does an interaction SS represent? In previous chapters we have seen that each sum of squares represents a sum of squared deviations. We can see this easily by referring back to the anovas of Table 9.1. The variation among subgroups is represented by (Ȳ − Ȳ̿), where Ȳ stands for the subgroup mean and Ȳ̿ for the grand mean.
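Expression (9.1) can be made concrete with a small numerical sketch (Python with NumPy; all effect values below are hypothetical illustrations, not estimates from the limpet data). With the error term set to zero, the interaction SS recovered by the partition equals n Σ(αβ)² exactly:

```python
import numpy as np

# Hypothetical effects, each set summing to zero as Expression (9.1)
# requires; the interaction effects sum to zero over every row and column.
mu, n = 10.0, 4
alpha = np.array([1.0, -1.0])                  # factor A, a = 2 levels
beta = np.array([2.0, 0.0, -2.0])              # factor B, b = 3 levels
ab = np.array([[0.5, -1.0, 0.5],
               [-0.5, 1.0, -0.5]])             # interaction effects (alpha*beta)_ij

# Y_ijk = mu + alpha_i + beta_j + (ab)_ij, with the error term set to zero
y = mu + alpha[:, None, None] + beta[None, :, None] + ab[:, :, None] + np.zeros((2, 3, n))

grand = y.mean()
cell = y.mean(axis=2)
subgroup_ss = n * ((cell - grand) ** 2).sum()
row_ss = 3 * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()   # b * n per A level
col_ss = 2 * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()   # a * n per B level
inter_ss = subgroup_ss - row_ss - col_ss
print(inter_ss, n * (ab ** 2).sum())           # both equal n * sum((ab)^2) = 12
```

Because the α, β, and (αβ) sets each sum to zero, the partition recovers each added component without distortion: the row SS equals bn Σα², the column SS equals an Σβ², and the residual is exactly the interaction term.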

When we subtract the deviations due to rows (R̄ − Ȳ̿) and those due to columns (C̄ − Ȳ̿) from those due to subgroups, we obtain

    (Ȳ − Ȳ̿) − (R̄ − Ȳ̿) − (C̄ − Ȳ̿) = Ȳ − Ȳ̿ − R̄ + Ȳ̿ − C̄ + Ȳ̿ = Ȳ − R̄ − C̄ + Ȳ̿

This somewhat involved expression is the deviation due to interaction. When we evaluate one such expression for each subgroup, square it, sum the squares, and multiply the sum by n, we obtain the interaction SS. This partition of the deviations also holds for their squares, because the sums of the products of the separate terms cancel out.

A simple method for revealing the nature of the interaction present in the data is to inspect the means of the original data table. We can do this in Table 9.3. The original data, showing no interaction, yield the following pattern of relative magnitudes:

              100%      75%      50%
  Scabra           ∨         ∧
  Digitalis        ∨         ∧

The relative magnitudes of the means in the lower part of Table 9.3 can be summarized as follows:

              100%      75%      50%
  Scabra           ∨         ∧
  Digitalis        ∧         ∨

When the pattern of signs expressing relative magnitudes is not uniform, as in this latter table, interaction is indicated. As long as the pattern of means is consistent, as in the former table, interaction may not be present. However, interaction is often present without change in the direction of the differences; sometimes only the relative magnitudes are affected. In any case, the statistical test is needed to determine whether the deviations are larger than can be expected from chance alone. In summary, when the effect of two treatments applied together cannot be predicted from the average responses of the separate factors, statisticians call this phenomenon interaction and test its significance by means of an interaction mean square.
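The deviation formula can be checked numerically. The sketch below (Python with NumPy; the array layout and variable names are ours) evaluates Ȳ − R̄ − C̄ + Ȳ̿ for each subgroup, squares and sums these deviations, multiplies by n, and compares the result with the interaction SS obtained by subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 3, 2, 8                          # rows, columns, replicates per cell
y = rng.normal(10.0, 3.0, size=(a, b, n))

grand = y.mean()                           # grand mean
cell = y.mean(axis=2)                      # subgroup means, Ybar
row = y.mean(axis=(1, 2))                  # row means, Rbar
col = y.mean(axis=(0, 2))                  # column means, Cbar

# Deviation due to interaction for each subgroup: Ybar - Rbar - Cbar + grand
dev = cell - row[:, None] - col[None, :] + grand
inter_direct = n * (dev ** 2).sum()

# The same quantity by subtracting row and column SS from the subgroup SS
subgroup_ss = n * ((cell - grand) ** 2).sum()
row_ss = b * n * ((row - grand) ** 2).sum()
col_ss = a * n * ((col - grand) ** 2).sum()
print(np.isclose(inter_direct, subgroup_ss - row_ss - col_ss))  # True
```

The agreement is exact (up to rounding) for any balanced layout, which is why the subtraction shortcut is safe; it also shows that the interaction SS, being a sum of squared deviations, can never be negative.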

Interaction is a very common phenomenon. If we say that the effect of density on the fecundity or weight of a beetle depends on its genotype, we imply that a genotype × density interaction is present. If the success of several alternative surgical procedures depends on the nature of the postoperative treatment, we speak of a procedure × treatment interaction. Or if the effect of temperature on a metabolic process is independent of the effect of oxygen concentration, we say that temperature × oxygen interaction is absent.

Significance testing in a two-way anova will be deferred until the next section. However, we should point out that the computational steps 4 and 9 of Box 9.1 could have been shortened by employing the simplified formula for a sum of squares between two groups, illustrated in Section 8, since only two groups are involved. Similarly, in an analysis with only two rows and two columns, the interaction SS can be computed directly as

    (sum of one diagonal − sum of the other diagonal)² / abn

9.2 Two-way anova: Significance testing

Before we can test hypotheses about the sources of variation isolated in Box 9.1, we must become familiar with the expected mean squares for this design. In the anova table of Box 9.1 we first show the expected mean squares for Model I, both species differences and seawater concentrations being fixed treatment effects. The terms should be familiar in the context of your experience in the previous chapter. The quantities Σα², Σβ², and Σ(αβ)² represent added components due to treatment for columns, rows, and interaction, respectively. Note that the within-subgroups or error MS again estimates the parametric variance of the items, σ². The most important fact to remember about a Model I anova is that the mean square at each level of variation carries, besides the parametric variance of the items, only the added effect due to that level of treatment; it does not contain any term from a lower line. Thus, the expected MS of factor A contains only the parametric variance of the items plus the added term due to factor A, but does not also include interaction effects. In Model I, the significance test is therefore simple and straightforward: any source of variation is tested by the variance ratio of the appropriate mean square over the error MS. Thus, for the appropriate tests we employ the variance ratios A/Error, B/Error, and (A × B)/Error, where each boldface term signifies a mean square; A = MS_A and Error = MS_within.

When we do this in the example of Box 9.1, we find only factor B, salinity, significant. We conclude that the differences in oxygen consumption are induced by varying salinities (O2 consumption responds in a V-shaped manner). Neither factor A nor the interaction is significant, and there does not appear to be sufficient evidence for species differences in oxygen consumption. The tabulation of the relative magnitudes of the means in the previous section shows that the pattern of signs in the two lines is identical.
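For the two-rows, two-columns case, the diagonal shortcut can be checked against the general subtraction method. A sketch (Python with NumPy; the function names are ours), assuming a balanced 2 × 2 design with n replicates per cell:

```python
import numpy as np

def interaction_ss_general(y):
    """Interaction SS by subtraction for a balanced layout of shape (a, b, n)."""
    a, b, n = y.shape
    grand = y.mean()
    subgroup = n * ((y.mean(axis=2) - grand) ** 2).sum()
    rows = b * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
    cols = a * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
    return subgroup - rows - cols

def interaction_ss_2x2(y):
    """Diagonal shortcut: (sum of one diagonal - sum of the other)^2 / abn."""
    a, b, n = y.shape                      # requires a = b = 2
    t = y.sum(axis=2)                      # cell totals
    diag_diff = (t[0, 0] + t[1, 1]) - (t[0, 1] + t[1, 0])
    return diag_diff ** 2 / (a * b * n)

rng = np.random.default_rng(1)
y = rng.normal(size=(2, 2, 5))
print(np.isclose(interaction_ss_general(y), interaction_ss_2x2(y)))  # True
```

The two routes agree for any balanced 2 × 2 data set, because in that case each subgroup's interaction deviation is plus or minus one quarter of the diagonal difference of the cell means.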

Although the oxygen consumption curves of the two species, when graphed, appear far from parallel (see Figure 9.2), this suggestion of a species × salinity interaction cannot be shown to be significant when compared with the within-subgroups variance. Finding a significant difference among salinities does not conclude the analysis. The data suggest that at 75% salinity there is a real reduction in oxygen consumption; whether this is really so could be tested by the methods of Section 8.6.

FIGURE 9.2
Oxygen consumption by two species of limpets, A. scabra and A. digitalis, at three salinities (50, 75, and 100% seawater). Data from Box 9.1.

When we analyze the results of the artificial example in Table 9.2, we find only the interaction MS significant. Thus, we would conclude that the response to salinity differs in the two species. To support this contention, we might point to inspection of the data, which shows that at 75% salinity A. scabra consumes least oxygen and A. digitalis consumes most. A simple statement of response to salinity would be unclear and, in any case, might be misleading, since the mean of A. digitalis at 75% is only very slightly higher, but that of A. scabra is far higher at 100% seawater than at 75%. The presence of interaction thus makes us qualify our statements: "The pattern of response to changes in salinity differed in the two species." We would consequently have to describe separate, nonparallel response curves for the two species. In the last (artificial) example, the mean squares of the two factors (main effects) are not significant, and many statisticians would not even test them once they found the interaction mean square to be significant, since in such a case an overall statement for each factor would have little meaning.

Occasionally, however, it becomes important to test for overall significance in a Model I anova in spite of the presence of interaction. We may wish to demonstrate the significance of the effect of a drug, regardless of its significant interaction with age of the patient. In such a case we might wish to test the mean square among drug concentrations (over the error MS), regardless of whether the interaction MS is significant.

Box 9.1 also lists the expected mean squares for a Model II anova and for a mixed-model two-way anova. In the Model II anova, variance components for columns (factor A), for rows (factor B), and for interaction make their appearance; they are designated σ²_A, σ²_B, and σ²_AB, respectively. Note that the two main effects contain the variance component of the interaction as well as their own variance component. In a Model II anova we therefore first test (A × B)/Error. If the interaction is significant, we continue testing A/(A × B) and B/(A × B). But when A × B is not significant, some authors suggest computation of a pooled error MS = (SS_A×B + SS_within)/(df_A×B + df_within) to test the significance of the main effects. The conservative position is to continue to test the main effects over the interaction MS, and we shall follow this procedure in this book.

In the mixed model, in which one factor is assumed to be fixed and the other random, it is the mean square representing the fixed treatment that carries with it the variance component of the interaction, while the mean square representing the random factor contains only the error variance and its own variance component and does not include the interaction component. We therefore test the MS of the random main effect over the error, but test the fixed-treatment MS over the interaction. If the roles of the two factors are reversed, the expected mean squares change accordingly.

9.3 Two-way anova without replication

In many experiments there will be no replication for each combination of factors represented by a cell in the data table. In such cases we cannot easily talk of "subgroups," since each cell contains a single reading only. Frequently it may be too difficult or too expensive to obtain more than one reading per cell, or the measurements may be known to be so repeatable that there is little point in estimating their error. As we shall see in the following, a two-way anova without replication can be properly applied only with certain assumptions: for some models and tests we must assume that there is no interaction present.

Our illustration for this design is from a study in metabolic physiology. In Box 9.2 we show levels of a chemical, serum pyridoxal phosphate (S-PLP), in the blood serum of eight students before, immediately after, and 12 hours after the administration of an alcohol dose. Each student has been measured only once at each time. What is the appropriate model for this anova? Clearly, the times are Model I. The eight individuals, however, are not likely to be of specific interest. It is improbable that an investigator would try to ask why student 4 has an S-PLP level so much higher than that of student 3. We would draw more meaningful conclusions from this problem if we considered the eight individuals to be randomly sampled; we could then estimate the variation among individuals with respect to the effect of alcohol over time. The computations are shown in Box 9.2. They are the same as those in Box 9.1 except that the expressions to be evaluated are considerably simpler. Since n = 1, much of the summation can be omitted.

BOX 9.2
Two-way anova without replication.

Serum pyridoxal phosphate (S-PLP) content (ng per ml of serum) of blood serum before and after ingestion of alcohol in eight subjects. Time is a fixed treatment effect, while differences between individuals are considered to be random effects; hence this is a mixed-model anova. The eight sets of three readings are treated as replications (blocks) in this analysis.

Factor A: Time (a = 3): before alcohol ingestion, immediately after ingestion, and 12 hours later. Factor B: Individuals (b = 8). (The individual readings are garbled in this copy and are omitted; the grand total is 413.40.)

Source: Data from Leinert et al. (1983).

Preliminary computations

1. Grand total = ΣΣY = 413.40
2. Sum of the squared observations = ΣΣY² = 8349.4138
3. Sum of squared column totals divided by the sample size of a column = Σ(ΣY)²/b = 7249.8059
4. Sum of squared row totals divided by the sample size of a row = Σ(ΣY)²/a = 8127.7578
5. Grand total squared and divided by the total sample size = correction term CT = (quantity 1)²/ab = (413.40)²/24 = 7120.8150
6. SS_total = ΣΣY² − CT = quantity 2 − quantity 5 = 8349.4138 − 7120.8150 = 1228.5988
7. SS_A (SS of columns) = quantity 3 − quantity 5 = 7249.8059 − 7120.8150 = 128.9909
8. SS_B (SS of rows) = quantity 4 − quantity 5 = 8127.7578 − 7120.8150 = 1006.9428
9. SS_error (remainder, discrepance) = SS_total − SS_A − SS_B = quantity 6 − quantity 7 − quantity 8 = 1228.5988 − 128.9909 − 1006.9428 = 92.6651

Completed anova

Source of variation    df   SS          MS         Fs
Time (columns)         2    128.9909    64.4955    9.74**
Individuals (rows)     7    1006.9428   143.8490   21.73**
Remainder              14   92.6651     6.6189
Total                  23   1228.5988

Expected MS (mixed model): Time, σ² + σ²_AB + b Σα²/(a − 1); Individuals, σ² + aσ²_B; Remainder, σ² + σ²_AB.
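The quantities of Box 9.2 reduce to a few lines of code. The sketch below (Python with NumPy; a generic routine of ours, not the authors') computes the sums of squares and mean squares for an a × b table with one reading per cell:

```python
import numpy as np

def anova_no_replication(y):
    """Sums of squares for an a x b table with one reading per cell.

    Rows of y are the a treatment levels (the columns of Box 9.2);
    columns of y are the b blocks (individuals).
    """
    a, b = y.shape
    ct = y.sum() ** 2 / (a * b)                   # correction term (quantity 5)
    ss_total = (y ** 2).sum() - ct                # quantity 6
    ss_a = (y.sum(axis=1) ** 2).sum() / b - ct    # treatment SS (quantity 7)
    ss_b = (y.sum(axis=0) ** 2).sum() / a - ct    # block SS (quantity 8)
    ss_rem = ss_total - ss_a - ss_b               # remainder, or discrepance (quantity 9)
    ms_a = ss_a / (a - 1)
    ms_b = ss_b / (b - 1)
    ms_rem = ss_rem / ((a - 1) * (b - 1))
    return ss_total, ss_a, ss_b, ss_rem, ms_a, ms_b, ms_rem
```

Applied to the S-PLP table this would reproduce quantities 5 through 9 of the box; for any table, ss_a + ss_b + ss_rem equals ss_total, since with n = 1 there is no within-cell error to absorb the difference.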

The subgroup sum of squares in this example is the same as the total sum of squares; with a single reading per cell, there is no error sum of squares based on variation within subgroups. Thus, after we subtract the sum of squares for columns (factor A) and for rows (factor B) from the total SS, we are left with only a single sum of squares, which is the equivalent of the previous interaction SS but which is now the only source for an error term in the anova. This SS is known as the remainder SS or the discrepance. If this is not immediately apparent, consult Figure 9.3, which, when compared with Figure 9.1, illustrates that the error sum of squares based on variation within subgroups is missing in this example.

FIGURE 9.3
Diagrammatic representation of the partitioning of the total sums of squares in a two-way orthogonal anova without replication. Total SS = 1228.5988 comprises Row SS = 1006.9428, Column SS = 128.9909, and the Interaction SS = 92.6651, which serves as the remainder; Error SS = 0. The areas of the subdivisions are not shown proportional to the magnitudes of the sums of squares.

If you refer to the expected mean squares for the two-way anova in Box 9.1, you will discover why we made the statement earlier that for some models and tests in a two-way anova without replication we must assume that the interaction is not significant. If interaction is present, only a Model II anova can be entirely tested, while in a mixed model only the fixed level can be tested over the remainder mean square. But in a pure Model I anova, or for the random factor in a mixed model, it would be improper to test the main effects over the remainder unless we could reliably assume that no added effect due to interaction is present. Since we assume no interaction here, the row and column mean squares are tested over the error (remainder) MS.

General inspection of the data in Box 9.2 convinces us that the trends with time for any one individual are faithfully reproduced for the other individuals; thus, interaction is unlikely to be present. If, for example, some individuals had not responded with a lowering of their S-PLP levels after ingestion of alcohol, interaction would have been apparent, and the test of the mean square among individuals carried out in Box 9.2 would not have been legitimate. The results are not surprising; casual inspection of the data would have predicted our findings. Differences with time are highly significant, yielding an Fs value of 9.74. The added variance among individuals is also highly significant.

A common application of two-way anova without replication is the repeated testing of the same individuals.

By this we mean that the same group of individuals is tested repeatedly over a period of time. The individuals are one factor (usually considered as random and serving as replication), and the time dimension is the second factor, a fixed treatment effect. For example, we might measure growth of a structure in ten individuals at regular intervals. When we test for the presence of an added variance component (due to the random factor), we again must assume that there is no interaction between time and the individuals; that is, the responses of the several individuals must be parallel through time. Another use of this design is found in various physiological and psychological experiments in which we test the same group of individuals for the appearance of some response after treatment. Examples include increasing immunity after antigen inoculations, altered responses after conditioning, and measures of learning after a number of trials. Thus, we may study the speed with which ten rats, repeatedly tested on the same maze, reach the end point. The fixed-treatment effect would be the successive trials to which the rats have been subjected; the second factor, the ten rats, is random, presumably representing a random sample of rats from the laboratory population.

One special case, common enough to merit separate discussion, is repeated testing of the same individuals in which only two treatments (a = 2) are given. This case is also known as paired comparisons, because each observation for one treatment is paired with one for the other treatment. The pair is composed of the same individuals tested twice or of two individuals with common experiences, so that we can legitimately arrange the data as a two-way anova.

Let us elaborate on this point. Suppose we test the muscle tone of a group of individuals, subject them to severe physical exercise, and measure their muscle tone once more. Since the same group of individuals will have been tested twice, we can arrange our muscle tone readings in pairs, each pair representing readings on one individual (before and after exercise). Such data are appropriately treated by a two-way anova without replication, which in this case would be a paired-comparisons test because there are only two treatment classes. This "before and after treatment" comparison is a very frequent design leading to paired comparisons. Another design simply measures two stages in the development of a group of organisms, time being the treatment intervening between the two stages. The example in Box 9.3 is of this nature: it measures lower face width in a group of girls at age five and in the same group of girls when they are six years old. The paired comparison is for each individual girl, between her face width when she is five years old and her face width at six years.

Paired comparisons often result from dividing an organism or other individual unit so that half receives treatment 1 and the other half treatment 2, which may be the control. Thus, if we wish to test the strength of two antigens or allergens, we might inject one into each arm of a single individual and measure the diameter of the red area produced. It would not be wise, from the point of view of experimental design, to test antigen 1 on individual 1 and antigen 2 on individual 2. These individuals may be differentially susceptible to the antigens, and we may learn little about the relative potency of the antigens, since this would be confounded by the differential responses of the subjects.

BOX 9.3
Paired comparisons (randomized blocks with a = 2).

Lower face width (skeletal bigonial diameter in cm) for 15 North American white girls measured when 5 and again when 6 years old.

Individuals   (1) 5-year-olds   (2) 6-year-olds   (3) Σ     (4) D = Y2 − Y1 (difference)
(The 15 individual rows are garbled in this copy and are omitted.)
Σ             111.79            114.79            226.58    3.00

Source: From a larger study by Newman and Meredith (1956).

Two-way anova without replication

Anova table

Source of variation            df   SS       MS        Fs           Expected MS
Ages (columns, factor A)       1    0.3000   0.3000    388.89**     σ² + σ²_AB + b Σα²/(a − 1)
Individuals (rows, factor B)   14   2.6367   0.1883    (244.14)**   σ² + aσ²_B
Remainder                      14   0.0108   0.00077                σ² + σ²_AB
Total                          29   2.9475

F0.01[1,14] = 8.86. (For the individuals MS, the conservative tabled value F0.01[12,14] is used, since a numerator df of 14 is not tabled.)

Conclusions. The variance ratio for ages is highly significant. We conclude that faces of 6-year-old girls are wider than those of 5-year-olds.

BOX 9.3
Continued

If we are willing to assume that the interaction σ²_AB is zero, we may also test for an added variance component among individual girls, and would find it significant.

The t test for paired comparisons

    ts = (D̄ − (μ1 − μ2)) / s_D̄

where D̄ is the mean difference between the paired observations,

    D̄ = ΣD/b = 3.00/15 = 0.20

and s_D̄ = s_D/√b is the standard error of D̄, calculated from the observed differences in column (4):

    s²_D = [ΣD² − (ΣD)²/b]/(b − 1) = (0.6216 − (3.00)²/15)/14 = 0.0216/14 = 0.001543
    s_D̄ = √(s²_D/b) = √(0.001543/15) = 0.010141

We assume that the true difference between the means of the two groups, μ1 − μ2, equals zero:

    ts = (0.20 − 0)/0.010141 = 19.7203    with b − 1 = 14 degrees of freedom

This yields P ≪ 0.01. Also ts² = 388.89, which equals the Fs obtained previously.

It is probably immaterial whether an antigen is injected into the right or left arm, but if we were designing such an experiment and knew little about the reaction of humans to antigens, we might, as a precaution, randomly allocate antigen 1 to the left or right arm for different subjects, antigen 2 being injected into the opposite arm. A much better design than using separate individuals, in any case, is first to inject antigen 1 into the left arm and antigen 2 into the right arm of each of n individuals and then to analyze the data as a two-way anova without replication, with n rows (individuals) and 2 columns (treatments). A similar example is the testing of certain plant viruses by rubbing a concentration of the virus over the surface of a leaf and counting the resulting lesions. Since different leaves are susceptible in different degrees, a conventional way of measuring the strength of the virus is to wipe it over the half of the leaf on one side of the midrib, rubbing the other half of the leaf with a control or standard solution.
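The identity ts² = Fs noted in the box can be demonstrated numerically. The sketch below (Python with NumPy; the data and names are invented for illustration) computes both statistics for paired readings on b individuals under a = 2 treatments:

```python
import numpy as np

def paired_t(y1, y2):
    """t statistic for paired comparisons, testing mean difference = 0."""
    d = y2 - y1
    b = len(d)
    s_dbar = d.std(ddof=1) / np.sqrt(b)   # standard error of the mean difference
    return d.mean() / s_dbar

def f_treatment_no_replication(y1, y2):
    """Fs for the treatment MS over the remainder MS in a 2 x b anova."""
    y = np.stack([y1, y2])                # shape (2, b)
    a, b = y.shape
    ct = y.sum() ** 2 / (a * b)
    ss_total = (y ** 2).sum() - ct
    ss_a = (y.sum(axis=1) ** 2).sum() / b - ct
    ss_b = (y.sum(axis=0) ** 2).sum() / a - ct
    ss_rem = ss_total - ss_a - ss_b
    return (ss_a / (a - 1)) / (ss_rem / ((a - 1) * (b - 1)))

rng = np.random.default_rng(7)
before = rng.normal(10.0, 1.0, size=15)   # invented "before" readings
after = before + rng.normal(0.2, 0.1, size=15)
print(np.isclose(paired_t(before, after) ** 2,
                 f_treatment_no_replication(before, after)))  # True
```

The equality holds for any paired data set: with a = 2, the treatment MS equals b D̄²/2 and the remainder MS equals s²_D/2, so their ratio is exactly the square of the paired t statistic.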

Another design leading to paired comparisons is to apply the treatment to two individuals sharing a common experience, be this genetic or environmental. Thus, a drug or a psychological test might be given to groups of twins or sibs, one of each pair receiving the treatment,
the other one not.

Finally, the paired-comparisons technique may be used when the two individuals to be compared share a single experimental unit and are thus subjected to common environmental experiences. If we have a set of rat cages, each of which holds two rats, and we are trying to compare the effect of a hormone injection with a control, we might inject one of each pair of rats with the hormone and use its cage mate as a control. This would yield a 2 × n anova for n cages.

One reason for featuring the paired-comparisons test separately is that it alone among the two-way anovas without replication has an equivalent, alternative method of analysis: the t test for paired comparisons, which is the traditional method of analyzing it.

The paired-comparisons case shown in Box 9.3 analyzes face widths of five- and six-year-old girls. The question being asked is whether the faces of six-year-old girls are significantly wider than those of five-year-old girls. The data are shown in columns (1) and (2) for 15 individual girls. Column (3) features the row sums that are necessary for the analysis of variance. The computations for the two-way anova without replication are the same as those already shown for Box 9.2 and thus are not shown in detail. The anova table shows that there is a highly significant difference in face width between the two age groups. If interaction is assumed to be zero, there is a large added variance component among the individual girls, undoubtedly representing genetic as well as environmental differences.

The other method of analyzing paired-comparisons designs is the well-known t test for paired comparisons. It is quite simple to apply and is illustrated in the second half of Box 9.3. It tests whether the mean of sample differences between pairs of readings in the two columns is significantly different from a hypothetical mean, which the null hypothesis puts at zero. The standard error over which this is tested is the standard error of the mean difference. The difference column has to be calculated and is shown in column (4) of the data table in Box 9.3. The computations are quite straightforward, and the conclusions are the same as for the two-way anova. This is another instance in which we obtain the value of Fs when we square the value of ts. Although the paired-comparisons t test is the traditional method of solving this type of problem, we prefer the two-way anova. Its computation is no more time-consuming, and it has the advantage of providing a measure of the variance component among the rows (blocks). This is useful knowledge, because if there is no significant added variance component among blocks, one might simplify the analysis and design of future, similar studies by employing single-classification anova.
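The identity just mentioned, that squaring ts yields Fs, can be verified numerically. A minimal Python sketch using hypothetical paired measurements (not the Box 9.3 data):

```python
import math

# Paired-comparisons t test and the matching two-way anova without
# replication, on hypothetical paired data (n pairs, b = 2 treatments).
pairs = [(7.33, 7.53), (7.49, 7.70), (7.27, 7.46), (7.93, 8.21),
         (7.56, 7.81), (7.81, 8.01), (7.46, 7.72), (6.94, 7.13)]
n = len(pairs)

# t test for paired comparisons: t_s = D-bar / s_D-bar, with n - 1 df.
d = [y2 - y1 for y1, y2 in pairs]                   # difference column
d_bar = sum(d) / n
s2_d = sum((x - d_bar) ** 2 for x in d) / (n - 1)   # variance of the differences
t_s = d_bar / math.sqrt(s2_d / n)

# Two-way anova without replication on the same layout.
grand = sum(y for p in pairs for y in p) / (2 * n)
col_means = [sum(p[j] for p in pairs) / n for j in (0, 1)]
ss_treat = n * sum((m - grand) ** 2 for m in col_means)            # df = 1
ss_rows = 2 * sum(((y1 + y2) / 2 - grand) ** 2 for y1, y2 in pairs)
ss_total = sum((y - grand) ** 2 for p in pairs for y in p)
ms_rem = (ss_total - ss_treat - ss_rows) / (n - 1)                 # remainder MS
f_s = ss_treat / ms_rem

print(round(t_s ** 2, 6) == round(f_s, 6))   # the identity t_s^2 = F_s
```

The treatment sum of squares reduces to nD̄²/2 and the remainder mean square to s²D/2, which is why the variance ratio collapses to exactly the square of the paired t statistic.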

Exercises

9.1 Swanson, Latshaw, and Tague (1921) determined soil pH electrometrically for various soil samples from Kansas. An extract of their data (acid soils) is shown below. Do subsoils differ in pH from surface soils (assume that there is no interaction between localities and depth for pH readings)?

County (soil type): Finney (Richfield silt loam), Montgomery (Summit silty clay loam), Doniphan (Brown silt loam), Jewell (Jewell silt loam), Jewell (Colby silt loam), Shawnee (Crawford silty clay loam), Cherokee (Oswego silty clay loam), Greenwood (Summit silty clay loam), Montgomery (Cherokee silt loam), Montgomery (Oswego silt loam), Cherokee (Bates silt loam), Cherokee (Cherokee silt loam), Cherokee (Neosho silt loam). [The paired surface and subsoil pH readings are not legibly reproduced in this scan.]

ANS. MS between surface and subsoils = 0.6246, MS residual = 0.6985, Fs = 0.894, which is clearly not significant at the 5% level.

9.2 The following data were extracted from a Canadian record book of purebred dairy cattle. Random samples of 10 mature (five-year-old and older) and 10 two-year-old cows were taken from each of five breeds (honor roll, 305-day class): Ayrshire, Canadian, Guernsey, Holstein-Friesian, and Jersey. The average butterfat percentages of these cows were recorded. This gave us a total of 100 butterfat percentages, broken down into five breeds and into two age classes (mature and 2-yr). Analyze and discuss your results. You will note that the tedious part of the calculation has been done for you: ΣY² = 2059.6109. [The 100 individual butterfat percentages are not legibly reproduced in this scan.]

9.3 Blakeslee (1921) studied length-width ratios of second seedling leaves of two types of Jimson weed called globe (G) and nominal (N). Three seeds of each type were planted in 16 pots (pot identification numbers 16533, 16534, 16550, 16668, 16767, 16768, 16770, 16771, 16773, 16775, 16776, 16777, 16780, 16781, 16787, 16789). The effect of pots is considered to be a Model II factor. Is there sufficient evidence to conclude that globe and nominal differ in length-width ratio? [The individual length-width ratios are not legibly reproduced in this scan.]

9.4 The following data were extracted from a more extensive study by Sokal and Karten (1964). The data represent mean dry weights (in mg) of three genotypes of beetles, Tribolium castaneum, reared at a density of 20 beetles per gram of flour. The four series of experiments represent replications. Test whether the genotypes differ in mean dry weight.

                 Genotypes
Series      ++        +b        bb
1         0.958     0.986     0.925
2         0.971     1.051     0.952
3         0.927     0.891     0.829
4         0.971     1.010     0.955

9.5 The mean length of developmental period (in days) for three strains of houseflies at seven densities is given. (Data by Sullivan and Sokal, 1963.) Do these flies differ in developmental period with density and among strains? You may assume absence of strain × density interaction.

Density per container: 60, 80, 160, 320, 640, 1280, 2560. Strains: OL, BELL, bwb. [The 21 mean developmental periods are not legibly reproduced in this scan.]

9.6 The following data are extracted from those of French (1976), who carried out a study of energy utilization in the pocket mouse Perognathus longimembris during hibernation at different temperatures. Is there evidence that the amount of food available affects the amount of energy consumed at different temperatures during hibernation? The table lists, for restricted food and for ad libitum food at 8°C and 18°C, the animal number and the energy used (kcal/g). [The individual values are not legibly reproduced in this scan.]

CHAPTER 10

Assumptions of Analysis of Variance

We shall now examine the underlying assumptions of the analysis of variance, methods for testing whether these assumptions are valid, the consequences for an anova if the assumptions are violated, and steps to be taken if the assumptions cannot be met. We should stress that before you carry out any anova on an actual research problem, you should assure yourself that the assumptions listed in this chapter seem reasonable. If they are not, you should carry out one of several possible alternative steps to remedy the situation.

In Section 10.1 we briefly list the various assumptions of analysis of variance. The assumptions include random sampling, independence, homogeneity of variances, normality, and additivity. We describe procedures for testing some of them and briefly state the consequences if the assumptions do not hold, and we give instructions on how to proceed if they do not. In many cases, departure from the assumptions of analysis of variance can be rectified by transforming the original data by using a new scale.

212    CHAPTER 10 / ASSUMPTIONS OF ANALYSIS OF VARIANCE

The rationale behind this is given in Section 10.2, together with some of the common transformations.

When transformations are unable to make the data conform to the assumptions of analysis of variance, we must use other techniques of analysis, analogous to the intended anova. These are the nonparametric or distribution-free techniques, which are sometimes used by preference even when the parametric method (anova in this case) can be legitimately employed. Researchers often like to use the nonparametric methods because the assumptions underlying them are generally simple and because they lend themselves to rapid computation on a small calculator. However, when the assumptions of anova are met, these methods are less efficient than anova. Section 10.3 examines three nonparametric methods in lieu of anova, for two-sample cases only.

10.1 The assumptions of anova

Randomness. All anovas require that sampling of individuals be at random. Thus, in a study of the effects of three doses of a drug (plus a control) on five rats each, the five rats allocated to each treatment must be selected at random. If the five rats employed as controls are either the youngest or the smallest or the heaviest rats, while those allocated to some other treatment are selected in some other way, it is clear that the results are not apt to yield an unbiased estimate of the true treatment effects. Nonrandomness of sample selection may well be reflected in lack of independence of the items, in heterogeneity of variances, or in nonnormal distribution, all discussed in this section. Adequate safeguards to ensure random sampling during the design of an experiment,
or during sampling from natural populations, are essential.

Independence. An assumption stated in each explicit expression for the expected value of a variate (for example, Expression (7.2) was Yij = μ + αi + εij) is that the error term εij is a random normal variable. In addition, for completeness we should also add the statement that the ε's are assumed to be independently and identically (as explained below under "Homogeneity of variances") distributed. Thus, if you arranged the variates within any one group in some logical order independent of their magnitude (such as the order in which the measurements were obtained), you would expect the εij's to succeed each other in a random sequence. Consequently, you would assume a long sequence of large positive values followed by an equally long sequence of negative values to be quite unlikely. You would also not expect positive and negative values to alternate with regularity.

How could departures from independence arise? An obvious example would be an experiment in which the experimental units were plots of ground laid out in a field. In such a case it is often found that adjacent plots of ground give rather similar yields. It would thus be important not to group all the plots containing the same treatment into an adjacent series of plots but rather to randomize the allocation of treatments among the experimental plots. The physical process of randomly allocating the treatments to the experimental plots ensures that the ε's will be independent.

Lack of independence of the ε's can result from correlation in time rather than space. Our balance may suffer from a maladjustment that results in giving successive underestimates, compensated for by several overestimates; conversely, compensation by the operator of the balance may result in regularly alternating over- and underestimates of the true weight.
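The randomization just described, allocating treatments to plots in random order rather than in adjacent blocks, takes only a few lines to sketch. The treatment labels and plot counts below are hypothetical, not from the text:

```python
import random

# Randomize the allocation of treatments among experimental plots so that
# plots receiving the same treatment are not grouped into an adjacent series.
random.seed(42)              # fixed seed only so the sketch is repeatable
treatments = ["control", "dose 1", "dose 2", "dose 3"]
plots = treatments * 5       # 20 plots, 5 replicates per treatment
random.shuffle(plots)        # plot i receives treatment plots[i]
print(plots)
```

The same idea covers the weighing example: shuffling the list of individuals before weighing gives a random measurement sequence, so any drift in the balance is spread over all groups rather than confounded with one of them.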

Here again, randomization may overcome the problem of nonindependence of errors. For example, we may determine the sequence in which individuals of the various groups are weighed according to some random procedure. There is no simple adjustment or transformation to overcome the lack of independence of errors.
The basic design of the experiment, or the way in which it is performed, must be changed. If the ε's are not independent, the validity of the usual F test of significance can be seriously impaired.

Homogeneity of variances. In Section 8.4 and Box 8.2, in which we described the t test for the difference between two means,
10.1 / THE ASSUMPTIONS OF ANOVA    213

you were told that the statistical test was valid only if we could assume that the variances of the two samples were equal. Equality of variances in a set of samples is an important precondition for several statistical tests. Synonyms for this condition are homogeneity of variances and homoscedasticity, the latter term coined from Greek roots meaning equal scatter; the converse condition (inequality of variances among samples) is called heteroscedasticity. Because we assume that each sample variance is an estimate of the same parametric error variance, the assumption of homogeneity of variances makes intuitive sense. Although we have not stressed it so far, this assumption that the εij's have identical variances also underlies the equivalent anova test for two samples, and in fact any type of anova.

We have already seen how to test whether two samples are homoscedastic prior to a t test of the differences between two means (or the mathematically equivalent two-sample analysis of variance): we use an F test for the hypotheses H0: σ1² = σ2² and H1: σ1² ≠ σ2², as illustrated in Section 7.3 and Box 7.1. For more than two samples there is a "quick and dirty" method, preferred by many because of its simplicity. This is the Fmax test. This test relies on the tabled cumulative probability distribution of a statistic that is the variance ratio of the largest to the smallest of several sample variances; this distribution is shown in Table VI. Let us assume that we have six anthropological samples of 10 bone lengths each, for which we wish to carry out an anova. The variances of the six samples range from 1.2 to 10.8. We compute the maximum variance ratio s²max/s²min = 10.8/1.2 = 9.0 and compare it with Fmax, critical values of which are found in Table VI. For a = 6 and ν = n − 1 = 9, Fmax is 7.80 and 12.1 at the 5% and 1% levels, respectively. We conclude that the variances of the six samples are significantly heterogeneous.

What may cause such heterogeneity? In this case, we suspect that some of the populations are inherently more variable than others. Some races or species
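The Fmax computation is simple enough to sketch directly. The six variances below are hypothetical stand-ins sharing the extremes of the example above (1.2 and 10.8); the intermediate values and the tabled critical value quoted in the comment are taken on the text's authority, not recomputed:

```python
# F_max test for homogeneity of variances: ratio of the largest to the
# smallest of a = 6 sample variances, each based on n = 10 observations.
variances = [1.2, 3.9, 2.6, 10.8, 5.1, 4.4]   # s^2 for the six samples
f_max = max(variances) / min(variances)        # 10.8 / 1.2

f_max_crit_05 = 7.80   # tabled 5% critical value for a = 6, v = 9 (from the text)
print(round(f_max, 2), f_max > f_max_crit_05)  # ratio and the verdict
```

Since the observed ratio exceeds the 5% critical value (but not the 1% value of 12.1), the variances would be declared significantly heterogeneous at the 5% level.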

are relatively uniform for one character, while others are quite variable for the same character. In an anova representing the results of an experiment, it may well be that one sample has been obtained under less standardized conditions than the others and hence has a greater variance. There are also many cases in which the heterogeneity of variances is a function of an improper choice of measurement scale. With some measurement scales, variances vary as functions of means. For example, in variables following the Poisson distribution the variance is in fact equal to the mean, and populations with greater means will therefore have greater variances; here differences among means bring about heterogeneous variances.

The consequences of moderate heterogeneity of variances are not too serious for the overall test of significance, but single degree of freedom comparisons may be far from accurate. Such departures from the assumption of homoscedasticity can often be easily corrected by a suitable transformation, as discussed later in this chapter. If transformation cannot cope with heteroscedasticity, nonparametric methods (Section 10.3) may have to be resorted to.
A rapid first inspection for heteroscedasticity is to check for correlation between the means and variances or between the means and the ranges of the samples. If means and variances are independent, the ratios s²/Ȳ or s/Ȳ = V will be approximately constant for the samples. If the variances increase with the means (as in a Poisson distribution), these ratios will vary widely.

Normality. We have assumed that the error terms εij of the variates in each sample will be independent, that the variances of the error terms of the several samples will be equal, and, finally, that the error terms will be normally distributed. The consequences of nonnormality of error are not too serious; only a very skewed distribution would have a marked effect on the significance level of the F test or on the efficiency of the design. The best way to correct for lack of normality is to carry out a transformation that will make the data normally distributed, as explained in the next section. In addition, a graphic test, as illustrated in Section 5.5, might be applied to each sample separately. If there is serious question about the normality of the data, a nonparametric test (Section 10.3) should be substituted for the analysis of variance.

Additivity. In two-way anova without replication it is necessary to assume that interaction is not present if one is to make tests of the main effects in a Model I anova. This assumption of no interaction in a two-way anova is sometimes also referred to as the assumption of additivity of the main effects. By this we mean that any single observed variate can be decomposed into additive components representing the treatment effects of a particular row and column as well as a random term special to it. If interaction is actually present, then the F test will be very inefficient, and possibly misleading if the effect of the interaction is very large. A check of this assumption requires either more than a single observation per cell (so that an error mean square can be computed)

or an independent estimate of the error mean square from previous comparable experiments.

Interaction can be due to a variety of causes. Most frequently it means that a given treatment combination, such as level 2 of factor A when combined with level 3 of factor B, makes a variate deviate from the expected value. Such a deviation is regarded as an inherent property of the natural system under study, as in examples of synergism or interference. Similar effects occur when a given replicate is quite aberrant, as may happen if an exceptional plot is included in an agricultural experiment, if a diseased individual is included in a physiological experiment, or if by mistake an individual from a different species is included in a biometric study. Finally, an interaction term will result if the effects of the two factors A and B on the response variable Y are multiplicative rather than additive, as occurs in a variety of physicochemical and biological phenomena. An example will make this clear.

In Table 10.1 we show the additive and multiplicative treatment effects in a hypothetical two-way anova. Let us assume that the expected population mean μ is zero. Then, by the conventional additive model, the mean of the sample subjected to treatment 1 of factor A and treatment 1 of factor B should be 2, since each factor at level 1 contributes unity to the mean. Similarly, the expected subgroup mean subjected to level 3 for factor A and level 2 for factor B is 8, the respective contributions to the mean being 3 and 5. However, if the process is multiplicative rather than additive, the expected values will be quite different. For treatment A1B1, the expected value equals 1, which is the product of 1 and 1. For treatment A3B2, the expected value is 15, the product of 3 and 5. In this case, there is a simple remedy: by transforming the variable into logarithms (Table 10.1), we are able to restore the additivity of the data. The third item in each cell gives the logarithm of the expected value, assuming multiplicative relations.

TABLE 10.1
Illustration of additive and multiplicative effects.

                                    Factor A
Factor B                       α1 = 1   α2 = 2   α3 = 3
β1 = 1
  Additive effects                2        3        4
  Multiplicative effects          1        2        3
  Log of multiplicative effect    0.00     0.30     0.48
β2 = 5
  Additive effects                6        7        8
  Multiplicative effects          5       10       15
  Log of multiplicative effect    0.70     1.00     1.18
If we were to analyze multiplicative data of this sort by a conventional anova, we would find that the interaction sum of squares would be greatly augmented because of the nonadditivity of the treatment effects. After the logarithmic transformation, notice that the increments are strictly additive again (SSA×B = 0): on a logarithmic scale we could simply write α1 = 0, α2 = 0.30, α3 = 0.48 for factor A and β1 = 0, β2 = 0.70 for factor B, and each cell entry is the sum of its row and column effects.
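This claim is easy to check numerically: form the multiplicative cells from the factor levels of Table 10.1, take logarithms, and compute the remainder (interaction) term of the two-way decomposition, which should vanish. A short Python check:

```python
import math

# Factor levels from Table 10.1: a = 1, 2, 3 and b = 1, 5.
a_levels = [1, 2, 3]
b_levels = [1, 5]

mult = [[a * b for a in a_levels] for b in b_levels]    # multiplicative cells
logs = [[math.log10(y) for y in row] for row in mult]   # log-transformed cells

# Remainder term of each cell: Y_ij - row mean - column mean + grand mean.
grand = sum(map(sum, logs)) / 6
row_m = [sum(row) / 3 for row in logs]
col_m = [sum(logs[i][j] for i in range(2)) / 2 for j in range(3)]
resid = [logs[i][j] - row_m[i] - col_m[j] + grand
         for i in range(2) for j in range(3)]

print(max(abs(r) for r in resid) < 1e-12)   # additivity restored on the log scale
```

On the original scale the same residuals are far from zero, which is what inflates the interaction sum of squares in a conventional anova of multiplicative data.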

10.2 Transformations

If the evidence indicates that the assumptions for an analysis of variance or for a t test cannot be maintained, two courses of action are open to us. We may carry out a different test not requiring the rejected assumptions, such as one of the distribution-free tests in lieu of anova, discussed in detail in Section 10.3. A second approach would be to transform the variable to be analyzed in such a manner that the resulting transformed variates meet the assumptions of the analysis. The entire analysis of variance would then be carried out on the transformed variates.

Let us look at a simple example of what transformation will do. A single variate of the simplest kind of anova (completely randomized, single-classification, Model I) decomposes as follows:

Yij = μ + αi + εij

In this model the components are additive, with the error term εij normally distributed. However, we might encounter a situation in which the components were multiplicative in effect, so that

Yij = μ αi εij

which is the product of the three terms. In such a case the assumptions of normality and of homoscedasticity would break down. In any one anova, the parametric mean μ is constant, but the treatment effect αi differs from group to group. Clearly, the scatter among the variates Yij would double in a group in which αi is twice as great as in another. Assume that μ = 1, the smallest εij = 1, and the greatest εij = 3; then if αi = 1, the range of the Y's will be 3 − 1 = 2. However, when αi = 4, the corresponding range will be four times as wide, from 4 × 1 = 4 to 4 × 3 = 12, a range of 8. Such data will be heteroscedastic. We can correct this situation simply by transforming our model into logarithms. We would therefore obtain

log Yij = log μ + log αi + log εij

which is additive and homoscedastic.
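The scaling of scatter under the multiplicative model, and its disappearance after the logarithmic transformation, can be demonstrated numerically. A sketch with hypothetical error values drawn between the limits of 1 and 3 used above:

```python
import math
import random
import statistics

# Y = mu * a_i * e with mu = 1: the spread within a group scales with a_i,
# so the groups are heteroscedastic on the linear scale.
random.seed(7)
errors = [random.uniform(1, 3) for _ in range(200)]   # error term e in [1, 3]
group_1 = [1 * e for e in errors]                     # treatment effect a = 1
group_4 = [4 * e for e in errors]                     # treatment effect a = 4

ratio_linear = statistics.stdev(group_4) / statistics.stdev(group_1)
print(round(ratio_linear, 6))          # the spread is four times as wide

# After log Y = log mu + log a + log e the groups differ only by a constant
# shift log a, so their standard deviations agree: homoscedasticity restored.
sd_log_1 = statistics.stdev([math.log10(y) for y in group_1])
sd_log_4 = statistics.stdev([math.log10(y) for y in group_4])
print(abs(sd_log_4 - sd_log_1) < 1e-9)
```

Using the same error values in both groups makes the effect exact; with independent errors the ratio would merely be close to 4 rather than equal to it.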
At this point many of you will feel more or less uncomfortable about what we have done. Transformation seems too much like "data grinding." When you learn that often a statistical test may be made significant after transformation of a set of data, though it would not be so without such a transformation, you may feel even more suspicious. What is the justification for transforming the data? It takes some getting used to the idea, but there is really no scientific necessity to employ the common linear or arithmetic scale to which we are accustomed. You are probably aware that the teaching of the "new math" in elementary schools has done much to dispel the naive notion that the decimal system of numbers is the only "natural" one. In a similar way, with some experience in science and in the handling of statistical data, you will appreciate the fact that the linear scale, so familiar to all of us from our earliest experience, occupies a similar position with relation to other scales of measurement as does the decimal system of numbers with respect to the binary and octal numbering systems and others.

If a system is multiplicative on a linear scale, it may be much more convenient to think of it as an additive system on a logarithmic scale. The linear scale, so familiar to all of us from our earliest experience, occupies a similar position with relation to other scales of measurement as does the decimal system of numbers with respect to the binary and octal numbering systems and others. In many cases experience has taught us to express experimental variables not on a linear scale but as logarithms, square roots, reciprocals, or angles. Thus, pH values are logarithms, and dilution series in microbiological titrations are expressed as reciprocals. Another frequent transformation is the square root of a variable: the square root of the surface area of an organism is often a more appropriate measure of the fundamental biological variable subjected to physiological and evolutionary forces than is the area itself. This is reflected in the normal distribution of the square root of the variable as compared to the skewed distribution of the areas.

A fortunate fact about transformations is that very often several departures from the assumptions of anova are simultaneously cured by the same transformation to a new scale. Simply by making the data homoscedastic, we also make them approach normality and ensure additivity of the treatment effects. As soon as you are ready to accept the idea that the scale of measurement is arbitrary, you simply have to look at the distributions of transformed variates to decide which transformation most closely satisfies the assumptions of the analysis of variance before carrying out an anova.

When a transformation is applied, tests of significance are performed on the transformed data, but estimates of means are usually given in the familiar untransformed scale. Since the transformations discussed in this chapter are nonlinear, confidence limits computed in the transformed scale and changed back to the original scale would be asymmetrical. Stating the standard error in the original scale would therefore be misleading. In reporting results of research with variables that require transformation, furnish means in the back-transformed scale followed by their (asymmetrical) confidence limits rather than by their standard errors.

An easy way to find out whether a given transformation will yield a distribution satisfying the assumptions of anova is to plot the cumulative distributions of the several samples on probability paper. We can look up upper class limits on transformed scales or employ a variety of available probability graph papers whose second axis is in logarithmic, angular, or other scale. By changing the scale of the second coordinate axis from linear to logarithmic, for example, we can see whether a previously curved line, indicating skewness, straightens out to indicate normality (you may wish to refresh your memory on these graphic techniques, studied in Section 5.5). In this manner we not only test whether the data become more normal through transformation, but we can also get an estimate of the standard deviation under transformation as measured by the slope of the fitted line. The assumption of homoscedasticity implies that the slopes for the several samples should be the same; if the slopes are very heterogeneous, homoscedasticity has not been achieved.
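Probability paper has a modern equivalent in the normal quantile plot, so the slope-and-straightness reasoning above can be tried out numerically. A sketch, using an invented right-skewed sample (the lognormal parameters are assumptions chosen for illustration, not data from the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.lognormal(mean=2.0, sigma=0.5, size=200)  # right-skewed sample (hypothetical)

# probplot regresses the ordered variates on normal quantiles -- the modern
# analogue of plotting a cumulative distribution on probability paper.
(_, (slope_raw, _, r_raw)) = stats.probplot(sample, dist="norm")
(_, (slope_log, _, r_log)) = stats.probplot(np.log(sample), dist="norm")

print(f"linear scale:  r = {r_raw:.3f}")
print(f"log scale:     r = {r_log:.3f}, slope (estimate of sd of log Y) = {slope_log:.3f}")
# The straighter (higher-r) fit on the log scale indicates normality after
# transformation; its slope estimates the standard deviation on that scale.
```

Here the log-transformed sample is exactly normal by construction, so its quantile plot fits a straight line more closely, and the fitted slope recovers the generating sigma of 0.5 within sampling error.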

Alternatively, we can examine goodness of fit tests for normality (see Chapter 13) for the samples under various transformations. That transformation yielding the best fit over all samples will be chosen for the anova. It is important that the transformation not be selected on the basis of giving the best anova results, since such a procedure would distort the significance level.

The logarithmic transformation. The most common transformation applied is conversion of all variates into logarithms, usually common logarithms. Whenever the mean is positively correlated with the variance (greater means are accompanied by greater variances), the logarithmic transformation is quite likely to remedy the situation and make the variance independent of the mean. Frequency distributions skewed to the right are often made more symmetrical by transformation to a logarithmic scale. We saw in the previous section and in Table 10.1 that the logarithmic transformation is also called for when effects are multiplicative.

The square root transformation. We shall use a square root transformation as a detailed illustration of transformation of scale. When the data are counts, as of insects on a leaf or blood cells in a hemacytometer, we frequently find the square root transformation of value. You will remember that such distributions are likely to be Poisson-distributed rather than normally distributed and that in a Poisson distribution the variance is the same as the mean. Therefore, the mean and variance cannot be independent but will vary identically. Transforming the variates to square roots will generally make the variances independent of the means. When the counts include zero values, it has been found desirable to code all variates by adding 0.5; the transformation then is √(Y + ½). Table 10.2 shows an application of the square root transformation. The sample with the greater mean has a significantly greater variance prior to transformation; after transformation the variances are not significantly different. For reporting means, the transformed means are squared again, and confidence limits are reported in lieu of standard errors.

The arcsine transformation. This transformation (also known as the angular transformation) is especially appropriate to percentages and proportions. You may remember from Section 4.2 that the standard deviation of a binomial distribution is σ = √(pq/k). Since μ = p, q = 1 − p, and k is constant for any one problem, it is clear that in a binomial distribution the variance would be a function of the mean. The arcsine transformation preserves the independence of the two. The transformation finds θ = arcsin √p, where p is a proportion. The term "arcsin" is synonymous with inverse sine or sin⁻¹, which stands for "the angle whose sine is" the given quantity. Thus, if we compute or look up arcsin √0.431 = arcsin 0.6565, we find 41.03°, the angle whose sine is 0.6565. The arcsine transformation stretches out both tails of a distribution of percentages or proportions and compresses the middle. When the percentages in the original data fall between 30% and 70%, it is generally not necessary to apply the arcsine transformation.
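The worked value above is easy to reproduce. A short sketch (the probe proportions other than 0.431 are arbitrary values chosen to illustrate the tail-stretching property):

```python
import numpy as np

def arcsine_degrees(p):
    """Angular transformation: theta = arcsin(sqrt(p)), returned in degrees."""
    return np.degrees(np.arcsin(np.sqrt(p)))

print(round(float(arcsine_degrees(0.431)), 2))   # 41.03, the angle whose sine is 0.6565

# The transformation stretches both tails and compresses the middle:
# a 0.05-wide step in p near 0 spans many more degrees than one near 0.5.
tail_step = arcsine_degrees(0.05) - arcsine_degrees(0.00)
middle_step = arcsine_degrees(0.55) - arcsine_degrees(0.50)
print(round(float(tail_step), 2), round(float(middle_step), 2))
```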

TABLE 10.2
An application of the square root transformation. The data represent the number of adult Drosophila emerging from single-pair cultures for two different medium formulations (medium A contained DDT).

[The body of the table is garbled in this scan. Its columns give (1) the number of flies emerging, Y; (2) the square root of the number of flies, √Y; and the frequencies of each count for (3) medium A (n = 75) and (4) medium B (n = 15). Below the frequencies the table reports, for each medium, the untransformed mean Ȳ and variance s², the mean and variance of the square root transformed variable, tests of equality of the variances before and after transformation, and the back-transformed (squared) means with their 95% confidence limits.]
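The variance-stabilizing effect that Table 10.2 documents for real fly counts can be imitated with simulated Poisson counts (a sketch; the means of 2 and 9 and the sample size are invented for illustration and are not the table's values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Poisson counts for two groups with unequal means.
low = rng.poisson(2.0, size=500)
high = rng.poisson(9.0, size=500)

for name, y in (("mean 2", low), ("mean 9", high)):
    t = np.sqrt(y + 0.5)          # coded square root transformation sqrt(Y + 1/2)
    print(name,
          f"var(Y) = {y.var(ddof=1):.2f}",
          f"var(sqrt(Y + 1/2)) = {t.var(ddof=1):.3f}")
# The untransformed variances differ roughly as the means do (Poisson:
# variance = mean); after the coded square root transformation both
# variances are near 1/4, independent of the mean.
```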

10.3 Nonparametric methods in lieu of anova

If none of the above transformations manage to make our data meet the assumptions of analysis of variance, we may resort to an analogous nonparametric method. These techniques are also called distribution-free methods, since they are not dependent on a given distribution (such as the normal in anova) but usually will work for a wide range of different distributions. They are called nonparametric methods because their null hypothesis is not concerned with specific parameters (such as the mean in analysis of variance) but only with the distribution of the variates. In recent years, nonparametric analysis of variance has become quite popular because it is simple to compute and permits freedom from worry about the distributional assumptions of an anova. Yet we should point out that in cases where those assumptions hold entirely or even approximately, the analysis of variance is generally the more efficient statistical test for detecting departures from the null hypothesis. We shall discuss only nonparametric tests for two samples in this section.

For a design that would give rise to a t test or anova with two classes, we employ the nonparametric Mann-Whitney U test (Box 10.1). The null hypothesis is that the two samples come from populations having the same distribution. The data in Box 10.1 are measurements of heart (ventricular) function in two groups of patients that have been allocated to their respective groups on the basis of other criteria of ventricular dysfunction. The Mann-Whitney U test as illustrated in Box 10.1 is a semigraphical test and is quite simple to apply. It will be especially convenient when the data are already graphed and there are not too many items in each sample.

Note that this method does not really require that each individual observation represent a precise measurement. So long as you can order the observations, you are able to perform these tests. Thus, for example, suppose you placed some meat out in the open and studied the arrival times of individuals of two species of blowflies. You could record exactly the time of arrival of each individual fly, starting from a point zero in time when the meat was set out. On the other hand, you might simply rank arrival times of the two species, noting that individual 1 of species B came first, 2 individuals from species A next, then 3 individuals of B, followed by the simultaneous arrival of one of each of the two species (a tie), and so forth. While such ranked or ordered data could not be analyzed by the parametric methods studied earlier, the techniques of Box 10.1 are entirely applicable.

The method of calculating the sample statistic U_s for the Mann-Whitney test is straightforward, but it is desirable to obtain an intuitive understanding of the rationale behind this test. In the Mann-Whitney test we can conceive of two extreme situations: in one case the two samples overlap and coincide entirely; in the other they are quite separate. In the latter case, if we take the sample with the lower-valued variates, we can go through every observation in the lower-valued sample without having any items of the higher-valued one below it; that is, there will be no points of the contrasting sample below it.
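The claim that only the order of the observations matters can be checked with any routine implementation of the test. In this sketch (Python with scipy; the arrival times are invented for illustration), the U statistic is the same whether it is computed from the recorded times or from their ranks:

```python
from scipy.stats import mannwhitneyu

# Invented arrival times (minutes after the meat was set out) for two species.
species_a = [3.1, 4.0, 7.2, 9.5, 11.0]
species_b = [1.2, 5.5, 6.1, 6.8, 12.4]

pooled = sorted(species_a + species_b)
rank = lambda t: pooled.index(t) + 1   # rank of each arrival in the pooled ordering

u_times = mannwhitneyu(species_a, species_b, alternative="two-sided")
u_ranks = mannwhitneyu([rank(t) for t in species_a],
                       [rank(t) for t in species_b],
                       alternative="two-sided")

print(u_times.statistic, u_ranks.statistic)   # 14.0 14.0 -- identical U either way
```

Any monotone recoding of the observations, not just ranking, would leave the statistic unchanged, which is why imprecise but orderable records suffice.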

BOX 10.1
Mann-Whitney U test for two samples; ranked observations, not paired.

A measure of heart function (left ventricle ejection fraction) measured in two samples of patients admitted to the hospital under suspicion of heart attack. The patients were classified on the basis of physical examinations during admission into different so-called Killip classes of ventricular dysfunction. We compare the left ventricle ejection fraction for patients classified as Killip classes I and III. The higher Killip class signifies patients with more severe symptoms. The findings were already graphed in the source publication, and step 1 illustrates that only a graph of the data is required for the Mann-Whitney U test. Designate the sample size of the larger sample as n₁ and that of the smaller sample as n₂; in this case, n₁ = 29, n₂ = 8. When the two samples are of equal size it does not matter which is designated as n₁.

1. Graph the two samples as shown below. Indicate the ties by placing dots at the same level.

[The graph is garbled in this scan. It plots the ejection fractions (roughly 0.1 to 0.8) of the n₁ = 29 Killip class I patients and the n₂ = 8 Killip class III patients as two columns of dots, with tied observations at the same level.]

2. For each observation in one sample (it is convenient to use the smaller sample), count the number of observations in the other sample which are lower in value (below it in this graph). Count ½ for each tied observation. For example, there are 1½ observations in class I below the first observation in class III; the half is introduced because of the variate in class I tied with the lowest variate in class III. There are 2½ observations below the tied second and third observations in class III, 3 observations below the fourth and fifth variates in class III, 4 observations below the sixth variate, and 6 and 7 observations, respectively, below the seventh and eighth variates. The sum of these counts is C = 29½. The Mann-Whitney statistic U_s is the greater of the two quantities C and (n₁n₂ − C), in this case 29½ and [(29 × 8) − 29½] = 202½.

BOX 10.1 Continued

3. Testing the significance of U_s.

No tied variates in samples (or variates tied within samples only). For sample sizes n₁ ≤ 20 and n₂ ≤ 20, compare U_s with the critical value U_α[n₁,n₂] in Table XI. The null hypothesis is rejected if the observed value is too large. In cases where n₁ > 20, calculate the following quantity,

t_s = (U_s − n₁n₂/2) / √( n₁n₂(n₁ + n₂ + 1) / 12 )

which is approximately normally distributed. The denominator 12 is a constant. Look up the significance of t_s in Table III against the critical values of t_α[∞] for a one-tailed or two-tailed test as required by the hypothesis. In our case this would yield

t_s = (202.5 − (29)(8)/2) / √( (29)(8)(29 + 8 + 1)/12 ) = 86.5 / √734.667 = 3.191

A further complication arises from observations tied between the two groups, and our example is a case in point. There is no exact test. For sample sizes n₁ ≤ 20 and n₂ ≤ 20, use Table XI, which will then be conservative; larger sample sizes require a more elaborate formula. But it takes a substantial number of ties to affect the outcome of the test appreciably. Corrections for ties increase the t_s value slightly; hence the uncorrected formula is more conservative. We may conclude that the two samples with a t_s value of 3.191 by the uncorrected formula are significantly different at P < 0.01.

Clearly, if the two samples coincided completely, then for each point in one sample we would have those points below it plus a half point for the tied value representing that observation in the second sample which is at exactly the same level as the observation under consideration. A little experimentation will show this value to be [n(n − 1)/2] + (n/2) = n²/2. On the other hand, if the two samples were quite separate, all the points of the lower-valued sample would be below every point of the higher-valued one if we started out with the latter; our total count would therefore be the total count of one sample multiplied by every observation in the second sample, which yields n₁n₂. Thus, since we are told to take the greater of the two values, the sum of the counts C or n₁n₂ − C, our result in this case would be n₁n₂. The range of possible U values must therefore be between n²/2 and n₁n₂, and the critical value must be somewhere within this range.

Our conclusion as a result of the tests in Box 10.1 is that the two admission classes characterized by physical examination differ in their ventricular dysfunction as measured by left ventricular ejection fraction. The sample characterized as more severely ill has a lower ejection fraction than the sample characterized as less severely ill.
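The arithmetic of the normal approximation can be replayed directly from the quantities in Box 10.1:

```python
from math import sqrt

n1, n2, U_s = 29, 8, 202.5    # sample sizes and Mann-Whitney statistic from Box 10.1

t_s = (U_s - n1 * n2 / 2) / sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
print(round(t_s, 3))          # 3.191, beyond the two-tailed 1% critical value of t[infinity]
```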

The Mann-Whitney U test is based on ranks, and it measures differences in location. A nonparametric test that tests differences between two distributions is the Kolmogorov-Smirnov two-sample test. Its null hypothesis is identity in distribution for the two samples, and thus the test is sensitive to differences in location, dispersion, skewness, and so forth. This test is quite simple to carry out. It is based on the unsigned differences between the relative cumulative frequency distributions of the two samples. Expected critical values can be looked up in a table or evaluated approximately. Comparison between observed and expected values leads to decisions about whether the maximum difference between the two cumulative frequency distributions is significant.

Box 10.2 shows the application of the method to samples in which both n₁ and n₂ ≤ 25. The example in this box features morphological measurements of two samples of chigger nymphs.

BOX 10.2
Kolmogorov-Smirnov two-sample test, testing differences in distributions of two samples of continuous observations. (Both n₁ and n₂ ≤ 25.)

Two samples of nymphs of the chigger Trombicula lipovskyi. The variate measured is the length of the cheliceral base, stated as micrometer units. The sample sizes are n₁ = 16, n₂ = 10.

Sample A (Y): 104, 109, 112, 114, 116, 118, 118, 119, 121, 123, 125, 126, 126, 128, 128, 128
Sample B (Y): 100, 105, 107, 107, 108, 111, 116, 120, 121, 123

Source: Data by D. A. Crossley.

Computational steps

1. Form cumulative frequencies F of the items in samples 1 and 2. Thus, in column (2) we note that there are 3 measurements in sample A at or below 112.5 micrometer units; by contrast, there are 6 such measurements in sample B (column (3)).

2. Compute relative cumulative frequencies by dividing the frequencies in columns (2) and (3) by n₁ and n₂, respectively, and enter them in columns (4) and (5).

3. Compute d, the absolute value of the difference between the relative cumulative frequencies in columns (4) and (5), and enter it in column (6).

4. Locate the largest unsigned difference D. It is 0.475.

5. Multiply D by n₁n₂. We obtain (16)(10)(0.475) = 76.

6. Compare n₁n₂D with its critical value in Table XIII, where we obtain a value of 84 for P = 0.05. We accept the null hypothesis that the two samples have been taken from populations with the same distribution.

[Columns (1) through (6) of the box table — the class marks Y from 100 to 128, the cumulative frequencies F₁ and F₂, the relative cumulative frequencies F₁/n₁ and F₂/n₂, and their unsigned differences d — are garbled in this scan; the maximum difference, d = 0.475, occurs at Y = 111.]

The Kolmogorov-Smirnov test is less powerful than the Mann-Whitney U test shown in Box 10.1 with respect to the alternative hypothesis of the latter, i.e., differences in location. However, the Kolmogorov-Smirnov test examines differences in both shape and location of the distributions and is thus a more comprehensive test.
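The computations in Box 10.2 can be confirmed with a packaged version of the test; the sketch below runs scipy's two-sample Kolmogorov-Smirnov test on the chigger measurements and recovers the same maximum difference.

```python
from scipy.stats import ks_2samp

# Cheliceral base lengths (micrometer units) of the two chigger samples, Box 10.2.
sample_a = [104, 109, 112, 114, 116, 118, 118, 119, 121, 123,
            125, 126, 126, 128, 128, 128]                      # n1 = 16
sample_b = [100, 105, 107, 107, 108, 111, 116, 120, 121, 123]  # n2 = 10

result = ks_2samp(sample_a, sample_b)
print(round(result.statistic, 3))   # 0.475, the maximum unsigned difference D
print(result.pvalue > 0.05)         # True: not significant, as in the box
```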

We use the symbol F for cumulative frequencies, which are summed with respect to the class marks shown in column (1), and we give the cumulative frequencies of the two samples in columns (2) and (3). Relative cumulative frequencies are obtained in columns (4) and (5) by dividing by the respective sample sizes, while column (6) features the unsigned difference between relative cumulative frequencies. The maximum unsigned difference is D = 0.475; it is multiplied by n₁n₂ to yield 76. The critical value for this statistic can be found in Table XIII, which furnishes critical values for the two-tailed two-sample Kolmogorov-Smirnov test. We obtain n₁n₂D₀.₁₀ = 76 and n₁n₂D₀.₀₅ = 84. Thus, there is a 10% probability of obtaining the observed difference by chance alone, and we conclude that the two samples do not differ significantly in their distributions. It is an underlying assumption of all Kolmogorov-Smirnov tests that the variables studied are continuous. Goodness of fit tests by means of this statistic are treated in Chapter 13.

When these data are subjected to the Mann-Whitney U test, however, one finds that the two samples are significantly different at 0.05 > P > 0.02. This contradicts the findings of the Kolmogorov-Smirnov test. But that is because the two tests differ in their sensitivities to different alternative hypotheses: the Mann-Whitney U test is sensitive to the number of interchanges in rank (shifts in location) necessary to separate the two samples, whereas the Kolmogorov-Smirnov test measures differences in the entire distributions of the two samples and is thus less sensitive to differences in location only.

Finally, we shall present a nonparametric method for the paired-comparisons design, discussed in Section 9.3. The most widely used method is that of Wilcoxon's signed-ranks test, illustrated in Box 10.3. The example to which it is applied has not yet been encountered in this book. It records mean litter size in two strains of guinea pigs kept in large colonies during the years 1916 through 1924. Each of these values is the average of a large number of litters. Note the parallelism in the changes in the variable in the two strains. During 1917 and 1918 (war years for the United States), a shortage of caretakers and of food resulted in a decrease in the number of offspring per litter. As soon as better conditions returned, the mean litter size increased. Notice that a subsequent drop in 1922 is again mirrored in both lines, suggesting that these fluctuations are environmentally caused. It is therefore quite appropriate that the data be treated as paired comparisons, with years as replications and the strain differences as the fixed treatments to be tested.

Column (3) in Box 10.3 lists the differences on which a conventional paired-comparisons t test could be performed. For Wilcoxon's test these differences are ranked without regard to sign in column (4), so that the smallest absolute difference is ranked 1 and the largest absolute difference (of the nine differences) is ranked 9. Tied ranks are computed as averages of the ranks; thus, if the fourth and fifth differences had the same absolute magnitude, they would both be assigned rank 4.5. After the ranks have been computed, the original sign of each difference

BOX 10.3
Wilcoxon's signed-ranks test for two groups, arranged as paired observations.

Mean litter size of two strains of guinea pigs, compared over n = 9 years.

Year    (1) Strain B    (2) Strain 13    (3) D    (4) Rank (R)
1916    2.68            2.36             +0.32    +9
1917    2.60            2.41             +0.19    +8
1918    2.43            2.39             +0.04    +2
1919    2.90            2.85             +0.05    +3
1920    2.94            2.82             +0.12    +7
1921    2.70            2.73             −0.03    −1
1922    2.68            2.58             +0.10    +6
1923    2.98            2.89             +0.09    +5
1924    2.85            2.78             +0.07    +4

Absolute sum of negative ranks: 1
Sum of positive ranks: 44

Source: Data by S. Wright.

Procedure

1. Compute the differences between the n pairs of observations. These are entered in column (3), labeled D.

2. Rank these differences from the smallest to the largest without regard to sign.

3. Assign to the ranks the original signs of the differences.

4. Sum the positive and negative ranks separately. The sum that is smaller in absolute value, T_s, is compared with the values in Table XII for n = 9. Since T_s = 1, which is equal to or less than the entry for one-tailed α = 0.005 in the table, our observed difference is significant at the 1% level. Litter size in strain B is significantly different from that of strain 13.

For large samples (n > 50), compute

t_s = (T_s − n(n + 1)/4) / √( n(n + 1)(2n + 1)/24 )

where T_s is as defined in step 4 above. Compare the computed value with t_α[∞] in Table III.
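Box 10.3's result can be checked against a packaged implementation. The sketch feeds the nine yearly differences between the strains (column (3) of the box, strain B minus strain 13) to scipy's signed-ranks test:

```python
from scipy.stats import wilcoxon

# Column (3) of Box 10.3: yearly differences D between strains B and 13.
d = [0.32, 0.19, 0.04, 0.05, 0.12, -0.03, 0.10, 0.09, 0.07]

result = wilcoxon(d)        # exact null distribution is used (n = 9, no tied |D|)
print(result.statistic)     # 1.0, the smaller rank sum T_s
print(result.pvalue)        # two-tailed P ~ 0.0078, significant at the 1% level
```

With n = 9 there are 2⁹ = 512 equally likely sign patterns under the null hypothesis, and only four of them give a rank sum as extreme as T_s = 1, whence the exact two-tailed probability 4/512 ≈ 0.0078.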

is assigned to the corresponding rank. The sum of the positive or of the negative ranks, whichever one is smaller in absolute value, is then computed (it is labeled T_s) and is compared with the critical value T in Table XII for the corresponding sample size. Note that the absolute magnitudes of the differences play a role only insofar as they affect the ranks of the differences. Note also that one needs minimally six differences in order to carry out Wilcoxon's signed-ranks test: with only six paired comparisons, all differences must be of like sign for the test to be significant at the 5% level. For a large sample, an approximation using the normal curve is available; it is given in Box 10.3. In view of the significance of the rank sum, it is clear that strain B has a litter size different from that of strain 13.

A still simpler test is the sign test, in which we count the number of positive and negative signs among the differences (omitting all differences of zero). We then test the hypothesis that the n plus and minus signs are sampled from a population in which the two kinds of signs are present in equal proportions, as might be expected if there were no true difference between the two paired samples. Such sampling should follow the binomial distribution, and the test of the hypothesis that the parametric frequency of the plus signs is p = 0.5 can be made in a number of ways. This is a very simple test to carry out, but it is, of course, not as efficient as the corresponding parametric t test, which should be preferred if the necessary assumptions hold.

Let us learn these methods by applying the sign test to the guinea pig data of Box 10.3. There are nine differences, of which eight are positive and one is negative. We could follow the methods of Section 4.2 (illustrated in Table 4.3), in which we calculate the expected probability of sampling one minus sign in a sample of nine on the assumption of p = q = 0.5. The probability of such an occurrence and all "worse" outcomes equals 0.0195. Since we have no a priori notions that one strain should have a greater litter size than the other, this is a two-tailed test, and we double the probability to 0.0390. Clearly, this is an improbable outcome, and we reject the null hypothesis that p = q = 0.5.

Since the computation of the exact probabilities may be quite tedious if no table of cumulative binomial probabilities is at hand, we may take a second approach, using Table IX, which furnishes confidence limits for p for various sample sizes and sampling outcomes. Looking up sample size 9 and Y = 1 (the number showing the property), we find the 95% confidence limits to be 0.0028 and 0.4751 by interpolation, thus excluding the value p = q = 0.5 postulated by the null hypothesis. At least at the 5% significance level, we can conclude that it is unlikely that the numbers of plus and minus signs are equal. The confidence limits imply a two-tailed test; if we intend a one-tailed test, we can infer a 0.025 significance level from the 95% confidence limits and a 0.005 level from the 99% limits. Obviously, such a one-tailed test would be carried out only if the results were in the direction of the alternative hypothesis. Thus, if the alternative hypothesis were that strain 13 in Box 10.3 had a greater litter size than strain B, we would not bother testing this example at all, since the observed proportion of years showing this relation is less than half.

A third approach we can use is to test the departure from the expectation that p = q = 0.5 by one of the methods of Chapter 13. For larger samples, we can use the normal approximation to the binomial distribution as follows:

t_s = (Y − μ)/σ_Y = (Y − kp)/√(kpq)

where we substitute the mean and standard deviation of the binomial distribution learned in Section 4.2. In our case, the observed proportion of years showing this relation is less than half; we let n stand for k and assume that p = q = 0.5. The value of t_s is then compared with t_α[∞] in Table III, using one tail or two tails of the distribution as warranted. When the sample size n > 12, this is a satisfactory approximation.

Exercises

10.1 Allee and Bowen (1932) studied survival time of goldfish (in minutes) when placed in colloidal silver suspensions. Experiment no. 9 involved 5 replications, and experiment no. 10 involved 10 replicates:

Colloidal silver, experiment no. 9:   210  180  240  210  210
Colloidal silver, experiment no. 10:  150  180  210  240  240  120  180  240  120  150
Urea and salts added:                 330  300  300  420  360  270  360  360  300  120

Do the results of the two experiments differ? Test for homogeneity of Experiments 9 and 10, and test equality of variances. Addition of urea, NaCl, and Na2S to a third series of suspensions apparently prolonged the life of the fish. To test the effect of urea it might be best to pool Experiments 9 and 10, if they prove not to differ significantly. Compare the anova results with those obtained using the Mann-Whitney U test for the two comparisons under study: Experiment 9 versus Experiment 10, and Experiments 9 and 10 versus urea and salts. ANS. U_s = 33 for the comparison of Experiments 9 and 10.
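The normal approximation t_s = (Y − kp)/√(kpq) described above can be sketched as follows in Python (the worked figures are hypothetical, not taken from the exercises):

```python
from math import sqrt

def ts_normal_approx(y, k, p=0.5):
    """Normal approximation to the binomial: t_s = (Y - kp) / sqrt(kpq),
    to be compared with t_alpha[infinity]; satisfactory for k > 12."""
    return (y - k * p) / sqrt(k * p * (1 - p))

# Hypothetical example: 18 positive differences out of k = 25 pairs.
print(ts_normal_approx(18, 25))  # 2.2, beyond t_.05[inf] = 1.960 two-tailed
```

With 18 of 25 differences positive, t_s = (18 − 12.5)/2.5 = 2.2 exceeds 1.960, so the null hypothesis p = q = 0.5 would be rejected at the 5% level.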

10.2 Analyze the measurements of the two samples of chigger nymphs in Box 10.2 by the Mann-Whitney U test. Compare the results with those shown in Box 10.2 for the Kolmogorov-Smirnov test. ANS. U_s = 123.

10.3 Allee et al. (1934) studied the rate of growth of Ameiurus melas in conditioned and unconditioned well water and obtained results for the average gain in length (in millimeters) of a sample fish in each of ten paired replicates. Although the original variates are not available, we may still test for differences between the two treatment classes. Use the sign test to test for differences in the paired replicates.

10.4 In a study of flower color in butterflyweed (Asclepias tuberosa), Woodson (1964) obtained sample means, standard deviations, and sample sizes of color scores for the geographic regions CI, SW2, and SW3 (n = 226, 94, and 23, respectively). The variable recorded was a color score (ranging from 1 for pure yellow to 40 for deep orange-red) obtained by matching flower petals to sample colors in Maerz and Paul's Dictionary of Color. Test whether the samples are homoscedastic. Analyze and interpret.

10.5 Test for a difference in surface and subsoil pH in the data of the corresponding exercise of Chapter 9, using Wilcoxon's signed-ranks test. ANS. T_s = 38.

10.6 Number of bacteria in 1 cc of milk from three cows counted at three periods: at time of milking, after 24 hours, and after 48 hours (data from Park, Williams, and Krumwiede, 1924). (a) Calculate means and variances for the three periods and examine the relation between these two statistics. Transform the variates to logarithms and compare means and variances based on the transformed data. (b) Carry out an anova on transformed and untransformed data. Discuss your results.
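Several of these exercises call for the Mann-Whitney U statistic. A bare-bones Python sketch (ours, not the book's computational box; ties are counted as halves, the samples are small hypothetical arrays, and no significance table lookup is included):

```python
def mann_whitney_u(sample1, sample2):
    """Mann-Whitney U: over all pairs, count how often a value in
    sample1 exceeds one in sample2 (ties count 1/2); the test
    statistic U_s is the larger of U and n1*n2 - U."""
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0
            for a in sample1 for b in sample2)
    return max(u, len(sample1) * len(sample2) - u)

# Hypothetical samples:
print(mann_whitney_u([3, 4, 5], [1, 2, 6]))  # 6.0
```

U_s is then compared with the critical values for the two sample sizes, using one or two tails as warranted.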

CHAPTER 11

Regression

We now turn to the simultaneous analysis of two variables. Even though we may have considered more than one variable at a time in our studies so far (for example, seawater concentration and oxygen consumption in Box 9.1, or age of girls and their face widths in Box 9.3), our actual analyses were of only one variable. However, we frequently measure two or more variables on each individual, and we consequently would like to be able to express more precisely the nature of the relationships between these variables. This brings us to the subjects of regression and correlation. In regression we estimate the relationship of one variable with another by expressing the one in terms of a linear (or a more complex) function of the other. We also use regression to predict values of one variable in terms of the other. In correlation analysis, which is sometimes confused with regression, we estimate the degree to which two variables vary together. Chapter 12 deals with correlation, and we shall postpone our effort to clarify the relation and distinction between regression and correlation until then. The variables involved in regression and correlation are either continuous or meristic; if meristic, they are treated as though they were continuous. When variables are qualitative (that is, when they are attributes), the methods of regression and correlation cannot be used.

In Section 11.1 we review the notion of mathematical functions and introduce the new terminology required for regression analysis. This is followed in Section 11.2 by a discussion of the appropriate statistical models for regression analysis. The basic computations in simple linear regression are shown in Section 11.3 for the case of one dependent variate for each independent variate. The case with several dependent variates for each independent variate is treated in Section 11.4. Tests of significance and computation of confidence intervals for regression problems are discussed in Section 11.5. Section 11.6 serves as a summary of regression and discusses the various uses of regression analysis in biology. How transformation of scale can straighten out curvilinear relationships for ease of analysis is shown in Section 11.7. When transformation cannot linearize the relation between variables, an alternative approach is by a nonparametric test for regression; such a test is illustrated in Section 11.8.

11.1 Introduction to regression

Much scientific thought concerns the relations between pairs of variables hypothesized to be in a cause-and-effect relationship. We shall be content with establishing the form and significance of functional relationships between two variables, leaving the demonstration of cause-and-effect relationships to the established procedures of the scientific method. A function is a mathematical relationship enabling us to predict what values of a variable Y correspond to given values of a variable X. Such a relationship, generally written as Y = f(X), is familiar to all of us. A typical linear regression is of the form shown in Figure 11.1, which illustrates the effect of two drugs on the blood pressure of two species of animals.

FIGURE 11.1 Blood pressure of an animal in mm Hg as a function of drug concentration in μg per cc of blood. The lines shown are Y = 20 + 15X (drug A on animal P), Y = 20 + 7.5X (drug B on animal P), and Y = 40 + 7.5X (drug B on animal Q).

The relationships depicted in this graph can be expressed by the formula Y = a + bX. Clearly, Y is a function of X. We call the variable Y the dependent variable, while X is called the independent variable. The magnitude of blood pressure Y depends on the amount of the drug X and can therefore be predicted from the independent variable, which presumably is free to vary. Although a cause would always be considered an independent variable and an effect a dependent variable, a functional relationship observed in nature may actually be something other than a cause-and-effect relationship. The highest line in Figure 11.1 is of the relationship Y = 20 + 15X, which represents the effect of drug A on animal P. The quantity of drug is measured in micrograms, the blood pressure in millimeters of mercury. The independent variable X is multiplied by a coefficient b, the slope factor. In the example chosen, b = 15; that is, for an increase of one microgram of the drug, the blood pressure is raised by 15 mm. Thus, after 4 μg of the drug have been given, the blood pressure would be Y = 20 + (15)(4) = 80 mm Hg. When no drug is administered, the function just studied will yield a blood pressure of 20 mm Hg, which is the normal blood pressure of animal P in the absence of the drug. This point is the intersection of the function line with the Y axis; it is called the Y intercept, since when the independent variable equals zero, the dependent variable equals a. Negative values of X are meaningless in this case. The two other functions in Figure 11.1 show the effects of varying both a, the Y intercept, and b, the slope. In the lowest line, Y = 20 + 7.5X, the Y intercept remains the same but the slope has been halved. We visualize this as the effect of a different drug, B, on the same organism P. Obviously, since the identical organism is being studied, when no drug is administered the blood pressure should be at the same Y intercept; but a different drug is likely to exert a different hypertensive effect, as reflected by the different slope. The third relationship also describes the effect of drug B, but the experiment is carried out on a different species, Q, whose normal blood pressure is assumed to be 40 mm Hg. Thus, the equation for the effect of drug B on species Q is written as Y = 40 + 7.5X; this line is parallel to that corresponding to the second equation. From your knowledge of analytical geometry you will have recognized the slope factor b as the slope of the function Y = a + bX, generally symbolized by m. In calculus, b is the derivative of that same function (dY/dX = b). In biostatistics, b is called the regression coefficient, and the function is called a regression equation. When we wish to stress that the regression coefficient is of variable Y on variable X, we write b_Y·X. In biology, such a relationship can clearly be appropriate over only a limited range of values of X. For a limited portion of the range of variable X (micrograms of the drug), the linear relationship Y = a + bX may be an adequate description of the functional dependence of Y on X. But it is unlikely that the blood pressure will continue to increase at a uniform rate as the drug level rises; quite probably the slope of the functional relationship will flatten out.
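The arithmetic of these linear functions is easy to verify. A minimal Python sketch of the three regression equations of Figure 11.1 (variable names are ours):

```python
def regression_line(a, b):
    """Return Y = a + bX as a function of the dose X (micrograms per cc)."""
    return lambda x: a + b * x

drug_A_on_P = regression_line(20, 15)    # Y = 20 + 15X
drug_B_on_P = regression_line(20, 7.5)   # same intercept, halved slope
drug_B_on_Q = regression_line(40, 7.5)   # same slope, higher intercept

print(drug_A_on_P(4))  # 80 mm Hg after 4 micrograms, as in the text
print(drug_B_on_P(0))  # the Y intercept: normal blood pressure of animal P
print(drug_B_on_Q(0))  # the Y intercept: normal blood pressure of species Q
```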

11.2 Models in regression

In any real example, observations would not lie perfectly along a regression line but would scatter along both sides of the line. This scatter is usually due to inherent, natural variation of the items (genetically and environmentally caused) and also due to measurement error. Thus, in regression a functional relationship does not mean that given an X the value of Y must be a + bX, but rather that the mean (or expected value) of Y is a + bX. The appropriate computations and significance tests in regression relate to the following two models. The more common of these, Model I regression, is especially suitable in experimental situations; it is based on four assumptions.

1. The independent variable X is measured without error. X does not vary at random but is under the control of the investigator; we therefore say that the X's are "fixed." We can manipulate X in the same way that we were able to manipulate the treatment effect in a Model I anova. In fact, as you shall see later, there is a very close relationship between Model I anova and Model I regression. Thus, in the example of Figure 11.1 we have varied dose of drug at will and studied the response of the random variable blood pressure.

2. The expected value for the variable Y for any given value of X is described by the linear function μ_Y = α + βX. This is the same relation we have just encountered, but we use Greek letters instead of a and b, since we are describing a parametric relationship. Another way of stating this assumption is that the parametric means μ_Y of the values of Y are a function of X and lie on a straight line described by this equation.

3. For any given value X_i of X, the Y's are independently and normally distributed. This can be represented by the equation Y_i = α + βX_i + ε_i, where the ε_i's are assumed to be normally distributed error terms with a mean of zero. A given experiment can be repeated several times. Thus, for instance, we could administer 2, 4, 6, 8, and 10 μg of the drug to each of 20 individuals of an animal species and obtain a frequency distribution of blood pressure responses Y to the independent variates X = 2, 4, 6, 8, and 10 μg. Repeated sampling for a given drug concentration would not yield the same blood pressure in every individual; you would obtain a frequency distribution of values of Y (blood pressure) around the expected value. Figure 11.2 illustrates this concept with a regression line similar to the ones in Figure 11.1.

FIGURE 11.2 Blood pressure of an animal in mm Hg as a function of drug concentration in μg per cc of blood. Repeated sampling for a given drug concentration yields a distribution of Y about the regression line.

4. We assume that these samples along the regression line are homoscedastic; that is, that they have a common variance σ², which is the variance of the ε's in the expression in item 3. Thus, we assume that the variance around the regression line is constant and independent of the magnitude of X or Y. This is indicated by the normal curves that are superimposed about several points on the regression line in Figure 11.2. A few are shown to give you an idea of the scatter about the regression line. In actuality there is a continuous scatter, as though these separate normal distributions were stacked right next to each other, there being, of course, an infinity of possible intermediate values of X between any two dosages. In those rare cases in which the independent variable is discontinuous, the distributions of Y would be physically separate from each other and would occur only along those points of the abscissa corresponding to independent variates. An example of such a case would be weight of offspring (Y) as a function of number of offspring (X) in litters of mice. There may be three or four offspring per litter, but there would be no intermediate value of X representing 3.25 mice per litter.

Not every experiment will have more than one reading of Y for each value of X. In fact, the basic computations we shall learn in the next section are for only one value of Y per value of X, this being the more common case. However, you should realize that even in such instances the basic assumption of Model I regression is that the single variate of Y corresponding to the given value of X is a sample from a population of independently and normally distributed variates.

Many regression analyses in biology do not meet the assumptions of Model I regression. Frequently both X and Y are subject to natural variation and/or measurement error. Also, the variable X is sometimes not fixed, that is, not under control of the investigator. Suppose we sample a population of female flies and measure wing length and total weight of each individual. We might be interested in studying wing length as a function of weight, or we might wish to predict wing length for a given weight. In this case the weight, which we treat as an independent variable, is not fixed and certainly not the "cause" of differences in wing length. The weights of the flies will vary for genetic and environmental reasons and will also be subject to measurement error. The general case where both variables show random variation is called Model II regression. In view of the inherent variability of biological material, cases of this sort are much better

analyzed by the methods of correlation analysis, as will be discussed in the next chapter. However, we sometimes wish to describe the functional relationship between such variables. To do so, we need to resort to the special techniques of Model II regression. In this book we shall limit ourselves to a treatment of Model I regression.

11.3 The linear regression equation

To learn the basic computations necessary to carry out a Model I linear regression, we shall choose an example with only one Y value per independent variate X, since this is computationally simpler. The extension to a sample of values of Y for each value of X is shown in Section 11.4. Just as in the case of the previous analyses, there are also simple computational formulas, which will be presented at the end of this section. The data on which we shall learn regression come from a study of water loss in Tribolium confusum, the confused flour beetle. Nine batches of 25 beetles were weighed (individual beetles could not be weighed with available equipment), kept at different relative humidities, and weighed again after six days of starvation. Weight loss in milligrams was computed for each batch. This is clearly a Model I regression, in which the weight loss is the dependent variable Y and the relative humidity is the independent variable X, a fixed treatment effect under the control of the experimenter. The purpose of the analysis is to establish whether the relationship between relative humidity and weight loss can be adequately described by a linear regression of the general form Y = a + bX. The original data are shown in columns (1) and (2) of Table 11.1 and are plotted in Figure 11.3, from which it appears that a negative relationship exists between weight loss and humidity: as the humidity increases, the weight loss decreases. The means of weight loss and relative humidity, Ȳ and X̄, are marked along the coordinate axes; the average humidity is 50.39%, and the average weight loss is 6.022 mg. How can we fit a regression line to these data, permitting us to estimate a value of Y for a given value of X? Unless the actual observations lie exactly on a straight line, we will need a criterion for determining the best possible placing of the regression line. Statisticians have generally followed the principle of least squares, which we first encountered in Chapter 3 when learning about the arithmetic mean and the variance. If we were to draw a horizontal line through X̄, Ȳ (that is, a line parallel to the X axis at the level of Ȳ), then deviations to that line drawn parallel to the Y axis would represent the deviations from the mean for these observations with respect to variable Y (see Figure 11.4). We learned in Chapter 3 that the sum of these deviations Σ(Y − Ȳ) = Σy = 0 and that their sum of squares, Σ(Y − Ȳ)² = Σy², is less than that from any other horizontal line. Another way of saying this is that the arithmetic mean of Y represents the least squares horizontal line. Any horizontal line drawn through the data at a point other than Ȳ would yield a sum of deviations other than zero and a sum of deviations squared greater than Σy².
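That the arithmetic mean of Y is the least squares horizontal line can be checked numerically. A small Python sketch (ours, not the book's), using the nine weight losses of Table 11.1:

```python
# Weight losses (mg) of the nine batches of beetles, from Table 11.1.
losses = [8.98, 8.14, 6.67, 6.08, 5.90, 5.83, 4.68, 4.20, 3.72]
mean = sum(losses) / len(losses)   # 6.022 mg

def ss_about(level):
    """Sum of squared deviations about a horizontal line at height `level`."""
    return sum((y - level) ** 2 for y in losses)

# Any other horizontal line gives a larger sum of squares:
assert all(ss_about(mean) < ss_about(mean + d) for d in (-1, -0.1, 0.1, 1))
print(round(ss_about(mean), 4))  # 24.1306, the total sum of squares of Y
```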

TABLE 11.1
Basic computations for regression of weight loss on relative humidity in Tribolium confusum. Columns (1) and (2) contain the original data, weight loss Y (in mg) and percent relative humidity X (from Nelson, 1964); columns (3) through (12) contain the deviations x and y, their squares x² and y², the products xy, the estimated values Ŷ, the deviations d_Y·X and their squares, and the explained deviations ŷ and their squares, as referred to in the text.

A mathematically correct but impractical method for finding the mean of Y would be to draw a series of horizontal lines across a graph, calculate the sum of squares of deviations from each line, and choose the line yielding the smallest sum of squares. In linear regression, we still draw a straight line through our observations, but it is no longer necessarily horizontal. A sloped regression line will indicate for each value of the independent variable X an estimated value of the dependent variable. We should distinguish the estimated value of Y_i, which we shall hereafter designate as Ŷ_i (read: Y-hat or Y-caret), from the observed values, conventionally designated as Y_i. The regression equation therefore should read

Ŷ = a + bX    (11.1)

FIGURE 11.3 Weight loss (in mg) of nine batches of 25 Tribolium beetles after six days of starvation at nine different relative humidities (abscissa: percent relative humidity, 0 to 100). Data from Table 11.1, after Nelson (1964).

FIGURE 11.4 Deviations from the mean (of Y) for the data of Figure 11.3.

Expression (11.1) indicates that for given values of X, this equation calculates estimated values Ŷ, as distinct from the observed values Y in any actual case. The deviation of an observation Y_i from the regression line is (Y_i − Ŷ_i) and is generally symbolized as d_Y·X. These deviations can still be drawn parallel to the Y axis, but they meet the sloped regression line at an angle (see Figure 11.5). The sum of these deviations is again zero (Σd_Y·X = 0), and the sum of their squares yields a quantity Σ(Y − Ŷ)² = Σd²_Y·X analogous to the sum of squares Σy². For reasons that will become clear later, Σd²_Y·X is called the unexplained sum of squares. The least squares linear regression line through a set of points is defined as that straight line which results in the smallest value of Σd²_Y·X. Geometrically, the basic idea is that one would prefer using a line that is in some sense close to as many points as possible. For purposes of ordinary Model I regression analysis, it is most useful to define closeness in terms of the vertical distances from the points to a line, and to use the line that makes the sum of the squares of these deviations as small as possible. A convenient consequence of this criterion is that the line must pass through the point X̄, Ȳ. Again, it would be possible but impractical to calculate the correct regression slope by pivoting a ruler around the point X̄, Ȳ and calculating the unexplained sum of squares Σd²_Y·X for each of the innumerable possible positions; whichever position gave the smallest value of Σd²_Y·X would be the least squares regression line. The formula for the slope of a line based on the minimum value of Σd²_Y·X is obtained by means of the calculus. It is

b = Σxy / Σx²    (11.2)

Let us calculate b = Σxy/Σx² for our weight loss data. We first compute the deviations x and y from the respective means of X and Y, as shown in columns (3) and (4) of Table 11.1.

FIGURE 11.5 Deviations from the regression line for the data of Figure 11.3.
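The impractical ruler-pivoting procedure can nonetheless be mimicked by brute force, confirming the calculus result. A Python sketch (ours), pivoting candidate slopes through the point X̄, Ȳ for the data of Table 11.1:

```python
humidity = [0, 12.0, 29.5, 43.0, 53.0, 62.5, 75.5, 85.0, 93.0]    # X, %
loss = [8.98, 8.14, 6.67, 6.08, 5.90, 5.83, 4.68, 4.20, 3.72]     # Y, mg
n = len(humidity)
xbar, ybar = sum(humidity) / n, sum(loss) / n

def unexplained_ss(b):
    """Sum of squared vertical deviations from a line of slope b
    forced through the point (xbar, ybar)."""
    return sum((y - (ybar + b * (x - xbar))) ** 2
               for x, y in zip(humidity, loss))

# Pivot through a fine grid of candidate slopes; the winner sits at
# the least squares value obtained analytically in the text.
slopes = [i / 100000 for i in range(-10000, 10001)]
best = min(slopes, key=unexplained_ss)
print(best)  # -0.05322
```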

The sums of these deviations, Σx and Σy, are slightly different from their expected value of zero because of rounding errors. The squares of these deviations yield sums of squares and variances in columns (5) and (7). In column (6) we have computed the products xy, which in this example are all negative because the deviations are of unlike sign. The sum of these products, Σxy, is a new quantity, called the sum of products. This is a poor but well-established term, referring to Σxy, the sum of the products of the deviations, rather than ΣXY, the sum of the products of the variates. You will recall that Σy² is called the sum of squares, while ΣY² is the sum of the squared variates; the sum of products is analogous to the sum of squares. When divided by the degrees of freedom, it yields the covariance, by analogy with the variance resulting from a similar division of the sum of squares. (You may recall first having encountered covariances in Section 7.4.) Note that the sum of products can be negative as well as positive; in this respect it differs from a sum of squares, which can only be positive. If the sum of products is negative, this indicates a negative slope of the regression line: as X increases, Y decreases. The regression coefficient is thus the ratio of the sum of products of deviations for X and Y to the sum of squares of deviations for X. You may wish to convince yourself that the formula for the regression coefficient is intuitively reasonable: for the n values of X_i, the ratio indicates the direction and magnitude of the change in Y for a unit change in X. Thus, if on the average y_i equals x_i, b will equal 1; when |y_i| > |x_i|, b exceeds 1 in absolute value; and when |y_i| < |x_i|, b is less than 1 in absolute value. From Table 11.1 we find that Σxy = −441.8176 and Σx² = 8301.3889, so that

b = Σxy/Σx² = −441.8176/8301.3889 = −0.05322

Thus, for a one-unit increase in X there is a decrease of 0.05322 units of Y. Relating this to our actual example, we can say that for a 1% increase in relative humidity there is a reduction of 0.05322 mg in weight loss. An increase in humidity results in a decrease in weight loss. How can we complete the equation Ŷ = a + bX? We have stated that the regression line will go through the point X̄, Ȳ. At X̄ = 50.39, Ȳ = 6.022; that is, we use Ȳ, the observed mean of Y, as an estimate Ŷ of the mean. We can substitute these means into Expression (11.1):

Ŷ = a + bX
Ȳ = a + bX̄
a = Ȳ − bX̄ = 6.022 − (−0.05322)(50.39) = 8.7038

Therefore, Ŷ = 8.7038 − 0.05322X.
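The slope and intercept computed above can be reproduced directly from the nine observations. A Python sketch (variable names ours), using the deviation formulas of the text:

```python
humidity = [0, 12.0, 29.5, 43.0, 53.0, 62.5, 75.5, 85.0, 93.0]    # X, %
loss = [8.98, 8.14, 6.67, 6.08, 5.90, 5.83, 4.68, 4.20, 3.72]     # Y, mg

n = len(humidity)
xbar = sum(humidity) / n   # about 50.39
ybar = sum(loss) / n       # about 6.022
sum_xy = sum((x - xbar) * (y - ybar) for x, y in zip(humidity, loss))
sum_x2 = sum((x - xbar) ** 2 for x in humidity)

b = sum_xy / sum_x2   # -441.82 / 8301.39 = -0.05322
a = ybar - b * xbar   # about 8.704 (the text, with rounded intermediates, gets 8.7038)
print(round(b, 5), round(a, 3))
```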

This is the equation that relates weight loss to relative humidity. Note that when X is zero (humidity zero), the estimated weight loss is greatest; it is then equal to a = 8.7038 mg. But as X increases to a maximum of 100, the weight loss decreases to 3.3818 mg. We can use the regression formula to draw the regression line: simply estimate Ŷ at two convenient points of X, such as X = 0 and X = 100, and draw a straight line between them. This line has been added to the observed data and is shown in Figure 11.6. Note that it goes through the point X̄, Ȳ; in fact, for drawing the regression line, we frequently use the intersection of the two means and one other point. Since

Ŷ = a + bX = (Ȳ − bX̄) + bX = Ȳ + b(X − X̄)

we can write Expression (11.1) as

ŷ = bx    (11.3)

where ŷ is defined as the deviation Ŷ − Ȳ. Next, using Expression (11.1), we estimate Ŷ for every one of our given values of X. The estimated values Ŷ are shown in column (8) of Table 11.1. Compare them with the observed values of Y in column (2).

FIGURE 11.6 Linear regression fitted to data of Figure 11.3.
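Estimating Ŷ at two convenient points, as suggested above, is easily checked. A minimal Python sketch (the function name is ours):

```python
def estimated_loss(x):
    """Fitted regression of Table 11.1: Y-hat = 8.7038 - 0.05322 X."""
    return 8.7038 - 0.05322 * x

print(estimated_loss(0))      # 8.7038 mg: the Y intercept a
print(estimated_loss(100))    # 3.3818 mg at 100% relative humidity
print(estimated_loss(50.39))  # about 6.022 mg: the line passes through Xbar, Ybar
```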

Overall agreement between the two columns of values is good. Note that except for rounding errors, ΣŶ = ΣY, and hence the mean of the estimated values equals Ȳ. However, our actual Y values usually are different from the estimated values Ŷ. This is due to individual variation around the regression line. Yet the regression line is a better base from which to compute deviations than the arithmetic average Ȳ, since the value of X has been taken into account in constructing it. When we compute the deviation of each observed Y value from its estimated value, (Y − Ŷ) = d_Y·X, and list these in column (9), we notice that these deviations exhibit one of the properties of deviations from a mean: they sum to zero except for rounding errors. Thus Σd_Y·X = 0, just as Σy = 0. Next, we compute in column (10) the squares of these deviations and sum them to give a new sum of squares, Σd²_Y·X = 0.6160. When we compare Σ(Y − Ȳ)² = Σy² = 24.1307 with Σ(Y − Ŷ)² = Σd²_Y·X = 0.6160, we note that the new sum of squares is much less than the previous one. What has caused this reduction? Allowing for different magnitudes of X has eliminated most of the variance of Y from the sample. Remaining is the unexplained sum of squares Σd²_Y·X, which expresses that portion of the total SS of Y that is not accounted for by differences in X; it is unexplained with respect to X. The difference between the total SS, Σy², and the unexplained SS, Σd²_Y·X, is not surprisingly called the explained sum of squares, Σŷ², and is based on the deviations ŷ = Ŷ − Ȳ. The computation of these deviations and their squares is shown in columns (11) and (12). Note that Σŷ approximates zero and that Σŷ² = 23.5130. Add the unexplained SS (0.6160) to this and you obtain Σy² = Σŷ² + Σd²_Y·X = 24.1290, which is equal (except for rounding errors) to the independently calculated value of 24.1307. We shall return to the meaning of the unexplained and explained sums of squares in later sections.

We conclude this section with a discussion of calculator formulas for computing the regression equation in cases where there is a single value of Y for each value of X. The regression coefficient Σxy/Σx² can be rewritten as

b = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²    (11.4)

The denominator of this expression is the sum of squares of X. Its computational formula, as first encountered in Chapter 3, is Σx² = ΣX² − (ΣX)²/n. We shall now learn an analogous formula for the numerator of Expression (11.4), the sum of products. The customary formula is

Σxy = ΣXY − (ΣX)(ΣY)/n    (11.5)

The quantity ΣXY is simply the accumulated product of the two variables. Expression (11.5) is derived in Appendix A1.
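The calculator formulas (11.4) and (11.5), and the partition of the total sum of squares into explained and unexplained parts, can be verified numerically. A Python sketch (ours) with the data of Table 11.1:

```python
X = [0, 12.0, 29.5, 43.0, 53.0, 62.5, 75.5, 85.0, 93.0]     # % relative humidity
Y = [8.98, 8.14, 6.67, 6.08, 5.90, 5.83, 4.68, 4.20, 3.72]  # weight loss, mg
n = len(X)

# Calculator formulas: sums of squares and products from raw totals.
sum_x2 = sum(x * x for x in X) - sum(X) ** 2 / n                 # 8301.3889
sum_y2 = sum(y * y for y in Y) - sum(Y) ** 2 / n                 # 24.1306
sum_xy = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n  # -441.8178

explained = sum_xy ** 2 / sum_x2   # explained SS, 23.5145
unexplained = sum_y2 - explained   # unexplained SS, 0.6161
print(round(explained, 4), round(unexplained, 4))
assert abs(explained + unexplained - sum_y2) < 1e-9  # the partition is exact
```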

regression equation (single value of Y per value of X) are illustrated in Box 11.1, employing the weight loss data of Table 11.1. The box also illustrates how to compute the explained

BOX 11.1
Computation of regression statistics. Single value of Y for each value of X.
Data from Table 11.1.

Weight loss in mg (Y):             8.98  8.14  6.67  6.08  5.90  5.83  4.68  4.20  3.72
Percent relative humidity (X):     0     12.0  29.5  43.0  53.0  62.5  75.5  85.0  93.0

Basic computations

1. Compute sample size, sums, sums of the squared observations, and the sum of the XY's. These are n, ΣX, ΣY, ΣX², ΣY², and ΣXY:
   n = 9, ΣX = 453.5, ΣY = 54.20, ΣX² = 31,152.75, ΣY² = 350.5350, ΣXY = 2289.260

2. The means, sums of squares, and sum of products are
   X̄ = 50.389        Ȳ = 6.022
   Σx² = ΣX² − (ΣX)²/n = 8301.3889
   Σy² = ΣY² − (ΣY)²/n = 24.1306
   Σxy = ΣXY − (ΣX)(ΣY)/n = 2289.260 − (453.5)(54.20)/9 = −441.8178

3. The regression coefficient is
   b_Y·X = Σxy/Σx² = −441.8178/8301.3889 = −0.053,22

4. The Y intercept is
   a = Ȳ − b_Y·X X̄ = 6.022 − (−0.053,22)(50.389) = 8.7037

5. The explained sum of squares is
   Σŷ² = (Σxy)²/Σx² = (−441.8178)²/8301.3889 = 23.5145

6. The unexplained sum of squares is
   Σd²_Y·X = Σy² − Σŷ² = 24.1306 − 23.5145 = 0.6161

From these quantities the regression equation is calculated as Ŷ = 8.7037 − 0.053,22X.
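For readers who wish to verify the arithmetic by machine, the steps of Box 11.1 can be sketched in a few lines of code (an illustrative sketch; the code and its variable names are ours, not part of the original box):

```python
# Sketch of the Box 11.1 computations (single value of Y per value of X)
# using the weight-loss data of Table 11.1.
X = [0.0, 12.0, 29.5, 43.0, 53.0, 62.5, 75.5, 85.0, 93.0]   # % relative humidity
Y = [8.98, 8.14, 6.67, 6.08, 5.90, 5.83, 4.68, 4.20, 3.72]  # weight loss in mg

n = len(X)
sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))

# Sums of squares and products about the means (Expressions 11.4 and 11.5)
ss_x = sum(x * x for x in X) - sum_x ** 2 / n   # sum of squares of X
ss_y = sum(y * y for y in Y) - sum_y ** 2 / n   # total SS of Y
sp_xy = sum_xy - sum_x * sum_y / n              # sum of products

b = sp_xy / ss_x                      # regression coefficient, about -0.05322
a = sum_y / n - b * sum_x / n         # Y intercept, about 8.704
explained_ss = sp_xy ** 2 / ss_x      # explained SS, about 23.51
unexplained_ss = ss_y - explained_ss  # unexplained SS, about 0.616
```

The computed values agree with steps 2 through 6 of the box to within rounding.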

11.4 / MORE THAN ONE VALUE OF Y FOR EACH VALUE OF X 243

sum of squares Σŷ² = Σ(Ŷ − Ȳ)² and the unexplained sum of squares Σd²_Y·X = Σ(Y − Ŷ)². The term subtracted from Σy² is obviously the explained sum of squares, as shown in Expression (11.7) below:

    Σŷ² = b²_Y·X Σx² = (Σxy/Σx²)² Σx² = (Σxy)²/Σx²        (11.7)

so that the unexplained sum of squares is Σd²_Y·X = Σy² − (Σxy)²/Σx². That is demonstrated in Appendix A1.

11.4 More than one value of Y for each value of X

We now take up Model I regression as originally defined in Section 11.2, for the case in which, for each value of the treatment X, we sample Y repeatedly, obtaining a sample distribution of Y values at each of the chosen points of X. We have selected an experiment from the laboratory of one of us (Sokal) in which Tribolium beetles were reared from eggs to adulthood at four different densities. The percentage survival to adulthood was calculated for varying numbers of replicates at these densities. Following Section 10.2, these percentages were given arcsine transformations, which are listed in Box 11.2. These transformed values are more likely to be normal and homoscedastic than are percentages. The arrangement of these data is very much like that of a single-classification model I anova. There are four different densities and several survival values at each density. We would now like to determine whether there are differences in survival among the four groups, and also whether we can establish a regression of survival on density.

A first approach, therefore, is to carry out an analysis of variance, using the methods of Section 8.3 and Table 8.1. Our aim in doing this is illustrated in Figure 11.7 (see page 247). When we find a marked regression of the means on X, as shown in Figure 11.7A, we usually will find a significant difference among the means by an anova. If the analysis of variance were not significant, this would indicate, as shown in Figure 11.7B, that the means are not significantly different from each other, and it would be unlikely that a regression line fitted to these data would have a slope significantly different from zero. However, when the means increase or decrease slightly as X increases, it may be that they are not different enough for the mean square among groups to be significant by anova but that a significant regression can still be found. Although both the analysis of variance and linear regression test the same null hypothesis, equality of means, the regression test is more powerful (less type II error) against the alternative hypothesis that there is a linear relationship between the group means and the independent variable X. Thus, we cannot turn

244 CHAPTER 11 / REGRESSION

BOX 11.2
Computation of regression with more than one value of Y per value of X.
The variates Y are arcsine transformations of the percentage survival of the beetle Tribolium castaneum at 4 densities (X = number of eggs per gram of flour medium).

                          Density = X (a = 4)
Survival, in degrees      5/g      20/g     50/g     100/g
                          61.68    68.21    58.69    53.13
                          58.37    66.72    58.37    49.89
                          69.30    63.44    58.37    49.82
                          61.68    60.84
                          69.30
n_i                       5        4        3        3          Σn_i = 15
ΣY                        320.33   259.21   175.43   152.84     ΣΣY = 907.81
Ȳ                         64.07    64.80    58.48    50.95

Source: Data by Sokal (1967).

The anova computations are carried out as in Table 8.1.

Anova table

Source of variation           df    SS         MS        Fs
Ȳ − Ȳ̄  Among groups           3     423.7016   141.2339  11.20**
Y − Ȳ  Within groups          11    138.6867   12.6079
Y − Ȳ̄  Total                  14    562.3883

The groups differ significantly with respect to survival. We proceed to test whether the differences among the survival values can be accounted for by linear regression on density. If Fs < [1/(a − 1)]F_α[a−1, Σn_i−a], it is impossible for regression to be significant.

Computation for regression analysis

1. Sum of X weighted by sample size = Σn_iX_i = 5(5) + 4(20) + 3(50) + 3(100) = 555

11.4 / MORE THAN ONE VALUE OF Y FOR EACH VALUE OF X 245

BOX 11.2 Continued

2. Sum of X² weighted by sample size = Σn_iX_i² = 5(5²) + 4(20²) + 3(50²) + 3(100²) = 39,225

3. Correction term for X = CT_X = (quantity 1)²/Σn_i = (555)²/15 = 20,535

4. Sum of squares of X = Σx² = quantity 2 − quantity 3 = 39,225 − 20,535 = 18,690

5. Sum of products of X and Ȳ weighted by sample size = Σn_iX_iȲ_i = ΣX_i(n_iȲ_i) = 5(320.33) + ··· + 100(152.84) = 30,841.35

6. Sum of products = Σxy = quantity 5 − (quantity 1)(ΣΣY)/Σn_i = 30,841.35 − (555)(907.81)/15 = −2747.62

7. Explained sum of squares = Σŷ² = (Σxy)²/Σx² = (quantity 6)²/quantity 4 = (−2747.62)²/18,690 = 403.9281

8. Unexplained sum of squares = Σd²_Ȳ·X = SS_groups − Σŷ² = SS_groups − quantity 7 = 423.7016 − 403.9281 = 19.7735

246 CHAPTER 11 / REGRESSION

BOX 11.2 Continued

Completed anova table with regression

Source of variation                 df    SS         MS        Fs
Among densities (groups)            3     423.7016   141.2339  11.20**
  Linear regression                 1     403.9281   403.9281  40.86*
  Deviations from regression        2     19.7735    9.8868    <1 ns
Within groups                       11    138.6867   12.6079
Total                               14    562.3883

In addition to the familiar mean squares, MS_groups and MS_within, we now have the mean square due to linear regression, MS_Ŷ, and the mean square for deviations from regression, MS_Ȳ·X = s²_Ȳ·X. To test if the deviations from linear regression are significant, compare the ratio Fs = MS_Ȳ·X/MS_within with F_α[a−2, Σn_i−a]. Since we find Fs < 1, we accept the null hypothesis that the deviations from linear regression are zero. To test for the presence of linear regression, we therefore tested MS_Ŷ over the mean square of deviations from regression, s²_Ȳ·X, and, since Fs = 403.9281/9.8868 = 40.86 is greater than F_0.05[1,2] = 18.51, we clearly reject the null hypothesis that there is no regression, or that β = 0.

9. Regression coefficient (slope of regression line) = b_Y·X = Σxy/Σx² = quantity 6/quantity 4 = −2747.62/18,690 = −0.147,01

10. Y intercept = a = Ȳ̄ − b_Y·X X̄ = ΣΣY/Σn_i − b_Y·X (quantity 1/Σn_i) = 907.81/15 − (−0.147,01)(555/15) = 60.5207 + 5.4394 = 65.9601

Hence, the regression equation is Ŷ = 65.9601 − 0.147,01X.

this argument around and say that a significant difference among means as shown by an anova necessarily indicates that a significant linear regression can be fitted to these data. In Figure 11.7C the means follow a U-shaped function (a parabola). Though the means would likely be significantly different from each other, clearly a straight line fitted to these data would be a horizontal line halfway between the upper and the lower points. In such a set of data, linear regression can explain only little of the variation of the dependent variable. However, a curvilinear parabolic regression would fit these data and remove

11.4 / MORE THAN ONE VALUE OF Y FOR EACH VALUE OF X 247

most of the variance of Y. Remember that in real examples you will rarely ever get a regression as clear-cut as the linear case in Figure 11.7A, or the curvilinear one in Figure 11.7C, nor will you necessarily get heterogeneity of the type shown in Figure 11.7D, in which any straight line fitted to the data would be horizontal. You are more likely to get data in which linear regression can be demonstrated, but which will not fit a straight line well. Sometimes the residual deviations of the means around linear regression can be removed by changing from linear to curvilinear regression (as is suggested by the pattern of points in Figure 11.7E), and sometimes they may remain as inexplicable residual heterogeneity around the regression line, as indicated in Figure 11.7D.

A similar case is shown in Figure 11.7F, in which the means describe a periodically changing phenomenon, rising and falling alternatingly. Again the regression line for these data has slope zero. A curvilinear (cyclical) regression could also be fitted to such data, but our main purpose in showing this example is to indicate that there could be heterogeneity among the means of Y apparently unrelated to the magnitude of X.

FIGURE 11.7 Differences among means and linear regression. General trends only are indicated by these figures; significance of any of these would depend on the outcomes of appropriate tests. [Panels A through F contrast patterns of group means of Y plotted against X.]

We carry out the computations following the by now familiar outline for analysis of variance and obtain the anova table shown in Box 11.2. The three degrees of freedom among the four groups yield a mean square that would be

248 CHAPTER 11 / REGRESSION

highly significant if tested over the within-groups mean square. Note that the major quantities in this table are the same as in a single-classification anova, but in addition we now have a sum of squares representing linear regression, which is always based on one degree of freedom. This sum of squares is subtracted from the SS among groups, leaving a residual sum of squares (of two degrees of freedom in this case) representing the deviations from linear regression.

We should understand what these sources of variation represent. The linear model for regression with replicated Y per X is derived directly from Expression (7.2), which is Y_ij = μ + α_i + ε_ij. The treatment effect α_i = βx_i + D_i, where βx_i is the component due to linear regression and D_i is the deviation of the mean Ȳ_i from regression, which is assumed to have a mean of zero and a variance of σ²_D. Thus we can write

    Y_ij = μ + βx_i + D_i + ε_ij

The SS due to linear regression represents that portion of the SS among groups that is explained by linear regression on X. The SS due to deviations from regression represents the residual variation or scatter around the regression line, as illustrated by the various examples in Figure 11.7. The SS within groups is a measure of the variation of the items around each group mean.

To compute regression statistics, we compute the sum of squares of X, the sum of products of X and Y, the explained sum of squares of Y, and the unexplained sum of squares of Y. The formulas will look unfamiliar because of the complication of the several Y's per value of X. The computations for the sum of squares of X involve the multiplication of X by the number of items in the study. Thus, though there may appear to be only four densities, there are, in fact, as many densities (although of only four magnitudes) as there are values of Y in the study. The additional steps for the regression analysis follow in Box 11.2. Having completed the computations, we again present the results in the form of an anova table, as shown in Box 11.2.

We first test whether the mean square for deviations from regression (MS_Ȳ·X = s²_Ȳ·X) is significant by computing the variance ratio of MS_Ȳ·X over the within-groups mean square. In our case, the deviations from regression are clearly not significant, since the mean square for deviations is less than that within groups. We now test the mean square for regression, MS_Ŷ, over the mean square for deviations from regression and find it to be significant. Thus linear regression on density has clearly removed a significant portion of the variation of survival values. Significance of the mean square for deviations from regression could mean either that Y is a curvilinear function of X or that there is a large amount of random heterogeneity around the regression line (as already discussed in connection with Figure 11.7; actually a mixture of both conditions may prevail). Some workers, when analyzing regression examples with several Y variates at each value of X, proceed as follows when the deviations from regression are

11.4 / MORE THAN ONE VALUE OF Y FOR EACH VALUE OF X 249

not significant. They add the sum of squares for deviations and that within groups, as well as their degrees of freedom. Then they calculate a pooled mean square by dividing the pooled sums of squares by the pooled degrees of freedom. The mean square for regression is then tested over the pooled mean square, which, since it is based on more degrees of freedom, will be a better estimator of the error variance and should permit more sensitive tests. Other workers prefer never to pool, arguing that pooling the two sums of squares confounds the pseudoreplication of having several Y variates at each value of X with the true replication of having more X points to determine the slope of the regression line. Thus if we had only three X points but one hundred Y variates at each, we would be able to estimate the mean value of Y for each of the three X values very well, but we would be estimating the slope of the line on the basis of only three points, a risky procedure. The second attitude, forgoing pooling, is more conservative and will decrease the likelihood that a nonexistent regression will be declared significant.

We complete the computation of the regression coefficient and regression equation as shown at the end of Box 11.2. Our conclusions are that as density increases, survival decreases, and that this relationship can be expressed by a significant linear regression of the form Ŷ = 65.9601 − 0.147,01X, where X is density per gram and Y is the arcsine transformation of percentage survival. This relation is graphed in Figure 11.8.

FIGURE 11.8 Linear regression fitted to data of Box 11.2. Sample means are identified by plus signs. Abscissa: density (number of eggs/g of medium).

The sums of products and regression slopes of both examples discussed so far have been negative, and you may begin to believe that this is always so. However, it is only an accident of the choice of these two examples. In the exercises at the end of this chapter a positive regression coefficient will be encountered.

When we have equal sample sizes of Y values for each value of X, the computations become simpler. Steps 1 through 8 in Box 11.2 become simplified because the unequal sample sizes n_i are replaced by a constant sample size n, which can generally be factored out of the various expressions. Significance tests applied to such cases are also simplified. First we carry out the anova in the manner of Box 8.1.
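The regression steps of Box 11.2 can be sketched from the group summaries alone (an illustrative sketch of ours, not part of the original box; it takes the anova sums of squares among and within groups as given):

```python
# Regression with more than one Y per X (Box 11.2), computed from the
# group summaries: density X_i, sample size n_i, and the group totals of
# the arcsine-transformed survival values.
X = [5, 20, 50, 100]                       # eggs per g of medium
n = [5, 4, 3, 3]
totals = [320.33, 259.21, 175.43, 152.84]  # sum of Y within each group

sum_n = sum(n)                                     # 15
q1 = sum(ni * xi for ni, xi in zip(n, X))          # weighted sum of X = 555
q2 = sum(ni * xi * xi for ni, xi in zip(n, X))     # weighted sum of X^2 = 39,225
ss_x = q2 - q1 ** 2 / sum_n                        # SS of X = 18,690
q5 = sum(xi * ti for xi, ti in zip(X, totals))     # weighted sum of products
sp_xy = q5 - q1 * sum(totals) / sum_n              # sum of products, -2747.62

explained = sp_xy ** 2 / ss_x              # SS due to linear regression (1 df)
ss_groups, ss_within = 423.7016, 138.6867  # from the anova of Box 11.2
deviations = ss_groups - explained         # deviations from regression (2 df)

Fs_dev = (deviations / 2) / (ss_within / 11)  # < 1, ns
Fs_regr = explained / (deviations / 2)        # about 40.9, significant

b = sp_xy / ss_x                           # regression coefficient, about -0.147
a = sum(totals) / sum_n - b * q1 / sum_n   # Y intercept, about 65.96
```

The two F ratios reproduce the tests in the completed anova table; a pooled error term, if preferred, would instead divide (deviations + ss_within) by the pooled 13 degrees of freedom.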

250 CHAPTER 11 / REGRESSION

11.5 Tests of significance in regression

We have so far interpreted regression as a method for providing an estimate Ŷ_i, given a value of X_i. Another interpretation is as a method for explaining some of the variation of the dependent variable Y in terms of the variation of the independent variable X. The SS of a sample of Y values, Σy², is computed by summing and squaring the deviations y = Y − Ȳ. In Figure 11.9 we can see that the deviation y can be decomposed into two parts, ŷ and d_Y·X.

FIGURE 11.9 Schematic diagram to show the relations involved in partitioning the sum of squares of the dependent variable.

It is clear from Figure 11.9 that the deviation ŷ = Ŷ − Ȳ represents the deviation of the estimated value Ŷ from the mean of Y. The height of ŷ is clearly a function of x. We have already seen that ŷ = bx (Expression (11.3)); in analytical geometry this is called the point-slope form of the equation. If b, the slope of the regression line, were steeper, ŷ would be relatively larger for a given value of x. The remaining portion of the deviation y is d_Y·X. It represents the residual variation of the variable Y after the explained variation has been subtracted. We can see that y = ŷ + d_Y·X by writing out these deviations explicitly as

    Y − Ȳ = (Ŷ − Ȳ) + (Y − Ŷ)

For each of these deviations we can compute a corresponding sum of squares. Appendix A1 gives the calculator formula for the unexplained sum of squares,

    Σd²_Y·X = Σy² − (Σxy)²/Σx²

Transposed, this yields

    Σy² = (Σxy)²/Σx² + Σd²_Y·X

Of course, (Σxy)²/Σx²

11.5 / TESTS OF SIGNIFICANCE IN REGRESSION 251

corresponds to Σŷ² (as shown in the previous section), and Σy² − (Σxy)²/Σx² corresponds to Σd²_Y·X. Thus we are able to partition the sum of squares of the dependent variable in regression in a way analogous to the partition of the total SS in analysis of variance. You may wonder how the additive relation of the deviations can be matched by an additive relation of their squares without the presence of any cross products. Some simple algebra in Appendix A1 will show that the cross products cancel out. The magnitude of the unexplained deviation d_Y·X is independent of the magnitude of the explained deviation ŷ, just as in anova the magnitude of the deviation of an item from the sample mean is independent of the magnitude of the deviation of the sample mean from the grand mean.

We can undertake an analysis of variance of the partitioned sums of squares as follows:

Source of variation                              df     SS                       MS        Expected MS
Explained: Ŷ − Ȳ (estimated Y from mean of Y)    1      Σŷ² = (Σxy)²/Σx²         s²_Ŷ      σ²_Y·X + β²Σx²
Unexplained, error: Y − Ŷ                        n − 2  Σd²_Y·X = Σy² − Σŷ²      s²_Y·X    σ²_Y·X
  (observed Y from estimated Y)
Total: Y − Ȳ (observed Y from mean of Y)         n − 1  Σy²

The mean square due to linear regression, s²_Ŷ, is based on one degree of freedom, and consequently (n − 2) df remain for the error MS, since the total sum of squares possesses n − 1 degrees of freedom. The test is of the null hypothesis H₀: β = 0. The explained mean square, or mean square due to linear regression, measures the amount of variation in Y accounted for by variation of X. It is tested over the unexplained mean square, s²_Y·X, which measures the residual variation and is used as an error MS. The significance test is Fs = s²_Ŷ/s²_Y·X. When we carry out such an anova on the weight loss data of Box 11.1, we obtain the following results:

Source of variation                       df    SS        MS        Fs
Explained, due to linear regression       1     23.5145   23.5145   267.18**
Unexplained, error around regression line 7     0.6161    0.088,01
Total                                     8     24.1306

It is clear from the observed value of Fs that a large and significant portion of the variance of Y has been explained by regression on X. This relationship between regression and analysis of variance can be carried further.
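The anova just described can be sketched as follows (our illustration, reusing the quantities already computed in Box 11.1):

```python
# Anova of the partitioned sums of squares for the weight-loss data:
# explained SS (1 df) tested over the unexplained SS (n - 2 df).
n = 9
ss_x, sp_xy, ss_y = 8301.3889, -441.8178, 24.1306   # from Box 11.1

explained = sp_xy ** 2 / ss_x     # SS due to linear regression
unexplained = ss_y - explained    # error SS
ms_regr = explained / 1
ms_error = unexplained / (n - 2)  # s2(Y.X), 7 df

Fs = ms_regr / ms_error           # about 267; test of H0: beta = 0
```

The resulting Fs of about 267 far exceeds the critical F with 1 and 7 degrees of freedom, matching the table above.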

BOX 11.3
Standard errors in regression. (A two-column table of standard-error formulas: one column for more than one value of Y per value of X, the other for a single value of Y per value of X.)

11.5 / TESTS OF SIGNIFICANCE IN REGRESSION 253

We now proceed to the standard errors for various regression statistics, their employment in tests of hypotheses, and the computation of confidence limits. Box 11.3 lists these standard errors in two columns. The right-hand column is for the case with a single Y value for each value of X. The first row of the table gives the standard error of the regression coefficient, which is simply the square root of the ratio of the unexplained variance to the sum of squares of X:

    s_b = √(s²_Y·X / Σx²)

Note that the unexplained variance s²_Y·X is a fundamental quantity that is a part of all standard errors in regression. The standard error of the regression coefficient permits us to test various hypotheses and to set confidence limits to our sample estimate of b. The computation of s_b is illustrated in step 1 of Box 11.4, using the weight loss example of Box 11.1.

BOX 11.4
Significance tests and computation of confidence limits of regression statistics. Single value of Y for each value of X. Based on standard errors and degrees of freedom of Box 11.3; using the example of Box 11.1.

n = 9   X̄ = 50.389   Ȳ = 6.022   b_Y·X = −0.053,22   Σx² = 8301.3889   s²_Y·X = 0.088,01

1. Standard error of the regression coefficient:
   s_b = √(s²_Y·X/Σx²) = √(0.088,01/8301.3889) = √0.000,010,602 = 0.003,256

2. Testing significance of the regression coefficient:
   t_s = (b − 0)/s_b = −0.053,22/0.003,256 = −16.345, with n − 2 = 7 degrees of freedom
   t_0.001[7] = 5.408, so P < 0.001

3. 95% confidence limits for the regression coefficient:
   t_0.05[7] s_b = 2.365(0.003,256) = 0.007,70
   L₁ = b − 0.007,70 = −0.053,22 − 0.007,70 = −0.060,92
   L₂ = b + 0.007,70 = −0.053,22 + 0.007,70 = −0.045,52

4. Standard error of the sampled mean Ȳ (at X̄):
   s_Ȳ = √(s²_Y·X/n) = √(0.088,01/9) = 0.098,89

254 CHAPTER 11 / REGRESSION

BOX 11.4 Continued

5. 95% confidence limits for the mean μ_Y corresponding to X̄ (Ȳ = 6.022):
   t_0.05[7] s_Ȳ = 2.365(0.098,89) = 0.2339
   L₁ = Ȳ − 0.2339 = 6.022 − 0.2339 = 5.7881
   L₂ = Ȳ + 0.2339 = 6.022 + 0.2339 = 6.2559

6. Standard error of Ŷ_i, an estimated Y for a given value of X_i:
   s_Ŷ = √(s²_Y·X [1/n + (X_i − X̄)²/Σx²])
   For example, for X_i = 100% relative humidity,
   s_Ŷ = √(0.088,01 [1/9 + (100 − 50.389)²/8301.3889]) = √(0.088,01 × 0.407,60) = √0.035,873 = 0.189,40
   and Ŷ_i = 8.7037 − 0.053,22(100) = 3.3817

7. 95% confidence limits for μ_Yi corresponding to the estimate Ŷ_i = 3.3817 at X_i = 100% relative humidity:
   t_0.05[7] s_Ŷ = 2.365(0.189,40) = 0.4479
   L₁ = Ŷ_i − 0.4479 = 3.3817 − 0.4479 = 2.9338
   L₂ = Ŷ_i + 0.4479 = 3.3817 + 0.4479 = 3.8296

The significance test illustrated in step 2 of Box 11.4 tests the "significance" of the regression coefficient; that is, it tests the null hypothesis that the sample value of b comes from a population with a parametric value β = 0 for the regression coefficient. This is a t test, the appropriate degrees of freedom being n − 2 = 7. We saw earlier (Section 8.4) that t²_s = F_s. When we square t_s = −16.345 from Box 11.4, we obtain 267.16, which (within rounding error) equals the value of F_s found in the anova earlier in this section. If we cannot reject the null hypothesis, there is no evidence that the regression is significantly deviant from zero in either the positive or negative direction. Our conclusions for the weight loss data are that a highly significant negative regression is present. The significance test in step 2 of Box 11.4 could, of course, also be used to test whether b is significantly different from a parametric value β other than zero.

Setting confidence limits to the regression coefficient presents no new features, since b is normally distributed. The computation is shown in step 3 of Box 11.4. In view of the small magnitude of s_b, the confidence interval is quite narrow. The confidence limits are shown in Figure 11.10 as dashed lines representing the 95% bounds of the slope. Note that the regression line as well as its

11.5 / TESTS OF SIGNIFICANCE IN REGRESSION 255

confidence limits passes through the means of X and Y. Variation in b therefore rotates the regression line about the point X̄, Ȳ.

FIGURE 11.10 95% confidence limits to the regression line of Figure 11.6.

Next, we calculate a standard error for the observed sample mean Ȳ. Thus, now that we have regressed Y on X, we are able to account for (that is, hold constant) some of the variation of Y in terms of the variation of X. The variance of Y around the point X̄, Ȳ on the regression line is less than s²_Y; it is s²_Y·X. At X̄ we may therefore compute confidence limits of Ȳ, using as a standard error of the mean s_Ȳ = √(s²_Y·X/n), with n − 2 degrees of freedom. This standard error is computed in step 4 of Box 11.4, and 95% confidence limits for the sampled mean Ȳ at X̄ are calculated in step 5. These limits (5.7881 to 6.2559) are considerably narrower than the confidence limits for the mean based on the conventional standard error s_Ȳ = s_Y/√n, which would be from 4.687 to 7.357. Thus, knowing the relative humidity greatly reduces much of the uncertainty in weight loss.

The standard error for Ȳ is only a special case of the standard error for any estimated value Ŷ along the regression line. A new factor, whose magnitude is in part a function of the distance of a given value X_i from its mean X̄, now enters the error variance. This factor is seen in the third row of Box 11.3 as the deviation X_i − X̄, squared and divided by the sum of squares of X. The farther away X_i is from its mean, the greater will be the error of estimate, because of the uncertainty about the true slope, β, of the regression line. This standard error is computed in step 6 of Box 11.4 for a relative humidity X_i = 100%, and the 95% confidence limits for μ_Yi, the parametric value corresponding to the estimate Ŷ_i, are shown in step 7 of that box. Note that the width of this confidence interval is 3.8296 − 2.9338 = 0.8958, considerably wider than the confidence interval at X̄ calculated in step 5, which was 6.2559 − 5.7881 = 0.4678. If we calculate a series of confidence limits for different values of X_i, we obtain a biconcave confidence belt, as shown in Figure 11.11. The farther we get away from the mean, the less reliable are our estimates of Y.
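The standard errors and limits of Box 11.4 can be sketched as follows (our illustration; the critical value t_0.05[7] = 2.365 is taken from a t table):

```python
import math

# Box 11.4-style significance test and confidence limits for the
# weight-loss regression (single Y per X, n - 2 = 7 df).
n = 9
x_bar, y_bar = 50.389, 6.022
b, ss_x = -0.05322, 8301.3889
s2_yx = 0.08801          # unexplained (error) mean square
t05 = 2.365              # t_{0.05[7]}

s_b = math.sqrt(s2_yx / ss_x)          # standard error of b (step 1)
t_s = b / s_b                          # about -16.3; t_s**2 equals the anova Fs (step 2)
ci_b = (b - t05 * s_b, b + t05 * s_b)  # 95% limits for the slope (step 3)

Xi = 100.0                             # estimate at X_i = 100% relative humidity
y_hat = y_bar + b * (Xi - x_bar)       # about 3.38
s_yhat = math.sqrt(s2_yx * (1 / n + (Xi - x_bar) ** 2 / ss_x))   # step 6
ci_yhat = (y_hat - t05 * s_yhat, y_hat + t05 * s_yhat)           # step 7
```

Evaluating s_yhat for a series of X_i values and plotting the resulting limits traces out the biconcave confidence belt of Figure 11.11.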

256 CHAPTER 11 / REGRESSION

FIGURE 11.11 95% confidence limits to regression estimates for the data of Figure 11.6. (Abscissa: relative humidity, 0 to 100%.)

Furthermore, the linear regressions that we fit are often only rough approximations to the more complicated functional relationships between biological variables. Very often there is an approximately linear relation along a certain range of the independent variable, beyond which range the slope changes rapidly. For example, heartbeat of a poikilothermic animal will be directly proportional to temperature over a range of tolerable temperatures, but beneath and above this range the heartbeat will eventually decrease as the animal freezes or suffers heat prostration. Hence common sense indicates that one should be very cautious about extrapolating from a regression equation if one has any doubts about the linearity of the relationship.

The confidence limits for α, the parametric value of a, are a special case of those for μ_Yi at X_i = 0, and the standard error of a is therefore

    s_a = √(s²_Y·X [1/n + X̄²/Σx²])

Tests of significance in regression analyses where there is more than one variate Y per value of X are carried out in a manner similar to that of Box 11.4, except that the standard errors in the left-hand column of Box 11.3 are employed.

Another significance test in regression is a test of the differences between two regression lines. Why would we be interested in testing differences between regression slopes? We might find that different toxicants yield different dosage-mortality curves, or that different drugs yield different relationships between dosage and response. Or genetically differing cultures might yield different responses to increasing density, which would be important for understanding the effect of natural selection in these cultures. The regression slope of one variable on another is as fundamental a statistic of a sample as is the mean or the standard deviation, and in comparing samples

It may be as important to compare regression coefficients as it is to compare these other statistics. The test for the difference between two regression coefficients can be carried out as an F test. We compute

    F_s = (b1 - b2)^2 / { [(Σx1² + Σx2²) / ((Σx1²)(Σx2²))] s̄²_{Y·X} }

where s̄²_{Y·X} is the weighted average s²_{Y·X} of the two groups. Its formula is

    s̄²_{Y·X} = (Σd²_{Y·X,1} + Σd²_{Y·X,2}) / (n1 + n2 - 4)

Compare F_s with F_{α[1, ν2]}. For one Y per value of X, ν2 = n1 + n2 - 4; when there is more than one variate Y per value of X, ν2 = a1 + a2 - 4.

11.6 The uses of regression

We have been so busy learning the mechanics of regression analysis that we have not had time to give much thought to the various applications of regression. We shall take up four more or less distinct applications in this section. All are discussed in terms of Model I regression.

First, we might mention the study of causation. You have undoubtedly been cautioned from your earliest scientific experience not to confuse concomitant variation with causation. Variables may vary together, yet this covariation may be accidental, or both variables may be functions of a common cause affecting them; the latter cases are usually examples of Model II regression, with both variables varying freely. The idea of causation is a complex, philosophical one that we shall not go into here. If we wish to know whether variation in a variable Y is caused by changes in another variable X, we manipulate X in an experiment and see whether we can obtain a significant regression of Y on X. When we manipulate one variable and find that such manipulations affect a second variable, we generally are satisfied that the variation of the independent variable X is the cause of the variation of the dependent variable Y (not the cause of the variable!). However, even here it is best to be cautious. A possible mistake is to invert the cause-and-effect relationship: it is unlikely that anyone would suppose that heartbeat rate affects the temperature of the general environment, but we might well be mistaken about the cause-and-effect relationships between, for instance, two chemical substances in the blood. Regression analysis is nevertheless a commonly used device for screening out causal relationships. When we find that heartbeat rate in a cold-blooded animal is a function of ambient temperature, we may conclude that temperature is one of the causes of differences in heartbeat rate; there may well be other factors affecting rate of heartbeat.
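For readers who like to check such a test numerically, the computation can be sketched in a short program. This is an illustrative sketch only (the function name and the toy data are ours, not the text's), written for the one-Y-per-X case with ν2 = n1 + n2 - 4:

```python
def slope_difference_F(x1, y1, x2, y2):
    """F test for the difference between two Model I regression
    slopes (one Y per value of X).  Returns (F_s, df2); compare F_s
    with the critical F[alpha, 1, n1 + n2 - 4]."""
    def parts(x, y):
        n = len(x)
        xbar, ybar = sum(x) / n, sum(y) / n
        sxx = sum((xi - xbar) ** 2 for xi in x)
        sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        syy = sum((yi - ybar) ** 2 for yi in y)
        b = sxy / sxx
        return b, sxx, syy - sxy ** 2 / sxx   # slope, Σx², unexplained SS
    b1, sxx1, ss1 = parts(x1, y1)
    b2, sxx2, ss2 = parts(x2, y2)
    df2 = len(x1) + len(x2) - 4
    s2 = (ss1 + ss2) / df2                    # weighted average s²_{Y·X}
    fs = (b1 - b2) ** 2 / (((sxx1 + sxx2) / (sxx1 * sxx2)) * s2)
    return fs, df2

# Toy data with nearly identical slopes (about 1.98 versus 2.00):
fs, df2 = slope_difference_F([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.0],
                             [1, 2, 3, 4], [1.0, 3.0, 5.0, 7.0])
# fs is small here, far below any critical F with 1 and 4 df
```

The returned F_s is then compared with the tabled critical value F_{α[1, ν2]}, just as in the hand computation.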

While a significant regression of Y on X does not prove that changes in X are the cause of variations in Y, the converse statement is true: when we find no significant regression of Y on X, we can in all but the most complex cases infer quite safely (allowing for the possibility of type II error) that deviations of X do not affect Y.

The description of scientific laws and prediction is a second general area of application of regression analysis. Science aims at mathematical description of relations between variables in nature, and Model I regression analysis permits us to estimate functional relationships between variables, one of which is subject to error. These functional relationships do not always have clearly interpretable biological meaning. Thus, in many cases it may be difficult to assign a biological interpretation to the statistics a and b, or their corresponding parameters α and β. When we can do so, we speak of a structural mathematical model, one whose component parts have clear scientific meaning. However, mathematical curves that are not structural models are also of value in science. Most regression lines are empirically fitted curves, in which the functions simply represent the best mathematical fit (by a criterion such as least squares) to an observed set of data. Biologists frequently categorize work as either descriptive or experimental, with the implication that only the latter can be analytical. However, statistical approaches applied to descriptive work can, in a number of instances, take the place of experimental techniques quite adequately; occasionally they are even to be preferred.

Comparison of dependent variates is another application of regression. As soon as it is established that a given variable is a function of another one, as in the earlier example in which we found survival of beetles to be a function of density, one is bound to ask to what degree any observed difference in survival between two samples of beetles is a function of the density at which they have been raised. It would be unfair to compare beetles raised at very high density (and expected to have low survival) with those raised under optimal conditions of low density. This is the same point of view that makes us disinclined to compare the mathematical knowledge of a fifth-grader with that of a college student. Since we could undoubtedly obtain a regression of mathematical knowledge on years of schooling in mathematics, we should be comparing how far a given individual deviates from his or her expected value based on such a regression; relative to his or her classmates and age group, the fifth-grader may be far better than is the college student relative to his or her peer group. This suggests that we calculate adjusted Y values that allow for the magnitude of the independent variable X. A conventional way of calculating such adjusted Y values is to estimate the Y value one would expect if the independent variable were equal to its mean X̄ and the observation retained its observed deviation (d_{Y·X}) from the regression line. Since Ŷ = Ȳ when X = X̄, the adjusted Y value can be computed as

    Y_adj = Ȳ + d_{Y·X} = Y - bx    (11.8)

Statistical control is an application of regression that is not widely known among biologists and represents a scientific philosophy that is not well established in biology outside agricultural circles.
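Expression (11.8) is easy to verify numerically. The following sketch (the function name and data are hypothetical, not from the text) computes adjusted Y values by removing from each Y its regression-predicted departure from the mean of X:

```python
def adjusted_y(x, y):
    """Adjusted Y values in the sense of Expression (11.8):
    Y_adj = Y - b*x, with x = X - mean(X), so that each observation
    keeps its deviation d_{Y.X} from the regression line but is
    referred to the value expected at X = mean(X)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return [yi - b * (xi - xbar) for xi, yi in zip(x, y)]

# If Y depends exactly linearly on X, every adjusted value collapses
# to the mean of Y (here 8.0); with scatter, the adjusted values
# retain only the unexplained deviations.
print(adjusted_y([1, 2, 3, 4], [5, 7, 9, 11]))   # [8.0, 8.0, 8.0, 8.0]
```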

An example will clarify this technique. Let us assume that we are studying the effects of various diets on blood pressure in rats. We find that the variability of blood pressure in our rat population is considerable, even before we introduce differences in diet. Further study reveals that the variability is largely due to differences in age among the rats of the experimental population; this can be demonstrated by a significant linear regression of blood pressure on age. To reduce the variability of blood pressure in the population, we should keep the age of the rats constant. The reaction of most biologists at this point will be to repeat the experiment using rats of only one age group; this is a valid, commonsense approach, which is part of the experimental method. An alternative approach is superior in some cases, when it is impractical or too costly to hold the variable constant. We might continue to use rats of variable ages and simply record the age of each rat as well as its blood pressure. Then we regress blood pressure on age and use an adjusted mean as the basic blood pressure reading for each individual. We can now evaluate the effect of differences in diet on these adjusted means. Or we can analyze the effects of diet on the unexplained deviations, d_{Y·X}, after the experimental blood pressures have been regressed on age (which amounts to the same thing).

What are the advantages of such an approach? Often it will be impossible to secure adequate numbers of individuals all of the same age. By using regression we are able to utilize all the individuals in the population, which would not be so if we restricted ourselves to a single age group. The use of statistical control assumes that it is relatively easy to record the independent variable X and, of course, that this variable can be measured without error, which would be generally true of such a variable as age of a laboratory animal. Statistical control may also be preferable because we obtain information over a wider range of both Y and X, and also because we obtain added knowledge about the relations between these two variables. These approaches are attempts to substitute statistical manipulation of a concomitant variable for control of the variable by experimental means.

11.7 Residuals and transformations in regression

An examination of regression residuals may detect outliers in a sample. Such outliers may reveal systematic departures from regression that can be adjusted by transformation of scale or by the fitting of a curvilinear regression line. Outliers may be due to an observational or recording error, or to contamination of the sample studied. When it is believed that an outlier is due to such an error, removal of the outlier may improve the regression fit considerably. In examining the magnitude of residuals, we should also allow for the corresponding deviation from X̄: outlying values of Y that correspond to deviant variates of X will have a greater influence in determining the slope of the regression line than will variates close to X̄.

We can examine the residuals in column (9) of Table 11.1 for the weight loss data. Although several residuals are quite large, they tend to be relatively close to Ŷ. Only the residual for 0% relative humidity is suspiciously large and, at the same time, is the single most deviant observation from X̄. Perhaps the reading at this extreme relative humidity does not fit into the generally linear relations described by the rest of the data.

In transforming either or both variables in regression, we aim at simplifying a curvilinear relationship to a linear one. Rather than fit a complicated curvilinear regression to points plotted on an arithmetic scale, it is far more expedient to compute a simple linear regression for variates plotted on a transformed scale. A general test of whether transformation will improve linear regression is to graph the points to be fitted on ordinary graph paper as well as on graph paper with a scale suspected to improve the relationship. If the function straightens out and the systematic deviation of points around a visually fitted line is reduced, the transformation is worthwhile. We shall briefly discuss a few of the transformations commonly applied in regression analysis. Square root and arcsine transformations (Section 10.2) are not mentioned below, but they are also effective in regression cases involving data suited to such transformations.

The logarithmic transformation is the most frequently used; anyone doing statistical work is therefore well advised to keep a supply of semilog paper handy. Most frequently we transform the dependent variable Y. This transformation is indicated when percentage changes in the dependent variable vary directly with changes in the independent variable. Such a relationship is indicated by the equation Y = a e^(bX), where a and b are constants and e is the base of the natural logarithm. After logarithmic transformation, we obtain

    log Y = log a + b(log e)X

In this expression log e is a constant which, when multiplied by b, yields a new constant factor b' that is equivalent to a regression coefficient; similarly, log a is a new Y intercept, a'. We can then simply regress log Y on X to obtain the function log Y = a' + b'X and obtain all our prediction equations and confidence intervals in this form. Figure 11.12 shows an example of transforming the dependent variate to logarithmic form, which results in considerable straightening of the response curve. Such a procedure generally increases the proportion of the variance of the dependent variable explained by the independent variable, and the distribution of the deviations of points around the regression line tends to become normal and homoscedastic.

A logarithmic transformation of the independent variable in regression is effective when proportional changes in the independent variable produce linear responses in the dependent variable. An example might be the decline in weight of an organism as density increases, where the successive increases in density need to be in a constant ratio in order to effect equal decreases in weight. This belongs to a well-known class of biological phenomena, another example of which is the Weber-Fechner law in physiology and psychology, which states that a stimulus has to be increased by a constant proportion in order to produce a constant increment in response. Figure 11.13 illustrates how logarithmic transformation of the independent variable results in the straightening of the regression line.
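The algebra of the logarithmic transformation can be illustrated with a short computation. Here we generate hypothetical data following Y = a e^(bX) exactly and recover the constants by regressing log Y on X (natural logarithms, so b' = b); the numbers are ours, not the text's:

```python
import math

# Hypothetical data following Y = a * e^(b*X) exactly, with a = 2
# and b = 0.7; regressing log Y on X recovers both constants.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]

log_ys = [math.log(y) for y in ys]
n = len(xs)
xbar = sum(xs) / n
lbar = sum(log_ys) / n
b_prime = (sum((x - xbar) * (l - lbar) for x, l in zip(xs, log_ys))
           / sum((x - xbar) ** 2 for x in xs))
a_prime = lbar - b_prime * xbar

# With natural logarithms, b' equals b (0.7) and e^(a') equals a (2.0),
# so the fitted line is log Y = a' + b'X.
print(b_prime, math.exp(a_prime))
```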

FIGURE 11.12 Logarithmic transformation of a dependent variable in regression. Chirp rate as a function of temperature in males of the tree cricket Oecanthus fultoni. Each point represents the mean chirp rate per minute for all observations at a given temperature in °C. Original data in left panel; Y plotted on logarithmic scale in right panel. (Data from Block, 1966.)

FIGURE 11.13 Logarithmic transformation of the independent variable in regression. Size of electrical response to illumination in the cephalopod eye. Ordinate: millivolts; abscissa: relative brightness of illumination. A proportional increase in X (relative brightness) produces a linear electrical response Y. Relative brightness plotted on logarithmic scale in the right panel. (Data from Fröhlich, 1921.)

For computations one would transform X into logarithms.

Logarithmic transformation for both variables is applicable in situations in which the true relationship can be described by the formula Y = aX^b. Examples are the greatly disproportionate growth of various organs in some organisms, such as the sizes of antlers of deer or horns of stag beetles, with respect to their general body sizes. A double logarithmic transformation is indicated when a plot on log-log graph paper results in a straight-line graph. The regression equation is rewritten as log Y = log a + b log X, and the computation is done in the conventional manner.

Reciprocal transformation. Many rate phenomena (a given performance per unit of time or per unit of population), such as wing beats per second or number of eggs laid per female, will yield hyperbolic curves when plotted in original measurement scale. Thus, they form curves described by the general mathematical equations bXY = 1 or (a + bX)Y = 1. From these we can derive 1/Y = bX or 1/Y = a + bX. By transforming the dependent variable into its reciprocal, we can frequently obtain straight-line regressions.

Finally, some cumulative curves can be straightened by the probit transformation. Refresh your memory on the cumulative normal curve shown in Figure 5.5. Remember that by changing the ordinate of the cumulative normal into probability scale we were able to straighten out this curve. We do the same thing here, except that we graduate the probability scale in standard deviation units. Thus, the 50% point becomes 0 standard deviations, the 84.13% point becomes +1 standard deviation, and the 2.27% point becomes -2 standard deviations. Such standard deviations, corresponding to a cumulative percentage, are called normal equivalent deviates (NED). If we use ordinary graph paper and mark the ordinate in NED units, we obtain a straight line when plotting the cumulative normal curve against it. Probits are simply normal equivalent deviates coded by the addition of 5.0, which will avoid negative values for most deviates. Thus, the probit value 5.0 corresponds to a cumulative frequency of 50%, the probit value 6.0 corresponds to a cumulative frequency of 84.13%, and the probit value 3.0 corresponds to a cumulative frequency of 2.27%.

It is often found that if the doses of toxicants are transformed into logarithms, the tolerances of many organisms to these poisons are approximately normally distributed. These transformed doses are often called dosages. Increasing dosages lead to cumulative normal distributions of mortalities: with increasing dosages an ever greater proportion of the sample dies, until at a high enough dose the entire sample is killed. Figure 11.14 shows an example of mortality percentages for increasing doses of an insecticide. Such curves, often called dosage-mortality curves, are the subject matter of an entire field of biometric analysis, bioassay, to which we can refer only in passing here. The most common technique in this field is probit analysis. Graphic approximations can be carried out on so-called probit paper, which is probability graph paper in which the abscissa has been transformed into logarithmic scale.
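The probit coding and the inverse prediction of the 50% lethal dose can be sketched as follows. This is only an illustration of the arithmetic (a least-squares fit of probits on log dosage), not the full probit-analysis machinery of bioassay; the function names and data are hypothetical:

```python
from statistics import NormalDist
import math

_nd = NormalDist()          # standard normal distribution

def probit(p):
    """Probit value: the normal equivalent deviate (NED) of the
    cumulative proportion p, coded by the addition of 5.0."""
    return _nd.inv_cdf(p) + 5.0

def ld50(doses, mortalities):
    """Rough LD50: fit probit(mortality) on log10(dose) by least
    squares, then invert the line at probit 5.0 (a 50% kill)."""
    x = [math.log10(d) for d in doses]
    y = [probit(p) for p in mortalities]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return 10 ** ((5.0 - a) / b)

# probit(0.5) is 5.0; probit(0.8413) is close to 6.0; for these
# hypothetical mortalities the estimated LD50 is close to 10.
print(probit(0.5), ld50([1.0, 10.0, 100.0], [0.1587, 0.5, 0.8413]))
```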

A regression line is fitted to dosage-mortality data graphed on probit paper (see Figure 11.14). From the regression line the 50% lethal dose is estimated by a process of inverse prediction: that is, we estimate the value of X (dosage) corresponding to a kill of probit 5.0, which is equivalent to 50% mortality.

FIGURE 11.14 Dosage-mortality data illustrating an application of the probit transformation. Data are mean mortalities for two replicates; twenty Drosophila melanogaster per replicate were subjected to seven doses of an "unknown" insecticide in a class experiment. In the right panel the dose is plotted on a logarithmic scale. The point at dose 0.1, which yielded 0% mortality, has been assigned a probit value of 2.5 in lieu of -∞, which cannot be plotted.

11.8 A nonparametric test for regression

When transformations are unable to linearize the relationship between the dependent and independent variables, the research worker may wish to carry out a simpler, nonparametric test in lieu of regression analysis. Such a test furnishes neither a prediction equation nor a functional relationship, but it does test whether the dependent variable Y is a monotonically increasing (or decreasing) function of the independent variable X. The simplest such test is the ordering test, which is equivalent to computing Kendall's rank correlation coefficient (see Box 12.3) and can be carried out most easily as such; in such a case the distinction between regression and correlation, which will be discussed in detail in Section 12.1, breaks down. The test is carried out as follows. Rank variates X and Y. Arrange the independent variable X in increasing order of ranks and calculate the Kendall rank correlation of Y with X. The computational steps for the procedure are shown in Box 12.3. If we carry out this computation for the weight loss data of Box 11.1 (reversing the order of percent relative humidity, since it is negatively related to weight loss), we obtain a quantity N = 72, which is significant at P < 0.01 when looked up in Table XIV. There is thus a significant trend of weight loss as a function of relative humidity. In fact, the ranks of the weight losses are a perfect monotonic function of the ranks of the relative humidities.
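The ordering test thus reduces to computing Kendall's rank correlation of Y against X arranged in increasing order, which can be sketched as follows (a minimal version that ignores ties; the data are hypothetical, not the weight loss data of Box 11.1):

```python
def kendall_tau(x, y):
    """Kendall rank correlation: concordant minus discordant pairs,
    divided by n(n-1)/2.  A minimal sketch that ignores ties."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx = (x[j] > x[i]) - (x[j] < x[i])
            dy = (y[j] > y[i]) - (y[j] < y[i])
            s += dx * dy
    return 2 * s / (n * (n - 1))

# X in increasing order; a Y that is a perfect monotonically
# decreasing function of X (hypothetical data) gives tau = -1.
print(kendall_tau([0, 30, 50, 70, 90], [9.0, 8.1, 7.2, 6.5, 5.9]))   # -1.0
```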

The minimum number of points required for significance by the rank correlation method is 5.

Exercises

11.1 The following temperatures (Y) were recorded in a rabbit at various times (X) after it was inoculated with rinderpest virus (data from Carter and Mitchell, 1958).

    Time after injection (h):   24     32     48     56     72     80     96
    Temperature (°F):         102.8  104.5  106.5  107.0  103.9  103.2  103.1

Graph the data. Clearly, the last three data points represent a different phenomenon from the first four pairs. For the first four points: (a) Calculate b. (b) Calculate the regression equation and draw in the regression line. (c) Test the hypothesis that β = 0 and set 95% confidence limits. (d) Set 95% confidence limits to your estimate of the rabbit's temperature 50 hours after the injection. ANS. b = 0.1300, a = 100.0, F_s = 59.4, Ŷ50 = 106.5.

11.2 The following table is extracted from data by Sokoloff (1955). Adult weights of female Drosophila persimilis reared at 24°C are affected by their density as larvae. Carry out an anova among densities. Then calculate the regression of weight on density and partition the sums of squares among groups into that explained and unexplained by linear regression. Interpret your results. Graph the data with the regression line fitted to the means.

    Larval density:                1      3      5      6      10     20     40
    Mean weight of adults (mg):  1.356  1.356  1.284  1.252  0.989  0.664  0.475
    n:                             9     34     50     63     83    144     24

ANS. F_s significant, P < 0.05.

11.3 Davis (1955) reported the following results in a study of the amount of energy metabolized by the English sparrow, Passer domesticus, under various constant temperature conditions and a ten-hour photoperiod.

    Temperature (°C):   0    4    10    18    26    34
    n:                  6    4     4     5     7     7

Analyze and interpret.

11.4 The following results were obtained in a study of oxygen consumption (microliters/mg dry weight per hour) in Heliothis zea by Phillips and Newsom (1966) under controlled temperatures and photoperiods.

    Temperature (°C):    18    21    24
    Photoperiod 10 h:   0.51  0.53  0.89
    Photoperiod 14 h:   1.61  1.64  1.73

Compute regression for each photoperiod separately and test for homogeneity of slopes. ANS. For 10 hours: b = 0.019; for 14 hours: b = 0.0633.

11.5 Using the complete data given in Exercise 11.1, calculate the regression equation and compare it with the one you obtained for the first four points. Discuss the effect of the inclusion of the last three points in the analysis. Compute the residuals from regression.

11.6 Length of developmental period (in days) of the potato leafhopper, Empoasca fabae, from egg to adult at various constant temperatures (Kouskolekas and Decker, 1966). The original data were weighted means, but for purposes of this analysis we shall consider them as though they were single observed values.

    Temperature (°F):                  59.8  67.6  70.0  70.4  74.0  75.3  78.0  80.4  81.4  83.2  88.4  91.4
    Mean developmental period (days):  58.1  27.3  26.8  26.3  19.1  19.0  16.5  15.9  14.8  14.2  14.4  14.6

Analyze and interpret.

11.7 The experiment cited in Exercise 11.3 was repeated using a 15-hour photoperiod, and the following results were obtained for constant temperatures of 0, 10, 18, 26, and 34°C. Analyze and interpret. Compute the deviations from the regression line (Yi - Ŷi) and plot them against temperature.

11.8 Water temperature was recorded at various depths in Rot Lake on August 1, 1952, by Vollenweider and Frei (1953).

    Depth (m):          0     1     2     3     4     5     6     9    12
    Temperature (°C):  24.8  23.2  22.2  21.2  18.8  13.8   9.8   6.6   5.3

Plot the data and then compute the regression line. Compute the deviations from regression. Does temperature vary as a linear function of depth? What do the residuals suggest? ANS. b = -2.435, a = 23.384, P < 0.01.

11.9 Test for the equality of slopes of the regression lines for the 10-hour and 15-hour photoperiods. ANS. F_s = 0.003.

11.10 Carry out a nonparametric test for regression in Exercises 11.1 and 11.7.

CHAPTER 12

Correlation

In this chapter we continue our discussion of bivariate statistics. In Chapter 11, on regression, we dealt with the functional relation of one variable upon the other; in the present chapter, we treat the measurement of the amount of association between two variables. This general topic is called correlation analysis.

It is not always obvious which type of analysis, regression or correlation, one should employ in a given problem. There has been considerable confusion in the minds of investigators and also in the literature on this topic. We shall try to make the distinction between these two approaches clear at the outset in Section 12.1. In Section 12.2 you will be introduced to the product-moment correlation coefficient, the common correlation coefficient of the literature. We shall derive a formula for this coefficient and give you something of its theoretical background. The close mathematical relationship between regression and correlation analysis will be examined in this section; we shall also compute a product-moment correlation coefficient in this section. In Section 12.3 we will talk about various tests of significance involving correlation coefficients. Then, in Section 12.4, we will introduce some of the applications of correlation coefficients.

Section 12.5 contains a nonparametric method that tests for association. It is to be used in those cases in which the necessary assumptions for tests involving correlation coefficients do not hold, or where quick but less than fully efficient tests are preferred for reasons of speed in computation or for convenience.

12.1 Correlation and regression

There has been much confusion on the subject matter of correlation and regression. Quite frequently correlation problems are treated as regression problems in the scientific literature, and the converse is equally true. There are several reasons for this confusion. First of all, the mathematical relations between the two methods of analysis are quite close, and mathematically one can easily move from one to the other; hence, the temptation to do so is great. Second, earlier texts did not make the distinction between the two approaches sufficiently clear, and this problem has still not been entirely overcome. At least one textbook synonymizes the two, a step that we feel can only compound the confusion. Finally, while an investigator may with good reason intend to use one of the two approaches, the nature of the data may be such as to make only the other approach appropriate. Let us examine these points at some length.

The many and close mathematical relations between regression and correlation will be detailed in Section 12.2. It suffices for now to state that for any given problem, the majority of the computational steps are the same whether one carries out a regression or a correlation analysis. You will recall that the fundamental quantity required for regression analysis is the sum of products. This is the very same quantity that serves as the base for the computation of the correlation coefficient. There are some simple mathematical relations between regression coefficients and correlation coefficients for the same data. Thus the temptation exists to compute a correlation coefficient corresponding to a given regression coefficient. Yet, as we shall see shortly, this would be wrong unless our intention at the outset were to study association and the data were appropriate for such a computation.

Let us then look at the intentions or purposes behind the two types of analyses. In regression we intend to describe the dependence of a variable Y on an independent variable X. As we have seen, we employ regression equations for purposes of lending support to hypotheses regarding the possible causation of changes in Y by changes in X; for purposes of prediction, of variable Y given a value of variable X; and for purposes of explaining some of the variation of Y by X, by using the latter variable as a statistical control. Studies of the effects of temperature on heartbeat rate, nitrogen content of soil on growth rate in a plant, age of an animal on blood pressure, or dose of an insecticide on mortality of the insect population are all typical examples of regression for the purposes named above.
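The closeness of the two computations is easy to demonstrate numerically: the same sum of products yields both the regression coefficient and the correlation coefficient, and one can be converted to the other through the standard deviations of X and Y. A small sketch with hypothetical data:

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]           # hypothetical paired measurements

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sum_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))   # sum of products
sum_xx = sum((xi - xbar) ** 2 for xi in x)
sum_yy = sum((yi - ybar) ** 2 for yi in y)

b = sum_xy / sum_xx                       # regression coefficient of Y on X
r = sum_xy / math.sqrt(sum_xx * sum_yy)   # product-moment correlation coefficient

# The same sum of products underlies both, and r = b * (s_X / s_Y):
assert abs(r - b * math.sqrt(sum_xx / sum_yy)) < 1e-12
```

That a single quantity can be pushed in either direction is precisely why the temptation to convert between the two coefficients is so great, and why intent and data structure, not arithmetic, must decide which analysis is appropriate.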

In correlation, by contrast, we are concerned largely with whether two variables are interdependent, or covary, that is, vary together. We do not express one as a function of the other. There is no distinction between independent and dependent variables. It may well be that of the pair of variables whose correlation is studied, one is the cause of the other, but we neither know nor assume this. A more typical (but not essential) assumption is that the two variables are both effects of a common cause. What we wish to estimate is the degree to which these variables vary together. Thus we might be interested in the correlation between amount of fat in diet and incidence of heart attacks in human populations, between foreleg length and hind leg length in a population of mammals, between body weight and egg production in female blowflies, or between age and number of seeds in a weed. Reasons why we would wish to demonstrate and measure association between pairs of variables need not concern us yet. It suffices for now to state that when we wish to establish the degree of association between pairs of variables in a population sample, correlation analysis is the proper approach.

Even if we attempt the correct method in line with our purposes, we may run afoul of the nature of the data. Thus we may wish to establish cholesterol content of blood as a function of weight, and to do so we may take a random sample of men of the same age group, obtain each individual's cholesterol content and weight, and regress the former on the latter. However, both these variables will have been measured with error. Individual variates of the supposedly independent variable X will not have been deliberately chosen or controlled by the experimenter. The underlying assumptions of Model I regression do not hold, and fitting a Model I regression to the data is not legitimate, although you will have no difficulty finding instances of such improper practices in the published research literature. If it is really an equation describing the dependence of Y on X that we are after, we should carry out a Model II regression; we shall take this up again later. However, if it is the degree of association between the variables (interdependence) that is of interest, then we should carry out a correlation analysis, for which these data are suitable.

The converse difficulty is trying to obtain a correlation coefficient from data that are properly computed as a regression, that is, with Y random and X fixed. An example would be heartbeats of a poikilotherm as a function of temperature, where several temperatures have been applied in an experiment. Such a correlation coefficient is easily obtained mathematically but would simply be a numerical value, not an estimate of a parametric measure of correlation. Thus a correlation coefficient computed from data that have been properly analyzed by Model I regression is meaningless as an estimate of any population correlation coefficient. Conversely, suppose we were to evaluate a regression coefficient of one variable on another in data that had been properly computed as correlations. Not only would construction of such a functional dependence for these variables not meet our intentions, but we should also point out that a conventional regression coefficient computed from data in which both variables are measured with error, as is the case in correlation analysis, furnishes biased estimates of the functional relation.

There is, however, an interpretation that can be given to the square of the correlation coefficient that has some relevance to a regression problem: if desired, an estimate of the proportion of the variation of Y explained by X can be obtained as the square of the correlation coefficient between X and Y.

This discussion is summarized in Table 12.1, which shows the relations between correlation and regression. The two columns of the table indicate the two conditions of the pair of variables: in one case one variable random, the other fixed; in the other case, both variables random. In this text we depart from the usual convention of labeling the pair of variables Y and X, or X1 and X2, for both correlation and regression analysis. In regression we continue the use of Y for the dependent variable and X for the independent variable, but in correlation both of the variables are in fact random variables, which we have throughout the text designated as Y. We therefore refer to the two variables as Y1 and Y2. The rows of the table indicate the intention of the investigator in carrying out the analysis, and the four quadrants of the table indicate the appropriate procedures for a given combination of intention of investigator and nature of the pair of variables.

TABLE 12.1
The relations between correlation and regression.

                                          Nature of the two variables
Purpose of investigator             Y random, X fixed         Y1, Y2 both random

Establish and estimate              Model I regression.       Model II regression.
dependence of one variable                                    (Not treated in this
upon another. (Describe                                       book.)
functional relationship
and/or predict one in terms
of the other.)

Establish and estimate              Meaningless for this      Correlation coefficient.
association (interdependence)       case. If desired, an      (Significance tests
between two variables.              estimate of the           entirely appropriate
                                    proportion of the         only if Y1, Y2 are
                                    variation of Y            distributed as
                                    explained by X can be     bivariate normal
                                    obtained as the square    variables.)
                                    of the correlation
                                    coefficient between
                                    X and Y. It is not in
                                    any way an estimate of
                                    a parametric measure
                                    of correlation.

12.2 The product-moment correlation coefficient

There are numerous correlation coefficients in statistics. The most common of these is called the product-moment correlation coefficient, which in its current formulation is due to Karl Pearson. We shall derive its formula in the following paragraphs.

You have seen that the sum of products is a measure of covariation, and it is therefore likely that this will be the basic quantity from which to obtain a formula for the correlation coefficient. We shall label the variables whose correlation is to be estimated as Y1 and Y2. Their sum of products will therefore be Σy1y2 and their covariance [1/(n − 1)] Σy1y2 = s12. The latter quantity is analogous to a variance, that is, a sum of squares divided by its degrees of freedom.

A standard deviation is expressed in original measurement units such as inches, grams, or cubic centimeters. Similarly, a regression coefficient is expressed as so many units of Y per unit of X, such as 5.2 grams/day. However, a measure of association should be independent of the original scale of measurement, so that we can compare the degree of association in one pair of variables with that in another. One way to accomplish this is to divide the covariance by the standard deviations of variables Y1 and Y2. This results in dividing each deviation y1 and y2 by its proper standard deviation and making it into a standardized deviate. The expression now becomes the sum of products of standardized deviates divided by n − 1:

    r12 = s12 / (s1 s2)                                            (12.1)

This is the formula for the product-moment correlation coefficient r_{Y1Y2} between variables Y1 and Y2; we shall simplify the symbolism to r12. Substituting the sum of products for the covariance,

    r12 = Σy1y2 / [(n − 1) s1 s2]                                  (12.2)

Expression (12.2) can be rewritten in another common form. Since

    s √(n − 1) = √[s²(n − 1)] = √(Σy²)

Expression (12.2) can be rewritten as

    r12 = Σy1y2 / √(Σy1² Σy2²)                                     (12.3)

To state Expression (12.2) more generally for variables Y_j and Y_k, we can write it as

    r_jk = Σy_j y_k / [(n − 1) s_j s_k]

The correlation coefficient r_jk can range from +1 for perfect association to −1 for perfect negative association. This is intuitively obvious when we consider the correlation of a variable Y_j with itself. Expression (12.3) would then yield

    r_jj = Σy_j y_j / √(Σy_j² Σy_j²) = Σy_j² / Σy_j² = 1

which yields a perfect correlation of +1. If deviations in one variable were paired with opposite but equal deviations in the other, Expression (12.3) would yield −1, because the sum of products in the numerator would be negative. Proof that the correlation coefficient is bounded by +1 and −1 will be given shortly.
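Expressions (12.1) through (12.3) are easy to verify numerically. The sketch below (plain Python; the data and function names are our own illustrative choices, not from the text) computes r12 both as the covariance divided by the two standard deviations, as in Expression (12.1), and from the sums of squares and products, as in Expression (12.3), and confirms that a variable correlated with itself gives r = +1.

```python
from math import sqrt

def correlation(y1, y2):
    """Product-moment correlation, Expression (12.3):
    r12 = sum of products / sqrt(SS of Y1 * SS of Y2),
    where deviations are taken from each variable's mean."""
    n = len(y1)
    m1 = sum(y1) / n
    m2 = sum(y2) / n
    d1 = [v - m1 for v in y1]
    d2 = [v - m2 for v in y2]
    sum_products = sum(a * b for a, b in zip(d1, d2))
    ss1 = sum(a * a for a in d1)
    ss2 = sum(b * b for b in d2)
    return sum_products / sqrt(ss1 * ss2)

def correlation_via_covariance(y1, y2):
    """Expression (12.1): covariance s12 divided by s1 * s2."""
    n = len(y1)
    m1, m2 = sum(y1) / n, sum(y2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / (n - 1)
    s1 = sqrt(sum((a - m1) ** 2 for a in y1) / (n - 1))
    s2 = sqrt(sum((b - m2) ** 2 for b in y2) / (n - 1))
    return cov / (s1 * s2)

# Made-up illustrative data:
y1 = [1.0, 2.0, 3.0, 4.0, 5.0]
y2 = [2.0, 1.0, 4.0, 3.0, 5.0]
r_a = correlation(y1, y2)                 # Expression (12.3)
r_b = correlation_via_covariance(y1, y2)  # Expression (12.1); the two agree
r_self = correlation(y1, y1)              # a variable with itself gives +1
```

The two routes give identical values, as the algebra above requires, and self-correlation returns exactly +1.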

If the variates follow a specified distribution, the bivariate normal distribution, the correlation coefficient r_jk will estimate a parameter of that distribution symbolized by ρ_jk.

Let us approach the distribution empirically. Suppose you have sampled a hundred items and measured two variables on each item, obtaining two samples of 100 variates in this manner. Let us assume that both variables, Y1 and Y2, are normally distributed and that they are quite independent of each other, so that the fact that one individual happens to be greater than the mean in character Y1 has no effect whatsoever on its value for variable Y2. Thus this same individual may be greater or less than the mean for variable Y2. If there is absolutely no relation between Y1 and Y2, and if the two variables are standardized to make their scales comparable, then when you plot these 100 items on a graph in which the variables Y1 and Y2 are the coordinates, you will obtain a scattergram of points whose outline is roughly circular. Of course, for a sample of 100 items, the circle will be only imperfectly outlined; but the larger the sample, the more clearly you will be able to discern a circle, with the central area around the intersection of the two means heavily darkened because of the aggregation there of many points. If you keep sampling, you will have to superimpose new points upon previous points, and if you visualize these points in a physical sense, such as grains of sand, a mound peaked in a bell-shaped fashion will gradually accumulate. This is a three-dimensional realization of a normal distribution, shown in perspective in Figure 12.1. Regarded from either coordinate axis, the mound will present a two-dimensional appearance, and its outline will be that of a normal distribution curve, the two perspectives giving the distributions of Y1 and Y2, respectively.

FIGURE 12.1 Bivariate normal frequency distribution. The parametric correlation ρ between variables Y1 and Y2 equals zero. The frequency distribution may be visualized as a bell-shaped mound.

If we assume that the two variables Y1 and Y2 are not independent but are positively correlated to some degree, then if a given individual has a large value of Y1, it is more likely than not to have a large value of Y2 as well. Similarly, a small value of Y1 will likely be associated with a small value of Y2. Were you to sample items from such a population, the resulting scattergram would become elongated in the form of an ellipse.

This is so because those parts of the circle that formerly included individuals high for one variable and low for the other (and vice versa) are now scarcely represented. Continued sampling (with the sand grain model) yields a three-dimensional elliptic mound, shown in Figure 12.2. If correlation is perfect, all the data will fall along a single regression line (the identical line would describe the regression of Y1 on Y2 and of Y2 on Y1), and if we let the points pile up in a physical model, they will result in a flat, essentially two-dimensional normal curve lying on this regression line.

FIGURE 12.2 Bivariate normal frequency distribution. The parametric correlation ρ between variables Y1 and Y2 equals 0.9. The bell-shaped mound of Figure 12.1 has become elongated.

The shape of the outline of the scattergram, and of the resulting mound, is clearly a function of the degree of correlation between the two variables, and this is the parameter ρ_jk of the bivariate normal distribution. By analogy with Expression (12.1), the parameter ρ_jk can be defined as

    ρ_jk = σ_jk / (σ_j σ_k)

where σ_jk is the parametric covariance of variables Y_j and Y_k, and σ_j and σ_k are the parametric standard deviations of variables Y_j and Y_k, as before. When two variables are distributed according to the bivariate normal, a sample correlation coefficient r_jk estimates the parametric correlation coefficient ρ_jk. We can make some statements about the sampling distribution of r_jk and set confidence limits to it.

Regrettably, the elliptical shape of scattergrams of correlated variables is not usually very clear unless either very large samples have been taken or the parametric correlation ρ_jk is very high. To illustrate this point, we show in Figure 12.3 several graphs of scattergrams resulting from samples of 100 items taken from bivariate normal populations with differing values of ρ_jk.

FIGURE 12.3 Random samples from bivariate normal distributions with varying values of the parametric correlation coefficient ρ: (A) ρ = 0.4; (B) ρ = 0.3; (C) ρ = 0.5; (D) ρ = 0.7; (E) ρ = −0.7; (F) ρ = 0.9; (G) ρ = 0.5. Sample sizes n = 100 in all graphs except (G), which has n = 500.
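The sampling experiment underlying Figure 12.3 is easily repeated. The sketch below (NumPy; the seed, sample size, and variable names are our own choices, not the book's) draws pairs from a bivariate normal population with ρ_jk = 0.7 and confirms that the sample correlation estimates that parameter, though with the sampling variability the text describes.

```python
import numpy as np

rng = np.random.default_rng(12345)   # arbitrary seed for reproducibility

rho = 0.7                            # parametric correlation rho_jk
cov = [[1.0, rho], [rho, 1.0]]       # covariance matrix with unit variances
sample = rng.multivariate_normal([0.0, 0.0], cov, size=500)

# The sample product-moment correlation r_jk estimates rho_jk.
r = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]
# With n = 500 the estimate lies close to 0.7; with n = 100, as in most
# panels of Figure 12.3, repeated draws scatter much more widely.
```

Rerunning with different seeds (or with n = 100) gives a concrete feel for how loosely a single scattergram pins down ρ_jk.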

Note that in the first graph (Figure 12.3A), with ρ_jk = 0.4, the circular distribution is only very vaguely outlined. Knowing that this depicts a positive correlation, one can visualize a positive slope in the scattergram, but without prior knowledge this would be difficult to detect visually. No substantial difference is noted in Figure 12.3B, based on ρ_jk = 0.3. The next graph (Figure 12.3C), based on ρ_jk = 0.5, is somewhat clearer, but still does not exhibit an unequivocal trend. In general, correlation cannot be inferred from inspection of scattergrams based on samples from populations with ρ_jk between −0.5 and +0.5 unless there are numerous sample points. Figure 12.3D, based on ρ_jk = 0.7, shows the trend more clearly than the first three graphs. Figure 12.3E, based on the same magnitude of ρ_jk but representing negative correlation, also shows the trend but is more strung out than Figure 12.3D. The difference in shape of the ellipse has no relation to the negative nature of the correlation; it is simply a function of sampling error, and the comparison of these two figures should give you some idea of the variability to be expected on random sampling from a bivariate normal distribution. Figure 12.3F, based on ρ_jk = 0.9, shows a tight association between the variables and a reasonable approximation to an ellipse of points. A far greater sample is required to demonstrate the circular shape of the distribution more clearly. This point is illustrated in the last graph (Figure 12.3G), also sampled from a population with ρ_jk = 0.5 but based on a sample of 500. Here, the positive slope and elliptical outline of the scattergram are quite evident.

Now let us return to the expression for the sample correlation coefficient shown in Expression (12.3). Squaring this expression results in

    r12² = (Σy1y2)² / (Σy1² Σy2²) = [(Σy1y2)² / Σy1²] (1 / Σy2²)

Look at the term in brackets. It is the square of the sum of products of variables Y1 and Y2, divided by the sum of squares of Y1. If this were a regression problem, this would be the formula for the explained sum of squares of variable Y2 on variable Y1, Σŷ2². In the symbolism of Chapter 11, on regression, it would be Σŷ² = (Σxy)²/Σx². Thus, we can write

    r12² = Σŷ2² / Σy2²                                             (12.6)

The square of the correlation coefficient, therefore, is the ratio formed by the explained sum of squares of variable Y2 divided by the total sum of squares of variable Y2. Equivalently,

    r12² = Σŷ1² / Σy1²                                             (12.6a)
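Expression (12.6) is an algebraic identity and can be checked on any sample. The sketch below (made-up numbers of our own) compares r12², the square of Expression (12.3), with the ratio of the explained to the total sum of squares of Y2.

```python
from math import sqrt

# Made-up illustrative data:
y1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y2 = [1.5, 1.0, 3.5, 3.0, 5.5, 5.0]

n = len(y1)
m1, m2 = sum(y1) / n, sum(y2) / n
x = [a - m1 for a in y1]            # deviations of Y1
y = [b - m2 for b in y2]            # deviations of Y2

sum_xy = sum(a * b for a, b in zip(x, y))   # sum of products
ss_x = sum(a * a for a in x)                # sum of squares of Y1
ss_y = sum(b * b for b in y)                # sum of squares of Y2

r_squared = sum_xy ** 2 / (ss_x * ss_y)     # square of Expression (12.3)
explained_ss = sum_xy ** 2 / ss_x           # explained SS of Y2 on Y1
ratio = explained_ss / ss_y                 # Expression (12.6)
```

The two quantities agree exactly, and, being a ratio of an explained to a total sum of squares, r_squared necessarily falls between 0 and 1.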

This quantity, r12², the square of the correlation coefficient, is called the coefficient of determination; Expression (12.6a) can be derived just as easily as Expression (12.6). (Remember that since we are not really regressing one variable on the other, it is just as legitimate to have Y1 explained by Y2 as the other way around.) The coefficient of determination ranges from zero to 1 and must be positive regardless of whether the correlation coefficient is negative or positive. The ratio symbolized by Expressions (12.6) and (12.6a) is a proportion ranging from 0 to 1. This becomes obvious after a little contemplation of the meaning of this formula. The explained sum of squares of any variable must be smaller than its total sum of squares or, maximally, if all the variation of a variable has been explained, it can be as great as the total sum of squares, but certainly no greater. Minimally, it will be zero if none of the variable can be explained by the other variable with which the covariance has been computed. Thus we obtain an important measure of the proportion of the variation of one variable determined by the variation of the other.

Incidentally, here is proof that the correlation coefficient cannot vary beyond −1 and +1. Since its square is the coefficient of determination, and we have just shown that the bounds of the latter are zero to 1, it is obvious that the bounds of its square root will be ±1.

The coefficient of determination is useful also when one is considering the relative importance of correlations of different magnitudes. As can be seen by a reexamination of Figure 12.3, the rate at which the scatter diagrams go from a distribution with a circular outline to one resembling an ellipse seems to be more directly proportional to r² than to r itself. Thus, in Figure 12.3B, with ρ² = 0.09, it is difficult to detect the correlation visually. However, by the time we reach Figure 12.3D, with ρ² = 0.49, the presence of correlation is very apparent.

The coefficient of determination is a quantity that may be useful in regression analysis also. You will recall that in a regression we used anova to partition the total sum of squares into explained and unexplained sums of squares. Once such an analysis of variance has been carried out, one can obtain the ratio of the explained sum of squares over the total SS as a measure of the proportion of the total variation that has been explained by the regression. However, it would not be meaningful to take the square root of such a coefficient of determination and consider it as an estimate of the parametric correlation of these variables, as already discussed in Section 12.1.

At the risk of being repetitious, we should stress again that though we can easily convert one coefficient into the other, this does not mean that the two types of coefficients can be used interchangeably on the same sort of data. One important relationship between the correlation coefficient and the regression coefficient can be derived as follows from Expression (12.3):

    r12 = Σy1y2 / √(Σy1² Σy2²)

Multiplying numerator and denominator of this expression by √(Σy1²), we obtain

    r12 = [Σy1y2 √(Σy1²)] / [√(Σy1²) √(Σy1² Σy2²)] = (Σy1y2 / Σy1²) √(Σy1² / Σy2²)

Dividing numerator and denominator of the right term of this expression by √(n − 1), we obtain

    r12 = b2·1 (s1 / s2)                                           (12.7)

Similarly, we could demonstrate that

    r12 = b1·2 (s2 / s1)                                           (12.7a)

and hence

    b2·1 = r12 (s2 / s1)        b1·2 = r12 (s1 / s2)               (12.7b)

In these expressions b2·1 is the regression coefficient for variable Y2 on Y1, and b1·2 that for Y1 on Y2. We see, therefore, that the correlation coefficient is the regression slope multiplied by the ratio of the standard deviations of the variables. The correlation coefficient may thus be regarded as a standardized regression coefficient. If the two standard deviations are identical, both regression coefficients and the correlation coefficient will be identical in value.

Now that we know about the coefficient of correlation, some of the earlier work on paired comparisons (see Section 9.3) can be put into proper perspective. In Appendix A1.8 we show for the corresponding parametric expressions that the variance of a sum of two variables is

    s²(Y1+Y2) = s1² + s2² + 2 r12 s1 s2                            (12.8)

where s1 and s2 are the standard deviations of Y1 and Y2, respectively, and r12 is the correlation coefficient between these variables. Similarly, for a difference between two variables, we obtain

    s²(Y1−Y2) = s1² + s2² − 2 r12 s1 s2                            (12.9)

What Expression (12.8) indicates is that if we make a new composite variable that is the sum of two other variables, the variance of this new variable will be the sum of the variances of the variables of which it is composed plus an added term, which is a function of the standard deviations of these two variables and of the correlation between them. It is shown in Appendix A1.8 that this added term is twice the covariance of Y1 and Y2. When the two variables being summed are uncorrelated, this added covariance term will be zero, and the variance of the sum will simply be the sum of the variances of the two variables.
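Expressions (12.8) and (12.9) hold exactly for sample variances (with n − 1 in the denominator) and can be confirmed on any paired data. A minimal sketch, with made-up values of our own:

```python
from math import sqrt

def var(v):
    """Sample variance with n - 1 in the denominator."""
    n = len(v)
    m = sum(v) / n
    return sum((x - m) ** 2 for x in v) / (n - 1)

def corr(a, b):
    """Product-moment correlation r12, Expression (12.1)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)
    return cov / sqrt(var(a) * var(b))

# Made-up, positively correlated data:
y1 = [2.0, 4.0, 6.0, 8.0, 7.0]
y2 = [1.0, 3.0, 2.0, 6.0, 5.0]

r12 = corr(y1, y2)
s1, s2 = sqrt(var(y1)), sqrt(var(y2))

var_sum = var([a + b for a, b in zip(y1, y2)])    # variance of Y1 + Y2
var_diff = var([a - b for a, b in zip(y1, y2)])   # variance of Y1 - Y2

check_sum = s1**2 + s2**2 + 2 * r12 * s1 * s2     # Expression (12.8)
check_diff = s1**2 + s2**2 - 2 * r12 * s1 * s2    # Expression (12.9)
```

Because r12 is positive here, the variance of the differences comes out considerably smaller than that of the sums, which is exactly the point made below about the paired-comparisons test.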

BOX 12.1
Computation of the product-moment correlation coefficient.

Relationships between gill weight and body weight in the crab Pachygrapsus crassipes. n = 12.

    (1) Y1              (2) Y2
    Gill weight         Body weight
    in milligrams       in grams

       159                14.40
       179                15.20
       100                11.30
        45                 2.50
       384                22.70
       230                14.90
       100                 1.41
       320                15.81
        80                 4.19
       220                15.39
       320                17.25
       210                 9.52

Source: Unpublished data by L. Miller.

Computation

1. ΣY1 = 159 + ··· + 210 = 2347
2. ΣY1² = 159² + ··· + 210² = 583,403
3. ΣY2 = 14.40 + ··· + 9.52 = 144.57
4. ΣY2² = 14.40² + ··· + 9.52² = 2204.1853
5. ΣY1Y2 = 159(14.40) + ··· + 210(9.52) = 34,837.10
6. Sum of squares of Y1 = Σy1² = ΣY1² − (ΣY1)²/n
   = quantity 2 − (quantity 1)²/n = 583,403 − (2347)²/12 = 124,368.9167
7. Sum of squares of Y2 = Σy2² = ΣY2² − (ΣY2)²/n
   = quantity 4 − (quantity 3)²/n = 2204.1853 − (144.57)²/12 = 462.4782

BOX 12.1 Continued

8. Sum of products = Σy1y2 = ΣY1Y2 − (ΣY1)(ΣY2)/n
   = quantity 5 − (quantity 1 × quantity 3)/n = 34,837.10 − (2347)(144.57)/12 = 6561.6175

9. Product-moment correlation coefficient (by Expression (12.3)):

       r12 = Σy1y2 / √(Σy1² Σy2²) = quantity 8 / √(quantity 6 × quantity 7)
           = 6561.6175 / √((124,368.9167)(462.4782))
           = 6561.6175 / 7584.0565 = 0.8652 ≈ 0.87

This is the reason why, in an anova or in a t test of the difference between two means, we had to assume the independence of the two variables to permit us to add their variances. Otherwise we would have had to allow for a covariance term. By contrast, in the paired-comparisons technique we expect correlation between the variables, since the members in each pair share a common experience. The paired-comparisons test automatically subtracts a covariance term, resulting in a smaller standard error and consequently in a larger value of t_s, since the numerator of the ratio remains the same. Thus, whenever correlation between two variables is positive, the variance of their differences will be considerably smaller than the sum of their variances; this is the reason why the paired-comparisons test has to be used in place of the t test for difference of means. These considerations are equally true for the corresponding analyses of variance, single-classification and two-way anova.

The computation of a product-moment correlation coefficient is quite simple. The basic quantities needed are the same six required for computation of the regression coefficient (Section 11.3). Box 12.1 illustrates how the coefficient should be computed. The example is based on a sample of 12 crabs in which gill weight Y1 and body weight Y2 have been recorded. We wish to know whether there is a correlation between the weight of the gill and that of the body, the latter representing a measure of overall size. The existence of a positive correlation might lead you to conclude that a bigger-bodied crab, with its resulting greater amount of metabolism, would require larger gills in order to provide the necessary oxygen.
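The arithmetic of Box 12.1 can be replayed directly from the twelve data pairs printed in the box. The sketch below (Python; variable names are our own) reproduces quantities 1 through 9 and the coefficient r12 ≈ 0.8652.

```python
from math import sqrt

# Crab data of Box 12.1 (unpublished data by L. Miller):
gill_weight = [159, 179, 100, 45, 384, 230,
               100, 320, 80, 220, 320, 210]          # Y1, milligrams
body_weight = [14.40, 15.20, 11.30, 2.50, 22.70, 14.90,
               1.41, 15.81, 4.19, 15.39, 17.25, 9.52]  # Y2, grams

n = len(gill_weight)
sum_y1 = sum(gill_weight)                            # quantity 1: 2347
sum_y2 = sum(body_weight)                            # quantity 3: 144.57

# Quantities 6-8: sums of squares and the sum of products
ss_y1 = sum(y * y for y in gill_weight) - sum_y1**2 / n
ss_y2 = sum(y * y for y in body_weight) - sum_y2**2 / n
sp = (sum(a * b for a, b in zip(gill_weight, body_weight))
      - sum_y1 * sum_y2 / n)                         # quantity 8: 6561.6175

r = sp / sqrt(ss_y1 * ss_y2)                         # Expression (12.3): about 0.8652
```

All intermediate values match the box to the printed number of decimals.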

The correlation coefficient of 0.87 agrees with the clear slope and narrow elliptical outline of the scattergram for these data in Figure 12.4.

FIGURE 12.4 Scatter diagram for crab data of Box 12.1. (Abscissa: body weight in grams; ordinate: gill weight in milligrams.)

12.3 Significance tests in correlation

The most common significance test is whether it is possible for a sample correlation coefficient to have come from a population with a parametric correlation coefficient of zero. The null hypothesis is therefore H0: ρ = 0. This implies that the two variables are uncorrelated. If the sample comes from a bivariate normal distribution and ρ = 0, the standard error of the correlation coefficient is

    s_r = √[(1 − r²)/(n − 2)]

The hypothesis is tested as a t test with n − 2 degrees of freedom:

    t_s = (r − 0)/s_r = r √[(n − 2)/(1 − r²)]

We should emphasize that this standard error applies only when ρ = 0, so that it cannot be applied to testing a hypothesis that ρ is a specific value other than zero. The t test for the significance of r is mathematically equivalent to the t test for the significance of b, in either case measuring the strength of the association between the two variables being tested. This is somewhat analogous to the situation in Model I and Model II single-classification anova, where the same F test establishes the significance regardless of the model. Significance tests following this formula have been carried out systematically and are tabulated in Table VIII, which permits the direct inspection of a sample correlation coefficient for significance without further computation. Box 12.2 illustrates tests of the hypothesis H0: ρ = 0, using Table VIII as well as the t test discussed at first.
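The t test just described is easily scripted. The sketch below (our own illustration) applies it to the crab correlation of Box 12.1 (r = 0.8652, n = 12) and recovers t_s ≈ 5.456 with n − 2 = 10 degrees of freedom.

```python
from math import sqrt

def t_for_r(r, n):
    """t_s = r * sqrt((n - 2) / (1 - r^2)); valid for testing H0: rho = 0
    when the sample comes from a bivariate normal distribution."""
    return r * sqrt((n - 2) / (1 - r * r))

r, n = 0.8652, 12
ts = t_for_r(r, n)        # about 5.456, with n - 2 = 10 df

# The two-tailed 1% critical value of t for 10 df is 3.169, so
# H0: rho = 0 is rejected at P < 0.01, in agreement with Table VIII.
```

The same function applies to any sample r and n, which is all that Table VIII tabulates.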

BOX 12.2
Tests of significance and confidence limits for correlation coefficients.

Test of the null hypothesis H0: ρ = 0 versus H1: ρ ≠ 0

The simplest procedure is to consult Table VIII, where the critical values of r are tabulated for df = n − 2 from 1 to 1000. If the absolute value of the observed r is greater than the tabulated value in the column for two variables, we reject the null hypothesis.

Examples. In Box 12.1 we found the correlation between body weight and gill weight to be 0.8652, based on a sample of n = 12. For 10 degrees of freedom the critical values are 0.576 at the 5% level and 0.708 at the 1% level of significance. Since the observed correlation is greater than both of these, we can reject the null hypothesis, H0: ρ = 0, at P < 0.01.

Table VIII is based upon the following test, which may be carried out when the table is not available or when an exact test is needed at significance levels or at degrees of freedom other than those furnished in the table. The null hypothesis is tested by means of the t distribution (with n − 2 df) by using the standard error of r. When ρ = 0,

    s_r = √[(1 − r²)/(n − 2)]

Therefore,

    t_s = (r − 0)/√[(1 − r²)/(n − 2)] = r √[(n − 2)/(1 − r²)]

For the data of Box 12.1, this would be

    t_s = 0.8652 √[10/(1 − 0.8652²)] = 0.8652 √(10/0.25143)
        = 0.8652 √39.7725 = 0.8652(6.3065) = 5.4564

This value, when looked up in the table of the t distribution, yields a very small probability (P < 0.001). One-tailed tests would apply if the alternative hypothesis were H1: ρ > 0 or H1: ρ < 0, rather than H1: ρ ≠ 0; the tabled 0.10 and 0.02 values of t would then be used for 5% and 1% one-tailed significance tests, respectively.

When n is greater than 50, we can also make use of the z transformation described in the text. Since σ_z = 1/√(n − 3), we compute

    t_s = (z − 0)/σ_z = z √(n − 3)

Since z is normally distributed and we are using a parametric standard deviation, we compare t_s with t_{α[∞]} or employ Table II, "Areas of the normal curve."

For example, suppose we had found the correlation between length of right- and left-wing veins of bees, based on n = 500, to be r = 0.837. Then z = 1.2111, and

    t_s = 1.2111 √497 = 1.2111(22.2935) = 27.00

This value, when looked up in Table II, yields a very small probability (< 10⁻⁶).

Test of the null hypothesis H0: ρ = ρ1, where ρ1 ≠ 0

To test this hypothesis we cannot use Table VIII or the t test given above, but must make use of the z transformation.

BOX 12.2 Continued

Suppose we wish to test the null hypothesis H0: ρ = +0.5 versus H1: ρ ≠ +0.5 for the case just considered. We would use the following expression:

    t_s = (z − ζ)/σ_z = (z − ζ) √(n − 3)

where z and ζ are the z transformations of r and ρ, respectively. Again we compare t_s with t_{α[∞]} or look it up in Table II. From Table X we find

    For r = 0.837,  z = 1.2111
    For ρ = 0.5,    ζ = 0.5493

Therefore,

    t_s = (1.2111 − 0.5493) √497 = (0.6618)(22.2935) = 14.7538

The probability of obtaining such a value of t_s by random sampling is P < 10⁻⁶ (see Table II). It is most unlikely that the parametric correlation between right- and left-wing veins is 0.5.

Confidence limits

If n > 50, we can set confidence limits to r using the z transformation. We first convert the sample r to z, set confidence limits to this z, and then transform these limits back to the r scale. We shall find 95% confidence limits for the above wing vein length data. For r = 0.837, z = 1.2111, and α = 0.05.

    L1 = z − t_{0.05[∞]} σ_z = 1.2111 − 1.9600/√497 = 1.2111 − 0.0879 = 1.1232
    L2 = z + 0.0879 = 1.2990

We retransform these z values to the r scale by finding the corresponding arguments for the z function in Table X:

    L1 ≈ 0.808    and    L2 ≈ 0.862

are the 95% confidence limits around r = 0.837.

Test of the difference between two correlation coefficients

For two correlation coefficients we may test H0: ρ1 = ρ2 versus H1: ρ1 ≠ ρ2 as follows:

    t_s = (z1 − z2) / √[1/(n1 − 3) + 1/(n2 − 3)]
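All of the z-based computations in Box 12.2 reduce to the hyperbolic functions mentioned in the text (z = tanh⁻¹ r, with tanh taking z back to r). The sketch below (our own script) reproduces the bee wing-vein test against ρ = 0.5, the 95% confidence limits, and the Drosophila two-sample comparison.

```python
from math import atanh, tanh, sqrt

# Test of H0: rho = 0.5 for r = 0.837, n = 500 (bee wing veins)
r, n = 0.837, 500
z = atanh(r)                       # z transformation: 1.2111
zeta = atanh(0.5)                  # parametric value: 0.5493
ts = (z - zeta) * sqrt(n - 3)      # about 14.75; compare with the normal curve

# 95% confidence limits for r via the z scale (sigma_z = 1/sqrt(n - 3))
half_width = 1.9600 / sqrt(n - 3)
lo = tanh(z - half_width)          # about 0.808
hi = tanh(z + half_width)          # about 0.862

# Difference between two correlations (Drosophila pseudoobscura example)
z1, n1 = atanh(0.552), 39          # Grand Canyon
z2, n2 = atanh(0.665), 20          # Flagstaff
ts_diff = (z1 - z2) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))   # about -0.613
```

Note how the symmetrical interval on the z scale (±0.0879) becomes asymmetrical around r = 0.837 after retransformation, as the text points out.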

Since z1 − z2 is normally distributed and we are using a parametric standard deviation, we compare t_s with t_{α[∞]} or employ Table II, "Areas of the normal curve."

For example, the correlation between body weight and wing length in Drosophila pseudoobscura was found by Sokoloff (1966) to be 0.552 in a sample of n1 = 39 at the Grand Canyon and 0.665 in a sample of n2 = 20 at Flagstaff, Arizona.

    Grand Canyon:  z1 = 0.6213        Flagstaff:  z2 = 0.8017

    t_s = (0.6213 − 0.8017) / √(1/36 + 1/17) = −0.1804/0.2943 = −0.6130

By linear interpolation in Table II, we find the probability of obtaining a value of t_s at least this far from zero by random sampling to be about 0.54, so we clearly have no evidence on which to reject the null hypothesis.

When ρ is close to ±1, the distribution of sample values of r is markedly asymmetrical, and, consequently, although a standard error is available for r in such cases, it should not be applied unless the sample is very large (n > 500), a most infrequent case of little interest. To overcome this difficulty, we transform r to a function z, developed by Fisher. The formula for z is

    z = ½ ln[(1 + r)/(1 − r)]                                      (12.10)

You may recognize this as z = tanh⁻¹ r, the formula for the inverse hyperbolic tangent of r. This function has been tabulated in Table X, where values of z corresponding to absolute values of r are given. Some pocket calculators have built-in hyperbolic and inverse hyperbolic functions; keys for such functions would obviate the need for Table X.

Inspection of Expression (12.10) will show that when r = 0, z will also equal zero, since ½ ln 1 equals zero. However, as r approaches ±1, (1 + r)/(1 − r) approaches infinity and zero, respectively, and consequently z approaches ±infinity. Therefore, substantial differences between r and z occur at the higher values for r. Thus, when r is 0.115, z = 0.1155, but when r = 0.972, z = 2.1273. Note by how much z exceeds r in this last pair of values. By finding a given value of z in Table X, we can also obtain the corresponding value of r; inverse interpolation may be necessary. Thus, a value of z = 0.70 corresponds to r = 0.604. The advantage of the z transformation is that while correlation coefficients are distributed in skewed fashion for values of ρ ≠ 0, the values of z are approximately normally distributed for any value of ρ.

The expected variance of z is

σ²z = 1/(n − 3)        (12.11)

This is an approximation adequate for sample sizes n > 50 and a tolerable approximation even when n > 25. An interesting aspect of the variance of z evident from Expression (12.11) is that it is independent of the magnitude of r, but is simply a function of the sample size n.

We may have a hypothesis that the true correlation between two variables is a given value ρ different from zero, and we may wish to test observed data against such a hypothesis. Such hypotheses about the expected correlation between two variables are frequent in genetic work. Although there is no a priori reason to assume that the true correlation between right and left sides of the bee wing vein lengths in Box 12.2 is 0.5, we show the test of such a hypothesis to illustrate the method. Corresponding to ρ = 0.5 there is ζ (zeta), the parametric value of z, following the usual convention; it is the z transformation of ρ. We note that the probability that the sample r of 0.837 could have been sampled from a population with ρ = 0.5 is vanishingly small.

Next, in Box 12.2 we see how to set confidence limits to a sample correlation coefficient r. This is done by means of the z transformation; it will result in asymmetrical confidence limits when these are retransformed to the r scale, as when setting confidence limits with variables subjected to square root or logarithmic transformations. For sample sizes greater than 50 we can also use the z transformation to test the significance of a sample r by employing the hypothesis H0: ρ = 0.

A test for the significance of the difference between two sample correlation coefficients is the final example illustrated in Box 12.2. A standard error for the difference is computed and tested against a table of areas of the normal curve. The formula given is an acceptable approximation when the smaller of the two samples is greater than 25; it is frequently used with even smaller sample sizes, as shown in our example in Box 12.2. In the example the correlation between body weight and wing length in two Drosophila populations was tested, and the difference in correlation coefficients between the two populations was found not significant.

12.4 Applications of correlation

The purpose of correlation analysis is to measure the intensity of association observed between any pair of variables and to test whether it is greater than could be expected by chance alone. Once established, such an association is likely to lead to reasoning about causal relationships between the variables. Students of statistics are told at an early stage not to confuse significant correlation with causation. We are also warned about so-called nonsense correlations.

A well-known case is the positive correlation between the number of Baptist ministers and the per capita liquor consumption in cities with populations of over 10,000 in the United States. The correlation between Baptist ministers and liquor consumption is simply a consequence of city size: the larger the city, the more Baptist ministers it will contain on the average and the greater will be the liquor consumption. The correlation is of little interest to anyone studying either the distribution of Baptist ministers or the consumption of alcohol.

In so-called illusory correlations, no reasonable connection between the variables can be found; or, if one is demonstrated, it is of no real interest or may be shown to be an artifact of the sampling procedure. In supposedly legitimate correlations, causal connections are known or at least believed to be clearly understood. Perhaps the only correlations properly called nonsense or illusory are those assumed by popular belief or scientific intuition which, when tested by proper statistical methodology using adequate sample sizes, are found to be not significant. Thus, if we can show that there is no significant correlation between amount of saturated fats eaten and the degree of atherosclerosis, we can consider this to be an illusory correlation. Remember also that when testing significance of correlations at conventional levels of significance, you must allow for type I error, which will lead to your judging a certain percentage of correlations significant when in fact the parametric value of ρ = 0.

Some correlations have time as the common factor, and processes that change with time are frequently likely to be correlated, not because of any functional biological reasons but simply because the change with time in the two variables under consideration happens to be in the same direction. Thus, the size of an insect population building up through the summer may be correlated with the height of some weeds, but this may simply be a function of the passage of time; there may be no ecological relation between the plant and the insects. Another situation in which the correlation might be considered an artifact is when one of the variables is in part a mathematical function of the other. Thus, for example, if Y = Z/X and we compute the correlation of X with Y, the existing relation will tend to produce a negative correlation.

The traditional distinction of real versus nonsense or illusory correlation is of little use. It is useful to distinguish correlations in which one variable is the entire or, more likely, the partial cause of another from others in which the two correlated variables have a common cause, and from more complicated situations involving both direct influence and common causes. The establishment of a significant correlation does not tell us which of many possible structural models is appropriate; further analysis is needed to discriminate between the various models. Individual cases of correlation must be carefully analyzed before inferences are drawn from them.

Correlation coefficients have a history of extensive use and application dating back to the English biometric school at the beginning of the twentieth century. Recent years have seen somewhat less application of this technique as increasing segments of biological research have become experimental. In experiments in which one factor is varied and the response of another variable to the deliberate variation of the first is examined, the method of regression is more appropriate, as has already been discussed.

Nevertheless, large areas of biology and of other sciences remain where the experimental method is not suitable because variables cannot be brought under the control of the investigator. There are many areas of medicine, ecology, systematics, evolution, and other fields in which experimental methods are difficult to apply. As yet the weather cannot be controlled, nor can historical evolutionary factors be altered. Epidemiological variables are generally not subject to experimental manipulation. Nevertheless, we need an understanding of the scientific mechanisms underlying these phenomena as much as of those in biochemistry or experimental embryology. In such cases, correlation analysis serves as a first descriptive technique estimating the degrees of association among the variables involved.

12.5 Kendall's coefficient of rank correlation

Occasionally data are known not to follow the bivariate normal distribution, yet we wish to test for the significance of association between the two variables. One method of analyzing such data is by ranking the variates and calculating a coefficient of rank correlation. This approach belongs to the general family of nonparametric methods we encountered in Chapter 10, where we learned methods for the analysis of ranked variates paralleling anova. In other cases especially suited to ranking methods, we cannot measure the variable on an absolute scale, but only on an ordinal scale. This is typical of data in which we estimate relative performance, as in assigning positions in a class. We can say that A is the best student, B is the second-best student, C and D are equal to each other and next-best, and so on. If two instructors independently rank a group of students, we can then test whether the two sets of rankings are independent (which we would not expect if the judgments of the instructors are based on objective evidence). Of greater biological and medical interest are the following examples. We might wish to correlate order of emergence in a sample of insects with a ranking in size, or order of germination in a sample of plants with rank order of flowering. An epidemiologist may wish to associate the rank order of occurrence (by time) of an infectious disease within a community, on the one hand, with its severity as measured by an objective criterion, on the other.

We present in Box 12.3 Kendall's coefficient of rank correlation, generally symbolized by τ (tau), although it is a sample statistic, not a parameter. The formula for Kendall's coefficient of rank correlation is τ = N/n(n − 1), where n is the conventional sample size and N is a count of ranks, which can be obtained in a variety of ways. The following small example will make this clear. If a second variable Y2 is perfectly correlated with the first variable Y1, the variates of Y2 should be in the same order as the Y1 variates. However, if the correlation is less than perfect, the order of the variates Y2 will not entirely correspond to that of Y1. The quantity N measures how well the second variable corresponds to the order of the first. It has a maximal value of n(n − 1) and a minimal value of −n(n − 1).

BOX 12.3
Kendall's coefficient of rank correlation, τ.

Computation of a rank correlation coefficient between blood neutrophil counts (Y1: × 10³ per μl) and total marrow neutrophil mass (Y2: × 10⁹ per kg) in 15 patients with nonhematological tumors; n = 15 pairs of observations. Source: Data extracted from Liu, Kesfeld, and Koo (1983).

Computational steps

1. Rank variables Y1 and Y2 separately and then replace the original variates with the ranks (assign tied ranks if necessary so that for both variables you will always have n ranks for n variates).

2. Write down the n ranks of one of the two variables in order, paired with the rank values assigned for the other variable (as shown in the continuation of this box). If only one variable has ties, order the pairs by the variable without ties; in this case, it does not matter which of the variables is ordered.

3. Obtain a sum of the counts Ci, as follows. Examine the first value in the column of ranks paired with the ordered column. In our case, this is rank 10. Count all ranks subsequent to it which are higher than the rank being considered. There are fourteen ranks following the 10, and five of them are greater than 10; therefore, we count a score of C1 = 5. Now we look at the next rank (rank 8) and find that six of the thirteen subsequent ranks are greater than it; hence C2 = 6. The third rank is 11, and four following ranks are higher than it, so C3 = 4. Continue in this manner, taking each rank of the variable in turn and counting the number of higher ranks subsequent to it. This can usually be done in one's head, but we show it explicitly in the continuation of this box so that the method will be entirely clear. Whenever a subsequent rank is tied in value with the pivotal rank Ri, count ½ instead of 1.

BOX 12.3 Continued

R1 (ordered)   R2    Subsequent ranks greater than pivotal rank R2    Counts Ci
 1             10    11, 12, 13, 15, 14                                5
 2              8    11, 9, 12, 13, 15, 14                             6
 3             11    12, 13, 15, 14                                    4
 4              7    9, 12, 13, 15, 14                                 5
 5              9    12, 13, 15, 14                                    4
 6              1    6, 4, 5, 2, 12, 3, 13, 15, 14                     9
 7              6    12, 13, 15, 14                                    4
 8              4    5, 12, 13, 15, 14                                 5
 9              5    12, 13, 15, 14                                    4
10              2    12, 3, 13, 15, 14                                 5
11             12    13, 15, 14                                        3
12              3    13, 15, 14                                        3
13             13    15, 14                                            2
14             15                                                      0
15             14                                                      0
                                                              Σ Ci = 59

4. We then need the following quantity:

N = 4 Σ Ci − n(n − 1) = 4(59) − 15(14) = 236 − 210 = 26

5. The Kendall coefficient of rank correlation, τ, can be found as follows:

τ = N/n(n − 1) = 26/[15(14)] = 0.124

When there are ties, the coefficient is computed as follows:

τ = N / {√[n(n − 1) − Σ T1] √[n(n − 1) − Σ T2]}

where Σ T1 and Σ T2 are the sums of correction terms for ties in the ranks of variables Y1 and Y2, respectively, defined as follows. A T value equal to t(t − 1) is computed for each group of t tied variates and summed over the m such groups. Thus, if variable Y2 had had two sets of ties, one involving t = 2 variates and a second involving t = 3 variates, one would have computed Σ T2 = 2(2 − 1) + 3(3 − 1) = 8. It has been suggested that if the ties are due to lack of precision rather than being real, the coefficient should be computed by the simpler formula.

6. To test the significance of τ for sample sizes greater than 40, we can make use of a normal approximation to test the null hypothesis that the true value of τ = 0:

ts = τ / √[2(2n + 5)/9n(n − 1)]

compared with t α[∞]. When n ≤ 40, this approximation is not accurate, and Table XIV must be consulted; it gives (two-tailed) critical values of τ for small sample sizes. The minimal significant value of the coefficient at P = 0.05 for n = 15 is 0.390. Hence the observed value of τ = 0.124 is not significantly different from zero.

Suppose we have a sample of five individuals that have been arrayed by the rank of variable Y1 and whose rankings for a second variable Y2 are entered paired with the ranks for Y1:

Y1    1  2  3  4  5
Y2    1  3  2  5  4

Note that the ranking by variable Y2 is not totally concordant with that by Y1. The technique employed in Box 12.3 is to count the number of higher ranks following any given rank, sum this quantity for all ranks, multiply the sum Σ Ci by 4, and subtract from the result a correction factor n(n − 1) to obtain a statistic N. If, for purposes of illustration, we undertake to calculate the correlation of variable Y1 with itself, we will find Σ Ci = 4 + 3 + 2 + 1 + 0 = 10, so that we obtain the maximum possible score N = 4 Σ Ci − n(n − 1) = 40 − 5(4) = 20. Obviously, Y1, being ordered, is always perfectly concordant with itself. For Y2, however, we obtain only Σ Ci = 4 + 2 + 2 + 0 + 0 = 8, and so N = 4(8) − 5(4) = 12. Since the maximum score of N (the score we would have if the correlation were perfect) is n(n − 1) = 20 and the observed score is 12, an obvious coefficient suggests itself as N/n(n − 1) = [4 Σ Ci − n(n − 1)]/n(n − 1) = 12/20 = 0.6.

Ties between individuals in the ranking process present minor complications that are dealt with in Box 12.3. The correlation in that box is between blood neutrophil counts and total marrow neutrophil mass in 15 cancer patients. The authors note that there is a product-moment correlation of 0.69 between these two variables, but when the data are analyzed by Kendall's rank correlation coefficient, the association between the two variables is low and nonsignificant.
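The counting procedure of Box 12.3 is easy to program. Below is an illustrative sketch (the function names are ours, and ties, which would score ½, are not handled, so it implements only the simpler no-ties formula):

```python
import math

def kendall_tau(r2):
    """Kendall's tau from the ranks of Y2 listed in the order of the ranks
    of Y1.  For each pivotal rank, count the subsequent ranks that exceed
    it (the C_i of Box 12.3); then N = 4*sum(C_i) - n(n-1) and
    tau = N / n(n-1)."""
    n = len(r2)
    counts = [sum(1 for later in r2[i + 1:] if later > r2[i])
              for i in range(n)]
    big_n = 4 * sum(counts) - n * (n - 1)
    return big_n / (n * (n - 1))

def tau_normal_ts(tau, n):
    """Normal approximation for testing H0: tau = 0 when n > 40
    (Box 12.3, significance step)."""
    return tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))

# Ranks R2 for the 15 patients of Box 12.3, in the order of the R1 ranks:
r2 = [10, 8, 11, 7, 9, 1, 6, 4, 5, 2, 12, 3, 13, 15, 14]
# kendall_tau(r2) reproduces tau = 26/210 = 0.124, and
# kendall_tau([1, 3, 2, 5, 4]) reproduces the small example's tau = 0.6.
```

A fully ordered sequence such as [1, 2, 3, 4, 5] gives τ = 1.0, the perfect concordance discussed above.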

Examination of the data reveals that there is marked skewness in both variables. Although there is little evidence of correlation among most of the variates, the three largest variates for each variable are correlated, and this induces the misleadingly high product-moment correlation coefficient. The data cannot, therefore, meet the assumptions of bivariate normality.

The significance of τ for sample sizes greater than 40 can easily be tested by the standard error shown in Box 12.3. For sample sizes up to 40, look up critical values of τ in Table XIV.

Exercises

12.1 Graph the following data in the form of a bivariate scatter diagram. The data were collected for a study of geographic variation in the aphid Pemphigus populitransversus, extracted from Sokal and Thomas (1965). The values in the table represent locality means based on equal sample sizes for 23 localities in eastern North America. The variables, Y1 = tibia length and Y2 = tarsus length, are expressed in millimeters. Compute the correlation coefficient and set 95% confidence intervals to ρ. The correlation coefficient will estimate the correlation of these two variables over localities. ANS. r = 0.910, P < 0.01.

[Table of locality means: locality code numbers 1 through 23, each paired with its mean Y1 and Y2.]

12.2 The following data were extracted from a larger study by Brower (1959) on speciation in a group of swallowtail butterflies. Morphological measurements are in millimeters coded × 8.

[Data for Exercise 12.2: for each specimen of Papilio multicaudatus (specimens 1 through 47) and of Papilio rutulus, the length of the 8th tergite and the length of the superuncus.]

12.3 Compute the correlation coefficient separately for each species and test significance of each. Test whether the two correlation coefficients differ significantly.

12.4 The following table of data is from an unpublished morphometric study of the cottonwood Populus deltoides by T. J. Crovello. Twenty-six leaves from one tree were measured when fresh and again after drying. The variables shown are fresh-leaf width (Y1) and dry-leaf width (Y2), both in millimeters. Calculate τ and test its significance. ANS. τ = 0.733.

Y1    90  88  55  100  86  90  82  78  115  100  110  84  76
Y2    88  87  52   95  83  88  77  75  109   95  105  78  71

Y1    100  110  95  99  92  80  110  105  101  95  80  103
Y2     97  105  90  98  92  82  106   97   98  91  76   97

A pathologist measured the concentration of a toxic substance in the liver and in the peripheral blood (in μg/kg) in order to ascertain if the liver concentration is related to the blood concentration. Calculate r and test its significance. [Paired liver and blood concentrations for each specimen are tabled in the original.]

12.5 Brown and Comstock (1952) found the following correlations between the length of the wing and the width of a band on the wing of females of two samples of the butterfly Heliconius charitonius:

Sample    n     r
1         100   0.29
2          46   0.70

Test whether the samples were drawn from populations with the same value of ρ. ANS. No. ts = −3.104, P < 0.01.

12.6 Test for the presence of association between tibia length and tarsus length in the data of Exercise 12.1 using Kendall's coefficient of rank correlation.

CHAPTER 13

Analysis of Frequencies

Almost all our work so far has dealt with estimation of parameters and tests of hypotheses for continuous variables. The present chapter treats an important class of cases, tests of hypotheses about frequencies. Biological variables may be distributed into two or more classes, depending on some criterion such as arbitrary class limits in a continuous variable or a set of mutually exclusive attributes. An example of the former would be a frequency distribution of birth weights (a continuous variable arbitrarily divided into a number of contiguous classes); one of the latter would be a qualitative frequency distribution such as the frequency of individuals of ten different species obtained from a soil sample. For any such distribution we may hypothesize that it has been sampled from a population in which the frequencies of the various classes represent certain parametric proportions of the total frequency. We need a test of goodness of fit for our observed frequency distribution to the expected frequency distribution representing our hypothesis. You may recall that we first realized the need for such a test in Chapters 4 and 5, where we calculated expected binomial, Poisson, and normal frequency distributions but were unable to decide whether an observed sample distribution departed significantly from the theoretical one.


In Section 13.1 we introduce the idea of goodness of fit, discuss the types of significance tests that are appropriate, explain the basic rationale behind such tests, and develop general computational formulas for these tests. Section 13.2 illustrates the actual computations for goodness of fit when the data are arranged by a single criterion of classification, as in a one-way quantitative or qualitative frequency distribution. This design applies to cases expected to follow one of the well-known frequency distributions such as the binomial, Poisson, or normal distribution. It applies as well to expected distributions following some other law suggested by the scientific subject matter under investigation, such as, for example, tests of goodness of fit of observed genetic ratios against expected Mendelian frequencies. In Section 13.3 we proceed to significance tests of frequencies in two-way classifications, called tests of independence. We shall discuss the common tests of 2 × 2 tables, in which each of two criteria of classification divides the frequencies into two classes, yielding a four-cell table, as well as R × C tables with more rows and columns. Throughout this chapter we carry out goodness of fit tests by the G statistic. We briefly mention chi-square tests, which are the traditional way of analyzing such cases. But as is explained at various places throughout the text, G tests have general theoretical advantages over chi-square tests, as well as being computationally simpler, not only by computer, but also on most pocket or tabletop calculators.

13.1 Tests for goodness of fit: Introduction

The basic idea of a goodness of fit test is easily understood, given the extensive experience you now have with statistical hypothesis testing. Let us assume that a geneticist has carried out a crossing experiment between two F1 hybrids and obtains an F2 progeny of 90 offspring, 80 of which appear to be wild type and 10 of which are the mutant phenotype. The geneticist assumes dominance and expects a 3:1 ratio of the phenotypes. When we calculate the actual ratios, however, we observe that the data are in a ratio 80/10 = 8:1. Expected values are p̂ = 0.75 and q̂ = 0.25 for the wild type and mutant, respectively. Note that we use the caret (generally called "hat" in statistics) to indicate hypothetical or expected values of the binomial proportions. However, the observed proportions of these two classes are p = 0.89 and q = 0.11, respectively. Yet another way of noting the contrast between observation and expectation is to state it in frequencies: the observed frequencies are f1 = 80 and f2 = 10 for the two phenotypes. Expected frequencies should be f̂1 = p̂n = 0.75(90) = 67.5 and f̂2 = q̂n = 0.25(90) = 22.5, respectively, where n refers to the sample size of offspring from the cross. Note that when we sum the expected frequencies they yield 67.5 + 22.5 = n = 90, as they should. The obvious question that comes to mind is whether the deviation from the 3:1 hypothesis observed in our sample is of such a magnitude as to be improbable. In other words, do the observed data differ enough from the expected


values to cause us to reject the null hypothesis? For the case just considered, you already know two methods for coming to a decision about the null hypothesis. Clearly, this is a binomial distribution in which p is the probability of being a wild type and q is the probability of being a mutant. It is possible to work out the probability of obtaining an outcome of 80 wild type and 10 mutants, as well as all "worse" cases, for p̂ = 0.75 and q̂ = 0.25 and a sample of n = 90 offspring. We use the conventional binomial expression here, (p̂ + q̂)ⁿ, except that p̂ and q̂ are hypothesized, and we replace the symbol k by n, which we adopted in Chapter 4 as the appropriate symbol for the sum of all the frequencies in a frequency distribution. In this example we have only one sample, so what would ordinarily be labeled k in the binomial is, at the same time, n. Such an example was illustrated in Table 4.3 and Section 4.2, and we can compute the cumulative probability of the tail of the binomial distribution. When this is done, we obtain a probability of 0.000,849 for all outcomes as deviant or more deviant from the hypothesis. Note that this is a one-tailed test, the alternative hypothesis being that there are, in fact, more wild-type offspring than the Mendelian hypothesis would postulate. Assuming p̂ = 0.75 and q̂ = 0.25, the observed sample is, consequently, a very unusual outcome, and we conclude that there is a significant deviation from expectation.
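The one-tailed cumulative binomial probability quoted above can be checked by direct summation. A minimal sketch (the function name is ours):

```python
from math import comb

def binomial_upper_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the observed outcome and all
    outcomes more deviant in the same direction."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 80 or more wild type among n = 90 offspring under the 3:1 hypothesis
# (p-hat = 0.75), equivalently 10 or fewer mutants:
# binomial_upper_tail(90, 80, 0.75) reproduces P = 0.000,849.
```

Summing from k = 0 to n instead returns 1, a quick check that the tail function is consistent.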
A less time-consuming approach based on the same principle is to look up confidence limits for the binomial proportions, as was done for the sign test in Section 10.3. Interpolation in Table IX shows that for a sample of n = 90, an observed percentage of 89% would yield approximate 99% confidence limits of 78 and 96 for the true percentage of wild-type individuals. Clearly, the hypothesized value of p̂ = 0.75 is beyond the 99% confidence bounds.

Now let us develop a third approach, by a goodness of fit test. Table 13.1 illustrates how we might proceed. The first column gives the observed frequencies f representing the outcome of the experiment. Column (2) shows the observed frequencies as (observed) proportions p and q, computed as f1/n and f2/n, respectively. Column (3) lists the expected proportions for the particular null hypothesis being tested. In this case, the hypothesis is a 3:1 ratio, corresponding to expected proportions p̂ = 0.75 and q̂ = 0.25, as we have seen. In column (4) we show the expected frequencies, which we have already calculated for these proportions as f̂1 = p̂n = 0.75(90) = 67.5 and f̂2 = q̂n = 0.25(90) = 22.5.

The log likelihood ratio test for goodness of fit may be developed as follows. Using Expression (4.1) for the expected relative frequencies in a binomial distribution, we compute two quantities of interest to us here:

C(90, 80)(80/90)⁸⁰(10/90)¹⁰ = 0.132,683,8

C(90, 80)(0.75)⁸⁰(0.25)¹⁰ = 0.000,551,754,9

The first quantity is the probability of observing the sampled results (80 wild type and 10 mutants) on the hypothesis that p̂ = p, that is, that the population parameter equals the observed sample proportion. The second is the probability of observing the sampled results assuming that p̂ = ¾, as per the Mendelian null

TABLE 13.1
Observed and expected frequencies for a genetic cross with an expected 3:1 ratio.

                (1)     (2)        (3)        (4)       (5)        (6)          (7)       (8)         (9)
                 f    Obs. prop. Exp. prop.   f̂        f/f̂      f ln(f/f̂)    f − f̂    (f − f̂)²   (f − f̂)²/f̂
Wild type        80    0.888,89   0.75       67.5     1.185,19   13.591,9      12.5     156.25      2.314,81
Mutant           10    0.111,11   0.25       22.5     0.444,44   −8.109,3     −12.5     156.25      6.944,44
Σ                90    1.000,00   1.00       90.0                 5.482,6       0.0                  9.259,26

CHAPTER 13 / ANALYSIS OF FREQUENCIES

hypothesis. Note that these expressions yield the probabilities for the observed outcomes only, not for observed and all worse outcomes. Thus, P = 0.000,551,8 is less than the earlier computed P = 0.000,849, which is the probability of 10 and fewer mutants, assuming p = 3/4, q = 1/4. The first probability (0.132,683,8) is greater than the second (0.000,551,754,9), since the hypothesis is based on the observed data. If the observed proportion p̂ is in fact equal to the proportion p postulated under the null hypothesis, then the two computed probabilities will be equal and their ratio, L, will equal 1.0. The greater the difference between p̂ and p (the expected proportion under the null hypothesis), the higher the ratio will be (the probability based on p̂ is divided by the probability based on p, the proportion defined by the null hypothesis). This indicates that the ratio of these two probabilities or likelihoods can be used as a statistic to measure the degree of agreement between sampled and expected frequencies. A test based on such a ratio is called a likelihood ratio test. In our case, L = 0.132,683,8/0.000,551,754,9 = 240.4761. It has been shown that the distribution of
G = 2 ln L    (13.1)

can be approximated by the χ² distribution when sample sizes are large (for a definition of "large" in this case, see Section 13.2). The appropriate number of degrees of freedom in Table 13.1 is 1 because the frequencies in the two cells for these data add to a constant sample size, 90. The outcome of the sampling experiment could have been any number of mutants from 0 to 90, but the number of wild type consequently would have to be constrained so that the total would add up to 90. One of the cells in the table is free to vary, the other is constrained. Hence, there is one degree of freedom. In our case,

G = 2 ln L = 2(5.482,62) = 10.9652
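The computations above can be condensed into a few lines of Python. This is a minimal sketch of ours, not from the book; the helper name `binom_prob` is our own invention.

```python
from math import comb, log

n, f1 = 90, 80                      # 80 wild type out of 90 progeny

def binom_prob(p, n, k):
    # Probability of exactly k "successes" in a binomial(n, p) distribution
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_hat = f1 / n                          # observed proportion, 8/9
prob_obs = binom_prob(p_hat, n, f1)     # probability under p = p_hat
prob_null = binom_prob(0.75, n, f1)     # probability under the 3:1 hypothesis
L = prob_obs / prob_null                # likelihood ratio
G = 2 * log(L)                          # Expression (13.1)
```

Note that the binomial coefficients cancel in the ratio, which is why the computational formula (13.3) developed below needs only the observed and expected frequencies.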

If we compare this observed value with a χ² distribution with one degree of freedom, we find that the result is significant (P < 0.001). Clearly, we reject the 3:1 hypothesis and conclude that the proportion of wild type is greater than 0.75. The geneticist must, consequently, look for a mechanism explaining this departure from expectation.

We shall now develop a simple computational formula for G. Referring back to Expression (4.1), we can rewrite the two probabilities computed earlier as

C(n, f₁) p̂^f₁ q̂^f₂    (13.2)

and

C(n, f₁) p^f₁ q^f₂    (13.2a)

But L is the ratio of Expression (13.2) over Expression (13.2a), and the binomial coefficients cancel, so that

L = (p̂/p)^f₁ (q̂/q)^f₂

13.1 / TESTS FOR GOODNESS OF FIT: INTRODUCTION

Since f₁ = np̂ and f̂₁ = np, and similarly f₂ = nq̂ and f̂₂ = nq,

L = (f₁/f̂₁)^f₁ (f₂/f̂₂)^f₂    (13.3)

and

ln L = f₁ ln(f₁/f̂₁) + f₂ ln(f₂/f̂₂)

The computational steps implied by Expression (13.3) are shown in columns (5) and (6) of Table 13.1. In column (5) are given the ratios of observed over expected frequencies. These ratios would be 1 in the unlikely case of a perfect fit of observations to the hypothesis. In such a case, the logarithms of these ratios entered in column (6) would be 0, as would their sum. Consequently, G, which is twice the natural logarithm of L, would be 0, indicating a perfect fit of the observations to the expectations. It has been shown that the distribution of G follows a χ² distribution. In the particular case we have been studying, the two phenotype classes, the appropriate χ² distribution would be the one for one degree of freedom. We can appreciate the reason for the single degree of freedom when we consider the frequencies in the two classes of Table 13.1 and their sum: 80 + 10 = 90. In such an example, the total frequency is fixed. Therefore, if we were to vary the frequency of any one class, the other class would have to compensate for changes in the first class to retain a correct total. Here the meaning of one degree of freedom becomes quite clear. One of the classes is free to vary; the other is not.

The test for goodness of fit can be applied to a distribution with more than two classes. If we designate the number of frequency classes in the table as a, the operation can be expressed by the following general computational formula, whose derivation, based on the multinomial expectations (for more than two classes), is shown in Appendix A1.9:

G = 2 Σᵃ fᵢ ln(fᵢ/f̂ᵢ)    (13.4)

Thus the formula can be seen as the sum of the independent contributions of departures from expectation, ln(f/f̂), weighted by the frequency of the particular class, f. If the expected values are given as proportions p̂, a convenient computational formula for G, also derived in Appendix A1.9, is

G = 2[ Σ f ln(f/p̂) − n ln n ]    (13.5)

To evaluate the outcome of our test of goodness of fit, we need to know the appropriate number of degrees of freedom to be applied to the χ² distribution. For a classes, the number of degrees of freedom is a − 1. Since the sum of


frequencies in any problem is fixed, this means that a − 1 classes are free to vary, whereas the ath class must constitute the difference between the total sum and the sum of the previous a − 1 classes. In some goodness of fit tests involving more than two classes, we subtract more than one degree of freedom from the number of classes, a. These are instances where the parameters for the null hypothesis have been extracted from the sample data themselves, in contrast with the null hypotheses encountered in Table 13.1. In the latter case, the hypothesis to be tested was generated on the basis of the investigator's general knowledge of the specific problem and of Mendelian genetics. The values of p = 0.75 and q = 0.25 were dictated by the 3:1 hypothesis and were not estimated from the sampled data. For this reason, the expected frequencies are said to have been based on an extrinsic hypothesis, a hypothesis external to the data. By contrast, consider the expected Poisson frequencies of yeast cells in a hemacytometer (Box 4.1). You will recall that to compute these frequencies, you needed values for μ, which you estimated from the sample mean Ȳ. Therefore, the parameter of the computed Poisson distribution came from the sampled observations themselves. The expected Poisson frequencies represent an intrinsic hypothesis. In such a case, to obtain the correct number of degrees of freedom for the test of goodness of fit, we would subtract from a, the number of classes into which the data had been grouped, not only one degree of freedom for n, the sum of the frequencies, but also one further degree of freedom for the estimate of the mean. Thus, in such a case, a sample statistic G would be compared with chi-square for a − 2 degrees of freedom. Now let us introduce you to an alternative technique.
This is the traditional approach with which we must acquaint you because you will see it applied in the earlier literature and in a substantial proportion of current research publications. We turn once more to the genetic cross with 80 wild-type and 10 mutant individuals. The computations are laid out in columns (7), (8), and (9) of Table 13.1. We first measure f − f̂, the deviation of observed from expected frequencies. Note that the sum of these deviations equals zero, for reasons very similar to those causing the sum of deviations from a mean to add to zero. Following our previous approach of making all deviations positive by squaring them, we square (f − f̂) in column (8) to yield a measure of the magnitude of the deviation from expectation. This quantity must be expressed as a proportion of the expected frequency. After all, if the expected frequency were 13.0, a deviation of 12.5 would be an extremely large one, comprising almost 100% of f̂, but such a deviation would represent only 10% of an expected frequency of 125.0. Thus, we obtain column (9) as the quotient of division of the quantity in column (8) by that in column (4). Note that the magnitude of the quotient is greater for the second line, in which the f̂ is smaller. Our next step in developing our test statistic is to sum the quotients, which is done at the foot of column (9), yielding a value of 9.259,26.

This test is called the chi-square test because the resultant statistic, X², is distributed as chi-square with a − 1 degrees of freedom. Many persons inappropriately call the statistic obtained as the sum of column (9) a chi-square. However, since the sample statistic is not a chi-square, we have followed the increasingly prevalent convention of labeling the sample statistic X² rather than χ². The value of X² = 9.259,26 from Table 13.1, when compared with the critical value of χ² (Table IV), is highly significant (P < 0.005). The chi-square test is always one-tailed. Since the deviations are squared, negative and positive deviations both result in positive values of X². Our conclusions are the same as with the G test. Clearly, we reject the 3:1 hypothesis and conclude that the proportion of wild type is greater than 0.75. The geneticist must, consequently, look for a mechanism explaining this departure from expectation.

We can apply the chi-square test for goodness of fit to a distribution with more than two classes as well. The operation can be described by the formula

X² = Σᵃ (f − f̂)²/f̂    (13.6)

which is a generalization of the computations carried out in columns (7), (8), and (9) of Table 13.1. The pertinent degrees of freedom are again a − 1 in the case of an extrinsic hypothesis and vary in the case of an intrinsic hypothesis. The formula is straightforward and can be applied to any of the examples we show in the next section, although we carry these out by means of the G test. In general, X² will be numerically similar to G.

13.2 Single-classification goodness of fit tests

Before we discuss in detail the computational steps involved in tests of goodness of fit of single-classification frequency distributions, some remarks on the choice of a test statistic are in order. We have already stated that the traditional method for such a test is the chi-square test for goodness of fit. However, the newer approach by the G test has been recommended on theoretical grounds. The major advantage of the G test is that it is computationally simpler, especially in more complicated designs. Earlier reservations regarding G when desk calculators are used no longer apply; the common presence of natural logarithm keys on pocket and tabletop calculators makes G as easy to compute as X².

The G tests of goodness of fit for single-classification frequency distributions are given in Box 13.1. Expected frequencies in three or more classes can be based on either extrinsic or intrinsic hypotheses, as discussed in the previous section. Examples of goodness of fit tests with more than two classes might be as follows: A genetic cross with four phenotypic classes might be tested against an expected ratio of 9:3:3:1 for these classes. A phenomenon that occurs over various time periods could be tested for uniform frequency of occurrence, for example, the number of births in a city over 12 months: Is the frequency of births equal in each month? In such a case the expected frequencies are computed as being equally likely in each class. Thus, for a classes, the expected frequency for any one class would be n/a.
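The chi-square computation of Expression (13.6) can be sketched as follows (our illustrative code, not the book's):

```python
def chi_square_stat(observed, expected):
    # X^2 = sum((f - f_hat)^2 / f_hat) over all classes, Expression (13.6)
    return sum((f - e) ** 2 / e for f, e in zip(observed, expected))

# The 3:1 cross of Table 13.1: columns (7) through (9) in two lines of code
x2 = chi_square_stat([80, 10], [67.5, 22.5])
```

Here `x2` reproduces the sum at the foot of column (9), 9.259,26.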
BOX 13.1
G test for goodness of fit. Single classification.

1. Frequencies divided into a ≥ 2 classes: Sex ratio in 6115 sibships of 12 in Saxony. The observed frequencies f are the numbers of sibships with a given number of males; the expected frequencies f̂, assuming a binomial distribution, were first computed in Table 4.4 but are here given to five-decimal-place precision to give sufficient accuracy to the computation of G.

Since expected frequencies f̂ᵢ < 3 for a = 13 classes should be avoided, we lump the classes at both tails with the adjacent classes to create classes of adequate size. Corresponding classes of observed frequencies fᵢ should be lumped to match. The number of classes after lumping is a = 11:

♂♂        f (observed)
11-12          52
10            181
9             478
8             829
7            1112
6            1343
5            1033
4             670
3             286
2             104
0-1            27
              6115 = n

Compute G by Expression (13.4):

G = 2 Σ f ln(f/f̂) = 94.871,55

Since there are a = 11 classes remaining, the degrees of freedom would be a − 1 = 10 if this were an example tested against expected frequencies based on an extrinsic hypothesis. However, because the expected frequencies are based on a binomial distribution with mean p̂q̂ estimated from the p̂ of the sample, a further degree of freedom is removed, and the sample value of G is compared with a χ² distribution with a − 2 = 11 − 2 = 9 degrees of freedom. We applied Williams' correction to G, to obtain a better approximation to χ². In the formula computed below, ν symbolizes the pertinent degrees of freedom of the problem:

q = 1 + (a² − 1)/6nν = 1 + (11² − 1)/6(6115)(9) = 1.000,363

G_adj = G/q = 94.837,09

Since G_adj = 94.837,09 > χ².001[9] = 27.877, the null hypothesis, that the sample data follow a binomial distribution, is therefore rejected decisively.

Typically, the following degrees of freedom will pertain to G tests for goodness of fit with expected frequencies based on a hypothesis intrinsic to the sample data (a is the number of classes after lumping, if any):

Distribution    Parameters estimated from sample    df
Binomial        p̂                                   a − 2
Normal          μ, σ                                a − 3
Poisson         μ                                   a − 2

When the parameters for such distributions are estimated from hypotheses extrinsic to the sampled data, the degrees of freedom are uniformly a − 1.

2. Special case of frequencies divided in a = 2 classes: In an F₂ cross in drosophila, the following 176 progeny were obtained, of which 130 were wild-type flies and 46 ebony mutants. Assuming that the mutant is an autosomal recessive, one would expect a ratio of 3 wild-type flies to each mutant fly. To test whether the observed results are consistent with this 3:1 hypothesis, we set up the data as follows.

Flies             f           Hypothesis     f̂
Wild type         f₁ = 130    p = 0.75       pn = 132.0
Ebony mutant      f₂ = 46     q = 0.25       qn = 44.0
                  n = 176

Computing G from Expression (13.4), we obtain

G = 2[130 ln(130/132) + 46 ln(46/44)] = 0.120,02
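The two-cell computation, with Williams' correction as described in the text, can be sketched in Python (our code, not the book's):

```python
from math import log

f = [130, 46]                       # wild type, ebony mutants
n = sum(f)                          # 176
f_hat = [0.75 * n, 0.25 * n]        # 3:1 expectation: 132.0 and 44.0

# Expression (13.4): G = 2 * sum(f * ln(f / f_hat))
G = 2 * sum(fi * log(fi / ei) for fi, ei in zip(f, f_hat))

q = 1 + 1 / (2 * n)                 # Williams' correction for the two-cell case
G_adj = G / q
```

`G_adj` falls far below the critical value χ².05[1] = 3.841, in agreement with the box.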

Williams' correction for the two-cell case is q = 1 + 1/2n, which is 1 + 1/2(176) = 1.002,84 in this example. We obtain

G_adj = G/q = 0.120,02/1.002,84 = 0.1197

Comparing the corrected sample value with the critical value, G_adj = 0.1197 is far less than χ².05[1] = 3.841; we clearly do not have sufficient evidence to reject our null hypothesis.

The case presented in Box 13.1 is one in which the expected frequencies are based on an intrinsic hypothesis. We use the sex ratio data in sibships of 12, first introduced in Table 4.4, Section 4.2. As you will recall, the expected frequencies in these data are based on the binomial distribution, with the parametric proportion of males p̂ estimated from the observed frequencies of the sample (p̂ = 0.519,215). The computation of this case is outlined fully in Box 13.1. Since the expected frequencies are based on a binomial distribution with p̂ estimated from the sample, a second degree of freedom is subtracted from a, making the final number of degrees of freedom a − 2 = 11 − 2 = 9.

The G test does not yield very accurate probabilities for small f̂ᵢ. The cells with f̂ᵢ < 3 (when a ≥ 5) or f̂ᵢ < 5 (when a < 5) are generally lumped with adjacent classes so that the new f̂ are large enough. By these criteria the classes of f̂ᵢ at both tails of the distribution in Box 13.1 are too small. We lump them by adding their frequencies to those in contiguous classes; the observed frequencies must be lumped to match, and the number of classes a is the number after lumping has taken place. Clearly, the lumping of classes results in a less powerful test with respect to alternative hypotheses.

Because the actual type I error of G tests tends to be higher than the intended level, a correction for G to obtain a better approximation to the chi-square distribution has been suggested by Williams (1976). He divides G by a correction factor q (not to be confused with a proportion) to be computed as q = 1 + (a² − 1)/6nν. In this formula, ν is the number of degrees of freedom appropriate to the G test. The effect of this correction is to reduce the observed value of G slightly.

Comparing the corrected sample value of G_adj = 94.837,09 with the critical value of χ² at 9 degrees of freedom, we find it highly significant (P ≪ 0.001, assuming that the null hypothesis is correct). We therefore reject this hypothesis and conclude that the sex ratios are not binomially distributed. As is evident from the pattern of deviations, there is an excess of sibships in which one sex or the other predominates. Had we applied the chi-square test to these data, the critical value would have been the same (χ²α[9]).

Next we consider the case for a = 2 cells. The example of the two-cell case in Box 13.1 is a genetic cross with an expected 3:1 ratio. The G test is adjusted by Williams' correction. The expected frequencies differ very little from the observed frequencies, and it is no surprise, therefore, that the resulting value of G_adj is far less than the critical value of χ² at one degree of freedom. Inspection of the chi-square table reveals that roughly 80% of all samples from a population with the expected ratio would show greater deviations than the sample at hand.

In tests of goodness of fit involving only two classes, the value of G as computed from this expression will typically result in type I errors at a level higher than the intended one. Williams' correction reduces the value of G and results in a more conservative test. An alternative correction that has been widely applied is the correction for continuity, usually applied in order to make the value of G or X² approximate the χ² distribution more closely. We have found the continuity correction too conservative and therefore recommend that Williams' correction be applied routinely, although it will have little effect when sample sizes are large. For sample sizes of 25 or less, work out the exact probabilities as shown in Table 4.3, Section 4.2.

13.3 Tests of independence: Two-way tables

The notion of statistical or probabilistic independence was first introduced in Section 4.2, where it was shown that if two events were independent, the probability of their occurring together could be computed as the product of their separate probabilities. Thus, if among the progeny of a certain genetic cross the probability that a kernel of corn will be red is 1/2 and the probability that the kernel will be dented is 1/3, the probability of obtaining a kernel both dented and red will be 1/2 × 1/3 = 1/6, if the joint occurrences of these two characteristics are statistically independent. The appropriate statistical test for this genetic problem would be to test the frequencies for goodness of fit to the expected ratios of 2 (red, not dented) : 1 (red, dented) : 2 (not red, not dented) : 1 (not red, dented). This would be a simultaneous test of two null hypotheses: that the expected proportions are 1/2 and 1/3 for red and dented, respectively, and that these two properties are independent. The first null hypothesis tests the Mendelian model in general. The second tests whether these characters assort independently, that is, whether they are determined by genes located in different linkage groups. If the second hypothesis

must be rejected, this is taken as evidence that the characters are linked, that is, located on the same chromosome.

There are numerous instances in biology in which the second hypothesis, concerning the independence of two properties, is of great interest and the first hypothesis, regarding the true proportion of one or both properties, is of little interest. We employ the test of independence to be learned in this section whenever we wish to test whether two different properties, each occurring in two states, are dependent on each other. Often no hypothesis regarding the parametric values pᵢ can be formulated by the investigator. We shall cite several examples of such situations.

For instance, specimens of a certain moth may occur in two color phases, light and dark. Fifty specimens of each phase may be exposed in the open, subject to predation by birds. The number of surviving moths is counted after a fixed interval of time. We can divide our sample into four classes: light-colored survivors, light-colored prey, dark survivors, and dark prey. If the probability of being preyed upon is independent of the color of the moth, the expected frequencies of these four classes can be simply computed as independent products of the proportion of each color (1/2 in our experiment) and the overall proportion preyed upon in the entire sample. The proportion predated may, however, differ in the two color phases; in that case, our statistical test will most probably result in rejection of the null hypothesis of independence, and we are led to conclude that one of the color phases is more susceptible to predation than the other. The two properties in this example are color and survival. The proportion of the color phases is arbitrary, and the exact proportions of the two properties are of little interest here; the proportion of survivors is of interest only insofar as it differs for the two phases.

A second example might relate to a sampling experiment carried out by a plant ecologist. A random sample is obtained of 100 individuals of a fairly rare species of tree distributed over an area of 400 square miles. For each tree the ecologist notes whether it is rooted in serpentine soil or not, and whether its leaves are pubescent or smooth. Thus the sample of n = 100 trees can be divided into four groups: serpentine-pubescent, serpentine-smooth, nonserpentine-pubescent, and nonserpentine-smooth. If the probability that a tree is or is not pubescent is independent of its location, our null hypothesis of the independence of these properties will be upheld, and the expected frequencies will simply be products of the independent proportions of the two properties: serpentine versus nonserpentine, and pubescent versus smooth. If, on the other hand, the proportion of pubescence differs for the two types of soils, the null hypothesis must be rejected. In this instance the proportions may themselves be of interest to the investigator, should the statistical test of independence explained below show that the two properties are not independent. In fact, this is the issue of biological importance.

An analogous example may occur in medicine. Among 10,000 patients admitted to a hospital, a certain proportion may be diagnosed as exhibiting disease X. At the same time, all patients admitted are tested for several blood groups. A certain proportion of these are members of blood group Y. Is there some

association between membership in blood group Y and susceptibility to the disease X?

The example we shall work out in detail is from immunology. A sample of 111 mice was divided into two groups: 57 that received a standard dose of pathogenic bacteria followed by an antiserum, and a control group of 54 that received the bacteria but no antiserum. After sufficient time had elapsed for an incubation period and for the disease to run its course, 38 dead mice and 73 survivors were counted. Of those that died, 13 had received bacteria and antiserum while 25 had received bacteria only. A question of interest is whether the antiserum had in any way protected the mice so that there were proportionally more survivors in that group. Here again the proportions of these properties are of no more interest than in the first example (predation on moths). Such data are conveniently displayed in the form of a two-way table as shown below. Two-way and multiway tables (more than two criteria) are often known as contingency tables. This type of two-way table, in which each of the two criteria is divided into two classes, is known as a 2 × 2 table.

                          Dead    Alive     Σ
Bacteria and antiserum     13      44       57
Bacteria only              25      29       54
Σ                          38      73      111

Thus, as seen in the table, 13 mice received bacteria and antiserum but died. The marginal totals give the number of mice exhibiting any one property: 57 mice received bacteria and antiserum; 73 mice survived the experiment. Altogether 111 mice were involved in the experiment and constitute the total sample.

In discussing such a table it is convenient to label the cells of the table and the row and column sums as follows:

a        b        a + b
c        d        c + d
a + c    b + d    n

From a two-way table one can systematically compute the expected frequencies (based on the null hypothesis of independence) and compare them with the observed frequencies. For example, the expected frequency for cell d (bacteria, alive) would be

f̂bact,alv = n p̂bact,alv = n p̂bact p̂alv = n ((c + d)/n)((b + d)/n) = (c + d)(b + d)/n

which in our case would be (54)(73)/111 = 35.514, a higher value than the observed frequency of 29. We can proceed similarly to compute the expected frequencies for each cell in the table by multiplying a row total by a column total and dividing the product by the grand total.
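The rule "row total times column total, divided by the grand total" is easily expressed in code. A minimal sketch of ours, using the mouse data:

```python
obs = [[13, 44],     # bacteria and antiserum: dead, alive
       [25, 29]]     # bacteria only: dead, alive

row_tot = [sum(r) for r in obs]            # [57, 54]
col_tot = [sum(c) for c in zip(*obs)]      # [38, 73]
n = sum(row_tot)                           # 111

# Expected frequency of each cell under the null hypothesis of independence
expected = [[r * c / n for c in col_tot] for r in row_tot]
```

Note that the row and column sums of `expected` reproduce the marginal totals of the observed table, as the text points out below.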
Here again the proportions of these properties are of no more interest than in the first example (predation on moths). 38 dead mice and 73 survivors were counted.alv ~ nPbad X {c + d \ f b + d\ A Palv — " I (c + d)(b + d) which in our case would be (54)(73)/l 11 = 35. The expected frequencies can be . 73 mice survived the experiment. 13 had received bacteria and antiserum while 25 had received bacteria only. the expected frequency for cell d (bacteria.

Clearly, to evaluate these data we need a test of the independence of the two properties. The statistical test appropriate to a given 2 × 2 table depends on the underlying model that it represents; there has been considerable confusion on this subject in the statistical literature. For our purposes here it is not necessary to distinguish among the three models of contingency tables. The G test illustrated in Box 13.2 will give at least approximately correct results with moderate- to large-sized samples regardless of the underlying model. With small sample sizes (n < 200), it is desirable to apply Williams' correction, the application of which is shown in the box. We shall examine the reasons for this at the end of this section. Let us state without explanation that the observed G or X² should be compared with χ² for one degree of freedom.

In Box 13.2 we illustrate the G test applied to the sampling experiment in plant ecology, dealing with trees rooted in two different soils and possessing two types of leaves. The result of the analysis shows clearly that we cannot reject the null hypothesis of independence between soil type and leaf type. The presence of pubescent leaves is independent of whether the tree is rooted in serpentine soils or not.

Tests of independence need not be restricted to 2 × 2 tables. In the two-way cases considered in this section, we are concerned with only two properties, but each of these properties may be divided into any number of classes. Thus organisms may occur in four color classes and be sampled at five different times during the year, yielding a 4 × 5 test of independence. Such a test would examine whether the color proportions exhibited by the marginal totals are independent of the times at which the samples were taken. It should therefore be clear that a test of independence will not test whether any property occurs at a given proportion but can only test whether or not the two properties are manifested independently.
The expected frequencies can be conveniently displayed in the form of a two-way table:

                           Dead      Alive        Σ
Bacteria and antiserum    19.514    37.486     57.000
Bacteria only             18.486    35.514     54.000
Σ                         38.000    73.000    111.000

You will note that the row and column sums of this table are identical to those in the table of observed frequencies, which should not surprise you, since the expected frequencies were computed on the basis of these row and column totals.

When the test is applied to the above immunology example, using the formulas given in Box 13.2, one obtains G_adj = 6.7732. The probability of finding a fit as bad, or worse, to these data is 0.005 < P < 0.01. One could also carry out a chi-square test on the deviations of the observed from the expected frequencies using Expression (13.6), with the expected frequencies in the table above. This would yield X² = 6.7966. We conclude, therefore, that mortality in these mice is not independent of the presence of antiserum. We note that the percentage mortality among those animals given bacteria and antiserum is (13)(100)/57 = 22.8%, considerably lower than the mortality of (25)(100)/54 = 46.3% among the mice to whom only bacteria had been administered. Clearly, the antiserum has been effective in reducing mortality.

BOX 13.2
2 × 2 test of independence.

A plant ecologist samples 100 trees of a rare species from a 400-square-mile area. He records for each tree whether it is rooted in serpentine soils or not, and whether its leaves are pubescent or smooth.

Soil              Pubescent    Smooth    Totals
Serpentine            12          22        34
Not serpentine        16          50        66
Totals                28          72       100 = n

The conventional algebraic representation of this table is as follows:

a        b        a + b
c        d        c + d
a + c    b + d    a + b + c + d = n

Compute the following quantities.

1. Σ f ln f for the cell frequencies = 12 ln 12 + 22 ln 22 + 16 ln 16 + 50 ln 50 = 337.784,38
2. Σ f ln f for the row and column totals = 34 ln 34 + 66 ln 66 + 28 ln 28 + 72 ln 72 = 797.635,16
3. n ln n = 100 ln 100 = 460.517,02

Compute G as follows:

G = 2(quantity 1 − quantity 2 + quantity 3)
  = 2(337.784,38 − 797.635,16 + 460.517,02) = 1.332,49

Williams' correction for a 2 × 2 table is

q = 1 + [n/(a + b) + n/(c + d) − 1][n/(a + c) + n/(b + d) − 1] / 6n

For these data we obtain

q = 1 + [100/34 + 100/66 − 1][100/28 + 100/72 − 1] / 6(100) = 1.022,81

G_adj = G/q = 1.332,49/1.022,81 = 1.3028

Compare G_adj with the critical value of χ² for one degree of freedom. Since our observed G_adj is much less than χ².05[1] = 3.841, we accept the null hypothesis that the leaf type is independent of the type of soil in which the tree is rooted.
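The steps of the 2 × 2 computation above can be sketched compactly in Python (our code, not the book's):

```python
from math import log

def g_independence_2x2(a, b, c, d):
    # G test of independence for a 2x2 table, with Williams' correction
    # (cells laid out as rows [a, b] and [c, d], as in Box 13.2)
    n = a + b + c + d
    cells = sum(f * log(f) for f in (a, b, c, d))
    totals = sum(t * log(t) for t in (a + b, c + d, a + c, b + d))
    G = 2 * (cells - totals + n * log(n))
    q = 1 + (n/(a + b) + n/(c + d) - 1) * (n/(a + c) + n/(b + d) - 1) / (6 * n)
    return G, G / q

G, G_adj = g_independence_2x2(12, 22, 16, 50)   # the serpentine/pubescent data
```

Both values agree with the box: G is about 1.332,49 and G_adj about 1.3028, well below χ².05[1] = 3.841.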

BOX 13.3
R × C test of independence using the G test.

Frequencies for the M and N blood groups in six populations from Lebanon.

Populations (b = 6)        MM      MN      NN    Totals     %MM     %MN     %NN
Druse                      59     100      44      203     29.06   49.26   21.67
Greek Catholic             64      98      41      203     31.53   48.28   20.20
Greek Orthodox             44      94      49      187     23.53   50.27   26.20
Maronites                 342     435     165      942     36.31   46.18   17.52
Shiites                   140     259     104      503     27.83   51.49   20.68
Sunni Moslems             169     168      91      428     39.49   39.25   21.26
Totals                    818    1154     494     2466 = n

Genotypes: a = 3. Source: Ruffié and Taleb (1965).

Compute the following quantities.

1. Sum of the transforms of the frequencies in the body of the contingency table = Σ Σ fᵢⱼ ln fᵢⱼ = 59 ln 59 + 100 ln 100 + · · · + 91 ln 91 = 12,752.715
2. Sum of the transforms of the row totals = Σ (Σ fᵢⱼ) ln (Σ fᵢⱼ) = 203 ln 203 + · · · + 428 ln 428 = 15,308.461
3. Sum of the transforms of the column totals = 818 ln 818 + 1154 ln 1154 + 494 ln 494 = 16,687.108
4. Transform of the grand total = n ln n = 2466 ln 2466 = 19,260.330

G = 2(quantity 1 − quantity 2 − quantity 3 + quantity 4) = 34.951

001. The quantity fu in Box 13. as in the structurally similar case of two-way anova.3 we feature frequencies from six Lebanese populations and test whether the proportions of the three groups arc independent of the populations sampled. concerns the MN blood groups which occur in human populations in three genotypes—MM. as in this example.1) degrees of freedom. and need be carried out only when the sample size is small and the observed G value is of marginal significance. In the formulas in Box 13. Another case.3 refers to the observed frequency in row i and column j of the table. and we must reject our null hypothesis that genotype frequency is independent of the population sampled.892 = 34.3.3 we employ a double subscript to refer to entries in a two-way table.885.1 ){b . whether the frequencies of the three genotypes differ among these six populations.001.3. the following is a simple general rule for computation of the G test of independence: G = 2 [ ( £ / In / for the cell frequencies) — ( £ / In / for the row and column totals) ϊ η In η] The transformations can be computed using the natural logarithm function found on most calculators.οοΐ[ΐοι = 29. and NN. In our case. R and C standing for the number of rows and columns in the frequency table. The adjustment will be minor when sample size is large.W A Y TABLES BOX 13.3 Continued qmln (a _ ι + .1)(6 . As shown in Box 13.951/1. are often called RxC tests of independence.3 / TESTS OF INDEPENDENCE: T W O . our G value is significant at Ρ < 0. or in other words. examined in detail in Box 13.001. The results in Box 13.3 show clearly that the frequency of the three genotypes is dependent upon the population sampled.892 Thus Gm = G/qmin = 34. We feature a lower bound estimate of its correct value.1) = 10.588. Since χ£. This value is to be compared with a χ2 distribution with (a . 
Frequencies of these blood groups can be obtained in samples of human populations and the samples compared for differences in these frequencies. MN. Williams' correction is now more complicated. In Box 13. We note the lower frequency of the . df -{3 .13. where a is the number of columns and b the number of rows in the table.+ 1 )(b_+ I) _ (3 + 1)(6 + 1) ~ 6(2466) = 1.
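The general rule just stated can be sketched in a few lines of Python (the code and names are illustrative, not from the book), using the Box 13.3 data:

```python
from math import log

# Frequencies from Box 13.3 (rows = populations, columns = MM, MN, NN).
table = [[59, 100, 44], [64, 98, 41], [44, 94, 49],
         [342, 435, 165], [140, 259, 104], [169, 168, 91]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# G = 2[(sum f ln f for cells) - (sum f ln f for margins) + n ln n]
cells = sum(f * log(f) for row in table for f in row)
margins = sum(f * log(f) for f in row_totals + col_totals)
G = 2 * (cells - margins + n * log(n))

df = (len(row_totals) - 1) * (len(col_totals) - 1)

# Lower-bound Williams' correction for an a x b table
a, b = len(col_totals), len(row_totals)
q_min = 1 + (a + 1) * (b + 1) / (6 * n)
G_adj = G / q_min
```

This reproduces G = 34.951 with 10 degrees of freedom and Gadj = 34.885.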

The degrees of freedom for tests of independence are always the same and can be computed using the rules given earlier (Section 13.2). There are k cells in the table, but we must subtract one degree of freedom for each independent parameter we have estimated from the data. We must, of course, subtract one degree of freedom for the observed total sample size, n. We have also estimated a - 1 row probabilities and b - 1 column probabilities, where a and b are the number of rows and columns in the table, respectively. Thus, there are k - (a - 1) - (b - 1) - 1 = k - a - b + 1 degrees of freedom for the test. But since k = a x b, this expression becomes (a x b) - a - b + 1 = (a - 1) x (b - 1), the conventional expression for the degrees of freedom in a two-way test of independence. Thus, the degrees of freedom in the example of Box 13.3, a 6 x 3 case, was (6 - 1) x (3 - 1) = 10. In all 2 x 2 cases there is clearly only (2 - 1) x (2 - 1) = 1 degree of freedom.

Another name for test of independence is test of association. If two properties are not independent of each other they are associated. Thus, in the example testing relative frequency of two leaf types on two different soils, we can speak of an association between leaf types and soils. In the immunology experiment there is a negative association between presence of antiserum and mortality. Association is thus similar to correlation, but it is a more general term, applying to attributes as well as continuous variables. In the 2 x 2 tests of independence of this section, one way of looking for suspected lack of independence was to examine the percentage occurrence of one of the properties in the two classes based on the other property. Thus we compared the percentage of smooth leaves on the two types of soils, or we studied the percentage mortality with or without antiserum. This way of looking at a test of independence suggests another interpretation of these tests as tests for the significance of differences between two percentages.

EXERCISES 313

Exercises

13.1 In an experiment to determine the mode of inheritance of a green mutant, 146 wild-type and 30 mutant offspring were obtained when F1 generation houseflies were crossed. Test whether the data agree with the hypothesis that the ratio of wild type to mutants is 3:1. ANS. G = 6.455, Gadj = 6.437, 1 df; the 3:1 hypothesis is rejected (chi-square 0.05[1] = 3.841).

13.2 Locality A has been exhaustively collected for snakes of species S. An examination of the 167 adult males that have been collected reveals that 35 of these have pale-colored bands around their necks. From locality B, 90 miles away, we obtain a sample of 27 adult males of the same species, 6 of which show the bands. What is the chance that both samples are from the same statistical population with respect to frequency of bands?

13.3 Of 445 specimens of the butterfly Erebia epipsodea from mountainous areas, 2.5% have light color patches on their wings, whereas of 65 specimens from the prairie, 70.8% have such patches (unpublished data by P. R. Ehrlich). Is this difference significant? Hint: First work backwards to obtain original frequencies. ANS. G = 175.5163, Gadj = 171.45, 1 df; the difference is highly significant.

13.4 Test whether the percentage of nymphs of the aphid Myzus persicae that developed into winged forms depends on the type of diet provided. Stem mothers had been placed on the diets one day before the birth of the nymphs (data by Mittler and Dadd, 1966).

    Type of diet            % winged forms      n
    Synthetic diet               100           216
    Cotyledon "sandwich"          92           230
    Free cotyledon                36            75

13.5 Refer to the distributions of melanoma over body regions shown in Table 2.1. Is there evidence for differential susceptibility to melanoma of differing body regions in males and females?

13.6 Test agreement of observed frequencies with those expected on the basis of a binomial distribution for the data given in Tables 4.1 and 4.2.

13.7 Test agreement of observed frequencies with those expected on the basis of a Poisson distribution for the data given in Tables 4.5 and 4.6. ANS. For Table 4.5: G = 49.9557, 3 df, Gadj = 49.4858; for Table 4.6: G = 20.6077, 2 df.

13.8 In a study of polymorphism of chromosomal inversions in the grasshopper Moraba scurra, Lewontin and White (1960) gave the following results for the composition of a population at Royalla "B" in 1958. Are the frequencies of the three different combinations of chromosome EF independent of the frequencies of the three combinations of chromosome CD?

                        Chromosome CD
    Chromosome EF    St/St   St/Bl   Bl/Bl
    Td/Td              22      96      75
    St/Td               8      56      64
    St/St               0       6       6

13.9 In clinical tests of the drug Nimesulide, Pfanilner (1984) reports the following results. The drug was given, together with an antibiotic, to 20 persons. A control group of 20 persons with urinary infections were given the antibiotic and a placebo. The results, edited for purposes of this exercise, are as follows:

                       Antibiotic + Nimesulide   Antibiotic + placebo
    Negative opinion              1                       16
    Positive opinion             19                        4

Analyze and interpret the results.
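For exercises such as 13.3, the hint to work backwards from a reported percentage and sample size to integer frequencies can be sketched as follows (Python; the helper name is illustrative, not from the book):

```python
def frequencies_from_percentage(pct, n):
    """Recover the two class frequencies from a reported percentage."""
    k = round(n * pct / 100)   # number of individuals showing the trait
    return k, n - k

# e.g. 70.8% of a sample of 65, and 2.5% of a sample of 445
prairie = frequencies_from_percentage(70.8, 65)
mountain = frequencies_from_percentage(2.5, 445)
```

The recovered frequencies can then be entered into the usual G test of independence.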

APPENDIX 1
Mathematical Appendix

A1.1 Demonstration that the sum of the deviations from the mean is equal to zero.

We have to learn two common rules of statistical algebra. We can open a pair of parentheses with a summation sign in front of them by treating the summation sign as though it were a common factor. We have

   Sum (A_i + B_i) = (A_1 + B_1) + (A_2 + B_2) + ... + (A_n + B_n)
                   = (A_1 + A_2 + ... + A_n) + (B_1 + B_2 + ... + B_n)

Therefore, Sum (A_i + B_i) = Sum A_i + Sum B_i.

Also, when Sum C is developed during an algebraic operation, where C is a constant, this can be computed as follows:

   Sum C = C + C + ... + C  (n terms) = nC

Since in a given problem a mean is a constant value, Sum Ybar = n Ybar. If you wish, you may check these rules, using simple numbers. In the subsequent demonstration and others to follow, we have simplified the notation by dropping subscripts for variables and superscripts above summation signs, whenever all summations are over n items.

We wish to prove that Sum y = 0. By definition,

   Sum y = Sum (Y - Ybar)
         = Sum Y - n Ybar
         = Sum Y - n (Sum Y / n)    (since Ybar = Sum Y / n)
         = Sum Y - Sum Y

Therefore, Sum y = 0.

A1.2 Demonstration that Expression (3.8), the computational formula for the sum of squares, equals Expression (3.7), the expression originally developed for this statistic.

We wish to prove that Sum (Y - Ybar)^2 = Sum Y^2 - (Sum Y)^2 / n. We have

   Sum (Y - Ybar)^2 = Sum (Y^2 - 2 Y Ybar + Ybar^2)
                    = Sum Y^2 - 2 Ybar Sum Y + n Ybar^2
                    = Sum Y^2 - 2 (Sum Y / n)(Sum Y) + n (Sum Y / n)^2
                    = Sum Y^2 - 2 (Sum Y)^2 / n + (Sum Y)^2 / n

Hence,

   Sum (Y - Ybar)^2 = Sum Y^2 - (Sum Y)^2 / n

A1.3 Simplified formulas for standard error of the difference between two means.

The standard error squared from Expression (8.2) is

   [ ((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2) ] x [ (n1 + n2) / (n1 n2) ]

When n1 = n2 = n, this simplifies to

   [ ((n - 1)(s1^2 + s2^2)) / (2(n - 1)) ] x [ 2n / n^2 ] = (s1^2 + s2^2) / n

which is the standard error squared of Expression (8.3).

When n1 is not equal to n2 but each is large, so that (n1 - 1) is approximately n1 and (n2 - 1) is approximately n2, the standard error squared of Expression (8.2) simplifies to

   [ (n1 s1^2 + n2 s2^2) / (n1 + n2) ] x [ (n1 + n2) / (n1 n2) ]
      = (n1 s1^2 + n2 s2^2) / (n1 n2) = s1^2 / n2 + s2^2 / n1

which is the standard error squared of Expression (8.7).

A1.4 Demonstration that t_s^2, obtained from a test of significance of the difference between two means (as in Box 8.2), is identical to the F_s value obtained in a single-classification anova of two equal-sized groups (in the same box).

From Box 8.2,

   t_s = (Ybar_1 - Ybar_2) / sqrt( (s1^2 + s2^2) / n )

so that

   t_s^2 = (Ybar_1 - Ybar_2)^2 / [ (Sum y1^2 + Sum y2^2) / (n(n - 1)) ]
         = n(n - 1)(Ybar_1 - Ybar_2)^2 / (Sum y1^2 + Sum y2^2)

In the two-sample anova, the grand mean is Ybar = (Ybar_1 + Ybar_2)/2, so

   (Ybar_1 - Ybar)^2 + (Ybar_2 - Ybar)^2
      = [ (Ybar_1 - Ybar_2)/2 ]^2 + [ (Ybar_2 - Ybar_1)/2 ]^2
      = (1/2)(Ybar_1 - Ybar_2)^2

since the squares of the numerators are identical. Then

   MS_groups = n x (1/2)(Ybar_1 - Ybar_2)^2
   MS_within = (Sum y1^2 + Sum y2^2) / (2(n - 1))

   F_s = MS_groups / MS_within = n(n - 1)(Ybar_1 - Ybar_2)^2 / (Sum y1^2 + Sum y2^2) = t_s^2
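These identities are easy to verify numerically. The following Python sketch (arbitrary illustrative data, not from the text) checks A1.1, A1.2, and A1.4:

```python
# Arbitrary sample data for checking the algebraic identities numerically.
Y = [3.2, 4.1, 5.0, 2.7, 6.3]
n = len(Y)
Ybar = sum(Y) / n

# A1.1: the deviations from the mean sum to zero.
sum_dev = sum(y - Ybar for y in Y)

# A1.2: definitional and computational sums of squares agree.
ss_def = sum((y - Ybar) ** 2 for y in Y)
ss_comp = sum(y * y for y in Y) - sum(Y) ** 2 / n

# A1.4: for two equal-sized groups, t_s squared equals F_s from the anova.
g1 = [10.2, 11.5, 9.8, 10.9]
g2 = [12.1, 13.0, 11.7, 12.6]
m = len(g1)

def mean(v):
    return sum(v) / len(v)

def ss_about_mean(v):
    mv = mean(v)
    return sum((x - mv) ** 2 for x in v)

s2_1 = ss_about_mean(g1) / (m - 1)
s2_2 = ss_about_mean(g2) / (m - 1)
t_s = (mean(g1) - mean(g2)) / ((s2_1 + s2_2) / m) ** 0.5

grand = mean(g1 + g2)
ms_groups = m * ((mean(g1) - grand) ** 2 + (mean(g2) - grand) ** 2)  # df = 1
ms_within = (ss_about_mean(g1) + ss_about_mean(g2)) / (2 * (m - 1))
F_s = ms_groups / ms_within
```

Within floating-point error, sum_dev is zero, the two sums of squares coincide, and t_s squared equals F_s.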

A1.5 Demonstration that Expression (11.5), the computational formula for the sum of products, equals Sum (X - Xbar)(Y - Ybar), the expression originally developed for this quantity.

All summations are over n items. We have

   Sum xy = Sum (X - Xbar)(Y - Ybar)
          = Sum XY - Xbar Sum Y - Ybar Sum X + n Xbar Ybar
          = Sum XY - n Xbar Ybar - n Xbar Ybar + n Xbar Ybar
              (since Sum Y = n Ybar and Sum X = n Xbar)
          = Sum XY - n Xbar Ybar

Hence,

   Sum xy = Sum XY - (Sum X)(Sum Y) / n    (11.5)

A1.6 Derivation of the computational formula for the unexplained sum of squares, Sum d^2_{Y.X}.

By definition (Section 11.3), and since yhat = bx from Expression (11.3),

   d_{Y.X} = y - yhat = y - bx

Therefore,

   Sum d^2_{Y.X} = Sum (y - bx)^2 = Sum y^2 - 2b Sum xy + b^2 Sum x^2

Since b = Sum xy / Sum x^2,

   Sum d^2_{Y.X} = Sum y^2 - 2 (Sum xy)^2 / Sum x^2 + (Sum xy)^2 / Sum x^2

or

   Sum d^2_{Y.X} = Sum y^2 - (Sum xy)^2 / Sum x^2    (11.6)

A1.7 Demonstration that the sum of squares of the dependent variable in regression can be partitioned exactly into explained and unexplained sums of squares, the cross products canceling out.

By definition, y = yhat + d_{Y.X}. Therefore

   Sum y^2 = Sum (yhat + d_{Y.X})^2 = Sum yhat^2 + Sum d^2_{Y.X} + 2 Sum yhat d_{Y.X}

If we can show that Sum yhat d_{Y.X} = 0, then we have demonstrated the required identity. We have

   Sum yhat d_{Y.X} = Sum bx(y - bx) = b Sum xy - b^2 Sum x^2
                    = b Sum xy - (Sum xy / Sum x^2) b Sum x^2
                    = b Sum xy - b Sum xy = 0

Therefore, Sum y^2 = Sum yhat^2 + Sum d^2_{Y.X}.

A1.8 Proof that the variance of the sum of two variables is

   sigma^2_{Y1+Y2} = sigma_1^2 + sigma_2^2 + 2 rho_12 sigma_1 sigma_2

where sigma_1 and sigma_2 are the standard deviations of Y1 and Y2, respectively, and rho_12 is the parametric correlation coefficient between Y1 and Y2.

If Z = Y1 + Y2, then

   sigma_Z^2 = (1/n) Sum [(Y1 + Y2) - (Ybar_1 + Ybar_2)]^2
             = (1/n) Sum [(Y1 - Ybar_1) + (Y2 - Ybar_2)]^2
             = (1/n) Sum (y1 + y2)^2
             = (1/n) (Sum y1^2 + Sum y2^2 + 2 Sum y1 y2)
             = sigma_1^2 + sigma_2^2 + 2 sigma_12

where sigma_12 is the covariance of Y1 and Y2. But, since rho_12 = sigma_12 / (sigma_1 sigma_2), we have sigma_12 = rho_12 sigma_1 sigma_2. Therefore

   sigma^2_{Y1+Y2} = sigma_1^2 + sigma_2^2 + 2 rho_12 sigma_1 sigma_2    (12.8)

Similarly,

   sigma^2_{Y1-Y2} = sigma_1^2 + sigma_2^2 - 2 rho_12 sigma_1 sigma_2    (12.9)

The analogous expressions apply to sample statistics. Thus

   s^2_{Y1+Y2} = s_1^2 + s_2^2 + 2 r_12 s_1 s_2
   s^2_{Y1-Y2} = s_1^2 + s_2^2 - 2 r_12 s_1 s_2

A1.9 Proof that the general expression for the G test can be simplified to Expressions (13.4) and (13.5).

In general, G is twice the natural logarithm of the ratio of the probability of the sample with all parameters estimated from the data to the probability of the sample assuming the null hypothesis is true. Assuming a multinomial distribution, this ratio is

   L = [ n!/(f_1! f_2! ... f_a!) phat_1^{f_1} phat_2^{f_2} ... phat_a^{f_a} ]
     / [ n!/(f_1! f_2! ... f_a!) p_1^{f_1} p_2^{f_2} ... p_a^{f_a} ]
     = Product over i of (phat_i / p_i)^{f_i}

where f_i is the observed frequency, phat_i is the observed proportion, and p_i the expected proportion of class i, while n is the sample size, the sum of the observed frequencies over the a classes. Thus

   G = 2 ln L = 2 Sum f_i ln (phat_i / p_i)

Since phat_i = f_i / n and fhat_i = n p_i,

   G = 2 Sum f_i ln ( f_i / (n p_i) ) = 2 Sum f_i ln ( f_i / fhat_i )    (13.4)

and, since ln ( f_i / (n p_i) ) = ln ( f_i / p_i ) - ln n and Sum f_i = n,

   G = 2 [ Sum f_i ln ( f_i / p_i ) - n ln n ]    (13.5)
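A quick numerical check of A1.5 through A1.8 (Python, arbitrary illustrative data; none of it is from the text):

```python
# Arbitrary paired data for checking A1.5 - A1.8 numerically.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)
Xbar = sum(X) / n
Ybar = sum(Y) / n
x = [v - Xbar for v in X]
y = [v - Ybar for v in Y]

# A1.5: computational formula for the sum of products.
sp_def = sum(a * b for a, b in zip(x, y))
sp_comp = sum(a * b for a, b in zip(X, Y)) - sum(X) * sum(Y) / n

# A1.6 and A1.7: unexplained SS, and the exact partition of sum(y^2).
sum_x2 = sum(v * v for v in x)
sum_y2 = sum(v * v for v in y)
b = sp_def / sum_x2                                     # regression coefficient
d2 = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))    # unexplained SS
yhat2 = sum((b * xi) ** 2 for xi in x)                  # explained SS

# A1.8 (sample version): variance of the sum of two variables.
cov = sp_def / (n - 1)
s2_X = sum_x2 / (n - 1)
s2_Y = sum_y2 / (n - 1)
r = cov / (s2_X * s2_Y) ** 0.5
Z = [a + c for a, c in zip(X, Y)]
Zbar = sum(Z) / n
s2_Z = sum((z - Zbar) ** 2 for z in Z) / (n - 1)
```

Within floating-point error, the computational and definitional sums of products agree, the explained and unexplained sums of squares add to the total, and the variance of the sum obeys the formula of A1.8.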

APPENDIX 2
Statistical Tables

I. Twenty-five hundred random digits 321
II. Areas of the normal curve 322
III. Critical values of Student's t distribution 323
IV. Critical values of the chi-square distribution 324
V. Critical values of the F distribution 326
VI. Critical values of F_max 330
VII. Shortest unbiased confidence limits for the variance 331
VIII. Critical values for correlation coefficients 332
IX. Confidence limits of percentages 333
X. The z transformation of correlation coefficient r 338
XI. Critical values of U, the Mann-Whitney statistic 339
XII. Critical values of the Wilcoxon rank sum 343
XIII. Critical values of the two-sample Kolmogorov-Smirnov statistic 346
XIV. Critical values for Kendall's rank correlation coefficient tau 348

APPENDIX 2 / STATISTICAL TABLES

TABLE I
Twenty-five hundred random digits.

[The body of the table, 50 numbered rows by 10 numbered columns of five-digit random numbers, is not reproduced here.]

TABLE II
Areas of the normal curve

[Tabled values (areas for arguments y/sigma from 0.0 to 3.4 in steps of 0.01, plus selected larger arguments) are not reproduced here.]

Note: The quantity given is the area under the standard normal density function between the mean and the critical point. The area is generally labeled 1/2 - alpha (as shown in the figure). By inverse interpolation one can find the number of standard deviations corresponding to a given area.

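The areas tabled in Table II can be regenerated from the error function. A short Python sketch (the function name is illustrative):

```python
from math import erf, sqrt

def normal_area(z):
    """Area under the standard normal curve between the mean and z
    standard deviations: the quantity tabled in Table II."""
    return 0.5 * erf(z / sqrt(2))
```

For example, normal_area(1.0) returns about 0.3413 and normal_area(1.96) about 0.4750, matching the tabled entries.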
TABLE III
Critical values of Student's t distribution

[Tabled values (two-tailed alpha from 0.9 to 0.001; nu = 1 to 30, 40, 60, 120, and infinity) are not reproduced here.]

Note: If a one-tailed test is desired, the probabilities at the head of the table must be halved. For degrees of freedom nu > 30, the table is designed for harmonic interpolation: interpolate between the values of the argument 120/nu, which are furnished in the table, rather than between the values of nu themselves. Thus, to obtain t0.05[43], transform the arguments into 120/43 = 2.791, 120/40 = 3.000, and 120/60 = 2.000, and interpolate between t0.05[40] = 2.021 and t0.05[60] = 2.000:

   t0.05[43] = (0.791 x 2.021) + [(1 - 0.791) x 2.000] = 2.017

When nu > 120, interpolate between 120/infinity = 0 and 120/120 = 1.000. Values in this table have been taken from a more extensive one (table III) in R. A. Fisher and F. Yates, Statistical Tables for Biological, Agricultural and Medical Research, 6th ed. (Oliver and Boyd, Edinburgh, 1958), with permission of the authors and their publisher.

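The harmonic interpolation described in the footnote to Table III can be written as a small Python helper (the function name is illustrative, not from the book):

```python
def t_harmonic(v, v_lo, t_lo, v_hi, t_hi):
    """Harmonic interpolation between tabled critical values at
    degrees of freedom v_lo < v < v_hi: interpolate in 120/v."""
    a, a_lo, a_hi = 120 / v, 120 / v_lo, 120 / v_hi
    w = (a - a_hi) / (a_lo - a_hi)   # weight on the v_lo entry
    return w * t_lo + (1 - w) * t_hi

# Footnote example: t_.05[43] from t_.05[40] = 2.021 and t_.05[60] = 2.000
t43 = t_harmonic(43, 40, 2.021, 60, 2.000)
```

This reproduces the footnote's value of 2.017 for t0.05[43].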
TABLE IV
Critical values of the chi-square distribution

[Tabled values (alpha from 0.995 to 0.001; nu = 1 to 50) are not reproduced here.]

Note: For values of nu > 100, compute approximate critical values of chi-square by formula as follows:

   chi-square alpha[nu] = 1/2 ( t_{2 alpha [infinity]} + sqrt(2 nu - 1) )^2

where t_{2 alpha [infinity]} can be looked up in Table III. Thus chi-square 0.05[120] is computed as 1/2 (1.645 + sqrt(239))^2 = 1/2 (17.10462)^2 = 146.284. When alpha > 0.5, employ the corresponding negative normal deviate, -t_{2(1 - alpha)[infinity]}, in the above formula. Values of chi-square from 1 to 100 degrees of freedom have been taken from a more extensive table.

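The large-sample approximation in the footnote to Table IV is one line of code. A Python sketch (the function name is illustrative):

```python
from math import sqrt

def chi2_approx(z, nu):
    """Approximate chi-square critical value for nu > 100:
    0.5 * (z + sqrt(2*nu - 1))**2, where z is the normal deviate
    (the t value with infinite degrees of freedom in Table III)."""
    return 0.5 * (z + sqrt(2 * nu - 1)) ** 2

# Footnote example: chi-square_.05[120] with z = 1.645
x = chi2_approx(1.645, 120)
```

This reproduces the footnote's worked value of 146.284.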
TABLE IV (continued)

[Tabled values for nu = 51 to 100 are not reproduced here.]

TABLE V
Critical values of the F distribution

[Tabled right-tail critical values, indexed by the degrees of freedom of the numerator mean square (nu1) and of the denominator mean square (nu2), are not reproduced here.]

Note: Interpolation for numbers of degrees of freedom not furnished in the arguments is by means of harmonic interpolation (see the footnote for Table III). If both nu1 and nu2 require interpolation, one needs to interpolate for each of these arguments in turn: first interpolate for nu1 at each of the two bracketing tabled values of nu2, then interpolate between these two results to obtain the desired quantity. Entries for alpha = 0.05, 0.025, 0.01, and 0.005, and for nu1 = 1 to 10, 12, 15, 20, 24, 30, 40, 60, 120, and infinity, were copied from a table by M. Merrington and C. M. Thompson (Biometrika 33) with permission of the publisher.

25 5.04 3.01 7.57 6.90 6.27 4.05 .01 2.90 3.91 3.025 .05 .31 5.64 14.025 .57 2.81 2.15 4.05 .36 13.36 6.52 3.72 3.01 4 .5 99.17 50 252 1010 6300 19.56 .05 .12 .22 4.025 .05 .8 4.41 13.12 3.38 13.16 3.12 2.00 5.99 3.59 14.28 9.01 7 .14 3.02 3.83 3.025 .31 3.66 8.98 7.4 8.43 9.56 3.84 5.31 2.6 4.86 8.01 .05 .01 .4 5.05 .66 3.72 8.53 13.51 13.48 2.28 2.64 3.2 4.05 .26 4.025 .5 99.01 10 .2 26.01 3 .025 .025 .70 19.05 .29 3.46 13.2 5.40 6.05 .46 6.5 8.96 2.66 14.25 40 251 1010 6290 19.5 99.08 3.5 99.85 6.37 4.01 .5 99.77 8.025 .05 .75 4.01 .42 6.67 4.81 5.1 26.0 4.27 7.23 4.05 .5 99.5 39.43 6.51 4.01 .14 5.4 99.73 2.7 5.4 39.01 3.5 8.4 99.73 4.38 4.1 26.08 3.66 14.20 3.47 6.05 .01 .77 3.41 14.08 120 253 1010 6340 19.56 6.47 4.5 39.025 .70 3.79 3.86 3.3 5.54 3.33 30 250 1000 6260 19.025 .appendix 2 / statistical tables t a b l e V continued ν ι (degrees of freedom of numerator mean squares) C O 254 1020 6370 19.63 8.9 4.05 .05 .56 14.01 6 .02 9.23 9.31 13.01 2 .44 4.58 3.4 39.70 4.69 8.55 13.40 2.0 26.38 3.09 3.5 8.14 4.74 4.025 .42 4.9 26.82 3.84 5.58 14.025 .5 39.67 4.78 5.025 .0 26.32 4.01 .97 3.025 .75 8.44 6.30 4.5 39.7 4.00 a .70 8.95 5.77 5.85 3.62 6.52 60 252 1010 6310 19.05 .27 5.74 2.77 4.91 !X 1 .23 3.05 19.0 26.33 9.51 4.05 .86 3.5 8.01 .96 7.025 .34 4.14 9.61 4.57 14.18 9.5 8.9 5.22 4.6 5.80 8.31 3.07 9.07 2.71 3.5 8.10 5.5 4.97 3.4 8.36 5.01 .95 2.025 .17 7.53 6.20 5.75 3.52 4.01 .5 5.74 3.50 6.26 13.06 3.11 3.55 3.07 3.65 2.5 39.39 4.03 2.89 5.87 5.62 3.025 .01 3.62 14.5 99.45 4.94 5.31 4.36 2.12 9.12 7.3 26.33 4.01 .24 3.01 8 .80 5.07 7.93 3.67 4.7 4.5 39.20 4.88 3.025 .1 5.47 3.05 15 246 985 6160 20 248 993 6210 24 249 997 6230 19.20 2.86 2.5 8.65 2.41 4.02 3.5 39.7 4.01 5 .3 5.94 3.9 26.025 9 .05 .025 .56 4.81 3.40 3.

53 2.025 .68 3.75 1.22 2.01 .87 8.18 2.02 6.98 5.79 3.85 3.68 3.05 .29 2.84 3.025 1 4.22 2.82 2.64 3.58 4.01 24 .07 2.54 3.94 2.87 3.82 2.70 2.025 .14 2.025 .61 4.41 2.71 3.17 2.74 3.63 3.51 2.20 4.025 .07 3.01 30 .12 3.96 2.10 3.025 .59 4.89 2.35 5.025 .12 2.328 a p p e n d i x 2 / s t a t i s t i c a l t a b l e s 328 table V continued ν ! (degrees of freedom of numerator mean squares) α 11 .57 3.37 2.46 4.05 .025 .11 3.36 4.01 .01 » .46 2.17 2.31 2.82 3.15 3.55 9.72 3.70 3.37 2.63 6.87 3.45 2.25 12 2.59 3.16 2.48 4.51 4.72 2.05 .67 2.69 3.87 2.64 3.01 .51 9 2.01 .59 4.09 2.72 7.89 2.57 3.49 4.83 2.35 2.48 2.88 2.57 7.18 α .05 .40 1.92 2.56 1.65 4.23 4.01 .08 5.19 2.10 2.025 .69 3.93 4.95 3.37 4.59 3.80 2.79 3.05 .01 .43 4.20 8.13 3.60 3.49 4.56 4.29 2.31 2.39 2.13 2.51 4.025 .33 4.30 2.99 2.80 1.05 .54 6.025 .01 20 .94 3.64 8 2.96 3.23 2.21 2.50 2.38 4.12 2.01 3.34 1.68 4.08 3.26 7.41 3.32 5 3.89 5.03 3.17 2.05 .72 4.03 2.60 3.61 3.61 3 3.10 6.96 2.61 3.41 4.65 3.99 2.27 2.01 .025 .06 2.47 2.09 2.11 3.18 5.51 2.79 3.84 2.025 .66 1.01 .77 3.84 6.52 2.48 2.025 .56 2.46 2.73 4.21 3.15 3.89 3.67 3.28 4.51 2.80 7 3.62 3.32 3.32 4.01 3.39 2.34 4.51 3.01 120 .47 5.68 4.22 2.25 4.00 3.78 3.40 2.78 3.42 2.83 2.01 .80 4.77 6.29 2.01 40 .54 2.33 2.10 4.41 2.25 2.23 3.73 1.04 2.05 .32 5.18 2.00 3.79 3.87 3.79 3.17 5.17 2.89 5.30 2.04 2.34 2.80 3.28 5.29 2.90 3.85 3.09 2.53 4.16 2.01 3.67 2.01 2.29 2.45 2.94 2.98 2.02 6 3.29 4.62 3.82 4.95 3.05 2.88 5.05 2.06 3.25 2.51 2.32 11 2.29 4.05 .98 3.46 5.10 2.30 2.01 .56 1.025 .01 15 .72 1.40 4.76 3.95 2.39 2.17 2.87 2.26 2.22 2.43 2.28 2.29 4.15 6.74 2.51 3.33 2.65 2. 
.72 3.025 .59 4.53 3.56 2.90 2.67 3.90 3.85 3.22 3.25 2.025 .1 8 3.10 2.66 4.70 2.26 5.06 3.13 3.44 4.42 3.01 3.66 1.64 2.05 .12 2.01 3.42 2.36 2.02 2.05 .05 5.025 .92 5.05 .04 5.16 2.79 1.69 4.12 5.11 2.91 2.93 3.99 3.08 2.54 3.89 2.05 .84 3.21 2.39 3.30 2.79 2.32 4.91 3.73 2.33 2.63 2.05 .75 6.025 .17 2.90 2.76 4.95 2.05 .05 .31 4.39 2.45 2.72 9.75 3.26 4.91 3.50 2.10 2.47 1.89 2.20 4.00 5.00 2.04 2.41 10 2.63 2 3.85 3.37 2.01 60 .78 4 3.05 .05.09 3.46 2.15 5.92 3.05 .27 2.45 2.53 3.22 2.51 2.41 2.99 2.34 2.80 4.36 3.02 2.71 3.84 5.13 2.90 3.29 7.025 .01 12 .07 3.01 .50 1.42 7.36 2.75 3.63 1.86 4.32 2.05 .83 2.95 2.

55 3.40 1.05 .19 1.05 .37 1.07 3.69 2.20 2.00 1.<X) 1.31 1.95 2.21 1.41 2.01 .53 1.84 2.35 1.65 3.01 .62 2.46 2.01 2.56 1.88 2.52 Oo 2.54 2.79 2.72 3.05 .31 1.58 1.025 .01 15 .025 .10 2.29 2.01 .18 2.01 .20 1.53 1.52 2.76 1.01 2.64 3.33 2.21 2.05 .67 1.16 2.72 3.01 .39 1.33 4.79 2.11 2.36 2.52 1.07 2.025 .025 .94 2.46 1.29 2.27 1.01 1.86 3.44 1.01 2.22 1.70 1.60 1.05 .90 2.87 1.52 3.01 .70 1.11 2.20 1.11 2.51 1.97 2.25 1.16 2.92 2.05 .76 1.12 2.01 .69 1.88 .025 .95 1.37 2.67 1.025 .30 1.79 3.39 1.01 24 .43 2.12 3.39 1.98 2.89 2.53 1.45 2.58 1.57 2.84 120 2.88 1.21 2.01 2.79 30 2.92 1.32 1.05 .25 2.48 1.39 1.25 2.07 2.57 1.30 2.76 1.025 .20 2.025 .21 1.44 2.86 2.05 .05 1.53 3.87 2.84 2.61 1.95 2.03 1.57 1.94 2.appendix 2 / statistical tables t a b l e V continued ν ) (degrees of freedom of numerator mean squares) α 11 .59 2.81 2.025 15 2.025 .01 .05 .49 1.39 2.00 1.72 1.64 1.40 2.01 .06 3.64 1.86 1.05 .78 1.35 1.66 1.01 12 .64 1.03 1.94 40 2.91 3.65 1.46 1.70 2.79 2.74 1.85 3.84 2.02 3.79 2.47 2.49 3.73 tx .025 .01 .025 .48 1.06 2.04 20 2.025 .43 1.78 2.70 1.02 2.08 2.02 3.01 3 0 .09 2.01 6 0 .88 3.80 1.56 1.40 2.12 1.07 2.06 1.38 2.71 24 2.29 1.86 1.55 1.74 1.94 3.22 2.27 2.86 2.08 2.68 1.58 1.96 3.35 1.02 1.05 .84 1.05 .89 2.57 3.18 2.99 2.01 ® .42 1.89 50 2.11 1.75 1.69 1.05 .51 3.46 2.94 1.04 2.74 1.61 3.25 1.23 4.03 2.52 60 2.13 1.47 1.64 1.01 2.05 .9-1 2.025 .05 .86 2.05 .76 3.51 3.66 1.73 1.47 1.00 3.88 2.25 2.15 2.025 .34 2.50 1.025 .82 2.54 3.74 1. 0 5 .96 1.18 4.05 .70 1.69 1.87 3.59 1.55 1.66 1.52 1.32 1.83 2.05 2.33 2.025 .62 1.93 2.01 40 .025 .94 2.47 1.75 1.01 1.70 3.01 20 .31 2.01 120 .83 2.94 2.025 .08 1.94 2.45 2.59 3.40 2.20 2.09 2.82 2.57 3.05 .61 1.97 2.70 2.78 2.62 3.84 2.80 2.14 2.61 1.43 1.025 .38 .66 1.35 2.40 2.60 2.43 1.17 4.11 1.
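The harmonic interpolation described in the note can be sketched in code: the critical value is interpolated linearly in the reciprocal of the degrees of freedom. This is an illustrative sketch, not part of the table; the endpoint critical values in the example call are hypothetical stand-ins, not actual tabled entries.

```python
def harmonic_interp(nu, nu_lo, f_lo, nu_hi, f_hi):
    """Interpolate a tabled critical value at `nu` degrees of freedom,
    linearly in 1/nu, between entries tabled at nu_lo and nu_hi."""
    w = (1.0 / nu_lo - 1.0 / nu) / (1.0 / nu_lo - 1.0 / nu_hi)
    return f_lo + w * (f_hi - f_lo)

# Hypothetical endpoint values, standing in for F[50,24] and F[60,24]:
approx = harmonic_interp(55, 50, 1.70, 60, 1.65)
```

When both ν1 and ν2 fall between tabled arguments, the function is applied once for each argument in turn, and then once more to combine the two results, exactly as the note prescribes.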

TABLE VI
Critical values of Fmax

a = number of samples; ν = degrees of freedom.

[The numeric body of this table is not reproduced here.]

Note: Corresponding to each value of a (number of samples) and ν (degrees of freedom) are two critical values of Fmax, representing the upper 5% and 1% percentage points. The corresponding probabilities α = 0.05 and 0.01 represent one tail of the Fmax distribution. This table was copied from H. A. David (Biometrika 39:422-424, 1952) with permission of the publisher and author.
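As a usage sketch (not from the text): the statistic this table serves is simply the ratio of the largest to the smallest sample variance among the a samples, which is then compared with the tabled critical value for the given a and ν = n − 1.

```python
def fmax_statistic(samples):
    """Ratio of the largest to the smallest sample variance (Hartley's Fmax)."""
    def sample_variance(xs):
        # sum of squared deviations divided by n - 1
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    variances = [sample_variance(s) for s in samples]
    return max(variances) / min(variances)
```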

TABLE VII
Shortest unbiased confidence limits for the variance

Factors are tabled for ν = 2 to 100 and confidence coefficients 0.95 and 0.99.

[The numeric body of this table is not reproduced here.]

Note: The factors in this table have been obtained by dividing the quantity n − 1 by the values found in a table prepared by D. V. Lindley, D. A. East, and P. A. Hamilton (Biometrika 47:433-437, 1960).
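For comparison, an illustrative sketch (not the book's method): the conventional equal-tail confidence limits for a variance are (n − 1)s²/χ²_upper ≤ σ² ≤ (n − 1)s²/χ²_lower, using chi-square quantiles. The sketch below approximates those quantiles with the Wilson-Hilferty formula; the table above instead furnishes factors yielding the shortest unbiased limits, which differ from the equal-tail ones.

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def variance_ci(s2, n, conf=0.95):
    """Equal-tail confidence interval for sigma^2, given sample variance s2
    computed from n observations."""
    a = (1 - conf) / 2
    df = n - 1
    return df * s2 / chi2_quantile(1 - a, df), df * s2 / chi2_quantile(a, df)
```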

TABLE VIII
Critical values for correlation coefficients

[The numeric body of this table is not reproduced here.]

Note: The upper value is the 5%, the lower value the 1% critical value. This table is reproduced by permission from Statistical Methods, 5th edition, by George W. Snedecor, © 1956 by The Iowa State University Press.
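A sketch of the standard equivalence this table embodies (illustrative, not part of the table): the significance of a product-moment correlation r computed from n pairs can also be tested by converting it to a t statistic with n − 2 degrees of freedom.

```python
def t_from_r(r, n):
    """t statistic for testing H0: rho = 0, with n - 2 degrees of freedom."""
    return r * ((n - 2) / (1 - r * r)) ** 0.5
```

The tabled critical values of r are exactly the values at which this t equals the corresponding critical t.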

TABLE IX
Confidence limits for percentages

This table furnishes confidence limits for percentages based on the binomial distribution.

The first part of the table furnishes limits for samples up to size n = 30. The arguments are Y, the number of items in the sample that exhibit a given property, and n, the sample size. Argument Y is tabled for integral values between 0 and 15, which yield percentages up to 50%. For each sample size n and number of items Y with the given property, three lines of numerical values are shown. The first line gives the 95% confidence limits for the percentage, the second line lists the observed percentage incidence of the property, and the third line furnishes the 99% confidence limits. For example, for Y = 8 individuals showing the property out of a sample of n = 20, the first line yields the 95% confidence limits of this percentage as 19.10% to 63.95%, the second line indicates that this represents an incidence of the property of 40.00%, and the third line gives the corresponding 99% confidence limits.

Interpolate in this table (up to n = 49) by dividing L1 and L2, the lower and upper confidence limits at the next lower tabled sample size n⁻, by the desired sample size n, and multiplying them by n⁻. Thus, for example, to obtain the confidence limits of the percentage corresponding to 8 individuals showing the given property in a sample of 22 individuals (which corresponds to 36.36% of the individuals showing the property), compute the lower confidence limit L1 = L1⁻n⁻/n = (19.10)(20)/22 = 17.36% and the upper confidence limit L2 = L2⁻n⁻/n = (63.95)(20)/22 = 58.14%.

The second half of the table is for larger sample sizes (n = 50, 100, 200, 500, and 1000). The arguments along the left margin of the table are percentages from 0 to 50% in increments of 1%, rather than counts. The 95% and 99% confidence limits corresponding to a given percentage incidence p and sample size n are given in two lines in the body of the table. Thus, the 99% confidence limits of an observed incidence of 12% in a sample of 500 are found, in the second of the two lines, to be 8.56% to 16.19%.

Interpolation in this part of the table between the furnished sample sizes can be achieved by means of the following formula for the lower limit:

    L1 = [L1⁻n⁻(n⁺ − n) + L1⁺n⁺(n − n⁻)] / [n(n⁺ − n⁻)]

In the above expression, n is the size of the observed sample, n⁻ and n⁺ are the next lower and upper tabled sample sizes, and L1⁻ and L1⁺ are the corresponding tabled lower confidence limits for these sample sizes. The upper confidence limit, L2, can be obtained by a corresponding formula by substituting the subscript 2 for the subscript 1.

By way of an example, we illustrate setting 95% confidence limits to an observed percentage of 25% in a sample of size 80. The tabled 95% limits for n⁻ = 50 are 13.84% to 39.84%; for n⁺ = 100 the corresponding tabled limits are 16.88% to 34.66%. When we substitute the values for the lower limits in the above formula we obtain

    L1 = [(13.84)(50)(100 − 80) + (16.88)(100)(80 − 50)] / [80(100 − 50)] = 16.12%

for the lower confidence limit. Similarly, for the upper confidence limit we compute

    L2 = [(39.84)(50)(100 − 80) + (34.66)(100)(80 − 50)] / [80(100 − 50)] = 35.96%

For percentages greater than 50%, look up the complementary percentage as the argument; the complements of the tabled binomial confidence limits are the desired limits.

The tabled values in parentheses are limits for percentages that could not be obtained in any real sampling problem (for example, 25% in 50 items) but are necessary for purposes of interpolation. Confidence limits of odd percentages up to 13% for n = 50 were computed by interpolation. For Y = 0, one-sided (1 − α)100% confidence limits were computed as L2 = 1 − α^(1/n), with L1 = 0.

These tables have been extracted from more extensive ones in D. Mainland, L. Herrera, and M. I. Sutcliffe, Tables for Use with Binomial Samples (Department of Medical Statistics, New York University College of Medicine, 1956) with permission of the publisher. The interpolation formulas cited are also due to these authors.
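The two interpolation rules just described translate directly into code. The following sketch reproduces the worked examples; the tabled limits passed in are the values quoted in the text.

```python
def interp_small_sample(limit_lower_n, n_lower, n):
    """Interpolate a confidence limit for n up to 49: scale the limit tabled
    at the next lower sample size n_lower by n_lower / n."""
    return limit_lower_n * n_lower / n

def interp_large_sample(L_lo, n_lo, L_hi, n_hi, n):
    """Interpolate a confidence limit between tabled sample sizes
    n_lo < n < n_hi, using the formula given in the note."""
    return (L_lo * n_lo * (n_hi - n) + L_hi * n_hi * (n - n_lo)) / (n * (n_hi - n_lo))

# Worked examples from the note:
L1_small = interp_small_sample(19.10, 20, 22)              # lower 95% limit, Y = 8, n = 22
L1_large = interp_large_sample(13.84, 50, 16.88, 100, 80)  # lower 95% limit, 25%, n = 80
```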

[The numeric body of Table IX — confidence limits for percentages, for n = 5 to 30 and for n = 50, 100, 200, 500, and 1000 — is not reproduced here.]

TABLE X
The z transformation of the correlation coefficient r

Arguments r are tabled from 0.00 to 0.99; the body of the table gives the corresponding values of z.

[The numeric body of this table is not reproduced here.]
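The tabled transformation is Fisher's z = ½ ln[(1 + r)/(1 − r)], the inverse hyperbolic tangent of r, so a one-line function can replace the lookup:

```python
import math

def z_transform(r):
    """Fisher's z transformation of the correlation coefficient r."""
    return 0.5 * math.log((1 + r) / (1 - r))  # identical to math.atanh(r)
```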

TABLE XI
Critical values of U, the Mann-Whitney statistic

[The numeric body of this table is not reproduced here.]

Note: Critical values are tabulated for two samples of sizes n1 and n2, where n1 ≥ n2. The upper bounds of the critical values are furnished, so that the sample statistic Us has to be greater than a given critical value to be significant. The probabilities at the heads of the columns are based on a one-tailed test and represent the proportion of the area of the distribution of U in one tail beyond the critical value. For a two-tailed test, use the same critical values but double the probability at the heads of the columns. This table was extracted from a more extensive one (table 11.4) in D. B. Owen, Handbook of Statistical Tables (Addison-Wesley Publishing Co., Reading, Mass., 1962); courtesy of the U.S. Atomic Energy Commission, with permission of the publishers.
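The statistic this table serves can be computed by direct pair counting. This is an illustrative sketch assuming no tied observations; it returns Us, the larger of the two complementary U counts, which is the quantity compared against the tabled critical value.

```python
def mann_whitney_us(sample1, sample2):
    """Us for the Mann-Whitney test: count pairs (x, y) with x > y,
    then take the larger of that count and its complement n1*n2 - U.
    Assumes no ties between the samples."""
    u = sum(1 for x in sample1 for y in sample2 if x > y)
    return max(u, len(sample1) * len(sample2) - u)
```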

APPENDIX 2 / STATISTICAL TABLES    343

TABLE XII
Critical values of the Wilcoxon rank sum T, obtained in Wilcoxon's matched-pairs signed-ranks test

[Tabled critical values for n = 5 to 20, at nominal α = 0.05, 0.025, 0.01, and 0.005, are not legible in this reproduction.]

Note: This table furnishes critical values for the one-tailed test of significance of the rank sum T obtained in Wilcoxon's matched-pairs signed-ranks test. Since the exact probability level desired cannot be obtained with integral critical values of T, two such values and their attendant probabilities, bracketing the desired significance level, are furnished. Thus, to find the significant 1% values for n = 19, we note the two critical values of T, 37 and 38, in the table. The probabilities corresponding to these two values of T are 0.0090 and 0.0102. Clearly a rank sum of T ≤ 37 would have a probability of less than 0.01 and would be considered significant by the stated criterion. For two-tailed tests, in which the alternative hypothesis is that the pairs could differ in either direction, double the probabilities stated at the head of the table. For sample sizes n > 50 compute

    T = n(n + 1)/4 − t_α[∞] √[ n(n + 1)(2n + 1) / 24 ]
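The bracketing probabilities in the body of the table can be verified by direct enumeration: under the null hypothesis, each of the 2^n ways of attaching signs to the ranks 1, 2, ..., n is equally likely, and T is the sum of the positively signed ranks. The following Python sketch (illustrative only; it is not part of the original text, and the function name is ours) computes the exact one-tailed probability P(T ≤ t):

```python
from itertools import combinations

def wilcoxon_exact_p(n, t):
    """Exact one-tailed P(T <= t) for Wilcoxon's matched-pairs
    signed-ranks test under the null hypothesis (n pairs, no ties).

    Each subset of {1, ..., n} is one equally likely set of positively
    signed ranks, so we count the subsets whose rank sum is <= t and
    divide by the 2^n possible sign assignments.
    """
    favorable = sum(
        1
        for r in range(n + 1)
        for subset in combinations(range(1, n + 1), r)
        if sum(subset) <= t
    )
    return favorable / 2 ** n
```

For n = 5, for instance, this gives P(T ≤ 0) = 1/32 ≈ 0.0312 and P(T ≤ 1) = 2/32 = 0.0625, the kind of bracketing pair the table furnishes around a nominal α = 0.05.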

0257 α .0049 .0247 .0247 .0101 .0475 .0253 .0242 .0502 .0095 .0502 .0098 .0249 .0104 .0052 .APPENDIX 2 / STATISTICAL TABLES TABLE X I I continued nominal α 0.0242 .0097 .0106 76 77 84 85 92 93 101 102 110 111 120 121 130 131 140 141 151 152 162 163 173 174 .0052 .0045 .0479 .0491 .0048 .0249 .0258 .0527 .0051 .0098 .0046 .0245 .0052 .0047 .0048 .0051 .0252 .0093 .0512 .(Χ)48 149 .0097 70 .0241 .0053 .(Χ)49 .0492 .0504 .0101 .0246 .0253 .0051 .0048 .0477 .0104 .005 Τ 42 43 48 49 54 55 61 62 68 69 75 76 83 84 91 92 100 101 109 110 118 119 128 129 138 139 Τ 67 68 75 76 83 84 91 92 100 α .0100 .0488 .0503 .0251 .0481 .0261 .0484 .0105 .0107 α .0251 .0505 .0512 .0501 α .0523 .0053 .0102 .0507 .025 Τ 58 59 65 66 73 74 81 82 89 90 98 99 107 108 116 117 126 127 137 138 147 148 159 160 170 171 182 183 195 196 0.0104 .0048 .0051 .0230 .0103 .0490 .0479 .0240 .0099 .0053 .0252 .0263 .0260 .0239 .0051 159 160 .0095 .0049 .05 η 21 22 0.0492 .0050 .0094 .0095 .01 Τ 49 50 55 56 62 63 0.0231 .0103 .0521 .0046 .0497 .0053 .0482 .0096 .(Χ)52 23 24 25 26 27 28 29 30 31 32 33 34 35 69 .0239 .0100 101 110 111 119 120 130 131 140 141 151 152 163 164 175 176 187 188 2(X) 201 213 214 148 .0263 .0102 .0108 .0516 .0050 .(Χ)99 .0098 .0250 .0096 .0261 .0485 .0053 .0496 .0048 .0506 .0097 .0242 .0260 .0051 .0524 .

APPENDIX 2 / STATISTICAL TABLES

TABLE XII continued
[Tabled critical values for n = 36 to 50 are not legible in this reproduction.]

APPENDIX 2 / STATISTICAL TABLES

TABLE XIII
Critical values of the two-sample Kolmogorov-Smirnov statistic

[Tabled values of n1n2D for sample sizes n1, n2 = 2 to 25 are not legible in this reproduction.]

Note: This table furnishes upper critical values of n1n2D, the Kolmogorov-Smirnov test statistic D multiplied by the two sample sizes n1 and n2. Sample sizes n1 are given at the left margin of the table, while sample sizes n2 are given across its top at the heads of the columns. The three values furnished at the intersection of two sample sizes represent the following three two-tailed probabilities: 0.05, 0.025, and 0.01. Thus, for two samples with n1 = 16 and n2 = 10, the 5% critical value of n1n2D is 84. Any value of n1n2D > 84 will be significant at P < 0.05. When a one-sided test is desired, approximate probabilities can be obtained from this table by doubling the nominal α values. However, these are not exact, since the distribution of cumulative frequencies is discrete. This table was copied from table 55 in E. S. Pearson and H. O. Hartley, Biometrika Tables for Statisticians, Vol. II (Cambridge University Press, London, 1972) with permission of the publishers.
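The statistic entered in this table can be computed directly from two samples. Since D is the maximum absolute difference between the two sample cumulative relative frequency distributions, multiplying through by n1n2 keeps everything in integers, matching the integral entries of the table. An illustrative Python sketch (not the book's code; the function name is ours):

```python
import bisect

def ks_n1n2_d(sample1, sample2):
    """n1 * n2 * D for the two-sample Kolmogorov-Smirnov test, where
    D = max |F1(x) - F2(x)| over the pooled observations, and F1, F2
    are the sample cumulative relative frequency distributions."""
    n1, n2 = len(sample1), len(sample2)
    s1, s2 = sorted(sample1), sorted(sample2)
    best = 0
    for x in sorted(set(s1 + s2)):
        count1 = bisect.bisect_right(s1, x)  # n1 * F1(x)
        count2 = bisect.bisect_right(s2, x)  # n2 * F2(x)
        # n1 * n2 * |F1(x) - F2(x)| = |count1*n2 - count2*n1|
        best = max(best, abs(count1 * n2 - count2 * n1))
    return best
```

For two completely separated samples D = 1, so the statistic equals n1n2; the computed value is then compared with the tabled critical value for the chosen α (for example, 84 for n1 = 16 and n2 = 10 at the 5% level).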

APPENDIX 2 / STATISTICAL TABLES    347

TABLE XIII continued
[Tabled values are not legible in this reproduction.]

348    APPENDIX 2 / STATISTICAL TABLES

TABLE XIV
Critical values for Kendall's rank correlation coefficient τ

[Tabled 0.10, 0.05, and 0.01 critical values of τ for sample sizes n = 4 to 40 are not legible in this reproduction.]

Note: This table furnishes 0.10, 0.05, and 0.01 critical values for Kendall's rank correlation coefficient τ. The probabilities are for a two-tailed test. When a one-tailed test is desired, halve the probabilities at the heads of the columns. To test the significance of a correlation coefficient, enter the table with the appropriate sample size and find the appropriate critical value. For example, for a sample size of 15, the 5% and 1% critical values of τ are 0.390 and 0.505, respectively. Thus, an observed value of 0.498 would be considered significant at the 5% but not at the 1% level. Negative correlations are considered as positive for purposes of this test. For sample sizes n > 40 use the asymptotic approximation given in Box 12.3, step 5. The values in this table have been derived from those furnished in table XI of J. V. Bradley, Distribution-Free Statistical Tests (Prentice-Hall, Englewood Cliffs, N.J., 1968) with permission of the author and publisher.
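For samples beyond the table, the note points to an asymptotic approximation. The standard large-sample result (in the absence of ties) is that under the null hypothesis τ is approximately normally distributed with mean 0 and variance 2(2n + 5)/[9n(n − 1)]. The following Python sketch is illustrative only (it is not the book's Box 12.3, and the function names are ours):

```python
import math

def kendall_tau(x, y):
    """Kendall's rank correlation tau for n paired observations
    (assumes no tied values within x or within y)."""
    n = len(x)
    score = 0  # concordant pairs minus discordant pairs
    for i in range(n):
        for j in range(i + 1, n):
            score += 1 if (x[j] - x[i]) * (y[j] - y[i]) > 0 else -1
    return score / (n * (n - 1) / 2)

def tau_to_z(tau, n):
    """Large-sample test of tau: under H0, tau is approximately normal
    with mean 0 and variance 2(2n + 5) / (9n(n - 1))."""
    return tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
```

The resulting z is compared with the critical values of the standard normal distribution in the usual way.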

Bibliography

Allee, W. C., and E. S. Bowen. 1932. Studies in animal aggregations: Mass protection against colloidal silver among goldfishes. J. Exp. Zool., 61:185-207.

Allee, W. C., E. S. Bowen, J. C. Welty, and R. Oesting. 1934. The effect of homotypic conditioning of water on the growth of fishes, and chemical studies of the factors involved. J. Exp. Zool., 68:183-213.

Archibald, E. E. A. 1950. Plant populations. II. The estimation of the number of individuals per unit area of species in heterogeneous plant populations. Ann. Bot., N.S., 14:7-21.

Banta, A. M. 1939. Studies on the physiology, genetics, and evolution of some Cladocera. Carnegie Institution of Washington, Dept. Genetics, Paper 39. 285 pp.

Blakeslee, A. F. 1921. The globe mutant in the jimson weed (Datura stramonium). Genetics, 6:241-264.

Block, B. C. 1966. The relation of temperature to the chirp-rate of male snowy tree crickets, Oecanthus fultoni (Orthoptera: Gryllidae). Ann. Entomol. Soc. Amer., 59:56-59.

Brower, L. P. 1959. Speciation in butterflies of the Papilio glaucus group. I. Morphological relationships and hybridization. Evolution, 13:40-63.

Brown, B. E., and A. W. A. Brown. 1956. The effects of insecticidal poisoning on the level of cytochrome oxidase in the American cockroach. J. Econ. Entomol., 49:675-679.

. A. Interaction between inversion polymorphisms of two chromosome pairs in the grasshopper. 1966. E. Ecology. 1968. Yule. C. 1960. 1982.2 7 9 . 1964.. 1983. Α. Grundzuge allgemeinen Physiologie eilier Lehre vom Lichtund Farbensinn. K. French. Carter. M. 14:116-129. The distribution of Kendall's score S for a pair of tied rankings. Z. 59:1162 1166. 19:58 73. and M. Comstock. Selection of high temperatures for hibernation by the pocket mouse.. J .. K. 1952. T h e effect of age and parity of the mother on birth weight of t h e o f f s p r i n g . 1. Ann. M e l a n o m a and exposure to sunlight. J. 1 9 2 1 . Τ. Human Genetics. E. C. A. Vitamin and Nutrition Res. P. Mus.D. 4:110 136. Υ. 1976. Nymphalidae). 19: 234-243. 111 pp. Beitrage zur Frage des Geschlechtsverhaltnisses der Geborenen. M. P. J. Meredith. W . Liu. M a r r o w neutrophil mass in patients with nonhematological tumor. 9 0 : 7 . J e n a . Mitchell. 57:185-191. Anthropoi. C h r o m a t o g r a p h i c investigation of urinary amino-acids in the great apes. J. H. and G. 1983. Johnson. E. Jr. Some biometrics of Heliconius charitonius (Linnaeus) (Lepidoptera. 1966. Evolution. Firschein. Cowan. 47:151-171. Amer. Leinert. J. Gartler. Biometrics. Methods and their evaluation to estimate the vitamin B h status in human subjects. Ζ.1 7 8 . 86 pp. Seasonal changes in the energy balance of the English sparrow. Selection of Drosophila melanogaster for length of larval period. 1889. 1954. F r o h l i c h .. J. Unpublished Ph. A procedure for testing the homogeneity of all sets of means in analysis of variance. Lee. New York.. Food and wing determination in Myzus (Homoptera: Aphidae). A. Johnston. and C. Seng. E. and T. L. 14:41-57. thesis. Entomol. Amer. 1960. Premating isolation in the Hyla ewingi complex. I. and Y.. Int. Syst. White. Nelson. Zool. and Η. University of Colorado. Syst. Newman. Stat.. 1920. Geissler.. 101:561-568. and W. and R. Amer. and D. 1962. 1966. J. Greenwood.. 1955. Epidemiol. 35:1-24.. 
Kosfeld. Ann. A. Elementary Probability for the Biological Sciences. Individual growth in skeletal bigonial diam- . R.. K. Stat. Sachs. Davis. Roy. Science. M o s i m a n n . Novitates. Entomol. 72:385-411. V. The effect of temperature on the rate of development of the p o t a t o leafhopper. A. J. Rev.. Fischer. Dadd. R. P. V. R. J. Millis. Soc. U. 8 3 : 2 5 5 . Decker. The effects of starvation and humidity on water content in Tribolium confusum Duval (Coleoptera). Mittler.350 BIBLIOGRAPHY Brown. Lab. Hotze.. C. 20:459 -477. S. 255 pp. Ε. 5 3 : 1 6 6 . J . Bill size and the question of competition in allopatric and sympatic populations of dusky and gray flycatchers. Burr. R. Hunter. H. and G. 1958. 1964. Kouskolekas. Soc. Phys. R. Dobzhansky. Bur. Methods for adapting the virus of rinderpest to rabbits. M. persicae Appleton- Century-Crofts. J. K. J. Empoasca fabae (Homoptera: Cicadellidae). M . Littlejohn. and P. Soc.. Moraba scurra. Lewontin.2 8 . Ein Beitrag zur der Sinne. 11:131-138. Koo. Ν. An inquiry into the nature of frequencydistributions of multiple happenings.. Α. K. F .. 1956. 1574. and Clinical Med.. E. E. 1959. and V. Perignathus longimembris: Ecological advantages and energetic consequences. 53 pp. 59:292-298. Blood serum protein variations at the species and subspecies level in deer of the genus Odocoilcus. 1956. Ann. Zool. Auk. 1965. Am. I.. 15:70-87. Simon. 128:252-253. Evolution. Μ.. G. Gabriel. F. Vererbungsi. Biometrika. D.

R. W. Entomol. H. Genetics. P e a r s o n . S. A morphometric analysis of DDT-resistant and non-resistant housefly strains... 46:201 252. C.. R.. Anatomy. Agr. Phillips. and L. 1907. W. Miller. 44:120 130. Am. 6:296-315. 59:154-159... Olson. Pathogenic Microorganisms. A. Sokal. 240 pp. R. subgroup SokololT. R. C. Park. Rohlf. Newsom. R. 15:158 167. Etude hemotypologique des ethnies libanaises. Tague. Bull. Amer. Sokal. Univ. A comparison of fitness characters and their responses to density in stock and selected cultures of wild type and black Tribolium castaneum.|. 1959. J. London. 1955. 20:49 71. 5:351-360. Monogr. 1952. and P. Ecology. On the error of counting with a haemacytometcr. Statistical analysis of the frequency distribution of the emerging weevils on beans. E . R. Kansas Sci. Assoc. Geographic variation of Pemphigus populitransversus in Eastern North America: Stem mothers and new data on alates. Am. Coll. S. Hunter. 1984. New York. Kyoto Imp. and M. K. Hermann. and I. and R.. R. J. R. and E. Agr. The effects of larval density on several strains of the housefly. Biometrika. Entomol. Hydrol. T a l e b . Krumwiede. University of Chicago Press. E. E. Williams. Lea & Febiger. Studies on experimental population of the A/uki bean weevil. Klett. Α. S. Freeman and Company. Latshaw. W. de Toulouse. Drosophila Ecol.) Sinnott. Ann. Sullivan. H a r t l e y . Tribolium Information Bull. Mem. 1966. 1964. .. 34:77-79. Evolution. D. L. Univ. A. V o l . Tate. Morphological variation in natural and experimental populations of pseudoobscura a n d Drosophila Student (W..BIBLIOGRAPHY 351 eter during the childhood period from 5 to 11 years of age.. Competition among genotypes in Tribolium castancum at varying densities and gene frequencies (the black locus). Gossctt). Competition between sibling species of the Pseudoobscura of Drositphila.. W. 1 9 6 5 . W.. and G. 1924. and P. and F. and D. 1943. Evolution. 20:855-868. Drug Res. Schweiz. R. J. . 
(^tonographies du Centre d'HemotypoIogie du C... L. 2 d ed. and C. Nimesulide and antibiotics in the treatment of acute urinary tract infections.. R. Karlen. Diapause in Heliothis zea and Heliothis virescens (Lepidoptera: Noctuidae). Amer. Nat. Soc. 1967. R. L. A. J. Relation of the calcium content of some Kansas soils to soil reaction by the elcctrometric titration. 1965. R u f f i e . Variation in a local population of Pemphigus. R. H a m m o n d . 2d ed. Biometrika Tables for Statisticians. a n d H . J.. Cambridge University Press. 1981./. Vollenweider. O. 317 pp. Chicago. A. E. 1953. Morphological Integration. 811 pp. Pfandner. 64:509-524. O .. Biometry. F. 859 pp. Sokoloff. R. 1958. Optimal confidence intervals for the variance of a normal distribution. Paris. 10:142-147. Utida. I. 1921. Sokal. and R.. R. VIII. a n d N .U. Callosobruchus chinensis (1. persimilis.. 48:499 507. Z. Vertikale und zeitliche Verteilungder Leitfahigkcit in einem eutrophen Gcwiisser wiihrend der Sommerstagnation.. 1935. Res. Ann. Soc. R.. Stat. W. 54:1 22. 1963.. 49: 195-211. R. Sokal. 25:387 409. Sokal. Factorial balance in the determination of fruit shape in Cucurbita. 104 pp. 1 9 5 8 . Thomas. L. Swanson.. Frei. Philadelphia and New York.. 54:674 682. R. 1966. 99:157-187. 11..H. 1955. Sokal. Sokal. Amer.

T„ and L. Biometrika. 1964. Accuracy of sample moments calculations among widely used statistical programs. Lewis. Econ. 31:364-375. A study of striped bass in the marine district of New York State. R. E. and G. J. Bufa..L.. Β. 1981. Woodson. Dallal. The geography of flower color in butterflyweed. 63:33-37. Willison. A. D. Improved likelihood ratio tests for complete contingency tables. 1976. Evolution.. Willis. and N. E. Clinical Res. H. Annual report.352 BIBLIOGRAPHY Wilkinson. 21 pp. Amer. 1977. 50:438-440. of Environmental Conservation. Williams.. E. 18:143-163. The longevity of starved cockroaches. Jr. 1983. L. J. 31:128-131. R. 89-304.. Entomol. Anadromous Fish Act. 1957.. P. Stat. New York State Dept. M. Myocardial infarction—1983. Young. .

186. 57 Acceptance region. 162 introduction to. 212 213 normality. 119 A d d e d c o m p o n e n t d u e t o treatment effects. 157 a (parametric value of Y intercept). 199 Model I. W. (treatment effect). (43 (a/i). 118. 165 168. 160 . 349 Alternative hypothesis ( H .2 1 6 Adjusted Y values. 2 1 4 . ) . 1 5 0 . 232 A{ (random g r o u p effect). 229. a s s u m p t i o n in analysis of variance. 2 1 4 randomness. 167 168 Additive coding. 195 A posteriori c o m p a r i s o n s . 174 planned c o m p a r i s o n s a m o n g means. 149. 173 179 unplanned c o m p a r i s o n s a m o n g means. 174 Absolute expected frequencies. 4 0 Additivity. 214 216 h o m o g e n e i t y of variances.1 5 4 single-classification.Index a (number of groups). 174 a priori c o m p a r i s o n s for. 118 at. 134 a (Κ intercept). 233 α significance level. 154 156 a posteriori c o m p a r i s o n s for. 213 214 independence of errors. 147. with unequal s a m p l e sizes.1 8 1 .148 Added variance c o m p o n e n t a m o n g groups. 168 c o m p u t a t i o n a l rule for. . 148 1 5 0 . 149 estimation of. 133 158 mixed model. 1 5 7 158 partitioning of total s u m of squares and degrees of freedom. 148. 212 average sample size (n 0 ).j (interaction effect of ith g r o u p of factor A and j t h g r o u p of factor fl). 211 228 additivity. 258 Allee. 179 181 Model II. 174 A priori c o m p a r i s o n s . 118 126 Analysis of variance: a s s u m p t i o n s of. C„ 228.

A. 232 β ( p a r a m e t r i c v a l u e for r e g r e s s i o n coefficient). 209. 40 combination. 18. See M e a n A v e r a g e v a r i a n c e w i t h i n g r o u p s . 76 78 b ( r e g r e s s i o n coefficient). 276 of d i s p e r s i o n (CD). 3 0 0 . q). 233 . Table XI C.. 3 2 4 C h i . W. C.354 1ndkx A n a l y s i s of v a r i a n c e continued See also Single-classification a n a l y s i s of v a r i a n c e table. See A n a l y s i s of v a r i a n c e A n t i m o d e . M „ 221. 94 C e n t r a l t e n d e n c y . 4 0 43 a d d i t i v e . 43 s t a n d a r d e r r o r of. 169. 272 Bivariate sample. See Regression coefficient of v a r i a t i o n (I ).s q u a r e (χ 2 ). 350 C o n d i t i o n s for n o r m a l f r e q u e n c y d i s t r i b u t i o n s .. G . B. 348 regression. 69 of r a n k c o r r e l a t i o n . 161 χ2 (chi-square). 33. 7 B i v a r i a t e s c a t t e r g r a m . 54 p a r a m e t r i c (p. 6t p a r a m e t e r s of.3 0 A r i t h m e t i c p r o b a b i l i t y g r a p h p a p e r . 2 4 b i v a r i a l e n o r m a l d i s t r i b u t i o n . 40 Coefficient: c o r r e l a t i o n . 69 C T ( c o r r e c t i o n term). 296 c l u m p i n g in. 272 Blakeslee.. 178 179 B o w e n . 195 B a n t a . 349 A r c s i n e t r a n s f o r m a t i o n . 13 C o m p u t e r . q ) . 66.s q u a r e at p r o b a b i l i t y level ot. 112-114 s a m p l e statistic o f f * 2 ) . 225 228. 58. P. Γ·.s q u a r e table. Table IV. 113. 86 A r r a y . 233 β t (fixed t r e a t m e n t effect of f a c t o r Β o n j t h g r o u p ) . 261. 349 B r o w e r . 112 χ Ι Μ (critical c h i . multiple. 58 as a d e p a r t u r e f r o m P o i s s o n d i s t r i b u t i o n . P. 70 C o d i n g of d a t a .1 5 1 two-way. 2 8 ..1 0 A v e r a g e .6 0 c o n f i d e n c e limits for. Table I X . 134 g r o u p i n g of. 54 64. Table IV. 7 C h i . 349 Bar d i a g r a m . L. 255 Bernoulli. 16. 227. 333 g e n e r a l f o r m u l a for. 
See C o r r e l a t i o n coefficient of d e t e r m i n a t i o n . 3 2 4 C a l c u l a t o r . 349 . K . 269 test of. 350 C a u s a t i o n . Ε.2 3 4 Attributes. 16 A s s o c i a t i o n . a n d d e g r e e s of f r e e d o m v).3 0 1 of d i f f e r e n c e b e t w e e n s a m p l e a n d parametric variance.. 3 0 0 C h i . 39.. 130. M . 9 . c o n f i d e n c e . Ε. 228.. See also P a i r e d c o m p a r i s o n s tests. R. 25 C o m s t o c k . I h i s t o r y of. 182. 185-207.. Box 123. 6 0 B i o a s s a y . m e a s u r e of. 25 Biometry. 352 CD (coefficient of d i s p e r s i o n ) . A. 349 B o n f e r r o n i m e t h o d . 3 Biased e s t i m a t o r . 349 B r o w n . See Test of i n d e p e n d e n c e A s s u m p t i o n s in r e g r e s s i o n . 40 multiplicative.2 3 C l a s s interval. F . 264. 85 B i n o m i a l d i s t r i b u t i o n . 218 Arithmetic mean. 25 C a r t e r . 40 Comparisons: p a i r e d . Α. 19 C l a s s m a r k .1 3 0 f o r g o o d n e s s of fit. I Bioslatistics. 293. 28 Character. 102. 19 C l a s s limits. 33 A r c h i b a l d . K e n d a l l ' s (τ).3 0 1 Class(es). 257 C e n t r a l limit t h e o r e m . 181 C o m p u t e d variables. 70 Clumping: as a d e p a r t u r e f r o m b i n o m i a l d i s t r i b u t i o n . 286 290 c o m p u t a t i o n of. Α. L.s q u a r e test. t B I O M c o m p u t e r p r o g r a m s . 218 A n o v a . 312 d e g r e e of. 287 289. 1 2 9 . 66. 3 0 0 . 23 Belt. . See also T w o .w a y a n a l y s i s of v a r i a n c e A n g u l a r t r a n s f o r m a t i o n .. 11. 1 5 0 . W. 204 207. 136 B r o w n . 262 Biological statistics. 232 bY X (regression coefficient of v a r i a b l e Y o n v a r i a b l e X). 350 Bufa. 290. implied. 277 279. 349 Block. 112 Chi-square distribution. J. A. 58 . 110 C o m b i n a t i o n coding. 19 C l u m p e d d i s t r i b u t i o n . 58 60 B i n o m i a l p r o b a b i l i t y ( p . 38 B i m o d a l d i s t r i b u t i o n . M. 293. 1 8 . 6 0 r e p u l s i o n in.

338 Covariancc. 258 Derivative of a function. 287 . 118 Crossley. 7 9 . 104 for μ Box 6. 256 for correlation coefficients. 260 df (degrees of freedom). equal to F „ 172 173. 170. 2 8 1 .2. 115 C o n t a g i o u s distribution. 85 dosage-mortality. 2 4 6 .1 1 5 . 4 0 . Α.3 0 1 of a statistic.2 8 4 c o m p u t a t i o n for. 172 173 simplified formulas for.2 8 3 critical values of. See the particular statistic Density. 322 Curve. Box 8. 269. 103. c o m p a r i s o n of. 314 315 from regression (dr x).).2 8 6 rank. 190. J. 3 5 0 D e Fermat. 173 t test for.. 350 Critical region. 2 4 . 305 term (CT). 37. 258 power.1 4 Descriptive statistics...2 9 0 applications of.2. A„ 223 Crovello. 3 0 4 . Box 12.2 5 6 of regression statistics. E. 350 Degree of association.2. 85. 271 C o w a n .x (deviation from regression line). Table I I . statistical.2 8 3 Correlation coefficient(s): confidence limits for.. 2 1 4 between means and variances. 262 standard.m o m e n t . Box 8. 173 significance of. 315 316 standard error of. 281 283 confidence limils for. 104 C o n f i d e n c e limits. 1 2 3 .8 0 .2. C . 258 Correction: for continuity. 284.2 4 7 .3.1. 2 9 8 . T.2 4 Control. 281-283 of difference between t w o means. 36 normal equivalent. Box 12. G. 281-283 test of significance for. 109 based on normally distributed statistic.2.2 8 9 .. Box 12. 280 test of difference between.2 8 6 illusory. 332 p r o d u c t .. 75 D e p e n d e n t variable. 239. 36 sum of. 168 173. 352 Darwin. 2 2 7 . 2 8 4 . 8 3 Deviation(s): from the mean (y).1 6 2 Williams'.2.. 36 43. Box 8. 316 between a sample variance and a parametric variance. 146.2 6 precision of. 307 C o n t i n u o u s variables. 104 for variances. 333 of regression coefficients. 107 dy. 3 Data. 1. Box 12. 3 Deciles.2. 313.2. Table 12.1 0 6 .3.2. 1 0 . G.2 9 0 c o m p u t a t i o n of.8 0 . 292 Cumulative normal curve. 384 and regression. 2 6 8 270.2. 1 8 . 114-115 for a. 277-279 standard error of (s. Box 12. 1 6 1 . 
9 frequency distributions of. 2 8 0 . 2 3 2 D e p e n d e n t variates. area under. 169 170 confidence limils of. 27 43 Determination. Table VIII. 2 accuracy of. Jr. 1 0 3 .INDEX 355 C o n f i d e n c e interval. (v). Ε. M „ 184. 6 6 C o n t i n g e n c y tables. 32 Decker. 7 9 . 1 0 9 .2 4 1 standard. 2 8 4 . 270 2 8 0 c o m p u t a t i o n of. tesdng 317 . 1 3 . P. 38. 99 Difference: between t w o means.2 2 8 Table I X . 168 173 c o m p u t a t i o n of. 265. 2 5 3 .2 5 4 upper (L 2 ).1 2 4 Curvilinear regression. 280 2 8 4 c o m p u t a t i o n for. Table X. 207. c o m p u t a t i o n of. 2 8 6 . C. 1 6 9 . 25 Davis. 170.1 1 1 . 2 8 1 . Box 11. 1 1 4 . 276 Deviate.4 3 handling of. 1 0 . 3 5 0 Dallal.1 3 processing of. 281 283 transformation to z. 308 Correlation. contrasted. 2 6 9 Degrees of freedom ( d f ) . 173. Α. 2 7 0 significance tests in. 285 between m e a n s and ranges. Box 12. 264. 238. 2 6 7 . 232 Derived variables. 59.. 109-111 percentages (or proportions). 75 cumulative normal. Table XIV. 83 standard normal. 241 D a d d . 271 relation with paired c o m p a r i s o n s test. 262 empirically fitted. 2 8 4 formula for. 169 170 t l .1 7 0 lower (LJ. D. H. 214 nonsense. R. 170. 283. 2 4 0 .3 0 5 . 39. coefficient of. 2 5 4 . Box 12.1 3 c o d i n g of. 3 D e Moivre. Box 6.4.

106 108. 350 Fisher. T„ 44. Table V. 7 4 . 237 Explained mean square. 16. Table I I I . 138. 5 8 . Box 5.. 79 leptokurtic. 52 Expected frequencies. 75. 262 of m e a n . 57 observed ( / ) . 258 Equality of a s a m p l e variance and a parametric variance. one-tailed. L„ 44. 7 4 . Α.1. 1 4 2 . 352 French. 58. 1 4 .4 3 Distribution: bimodal.5 7 normal. 323 Distribution-free m e t h o d s . 157 treatment. 141. 33 Effects: main.). 6 6 F. 203 Discrete variables. 9 Discrepance. 129 130 Error(s): independence of. 41. Table V.1. 47.9 1 preparation of. 213 Factorial. 1 3 8 . 3 4 . 326 ( m a x i m u m variance ratio). 88-89 l .. Box 2 . 57 f i j (observed frequency in row i and c o l u m n j). 18 normal. | v . 296 bivariate normal. 38.1. 117 125 Frror rate. 33 multinomial.hrlich. given X.1 0 0 m u l t i m o d a l . 56 57 Frequency distribution. 140 F' test. 194 r a n d o m group. 153 standard. vj| ). 1 6 3 . of value of Y in regression. Box 7.9 1 platykurtic. Table IV. A. two-tailed. 38. 59.1 4 2 .1 1 4 . 57 relative expected (/„. 283 Freedom. 20 21 (random deviation of the /th individual of g r o u p 11. 61 Firschcin. 14 24 c o m p u t a t i o n of median of. 56 repulsed. 256-257 between t w o variances: c o m p u t a t i o n of. 1 1 2 .5 7 Expected m e a n squares.1 6 4 Expected value. 138 F test. 28. 326 frequency./ rcl (relative expected frequency). 299.6 0 . / . 178 Estimate: of added variance c o m p o n e n t .5 7 a b s o l u t e . experimentwise. 149. 3 5 0 D o s a g e s . 167 168 . 70 c o n t a g i o u s . 138 F\ l v i Vi) (critical value of the F distribution). 262 D o s a g e . 143 F. R„ 210. Ρ R. 9 4 . 241 Extrinsic hypothesis. I. 158.). 3. See Standard error type 1. mathematical operation.D S 0 (median effective dose). (sample statistics of F distribution).76 graphic test for normalily of. 69 statistics of. 324 clumped. See N o n p a r a m e t r i c tests D o b z h a n s k y . 66. 5 6 .356 INDEX Difference continued b e t w e e n t w o regression coefficients. 
5 6 .s h a p e d . 33. R. 66. 158. 326 s a m p l e statistics of (/·. 312 Empirically fitted curves. 141 f . degrees of. 3 0 0 88-89 / (observed frequency).6 4 . 298 -301 Frei. 213. 5 6 . 326 critical value of (F.m o r t a l i t y curves. Box 5. 139.7 1 probability. 69 mcristic. 41. 85 Poisson. 98 for Y. 319 normal. 9 Dispersion: coefficient of.142. Table V. 155 F. 32 of c o n t i n u o u s variables. 50 independence of. 1 1 6 . 212 213 m e a n square. 57 / (absolute expected frequencies). 79 P o i s s o n . „ test. 38 unbiased. cumulative and normal.1 4 3 D i s c o n t i n u o u s variable. 272 chi-square. 57 F (variance ratio). 8 8 . 103 Events. Table VI. Table V. 251 Explained s u m of squares. 68 relative. 85 of means.2 4 function.. 138 142 F. 6 4 . 57 binomial.1 2 1 type II. 330 / distribution.8 9 of standard deviation. 141.1. 85 binomial. 133. 311 . Box 5. 16. 16. 350 Frequencies: absolute expected ( / ) . 5 4 . 237 Estimators: biased. 71 Student's /. 142 testing significance of. M„ 266. 18 24.

350. Johnston, C., 229, 302. sample test. (alternative hypothesis), 3, 224, Table XIII, 209, Box 13, 183, 351. Hypothesis: alternative, 44, 351. Graph paper: normal probability, 3. Greenwood, 348. Klett, 7. Interaction, 213. Histogram, K., 286-290. computation of, 350. Kosfeld, R., 11, 350. Grouping of classes, 18-23. F., 3. Gartler, W., 312. with continuity correction, 85, 301. for single classification, 33. Frequency polygon, 19. Independence: assumption in anova, 115, E., 312. degrees of freedom for, 16, 118, 126. testing, Box 13.2, 214. Homoscedasticity, 90-91. Homogeneity of variances, 351. Harmonic mean (H), 350. H0 (null hypothesis), 299. Koo, 24. hanging, 351, 17. quantitative, 90-91. Kouskolekas, 136. Heteroscedasticity, 75. slope of, 136. Heterogeneity among sample means, 25. G (sample statistic of log likelihood ratio test), 232. Gadj (G-statistic adjusted for continuity), 298. Galton, F., 3. Gauss, K. F., 350. Geissler, A., 351. Geometric mean (GM), 31. Goodness of fit tests: by chi-square, 294, 300-301. by G test, 301-305. computation for, 302-304. for single classification, 301-305. for two classes, 297-299. general expression for, 305. introduction to, 296. Gossett, W. S., 67, 351. Graphic methods, 85-91. Graunt, J., 3. Groups: in anova, 134. number of (a), 134. variance among, 136-137. variance within, 136. G test, 297-301, 305-312. Hammond, 31. Hartley, H. O., 31. Hunter, P., 213. Hypothesis: alternative, 116, 118, 125-126. extrinsic, 300. intrinsic, 300, 301. null, 115-126. Illusory correlations, 285. Implied class limits, 11. Independence: of events, 50. test of: 2 x 2 computation, 305-312. R x C, 305-312. two-way tables in, 307-312. Independent variable, 232. Index, 14. Individual mean square, 153. Individual observations, 7, 13. Interaction, 192-197. sum of squares, 192. Intercept, 232. Interdependence, 269. Interference, 195. Intersection, 50. Intragroup mean square, 153. Intrinsic hypothesis, 300. Item, 7. Johnson, 350. Karten, I., 55, 350.
INDEX continued: Gabriel, K. R., 265, 350. Kendall's coefficient of rank correlation (τ), 286-290, 346, Table XIV. Kolmogorov-Smirnov two-sample test, 223-225, Box 10.2, 346, Table XIII. Krumwiede, C., 229, 351. Kurtosis, 85, 356.

352. Lewontin, R., correlation between, 234-235, A., 255. Main effects, 30, 173. distribution of, 41. confidence limits for, 98. Mean(s): arithmetic, 28-30, 41. comparison of: planned, 173-179. unplanned, 179-181, 251. Measurement variables, 9. Median, 32-33. standard error of, 102. Meredith, 350. Mitchell, L., 26, 350. Leptokurtic curve, 85. Laplace, P. S., 3. Latshaw, W., 351. Least squares, 33. lethal dose (LD50), 262. Level, significance, 118-121. Lewis, N., 350. Likelihood ratio test, 298. L (likelihood ratio), 298. L1 (lower confidence limit), 104. L2 (upper confidence limit), 104. Limits: confidence. See Confidence limits. implied class, 11. Linear regression. See Regression. Littlejohn, M., 350. Liu, 351. Location, statistics of, 28. Logarithmic transformation, 260. Log likelihood ratio test, 298. sample statistic of (G), 298. Mann-Whitney U-test, 220-222. computation for, 222, Box 10.1. critical values in, 339, Table XI. Mann-Whitney sample statistic (Us), 220, 339. Mean square(s) (MS), 37, 151. among groups, 136. due to linear regression (MSŶ), 251. error, 153. expected value of, 163-164. explained, 251. for deviations from regression (MSY·X), 248, 251. individual, 153. intragroup, 153. total, 153. unexplained, 251. μ (parametric mean), 38. μY (expected value for variable Y for any given value of X), 233. Meristic frequency distribution, 18. Meristic variables, 9. Midrange, 41. Miller, L., 278. Miller, R., 351. Millis, J., 24, 350. Mittler, T., 355. Mixed model two-way anova, 186, 199. Mode, 33-34. Model I anova, 148, 154-156. Model I regression: assumptions for, 233-234. with one Y per X, 235-243. with several Y's per X, 243-249. Model II anova, 148-150, 168-173. estimates of, 168-173. variance among, 169-173. Model II regression, 269-270. Mosimann, J. E., 313, 350. MS (mean square), 151. Multimodal distributions, 33. Multinomial distributions, 299, 319. Multiple comparisons tests, 181. Multiplicative coding, 53, Box 3.2.
29. n0 (average sample size in analysis of variance), 168. n (sample size), 29. geometric (GM), 31. graphic estimate of, on probability paper, 87-89. harmonic, 31. mean of means (Ȳ), 136-137. of Poisson distribution, 68-69. parametric (μ), 38. sample, 38. sum of the deviations from, 42. t test of the difference between two, 169. variance among, 136. weighted, 30. and ranges, 214. and variances, 142. confidence limits for, 109-111. deviation from (y), 98. difference between two, 94-100, 314-315. equality of two, 102.

150, 223. clumping in, 67, 226. in lieu of regression, 91. Observations, individual, 50. Probit(s), 262. in lieu of single classification anova for two unpaired groups, 220-222. Nonsense correlations, 263. ν (degrees of freedom), 107. Nelson, V., 233. Parametric value of Y intercept (α), 233. Parametric variance, 38. Park, W., 272. Parametric regression coefficient, 233. Newsom, B., 351. Newman, K., 313, 350. Nominal variable, 9. Nonparametric tests, 125-126, 220-228. Normal curve: areas of, 76-83, Table II. height of ordinate of (Z), 78. Normal deviates, 83. Normal distribution, 74-91. applications of, 78. bivariate, 272. conditions for, 76-78. derivation of, 76-78. expected frequencies for, 78-80. function, 75. parameters of, 78. properties of, 78-79. Normal equivalent deviate, 262. Normality, testing departures from, 85-87. Normality of a frequency distribution, 85-87. Normal probability density function, 75, 78. Normal probability graph paper, 86, 88, 262. Normal probability scale, 83-85. Null hypothesis (H0), 116-126. Number of groups (a), 134. Observed frequencies, 57. Olson, E., 236, 351. One-tailed F test, 140. One-tailed tests, 125-126. Ordering test, 263-264. Ordway, K., 229, 351. Paired comparisons, 204-207, 225-228. with t test, 277-279. Parameter(s), 38. of the normal probability density function, 78. Partitioning of sums of squares: in anova, 316-317. among groups, 318. Pascal, B., 3. Pascal's triangle, 54. p (binomial probability), 48. p̂ (parametric binomial probability), 60. P (probability), 48-53. Pearson, E. S., 351. Pearson, K., 3, 351. Percentages, 13-14. confidence limits of, 227, Box 10.1. drawbacks of, 14. transformation of, 262. Percentiles, 32. Petty, W., 3. Pfändner, K., 351. Phillips, J., 265, 351. Planned comparisons, 173-179. Platykurtic curve, 85. Poisson, S. D., 66. Poisson distribution, 64-71. calculation of expected frequencies, 66, 70. clumping in, 67. parameters of, 69. repulsion in, 69-71. Population, 7. Power curve, 123-124. Power of a test, 123-125. Prediction, 258. Probability (P), 47-53. Probability density function, 56, 74-78. cumulative, 322. normal, 75, 78. Probability distribution, 56. Probability graph paper, 86. Probability scale, 85-87. Probability space, 50.
Probit analysis, 262. Probit transformation, 262. Product-moment correlation coefficient (rjk), 270-280. computation of, 277-279, Box 12.1. parameter of (ρjk), 271. formula for, 272. related to regression, 205-206, 270. t test for, 207, identical to F test, 172-173, 316-317.

38. σ²A (parametric value of added variance component), 251. Model I, 269-270. Model II, Box 11.4, 270. curvilinear, 286-290, 246-247. computation of, 205-206. Randomness, assumption in anova, 212. Randomized blocks, 205. Random group effect (Ai), 149. Random numbers, 53, 321, Table I. Random sampling, 49, 212. Range, 34-35. Rank correlation, Kendall's coefficient of, 286-290, Table XIV. Ranked variable, 9. Rates, 13-14. Ratios, 13. Reciprocal transformation, 262. Region: acceptance, 118-119. critical, 118. rejection, 118-119. Regression, linear, 230-264. and correlation, 268-270. computation of, 235-243, Box 11.1. confidence limits in, 254-255. curvilinear, 246-247. equation for, 232. estimate of Y, 238. explained deviation from (Ŷ), 240-241. mean square due to, 248. Model I, 233-249. Model II, 269-270. nonparametric, 263-264. residuals, 259-263. with more than one value of Y per X, 243-249. with single value of Y per X, 235-243. significance tests in, 250-257. transformations in, 259-263. unexplained deviation from (dY·X), 240-241, 251. uses of, 257-259. Regression coefficient (b), 232. confidence limits for, 254, Box 11.3. difference between two, 256-257. of variable Y on variable X (bY·X), 232. parametric value for (β), 233. standard error of, 252-254, Box 11.3. test of significance for, 253-254. Regression line(s), 232, 237-239. Regression statistics: computation of, Box 11.1, 241-243. significance tests for, 252-254, Box 11.3. standard errors of, 252-254. Rejection region, 118. Relative expected frequencies, 56-57. Remainder sum of squares, 203. Repeated testing of the same individuals, 203-204. Repulsed distribution, 58, 66, 71. Repulsion: as departure from binomial distribution, 58. as departure from Poisson distribution, 71. Residuals in regression, 259-263. Rohlf, F. J., 351. Rootogram, hanging, 90-91. Rounding off, 12. R x C test of independence, 308-310. computation for, Box 13.3. Ruffié, J., 351. Sample, 7. bivariate, 7. mean, 38. space, 49. statistics, 38. variance (s²), 38. Sampling, random, 49. q (binomial probability), 60. Qualitative frequency distribution, 17. Quantitative frequency distribution, 17. Quartiles, 32. Quetelet, A., 3. Quintiles, 32. Purves, W., 351.
s (standard deviation), 38. s² (sample variance), 38. s²A (sample estimate of added variance component among groups), 179-181. sr (standard error for correlation coefficient), 280. sȲi (estimate of standard error of mean of ith sample), 106. s²Ŷ (mean square due to linear regression), 248, 251. s²Y·X (mean square for deviations from regression), 248, 254. SS (sum of squares), 151. SS-STP (sum of squares simultaneous test procedure), 179, 181. Si (any statistic), 150.

R., 209. of observed sample mean in regression, 102. of standard deviation, 102. statistical. Stem-and-leaf display, 22-23. Significance: of correlation coefficients, 280-284. statistical, 126-129. of regression coefficient, 253. of a sample variance from a parametric variance, 129. Significance levels, 118-121. Significance tests: in correlation, 280-284. of the difference between two means, 126-129, 169-173. of regression statistics, 252-254, Box 11.4. of a statistic, 129-130. Significant digits, 12. Significantly different, 120. Sign test, 227. Signed-ranks test, Wilcoxon's, 225-228. computation for, Box 10.3, 227. critical values for, 343, Table XII. Simple event, 49. Single-classification analysis of variance, 160-181. computational formulas for, 161-162. with equal sample sizes, 162-165. with unequal sample sizes, 165-168. for two groups, 168-173. Sinnott, E. W., 351. Skewness, 85. Slope of a function, 232. Sokal, R. R., 351. Sokoloff, A., 351. Spatially uniform distribution, 66. Square, mean, 37, 151. See also Sum of squares. Square root transformation, 218. Squares: least, 235. sum of (SS), 37, 151. Standard deviate, 83. Standard deviation (s), 36-43. computation of, 39-43, Box 3.1, Box 3.2. from frequency distribution, 42. from unordered data, 42. graphic estimate of, 87-89, Box 5.1. standard error of, 102. Standard error, 101-102. for common statistics, 102, Box 6.1. of coefficient of variation, 102. of correlation coefficient, 280. of difference between two means, 281-283, 314-315. of estimated mean in regression, 255. of estimated Ŷ, 255. of median, 102. of regression coefficient, 252-253. of regression statistics, 252-254, Box 11.3. of sample mean, 102. Standard normal deviate, 83. Standardized deviate, 83. Statistic(s), 1-2. biological, 1. descriptive, 27-43. of dispersion, 28, 34-43. of location, 28-34. population, 38. sample, 38. testing significance of, 129-130. Statistical control, 258. Statistical significance, 126-129. conventional statement of, 121, 127. Statistical tables. See Tables, statistical. Structural mathematical model, 232. Student (W. S. Gossett), 67, 351. Student's t distribution, 106-108, Table III. Sullivan, R., 209, 351. Sum of deviations from the mean, 42. Sum of products, 271. computational formula for, 272. Sum of squares (SS), 37, 151. among groups, 151-152, 318. computational rule for, 152. explained, 241, 251. interaction, 192. partitioning of, 315-318. remainder, 203. simultaneous test procedure (SS-STP), 179-181. total, 151, 318. unexplained, 241, 251. Sum of two variables, variance of, 318. Summation signs, 29. Swanson, C. O., 351. Synergism, 195. Set, 49. Seng, Y. P., 24, 351. Shortest unbiased confidence intervals for variance, 115, 331, Table VII.

151. Tate, 108, 141. added component due to, Table XIV, critical values, 332. t distribution, 106-108, Table III, 323. Table(s): contingency, 307. two-by-two, 308-310. two-way, 307-312. statistical: Chi-square distribution, Table IV, 324. Correlation coefficients, critical values, Table VIII, 332. F distribution, Table V, 326. Fmax, Table VI, 330. G, 298. Kendall's rank correlation coefficient, Table XIV, 348. Kolmogorov-Smirnov two-sample statistic, Table XIII, 346. Normal curve, areas of, Table II, 322. Percentages, confidence limits, Table IX, 333. Random digits, Table I, 321. Shortest unbiased confidence limits for the variance, Table VII, 331. t distribution, Student's, Table III, 323. Wilcoxon rank sum, Table XII, 343. z transformation of correlation coefficient r, Table X, 338. Tague, 351. Taleb, N. A., 64, 351. τ (Kendall's coefficient of rank correlation), 286. T (critical value of rank sum of Wilcoxon's signed-ranks test), 227, 343. Ts (rank sum of Wilcoxon's signed-ranks test), 227. Test(s): of association, 312. chi-square, 294, 300-312. of departures from normality, 85-91. of deviation of a statistic from its parameter, 129-130. of difference between a sample variance and a parametric variance, 129-130. of difference between two means, 126-129, 169-173, 280-284. computation for, Box 8.2, 169-170. for paired comparisons, 205-207. of difference between two variances, 142-143. computation for, Box 7.1. G, 297-301. goodness of fit. See Goodness of fit tests. of independence: R x C, 305-312. 2 x 2 computation, Box 13.2, 308-310. of hypotheses, 115-130. Kolmogorov-Smirnov two-sample, 223-225, Box 10.2. likelihood ratio, 298. log likelihood ratio, 298. Mann-Whitney U, 220-222. computation for, Box 10.1. multiple comparisons, 181. nonparametric, 125-126, 220-228. one-tailed, 125-126. ordering, 263-264. of paired comparisons, 204-207. power of, 123-125. repeated, of same individuals, 203-204. sign, 227. of significance: in correlation, 280-284. of the regression coefficient, 253-254. of a statistic, 129-130. two-tailed, 125, 129. Wilcoxon's signed-ranks, 225-228. computation for, Box 10.3. t test: for difference between two means, 169-173. computation for, Box 8.2. of correlation coefficient r, 283. ts (sample statistic of t distribution), 106-108. ts equal to Fs, 172-173, 316-317. Thomas, P. A., 209, 351. Tukey, J., 185, 351. Total mean square, 153. Total sum of squares, 150-154. computation of, 151. Transformation(s): angular, 218. arcsine, 218. logarithmic, 218, 260. probit, 262. reciprocal, 262. in regression, 259-263. square root, 218. z, of correlation coefficient, 283, 338, Table X. Treatment effect (αi), 143-150. Triangle, Pascal's, 55. Testing, 115-130. Two-by-two frequency table, 307-310. Two-by-two tests of independence, 308-310. computation for, Box 13.2.

D., 313, 221, 115. equality of, 308. Willis, J., 352. Two-way analysis of variance, 185-207. computation of, 186-197. with replication, 186-197. without replication, 199-207. significance testing for, 197-199. mixed model, 186, 199. Two-way frequency distributions, 307. Two-way frequency table, 307-310. test of independence for, 308-312. Two-tailed F test, 141-142. Two-tailed test, 125, 129. Type I error, 116-121. Type II error, 117-125. U-shaped frequency distributions, 33. U-test, Mann-Whitney, 220-222. computation for, Box 10.1. critical value of rank sum, Table XI, 339. Us (Mann-Whitney sample statistic), 220, 339. Unbiased estimators, 38, 103. Unexplained mean square, 251. Unexplained sum of squares, 241. Union, 50. Universe, 8. Unordered data, computation of Ȳ and s from, 41, Box 3.1. Unplanned comparisons, 174-179. Utida, S., 158, 352. V (coefficient of variation), 43. Value, expected, 98. Variable(s), 7-10. computed, 13-14. continuous, 9. dependent, 232. derived, 13-14. discontinuous, 9. discrete, 9. independent, 232. measurement, 9. meristic, 9. nominal, 9. ranked, 9. Variance(s), 37, 98. among groups, 136-137. among means, 136-137. analysis of. See Analysis of variance. components, added among groups, 149, 167-168. confidence limits for, by shortest unbiased confidence intervals, 114-115. equality of, 142-143. estimation of, 163. homogeneity of, 136-137. of means, 136. of sum of two variables, 318. parametric (σ²), 38. sample, 38. Variance ratio (F), 138-142. maximum (Fmax), 142-143, Table VI, 330. Variate, 9-10. Variation, coefficient of, 43. Vollenweider, R. A., 26, 352. Weber-Fechner law, 260. Weighted average, 30. Weighting factor (wi), 98. Weldon, W. F. R., 3. Weltz, S., 229, 352. White, M. J. D., 313, 352. Wilcoxon's signed-ranks test, 225-228. computation for, Box 10.3. critical values in, Table XII, 343. Wilkinson, L., 352. Williams, A. W., 229, 352. Williams, D. A., 352. Williams' correction, 304-305. Willison, J., 352. Woodson, R., 352. Wright, S., 71, 352. X² test, 130, 300. See Chi-square test. X² (sample statistic of chi-square distribution), 130, 304. y (deviation from the mean), 98. Ŷ (estimated value of Yi), 237, 240-241. Y intercept, 232. confidence limits for, 256. parametric value of (α), 233. Y values, 233. Ȳ (arithmetic mean), 28-30. Ȳ̄ (mean of means), 136. Young, B. H., 352. Yule, G. U., 70, 352. Z (height of ordinate of normal curve), 78. ζ (parametric value of z), 283. z transformation (for r), 283-284, 338, Table X. ŷ (explained deviation from regression), 240-241.
