
BRIEF CONTENTS

PART ONE: Principles of Test Construction


1. Introduction to Measurement 3
2. Test Construction 21
3. Item Writing 36
4. Item Analysis 57
5. Reliability 79
6. Validity 104
7. Norms and Test Scales 125
8. Response Set in Test Scores 140

PART TWO: Principles of Measurement


9. Measurement of Intelligence, Aptitude and Achievement 149
10. Measurement of Personality 186
11. Projective Techniques 219
12. Techniques of Observation and Data Collection 267
13. Scaling Techniques 315

PART THREE: Principles of Research Methodology


14. Sampling 365
15. Social Scientific Research 392
16. Single-Subject Experimental Research and Small N Research 433
17. Historical Research 446
18. The Problem and The Hypothesis 451
19. Reviewing the Literature 466
20. Variables 474
21. Research Design 491
22. Qualitative Research 555
23. Carrying Out Statistical Analyses 593
24. Writing a Research Report and a Research Proposal 669

Objective Questions 683


Appendices and References 711
Glossary 753
Subject Index 777

CONTENTS
PART ONE: Principles of Test Construction

1. INTRODUCTION TO MEASUREMENT 3
Measurement and Evaluation 3
The History of Psychological Measurement and Mental Testing 4
Levels of Measurement (or Measurement Scales) 9
Properties of Scales of Measurement 13
Functions of Measurement 13
Distinction between Psychological Measurement and Physical Measurement 15
Steps in Measurement Process 16
Problems Related to the Measurement Process 17
General Problems of Measurement 18
Sources of Errors in Measurement 19
Difference among Assessment, Testing and Measurement 20
Review Questions 20

2. TEST CONSTRUCTION 21
Meaning of Test in Psychology and Education 21
Classification of Tests 23
Characteristics of a Good Test 25
General Steps of Test Construction 26
Uses and Limitations of Psychological Tests and Testings 29
Ethical Issues in Psychological Testing 31
Steps in Selecting Appropriate Published Tests 33
Review Questions 35
3. ITEM WRITING 36
Meaning and Types of Items 36
Difference Between Essay-Type Tests and Objective-Type Tests 51
General Guidelines for Item Writing 54
General Methods of Scoring Objective-Test Items 55
Review Questions 56

4. ITEM ANALYSIS 57
Meaning and Purpose of Item Analysis 57
Power Tests 58
Item Difficulty 58
Optimal Difficulty Value for a Reliable Test 63
Index of Discrimination 63
Effectiveness of Distractors or Foils (Distractor Analysis) 67
A Simplified Item Analysis Form 69
Speed Tests 70
Factors Influencing the Index of Difficulty and the Index of Discrimination 72
Problems of Item Analysis 72
Important Interactions Among Item Characteristics 74
The Item Characteristic Curve (ICC) and Item Response Theory 75
Review Questions 78

5. RELIABILITY 79

History and Theory of Reliability 79

Meaning of Reliability 82
Methods (or Types) of Reliability 85
What is a Satisfactory Size for the Reliability Coefficient? 95
Standard Error of Measurement 96
Reliability of Speed Test 97
Factors Influencing Reliability of Test Scores 97
How to Improve Reliability of Test Scores? 100
Estimation of True Scores 101
Index of Reliability 102
Reliability of Difference Score 102
Reliability of Composite Score 103
Review Questions 103

6. VALIDITY 104
Meaning of Validity 104
Aspects of Validity 105
Convergent Validation and Discriminant Validation 114
Statistical Methods for Calculating Validity 117
Factors Influencing Validity 118
Concept of Cross-Validation 121
Extravalidity Concerns 122
Relation of Validity to Reliability 123
Review Questions 124

7. NORMS AND TEST SCALES 125
Meaning of Norm-Referencing and Criterion-Referencing 125
Steps in Developing Norms 127
Types of Norms and Test Scales 127
Computer Applications in Psychological Testing and Assessment 136
Criteria of Good Test Norms 138
Review Questions 139

8. RESPONSE SET IN TEST SCORES 140

Meaning of Response Sets 140


Types of Response Sets 141
Implications of Response Sets 143
Methods to Eliminate Response Sets 143
Review Questions 146

PART TWO: Principles of Measurement


9. MEASUREMENT OF INTELLIGENCE, APTITUDE AND ACHIEVEMENT 149
Different Viewpoints Towards Intelligence 149
Types of Intelligence Tests 157
Types of Intelligence Test Scores 175
Distinction Between Aptitude Test and Achievement Test 177
Types of Aptitude and Achievement Tests 178
Tests of Creativity 184
Review Questions 185

10. MEASUREMENT OF PERSONALITY 186
Meaning and Purpose of Personality Measurement 186
Tools of Personality Assessment 187
Popular Strategies Involved in Construction of Personality Inventories 188
Overcoming Distortions in Self-report Inventories 206
Situational Tests 207
Measurement of Interests, Values and Attitudes 209
Review Questions 218

11. PROJECTIVE TECHNIQUES 219
Meaning and Types of Projective Techniques 219
Classification of Projective Techniques 220
Pictorial Techniques 224
The Rorschach Test 224
Interpretation of the Rorschach Protocol 241
The Holtzman Inkblot Test 252
Thematic Apperception Test (TAT) 254
Verbal Techniques 260
Expressive Techniques 261
Evaluation of Projective Techniques 264
Review Questions 266

OBSERVATION AND DATA COLLECTION 267


12. TECHNIQUES OF
Methods of Data Collection
268
Schedule 269
Questionnaire and
Interview 275
Content Analysis 280

Observation as a Tool of Data Collection 282


and
Difference Between Participant Observation
Nonparticipant Observation 285

Rating Scale 285


Types of Rating Scales 286 293
Other SpecialTypes of Rating Scales
305
Problems in Obtaining Effective Ratings
307
Methods of Improving Effectiveness of Rating Scales
Errors in Ratings 308
Evaluation of Rating Scales 310
Review Questions 313
13. SCALING TECHNIQUES 315
Distinction Between Psychophysical and Psychological Scaling Methods 315
Psychophysical Scaling Methods 316
Weber's Law and Fechner's Law 333
Stevens' Power Law 335
Newer Psychophysical Methods 336
Psychological Scaling Methods 337
Methods of Attitude Scales or Opinionnaires 352
Review Questions 362

PART THREE: Principles of Research Methodology


14. SAMPLING 365
Meaning and Types of Sampling 365
Need for Sampling 367
Fundamentals of Sampling 368
Principles of Sampling 369
Factors Influencing Decision to Sample 371
How Large should a Sample Be? 372
Methods of Drawing Random Samples 374
Simple Random Sample 375
Stratified Random Sample 378
Area (or Cluster) Sampling 381
Quota Sampling 382
Purposive or Judgemental Sampling 383
Accidental Sampling 384
Snowball Sampling 384
Saturation Sampling and Dense Sampling 385
Double Sampling 386
Mixed Sampling 386
Requisites of a Good Sampling Method 387
Common Advantages of Sampling Methods 388
Sampling Distribution 388
Sampling Error 389
Review Questions 391

15. SOCIAL SCIENTIFIC RESEARCH 392

Meaning and Characteristics of Scientific Research 392


Scientific Approach to the Study of Behaviour 394

Validity in Research or Experimental Validity 397


Controlling Threats to Reliability and Validity in Research 401
Phases or Stages in Research 403
Types of Educational Research 406
Types of Research: Experimental and Nonexperimental 406
Difference Between Research Method and Research Methodology 424
Ethical Problems in Research 425
Comparison Between Experimental and
Nonexperimental Research 427
Types of Experiment 428
Types of Applied Research 429
Review Questions 431

16. SINGLE-SUBJECT EXPERIMENTAL RESEARCH AND SMALL N RESEARCH 433
Meaning and Origin of Single-Subject Experimental Research 433
General Procedures of Single-Subject Experimental Research 434
Basic Designs of Single-Subject Experimental Research 435
Data-Collection Strategies in Single-Subject Experimental Research 439
Evaluating Data in Single-Subject Experimental Research 440
Strengths and Weaknesses of Single-Subject Experimental Research 441
Comparison Between Single-Subject Research and Large N Research 443
Small N Design: Nature and Historical Perspectives 444
Review Questions 445

17. HISTORICAL RESEARCH 446
Meaning of Historical Research and its Necessity 446
Steps in Historical Research 447
Sources of Historical Data 447
Historical Criticism 448
Limitations of Historical Research 449
Review Questions 450

18. THE PROBLEM AND THE HYPOTHESIS 451
Meaning and Characteristics of a Research Problem 451
Sources of Stating a Research Problem 453
Important Considerations in Selecting a Research Problem 454
Ways in Which a Problem is Manifested 455
Types of Research Problems 456
Importance of Formulating a Research Problem 457
Steps in Formulating a Research Problem 457
Meaning and Characteristics of a Good Hypothesis 458
Formulating a Hypothesis 460
Ways of Stating a Hypothesis 461
Types of Hypotheses 461
Sources of Hypotheses 463
Functions of Hypotheses 464
Review Questions 465
19. REVIEWING THE LITERATURE 466
Purpose of the Review 466
Types of Literature Review 467
Sources of the Review 467
Types of Literature 469
Writing Process of the Literature Review 470
How Old should the Literature be? 471
Preparation of Index Card for Reviewing and Abstracting 471
Abstract 472
Review Questions 473
20. VARIABLES 474
Meaning and Types of Variables 474
Difference between a Variable and a Concept 481
Methods of Measuring Dependent Variables 481
Important Considerations in Selection of Variables 482
Important Approaches to Manipulating Independent Variables 483
Techniques of Controlling Extraneous Variables 484
Controlling Demand Characteristics 487
Review Questions 490

21. RESEARCH DESIGN 491
Meaning and Purpose of Research Design 492
Criteria of Research Design 494
Basic Principles of Experimental Design 495
Basic Terms used in Experimental Design 496
Some Important Types of Research Design 500
Between-subjects Design 501
Problem of Creating Equivalent Groups in Between-subjects Design 534
Within-subjects Design 535
Problem of Controlling Sequence Effects in Within-subjects Design 536
Comparison of Between-subjects Design and
Within-subjects Design 538
Experimental Design based upon the Campbell and
Stanley Classification 539
Pre-Experimental Design (Nondesigns) 540
True Experimental Design 541
Quasi-Experimental Designs 543
Ex Post Facto Design 550
Steps in Experimentation 551
Review Questions 554

22. QUALITATIVE RESEARCH 555

Meaning and Essential Features of Qualitative Research 555


A Qualitative Research Model: Five Components 556
Relevance of Qualitative Research 558
Qualitative Research: A Brief Historical Introduction 558
Themes of Qualitative Research 560
Theoretical Perspectives of Qualitative Research 562
Research Design Strategies of Qualitative Research 564
Sampling Techniques of Qualitative Research 569
Data Collection Techniques in Qualitative Research 572
Data Analysis and Interpretation 578
Comparison of Methods of Qualitative and Quantitative Data Analysis 587
Combining Qualitative and Quantitative Approaches 589
Review Questions 591
23. CARRYING OUT STATISTICAL ANALYSES 593
Sample and Population 593
Normal Curve 594
Measures of Relative Position: Standard Scores 600
Parametric and Nonparametric Statistical Tests 601
Parametric Statistics 604
Nonparametric Statistics 641
Correlation and Regression 659
Major Terms and Issues in Correlation and Regression 660
Choosing Appropriate Statistical Tests 664
Review Questions 668
24. WRITING A RESEARCH REPORT AND A RESEARCH PROPOSAL 669
General Purpose of Writing a Research Report 669
Structure or Format of a Research Report (Style Manual) 669
Style of Writing a Research Report 674
Typing the Research Report 676
Evaluating a Research Report 676
Preparing a Research Proposal 676
Review Questions 679
OBJECTIVE QUESTIONS 683
APPENDICES AND REFERENCES 711

GLOSSARY 753

SUBJECT INDEX 777

Tests, Measurements and Research Methods in Behavioural Sciences

Carrying Out Statistical Analyses
Ku = Q / (P90 - P10)        (23.6)

where Ku = kurtosis and Q = quartile deviation. The numerical value of Ku must always be between 0.00 and 0.50, although most values fall within the limits of 0.21 and 0.31. For a normal distribution, formula 23.6 gives Ku = 0.263. If Ku is less than 0.263, the distribution is leptokurtic, and if Ku is greater than 0.263, the distribution is platykurtic. Kurtosis can also be calculated by another formula (Downie & Heath 1970).

MEASURES OF RELATIVE POSITION: STANDARD SCORES

The relative position of a score means its distance from the mean, expressed in terms of deviational measures such as the standard deviation. A standard score, which is a kind of derived score, is a method of expressing the distance of the score from the mean in terms of the standard deviation. The 'standard' about a standard score is that it has a fixed mean and a fixed standard deviation. Standard scores can be classified into two most common categories.

Linear Standard Scores

The underlying purpose of transforming any original scores into standard scores is to make the scores on different tests comparable. There may be linear or nonlinear transformation of the original scores. A linear standard score is one where a linear transformation of original scores is made. When standard scores are based upon linear transformation, they retain all the characteristics of the original raw scores because they are computed by subtracting a constant (such as the mean) from each raw score and then dividing the obtained value by another constant (such as the SD). Since all characteristics of the original raw scores are duplicated in linear standard scores, any statistical computation that can be done with original raw scores can also be done with such standard scores. The most common examples of linear standard scores are the sigma scores (or z scores), Army General Classification Test (AGCT) scores, College Entrance Examination Board (CEEB) scores and Wechsler Intelligence Scale DIQs. These linearly derived standard scores can be compared among themselves only when they are obtained from distributions which have approximately similar shapes.

A sigma score is one which expresses how many standard-deviation units a particular score falls above or below the mean. To compute this, we subtract the mean from each original score and then divide the result by the standard deviation. A detailed discussion of the z score or sigma score has already been given in Chapter 7. As discussed in that chapter, a sigma score has two important limitations, namely, the occurrence of negative values and the occurrence of decimal fractions, which make a sigma score difficult to use in further statistical calculation as well as in reporting. To get rid of these difficulties, a further linear transformation of sigma scores is made. AGCT scores, CEEB scores and DIQs in the Wechsler Intelligence Scales are examples of such linear transformation. The AGCT scores employ a mean of 100 and a standard deviation of 20; CEEB scores employ a mean of 500 and a standard deviation of 100; and the Wechsler DIQs employ a mean of 100 and a standard deviation of 15 for further linearly transforming the sigma scores. To convert the original sigma score into any of the above standard scores, we simply need to multiply it by the desired SD and add the result to (or subtract it from) the desired mean value.

Normalized Standard Scores

Sometimes researchers may wish to compare the scores obtained from distributions having dissimilar shapes. In such situations, they employ some nonlinear transformations so that the scores may conveniently be converted into any specified type of distribution, preferably the normal distribution. The reason for choosing a normal distribution is twofold: first, most of the characteristics encountered in behavioural researches are normally distributed among the population, and second, a normal distribution facilitates further statistical computation. Thus, normalized standard scores may be defined as those standard scores which have been expressed in the form of a distribution that has been transformed in a way that fits a normal curve. Like linear standard scores, normalized standard scores can be expressed in terms of a mean of zero and a standard deviation of 1. If a person has obtained zero as a normalized standard score, it indicates that the performance of that person lies exactly at the mean and hence, he excels over 50% of the persons in his group. Likewise, if a person's performance is at +2 SD units above the mean on the normal curve, it means he surpasses about 98% of the persons in his group, and if his performance falls at -2 SD units below the mean, it means he surpasses only about 2% of the persons in his group. The common examples of normalized standard scores are the T-score and the stanine score, which have already been discussed in detail in Chapter 7. The reader should note carefully that if the distribution of original scores is a normal one, the linear standard scores and the normalized standard scores would yield more or less identical results. A normalized standard score is preferred only if the situation meets the following requirements:

1. The sample is large.
2. The sample is representative.
3. The non-normality of the distribution of original scores is not due to the behaviour or trait under consideration; rather, it is due to some defects in the test material itself.

PARAMETRIC AND NONPARAMETRIC STATISTICAL TESTS

Parametric and nonparametric statistical tests are commonly employed in behavioural researches. A parametric statistical test is one which specifies certain conditions about the parameters of the population from which the sample is taken. Such statistical tests are considered to be more powerful than nonparametric statistical tests and should be used if their basic requirements or assumptions are met. These assumptions are based upon the nature of the population distribution as well as upon the type of measurement scales used in quantifying the data. The assumptions may be enumerated as follows:

1. The observations must be independent. In other words, the selection of one case must not be dependent upon the selection of any other case.
2. The observations must be drawn from a normally distributed population.
3. The samples drawn from a population must have equal variances, and this condition is more important if the size of the sample is particularly small. When the different samples taken from the same population have equal or nearly equal variances, this condition is known as homogeneity of variance. Statistically speaking, by homogeneity of variance is meant that there should not be a significant difference among the variances of different samples.
4. The variables must be expressed in interval or ratio scales. Nominal measures (that is, frequency counts) and ordinal measures (that is, rankings) do not qualify for a parametric statistical test.
5. The variable under study should be continuous.

Examples of parametric tests are the z test, t test and F test.

A nonparametric statistical test is one which does not specify any conditions about the parameters of the population from which the sample is drawn. Since these statistical tests do not make any specific assumption about the form of the distribution of the population, they are also known as distribution-free statistics.
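The linear transformations described above (sigma or z scores, and the AGCT, CEEB and Wechsler DIQ scales built from them) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book: the function names and the sample data are ours; only the scale means and SDs (100/20 for AGCT, 500/100 for CEEB, 100/15 for DIQ) are taken from the text.

```python
# Sketch of linear standard-score transformations: z = (X - M) / SD,
# then new score = scale_mean + z * scale_sd.
from statistics import mean, pstdev

def sigma_scores(raw):
    """Sigma (z) scores of a set of raw scores, using the population SD."""
    m, sd = mean(raw), pstdev(raw)
    return [(x - m) / sd for x in raw]

def to_scale(z, scale_mean, scale_sd):
    """Linearly transform a sigma score to a scale with a fixed mean and SD."""
    return scale_mean + z * scale_sd

raw = [40, 50, 60, 70, 80]              # hypothetical raw scores
zs = sigma_scores(raw)
agct = [to_scale(z, 100, 20) for z in zs]    # AGCT: mean 100, SD 20
ceeb = [to_scale(z, 500, 100) for z in zs]   # CEEB: mean 500, SD 100
diq = [to_scale(z, 100, 15) for z in zs]     # Wechsler DIQ: mean 100, SD 15
```

Because the transformation is linear, the transformed scores keep the shape of the raw distribution; only the mean and SD change, which is exactly why such scores from differently shaped distributions are not directly comparable.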
The nonparametric statistics do not specify any conditions about the parameters of the population in the way parametric statistical tests do, although certain assumptions are associated with them. The chi-square test, the Mann-Whitney U test, Kendall's tau and Kendall's coefficient of concordance are common examples of nonparametric statistical tests. A nonparametric statistical test should be used only in the following cases:

1. The variables under study are measured on a nominal or ordinal level, not on an interval or ratio level.
2. The shape of the distribution of the population from which the sample is drawn is not known to be a normal one.
3. The variables have been quantified on the basis of nominal measures (frequency counts) or ordinal measures (rankings) rather than the measured values themselves.

Students should keep in mind the fundamental differences between parametric and nonparametric tests. In a parametric test the measure of central tendency is the mean, whereas in a nonparametric test it is the median. In a parametric test the test statistic is based upon the distribution of the data, whereas in a nonparametric test the test statistic is arbitrary. Parametric tests are applicable to variables only, whereas nonparametric tests are applicable to both variables and attributes. If the data are such that they meet all the assumptions of nonparametric statistics but not those of parametric statistics, nonparametric statistics have statistical efficiency equal to parametric statistics; as the sample size increases, however, the distribution-free statistics become less and less efficient than parametric statistics.

Table 23.1 presents a summary of the levels of quantitative description and the statistical analysis appropriate for each level of measurement.

Table 23.1: Levels of quantitative description and the statistics appropriate for each level of measurement

Scale      Process                          Important appropriate statistics                         Class of test
Nominal    Classified and counted           Mode, chi-square test and sign test                      Nonparametric
Ordinal    Ranked in order                  Median, centile, stanine, Spearman's rho,                Nonparametric
                                            Kendall's tau, Mann-Whitney U test,
                                            Wilcoxon signed-rank test,
                                            Kendall's partial rank-order correlation
Interval   Equal intervals, no true zero    Mean, standard deviation, Pearson r, t test, ANOVA       Parametric
Ratio      Equal intervals, true zero       ANCOVA, factor analysis                                  Parametric

Bradley (1968) has enumerated several advantages and disadvantages of nonparametric statistical tests in comparison to parametric statistical tests. Some of his important advantages are as follows:

1. Simplicity of derivation: Most of the nonparametric statistical tests can be derived by using simple computational formulas (Whitney 1948), whereas the derivation of most parametric statistics requires an advanced knowledge of mathematics. In simplicity of derivation, therefore, nonparametric statistical tests are superior.
2. Wider scope of application: Since nonparametric statistics are based upon fewer and less elaborate assumptions regarding the form of the population distribution, they can be easily applied to much wider situations as compared to parametric statistics.
3. Speed of application: When the sample size is small, the calculation of nonparametric statistics is faster than that of parametric statistics.
4. Susceptibility to violation of assumptions: The assumptions of nonparametric statistics are fewer and less susceptible to violation than in the case of parametric statistics. Not only this, these violations are easier to check and can be taken care of readily and economically.
5. Type of measurement required: Nonparametric statistics require measurement based upon a nominal or ordinal scale, whereas parametric statistics require measurement based upon an interval or ratio scale, and treatments associated with a nominal or ordinal scale are easier than treatments associated with either an interval or a ratio scale. Nonparametric tests can even be applied to dichotomous data (such as pass-fail, male-female) as well as to ordinal data (Kahn 2006).
6. Impact of sample size: When the sample size is 10 or less, nonparametric statistics are easier, quicker and more efficient than parametric statistics; if the assumptions of parametric statistics are violated for such small cases, the result is likely to be badly affected. Students should keep in mind, however, that nonparametric statistics are not superior for every sample size: as the sample size increases, they become time-consuming, labour-intensive and less efficient than parametric statistics.

Parametric assumptions are often ignored by researchers, and there are evidences (Edwards 1968) that violation of the assumptions does not affect the power of parametric statistical tests like the t test and F test if the sample is large. This robustness has been demonstrated especially for the t test, ANOVA and ANCOVA, which are considered appropriate even when the assumption of normality has been violated; that is why a nonparametric statistical test is used only when the parametric assumptions cannot be met. Some statisticians (Mandeville 1972; Sanders 1972), however, argue that nonparametric tests have their own merits because their validity does not depend upon assumptions about the population distribution. At the same time, because nonparametric statistical tests are based upon frequency counts or rankings rather than the measured values themselves, they are less precise and are less likely to reject a null hypothesis when it is false.
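The mapping from level of measurement to appropriate statistics summarized in Table 23.1 can be represented as a small lookup table. The sketch below is ours (the dict layout and function name are not from the book); the entries follow the table.

```python
# Table 23.1 as a lookup: scale of measurement -> (class of test, statistics).
TABLE_23_1 = {
    "nominal": ("nonparametric", ["mode", "chi-square test", "sign test"]),
    "ordinal": ("nonparametric", ["median", "centile", "stanine",
                                  "Spearman's rho", "Kendall's tau",
                                  "Mann-Whitney U test",
                                  "Wilcoxon signed-rank test"]),
    "interval": ("parametric", ["mean", "standard deviation", "Pearson r",
                                "t test", "ANOVA"]),
    "ratio": ("parametric", ["ANCOVA", "factor analysis"]),
}

def appropriate_statistics(scale):
    """Return (class of test, example statistics) for a measurement scale."""
    kind, stats = TABLE_23_1[scale.lower()]
    return kind, stats
```

A lookup like this makes the chapter's rule of thumb mechanical: nominal and ordinal data point to nonparametric procedures, interval and ratio data to parametric ones.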

disadvantages
of paramet he sample size increases, the critical values
and
thatas oft necessary for rejecting the null
normal probability table. There are hypothesis
towards
lv
Measurements

hint
are as given belo idually reduce
and
O04
ests,
in general, main ones approach the zvalues of the
basicC
advantages
(which,
with
them.
The
have
lower
statistical.

efficiency grations of t test. First, the variances of the two


assumpt
samples are nondifferent, that is, there sts
some

the above
associated

Despite
disadvantages
are
nonparametric
statistics
above
30. homogeneity in variances the sa
variances of
samples. Second, the sample has been randomly selected.exi
statistics),
certain
(1952),
the preterably
helps to ensure that the sample is 1ni
1. According
to
Moses

when sample
size is large,
tulfilled,
Siegel (1956)
and
Siegel
data6&
Colzations can be done trom sample to representative of the population so that
accurd
statistics statistics
are
simply
"wasteful of population. Third, the population distribution o
SCores is ormal. In tact, this requirement stems from a
parametric as
tnan parametric statistics

assumptions of nonparametric signiticance of nonnar unique property of normal distribution.


2.fall the use of
the
scio ic it is stated that the observations are
for testing
ntist, is independent and random, the sample means
variances can be independent only when the population is normally distributed. t ratioand
consider behavioural
(1988) tables for a
Castellan
probability which,
the
also said that publications
s t of significance of means demand that the means and variances must be independent as or
3. It is scattered in
different

the foll
Statistics
are widely
locate andinterpret
statistics
are available,
\lowing hother. They can't vary together in some systematic way-that is, one (mean)
each
difficult to and
nonparametric

another (variance) also becoming larger. If these tend to becomig


larger with
statistics two:
of the
both parametric selecting any
one
should be used
becatc. vary in a systematic way, tne
Where

should be kept in
view for
parametric
statistics
the ord
they derlying population cannot be normal and test of significance based on the assumption
or
guidelines
situation permits, rather than
account
simply order d amality would be treated as invalid. It is for this reason the assumption of normality lies berhina
and the into
(i) If possible
cases
between
of t test.
take the amount of difference the use
will prove benef eficial
that parametric
statistics
Student's t test is referred to as a robust test, which means that statistical inferences are
to be valid even when there are large departures from normality in population. The ttest is likely
the cases. ensures
clearly likely
limit theorem to
i) The central distributions.
same c a n be transformed .
robust to the violations of normality when large samples (N> 30) are used. Therefore, in
for non-normal distribution,
the to be
even
non-normal
from 2014) of any serious doubts Concerning the normality to population distribution, it is wise to
available (Hollander et al. case
(i) If the data are
be used
increase N in each sample (Elitson 1990). Before discussing the small sample t test, five important
can
statistical tests
that parametric tests can be safely used.
30), parametric concepts, namely, degree of treedom, null hypothesis, level of significance, one-tailed test vs
(iv) For larger samples (N > alternative to using a nonparametric statistie

small, there is no two-tailed test and power of test should be considered.


(v) If the sample size is very and nonparametric
statistics commonlu

shall discuss some important parametric Degree of freedom


Now, we

used in behavioural researches. The degree of freedom means freedom to vary. It is abbreviated as df. In statistical language, it can
be said that the degree of freedom is the number of observations that are independent of each
PARAMETRIC STATISTICS
other and that cannot be deduced from each other. Suppose we have five scores and the mean ot
statistics are
The most important parametric five scores is 10. The firth score immediately makes adjustment with the remaining four scores in
a way which assures that the mean of all five scores must be 10. For example, suppose we have
1. Student's ttest and z test
Fratio four scores 12, 18, 5, 12, and then the fifth score must be 3 so that the mean becomes 10. In
another distribution. If the four scores are 2, 10, 8, 5, the fifth score must be 25 in order to have a mean of 10. The meaning is that four scores in the distribution are independent, that is, they may have any value and cannot be deduced from each other. The size of the fifth score, however, is fixed because the mean in each case is 10. Hence df = N - 1 = 5 - 1 = 4. Take an example with a larger number of cases. Suppose we have a set of 101 scores. We compute the mean, and in computing the mean we lose 1 df. We had initially 101 df (because there were 101 scores) but now, after computing the mean, we have N - 1 = 101 - 1 = 100 degrees of freedom. Sometimes we have paired data, and in such cases the number of degrees of freedom is equal to one less than the number of pairs.

The important parametric statistical tests are:

1. Student's t test and z test
2. Analysis of variance
3. Analysis of covariance
4. Pearson r correlation
5. Partial correlation and multiple correlation

A detailed discussion of each of them is presented below.

1. Student's t Test and z Test

When the researcher wants to test the significance of the difference between two means, he uses either the t test (or t ratio) or the z test (or z ratio). The computation of t or z involves the computation of a ratio between the experimental variance (that is, the obtained difference between two means) and the error variance (that is, the standard error of the mean difference). However, there are two basic differences between the t ratio and the z ratio. When the sample size is less than 30, we use the t test or Student's t for testing the significance of the difference between two means. This small-sample test was developed in 1908 by William Sealy Gosset, a statistician for Guinness Breweries in Dublin, Ireland. Because the company's service code prohibited publication under a researcher's name, he signed the name 'Student' for publication of this test. Hence, this statistic is known as Student's t. When the sample size is more than 30, the ratio of the difference between two sample means to the standard error of this difference is calculated by the z ratio, which is interpreted through the use of normal probability tables. Another difference is that the z test uses the actual population standard deviation, whereas the t test uses the sample standard deviation as an estimate when the actual population standard deviation is not known.

Null hypothesis

The starting point in all statistical tests is the statement of the null hypothesis (H0), which is a no-difference hypothesis. In other words, a null hypothesis states that there is no significant difference between the samples under study. It makes the judgement about whether the obtained differences between samples are due to some true differences or to some chance errors. The null hypothesis is formulated for the express purpose of being rejected, because if it is rejected, the alternative hypothesis (H1), which is an operational statement of the investigator's research hypothesis, is accepted. As we know, a research hypothesis is nothing but predictions or deductions drawn from a theory. The tests of the null hypothesis are generally called tests of significance, the outcome of which is stated in terms of probability figures or levels of significance.
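As a numerical illustration of such a test of significance, here is a minimal sketch of a z ratio for the difference between two means, in the spirit of the description above. All numbers are made up; the sample sizes exceed 30 and the population standard deviations are treated as known, as the text requires for a z test:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data (made-up values for illustration only)
m1, m2 = 52.0, 48.0          # the two sample means
sd1, sd2 = 10.0, 9.0         # known population standard deviations
n1, n2 = 60, 55              # both samples larger than 30

# Standard error of the difference between the two means
se_diff = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

# z ratio: obtained mean difference over its standard error
z = (m1 - m2) / se_diff

# Two-tailed probability of a difference this large if H0 (no difference) is true
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(z, 2), round(p, 3))
```

Since the obtained p is below 0.05 here, the null hypothesis of no difference would be rejected at the 5% level for these invented figures.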
The reader should note that if the difference between the experimental group and the control group is in fact very small, the experimenter is likely to accept the null hypothesis, indicating that the small difference between the two groups is due to sampling errors or some other chance fluctuation. On the other hand, if the difference between these two groups is large, the experimenter is likely to refute or reject the null hypothesis, indicating the fact that the obtained differences between or among the samples under study are real differences.

Level of significance

As stated above, the null hypothesis is developed for the express purpose of rejection. The rejection or acceptance of the null hypothesis is based upon the level of significance, which is used as a criterion. The levels of significance are also known as alpha levels. In psychological, sociological and educational researches there are two levels of significance which are commonly used for testing the null hypotheses. One is the 0.05 or 5% level and another is the 0.01 or 1% level. If the null hypothesis is rejected at the 0.05 level, it means that in 100 replications of the experiment, 5 times the null hypothesis would be true and 95 times it would be false. In other words, this suggests that a 95% probability exists that the obtained results are due to the experimental treatment rather than due to some chance factors. Rejection of the null hypothesis when, in fact, it is true constitutes a Type I error or alpha error. Thus it can be said that at the 0.05 level of significance, the experimenter commits a 5% Type I error when he rejects the null hypothesis. Some investigators may want a more stringent test and use the 0.01 level, where the investigator commits a Type I error of 1% only. The 0.01 level suggests that a 99% probability exists that the obtained results are due to the experimental treatment, and that only once in 100 replications of the experiment would the null hypothesis be true. Sometimes the investigator wants an even more stringent test of significance and for this, he chooses the 0.001 level, which is uncommonly used in behavioural researches. This level suggests that only once in 1000 replications of the experiment would the null hypothesis be a true one, and in 999 replications (out of 1000) the obtained results can be attributed to the experimental treatment. In testing the significance of the obtained statistics, sometimes the investigator accepts the null hypothesis when, in fact, it is false. This error is technically known as the Type II error or beta error.

It is obvious from the above interpretation that an error of Type I can be reduced by putting the alpha level at the 0.01 or 0.001 level. But as we reduce the chance of making a Type I error, we increase the chance of making a Type II error, where we do not reject the null hypothesis when it should be rejected. Therefore, as we decrease the possibility of making one type of error, we also increase the probability of making the other type of error. Research workers must be cautious in this situation and should try, as far as possible, to limit the probability of making a Type I error.

What is the relationship between Type I and Type II errors? Researchers are of the view that when it comes to setting significance levels, protecting against one kind of error leads to the chance of making the other. The common insurance policy against Type I error (that is, lowering the alpha level from 0.05 to 0.01 or 0.001) has the cost of increasing the probability of committing a Type II error. This happens because with a stringent significance level such as 0.01 or 0.001, even if the research hypothesis (H1) is true, the statistical results must be quite extreme to be strong enough to reject the null hypothesis. The safeguard or insurance policy against Type II error (setting an alpha level of, say, 0.30) has the cost of increasing the chance of making a Type I error. This is because with a lenient significance level such as 0.30 it is easy to get a significant statistical result even if the null hypothesis is true. The trade-off between these two conflicting concerns is usually put to rest by a compromise: formulating significance levels of which 0.05 (5%) or 0.01 (1%) is the standard. The whole issue of possibly correct or mistaken conclusions in hypothesis testing has been presented in Table 23.2.

Table 23.2 Statistical decision making: Possible correct and incorrect decisions

                        Null hypothesis (H0) True    Null hypothesis (H0) False
Fail to reject H0       Correct decision             Type II error
Reject H0               Type I error                 Correct decision

Along the top of Table 23.2, there are two possibilities: H0 is true and H0 is false. Likewise, along the side of the table, there are also two possibilities: fail to reject H0 and reject H0. Table 23.2 clearly shows that there are two ways to be correct and two ways to be in error in any hypothesis-testing or decision-making situation.

One-tailed test vs two-tailed test

A one-tailed test is a directional test, which indicates the direction of the difference between the samples under study. Suppose the experimenter conducts an experiment in which he takes two groups: one is the control group and the other is the experimental group. Only the experimental group is given training for five days on various kinds of arithmetical operations. Subsequently, an arithmetical ability test is administered to the two groups and the scores are obtained. In the above situation, the experimenter has reason to say that the mean arithmetical score of the experimental group will be higher than the mean arithmetical score of the control group. This is the alternative hypothesis, which indicates the direction of difference. When the alternative hypothesis states the direction of difference, it constitutes a one-tailed test. The null hypothesis would be that the mean of the experimental group is equal (no difference) to the mean of the control group. If this is rejected, we accept the above alternative hypothesis.

Putting the above facts schematically, we can say:

1. H0: M1 = M2 (no difference between M1 and M2)

Alternative hypotheses:

2. H1: M1 ≠ M2
3. H1: M1 < M2
4. H1: M1 > M2

where M1 = mean of the experimental group; and M2 = mean of the control group.

When it is said that the mean of the experimental group will be higher than the mean of the control group, we are concerned with only one end of the distribution. Putting it in terms of a normal curve, we are concerned with only one end of the curve (see Figure 23.5). When the alpha level is set at the 0.05 level, we have the 5% area of the normal curve all in one tail rather than having it distributed equally into the two tails of the curve. Therefore, the directional hypothesis is called a one-tailed test. A simple inspection of the table of areas of the normal curve given at the end of this chapter reveals that a z score of 1.64 cuts off 5% of the area under the normal curve in the smaller part, and similarly a z score of 2.33 cuts off 1% of the area in the smaller part.

If the null hypothesis is rejected, that is, if hypothesis 1 is not tenable, we automatically accept the alternative hypothesis. If the experimenter has some reason to believe that the experimental group would have a lower mean score than the control group, he can set up a directional hypothesis that the mean of the experimental group is lower than the mean of the control group (one-tailed test). Rejection of the above null hypothesis would then automatically lead to acceptance of this alternative hypothesis. This time the area of the normal curve in which the experimenter is interested is the 5% or 1% area in the left-hand tail of the curve. When the null hypothesis is rejected by using a one-tailed test, we say that we are rejecting the hypothesis at the 1% or 5% points, not levels.
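The one-tailed cut-off values quoted above (z = 1.64 for the 5% point, z = 2.33 for the 1% point) can be verified directly from the standard normal distribution; a small sketch using Python's standard library:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal curve: mean 0, sd 1

def upper_tail_area(z):
    """Area of the normal curve cut off beyond a given z score (the smaller part)."""
    return 1 - std_normal.cdf(z)

# Areas beyond the tabled z scores: close to 0.05 and 0.01 respectively
print(round(upper_tail_area(1.64), 3))
print(round(upper_tail_area(2.33), 3))

# Going the other way: the z scores that cut off exactly 5% and 1% in one tail
print(round(std_normal.inv_cdf(0.95), 2))   # 1.64
print(round(std_normal.inv_cdf(0.99), 2))   # 2.33
```

The same `inv_cdf` call with 0.975 gives the familiar two-tailed 5% value of 1.96, which appears in the discussion of two-tailed tests that follows.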
Fig. 23.5 One-tailed test at 0.05 or 5% point (critical region beyond z = +1.64)

A two-tailed test is one in which the investigator is interested in evaluating whether a treatment effect exists between the groups; the direction of the difference is of no importance here. The null hypothesis in such testing will be that the mean of the experimental group is equal to the mean of the control group, or that there is no difference between the means of the experimental group and the control group. The alternative hypothesis would be that the mean of the experimental group is not equal to the mean of the control group. Thus, we show our concern with both tails of the distribution, with the 5% of the area under the normal curve equally divided between the two tails (see Fig. 23.6). From the table of areas of the normal curve it can easily be read that a z score of +1.96 cuts off 2.5% of the area at the upper extreme. Since a normal curve is bilaterally symmetrical, a z score of -1.96 would also cut off 2.5% of the area in the left-hand tail. In a two-tailed test, a negative z score is interpreted in the same way as a positive z score. When the null hypothesis is rejected by using a two-tailed test, it is said that we have rejected it at the 5% or 1% levels and not at the 5% or 1% points.

Fig. 23.6 Two-tailed test at 0.05 or 5% level (2.5% of the area beyond z = -1.96 and 2.5% beyond z = +1.96)

There are some problems with the one-tailed test and a good researcher must take those problems into account. One important problem with one-tailed tests is that they allow the researcher to reject the null hypothesis (H0) even when the difference between the sample and the population is relatively small. In other words, one-tailed tests make it too easy to make a Type I error (rejecting a true null hypothesis). Therefore, most researchers consider one-tailed tests improper. That is the reason why two-tailed tests are more acceptable and generally preferred.

Another problem with one-tailed tests arises from the fact that they look for a treatment effect in one direction only. Let us illustrate it with an example. Suppose the researcher wants to examine the effect of background music on the productivity of factory workers. The productivity is measured in terms of mean performance over a period of 30 days. Further, suppose that the researcher develops the one-tailed test stating that background music will tend to enhance the performance of the factory workers. In this case, since the one-tailed test has no critical region on the left-hand side of the mean (in the normal probability curve), it would not detect a significant decrease in productivity. If the researcher had developed a two-tailed test, it would have been sensitive to a change in either direction. For this reason, most researchers don't like to use the one-tailed test.

Power of test

As we have just considered, there is an inverse relation between the likelihoods of committing Type I and Type II errors. In other words, a decrease in Type I error will increase Type II error for a given sample of N elements. If the researchers wish to reduce both Type I and Type II errors, they must increase N. Different types of statistical tests differ in their capability of obtaining such a balance between these two types of errors, and for evaluating this capability the concept of the power of a statistical test is very important. The power of a statistical test is defined as the probability of rejecting the null hypothesis when, in fact, it is false. In terms of an equation, it can be stated thus:

Power = 1 - probability of Type II error

It is clear from the above interpretation that the power of a statistical test generally increases with an increase in the size of the sample, N. The concept of Type II error is closely related to the power of a test. A hypothesis test evaluating whether a treatment effect exists can reach either of two conclusions:

(a) It can fail to identify the treatment effect (a Type II error).
(b) It can correctly identify the treatment effect (rejecting a false null hypothesis).

If, for example, a hypothesis test has a 20% chance of failing to identify the treatment effect, then it must have an 80% chance of correctly identifying it. That is the reason why the power of a statistical test is given by 1 minus the probability of a Type II error.

Researchers have identified three factors that tend to affect the power of a statistical test: the alpha level, the choice of a one-tailed or two-tailed test, and the size of the sample. A discussion follows.

(a) The alpha level: The alpha level is the level of significance. When the alpha level is reduced, the risk of a Type I error (rejecting a true H0) is also reduced by making it more difficult to reject H0. Since a small alpha level generally reduces the probability of rejecting the null hypothesis, it also reduces the statistical power of the test.

(b) One-tailed vs. two-tailed tests: As we know, one-tailed tests make it easier to reject the null hypothesis. Since one-tailed tests increase the probability of rejecting H0, they also tend to increase the power of the test.

(c) Sample size: The power of a statistical test is also affected by the size of the sample. As we know, when the size of the sample is larger, it tends to represent the population in a better way. If there is a real treatment effect in the population, a larger sample will be more likely to locate it than a smaller sample. Therefore, the power of the test is enhanced by increasing the size of the sample.

Thus we find that the power of a test is influenced by various factors.

Now, we are in a position to proceed ahead to show the calculation of the t ratio for different kinds of groups. Ordinarily, three types of situations arise while one is calculating the t ratio:

1. t ratio from independent groups
2. t ratio from correlated groups
3. t ratio from matched groups

Regardless of the nature of the group, the t ratio is calculated by the following equation:

t = ((M1 - M2) - 0) / SE_D        (23.7)

where, M1 = mean of the first group;
M2 = mean of the second group;
SE_D = standard error of the difference between two sample means.

In each of the above three cases, the formula for SE_D varies and hence, we shall illustrate the calculation of the t ratio for each of the above types of groups separately.
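The three factors above can be illustrated numerically with the standard power formula for a z test of a mean shift (power = probability that the test statistic exceeds the critical value when a real effect of a given size exists). The effect size, SD and sample sizes below are made-up values, chosen only to show the direction of each factor's influence:

```python
from math import sqrt
from statistics import NormalDist

std_normal = NormalDist()

def power_z(effect, sd, n, alpha=0.05, one_tailed=True):
    """Power of a z test to detect a true mean shift of `effect` (hypothetical inputs).

    For the two-tailed case the negligible lower-tail rejection probability is ignored.
    """
    tail_p = alpha if one_tailed else alpha / 2
    z_crit = std_normal.inv_cdf(1 - tail_p)          # critical z for the chosen tail(s)
    se = sd / sqrt(n)                                # standard error of the mean
    return 1 - std_normal.cdf(z_crit - effect / se)  # P(reject H0 | effect is real)

base = power_z(effect=2.0, sd=10.0, n=25)
# (a) a smaller (more stringent) alpha reduces power
assert power_z(2.0, 10.0, 25, alpha=0.01) < base
# (b) a two-tailed test has less power than a one-tailed test at the same alpha
assert power_z(2.0, 10.0, 25, one_tailed=False) < base
# (c) a larger sample increases power
assert power_z(2.0, 10.0, 100) > base
print("all three effects confirmed")
```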
t ratio from independent groups: Two groups are said to be independent when no correlation exists between them. Suppose one group of boys (N = 20) and one group of girls (N = 22) were administered a mechanical reasoning test. Their data were as summarized below:

M1 = 34.56, M2 = 30.56
N1 = 20, N2 = 22
SD1 = 5.68, SD2 = 6.98

Do the two groups differ on the measure of the mechanical reasoning test? The above question can be answered by computing the t ratio. For independent groups, SE_D may be calculated with the help of the following equation:

SE_D = √(SE_M1² + SE_M2²)        (23.8)

where, SE_D = standard error of the difference between means;
SE_M1 = standard error of the first mean; and
SE_M2 = standard error of the second mean.

Here the standard error of a mean can be calculated as given below:

SE_M = SD/√(N - 1)        (23.9)

Substituting the values in Equation 23.9, we have

SE_M1 = 5.68/√19 = 1.302; SE_M2 = 6.98/√21 = 1.523

Now, SE_D becomes equal to:

SE_D = √((1.302)² + (1.523)²) = 2.004

t = ((M1 - M2) - 0)/SE_D = ((34.56 - 30.56) - 0)/2.004 = 4.00/2.004 = 1.996

df = (N1 - 1) + (N2 - 1) = (20 - 1) + (22 - 1) = 40

Entering the probability table of t ratios, we find that the obtained t at df = 40 is significant at the 0.05 level but not at the 0.01 level. Hence, we conclude that the null hypothesis is rejected and that there is a true difference between the means of the groups of boys and girls.

t ratio from correlated groups: Correlated groups are those which exhibit some correlation with each other on the given measures. One of the most convenient ways of getting correlated means is to repeat the same test on the same group twice on two different occasions. In between the initial and the final administration of the test, some experimental treatment is given to the group. This is known as the repeated-measures t test. Because the group and the test are the same, it is highly probable that there will be a correlation between the initial measures and the final measures. Suppose a group of 20 students of Class IV is administered an English spelling test in January. After a year's training in spelling, they are again given the same test. This time their mean is considerably raised and the standard deviation is lowered. The correlation coefficient between the initial and final sets of scores was positive and significant. The obtained means and standard deviations are given below. Does training produce a significant difference between the initial mean and the final mean?

                               Initial test    Final test
N                              20              20
Means                          36.28           40.33
SDs                            4.28            2.32
Coefficient of correlation            0.80

In testing the significance of the mean difference between two correlated means, SE_D is calculated with the help of the following formula:

SE_D = √(SE_M1² + SE_M2² - 2 r12 SE_M1 SE_M2)        (23.10)

where r12 = coefficient of correlation between the initial set of scores and the final set of scores. The rest of the subscripts are defined like those in Equation 23.8. The standard error of each mean (SE_M) is calculated by Equation 23.9. Thus,

SE_M1 = 4.28/√19 = 0.982; SE_M2 = 2.32/√19 = 0.532

SE_D = √((0.982)² + (0.532)² - 2(0.80)(0.982)(0.532))
     = √(0.9643 + 0.2830 - 0.8358) = 0.641

t = ((M1 - M2) - 0)/SE_D = ((36.28 - 40.33) - 0)/0.641 = -4.05/0.641 = -6.32

df = N - 1 = 20 - 1 = 19

Entering the probability table of t ratios at df = 19, we find that our obtained value of t exceeds the value of t at even the 0.001 level of significance. Hence, the null hypothesis is rejected and it is concluded that the training has produced a significant difference between the mean of the initial set of scores and the mean of the final set of scores.

t ratio from matched groups: Sometimes it becomes necessary for the researchers to match the groups under study. Matching can be done on the basis of numbers or it can be done in terms of mean and standard deviation. When the matching is done on the basis of the number of subjects, each person has his corresponding match in the other group and therefore, the number of persons in the two matched groups is always equal. When matching is done in terms of mean and standard deviation, the number in the two groups may or may not be equal.

Suppose we take two groups from two different classes, matched in terms of mean and standard deviation on the basis of scores on a general intelligence test, and each group is compared on the basis of scores on a numerical reasoning test. Do the groups differ in terms of mean numerical ability?
The data for the two matched groups are given below:

                                              Class VIII    Class X
N                                             100           120
Means on Intelligence Tests                   70.26         70.25
SDs on Intelligence Tests                     9.98          10.02
Means on Numerical Reasoning Tests            55.62         60.34
SDs on Numerical Reasoning Tests              8.67          7.58
Coefficient of correlation between General
Intelligence Test scores and Numerical
Reasoning Test scores                                 0.45

The equation for calculating SE_D in the case of groups matched in terms of means and SDs is given below:

SE_D = √((SE_M1² + SE_M2²)(1 - r²))        (23.11)

where the subscripts are defined as usual. Now, we can proceed as under (with N this large, √N and √(N - 1) give practically the same value):

SE_M1 = 8.67/√100 = 0.867; SE_M2 = 7.58/√120 = 0.691

SE_D = √(((0.867)² + (0.691)²)(1 - 0.2025))
     = √((0.7516 + 0.4775)(0.7975)) = 0.991

t = ((M1 - M2) - 0)/SE_D = ((55.62 - 60.34) - 0)/0.991 = -4.72/0.991 = -4.762

df = (N1 - 1) + (N2 - 1) - 1 = (100 - 1) + (120 - 1) - 1 = (99 + 119) - 1 = 217

Entering the probability table of t ratios at df = 217, we find by interpolation that the obtained value of t exceeds the t value required at even the 0.001 level of significance. Rejecting the null hypothesis, we conclude that the two groups differed significantly in terms of mean numerical ability.

2. Analysis of Variance: F Ratio

The t ratio or z ratio is one of the powerful parametric tests through which we can test the significance of the difference between two means. There are two general limitations of the t ratio. First, when there are several groups and we want to test the significance of the mean differences among them, several t ratios are required to be computed. For example, suppose there are five groups. Then we need to compute

N(N - 1)/2 = (5 x 4)/2 = 10 t ratios

which is, of course, a cumbersome job. Second, the t ratio does not account for the interaction effect in statistical analysis. The variations in the scores may be due to the interactions taking place among groups. Such variations are not accounted for by t ratios. In order to remove these two limitations we turn to analysis of variance, originally developed by R. A. Fisher: a class of statistical techniques through which we test the overall difference among two or more (normally more than two) sample means. Analysis of variance is of two types: simple analysis of variance or one-way analysis of variance, and complex analysis of variance or two-way analysis of variance. Analysis of variance (of whatever type) is often referred to by its contraction, ANOVA.

Before we take up the discussion of ANOVA, a fundamental concept of ANOVA, that is, the sum of squares, must be properly understood. Let us take an example to illustrate the meaning of the concept of sum of squares. Suppose we have conducted a study with two groups of students concerning their attitude towards legalization of a (banned) drug. Each member of group A had never taken that drug and each member of group B had taken this drug at least once in a week. A high score on the attitude scale indicated that the student strongly favoured legalization. Here we want to test the hypothesis that there is no difference in attitude towards legalization because the two groups of students have been taken from the same population. Table 23.3 presents the scores of Group A and Group B towards the legalization of the drug.

Table 23.3 Group A and Group B attitude towards the legalization of drug

         Group A          Group B
         X      X²        X      X²
         3      9         8      64
         2      4         10     100
         3      9         12     144
         8      64        10     100
Sum:     16     86        40     408

N1 = 4, N2 = 4
ΣX_total = 16 + 40 = 56

In Table 23.3, the mean for Group A is 16/4 = 4 and the mean for Group B is 40/4 = 10. The overall mean is 56/8 = 7. For obtaining the total sum of squares (SS_total), it is required that from each individual score the overall mean be subtracted, the difference squared, and the squares added. Thus

SS_total = Σ(X - X̄_total)²        (23.12)

For the data in Table 23.3, the total sum of squares will then be

SS_total = (3-7)² + (2-7)² + (3-7)² + (8-7)² + (8-7)² + (10-7)² + (12-7)² + (10-7)²
         = 16 + 25 + 16 + 1 + 1 + 9 + 25 + 9 = 102

In this way, we find that the total sum of squares (SS_total) is the sum of squared deviations from the overall mean and therefore, is a measure of the total variance of the distribution.
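The deviation formula (Eq. 23.12) can be checked on the Table 23.3 data in a couple of lines:

```python
# Total sum of squares (Eq. 23.12) for the Table 23.3 attitude scores.
group_a = [3, 2, 3, 8]
group_b = [8, 10, 12, 10]
scores = group_a + group_b

grand_mean = sum(scores) / len(scores)                  # 56 / 8 = 7
ss_total = sum((x - grand_mean) ** 2 for x in scores)   # squared deviations from 7
print(ss_total)   # 102.0, as computed in the text
```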
Another way of calculating the total sum of squares is through the raw-score equation:

SS_total = ΣX²_total - (ΣX_total)²/N        (23.13)

= (86 + 408) - (16 + 40)²/8 = 494 - 392 = 102

In analysis of variance (ANOVA), the total sum of squares is divided into two components: the within-group sum of squares (SS_w) and the between-group sum of squares (SS_b). The within sum of squares component is the measure of the variability of the dependent-variable scores within each group and therefore, is the variation in the dependent variable that cannot be attributed to the independent or treatment variable. The source of within-group variability includes all the variables influencing the dependent variable other than the independent variable. Thus all within-group variability must come from variables other than the specified independent variables. The within-group sum of squares, in the above example, measures the sum of the squared deviations of scores in Group A around X̄1 plus the sum of the squared deviations of scores in Group B around X̄2. For the data in Table 23.3 the within-group sum of squares may be calculated as under:

SS_w = Σ(X - X̄1)² + Σ(X - X̄2)²        (23.14)
     = (3-4)² + (2-4)² + (3-4)² + (8-4)² + (8-10)² + (10-10)² + (12-10)² + (10-10)²
     = 1 + 4 + 1 + 16 + 4 + 0 + 4 + 0 = 30

Raw-score equations for the within-group sum of squares may also be used to arrive at the same result, that is, 30:

SS_w = SS_w1 + SS_w2        (23.15)

SS_w1 = ΣX1² - (ΣX1)²/N1        (23.16)
      = 86 - (16)²/4 = 86 - 64 = 22

SS_w2 = ΣX2² - (ΣX2)²/N2
      = 408 - (40)²/4 = 408 - 400 = 8

SS_w = 22 + 8 = 30

The between sum of squares (SS_b) is the sum of the sums of squares between each group and directly reflects the impact of the independent variable or treatment variable. In other words, the between sum of squares component is a measure of the variation between the groups of the independent variable and therefore, is the variation in the dependent variable that is attributable to the independent or treatment variable. It may be obtained by subtracting the overall mean from each group mean, squaring the result, multiplying by N in each group and finally summing across all the groups. Thus

SS_b = ΣN_i(X̄_i - X̄_total)²        (23.17)

where, N_i is the number in the ith group and X̄_i is the mean of the ith group.

SS_b = 4(4-7)² + 4(10-7)² = 36 + 36 = 72

The raw-score formula for calculating the between-group sum of squares is:

SS_b = Σ[(ΣX_i)²/N_i] - (ΣX_total)²/N        (23.18)
     = ((16)²/4 + (40)²/4) - (56)²/8 = (64 + 400) - 392 = 464 - 392 = 72

However, most simply SS_b can be calculated by subtracting the value of SS_w from SS_total:

SS_b = SS_total - SS_w        (23.19)
     = 102 - 30 = 72

In simple analysis of variance, there is only one independent variable and the samples are classified into several groups on the basis of this variable. Since the basis of classification is only one independent variable, the simple analysis of variance is also known as one-way ANOVA. As such, ANOVA is suited to the completely randomized design. In complex ANOVA there are two or more independent variables, which form the basis of classification of groups. Such ANOVA is suited to factorial design.

Statistically, the F ratio is calculated as follows:

F = Larger variance / Smaller variance = Between-groups variance / Within-groups variance

Between-groups variance refers to variation of the mean of each group from the total or grand mean of all groups. Within-groups variance refers to the average variability of scores within each group. The theme of the analysis of variance is that if the groups have been randomly selected from the population, these two variances, namely, the between-groups variance and the within-groups variance, are unbiased estimates of the same population variance. The significance of the difference between these two types of variances is tested through the F test.

ANOVA has some assumptions which should be met:

(a) The population distribution for each treatment condition should be normal. In other words, there should be normality within the groups which have been measured on the dependent variable.
(b) The individuals who have been observed should be distributed randomly in the groups.
(c) The dependent variable should be measured on an interval scale and the independent variable(s) should be measured on a nominal scale (Elifson 1990).
(d) Within-groups variances must be approximately equal or homogeneous. This is referred to as homogeneity of variances. There are ordinarily two popular methods of determining the homogeneity of variances: Bartlett's test of homogeneity of variances and Hartley's F-max test. For Bartlett's test, readers are referred to Kirk's (1982) Experimental Design. Here we shall concentrate upon Hartley's F-max test only.
Hartley's F-max test is based on the principle that a sample variance presents an unbiased estimate of the population variance. Therefore, if the population variances are the same, the sample variances should ordinarily be very similar. The procedure for computing the F-max test is as under:

(a) Compute the sample variance for each of the separate samples. The sample variance is computed by SS/(n - 1), where SS is the sum of squared deviations of each sample, calculated separately by ΣX² - (ΣX)²/n.

(b) Select the largest and the smallest of these sample variances and compute F-max as under:

F-max = Largest sample variance / Smallest sample variance        (23.20)

A relatively large value of the F-max test indicates that there exists a large difference among the sample variances. In such a situation, the data suggest that the population variances are different and that the assumption of homogeneity of variances has been violated. On the other hand, a small value of the F-max test (that is, near 1.00) indicates that the sample variances are similar and therefore, the assumption of homogeneity of variance is reasonable.

Suppose there are three independent samples, each having n = 10. The sample variances are 12.66, 10.78 and 11.79. For these data,

F-max = Largest sample variance / Smallest sample variance = 12.66/10.78 = 1.17

The data do not provide evidence that the assumption of homogeneity of variance has been violated, because the value of the F-max test is near 1.

There is also another way of reaching a decision about the assumption of homogeneity of variance: by comparing the value of the F-max test with the critical value provided in a table of the F-max statistic.* For using this table, we need to know k = the number of separate samples, df = n - 1, and the alpha level predetermined by the investigator. For the above data, we have k = 3 and df = n - 1 = 10 - 1 = 9; the table gives a value of 5.34 at the alpha level of 0.05 and a value of 8.5 at the alpha level of 0.01. Since the obtained value of the F-max test is less than 5.34, we may conclude that the assumption of homogeneity of variance is reasonable. Had the obtained value been larger than the value given in the table, we would have concluded that the homogeneity assumption is not valid.

* For this table, the reader is referred to the book entitled Statistics for the Behavioural Sciences by F. J. Gravetter & L. B. Wallnau (1987).

For an illustration of simple analysis of variance with independent measures, look at Table 23.4, in which the scores of three groups of subjects A, B and C on an educational achievement test were recorded, and ANOVA has been calculated from those scores. The column totals required for the computation are:

Table 23.4 Simple ANOVA based on the hypothetical scores of three groups (n = 10 per group; column totals)

                         Gr. A     Gr. B     Gr. C
Sum (ΣX):                561       154       312
Sum of squares (ΣX²):    38611     2610      16070

Grand sum (ΣX): 561 + 154 + 312 = 1027
Grand sum of squares (ΣX²): 38611 + 2610 + 16070 = 57291

Step 1: Correction (C):

C = (ΣX)²/N = (1027)²/30 = 35157.63

Step 2: Total sum of squares (TSS):

TSS = ΣX² - C = 57291 - 35157.63 = 22133.37

Step 3: Between (or among) sum of squares (BSS):

BSS = (ΣX1)²/n1 + (ΣX2)²/n2 + (ΣX3)²/n3 - C
    = (561)²/10 + (154)²/10 + (312)²/10 - 35157.63
    = 31472.1 + 2371.6 + 9734.4 - 35157.63
    = 43578.10 - 35157.63 = 8420.47

Step 4: Within sum of squares (WSS):

WSS = TSS - BSS = 22133.37 - 8420.47 = 13712.90

Summary: Analysis of Variance

Source of variation    df                     Sum of squares    Mean square or variance
Between-groups         k - 1 = 3 - 1 = 2      8420.47           4210.235
Within-groups          N - k = 30 - 3 = 27    13712.90          507.885
Total                  N - 1 = 30 - 1 = 29    22133.37

For this table,

F = Between-groups variance / Within-groups variance = 4210.235/507.885 = 8.29
V c e s
* V a l O i

fehais m

OT6 esIs,
Measzurements and Researnh
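The four steps above can be sketched in a few lines of code. This is a minimal sketch, not part of the text; the function name one_way_anova_from_sums is my own, and the inputs are the group sums and sums of squares from Table 23.4.

```python
# One-way ANOVA computed from group summary statistics (sketch).
# Group sums and sums of squares are those given for Table 23.4.
def one_way_anova_from_sums(sums, sums_of_squares, n):
    k = len(sums)                                    # number of groups
    N = k * n                                        # total number of cases
    C = sum(sums) ** 2 / N                           # Step 1: correction term
    ss_total = sum(sums_of_squares) - C              # Step 2: total SS
    ss_between = sum(s ** 2 / n for s in sums) - C   # Step 3: between SS
    ss_within = ss_total - ss_between                # Step 4: within SS
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    return ms_between / ms_within, ms_between, ms_within

F, ms_b, ms_w = one_way_anova_from_sums([561, 154, 312], [38611, 2610, 16070], n=10)
```

The obtained F agrees with the hand computation (8.29) to two decimal places.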

Since there are 30 cases in all in the above problem, we have N - 1 = 30 - 1 = 29 df. The df for between-groups is equal to the number of groups (K) minus one. Since there are three groups, df for between-groups is K - 1 = 3 - 1 = 2. The df for within-groups is equal to the total number of cases minus the number of groups (K). Hence, df for within-groups is equal to N - K = 30 - 3 = 27. After calculating the sums of squares and the numbers of degrees of freedom, we compute mean squares, or variances, which are obtained by dividing each sum of squares by its respective number of degrees of freedom. These two variances are the estimates of the population variance. We obtain the F ratio by dividing the between-groups variance by the within-groups variance.

The F ratio is interpreted by the use of the F Table (Guilford & Fruchter 1978). In the F Table, the number of degrees of freedom for the greater mean square (df1) is written at the top and the number of degrees of freedom for the smaller mean square (df2) is written on the left-hand side. For our problem, df1 = 2 and df2 = 27. Locating these dfs, we find that the required F ratio at the 0.05 level is 3.35 and at the 0.01 level is 5.49. Since the obtained value of the F ratio is 8.29, which exceeds 5.49, we reject the null hypothesis and conclude that there is an overall difference between the three groups of subjects on the educational achievement test.

Tests after the F test (Post Hoc tests)

One general limitation of the F test is that it tells only about the overall difference between the groups under study but tells nothing about the location of the exact difference. For example, in the above problem the obtained F ratio of 8.29 is significant, which definitely indicates that there is a significant difference between the groups under study, but whether the significant difference is between A and B or A and C or B and C cannot be said. Therefore, when the F ratio is significant, we need some additional tests after this F test.

In fact, in ANOVA the null hypothesis states that there is no treatment effect and therefore, all sample means are the same. Although this appears to be a very simple conclusion, in most cases it creates many problems. For example, suppose there are only two treatment groups in the experiment. The null hypothesis will state that the mean of one treatment group does not differ from the mean of the other treatment group. If this hypothesis is rejected, the straightforward conclusion is that the two means are not equal. But when there are three (or more) treatment groups, the problem gets complicated. With k = 3, for example, rejecting the null hypothesis indicates that not all means of the treatment groups are the same. Now the researcher has to decide which ones are different. Is Mean1 different from Mean2? Is Mean2 different from Mean3? Is Mean1 different from Mean3? Are all three different? The purpose of post hoc tests is to answer these questions.

Post hoc tests are done after an analysis of variance has been carried out. These are called post hoc comparisons because they are done after the fact and are not planned in advance (Aron, Aron & Coups 2006). In general, a post hoc test enables the researcher to go back through the data and compare the individual treatments two at a time, that is, in pairs. In statistical terms, this is called making pairwise comparisons.

Although there are different post hoc tests because there are many different ways of controlling the alpha levels, these tests can be classified into two broad categories: a priori or planned comparisons, and a posteriori or unplanned comparisons. A priori tests are meant to compare the specific treatment conditions that are identified before the experiment begins. Such tests generally make little effort to control the alpha level. Planned comparisons can be made even when the overall F ratio is not significant. The logic behind these tests is that the researcher has a specific small experiment contained within a larger experiment. Since the researcher has specific hypotheses for this portion of the experiment, it is reasonable to test its results separately.

A posteriori tests are used for comparisons that were not necessarily planned at the start of the experiment. In fact, these tests try to control the overall alpha level by making adjustments for the number of different potential comparisons in the experiment. A posteriori comparisons can still be made even if the overall F is not significant. A priori comparisons are more powerful than post hoc comparisons. If the overall F is significant, the researcher may, however, still make comparisons using a post hoc test. In doing so, he should not use the conventional t test, because the alpha level at which he would actually be testing will be numerically higher than the alpha level originally specified. This is likely to happen because, as the number of comparisons increases, the probability of Type I error also increases. It may also be clearly noted that some of the significant differences may be due to chance factors.

A variety of post hoc tests are available. For a given situation, different tests may be used, and the outcomes of these different tests may be different. Statisticians who are working with these post hoc comparisons are themselves not in complete agreement regarding the appropriateness of the tests. Important post hoc tests are: the Newman-Keuls test, Duncan's Multiple Range test, the protected t test, Tukey's Honestly Significant Difference (HSD) test and the Scheffé test. Of these, the most popular post hoc tests are the Tukey Honestly Significant Difference (HSD) test and the Scheffé test, which will be discussed.

Tukey Honestly Significant Difference (HSD) test

The HSD test allows the researcher to compute a single value that determines the minimum difference between treatment means that is considered essential for significance. This value is called the Honestly Significant Difference, or HSD, which is then used to compare any two treatment conditions. If the mean difference exceeds Tukey's HSD, the researcher concludes that there is a significant difference between the treatments. If the mean difference is less than Tukey's HSD, he concludes that there is no significant difference between the treatments. The formula for Tukey's HSD is as under:

HSD = q √(MS_within / n)          (23.21)

where, MS_within = within-treatment variance
n = number of scores in each treatment
q = studentized range statistic.

For further details of the Table of the Studentized Range statistic, readers are referred to Gravetter and Wallnau (1987, A-35).

Taking the example from Table 23.4, we can calculate HSD for making three post hoc comparisons and determining in which pair(s) the significant mean difference actually lies. From Table 23.4,

MS_within = 507.885
n = 10
K (or number of treatments/columns) = 3
Alpha level for q = .05; df for error term = N - k = 30 - 3 = 27
q = 3.51 (by interpolation from Table 23.5)

HSD = 3.51 √(507.885/10) = 3.51 √50.79 = 3.51 × 7.13 = 25.02

Using this value (25.02), we can make the following conclusions:

1. Mean difference between A and B = 56.10 - 15.40 = 40.7 (Significant)
2. Mean difference between A and C = 56.10 - 31.20 = 24.9 (Not Significant)
3. Mean difference between C and B = 31.20 - 15.40 = 15.8 (Not Significant)

Since the mean difference between A and B only exceeds 25.02, it is regarded as significant. The other two mean differences, that is, between A and C and between B and C, are less than 25.02. Hence, it is concluded that there is a significant difference between the means of A and B only. The mean difference between A and C and between B and C is not significant.
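The three HSD comparisons above can be checked with a short sketch. This assumes the values worked out in the text (q = 3.51, MS_within = 507.885, n = 10, and the three group means); the variable names are my own.

```python
from math import sqrt

# Tukey HSD for the three pairwise comparisons of Table 23.4 (sketch).
q = 3.51                 # studentized range statistic (df = 27, K = 3, alpha = .05)
ms_within = 507.885      # within-treatment variance from the ANOVA
n = 10                   # scores per treatment
hsd = q * sqrt(ms_within / n)   # Equation 23.21

means = {"A": 56.10, "B": 15.40, "C": 31.20}
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
significant = [p for p in pairs if abs(means[p[0]] - means[p[1]]) > hsd]
```

Only the A-B pair clears the HSD of about 25.02, as in the hand computation.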
Table 23.5 A selected portion of the values of the studentized range statistic q (Light face = 0.05; Bold face = 0.01)

df for           K = Number of treatments
error term       2       3       4       5       6       7
8                3.26    4.04    4.53    4.89    5.17    5.40
                 4.75    5.64    6.20    6.62    6.96    7.24
24               2.92    3.53    3.90    4.17    4.37    4.54
                 3.96    4.55    4.91    5.17    5.37    5.54
30               2.89    3.49    3.85    4.10    4.30    4.46
                 3.89    4.45    4.80    5.05    5.24    5.40
∞                2.77    3.31    3.63    3.86    4.03    4.17
                 3.64    4.12    4.40    4.60    4.76    4.88

Abstracted from Table B.5 of Statistics for Behavioural Sciences by F.J. Gravetter & L.B. Wallnau.

Scheffé test

Following the Scheffé technique also, we can locate the difference between the three means. Since there are three groups, three comparisons are likely to be made, namely A vs B, A vs C and B vs C. Following the Scheffé technique, we are required to compute the F ratio with the help of Equation 23.22 for each of the three pairs of groups.

F = (M1 - M2)² / [MS_within (N1 + N2)/(N1 N2)]          (23.22)

Now, three Fs for the three pairs of distributions can be calculated as follows:

F ratio for distributions A and B:
F = (56.10 - 15.40)² / [507.885 (10 + 10)/((10)(10))] = 16.31

F ratio for distributions A and C:
F = (56.10 - 31.20)² / [507.885 (10 + 10)/((10)(10))] = 6.10

F ratio for distributions B and C:
F = (31.20 - 15.40)² / [507.885 (10 + 10)/((10)(10))] = 2.46

As pointed out earlier, F at the 0.05 level of significance for df1 = 2 and df2 = 27 is 3.35. This, multiplied by k - 1, yields (3 - 1)(3.35) = 6.70. Only the F ratio for distributions A and B is greater than 6.70. Hence, it is concluded that there is a significant difference between the means of A and B only. The difference between A and C and between B and C is not significant. The same conclusion had been arrived at by using Tukey's Honestly Significant Difference test.

So far as the computation of the two-way analysis of variance is concerned, the reader is referred to Chapter 21, where the details of the statistical calculations have been shown.

Analysis of variance (ANOVA) may be univariate analysis or multivariate analysis. In univariate analysis there is one dependent variable, and there may be more than one independent variable. In multivariate analysis of variance, there is more than one dependent variable. Usually the dependent variables are different measures of approximately the same thing, such as two different reading ability tests. This is called multivariate analysis of variance (MANOVA), which differs from an ordinary analysis of variance because in it (MANOVA) there is more than one dependent variable. When the researcher finds an overall significant difference among groups with MANOVA, this means that the groups differ on a combination of the dependent variables. Subsequently, the researcher proceeds to know whether the groups differ on any or all of the dependent variables considered individually. In this way, MANOVA is followed by an ordinary analysis of variance for each of the dependent variables. In the present text, no attempt will be made to illustrate the computation of MANOVA.

Likewise, the analysis of covariance in which there is more than one dependent variable is called multiple analysis of covariance (MANCOVA). MANCOVA differs from an ordinary analysis of covariance (ANCOVA) because in the former there is more than one dependent variable, whereas in the latter there is only one dependent variable. In the present text, MANCOVA will not be illustrated with numerical examples. However, ANCOVA will be discussed in detail with numerical examples.

3. Analysis of Covariance (ANCOVA)

The analysis of covariance (ANCOVA) was developed by R.A. Fisher, and the very first example of its use appeared in the literature in 1932. Most of the early applications are from agricultural experimentation. A close examination of these examples in agriculture also helps to clarify its application in the behavioural sciences. Analysis of covariance, a most widely used elaboration of analysis of variance, is a technique in which an indirect method of statistical control is employed to enhance the precision of the experiment. In its procedure or methodology, ANCOVA may be equated with partial correlation, where the researcher seeks a measure of correlation between two sets of variables (the dependent and the independent) by partialling out the impact of third or intervening variables. In ANCOVA the researcher also tries to partial out the side effects, if any, in the experiment due to lack of proper experimental control over the intervening variables. Each of the variables controlled for (or partialed out, or held constant) is called a covariate (or covariable). In ANCOVA, the statistical control is achieved by including measures on a concomitant variate (X). This is an uncontrolled variable, also called the covariate, and is not itself of experimental interest. The other variate, which is of experimental interest, is termed the criterion and designated as Y. In this way, in ANCOVA the researcher obtains two observations (X and Y) from each participant. Measurements on X (the covariate) obtained before treatment are used to adjust the measurements on Y (the criterion). When X and Y are correlated, a part of the variance of Y occurs due to the variation in X. In fact, ANCOVA is a method of making adjustments that are linked to the problem of correlation. Here the problem is to find how much of the variation of Y can be predicted from the variation in X, and then to subtract this variation so as to obtain the remaining (or leftover) variation as an adjusted value.
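The Scheffé comparisons described above can likewise be verified with a short sketch. The numbers are those of the worked example (means 56.10, 15.40 and 31.20; MS_within = 507.885; n = 10 per group); the function name scheffe_f is my own.

```python
# Scheffé F ratio for one pairwise comparison (Equation 23.22, sketch).
def scheffe_f(m1, m2, ms_within, n1, n2):
    return (m1 - m2) ** 2 / (ms_within * (n1 + n2) / (n1 * n2))

ms_w, n = 507.885, 10
f_ab = scheffe_f(56.10, 15.40, ms_w, n, n)
f_ac = scheffe_f(56.10, 31.20, ms_w, n, n)
f_bc = scheffe_f(31.20, 15.40, ms_w, n, n)
criterion = (3 - 1) * 3.35    # (k - 1) times the critical F(2, 27) at 0.05
```

Only f_ab exceeds the criterion of 6.70, matching the hand computation.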
A formula for correcting the Y scores for the initial differences in X scores is as under:

SS_y.x = SS_y - (SP)²/SS_x          (23.23)

where, SS_y.x = sum of squares of Y when the variability contributed by X has been controlled or removed
SS_y = sum of squares of Y scores
SP = sum of products of the deviations of X and Y
SS_x = sum of squares of X scores

In a nutshell, it can be said that ANCOVA is a method of analysis that enables the investigator to equate the initial status of the groups in terms of relevant known variables. Differences in the pre-experimental status of the groups are removed statistically, so that they can be compared as though their initial status had been equated. The scores that have been corrected by this procedure are technically known as residuals, because they are what remain after the inequalities have been corrected or removed. Results are then interpreted like any other analysis of variance, with one major exception: while reporting results, instead of giving the means of each group, the researcher may give the adjusted means, that is, the means after partialing out the effect of the covariates.

The assumptions of ANCOVA are as under:
(i) The X scores (the covariate) are not affected by the treatments.
(ii) The dependent variable that is under measurement should be normally distributed in the population.
(iii) The regression of the final scores (Y) on the initial scores (X) should be linear, and its coefficient should be more or less the same in all groups; that is, the correlations between the X and Y scores are similar for all groups.
(iv) The treatment groups are selected at random from the same population.
(v) The within-group variances should be approximately equal.
(vi) The contributions of the variances in the total sample should be additive.

Numerical Example I

Three groups of five students each were randomly selected from class X of a school. They were then rated for their leadership quality, and their scores were obtained. Subsequently, they were subjected to three different training techniques for improving their leadership qualities for a month. After a month of training, they were again assessed for their leadership qualities, and their final scores were also obtained. The data so collected are presented in Table 23.6.

Table 23.6 Measures on a covariate (X) and a criterion variate (Y) for the single-factor experiment

       Group A           Group B           Group C
       Initial  Final    Initial  Final    Initial  Final
       X        Y        X        Y        X        Y
       3        6        8        10       7        14
       4        8        8        8        4        4
       2        4        9        10       5        12
       3        6        5        13       6        13
       5        5        5        12       8        10
Sum:   17       29       35       53       30       53
Mean:  3.4      5.8      7.0      10.6     6.0      10.6

The nature of these data calls for using ANCOVA. There may be many observable differences in the initial scores among the three groups, because no attempt was made by the researcher to make the groups equivalent at the start of the study, that is, before subjecting the groups to the different treatments. In the absence of such experimental control, the researcher is forced to exercise statistical control by using ANCOVA. The computational work for ANCOVA may be started by arranging the above data as under (cf. Table 23.7).

Table 23.7 Computational arrangement for ANCOVA

Group A                                  Group B
X1    Y1    X1Y1   X1²   Y1²             X2    Y2    X2Y2   X2²   Y2²
3     6     18     9     36              8     10    80     64    100
4     8     32     16    64              8     8     64     64    64
2     4     8      4     16              9     10    90     81    100
3     6     18     9     36              5     13    65     25    169
5     5     25     25    25              5     12    60     25    144
Sum:  17    29    101    63    177       35    53    359    259   577
Mean: 3.4   5.8                          7.0   10.6

Group C
X3    Y3    X3Y3   X3²   Y3²
7     14    98     49    196
4     4     16     16    16
5     12    60     25    144
6     13    78     36    169
8     10    80     64    100
Sum:  30    53    332    190   625
Mean: 6.0   10.6

ΣX = 17 + 35 + 30 = 82
ΣY = 29 + 53 + 53 = 135

Steps of computation

A. ANOVA of X scores:

(i) Correction (C_x) = (ΣX)²/N = (82)²/15 = 448.27

(ii) Total sum of squares (SS_xt) = ΣX² - C_x
    = (63 + 259 + 190) - 448.27 = 512 - 448.27 = 63.73

(iii) Between-group sum of squares (SS_xb) = (ΣX1)²/n1 + (ΣX2)²/n2 + (ΣX3)²/n3 - C_x
    = (17)²/5 + (35)²/5 + (30)²/5 - 448.27
    = (57.8 + 245 + 180) - 448.27 = 482.8 - 448.27 = 34.53

(iv) Within-group sum of squares (SS_xw) = Total SS_xt - Between SS_xb
    = 63.73 - 34.53 = 29.2
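The column sums of Table 23.7 can be checked with a short sketch over the (X, Y) pairs of Table 23.6. The helper name column_sums is my own.

```python
# Sums required for ANCOVA, computed from the (X, Y) pairs of Table 23.6 (sketch).
groups = {
    "A": [(3, 6), (4, 8), (2, 4), (3, 6), (5, 5)],
    "B": [(8, 10), (8, 8), (9, 10), (5, 13), (5, 12)],
    "C": [(7, 14), (4, 4), (5, 12), (6, 13), (8, 10)],
}

def column_sums(pairs):
    sx = sum(x for x, _ in pairs)            # sum of X
    sy = sum(y for _, y in pairs)            # sum of Y
    sxy = sum(x * y for x, y in pairs)       # sum of XY products
    sx2 = sum(x * x for x, _ in pairs)       # sum of X squared
    sy2 = sum(y * y for _, y in pairs)       # sum of Y squared
    return sx, sy, sxy, sx2, sy2
```

Each call reproduces a "Sum" row of Table 23.7.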

Table 23.8 Summary of ANOVA of X scores

Source of variation      SS       df                 MS       F
Between groups           34.53    k-1 = 3-1 = 2      17.26    7.10
Within groups (error)    29.2     N-k = 15-3 = 12    2.43
Total                    63.73    14

F = 17.26/2.43 = 7.10; F at 0.05 (df 2, 12) = 3.88; F at 0.01 = 6.93

B. ANOVA of Y scores:

(i) Correction (C_y) = (ΣY)²/N = (29 + 53 + 53)²/15 = (135)²/15 = 1215

(ii) Total sum of squares (SS_yt) = ΣY² - C_y = (177 + 577 + 625) - 1215 = 1379 - 1215 = 164

(iii) Between-group sum of squares (SS_yb) = (ΣY1)²/n1 + (ΣY2)²/n2 + (ΣY3)²/n3 - C_y
    = (29)²/5 + (53)²/5 + (53)²/5 - 1215
    = (168.2 + 561.8 + 561.8) - 1215 = 1291.8 - 1215 = 76.8

(iv) Within-group sum of squares (SS_yw) = Total SS_yt - Between SS_yb = 164 - 76.8 = 87.2

Table 23.9 Summary of ANOVA of Y scores

Source of variation      SS       df                 MS       F
Between groups           76.8     k-1 = 3-1 = 2      38.4     5.28
Within groups (error)    87.2     N-k = 15-3 = 12    7.27
Total                    164      14

F = 38.4/7.27 = 5.28; F at 0.05 (df 2, 12) = 3.88; F at 0.01 = 6.93

C. Analysis of covariance:

(i) Total sum of products (SP_t) = (ΣX1Y1 + ΣX2Y2 + ΣX3Y3) - (ΣX)(ΣY)/N
    = (101 + 359 + 332) - (82 × 135)/15
    = 792 - 738 = 54

(ii) Sum of products between groups (SP_b)
    = (ΣX1)(ΣY1)/n1 + (ΣX2)(ΣY2)/n2 + (ΣX3)(ΣY3)/n3 - (ΣX)(ΣY)/N
    = (17 × 29)/5 + (35 × 53)/5 + (30 × 53)/5 - (82 × 135)/15
    = 98.6 + 371 + 318 - 738 = 787.6 - 738 = 49.6

(iii) Sum of products within groups (SP_w) = SP_t - SP_b = 54 - 49.6 = 4.4

For convenience, the sums of products (SP) may be arranged with the sums of squares for both the X and Y variates as under.

Table 23.10 Summary of sums of squares of X and Y as well as of the sums of products

                          Total     Within    Between
Sum of products (XY)      54        4.4       49.6
Sum of squares (X)        63.73     29.2      34.53
Sum of squares (Y)        164       87.2      76.8

(iv) Adjusted sum of squares for Y (total): SS_yx(t) = SS_yt - (SP_t)²/SS_xt
    = 164 - (54)²/63.73 = 164 - 45.75 = 118.25

(v) Within-group adjusted sum of squares: SS_yx(w) = SS_yw - (SP_w)²/SS_xw
    = 87.2 - (4.4)²/29.2 = 87.2 - 0.66 = 86.54

(vi) Between-group adjusted sum of squares: SS_yx(b) = Total adjusted SS - Within adjusted SS
    = 118.25 - 86.54 = 31.71

Table 23.11 Summary of covariance analysis

Source of variation      SS        df                          MS       F
Between groups           31.71     k-1 = 3-1 = 2               15.85    2.01
Within groups (error)    86.54     k(n-1)-1 = 3(5-1)-1 = 11*   7.86
Total                    118.25    13

F_y.x = 15.85/7.86 = 2.01; F at 0.05 (df 2, 11) = 3.98; F at 0.01 = 7.20
* 1 df is lost because of the regression of Y on X.

Steps of computation

Step 1: ANOVA of X scores
In this step, ANOVA of the X scores has been carried out separately. The outcome of this ANOVA has been summarized in Table 23.8. The outcome, that is, the F ratio, shows that the three groups differ significantly: F (df = 2, 12) = 7.10 is more than the critical F value at the 0.01 level, so it is significant at the 0.01 level.

Step 2: ANOVA of Y scores
In this second step, ANOVA of the Y scores has been worked out separately. The outcome of this ANOVA of Y scores has been presented in Table 23.9. It is clear that the three groups also differed significantly on the variate of Y (the criterion) at the 0.05 level (df = 2, 12).
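The whole adjustment can be wrapped in one function. This is a minimal sketch under the equal-n conditions of the example; the function name ancova_f is my own, and the data are the (X, Y) pairs of Table 23.6.

```python
# One-way ANCOVA: F test on Y after adjusting for the covariate X (sketch).
def ancova_f(groups):
    k = len(groups)
    n = len(groups[0])                 # equal group sizes assumed
    N = k * n
    xs = [x for g in groups for x, _ in g]
    ys = [y for g in groups for _, y in g]
    cx = sum(xs) ** 2 / N
    cy = sum(ys) ** 2 / N
    cxy = sum(xs) * sum(ys) / N
    ss_xt = sum(x * x for x in xs) - cx
    ss_yt = sum(y * y for y in ys) - cy
    sp_t = sum(x * y for x, y in zip(xs, ys)) - cxy
    ss_xb = sum(sum(x for x, _ in g) ** 2 for g in groups) / n - cx
    ss_yb = sum(sum(y for _, y in g) ** 2 for g in groups) / n - cy
    sp_b = sum(sum(x for x, _ in g) * sum(y for _, y in g) for g in groups) / n - cxy
    ss_xw, ss_yw, sp_w = ss_xt - ss_xb, ss_yt - ss_yb, sp_t - sp_b
    adj_total = ss_yt - sp_t ** 2 / ss_xt      # Equation 23.23 applied to totals
    adj_within = ss_yw - sp_w ** 2 / ss_xw     # and to the within-group values
    adj_between = adj_total - adj_within
    return (adj_between / (k - 1)) / (adj_within / (k * (n - 1) - 1))

groups = [
    [(3, 6), (4, 8), (2, 4), (3, 6), (5, 5)],
    [(8, 10), (8, 8), (9, 10), (5, 13), (5, 12)],
    [(7, 14), (4, 4), (5, 12), (6, 13), (8, 10)],
]
f_adjusted = ancova_f(groups)
```

The adjusted F agrees with Table 23.11 (about 2.01), which is not significant at the 0.05 level.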
Step 3: Analysis of covariance
ANCOVA has been worked out. First, the total sum of products (XY) has been divided into two components: the sum of products between groups and the sum of products within groups. These computations have been done for the purpose of correcting the Y scores for any variability in Y contributed by the X scores, that is, for the differences in X scores. Subsequently, the adjusted sums of squares have been calculated; the term "adjusted sum of squares" is used to refer to the sum of squares of the Y scores when the variability of X is held constant. For convenience, the sums of squares and the sums of products (XY) have been summarized in Table 23.10. It is observed that the obtained F of 2.01 is not significant at the 0.05 level. Thus the analysis of variance for the adjusted Y scores does not show a significant difference between the groups. H0 (the null hypothesis) is accepted, with the conclusion that the groups do not differ significantly after giving the treatments. Hence, out of the three training techniques, no one technique can be said to be better than another.

If we pay attention to the above numerical example, it is clear that the three groups differ significantly with respect to the X as well as the Y scores. However, when the Y scores are adjusted to account for the correlation between the two variables, the groups no longer differ significantly. It appears, therefore, that the former difference in Y scores was simply a reflection of the differences in X scores; that is, there is regression of Y on X. Let us find out if the regression of Y on X is really significant.

The total sum of squares of the Y measures is divided into two components: the sum of squares corresponding to those parts of the Y scores which can be predicted from the X scores, and the other (residual) corresponding to those parts of the Y scores which are independent of X.

Sum of squares (SS) due to regression = (SP_t)²/SS_xt = (54)²/63.73 = 45.75
Total SS of Y = 164
Residual = 164 - 45.75 = 118.25

Out of the total variation of the Y scores of 164, 45.75 is due to those parts of the Y scores which can be predicted from the X scores, and 118.25 corresponds to those parts of the Y scores which are independent of the X scores. The test of significance of the regression of Y on X is presented in Table 23.12.

Table 23.12 ANOVA testing the significance of the regression of Y on X

Source of variation    SS        df    MS       F
Due to regression      45.75     1     45.75    5.03
Residual (error)       118.25    13    9.09
Total                  164       14

F = 45.75/9.09 = 5.03 (P < 0.05); F (df 1, 13) = 4.65 (0.05 level); F (df 1, 13) = 9.07 (0.01 level)

Since the obtained F exceeds the critical value, that is, 5.03 exceeds 4.65 at the 0.05 level for df 1 and 13, H0 (the null hypothesis) is rejected, and we conclude that the regression of Y on X is significant. This shows that the X and Y scores are significantly associated.

In general, when the correlation between the X and Y scores within groups is high, ANCOVA will often produce a significant F. In the above example, the correlation between X and Y within groups is not high (r_w = 0.08), whereas the correlation among the means is quite high (r_b = 0.96), and therefore the outcome, that is, the F for ANCOVA, is not significant. Thus:

r_w = SP_w / √(SS_xw × SS_yw) = 4.4 / √(29.2 × 87.2) = 4.4/50.46 = 0.08

r_b = SP_b / √(SS_xb × SS_yb) = 49.6 / √(34.53 × 76.8) = 49.6/51.49 = 0.96

So far as this example is concerned, the matter ends here. But suppose the obtained F of ANCOVA becomes significant and, therefore, H0 is rejected. In such a case, our objective will then be to proceed ahead, analyze the differences in the final scores after correction or adjustment, and conclude which one of the treatments is better than the others. Hence, another numerical example follows.

Numerical Example II

In this second example, we are going to discuss a different experiment, where the groups are not significantly different on the X as well as the Y variates but the adjusted F (F_y.x) is significant. This example illustrates a situation which is opposite to the first numerical example. The data on this fictitious experiment are as under:

Table 23.13 Scores on a covariate (X) and a criterion variate (Y) for the single-factor experiment

       Group A           Group B           Group C
       Initial  Final    Initial  Final    Initial  Final
       X        Y        X        Y        X        Y
       5        10       15       15       10       15
       10       20       5        10       25       25
       15       25       10       15       10       20
       20       30       25       30       5        10
       10       20       15       20       5        10
Sum:   60       105      70       90       55       80
Mean:  12       21       14       18       11       16

ΣX = 60 + 70 + 55 = 185
ΣY = 105 + 90 + 80 = 275

The computational work for ANCOVA for the data presented in Table 23.13 may be started by arranging the above data as in Table 23.14.
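Before the ANCOVA steps, the two ordinary ANOVAs on these data can be checked with a sketch. The helper name anova_f is my own, and the scores are those of Table 23.13.

```python
# One-way ANOVA F ratios for the initial (X) and final (Y) scores of Table 23.13 (sketch).
def anova_f(samples):
    k = len(samples)
    n = len(samples[0])               # equal group sizes assumed
    N = k * n
    values = [v for s in samples for v in s]
    C = sum(values) ** 2 / N                            # correction term
    ss_total = sum(v * v for v in values) - C
    ss_between = sum(sum(s) ** 2 / n for s in samples) - C
    ss_within = ss_total - ss_between
    return (ss_between / (k - 1)) / (ss_within / (N - k))

x_groups = [[5, 10, 15, 20, 10], [15, 5, 10, 25, 15], [10, 25, 10, 5, 5]]
y_groups = [[10, 20, 25, 30, 20], [15, 10, 15, 30, 20], [15, 25, 20, 10, 10]]
f_x = anova_f(x_groups)
f_y = anova_f(y_groups)
```

Both F ratios fall well short of the 0.05 critical value of 3.88 for df (2, 12), as the text reports.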
Table 23.14 Computational arrangement for ANCOVA

Group A                                  Group B
X1    Y1    X1Y1   X1²   Y1²             X2    Y2    X2Y2   X2²   Y2²
5     10    50     25    100             15    15    225    225   225
10    20    200    100   400             5     10    50     25    100
15    25    375    225   625             10    15    150    100   225
20    30    600    400   900             25    30    750    625   900
10    20    200    100   400             15    20    300    225   400
Sum:  60    105   1425   850   2425      70    90    1475   1200  1850
Mean: 12    21                           14    18

Group C
X3    Y3    X3Y3   X3²   Y3²
10    15    150    100   225
25    25    625    625   625
10    20    200    100   400
5     10    50     25    100
5     10    50     25    100
Sum:  55    80    1075   875   1450
Mean: 11    16

ΣX = 60 + 70 + 55 = 185
ΣY = 105 + 90 + 80 = 275

Steps of computation

A. ANOVA of X scores:

(i) Correction (C_x) = (ΣX)²/N = (185)²/15 = 2281.67

(ii) Total sum of squares (SS_xt) = ΣX² - C_x = (850 + 1200 + 875) - 2281.67
    = 2925 - 2281.67 = 643.33

(iii) Between-group sum of squares (SS_xb) = (ΣX1)²/n1 + (ΣX2)²/n2 + (ΣX3)²/n3 - C_x
    = (60)²/5 + (70)²/5 + (55)²/5 - 2281.67
    = 2305 - 2281.67 = 23.33

(iv) Within-group sum of squares (SS_xw) = SS_xt - SS_xb = 643.33 - 23.33 = 620

Table 23.15 Summary of ANOVA of X scores

Source of variation      SS        df                 MS       F
Between groups           23.33     k-1 = 3-1 = 2      11.67    0.23
Within groups (error)    620       N-k = 15-3 = 12    51.67
Total                    643.33    14

F (df 2, 12) = 11.67/51.67 = 0.23; F at 0.05 = 3.88; F at 0.01 = 6.93

B. ANOVA of Y scores:

(i) Correction (C_y) = (ΣY)²/N = (105 + 90 + 80)²/15 = (275)²/15 = 5041.67

(ii) Total sum of squares (SS_yt) = ΣY² - C_y = (2425 + 1850 + 1450) - 5041.67
    = 5725 - 5041.67 = 683.33

(iii) Between-group sum of squares (SS_yb) = (ΣY1)²/n1 + (ΣY2)²/n2 + (ΣY3)²/n3 - C_y
    = (105)²/5 + (90)²/5 + (80)²/5 - 5041.67
    = 5105 - 5041.67 = 63.33

(iv) Within-group sum of squares (SS_yw) = SS_yt - SS_yb = 683.33 - 63.33 = 620.00

Table 23.16 Summary of ANOVA of Y scores

Source of variation      SS        df                 MS       F
Between groups           63.33     k-1 = 3-1 = 2      31.67    0.61
Within groups (error)    620       N-k = 15-3 = 12    51.67
Total                    683.33    14

F (df 2, 12) = 31.67/51.67 = 0.61; F at 0.05 = 3.88; F at 0.01 = 6.93

C. Analysis of covariance:

(i) Total sum of products (SP_t) = (ΣX1Y1 + ΣX2Y2 + ΣX3Y3) - (ΣX)(ΣY)/N
    = (1425 + 1475 + 1075) - (185 × 275)/15
    = 3975 - 3391.67 = 583.33

(ii) Sum of products between groups (SP_b)
    = (ΣX1)(ΣY1)/n1 + (ΣX2)(ΣY2)/n2 + (ΣX3)(ΣY3)/n3 - (ΣX)(ΣY)/N
    = (60 × 105)/5 + (70 × 90)/5 + (55 × 80)/5 - (185 × 275)/15
    = 1260 + 1260 + 880 - 3391.67 = 3400 - 3391.67 = 8.33

(iii) Sum of products within groups (SP_w) = SP_t - SP_b = 583.33 - 8.33 = 575

For convenience, the sums of products (SP) may be arranged with the sums of squares for both the X and Y variates as under.

Table 23.17 Summary of sums of squares of X and Y as well as of the sums of products

                          Total     Within    Between
Sum of products (XY)      583.33    575       8.33
Sum of squares (X)        643.33    620       23.33
Sum of squares (Y)        683.33    620.00    63.33

(iv) Adjusted sum of squares for Y (total): SS_yx(t) = SS_yt - (SP_t)²/SS_xt
    = 683.33 - (583.33)²/643.33 = 683.33 - 528.98 = 154.35

(v) Within-group adjusted sum of squares: SS_yx(w) = SS_yw - (SP_w)²/SS_xw
    = 620.00 - (575)²/620 = 620.00 - 533.27 = 86.73

(vi) Between-group adjusted sum of squares = Total adjusted SS - Within adjusted SS
    = 154.35 - 86.73 = 67.62

Table 23.18 Summary of covariance analysis

Source of variation      SS        df                          MS       F
Between groups           67.62     k-1 = 3-1 = 2               33.81    4.29
Within groups (error)    86.73     k(n-1)-1 = 3(5-1)-1 = 11*   7.88
Total                    154.35    13

F_y.x = 33.81/7.88 = 4.29; F at 0.05 (df 2, 11) = 3.98
* 1 df is lost because of the regression of Y on X.
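Since the adjusted F is significant, the group means must be compared after adjustment. A sketch of the standard ANCOVA adjustment, M_y.x = M_y - b_w(M_x - GM_x), using the summary quantities of this example (the variable names are my own):

```python
from math import sqrt

# Adjusted Y means and pairwise t ratios for Numerical Example II (sketch).
sp_w, ss_xw = 575.0, 620.0        # within-group sum of products and SS of X
ms_w_adj = 86.73 / 11             # adjusted within-group mean square
n = 5                             # subjects per group
mean_x = {"A": 12.0, "B": 14.0, "C": 11.0}
mean_y = {"A": 21.0, "B": 18.0, "C": 16.0}

b_w = sp_w / ss_xw                            # pooled within-group slope
gm_x = sum(mean_x.values()) / len(mean_x)     # grand mean of X (equal n)
adj = {g: mean_y[g] - b_w * (mean_x[g] - gm_x) for g in mean_y}

se_d = sqrt(ms_w_adj * (1 / n + 1 / n))       # SE of a difference of adjusted means
t_ab = (adj["A"] - adj["B"]) / se_d
t_bc = (adj["B"] - adj["C"]) / se_d
t_ac = (adj["A"] - adj["C"]) / se_d
```

With df = 11, the two-tailed critical t at 0.05 is 2.20, so the A-B and A-C differences are significant and the B-C difference is not.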
where,
Discussion Myx Adjusted mean of Ywhen X is kept constant
=

Steps involved in the


computation of Numerical Example II are essentially the those
involved in the computation of Numerical
Example I. Therefore, those steps
same as
not
M, Mean of uncorrected Yscores
=

explained here.
are being My =Mean of X scores
It is further observed that F test bw =Regression coefficient of within groups (SP/SS)
level. The F test applied to the initial scores (X) is not significant even at the 0.05 level. It means that the three groups do not differ significantly on the X test. Likewise, 1 df is lost because of the regression of Y on X.

Here, b_w = SP_w/SS_xw = 0.93, and GM (the grand mean of X scores) = 12.33.

Now the adjusted Y means for each of the three groups can be calculated by applying formula 23.24, as given below.

For Group A:
M_y(adj) = 21 − 0.93 (12 − 12.33) = 21 + 0.306 = 21.31 (M_1)

For Group B:
M_y(adj) = 18 − 0.93 (14 − 12.33) = 18 − 1.55 = 16.45 (M_2)

For Group C:
M_y(adj) = 16 − 0.93 (11 − 12.33) = 16 + 1.24 = 17.24 (M_3)

Now we proceed to test the differences between the three adjusted means by computing the t test. The formula is:

t = mean difference / SE_D

SE_D of any two adjusted means can be computed by using the following formula:

SE_D = √[MS_w (1/N_1 + 1/N_2)]    (23.25)

where, MS_w = mean square of within groups (adjusted); N_1 = number of subjects in the first group compared; N_2 = number of subjects in the second group compared.

Here, df = k(n − 1) − 1 = 3(5 − 1) − 1 = 12 − 1 = 11

where, k = number of groups to be compared; n = number of subjects in each group being compared.

SE_D = √[7.89 (1/5 + 1/5)] = √3.156 = 1.77

The computed values of t are:

(i) t = (M_1 − M_2)/SE_D = (21.31 − 16.45)/1.77 = 4.86/1.77 = 2.74 (significant at 0.05 level), df = 11
(ii) t = (M_2 − M_3)/SE_D = (16.45 − 17.24)/1.77 = −0.79/1.77 = −0.45 (not significant), df = 11
(iii) t = (M_1 − M_3)/SE_D = (21.31 − 17.24)/1.77 = 4.07/1.77 = 2.30 (significant at 0.05 level), df = 11

The obtained results show that Group A and Group B differ significantly. Likewise, Group A and Group C differ significantly. However, Group B does not differ significantly from Group C.

In Table 23.20, an example of the computation of the F test from repeated measures (within-group design) has been provided. A researcher has examined the sales performance of 5 new employees in a building construction company. To see if there is a significant trend towards improvement in performance with more experience, the number of homes sold by each employee is recorded each month for the first 3 months of employment. The researcher wants to answer the question: Is there a significant change in sales performance over the three months?

Table 23.20. Computation of repeated measures ANOVA

Person     Month 1    Month 2    Month 3    Total
A             …          …          …          8
B             …          …          …         18
C             …          …          …         11
D             …          …          …         13
E             …          …          …         10
Sum:         10         20         30         60
Mean:         2          4          6

Computation:

(i) Correction term (C) = (60)²/15 = 240
(ii) Total SS (sum of squares) = (1² + 4² + 3² + … + 6²) − C = 306 − 240 = 66
(iii) Between-subject SS = (8² + 18² + 11² + 13² + 10²)/3 − C = 259.3 − 240 = 19.3
(iv) Within-subject SS = Total SS − Between-subject SS = 66 − 19.3 = 46.7
(v) Between-treatment SS = (10² + 20² + 30²)/5 − C = 280 − 240 = 40
(vi) Residual (error) SS = Within-subject SS − Between-treatment SS = 46.7 − 40 = 6.7

Table 23.21. Summary of ANOVA for single-factor experiment with repeated measures

Source of variation       SS      df                          MS      F
Between subjects         19.3     n − 1 = 5 − 1 = 4
Within subjects          46.7     n(k − 1) = 5 × 2 = 10
  Treatment              40       k − 1 = 3 − 1 = 2           20      23.81
  Residual (error)        6.7     (n − 1)(k − 1) = 4 × 2 = 8   0.84
Total                    66       N − 1 = 15 − 1 = 14

where n = number of subjects (5), k = number of treatments, that is, months (3), and N = nk = 15.
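The sums of squares behind Table 23.21 can be checked directly from the totals printed in Table 23.20. The following Python sketch (not part of the book's own apparatus) keeps exact fractions throughout, so the error MS works out to 0.83 and F to 24.0 rather than the rounded 0.84 and 23.81; the Studentized Range value q = 4.04 is taken from the table cited in the text.

```python
# Repeated-measures ANOVA from the totals in Table 23.20.
# Individual cell scores are not needed, only the marginal totals
# and the sum of squared raw scores.

row_totals = [8, 18, 11, 13, 10]   # one total per person (A-E), k = 3 scores each
col_totals = [10, 20, 30]          # one total per month, n = 5 scores each
sum_sq     = 306                   # sum of all squared raw scores
N          = 15                    # total number of observations
k, n = len(col_totals), len(row_totals)

C = sum(col_totals) ** 2 / N                              # correction term = 240
total_ss     = sum_sq - C                                 # 66
between_subj = sum(t ** 2 for t in row_totals) / k - C    # 19.33
within_subj  = total_ss - between_subj                    # 46.67
treatment_ss = sum(t ** 2 for t in col_totals) / n - C    # 40
residual_ss  = within_subj - treatment_ss                 # 6.67

ms_treat = treatment_ss / (k - 1)                         # df = 2
ms_error = residual_ss / ((n - 1) * (k - 1))              # df = 8
F = ms_treat / ms_error                                   # 24.0 with exact fractions

# Tukey's HSD for the repeated-measures post hoc test (formula 23.26),
# with q = 4.04 for k = 3 treatments and df = 8 at the 0.05 level.
q = 4.04
hsd = q * (ms_error / n) ** 0.5                           # about 1.65
```

The monthly means (2, 4, 6) differ pairwise by 2, 4 and 2, each at or above the HSD, matching the post hoc conclusions drawn in the text.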
In summary, the results of ANOVA have been summarized in Table 23.21. This indicates that the total SS (66) has two components, the between-subject SS (19.3) and the within-subject SS (46.7), and the within-subject sum of squares is further divided into two components: the treatment sum of squares and the residual (error) sum of squares. The F ratio is obtained by dividing the treatment MS by the residual (error) MS:

F = Treatment MS / Error MS = 20/0.84 = 23.81

The value of F at the 0.05 level for df = 2 (numerator) and df = 8 (denominator) is 4.46, and at the 0.01 level it is 8.65. The obtained F is significant and, therefore, H₀ (the null hypothesis) is rejected.

Repeated Measures ANOVA with Post Hoc Test

To determine exactly where the significant difference exists, the researcher must follow the ANOVA with post hoc tests. Like independent measures ANOVA, in repeated measures ANOVA Tukey's HSD can be computed as a post hoc test. The formula for Tukey's HSD in case of repeated measures ANOVA becomes:

HSD = q √(MS_error/n)    (23.26)

where q is the value of the Studentized Range statistic, MS_error is the residual variance and n is the number of scores in each treatment.

In this experiment MS_error is 0.84 with df = 8. At the 0.05 level of significance with k = 3, the value of q for these data, with the help of Table 23.5, is 4.04.

HSD = 4.04 √(0.84/5) = 1.66

It means that the mean difference between any two samples must be at least 1.66 to be significant. Using this value, we can make the following conclusions:

Mean difference between 2 months and 1 month = 4 − 2 = 2 (significant)
Mean difference between 3 months and 1 month = 6 − 2 = 4 (significant)
Mean difference between 3 months and 2 months = 6 − 4 = 2 (significant)

All these three mean differences are significant in light of Tukey's HSD test, which determines the minimum difference between two treatment means that is necessary for being significant.

4. Pearson r

Of all the measures of correlation the Pearson r, named after Prof. Karl Pearson, is one of the most common methods of assessing the association between two variables under study. The Pearson correlation measures the degree and direction of linear relationship between two interval-level variables. Pearson r represents the extent to which the same individuals, events, etc., occupy the same relative position on two variables. It is a symmetric statistic that makes no distinction between independent variable and dependent variable. It is also known as the Pearson product-moment correlation and is abbreviated to r. The size of Pearson r varies from +1 through zero to −1. In fact, all correlation coefficients have the limits of +1 and −1. A coefficient of +1 indicates perfect positive correlation, and a coefficient of −1 indicates perfect negative correlation. A coefficient of correlation tells us two things. First, it indicates the magnitude of the relationship. A correlation coefficient of, say, +0.90 or −0.90 gives the same information about the magnitude or size of the correlation. The sign makes no variation in the size of the correlation. Second, it gives an indication regarding the direction of the correlation. A positive correlation indicates a similar trend of relationship between two variables, that is, as one increases, the other also increases, or as one decreases, the other also decreases. Consider the relationship between intelligence test scores and classroom achievement. Generally, as intelligence test scores are raised, classroom achievement is also raised. And, therefore, the direction of the correlation between the two variables is positive. Likewise, consider the correlation between fatigue and output: as fatigue increases, output decreases. Here the relationship is negative because as one increases, the other decreases. Sometimes, the relationship is not consistent, and in this situation the coefficient of correlation is likely to be zero.

The Pearson product-moment correlation has two important assumptions.

1. The relationship between the X and Y variables should be linear. A linear relationship refers to the tendency of the data, when plotted, to follow a straight line as closely as possible. Although there are some statistical tests through which one can test whether or not the relationship is linear, generally this is determined by the inspection of the scatter diagram or correlation table.

2. The second assumption is of homoscedasticity (homo means 'like' and scedasticity means scatteredness). Homoscedasticity refers to the fact that the standard deviations (or variances) for columns and rows in the scatter diagram are equal or nearly equal. Figure 23.7 illustrates homoscedastic and non-homoscedastic distributions. In this figure we have three diagrams. In diagram (a) the variance of the distribution near the centre is smaller than the variances near both extremes, and hence, the distribution is non-homoscedastic. In diagram (b) the variances near the bottom extreme are lower than the variances at the middle or at the top extreme, and, therefore, the distribution is non-homoscedastic. In diagram (c), the variances are equal throughout as well as linear.

[Fig. 23.7: (a) and (b) non-homoscedastic; (c) homoscedastic and linear]

As we know, the Pearson correlation assesses the degree and direction of linear relation between two interval-level variables. The Pearson correlation, identified by the letter r, is computed by:

r = degree to which X and Y vary together / degree to which X and Y vary separately

or r = covariability of X and Y / variability of X and Y separately    (23.27)

In any distribution when there is a perfect linear relationship, every change in X is accompanied by a corresponding change in Y. In such a case, the covariability perfectly reflects the total variability of X and Y separately, and the overall result is a perfect correlation of 1.00. On the other hand, when there is no linear relationship, a change in the X variable does not produce any predictable change in Y. In this case, there will be no covariability and the resulting correlation is zero. Pearson r can be calculated by several formulas. In this book, we shall illustrate the calculation of Pearson r by the raw-score formula or machine formula. The equation is:

r = (NΣXY − ΣXΣY) / √{[NΣX² − (ΣX)²][NΣY² − (ΣY)²]}    (23.28)
where, r = Pearson product-moment correlation coefficient; N = number of pairs of scores; X = scores in the X variable; and Y = scores in the Y variable.

Table 23.22 presents the scores of 10 students who were administered an intelligence test (X) and an anxiety test (Y). Pearson r has also been calculated.

Table 23.22. Pearson r by raw-score method

    X         Y         X²        Y²        XY
   10         8        100        64        80
    5        20         25       400       100
    6        15         36       225        90
    3        13          9       169        39
    8        16         64       256       128
   12        20        144       400       240
   13        13        169       169       169
   20        11        400       121       220
   15        10        225       100       150
   10        12        100       144       120
ΣX = 102   ΣY = 138   ΣX² = 1272   ΣY² = 2048   ΣXY = 1336

r = [10(1336) − (102)(138)] / √{[10(1272) − (102)²][10(2048) − (138)²]}
  = −716/√3325776 = −716/1823.671 = −0.392 = −0.39

df = N − 2 = 10 − 2 = 8

The significance of the obtained r is tested with the help of a table (see Appendix F), where the value of r is less than the value required for significance even at the 0.05 level (Downie & Heath, 1970, 378). Hence, the null hypothesis is accepted and it is concluded that the scores on the intelligence test and the scores on the anxiety test are not correlated.

When the data are arranged in a bivariate distribution, as is the case in the scatter diagram or in the correlation table, Pearson r should be calculated by the following formula:

r = (Σx′y′/N − C_x C_y) / (σ_x σ_y)    (23.29)

where, r = Pearson r; x′ = deviation of X-series scores from the mean on the X test; y′ = deviation of Y-series scores from the mean on the Y test; N = sum of frequencies; C_x = correction in X-series scores; and C_y = correction in Y-series scores.

While interpreting a correlation coefficient, the researcher must take into account two circumstances which can cause a higher or lower correlation. One circumstance arises where one or relatively few persons have a pair of scores that are markedly different from the other pairs of scores. Such an individual's pair of scores is called an outlier. In fact, the actual magnitude of a correlation is strongly affected by one or more outliers. The second circumstance is where, all other things being equal, there is homogeneity in the group of scores. In such a situation, the Pearson r will be a lower one. In simple words, if there is a smaller range of scores, there will be a smaller value of r.

Researchers have pointed out many ways for interpreting a correlation coefficient, depending on the purpose of the researchers and the circumstances that influence the correlation's magnitude. One popular method uses a crude criterion for interpreting the magnitude of a correlation. This is presented below.

Value of r        Interpretation
0.80 to 1.00      High to very high
0.60 to 0.80      Substantial
0.40 to 0.60      Moderate
0.20 to 0.40      Low
0.00 to 0.20      Negligible

Another method of interpreting a correlation coefficient is on the basis of the statistical significance of the correlation, based on the concept of sampling error. Still another way of interpreting a correlation coefficient is in terms of variance. In fact, the variance of the scores that the researcher wants to predict is divided into two parts: one that is explained by the predictor or treatment variable, and the other that is explained by other factors (generally unknown) including sampling error. The researcher finds the percentage of explained variance by calculating r², popularly called the coefficient of determination. Then, the percentage of variance not explained by the predictor variable is 1 − r². Let us take an example to illustrate this fact. Suppose the researcher wants to predict general academic achievement (Y variable) on the basis of IQ (X variable) and he obtains an r of 0.65 between IQ and general academic achievement. He can use this correlation to find r² (0.65² = 0.42). This means that 42 per cent of the variance in general academic achievement is predictable from the variance of IQ. It also means that 58 per cent of the variance of general academic achievement is due to factors other than IQ, such as home environment, school environment, level of motivation, etc.

Still another way of interpreting Pearson r is in terms of the standard error of estimate and the coefficient of alienation denoted by the letter K. As we know, in predicting X on the basis of Y, or Y on the basis of X, we also compute the standard error of estimate for gauging the predictive strength. In fact, the standard error of estimate is the standard deviation of the difference between the actual values of the predicted variable and those estimated from the regression equation. Briefly, it is simply the standard deviation of the errors of estimate. The formulas for the standard error of estimate are:

SE(est Y) = σ_y √(1 − r²)   (Y on the basis of X)    (23.30)
SE(est X) = σ_x √(1 − r²)   (X on the basis of Y)    (23.31)

Researchers have shown that it is possible to gauge the predictive strength of a correlation from √(1 − r²). This √(1 − r²) is called the coefficient of alienation and is symbolized by the letter K.

K = √(1 − r²)    (23.32)
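The raw-score computation of Table 23.22, the coefficient of determination and the coefficient of alienation are easy to verify mechanically. A short Python sketch (not part of the original text; the score pairs are those of the worked example):

```python
import math

# The ten score pairs of Table 23.22: X = intelligence, Y = anxiety.
X = [10, 5, 6, 3, 8, 12, 13, 20, 15, 10]
Y = [8, 20, 15, 13, 16, 20, 13, 11, 10, 12]
N = len(X)

sum_x, sum_y = sum(X), sum(Y)                     # 102, 138
sum_x2 = sum(x * x for x in X)                    # 1272
sum_y2 = sum(y * y for y in Y)                    # 2048
sum_xy = sum(x * y for x, y in zip(X, Y))         # 1336

# Raw-score ("machine") formula 23.28.
num = N * sum_xy - sum_x * sum_y                  # -716
den = math.sqrt((N * sum_x2 - sum_x ** 2) * (N * sum_y2 - sum_y ** 2))
r = num / den                                     # -0.39

r2 = r * r                  # coefficient of determination
K = math.sqrt(1 - r2)       # coefficient of alienation, formula 23.32

print(round(r, 2), round(K, 2))                   # -0.39 0.92
```

With only about 15 per cent of the variance shared (r² = 0.15), the coefficient of alienation is close to 1, which matches the text's conclusion that the two tests are essentially uncorrelated.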
K measures the absence of relationship between two variables just as r measures the presence of relationship between two variables. In simple words, when K = 1, r = 0, and when K = 0, r = 1.00. This obviously means that the larger the value of K, the smaller the relationship, and the smaller the value of K, the larger and more accurate will be the forecast from X to Y and vice versa. The coefficient of alienation thus provides us a measure of the extent to which the errors of estimate (while making prediction) are reduced. When r = 1.00, we make no errors in predicting one score from the other, and the coefficient of alienation becomes 0.00. When r = 0.00, the coefficient of alienation is 1.00 and, therefore, the standard error of estimate becomes the same as the standard deviation of the marginal distribution of the dependent variable. When r = 0.50, K (the coefficient of alienation) is 0.87, which indicates that our errors of estimate are 87% as large as they would be if there were no correlation, a reduction of only 13% (100% − 87% = 13%).

There are three important applications of correlation.

(i) Prediction: If two variables are correlated in some systematic way, it is possible to use one of the variables for making an accurate prediction about the other. If the correlation is perfect (1.00), this prediction is hundred per cent accurate.

(ii) Validity of the test: Frequently, the researcher develops a new test and wants to validate it. In other words, he wants to know if the test is truly measuring what it is intended to measure. One common way of knowing about this validity is to correlate the new test with some other test that is measuring what the new test is intended to measure. For example, a newly developed intelligence test may be correlated against another intelligence test developed earlier. If the correlation is high, it is said that the newly developed intelligence test has a sufficient degree of validity.

(iii) Theory verification: Correlation is also used in the task of theory verification. Many psychological theories make specific predictions about the relationship between two variables. For example, a social psychologist may develop a theory predicting a relationship between personality type and prosocial behaviour. Likewise, a physiological psychologist may predict a relationship between brain size and learning ability. In each case, the prediction of the theory could be tested by calculating the correlation between the two concerned variables.

5. Partial correlation and Multiple correlation

When X and Y become correlated with each other, there is always the possibility that this correlation is due to the association between each of the two variables and a third variable. For example, among a group of schoolchildren of different ages, the researcher may find a high correlation between size of vocabulary and height. In fact, this correlation may not display a genuine correlation between size of vocabulary (X) and height (Y) but may result from the fact that both vocabulary size and height may be related to a third variable, that is, age (Z). Likewise, if the researcher has found a high correlation between academic achievement and time devoted to study, it may not be true because it may result from the fact that both variables may be related to a third variable, that is, intelligence.

In designing any experiment, the researcher has the alternative of either introducing experimental control to eliminate the influence of the third variable or using statistical methods to eliminate the influence of the third variable. Suppose the experimenter wishes to determine the relation between ability to memorize (X) and ability to solve certain kinds of problems (Y), and both these variables are related to intelligence (Z). Therefore, for determining the genuine correlation between ability to memorize and ability to solve problems, it is essential that the third variable, that is, intelligence, must be controlled. If the experimenter wants to control intelligence through experimental control, he might choose subjects with equal intelligence. But if somehow the experimental control is not feasible, then statistical control can be applied. Partial correlation is one such statistical method that allows the researcher to determine the correlation between two variables by controlling the impact of the third variable on the size and direction of the correlation.

In simple words, the partial correlation coefficient is a measure of linear relationship between two interval-level variables controlling for one or more other variables. Controlling for a variable is also called partialing out, or adjusting for, or holding a variable constant; the meaning is the same. Like an ordinary correlation coefficient, the partial correlation coefficient ranges from +1.00 to −1.00. When the correlation between X and Y is found by controlling only one other variable, it is called first-order partial correlation, but when two variables are controlled, it is called second-order partial correlation.

Let us give a brief look at the logic of the partial correlation coefficient. Technically, the partial correlation between variables X and Y controlling for variable Z is the correlation between the residuals of the regressions of X on Z and Y on Z. In simple words, one can think of the control variable (Z) as serving as the independent variable for predicting X and Y in two separate regression analyses. In this process, two sets of residuals are computed. One set of residuals represents the variation in X not explained by Z, and the other set of residuals represents the variation in Y not explained by Z. The partial correlation between X and Y controlling for Z can be easily computed by correlating these two sets of residuals.

The formulas for computing partial correlation are as under.

First-order partial correlation:

r_xy.z = (r_xy − r_xz r_yz) / √[(1 − r_xz²)(1 − r_yz²)]    (23.33)

Second-order partial correlation:

r_xy.zA = (r_xy.z − r_xA.z r_yA.z) / √[(1 − r_xA.z²)(1 − r_yA.z²)]    (23.34)

(Computational equations for higher-order partial correlation coefficients will not be discussed here. The order of a partial correlation coefficient indicates the number of variables controlled.)

where, X = independent variable; Y = dependent variable; Z = control variable; A = control variable.

Let us take an example to illustrate the computation of first-order partial correlation.

(a) Zero-order correlation between ability to memorize (X) and ability to solve certain problems (Y) = r_xy = 0.80
(b) Zero-order correlation between X and intelligence (Z) = r_xz = 0.50
(c) Zero-order correlation between Y and Z = r_yz = 0.60

r_xy.z = [0.80 − (0.50)(0.60)] / √[(1 − 0.50²)(1 − 0.60²)]
       = (0.80 − 0.30)/√[(0.75)(0.64)] = 0.50/0.69 = 0.72
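The partialling arithmetic of formula 23.33 can be wrapped in a small helper. A Python sketch (not part of the original text) reproducing the worked example:

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z (formula 23.33):
    the correlation between X and Y with Z held constant."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Worked example: memorizing (X), problem solving (Y), intelligence (Z).
r_xy_z = partial_r(0.80, 0.50, 0.60)
print(round(r_xy_z, 2))   # 0.72

# A second-order coefficient (formula 23.34) is the same operation applied
# to first-order coefficients: partial_r(r_xy.z, r_xA.z, r_yA.z).
```

Note that the same helper recursively handles higher orders, which mirrors the text's remark that the order of a partial correlation simply counts the controlled variables.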
This shows that if all the subjects had been of equal intelligence, then the correlation between ability to memorize and ability to solve certain problems would have been 0.72 rather than 0.80, that is, when the control variable (Z) is completely controlled.

However, sometimes a situation may arise where the correlation between X and Y is to be computed with Z removed from one variable only, say from Y, and then X may be correlated with the residual of Y. This type of correlation is called semipartial correlation or part correlation. The semipartial correlation with Z removed from Y is the correlation between X and the residuals of Y:

r_x(y.z) = (r_xy − r_xz r_yz) / √(1 − r_yz²)    (23.35)

If r_xy = 0.75, r_xz = 0.80 and r_yz = 0.75, then the part correlation is:

r_x(y.z) = [0.75 − (0.80)(0.75)] / √(1 − 0.75²) = 0.15/0.66 = 0.23

The application of part correlation generally arises where measurements (or scores) have been obtained from the subjects or participants prior to giving the experimental treatment, and subsequent measurements are obtained on the same participants under the experimental treatment.

Multiple correlation, in its simplest form, is denoted by the letter R. Multiple correlation is a measure of linear relationship between one variable and a combination of at least two other variables. The multiple correlation coefficient (R) is defined as a measure of linear relationship between a dependent variable and the combined effects of two or more independent variables. In multiple correlation, R estimates the correlation between the scores actually earned on the criterion variable and the scores predicted on the criterion variable from two or more predictor variables through the multiple regression equation. Making predictions in this situation is called multiple regression. When we want to know the R between variable 1 (the dependent variable) and the combined effects of variables 2 and 3 (two independent variables), the formula for the multiple correlation coefficient becomes like this:

R_1.23 = √[(r_12² + r_13² − 2 r_12 r_13 r_23) / (1 − r_23²)]    (23.36)

Let us take an example.

Variable 1: Academic achievement
Variable 2: Study habit
Variable 3: Intelligence

Zero-order correlations between these three variables are as under:

r_12 = 0.50
r_13 = 0.60
r_23 = 0.40

R_1.23 = √{[0.50² + 0.60² − 2(0.50)(0.60)(0.40)] / [1 − (0.40)²]}
       = √(0.37/0.84) = √0.4405 = 0.66

The multiple correlation coefficient R is interpreted in the same way as the correlation coefficient r. It is a measure of relation between the dependent variable and the combined effects of the independent variables. Just as r² is a proportion, so also R² is a proportion. When R is squared, R² is called the coefficient of multiple determination, which indicates the proportion of variance in the dependent variable that is explained jointly by two or more independent variables. In the above example, R = 0.66 and therefore R² = 0.4356 = 0.44. It means that 44% of the variation in academic achievement has been explained by study habit and intelligence, and the remaining 56% has not been explained. Note that R² is never smaller than any single r² between the dependent variable and an independent variable.
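The multiple-correlation arithmetic of formula 23.36 can likewise be verified with a short helper. A Python sketch (not part of the original text), using the three zero-order correlations of the example:

```python
import math

def multiple_R(r_y1, r_y2, r_12):
    """Multiple correlation R of a criterion with two predictors
    (formula 23.36)."""
    num = r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12
    return math.sqrt(num / (1 - r_12 ** 2))

# Worked example: academic achievement (1) from study habit (2) and
# intelligence (3), with r12 = 0.50, r13 = 0.60, r23 = 0.40.
R = multiple_R(0.50, 0.60, 0.40)
print(round(R, 2), round(R * R, 2))   # 0.66 0.44
```

R² here is the coefficient of multiple determination discussed in the text: the share of criterion variance explained jointly by the two predictors.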
Correlation and Causation

A very common error in interpreting correlation is to assume that a correlation necessarily implies a cause-and-effect relationship between the two variables. Our lives are constantly bombarded with reports of relationships between two variables. A few examples are: cigarette smoking is related to lung disease; alcohol consumption is related to birth defects; hard labour during examinations by students is related to good grades; carrot consumption is related to good eyesight. Do these reports indicate a cause-and-effect relationship between the two concerned variables? Does smoking cause lung disease, or does carrot consumption cause good eyesight? The answer is no. What, in fact, is being said is that correlational studies simply do not allow the inference of causation. Correlation is a necessary but not a sufficient condition to establish a causal relationship between two variables. A faulty causal inference from correlational data is called the post hoc fallacy.

NONPARAMETRIC STATISTICS

The important nonparametric statistics which have been included in this book are as follows:

1. Chi-square (χ²) test
2. Mann-Whitney U test
3. Rank-difference methods (both rho and tau)
4. Kendall's partial rank order correlation
5. Coefficient of concordance (W)
6. Median test
7. Kruskal-Wallis H test
8. Friedman test

Each of these techniques has been discussed below.

Chi-square (χ²) Test

The chi-square is one of the most important nonparametric statistics, which is used for several purposes. This test was originally developed by Karl Pearson and is therefore also sometimes called the Pearson chi-square. Due to its smooth uses for various purposes, Guilford (1956) has called it the general-purpose statistic. It is a nonparametric statistic because it involves no assumption regarding the normalcy of the distribution or the homogeneity of the variances. The chi-square test is used when the data are expressed in terms of frequencies or proportions or percentages. The basic ideas behind the use of χ² are that (a) we have a theory of some kind concerning how our cases should be distributed, (b) we have a sample which shows how the cases actually are distributed, and (c) we want to know whether or not the differences between the theoretical frequencies and the observed frequencies are of such size that these differences might reasonably be attributed to the chance factor. The chi-square applies only to discrete data. However, any continuous data can be reduced to categories in such a way that they can be treated as discrete data, and then the application of chi-square is justified. The formula for calculating χ² is given below.
χ² = Σ[(f_o − f_e)²/f_e]    (23.37)

where, χ² = chi-square; f_o = obtained or observed frequency; and f_e = expected (theoretical) frequency.

χ² = Σ(f_o²/f_e) − N    (23.38)

where N = the total sum of the observations. The calculation of χ² with formula 23.38 requires less arithmetic than the calculation with formula 23.37. Hence, it saves time.

There are several uses of the chi-square test.

First, chi-square may be used as a test of the equal probability hypothesis. By the equal probability hypothesis, we mean the probability of having equal frequencies in all the given categories. Suppose, for example, 100 students answer an item in an attitude scale. There are five categories of response options: strongly agree, agree, neutral, disagree and strongly disagree. According to the equal probability hypothesis, the expected frequency of responses of students would be 20 in each. The chi-square test would test whether or not the equal probability hypothesis is tenable. If the value of the chi-square test is significant, the equal probability hypothesis becomes untenable, and if the value of the chi-square is not significant, the equal probability hypothesis becomes tenable.

The second use of the chi-square test is in testing the significance of the independence hypothesis. By the independence hypothesis it is meant that one variable is not affected by, or related to, another variable and hence, these two variables are independent. The chi-square is not a measure of the degree of relationship in such a situation. It merely provides an estimate of some factors other than chance (or sampling error) which account for the apparent relationship. Generally, in dealing with data related to the independence hypothesis, they are first arranged in a contingency table. When observations on two variables are classified in a two-way table, the data are known as contingency data and the table as the contingency table. Independence in a contingency table exists only when each tally exhibits a different event or individual.

The third important use of chi-square is in testing a hypothesis regarding the normal shape of a frequency distribution. When chi-square is used in this connection, it is commonly referred to as a test of goodness-of-fit.

The fourth use of chi-square is in testing the significance of several statistics. For example, for testing the significance of the phi-coefficient, the coefficient of concordance and the coefficient of contingency, we convert the respective values into chi-square values. If the chi-square value appears to be a significant one, we also take the original values as significant.

As an illustration, let us take an example of a 3 × 3 contingency table, which shows data of 200 students who were classified into three classes on the basis of their educational qualification (see Table 23.23). Their educational attainment is measured in the course of study by classifying them as superior, average and inferior. Now, the question is: Is educational achievement related to educational qualification? The obtained data have been shown in Table 23.23. For the moment, omit the data given in parentheses because they indicate the expected frequency and not the observed frequency.

Table 23.23. The use of chi-square in a 3 × 3 contingency table

               Superior    Average    Inferior
Master         30 (25)     15 (15)     5 (10)      50
Bachelor       25 (25)     10 (15)    15 (10)      50
Intermediate   45 (50)     35 (30)    20 (20)     100
               100         60         40          200

The first step in calculating χ² as a test of the significance of the independence hypothesis, or of the relationship between educational achievement and educational qualification, is to compute the expected frequency for each cell. The null hypothesis is that these two variables are not related or are independent, and if this hypothesis is true, the expected frequencies should be as follows.

Cells of Table 23.23        Expected frequency
Upper left                  (100 × 50)/200 = 25
Upper middle                (60 × 50)/200 = 15
Upper right                 (40 × 50)/200 = 10
Middle left                 (100 × 50)/200 = 25
Middle middle               (60 × 50)/200 = 15
Middle right                (40 × 50)/200 = 10
Lower left                  (100 × 100)/200 = 50
Lower middle                (60 × 100)/200 = 30
Lower right                 (40 × 100)/200 = 20
                            Σf_e = 200

After calculating the expected frequency for each cell, the chi-square may be calculated as shown below:

f_o      f_e      f_o − f_e    (f_o − f_e)²    (f_o − f_e)²/f_e
30       25       +5           25              1.00
15       15        0            0              0
 5       10       −5           25              2.50
25       25        0            0              0
10       15       −5           25              1.67
15       10       +5           25              2.50
45       50       −5           25              0.50
35       30       +5           25              0.83
20       20        0            0              0
Σf_o = 200   Σf_e = 200   Σ(f_o − f_e) = 0     χ² = 9.00

df = (r − 1)(c − 1) = (3 − 1)(3 − 1) = 2 × 2 = 4
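The 3 × 3 computation can be verified in a few lines. A Python sketch (not part of the original text) that derives the expected frequencies from the marginal totals and applies both formula 23.37 and the shortcut formula 23.38:

```python
# Chi-square for the 3x3 contingency table of Table 23.23.
observed = [[30, 15, 5],
            [25, 10, 15],
            [45, 35, 20]]

row_totals = [sum(row) for row in observed]           # 50, 50, 100
col_totals = [sum(col) for col in zip(*observed)]     # 100, 60, 40
N = sum(row_totals)                                   # 200

# Expected cell frequency under independence: (row total x column total) / N.
expected = [[r * c / N for c in col_totals] for r in row_totals]

# Definitional formula 23.37.
chi_sq = sum((fo - fe) ** 2 / fe
             for o_row, e_row in zip(observed, expected)
             for fo, fe in zip(o_row, e_row))         # 9.0

# Shortcut formula 23.38 gives the same value.
chi_sq_short = sum(fo ** 2 / fe
                   for o_row, e_row in zip(observed, expected)
                   for fo, fe in zip(o_row, e_row)) - N

df = (len(observed) - 1) * (len(observed[0]) - 1)     # 4
```

Since 9.00 falls below the tabled critical value for df = 4 at the 0.05 level (9.488), the script agrees with the text's decision to retain the null hypothesis.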
Bebarioural
Sciences

Methods
im
Resvanch

Measurements
and

644 Tess,
w e find
that the value
of

sa uare f chi-s for arrying Out Statistical


table of
chi-square,
chi-square is
below it (p df = ho
orobability table of chi-square, we find Analyses 645
probability we con 4a Enteri
Enteringthe should 10.827. As the obtained
obtained

9.488. As
the education be that
be t w o variables,
namely, the 0.001 level; for df 1the value =

level should of
the 0.05
is retained.
Hence, the
to be independent. For clude that nos. 66 an
item nos. and 10 are not value of the
chi-square chi-square at
the null
hypothesis
in the present
study a r e tound
1}wherer the
numhnlatina
= of rowsgand
df it happens tthat with 1
independent,
that is, they are
is much
related.
above it, we
ometimes df, any one of the
-

attainment -
IK
educational

formula, as noted
above,
is(r situation, a corection called expected cell frequencies becomes less
chi-squaretestthe
than 5. In
columns. X* becomes muchei 0ested that Yates' correction forYates' correction for continuity
rs have suggested
By applying formula 23.38 we find chi-square in a simpler way, as the process avoids many of the arithmetical calculations. Thus:

χ² = (36 + 25 + 40.5 + 15 + 6.67 + 40.83 + 2.5 + 22.5 + 20) − 200 = 209 − 200 = 9

Thus, we get exactly the same value of chi-square as was obtained with the help of formula 23.37.

Chi-square in a 2 × 2 table: When data have been arranged in a 2 × 2 contingency table, where df = 1, we need not calculate the expected frequencies. In such a situation, chi-square can be directly calculated with the help of the following equation:

χ² = N(|AD − BC|)² / [(A + B)(C + D)(A + C)(B + D)]   (23.39)

where A, B, C and D = symbols for the frequencies of the four cells in a 2 × 2 table; N = total number of frequencies; and the bars (| |) indicate that in subtracting BC from AD, the sign is ignored.

Suppose the researcher wants to know whether or not two given items in a test are independent. Both items have been answered in "Yes" or "No" form. The test was administered to a sample of 400 students and the obtained data were as follows:

Table 23.24 Chi-square in a 2 × 2 table

                                Item No. 6
                          Yes           No
Item No. 10    No     180 (A)      120 (B)     300
               Yes     90 (C)       10 (D)     100
                      270           130        400

According to the formula:

χ² = 400[|(180)(10) − (120)(90)|]² / [(300)(100)(270)(130)]
   = (400 × 81000000)/1053000000 = 32400000000/1053000000 = 30.769

df = (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1

Since the obtained value of chi-square far exceeds the value of 3.841 required at the 0.05 level for df = 1, the null hypothesis of independence is rejected; the two items are not independent.

Yates' correction in a 2 × 2 table: The correction for continuity proposed by Yates is applied when any of the expected frequencies in a 2 × 2 table goes below 10. Where frequencies are large, this correction makes no appreciable difference, but where frequencies are small it may decide whether or not the difference is significant. Yates' correction consists in reducing the absolute value of the difference between f_o and f_e by 0.5; that is, each f_o which is larger than its f_e is decreased by 0.5, and each f_o which is smaller than its f_e is increased by 0.5. The formula for chi-square in such a situation is as given below:

χ² = N(|AD − BC| − N/2)² / [(A + B)(C + D)(A + C)(B + D)]   (23.40)

where A, B, C, D and N are defined as usual.

Suppose 60 students (50 boys and 10 girls) were administered an attitude scale. The items were to be answered in "Yes" and "No" form. Their opinions towards item no. 10 are presented in Table 23.25. The question is: Do the opinions of boys and girls differ significantly?

Table 23.25 Chi-square with Yates' correction in a 2 × 2 table

             Yes         No
Boys        20 (A)     30 (B)     50
Girls        3 (C)      7 (D)     10
            23         37         60

According to Equation 23.40:

χ² = 60[|(20)(7) − (30)(3)| − 60/2]² / [(50)(10)(23)(37)]
   = 60[|140 − 90| − 30]² / 425500 = (60 × 400)/425500 = 24000/425500 = 0.056

In the above example, one expected frequency (23 × 10/60 = 3.83) is less than 5; hence, chi-square has been calculated by Equation 23.40. Entering the table for chi-square, we find that for df = 1 the value of chi-square at the 0.05 level should be 3.841. Since the obtained value is less than it (p > 0.05), we conclude that the opinions of boys and girls do not differ significantly.
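Both fourfold-table formulas are easy to verify with a few lines of code. The sketch below is in plain Python; the function name chi_square_2x2 is ours, not the book's. It reproduces the two worked examples above.

```python
# Chi-square for a 2x2 table directly from the cell frequencies A, B, C, D
# (Equations 23.39 and 23.40). Minimal sketch; no input validation.

def chi_square_2x2(a, b, c, d, yates=False):
    """Chi-square for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    diff = abs(a * d - b * c)
    if yates:                 # Yates' correction: reduce |AD - BC| by N/2
        diff -= n / 2
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * diff ** 2 / denom

# Table 23.24: two test items answered Yes/No by 400 students
print(round(chi_square_2x2(180, 120, 90, 10), 3))          # 30.769
# Table 23.25: boys vs girls, with Yates' correction
print(round(chi_square_2x2(20, 30, 3, 7, yates=True), 3))  # 0.056
```

For routine work the same results (and an exact p value) come from scipy.stats.chi2_contingency, whose correction argument toggles Yates' correction.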
Some important assumptions and restrictions for using the chi-square test are presented below.

(a) Random sampling: For the proper use of the chi-square test, it is assumed that the sample under study has been selected randomly from the population (Gravetter & Wallnau 1985).

(b) Independence of observations: By independence of observations it is meant that each frequency is generated by a different subject or person. This should not be confused with independence between the variables, as found in the chi-square test of independence. No subject should contribute to more than one frequency; in simple words, no person can be classified in more than one category or counted more than once.

(c) Size of expected frequencies: The chi-square test should not be used when the expected frequencies are small. Although using chi-square with small expected frequencies may be acceptable arithmetically, it is inappropriate and misleading, because the probability of getting a significant result, even if the research hypothesis is true, may be quite slim. In other words, with small expected frequencies the power of the chi-square test is very low, and thus the risk of a Type II error becomes high (Aron, Aron & Coups 2006).

Let us consider the contribution of a single cell to the size of chi-square. Suppose a cell has f_o = 6 and f_e = 1. The contribution of this cell to the total value of chi-square will be:

(f_o − f_e)²/f_e = (6 − 1)²/1 = 25/1 = 25

Now consider another instance, where f_o = 15 and f_e = 10. The difference between f_o and f_e is still 5, but the contribution of this cell to the total chi-square value differs:

(f_o − f_e)²/f_e = (15 − 10)²/10 = 25/10 = 2.5

It is clear that a small f_e value can have too great an impact upon the chi-square value. The test becomes extremely sensitive when f_e values are less than 5. Thus, the chi-square test should not be used when any of the expected cell frequencies is less than 5.

(d) Assumption of continuity: The theoretical chi-square distribution is assumed to be the distribution of a continuous variable. This assumption is clearly violated when there is only one degree of freedom: when chi-square is calculated for all possible samples of a given size, the resulting distribution of sample chi-squares is not continuous. In such a situation, statisticians have suggested the use of Yates' correction for continuity, which consists of subtracting 0.5 from the absolute value of the difference between f_o and f_e (Gravetter & Wallnau 1987).

Associated with this assumption of continuity, there is controversy regarding the minimum expected frequency. A research article by Lewis and Burke (1949) demonstrated nine common errors in the use of chi-square. One common error is related to expected frequencies that are too low. Most statisticians are of the view that every cell of a contingency table should have a reasonable size of expected frequency. Fisher (1938) recommended 10 as the minimum. Some recommended a minimum of 10, with 5 as the bottom limit. Still others recommended that the minimum should be some proportion or percentage of the total, or that it depended on whether the expected frequencies were equal or not. A rule that has generally been adopted in the 1 degree of freedom situation is that the expected frequency should be equal to or greater than 5; when df > 1, the expected frequency should be equal to or greater than 5 in at least 80% of the cells. When these requirements are not met, other statistical tests are available (Siegel & Castellan 1988). However, a major and significant review of the research on the topic was done by Delucchi (1983). He drew two major conclusions, as under:

(i) The chi-square test may be properly used in cases where the expected frequencies are much lower than the value conventionally considered permissible. The most important principle to be followed is that there should be at least five times as many individuals as there are cells. For example, a cell with a very low expected frequency would be acceptable in a 2 × 2 contingency table if there are at least 20 subjects in the overall study.

(ii) In case the researcher has a table larger than 2 × 2 with a cell having an extremely low expected frequency, one common step would be to combine related categories so as to increase the expected frequency relative to the total number of cells. But this is a solution of the last resort. The best solution will be to add more participants to the study.

The chi-square test has some limitations, too. The important limitations are as under:

(a) As said earlier, the chi-square test cannot be used when the researcher has counted or included some people more than once. This error produces what is called an inflated N; it is extremely serious and may easily lead to the rejection of the null hypothesis when, in fact, it is true (Type I error).

(b) Another limitation of chi-square stems from the fact that the value of chi-square is proportional to the sample size. Let us take an example. In a 2 × 2 table the following data were obtained with respect to the relationship between sex and the level of anxiety (expected frequencies appear in brackets in each cell):

                               Sex
                        Male          Female
Level of     High    32 (27.5)     28 (32.5)      60
anxiety      Low     23 (27.5)     37 (32.5)      60
                     55            65            120

χ² = 2.72

When the observed frequency of each cell is doubled, that is, 64, 56, 46 and 74, χ² would be equal to 5.44 rather than 2.72. Despite the fact that the relationship has not changed, the researcher would now reject rather than accept the null hypothesis. For this reason, many researchers prefer to avoid the chi-square test when dealing with large samples, because the results can be very misleading.

(c) Still another limitation arises from the generally adopted rule related to situations with small N's, that is, when the expected frequency in some cell is small. The common rule is that in the 1 degree of freedom situation, the expected frequency in all cells should be equal to or greater than 5; when df > 1, the expected frequency should be equal to or greater than 5 in at least 80% of the cells. When these requirements are not met, other statistical tests are available (Siegel & Castellan 1988).
Mann-Whitney U Test

The Mann-Whitney U test is a nonparametric substitute for the parametric t test. The test was jointly published by H. B. Mann and D. R. Whitney in 1947. Needless to say, the Mann-Whitney U test is used when the researcher is interested in testing the significance of the difference between two independently drawn samples or groups, that is, two separate and uncorrelated groups. For application of the U test it is essential that the data have been obtained on an ordinal level of measurement, that is, in terms of ranks. Where the data have been obtained in terms of scores, for application of the Mann-Whitney U test it is essential that those scores be converted into ranks, which can be done without much loss of information. It is not necessary for the application of the Mann-Whitney U test that the two groups be of unequal size; the test can also be applied to groups having equal size.
The rationale behind the Mann-Whitney U test is that if the two groups are really from the same population, then the ratio of the sums of ranks in the two groups will be roughly proportional to the ratio of the numbers of cases in the two groups. However, if the sums of ranks are not so proportional, the investigator rejects the null hypothesis that the two groups under study do not differ. When the sample sizes are very small, that is, when both N1 and N2 are made up of 20 or fewer cases, the student is referred to Siegel's (1956) book, where the methods of dealing with such samples, along with the applicable tables, are given. In this book we shall deal with the methods applicable to samples of larger size, say, more than 20 cases. The only reason for selecting these methods is that in research we are more often confronted with sample sizes which are greater than 20. The equations for calculating the Mann-Whitney U test are as given below:

U1 = N1N2 + N1(N1 + 1)/2 − ΣR1   (23.41)

U2 = N1N2 + N2(N2 + 1)/2 − ΣR2   (23.42)

Let us illustrate the calculation of the Mann-Whitney U test, which can be done by either Equation 23.41 or Equation 23.42. Table 23.26 presents the scores of two groups on the Lie Scale. Group I has 10 subjects and Group II has 21 subjects. The first step is to rank all the scores (taking both sets of scores together) in one combined distribution in an increasing order of size. In Table 23.26 the lowest score is 7 (second column); hence, we give it a rank of 1. The next score is 8, which is again in the second column, and it has been given a rank of 2. The third score from below is 10 (in the first column), which has been given a rank of 3. In this way, ranking is continued until all scores receive ranks. Subsequently, the two columns of ranks are summed.

Table 23.26 Calculation of the Mann-Whitney U test for larger sample sizes — Gr. I (N1 = 10) and Gr. II (N2 = 21) with their ranks R1 and R2; the rank sums are ΣR1 = 88.5 and ΣR2 = 407.5.

At this juncture, a check on the arithmetical calculation is imposed. The check is that the sum of these two columns of ranks must be equal to N(N + 1)/2:

Check: ΣR1 + ΣR2 = 88.5 + 407.5 = 496; N(N + 1)/2 = 31(32)/2 = 496

Hence, we can proceed:

(by Equation 23.41) U1 = (10)(21) + (10)(10 + 1)/2 − 88.5 = 176.5

(by Equation 23.42) U2 = (10)(21) + (21)(21 + 1)/2 − 407.5 = 33.5

For testing the significance of the obtained U, its value is converted into a z score as shown below:

z = (U − N1N2/2) / √[N1N2(N1 + N2 + 1)/12]   (23.43)

z = (176.5 − (10)(21)/2) / √[(10)(21)(10 + 21 + 1)/12] = 71.5/23.664 = 3.02

A z score of ±1.96 is taken to be significant at the 0.05 level of significance, and if the z score is greater than even ±2.58, we take it to be significant at the 0.01 level. Since the obtained z is 3.02, we can take the value of the Mann-Whitney U to be a significant one. Rejecting the null hypothesis, it is concluded that the two groups differ significantly on the measures of the Lie Scale.

According to Equation 23.42, the obtained value of the Mann-Whitney U differs, but the value of the z score remains unaffected; only its sign is changed:

z = (33.5 − (10)(21)/2) / √[(10)(21)(10 + 21 + 1)/12] = −71.5/23.664 = −3.02

We obtain the same z score, but the sign is negative. The change of sign makes no difference in the interpretation of the Mann-Whitney U test.
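The large-sample computation above can be sketched in a few lines of Python (the function name is ours, not the book's; ties are assumed to have been handled while ranking, and no tie correction is applied to the standard deviation):

```python
import math

# Large-sample Mann-Whitney U with the normal approximation
# (Equations 23.41 and 23.43).

def mann_whitney_z(n1, n2, sum_r1):
    """Return U for group 1 (from its rank sum) and the corresponding z."""
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - sum_r1
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u1, (u1 - mean_u) / sd_u

# Lie Scale example: N1 = 10, N2 = 21, rank sum of Group I = 88.5
u1, z = mann_whitney_z(10, 21, 88.5)
print(u1, round(z, 2))   # 176.5 3.02
```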


Rank-Difference Methods

The methods of correlation which are based upon differences in ranks are very popular among behavioural scientists. There are two most common methods based upon the differences in the ranks assigned on the X and Y variables. One is the Spearman rank-difference method and the other is the Kendall rank-difference method.

The Spearman coefficient, symbolized by ρ (read rho), is a rank-difference correlation between two sets of ranks. The method has been named after Spearman, who discovered it. It is applicable when the number of pairs of scores or ranks is small, that is, 30 or below, and it is a better choice when the researcher is not concerned with whether or not the relationship is linear. The equation is:

ρ = 1 − 6ΣD² / [N(N² − 1)]   (23.44)

where ρ = Spearman's rank-difference correlation coefficient; D = difference between paired ranks; and N = number of pairs of ranks or scores.

To illustrate the calculation of ρ, let us consider the data given in Table 23.27, which presents the scores of 12 students on an intelligence test (X) as well as on an educational test (Y). In computing ρ by Equation 23.44, the first step is to rank both sets of scores separately, giving the highest score a rank of 1, the next highest score a rank of 2, and so on. Then, keeping the algebraic signs in view, the difference between the two sets of ranks is computed and noted in column D. Subsequently, each difference is squared and noted under column D². Substituting the values in the equation, we get a ρ of −0.185:

ρ = 1 − (6)(339) / [12(144 − 1)] = 1 − 2034/1716 = 1 − 1.185 = −0.185

Table 23.27 Illustration of the Spearman rank-difference correlation — 12 pairs of scores with their ranks (Rank1, Rank2), the differences D between the ranks and the squared differences D²; ΣD² = 339.00.

Following Siegel & Castellan (1988, p. 360) we can test the significance of the obtained ρ. Since the obtained value of ρ is less than the value given at the 0.05 level (p > 0.05) for N = 12, we accept the null hypothesis and conclude that X and Y are independent, and whatever correlation has been found is due to the chance factor.

Relation between Spearman Correlation and Pearson Correlation

If we compute the Spearman correlation from the same set of data from which a Pearson correlation has been computed, how will the two correlations be related? As we know, the Pearson correlation measures a linear relationship, whereas the Spearman correlation measures a monotonic relationship. In most cases, the value obtained by the Spearman correlation will be larger (that is, closer to +1.00 or −1.00) than the value obtained by the Pearson correlation from the same data. This happens because it is easier to be perfect by Spearman's criteria than by Pearson's criteria. However, there is one situation where the Pearson correlation will produce the larger value: when there is an extreme score in the data. A single extreme score can have a large influence on the value of the Pearson correlation; it tends to exaggerate the magnitude of the correlation, bringing it nearer to +1.00 or −1.00 than would be expected from the other score points. The exaggerated influence of an extreme score is eliminated with the Spearman correlation, because the ranking process reduces the distance between adjacent scores to exactly one (that is, first to second, second to third, third to fourth, and so on). This fact is illustrated below (N = 5).

[Scatter diagram: five score pairs, four clustered near the origin and one extreme point at (15, 15); Pearson r = 0.96, Spearman rho = 0.50.]

It is obvious from the above example that the single score point (15, 15) has inflated the value of Pearson r (0.96), making it much larger than Spearman rho (0.50) and bringing it closer to +1.00.
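Equation 23.44 is only a few lines of code. The sketch below (function name ours) assumes the ranks have already been assigned; it also checks the worked result from Table 23.27, where N = 12 and ΣD² = 339.

```python
# Spearman's rho from two lists of ranks (Equation 23.44).
# Minimal sketch: no tie handling beyond whatever ranks are supplied.

def spearman_rho(ranks_x, ranks_y):
    n = len(ranks_x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks_x, ranks_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly reversed ranks give the minimum value:
print(spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]))        # -1.0

# Table 23.27: N = 12, sum of D^2 = 339
rho_book = 1 - 6 * 339 / (12 * (12 ** 2 - 1))
print(round(rho_book, 3))                              # -0.185
```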
Another method of computing the rank-difference correlation has been developed by Kendall. The method is known as Kendall's tau, for which the formula is as follows:

τ = S / [½N(N − 1)]   (23.45)

where τ = Kendall's tau; S = the actual total of the agreement scores, computed as shown below; and N = number of objects or scores which have been ranked.

Suppose 12 students (A to L) have been administered two tests and their scores are presented in Table 23.28. The first step is to rank both sets of scores, giving the highest score a rank of 1, the next highest a rank of 2, and so on. Table 23.29 presents the ranks based upon the two sets of scores given in Table 23.28.

Table 23.28 Scores of 12 students on the X and Y tests

       A    B    C    D    E    F    G    H    I    J    K    L
X     23   22   24   19   28   30   10   20   26   17   16   15
Y     49   77   76   72   45   36   35   70   80   40   43   38

Table 23.29 Ranks based upon the two sets of scores given in Table 23.28

       A    B    C    D    E    F    G    H    I    J    K    L
X      5    6    4    8    2    1   12    7    3    9   10   11
Y      6    2    3    4    7   11   12    5    1    9    8   10

Subsequently, the ranks of the X test are rearranged in a natural order, that is, 1, 2, 3, ..., and the ranks on the Y test are adjusted accordingly. Table 23.30 presents the ranks so rearranged.

Table 23.30 Rearranged order of ranks

X      1    2    3    4    5    6    7    8    9   10   11   12
Y     11    7    1    3    6    2    5    4    9    8   10   12

The value of S is now computed. For this, we start with the rank at the extreme left of the Y row. For each rank, we count separately the number of ranks lying to its right which are above it and the number which are below it. The first rank on the left side of the Y row is 11. Only one rank (that is, 12) falling to the right of it is above 11, and the remaining 10 ranks fall below it. Hence, its contribution to S is 1 − 10. Likewise, the second rank on the Y row is 7. The four ranks falling to its right which are above 7 number 4, and 6 fall below it. Hence, its contribution to S is 4 − 6. Identical procedures are repeated for the other ranks on the Y row. Thus:

S = (1 − 10) + (4 − 6) + (9 − 0) + (7 − 1) + (4 − 3) + (6 − 0) + (4 − 1) + (4 − 0) + (2 − 1) + (2 − 0) + (1 − 0)
  = (−9) + (−2) + (9) + (6) + (1) + (6) + (3) + (4) + (1) + (2) + (1) = 33 − 11 = 22

Substituting in the formula:

τ = S / [½N(N − 1)] = 22 / [½(12)(12 − 1)] = 22/66 = 0.333

The significance of tau is tested by converting it into a z score, the formula for which is given below:

z = τ / √[2(2N + 5) / 9N(N − 1)]   (23.46)

Hence:

z = 0.33 / √[2(2 × 12 + 5) / 9(12)(12 − 1)] = 0.33/√0.0488 = 0.33/0.2209 = 1.49

Since the obtained z score is less than 1.96, we can say that it is not significant even at the 0.05 level. Accepting the null hypothesis, we can say that the given sets of scores are not correlated.

According to Siegel (1956, p. 214), tau has one advantage over rho, and that is that tau can be generalized to a partial correlation coefficient. If both tau and rho are computed from the same data, the answers will not be the same; numerically, they are not equal.
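The counting procedure for S is equivalent to summing, over every pair of positions, +1 when the later Y rank is higher and −1 when it is lower. A short Python sketch (function name ours; the Y sequence is the rearranged ranking of Table 23.30, and no tied ranks are assumed):

```python
# Kendall's tau by the counting method (Equation 23.45): with the X ranks
# in natural order, S is the number of concordant minus discordant pairs
# in the Y ranks.

def kendall_tau(y_ranks):
    n = len(y_ranks)
    s = sum((1 if y_ranks[j] > y_ranks[i] else -1)
            for i in range(n) for j in range(i + 1, n))
    return s, s / (n * (n - 1) / 2)

y = [11, 7, 1, 3, 6, 2, 5, 4, 9, 8, 10, 12]   # Y ranks ordered by X rank
s, tau = kendall_tau(y)
print(s, round(tau, 3))   # 22 0.333
```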
Kendall's Partial Rank-Order Correlation

Whenever a correlation exists between two variables, there is much possibility that the association is due to a third variable. For example, among a group of schoolchildren of diverse ages, one might find a high correlation between height and weight. This correlation may not reflect a genuine correlation between height and weight; rather, it may result from the fact that both height and weight are associated with a third variable, that is, age. Likewise, if one finds a high correlation between the size of vocabulary and memorization ability among a group of children, this correlation may be due to the fact that both these variables are associated with a third variable, that is, intelligence, and the correlation between vocabulary and memorization ability may not be a true one.

In designing an experiment, the researcher has the alternative of either eliminating or controlling the impact of the third variable. Using experimental control, the researcher may, for example, select children of the same intelligence level and then correlate the size of vocabulary and memorization ability. Where such experimental control is not feasible, statistical methods may be applied for exercising the control. Partial correlation is one such statistical method, in which the effect of variation in a third variable upon the relation between the other two variables, namely X and Y, is eliminated. Thus, by using partial correlation, the researcher can compute the relation between memorization ability and the size of vocabulary while holding the influence of intelligence constant.

Kendall's partial rank-order correlation is one such nonparametric statistical method. Here the two variables are measured on an ordinal scale or converted into ordinal measurement, and the influence of the third variable is held constant; no assumptions about the shape of the population distribution of scores need be made. When X and Y are to be correlated while controlling the variable Z, Kendall's tau should first be calculated between X and Y, between X and Z, and between Y and Z. The formula for calculating Kendall's partial rank-order correlation coefficient is as under:

τxy·z = (τxy − τxz τyz) / √[(1 − τxz²)(1 − τyz²)]   (23.47)

where τxy·z = Kendall's partial rank-order correlation; τxy = Kendall's tau between X and Y; τyz = Kendall's tau between Y and Z; and τxz = Kendall's tau between X and Z.

Let us take an example. Suppose the size of vocabulary (X), memorization ability (Y) and intelligence (Z) are three variables. The researcher wants to compute τxy·z, that is, he wants the partial rank-order correlation between X and Y, controlling for the impact of Z. Suppose further that the values of Kendall's tau obtained from a distribution (N = 22) are as under:

τxy = 0.55;  τxz = 0.46;  τyz = 0.63  (Kendall's rank-order correlations)

Now, substituting the values in formula 23.47, we get:

τxy·z = [0.55 − (0.46)(0.63)] / √[(1 − 0.46²)(1 − 0.63²)] = 0.26/0.70 = 0.37

Thus, when the impact of intelligence (Z) is controlled or partialed out, the correlation between X and Y is lowered, or corrected, from 0.55 to 0.37. With larger N (exceeding 20), the significance of τxy·z may be tested by calculating the value of z as under:

z = 3τxy·z √[N(N − 1)] / √[2(2N + 5)]   (23.48)

z = 3(0.37)√[22(22 − 1)] / √[2(2 × 22 + 5)] = 23.86/9.90 = 2.41

Since the obtained value of z exceeds 1.96 but falls short of 2.58, it may be concluded that τxy·z is significant at the 0.05 level but not at the 0.01 level.
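Formulas 23.47 and 23.48 can be sketched as below (function names ours). Note that carrying the unrounded partial tau (about 0.377) through Equation 23.48 gives a z of about 2.46 rather than the 2.41 obtained above from the rounded value 0.37; the conclusion is the same either way.

```python
import math

# Kendall's partial rank-order correlation (Equation 23.47) and its
# large-sample z test (Equation 23.48). Minimal sketch.

def kendall_partial(t_xy, t_xz, t_yz):
    return (t_xy - t_xz * t_yz) / math.sqrt((1 - t_xz ** 2) * (1 - t_yz ** 2))

def partial_tau_z(tau_p, n):
    return 3 * tau_p * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))

t = kendall_partial(0.55, 0.46, 0.63)   # about 0.38 unrounded (0.37 in the text)
z = partial_tau_z(t, 22)
print(round(t, 2), round(z, 2))
```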
Coefficient of Concordance

The coefficient of concordance, symbolized by the letter W, has been developed by Kendall and is a measure of correlation among more than two sets of ranks. Thus, W is a measure of correlation among three or more sets of rankings of events, objects or individuals. When the investigator is interested in knowing the inter-judge or inter-test reliability, W is chosen as the most appropriate statistic. One characteristic of W which distinguishes it from other methods of correlation is that it is either zero or positive; it cannot be negative. W can be computed with the help of the formula given below:

W = S / [(1/12)K²(N³ − N)]   (23.49)

where W = coefficient of concordance; S = sum of squares of the deviations of the rank totals R_j from the mean of the R_j; K = number of judges or sets of rankings; and N = number of objects or individuals which have been ranked.

Suppose four teachers (A, B, C and D) ranked 8 students on the basis of the performance shown in the classroom. The ranks given by the four teachers are presented in Table 23.31, and the details of the calculations are shown below.

Table 23.31 Ranks given by four teachers (A, B, C and D) to eight students, (i) to (viii), on the basis of classroom performance — the rank totals of the eight columns are:

Students    (i)   (ii)   (iii)   (iv)   (v)   (vi)   (vii)   (viii)
R_j           9    14     20      20    31     26      6      18

Mean of R_j = (9 + 14 + 20 + 20 + 31 + 26 + 6 + 18)/8 = 144/8 = 18

S = (9 − 18)² + (14 − 18)² + (20 − 18)² + (20 − 18)² + (31 − 18)² + (26 − 18)² + (6 − 18)² + (18 − 18)²
  = (−9)² + (−4)² + (2)² + (2)² + (13)² + (8)² + (−12)² + (0)² = 482

Now, substituting in Equation 23.49:

W = 482 / [(1/12)(4²)(8³ − 8)] = 482/672 = 0.717 = 0.72

When N > 7, the significance of W is tested by converting its value into χ² with the help of the following equation:

χ² = K(N − 1)W   (23.50)

χ² = 4(8 − 1)(0.72) = 20.16

and df in this situation is always equal to N − 1; hence df = 8 − 1 = 7. Entering the probability table for chi-square, we find that the value of chi-square for df = 7 at the 0.01 level of significance should be 18.475. Since the obtained value of chi-square exceeds this required value, we can take the obtained value of W as a significant one. Thus, rejecting the null hypothesis, we can say that there is an overall significant relationship among the rankings done by the four teachers.
Median Test

The median test is used to see if two groups (not necessarily of the same size) come from the same population or from populations having the same median. The median test may be readily extended to more than two samples, that is, to several independent groups (Kurtz & Mayo 1980). In the median test, the null hypothesis is that there is no difference between the two sets of scores because they have been taken from the same population. If the null hypothesis is true, half of the scores in both groups should lie above the common median and the remaining half of the scores should lie below it. Table 23.32 presents the scores of two groups of students on an arithmetic test.

Table 23.32 Scores of 30 students on an arithmetic test

Gr. A (N = 16): 16, 17, 8, 12, 14, 9, 7, 5, 20, 22, 4, 26, 27, 5, 10, 19
Gr. B (N = 14): 28, 30, 33, 40, 45, 47, 40, 38, 42, 50, 20, 18, 18, …

The first step in the computation of a median test is to compute a common median for both distributions taken together. Computing the common median, both the distributions are pooled into a single grouped frequency distribution (class intervals 4–8, 9–13, ..., 49–53; N = 30), from which:

Median = l + ((N/2 − F)/f_m)i = 18.5 + ((15 − 13)/5)(5) = 20.5

where l = 18.5 is the exact lower limit of the interval containing the median, F = 13 is the cumulative frequency below that interval, f_m = 5 is the frequency of that interval and i = 5 is the class width.

Subsequently, a 2 × 2 contingency table is set up, classifying each score as above or not above the common median:

              Above Mdn    Not above Mdn
Gr. A          3 (A)        13 (B)          16
Gr. B         10 (C)         4 (D)          14
              13            17              30

Now the chi-square test can be applied. For computing chi-square from a 2 × 2 table, we may follow Equation 23.39. Yates' correction is not needed here because none of the cells has an expected frequency less than 5. Substituting the values in Equation 23.39, we get:

χ² = 30[|(3)(4) − (13)(10)|]² / [(16)(14)(13)(17)] = 30[|12 − 130|]²/49504 = 417720/49504 = 8.44

df = (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1

Entering the probability table for chi-square at the 0.01 level, we find that for df = 1 the chi-square value should be 6.635. Since the obtained value of chi-square exceeds this value (p < 0.01), we can reject the null hypothesis and conclude that the two samples have not been drawn from the same population, or from populations having the same median.
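Once the fourfold counts are in hand, the median test reduces to the 2 × 2 chi-square of Equation 23.39. A minimal sketch (function name ours; the counts 3, 13, 10 and 4 are taken from the worked example, since one Gr. B score is not recoverable here):

```python
# The median test as a chi-square on the counts above / not above the
# common median (Equation 23.39 applied to the fourfold table).

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * abs(a * d - b * c) ** 2 / denom

# Gr. A: 3 above the common median (20.5), 13 not above;
# Gr. B: 10 above, 4 not above
print(round(chi_square_2x2(3, 13, 10, 4), 2))   # 8.44
```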
Kruskal-Wallis H Test

The primary difference between the F test and the t test on the one hand and the Kruskal-Wallis H test and the Friedman test on the other is that the former are parametric and the latter are nonparametric methods of analysis of variance. The H test is a one-way nonparametric analysis of variance, and the Friedman test is a two-way nonparametric analysis of variance. The H test is used when an investigator is interested in knowing whether or not three or more independent samples have been drawn from the same population, and the obtained data do not fulfil the two basic parametric assumptions, namely, the assumption of normality and the assumption of homogeneity of variances; in such a case the H test is the most appropriate statistic. The equation for the H test is as given below:

H = [12/(N(N + 1))] Σ(R_j²/N_j) − 3(N + 1)   (23.51)

where N = number in all samples combined; R_j = sum of the ranks in the jth sample; and N_j = number in the jth sample.

Data to illustrate the calculation of the H test have been given in Table 23.33. Three groups of students were administered a Lie Scale and their scores are presented in the table. The first step is to combine all the scores from all of the groups and rank them, with the lowest score receiving a rank of 1 and the largest score the rank N. Ties are treated in the usual fashion in ranking. Subsequently, the sum of the ranks in each group or column is found, and the H test determines whether these sums of ranks are so disparate that the three groups cannot be regarded as having been drawn from the same population.

Table 23.33 H test from the scores obtained by three groups on the Lie Scale — Gr. A (N = 6), Gr. B (N = 8) and Gr. C (N = 10), with the rank assigned to each score given in brackets; the rank sums are R1 = 38, R2 = 76 and R3 = 186.

Now, substituting the values in Equation 23.51, we have:

H = [12/((24)(25))] × [(38)²/6 + (76)²/8 + (186)²/10] − 3(24 + 1)
  = (12/600)(4422.27) − 75 = 88.445 − 75 = 13.445 = 13.44

When each sample has six or more cases, H is interpreted as a chi-square with df equal to the number of groups minus one; so here df = 3 − 1 = 2. Entering the probability table for chi-square, we find that for df = 2 the value of chi-square at the 0.01 level should be 9.210. Since the obtained value of the H test exceeds this value, it is significant. Rejecting the null hypothesis, it can be said that the samples have not been drawn from the same population. When the number of cases in the samples is from one to five, the H test can be interpreted through special tables, which can be found in Siegel (1956) and Downie & Heath (1970).
and they the H tect can five,
samples
are
independent
in samples
is trom
egel
o n e to

(1956) and
and Downie & Heath (1970ed
be interpr Corre.

iscussed some important methods of correlation,


already discussed
techniques in the testing field. We
When the
number ofcases
be found in Siegel
(1956)
have
method and Kendall's tau. Pearson namely, r, Spearman
which can
throughspecialtables, ank difference Correlation coefficient is
, a a
mathernatical index that
describes the direction
Friedman Test
a two-way
nonparametric analysis of Va.

nce. When nship. There is also a related


tude of a relationsh technique
called
As mentioned
above, the
Friedman test is
about the two b.basic parametric assumptio
and magn.
on in 1885, and today it is used to make a regression,
a term first

the groups are


matched and doubts exist

and the assumption


of homogeneity of variances namely, the basis of known Score on another variable or
prediction
about scores on orne
possibly several other variables
the assumption of normality whether or not les have
the samples drawn ftrom the
have been draun abieTakane 1989). In fact, these predictions are done from the
to the
test for testing
Friedman
of the hriedman test dman have been same Ferguso regression line, which is
resorts
the calculation
been
presented d e f i n e d as the
best-fitting straight line through a set of points in a scatter
population. Data to illustrate Friedman test is as follows:
in diagram. It is estimated
ciple of least square which, in fact, minimizes the squared deviations around the
for calculating the princ
Table 23.34. The equation by using line. Let us explain through an example.
12(R-3MK+ 1) regression
xNKIK+1) 23.52) As discussed above, the mean is the point of least squares for any single variable. other
ds, the sum of squared deviations around the mean will be less than it is around any value
and
where X =Friedman test; N =number of rows;
K = number ol columns, R, =
separate
sums
ords,
than mean. For example, the mean of the five scores, namely, 2, 3, 4, 5 and 6 is
other
of ranks of each column. XN- 20/5= 4. The squared deviation of each score around the mean can now be easily
Aetermined. For score 6, the squared deviation is(6-4 = 4; for score 5, the squared deviation is
ANOVA from scoresor three groups on a recall tect
Table 23.34 Friedman test of two-way (5-4=1. The score 4 Is equalto the mean and therefore, the squared deviation around mean
N will be(4-4 =0. Thus, by definition the mean will always be the point of least squares. The
oaression line is the line of least squares or running mean in two dimensions or in the space
Gr.A 12 (4) 8 (2) 4 (1) 10 (3) 16 (5)
created by two variables.
Gr.B 10 3) 7 (2) 6 (1) 11 (4) 17 (5) As narrated above, a regression line is the best-fiting straight line through a set of several
points in a scatter diagram, which is the picture of the relationship between two variables. The
Gr. C 7 (2) 8 (3) 4 (1) 12 (5) 10(4) regression line is described as a mathematical index called regression equation. The regression
R; 9 3 12 14 equation is an equation for predicting the most probable value of one variable from known value
of another. The general linear regression equation for the straight line is:
The first step in calculation of the Friedman test is to rank each score in each row
separately, Y =bx +a (23.53
giving the lowest score in each row a rank of 1 and the next lowest score in each row rank of 2, a
and so on. The ranking can also be done in reversed order, that is, where, a is the intercept, that is, value of Ywhen Xis
giving the highest score in each
Y a+ bX
row a rank of 1, the next
highest score in each row a rank of 2, and s0 on. The rank assignedto zero. In other words, the point at which the
each score in each row is
11
given in parentheses. The Friedman test is applied to determine regressionline crosses the Y-axis (a) is found by using 0
whether or not the rank totals the following formula:
in Equation 23.52, we have
symbolized by R, differs significantly. Now, substituting the values
a= Y -bx 8-
12
X BNGNS-
T9 +7¥ +3¥+(12+ (14|-3315+1 where, b is the slope of regression line or it is
called the regression coefficient. It is
expressed 6
=
63.866 -

54 9.87 as the ratio of sum of squares for the covariance to

When the number of rows the of


(N and the
sum squares for X. In other words,
of the Friedman test can be ascertained withnumber of columns (K) are too
the help of
small, the significance b=- vertical cnange n Figure 23.8, two regression
when 4, =2 to 4 or when K 3, N =2 to
K N=
special tables (Siegel 1956). For exampe horizontal change Y= 4-5X
donethrough these special tables. But when the9, the significance of the Friedman test can
=

DE lines have been drawn based on equation Y = a+ bx.


those said above, the number of rows and columns are
Friedman test is greater 2
the interpreted
significance of the Friedman test would be the chi-square test. In as
the present
he regression equation gives a predicted value
equalto K-ltor chi-square applied test ofinterpreted in terms of chi-square. Ihe drexampic OTY,on the basis of X. This predicted value is called
The variable being 23 4 5 6 7 89
significance of the Friedman test. Henceis d
as a
Citerion variable and
predicted is called the
the variable from which it is Fig. 23.8
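The hand computation of Equation 23.52 above can be cross-checked with a short script. This is a minimal sketch that ignores tied ranks within a row; a full routine such as scipy.stats.friedmanchisquare applies a tie correction.

```python
def friedman_chi_r(rows):
    """Friedman chi-square_r: rank each row separately (lowest score = rank 1),
    sum the ranks per column, then apply Equation 23.52."""
    n, k = len(rows), len(rows[0])
    ranks = [[sorted(row).index(score) + 1 for score in row] for row in rows]
    col_rank_sums = [sum(r[j] for r in ranks) for j in range(k)]
    return 12 / (n * k * (k + 1)) * sum(R * R for R in col_rank_sums) - 3 * n * (k + 1)

# Data of Table 23.34: three groups, five recall scores each
data = [[12, 8, 4, 10, 16],
        [10, 7, 6, 11, 17],
        [7, 8, 4, 12, 10]]
print(round(friedman_chi_r(data), 2))  # 9.87, matching the worked example
```

The column rank sums the function computes (9, 7, 3, 12, 14) are the Rj values shown in the table.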
In the above equations, X is usually labeled the predictor variable and Y the criterion variable; thus X predicts Y. In addition to the predicted values, there are the observed values of Y. The difference between the predicted value and the observed value of Y is called the residual. Symbolically, the residual is defined as Y - Y'. Let us take an example. Suppose a person has an observed score of 10 on the criterion and the person's predicted score is 8.46. The residual will be 10 - 8.46 = 1.54. Because a predicted score rarely equals the observed score exactly, some residual almost always occurs.

Technically, the residual has some obvious properties. In regression analysis, the sum of the residuals is always equal to zero. Symbolically,

    Σ(Y - Y') = 0

In other words, residuals can be either positive or negative, and across all cases they cancel out. The regression line keeps the residuals to a minimum. Since the sum of the residuals is always zero, the size of the residuals is actually estimated by squaring each residual, a way of keeping them from cancelling. In accordance with the principle of least squares, the sum of these squared residuals is the smallest possible value. In other words,

    Σ(Y - Y')² = smallest value

Thus it can be said that the best-fitting regression line is rightly obtained by keeping these squared residuals as small as possible.
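These properties are easy to verify numerically. The sketch below fits a least-squares line to a small set of hypothetical paired scores (the data are invented for illustration) and checks that the residuals sum to zero.

```python
def fit_line(xs, ys):
    """Least-squares estimates of slope b and intercept a for Y' = a + bX."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # b = sum of cross-products of deviations / sum of squared deviations of X
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x   # the line passes through (mean X, mean Y)
    return a, b

xs = [1, 2, 3, 4, 5]          # hypothetical predictor scores
ys = [2, 2, 4, 5, 6]          # hypothetical criterion scores
a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print(round(b, 2), round(a, 2))      # slope and intercept of the fitted line
print(abs(sum(residuals)) < 1e-9)    # True: the residuals sum to zero
```

Squaring the residuals before summing, as in the least-squares criterion, gives a quantity that no other straight line through these points can make smaller.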
What is the difference between correlation and regression? One difference is that correlation possesses the property of reciprocity: the correlation between X and Y will always be the same as the correlation between Y and X. Regression lacks this property. Regression is often used to predict scores on one variable on the basis of raw scores on the other variable, and not vice versa. For example, an investigator might be interested in predicting scores on a criterion Y on the basis of scores on, say, an anxiety test or an intelligence test; the regression equation transforms the raw scores on the predictor into estimated scores on Y, but the investigator cannot claim a similar thing in the reverse direction. The coefficient denoting the regression of Y on X is not the same as the coefficient denoting the regression of X on Y. Since regression does not hold the trait of reciprocity of the variables, the two regression equations must be kept distinct.

Making predictions on the basis of one variable in correlational research is referred to as regression analysis. Regression analysis tends to show how changes in one variable are related to changes in another variable. In psychological testing, regression is often used to determine whether changes in test scores are related to changes in performance. For example, do persons who score higher on a mechanical reasoning test perform better as mechanical engineers? In fact, regression analysis and related correlational methods show the extent to which these variables are linearly related. Besides, they also often create an equation that estimates scores on the criterion (such as performance as a mechanical engineer) on the basis of scores on a predictor (such as scores on the mechanical reasoning test).

Why is regression also called prediction?

As we know, regression literally means returning or going back. The term regression is used here because the predicted score on the criterion variable is closer to the mean of the criterion variable than the value of the predictor variable is to the mean of the predictor variable. However, this rule does not apply when there is a perfect correlation between the criterion variable (Y) and the predictor variable (X). Thus we can think of the predicted value of the criterion variable as regressing, or going back, toward the mean of the criterion variable.
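In standard-score (z) form the regression equation reduces to z_Y' = r × z_X, which makes the "going back toward the mean" visible. The sketch below uses an assumed correlation of 0.50 purely for illustration.

```python
def predict_z(r, z_x):
    """Standard-score form of the regression equation: z_Y' = r * z_X.
    For any |r| < 1 the prediction is closer to the mean (z = 0) than z_X is."""
    return r * z_x

# A person 2 SDs above the mean on the predictor, with r = 0.50:
print(predict_z(0.50, 2.0))   # 1.0 -> predicted only 1 SD above the criterion mean
print(predict_z(1.00, 2.0))   # 2.0 -> with perfect correlation there is no regression effect
```

The second call illustrates the exception noted above: with r = 1.00 the predicted score does not regress toward the mean at all.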
MAJOR TERMS AND ISSUES IN CORRELATION AND REGRESSION

When we go through the report of a correlational analysis, we often need to know the meaning of some important terms and related issues. Such important terms are: residual, coefficient of determination, coefficient of alienation, standard error of estimate, shrinkage, cross validation, the correlation-causation problem and the third variable explanation. A brief discussion of these terms follows.

1. Residual

A short form of 'residual variance' is used in statistics, especially in regression analysis and the analysis of variance (ANOVA), and it refers to that part of the variance in the dependent variable that is not accounted for by the independent variables and is thus attributed to chance variation. In regression, the difference between a score and its expected value according to a theoretical model is called the residual. As we know, a regression equation gives a predicted value for each score, and the residual is the deviation of the observed score from this predicted value, as discussed above.

2. Coefficient of determination (r²)

The square of the correlation coefficient, r², is called the coefficient of determination because it measures the percentage or proportion of variability in one variable that is determined by, or associated with, the other variable. For example, a correlation of r = 0.90 between test performance (Y) and test anxiety (X) means that (0.90)² = 0.81, or 81%, of the variability in test performance is determined by, or associated with, variability in test anxiety. To illustrate it further, if an investigator finds a correlation of r = 0.75 between the IQ (X) and the salary (Y) of employees, the coefficient of determination will be (0.75)² = 0.56, indicating that 56% of the enhancement in salary is due to the variation in the IQ of the employees.

Let us take another concrete example to illustrate the concept of the coefficient of determination. Suppose an investigator has been given the task of predicting the annual income of Mr Rajesh. To make the prediction, he is provided with three bits of information:

(a) Mr Rajesh wears a size-9 shoe.
(b) Mr Rajesh has been working for 3 months.
(c) Mr Rajesh has a monthly income of Rs 6,000.

The first bit of information (shoe size) is of no help because there is no correlation between shoe size and annual income. However, the second bit of information (length of employment) does provide some useful information. Since Mr Rajesh has been in employment for the last 3 months only, he is probably in an entry-level job and is earning a relatively low salary. Although this information may improve the accuracy of the prediction, the amount of accuracy would depend on the strength of the correlation between salary and seniority. Suppose the investigator finds a correlation of r = 0.38 between salary and seniority. A correlation of 0.38 would mean r² = (0.38)² = 0.14, that is, 14% of the variability in salary is determined by seniority. Finally, the third bit of information, monthly income, is highly useful because there is a perfect correlation (r = 1.00) between monthly income and annual income, so the investigator is now able to predict with perfect accuracy. Here the coefficient of determination is r² = (1.00)² = 1.00, or 100%, which means that annual income is completely determined by monthly income.

3. Coefficient of alienation (K)

The coefficient of alienation is a measure of nonassociation between two variables. We may think of K as a measure of the absence of relationship between X and Y in the same sense that r is a measure of the presence of relationship. Therefore, when K = 1.00, r = 0.00, and when K = 0.00, r = 1.00. The larger the coefficient of alienation, the smaller the extent of relationship and, therefore, the less precise is the forecast from X to Y and vice versa. It is calculated as

    K = √(1 - r²)

where r² is the coefficient of determination. In the example of seniority and annual income, the coefficient of alienation is √(1 - 0.14) = √0.86 = 0.927, meaning that there is a high degree of nonassociation between seniority and annual income.
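Both indices follow directly from r. A minimal sketch using the seniority example above:

```python
import math

def determination_and_alienation(r):
    """Return (r^2, K) where K = sqrt(1 - r^2) is the coefficient of alienation."""
    r2 = r * r
    return r2, math.sqrt(1 - r2)

r2, K = determination_and_alienation(0.38)   # salary vs seniority
print(round(r2, 2))   # 0.14
print(round(K, 3))    # 0.925 (the text's 0.927 comes from rounding r^2 to 0.14 first)
```

Note how the two indices trade off: as r² (shared variability) grows, K (nonassociation) shrinks toward zero.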
4. Standard error of estimate

The standard error of estimate is a measure of the accuracy of prediction. Prediction is most accurate when the standard error of estimate is relatively small; as it becomes larger, the prediction becomes less and less accurate. As we know, a regression equation by itself allows us to make predictions, but it does not provide any information about the accuracy of those predictions. Therefore, for measuring the precision of the regression, it is customary to compute a standard error of estimate, defined as the standard deviation of the differences between the actual values of the criterion variable and the values estimated from the regression equation. More briefly, it is the standard deviation of the residuals. When Y is predicted from X, the formula for the standard error of estimate becomes:

    S(Y.X) = S_Y √(1 - r²)    (23.54)

When X is predicted from Y, the formula for the standard error of estimate becomes:

    S(X.Y) = S_X √(1 - r²)    (23.55)

5. Shrinkage

Sometimes a regression equation is developed on one group of subjects and then used to predict the performance of another group. Shrinkage refers to the amount of decrease observed when a regression equation created for one population is applied to another one. Formulas are available to estimate the amount of shrinkage (Jaccard & Wan 1995; McNemar 1969). Let us take an example. Suppose a regression equation is developed to predict the performance of one group of MBA students in the first two semesters on the basis of their MAT scores. Although the proportion of variance in performance in the first two semesters might be fairly high, we can expect the equation to account for a smaller proportion of variance when it is used to predict the performance in these two semesters of another group of MBA students selected on the basis of CAT scores. This decrease in the proportion of variance accounted for is known as shrinkage.

[Figure 23.9: Cross-lagged panel study showing the impact of preference for violent TV programmes on later aggression]
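Equation 23.54 above can be sketched as a one-liner; the criterion SD of 10 and the correlation of 0.60 below are assumed values chosen for illustration.

```python
import math

def std_error_of_estimate(s_criterion, r):
    """S(Y.X) = S_Y * sqrt(1 - r^2): the standard deviation of the residuals."""
    return s_criterion * math.sqrt(1 - r * r)

# With S_Y = 10: no correlation leaves all of S_Y as prediction error,
# while r = 0.60 reduces the error of prediction.
print(std_error_of_estimate(10, 0.00))           # 10.0
print(round(std_error_of_estimate(10, 0.60), 2)) # 8.0
```

As the formula shows, prediction error shrinks to zero only in the limiting case of perfect correlation.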
6. Cross validation

In psychological and educational research, the term cross validation is frequently used. As explained in Chapter 6 also, in psychometrics it means evaluating the validity of a test by administering the test to an independent sample drawn from the same population as that on which it was originally validated. In regression analysis, it means measurement of the predictor variables and comparison of the predicted and observed values of the criterion variable on a fresh sample drawn from the same population as the sample from which the regression equation was originally derived, for determining the stability of predictions. For doing so, a standard error of estimate can be found for the relationship between the values predicted by the equation and the values actually observed.

7. Correlation-causation problem

Just because two variables are correlated, does it imply that one has caused the other? We are constantly bombarded with reports of relationships: cigarette smoking is related to lung cancer; alcohol consumption is related to birth defects. Do these relationships mean that cigarette smoking causes lung cancer, or that alcohol consumption causes birth defects? The answer is no. Although there may be a causal relationship, the simple existence of a correlation does not prove it. The mere existence of correlation does not prove causality, although it might lead to other research designed to establish a causal relationship between the variables. Correlation by itself does not allow the investigator to decide about the direction of causality, that is, X causing Y (X -> Y) or Y causing X (Y -> X). However, there is a way to deal with the directionality problem to some extent.

As we know, research psychologists are generally satisfied about X causing Y when X and Y occur together, when X precedes Y according to some theory, and when other explanations for the relation between X and Y can be ruled out. Using a procedure called cross-lagged panel correlation, it is possible to enhance one's confidence about directionality. This technique measures the correlations between variables at several points in time: it is a panel study because the same respondents are measured repeatedly, cross because correlations are computed across two variables, and lagged because there is a time lag between the measurements. Eron et al. (1972) conducted one important longitudinal study in which the cross-lagged panel correlation technique was used. They studied the relationship between preference for watching violent television programmes (A) and peer ratings of aggressiveness (B). In 1960, 875 school students from the third grade of New York State participated in the study, and a modest correlation of 0.21 was found between preference for violent TV programmes and aggressiveness. After 10 years, in 1970, 427 students of the same group were reassessed as 13th graders. Measuring the two variables (A and B) at two points in time (Time 1 and Time 2) yields four measures, A1, A2, B1 and B2, from which six correlations were calculated; A1 means the score on variable A at Time 1, B1 the score on variable B at Time 1, and so on.

Of special interest here are the diagonal or cross-lagged correlations, because they measure the relationship between variables separated in time (see Figure 23.9). If third-graders' aggressiveness really caused a later preference for watching violent TV programmes, that is, B causing A (B1 vs. A2), then we should expect a fair-sized correlation between aggressiveness at Time 1 (B1) and preference at Time 2 (A2); but this correlation is virtually zero (+0.01). On the other hand, if an early preference for viewing violent TV programmes led to later aggressiveness, that is, A causing B (A1 vs. B2), then the correlation between preference at Time 1 (A1) and aggressiveness at Time 2 (B2) should be substantial. This correlation is +0.31, definitely not large but significant. On the basis of this finding, Eron et al. (1972) concluded that an early preference for watching violent TV programmes is, at least partially, a cause of later aggressiveness. The other four correlations in the study were 0.05 (A1 vs. A2), 0.21 (A1 vs. B1), 0.38 (B1 vs. B2) and -0.05 (A2 vs. B2); these correlations were not of much importance. Thus, on the basis of the size of the correlations, it is possible to tell which variable causes the other: if the correlation between A1 and B2 is greater than that between B1 and A2, one can conclude that changes in A cause changes in B rather than vice versa. Kahle and Berman (1979) also used cross-lagged panel correlation to answer the question whether attitude causes behaviour or vice versa, and reported that attitude was more likely to cause behaviour.
8. Third variable explanation

In correlational research, a high positive correlation between variable A, such as television viewing, and variable B, such as aggressive behaviour, often occurs. Does this mean that A causes B, or that B causes A? In other words, does excessive TV viewing cause aggressive behaviour, or do aggressive children simply prefer to watch a lot of television? There is another possible explanation for the observed relationship. When a correlation is found between two variables, there is always the possibility that the correlation is due to the association of both variables with a third variable. This is called the third variable explanation. For example, poor social adjustment can cause both excessive TV viewing and aggressiveness. Likewise, among a group of elementary-school children of different ages, the investigator may find a high correlation between size of vocabulary and height. This correlation may not show a genuine relation between these two variables; rather, it may result from the fact that both vocabulary and height are associated with a third variable, namely age. Thus the apparent relationship between two variables might be the result of some variable not included in the relationship. This external influence is known as the third variable explanation.

CHOOSING APPROPRIATE STATISTICAL TESTS

It is customary to choose the appropriate statistical test on the basis of the nature of the obtained data. If the data fulfil the requirements of the parametric assumptions, any of the parametric tests which suits the purpose can be selected. On the other hand, if the data do not fulfil the parametric requirements, any of the nonparametric statistical tests which suits the purpose may be selected. Other things to be kept in mind in selecting an appropriate statistical test are the number of independent and dependent variables and the nature of the variables, that is, whether they are nominal, ordinal or interval. When both the independent and the dependent variables are interval measures and are more than one, multiple correlation is the most appropriate statistic. On the other hand, when they are interval measures and their number is only one, Pearson r may be used. As has been noted earlier, with ordinal and nominal measures the nonparametric statistics are the common choice.

Sometimes researchers transform the measures so that an appropriate statistical test may be applied without loss of much information. For example, if the scores of two groups on interval measures are available but the data do not fulfil the requirements of the t test, the researcher can transform the interval measures into ordinal measures and subsequently apply the Mann-Whitney test.

In this chapter, we have described several statistical tests. These, and a few others not described in this text, are summarized in Table 23.35, which shows the purpose of each statistical test and the types of data for which it is considered most appropriate.

Table 23.35: Summary of some common statistics with their purposes and the appropriateness of data for which they are suitable

Name of statistical test | Parametric (P) / Nonparametric (NP) | Purpose | IV (Independent variable) | DV (Dependent variable)
Pearson r | P | Measures linear relationship between two variables | One interval or ratio | One interval or ratio
Multiple correlation (R) | P | Measures linear relationship between more than two variables | Two or more interval or ratio | One interval or ratio
t test | P | Tests the significance of difference between two independent or dependent groups | One nominal | One interval or ratio
ANOVA (F test) | P | Tests the significance of difference between more than two groups | One or more nominal variables | One interval or ratio
ANCOVA (analysis of covariance; F) | P | Tests the significance of difference between means of more than two groups while controlling the effect of one or more other variables | One or more nominal variables | One interval or ratio
Multivariate ANOVA (MANOVA)* | P | Tests the significance of mean difference between more than two groups on more than one variable | One or more nominal variables | Two or more interval or ratio
Chi-square (χ²) test | NP | Tests whether the proportions of two or more categories differ from the expected proportions | One nominal | One nominal
Median test | NP | Tests the difference between medians of two independent samples | One nominal | One ordinal
Mann-Whitney U-test | NP | Tests the difference between ranks of two independent samples | One nominal | One ordinal
Phi correlation | NP | Measures the relationship between two dichotomous variables | One nominal | One nominal
Spearman's rank order rho (ρ) | NP | Measures the relationship between two ordinal measures | One ordinal | One ordinal
Kruskal-Wallis H-test | NP | Tests the difference between ranks of more than two independent samples | One nominal | One ordinal
Friedman rank test | NP | Tests the difference between ranks of more than two observations made on the same sample | One nominal | One ordinal
Coefficient of contingency (C) | NP | Tests the relation between two variables classified into equal rows and columns, such as 3 × 3, 4 × 4, etc. | One nominal | One nominal

*Not discussed in the present textbook
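The rank-transformation example above can be sketched as follows. The scores are invented for illustration, and ties are handled crudely (each tie counts as one half), so a library routine such as scipy.stats.mannwhitneyu is preferable in practice.

```python
def mann_whitney_u(group1, group2):
    """U statistic: over all cross-group pairs, count how often a group-1
    score exceeds a group-2 score (a tie counts as one half)."""
    u1 = sum(1.0 if a > b else 0.5 if a == b else 0.0
             for a in group1 for b in group2)
    u2 = len(group1) * len(group2) - u1
    return min(u1, u2)    # the smaller U is referred to the significance table

# Hypothetical interval scores from two independent groups:
g1 = [12, 15, 11, 19, 14]
g2 = [8, 10, 13, 9, 7]
print(mann_whitney_u(g1, g2))   # 2.0
```

Because U depends only on the ordering of the scores, converting the interval measures to ranks loses the metric information but preserves everything the test actually uses.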
FACTOR ANALYSIS

The technique of factor analysis originated with C Spearman. Factor analysis is a multivariate statistical method which is used in the analysis of tables, or matrices, of intercorrelations between tests. Spearman analyzed such tables of intercorrelations and was able to demonstrate that the intercorrelations could be accounted for in terms of a theory involving only two kinds of factors: one general factor, called the g factor, common to all tests, and factors unique to each test, called specific or s factors. This was known as the two-factor theory. Following Spearman, much work on factor analysis was done by G H Thompson, L L Thurstone, H Hotelling, J P Guilford and R B Cattell.

Factor analysis is basically used to study the interrelationships among a set of variables without reference to a criterion. Thus factor analysis may be understood as a data-reduction technique. Broadly, there are two forms of factor analysis: confirmatory and exploratory. In confirmatory factor analysis, the purpose is to confirm that test scores and variables fit a certain pattern or framework predicted by a theory. For example, if the theory underlying an intelligence test predicted that its various subtests belong to three factors, such as verbal comprehension, attention and memory, then a confirmatory factor analysis may be undertaken to evaluate the accuracy of this prediction. In fact, confirmatory factor analysis is essential to the validation of many ability tests.

Exploratory factor analysis is comparatively more popular. Here the central purpose is to summarize the interrelationships among a large number of variables in a concise and accurate way as an aid in conceptualization. For instance, an exploratory factor analysis may help a researcher discover that a battery of 25 tests represents only five underlying variables, usually called factors.

Factor analysis is usually applied to data where a distinction between dependent and independent variables is not meaningful; the major concern is with the description and interpretation of interdependencies within a single set of variables. Factor analysis achieves these purposes in two ways: first, it reduces the original set of variables to a smaller number of variables or factors; and second, factors tend to acquire meaning because of structural properties that may exist within the set of relationships. Thus the process of reducing the number of variables and the concept of structure are basic to the understanding of factor analysis.

All techniques of factor analysis begin with the preparation of a complete table of intercorrelations among a set of tests. Such a table is known as a correlation matrix, which shows the correlation between every variable and every other variable. Subsequently, we find the principal components, or linear combinations of the variables, that describe as many of the interrelationships among the variables as possible. In fact, each principal component, also called a factor, is extracted according to mathematical rules (which are complex) that make it independent of, or uncorrelated with, the other principal components. The first component is usually most successful in describing the variation among the variables, with each succeeding component somewhat less successful. A factor must be viewed as a variable, but with a difference: most variables involve direct measurement, whereas factors are hypothetical variables derived by a process of analysis from a set of variables obtained by direct measurement. Despite this, a person may be said to have a score on a factor in the same sense that the individual has a score on a test. When factors are uncorrelated, they are said to be orthogonal, and the method involved herein is called an orthogonal solution. When factors are correlated, they are said to be oblique, and the method involved herein is known as an oblique solution.

Several different methods for analyzing a set of variables into common factors are available. As early as 1901, Pearson pointed the way for this type of analysis. Later on, Spearman (1927) and Thurstone (1947) developed precursors of modern factor analysis. In America, Kelley (1935), and in England, Burt (1941), did much to advance the method. Although these methods differ in their initial postulates, most of them yield similar results. Some of the common methods are principal component factors, principal axis factors, image-factoring, alpha factoring, maximum-likelihood methods and unweighted least squares; their discussion is beyond the scope of this book. For a detailed study of the specific procedures of factor analysis, the reader is referred to the texts by Comrey and Lee (1992) and Loehlin (1992).

After the correlation matrix, a factor matrix is prepared. The factor matrix, also called the table of factor loadings, consists of a table showing the loading of each test on each of the factors, that is, the weight of each factor in each test. Table 23.36 presents a hypothetical factor matrix involving only two factors. The factors are listed across the top and the five tests are given in the appropriate rows; each loading is a correlation coefficient between a test and a factor.

Table 23.36: A hypothetical factor matrix

    Tests                      Factor I    Factor II
    1. Sentence completion     0.75        -0.24
    2. Vocabulary              0.85        ...
    3. Reading comprehension   0.77        ...
    4. Analogies               0.65        ...
    5. Arithmetical problems   0.43        -0.33

Factor loadings range from -1.00, a perfect negative correlation with a factor, through 0.00 (no relation to the factor), to +1.00, a perfect positive correlation. Variables have loadings on each factor, but a variable will usually have a high loading on only one. Normally, a variable is considered to contribute meaningfully to a factor only if it has a loading of at least 0.30 (or below -0.30). Note that the first variable (or test), sentence completion, has a strong positive loading of 0.75 on Factor I, indicating that this test is reasonably a good index of Factor I. Also note that this variable has a modest loading of -0.24 on Factor II, indicating that, to a slight extent, it measures the opposite of this factor; that is, high scores on sentence completion tend to signify low scores on Factor II and vice versa. In a sense, each factor is a practical combination of the tests used, produced by 'adding in' carefully determined portions of some tests and perhaps 'subtracting out' fractions of other tests. What makes factors special is the elegant analytical method undertaken to derive them.

There are some basic issues in factor analysis that should not be ignored. A review of the literature shows that factor analysis is frequently misused and often misunderstood. Some investigators appear to use factor analysis as a kind of divining rod, hoping to find gold hidden underneath tons of dirt. But the reality is that there is nothing magical about factor analysis. In fact, neither factor analysis nor any other statistical analysis can rescue data based on trivial, irrelevant or haphazard measures; factor analysis can yield meaningful results only when the research was meaningful to start with. The following three basic issues must be kept in mind:

(i) A particular kind of factor can emerge from factor analysis only when the tests and measures contain that factor in the first place. For example, a memory factor cannot emerge from a battery of ability tests if none of the tests requires memory function. Thus, the quality of the output depends upon the quality of the input.

(ii) For a stable factor analysis, the size of the sample is also considered important. Comrey's guidelines suggest that a sample size of 50 is very poor, 100 poor, 200 fair, 300 good, 500 very good and 1000 excellent for a dependable factor analysis (Gregory 2006). Tabachnick and Fidell (2006) consider it comforting, in general, to have at least five subjects for each test or variable.

(iii) A third issue is that we cannot overemphasize the extent to which the technique of factor analysis is guided by subjective choices and theoretical prejudices.
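For the simplest possible case, two tests with intercorrelation r, the principal-component loadings can be written in closed form. The sketch below (with an assumed r = 0.64) shows that the loadings reproduce both the original correlation and unit communalities.

```python
import math

def principal_loadings_two_tests(r):
    """Principal-component loadings for two tests with intercorrelation r.
    The 2x2 correlation matrix [[1, r], [r, 1]] has eigenvalues 1+r and 1-r
    with eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2); a loading is an
    eigenvector element times the square root of its eigenvalue."""
    u = 1 / math.sqrt(2)
    f1 = (u * math.sqrt(1 + r), u * math.sqrt(1 + r))    # Factor I loadings
    f2 = (u * math.sqrt(1 - r), -u * math.sqrt(1 - r))   # Factor II loadings
    return f1, f2

f1, f2 = principal_loadings_two_tests(0.64)
# The loadings reconstruct the correlation: summed cross-products equal r
print(round(f1[0] * f1[1] + f2[0] * f2[1], 10))   # 0.64
# And each test's communality (sum of its squared loadings) is 1.0
print(round(f1[0] ** 2 + f2[0] ** 2, 10))         # 1.0
```

With more than two tests the same eigendecomposition idea applies, but the arithmetic is no longer closed-form and a routine such as numpy.linalg.eigh is normally used.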
A crucial point in this regard is the choice between orthogonal axes and oblique axes in rotating factor loadings. Orthogonal axes lie at right angles to one another, which means that the factors they define are uncorrelated. With oblique axes, the factors themselves are correlated. However, in many cases oblique axes provide a better fit to the clusters of loadings and are, in general, considered better than orthogonal axes. With oblique axes, since the factors tend to be correlated among themselves, it is possible to factor analyze the factors themselves. Such a procedure yields one or more second-order factors, and second-order rotations can thereby provide a hierarchical organization of traits or abilities.

Factor analysis has some advantages and disadvantages. The important advantages are as under:

(i) Factor analysis permits the reduction of a number of variables by combining some variables into a single factor. For example, the researcher may combine, through factor analysis, variables such as batting, jumping, ball throwing and weightlifting into a single factor called general athletic ability.

(ii) Factor analysis helps the researcher in identifying groups of interrelated variables and in examining how they are related with each other. For example, Carroll used factor analysis in formulating his three-stratum theory of intelligence. He reported that a factor called broad auditory perception was related to auditory task ability, and another factor called broad visual perception was related to visual task capability. He also reported that a global factor called the g-factor (or general intelligence) was related to both these factors, that is, to broad auditory perception and broad visual perception. This, then, automatically meant that someone high on the g-factor will also be high on both broad auditory perception and broad visual perception.

Among the disadvantages is the fact that there is no fully objective way of deciding upon a proper label for the factors.

24

WRITING A RESEARCH REPORT AND A RESEARCH PROPOSAL

CHAPTER PREVIEW

General Purpose of Writing a Research Report
Format or Structure of a Research Report
    Title Page
    Abstract
    Introduction
    Method
    Results
    Discussion
    References
    Appendix
    Author Note
a
Disadvantages the Style of Writing
there is OCcasional disagreement over should be called is left to the Report
i) In factor analysis, factors and what they Typing the Research

The technique itself only identifies R e s e a r c h Report


a
researcher's intution and judgement. Evaluating
data allow. In psychology and Research Proposal
i) Factor analysis c a n good the extenttothe obtained
be to
.Preparing a
education, where researchers often have depend o n less
valid and reliable measures
to be a problematic one.
prove
such as self-reports, ratings, etc., technique
can the
WRITING A RESEARCH REPORT
factor analysis, m o r e than o n e interpretation can
be made from the same data factored
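The contrast between orthogonal and oblique solutions can be made concrete with a small numerical sketch. The function below is a plain-numpy rendering of the classic Kaiser varimax criterion, an orthogonal rotation; it is an illustration only, not the exact algorithm of any particular statistical package, and the loading matrix shown is hypothetical.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-8):
    """Orthogonally rotate a factor-loading matrix towards 'simple
    structure' using the classic Kaiser varimax criterion."""
    L = np.asarray(loadings, dtype=float)
    n, k = L.shape
    R = np.eye(k)                      # accumulated orthogonal rotation
    crit_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # SVD step that increases the varimax criterion
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n)
        )
        R = u @ vt
        crit = s.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return L @ R

# Hypothetical loadings: four tests on two unrotated factors.
A = np.array([[0.80, 0.30],
              [0.70, 0.40],
              [0.20, 0.90],
              [0.30, 0.80]])
rotated = varimax(A)
```

Because an orthogonal rotation only turns the axes, each test's communality (the sum of its squared loadings) is unchanged, which is a convenient check on any implementation.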
Review Questions
1. Discuss the major characteristics of a normal curve. What is a normal curve?
2. Distinguish between parametric statistics and nonparametric statistics.
3. What is a Chi-square test? Discuss its uses and applications.
4. Distinguish between Type I error and Type II error in statistical significance. Also, explain the role that the level of significance plays in influencing the probability of making each of these two types of errors.
5. What is Yates' correction? Under what conditions is it applied?
6. Why is Student's t so named?
7. Discuss the difference between correlation and regression.
8. Write short notes on the following:
(a) Correlation and regression (b) Rank difference correlation (c) Tukey's HSD (d) Multiple correlation


WRITING A RESEARCH REPORT AND A RESEARCH PROPOSAL

CHAPTER PREVIEW
General Purpose of Writing a Research Report
Structure or Format of a Research Report
    Title Page
    Abstract
    Introduction
    Method
    Results
    Discussion
    References
    Appendix
    Author Note
Style of Writing a Research Report
Typing the Research Report
Evaluating a Research Report
Preparing a Research Proposal

GENERAL PURPOSE OF WRITING A RESEARCH REPORT
The writing of a research report is no less challenging a task than the research itself. It requires imagination, creativity and resourcefulness. Research reports should be written in a dignified and objective style, although there is no one style which is acceptable to all. The general purpose of a research report is not to convince the readers but to let them know what has been done, why it was done, what results were obtained and what the conclusions of the researcher were. Therefore, reports of research aim at telling the readers the problems investigated, the methods adopted, the results found and the conclusions reached. The research report should also be written in a clear and unambiguous language so that the reader can objectively judge the adequacy and validity of the research. For attaining objectivity, personal pronouns such as you, we, my, our, etc., should be avoided, and as their substitutes expressions like 'the investigator' or 'the researcher' should be used. Needless to say, the highest standard of correct usage of words and sentences is expected. To achieve these purposes is not an easy task, and the matter is made more complicated by the many styles of writing research reports. However, the problem can be minimized if we adopt the style for research reports presented in the next section.

STRUCTURE OR FORMAT OF A RESEARCH REPORT (Style Manual)
The research report, whether it is based on a short-term research, a thesis or a dissertation, should follow a standardized pattern. There are different types of style manuals of reporting, and many departments or institutions develop their own style manual to which their theses or dissertations must conform.
Careful examination, however, reveals that the principles of all style manuals in the social sciences are basically the same; they differ in minor details only. In general, manuals are concerned with correct and clear presentation. The style manuals commonly used in the social sciences and humanities are: the Publication Manual of the American Psychological Association, 5th ed. (2002); the MLA Handbook for Writers of Research Papers, 5th ed. (Gibaldi, 1999); The Chicago Manual of Style, 16th ed. (2010); Form and Style: Research Papers, Reports, Theses (Slade, 2000); as well as A Manual for Writers of Term Papers, Theses and Dissertations, 6th ed. (Turabian, 1996). Of these various manuals, the Publication Manual of the American Psychological Association, 5th edition (2002), is the most widely used. It gives excellent suggestions as to how to write a report effectively, how to present basic ideas with economy and smoothness of expression, and how to avoid ambiguity in expression. A concise outline or format is the most popular way to increase the readability of a research report. The following format has been prepared according to the American Psychological Association's Publication Manual, 5th ed. (2002).
I. Title Page
   A. Title
   B. Author's Name and Affiliation
   C. Running Head
II. Abstract
III. Introduction (no heading used)
   A. Statement of the Problem
   B. Background/Review of Literature
   C. Purpose and Rationale/Hypothesis
IV. Method
   A. Subjects
   B. Apparatus or Instrumentation (if necessary)
   C. Design (if a complex design has been incorporated)
   D. Procedure
V. Results
   A. Tables and Figures
   B. Statistical Presentation
VI. Discussion
   A. Support or Nonsupport of Hypotheses
   B. Practical and Theoretical Implications
   C. Conclusions
VII. References
VIII. Appendix (if appropriate)
IX. Author Note
   A. Identify each author's departmental affiliation
   B. Identify sources of financial support
   C. Acknowledge contributions to the article
   D. Give contact information for further information

Thus we find that the APA Publication Manual divides the format of writing a research report into nine parts. A discussion of these parts is given below.

I. Title Page
The first page is the title page, which basically contains three elements: the title, the author's name and affiliation, and the running head. A discussion of these elements is given below.

Title: The title should be concise and should clearly indicate the nature of the study. It should not be stated so broadly that it seems to provide an answer that can be generalized either from the data gathered or from the methodology employed. The best length of a title is 12 to 15 words approximately. Introductory phrases such as "A Study of..." or "An Investigation of..." should not be used in the title; it is understood that the investigator is studying something. For example, a title like "A Study of unlearning, spontaneous recovery and partial reinforcement effects in paired associate and serial learning" should be avoided. The preferred title would be "Unlearning, spontaneous recovery and partial reinforcement effects in paired associate and serial learning". The title should be typed in upper case and lower case letters, centred on the page, and when two lines are needed, they should be double-spaced.

Running head: A running head is an abbreviated or short title which is printed at the top of the pages of a published article to identify the article. On the title page the running head, that is, the short title with a maximum of 50 characters including letters, punctuation and spaces between words, is typed in all capital letters on the upper right side of the page. The running head in the above example may be "UNLEARNING, REINFORCEMENT AND LEARNING".

Author's name and affiliation: On the title page, the author's name should be centred below the title, and the next line should indicate the name of the institution to which the author is affiliated. In case of more than one author from the same institution, the affiliation should be listed last and only once.

UNLEARNING, REINFORCEMENT AND LEARNING

Unlearning, Spontaneous recovery and Partial reinforcement effects
in Paired associate and Serial learning

ALOK K SINGH
PATNA UNIVERSITY, PATNA

I wish to acknowledge the assistance and computer facilities rendered by the concerned institutions. I also wish to thank the many students, teachers and principals of the various schools from where the reported data were collected. I also wish to thank K K Sinha, PhD, for his critical comments, and B Sen, PhD, for his secretarial assistance.

Running head: UNLEARNING, REINFORCEMENT AND LEARNING

Fig. 23.1. An illustration of a title page

II. Abstract
The abstract is written on a separate sheet of paper, which is page 2. It describes the study in about 100 to 150 words. In fact, the abstract is the summary of the study, which includes the problem under study; the method, such as the characteristics of the subjects, the apparatuses and the research design; the results (including the statistical significance levels); and the conclusions and implications. References must not be cited in the abstract.
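The mechanical constraints just described, a running head of at most 50 characters typed in capitals and an abstract of roughly 100 to 150 words, are simple enough to check automatically. The sketch below is a toy illustration with function names of our own; it is not part of any APA tool.

```python
def check_running_head(head: str) -> bool:
    """Running head: typed in all capital letters, at most 50
    characters counting letters, punctuation and spaces."""
    return head == head.upper() and len(head) <= 50

def count_abstract_words(text: str) -> int:
    """Abstracts should run to about 100-150 words; this simply
    counts whitespace-separated words."""
    return len(text.split())

# The running head from the sample title page above passes the check.
ok = check_running_head("UNLEARNING, REINFORCEMENT AND LEARNING")
```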
III. Introduction
The text of the report starts with the Introduction on a fresh page, and that is page 3. From this point, the Introduction, Method, Results and Discussion follow one after another without any break; that is, they are not started on new pages. The Introduction, however, is not labelled with the heading 'Introduction'. Page 3 has three components: the running head with the page number on the upper right side of the page, the complete title centred on the page, and the text of the introduction itself. A good introduction consists of several commonly employed components:

(i) The researcher must give a clear and definitive statement of the problem in the light of pertinent studies. He must develop it logically and must indicate the need for the designated research and why the problem is important in terms of theory and/or practice.

(ii) The second important component of the introduction is the review of the previous literature. The researcher must try to establish an understanding of the existing literature relevant to his or her study, and he needs to connect the previous body of literature logically with the present study.

(iii) The final component of the introduction is to formulate a clear rationale of the study. The hypothesis must be clearly stated so that it is clear how it would be scientifically tested, and the different variables and terms of the hypotheses to be tested should be properly defined.

IV. Method
The main body of the report continues with the method section after the introduction. The major purpose of this section is to tell the reader how the research was conducted. The method section is separated from the introduction by the centred heading 'Method'. Subsections are subsequently labelled at the left margin and underlined. In general, the following subsections, in order, are included in the method section.

A. Subjects: The population should be clearly specified, as well as the method of drawing samples. In specifying the population, such characteristics as age, sex, geographical location, SES, race, institutional affiliation, etc., should be clearly mentioned. Not only that, the total number of subjects and the number assigned to each condition should also be spelled out. The researcher should also state that the treatment given to subjects was in accordance with the ethical standards of the APA.

B. Apparatus: If the research has been conducted with the help of some apparatuses, their names and model numbers should also be mentioned so that another researcher, if he wishes, may obtain or construct them. If possible, the researcher should provide a diagram of the apparatus in his write-up, and this is extremely important where the apparatus is a complex and novel one. Minor pieces of apparatus such as pencil, pen, blade, etc., should not be listed.

C. Design: The type of research design should also be spelled out after the subsection of apparatus. Here the procedure for assigning participants to groups, as well as the labels given to the groups (experimental or control), should also be indicated. The IV and DV should be clearly spelled out, and these variables should be carefully defined if they have not been defined in the introduction section. The technique of exercising experimental control should also be spelled out.*

D. Procedure: This subsection describes the actual steps carried out in conducting the study. This includes the instructions given to the subjects (if they are human), the assignment of subjects to conditions, how the IV was administered and how the DV was measured. It also includes the order of assessments if there is more than one. In any case, enough information must be provided to permit easy replication by a subsequent researcher.

*If the research does not incorporate a complex design, the 'Design' subsection may be dropped and the rubrics of design may be included in Procedure.

V. Results
The third section of the main body is the results section, whose purpose is to provide sufficient relevant data to test the hypothesis. All relevant data, including those that don't support the hypothesis, are presented; it is not necessary to provide detail about how the conclusion was reached. The heart of the results section is the presentation of data. Tables and figures, labelled with numbers and complete titles, are two commonly employed ways of presenting and supplementing the data in the text. A table consists of several columns of data, while a figure is a graph, photograph, chart or chart-like material which can show certain kinds of data, such as the progress of learning or maturation over a period, and summarize the major findings of the study. Data in the text and data in tables or figures should not be redundant; rather, they should be complementary. Results of the statistical analyses carried out should be provided, and the significance levels for these statistical analyses should also be presented and discussed in this section.

VI. Discussion
The final section of the main body of the report is the discussion. The major function of this section is to interpret the implications of the results of the study and to relate those results to other studies. The hypotheses, whether supported or not supported, are discussed. If the findings are contrary to the hypothesis, some new explanation may be proposed, and new hypotheses may also be advanced about any uncommon deviation in the results. Sometimes faulty hypotheses can be modified to make them consistent with the results obtained. Null results should also be discussed; such results occur when a hypothesis predicts something but the results don't support that prediction. A brief speculation about why they occurred is sufficient. A brief discussion of the limitations of the present study and proposals for future research is also appropriately placed here. Finally, the researcher includes conclusions that reflect whether the original problem is better resolved as a result of the investigation.

VII. References
The 'References' section begins on a new page with the label 'References' at the centre. References comprise all documents, including journals, books, technical reports, computer programs and unpublished works, mentioned in the text of the report. References are arranged in alphabetical order by the last name of the author(s), with the year of publication in parenthesis. Sometimes no author is listed, and in that condition the first word of the title or the sponsoring organization is used to begin the entry. When more than one name is cited within parenthesis, the references are separated by semicolons. In parenthesis, the page number is given only for direct quotations. The researcher should check carefully that all references cited in the text appear in the references and vice versa. References should not be confused with a Bibliography. A bibliography contains everything that is included in the reference section plus other publications which are useful but were not cited in the text or manuscript; a bibliography is not generally included in research reports, and only references are usually included. The Publication Manual of the American Psychological Association, 5th ed. (2002), has given some specific guidelines for writing references of various types of works, as indicated below.

1. For references of books with a single author:
Siegel, S (1956) Nonparametric Statistics for the Behavioural Sciences. New York: McGraw-Hill.

2. For references of books with multiple authors:
Guilford, JP & Fruchter, B (1978) Fundamental Statistics in Psychology and Education. New York: McGraw-Hill.
3. For references of an editor as author:
Misra, G. (ed.) (2011) Psychology in India, vol. 4. Delhi: Pearson.

4. For references of a corporate body or association as author:
American Psychological Association (1983) Publication Manual (3rd ed.). Washington, DC.

5. For references of a journal article:
Edwards, AL & Kenny, KC (1946) A comparison of the Thurstone and Likert techniques of attitude scale construction. Journal of Applied Psychology, 30, 72-83.

6. For references of a thesis or dissertation (unpublished):
Singh, AK (1978) Construction and Standardization of a Verbal Intelligence Scale. Unpublished Doctoral Dissertation, Patna University, Patna.

7. For references of a chapter in an edited book:
Atkinson, RC & Shiffrin, RM (1968) Human memory: A proposed system and its control processes. In The Psychology of Learning and Motivation, ed. K W Spence and J T Spence, vol. 2, pp 89-195. New York: Academic Press.

8. For references of magazine articles:
Basant, K. (2013, August) The usefulness of psychology to common people. Psychology Today and Tomorrow, 15-20.

9. For references of an unpublished paper presented at a meeting:
Singh, A K, Kumar, P & Sen Gupta, A (2005, July) Demographic Correlates of Job Stress. Paper presented at the Annual Meeting of the Council for Social Sciences, Lucknow.

10. For references of unpublished manuscripts:
Gupta, R. (1995) Social awareness in relation to media among college students. Unpublished Manuscript, Ranchi University, Ranchi.
Orne, M. K. (2013) A scale to assess Big Five Personality dimensions. Manuscript submitted for publication.

11. For references of citations from internet sources:
American Psychological Association (2001) APA Style Homepage. Retrieved July 30, 2001, from http://www.apastyle.org

VIII. Appendix
In an appendix, those items of information are provided that would be inappropriate or too long for the main body of the report. Each appendix appears on a new page with the label 'Appendix', along with an identifying letter, centred. Usually in an appendix materials such as tests, detailed statistical treatments, computer programmes, etc., are included.

IX. Author Note
In this part the researcher writes about four basic things: each author's affiliation, the sources of financial support, acknowledgements, and contact information for further information, in case many authors have contributed to the research work.

STYLE OF WRITING A RESEARCH REPORT
The research report should be written in a style that is clear and concise. Slang, hackneyed phrases and a folksy style should be avoided. The research report should describe and explain rather than try to convince or move to some action. Personal pronouns should be used only when they are appropriate and demanded; otherwise their use should be discouraged to ensure objectivity. Only the last name(s) of the cited author(s) should be used; titles like Professor, Dean, Dr, Mr, etc., should be omitted. In describing research procedures, the past tense should be used. Abbreviations should be used only after their referent has been clearly spelled out in parenthesis; of course, there are a few exceptions to this rule for well-known abbreviations such as IQ or DIQ. Important and standard statistical formulas are not included, and detailed statistical computations are also not presented in the research report; however, if some unusual statistical analysis is used, it is appropriate to mention the formulas. Although in most research reports the generic term 'subjects' is used, the APA style recommends the use of a less impersonal and more precise term. Therefore, 'participants' rather than 'subjects' should be written, and where appropriate, more descriptive terms such as men, women, children, pigeons, etc., should be used.

Apart from these specific guidelines, some general rules for writing a research report or article that conforms to APA style should be followed. These general rules may be discussed as under:

1. Since a research report describes a completed study, it should be written in the past tense. The exception is to write in the present tense only those conclusions that apply to present or future situations.

2. All sources from which information has been obtained are cited by only the last names of the authors and the year. A reference can be used as the subject of the sentence: Hilgard and Bower (1975) defined learning as "...". Or the researcher may state an idea and provide the reference in parentheses: Learning is defined as ... (Hilgard & Bower, 1975). In a parenthetical citation '&' is used in place of 'and'. When there are three to six authors, all names should be included at the first citation, and subsequently the work can be referred to by using the first author's name and the Latin 'et al.' Thus, in the first citation we can write Hilgard, Bower and Atkinson (1999), but subsequently we can say Hilgard et al. (1999). If there are more than six authors, we can cite the name of the first author and et al. even the first time.

3. Abbreviations should be avoided. They are justified only when (a) a term appears very frequently throughout the report, (b) the term consists of several words, and (c) numerous different abbreviations are not being used. If an abbreviation has to be used, this should be done by creating an acronym, using the first letter of each word of the term. The complete term should be defined the first time it is used, with its acronym in parentheses; for example, "Long-Term Memory (LTM) is defined as ...". Subsequently only the acronym should be used, except as the first word of a sentence, where the complete term should be used.

4. As far as possible, direct quoting should be avoided. The idea should be paraphrased and summarized so that readers know what the authors intend to say. Besides, an attempt should be made to address the study itself, not its authors. For example, the phrase "Hilgard et al." mainly refers to a reported study and not to the people who conducted it. In this way, we should write "The results are reported in Hilgard et al. (1999)" instead of "The results are reported by Hilgard et al. (1999)".

5. For distinguishing your study from other studies, refer to it as "this study" or "the present study".

6. As far as possible, approved psychological terminology should be used. When a nonstandard term or name of a variable is to be used, define the term or word the first time and then use that word consistently.

7. Numbers between zero and nine should be written in words, and numbers that are 10 or larger should be written in digits. However, digits can be used for any size of number when (a) the author is writing a series of numbers in which at least one is 10 or larger, or (b) the number expresses a decimal or refers to a statistical result or to a precise measurement. No sentence should begin with a number expressed in digits; numbers beginning a sentence should always be spelled out. Fractions less than one should be spelled out, and the expression "one-half" is preferred. The words 'per cent' (which means per hundred) should be spelled out unless used with tables and figures. Arabic numerals with commas should be used for numbers in the thousands or millions, such as 20,305,682, unless they begin a sentence.

If a research report is written keeping the above general rules in view, it would definitely provide readers the information necessary for (a) understanding the study, (b) evaluating the study and (c) performing a literal replication of the study.
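Two of the mechanical conventions described above, the 'et al.' citation rule and the basic number rule, can be written out directly as code. The sketch below renders the rules as stated in this section; it is a toy illustration with names of our own, not an official APA implementation, and it ignores the listed exceptions for series, decimals and statistics.

```python
def cite(authors, year, first_time=True):
    """In-text citation: one or two authors always in full; three to
    six authors in full only the first time, 'et al.' afterwards;
    more than six authors, 'et al.' even the first time."""
    n = len(authors)
    if n <= 2:
        names = " and ".join(authors)
    elif n <= 6 and first_time:
        names = ", ".join(authors[:-1]) + " and " + authors[-1]
    else:
        names = authors[0] + " et al."
    return f"{names} ({year})"

WORDS = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]

def format_count(n):
    """Numbers zero through nine in words; 10 and larger in digits."""
    return WORDS[n] if 0 <= n <= 9 else str(n)
```

For example, a first citation of a three-author work renders all names, while a later citation of the same work collapses to the first author plus 'et al.'.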

TYPING THE RESEARCH REPORT
It is the responsibility of the writer to present the manuscript or material to the typist in an ordered and proper form. Except for minor typographical matters, the correction of errors is not the responsibility of the professional typist. However, while typing a report, the following guidelines should carefully be followed:

(1) A good quality paper or bond paper, preferably of 8½" x 11" size, should be used for typing the materials. Only one side of the sheet is used.

(2) All margins, that is, top, bottom, left and right, should be 1½ inches.

(3) Words should not be split at the end of the line unless completing them would definitely interfere with the margins. If a word is to be divided at all, a dictionary should be consulted for correct syllabification.

(4) Direct quotations of three or fewer typewritten lines are included in the text and enclosed in quotation marks. Quotations of larger length (that is, more than three typed lines) are set off from the text in a double-spaced paragraph and indented about five spaces from the left margin, without quotation marks.

(5) Pagination information is given in parenthesis at the end of the direct quotation.

(6) Words, letters, digits or sentences which are to be finally printed in italics should be underlined.

EVALUATING A RESEARCH REPORT
A critical analysis of a research report is a valuable aid for understanding and gaining insight into the nature of problems, the methods adopted for solution of the problems, and the ways in which data are processed and conclusions reached. For making a critical evaluation of a research report, the following questions may be raised:

1. The title: (a) Is it clear? (b) Is it concise?
2. The problem: (a) Is it clearly stated? (b) Is its significance recognized? (c) Have specific questions been raised? (d) Are testable hypotheses framed? (e) Are assumptions stated? (f) Are limitations recognized? (g) Are important terms defined?
3. Review of the literature: (a) Does it well cover the area? (b) Are the main findings of the previous researches noted? (c) Is it well organized and summarized?
4. Methods: (a) Is the research design appropriate? (b) Is the research design described in detail? (c) Are the samples adequate? (d) Are the extraneous variables well controlled? (e) Are the psychometric properties of the data-gathering instruments or questionnaires satisfactory? (f) Are the statistical tests appropriate?
5. Results and discussion: (a) Are appropriate uses of tables and figures made? (b) Is the discussion in the text clear and concise? (c) Is the data analysis logical and perceptive? (d) Is the statistical analysis correctly interpreted?
6. Summary and conclusions: (a) Is the problem restated? (b) Are the questions or hypotheses restated? (c) Are the procedures well described? (d) Are the findings concisely reported? (e) Are the findings and conclusions clearly based upon the data collected and analyzed?

Using the pattern suggested above, the reader can make a critical analysis of any report, which would, in turn, help him in developing competency in conducting research and preparing a good report.

PREPARING A RESEARCH PROPOSAL
The writing of a research proposal is an important aspect of the research process. Before we elucidate the steps involved in preparing a research proposal, it is essential to throw light upon the nature of and need for a research proposal. A research proposal is a detailed plan of the research to be conducted. It is comparable to the blueprint which the architect prepares before the ordered work of building commences, and it provides the basis for the evaluation of the proposed project. Many institutions require that a research proposal must be submitted before the research work is finally approved. Researchers also often seek financial support for their work by submitting grant proposals to government and even private agencies. The main purpose of a research proposal is to ensure a workable experimental design which, when implemented, may result in analyzable and interpretable findings of significant scientific merit.

Generally, a written research proposal follows the general format of a journal article, but the headings of the various sections are a bit different. The following are the nine steps generally followed in a research proposal. (Many research institutions or agencies may suggest some other formats for the proposals.)

1. Problem
2. Definitions, assumptions, limitations or delimitations
3. Review of related literature
4. Hypothesis
5. Methods
6. Time schedule
7. Expected results
8. References
9. Appendix

These nine steps may be discussed as follows:

1. Problem: The problem of the research proposal is usually expressed in a declarative statement, but it may be in the question form. The problem must be stated in such a form that it clearly tells about the major goal of the research. Besides its formulation, the researcher must mention why it is worth the time, effort and expense required to conduct the proposed research. In other words, the proposal writer should not only mention the problem clearly but must also demonstrate its significance. A few examples of problem statements are as follows:
(a) Coeducation improves the morality level of students.
(b) Active participation by students in politics may have a damaging effect upon their creativity.

2. Definitions, assumptions, limitations and delimitations: The proposal writer must define all the important variables included in the study in operational terms. These definitions provide a good background with which the researcher approaches the problem. Assumptions implied in the proposed study should also be clearly mentioned. Assumptions are statements which the researcher believes to be facts but which he can't verify. Limitations of the study should also be mentioned. Limitations generally include those factors or conditions which are beyond the control of the researcher but are likely to influence the conclusions or findings of the study. Inability to randomly select subjects for the study and unavailability of reliable and valid data-gathering instruments can be good examples of limitations of any research. Delimitations of the proposed research should also be clearly spelled out. Delimitations refer to the boundaries of the study; in other words, delimitations clearly tell who will be included as the sample of the study and for whom the obtained conclusions will be valid.

3. Review of related literature: The research proposal should present a review of the relevant literature. An effective review of the relevant literature includes those studies which have been competently executed and clearly reported and are closely related to the present problem. This step ensures that the researcher is familiar with what is already known and what is still unknown and has to be verified and tested. Moreover, it also helps to eliminate the duplication of what has already been done and provides the background for useful suggestions for further investigations. In searching the related literature, the researcher should, among other considerations, concentrate upon the population sampled, the variables defined, the extraneous variables controlled, the methods of similar but competently executed studies, the recommendations for further research, etc. The review of the literature in a research proposal should also include the major hypotheses to be tested and should inform the design of the study.

Review Questions
1. Taking an example of a research project, outline the major steps in writing a research report.
2. Discuss fully the main points to be considered in writing a research report.
3. Discuss some important considerations to be kept in mind while outlining a research proposal.
4. Hypotheses: The ce a research
Since writing a

Some minor hypotheses,


if any, should
also be formulated. hypothesi
should be formulatoIa
it is important that the hypothesis
tentative answerto a question, ore
is such a step which clarifies the
the formulation of hypothesis
data are gathered. In fact, The shoof
of the research investigation. hypothesis
the problem and also the underlying logic concerned area, testable and such that c e it
known facts in the
reasonable, consistent with
terms.
simplest possible
stated in the
5. Methods: This part of the research proposal is very important. It includes three subsections: subjects, procedure and data analysis. The subjects subsection spells out the details of the population from which subjects are to be selected. The total number of subjects desired from the population and how they will be selected are generally indicated in this subsection. The procedure subsection outlines the details of the research plan. In other words, this subsection outlines in detail what will be done, how it will be done, what data will be needed and what data-gathering devices will be used. The data-analysis subsection outlines the details of the method of analyzing the data by different statistical techniques. The details should preferably mention the rationale behind selecting the statistical techniques.
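To make such a rationale concrete, the data-analysis subsection might state, for example, that an independent-samples t test will be used to compare two groups. A minimal sketch of that computation follows; the group names and scores are hypothetical, and a real plan would cite the assumed design (two independent groups, equal variances):

```python
import math
from statistics import mean, variance  # variance() is the sample (n - 1) variance

def independent_t(group_a, group_b):
    """Student's t for two independent samples, assuming equal variances."""
    na, nb = len(group_a), len(group_b)
    # Pooled estimate of the common variance
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical achievement scores for an experimental and a control group
experimental = [52, 58, 61, 49, 55, 60]
control = [45, 50, 48, 53, 47, 44]
t, df = independent_t(experimental, control)
print(f"t({df}) = {t:.2f}")
```

Stating the test, its assumptions and the data it will be applied to in this way lets a reviewer judge whether the planned analysis matches the hypotheses.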
6. Time schedule: An effective research proposal must have a clear time schedule in which the entire study is divided into manageable parts and probable dates are assigned for their completion. Such steps help the investigator in budgeting his time and energy effectively, systematizing the study and minimizing the tendency to delay its completion.
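Such a schedule can be kept as a simple list of phases with target completion dates. The phases and dates below are purely illustrative, not part of any particular proposal:

```python
from datetime import date

# Hypothetical phases of a study, each with a probable completion date
schedule = [
    ("Review of related literature", date(2024, 1, 31)),
    ("Construction of data-gathering instruments", date(2024, 3, 15)),
    ("Data collection", date(2024, 5, 30)),
    ("Data analysis", date(2024, 7, 15)),
    ("Writing the report", date(2024, 9, 1)),
]

for phase, due in schedule:
    print(f"{phase:<45} due {due.isoformat()}")
```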

7. Expected results: A good research proposal should also indicate the possible or expected results as far as possible, although in some cases it may prove to be a Herculean task for the investigator to spell out the expected results. The expected results section should include a brief discussion of the anticipated results of the research and should also highlight those that are the most important for the research. In this section, reasonable alternatives to the expected results should be mentioned, and the problems likely to arise if the results deviate from the research hypothesis should also be spelled out.
8. References: The reference section should include the names of the authors along with the details of the publication of their research work. It should be more or less the same as the section that would be submitted with the final report. Sometimes literature that was not anticipated in the proposal may have to be included in the 'Discussion' section of the final report, but this should be the exception and not the rule.
9. Appendix: A research proposal ends with an appendix. An appendix should include a list of all materials that are to be used in the study. Among other things, it may include a copy of the test or scale used, a list of stimulus materials and apparatuses, a copy of the instructions to be given to the subjects, and so on.

Thus we find that the research proposal passes through several steps before it reaches its completion. A research proposer must keep all these steps in view at the time of writing a good research proposal.
