SPSS - How to use

A Step-by-Step Guide to Analysis and Interpretation

Brian C. Cronk




Choosing the Appropriate Statistical Test

[Decision-tree flowchart: starting from "What Is Your Question?", branches such as "Differences in Proportions," "More Than 1 Independent Variable," "More Than 2 Levels of the Independent Variable," and "More Than 1 Dependent Variable" lead to the appropriate statistical test.]
NOTE: Relevant section numbers are given in parentheses. For instance, "(6.9)" refers to Section 6.9 in Chapter 6.


Notice: SPSS is a registered trademark of SPSS, Inc. Screen images © by SPSS, Inc. and Microsoft Corporation. Used with permission. This book is not approved or sponsored by SPSS.

"Pyrczak Publishing" is an imprint of Fred Pyrczak, Publisher, A California Corporation.

Although the author and publisher have made every effort to ensure the accuracy and completeness of information contained in this book, we assume no responsibility for errors, inaccuracies, omissions, or any inconsistency herein. Any slights of people, places, or organizations are unintentional.

Project Director: Monica Lopez.

Consulting Editors: George Burruss, Jose L. Galvan, Matthew Giblin, Deborah Oh, Jack Petit, and Richard Rasor.

Editorial assistance provided by Cheryl Alcorn, Randall R. Bruce, Karen M. Disner, Brenda Koplin, Erica Simmons, and Sharon Young.

Cover design by Robert Kibler and Larry Nichols.

Printed in the United States of America by Malloy, Inc.

Copyright © 2008, 2006, 2004, 2002, 1999 by Fred Pyrczak, Publisher. All rights reserved. No portion of this book may be reproduced or transmitted in any form or by any means without the prior written permission of the publisher.

ISBN 1-884585-79-5

Table of Contents

Introduction to the Fifth Edition
    What's New?
    Audience
    Organization
    SPSS Versions
    Availability of SPSS
    Conventions
    Screenshots
    Practice Exercises
    Acknowledgments

Chapter 1  Getting Started
    1.1 Starting SPSS
    1.2 Entering Data
    1.3 Defining Variables
    1.4 Loading and Saving Data Files
    1.5 Running Your First Analysis
    1.6 Examining and Printing Output Files
    1.7 Modifying Data Files

Chapter 2  Entering and Modifying Data
    2.1 Variables and Data Representation
    2.2 Transformation and Selection of Data

Chapter 3  Descriptive Statistics
    3.1 Frequency Distributions and Percentile Ranks for a Single Variable
    3.2 Frequency Distributions and Percentile Ranks for Multiple Variables
    3.3 Measures of Central Tendency and Measures of Dispersion for a Single Group
    3.4 Measures of Central Tendency and Measures of Dispersion for Multiple Groups
    3.5 Standard Scores

Chapter 4  Graphing Data
    4.1 Graphing Basics
    4.2 The New SPSS Chart Builder
    4.3 Bar Charts, Pie Charts, and Histograms
    4.4 Scatterplots
    4.5 Advanced Bar Charts
    4.6 Editing SPSS Graphs

Chapter 5  Prediction and Association
    5.1 Pearson Correlation Coefficient
    5.2 Spearman Correlation Coefficient
    5.3 Simple Linear Regression
    5.4 Multiple Linear Regression

Chapter 6  Parametric Inferential Statistics
    6.1 Review of Basic Hypothesis Testing
    6.2 Single-Sample t Test
    6.3 Independent-Samples t Test
    6.4 Paired-Samples t Test
    6.5 One-Way ANOVA
    6.6 Factorial ANOVA
    6.7 Repeated-Measures ANOVA
    6.8 Mixed-Design ANOVA
    6.9 Analysis of Covariance
    6.10 Multivariate Analysis of Variance (MANOVA)

Chapter 7  Nonparametric Inferential Statistics
    7.1 Chi-Square Goodness of Fit
    7.2 Chi-Square Test of Independence
    7.3 Mann-Whitney U Test
    7.4 Wilcoxon Test
    7.5 Kruskal-Wallis H Test
    7.6 Friedman Test

Chapter 8  Test Construction
    8.1 Item-Total Analysis
    8.2 Cronbach's Alpha
    8.3 Test-Retest Reliability
    8.4 Criterion-Related Validity

Appendix A  Effect Size
Appendix B  Practice Exercise Data Sets
    Practice Data Set 1
    Practice Data Set 2
    Practice Data Set 3
Appendix C  Glossary
Appendix D  Sample Data Files Used in Text
    COINS.sav, GRADES.sav, HEIGHT.sav, QUESTIONS.sav, RACE.sav, SAMPLE.sav, SAT.sav, Other Files
Appendix E  Information for Users of Earlier Versions of SPSS
Appendix F  Graphing Data with SPSS 13.0 and 14.0

Chapter 1
Getting Started

Section 1.1 Starting SPSS

Startup procedures for SPSS will differ slightly, depending on the options your system administrator selected for your version of the program. On most computers, you can start SPSS by clicking on Start, then clicking on Programs, then on SPSS. On many installations, there will be an SPSS icon on the desktop that you can double-click to start the program.

When SPSS is started, you may be presented with the dialog box to the left, depending on the exact configuration of the machine on which it is installed. If you have the dialog box, click Type in data and OK, which will present a blank data window. If you were not presented with the dialog box, SPSS should open automatically with a blank data window.

The data window and the output window provide the basic interface for SPSS. A blank data window is shown below.

Section 1.2 Entering Data

One of the keys to success with SPSS is knowing how it stores and uses your data. To illustrate the basics of data entry with SPSS, we will use Example 1.2.1.¹

¹ Items that appear in the glossary are presented in bold. Italics are used to indicate menu items.

Example 1.2.1
A survey was given to several students from four different classes (Tues/Thurs mornings, Tues/Thurs afternoons, Mon/Wed/Fri mornings, and Mon/Wed/Fri afternoons). The students were asked

whether or not they were "morning people" and whether or not they worked. This survey also asked for their final grade in the class (100% being the highest grade possible).

The response sheets from two students are presented below:

Response Sheet 1
ID: 4593
Day of class:              ___ MWF    _X_ TTh
Class time:                ___ Morning    _X_ Afternoon
Are you a morning person?  ___ Yes    _X_ No
Final grade in class:      85%
Do you work outside school?  ___ Full-time    ___ Part-time    _X_ No

Response Sheet 2
ID: 1901
Day of class:              _X_ MWF    ___ TTh
Class time:                _X_ Morning    ___ Afternoon
Are you a morning person?  _X_ Yes    ___ No
Final grade in class:      83%
Do you work outside school?  ___ Full-time    _X_ Part-time    ___ No

Our goal is to enter the data from the two students into SPSS for use in future analyses. The first step is to determine the variables that need to be entered. Any information that can vary among participants is a variable that needs to be considered. Example 1.2.2 lists the variables we will use.

Example 1.2.2
ID
Day of class
Class time
Morning person
Final grade
Whether or not the student works outside school

In the SPSS data window, columns represent variables, and rows represent participants. Therefore, we will be creating a data file with six columns (variables) and two rows (students/participants).

Section 1.3 Defining Variables

Before we can enter any data, we must first enter some basic information about each variable into SPSS. For instance, variables must first be given names that:
• begin with a letter, and
• do not contain spaces.

Thus,

the variable name "Q7" is acceptable, while the variable name "7Q" is not. Similarly, the variable name "PRE_TEST" is acceptable, but the variable name "PRE TEST" is not. Capitalization does not matter, but variable names are capitalized in this text to make it clear when we are referring to a variable name, even if the variable name is not necessarily capitalized in screenshots.

To define a variable, click on the Variable View tab at the bottom of the main screen. This will show you the Variable View window. From the Variable View screen, SPSS allows you to create and edit all of the variables in your data file. Each column represents some property of a variable, and each row represents a variable. All variables must be given a name. To do that, click on the first empty cell in the Name column and type a valid SPSS variable name. The program will then fill in default values for most of the other properties.

One useful function of SPSS is the ability to define variable and value labels. Variable labels allow you to associate a description with each variable. Value labels allow you to associate a description with each value of a variable. These descriptions can describe the variables themselves or the values of the variables.

For example, for most procedures, SPSS requires numerical values. Thus, for data such as the day of the class (i.e., Mon/Wed/Fri and Tues/Thurs), we need to first code the values as numbers. We can assign the number 1 to Mon/Wed/Fri and the number 2 to Tues/Thurs. To help us keep track of the numbers we have assigned to the values, we use value labels.

To assign value labels, click in the cell you want to assign values to in the Values column. This will bring up a small gray button (see arrow, below at left). Click on that button to bring up the Value Labels dialog box. When you enter a value label, you must click Add after each entry. This will move the value and its associated label into the bottom section of the window. When all labels have been added, click OK to return to the Variable View window. To return to the Data View window, click on the Data View tab.

In addition to naming and labeling the variable, you have the option of defining the variable type. To do so, simply click on the Type, Width, or Decimals columns in the Variable View window. The default value is a numeric field that is eight digits wide with two decimal places displayed. If your data are more than eight digits to the left of the decimal place, they will be displayed in scientific notation (e.g., the number 2,000,000,000 will be displayed as 2.00E+09). SPSS maintains accuracy beyond two decimal places, but all output will be rounded to two decimal places unless otherwise indicated in the Decimals column.² In our example, we will be using numeric variables with all of the default values.

² Depending upon your version of SPSS.

Practice Exercise

Create a data file for the six variables and two sample students presented in Example 1.2.1. Name your variables: ID, DAY, TIME, MORNING, GRADE, and WORK. You should code DAY as 1 = Mon/Wed/Fri, 2 = Tues/Thurs. Code TIME as 1 = morning, 2 = afternoon. Code MORNING as 0 = No, 1 = Yes. Code WORK as 0 = No, 1 = Part-time, 2 = Full-time. Be sure you enter value labels for the different variables. Note that because value labels are not appropriate for ID and GRADE, these are not coded. When done, your Variable View window should look like the screenshot below.

Click on the Data View tab to open the data-entry screen. Enter data horizontally, beginning with the first student's ID number. Enter the code for each variable in the appropriate column. To enter the GRADE variable value, enter the student's class grade.
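The coding scheme just described is entered through the SPSS GUI, but the same logic can be written out explicitly. The following standard-library Python sketch (an outside illustration, not part of the book or of SPSS; the function name `encode` is invented for this example) turns one response sheet into the numeric codes SPSS would store:

```python
# Value-label coding scheme from the practice exercise (the mappings are
# the ones defined in the text; the code itself is only an illustration).
DAY = {"Mon/Wed/Fri": 1, "Tues/Thurs": 2}
TIME = {"morning": 1, "afternoon": 2}
YES_NO = {"No": 0, "Yes": 1}
WORK = {"No": 0, "Part-time": 1, "Full-time": 2}

def encode(day, time, morning, grade, work):
    """Turn one response sheet into the numeric codes SPSS would store."""
    return {
        "DAY": DAY[day],
        "TIME": TIME[time],
        "MORNING": YES_NO[morning],
        "GRADE": grade,  # ratio-scale value, entered as-is (not coded)
        "WORK": WORK[work],
    }

# Student 4593: Tues/Thurs afternoon, not a morning person, 85%, no job.
student_4593 = encode("Tues/Thurs", "afternoon", "No", 85.0, "No")
print(student_4593)
# → {'DAY': 2, 'TIME': 2, 'MORNING': 0, 'GRADE': 85.0, 'WORK': 0}
```

Note that ID and GRADE pass through uncoded, exactly as in the exercise: value labels only make sense for variables whose numbers stand in for categories.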

The previous data window can be changed to look instead like the screenshot below by clicking on the Value Labels icon (see arrow). In this case, the cells display the value labels rather than the corresponding codes. If data are entered in this mode, it is not necessary to enter codes; clicking the button which appears in each cell as the cell is selected will present a drop-down list of the predefined labels. You may use either method, according to your preference. Instead of clicking the Value Labels icon, you may optionally toggle between views by clicking Value Labels under the View menu.

Section 1.4 Loading and Saving Data Files

Once you have entered your data, you will need to save it with a unique name for later use so that you can retrieve it when necessary.

Loading and saving SPSS data files works in the same way as most Windows-based software. Under the File menu, there are Open, Save, and Save As commands. SPSS data files have a ".sav" extension, which is added by default to the end of the filename. This tells Windows that the file is an SPSS data file.

Save Your Data
When you save your data file (by clicking File, then clicking Save or Save As to specify a unique name), pay special attention to where you save it. Most systems default to the location <c:\program files\spss>. You will probably want to save your data on a floppy disk, CD-R, or removable USB drive so that you can take the file with you.

Load Your Data
When you load your data (by clicking File, then clicking Open, then Data, or by clicking the open file folder icon), you get a similar window. This window lists all files with the ".sav" extension. If you have trouble locating your saved file, make sure you are looking in the right directory.
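SPSS's .sav format is binary and specific to SPSS, but the save-then-reload workflow itself is generic. As a rough stand-in (an illustration only; CSV is not the .sav format, and the filename and fields here are just the chapter's sample data), the same round trip in standard-library Python:

```python
import csv
import os
import tempfile

# The two sample students, as a small table of records.
rows = [
    {"ID": "4593", "GRADE": "85.00"},
    {"ID": "1901", "GRADE": "83.00"},
]

# "Save As": write the data out under a unique name, noting where it goes.
path = os.path.join(tempfile.gettempdir(), "SAMPLE.csv")
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ID", "GRADE"])
    writer.writeheader()
    writer.writerows(rows)

# "Open > Data": read the file back, as you would in a later session.
with open(path, newline="") as f:
    loaded = list(csv.DictReader(f))

print(loaded == rows)  # → True
```

The point mirrors the text's advice: the data survive only if you know the exact name and directory they were saved under.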

Practice Exercise

To be sure that you have mastered saving and opening data files, name your sample data file "SAMPLE" and save it to a removable storage medium. Once it is saved, exit SPSS (by clicking File, then Exit). Restart SPSS and load your data by selecting the "SAMPLE.sav" file you just created.

After you have saved your data, SPSS will display the name of the file at the top of the data window. Note that filenames may be upper- or lowercase. In this text, uppercase is used for clarity. It is wise to save your work frequently, in case of computer crashes.

Section 1.5 Running Your First Analysis

Any time you open a data window, you can run any of the analyses available. To get started, we will calculate the students' average grade. (With only two students, you can easily check your answer by hand.)

The majority of the available statistical tests are under the Analyze menu. This menu displays all the options available for your version of the SPSS program (the menus in this book were created with SPSS Student Version 15.0). Other versions may have slightly different sets of options.

To calculate a mean (average), we are asking the computer to summarize our data set. Therefore, we run the command by clicking Analyze, then Descriptive Statistics, then Descriptives. This brings up the Descriptives dialog box. Note that the left side of the box contains a list of all the variables in our data file. On the right is an area labeled Variable(s), where we can specify the variables we would like to use in this particular analysis.

We want to compute the mean for the variable called GRADE. Thus, we need to select the variable name in the left window (by clicking on it). To transfer it to the right window, click on the right arrow between the two windows. The arrow always points to the window opposite the highlighted item and can be used to transfer selected variables in either direction. Note that double-clicking on the variable name will also transfer the variable to the opposite window. Standard Windows conventions of "Shift" clicking or "Ctrl" clicking to select multiple variables can be used as well. When we click on the OK button, the analysis will be conducted, and we will be ready to examine our output.

Section 1.6 Examining and Printing Output Files

After an analysis is performed, the output is placed in the output window, and the output window becomes the active window. If this is the first analysis you have conducted since starting SPSS, then a new output window will be created. If you have run previous analyses and saved them, your output is added to the end of your previous output.

To switch back and forth between the data window and the output window, select the desired window from the Window menu bar (see arrow, below).

The output window is split into two sections. The left section is an outline of the output (SPSS refers to this as the "outline view"). The right section is the output itself.

Descriptives
                      N    Minimum   Maximum   Mean      Std. Deviation
GRADE                 2    83.00     85.00     84.0000   1.41421
Valid N (listwise)    2

The section on the left of the output window provides an outline of the entire output window. All of the analyses are listed in the order in which they were conducted. Note that this outline can be used to quickly locate a section of the output. Simply click on the section you would like to see, and the right window will jump to the appropriate place.
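The Descriptives figures for GRADE can be verified by hand, or with a few lines of standard-library Python (an outside check, not SPSS):

```python
import statistics

grades = [85.0, 83.0]  # the two students' GRADE values

n = len(grades)
mean = statistics.mean(grades)
sd = statistics.stdev(grades)  # sample standard deviation, as SPSS reports

print(n, min(grades), max(grades), mean, round(sd, 5))
# → 2 83.0 85.0 84.0 1.41421
```

With only two values, the sample standard deviation is the square root of 2, which rounds to the 1.41421 shown in the output.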

Clicking on a statistical procedure also selects all of the output for that command. By pressing the Delete key, that output can be deleted from the output window. This is a quick way to be sure that the output window contains only the desired output. Output can also be selected and pasted into a word processor by clicking Edit, then Copy Objects to copy the output. You can then switch to your word processor and click Edit, then Paste.

To print your output, click File, then Print, or click on the printer icon on the toolbar. You will have the option of printing all of your output or just the currently selected section. Be careful when printing! Each time you run a command, the output is added to the end of your previous output. Thus, you could be printing a very large output file containing information you may not want or need.

One way to ensure that your output window contains only the results of the current command is to create a new output window just before running the command. To do this, click File, then New, then Output. All your subsequent commands will go into your new output window.

Practice Exercise

Load the sample data file you created earlier (SAMPLE.sav). Run the Descriptives command for the variable GRADE and print the output. Next, select the data window and print it. Your output should look like the example on page 7.

Section 1.7 Modifying Data Files

Once you have created a data file, it is really quite simple to add additional cases (rows/participants) or additional variables (columns). Consider Example 1.7.1.

Example 1.7.1
Two more students provide you with surveys. Their information is:

Response Sheet 3
ID: 8734
Day of class:              ___ MWF    _X_ TTh
Class time:                _X_ Morning    ___ Afternoon
Are you a morning person?  ___ Yes    _X_ No
Final grade in class:      80%
Do you work outside school?  ___ Full-time    ___ Part-time    _X_ No

Response Sheet 4
ID: 1909
Day of class:              _X_ MWF    ___ TTh
Class time:                _X_ Morning    ___ Afternoon
Are you a morning person?  _X_ Yes    ___ No
Final grade in class:      73%
Do you work outside school?  ___ Full-time    _X_ Part-time    ___ No

To add these data, simply place two additional rows in the Data View window (after loading your sample data). Notice that as new participants are added, the row numbers become bold. When done, the screen should look like the screenshot here:

ID       DAY           TIME       MORNING   GRADE   WORK
4593.00  Tues/Thurs    afternoon  No        85.00   No
1901.00  Mon/Wed/Fri   morning    Yes       83.00   Part-Time
8734.00  Tues/Thurs    morning    No        80.00   No
1909.00  Mon/Wed/Fri   morning    Yes       73.00   Part-Time

New variables can also be added. For example, if the first two participants were given special training on time management, and the two new participants were not, the data file can be changed to reflect this additional information. The new variable could be called TRAINING (whether or not the participant received training), and it would be coded so that 0 = No and 1 = Yes. Thus, the first two participants would be assigned a "1" and the last two participants a "0." To do this, switch to the Variable View window, then add the TRAINING variable to the bottom of the list. Then switch back to the Data View window to update the data.

Adding data and adding variables are just logical extensions of the procedures we used to originally create the data file. Save this new data file. We will be using it again later in the book.

Practice Exercise

Follow the example above (where TRAINING is the new variable). Make the modifications to your SAMPLE.sav data file and save it.
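The add-cases and add-variable steps can be mirrored outside SPSS. A standard-library Python sketch of the same logic (an illustration only; in SPSS you would do this in the Data View and Variable View windows):

```python
# Existing data file: the first two participants.
data = [
    {"ID": 4593, "GRADE": 85.0},
    {"ID": 1901, "GRADE": 83.0},
]

# Add the two new cases (rows) from Response Sheets 3 and 4.
data.append({"ID": 8734, "GRADE": 80.0})
data.append({"ID": 1909, "GRADE": 73.0})

# Add the new TRAINING variable (column), coded 0 = No and 1 = Yes:
# the first two participants received training, the last two did not.
for i, row in enumerate(data):
    row["TRAINING"] = 1 if i < 2 else 0

print([row["TRAINING"] for row in data])  # → [1, 1, 0, 0]
```

As in SPSS, adding a column simply extends every existing row, so the new variable must be filled in for old cases as well as new ones.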

Chapter 2
Entering and Modifying Data

In Chapter 1, we learned how to create a simple data file, save it, perform a basic analysis, and examine the output. In this section, we will go into more detail about variables and data.

Section 2.1 Variables and Data Representation

In SPSS, variables are represented as columns in the data file. Participants are represented as rows. Thus, if we collect 4 pieces of information from 100 participants, we will have a data file with 4 columns and 100 rows.

Measurement Scales

There are four types of measurement scales: nominal, ordinal, interval, and ratio. While the measurement scale will determine which statistical technique is appropriate for a given set of data, SPSS generally does not discriminate. Thus, we start this section with this warning: If you ask it to, SPSS may conduct an analysis that is not appropriate for your data. For a more complete description of these four measurement scales, consult your statistics text or the glossary in Appendix C.

Look at the sample data file we created in Chapter 1. We calculated a mean for the variable GRADE. GRADE was measured on a ratio scale, and the mean is an acceptable summary statistic (assuming that the distribution is normal).

We could have had SPSS calculate a mean for the variable TIME instead of GRADE. If we did, we would get the output presented here. The output indicates that the average TIME was 1.25. Remember that TIME was coded as an ordinal variable (1 = morning class, 2 = afternoon class). Thus, the mean is not an appropriate statistic for an ordinal scale, but SPSS calculated it anyway. The importance of considering the type of data cannot be overemphasized. Just because SPSS will compute a statistic for you does not mean that you should use it. Later in the text, when specific statistical procedures are discussed, the conditions under which they are appropriate will be addressed.

Newer versions of SPSS allow you to indicate which types of data you have when you define your variable. You do this using the Measure column. You can indicate Nominal, Ordinal, or Scale (SPSS does not distinguish between interval and ratio scales).
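The warning is easy to reproduce: any software will happily average ordinal codes. A standard-library Python sketch (an outside illustration, not SPSS) using the four sample students' TIME codes:

```python
import statistics

# TIME codes for the four sample students (ordinal: 1 = morning, 2 = afternoon).
time_codes = [2, 1, 1, 1]

# Nothing stops us from computing a mean, just as nothing stops SPSS...
print(statistics.mean(time_codes))  # → 1.25

# ...but "the average class meets at time 1.25" is meaningless. For ordinal
# data, a frequency count of each category is the appropriate summary.
```

The software cheerfully returns 1.25 either way; knowing that the number is nonsense is the analyst's job, not the program's.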

Missing Data

Often, participants do not provide complete data. For some students, you may have a pretest score but not a posttest score. Perhaps one student left one question blank on a survey, or perhaps she did not state her age. Missing data can weaken any analysis. Often, a single missing question can eliminate a subject from all analyses.

SPSS represents missing data in the data window with a period (although you should not enter a period; just leave the cell blank). In the example to the left, the fourth subject did not complete Question 2. Note that the total score (which is calculated from both questions) is also blank because of the missing data for Question 2.

Section 2.2 Transformation and Selection of Data

We often have more data in a data file than we want to include in a specific analysis. For example, our sample data file contains data from four participants, two of whom received special training and two of whom did not. If we wanted to conduct an analysis using only the two participants who did not receive the training, we would need to specify the appropriate subset.

Selecting a Subset

We can use the Select Cases command to specify a subset of our data. The Select Cases command is located under the Data menu. When you select this command, the dialog box below will appear. You can specify which cases (participants) you want to select by using the selection criteria, which appear on the right side of the Select Cases dialog box.
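The Select Cases logic amounts to a simple filter. In standard-library Python (an illustration of the idea, not SPSS; the rows are the four sample students, with DAY coded 1 = Mon/Wed/Fri and 2 = Tues/Thurs):

```python
import statistics

data = [
    {"ID": 4593, "DAY": 2, "GRADE": 85.0},
    {"ID": 1901, "DAY": 1, "GRADE": 83.0},
    {"ID": 8734, "DAY": 2, "GRADE": 80.0},
    {"ID": 1909, "DAY": 1, "GRADE": 73.0},
]

# Select Cases, "If condition is satisfied": DAY = 1 keeps Mon/Wed/Fri only.
subset = [row for row in data if row["DAY"] == 1]

grades = [row["GRADE"] for row in subset]
print(len(subset), statistics.mean(grades))  # → 2 78.0
```

Filtering on DAY = 1 leaves the two Mon/Wed/Fri students, and any statistic computed afterward uses only those cases, which is why N drops when a subset is in effect.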

Jl _!JlJ selected..J.0000 7 .1. Then.!J-El [aasi"-Eo.1-glL1 E{''di'.:r.. If DAY is l.and SPSSwill select the case.for our sampledata.&l Flib'/.J:.m Na Yes Not Selected 1901.mPart-Time No .. are for = ilqex4q lffiIl.". For example.. An additional variable will also be createdin your data file.^.00 83..-.. rt lnl vl 1 I !k_l** -#gdd. -. data window will changeslightly.l-. vbryJ fsPssProcaesaFrcady I .*-.. only the secondand fourth cases selected this subset.m Tueffhui affsrnoon No ffi.*. N Minimum Maximum M e a n Deviation the output at right.m MpnMed/i mornino Yss 83. No No Not Selected ieifrfft MonA/Ved/1 morning Yes ru.-J. then click on the button labeledfi This will bring up a new dialog box that allowsyou to indicate which cases you would like to use. and the casewill not be selected. that case will not be 0 U IAFTAN(r"nasl . The new variable is called FILTER_$ and indicateswhethera casewas selected not..t----i selected.00 78.?. the statement will be false..-...l*.-< 2-'4 4 1 : TIME t *- s MORNING ERADE WORK TRAINING 4533.Once you have enteredthe logical statement.00 Va lid N 2 IliclwisP'l with a samplesize (M) of 2 insteadof 4.*"' .we will receive std. The most common way to selecta subsetis to click If condition is satisfied.': ID 1 'l 'l v. Notice that UKAUE 2 73. l3 ..'J lJ.i.click OK to returnto the data window. 'J-e.*..1 1 'l EffEN'EEEgl''EEE'o .Chapter2 Enteringand Modifying Data By default.U1Fad-Jime Yes Splacled 6h4lto TuElThu morning No m. . you can sl"J=tx -s*t"lBi!?Blt1trb :r select all casesthat were coded ?Ais"I c'-t I Ht I as Mon/Wed/Fri by entering the formula DAY = I in the upperright part of the window. or If we calculatea mean Descripthre Stailstics GRADE using the subset we just selected. If the logical statement is true for a given case.itayss 7 !LJ\ii. thenthe statement will be true..click Continue to return to the SelectCases dialog box.. After you have selected cases. then that case will be U. I I I 1 i{ l .*tI .All caseswill be selected. 
the first and third casesare not selected.For example..m .If DAY is anything other than l.If the logical statement is false. 0 7 1 1 we now have a mean of 78. *-J I i *] . the the The casesthat were not selected will be markedwith a diagonalline through the casenumber.. You can enter the logic used to select the subset in the upper section.-.

l a*intt m eltj I ables. save this data file as "QUESTIONS.tr T|{dorm manipulateyour existing vari*lslel EJ-rlrj -lgltj{l -|tlf.|J :3 lll--g'L'"J lllmr*dCof 0rr/ti* til &fntndi) Oldio... It is more efficient (and more accurate) to have SPSS compute the totals for you. so type the word "total. different questions. we will create a new data file. We I TrnnsformAnalyze Graphs Utilities Whds into Rersde 5ame Variable*.. this can causeconsiderableconfusionlater. into Varlables.s rt rt rl . These two blank areas are the .errtt*." will be usingit againin laterchapters." Notice that there is an equals sign between the Target Variable blank and the Numeric Expression blank. when your output. The variables represent the points each number of l* .._g-. we are creating a variable called TOTAL.Now enter the data shown on the screen to the right. gl w ca *l U $J-:iidijl lij -!CJ:l Jslcl rtg-sJ ll. rr | {q*orfmsrccucrsdqf n* ri c* rl "*l l4 . Now you will calculatethe total score for eachsubject.sav.---.2 Chapter Enteringand Modifying Data Be careful when you selectsubsets. This file will contain data for four participants and three variables (Ql. To illustrate this. or if there were a lot of questions.this would take a long time. as lected.LHJ participant received on three {#i#ffirtr!.We could do this manually. Vlsual 8inrfrg. The blank field marked Target Variable is where we enter the name of the new variable we want to create. rrw I i+t*.. In addition. After clicking the Compute Variable command.Be careful not to saveyour data file with a subsetselected. N will be less than the total number of recordsin your data set if a subsetis The diagonal lines through some caseswill also be evident when a subsetis seselected. Q2.. Computing a New Variable SPSScan also be used to compute a new variable or nh E* vir$.The subsetremains in ffict until you run the the because commandagain and selectall cases. In this example. When done. E${t iil :J ... and Q3). 
D.but if the data file were large. Ar*omSic Rarode. click Transform and then click Compute Variable. To do this.You can tell if you have a subsetselected you examine bottom of the data window will indicatethat a filter is on. we get the dialog box at right. Racodo Dffferant ..

.* * ^ c.-.oo 3 0 01 1. r ljl Recodinga Variable-Dffirent Variable F{| [dt ---. Rid&c l4sitE V*s..rs*trr. When we click OK..r . then Recodeinto Dffirent Variables.l |f.l{ rr 'r trl SPSS can create a new variable based upon data from another variable. U*dFhn|ro...t_y!"*_i Variabte ViewJ l-'r --i-----i lit W*. Eile gdit SEw Qata lransform $nalyza 9aphs [tilities Add'gns Sindow Help . a SPSSwill create new variablecalled TOTAL and make it equalto the sum of the threequestions.. l 41 I 2.:1 il I i . total : ql + q2 + q3 is the equation that is entered in the sample presentedhere (screenshot left).. Oc!t6 I}F sairs. i> t ii"alCt i-Jr:J::i l:j -:15 tJ -tJ-il is:Jlll i-3J:J JJJI --q-|J --q*J m J.00 i.' conp$o Cd.c 4.. Rrdon iMbar G. Say we want to split our participants the basisof on their total score. l\qg.0n 4..n0 10..: .!&dioncqdinl tsil nact I c:nt I x* | two sides of an equation that SPSS will calculate..0 0 1 .00 Art(tn*Rrcodr. l5 ..We want to create a variablecalled GROUP.ltrl-Xl 3..:1.00 3.. 2.. we click Transform.n0 4.| ldindm. -o ..00 31 2. For example.. !la{ Data j Trrx&tm Analrra . To do this. Save your data file again so that the new variablewill be available for future sessions.. t::. I I vdiouc'.. i .oo ..rll r-al +... l...u .. which is coded I if the total score is low (lessthan or equal to 8) or 2 if the total score is high (9 or larger). Note that it is posat sible to create any equation here simply by using the number and operational keypad at the bottom of the dialog box..nVail'r*dnCasas..Chapter2 Enteringand Modifying Data iii:Hffiliji:..lu l.m Racodr 0ffrror* Yal lrto S*a *rd llm tllhsd..

This will bring up the Recode into Different Variables dialog box shown here. Transfer the variable TOTAL to the middle blank. Type "group" in the Name field under Output Variable. Click Change, and the middle blank will show that TOTAL is becoming GROUP.

Click Old and New Values. This will bring up the Recode dialog box. In this example, we have entered an 8 in the Range, LOWEST through value blank and a 1 in the Value field under New Value. Click Add. Next, enter a 9 in the Range, value through HIGHEST blank and a 2 in the Value field under New Value. When we click Add, the blank on the right displays the recoding formula. Click Continue, then click OK. You will be redirected to the data window, as shown below. A new variable (GROUP) will have been added, coded as 1 or 2, based on TOTAL.

To help keep track of variables that have been recoded, it's a good idea to open the Variable View and enter "Recoded" in the Label column in the TOTAL row. This is especially useful with large datasets which may include many recoded variables.
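The recoding above can also be sketched in syntax form; the range keywords mirror the LOWEST through and through HIGHEST fields of the dialog box (this assumes TOTAL was created as in the previous section):

```spss
* Code GROUP as 1 for low totals (8 or less) and 2 for high totals (9 or more).
RECODE total (LOWEST THRU 8 = 1) (9 THRU HIGHEST = 2) INTO group.
EXECUTE.
```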

Chapter 3: Descriptive Statistics

In Chapter 2, we discussed many of the options available in SPSS for dealing with data. Now we will discuss ways to summarize our data. The procedures used to describe and summarize data are called descriptive statistics.

Section 3.1 Frequency Distributions and Percentile Ranks for a Single Variable

Description

The Frequencies command produces frequency distributions for the specified variables. The output includes the number of occurrences, percentages, valid percentages, and cumulative percentages. The valid percentages and the cumulative percentages comprise only the data that are not designated as missing.

The Frequencies command is useful for describing samples where the mean is not useful (e.g., nominal or ordinal scales). It is also useful as a method of getting a feel for your data. It provides more information than just a mean and standard deviation and can be useful in determining skew and identifying outliers. A special feature of the command is its ability to determine percentile ranks.

Assumptions

Cumulative percentages and percentiles are valid only for data that are measured on at least an ordinal scale. Because the output contains one line for each value of a variable, this command works best on variables with a relatively small number of values.

Drawing Conclusions

The Frequencies command produces output that indicates both the number of cases in the sample with a particular value and the percentage of cases with that value. Thus, conclusions drawn should relate only to describing the numbers or percentages of cases in the sample. If the data are at least ordinal in nature, conclusions regarding the cumulative percentage and/or percentiles can be drawn.

SPSS Data Format

The SPSS data file for obtaining frequency distributions requires only one variable, and that variable can be of any type.

Creating a Frequency Distribution

To run the Frequencies command, click Analyze, then Descriptive Statistics, then Frequencies. (This example uses the CARS.sav data file that comes with SPSS. It is typically located at <C:\Program Files\SPSS\Cars.sav>.) This will bring up the main dialog box. Transfer the variable for which you would like a frequency distribution into the Variable(s) blank to the right. Be sure that the Display frequency tables option is checked. Click OK to receive your output.

Note that the dialog boxes in newer versions of SPSS show both the type of variable (the icon immediately to the left of the variable name) and the variable labels if they are entered. Thus, the variable YEAR shows up in the dialog box as Model Year (modulo 100).

Output for a Frequency Distribution

The output consists of two sections. The first section indicates the number of records with valid data for each variable selected. Records with a blank score are listed as missing. In this example, the data file contained 406 records. Notice that the variable label is Model Year (modulo 100).

The second section of the output contains a cumulative frequency distribution for each variable selected. At the top of the section, the variable label is given. The output itself consists of five columns. The first column lists the values of the variable in sorted order. There is a row for each value of your variable, and additional rows are added at the bottom for the Total and Missing data. The second column gives the frequency of each value, including missing values. The third column gives the percentage of all records (including records with missing data) for each value. The fourth column, labeled Valid Percent, gives the percentage of records (without including records with missing data) for each value. If there were any missing values, these values would be larger than the values in column three because the total number of records would have been reduced by the number of records with missing values.

(The frequency table lists each model year from 70 through 82 with its frequency, percent, valid percent, and cumulative percent: 405 valid records, 1 missing, 406 total.)
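The same frequency table can be requested with a one-line syntax command. This is a sketch, assuming the variable is named YEAR as in the CARS.sav file:

```spss
FREQUENCIES VARIABLES=year.
```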

The final column gives cumulative percentages. Cumulative percentages indicate the percentage of records with a score equal to or smaller than the current value. Thus, the last value is always 100%. These values are equivalent to percentile ranks for the values listed.

Determining Percentile Ranks

The Frequencies command can be used to provide a number of descriptive statistics, as well as a variety of percentile values (including quartiles, cut points, and scores corresponding to a specific percentile rank). To obtain either the descriptive or percentile functions of the Frequencies command, click the Statistics button at the bottom of the main dialog box.

This brings up the Frequencies: Statistics dialog box. Check any additional desired statistic by clicking on the blank next to it. For percentiles, enter the desired percentile rank in the blank to the right of the Percentile(s) label. Then, click Add to add it to the list of percentiles requested. Once you have selected all your required statistics, click Continue to return to the main dialog box. Click OK.

Note that the Central Tendency and Dispersion sections of this box are useful for calculating values, such as the Median or Mode, which cannot be calculated with the Descriptives command (see Section 3.3).

Output for Percentile Ranks

The Statistics dialog box adds on to the previous output from the Frequencies command. The new section of the output is shown at left. The output contains a row for each piece of information you requested. In the example above, we checked Quartiles and asked for the 80th percentile. Thus, the output contains rows for the 25th, 50th, 75th, and 80th percentiles.

Statistics - Model Year (modulo 100)
N: Valid 405, Missing 1
Percentiles: 25 = 73.00, 50 = 76.00, 75 = 79.00, 80 = 80.00
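The quartiles and the 80th percentile requested in this example have a syntax equivalent as well (again a sketch, assuming the YEAR variable from CARS.sav):

```spss
* The /FORMAT=NOTABLE line suppresses the full frequency table.
FREQUENCIES VARIABLES=year
  /FORMAT=NOTABLE
  /PERCENTILES=25 50 75 80.
```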

Practice Exercise

Using Practice Data Set 1 in Appendix B, create a frequency distribution table for the mathematics skills scores. Determine the mathematics skills score at which the 60th percentile lies.

Section 3.2 Frequency Distributions and Percentile Ranks for Multiple Variables

Description

The Crosstabs command produces frequency distributions for multiple variables. The output includes the number of occurrences of each combination of levels of each variable. It is possible to have the command give percentages for any or all variables.

The Crosstabs command is useful for describing samples where the mean is not useful (e.g., nominal or ordinal scales). It is also useful as a method of getting a feel for your data.

Assumptions

Because the output contains a row or column for each value of a variable, this command works best on variables with a relatively small number of values.

SPSS Data Format

The SPSS data file for the Crosstabs command requires two or more variables. Those variables can be of any type.

Running the Crosstabs Command

To run the procedure, click Analyze, then Descriptive Statistics, then Crosstabs. This will bring up the main Crosstabs dialog box. This example uses the SAMPLE.sav data file.

The dialog box initially lists all variables on the left and contains two blanks labeled Row(s) and Column(s). Enter one variable (TRAINING) in the Row(s) box. Enter the second (WORK) in the Column(s) box. To analyze more than two variables, you would enter the third, fourth, etc., in the unlabeled area (just under the Layer indicator).

The Cells button allows you to specify percentages and other information to be generated for each combination of values. Click Cells, and you will get the box at right. For the example presented here, check Row, Column, and Total percentages. Then click Continue. This will return you to the Crosstabs dialog box. Click OK to run the analysis.

Interpreting Crosstabs Output

The output consists of a contingency table. Each level of WORK is given a column. Each level of TRAINING is given a row. In addition, a row is added for total, and a column is added for total. Each cell contains the number of participants (e.g., one participant received no training and does not work; two participants received training, regardless of employment status).

The percentages for each cell are also shown. Row percentages add up to 100% horizontally. Column percentages add up to 100% vertically. For example, of the individuals who had no training, 50% did not work and 50% worked part-time (using the "% within TRAINING" row). Of the individuals who did not work, 50% had no training and 50% had training (using the "% within WORK" column).

Practice Exercise

Using Practice Data Set 1 in Appendix B, create a contingency table using the Crosstabs command. Determine the number of participants in each combination of the variables SEX and MARITAL. What percentage of participants is married? What percentage of participants is male and married?
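Like the other procedures in this chapter, the Crosstabs analysis above can be run from syntax. A sketch, with the cell percentages from the Cells dialog box spelled out on the /CELLS subcommand:

```spss
CROSSTABS
  /TABLES=training BY work
  /CELLS=COUNT ROW COLUMN TOTAL.
```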
Section 3.3 Measures of Central Tendency and Measures of Dispersion for a Single Group

Description

Measures of central tendency are values that represent a typical member of the sample or population. The three primary types are the mean, median, and mode. Measures of dispersion tell you the variability of your scores. The primary types are the range and the standard deviation. Together, a measure of central tendency and a measure of dispersion provide a great deal of information about the entire dataset.

We will discuss these measures of central tendency and measures of dispersion in the context of the Descriptives command. Note that many of these statistics can also be calculated with several other commands (e.g., the Frequencies or Compare Means commands are required to compute the mode or median; the Statistics option for the Frequencies command is shown here).

Assumptions

Each measure of central tendency and measure of dispersion has different assumptions associated with it. The mean is the most powerful measure of central tendency, and it has the most assumptions. For example, to calculate a mean, the data must be measured on an interval or ratio scale. In addition, the distribution should be normally distributed or, at least, not highly skewed. The median requires at least ordinal data. Because the median indicates only the middle score (when scores are arranged in order), there are no assumptions about the shape of the distribution. The mode is the weakest measure of central tendency. There are no assumptions for the mode.

The standard deviation is the most powerful measure of dispersion, but it, too, has several requirements. It is a mathematical transformation of the variance (the standard deviation is the square root of the variance). Thus, the standard deviation requires data measured on an interval or ratio scale. In addition, the distribution should be normal. The range is the weakest measure of dispersion. To calculate a range, the variable must be at least ordinal. For nominal scale data, the entire frequency distribution should be presented as a measure of dispersion.

Drawing Conclusions

A measure of central tendency should be accompanied by a measure of dispersion. Thus, when reporting a mean, you should also report a standard deviation. When presenting a median, you should also state the range or interquartile range, if one is appropriate.

SPSS Data Format

Only one variable is required.
Running the Command

The Descriptives command will be the command you will most likely use for obtaining measures of central tendency and measures of dispersion. This example uses the SAMPLE.sav data file we have used in the previous chapters. To run the command, click Analyze, then Descriptive Statistics, then Descriptives. This will bring up the main dialog box for the Descriptives command. Any variables you would like information about can be placed in the right blank by double-clicking them or by selecting them, then clicking on the arrow.

By default, you will receive the N (number of cases/participants), the mean, the minimum value, the maximum value, and the standard deviation. If you would like to change the default statistics that are given, click Options in the main dialog box. You will be given the Options dialog box presented here. Note that some of these may not be appropriate for the type of data you have selected.

Reading the Output

The output for the Descriptives command is quite straightforward. Each type of output requested is presented in a column, and each variable is given in a row. The output presented here is for the sample data file. It shows that we have one variable (GRADE) and that we obtained the N, minimum, maximum, mean, and standard deviation for this variable.

Descriptive Statistics

                     N    Minimum   Maximum   Mean      Std. Deviation
grade                4    73.00     85.00     80.2500   5.25198
Valid N (listwise)   4
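The default Descriptives analysis shown above corresponds to the following syntax (a sketch; the /STATISTICS list matches the default checkboxes in the Options dialog box):

```spss
DESCRIPTIVES VARIABLES=grade
  /STATISTICS=MEAN STDDEV MIN MAX.
```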

Practice Exercise

Using Practice Data Set 1 in Appendix B, obtain the descriptive statistics for the age of the participants. What is the mean? The median? The mode? What is the standard deviation? Minimum? Maximum? The range?

Section 3.4 Measures of Central Tendency and Measures of Dispersion for Multiple Groups

Description

The measures of central tendency discussed earlier are often needed not only for the entire data set, but also for several subsets. One way to obtain these values for subsets would be to use the data-selection techniques discussed in Chapter 2 and apply the Descriptives command to each subset. An easier way to perform this task is to use the Means command. The Means command is designed to provide descriptive statistics for subsets of your data.

Assumptions

The assumptions discussed in the section on Measures of Central Tendency and Measures of Dispersion for a Single Group (Section 3.3) also apply to multiple groups.

Drawing Conclusions

A measure of central tendency should be accompanied by a measure of dispersion. Thus, when giving a mean, you should also report a standard deviation. When presenting a median, you should also state the range or interquartile range.

SPSS Data Format

Two variables in the SPSS data file are required. One represents the dependent variable and will be the variable for which you receive the descriptive statistics. The other is the independent variable and will be used in creating subsets. Note that while SPSS calls this variable an independent variable, it may not meet the strict criteria that define a true independent variable (e.g., treatment manipulation). Thus, some SPSS procedures refer to it as the grouping variable. See a research design text or your instructor for more details.

Running the Command

This example uses the SAMPLE.sav data file you created in Chapter 1. The Means command is run by clicking Analyze, then Compare Means, then Means. This will bring up the main dialog box for the Means command. Place the selected variable in the blank field labeled Dependent List.

Place the grouping variable in the box labeled Independent List. In this example, through use of the SAMPLE.sav data file, measures of central tendency and measures of dispersion for the variable GRADE will be given for each level of the variable MORNING.

By default, the mean, number of cases, and standard deviation are given. If you would like additional measures, click Options and you will be presented with the dialog box at right. You can opt to include any number of measures.

Reading the Output

The output for the Means command is split into two sections. The first section, called a case processing summary, gives information about the data used. In our sample data file, there are four students (cases), all of whom were included in the analysis.

The second section of the output is the report from the Means command. This report lists the name of the dependent variable at the top (GRADE). Every level of the independent variable (MORNING) is shown in a row in the table. In this example, the levels are 0 and 1, labeled No and Yes. Note that if a variable is labeled, the labels will be used instead of the raw values. An additional row is added, named Total. That row contains the combined data, and the values are the same as they would be if we had run the Descriptives command for the variable GRADE.

Report - GRADE

MORNING   Mean      N   Std. Deviation
No        82.5000   2   3.53553
Yes       78.0000   2   7.07107
Total     80.2500   4   5.25198

Extension to More Than One Independent Variable

If you have more than one independent variable, SPSS can break down the output even further. Rather than adding more variables to the Independent List section of the dialog box, you need to add them in a different layer. Note that SPSS indicates with which layer you are working. If you click Next, you will be presented with Layer 2 of 2, and you can select a second independent variable (e.g., TRAINING). Now, when you run the command (by clicking OK), you will be given summary statistics for the variable GRADE by each level of MORNING and TRAINING.

Your output will look like the output at right. You now have two main sections (No and Yes), and each main section is broken down into subsections (No, Yes, and Total). The variable you used in Layer 1 (MORNING) is the first one listed, and it defines the main sections. The variable you had in Layer 2 (TRAINING) is listed second. (The report now contains a row for each combination of MORNING and TRAINING, plus Total rows.) Thus, the summary statistics given in the report correspond to the data for which the levels of the independent variables are equal to the row headings.
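Both versions of the Means analysis, the single grouping variable and the two-layer breakdown, can be sketched in syntax; each additional BY adds a layer:

```spss
* One layer: GRADE broken down by MORNING.
MEANS TABLES=grade BY morning
  /CELLS=MEAN COUNT STDDEV.

* Two layers: GRADE broken down by MORNING, then by TRAINING.
MEANS TABLES=grade BY morning BY training
  /CELLS=MEAN COUNT STDDEV.
```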

Thus, the first row represents those participants who were not morning people and who received training. The second row represents participants who were not morning people and did not receive training. The third row represents the total for all participants who were not morning people.

Notice that standard deviations are not given for all of the rows. This is because there is only one participant per cell in this example. One problem with using many subsets is that it increases the number of participants required to obtain meaningful results.

Practice Exercise

Using Practice Data Set 1 in Appendix B, compute the mean and standard deviation of ages for each value of marital status. What is the average age of the married participants? The single participants? The divorced participants?

Section 3.5 Standard Scores

Description

Standard scores allow the comparison of different scales by transforming the scores into a common scale. The most common standard score is the z-score. A z-score is based on the standard normal distribution (i.e., a mean of 0 and a standard deviation of 1). A z-score, therefore, represents the number of standard deviations above or below the mean (e.g., a z-score of -1.5 represents a score 1 1/2 standard deviations below the mean).

Assumptions

Z-scores are based on the standard normal distribution. Therefore, the distributions that are converted to z-scores should be normally distributed, and the scales should be either interval or ratio.

Drawing Conclusions

Conclusions based on z-scores consist of the number of standard deviations above or below the mean. For example, a student scores 85 on a mathematics exam in a class that has a mean of 70 and a standard deviation of 5. The student's test score is 15 points above the class mean (85 - 70 = 15). The student's z-score is 3 because she scored 3 standard deviations above the mean (15 / 5 = 3). If the same student scores 90 on a reading exam with a class mean of 80 and a standard deviation of 10, the z-score will be 1.0 because she is one standard deviation above the mean. Thus, even though her raw score was higher on the reading test, she actually did better in relation to other students on the mathematics test because her z-score was higher on that test.

SPSS Data Format

Calculating z-scores requires only a single variable in SPSS. That variable must be numerical.

Running the Command

Computing z-scores is a component of the Descriptives command. To access it, click Analyze, then Descriptive Statistics, then Descriptives. This example uses the sample data file (SAMPLE.sav) created in Chapters 1 and 2. Notice the checkbox in the bottom-left corner labeled Save standardized values as variables. Check this box and move the variable GRADE into the right-hand blank. Then click OK to complete the analysis.

Reading the Output

After you conducted your analysis, the new variable was created. You will be presented with the standard output from the Descriptives command. Notice that the z-scores are not listed. They were inserted into the data window as a new variable. Switch to the Data View window and examine your data file. Notice that a new variable, called ZGRADE, has been added. When you asked SPSS to save standardized values, it created a new variable with the same name as your old variable preceded by a Z. The z-score is computed for each case and placed in the new variable. You can perform any number of subsequent analyses on the new variable.

Practice Exercise

Using Practice Data Set 2 in Appendix B, determine the z-score that corresponds to each employee's salary. Determine the mean z-scores for salaries of male employees and female employees. Determine the mean z-score for salaries of the total sample.
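The Save standardized values as variables checkbox corresponds to the /SAVE subcommand in syntax. A sketch; /SAVE is what creates the ZGRADE variable:

```spss
DESCRIPTIVES VARIABLES=grade
  /SAVE.
```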

Chapter 4: Graphing Data

Section 4.1 Graphing Basics

In addition to the frequency distributions, the measures of central tendency, and the measures of dispersion discussed in Chapter 3, graphing is a useful way to summarize, organize, and reduce your data. It has been said that a picture is worth a thousand words. In the case of complicated datasets, this is certainly true.

With Version 15.0 of SPSS, it is now possible to make publication-quality graphs using only SPSS. One important advantage of using SPSS to create your graphs instead of other software (e.g., Excel or SigmaPlot) is that the data have already been entered. Thus, duplication is eliminated, and the chance of making a transcription error is reduced.

Section 4.2 The New SPSS Chart Builder

Data Set

For the graphing examples, we will use a new set of data. Enter the data below by defining the three subject variables in the Variable View window: HEIGHT (in inches), WEIGHT (in pounds), and SEX (1 = male, 2 = female). When you create the variables, designate HEIGHT and WEIGHT as Scale measures and SEX as a Nominal measure (in the far-right column of the Variable View). Switch to the Data View to enter the data values for the 16 participants. Now use the Save As command to save the file, naming it HEIGHT.sav.

HEIGHT   WEIGHT   SEX
66       150      1
69       155      1
73       160      1
72       160      1
68       150      1
63       140      1
74       165      1
70       150      1
66       110      2
64       100      2
60       95       2
67       110      2
64       105      2
63       100      2
67       110      2
65       105      2
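Rather than typing the values into the Data View, the same file can be built from a syntax window. This is a sketch using DATA LIST with the values above; the VARIABLE LEVEL line sets the Scale and Nominal measures described in the text:

```spss
DATA LIST FREE / height weight sex.
BEGIN DATA
66 150 1   69 155 1   73 160 1   72 160 1
68 150 1   63 140 1   74 165 1   70 150 1
66 110 2   64 100 2   60  95 2   67 110 2
64 105 2   63 100 2   67 110 2   65 105 2
END DATA.
VARIABLE LEVEL height weight (SCALE) sex (NOMINAL).
```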

Make sure you have entered the data correctly by calculating a mean for each of the three variables (click Analyze, then Descriptive Statistics, then Descriptives). Compare your results with those in the table below.

Descriptive Statistics

                     N    Minimum   Maximum   Mean       Std. Deviation
HEIGHT               16   60.00     74.00     66.9375    3.90673
WEIGHT               16   95.00     165.00    129.0625   26.34507
SEX                  16   1.00      2.00      1.5000     .51640
Valid N (listwise)   16

Chart Builder Basics

Make sure that the HEIGHT.sav data file you created above is open. In order to use the Chart Builder, you must have a data file open.

New with Version 15.0 of SPSS is the Chart Builder command. This is a very versatile new command that can make graphs of excellent quality. This command is accessed using Graphs, then Chart Builder in the submenu.

When you first run the Chart Builder command, you will probably be presented with the following dialog box: "Before you use this dialog, measurement level should be set properly for each variable in your chart. In addition, if your chart contains categorical variables, value labels should be defined for each category." This dialog box is asking you to ensure that your variables are properly defined. Refer to Sections 1.3 and 2.1 if you had difficulty defining the variables used in creating the data set for this example, or to refresh your knowledge of this topic. Click OK.

The Chart Builder allows you to make any kind of graph that is normally used in publication or presentation, and much of it is beyond the scope of this text. This text, however, will go over the basics of the Chart Builder so that you can understand its mechanics.

On the left side of the Chart Builder window are the four main tabs that let you control the graphs you are making. The first one is the Gallery tab. The Gallery tab allows you to choose the basic format of your graph.

For example, the screenshot here shows the different kinds of bar charts that the Chart Builder can create. After you have selected the basic form of graph that you want using the Gallery tab, you simply drag the image from the bottom right of the window up to the main window at the top (where it reads, "Drag a Gallery chart here to use it as your starting point").

Alternatively, you can use the Basic Elements tab to drag a coordinate system (labeled Choose Axes) to the top window, then drag variables and elements into the window. The other tabs (Groups/Point ID and Titles/Footnotes) can be used for adding other standard elements to your graphs.

The examples in this text will cover some of the basic types of graphs you can make with the Chart Builder. After a little experimentation on your own, once you have mastered the examples in the chapter, you will soon gain a full understanding of the Chart Builder.

Section 4.3 Bar Charts, Pie Charts, and Histograms

Description

Bar charts, pie charts, and histograms represent the number of times each score occurs through the varying heights of bars or sizes of pie pieces. They are graphical representations of the frequency distributions discussed in Chapter 3.

Drawing Conclusions

The Frequencies command produces output that indicates both the number of cases in the sample with a particular value and the percentage of cases with that value. Thus, conclusions drawn should relate only to describing the numbers or percentages of cases in the sample. If the data are at least ordinal in nature, conclusions regarding the cumulative percentages and/or percentiles can also be drawn.

SPSS Data Format

You need only one variable to use this command.

Running the Command

The Frequencies command will produce graphical frequency distributions. Click Analyze, then Descriptive Statistics, then Frequencies. You will be presented with the main dialog box for the Frequencies command, where you can enter the variables for which you would like to create graphs or charts.

Click the Charts button at the bottom to produce frequency distributions. This will give you the Charts dialog box. There are three types of charts available with this command: Bar charts, Pie charts, and Histograms. For each type, the Y axis can be either a frequency count or a percentage (selected with the Chart Values option). You will receive the charts for any variables selected in the main Frequencies command dialog box. (See Chapter 3 for other options with this command.)

Output

The bar chart consists of a Y axis, representing the frequency, and an X axis, representing each score. Note that the only values represented on the X axis are those values with nonzero frequencies (61, 62, and 71 are not represented).
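The Charts button's options map onto subcommands of FREQUENCIES in syntax. A sketch for the HEIGHT variable; /FORMAT=NOTABLE suppresses the frequency table itself:

```spss
* Use /PIECHART PERCENT or /HISTOGRAM NORMAL for the other chart types.
FREQUENCIES VARIABLES=height
  /FORMAT=NOTABLE
  /BARCHART FREQ.
```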

The Histogram command creates a grouped frequency distribution. The range of scores is split into evenly spaced groups. The midpoint of each group is plotted on the X axis, and the Y axis represents the number of scores for each group. If you select With Normal Curve, a normal curve will be superimposed over the distribution. This is very useful in determining if the distribution you have is approximately normal. The distribution represented here is clearly not normal due to the asymmetry of the values.

The pie chart shows the percentage of the whole that is represented by each value.

Practice Exercise
Use Practice Data Set 1 in Appendix B. After you have entered the data, construct a histogram that represents the mathematics skills scores and displays a normal curve, and a bar chart that represents the frequencies of the variable AGE.

Section 4.4 Scatterplots

Description
Scatterplots (also called scattergrams or scatter diagrams) display two values for each case with a mark on the graph. The X axis represents the value for one variable. The Y axis represents the value for the second variable.
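Returning to the Histogram command described above: the grouped frequency distribution it builds (evenly spaced groups, with each bar drawn at the group midpoint) can be sketched directly. A minimal Python sketch, with invented scores and an arbitrary choice of five groups:

```python
# Split the range of scores into evenly spaced groups; each bar sits at
# the group's midpoint and its height is the number of scores in the group.
scores = [35, 38, 40, 41, 44, 45, 47, 52, 55, 61]
num_groups = 5
low, high = min(scores), max(scores)
width = (high - low) / num_groups

groups = []
for i in range(num_groups):
    start = low + i * width
    end = start + width
    midpoint = start + width / 2
    # Close the last group on the right so the maximum score is counted.
    count = sum(1 for s in scores
                if start <= s < end or (i == num_groups - 1 and s == high))
    groups.append((midpoint, count))

for midpoint, count in groups:
    print(midpoint, count)
```

SPSS chooses the number of groups for you; the five used here are only for the sketch.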

Assumptions
Both variables should be interval or ratio scales. If nominal or ordinal data are used, be cautious about your interpretation of the scattergram.

SPSS Data Format
You need two variables to perform this command.

Running the Command
You can produce scatterplots by clicking Graphs, then Chart Builder. (Note: You can also use the Legacy Dialogs. For this method, please see Appendix F.) In the Gallery tab, select Scatter/Dot. Then drag the Simple Scatter icon (top left) up to the main chart area as shown in the screenshot at left. Disregard the Element Properties window that pops up by choosing Close.

Next, drag the HEIGHT variable to the X-Axis area, and the WEIGHT variable to the Y-Axis area (remember that standard graphing conventions indicate that dependent variables should be Y and independent variables should be X; this would mean that we are trying to predict weights from heights). At this point, your screen should look like the example below. Note that your actual data are not shown, just a set of dummy values.

Click OK. You should get your new graph (next page) as Output.

Output
The output will consist of a mark for each participant at the appropriate X and Y levels.

Adding a Third Variable
Even though the scatterplot is a two-dimensional graph, it can plot a third variable. To make it do so, select the Groups/Point ID tab in the Chart Builder. Click the Grouping/stacking variable option. Next, drag the variable SEX into the upper-right corner, where it indicates Set Color. If you are not able to drag the variable SEX, it may be because it is not identified as nominal or ordinal in the Variable View window. When this is done, your screen should look like the image at right. Again, disregard the Element Properties window that pops up. Click OK to have SPSS produce the graph.

Now our output will have two different sets of marks. One set represents the male participants, and the second set represents the female participants. These two sets will appear in two different colors on your screen. You can use the SPSS chart editor (see Section 4.6) to make them different shapes, as shown in the example below.

Practice Exercise
Use Practice Data Set 2 in Appendix B. Construct a scatterplot to examine the relationship between SALARY and EDUCATION.

Section 4.5 Advanced Bar Charts

Description
Bar charts can be produced with the Frequencies command (see Section 4.3). Sometimes, however, we are interested in a bar chart where the Y axis is not a frequency. To produce such a chart, we need to use the Bar charts command.

There are two basic kinds of bar charts: those for between-subjects designs and those for repeated-measures designs. Use the between-subjects method if one variable is the independent variable and the other is the dependent variable. Use the repeated-measures method if you have a dependent variable for each value of the independent variable (e.g., you would have three


variables for a design with three values of the independent variable). This normally occurs when you make multiple observations over time. This example uses the GRADES.sav data file, which will be created in Chapter 6. Please see Section 6.4 for the data if you would like to follow along.

SPSS Data Format
You need at least two variables to perform this command.

Running the Command
Open the Chart Builder by clicking Graphs, then Chart Builder. In the Gallery tab, select Bar. If you had only one independent variable, you would select the Simple Bar chart example (top left corner). If you have more than one independent variable (as in this example), select the Clustered Bar Chart example from the middle of the top row. Drag the example to the top working area. Once you do, the working area should look like the screenshot below. (Note that you will need to open the data file you would like to graph in order to run this command.)

If you are using a repeated-measures design like our example here using GRADES.sav from Chapter 6 (three different variables representing the values that we want), you need to select all three variables (you can <Ctrl>-click them to select multiple variables) and then drag all three variable names to the Y-Axis area. When you do, you will be given a warning message. Click OK.




Next, you will need to drag the INSTRUCT variable to the top right in the Cluster: set color area (see screenshot at left).





Note: The Chart Builder pays attention to the types of variables that you ask it to graph. If you are getting error messages or unusual results, be sure that your categorical variables are properly designated as Nominal in the Variable View tab (see Chapter 2, Section 2.1).









Practice Exercise
Use Practice Data Set 1 in Appendix B. Construct a clustered bar graph examining the relationship between MATHEMATICS SKILLS scores (as the dependent variable) and MARITAL STATUS and SEX (as independent variables). Make sure you classify both SEX and MARITAL STATUS as nominal variables.
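SPSS computes the cluster heights for you. Conceptually, a clustered bar chart of means is nothing more than the mean of the dependent variable for every combination of the two independent variables, which can be sketched as follows (pure Python; the records and variable names are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# Invented records: (sex, marital_status, math_skills_score).
records = [
    ("male", "married", 82), ("male", "single", 75),
    ("female", "married", 90), ("female", "single", 88),
    ("male", "married", 78), ("female", "single", 84),
]

# Group the dependent variable by both independent variables...
groups = defaultdict(list)
for sex, marital, score in records:
    groups[(sex, marital)].append(score)

# ...then each bar's height is the mean of its group.
bar_heights = {key: mean(scores) for key, scores in groups.items()}
for key in sorted(bar_heights):
    print(key, bar_heights[key])
```

Each (sex, marital status) pair becomes one bar, and bars sharing a value of one variable form a cluster.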



Section 4.6 Editing SPSS Graphs
Whatever command you use to create your graph, you will probably want to do some editing to make it appear exactly as you want it to look. In SPSS, you do this in much the same way that you edit graphs in other software programs (e.g., Excel). After your graph is made, in the output window, select your graph (this will create handles around the outside of the entire object) and right-click. Then, click SPSS Chart Object, and click Open. Alternatively, you can double-click on the graph to open it for editing. When you open the graph, the Chart Editor window and the corresponding Properties window will appear.





Once Chart Editor is open, you can easily edit each element of the graph. To select an element, just click on the relevant spot on the graph. For example, if you have added a title to your graph ("Histogram" in the example that follows), you may select the element representing the title of the graph by clicking anywhere on the title.


Once you have selected an element, you can tell whether the correct element is selected because it will have handles around it. If the item you have selected is a text element (e.g., the title of the graph), a cursor will be present and you can edit the text as you would in a word processing program. If you would like to change another attribute of the element (e.g., the color or font size), use the Properties box. (Text properties are shown below.) With a little practice, you can make excellent graphs using SPSS. Once your graph is formatted the way you want it, simply select File, then Save.

Chapter 5 Prediction and Association

Section 5.1 Pearson Correlation Coefficient

Description
The Pearson product-moment correlation coefficient (sometimes called the Pearson correlation coefficient or simply the Pearson r) determines the strength of the linear relationship between two variables.

Assumptions
Both variables should be measured on interval or ratio scales (or as a dichotomous nominal variable). If a relationship exists between them, that relationship should be linear. Because the Pearson correlation coefficient is computed with z-scores, both variables should also be normally distributed. If your data do not meet these assumptions, consider using the Spearman rho correlation coefficient instead.

SPSS Data Format
Two variables are required in your SPSS data file. Each subject must have data for both variables.

Running the Command
To select the Pearson correlation coefficient, click Analyze, then Correlate, then Bivariate (bivariate refers to two variables). This will bring up the Bivariate Correlations dialog box. This example uses the HEIGHT.sav data file entered at the start of Chapter 4. Move at least two variables from the box at left into the box at right by using the transfer arrow (or by double-clicking each variable). It is acceptable to move more than two variables. Make sure that a check is in the Pearson box under Correlation Coefficients.

For our example, we will move all three variables over and click OK.

Reading the Output
The output consists of a correlation matrix. To read the correlations, select a row and a column. For example, the correlation between height and weight is determined through selection of the WEIGHT row and the HEIGHT column (.806). We get the same answer by selecting the HEIGHT row and the WEIGHT column.

Significant correlations are flagged with asterisks. If a correlation is significant at less than the .05 level, a single * will appear next to the correlation. If it is significant at the .01 level or lower, ** will appear next to it. For example, the correlation in the output at right has a significance level of < .001, so it is flagged with ** to indicate that it is less than .01.
Every variable you entered in the command is represented as both a row and a column. We entered three variables in our command; thus, we have a 3 x 3 table. There are also three rows in each cell: the correlation, the significance level, and the N. The correlation between a variable and itself is always 1, so there is a diagonal set of 1s.

Drawing Conclusions
The correlation coefficient will be between -1.0 and +1.0. Coefficients close to 0.0 represent a weak relationship. Coefficients close to 1.0 or -1.0 represent a strong relationship. Generally, correlations greater than 0.7 are considered strong. Correlations less than 0.3 are considered weak. Correlations between 0.3 and 0.7 are considered moderate. A significant correlation indicates a reliable relationship, but not necessarily a strong correlation. With enough participants, a very small correlation can be significant. Please see Appendix A for a discussion of effect sizes for correlations.

Phrasing a Significant Result
In the example above, we obtained a correlation of .806 between HEIGHT and WEIGHT. A correlation of .806 is a strong positive correlation, and it is significant at the .001 level. Thus, we could state the following in a results section:

A Pearson correlation coefficient was calculated for the relationship between participants' height and weight. A strong positive correlation was found (r(14) = .806, p < .001), indicating a significant linear relationship between the two variables. Taller participants tend to weigh more.

The conclusion states the direction (positive), strength (strong), value (.806), degrees of freedom (14), and significance level (< .001) of the correlation. In addition, a statement of direction is included (taller is heavier). Note that the degrees of freedom given in parentheses is 14. For a correlation, the degrees of freedom is N - 2. While most SPSS procedures give degrees of freedom, the correlation command gives only the N (the number of pairs). The output indicates an N of 16; thus, the degrees of freedom is 16 - 2 = 14.

Phrasing Results That Are Not Significant
Using our SAMPLE.sav data set from the previous chapters, we could calculate a correlation between ID and GRADE. If so, we get the output at right. The correlation has a significance level of .783. Thus, we could write the following in a results section (note that the degrees of freedom is N - 2):

A Pearson correlation was calculated examining the relationship between participants' ID numbers and grades. A weak correlation that was not significant was found (r(2) = .217, p > .05). ID number is not related to grade in the course.
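As noted above, the Pearson r is computed from z-scores, so the number SPSS reports can be reproduced in a few lines outside SPSS. A Python sketch (pure standard library; the height and weight values are invented, not the HEIGHT.sav data):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson r as the average product of z-scores (dividing by n - 1)."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sx, sy = stdev(x), stdev(y)
    return sum(((a - mx) / sx) * ((b - my) / sy) for a, b in zip(x, y)) / (n - 1)

# Invented heights and weights for six participants.
height = [60, 62, 63, 65, 68, 70]
weight = [100, 110, 115, 130, 150, 165]

r = pearson_r(height, weight)
df = len(height) - 2  # SPSS reports N; report the result as r(df), df = N - 2
print(round(r, 3), df)
```

A strong positive r close to 1 comes out here only because the invented values rise together almost perfectly.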
Practice Exercise
Use Practice Data Set 2 in Appendix B. Determine the value of the Pearson correlation coefficient for the relationship between SALARY and YEARS OF EDUCATION.

Section 5.2 Spearman Correlation Coefficient

Description
The Spearman correlation coefficient determines the strength of the relationship between two variables. It is a nonparametric procedure. Therefore, it is weaker than the Pearson correlation coefficient, but it can be used in more situations.

Assumptions
Because the Spearman correlation coefficient functions on the basis of the ranks of data, it requires ordinal (or interval or ratio) data for both variables. They do not need to be normally distributed.

0or -1.0 and +1.3 and 0. but it is not zero.correlationsgreaterthan 0. The output listed above indicatesa correlationof .000 Spearman rho can range from 't6 N 16 -1. Scores will be between closeto 0.000.Each pair of Sig. but not necessarily strong correlation.rfiy* Bivariate. a significancelevel of <. Correlationis significantat the . removethe check in the Pearsonbox (by clicking on it) and click on theSpearmanbox.883 1.01 level (2-tailed) son r. Drawing Conclusions The correlation -1. Scores closeto 1.b Use the variablesHEIGHT and WEIGHT 4). (2-tailed) . in fact.883 between HEIGHT and WEIGHT.A significant correlation indicatesa reliable relationship.With enoughparticipants. The actualalpha level roundsout to.0.0.a very small correlation a can be significant. Reading the Output Correlations The output is essentially the sameas for the PearH E IGH T WEIGHT tr-4. (2-tailed)"row.shown in the "Sig. j i* CsreldionCoefficients j f f"igs-"jjl. This is. 44 .just like the Pear".000 Sig. Correlations less than 0. there is a section for indicating the type of " Generd Linear f{udel ) correlation you will compute.if desired. Eachsubject for bothvariables.7 arc considered moderate. Running the Command Click Analyze. then Correlate. then Grapk Utilitior wndow Halp |. For our example. Spearman'srho HEIGHT CorrelationCoeflicient 1.7 are consideredstrong.3 are consideredweak.0 represent strongrelationship.000. The ffi .fienddrs tzu.0 represent weak a relationship.savdatafile (Chapter This is also one of the few commandsthat allows you to choose one-tailed a test.Cha p te r Pre d i c ti o n n dA s s o c i a ti on 5 a SP. a Significantcorrelations are flagged with asterisks.000 son correlation. You can selectas many correlationsas you want. Note the significancelevel of .This will bring up the main dialog box ) RrFarts for Bivariate Correlations(ust like the Pearson Statistics ) Oescri$ive I correlation). from our HEIGHT. About halfway down the dialog ComparcMeans ) box.001. 
Correlationsbetween0.000 variables has its correlation N 16 16 coefficient indicatedtwice. (2-tailed) ..SS Data Format Two variables required your SPSS mustprovidedata are in datafile.0 to +1. Generally.

Simplelinearregression Assumptions are that both variables interval.000 and has a sigNote that thoughthis value is roundedup and is not. (2{ailed) 1. A correlation . degreesof freedom(14).p > . .strength(strong). Determine strength the relationship Use Practice job classification calculating Spearman correlation.sav to GRADE data set from the previouschapters. degrees freedom is N.Thus.we obtaineda correlationof .883). the The conclusionstatesthe direction(positive). of course. This.000 ffi . assumes Simple linear regression aroundthe prediction In addition. The correlationcoN 4 efficient equals.In addition.For a correlation.001) of the correlation. in fact. we would get the Sig.value (. Taller participants between two variables.3 Simple Linear Regression Description of allowsthe prediction one variablefrom another. (2{ai l ed) and correlation between ID N 1.883 betweenHEIGHT and and it is significantat the WEIGHT.Typiline.assumes 45 .000 output at right.883 is a strongpositivecorrelation.or ratio-scaled.883. p <. If so. and significancelevel (< . of .05). in d ic a t in g s ig n if ic a n re la t io n s h ip tendto weigh more.we could statethe following in a resultssection: A Spearmanrho correlationcoefficient was calculatedfor the relationship betweenparticipants'height and weight. Practice Exercise of the Data Set 2 in AppendixB.Chapter Prediction 5 and Association That PhrasingResults Are Significant In the exampleabove. we could statethe following in a resultssection: actly 1.001level. rho correlationcoefficient was calculatedfor the relationship A Spearman betweena subject's ID number and grade. Phrasing Results That Are Not Significant Correlations Using our SAMPLE.000.The outputindicates N of 16.the dependentvariable shouldbe normally distributed that the variablesare relatedto each other linearly.000. 
exnificance level of 1.ID numberis not that was not significant relatedto gradein the course.2.UUU CorrelationCoenicten Spearman'srho lD 000 rho we could calculate Spearman a 1. A strong positive correlationwas a t found (rh o (1 4 ):. the r&o by between salaryand Section 5.000. 0 0 1 ).a Note that the degrees freedomgiven of statement directionis included(talleris heavier).000 GRADE.000 S i g. of the of an is in parentheses 14.An extremely weak correlation was found (r (2\ = .

Running the Command
Click Analyze, then Regression, then Linear. This will bring up the main dialog box for Linear Regression. On the left side of the dialog box is a list of the variables in your data file (we are using the HEIGHT.sav data file from the start of this section). On the right are blocks for the dependent variable (the variable you are trying to predict) and the independent variable (the variable from which we are predicting).

We are interested in predicting someone's weight on the basis of his or her height. Thus, we should place the variable WEIGHT in the dependent variable block and the variable HEIGHT in the independent variable block. Then we can click OK to run the analysis.

Reading the Output
For simple linear regressions, we are interested in three components of the output. The first is called the Model Summary, and it occurs after the Variables Entered/Removed section. For our example, you should see this output. R Square (called the coefficient of determination) gives you the proportion of the variance of your dependent variable (WEIGHT) that can be explained by variation in your independent variable (HEIGHT). Thus, 64.9% of the variation in weight can be explained by differences in height (taller individuals weigh more).

The standard error of estimate gives you a measure of dispersion for your prediction equation. When the prediction equation is used, 68% of the data will fall within one standard error of estimate of the predicted value. Just over 95% will fall within two standard errors. Thus, 95% of the time, our estimated weight will be within 32.296 (2 x 16.148) pounds of being correct.

The second part of the output that we are interested in is the ANOVA summary table. The important number here is the significance level in the rightmost column. If that value is less than .05, then we have a significant linear regression. If it is larger than .05, we do not.

The final section of the output is the table of coefficients. This is where the actual prediction equation can be found. In most texts, you learn that Y' = a + bX is the regression equation. Y' (pronounced "Y prime") is your dependent variable (primes normally indicate predicted values or dependent variables), and X is your independent variable. In SPSS output, the values of both a and b are found in the B column. The first value, -234.681, is the value of a (labeled Constant). The second value, 5.434, is the value of b (labeled with the name of the independent variable). Thus, our prediction equation for the example above is WEIGHT' = -234.681 + 5.434(HEIGHT). In other words, the average subject who is an inch taller than another subject weighs 5.434 pounds more. A person who is 60 inches tall should weigh -234.681 + 5.434(60) = 91.359 pounds. Given our earlier discussion of the standard error of estimate, 95% of individuals who are 60 inches tall will weigh between 59.063 (91.359 - 32.296) and 123.655 (91.359 + 32.296) pounds.

Drawing Conclusions
Conclusions from regression analyses indicate (a) whether or not a significant prediction equation was obtained, (b) the direction of the relationship, and (c) the equation itself.

Phrasing Results That Are Significant
In the examples above, we obtained an R Square of .649 and a regression equation of WEIGHT' = -234.681 + 5.434(HEIGHT). The ANOVA resulted in F = 25.926 with 1 and 14 degrees of freedom. The F is significant at the less than .001 level. Thus, we could state the following in a results section:

A simple linear regression was calculated predicting participants' weight based on their height. A significant regression equation was found (F(1,14) = 25.926, p < .001), with an R2 of .649. Participants' predicted weight is equal to -234.68 + 5.43(HEIGHT) pounds when height is measured in inches. Participants' average weight increased 5.43 pounds for each inch of height.

The conclusion states the direction (increase), strength (.649), value (25.926), degrees of freedom (1,14), and significance level (< .001) of the regression. In addition, a statement of the equation itself is included.

Phrasing Results That Are Not Significant
If the ANOVA is not significant (e.g., see the output at right), the Sig. value for the ANOVA will be greater than .05, and the regression equation is not significant. A results section might include the following statement:

A simple linear regression was calculated predicting participants' ACT scores based on their height. The regression equation was not significant (F(1,14) = 4.12, p > .05) with an R2 of .227. Height is not a significant predictor of ACT scores.

Note that for results that are not significant, the ANOVA results and R2 results are given, but the regression equation is not.

Practice Exercise
Use Practice Data Set 2 in Appendix B. If we want to predict salary from years of education, what salary would you predict for someone with 12 years of education? What salary would you predict for someone with a college education (16 years)?
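The a and b that SPSS reports in the B column are ordinary least-squares estimates, and applying the fitted equation is plain arithmetic. A Python sketch (the small x/y data set is invented so the fit is easy to verify by hand; the final line rechecks the chapter's WEIGHT' = -234.681 + 5.434(HEIGHT) prediction at 60 inches):

```python
from statistics import mean

def least_squares(x, y):
    """Return (a, b) for the prediction equation y' = a + b * x."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Invented data lying exactly on y = 3 + 2x, so the fit recovers a and b.
x = [1, 2, 3, 4]
y = [5, 7, 9, 11]
a, b = least_squares(x, y)
print(a, b)  # 3.0 2.0

# The chapter's fitted equation, evaluated for a 60-inch-tall person:
print(round(-234.681 + 5.434 * 60, 3))  # 91.359
```

On real data the points do not sit exactly on the line, which is what the standard error of estimate summarizes.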

Section 5.4 Multiple Linear Regression

Description
The multiple linear regression analysis allows the prediction of one variable from several other variables.

Assumptions
Multiple linear regression assumes that all variables are interval- or ratio-scaled. In addition, the dependent variable should be normally distributed around the prediction line. This, of course, assumes that the variables are related to each other linearly. All variables should be normally distributed. Dichotomous variables are also acceptable as independent variables.

SPSS Data Format
At least three variables are required in the SPSS data file. Each subject must contribute to all values.

Running the Command
Click Analyze, then Regression, then Linear. This will bring up the main dialog box for Linear Regression. On the left side of the dialog box is a list of the variables in your data file (we are using the HEIGHT.sav data file from the start of this chapter). On the right side of the dialog box are blanks for the dependent variable (the variable you are trying to predict) and the independent variables (the variables from which you are predicting).

We are interested in predicting someone's weight based on his or her height and sex. We believe that both sex and height influence weight. Thus, we should place the dependent variable WEIGHT in the Dependent block and the independent variables HEIGHT and SEX in the Independent(s) block. Enter both in Block 1. This will perform an analysis to determine if WEIGHT can be predicted from SEX and/or HEIGHT.

There are several methods SPSS can use to conduct this analysis. These can be selected with the Method box. Method Enter, the most widely used, puts all variables in the equation, whether they are significant or not. The other methods use various means to enter only those variables that are significant predictors. Click OK to run the analysis.

Reading the Output
For multiple linear regression, there are three components of the output in which we are interested. The first is called the Model Summary, which is found after the Variables Entered/Removed section. R Square (called the coefficient of determination) tells you the proportion of the variance in the dependent variable (WEIGHT) that can be explained by variation in the independent variables (HEIGHT and SEX, in this case). Thus, 99.3% of the variation in weight can be explained by differences in height and sex (taller individuals weigh more, and men weigh more). Note that when a second variable is added, our R Square goes up from .649 to .993. The .649 was obtained using the Simple Linear Regression example in Section 5.3.

The Standard Error of the Estimate gives you a margin of error for the prediction equation. Using the prediction equation, 68% of the data will fall within one standard error of estimate of the predicted value. Just over 95% will fall within two standard errors. Thus, 95% of the time, our estimated weight will be within 4.591 pounds (two standard errors; the standard error of estimate here is 2.29571) of being correct. In our Simple Linear Regression example in Section 5.3, this number was 32.296. Note the higher degree of accuracy.

The second part of the output that we are interested in is the ANOVA summary table. For more information on reading ANOVA tables, refer to the sections on ANOVA in Chapter 6. For now, the important number is the significance level in the rightmost column. If that value is less than .05, we have a significant linear regression. If it is larger than .05, we do not.

The final section of output we are interested in is the table of coefficients. This is where the actual prediction equation can be found.

OspBndontVariablo reighl Coefllcldasr Unslanda.39. In the exampleabove.1 0 1 1.A female who is 60 inchestall should weigh + of 47. + B. Thus.X.341)and99.133(2) 2. our predictionequationfor the example + (whereSEX is codedas aboveis WEIGHT' :47. hsighl sex.1 3 3 Beta t 176 Sio. Morbl Sratrtny PhrasingResults ThatAre Significant In our example.993 anda regression tion of WEIGHT' = 47. seccouldstate followinein a results the tion: xodsl R Souars .the significancelevel of both independent variables is lessthan .39.938 15 a.312 -.and (c) the equation diction equation was obtained.95o/o (94. 1 3 3 ( S E X ) 2.341 who are 60 inchestall will weigh between of females dard error of estimate .424 I 2 R es i dual 68.39. Depsndenl Varlabl€: rei0hl 3 .dizsd Xodel Slandardizad Std.138 .26.5t 4 Tutal | 041 0.1 0 1 (H E IGH TT he with 2 ANOVA resulted F: 981.007 . Adlusted Std.501 -3 9 . the averagedifferencein weight for participants who differ by one inch in height is 2. (94.932+ 4.Error 14..Eror of R Souare lhe Estimatg 992 2 2C 5r 1 ANr:rVAD Sumof Sd r r r r a q Xodel dt Heorsssron r u3t2.133(SEX) 2. Multiple regressionis generallymuch more powerful than simple linear regression.588 .101(HEIGHT) I : Male.Eror 4 843 Beta .138.In other words.Given our earlierdiscussion the stan90. hoighl ser.000 a.1 98 L501 10.101(60):94. itself. and the Xs are your independof Independent ent variables.and HEIGHT is in inches). you learn that Y' = a + bX is the regression + gression.588 -26.843 47j38 .523 Drawing Conclusions analysesindicate(a) whether or not a significant preConclusionsfrom regression (b) the direction of the relationship.138 + ).591: 90.1 01 .Chapter5 Predictionand Association Coefficientf Unstandardized Coefficients Model 1 (Constant) height sex Standardized Coefficients B Std.071 .4. is signifiof we cantat the lessthan. XeanSouare 5171 212 981202 000r b. . 
Dependent Variable: weight equation.202 in F and 13 degrees freedom.001level.133 pounds more than females.312 I Sio t6 at 38 hei0hl 2.591= 99.523)pounds.Thus.(where z is the number to Variables).Males tend to weigh 39.101 pounds. 39 .932.767 10. must alsoconsider significancelevel of eachindependentvariable.1 sex 33 a..932 pounds. I/' is your dependent variable.000 . 2 = Female. The Bs are listed in a column.997.For multiple reIn most texts.001. obtained we an equaR Square of. Compareour two examples. a Prsdictorsr (Conslan0.071 007 000 000 5l . Predlctors: (Conslan0.our equationchanges l" = Bs + B1X1 BzXz+ . you the With multiple regression.198 2 .
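The arithmetic behind the predicted value can be checked outside SPSS. The following Python sketch (our own illustration, not part of SPSS; the coefficients and the standard error of estimate are taken from the output above, and the function name is ours) evaluates the prediction equation:

```python
# Prediction equation from the Coefficients output:
# WEIGHT' = 47.138 - 39.133(SEX) + 2.101(HEIGHT), SEX coded 1 = male, 2 = female
def predicted_weight(sex, height_inches):
    return 47.138 - 39.133 * sex + 2.101 * height_inches

# A female (SEX = 2) who is 60 inches tall
estimate = predicted_weight(sex=2, height_inches=60)
print(round(estimate, 3))  # 94.932

# A rough 95% interval: plus or minus two standard errors of estimate (2.296 each)
low, high = estimate - 2 * 2.296, estimate + 2 * 2.296
print(round(low, 3), round(high, 3))
```

The chapter rounds the doubled standard error to 4.591, so its interval endpoints (90.341 and 99.523) differ from these by a thousandth of a pound.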

Phrasing Results That Are Significant
In our example, we obtained an R Square of .993 and a regression equation of WEIGHT' = 47.138 - 39.133(SEX) + 2.101(HEIGHT). The ANOVA was significant at the less than .001 level. Thus, we could state the following in a results section:

A multiple linear regression was calculated to predict participants' weight based on their height and sex. A significant regression equation was found (F(2,13) = 981.202, p < .001), with an R2 of .993. Participants' predicted weight is equal to 47.138 - 39.133(SEX) + 2.101(HEIGHT), where SEX is coded as 1 = Male, 2 = Female, and HEIGHT is measured in inches. Participants increased 2.101 pounds for each inch of height, and males weighed 39.133 pounds more than females. Both sex and height were significant predictors.

Phrasing Results That Are Not Significant
If the ANOVA does not find a significant relationship, the Sig. section of the output will be greater than .05, and the regression equation is not significant. In that case, neither independent variable is interpreted as a predictor.

[SPSS output for the ACT example: Model Summary, ANOVA, and Coefficients tables. Neither predictor reaches significance, and the overall regression is not significant.]

A results section for this output might include the following statement:

A multiple linear regression was calculated predicting participants' ACT scores based on their height and sex. The regression equation was not significant (F(2,13) = .279, p > .05) with an R2 of .041. Neither height nor sex is a significant predictor of ACT scores.

Note that for results that are not significant, the ANOVA results and R2 are given, but the regression equation is not.

Practice Exercise
Use Practice Data Set 2 in Appendix B. Determine the prediction equation for predicting salary based on education, years of service, and sex. Which variables are significant predictors? If you believe that men were paid more than women were, what would you conclude after conducting this analysis?

Chapter 6
Parametric Inferential Statistics

Parametric statistical procedures allow you to draw inferences about populations based on samples of those populations. To make these inferences, you must be able to make certain assumptions about the shape of the distributions of the population samples.

Section 6.1 Review of Basic Hypothesis Testing

The Null Hypothesis
In hypothesis testing, we create two hypotheses that are mutually exclusive (i.e., both cannot be true at the same time) and all inclusive (i.e., one of them must be true). We refer to those two hypotheses as the null hypothesis and the alternative hypothesis. The null hypothesis generally states that any difference we observe is caused by random error. The alternative hypothesis generally states that any difference we observe is caused by a systematic difference between groups.

Type I and Type II Errors
All hypothesis testing attempts to draw conclusions about the real world based on the results of a test (a statistical test, in this case). There are four possible combinations of results, shown in the table below.

REAL WORLD
                                 Null Hypothesis True    Null Hypothesis False
Reject null hypothesis           Type I Error            No Error
Do not reject null hypothesis    No Error                Type II Error

Two of the possible results are correct test results. The other two results are errors. A Type I error occurs when we reject a null hypothesis that is, in fact, true, while a Type II error occurs when we fail to reject a null hypothesis that is, in fact, false.

Significance tests determine the probability of making a Type I error. After performing a series of calculations, we obtain a probability that the null hypothesis is true. If there is a low probability, such as 5 or less in 100 (.05), by convention, we reject the null hypothesis. In other words, we typically use the .05 level (or less) as the maximum Type I error rate we are willing to accept.

When there is a low probability of a Type I error, we can state that the significance test has led us to "reject the null hypothesis." This is synonymous with saying that a difference is "statistically significant." For example, suppose you found that a random sample of girls from a school district scored higher than a random sample of boys on a reading test. This result may have been obtained merely because chance errors associated with random sampling created the observed difference (this is what the null hypothesis asserts). If there is a sufficiently low probability that random errors were the cause (as determined by a significance test), we can state that the difference between boys and girls is statistically significant.

Significance Levels vs. Critical Values
Most statistics textbooks present hypothesis testing by using the concept of a critical value. With such an approach, we obtain a value for a test statistic and compare it to a critical value we look up in a table. If the obtained value is larger than the critical value, we reject the null hypothesis and conclude that we have found a significant difference (or relationship). If the obtained value is less than the critical value, we fail to reject the null hypothesis and conclude that there is not a significant difference.

The critical-value approach is well suited to hand calculations. Tables that give critical values for alpha levels of .001, .01, .05, and so on can be created, but it is not practical to create a table for every possible alpha level. SPSS, on the other hand, can determine the exact alpha level associated with any value of a test statistic. Thus, when using SPSS, looking up a critical value in a table is not necessary. This, however, does change the basic procedure for determining whether or not to reject the null hypothesis.

The section of SPSS output labeled Sig. (sometimes p or alpha) indicates the likelihood of making a Type I error if we reject the null hypothesis. A value of .05 or less indicates that we should reject the null hypothesis (assuming an alpha level of .05). A value greater than .05 indicates that we should fail to reject the null hypothesis. In other words, we normally reject the null hypothesis if the output value under Sig. is equal to or smaller than .05, and we fail to reject the null hypothesis if the output value is larger than .05.

One-Tailed vs. Two-Tailed Tests
SPSS output generally includes a two-tailed alpha level (normally labeled Sig. in the output). A two-tailed hypothesis attempts to determine whether any difference (either positive or negative) exists, so you have an opportunity to make a Type I error on either of the two tails of the normal distribution. A one-tailed test examines a difference in a specific direction, so we can make a Type I error on only one side (tail) of the distribution. If we have a one-tailed hypothesis, but our SPSS output gives a two-tailed significance result, we can take the significance level in the output and divide it by two, provided our difference is in the right direction. Thus, if our output indicates a significance level of .084 (two-tailed), but we have a one-tailed hypothesis, we can report a significance level of .042 (one-tailed).
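The one-tailed conversion described above is simple enough to express directly. A Python sketch (our own illustration, not SPSS output; the function name is ours, and it assumes you have already checked the direction of the difference):

```python
def one_tailed_p(two_tailed_p, difference_in_predicted_direction=True):
    """Convert SPSS's two-tailed Sig. value to a one-tailed level."""
    if difference_in_predicted_direction:
        return two_tailed_p / 2
    # If the difference runs opposite to the prediction, the one-tailed
    # probability covers the rest of the distribution instead.
    return 1 - two_tailed_p / 2

print(one_tailed_p(0.084))  # 0.042
```

This reproduces the .084 to .042 example in the text.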

Statistical Symbols
Generally, statistical symbols are presented in italics. Prior to the widespread use of computers and desktop publishing, statistical symbols were underlined. Underlining is a signal to a printer that the underlined text should be set in italics. Institutions vary on their requirements for student work, so you are advised to consult your instructor about this.

Degrees of Freedom
Sometimes the degrees of freedom are given in parentheses immediately after the symbol representing the test, as in this example:
t(3) = 7.00, p < .05
Other times, the degrees of freedom are given within the statement of results, as in this example:
t = 7.00, df = 3, p < .05

Significance Level
When you obtain results that are significant, they can be described in different ways. For example, if you obtained a significance level of .006 on a t test, you could describe it in any of the following three ways:
t(3) = 7.00, p < .05
t(3) = 7.00, p < .01
t(3) = 7.00, p = .006
Notice that because the exact probability is .006, both .05 and .01 are also correct. There are also various ways of describing results that are not significant:
t(2) = .805, ns
t(2) = .805, p > .05
t(2) = .805, p = .505

Statement of Results
Sometimes the results will be stated in terms of the null hypothesis, as in the following example:
The null hypothesis was rejected (t = 7.00, df = 3, p = .006).
Other times, the results are stated in terms of their level of significance, as in this example:
A statistically significant difference was found: t(3) = 7.00, p = .006.

Section 6.2 Single-Sample t Test

Description
The single-sample t test compares the mean of a single sample to a known population mean. It is useful for determining if the current set of data has changed from a long-term value (e.g., comparing the current year's temperatures to a historical average to determine if global warming is occurring).

Assumptions
The distributions from which the scores are taken should be normally distributed. However, the t test is robust and can handle violations of the assumption of a normal distribution. The dependent variable must be measured on an interval or ratio scale.

SPSS Data Format
The SPSS data file for the single-sample t test requires a single variable. That variable represents the set of scores in the sample that we will compare to the population mean.

Running the Command
The single-sample t test is located in the Compare Means submenu, under the Analyze menu. The dialog box for the single-sample t test requires that we transfer the variable representing the current set of scores to the Test Variable(s) section. We must also enter the population average in the Test Value blank. The example presented here tests the variable LENGTH against a population mean of 35 (this example uses a hypothetical data set).

Reading the Output
The output for the single-sample t test consists of two sections. The first section lists the sample variable and some basic descriptive statistics (N, mean, standard deviation, and standard error).

One-Sample Statistics
          N     Mean      Std. Deviation    Std. Error Mean
LENGTH    10    35.9000   1.1972            .3786

[SPSS output: One-Sample Test (Test Value = 35). t = 2.377, df = 9, Sig. (2-tailed) = .041, Mean Difference = .9000, with the 95% confidence interval of the difference.]

The second section of output contains the results of the t test. The example presented here indicates a t value of 2.377, with 9 degrees of freedom and a significance level of .041. The mean difference of .9000 is the difference between the sample average (35.90) and the population average we entered in the dialog box to conduct the test (35.00).

Drawing Conclusions
The t test assumes an equality of means. Therefore, a significant result indicates that the sample mean is not equivalent to the population mean (hence the term "significantly different"). A result that is not significant means that there is not a significant difference between the means. It does not mean that they are equal. Refer to your statistics text for the section on failure to reject the null hypothesis.

Phrasing Results That Are Significant
The above example found a significant difference between the population mean and the sample mean. Thus, we could state the following:

A single-sample t test compared the mean length of the sample to a population value of 35.00. A significant difference was found (t(9) = 2.377, p < .05). The sample mean of 35.90 (sd = 1.197) was significantly greater than the population mean.
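The t value SPSS reports can be verified by hand from the descriptive statistics alone. A Python sketch (our own illustration, not SPSS; the function name is ours, and the numbers are taken from the output above):

```python
import math

def single_sample_t(sample_mean, sample_sd, n, population_mean):
    """t = (sample mean - population mean) / (sd / sqrt(n))"""
    standard_error = sample_sd / math.sqrt(n)
    return (sample_mean - population_mean) / standard_error

t = single_sample_t(35.90, 1.1972, 10, 35.00)
print(round(t, 3))  # 2.377
```

This matches the t of 2.377 in the One-Sample Test output.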

Phrasing Results That Are Not Significant
If the significance level had been greater than .05, the difference would not be significant. For example, if we received the output presented here, we could state the following:

[SPSS output: One-Sample Statistics and One-Sample Test for a TEMPERATURE variable, with a mean difference of 1.76667, 8 degrees of freedom, and a Sig. (2-tailed) value greater than .05.]

A single-sample t test compared the mean temperature over the past year to the long-term average. The difference was not significant (t(8) = .582, p > .05). The mean temperature over the past year did not differ significantly from the long-term average.

Practice Exercise
The average salary in the U.S. is $25,000. Use Practice Data Set 2 (Appendix B). Determine if the average salary of the participants is significantly greater than this value. Note that this is a one-tailed hypothesis.

Section 6.3 Independent-Samples t Test

Description
The independent-samples t test compares the means of two samples. The two samples are normally from randomly assigned groups. This means that one group of participants provides data for one sample and a different group of participants provides data for the other sample (and individuals in one group are not matched with individuals in the other group).

Assumptions
The two groups being compared should be independent of each other. Observations are independent when information about one is unrelated to the other. One way to accomplish this is through random assignment to form two groups. The scores should be normally distributed, but the t test is robust and can handle violations of the assumption of a normal distribution. The dependent variable must be measured on an interval or ratio scale. The independent variable should have only two discrete levels.

SPSS Data Format
The SPSS data file for the independent t test requires two variables. One variable, the grouping variable, represents the value of the independent variable. The grouping variable should have two distinct values (e.g., 0 for a control group and 1 for an experimental group).

Group Statistics mornrnq grade No Yes N 2 2 Std. our exvariableinto the GroupingVariable Transfer independent the ample.50000 5.00000 59 . Click Analyze. For our example.savdatafile. click Define Groups and enter the values of the two levels of the independent variable. will usethevariable we MORNING. Next."This sectionprovidesthe basicdescriptivestatisticsfor the dependentvariable(s)for eachvalue of the independent variable. Independent t tests are capable of comparingonly two levels at a time. Conducting an Independent-Samples t Test For our example.y* the SAMPLE.Error Mean 2. For section.we will use 6adrs Utltties window Heh [n.07107 Std. then Compare Std{ic* l Means. it dry lim mk lr*iT '1 Outputfrom the Independent-Samples t Test The output will have a sectionlabeled"Group Statistics. Transfer the dependent ) Can*bto variable(s) into the Test Variable(s) l Regf6s$isl t ctaseff blank. then Independent-SamplesT ?"est. Click Continue. The secondvariablerepresents dependent variable..Chapter6 ParametricInferentialStatistics the mental group). It shouldlook like the outputbelow.53553 78. then click OK to run the analysis. This will bring up the main diaI log box.we will use ls elrl thevariable GRADL. such as scoreson a test.Deviation Mean 82.0000 7.5000 3.

When drawing conclusions abouta I test. graoe tqual vanances assum6d Equal variances not assumed t df Sio.and the meansand standard deviationsfor the of the two groups.minus 2. which mean was larger than the other).59017 -19.09261 39.. r 3 2 7 3 31tl a 53l 029 An independent-samples test comparing the mean scores of the I experimental and control groups found a significant difference between the means of the two groups ((5) = 2. lnd6pondent Samplo! Telt Leveng's Tost for Fdlralitv df Variencac ltest for Eoualitv of Msans 95% Confidenco lntsrual of the Diffor€nc6 Lower Sio .ilhrh! sc0r9 x€an al 0000 Sld.805 2 1. The mean of the experimental group was significantlylower (m = 33. : 4.805 . Odiation a. You should also include information about the value of t. you might statethe following: 0r0uD conlfol €rDoflmontal N 6adO slil|llk! '.08) than the meanof the control group (m: 41. we usethe "Equal variances assumed" row.They provide the value of t.835.55256 5.09261 The columns labeledt.471 505 530 4.uplcr lcsl l l a sl fo r Fo u e l l to o t l n a n s 95(f. the degrees freedom. p < .55256 28. Therefore. Contld9n. sd 60 .e. Error Ditforenc6 Uooer . Drawing Conclusions Recall from the previous section that the .50000 5.05).000. the degrees freedom(number of participants.Chapter Parametric 6 Inferential Statistics Next.333.. and Sig.sd:2. test assumesan equality of means. It should look like the outputbelow. 6 1 2 8 7 r r.24). (2{ailed} Mean Diff6rence Sld.e Inl€ryalofthe xsan Si d Std Error 2 70391 2 138't 2 71605 I 20060 dT 071 5 Sio a2-laile(n U Jb sc0re EqualvarlencSs assum€d Equalvaiancos nol assumgd 5 058 7 66667 I r.59017 -30. of in this case).you must statethe directionof the difference(i. and the significancelevel (oftencalledp).(2-tailed) provide the standard"answer" for the I test.50000 4. Normally. Phrasing Results That Are Significant For a significant / test (for example. 
there will be a sectionwith the resultsof the I test.7a25a 3 2osr67 Sld Eror xaan 2 121J2 | 20185 hlapdrlra Lmno's T€sl for S.do. significancelevel.the output below). df.a significant result indicatesthat the means are not equivalent.
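The "Equal variances assumed" t can be reproduced from the Group Statistics alone. A Python sketch of the pooled-variance formula (our own illustration, not SPSS; the function name is ours), checked against the grade/morning example, where SPSS reports t = .805 with df = 2:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t (equal variances assumed) from summary statistics."""
    # Pooled variance: weighted average of the two sample variances
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    standard_error = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return (mean1 - mean2) / standard_error, df

t, df = pooled_t(82.5, 3.53553, 2, 78.0, 7.07107, 2)
print(round(t, 3), df)  # 0.805 2
```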

Phrasing Results That Are Not Significant
In our example at the start of the section, we compared the scores of the morning people to the scores of the nonmorning people. We did not find a significant difference, so we could state the following:

An independent-samples t test was calculated comparing the mean score of participants who identified themselves as morning people to the mean score of participants who did not identify themselves as morning people. No significant difference was found (t(2) = .805, p > .05). The mean of the morning people (m = 78.00, sd = 7.07) was not significantly different from the mean of nonmorning people (m = 82.50, sd = 3.54).

Practice Exercise
Use Practice Data Set 1 (Appendix B) to solve this problem. We believe that young individuals have lower mathematics skills than older individuals. We would test this hypothesis by comparing participants 25 or younger (the "young" group) with participants 26 or older (the "old" group). Hint: You may need to create a new variable that represents which age group they are in. See Chapter 2 for help.

Section 6.4 Paired-Samples t Test

Description
The paired-samples t test (also called a dependent t test) compares the means of two scores from related samples. For example, comparing a pretest and a posttest score for a group of participants would require a paired-samples t test.

Assumptions
The paired-samples t test assumes that both variables are at the interval or ratio levels and are normally distributed. The two variables should also be measured with the same scale. If the scales are different, the scores should be converted to z-scores before the t test is conducted.

SPSS Data Format
Two variables in the SPSS data file are required. These variables should represent two measurements from each participant.

Running the Command
We will create a new data file containing five variables: PRETEST, MIDTERM, FINAL, INSTRUCT, and REQUIRED. INSTRUCT represents three different instructors for the course. REQUIRED represents whether the course was required or was an elective (0 = elective, 1 = required). The other three variables represent exam scores (100 being the highest score possible).

Deviation 3. Use INSTRUCT as the independentvariable and enter PRETEST.00 Mean N std.5502 63. 62 .2857 7 6. This will bring up the main dialog box.9552 86.. Click Analyze.6055 59.6348 std.9451 79. data entry by computinga mean for each instructor using the Means command (see Chapter 3 for more information).0000 7 8. then Paired-SamplesT ?nesr.9294 9.s44q @ Cdnrdur*Modd i c*@ i lryt'sn i'w q8-sdndo T Tart.sav. ' Report INSTRUCT 1.7108 78.62'.4286 7 10.00 Mean N PRETEST MIDTERM 67.t'l 4J 7 7 8. ANd FINAL as your dependent variables.t7 78.00 Total 6r5db trl*lct l t t tilkrdo| tbb Bsi*i l)06ctftlv.5714 7 7. then Compare Means.4286 7 5. MIDTERM.6617 FINAL 79. Deviation 2.+ t) 47 78 6l 68 64 53 7l 6l 57 49 7l 6l 58 58 MIDTERM 64 9l 77 69 77 88 85 64 98 77 86 77 67 85 79 77 65 93 83 75 74 FINAL 69 89 8l 7l IJ INSTRUCT I I I I I 2 2 2 2 2 2 2 J J J J J t J J REQUIRED 0 0 0 86 86 69 100 85 93 87 76 95 97 89 83 100 94 92 92 0 0 0 0 0 0 Enter the data and save it as You can check your GRADES.1429 7 10.. 9218 92.vr*.1429 7 11.Chapter6 ParametricInferentialStatistics PRETEST 56 79 68 59 64 t. Once you have entered the I data. i. Deviation Mean N std. 1429 21 9. Deviation Mean N std.3837 63.5032 86.conducta paired-samples test comparing pretest scores and final scores.6190 21 9.3333 21 8.5714 (6.

You must select pairs of variables to compare. Click once on PRETEST, then once on FINAL. As you select them, the variables are placed in the Current Selections area. Click on the right arrow to transfer the pair to the Paired Variables section. Click OK to conduct the test.

Reading the Output
The output for the paired-samples t test consists of three components. The first part gives you basic descriptive statistics for the pair of variables.

Paired Samples Statistics
                  Mean      N     Std. Deviation    Std. Error Mean
Pair 1  PRETEST   63.3333   21    8.9294            1.9485
        FINAL     86.1429   21    9.6348            2.1025

The second part of the output is a Pearson correlation coefficient for the pair of variables.

[SPSS output: Paired Samples Correlations. PRETEST & FINAL: N = 21, Correlation = .535, Sig. = .013.]

Within the third part of the output, the section called Paired Differences contains information about the differences between the two variables. You may have learned in your statistics class that the paired-samples t test is essentially a single-sample t test calculated on the differences between the scores. The final three columns contain the value of t, the degrees of freedom, and the probability level. In the example presented here, we obtained a t of -11.646, with 20 degrees of freedom and a significance level of less than .001. Note that this is a two-tailed significance level. See the start of this chapter for more details on computing a one-tailed test.

Paired Samples Test (Paired Differences)
                          Mean       Std. Deviation   Std. Error Mean   t         df    Sig. (2-tailed)
Pair 1  PRETEST - FINAL   -22.8095   8.9756           1.9586            -11.646   20    .000

Drawing Conclusions
Paired-samples t tests determine whether or not two scores are significantly different from each other. Significant values indicate that the two scores are different. Values that are not significant indicate that the scores are not significantly different.

Phrasing Results That Are Significant
When stating the results of a paired-samples t test, you should give the value of t, the degrees of freedom, and the significance level. You should also give the mean and standard deviation for each variable, as well as a statement of results that indicates whether you conducted a one- or two-tailed test. Our example above was significant, so we could state the following:

A paired-samples t test was calculated to compare the mean pretest score to the mean final exam score. The mean on the pretest was 63.33 (sd = 8.93), and the mean on the final was 86.14 (sd = 9.63). A significant increase from pretest to final was found (t(20) = -11.646, p < .001).

Phrasing Results That Are Not Significant
If the significance level had been greater than .05 (or greater than .10 if you were conducting a one-tailed test), the result would not have been significant. For example, the hypothetical output below represents a nonsignificant difference.

[SPSS output: Paired Samples Statistics and Paired Samples Test for MIDTERM and FINAL. Mean difference = -.85714, Std. Deviation = 2.96808, Std. Error Mean = 1.12183, t = -.764, df = 6, Sig. (2-tailed) = .474.]
we could a state: P. You should also give the mean and of standard deviation for each variable. 8571a Std Oryiation Std.l crl Sdr{ror St.Eror X€an 95% ConidBncs lnlsNrlotth€ DilYerenca u008r Lmt 1 88788 . and the mean on the posttest was 86. I the degrees freedom.7239 '|1.For this output.571 1 7 I I 9.93).Our exampleabovewas significant.76a tl b 2 96808 I 12183 Sio (2-lailad) a7a 64 . Significant values indicate that the two scoresare different.the hypotheticaloutput below represents nonsignificantdifference.The mean on the pretestwas 63. Phrasing Results That Are Not Significant If the significancelevel had beengreater than .*od Silrlta! ParredOrflerences Tcrl x s an . Mean Dcvirl Std.33 (sd : 8. we a so could statethe following: A paired-samples test was calculated comparethe mean pretestscoreto I to the mean final exam score. A significantincrease from pretestto final was found (t(20) : -ll .as well as a statement results that indicates of whetheryou conducted one.63).000 Drawing Conclusions Paired-samples testsdeterminewhetheror not two scoresare significantly differI ent from each other. FINAL 8.10 if you were conducting one-tailed a test).Errot Maan l)iffcrcncc sig U D oer I Lower df l)Jailatl\ Pair1 PRETEST -22.p < .rl c(l Sxtlro3 N Cq I olrlqt6 Cotrelali 96S 5 to ts4trI mtdt€rm&flnal I 000 P. Values that are not significant indicatethat the scoresare not significantly different.the resultwould not have beensignificant.00680 P.9586 -26.Chapter6 ParametricInferentialStatistics Paired Samples Test Paired Differences 95%Confidence Inlerval the of srd.001).or two-tailedtest.509 7 95523 3 75889 3. PhrasingResultsThatAre Significant When statingthe resultsof a paired-samplestest.8095 .8952 -18.you shouldgive the value of t. For example.646.rtlstk r Sld.and the significancelevel.05 (or greaterthan .14(sd:9.9756 1.Etrof x6an Patr 1 midlsrm nnal N Sl d D d a l l o n r B t1a3 79.646 20 .

Group I compared to Whenwe conduct multipleI tests. 65 .05). When we groups.764. videonly onescore thedependent for Running the Command we data in For this example. mean the midterm (sd:7.S Data Format datafile.we would conduct factorial ANOVA.Eachparticipant should variable. to of the Type I error rate and increase chance drawingan inappropriate our conclusion. couldusea I testto determine we differences between but wouldhaveto conduct to threet tests(GroupI compared Group2. inflate we Group3.5 One-Wav ANOVA Description (ANOVA) is a procedure determines proportion Analysisof variance that the of variabilityattributed eachof several components. the of The one-way ANOVA compares means two or moregroupsof participants that vary on a singleindependent variable (thus.71(sd: 9. to increased Section6. is oneof the mostusefulandadaptIt to ablestatistical techniques available.57 = -. Which groupparticipants belongto is determined the valueof the by independentvariable.sav file we created the previous section. If we havemorethanone independent a ANOVA.the one-waydesignation).S. variableis at the intervalor ratio levANOVA alsoassumes thedependent that is normallydistributed. will usetheGRADES.95).we will have to conducta repeated-measures variable.p >. from midterm finalwasfound((6) to Practice Exercise data Usethe same GRADES. andcompute paired-samples to detera / test mineif scores from midterm final. elsand SP. One variableserves the deTwo variables required the SPSS in as are pendentvariable andtheotherasthe independent provariable. andGroup2 compared Group3). No significant difference andthe meanon the final was79.Chapter6 ParametricInferentialStatistics to the A paired-samples wascalculated compare meanmidtermscoreto t test The on was78.sav file.96). Groupsshouldbe independent eachother. ANOVA compensates thesemultiplecomparisons givesus a singleanswerthat and for from anyof theothergroups. 
Assumptions
The one-way ANOVA requires a single dependent variable and a single independent variable. Which group participants belong to is determined by the value of the independent variable. Groups should be independent of each other. If our participants belong to more than one group each, we will have to conduct a repeated-measures ANOVA. If we have more than one independent variable, we would conduct a factorial ANOVA. ANOVA also assumes that the dependent variable is at the interval or ratio level and is normally distributed.

SPSS Data Format
Two variables are required in the SPSS data file. One variable serves as the dependent variable and the other as the independent variable. Each participant should provide only one score for the dependent variable.

Running the Command
For this example, we will use the GRADES.sav data file we created in the previous section.

To conduct a one-way ANOVA, click Analyze, then Compare Means, then One-Way ANOVA. This will bring up the main dialog box for the One-Way ANOVA command. You should place the independent variable in the Factor box. For our example, INSTRUCT represents three different instructors, and it will be used as our independent variable. Our dependent variable will be FINAL. This test will allow us to determine if the instructor has any effect on final grades in the course.

Click on the Options box to get the Options dialog box. Click Descriptive. This will give you means for the dependent variable at each level of the independent variable. Checking this box prevents us from having to run a separate Means command. Click Continue to return to the main dialog box.

Next, click Post Hoc to bring up the Post Hoc Multiple Comparisons dialog box. Post-hoc tests are necessary in the event of a significant ANOVA. The ANOVA only indicates if any group is different from any other group. If it is significant, we need to determine which groups are different from which other groups. We could do t tests to determine that, but we would have the same problem as before with inflating the Type I error rate. There are a variety of post-hoc comparisons that correct for the multiple comparisons. The most widely used is Tukey's HSD. Click Tukey, then Continue. Consult an advanced statistics text for a discussion of the differences between these various tests. Now click OK to run the analysis. SPSS will calculate the requested post-hoc tests for you.

l]).502 Sia JU4 4.00 76. The column labeled Sig.. Next is InF.one for BetweenGroups(in this case.J462 structor 2 compared to Instructor l.6339 1 -17. ent variable)andthe total.2'. and so on.3 482 1 5.For example. and anotherfor Within Groups(18 is the number minusthe numberof levelsof our independentvariable [2] .00 81.502 4.e.Enor LowerBound UooerBound Minimum Maximum Mean 72.502 4. Next is Inent variable.05 level.0000 3. F is a ratio of explainedvariance to unexplained ance.502 2.00 3.12805 86.8571 Lower B ound -16.00 . For a one-way ANOVA.95523 3. is at 67 .uu 86 9288 7. (Note that this is redundantwith the first row. of Descriptives final N UU 2.5182 5.9 2180 4. 143 the variance have been 1856.396 -24.571 listed.t41 6Y. The mean differ€nce significant the .027 . In our Upper R6r rn.4286 100.57.00680 79.along with their relative sizes. of participants post-hoc The next part of the output consistsof the results of our Tukey's ^EISD comparison.00 1.00 87.304 .NAL variabre: Dooendenr HS D 95% Confidence Mean (J) Difference ( t) Il-. represents tors.6339 -1.3276 96. of us This table presents with every possiblecombination levels of our independInstructor I comparedto Instructor 2.396 .3482 17.tl INSTRUCT INSTRUCT 1 .3482 -4.4286 100.8571' 6. The Within Groupsvariance variThe primary answeris F.8571' 6.l Sld.7572 90. representsthe Type I error (p) rate for the simple (2-level) comparison in that row.08003 92.00 83.491 1. 1429 The next section of the output is the ANOVA sourcetable.00 3.5285 9. Enor 4.49 11 3. hisftrer exam score 79. Consult a statisticstext for more informationon how it is determined.OU €.3 661 18.0000 12.00 4. Deviation Std.00 69.00 69.2 is the number of levels of our independentvariable [3 .4911 '.5714 100.00 Total 7 7 2'l 95% Confidence Interval for Mean Std.0 0 2.8571 -6.3]).502 4.) Next is Instructor 2 comparedto Instructor 3. This is where 579.49 1 24.10248 86. 
there are two componentsto the variance: Between due to our independent variable) and Within the Groups (which represents differences within eachlevel of our independentvariable).00 2.00 -12.UU tiv.00 1.levelof the independinstructor will Descriptive statistics be givenfor each final Instructor hadan average I in class.429 the various componentsof 1277.InferentialStatistics Chapter6 Parametric Readingthe Output (i.50325 2. The first row represents structor I compared to Multlple Comparlsons Instructor 3.502 4.5296 1 0 .027 .63476 2.The F has two of different degrees freedom. For differences Groups(which represents differencesdue to different instrucour example.3661 -5.the BetweenGroups variancerepresents individual differencesin students.3389 97.

96) than students who had = 92.00 at 0 0 78 00 71 00 79 00 All:'Vl sumof wrlhinOroups Tol.00819 2. Given this result. we would have received the following output: The ANOVA was not significant.92) were not significantly different from either of the other two groups. InstructorI is significantly 3.The students from the three different classes not differ significantlyat the start of the term. p < .605r8 7 6. the degreesof freedom. This analysis revealed that students who had InstructorI scoredlower (m:79.A significant difference was found among the instructors (F(2. sd:7.05). of Phrasing Results That Are Significant In our exampleabove.61).Students sd : 10. Phrasing Results That Are Not Significant If we had conducted the analysis using PRETEST as our dependent variable instead of FINAL.14(sd : 10.we could statethe following: We computed a one-way ANOVA comparing the final exam scores of participantswho took a course from one of three different instructors.l 210.38). Studentswho had Instructor2 had a mean scoreof 63. we may statethe following: Ocrcr lriir 95$ Conlldanc0 Inloml N tol Sld Ddaton Sld Eiror LMf Bo u n d Uooer gound 200 300 Tolel 1 21 r 63. differentfrom Instructor but InstructorI is not significantly differentfrom Instructor andInstructor is not significantly 2 different 2. Drawing Conclusions Drawing conclusionsfor ANOVA requiresthat we indicate the value of F.18) 1.p >.08. from Instructor 3.000 t 594667 t8 20 120 333 15222 1 600 229 The pretest means of students who took a course from three different instructors were compared using a one-way ANOVA. Tukey's f/SD was usedto determinethe natureof the differences between the instructors.57. Instructor (m 3 who had Instructor (m:86.43.Chapter6 ParametricInferentialStatistics example above.667 135a.50).55).at5t1 I ga€sa 5333r a 53 2278 592587 7? 95r 3 55 3a36 67 3979 1700 r9. A significantANOVA should be followed by the resultsof a post-hocanalysisand a verbal statement the results.05).57 (sd : 8.18) : 4.60. 
68 .29(sd = 6. No significant = difference was found (F'(2.5501 I 92935 4.Studentswho had lnstructor3 had a meanscoreof 59. so there is no need to refer to the Multiple Comparisons table. Students did who had Instructor I had a mean scoreof 67.1 29 592857 633333 10.and the significance level. 2 sd: 5.43.
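The arithmetic behind the ANOVA source table can be sketched in a few lines of Python. The scores below are hypothetical, not the GRADES.sav values; the structure of the computation is what matters: partition the total variability into Between Groups and Within Groups, divide each by its degrees of freedom, and form the F ratio.

```python
# Hypothetical final exam scores for three instructors (illustrative data only)
groups = {
    "Instructor 1": [75, 80, 79, 82, 77, 81, 79],
    "Instructor 2": [88, 85, 90, 84, 87, 86, 89],
    "Instructor 3": [93, 90, 95, 91, 94, 92, 90],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups sum of squares: deviation of each group mean from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
# Within-groups sum of squares: deviation of each score from its own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values())

df_between = len(groups) - 1                # k - 1 = 2
df_within = len(all_scores) - len(groups)   # N - k = 21 - 3 = 18
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(round(f_ratio, 2))
```

With 21 participants in 3 groups, the degrees of freedom come out to 2 and 18, matching the source table described above.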

Practice Exercise

Using Practice Data Set 1 in Appendix B, determine if the average math scores of single, married, and divorced participants are significantly different. Write a statement of results.

Section 6.6 Factorial ANOVA

Description

The factorial ANOVA is one in which there is more than one independent variable. A 2 x 2 ANOVA, for example, has two independent variables, each with two levels. A 2 x 2 x 2 ANOVA has three independent variables, each with two levels. A 3 x 2 x 2 ANOVA has three independent variables; one has three levels, and the other two have two levels. Factorial ANOVA is very powerful because it allows us to assess the effects of each independent variable, plus the effects of the interaction.

Assumptions

Factorial ANOVA requires all of the assumptions of one-way ANOVA (i.e., the dependent variable must be at the interval or ratio levels and normally distributed). In addition, the independent variables should be independent of each other.

SPSS Data Format

SPSS requires one variable for the dependent variable and one variable for each independent variable. If we have any independent variable that is represented as multiple variables (e.g., PRETEST and POSTTEST), we must use the repeated-measures ANOVA.

Running the Command

This example uses the GRADES.sav data file from earlier in this chapter. Click Analyze, then General Linear Model, then Univariate. This will bring up the main dialog box for Univariate ANOVA. Select the dependent variable and place it in the Dependent Variable blank (use FINAL for this example). Select one of your independent variables (INSTRUCT, in this case) and place it in the Fixed Factor(s) box. Place the second independent variable (REQUIRED) in the Fixed Factor(s) box as well.

Finally, click Options. When the Options dialog box comes up, move INSTRUCT, REQUIRED, and INSTRUCT x REQUIRED into the Display Means for blank. This will provide you with means for each main effect and interaction term. Click Continue. Having defined the analysis, click OK to run it.

Reading the Output

At the bottom of the output, you will find the means for each main effect and interaction you selected with the Options command.

[SPSS output: Estimated marginal means tables for INSTRUCT, REQUIRED, and INSTRUCT x REQUIRED, each giving the mean, standard error, and 95% confidence interval.]

There were three instructors, so there is a mean FINAL for each instructor. We also have means for the two values of REQUIRED. Finally, we have six means representing the interaction of the two variables (this was a 3 x 2 design). Participants who had Instructor 1 (for whom the class was not required) had a mean final exam score of 79.67. Students who had Instructor 1 (for whom it was required) had a mean final exam score of 79.25, and so on.

The example we just ran is called a two-way ANOVA. This is because we had two independent variables. With a two-way ANOVA, we get three answers: a main effect for INSTRUCT, a main effect for REQUIRED, and an interaction result for INSTRUCT x REQUIRED.
If you were to select Post-Hoc, SPSS would run post-hoc analyses for the main effects but not for the interaction term.
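The marginal and cell means that SPSS displays for a 3 x 2 design can be reproduced by hand, which makes it clear what each of the three answers refers to. The records below are hypothetical, not the GRADES.sav data; each tuple is (instructor, required, final score).

```python
from collections import defaultdict

# Hypothetical (instructor, required, final) records for a 3 x 2 design
data = [
    (1, 0, 80), (1, 1, 78), (2, 0, 90), (2, 1, 86), (3, 0, 95), (3, 1, 92),
    (1, 0, 82), (1, 1, 76), (2, 0, 88), (2, 1, 84), (3, 0, 93), (3, 1, 90),
]

def mean(xs):
    return sum(xs) / len(xs)

by_instructor = defaultdict(list)  # main-effect means for INSTRUCT
by_required = defaultdict(list)    # main-effect means for REQUIRED
by_cell = defaultdict(list)        # interaction (cell) means, six in all
for instructor, required, final in data:
    by_instructor[instructor].append(final)
    by_required[required].append(final)
    by_cell[(instructor, required)].append(final)

print({k: mean(v) for k, v in sorted(by_instructor.items())})
# {1: 79.0, 2: 87.0, 3: 92.5}
```

The three dictionaries correspond to the three questions the two-way ANOVA answers: one set of marginal means per main effect, and one mean per cell for the interaction.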

[SPSS output: Tests of Between-Subjects Effects table for FINAL, with rows for Corrected Model, Intercept, INSTRUCT, REQUIRED, INSTRUCT * REQUIRED, Error, Total, and Corrected Total, giving the Type III sum of squares, df, mean square, F, and Sig. for each. R Squared = .342 (Adjusted R Squared = .123).]

The source table above gives us these three answers (in the INSTRUCT, REQUIRED, and INSTRUCT * REQUIRED rows). In the example, none of the main effects or interactions was significant. Note that most statistics books give a much simpler version of an ANOVA source table in which the Corrected Model, Intercept, and Corrected Total rows are not included.

In the statements of results, you must indicate F, the degrees of freedom (effect and residual/error), and the significance level, and give a verbal statement for each of the answers (three, in this case).

Phrasing Results That Are Significant

If we had obtained significant results in this example, we could state the following (these are fictitious results; for the results that correspond to the example above, please see the section on phrasing results that are not significant):

A 3 (instructor) x 2 (required course) between-subjects factorial ANOVA was calculated comparing the final exam scores for participants who had one of three instructors and who took the course either as a required course or as an elective. A significant main effect for instructor was found (F(2,15) = 10.11, p < .05). Students who had Instructor 3 had higher final exam scores (m = 92.43, sd = 5.50) than students who had Instructor 1 (m = 79.57, sd = 7.96). Students who had Instructor 2 (m = 86.43, sd = 10.92) were not significantly different from either of the other two groups. A significant main effect for whether or not the course was required was found (F(1,15) = 38.13, p < .01). Students who took the course because it was required did better (m = 91.69) than students who took the course as an elective (m = 77.72). The interaction was not significant (F(2,15) = 1.18, p > .05). The effect of the instructor was not influenced by whether or not the students took the course because it was required.

Note that in the above example, we would have had to conduct Tukey's HSD to determine the differences for INSTRUCT (using the Post-Hoc command).


This is not necessary for REQUIRED because it has only two levels (and one must be different from the other).

Phrasing Results That Are Not Significant

Our actual results were not significant, so we can state the following:

A 3 (instructor) x 2 (required course) between-subjects factorial ANOVA was calculated comparing the final exam scores for participants who had one of three instructors and who took the course either as a required course or as an elective. The main effect for whether or not the course was required was not significant (F(1,15) = .42, p > .05). The main effect for instructor was also not significant (F(2,15) = .30, p > .05). Finally, the interaction was also not significant (F(2,15) = .136, p > .05). Thus, it appears that neither the instructor nor whether or not the course is required has any significant effect on final exam scores.

Practice Exercise

Using Practice Data Set 2 in Appendix B, determine if salaries are influenced by sex, job classification, or an interaction between sex and job classification. Write a statement of results.

Section 6.7 Repeated-Measures ANOVA

Description

Repeated-measures ANOVA extends the basic ANOVA procedure to a within-subjects independent variable (when participants provide data for more than one level of an independent variable). It functions like a paired-samples t test when more than two levels of an independent variable are being compared.

Assumptions

The dependent variable should be normally distributed and measured on an interval or ratio scale. Multiple measurements of the dependent variable should be from the same (or related) participants.

SPSS Data Format

At least three variables are required. Each variable in the SPSS data file should represent the dependent variable at a single level of the independent variable. Thus, an analysis of a design with four levels of an independent variable would require four variables in the SPSS data file. If any variable represents a between-subjects effect, use the Mixed-Design ANOVA command instead.

Running the Command

This example uses the GRADES.sav data set. Recall that GRADES.sav includes three sets of grades (PRETEST, MIDTERM, and FINAL) that represent three different times during the semester. This allows us to analyze the effects of time on the test performance of our sample (hence the within-groups comparison). Click Analyze, then General Linear Model, then Repeated Measures.

Note that this procedure requires an optional module. If you do not have this command, you do not have the proper module installed. This procedure is NOT included in the student version of SPSS.

After selecting the command, you will be presented with the Repeated Measures Define Factor(s) dialog box. This is where you identify the within-subject factor (we will call it TIME). Enter 3 for the number of levels (three exams) and click Add. If we had more than one independent variable that had repeated measures, we could enter its name and click Add. Now click Define.

You will be presented with the Repeated Measures dialog box. Transfer PRETEST, MIDTERM, and FINAL to the Within-Subjects Variables section. The variable names should be ordered according to when they occurred in time (i.e., the values of the independent variable that they represent).

Click Options, and SPSS will compute the means for the TIME effect (see the one-way ANOVA section for more details about how to do this). Click OK to run the command.

Reading the Output

This procedure uses the GLM command. GLM stands for "General Linear Model." It is a very powerful command, and many sections of its output are beyond the scope of this text (see the output outline at right). The SPSS output will include many other sections of output, which you can ignore at this point. For the basic repeated-measures ANOVA, we are interested only in the Tests of Within-Subjects Effects.

[SPSS output outline: Title, Notes, Within-Subjects Factors, Multivariate Tests, Mauchly's Test of Sphericity, Tests of Within-Subjects Effects, Tests of Within-Subjects Contrasts, Tests of Between-Subjects Effects.]

[SPSS output: Tests of Within-Subjects Effects table, with Sphericity Assumed, Greenhouse-Geisser, Huynh-Feldt, and Lower-bound rows for the TIME effect and its error term.]

The Tests of Within-Subjects Effects output should look very similar to the output from the other ANOVA commands. In the above example, the effect of TIME has an F value of 121.90 with 2 and 40 degrees of freedom (we use the line for Sphericity Assumed). It is significant at less than the .001 level. When describing these results, we should indicate the type of test, the F value, the degrees of freedom, and the significance level.

One of the main limitations of SPSS is the difficulty in performing post-hoc analyses for within-subjects factors. With SPSS, the easiest solution to this problem is to conduct protected dependent t tests with repeated-measures ANOVA. There are more powerful (and more appropriate) post-hoc analyses, but SPSS will not compute them for us. For more information, consult your instructor or a more advanced statistics text.

To conduct the protected t tests, we will compare PRETEST to MIDTERM, MIDTERM to FINAL, and PRETEST to FINAL, using paired-samples t tests. Because we are conducting three tests and, therefore, inflating our Type I error rate, we will use a significance level of .017 (.05/3) instead of .05.
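The protected t test logic above can be sketched in plain Python: compute the paired-samples t statistic from the difference scores, and evaluate it against the corrected significance level of .05/3. The scores here are hypothetical, not the GRADES.sav values.

```python
import math

# Hypothetical pretest and midterm scores for the same seven participants
pretest = [60, 58, 65, 62, 59, 64, 61]
midterm = [75, 72, 80, 78, 74, 79, 77]

diffs = [m - p for p, m in zip(pretest, midterm)]
n = len(diffs)
mean_d = sum(diffs) / n
# Sample standard deviation of the difference scores
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired-samples t, with n - 1 df

# Three protected t tests inflate the Type I error rate, so each is
# evaluated at .05 / 3 (about .017) rather than .05
alpha_per_test = 0.05 / 3
print(round(t_stat, 2), "df =", n - 1, "alpha =", round(alpha_per_test, 3))
```

The same computation would be repeated for MIDTERM versus FINAL and for PRETEST versus FINAL, with each comparison judged against the corrected alpha.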

[SPSS output: Paired Samples Test table for PRETEST-MIDTERM, MIDTERM-FINAL, and PRETEST-FINAL, giving the mean difference, standard deviation, standard error, 95% confidence interval of the difference, t, df, and two-tailed significance for each pair.]

The three comparisons each had a significance level of less than .017, so we can conclude that the scores improved from pretest to midterm and again from midterm to final.

Phrasing Results That Are Significant

Because the results of our example above were significant, we could state the following:

A one-way repeated-measures ANOVA was calculated comparing the exam scores of participants at three different times: pretest, midterm, and final. A significant effect was found (F(2,40) = 121.90, p < .001). Follow-up protected t tests revealed that scores increased significantly from pretest (m = 63.33, sd = 8.93) to midterm (m = 78.62, sd = 9.66), and again from midterm to final (m = 86.14, sd = 9.63).

Phrasing Results That Are Not Significant

With results that are not significant, we could state the following (the F values here have been made up for purposes of illustration):

A one-way repeated-measures ANOVA was calculated comparing the exam scores of participants at three different times: pretest, midterm, and final. No significant effect was found (F(2,40) = 1.90, p > .05). No significant difference exists among the pretest (m = 63.33, sd = 8.93), midterm (m = 78.62, sd = 9.66), and final (m = 86.14, sd = 9.63) means.

To generate the descriptive statistics, we have to run the Descriptives command for each variable.

Practice Exercise

Use Practice Data Set 3 in Appendix B. Determine if the anxiety level of participants changed over time (regardless of which treatment they received) using a one-way repeated-measures ANOVA and protected dependent t tests. Write a statement of results.

Section 6.8 Mixed-Design ANOVA

Description

The mixed-design ANOVA (sometimes called a split-plot design) tests the effects of more than one independent variable. At least one of the independent variables must

be within-subjects (repeated measures). At least one of the independent variables must be between-subjects.

Assumptions

The dependent variable should be normally distributed and measured on an interval or ratio scale.

SPSS Data Format

The dependent variable should be represented as one variable for each level of the within-subjects independent variables. Another variable should be present in the data file for each between-subjects variable. Thus, a 2 x 2 mixed-design ANOVA would require three variables: two representing the dependent variable (one at each level of the within-subjects variable), and one representing the between-subjects independent variable.

Running the Command

The General Linear Model command runs the Mixed-Design ANOVA command. Click Analyze, then General Linear Model, then Repeated Measures.

Note that this procedure requires an optional module. If you do not have this command, you do not have the proper module installed. This procedure is NOT included in the student version of SPSS.

The Repeated Measures command should be used if any of the independent variables are repeated measures (within-subjects). This example also uses the GRADES.sav data file, and it is a 3 x 3 mixed-design. There are two independent variables (TIME and INSTRUCT), each with three levels. We previously entered the information for TIME in the Repeated Measures Define Factor(s) dialog box. (See the Repeated-Measures ANOVA command in Section 6.7 for an explanation.)

Enter PRETEST, MIDTERM, and FINAL in the Within-Subjects Variables block, and transfer INSTRUCT into the Between-Subjects Factor(s) block. Click Options and select means for all of the main effects and the interaction (see one-way ANOVA in Section 6.5 for more details about how to do this). Click OK to run the command.

Reading the Output

As with the standard repeated-measures GLM procedure, the mixed-design ANOVA provides a lot of output. For a mixed-design ANOVA, we are interested in two sections. The first is Tests of Within-Subjects Effects.

[SPSS output: Tests of Within-Subjects Effects table, with rows for TIME, TIME * INSTRUCT, and Error(time), each reported under the Sphericity Assumed, Greenhouse-Geisser, Huynh-Feldt, and Lower-bound corrections.]

This section gives two of the three answers we need (the main effect for TIME and the interaction result for TIME x INSTRUCTOR). The second section of output is Tests of Between-Subjects Effects (sample output is below). Here, we get the answers that do not contain any within-subjects effects. For our example, we get the main effect for INSTRUCT. Both of these sections must be combined to produce the full answer for our analysis.

[SPSS output: Tests of Between-Subjects Effects table for the transformed variable, with rows for Intercept, INSTRUCT, and Error.]

If we obtain significant effects, we must perform some sort of post-hoc analysis. Again, this is one of the limitations of SPSS: No easy way to perform the appropriate post-hoc test for repeated-measures (within-subjects) factors is available. Ask your instructor for assistance with this.

Phrasing Results That Are Significant

There are three answers (at least) for all mixed-design ANOVAs. When describing the results, you should include F, the degrees of freedom, and the significance level for each main effect and interaction. In addition, some descriptive statistics must be included (either give means or include a figure). Please see Section 6.6 on factorial ANOVA for more details about how to interpret and phrase the results.

For the above example, we could state the following in the results section (note that this assumes that appropriate post-hoc tests have been conducted):

A 3 x 3 mixed-design ANOVA was calculated to examine the effects of the instructor (Instructors 1, 2, and 3) and time (pretest, midterm, and final) on scores. A significant time x instructor interaction was present (F(4,36) = 58.10, p < .001). In addition, the main effect for time was significant (F(2,36) = 817.95, p < .001), and the main effect for instructor was also significant (F(2,18) = 5.70, p < .05). Upon examination of the data, it appears that Instructor 3 showed the most improvement in scores over time.

With significant interactions, it is often helpful to provide a graph with the descriptive statistics. By selecting the Plots option in the main dialog box, you can make graphs of the interaction, like the one below. Interactions add considerable complexity to the interpretation of statistical results. Consult a research methods text or ask your instructor for more help with interactions.

[Line graph: estimated marginal means plotted across TIME, with a separate line for each level of INSTRUCT.]

Phrasing Results That Are Not Significant

If our results had not been significant, we could state the following (note that the F values are fictitious):

A 3 x 3 mixed-design ANOVA was calculated to examine the effects of the instructor (Instructors 1, 2, and 3) and time (pretest, midterm, and final) on scores. No significant main effects or interactions were found. The main effect for time (F(2,36) = 1.10, p > .05), the main effect for instructor (F(2,18) = .039, p > .05), and the time x instructor interaction (F(4,36) = 1.04, p > .05) were all not significant. Exam scores were not influenced by either time or instructor.

Practice Exercise

Use Practice Data Set 3 in Appendix B. Determine if anxiety levels changed over time for each of the treatment (CONDITION) types. How did time change anxiety levels for each treatment? Write a statement of results.

Section 6.9 Analysis of Covariance

Description

Analysis of covariance (ANCOVA) allows you to remove the effect of a known covariate. In this way, it becomes a statistical method of control. With methodological controls (e.g., random assignment), internal validity is gained. When such methodological controls are not possible, statistical controls can be used.

ANCOVA can be performed by using the GLM command if you have repeated-measures factors. Because the GLM command is not included in the Base Statistics module, it is not included here.

Assumptions

ANCOVA requires that the covariate be significantly correlated with the dependent variable. The dependent variable and the covariate should be at the interval or ratio levels. In addition, both should be normally distributed.

SPSS Data Format

The SPSS data file must contain one variable for each independent variable, one variable representing the dependent variable, and at least one covariate.

Running the Command

The Factorial ANOVA command is used to run ANCOVA. To run it, click Analyze, then General Linear Model, then Univariate. Follow the directions discussed for factorial ANOVA, using the HEIGHT.sav sample data file. Place the variable HEIGHT as your Dependent Variable. Enter SEX as your Fixed Factor, then WEIGHT as the Covariate. This last step determines the difference between regular factorial ANOVA and ANCOVA. Click OK to run the ANCOVA.
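The idea of "covarying out" weight can be sketched with stdlib Python: fit a simple least-squares regression of the dependent variable on the covariate, and look at the residuals, which are the part of height that weight does not explain. The values below are hypothetical, not the HEIGHT.sav data.

```python
# Hypothetical heights (inches) and weights (pounds) for six participants
heights = [66, 64, 63, 70, 69, 71]
weights = [120, 115, 110, 170, 165, 180]

n = len(heights)
mean_h = sum(heights) / n
mean_w = sum(weights) / n

# Least-squares slope and intercept for predicting height from weight
slope = sum((w - mean_w) * (h - mean_h) for w, h in zip(weights, heights)) / \
        sum((w - mean_w) ** 2 for w in weights)
intercept = mean_h - slope * mean_w

# Residual height: the variability left over after the covariate is removed.
# ANCOVA, in effect, tests the group difference on this adjusted variability.
residuals = [h - (intercept + slope * w) for h, w in zip(heights, weights)]
print([round(r, 2) for r in residuals])
```

This is only a conceptual sketch; the actual ANCOVA adjustment is carried out within the GLM model rather than as a separate regression step.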

Reading the Output

The output consists of one main source table (shown below). This table gives you the main effects and interactions you would have received with a normal factorial ANOVA. In addition, there is a row for each covariate. In our example, we have one main effect (SEX) and one covariate (WEIGHT). Normally, we examine the covariate line only to confirm that the covariate is significantly related to the dependent variable.

[SPSS output: Tests of Between-Subjects Effects table for HEIGHT, with rows for Corrected Model, Intercept, WEIGHT, SEX, Error, Total, and Corrected Total. R Squared = .939 (Adjusted R Squared = .930).]

Drawing Conclusions

This sample analysis was performed to determine if males and females differ in height after weight is accounted for. We know that weight is related to height. Rather than match participants or use methodological controls, we can statistically remove the effect of weight.

When giving the results of ANCOVA, we must give F, degrees of freedom, and significance levels for all main effects, interactions, and covariates. If main effects or interactions are significant, post-hoc tests must be conducted. Descriptive statistics (mean and standard deviation) for each level of the independent variable should also be given.

Phrasing Results That Are Significant

The above example obtained a significant result, so we could state the following:

A one-way between-subjects ANCOVA was calculated to examine the effect of sex on height, covarying out the effect of weight. Weight was significantly related to height (F(1,13) = 112.11, p < .001). The main effect for sex was significant (F(1,13) = 62.02, p < .001), with males significantly taller (m = 69.38, sd = 3.70) than females (m = 64.50, sd = 2.33).

Phrasing Results That Are Not Significant

If the covariate is not significant, we need to repeat the analysis without including the covariate (i.e., run a normal ANOVA). For ANCOVA results that are not significant, you could state the following (note that the F values are made up for this example):

SPSS Data Format The SPSSdatafile shouldhavea variablefor eachdependentvariable. I = short-term. One addiindependent variable.Chapter6 ParametricInferentialStatistics to ANCOVA was calculated examinethe effect A one-waybetween-subjects of sex on height. require MANCOVA as well. sd : 2. The main effectfor sexwas not to = taller (F'(1.13)= ll2. p <.covaryingout the effect of weight. Assumptions that MANOVA assumes you havemultipledependentvariables that are relatedto on each other. Each dependent variable should be normally distributedand measured an interval or ratio scale. sd : 3. you do not have the proper module installed. a repeated-measures additionalvariablesin the data file.38. of Section 6. It is also postional variable is required for eachbetween-subjects MANOVA and a repeated-measures sible to do a MANCOVA.Enter the dataand savethem as SAT.Compareand contrastyour two answers.05).70) than females (m : 64. SAT and GRE scoresfor 18 participants. Weight was significantly related height(F(1. even after covaryingout the effect of weight.Repeatthe analysis. This procedure is NOT included in the student version o/SPSS.Write of a statement resultsfor each. determine if salariesare different for males and females.50.ll. this causes Type I error inflation. in much the sameway that ANOVA looks at all levels of an independent variable at once.001). GROUP is coded0 : no training.10 MultivariateAnalysis Variance(MANOVA) Description Multivariatetestsare thosethat involve more than one dependentvariable.33\. Theseextensions Running the Command Note that this procedure requires an optional module.2 = lons-term.statisticallycontrolling for years of service.13) 2. While it is possible to conduct severalunivariatetests (one for each dependent variable). six receivedshort-termtraining before taking the tests.with malesnot beingsignificantly significant (m : 69.sav. and six receivedlong-term training.Six particiThe following data represent pants receivedno specialtraining. 

SAT   GRE   GROUP
580   600   0
520   520   0
500   510   0
410   400   0
650   630   0
480   480   0
500   490   1
640   650   1
500   480   1
500   510   1
580   570   1
490   500   1
520   520   2
620   630   2
550   560   2
500   510   2
540   560   2
600   600   2

Locate the Multivariate command by clicking Analyze, then General Linear Model, then Multivariate. This will bring up the main dialog box. Enter the dependent variables (GRE and SAT, in this case) in the Dependent Variables blank. Enter the independent variable(s) (GROUP, in this case) in the Fixed Factor(s) blank. Click OK to run the command.

Reading the Output
We are interested in two primary sections of output. The first one gives the results of the multivariate tests. The section labeled GROUP is the one we want. This tells us whether GROUP had an effect on any of our dependent variables. Four different types of multivariate test results are given. The most widely used is Wilks' Lambda. Thus, the answer for the MANOVA is the Wilks' Lambda. That value is .828, with 4 and 28 degrees of freedom, which is not significant.

[SPSS output: Multivariate Tests table. For the GROUP effect, Wilks' Lambda = .828 (F = .693, hypothesis df = 4, error df = 28, Sig. = .603). Footnotes: a. Exact statistic. b. The statistic is an upper bound on F that yields a lower bound on the significance level. c. Design: Intercept+group.]

The second section of output we want gives the results of the univariate tests (ANOVAs) for each dependent variable.

[SPSS output: Tests of Between-Subjects Effects table. For GROUP, neither univariate test is significant (SAT: F(2,15) = .360, p = .703; GRE: F(2,15) = .587, p > .05), with error df = 15 for each dependent variable.]

Drawing Conclusions
We interpret the results of the univariate tests only if the group Wilks' Lambda is significant. Our results are not significant, but we will first consider how to interpret results that are significant.
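SPSS reports Wilks' Lambda, which is the ratio of determinants |E| / |E + H|, where E is the within-groups (error) sum-of-squares-and-cross-products matrix and H is the between-groups (hypothesis) matrix. The sketch below shows that computation for the two-dependent-variable case in pure Python; the data are illustrative, not the SAT/GRE file.

```python
def wilks_lambda(groups):
    """Wilks' Lambda = |E| / |E + H| for exactly two dependent variables.
    groups: a list of groups, each a list of (dv1, dv2) pairs."""
    rows = [r for g in groups for r in g]
    n = len(rows)
    grand = [sum(r[j] for r in rows) / n for j in (0, 1)]
    E = [[0.0, 0.0], [0.0, 0.0]]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for g in groups:
        m = [sum(r[j] for r in g) / len(g) for j in (0, 1)]
        for r in g:  # within-group deviations build E
            for i in (0, 1):
                for j in (0, 1):
                    E[i][j] += (r[i] - m[i]) * (r[j] - m[j])
        for i in (0, 1):  # group-mean deviations, weighted by group size, build H
            for j in (0, 1):
                H[i][j] += len(g) * (m[i] - grand[i]) * (m[j] - grand[j])
    T = [[E[i][j] + H[i][j] for j in (0, 1)] for i in (0, 1)]
    det = lambda a: a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return det(E) / det(T)

# Two identical groups: no between-group variation, so Lambda = 1.0.
g = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(wilks_lambda([g, g]))  # 1.0
```

A Lambda near 1 means the groups barely differ on the set of dependent variables; a Lambda near 0 means most of the variability is between groups.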

Phrasing Results That Are Significant
If we had received the following output instead, we would have had a significant MANOVA, and we could state the following:

[SPSS output: Multivariate Tests and Tests of Between-Subjects Effects tables for the significant example.]

A one-way MANOVA was calculated examining the effect of training (none, short-term, or long-term) on SAT and GRE scores. A significant effect was found (Lambda(4,28) = .423, p = .014). Follow-up univariate ANOVAs indicated that SAT scores were significantly improved by training (F(2,15) = 7.250, p = .006). GRE scores were also significantly improved by training (F(2,15) = 9.465, p = .002).

Phrasing Results That Are Not Significant
The actual example presented was not significant. Therefore, we could state the following in the results section:

A one-way MANOVA was calculated examining the effect of training (none, short-term, or long-term) on SAT and GRE scores. No significant effect was found (Lambda(4,28) = .828, p > .05). Neither SAT nor GRE scores were significantly influenced by training.

Chapter 7 Nonparametric Inferential Statistics

Nonparametric tests are used when the corresponding parametric procedure is inappropriate. Normally, this is because the dependent variable is not interval- or ratio-scaled. It can also be because the dependent variable is not normally distributed. If the data of interest are frequency counts, nonparametric statistics may also be appropriate.

Section 7.1 Chi-Square Goodness of Fit

Description
The chi-square goodness of fit test determines whether or not sample proportions match theoretical values. For example, it could be used to determine if a die is "loaded" or fair. It could also be used to compare the proportion of children born with birth defects to the population rate (e.g., to determine if a certain neighborhood has a statistically higher-than-normal rate of birth defects).

Assumptions
We need to make very few assumptions. There are no assumptions about the shape of the distribution. The expected frequencies for each category should be at least 1, and no more than 20% of the categories should have expected frequencies of less than 5.

SPSS Data Format
SPSS requires only a single variable.

Running the Command
We will create the following data set and call it COINS.sav. The data represent the flipping of each of two coins 20 times (H is coded as heads, T as tails).

COIN1: H T H H T H H T H H H T T T H T H T T H
COIN2: T H H T H T H T T H H T H H T H T H H T

Name the two variables COIN1 and COIN2, and code H as 1 and T as 2. The data file that you create will have 20 rows of data and two columns, COIN1 and COIN2.

To run the Chi-Square command, click Analyze, then Nonparametric Tests, then Chi-Square. This will bring up the main dialog box for the Chi-Square Test. Transfer the variable COIN1 into the Test Variable List. A "fair" coin has an equal chance of coming up heads or tails. Therefore, we will leave the Expected Values set to All categories equal. We could test a specific set of proportions by entering the relative frequencies in the Expected Values area. Click OK to run the analysis.

Reading the Output
The output consists of two sections. The first section gives the frequencies (observed N) of each value of the variable. The expected value is given, along with the difference of the observed from the expected value (called the residual). In our example, with 20 flips of a coin, we should get 10 of each value.

COIN1
        Observed N   Expected N   Residual
Head    11           10.0          1.0
Tail     9           10.0         -1.0
Total   20
[0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 10.0.]

The second section of the output gives the results of the chi-square test.

Test Statistics (COIN1)
Chi-Square    .200
df            1
Asymp. Sig.   .655
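The statistic SPSS reports here is easy to verify by hand: sum (observed minus expected) squared, divided by expected, over the categories. A minimal pure-Python sketch using the coin counts above:

```python
def chi_square_gof(observed, expected=None):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E.
    If no expected counts are given, all categories are assumed equal."""
    if expected is None:
        expected = [sum(observed) / len(observed)] * len(observed)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 11 heads and 9 tails observed in 20 flips; expected 10 and 10.
print(round(chi_square_gof([11, 9]), 3))  # 0.2, matching the SPSS value of .200
```

Passing an explicit expected list is the by-hand equivalent of entering relative frequencies in the Expected Values area of the dialog box.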

Drawing Conclusions
A significant chi-square test indicates that the data vary from the expected values. A test that is not significant indicates that the data are consistent with the expected values.

Phrasing Results That Are Significant
In describing the results, you should state the value of chi-square (whose symbol is χ²), the degrees of freedom, the significance level, and a description of the results. For example, with a significant chi-square (for a sample different from the example above, such as if we had used a "loaded" die), we could state the following:

A chi-square goodness of fit test was calculated comparing the frequency of occurrence of each value of a die. It was hypothesized that each value would occur an equal number of times. Significant deviation from the hypothesized values was found (χ²(5) = 25.48, p < .05). The die appears to be "loaded."

Note that this example uses hypothetical values.

Phrasing Results That Are Not Significant
If the analysis produces no significant difference, as in the previous example, we could state the following:

A chi-square goodness of fit test was calculated comparing the frequency of occurrence of heads and tails on a coin. It was hypothesized that each value would occur an equal number of times. No significant deviation from the hypothesized values was found (χ²(1) = .20, p > .05). The coin appears to be fair.

Practice Exercise
Use Practice Data Set 2 in Appendix B. In the entire population from which the sample was drawn, 20% of employees are clerical, 50% are technical, and 30% are professional. Determine whether or not the sample drawn conforms to these values. HINT: Enter the relative proportions of the three samples in order (20, 50, 30) in the "Expected Values" area.

Section 7.2 Chi-Square Test of Independence

Description
The chi-square test of independence tests whether or not two variables are independent of each other. For example, flips of a coin should be independent events, so knowing the outcome of one coin toss should not tell us anything about the second coin toss. The chi-square test of independence is essentially a nonparametric version of the interaction term in ANOVA.


Assumptions
Very few assumptions are needed. For example, we make no assumptions about the shape of the distribution. The expected frequencies for each category should be at least 1, and no more than 20% of the categories should have expected frequencies of less than 5.

SPSS Data Format
At least two variables are required.

Running the Command
The chi-square test of independence is a component of the Crosstabs command. For more details, see the section in Chapter 3 on frequency distributions for more than one variable. This example uses the COINS.sav example. COIN1 is placed in the Row(s) blank, and COIN2 is placed in the Column(s) blank. Click Statistics, then check the Chi-square box. Click Continue. You may also want to click Cells to select expected frequencies in addition to observed frequencies, as we will do on the next page. Click OK to run the analysis.

Reading the Output
The output consists of two parts. The first part gives you the counts. In this example, the actual and expected frequencies are shown because they were selected with the Cells option. Note that you can also use the Cells option to display the percentages of each variable that are each value. This is especially useful when your groups are different sizes.

[SPSS output: COIN1 by COIN2 crosstabulation, showing observed and expected counts in each cell, with 11 heads and 9 tails in the margins for each coin and N = 20.]

The second part of the output gives the results of the chi-square test. The most commonly used value is the Pearson chi-square, shown in the first row (value of .737). [Footnotes: a. Computed only for a 2x2 table. b. 3 cells (75.0%) have expected count less than 5. The minimum expected count is 4.05.]

Drawing Conclusions
A significant chi-square test result indicates that the two variables are not independent. A value that is not significant indicates that the variables do not vary significantly from independence.
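The Pearson chi-square works the same way as the goodness-of-fit statistic, except that the expected counts come from the row and column marginals. A pure-Python sketch; the 2x2 cell counts below are an assumption chosen to be consistent with the 11/9 marginals and the chi-square of .737 reported in this example:

```python
def chi2_independence(table):
    """Pearson chi-square for a contingency table (a list of rows).
    Expected counts come from the row and column marginals."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # expected cell count
            stat += (obs - exp) ** 2 / exp
    return stat

# Assumed crosstab with 11/9 marginals for each coin (rows: COIN1, cols: COIN2).
print(round(chi2_independence([[7, 4], [4, 5]]), 3))  # 0.737
```

Note that the smallest expected count here is 9 * 9 / 20 = 4.05, matching the footnote in the output.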

Phrasing Results That Are Significant
In describing the results, you should give the value of chi-square, the degrees of freedom, the significance level, and a description of the results. For example, with a significant chi-square (for a data set different from the one discussed above), we could state the following:

A chi-square test of independence was calculated comparing the frequency of heart disease in men and women. A significant interaction was found (χ²(1) = 23.80, p < .05). Men were more likely to get heart disease (68%) than women (40%).

Note that this summary statement assumes that a test was run in which participants' sex was coded, as well as whether or not they had heart disease.

Phrasing Results That Are Not Significant
A chi-square test that is not significant indicates that there is no significant dependence of one variable on the other. The coin example above was not significant. Therefore, we could state the following:

A chi-square test of independence was calculated comparing the result of flipping two coins. No significant relationship was found (χ²(1) = .737, p > .05). Flips of a coin appear to be independent events.

Practice Exercise
A researcher wants to know whether or not individuals are more likely to help in an emergency when they are indoors than when they are outdoors. Of 28 participants who were outdoors, 19 helped and 9 did not. Of 23 participants who were indoors, 8 helped and 15 did not. Enter these data, and find out if helping behavior is affected by the environment. The key to this problem is in the data entry. (Hint: How many participants were there, and what do you know about each participant?)

Section 7.3 Mann-Whitney U Test

Description
The Mann-Whitney U test is the nonparametric equivalent of the independent t test. It tests whether or not two independent samples are from the same distribution. The Mann-Whitney U test is weaker than the independent t test, and the t test should be used if you can meet its assumptions.

Assumptions
The Mann-Whitney U test uses the rankings of the data. Therefore, the data for the two samples must be at least ordinal. There are no assumptions about the shape of the distribution.

SPSS Data Format
This command requires a single variable representing the dependent variable and a second variable indicating group membership.

Running the Command
This example will use a new data file. It represents 12 participants in a series of races. There were long races, medium races, and short races. The values for LONG, MEDIUM, and SHORT represent the results of the races, with 1 being first place and 12 being last. Participants either had a lot of experience (2), some experience (1), or no experience (0). Enter the data from the figure at right in a new file, and save the data file as RACE.sav.

To run the command, click Analyze, then Nonparametric Tests, then 2 Independent Samples. This will bring up the main dialog box. Enter the dependent variable (LONG, for this example) in the Test Variable List blank. Enter the independent variable (EXPERIENCE) as the Grouping Variable. Click Define Groups to select which two groups you will compare. For this example, we will compare those runners with no experience (0) to those runners with a lot of experience (2). Make sure that Mann-Whitney U is checked. Click OK to run the analysis.

Reading the Output
The output consists of two sections. The first section gives descriptive statistics for the two samples. Because the data are only required to be ordinal, summaries relating to their ranks are used. Those participants who had no experience averaged 6.5 as their place in the race. Those participants with a lot of experience averaged 2.5 as their place in the race.

Ranks (long race)
experience   N    Mean Rank   Sum of Ranks
.00          4    6.50        26.00
2.00         4    2.50        10.00
Total        8

The second section of the output is the result of the Mann-Whitney U test itself. The value obtained was 0.0, with a significance level of .021.

Test Statistics (long race)
Mann-Whitney U                     .000
Wilcoxon W                       10.000
Z                                -2.309
Asymp. Sig. (2-tailed)             .021
Exact Sig. [2*(1-tailed Sig.)]     .029
[a. Not corrected for ties. b. Grouping Variable: experience.]

Drawing Conclusions
A significant Mann-Whitney U result indicates that the two samples are different in terms of their average ranks.

Phrasing Results That Are Significant
Our example above is significant, so we could state the following:

A Mann-Whitney U test was calculated examining the place that runners with varying levels of experience took in a long-distance race. Runners with no experience did significantly worse (m place = 6.50) than runners with a lot of experience (m place = 2.50; U = 0.00, p < .05).

Phrasing Results That Are Not Significant
If we conduct the analysis on the short-distance race instead of the long-distance race, we will get the following results, which are not significant.

Ranks (short race)
experience   N    Mean Rank   Sum of Ranks
.00          4    4.63        18.50
2.00         4    4.38        17.50
Total        8

Test Statistics (short race)
Mann-Whitney U                    7.500
Wilcoxon W                       17.500
Z                                 -.145
Asymp. Sig. (2-tailed)             .885
Exact Sig. [2*(1-tailed Sig.)]     .886
[a. Not corrected for ties. b. Grouping Variable: experience.]
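The U statistic itself comes from rank sums over the pooled data. A pure-Python sketch; the individual race places below are inferred from the mean ranks in the output (an assumption, since the raw RACE.sav values are not reproduced here):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U from rank sums (average ranks assigned to ties)."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        for v in pooled[i:j]:
            ranks[v] = (i + 1 + j) / 2  # average of the tied rank positions
        i = j
    r1 = sum(ranks[v] for v in x)
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

# Long-race places implied by the mean ranks above: the no-experience
# runners finished 5th-8th and the experienced runners 1st-4th.
print(mann_whitney_u([5, 6, 7, 8], [1, 2, 3, 4]))  # 0.0
```

U = 0 is the most extreme possible value: every member of one group outranked every member of the other, which is why the result is significant despite the tiny samples.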

" -"-l * .lggg*g..50..dltf.then 2 Related Samples. . levelof theindependent Running the Command lrnil$ re*|r u*t Locate the command by clicking Analyze. (pairedreferto Section on the dependent 6.. datafor the two samon plesmustbe at leastordinal.l .Determineif youngerparticipants 26) have significantly lower mathematics scoresthan older participants.^ o*sqr'. --: ip ulte! sip l. --f---"" This will bring up the diafor the r r r :t^^.The othervariable the onelevelof the independent ableat thesecond variable.**". If you have trouble.--f l.63.. lcucf8drcdomivti*r.05).'{' sd* . variable two One the represents dependent varivariable..1.I 93 .It testswhether not two related or I Wilcoxontestis weaker thanthe independenttest.we could statethe following: A Mann-Whitney U test was used to examine the difference in the race performance of runners with no experience and runners with a lot of race.No significantdifferencein the resultsof experience a short-distance in (U :7.p > .lggj=l tne dialog box for the dependentt test..4 6.I 9$l . Crrd*! '. Assumptions in The The Wilcoxontestis based thedifference rankings.xrNdil : '. Vrnra2 m .This example uses the RACE..38. *!!fd. Thereareno assumptions about shape thedistribution. Section7. are ent) t test. ' ffi I I log box -35+i . li*. then Nonparametric Tests. samples)testin Chapter I oetm.so the I testshouldbe usedif you can meetits assumptions.Runnerswith no experience averaged the racewas found 4. with a lot of experience averaged a placeof 4. of the SP^SS Data Format represents dependent variable at The testrequires variables.Hrl . Runners Practice Exercise scoresin PracticeExercise I (Appendix B) are measAssumethat the mathematics (< ured on an ordinal scale.. I Note the similaritv hetween it and | wilcoxon test.4 Wilcoxon Test Description (dependequivalent the paired-samples The Wilcoxontestis the nonparametric of The samples from the samedistribution..Chapter7 NonparametricInferentialStatistics Therefore.X.W -.savdataset.

Transfer the variables LONG and MEDIUM as a pair and click OK to run the test.

Reading the Output
The output consists of two parts. The first part gives summary statistics for the two variables. The second section contains the result of the Wilcoxon test (given as a Z score).

Ranks (MEDIUM - LONG)
Negative Ranks   N = 4    Mean Rank 5.38   Sum of Ranks 21.50
Positive Ranks   N = 5    Mean Rank 4.70   Sum of Ranks 23.50
Ties             N = 3
Total            N = 12
[a. MEDIUM < LONG. b. MEDIUM > LONG. c. MEDIUM = LONG.]

Test Statistics
Z                        -.121
Asymp. Sig. (2-tailed)    .904
[a. Based on negative ranks. b. Wilcoxon Signed Ranks Test.]

The example above shows that no significant difference was found between the results of the long-distance and medium-distance races.

Phrasing Results That Are Significant
A significant result means that a change has occurred between the two measurements. If that happened, we could state the following:

A Wilcoxon test examined the results of the medium-distance and long-distance races. A significant difference was found in the results (Z = 3.40, p < .05). Medium-distance results were better than long-distance results.

Note that these results are fictitious.

Phrasing Results That Are Not Significant
In fact, the results in the example above were not significant, so we could state the following:

A Wilcoxon test examined the results of the medium-distance and long-distance races. No significant difference was found in the results (Z = -.121, p > .05). Medium-distance results were not significantly different from long-distance results.

Practice Exercise
Use the RACE.sav data file to determine whether or not the outcome of short-distance races is different from that of medium-distance races. Phrase your results.
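The Z reported by SPSS is a normal approximation built on the signed ranks of the paired differences. A pure-Python sketch of the underlying rank sums, using illustrative paired scores (not the RACE.sav values, which are not listed here):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic: drop zero differences, rank the
    absolute differences (average ranks for ties), and return the smaller
    of the positive- and negative-rank sums."""
    diffs = [b - a for a, b in zip(x, y) if b != a]
    ordered = sorted(abs(d) for d in diffs)
    def rank(v):
        lo = ordered.index(v)                       # first position of v
        hi = len(ordered) - ordered[::-1].index(v)  # one past its last position
        return (lo + 1 + hi) / 2                    # average tied rank
    pos = sum(rank(abs(d)) for d in diffs if d > 0)
    neg = sum(rank(abs(d)) for d in diffs if d < 0)
    return min(pos, neg)

# Illustrative paired measurements for four participants.
print(wilcoxon_w([10, 12, 14, 16], [12, 11, 18, 15]))  # 3.0
```

When the positive and negative rank sums are nearly equal, as in the race example above, the resulting Z is close to zero and the test is not significant.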

Section 7.5 Kruskal-Wallis H Test

Description
The Kruskal-Wallis H test is the nonparametric equivalent of the one-way ANOVA. It tests whether or not several independent samples come from the same population.

Assumptions
Because the test is nonparametric, there are very few assumptions. However, the test does assume an ordinal level of measurement for the dependent variable. The independent variable should be nominal or ordinal.

SPSS Data Format
SPSS requires one variable to represent the dependent variable and another to represent the levels of the independent variable.

Running the Command
This example uses the RACE.sav data file. To run the command, click Analyze, then Nonparametric Tests, then K Independent Samples. This will bring up the main dialog box. Enter the dependent variable (LONG) in the Test Variable List. Enter the independent variable (EXPERIENCE) as the Grouping Variable, and click Define Range to define the lowest (0) and highest (2) values. Then click OK.

Reading the Output
The output consists of two parts. The first part gives summary statistics for each of the groups defined by the grouping variable.

Ranks (long race)
experience   N     Mean Rank
.00          4     10.50
1.00         4     6.50
2.00         4     2.50
Total        12


The second part of the output gives the results of the Kruskal-Wallis test (given as a chi-square value, but we will describe it here as an H). The example here is a significant value of 9.846.

Test Statistics(a,b)
Chi-Square    9.846
df            2
Asymp. Sig.   .007
a. Kruskal Wallis Test
b. Grouping Variable: experience

Drawing Conclusions
Like the one-way ANOVA, the Kruskal-Wallis test assumes that the groups are equal. Thus, a significant result indicates that at least one of the groups is different from at least one other group. Unlike the One-Way ANOVA command, however, there are no options available for post-hoc analysis.

Phrasing Results That Are Significant
The example above is significant, so we could state the following:

A Kruskal-Wallis test was conducted comparing the outcome of a long-distance race for runners with varying levels of experience. A significant result was found (H(2) = 9.85, p < .01), indicating that the groups differed from each other. Runners with no experience averaged a placement of 10.50, while runners with some experience averaged 6.50 and runners with a lot of experience averaged 2.50. The more experience the runners had, the better they performed.

Phrasing Results That Are Not Significant
If we conducted the analysis using the results of the short-distance race, we would get the following output, which is not significant.
Ranks (short race)
experience   N     Mean Rank
.00          4     6.38
1.00         4     7.25
2.00         4     5.88
Total        12

Test Statistics(a,b)
Chi-Square    .299
df            2
Asymp. Sig.   .861
a. Kruskal Wallis Test
b. Grouping Variable: experience

This result is not significant, so we could state the following:

A Kruskal-Wallis test was conducted comparing the outcome of a short-distance race for runners with varying levels of experience. No significant difference was found (H(2) = 0.299, p > .05), indicating that the groups did not differ significantly from each other. Runners with no experience averaged a placement of 6.38, while runners with some experience averaged 7.25 and runners with a lot of experience averaged 5.88. Experience did not seem to influence the results of the short-distance race.
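H is computed from the pooled ranks of all groups: H = 12/(N(N+1)) * sum of (rank-sum squared / group size) - 3(N+1). A pure-Python sketch; the long-race placements below are inferred from the mean ranks 10.50, 6.50, and 2.50 reported earlier (an assumption):

```python
import math

def kruskal_h(*groups):
    """Kruskal-Wallis H from pooled ranks (assumes no tied values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    total = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * total - 3 * (n + 1)

# Long-race places by experience group: none, some, a lot.
h = kruskal_h([9, 10, 11, 12], [5, 6, 7, 8], [1, 2, 3, 4])
p = math.exp(-h / 2)  # chi-square upper tail; this shortcut is exact only for df = 2
print(round(h, 3), round(p, 3))  # 9.846 0.007
```

The result reproduces the H of 9.846 and the significance of .007 shown in the SPSS output above.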



Practice Exercise
Use Practice Data Set 2 in Appendix B. Job classification is ordinal (clerical < technical < professional). Determine if males and females have differing levels of job classifications. Phrase your results.

Section 7.6 Friedman Test

Description
The Friedman test is the nonparametric equivalent of a one-way repeated-measures ANOVA. It is used when you have more than two measurements from related participants.

Assumptions
The test uses the rankings of the variables, so the data must be at least ordinal. No other assumptions are required.

SPSS Data Format
SPSS requires at least three variables in the SPSS data file. Each variable represents the dependent variable at one of the levels of the independent variable.

Running the Command
Locate the command by clicking Analyze, then Nonparametric Tests, then K Related Samples. This will bring up the main dialog box.



Place all the variables representing the levels of the independent variable in the Test Variables area. For this example, use the RACE.sav data file and the variables LONG, MEDIUM, and SHORT. Click OK.



Reading the Output
The output consists of two sections. The first section gives you summary statistics for each of the variables. The second section of the output gives you the results of the test as a chi-square value. The example here has a value of 0.049 and is not significant (Asymp. Sig., otherwise known as p, is .976, which is greater than .05).
Ranks
          Mean Rank
LONG      2.00
MEDIUM    2.04
SHORT     1.96

Test Statistics(a)
N             12
Chi-Square    .049
df            2
Asymp. Sig.   .976
a. Friedman Test

Drawing Conclusions
The Friedman test assumes that the three variables are from the same population. A significant value indicates that the variables are not equivalent.

Phrasing Results That Are Significant
If we obtained a significant result, we could state the following (these are hypothetical results):

A Friedman test was conducted comparing the average place in a race of runners for short-distance, medium-distance, and long-distance races. A significant difference was found (χ²(2) = 0.057, p < .05). The length of the race significantly affects the results of the race.

Phrasing Results That Are Not Significant
In fact, the example above was not significant, so we could state the following:

A Friedman test was conducted comparing the average place in a race of runners for short-distance, medium-distance, and long-distance races. No significant difference was found (χ²(2) = 0.049, p > .05). The length of the race did not significantly affect the results of the race.

Practice Exercise
Use the data in Practice Data Set 3 in Appendix B. If anxiety is measured on an ordinal scale, determine if anxiety levels changed over time. Phrase your results.
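The Friedman chi-square ranks each participant's k scores and then compares the treatments' rank sums: χ² = 12/(nk(k+1)) * sum of squared rank sums - 3n(k+1). A pure-Python sketch with illustrative data (not the RACE.sav values):

```python
def friedman_chi2(*treatments):
    """Friedman chi-square: rank the k scores within each subject
    (ties broken arbitrarily here), then compare the rank sums."""
    k, n = len(treatments), len(treatments[0])
    rank_sums = [0.0] * k
    for scores in zip(*treatments):  # one tuple of k scores per subject
        order = sorted(range(k), key=lambda j: scores[j])
        for r, j in enumerate(order, start=1):
            rank_sums[j] += r
    total = sum(r ** 2 for r in rank_sums)
    return 12 / (n * k * (k + 1)) * total - 3 * n * (k + 1)

# Four runners whose places are always best in race A, middling in race B,
# and worst in race C: a perfectly consistent ordering.
print(friedman_chi2([1, 2, 1, 2], [3, 4, 3, 4], [5, 6, 5, 6]))  # 8.0
```

A perfectly consistent ordering yields the maximum chi-square for n = 4 and k = 3; the nearly identical rank sums in the race example above are why its chi-square was only 0.049.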


Chapter 8 Test Construction

Section 8.1 Item-Total Analysis

Description
Item-total analysis is a way to assess the internal consistency of a data set. As such, it is one of many tests of reliability. Before a test can be valid, it must first be reliable. Item-total analysis comprises a number of items that make up a scale or test designed to measure a single construct (e.g., intelligence), and determines the degree to which all of the items measure the same construct. It does not tell you if it is measuring the correct construct (that is a question of validity).

Assumptions
All the items in the scale should be measured on an interval or ratio scale. In addition, each item should be normally distributed. If your items are ordinal in nature, you can conduct the analysis using the Spearman rho correlation instead of the Pearson r correlation.

SPSS Data Format
SPSS requires one variable for each item (or question) in the scale. In addition, you must have a variable representing the total score for the scale. The total can be calculated with the techniques discussed in Chapter 2.

Conducting the Test
Item-total analysis uses the Pearson Correlation command. (For more help on conducting correlations, see Chapter 5.) To conduct it, open the QUESTIONS.sav data file you created in Chapter 2. Click Analyze, then Correlate, then Bivariate. Place all questions and the total in the right-hand window, and click OK.
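Item-total analysis is simply a Pearson correlation between each item and the total score. A pure-Python sketch with a hypothetical three-item scale for four respondents (not the QUESTIONS.sav data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical item scores; the total is the sum across items.
q1, q2, q3 = [1, 2, 3, 4], [2, 1, 2, 1], [2, 2, 4, 4]
total = [a + b + c for a, b, c in zip(q1, q2, q3)]
for name, item in (("Q1", q1), ("Q2", q2), ("Q3", q3)):
    print(name, round(pearson_r(item, total), 3))  # Q1 0.894, Q2 0.0, Q3 1.0
```

In this hypothetical scale, Q2 correlates 0.0 with the total and would be the first item removed under the rule of thumb described below.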

Reading the Output

The output consists of a correlation matrix containing all questions and the total. Use the column labeled TOTAL, and locate the correlation between the total score and each question. In the example at right, Question 1 has a correlation of 0.873 with the total score, Question 2 has a correlation of -0.130 with the total score, and Question 3 has a correlation of 0.926 with the total.

Interpreting the Output

Item-total correlations should always be positive. If you obtain a negative correlation, that question should be removed from the scale (or you may consider whether it should be reverse-keyed). Generally, item-total correlations greater than 0.7 are considered desirable, and those of less than 0.3 are considered weak. Any questions with correlations of less than 0.3 should be removed from the scale. After the worst question is removed, the total is recalculated, and the item-total analysis is repeated without the question that was removed. Then, if any questions still have correlations of less than 0.3, the worst one is removed and the process is repeated. When all remaining correlations are greater than 0.3, the remaining items in the scale are considered to be those that are internally consistent.

Section 8.2 Cronbach's Alpha

Description

Cronbach's alpha is a measure of internal consistency. As such, it is one of many tests of reliability. Before a test can be valid, it must first be reliable. Cronbach's alpha comprises a number of items that make up a scale designed to measure a single construct (e.g., intelligence) and determines the degree to which all the items measure the same construct. It does not tell you if it is measuring the correct construct (that is a question of validity).

Assumptions

All the items in the scale should be measured on an interval or ratio scale. In addition, each item should be normally distributed.

SPSS Data Format

SPSS requires one variable for each item (or question) in the scale.

Running the Command

This example uses the QUESTIONS.sav data file we first created in Chapter 2. Click Analyze, then Scale, then Reliability Analysis. This will bring up the main dialog box for Reliability Analysis. Transfer the questions from your scale (Q1, Q2, Q3) to the Items blank, and click OK. Do not transfer any variables representing total scores. Note that when you change the options under Model, additional measures of internal consistency (e.g., split-half) can be calculated.

Reading the Output

In this example, the reliability coefficient (Cronbach's Alpha, 3 items) is .407. Numbers close to 1.00 are very good, but numbers close to 0.00 represent poor internal consistency.

Section 8.3 Test-Retest Reliability

Description

Test-retest reliability is a measure of temporal stability. As such, it is a measure of reliability. Unlike measures of internal consistency, which tell you the extent to which all of the questions that make up a scale measure the same construct, measures of temporal stability tell you whether or not the instrument is consistent over time and/or over multiple administrations.

Assumptions

The total score for the scale should be an interval or ratio scale. The scale scores should be normally distributed.
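The alpha that the Reliability Analysis command reports (Section 8.2) can also be computed directly from the item responses with the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total). A sketch with hypothetical responses, not the book's file:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 participants x 3 items
scores = [[4, 5, 4],
          [3, 3, 4],
          [5, 5, 5],
          [2, 3, 2],
          [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # 0.918
```

As a sanity check on the formula, a scale whose items are perfectly consistent (identical columns) yields an alpha of exactly 1.0.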

SPSS Data Format

SPSS requires a variable representing the total score for the scale at the time of the first administration. A second variable representing the total score for the same participants at a different time (normally two weeks later) is also required.

Running the Command

The test-retest reliability coefficient is simply a Pearson correlation coefficient for the relationship between the total scores for the two administrations. To compute the coefficient, follow the directions for computing a Pearson correlation coefficient (Chapter 5, Section 5.1). Use the two variables representing the two administrations of the test.

Reading the Output

The correlation between the two scores is the test-retest reliability coefficient. It should be positive. Strong reliability is indicated by values close to 1.00. Weak reliability is indicated by values close to 0.00.

Section 8.4 Criterion-Related Validity

Description

Criterion-related validity determines the extent to which the scale you are testing correlates with a criterion. For example, ACT scores should correlate highly with GPA. If they do, that is a measure of validity for ACT scores. If they do not, that indicates that ACT scores may not be valid for the intended purpose.

Assumptions

All of the same assumptions for the Pearson correlation coefficient apply to measures of criterion-related validity (interval or ratio scales, normal distribution, etc.).

SPSS Data Format

Two variables are required. One variable represents the total score for the scale you are testing. The other represents the criterion you are testing it against.

Running the Command

Calculating criterion-related validity involves determining the Pearson correlation value between the scale and the criterion. See Chapter 5, Section 5.1 for complete information.

Reading the Output

The correlation between the two scores is the criterion-related validity coefficient. It should be positive. Strong validity is indicated by values close to 1.00. Weak validity is indicated by values close to 0.00.
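Because both the test-retest reliability coefficient and the criterion-related validity coefficient are ordinary Pearson correlations, either can be sketched with a standard correlation routine. The scores below are hypothetical:

```python
from scipy import stats

# Hypothetical total scores from two administrations, two weeks apart
time1 = [22, 30, 25, 28, 27, 31, 24, 29]
time2 = [24, 29, 27, 26, 28, 32, 23, 30]

r, p = stats.pearsonr(time1, time2)
print(f"Test-retest reliability r = {r:.3f}")
```

For criterion-related validity, the second list would simply hold the criterion (e.g., GPA) instead of a second administration.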

Appendix A: Effect Size

Many disciplines are placing increased emphasis on reporting effect size. While statistical hypothesis testing provides a way to tell the odds that differences are real, effect sizes provide a way to judge the relative importance of those differences. That is, they tell us the size of the difference or relationship. They are also critical if you would like to estimate necessary sample sizes, conduct a power analysis, or conduct a meta-analysis. Many professional organizations (e.g., the American Psychological Association) are now requiring or strongly suggesting that effect sizes be reported in addition to the results of hypothesis tests.1

Because there are at least 41 different types of effect sizes, each with somewhat different properties, the purpose of this appendix is not to be a comprehensive resource on effect size, but rather to show you how to calculate some of the most common measures of effect size using SPSS 15.0.

Cohen's d

One of the simplest and most popular measures of effect size is Cohen's d. Cohen's d is a member of a class of measurements called "standardized mean differences." In essence, d is the difference between two means divided by the overall standard deviation. It is not only a popular measure of effect size, but Cohen has also suggested a simple basis to interpret the value obtained: effect sizes of .2 are small, .5 are medium, and .8 are large.2 We will discuss Cohen's d as the preferred measure of effect size for t tests. Unfortunately, SPSS does not calculate Cohen's d. However, this appendix will cover how to calculate it from the output that SPSS does produce.

1 Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational & Psychological Measurement, 56, 746-759.
2 Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Effect Size for Single-Sample t Tests

Although SPSS does not calculate an effect size for the single-sample t test, it does provide us with the information we need to calculate it. Cohen's d for a single-sample t test is equal to the mean difference over the standard deviation:

d = (sample mean - test value) / s

The output presented here is the same output we worked with in Chapter 6: a one-sample test of LENGTH against a Test Value of 35, with N = 16, a mean of 35.90, and a standard deviation of 1.972. The mean difference is therefore 0.90, and

d = 0.90 / 1.972 = .46

Effect Size for Independent-Samples t Tests

Calculating effect size from the independent t test output is a little more complex because SPSS does not provide us with the pooled standard deviation. The upper section of the output (Group Statistics), however, does provide the information we need to calculate it:

s_pooled = sqrt( ((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2) )

In the example output (grade by morning, n = 2 per group), the two groups have means of 82.50 and 78.50 and standard deviations of 7.07107 and 3.53553, so:

s_pooled = sqrt( ((2 - 1)7.07107^2 + (2 - 1)3.53553^2) / (2 + 2 - 2) ) = 5.59

Once we have calculated the pooled standard deviation (s_pooled), we can calculate Cohen's d:

d = (mean1 - mean2) / s_pooled = (82.50 - 78.50) / 5.59 = .72

In this example, using Cohen's guidelines to judge effect size, we would have an effect size between medium and large.
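The pooled standard deviation and d above can be scripted. The means and standard deviations below are the ones read from the chapter's example output:

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation for two independent groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Values from the independent-samples example (n = 2 per group)
sp = pooled_sd(7.07107, 2, 3.53553, 2)
d = (82.50 - 78.50) / sp

print(round(sp, 2))  # 5.59
print(round(d, 2))   # 0.72
```

The helper takes each group's standard deviation and sample size, so it generalizes to unequal group sizes as well.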

Effect Size for Paired-Samples t Tests

As you have probably learned in your statistics class, a paired-samples t test is really just a special case of the single-sample t test. Therefore, the procedure for calculating Cohen's d is also the same. The SPSS output, however, looks a little different, so you will be taking your values from different areas. In the Paired Samples Test output shown here (PRETEST - FINAL), the mean of the paired differences is -22.8095 and the standard deviation of the differences is 8.9756 (std. error mean = 1.9586; 95% confidence interval -26.8952 to -18.7239; t = -11.646, df = 20, p < .001):

d = 22.8095 / 8.9756 = 2.54

Notice that in this example, we represent the effect size (d) as a positive number even though the mean difference is negative. Effect sizes are always positive numbers. In this example, using Cohen's guidelines for the interpretation of d, we have found a very large effect size.

r2 (Coefficient of Determination)

While Cohen's d is the appropriate measure of effect size for t tests, correlation and regression effect sizes should be determined by squaring the correlation coefficient. This squared correlation is called the coefficient of determination.

Effect Size for Correlation

Nowhere is the effect of sample size on statistical power (and therefore significance) more apparent than with correlations. Given a large enough sample, any correlation can become significant. Thus, effect size becomes critically important in the interpretation of correlations. Cohen3 suggested that correlations of .5, .3, and .1 corresponded to large, moderate, and small relationships. Those values squared yield coefficients of determination of .25, .09, and .01, respectively. It would appear, therefore, that Cohen is suggesting that accounting for 25% of the variability represents a large effect, 9% a moderate effect, and 1% a small effect.

3 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New Jersey: Lawrence Erlbaum.
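The paired-samples d can likewise be computed from the output values quoted above, remembering to report it as a positive number:

```python
# Paired-samples d: mean difference over the SD of the differences,
# using the values from the chapter's Paired Samples Test output.
mean_diff = -22.8095
sd_diff = 8.9756

d = abs(mean_diff) / sd_diff  # effect sizes are always positive
print(round(d, 2))  # 2.54
```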

To calculate the coefficient of determination, simply take the r value that SPSS provides and square it. The coefficient should be interpreted as the proportion of variance in the dependent variable that can be accounted for by the relationship between the independent and dependent variables. While Cohen provided some useful guidelines for interpretation, each problem should be interpreted in terms of its true practical significance. For example, if a treatment is very expensive to implement, or has significant side effects, then a larger correlation should be required before the relationship becomes "important." For treatments that are very inexpensive, a much smaller correlation can be considered "important."

Effect Size for Regression

The Model Summary section of the output reports R2 for you. The example output here (Predictors: (Constant), HEIGHT) shows a coefficient of determination of .649 (R = .806, Adjusted R Square = .624), meaning that almost 65% of the variability in the dependent variable is accounted for by the relationship between the dependent and independent variables.

Eta Squared

A third measure of effect size is Eta Squared. Eta Squared is used for Analysis of Variance models. The GLM (General Linear Model) function in SPSS (the function that runs the procedures under Analyze, then General Linear Model) will provide Eta Squared. Eta Squared has an interpretation similar to a squared correlation coefficient (r2): it represents the proportion of the variance accounted for by the effect. Unlike r2, however, which represents only linear relationships, Eta Squared can represent any type of relationship.

Effect Size for Analysis of Variance

For most Analysis of Variance problems, you should elect to report Eta Squared as your effect size measure. SPSS provides the calculation for this as part of the General Linear Model (GLM) command. To obtain Eta Squared, you simply click on the Options box in the main dialog box for the GLM command you are running (this works for the Univariate, Multivariate, and Repeated Measures versions of the command, even though only the Univariate option is presented here).
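The squaring step is simple enough to check by hand. Note that SPSS computes .649 from an unrounded R, so squaring the rounded .806 gives a slightly different third decimal:

```python
# Coefficient of determination from the Model Summary's R value
r = 0.806
r_squared = r ** 2
print(round(r_squared, 3))  # 0.65, matching the .649 SPSS reports
```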

Once you have selected Options, a new dialog box will appear. One of the options in that box will be Estimates of effect size. When you select that box, SPSS will provide Eta Squared values as part of your output.

In the Tests of Between-Subjects Effects table shown here, we obtained an Eta Squared of .761 for our main effect for group membership (group: Sum of Squares = 10.450, df = 2, Mean Square = 5.225, F = 19.096, Sig. = .000, Partial Eta Squared = .761; Error: Sum of Squares = 3.283, df = 12; Corrected Total: Sum of Squares = 13.733, df = 14; R Squared = .761, Adjusted R Squared = .721). Because we interpret Eta Squared using the same guidelines as r2, we would conclude that this represents a large effect size for group membership.
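The Eta Squared value in this output can be verified from the sums of squares in the same table, since for this design it is the effect sum of squares over the corrected total:

```python
# Eta squared from the ANOVA table: SS_effect / SS_total
ss_group = 10.450
ss_corrected_total = 13.733

eta_squared = ss_group / ss_corrected_total
print(round(eta_squared, 3))  # 0.761
```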


Appendix B: Practice Exercise Data Sets

Practice Data Set 2

A survey of employees is conducted. Each employee provides the following information: Salary (SALARY), Years of Service (YOS), Sex (SEX), Job Classification (CLASSIFY), and Education Level (EDUC). Note that you will have to code SEX (Male = 1, Female = 2) and CLASSIFY (Clerical = 1, Technical = 2, Professional = 3). The twelve values for each variable, in the order they appear in the original table, are:

SALARY: 35,000; 75,000; 20,000; 23,000; 45,000; 40,000; 50,000; 18,000; 38,000; 30,000; 22,000; 20,000
YOS: 8; 4; 1; 20; 6; 6; 17; 4; 8; 15; 16; 2
SEX: Male; Female; Male; Female; Male; Female; Male; Female; Male; Female; Male; Female
CLASSIFY: Technical; Clerical; Professional; Professional; Professional; Clerical; Professional; Technical; Technical; Clerical; Clerical; Professional
EDUC: 14; 10; 16; 16; 20; 12; 20; 12; 14; 12; 12; 16

Practice Data Set 3

Participants who have phobias are given one of three treatments (CONDITION). Their anxiety level (1 to 10) is measured at three intervals: before treatment (ANXPRE), one hour after treatment (ANX1HR), and again four hours after treatment (ANX4HR). Note that you will have to code the variable CONDITION.

ANXPRE  ANX1HR  ANX4HR  CONDITION
8       7       7       Placebo
10      10      10      Placebo
9       7       8       Placebo
7       6       6       Placebo
7       7       7       Placebo
9       4       5       Valium(TM)
10      6       1       Valium(TM)
9       5       5       Valium(TM)
8       3       5       Valium(TM)
6       3       4       Valium(TM)
8       5       3       Experimental Drug
6       5       2       Experimental Drug
9       8       4       Experimental Drug
10      9       4       Experimental Drug
7       6       3       Experimental Drug

Appendix C: Glossary

All Inclusive. A set of events that encompasses every possible outcome.

Alternative Hypothesis. The opposite of the null hypothesis, normally showing that there is a true difference. Generally, this is the statement that the researcher would like to support.

Case Processing Summary. A section of SPSS output that lists the number of subjects used in the analysis.

Coefficient of Determination. The value of the correlation, squared. It provides the proportion of variance accounted for by the relationship.

Cohen's d. A common and simple measure of effect size that standardizes the difference between groups.

Correlation Matrix. A section of SPSS output in which correlation coefficients are reported for all pairs of variables.

Covariate. A variable known to be related to the dependent variable, but not treated as an independent variable. Used in ANCOVA as a statistical control technique.

Data Window. The SPSS window that contains the data in a spreadsheet format. This is the window used for running most commands.

Dependent Variable. An outcome or response variable. The dependent variable is normally dependent on the independent variable.

Descriptive Statistics. Statistical procedures that organize and summarize data.

Dialog Box. A window that allows you to enter information that SPSS will use in a command.

Dichotomous Variables. Variables with only two levels (e.g., gender).

Discrete Variable. A variable that can have only certain values (i.e., values between which there is no score, like A, B, C, D, F).

Effect Size. A measure that allows one to judge the relative importance of a difference or relationship by reporting the size of a difference.

Eta Squared. A measure of effect size used in Analysis of Variance models.

g. two flips of a coin). Independent Events. In SPSS.The hypothesis be tested.A measure central of tendency whenthe representing middleof a distribution the dataaresorted from low to high. Inferential Statistics. the SPSS often refersto independent variables groupingvariables.A true independent variable is manipulated the researcher. Two events are mutually exclusivewhen they cannot occur simultaneously.Appropriatetransformations include counting. A measurement scale where items are placed in mutually exclusive categories.g. Mean. Nominal Scale. A measurement scalewhere items are placed in mutually exclusive categories. SPSSsometimes refersto as grouping variables independent as variables.Two events independent information are if aboutoneeventgivesno information aboutthe second event(e.race. are of Mode. is mutuallyexclusive thealternative It of hypothesis. on of Interaction. variableusedto represent group membership.Statistical procedures designed allow the researcher draw to to inferences abouta population thebasis a sample.The values thata variable have. symmetric. and Levels. bell-shaped curve.With morethanone independent whena level variable.A variable with threelevelshasthreepossible can values. Null Hypothesis. Interval Scale.. NormalDistribution. Differentiation by nameonly (e. interaction an occurs of oneindependent variable affects influence another variable..Appendix Glossary C Grouping Variable. with equal intervalsbetweenvalues. Appropriate " transformations include counting. Independent Variable. n4 .normally in which there is no tnre to difference. A unimodal. reliabilitymeasure assesses extentto which all of the A that the itemsin an instrument measure same the construct. the independent of Internal Consistency. A measure centraltendency of representing value (or values)with the most the (the subjects score with thegreatest frequency).Thevariable whose the levels(values) determine groupto whicha subjectbelongs. 
where sumof thedeviation scores equals the Median. sorting.Appropriate is include categories "same"or "different. addition/subtraction.sex). by See Grouping Variable.Fifty percent thecases belowthe median.A measure central of tendency zero. Mutually Exclusive.

A measureof dispersionrepresenting number of points from the highestscore through the lowest score.Positive skew has outliers on (left) Negativeskew hasoutlierson the negative the positive(right) sideof the distribution. A single value that represents standarddeviation of two groupsofscores.and 75th percentile ranks. Significance. The points that define a distribution into four equal parts. Extreme scoresin a distribution. the level needed when multipletestsare conducted.50th. the null hypothesisis rejected." and "more. Random Assignment. A test is said to be robust if it continuesto provide accurateresultseven after the violationof someassumptions. and a true zero. the The left side Output Window. Protected Dependent / Tests. The SPSSwindow that contains resultsof an analysis." include countingand sorting. A procedurefor assigningsubjectsto conditionsin which each to subject hasan equalchance ofbeing assigned any condition. Ratio Scale. I 15 .Appendix Glossary C Ordinal Scale. sideof the distribution. the Pooled Standard Deviation. Appropriatetransformations Outliers. and of Reliability. A measurementscale where items are placed in mutually exclusive categories.additiott/subtraction. summarizes resultsin an outline. to be significantis reduced Quartiles. Scoresthat are very distant from the mean and the rest of the scoresin the distribution. the Range. the of Percentiles(Percentile Ranks). A relative scorethat gives the percentage subjectswho scoredat the samevalue or lower. A measurementscale where items are placed in mutually exclusive categories. Appropriate transformations include counting." "less. The extent to which a distribution is not symmetrical. A reliable scale is intemally consistent stableover time. multiplication/division. Appropriate categories include "same. An indication of the consistency a scale. To preventthe inflation of a Type I error. A measureof dispersion representinga special type of average deviation from the mean. 
with equal intervals between values. Skew. The right side containsthe actual results. The scoresat the 25th. and Robust. Standard Deviation. in order.sorting. If a difference is significant. A difference is said to be significant if the probability of making a Type I error is less than the acceptedlimit (normally 5%).

An indication theaccuracy a scale.The data the around deviationequalto the standard errorof the estimate. will Validity. with a meanof 0. A Type I error occurswhen the researcher hypothesis. Most SPSS will commands not function that have determined Temporal Stability. of of Variance. of purportedto reveal an "honestly significant Tukey's HSD. This is achieved when reliability measures remainstable scores overmultipleadministrations the instrument. A normaldistribution deviation 1. of can Numericvariables String Variable. difference" rejectsthe null erroneously Type I Error. and can with stringvariables. contain only numbers.DatathatSPSS usein its analyses.0. A Type II erroroccurs whenthe researcher null hypothesis.Appendix Glossary C deviationfor a regression Standard Error of Estimate. of standard equalto thesquared l16 . fails to rejectthe erroneously Type II Error.A measure dispersion deviation. Valid Data.The equivalent the standard of line pointswill be normallydistributed regression with a standard line. A stringvariable contain letters numbers.0 anda standard StandardNormal Distribution. A post-hoccomparison (HSD).

Appendix D: Data Files Used in Text

A variety of small data files are used in examples throughout this text. Here is a list of where each appears.

COINS.sav
Variables: COIN1, COIN2
Entered in Chapter 6

GRADES.sav
Variables: PRETEST, MIDTERM, FINAL, INSTRUCT, REQUIRED
Entered in Chapter 7

HEIGHT.sav
Variables: HEIGHT, WEIGHT, SEX
Entered in Chapter 4

QUESTIONS.sav
Variables: Q1, Q2 (recoded in Chapter 2), Q3, TOTAL (added in Chapter 2), GROUP (added in Chapter 2)
Entered in Chapter 2; modified in Chapter 2

RACE.sav
Variables: SHORT, MEDIUM, LONG, EXPERIENCE
Entered in Chapter 7

SAMPLE.sav
Variables: ID, DAY, TIME, MORNING, GRADE, WORK, TRAINING (added in Chapter 1)
Entered in Chapter 1; modified in Chapter 1

SAT.sav
Variables: SAT, GRE, GROUP
Entered in Chapter 6

Other Files

For some practice exercises, see Appendix B for needed data sets that are not used in any other examples in the text.

Appendix E

Information for Users of Earlier Versions of SPSS
There are a number of differences between SPSS 15.0 and earlier versions of the software. Fortunately, most of them have very little impact on users of this text. In fact, most users of earlier versions will be able to successfully use this text without needing to reference this appendix.

Variable names were limited to eight characters.

Versions of SPSS older than 12.0 are limited to eight-character variable names. The other variable name rules still apply. If you are using an older version of SPSS, you need to make sure you use eight or fewer letters for your variable names.
The Data menu will look different.

The screenshots in the text where the Data menu is shown will look slightly different if you are using an older version of SPSS. These missing or renamed commands do not have any effect on this text, but the menus may look slightly different. If you are using a version of SPSS earlier than 10.0, the Analyze menu will be called Statistics instead.




Graphing functions.

Prior to SPSS 12.0, the graphing functions of SPSS were very limited. If you are using a version of SPSS older than version 12.0, third-party software like Excel or SigmaPlot is recommended for the construction of graphs. If you are using Version 14.0 of the software, use Appendix F as an alternative to Chapter 4, which discusses graphing.



Variable icons indicate measurement type.
In versions of SPSS earlier than 14.0, variables were represented in dialog boxes with their variable label and an icon that represented whether the variable was string or numeric. Starting with Version 14.0, SPSS shows additional information about each variable. Icons now represent not only whether a variable is numeric or not, but also what type of measurement scale it is: nominal, ordinal, and scale variables (SPSS refers to interval and ratio variables as scale variables) each have their own icon.


Several SPSS data filescan now be openat once.
Versionsof SPSSolder than 14.0 could have only one data file open at a time. Copying data from one file to another entailed a tedious process of copying/opening files/pastingletc.Starting with version 14.0, multiple data files can be open at the same time. When multiple files are open,you can selectthe one you want to work with using the Windowcommand.



$$imlzeA[ Whdows lCas,sav [Dda*z] - S55 DctoEdtry


Appendix F

Graphing Data with SPSS 13.0 and 14.0
This appendix should be used as an alternative to Chapter 4 when you are using SPSS 13.0 or 14.0. These procedures may also be used in SPSS 15.0, if desired, by selecting Legacy Dialogs instead of Chart Builder.

Graphing Basics
In addition to the frequency distributions, the measures of central tendency, and the measures of dispersion discussed in Chapter 3, graphing is a useful way to summarize, organize, and reduce your data. It has been said that a picture is worth a thousand words. In the case of complicated data sets, that is certainly true.

With SPSS Version 13.0 and later, it is now possible to make publication-quality graphs using only SPSS. One important advantage of using SPSS instead of other software (e.g., Excel or SigmaPlot) to create your graphs is that the data have already been entered. Thus, duplication is eliminated, and the chance of making a transcription error is reduced.

Editing SPSS Graphs

Whatever command you use to create your graph, you will probably want to do some editing to make it look exactly the way you want. In SPSS, you do this in much the same way that you edit graphs in other software programs (e.g., Excel). In the output window, select your graph (thus creating handles around the outside of the entire object) and right-click. Then, click SPSS Chart Object, then click Open. Alternatively, you can double-click on the graph to open it for editing.




When you open the graph for editing, the Chart Editor window and the corresponding Properties window will appear.

Once Chart Editor is open, you can easily edit each element of the graph. To select an element, just click on the relevant spot on the graph. For example, to select the element representing the title of the graph, click somewhere on the title (the word "Histogram" in the example below). Once you have selected an element, you can tell that the correct element is selected because it will have handles around it.

If the item you have selected is a text element (e.g., the title of the graph), a cursor will be present and you can edit the text as you would in word processing programs. If you would like to change another attribute of the element (e.g., the color or font size), use the Properties box (Text properties are shown above).

With a little practice, you can make excellent graphs using SPSS. Once your graph is formatted the way you want it, simply select File, then Close.

Data Set

For the graphing examples, we will use a new set of data. Enter the data below and save the file as HEIGHT.sav. The data represent participants' HEIGHT (in inches), WEIGHT (in pounds), and SEX (1 = male, 2 = female).

HEIGHT  WEIGHT  SEX
66      150     1
69      155     1
73      160     1
72      160     1
68      150     1
63      140     1
74      165     1
70      150     1
66      110     2
64      100     2
60      95      2
67      110     2
64      105     2
63      100     2
67      110     2
65      105     2

Check that you have entered the data correctly by calculating a mean for each of the three variables (click Analyze, then Descriptive Statistics, then Descriptives). Compare your results with those in the table below.

Descriptive Statistics
                    N    Minimum  Maximum  Mean      Std. Deviation
HEIGHT              16   60.00    74.00    66.9375   3.9067
WEIGHT              16   95.00    165.00   129.0625  26.3451
SEX                 16   1.00     2.00     1.5000    .5164
Valid N (listwise)  16
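The same check can be run outside SPSS. Assuming the data were entered as listed, pandas reproduces the Descriptives table (its std uses the same n - 1 denominator as SPSS):

```python
import pandas as pd

# The HEIGHT.sav data from this appendix
data = pd.DataFrame({
    "HEIGHT": [66, 69, 73, 72, 68, 63, 74, 70, 66, 64, 60, 67, 64, 63, 67, 65],
    "WEIGHT": [150, 155, 160, 160, 150, 140, 165, 150,
               110, 100, 95, 110, 105, 100, 110, 105],
    "SEX": [1] * 8 + [2] * 8,
})

print(data.mean())                # 66.9375, 129.0625, 1.5
print(data.std(ddof=1).round(4))  # sample standard deviations
```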

Bar Charts, Pie Charts, and Histograms

Description

Bar charts, pie charts, and histograms represent the number of times each score occurs by varying the height of a bar or the size of a pie piece. They are graphical representations of the frequency distributions discussed in Chapter 3.

Drawing Conclusions

The Frequencies command produces output that indicates both the number of cases in the sample with a particular value and the percentage of cases with that value. Thus, conclusions drawn should relate only to describing the numbers or percentages in the sample. If the data are at least ordinal in nature, conclusions regarding the cumulative percentages and/or percentiles can also be drawn.

SPSS Data Format

You need only one variable to use this command.

Running the Command

The Frequencies command will produce graphical frequency distributions. Click Analyze, then Descriptive Statistics, then Frequencies. You will be presented with the main dialog box for the Frequencies command, where you can enter the variables for which you would like to create graphs or charts. (See Chapter 3 for other options with this command.)

Click the Charts button at the bottom to produce frequency distributions. This will give you the Frequencies: Charts dialog box. There are three types of charts under this command: Bar charts, Pie charts, and Histograms. For each type, the Y axis can be either a frequency count or a percentage (selected through the Chart Values option). You will receive the charts for any variables selected in the main Frequencies command dialog box.

Output

The bar chart consists of a Y axis, representing the frequency, and an X axis, representing each score. Note that the only values represented on the X axis are those with nonzero frequencies (61, 62, and 71 are not represented).

[Bar chart of HEIGHT frequencies]

The pie chart shows the percentage of the whole that is represented by each value.

[Pie chart of HEIGHT]

The Histogram command creates a grouped frequency distribution. The range of scores is split into evenly spaced groups. The midpoint of each group is plotted on the X axis, and the Y axis represents the number of scores for each group. If you select With Normal Curve, a normal curve will be superimposed over the distribution. This is very useful for helping you determine if the distribution you have is approximately normal.

[Histogram of HEIGHT with a superimposed normal curve]

Practice Exercise

Use Practice Data Set 1 in Appendix B. After you have entered the data, construct a histogram that represents the mathematics skills scores and displays a normal curve, and a bar chart that represents the frequencies of the variable AGE.
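A syntax sketch of the practice exercise might look like the fragment below. The variable names skill and age are assumptions; check Appendix B for the names actually used in Practice Data Set 1.

```spss
* Histogram of mathematics skills with a superimposed normal curve.
FREQUENCIES VARIABLES=skill
  /HISTOGRAM NORMAL.

* Bar chart of the frequencies of AGE.
FREQUENCIES VARIABLES=age
  /BARCHART FREQ.
```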

Scatterplots

Description

Scatterplots (also called scattergrams or scatter diagrams) display two values for each case with a mark on the graph. The X axis represents the value for one variable. The Y axis represents the value for the second variable.

Assumptions

Both variables should be interval or ratio scales. If nominal or ordinal data are used, be cautious about your interpretation of the scattergram.

SPSS Data Format

You need two variables to perform this command.

Running the Command

You can produce scatterplots by clicking Graphs, then Scatter/Dot. This will give you the first Scatterplot dialog box. Select the desired scatterplot (normally, you will select Simple Scatter), then click Define.

This will give you the main Scatterplot dialog box. Enter one of your variables as the Y axis and the second as the X axis. For example, using the HEIGHT.sav data set, enter HEIGHT as the Y axis and WEIGHT as the X axis. Click OK.
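The same scatterplot can be requested with the GRAPH command. This is a hedged sketch, assuming the HEIGHT.sav variable names used above; in the SCATTERPLOT subcommand, the variable before WITH goes on the X axis and the variable after it on the Y axis.

```spss
* Simple scatterplot: WEIGHT on the X axis, HEIGHT on the Y axis.
GRAPH
  /SCATTERPLOT(BIVAR)=weight WITH height.
```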

Output

The output will consist of a mark for each participant at the appropriate X and Y levels.

[Scatterplot of WEIGHT (X axis) by HEIGHT (Y axis)]

Adding a Third Variable

Even though the scatterplot is a two-dimensional graph, it can plot a third variable. To make it do so, enter the third variable in the Set Markers by field. In our example, we will enter the variable SEX in the Set Markers by space. Now our output will have two different sets of marks. One set represents the male participants, and the second set represents the female participants. These two sets will appear in different colors on your screen, as in the graph that follows. You can use the SPSS chart editor to make them different shapes.
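In syntax, the Set Markers by choice corresponds to a BY clause on the SCATTERPLOT subcommand. An illustrative sketch, again assuming the HEIGHT.sav variable names:

```spss
* Scatterplot with a separate marker set for each value of SEX.
GRAPH
  /SCATTERPLOT(BIVAR)=weight WITH height BY sex.
```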

[Scatterplot of WEIGHT by HEIGHT with markers set by SEX]

Practice Exercise

Use Practice Data Set 2 in Appendix B. Construct a scatterplot to examine the relationship between SALARY and EDUCATION.

Advanced Bar Charts

Description

You can produce bar charts with the Frequencies command (see Chapter 4, Section 4.3). Sometimes, however, we are interested in a bar chart where the Y axis is not a frequency. To produce such a chart, we need to use the Bar Charts command.

SPSS Data Format

At least two variables are needed to perform this command. There are two basic kinds of bar charts: those for between-subjects designs and those for repeated-measures designs. Use the between-subjects method if one variable is the independent variable and the other is the dependent variable. Use the repeated-measures method if you have a dependent variable for each value of the independent variable (e.g., you would have three variables for a design with three values of the independent variable). This normally occurs when you take multiple observations over time.

Running the Command

Click Graphs, then Bar for either type of bar chart. This will open the Bar Charts dialog box. If you have one independent variable, select Simple. If you have more than one, select Clustered.

If you are using a between-subjects design, select Summaries for groups of cases. If you are using a repeated-measures design, select Summaries of separate variables. Then click Define.

If you are creating a repeated-measures graph, you will see the dialog box below. Move each variable over to the Bars Represent area, and SPSS will place it inside parentheses following Mean. This will give you a graph like the one below at right. Note that this example uses the GRADES.sav data entered in Section 6.4 (Chapter 6).

Practice Exercise

Use Practice Data Set 1 in Appendix B. Construct a bar graph examining the relationship between mathematics skills scores and marital status. Hint: In the Bars Represent area, enter SKILL as the variable.
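A between-subjects bar chart of means, like the one in the practice exercise, can be sketched in syntax as follows. The variable names skill and marital are assumptions; check Appendix B for the names actually used in Practice Data Set 1.

```spss
* Bar chart of the mean mathematics skills score for each marital status.
GRAPH
  /BAR(SIMPLE)=MEAN(skill) BY marital.
```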
