Psychological Statistic II LAB
SPSS NOTES (PART 1): Frequencies, Descriptives, & Explore

Usage of SPSS
● We will only be using descriptive statistics, compare means, general linear model, correlate, linear regression, and nonparametric tests.

Common Assumptions of Parametric Tests
● The data measured should be at a continuous level, which is interval or ratio.
● The data should be normally distributed.
● There should be no significant outliers.
● There should be homogeneity of variance, which is only applicable for between-subjects.
● There should be linearity, a linear relationship.

Data View vs. Variable View
● Data view
  ○ The default view where we input our data.
  ○ Each column represents one variable.
  ○ Each row represents each participant.
● Input first in variable view before data view.
● Variable view
  ○ Where we label our variables.
  ○ Each row represents one variable.
  ○ Each column is one of the different labels that we can use for our variables.

Labels in the Variable View
● Name = name of the variable
● Type = variable type (e.g. numeric, date, etc.)
● Width = # of characters allowed in the data
● Decimals = # of decimals of the data that's shown
● Label = entire name of the variable (useful for long names, e.g. HDV → High Definition Video)
● Values = used when the data uses a between-subjects design (when our data is divided into two different groups)
● Missing = missing values
● Columns = leave as is
● Align = leave as is
● Measure = scale of measurement (nominal, ordinal, or scale [interval/ratio])

Between Subjects vs. Within Subjects Design
● Between subjects = Different conditions are administered to different sets of participants. (Each participant undergoes one condition once.)
● Within subjects = Both conditions are administered to one set of participants. (Each participant undergoes both conditions once.)

Quartiles/Percentiles
● Includes Q1, Q2, and Q3 (25%, 50%, and 75%).
● Includes P5, P10, P90, and P95.

Frequencies, Descriptives, & Explore
● Descriptive Stats - Included in Frequencies and Explore, so if you choose one of those two, you don't have to do descriptive stats anymore.
● Frequencies - Has a frequency table and a histogram with a normal curve. But there is no way to get the descriptives of between-subjects groups; it runs the data as one group. It also cannot run the test of normality, which is very important for the assumptions of parametric tests.
● Explore - Contains descriptive statistics, test of normality, box plot, lower and upper bound, interquartile range, stem-and-leaf, 5% trimmed mean, and other assumptions. Has everything except the frequency table.
  ○ Dependent List - Dependent variables.
  ○ Factor List - The groupings, such as for a between-subjects design.
  ○ Outliers - Only shows possible outliers.
  ○ Factor Levels Together - Between-subjects.
  ○ Dependents Together - Within-subjects.
  ○ Normality Plots with Tests - Test of normality. Determines whether the data violated the assumption of normality or not. Easiest way to tell if the data is normal or not.
  ○ Spread vs. Level with Levene Test - Tells us the homogeneity of variance. Only available for a between-subjects design. Choose the Untransformed option.

Measures of Central Tendency
● Mean
● Median
● Mode
● Sum

Measures of Dispersion
● Standard Deviation
● Variance
● Range
● Minimum
● Maximum
● Standard Error of the Mean

Measures of Distribution
● Skewness
● Kurtosis

Descriptive Statistics Table
● If the standard error of the mean is greater than 1, or especially above 3, then there is an outlier.
● If the mode has a superscript a, that means there are multiple modes, and SPSS took the smallest mode value to represent the data.
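The measures of central tendency, dispersion, and distribution listed above can be cross-checked outside SPSS. Below is a minimal Python sketch using NumPy/SciPy; the scores and variable names are invented for illustration and are not from the notes.

```python
# A sketch (with invented scores) of the descriptive statistics SPSS reports,
# computed with NumPy/SciPy as a cross-check on the Frequencies/Explore output.
from collections import Counter
import numpy as np
from scipy import stats

scores = np.array([12, 15, 14, 10, 18, 15, 13, 16, 14, 15], dtype=float)

mean = scores.mean()
median = np.median(scores)
counts = Counter(scores.tolist())
top = max(counts.values())
mode = min(v for v, c in counts.items() if c == top)  # smallest mode, as SPSS reports
sd = scores.std(ddof=1)                # sample SD (n - 1 denominator, as in SPSS)
variance = scores.var(ddof=1)
value_range = scores.max() - scores.min()
se_mean = sd / np.sqrt(scores.size)    # standard error of the mean
skewness = stats.skew(scores, bias=False)      # bias-corrected skewness
kurtosis = stats.kurtosis(scores, bias=False)  # excess kurtosis
trimmed_5 = stats.trim_mean(scores, 0.05)      # 5% trimmed from each end

print(mean, median, mode, sd, se_mean)
```

Note the `ddof=1` arguments: NumPy defaults to the population formulas, while SPSS's descriptives use the sample (n − 1) versions.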
Descriptive Statistics Table (continued)
● If the value of skewness is greater than the standard error of skewness squared, it indicates that there is a significant outlier.
● Lower Bound, Upper Bound - This tells you that the true population mean may lie between the lower and upper bound.
● 5% Trimmed Mean - The mean of the data when the 5% highest and 5% lowest of the data are removed, leaving only 90% of the data. It removes possible outliers, so if this mean is far from the regular mean, then it is an indication that there is an outlier.

Frequency Table
● Frequency - # of times the data occurred.
● Percent / Valid Percent - What percentage it represents of the data.
● Cumulative Percentage - Percentage of responses at or below a given measurement.

Test of Normality
● Gives assurance that the data is (non)normal.
● Asterisk (*) - Is a lower bound of the true significance.
● Kolmogorov-Smirnov Test - Some textbooks say that this should only be an option if you have a sample size of more than 2,000.
● Shapiro-Wilk Test - Used when there are fewer than 2,000 respondents.
● Sig. - This is the p-value.
● Ho: There IS NO significant deviation among the data, which means that the data IS normally distributed.
● Ha: There IS A significant deviation among the data, which means the data IS NOT normally distributed.
● You want Ho > Ha.
● Reject the Ho if the p < 0.05.
● Accept the Ho if the p > 0.05.

Test of Homogeneity of Variance
● An assumption for parametric tests.
● Is the dispersion of the groups equal or not?
● Equal = Homogeneous = Similar
● Sig. - This is the p-value.
● Ho: There is NO significant difference between the standard deviation/dispersion of the data.
● Ha: There is A significant difference between the standard deviation/dispersion of the data.
● You want Ho > Ha.
● Dispersion must be similar to be homogeneous.
● Reject the Ho if the p < 0.05.
● Accept the Ho if the p > 0.05.

SPSS NOTES (PART 2)

Discrete vs. Continuous Variables
● Discrete - Only occurs in whole units.
● Continuous - Occurs in fractions of units.

Shapes of Distributions
● Distribution - A group of scores.
● Bell-Shaped Distributions - Aka normal or Gaussian distributions. Most of the scores pile up in the middle. As you move further from the middle, the frequency of the scores gets less. Symmetrical in that the right and left sides of the graph are identical.
● Skewed Distributions - Are asymmetrical; the right and left sides are not identical.
  ○ Positively Skewed - Tail points to the right.
  ○ Negatively Skewed - Tail points to the left.
● Kurtosis - The extent to which distributions have an exaggerated peak or a flatter peak.
  ○ Leptokurtic Distributions - Higher, more exaggerated peak than a normal curve.
  ○ Platykurtic Distributions - Flatter peak.

Analyze Option (Top Bar)
● Compare Means - For T-Test and ANOVA
● Correlate - For Pearson
● Regression - For regression
● Scale - For Cronbach's Alpha. To find the reliability of the assessment.

3 Ways to Check Normality
● Q-Q Plot (Quantile-Quantile Plot)
● Skewness and Kurtosis
● Normal Distribution

CORRELATION AND REGRESSION

Correlation and Regression
● One assumption you need to satisfy is that the variables should have a linear relationship, and you can check this through scatterplots.
● It is almost impossible to get a perfect positive or negative correlation in real life because there are a lot of intervening/confounding variables.
● They are tests of relationships and predictors.
● Correlation
  ○ A correlation can be used only if the scores on each variable are paired or linked to each other in some way.
  ○ Two tests you can use for correlation: Pearson (parametric) and Spearman (nonparametric).
● Regression
  ○ How one variable affects the other.
  ○ Provides a regression line.
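The Shapiro-Wilk and Levene decisions described above can be reproduced in Python with SciPy. This is a sketch on invented scores (the group names and data are made up); note that `scipy.stats.levene`'s `center` argument loosely mirrors SPSS's "Based on Mean" / "Based on Median" rows.

```python
# Sketch of the two assumption checks above, via SciPy rather than SPSS menus.
# The two groups of scores are invented example data.
from scipy import stats

group_a = [48, 52, 50, 47, 53, 49, 51, 50, 46, 54]
group_b = [55, 60, 52, 58, 57, 54, 61, 56, 53, 59]

# Shapiro-Wilk: Ho = the data are normally distributed (retain Ho when p > .05)
w_stat, p_normal = stats.shapiro(group_a)
print(f"Shapiro-Wilk W = {w_stat:.3f}, Sig. = {p_normal:.3f}")

# Levene: Ho = the groups have equal variances (retain Ho when p > .05);
# center='mean' matches the "Based on Mean" row, center='median' (the default)
# the "Based on Median" row.
lev_stat, p_levene = stats.levene(group_a, group_b, center="mean")
print(f"Levene statistic = {lev_stat:.3f}, Sig. = {p_levene:.3f}")

is_normal = p_normal > 0.05      # no significant deviation from normality
is_homogeneous = p_levene > 0.05  # dispersion of the groups is similar
```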
Computational Formulas
● Definitional Formulas - Help explain the logic of a statistic but can be cumbersome when working with a large dataset.
● Computational Formulas - Allow you to compute an r value directly from raw scores without first converting everything to z-scores.

Pearson
● You can only use Pearson if both of your variables are interval or ratio.
● There has to be an assumed linear relationship.
● Examples of ratio: age and weight.
● Examples of interval: temperature and IQ. Also, abstract constructs measured by standardized questionnaires, such as the Likert Scale (this is only applicable for social sciences).
● Pearson Correlation - (Pearson r or correlation coefficient) The direction and magnitude or strength of the data. The closer the magnitude is to 1 or -1, the stronger the correlation is.
● Measures relationships between variables.

Spearman
● When you can't use Pearson, use Spearman.
● This is a non-parametric test.
● Measures relationships between variables.
● Can be used if there is no linear relationship.
● Spearman's rs - Used to compute correlations when one or both variables are ordinal.
● The computation for a Spearman's correlation involves analyzing the rank orders of two variables rather than the variables themselves.
● If one variable is ordinal and the other isn't, you need to use a Spearman's correlation and begin by converting the other variable to ranks.
● Homoscedasticity & Heteroscedasticity - The same as the homogeneity of variance. It is ideal for the data to be homogeneous. Is there uniformity in the variation of each variable? These are the preferred terms in correlation and regression.
  ○ In T-Test and ANOVA, the preferred term is homogeneity of variance.

Significance
● For your data to be significant, your alpha level must be greater than your significance level.
● Alpha Level > Significance Level
● Alpha Level = 0.05
● Significance Level < 0.05
● * - Correlation is significant at the 0.05 level (2-tailed).
● ** - Correlation is significant at the 0.01 level (2-tailed).

REGRESSION

Regression
● It tells you how much the dependent variable increases or decreases based on the increase or decrease of the independent variable.

Durbin-Watson Table
● Checker of the assumptions.

Model Summary Table
● The only important thing here is the R-Square.
● R-Square (Coefficient of Determination) - Percentage explained by the predictor variable. The change in the criterion variable.
  ○ Example: 7.7% of the variation in spirituality is explained by years in service and work engagement.
  ○ This isn't provided by Pearson, which is what makes regression more powerful. It provides the proportion of variation that the predictor variable can explain about the criterion variable.

Coefficients Table
● Unstandardized B - For every one unit measure of one variable, the other variable will increase by this much.
  ○ Example: For every one unit measure of work engagement of an employee, spirituality will increase by 0.157.

INDEPENDENT SAMPLES T-TESTS

Notes
● T-Test and ANOVA are tests of difference. They measure the difference between means, i.e. whether they are (not) significantly different from each other.
● The independent samples t-test is used when you need to compare two sample means that are unrelated.
● It uses two samples from the population to represent two different conditions.
● If the Ho is true, the obtained t should be 0. If Ho is false, the obtained t should be far from 0.
● You can use an independent samples t-test to determine whether the difference between two sample means was likely or unlikely to have occurred due to sampling error.
● If the experimental group had a higher mean and the obtained t value is in the critical region, you could conclude that the experimental group increased the DV. If the control group had a lower mean, you could conclude that the control group decreased the DV.
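Two of the analyses above, simple regression and the independent samples t-test, can be sketched in Python with SciPy. All scores below are invented illustrations; in `linregress` output, the slope plays the role of the unstandardized B and `rvalue ** 2` is the R-Square.

```python
# Sketches of a simple regression and an independent samples t-test on
# invented scores, as cross-checks of the SPSS tables described above.
from scipy import stats

engagement = [10, 12, 15, 18, 20, 23, 25, 28]    # predictor (IV), invented
spirituality = [14, 15, 17, 16, 19, 20, 22, 21]  # criterion (DV), invented

reg = stats.linregress(engagement, spirituality)
# slope = unstandardized B: DV change per one-unit change in the IV
print(f"B = {reg.slope:.3f}, R-Square = {reg.rvalue ** 2:.3f}, Sig. = {reg.pvalue:.4f}")

# Independent samples t-test: two unrelated groups of participants
control = [24, 27, 23, 26, 25, 28, 24, 26]
experimental = [30, 32, 29, 31, 33, 30, 28, 31]
t, p = stats.ttest_ind(control, experimental, equal_var=True)
# equal_var=False gives Welch's t, i.e. SPSS's "Equal variances not assumed" row.
print(f"t = {t:.2f}, Sig. (2-tailed) = {p:.4f}")
```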
Group Statistics Table
● The larger/higher mean has a greater effect.

Independent Samples Test Table
● Levene's Test for Equality of Variances - The homogeneity of variances.
  ○ You want the Sig. for the Equal Variances Assumed row to be greater than 0.05.
    ■ Assumes that the changes between, or the variability of, the scores of the two groups aren't that different. This means there are no outliers.
  ○ If the Sig. is less than 0.05, look at the Equal Variances Not Assumed row.
● If the value of Sig. (2-tailed) is greater than 0.05, the difference is not significant.
● If the value of Sig. (2-tailed) is less than 0.05, the difference is significant.
● Accept Ho if Sig. > 0.05.
● Reject Ho if Sig. < 0.05.

REPEATED/RELATED SAMPLES T-TEST

Notes
● Repeated-Measures T-Test - Participants are measured repeatedly, once before and once after a treatment.
● Related Samples T-Test - Each person in the first sample has something in common with, or is linked to, someone in the second sample.
● Other names:
  ○ Paired Samples T-Test
  ○ Matched Samples T-Test
  ○ Dependent Samples T-Test
  ○ Within-Subjects T-Test
● The related measures t-test is similar to the single-sample t-test in that it compares the deviation between two means to determine whether it is likely to have been created by sampling error.
● The related samples t-test is different in that the two means it compares both come from the same sample, which is measured twice under different conditions.
● Example: Used for before and after treatments.

Paired Samples Statistics Table
● The larger/higher mean has a greater effect.

Paired Samples Test Table
● The Sig. (2-tailed) value must be < 0.05.
● Data is significant if Sig. < 0.05.
● Reject Ho if Sig. < 0.05.
● Accept Ho if Sig. > 0.05.

SPSS NOTES (Part 3)

ONE-WAY INDEPENDENT SAMPLES ANOVA

Notes
● ANOVA = Analysis of Variance
● If you fail to satisfy the assumptions, you have to do a Kruskal-Wallis H Test (independent samples) or Friedman Test (repeated samples).
● You have one IV and one DV.
● Same as the Independent T-Test except for the number of levels in the IV.
● The independent samples ANOVA can compare two or more sample means at the same time to determine whether the deviation between any pair of sample means is greater than would be expected by sampling error.
● The same as the T-Test, ANOVA is a test of difference of means between different groups.
● A T-Test can only compare two sample means.
● An independent samples ANOVA can compare two or more sample means.
● An ANOVA analyzes the variance of scores both between and within IV conditions in an attempt to determine whether the different treatment conditions affect scores differently.

3 Things Affect the Variance of Scores
1. Measurement Error - There will always be variance in scores between people because variables cannot be measured perfectly.
2. Individual Differences - There will always be variance in scores between people because people are naturally different from each other.
3. Treatment Effect - There might be variance in scores between groups because groups experienced different IV conditions or treatments.

Assumptions
● Data Independence - Scores of individuals must be measured without one participant's scores affecting another's. Participants' scores don't affect each other.
● Appropriate Measurement of Variables - The DV must be measured on an interval/ratio scale. The IV must identify how the treatments are different (independence of observations).
  ○ If the DV is ordinal, you use the Kruskal-Wallis H Test.
● Normality Assumption - The distribution of sample means for each condition must be a normal shape.
  ○ This will be the case if the original populations are normal or if the sample sizes for each condition are near 30.
● Homogeneity of Variance - If any one of your conditions has a standard deviation double that
of another, this assumption might be violated. However, if the sample sizes are similar, this assumption can be violated without a problem.
  ○ If your data is not homogeneous, you have to use the Kruskal-Wallis H Test.
● ALL of the assumptions MUST be met; otherwise, you cannot conduct a one-way between-subjects ANOVA.
● If one or more of the assumptions are not met, you use the Kruskal-Wallis H Test.

Example Problem: Suppose you want to compare cognitive behavioral therapy (CBT) and psychodynamic therapy (PDT) as treatments for depression. You identify a sample of people with major depression and randomly divide them into three different groups.

One group undergoes CBT for 6 months. A 2nd group undergoes PDT for 6 months. A 3rd group functions as a control group and receives no treatment (NT).

After 6 months, you assess their levels of depression with the Beck Depression Inventory (scores range from 0-63), with higher scores indicating greater depression.

The IV is the type of treatment (CBT, PDT, or NT). The DV is each person's depression score on the BDI.

Stating the Null and Research Hypotheses
● Ho: The three populations of people being studied (those getting CBT, PDT, or NT) have the same mean depression scores.
● Ha: At least one population mean is different from at least one of the others.

Analyze → Compare Means → One-Way ANOVA
● Dependent List = Depression
● Factor = Therapy
● Post Hoc → Equal Variances Assumed
  ○ Tukey HSD and Fisher's LSD - Used when you have an equal number of respondents per group.
  ○ Scheffe - Used when you have an unequal number of respondents per group.
  ○ Bonferroni - Used for repeated measures.
● Select descriptive statistics and homogeneity of variance test.
● Significance Level: 0.05
  ○ This is set by default. Sometimes it's 0.01.

Descriptive Statistics Table
● There is a difference between the means. But is the difference statistically significant?
● Contains only the sample size, mean, standard deviation, standard error, lower and upper bound, minimum, and maximum.

Test of Homogeneity of Variances
● Similar to the Levene Test in the T-Test, you have to check whether the Sig. value is < or > 0.05.
● You want the Sig. value for the Levene statistic to be greater than 0.05. (Sig. > 0.05)
● Contains:
  ○ Based on Mean
  ○ Based on Median
  ○ Based on Median and With Adjusted df
  ○ Based on Trimmed Mean
● In the example problem, we satisfied the homogeneity assumption because all the Sig. values are greater than 0.05.

ANOVA Table
● The difference in means is statistically significant if the Sig. value is less than 0.05.
● Significant if Sig. < 0.05. (Reject Ho)
● Not significant if Sig. > 0.05. (Accept Ho)
● F = Computed F value
● Compare the computed F value to the critical F value.
  ○ Like in the T-Test, where you compare your computed P value to the critical P value.

Post Hoc Tests Table (Multiple Comparisons)
● If your data is not statistically significant, there is no need for you to do a Post Hoc Analysis.
● The ANOVA and the Kruskal-Wallis H Test can only tell you that there is a significant difference.
  ○ They do not tell you which among the means is statistically different from the other means. The Post Hoc analysis tells you this information.
● Always analyze the Sig. values column, and check which of the means are significantly different from each other.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
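The CBT/PDT/NT example can be sketched in Python. The BDI scores below are invented, and the follow-up uses Bonferroni-corrected pairwise t-tests as a simple stand-in for the Tukey HSD option in the SPSS menu (it is not SPSS's exact Tukey procedure).

```python
# One-way independent samples ANOVA plus a Bonferroni-corrected pairwise
# follow-up, on invented BDI scores for the three therapy groups above.
from itertools import combinations
from scipy import stats

groups = {
    "CBT": [10, 12, 9, 11, 8, 13, 10, 9],
    "PDT": [14, 16, 13, 15, 17, 14, 16, 15],
    "NT":  [22, 25, 21, 24, 23, 26, 22, 24],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, Sig. = {p_anova:.4f}")  # Sig. < .05 → at least one mean differs

if p_anova < 0.05:  # post hoc only when the overall ANOVA is significant
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
        print(f"{a} vs {b}: adjusted Sig. = {p_adj:.4f}")
```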
Homogeneous Subsets Table (Depression)
● Values under the column labeled "1" are the significant variables.
● Values under the column labeled "2" are the non-significant variables.
● The table is available for Tukey, but not Fisher's LSD.

Analyze → General Linear Model → Univariate
● Another way to conduct ANOVA.
● Use Multivariate for MANOVA, for two DVs.
● Dependent Variable = Depression
● Fixed Factors = Therapy
  ○ It's possible to have two IVs.
  ○ If you have two IVs, you call it a Two-Way Factorial ANOVA.
  ○ If you have three IVs, you call it a Three-Way Factorial ANOVA.
● Options → Display
  ○ Select descriptive statistics, estimates of effect size, observed power, and homogeneity tests.
  ○ Significance level: 0.05
  ○ Confidence intervals are 95.0%.
● Post Hoc → Equal Variances Assumed → Tukey
● EM Means is similar to Post Hoc.

Tests of Between-Subjects Effects Table
● This table is provided instead of the ANOVA table.
● To know if there is a significant difference among the means, check the Sig. value for your independent variable.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● This table has more information because it provides the Eta Squared.
● Eta Squared - Measures the strength of the relationship. The strength of the effect.
  ○ It is like the Effect Size.
  ○ Cohen: anything greater than 0.50 is strong, 0.30-0.50 is moderate, and 0.10-0.29 is weak.
● Observed Power - The practical significance.
  ○ A value greater than 0.80 in the observed power is a good, strong practical significance. Anything less than 0.80 is not good.
● In research, you analyze both the statistical significance and the practical significance.
● Sig. - The statistical significance.

ONE-WAY REPEATED-MEASURES ANOVA

Notes
● Repeated Measures ANOVA is the same as a Paired Samples T-Test. The only difference is the number of levels in the IV.
● Assumptions
  ○ Normality
  ○ Homogeneity of Variance
  ○ The DV should be interval or ratio.
  ○ You have treatment conditions, but you only have one set of participants. That set of participants will be tested at least twice.

Assumptions
● Data Independence - The responses within each condition must not be influenced by other responses within that same condition.
  ○ The procedural controls used in this study seem likely to provide data independence.
● Appropriate Measurement of Variables - 2 or more "grouping" IVs. 1 interval/ratio DV.
● Normality - The distribution of sample means for each condition (cell) must have a normal shape. This is met if the original populations are normal or if the sample size is large.
● Homogeneity of Variance - The variability within each cell should be similar. The homogeneity of variance assumption is satisfied because none of the conditions' standard deviations is double the size of any other condition's.

Example Problem: Does students' anxiety in Psychological Statistics increase or decrease before, during, or after a test? Is there a difference in anxiety levels across different time frames?

The IV is Time. The DV is the Anxiety Scores (scores range from 1-25). The higher the Anxiety Score, the more anxious the student is.

Analyze → General Linear Model → Repeated Measures
● Within-Subject Variables
  ○ Name = IV
  ○ Number of Levels
● Options → Display
  ○ Descriptive statistics, estimates of effect size, observed power, & homogeneity tests.
  ○ Significance Level: 0.05
● Move the Factor to the Display Means for section.
  ○ Click "Compare main effects"
  ○ Confidence interval adjustment: Bonferroni
    ■ Bonferroni is for Repeated Measures
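The "variance between vs. within conditions" idea, and the eta squared effect size described above, can be worked by hand. The anxiety scores below are invented, and for simplicity this sketch treats the three time conditions as if they were independent groups; SPSS's repeated-measures output additionally removes between-subject variance, so its partial eta squared would differ.

```python
# Working eta squared (SS_between / SS_total) by hand on invented anxiety
# scores for three conditions, treated here as independent groups.
import numpy as np

before = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
during = np.array([20.0, 22.0, 19.0, 23.0, 21.0])
after = np.array([10.0, 12.0, 9.0, 11.0, 13.0])
conditions = [before, during, after]

grand_mean = np.concatenate(conditions).mean()
ss_between = sum(len(c) * (c.mean() - grand_mean) ** 2 for c in conditions)
ss_within = sum(((c - c.mean()) ** 2).sum() for c in conditions)
ss_total = ss_between + ss_within

k = len(conditions)                          # number of conditions
n_total = sum(len(c) for c in conditions)    # total number of scores
f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))
eta_squared = ss_between / ss_total

print(f"F = {f_stat:.2f}, eta squared = {eta_squared:.3f}")
# eta squared > .50 counts as strong on the Cohen scale quoted above
```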
Descriptive Statistics Table
● Contains the means, standard deviations, and the sample sizes for each variable.
● Tells you which mean has a greater effect.

Multivariate Tests Table
● With this table, you can tell whether the means are significantly different from each other.
● You will focus on the Wilks' Lambda row for repeated ANOVA.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● Eta Squared - Measures the strength of the relationship. The strength of the effect.
● Observed Power - The practical significance.
● It tells you that there is a significant difference among the means; however, it doesn't tell you which has the most/least significant difference.
● If the Sig. (p) value is displayed as .000, you write that it's less than 0.05 or 0.001 (e.g. p < .001).

Mauchly's Test of Sphericity Table
● The test of Sphericity is typically the same as the Homogeneity of Variance of Levene.
● If the Sig. value > 0.05, then your assumption for homogeneity of variance is satisfied.
● There is homogeneity of variance if Sig. > 0.05.
● The same goes for the Levene Test.

Pairwise Comparisons Table
● For the Post Hoc test in repeated samples, you have to look for the Pairwise Comparisons table.
● Always analyze the Sig. values column, and check which of the means are significantly different from each other.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05

SPEARMAN

Notes
● Spearman Correlation / Rank-Order Test
● If you know that there is a violation in the parametric test (e.g. the data is non-linear or non-homogeneous), then you can't use a parametric test. Instead of Pearson, use Spearman to test relationships.
● Or if you have ordinal data, you obviously can't use Pearson, and you have to use Spearman.
● Has the same process as Pearson.

Rank the data → Analyze → Correlate → Bivariate
● Move the ranked data to the Variables area.
● Correlation Coefficients → Spearman

Correlations Table
● A negative correlation coefficient indicates that there is a negative, or inverse, correlation.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● * - Indicates that the data is significant.

TWO-WAY FACTORIAL ANOVA

Notes
● Since there are two IVs, it is a Two-Way Factorial ANOVA.
● Tells you if IV1 has an effect, if IV2 has an effect, and if the interaction of the IVs has an effect.

Analyze → General Linear Model → Univariate
● DV = Depression; Fixed = Therapy and Sleep
● Options → Display
  ○ Select descriptive statistics, estimates of effect size, observed power, and homogeneity tests.
  ○ Significance level: 0.05
● Post Hoc → Equal Variances Assumed → Tukey

Levene's Test of Equality of Error Variances
● Homogeneous if Sig. > 0.05
● Non-homogeneous if Sig. < 0.05

Test of Between-Subjects Effects
● Check the Sig. value of IV1, IV2, and IV1*IV2.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
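The "rank the data" step above can be made explicit: Spearman's rho is just Pearson's r computed on the ranks of each variable. The hours and ordinal satisfaction ratings below are invented for illustration.

```python
# Spearman's rho two ways on invented data: directly via spearmanr, and by
# hand by ranking each variable (ties get averaged ranks) and running Pearson.
from scipy import stats

hours = [2, 4, 5, 7, 8, 10]         # interval/ratio variable
satisfaction = [1, 3, 2, 4, 5, 5]   # ordinal ratings (note the tie)

rho_direct, p = stats.spearmanr(hours, satisfaction)

rank_hours = stats.rankdata(hours)          # average ranks for ties, as SPSS uses
rank_sat = stats.rankdata(satisfaction)
rho_manual, _ = stats.pearsonr(rank_hours, rank_sat)

print(f"spearmanr: {rho_direct:.4f}  Pearson on ranks: {rho_manual:.4f}")
```

The two results agree, which is exactly why the SPSS workflow is "rank the data, then Correlate → Bivariate".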
SPSS NOTES (Part 4)

Notes
● You can't always perform a parametric test.
● You can only perform a parametric test if all the assumptions needed for that test have been satisfied. Otherwise, you have to use the test's nonparametric counterpart.
● The Mann-Whitney U, Wilcoxon Signed-Rank, Kruskal-Wallis H, and Friedman Tests are used for ordinal or ranked data. They can also be used when the data is interval, but you have to convert the interval data into ranks.
● The Chi-Square is used when you have two nominal variables.

Analyze → Nonparametric Tests → Legacy Dialogs → 2 Independent Samples
● The Mann-Whitney U Test is the parallel to the Two-Independent-Samples T-Test.
● You want to compare two groups.
● Test Variable List = Dependent Variable
● Grouping Variable = Independent Variable
  ○ Define the groups
● Test Type - Select Mann-Whitney U
● The data is in terms of mean rank, NOT means.
● Is the difference between the mean ranks statistically significant?
● Asymp. Sig. (2-tailed) - The p-value or alpha level. It must be less than 0.05 to reject Ho.
● Significant = Asymp. Sig. < 0.05
● Not Significant = Asymp. Sig. > 0.05

Analyze → Nonparametric Tests → Legacy Dialogs → 2 Related Samples
● The Wilcoxon Signed-Rank Test is the counterpart of the Repeated Measures T-Test, Two Related Samples Test, or the Correlated Samples Test.
● The Wilcoxon Rank-Sum Test is interchangeable with the Mann-Whitney U Test.
● Test Pairs - Insert variables 1 and 2
● Test Type - Select Wilcoxon

Analyze → Nonparametric Tests → Legacy Dialogs → K Independent Samples
● If your independent variable has more than two levels, you can do the Kruskal-Wallis H Test.
● Test Variable List = Dependent Variable
● Grouping Variable = Groupings
  ○ Define the grouping range
● Test Type - Select Kruskal-Wallis H Test
● Asymp. Sig. (2-tailed) - The p-value or alpha level. It must be less than 0.05 to reject Ho.
● Significant = Asymp. Sig. < 0.05
● Not Significant = Asymp. Sig. > 0.05
● The Kruskal-Wallis H Test is incomplete, so you have to do another procedure to know which among the groups are statistically significant from each other.
● To check this, you can do a Mann-Whitney U Test by pairing the groups.

Analyze → Nonparametric Tests → Legacy Dialogs → K Related Samples
● The Friedman Test is the counterpart of the Within-Subjects ANOVA or Repeated Measures ANOVA, where the same respondents are tested more than twice.
● Ex. If you want to test the effects of weight training, you measure the person before, during, and after.
● If you don't satisfy your assumptions for ANOVA, for example if your data is ranked (ordinal), you have to do a Friedman Statistic.
● Test Type - Select Friedman

Analyze → Correlate → Bivariate
● Spearman is the counterpart of Pearson.
● Spearman & Pearson = Correlation
● Correlation Coefficients - Select Spearman
● Spearman is used when the relationship between the variables is not linear.
● You can't perform Pearson if your variables aren't linear.

Analyze → Nonparametric Tests → Legacy Dialogs → Chi-Square
● Chi-Square is used when you have two nominal or categorical variables.
● Nominal/Categorical Variable - You're only getting the number, or the frequency.
● Ex. Relationship between smoking and gender.

Analyze → Descriptive Statistics → Crosstabs
● Another way to perform Chi-Square.

Analyze → Nonparametric Tests → One Sample
● Shortcut.
● SPSS automatically runs and chooses what tests and statistics it can perform.
● It gives you a Hypothesis Test Summary, which contains the Ho, Test, Sig., and Decision.
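The nonparametric counterparts above are all available in SciPy as well. A sketch on invented scores and counts (the groups, time points, and contingency table are made up for illustration):

```python
# The nonparametric counterparts described in Part 4, run via SciPy on
# invented data as a cross-check of the SPSS Legacy Dialogs output.
from scipy import stats

# Mann-Whitney U (counterpart of the independent samples t-test) and
# Kruskal-Wallis H (counterpart of one-way independent samples ANOVA)
g1, g2, g3 = [3, 5, 4, 6, 2], [7, 9, 8, 6, 10], [12, 11, 13, 14, 10]
u, p_u = stats.mannwhitneyu(g1, g2)
h, p_h = stats.kruskal(g1, g2, g3)

# Wilcoxon Signed-Rank (counterpart of the paired t-test) and
# Friedman (counterpart of repeated-measures ANOVA), same 5 subjects each time
pre, mid, post = [5, 6, 4, 7, 5], [8, 9, 7, 9, 8], [6, 8, 5, 9, 6]
w, p_w = stats.wilcoxon(pre, post)
chi_f, p_f = stats.friedmanchisquare(pre, mid, post)

# Chi-square test of independence for two nominal variables
# (rows: smoker/non-smoker; columns: male/female; counts are invented)
table = [[30, 10],
         [20, 40]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(p_u, p_h, p_w, p_f, p_chi)
```

As in SPSS, each Asymp. Sig. value is compared against 0.05 to decide whether to reject Ho.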