Psychological Statistics II LAB

SPSS NOTES (PART 1): Frequencies, Descriptives, & Explore

Usage of SPSS
● We will only be using Descriptive Statistics, Compare Means, General Linear Model, Correlate, Linear Regression, and Nonparametric Tests.

Common Assumptions of Parametric Tests
● The data measured should be at a continuous level, which is interval or ratio.
● The data should be normally distributed.
● There should be no significant outliers.
● There should be homogeneity of variance, which is only applicable for between-subjects designs.
● There should be linearity, a linear relationship.

Data View vs. Variable View
● Data view
  ○ The default view, where we input our data.
  ○ Each column represents one variable.
  ○ Each row represents one participant.
● Input first in Variable view before Data view.
● Variable view
  ○ Where we label our variables.
  ○ Each row represents one variable.
  ○ Each column is one of the different labels that we can use for our variables.

Labels in the Variable View
● Name = name of the variable
● Type = variable type (e.g., numeric, date, etc.)
● Width = # of characters allowed in the data
● Decimals = # of decimals of the data that is shown
● Label = full name of the variable (useful for long names, e.g., HDV → High Definition Video)
● Values = used when the data uses a between-subjects design (when our data is divided into two different groups)
● Missing = missing values
● Columns = leave as is
● Align = leave as is
● Measure = scale of measurement (nominal, ordinal, or scale [interval/ratio])

Between Subjects vs. Within Subjects Design
● Between subjects = Different conditions are administered to different sets of participants. (Each set undergoes one condition once.)
● Within subjects = Both conditions are administered to one set of participants. (The set undergoes both conditions once.)

Quartiles / Percentiles
● Includes Q1, Q2, and Q3 (25%, 50%, and 75%).
● Includes P5, P10, P90, and P95.

Frequencies, Descriptives, & Explore
● Descriptive Stats - Included in Frequencies and Explore, so if you choose one of the two, you don't have to run Descriptives anymore.
● Frequencies - Has a frequency table and a histogram with a normal curve. But there is no way to get the descriptives of between-subjects groups; it runs the data as one group. It also cannot run the test of normality, which is very important for the assumptions of parametric tests.
● Explore - Contains descriptive statistics, the test of normality, the box plot, lower and upper bound, interquartile range, stem-and-leaf, the 5% trimmed mean, and other assumption checks. Has everything, except the frequency table.
  ○ Dependent List - Dependent variables.
  ○ Factor List - The groupings, such as for a between-subjects design.
  ○ Outliers - Only shows possible outliers.
  ○ Factor Levels Together - Between subjects.
  ○ Dependents Together - Within subjects.
  ○ Normality Plots with Tests - The test of normality. Determines whether the data violated the assumption of normality or not. The easiest way to tell if the data is normal or not.
  ○ Spread vs. Level with Levene Test - This tells us the homogeneity of variance. Only available for a between-subjects design. Choose the Untransformed option.
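
The Explore options above can also be pasted as SPSS syntax. A minimal sketch, assuming a hypothetical DV named score and a hypothetical grouping variable named group:

```
* Explore: descriptives, normality tests, box plot, stem-and-leaf,
  and the spread-vs-level plot with Levene's test.
EXAMINE VARIABLES=score BY group
  /PLOT BOXPLOT STEMLEAF NPPLOT SPREADLEVEL
  /COMPARE GROUPS
  /STATISTICS DESCRIPTIVES
  /NOTOTAL.
```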

Measures of Central Tendency
● Mean
● Median
● Mode
● Sum

Measures of Dispersion
● Standard Deviation
● Variance
● Range
● Minimum
● Maximum
● Standard Error of the Mean

Measures of Distribution
● Skewness
● Kurtosis

Descriptive Statistics Table
● If the standard error of the mean is greater than 1, or especially above 3, then there is an outlier.
● If the mode has a superscript a, that means there are multiple modes, and SPSS took the smallest mode value to represent the data.
● If the value of skewness is more than about twice its standard error, the distribution is significantly skewed.
● Lower Bound, Upper Bound - This tells you that the true population mean may lie between the lower and upper bound.
● 5% Trimmed Mean - The mean of the data when the 5% highest and 5% lowest of the data are removed, so only 90% of the data is left. This removes possible outliers, so if this mean is far from the regular mean, it is an indication that there is an outlier.

Frequency Table
● Frequency - # of times the data value occurred.
● Percent / Valid Percent - What percentage it represents of the data.
● Cumulative Percentage - Percentage of responses at or below a given measurement.
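
The Frequencies output above (frequency table, statistics, quartiles/percentiles, histogram) can be produced with syntax such as the following; a minimal sketch, with score again a hypothetical variable:

```
* Frequencies: frequency table, summary statistics, quartiles,
  selected percentiles, and a histogram with the normal curve.
FREQUENCIES VARIABLES=score
  /NTILES=4
  /PERCENTILES=5 10 90 95
  /STATISTICS=MEAN MEDIAN MODE SUM STDDEV VARIANCE RANGE
      MINIMUM MAXIMUM SEMEAN SKEWNESS SESKEW KURTOSIS SEKURT
  /HISTOGRAM NORMAL.
```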

Test of Normality
● Gives assurance that the data is (non)normal.
● Asterisk (*) - Marks a lower bound of the true significance.
● Kolmogorov-Smirnov Test - Some textbooks say that this should only be an option if you have a sample size of more than 2,000.
● Shapiro-Wilk Test - Used when there are fewer than 2,000 respondents.
● Sig. - This is the p-value.
● Ho: There IS NO significant deviation among the data, which means that the data IS normally distributed.
● Ha: There IS A significant deviation among the data, which means the data IS NOT normally distributed.
● You want Ho > Ha (i.e., you want to retain the Ho).
● Reject the Ho if p < 0.05.
● Accept the Ho if p > 0.05.

Test of Homogeneity of Variance
● An assumption for parametric tests.
● Is the dispersion of the groups equal or not?
● Equal = Homogeneous = Similar
● Sig. - This is the p-value.
● Ho: There is NO significant difference between the standard deviations / dispersion of the data.
● Ha: There is A significant difference between the standard deviations / dispersion of the data.
● You want Ho > Ha.
● The dispersion must be similar to be homogeneous.
● Reject the Ho if p < 0.05.
● Accept the Ho if p > 0.05.

SPSS NOTES (PART 2)

Discrete vs. Continuous Variables
● Discrete - Only occurs in whole units.
● Continuous - Occurs in fractions of units.

Shapes of Distributions
● Distribution - A group of scores.
● Bell-Shaped Distributions - Aka normal or Gaussian distributions. Most of the scores pile up in the middle; as you move further from the middle, the frequency of the scores gets lower. Symmetrical, in that the right and left sides of the graph are identical.
● Skewed Distributions - Asymmetrical; the right and left sides are not identical.
  ○ Positively Skewed - Tail points to the right.
  ○ Negatively Skewed - Tail points to the left.
● Kurtosis - The extent to which distributions have an exaggerated peak or a flatter peak.
  ○ Leptokurtic Distributions - Higher, more exaggerated peak than a normal curve.
  ○ Platykurtic Distributions - Flatter peak.

Analyze Option (Top Bar)
● Compare Means - For T-Test and ANOVA.
● Correlate - For Pearson.
● Regression - For regression.
● Scale - For Cronbach's Alpha, to find the reliability of the assessment.

3 Ways to Check Normality
● Q-Q Plot (Quantile-Quantile Plot)
● Skewness and Kurtosis
● Normal Distribution

CORRELATION AND REGRESSION

Correlation and Regression
● One assumption you need to satisfy is that the variables should have a linear relationship, and you can check this through scatter plots.
● It is almost impossible to get a perfect positive or negative correlation in real life because there are a lot of intervening/confounding variables.
● They are tests of relationships and predictors.
● Correlation
  ○ A correlation can be used only if the scores on each variable are paired or linked to each other in some way.
  ○ There are two tests you can use for correlation: Pearson (parametric) and Spearman (nonparametric).
● Regression
  ○ How one variable affects the other.
  ○ Provides a regression line.
    ‌
Computational Formulas
● Definitional Formulas - Help explain the logic of a statistic but can be cumbersome when working with a large data set.
● Computational Formulas - Allow you to compute an r value directly from raw scores without first converting everything to z scores.

Pearson
● You can only use Pearson if both of your variables are interval or ratio.
● There has to be an assumed linear relationship.
● Examples of ratio: Age and weight.
● Examples of interval: Temperature and IQ. Also, abstract constructs measured by standardized questionnaires, such as the Likert scale (this is only applicable for the social sciences).
● Pearson Correlation - (Pearson r or correlation coefficient) The direction and magnitude or strength of the relationship. The closer the magnitude is to 1 or -1, the stronger the correlation.
● Measures relationships between variables.

Spearman
● When you can't use Pearson, use Spearman.
● This is a nonparametric test.
● Measures relationships between variables.
● Can be used if there is no linear relationship.
● Spearman's rs - Used to compute correlations when one or both variables are ordinal.
● The computation for a Spearman's correlation involves analyzing the rank orders of the two variables rather than the variables themselves.
● If one variable is ordinal and the other isn't, you need to use a Spearman's correlation and begin by converting the other variable to ranks.
● Homoscedasticity & Heteroscedasticity - The same idea as the homogeneity of variance. It is ideal for the data to be homogeneous. Is there uniformity in the variation of each variable? These are the preferred terms in correlation and regression.
  ○ In T-Test and ANOVA, the preferred term is homogeneity of variance.

Significance
● For your data to be significant, your alpha level must be greater than your obtained significance level.
● Alpha Level > Significance Level
● Alpha Level = 0.05
● Significance Level < 0.05
● * - Correlation is significant at the 0.05 level (2-tailed).
● ** - Correlation is significant at the 0.01 level (2-tailed).
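
A Pearson run (Analyze → Correlate → Bivariate) as syntax; a minimal sketch using the hypothetical ratio variables age and weight. NOSIG flags significant coefficients with the asterisks described above:

```
* Pearson correlation between two interval/ratio variables.
CORRELATIONS
  /VARIABLES=age weight
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.
```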

REGRESSION

Regression
● It tells you how much the dependent variable increases or decreases based on the increase or decrease of the independent variable.

Durbin-Watson Table
● A checker of the assumptions (it tests whether the residuals are independent, i.e., not autocorrelated).

Model Summary Table
● The only important thing here is the R-Square.
● R-Square (Coefficient of Determination) - The percentage of variation explained by the predictor variable; the change in the criterion variable.
  ○ Example: 7.7% of the variation in spirituality is explained by years of service and work engagement.
  ○ This isn't provided in Pearson, which is what makes regression more powerful. It provides the proportion of variation that the predictor variable can explain about the criterion variable.

Coefficients Table
● Unstandardized B - For every one-unit increase in one variable, the other variable will increase by this much.
  ○ Example: For every one-unit increase in the work engagement of an employee, spirituality will increase by 0.157.
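
The regression above as syntax; a minimal sketch, assuming the hypothetical variables spirituality (criterion) and engagement (predictor), with the Durbin-Watson statistic requested on the residuals:

```
* Simple linear regression with a Durbin-Watson check.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT spirituality
  /METHOD=ENTER engagement
  /RESIDUALS DURBIN.
```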

INDEPENDENT SAMPLES T-TESTS

Notes
● T-Test and ANOVA are tests of difference. They measure the difference between means - whether the means are or are not significantly different from each other.
● The independent samples t-test is used when you need to compare two sample means that are unrelated.
● It uses two samples from the population to represent two different conditions.
● If the Ho is true, the obtained t should be 0. If the Ho is false, the obtained t should be far from 0.
● You can use an independent samples t-test to determine whether the difference between two sample means was likely or unlikely to have occurred due to sampling error.
● If the experimental group had a higher mean and the obtained t value is in the critical region, you could conclude that the experimental treatment increased the DV. If the control group had a lower mean, you could conclude that the control condition decreased the DV.
    ‌
Group Statistics Table
● The group with the larger/higher mean has the greater effect.

Independent Samples Test Table
● Levene's Test for Equality of Variances - The homogeneity of variances.
  ○ You want the Sig. for the Equal Variances Assumed row to be greater than 0.05.
    ■ This assumes that the variability of the scores of the two groups isn't that different, which means there are no outliers.
  ○ If the Sig. is less than 0.05, look at the Equal Variances Not Assumed row.
● If the value of Sig. (2-tailed) is greater than 0.05, the difference is not significant.
● If the value of Sig. (2-tailed) is less than 0.05, the difference is significant.
● Accept Ho if Sig. > 0.05.
● Reject Ho if Sig. < 0.05.
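
An independent samples t-test as syntax; a minimal sketch, assuming a hypothetical DV named score and a grouping variable named group coded 1 and 2:

```
* Independent samples t-test comparing two unrelated groups.
T-TEST GROUPS=group(1 2)
  /VARIABLES=score
  /CRITERIA=CI(.95).
```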

REPEATED/RELATED SAMPLES T-TEST

Notes
● Repeated-Measures T-Test - Participants are measured repeatedly: once before and once after a treatment.
● Related Samples T-Test - Each person in the first sample has something in common with, or is linked to, someone in the second sample.
● Other names:
  ○ Paired Samples T-Test
  ○ Matched Samples T-Test
  ○ Dependent Samples T-Test
  ○ Within-Subjects T-Test
● The related measures t-test is similar to the single-sample t-test in that it compares the deviation between two means to determine whether it is likely to have been created by sampling error.
● The related samples t-test is different in that the two means it compares both come from the same sample, which is measured twice under different conditions.
● Example: Used for before-and-after treatments.

Paired Samples Statistics Table
● The condition with the larger/higher mean has the greater effect.

Paired Samples Test Table
● The Sig. (2-tailed) value must be < 0.05.
● The data is significant if Sig. < 0.05.
● Reject Ho if Sig. < 0.05.
● Accept Ho if Sig. > 0.05.
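
The paired run as syntax; a minimal sketch, assuming hypothetical variables pre and post measured on the same participants:

```
* Paired (related) samples t-test for a before/after design.
T-TEST PAIRS=pre WITH post (PAIRED)
  /CRITERIA=CI(.95).
```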

SPSS NOTES (Part 3)

ONE-WAY INDEPENDENT SAMPLES ANOVA

Notes
● ANOVA = Analysis of Variance.
● If you fail to satisfy the assumptions, you have to do a Kruskal-Wallis H Test (independent samples) or a Friedman Test (repeated samples).
● You have one IV and one DV.
● Same as the Independent T-Test except for the number of levels in the IV.
● The independent samples ANOVA can compare two or more sample means at the same time to determine whether the deviation between any pair of sample means is greater than would be expected by sampling error.
● The same as the T-Test, ANOVA is a test of the difference of means between different groups.
● A T-Test can only compare two sample means.
● An independent samples ANOVA can compare two or more sample means.
● An ANOVA analyzes the variance of scores both between and within IV conditions in an attempt to determine whether the different treatment conditions affect scores differently.

3 Things Affect the Variance of Scores
1. Measurement Error - There will always be variance in scores between people because variables cannot be measured perfectly.
2. Individual Differences - There will always be variance in scores between people because people are naturally different from each other.
3. Treatment Effect - There might be variance in scores between groups because the groups experienced different IV conditions or treatments.

Assumptions
● Data Independence - Scores must be measured without one participant's scores affecting another's. Participants' scores don't affect each other.
● Appropriate Measurement of Variables - The DV must be measured on an interval/ratio scale. The IV must identify how the treatments are different (independence of observations).
  ○ If the DV is ordinal, you use the Kruskal-Wallis H Test.
● Normality Assumption - The distribution of sample means for each condition must have a normal shape.
  ○ This will be the case if the original populations are normal or if the sample sizes for each condition are near 30.
● Homogeneity of Variance - If any one of your conditions has a standard deviation double that of another, this assumption might be violated. However, if the sample sizes are similar, this assumption can be violated without a problem.
  ○ If your data is not homogeneous, you have to use the Kruskal-Wallis H Test.
● ALL of the assumptions MUST be met; otherwise, you cannot conduct a one-way between-subjects ANOVA.
● If one or more of the assumptions are not met, you use the Kruskal-Wallis H Test.
    ‌
Example Problem: Suppose you want to compare cognitive behavioral therapy (CBT) and psychodynamic therapy (PDT) as treatments for depression. You identify a sample of people with major depression and randomly divide them into three different groups.

One group undergoes CBT for 6 months. A 2nd group undergoes PDT for 6 months. A 3rd group functions as a control group and receives no treatment (NT).

After 6 months, you assess their levels of depression with the Beck Depression Inventory (scores range from 0-63), with higher scores indicating greater depression.

The IV is the type of treatment (CBT, PDT, or NT). The DV is each person's depression score on the BDI.

Stating the Null and Research Hypotheses
● Ho: The three populations of people being studied (those getting CBT, PDT, or NT) have the same mean depression scores.
● Ha: At least one population mean is different from at least one of the others.

Analyze → Compare Means → One-Way ANOVA
● Dependent List = Depression
● Factor = Therapy
● Post Hoc → Equal Variances Assumed
  ○ Tukey HSD and Fisher's LSD - Used when you have an equal number of respondents per group.
  ○ Scheffe - Used when you have an unequal number of respondents per group.
  ○ Bonferroni - Used for repeated measures.
● Select descriptive statistics and the homogeneity of variance test.
● Significance Level: 0.05
  ○ This is set by default. Sometimes it's 0.01.
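
The same run as syntax; a minimal sketch, with depression and therapy named as in the example:

```
* One-way between-subjects ANOVA with descriptives,
  Levene's test, and a Tukey HSD post hoc test.
ONEWAY depression BY therapy
  /STATISTICS DESCRIPTIVES HOMOGENEITY
  /POSTHOC=TUKEY ALPHA(0.05).
```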
 ‌
Descriptive Statistics Table
● There is a difference between the means. But is the difference statistically significant?
● Contains only the sample size, mean, standard deviation, standard error, lower and upper bound, minimum, and maximum.

Test of Homogeneity of Variances
● Similar to the Levene Test in the T-Test: you have to check whether the Sig. value is < or > 0.05.
● You want the Sig. value for the Levene statistic to be greater than 0.05 (Sig. > 0.05).
● Contains:
  ○ Based on Mean
  ○ Based on Median
  ○ Based on Median and With Adjusted df
  ○ Based on Trimmed Mean
● In the example problem, we satisfied the homogeneity assumption because all the Sig. values are greater than 0.05.

ANOVA Table
● The difference in means is statistically significant if the Sig. value is less than 0.05.
● Significant if Sig. < 0.05. (Reject Ho)
● Not significant if Sig. > 0.05. (Accept Ho)
● F = Computed F value.
● Compare the computed F value to the critical F value.
  ○ Just as in the T-Test, where you compare your computed t value to the critical t value.

Post Hoc Tests Table (Multiple Comparisons)
● If your data is not statistically significant, there is no need for you to do a Post Hoc analysis.
● The ANOVA and Kruskal-Wallis H Test can only tell you that there is a significant difference somewhere.
  ○ They do not tell you which of the means is statistically different from the other means. The Post Hoc analysis tells you this information.
● Always analyze the Sig. values column, and check which of the means are significantly different from each other.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
    ‌
Homogeneous Subsets Table (Depression)
● Means grouped in the same subset column (e.g., "1" or "2") are not significantly different from each other; means that appear in different subset columns are significantly different.
● This table is available for Tukey, but not for Fisher's LSD.

Analyze → General Linear Model → Univariate
● Another way to conduct ANOVA.
● Use Multivariate for MANOVA, for two DVs.
● Dependent Variable = Depression
● Fixed Factors = Therapy
  ○ It's possible to have two IVs.
  ○ If you have two IVs, you call it a Two-Way Factorial ANOVA.
  ○ If you have three IVs, you call it a Three-Way Factorial ANOVA.
● Options → Display
  ○ Select descriptive statistics, estimates of effect size, observed power, and homogeneity tests.
  ○ Significance level: 0.05
  ○ Confidence intervals are 95.0%.
● Post Hoc → Equal Variances Assumed → Tukey
● EM Means is similar to Post Hoc.

Tests of Between-Subjects Effects Table
● This table is provided instead of the ANOVA table.
● To know if there is a significant difference among the means, check the Sig. value for your independent variable.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● This table has more information because it provides the Eta Squared.
● Eta Squared - Measures the strength of the relationship; the strength of the effect.
  ○ It is like the Effect Size.
  ○ Per Cohen, anything greater than 0.50 is strong, 0.30-0.50 is moderate, and 0.10-0.29 is weak.
● Observed Power - The practical significance.
  ○ A value greater than 0.80 in the observed power is a good, strong practical significance. Anything less than 0.80 is not good.
● In research, you analyze both the statistical significance and the practical significance.
● Sig. - The statistical significance.
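
The Univariate route as syntax; a minimal sketch for the same example:

```
* One-way ANOVA through GLM Univariate, with effect size,
  observed power, and homogeneity tests in the output.
UNIANOVA depression BY therapy
  /POSTHOC=therapy(TUKEY)
  /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
  /CRITERIA=ALPHA(.05)
  /DESIGN=therapy.
```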

ONE-WAY REPEATED-MEASURES ANOVA

Notes
● Repeated Measures ANOVA is the same as a Paired Samples T-Test. The only difference is the number of levels in the IV.
● Assumptions
  ○ Normality
  ○ Homogeneity of Variance
  ○ The DV should be interval or ratio.
  ○ You have treatment conditions, but you only have one set of participants. That set of participants will be tested at least twice.

Assumptions
● Data Independence - The responses within each condition must not be influenced by other responses within that same condition.
  ○ The procedural controls used in this study seem likely to provide data independence.
● Appropriate Measurement of Variables - 2 or more "grouping" IVs; 1 interval/ratio DV.
● Normality - The distribution of sample means for each condition (cell) must have a normal shape. This is met if the original populations are normal or if the sample size is large.
● Homogeneity of Variance - The variability within each cell should be similar. The homogeneity of variance assumption is satisfied because none of the conditions' standard deviations is double the size of any other condition's.

Example Problem: Does students' anxiety in Psychological Statistics increase or decrease before, during, or after a test? Is there a difference in anxiety levels across different time frames?

The IV is Time. The DV is the Anxiety Score (scores range from 1-25). The higher the Anxiety Score, the more anxious the student is.

Analyze → General Linear Model → Repeated Measures
● Within-Subject Variables
  ○ Name = IV
  ○ Number of Levels
● Options → Display
  ○ Select descriptive statistics, estimates of effect size, observed power, and homogeneity tests.
  ○ Significance Level: 0.05
● Move the Factor to the Display Means for section.
  ○ Click "Compare main effects."
  ○ Confidence interval adjustment: Bonferroni
    ■ Bonferroni is for Repeated Measures.
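
The repeated-measures run as syntax; a minimal sketch, assuming hypothetical columns before, during, and after for the three anxiety measurements:

```
* One-way repeated-measures ANOVA with Bonferroni-adjusted
  pairwise comparisons of the time levels.
GLM before during after
  /WSFACTOR=time 3 Polynomial
  /PRINT=DESCRIPTIVE ETASQ OPOWER
  /EMMEANS=TABLES(time) COMPARE ADJ(BONFERRONI)
  /WSDESIGN=time.
```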
    ‌
Descriptive Statistics Table
● Contains the means, standard deviations, and the sample sizes for each variable.
● Tells you which mean has a greater effect.

Multivariate Tests Table
● With this table, you can tell whether the means are significantly different from each other.
● You will focus on the Wilks' Lambda row for repeated ANOVA.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● Eta Squared - Measures the strength of the relationship; the strength of the effect.
● Observed Power - The practical significance.
● It tells you that there is a significant difference among the means; however, it doesn't tell you which has the most/least significant difference.
● If SPSS displays the Sig. (p) value as 0.000, you write that it is less than 0.05 or 0.001.

Mauchly's Test of Sphericity Table
● The test of sphericity is typically the same as Levene's homogeneity of variance.
● If the Sig. value is > 0.05, then your assumption of homogeneity of variance is satisfied.
● There is homogeneity of variance if Sig. > 0.05.
● The same goes for the Levene Test.

Pairwise Comparisons Table
● For the Post Hoc test in repeated samples, you have to look for the Pairwise Comparisons table.
● Always analyze the Sig. values column, and check which of the means are significantly different from each other.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05

SPEARMAN

Notes
● Spearman Correlation / Rank-Order Test.
● If you know that there is a violation of the parametric assumptions (e.g., the relationship is not linear or the data is not homogeneous), then you can't use a parametric test. Instead of Pearson, use Spearman to test relationships.
● Or, if you have ordinal data, you obviously can't use Pearson, and you have to use Spearman.
● Has the same process as Pearson.

Rank the data → Analyze → Correlate → Bivariate
● Move the ranked data to the Variables area.
● Correlation Coefficients → Spearman
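
The Spearman run as syntax; a minimal sketch with hypothetical variables var1 and var2:

```
* Spearman rank-order correlation (nonparametric).
NONPAR CORR
  /VARIABLES=var1 var2
  /PRINT=SPEARMAN TWOTAIL NOSIG
  /MISSING=PAIRWISE.
```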
 ‌
Correlations Table
● A negative correlation coefficient indicates that there is a negative, or inverse, correlation.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
● * - Indicates that the data is significant.

TWO-WAY FACTORIAL ANOVA

Notes
● Since there are two IVs, it is a Two-Way Factorial ANOVA.
● Tells you if IV 1 has an effect, if IV 2 has an effect, and if the interaction of the IVs has an effect.

Analyze → General Linear Model → Univariate
● DV = Depression; Fixed = Therapy and Sleep
● Options → Display
  ○ Select descriptive statistics, estimates of effect size, observed power, and homogeneity tests.
  ○ Significance level: 0.05
● Post Hoc → Equal Variances Assumed → Tukey
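
The two-way run as syntax; a minimal sketch, assuming the hypothetical factors therapy and sleep:

```
* Two-way factorial ANOVA: main effects for each IV
  plus the therapy*sleep interaction.
UNIANOVA depression BY therapy sleep
  /POSTHOC=therapy sleep(TUKEY)
  /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
  /CRITERIA=ALPHA(.05)
  /DESIGN=therapy sleep therapy*sleep.
```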

Levene's Test of Equality of Error Variances
● Homogeneous if Sig. > 0.05.
● Non-homogeneous if Sig. < 0.05.

Test of Between-Subjects Effects
● Check the Sig. value of IV 1, IV 2, and IV 1 * IV 2.
● Significant = Sig. < 0.05
● Not Significant = Sig. > 0.05
    ‌
SPSS NOTES (Part 4)

Notes
● You can't always perform a parametric test.
● You can only perform a parametric test if all the assumptions needed for that test have been satisfied. Otherwise, you have to use the test's nonparametric counterpart.
● The Mann-Whitney U, Wilcoxon Signed-Rank, Kruskal-Wallis H, and Friedman Test are used for ordinal or ranked data. They can also be used when the data is interval, but you have to convert the interval data into ranks.
● The Chi-Square is used when you have two nominal variables.

Analyze → Nonparametric Tests → Legacy Dialogs → 2 Independent Samples
● The Mann-Whitney U Test is the parallel to the Two-Independent Samples T-Test.
● You want to compare two groups.
● Test Variable List = Dependent Variable
● Grouping Variable = Independent Variable
  ○ Define the groups.
● Test Type - Select Mann-Whitney U.
● The data is in terms of mean rank, NOT means.
● Is the difference between the mean ranks statistically significant?
● Asymp. Sig. (2-tailed) - The p-value. It must be less than 0.05 to reject the Ho.
● Significant = Asymp. Sig. < 0.05
● Not Significant = Asymp. Sig. > 0.05
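
The Mann-Whitney U run as syntax; a minimal sketch, again with the hypothetical score and group(1 2):

```
* Mann-Whitney U test for two independent groups.
NPAR TESTS
  /M-W= score BY group(1 2).
```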

Analyze → Nonparametric Tests → Legacy Dialogs → 2 Related Samples
● The Wilcoxon Signed-Rank Test is the counterpart of the Repeated Measures T-Test, Two Related Samples Test, or the Correlated Samples Test.
● The Wilcoxon Rank-Sum Test is interchangeable with the Mann-Whitney U Test.
● Test Pairs - Insert variable 1 and variable 2.
● Test Type - Select Wilcoxon.

Analyze → Nonparametric Tests → Legacy Dialogs → K Independent Samples
● If your independent variable has more than two levels, you can do the Kruskal-Wallis H Test.
● Test Variable List = Dependent Variable
● Grouping Variable = Groupings
  ○ Define the grouping range.
● Test Type - Select Kruskal-Wallis H Test.
● Asymp. Sig. (2-tailed) - The p-value. It must be less than 0.05 to reject the Ho.
● Significant = Asymp. Sig. < 0.05
● Not Significant = Asymp. Sig. > 0.05
● The Kruskal-Wallis H Test is incomplete on its own, so you have to do another procedure to know which among the groups are statistically significantly different from each other.
● To check this, you can do a Mann-Whitney U Test by pairing the groups.

Analyze → Nonparametric Tests → Legacy Dialogs → K Related Samples
● The Friedman Test is the counterpart of the Within-Subjects ANOVA or Repeated Measures ANOVA, where the same respondents are tested more than twice.
● Ex. If you want to test the effects of weight training, you measure the person before, during, and after.
● If you don't satisfy your assumptions for ANOVA, for example if your data is ranked (ordinal), you have to do a Friedman statistic.
● Test Type - Select Friedman.
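
The three runs above as syntax; minimal sketches, reusing the hypothetical variables from earlier (pre/post for the paired data, score with group coded 1-3, and before/during/after for the repeated measurements):

```
* Wilcoxon signed-rank test for two related samples.
NPAR TESTS
  /WILCOXON=pre WITH post (PAIRED).

* Kruskal-Wallis H test for an IV with three levels.
NPAR TESTS
  /K-W= score BY group(1 3).

* Friedman test for three repeated measurements.
NPAR TESTS
  /FRIEDMAN=before during after.
```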

Analyze → Correlate → Bivariate
● Spearman is the counterpart of Pearson.
● Spearman & Pearson = Correlation.
● Correlation Coefficients - Select Spearman.
● Spearman is used when the relationship between the variables is not linear.
● You can't perform Pearson if your variables aren't linearly related.

Analyze → Nonparametric Tests → Legacy Dialogs → Chi-Square
● Chi-Square is used when you have two nominal or categorical variables.
● Nominal/Categorical Variable - You're only getting the number, or the frequency.
● Ex. The relationship between smoking and gender.

Analyze → Descriptive Statistics → Crosstabs
● Another way to perform Chi-Square (see the syntax sketch at the end of these notes).

Analyze → Nonparametric Tests → One Sample
● A shortcut.
● SPSS automatically runs and chooses what tests and statistics it can perform.
● It gives you a Hypothesis Test Summary, which contains the Ho, Test, Sig., and Decision.
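
For reference, the Crosstabs route to the chi-square above as syntax; a minimal sketch with the hypothetical nominal variables smoking and gender:

```
* Chi-square test of association via Crosstabs.
CROSSTABS
  /TABLES=smoking BY gender
  /STATISTICS=CHISQ
  /CELLS=COUNT EXPECTED.
```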