
Statistics in Psychological Assessment
• Scales of Measurement
• Descriptive Statistics
o Measure of Central Tendency
o Measure of Variability
o The Normal Curve
• Inferential Statistics
o Correlation
Scales of Measurement
• Continuous scales – scales whose values can, in theory, be divided into
ever-finer units; they typically have a wide range of possible values
(e.g., height or a depression scale).
• Discrete scales – scales with categorical values (e.g., male or
female).
• Error – the collective influence of all of the
factors on a test score beyond those specifically
measured by the test
• Nominal Scales - involve classification or categorization based on
one or more distinguishing characteristics; all things measured must
be placed into mutually exclusive and exhaustive categories (e.g.
apples and oranges, DSM-IV diagnoses, etc.).
• Ordinal Scales – Involve classifications, like nominal scales but also
allow rank ordering (e.g. Olympic medalists).
• Interval Scales - contain equal intervals between numbers. Each
unit on the scale is exactly equal to any other unit on the scale (e.g.
IQ scores and most other psychological measures).
• Ratio Scales – Interval scales with a true zero point (e.g. height or
reaction time).

Psychological Measurement – Most psychological measures are truly
ordinal but are treated as interval measures for statistical purposes.
Two Different Types of Statistics
Descriptive Statistics – used to summarize and make understandable
large quantities of data. Among the techniques in descriptive statistics
are (1) frequency distributions, (2) measures of central tendency,
(3) measures of variability, and (4) transformed scores.
Inferential Statistics – the subset of statistical procedures designed
to evaluate data. They are used to draw inferences about numerical
quantities (called parameters) concerning populations, based on
numerical quantities (called statistics) obtained from samples.
Descriptive Statistics
Describing Data
• Distributions - a set of test scores arrayed for
recording or study.
• Raw Score - a straightforward, unmodified
accounting of performance that is usually
numerical.
• Frequency Distribution - all scores are listed
alongside the number of times each score
occurred
Frequency distributions may be presented in tabular form. A simple
frequency distribution lists the actual (ungrouped) test scores
alongside their frequencies.
Grouped frequency distributions have class intervals rather than
actual test scores.
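As a minimal sketch, both kinds of frequency distribution can be tallied with Python's standard library; the scores and the interval width of 3 below are illustrative, not taken from the text:

```python
from collections import Counter

# Hypothetical raw test scores (illustrative data)
scores = [42, 45, 45, 47, 42, 50, 45, 48, 47, 45]

# Simple frequency distribution: each score alongside its frequency
simple = Counter(scores)

# Grouped frequency distribution: class intervals of width 3,
# keyed by the lower limit of each interval (e.g., 42 covers 42-44)
grouped = Counter((s // 3) * 3 for s in scores)

for score in sorted(simple):
    print(score, simple[score])
```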
Histogram – a graph with vertical lines drawn at the true limits of each
test score (or class interval), forming a series of contiguous
rectangles.
Bar graph – numbers indicative of frequency appear on the Y-axis, and
reference to some categorization (e.g., yes/no/maybe, male/female)
appears on the X-axis.
Frequency polygon – test scores or class intervals (as indicated on the
X-axis) meet frequencies (as indicated on the Y-axis).
Measures of Central Tendency
Central Tendency - a statistic that indicates the average or
midmost score between the extreme scores in a
distribution.
• Mean - Sum of the observations (or test scores), in this
case divided by the number of observations.
• Median – The middle score in a distribution.
Particularly useful when there are outliers, or extreme
scores in a distribution.
• Mode – The most frequently occurring score in a
distribution. When two scores occur with the highest
frequency a distribution is said to be bimodal.
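These three measures can be computed directly with Python's `statistics` module; the scores below are made up to show how an extreme score pulls the mean but leaves the median and mode untouched:

```python
import statistics

# Hypothetical distribution with one extreme score (the 90)
scores = [10, 12, 12, 13, 14, 90]

mean = statistics.mean(scores)      # pulled upward by the outlier
median = statistics.median(scores)  # middle score; robust to the outlier
mode = statistics.mode(scores)      # most frequently occurring score
print(mean, median, mode)
```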
Measures of Variability

Variability is an indication of the degree to


which scores are scattered or dispersed in a
distribution.
Among the two Distribution which has the greater
variability in scores?
Distribution A has greater variability scores are more spread out.
Measures of Variability - are statistics that describe the
amount of variation in a distribution.

• Range - difference between the highest and the lowest scores.


• Interquartile range – difference between the third and first
quartiles of a distribution.
• Semi-interquartile range – the interquartile range divided by 2.
• Average deviation – the average deviation of scores in a
distribution from the mean.
• Variance - the arithmetic mean of the squares of the differences
between the scores in a distribution and their mean
• Standard deviation – the square root of the average squared
deviations about the mean. It is the square root of the variance.
Typical distance of the scores from the mean.
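A sketch of these measures using Python's `statistics` module and a small hypothetical distribution (note that `pvariance`/`pstdev` treat the data as a whole population rather than a sample):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores; mean is 5

score_range = max(scores) - min(scores)         # highest minus lowest
q1, q2, q3 = statistics.quantiles(scores, n=4)  # the three quartiles
iqr = q3 - q1                                   # interquartile range
semi_iqr = iqr / 2                              # semi-interquartile range
mean = statistics.mean(scores)
avg_dev = sum(abs(s - mean) for s in scores) / len(scores)
variance = statistics.pvariance(scores)         # mean of squared deviations
sd = statistics.pstdev(scores)                  # square root of the variance
```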
Skewness - the nature and extent to which symmetry is
absent in a distribution.
• Positive Skew – relatively few of the scores fall at the high end of
the distribution.
• Negative Skew – relatively few of the scores fall at the low end of
the distribution.
Kurtosis – the steepness of a distribution in its center.
• Platykurtic – relatively flat.
• Leptokurtic – relatively peaked.
• Mesokurtic – somewhere in the middle.
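As a sketch, moment-based versions of these two statistics can be written in plain Python (the data are hypothetical, and statistical packages typically use slightly different, bias-corrected formulas):

```python
import statistics

def skewness(xs):
    # Third standardized moment: ~0 when symmetrical, positive when the
    # long tail is on the high end, negative when it is on the low end
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def excess_kurtosis(xs):
    # Fourth standardized moment minus 3: negative ~ platykurtic (flat),
    # positive ~ leptokurtic (peaked), near 0 ~ mesokurtic
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

print(skewness([1, 2, 3, 4, 5]))   # symmetrical: ~0
print(skewness([1, 1, 1, 2, 10]))  # few scores at the high end: > 0
```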
Normal Curve
The normal curve is a bell-shaped, smooth, mathematically defined curve
that is highest at its center and perfectly symmetrical.

Area Under the Normal Curve
The normal curve can be conveniently divided into areas defined by
units of standard deviations.
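The familiar areas (roughly 68%, 95%, and 99.7% of scores falling within ±1, ±2, and ±3 standard deviations of the mean) can be checked with Python's `NormalDist`:

```python
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)  # the standard normal curve

# Proportion of the total area lying within k SDs of the mean
for k in (1, 2, 3):
    area = nd.cdf(k) - nd.cdf(-k)
    print(f"within ±{k} SD: {area:.4f}")
```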
Standard Score - is a raw score that has been converted from
one scale to another scale, where the latter scale has some
arbitrarily set mean and standard deviation.
• Z-score - conversion of a raw score into a number
indicating how many standard deviation units the raw
score is below or above the mean of the distribution.
• T scores - can be called a fifty plus or minus ten scale; that
is, a scale with a mean set at 50 and a standard deviation
set at 10.
• Stanine - a standard score with a mean of 5 and a standard
deviation of approximately 2. Divided into nine units.
• Normalizing a distribution - involves “stretching” the
skewed curve into the shape of a normal curve and
creating a corresponding scale of standard scores.
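A minimal sketch of these conversions, assuming a hypothetical set of raw scores (which works out to mean 72, SD 8):

```python
import statistics

# Hypothetical raw scores on a test
raw = [60, 64, 68, 72, 76, 80, 84]
mean, sd = statistics.mean(raw), statistics.pstdev(raw)

def z_score(x):
    # SD units above (+) or below (-) the mean of the distribution
    return (x - mean) / sd

def t_score(x):
    # "fifty plus or minus ten": mean set at 50, SD set at 10
    return 50 + 10 * z_score(x)

def stanine(x):
    # mean 5, SD ~2, clipped to the nine units 1..9
    return min(9, max(1, round(5 + 2 * z_score(x))))
```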
Inferential Statistics
Correlation - a coefficient of correlation (or correlation
coefficient) is a number that provides us with an index of
the strength of the relationship between two things.
• Correlation coefficients vary between -1 and +1. A
correlation of 0 indicates no relationship between two
variables.
• Positive correlations indicate that as one variable increases
or decreases, the other variable follows suit.
• Negative correlations indicate that as one variable
increases the other decreases.
• Correlation between variables does not imply causation
but it does aid in prediction.
• Pearson r: A method of computing correlation when both
variables are linearly related and continuous.
• Once a correlation coefficient is obtained, it needs to be
checked for statistical significance (typically a probability
level below .05).
• By squaring r, one is able to obtain a coefficient of
determination, or the variance that the variables share
with one another.
• Spearman Rho: A method for computing correlation, used
primarily when sample sizes are small or the variables are
ordinal in nature.
• Scatterplot – involves simply plotting one variable on the X
(horizontal) axis and the other on the Y (vertical) axis.
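A sketch of both coefficients in plain Python, including the coefficient of determination obtained by squaring r (the rank function here ignores ties, which a full Spearman implementation would average):

```python
import statistics

def pearson_r(xs, ys):
    # Covariance of the paired scores divided by the product of their SDs
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

def spearman_rho(xs, ys):
    # Pearson r computed on the rank orders of the scores
    def ranks(vs):
        order = sorted(range(len(vs)), key=vs.__getitem__)
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson_r(ranks(xs), ranks(ys))

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]        # perfect positive linear relationship
r = pearson_r(x, y)
print(r)                     # ~1.0
print(r ** 2)                # coefficient of determination: shared variance
```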
Scatterplots of no correlation (left) and moderate correlation
(right).
Scatterplots of strong correlations feature points tightly
clustered together in a diagonal line. For positive correlations the
line goes from bottom left to top right.
Strong negative correlations form a tightly clustered diagonal
line from top left to bottom right.
Outlier – an extremely atypical point (case), lying relatively far
away from the other points in a scatterplot.
Restriction of range leads to weaker correlations.
Meta-Analysis
• Meta-analysis allows researchers to look at the
relationship between variables across many
separate studies.
• Meta-analysis – a family of techniques used to
statistically combine information across studies to
produce single estimates of the data under study.
• The estimates are in the form of effect size, which
is often expressed as a correlation coefficient.
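As an illustrative sketch (the study results below are hypothetical, not from the text), a simple fixed-effect combination of correlation coefficients can be done by Fisher z-transforming each r, weighting by n − 3, and back-transforming the weighted average:

```python
import math

# Hypothetical (correlation, sample size) pairs from separate studies
studies = [(0.30, 50), (0.45, 100), (0.25, 80)]

# Fisher z-transform each r, weight by n - 3, then back-transform
num = sum((n - 3) * math.atanh(r) for r, n in studies)
den = sum(n - 3 for r, n in studies)
combined_r = math.tanh(num / den)
print(round(combined_r, 3))  # a single effect-size estimate
```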
