
PSY 101L:

Psychological Testing

Prof. A.K.M. Rezaul Karim, Ph.D.


North South University
 Lecture Outline
• What is a psychological test?
• Why is psychological testing
important?
• Types of psychological tests
• Mental ability tests
• Personality tests
• Properties of a psychological test
• Ethics in psychological testing
Psychological Test
• A psychological test is a standardized
measure of a sample of behavior, used to
assess the individual differences that
exist among people.
• It is not an exhaustive measure; it is not
practical to evaluate every behavior.
Why is Psychological
Testing Important?
• Research:
 Psychological tests are used in research.

• Diagnosis:
 Psychological tests are used in
diagnosing psychological disorders or
problematic behaviors.
Why is Psychological
Testing Important (Cont.)?
• Understanding and decision making:
 Allows us to describe and understand behavior.
 Allows us to make important decisions about people.
e.g. Early School Placement, College Entrance
Decisions, Military Job Selections
Types of Psychological Tests
• There are two types of psychological tests:
– Mental ability tests
– Personality tests
Mental Ability Tests
• Include three subcategories:
– Intelligence tests
– Aptitude tests
– Achievement tests
Intelligence Tests
• Measure general mental abilities.
They are intended to measure
intellectual potential.
– Wechsler Adult Intelligence Scale
– Stanford–Binet Intelligence Scales
Example Items
• Emily is four years old. Her big
sister Amy is three times as old as
Emily. How old will Amy be when
she is twice as old as Emily?
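• Worked solution (for reference): Amy is now
3 × 4 = 12, so she is always 8 years older than
Emily. Amy is twice Emily's age when Emily's
age equals that 8-year gap, i.e., when Emily is
8 and Amy is 16.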
Example Items
• What would be the next number in
this series? 15 ... 12 ... 13 ... 10 ...
11 ... 8 ... ?
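• Worked solution (for reference): the series
alternates subtracting 3 and adding 1
(15 − 3 = 12, 12 + 1 = 13, 13 − 3 = 10,
10 + 1 = 11, 11 − 3 = 8), so the next
number is 8 + 1 = 9.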
Aptitude Tests
• Measure a person's potential for learning
in a specific area (see the comparison with
achievement tests below).
Achievement Tests
• Gauge a person's mastery and knowledge
in various subjects.
Example Items
• Who was the 43rd President of
the United States?
• What is 5x6 divided by 2?
• How many branches of
Government exist in the U.S.?
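• Answers (for reference): George W. Bush;
5 × 6 ÷ 2 = 15; three (legislative,
executive, judicial).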
Achievement Test vs Aptitude Test
• Aptitude tests are intended to measure potential for learning in
a specific area.
– They are usually given before a person has had any
training in the specific area, and used to predict how well
the person will do in that area.
– However, current abilities and future success are often
based on past achievements.
• The SAT consists of verbal and quantitative parts that test the
amount one has learned or achieved.
– It may be that there is no such thing as a “pure” aptitude
test.
Personality Tests
• Measure aspects of personality,
including motives, interests,
values, and attitudes.
Types of Personality Tests
• Projective tests
• Objective tests
Personality Tests
• Projective tests: have an open-ended format
and no clearly specified answers.
(e.g., Rorschach Inkblot Test, TAT)
Personality Tests (Cont.)
• Objective tests: present test takers
with a standardized group of test
items in the form of a questionnaire.
(e.g., the MMPI, 16PF, CPI)
Test Design
• In order for a test to be good or
effective, it must be standardized.
• Standardization refers to the uniform
procedures used in administering a test, and in
scoring and interpreting its results.
Test Design (Cont.)
• A standardized test should meet the
following four criteria:
– Objectivity
– Validity
– Reliability
– Norms
Objectivity
• The test should be free from rater
bias or subjective judgment about the
ability, skill, knowledge, trait, or
potential being measured and evaluated.
Validity
• Refers to the ability of a test or
scale to measure what it was
designed to measure.
• Does our measure really measure
the construct?
• Is there bias in our measurement?
Types of Validity
• Face validity
• Criterion-related validity
– Concurrent validity
– Predictive validity
• Construct validity
– Convergent validity
– Discriminant validity
Face Validity
• "This guy seems smart to me, and he got a
high score on my IQ measure."
• At the surface level, does it look as if
the measure is testing the construct?
Criterion-related Validity
• The extent to which a measure is related to
a criterion or an outcome.
• Concurrent validity
• Predictive validity
Concurrent Validity
• The extent to which the scores on a particular
test or scale correspond to a criterion or an
outcome assessed at the same time.
• For example, the concurrent validity of a
cognitive test for job performance is the
correlation between test scores and
performance ratings given by a supervisor
at the same time.
Predictive Validity
• The extent to which a score on a scale or test
predicts scores on some criterion measure.
• For example, the predictive validity of a
cognitive test for job performance is the
correlation between test scores and
performance ratings given by a supervisor
in the future.
Construct Validity
• Usually requires multiple studies and a large
body of evidence supporting the claim that
the measure really tests the construct.
• Convergent validity
• Discriminant or divergent validity
Convergent Validity
• Refers to the degree to which two measures of
the same or similar constructs that
theoretically should be related are, in fact,
related.
Divergent/
Discriminant Validity
• Refers to the degree to which two measures of
different constructs that theoretically should
be unrelated are, in fact, unrelated (show
negligible or no relationship).
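A minimal sketch of how convergent and discriminant validity can be checked with correlations. The data here are simulated for illustration; the scale names are hypothetical, not from any real instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated data: two scales built to measure the same construct
# (anxiety), plus one scale for an unrelated construct (extraversion).
latent_anxiety = rng.normal(size=n)
anxiety_scale_a = latent_anxiety + rng.normal(scale=0.4, size=n)
anxiety_scale_b = latent_anxiety + rng.normal(scale=0.4, size=n)
extraversion_scale = rng.normal(size=n)

# Convergent validity: the two anxiety scales should correlate highly.
r_convergent = np.corrcoef(anxiety_scale_a, anxiety_scale_b)[0, 1]
# Discriminant validity: anxiety and extraversion should correlate near zero.
r_discriminant = np.corrcoef(anxiety_scale_a, extraversion_scale)[0, 1]

print(f"convergent r = {r_convergent:.2f}")      # high, roughly 0.85 here
print(f"discriminant r = {r_discriminant:.2f}")  # near 0
```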
Reliability
• X = T + E, where
– X = observed score
– T = true score
– E = measurement error
• A reliable measure will have a small
amount of error (E); see the sketch below.
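A minimal simulation of the X = T + E model (illustrative only, with made-up numbers): in classical test theory, reliability is the proportion of observed-score variance that comes from true scores, so a smaller error variance pushes reliability toward 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_scores = rng.normal(100, 15, n)  # T: stable true scores
errors = rng.normal(0, 5, n)          # E: random measurement error
observed = true_scores + errors       # X = T + E

# Reliability = var(T) / var(X); here 15^2 / (15^2 + 5^2) = 0.90.
reliability = true_scores.var() / observed.var()
print(f"estimated reliability: {reliability:.2f}")
```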
Reliability (Cont.)
• Reliability refers to the measurement
consistency of a test or scale.
• Example: You take a personality test and are
scored as "assertive". Three weeks later you
take the same test and are scored as
"passive". Such a drastic change is probably
the result of an unreliable test.
Testing Reliability
 Test-retest reliability
– Test the same participants more than
once.
– Should be consistent across different
administrations.
– Correlation coefficient
• A numerical index of the degree of
relationship (-1 to +1)
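A minimal sketch of estimating test-retest reliability as a correlation coefficient; the scores below are hypothetical.

```python
import numpy as np

# Hypothetical scores for five participants tested twice, three weeks apart.
time1 = np.array([12, 18, 25, 31, 40])
time2 = np.array([14, 17, 27, 30, 41])

# Test-retest reliability is the correlation between the two
# administrations; values near +1 indicate consistent measurement.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```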
Visual Example
[Figure: scatterplots of scores on two administrations;
Test A: reliable (consistent), Test B: unreliable (inconsistent)]
Testing Reliability
• Internal Consistency
– Multiple items testing the same construct
– Extent to which scores on the items of a measure
correlate with each other
• Cronbach’s alpha (α)
• Split-half reliability
– Correlation of the scores on one half of the
measure with the scores on the other half
(determined randomly or by odd-even
method)
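A minimal implementation of Cronbach's alpha from its standard formula, alpha = k/(k − 1) × (1 − sum of item variances / total-score variance). The rating data are made up for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings: six participants, four items on one construct.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```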
Testing Reliability (Cont.)
• Inter-rater Reliability
 At least 2 raters observe behavior
 Extent to which raters agree in their
observations
– Are the raters consistent?
 Requires some training in judgment
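One common index of inter-rater agreement is Cohen's kappa (not named on the slide), which corrects raw percent agreement for the agreement expected by chance. A minimal sketch with hypothetical behavior codes:

```python
import numpy as np

def cohens_kappa(rater1, rater2) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    observed = np.mean(r1 == r2)  # raw proportion of agreement
    # Chance agreement: summed products of each rater's category proportions.
    chance = sum(np.mean(r1 == c) * np.mean(r2 == c)
                 for c in np.union1d(r1, r2))
    return (observed - chance) / (1 - chance)

# Hypothetical codes assigned by two trained observers.
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.67
```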
Test Norms
• Norms refer to the performance of a
typical reference (or norm) group on a
particular measure, against which a
person's score can be compared.
• Scores on psychological tests are most
commonly interpreted by reference to
norms that represent the test performance
of the standardization sample.
Types of Test Norms
• Age norm: The norm (as for height, weight, or
intellectual achievement) of individuals of a given
chronological age
• Grade norm: The norm (as for height, weight, or
intellectual achievement) of individuals of a given grade
• Percentile norm: A percentile (or a centile) is a measure
used in statistics indicating the value below which a
given percentage of observations in a group of
observations fall. For example, the 20th percentile is the
value (or score) below which 20% of the observations
may be found.
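A minimal sketch of computing a percentile rank against a norm group, using hypothetical scores from a standardization sample:

```python
import numpy as np

def percentile_rank(score: float, norms: np.ndarray) -> float:
    """Percentage of the norm group scoring below the given score."""
    return 100 * np.mean(norms < score)

# Hypothetical norm-group scores from a standardization sample.
norm_group = np.array([55, 60, 62, 65, 68, 70, 73, 75, 80, 90])

print(f"{percentile_rank(70, norm_group):.0f}th percentile")  # 5 of 10 below -> 50th
```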
Ethics in Psychological Testing
• Given the widespread use of tests, there is considerable
potential for abuse.
• A good deal of attention has therefore been devoted to
the development and enforcement of professional and
legal standards.
• The American Psychological Association (APA) has
taken a leading role in the development of professional
standards for testing.
APA Ethical Guidelines
 The investigator has the responsibility to make a
careful evaluation of the study's ethical acceptability.
 The investigator is obliged to observe stringent
safeguards to protect the rights of human
participants.
 The researcher must evaluate whether participants
are considered "subjects at risk" or "subjects at
minimal risk" (no appreciable physical risk or
mental harm).
APA Ethical Guidelines (Cont.)
 The principal investigator is responsible for the ethical
practices of collaborators, assistants, employees, etc.
(all of whom are also responsible for their own ethical
behavior).
 Except in minimal-risk research, the investigator
establishes a clear and fair agreement with participants
that clarifies the obligations and responsibilities of
each. The investigator must explain all aspects of the
research that may influence the subjects' decision to
participate, and explain any other aspects the
participants inquire about.
APA Ethical Guidelines (Cont.)
 In research involving concealment or deception, the
researcher considers the special responsibilities
involved.
 Individual’s freedom to decline, and freedom to
withdraw, is respected.
 Researcher is responsible for protecting participants
from physical and mental discomfort, harm, and
danger that may arise from research procedures. If
there are risks, the participants must be aware of this
fact.
APA Ethical Guidelines (Cont.)
 After the data are collected, the investigator provides
participants with information about the nature of the
study and attempts to remove any misconceptions
that may have arisen.
 The investigator has the responsibility to detect and
remove any undesirable consequences to the
participant that may occur due to the research.
 The information obtained from the participant should
be treated confidentially unless otherwise agreed
upon with the participant.
Informed Consent
• Participants must be fully informed as to the purpose
and nature of the research that they are going to be
involved in.
• Participants must be fully informed about the
procedures used in the research study.
• After getting this information, the participants must
provide consent for their participation.
• Participants must be informed about their right to
Confidentiality and their right to withdrawal without
any penalty.
Debriefing
Post-administration debriefing should:
- Restate purpose of the research.
- Explain how the results will be used (usually
emphasizing that the interest is in group findings).
- Reiterate that findings will be treated confidentially.
- Answer all of the respondent's questions fully.
- Thank the participant!
Participant Feedback
• In clinical research, or research with interpretive
instruments, there may be a need to provide more in-
depth feedback about an individual's responses (e.g.,
research on Emotional Intelligence).
• In such cases, first and foremost, it is critical that this
kind of detailed feedback be given by a qualified
individual.
Responsibility of the Tester
• Have competence in test administration, interpretation
and feedback.
• Have an understanding of basic psychometrics and
scoring procedures and be competent in interpretation,
and apply scientific knowledge and professional
judgment to the results.
• Take responsibility for the selection, administration,
scoring, analysis, interpretation, and communication
of test results.
Responsibility of the Tester (Cont.)
 Be familiar with the context of use: the situation,
purpose, setting in which a test is used.
 Have knowledge of legal and ethical issues related to
test use.
 Be aware of ethnic or cultural variables that could
influence the results.
 Have the ability to determine language proficiency.
 Have knowledge of important racial, ethnic, or cultural
variables relevant to the individuals or groups to whom
tests are administered.
Summary
Questions?
