
Independent and Dependent Variables
Operational Definitions
Evaluating Operational Definitions
Planning the Method Section

What is an independent variable?

An independent variable (IV) is the variable (antecedent condition) an experimenter intentionally manipulates.
Levels of an independent variable are the values of the IV created by the experimenter. An experiment requires at least two levels.

Independent and Dependent Variables

Explain confounding.

An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable.
For example, we could confound our experiment if we ran experimental subjects in the morning and control subjects at night.

Independent and Dependent Variables

What is a dependent variable?

A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by the independent variable.
The dependent variable depends on the value of the independent variable.

Independent and Dependent Variables

What is an operational definition?

An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements.

Operational Definitions

What are the two types of operational definitions?

An experimental operational definition specifies the exact procedure for creating values of the independent variable.
A measured operational definition specifies the exact procedure for measuring the dependent variable.

Operational Definitions

What are the properties of a nominal scale?

A nominal scale assigns items to two or more distinct categories that can be named using a shared feature, but does not measure their magnitude.
Example: you can sort canines into friendly and shy categories.

Evaluating Operational Definitions

What are the properties of an ordinal scale?

An ordinal scale measures the magnitude of the dependent variable using ranks, but does not assign precise values.
This scale allows us to make statements about relative speed, but not precise speed, like a runner's place in a marathon.

Evaluating Operational Definitions

What are the properties of an interval scale?

An interval scale measures the magnitude of the dependent variable using equal intervals between values, with no absolute zero point.
Examples: degrees Celsius or Fahrenheit, and Sarnoff and Zimbardo's (1961) 0-100 scale.

Evaluating Operational Definitions

What are the properties of a ratio scale?

A ratio scale measures the magnitude of the dependent variable using equal intervals between values and an absolute zero.
This scale allows us to state that 2 meters are twice as long as 1 meter (see the sketch below).
Example: distance in meters or time in seconds.

Evaluating Operational Definitions
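
To make the interval/ratio contrast concrete, here is a minimal Python sketch (with arbitrary temperatures) of why ratio statements fail on an interval scale: Celsius has no absolute zero, so "twice as hot" does not survive a change of units, whereas ratios of Kelvin values, whose zero is absolute, are meaningful.

```python
# A minimal sketch, hypothetical values: ratio statements require an
# absolute zero. 20 C looks like "twice" 10 C, but converting both
# temperatures to Kelvin shows the 2:1 ratio is an artifact of where
# zero was placed on the Celsius scale.

def celsius_to_kelvin(celsius):
    return celsius + 273.15

print(20 / 10)                                        # 2.0 on the Celsius scale
print(celsius_to_kelvin(20) / celsius_to_kelvin(10))  # ~1.04 on the Kelvin scale
```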

What does reliability mean?

Reliability refers to the consistency of experimental operational definitions and measured operational definitions.
Example: a reliable bathroom scale should display the same weight if you measure yourself three times in the same minute.

Evaluating Operational Definitions

Explain interrater reliability.

Interrater reliability is the degree to which observers agree in their measurement of the behavior.
Example: the degree to which three observers agree when scoring the same personal essays for optimism (see the sketch below).

Evaluating Operational Definitions
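
One common index of interrater agreement is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The Python sketch below uses made-up ratings from two raters; the flashcard's three-observer case would call for an extension such as Fleiss' kappa.

```python
# A minimal sketch of Cohen's kappa for two raters (hypothetical ratings).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: both raters independently pick the same category.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

rater_1 = ["high", "low", "mid", "high", "low", "mid", "high", "low"]
rater_2 = ["high", "low", "mid", "mid",  "low", "mid", "high", "high"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.63
```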

Explain test-retest reliability.

Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of a measurement procedure.
Example: highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, 2 weeks apart (see the sketch below).

Evaluating Operational Definitions
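
Test-retest reliability is typically quantified as the Pearson correlation between the two administrations. A minimal Python sketch with hypothetical scores:

```python
# A minimal sketch of test-retest reliability as the Pearson correlation
# between two administrations of the same test (hypothetical scores).
import math

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (ss_x * ss_y)

time_1 = [102, 118, 96, 130, 110, 124, 89, 115]  # first administration
time_2 = [105, 115, 99, 128, 108, 127, 92, 112]  # two weeks later
print(f"test-retest r = {pearson_r(time_1, time_2):.2f}")  # close to 1.0
```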

Explain interitem reliability.

Interitem reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results (see the sketch below).

Evaluating Operational Definitions
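
Interitem reliability is commonly summarized with Cronbach's alpha, computed from the variances of the individual items and the variance of the total score. A minimal Python sketch with hypothetical responses to a four-item questionnaire:

```python
# A minimal sketch of Cronbach's alpha (hypothetical questionnaire data).
# Each row is one respondent's answers to four items intended to measure
# the same variable.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                   # number of items
    items = list(zip(*rows))           # responses regrouped by item
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.94, high consistency
```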

What does validity mean?

Validity means the operational definition accurately manipulates the independent variable or measures the dependent variable.

Evaluating Operational Definitions

What is face validity?

Face validity is the degree to which the validity of a manipulation or measurement technique is self-evident. This is the least stringent form of validity.
For example, using a ruler to measure pupil size.

Evaluating Operational Definitions

What is content validity?

Content validity means how accurately a measurement procedure samples the content of the dependent variable.
Example: an exam over chapters 1-4 that only contains questions about chapter 2 has poor content validity.

Evaluating Operational Definitions

What is predictive validity?

Predictive validity means how accurately a measurement procedure predicts future performance.
Example: the ACT has predictive validity if its scores are significantly correlated with college GPA.

Evaluating Operational Definitions

What is construct validity?

Construct validity is how accurately an operational definition represents a construct.
Example: a construct of abusive parents might include their perception of their neighbors as unfriendly.

Evaluating Operational Definitions

Explain internal validity.

Internal validity is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable.
Internal validity establishes a cause-and-effect relationship between the independent and dependent variables.

Evaluating Operational Definitions

Explain the problem of confounding.

Confounding occurs when an extraneous variable systematically changes across the experimental conditions.
Example: a study comparing the effects of meditation and prayer on blood pressure would be confounded if one group exercised more.

Evaluating Operational Definitions

Explain history threat.

History threat occurs when an event outside the experiment threatens internal validity by changing the dependent variable.
Example: subjects in group A were weighed before lunch while those in group B were weighed after lunch.

Evaluating Operational Definitions

Explain maturation threat.

Maturation threat is produced when physical or psychological changes in the subject threaten internal validity by changing the DV.
Example: boredom may increase subject errors on a proofreading task (DV).

Evaluating Operational Definitions

Explain testing threat.

Testing threat occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment.
Example: experimental subjects used a blood pressure cuff daily, while control subjects only used one during a pretest measurement.

Evaluating Operational Definitions

Explain instrumentation threat.

Instrumentation threat occurs when changes in the measurement instrument or measuring procedure threaten internal validity.
Example: reaction time measurements became less accurate during the experimental conditions than during the control conditions.

Evaluating Operational Definitions

Explain statistical regression threat.

Statistical regression threat occurs when subjects are assigned to conditions on the basis of extreme scores, the measurement procedure is not completely reliable, and subjects are retested using the same procedure to measure change on the dependent variable.
Because extreme scores partly reflect measurement error, these subjects tend to score closer to the mean on retest even without any treatment effect, as the sketch below illustrates.

Evaluating Operational Definitions
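
A minimal Python simulation (arbitrary parameters) of regression to the mean: a group selected for extreme pretest scores scores closer to the group mean on retest, with no treatment at all, purely because the measure is not perfectly reliable.

```python
# A minimal simulation of regression to the mean (arbitrary parameters).
import random

random.seed(1)
TRUE_MEAN = 100

# Each observed score = stable true score + unreliable measurement noise.
true_scores = [random.gauss(TRUE_MEAN, 10) for _ in range(1000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
retest  = [t + random.gauss(0, 10) for t in true_scores]

# Select only the subjects with extreme (bottom ~10%) pretest scores.
cutoff = sorted(pretest)[100]
chosen = [i for i, score in enumerate(pretest) if score <= cutoff]

pre_mean  = sum(pretest[i] for i in chosen) / len(chosen)
post_mean = sum(retest[i] for i in chosen) / len(chosen)
print(f"selected pretest mean: {pre_mean:.1f}")   # well below 100
print(f"selected retest mean:  {post_mean:.1f}")  # drifts back toward 100
```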

Explain selection threat.

Selection threat occurs when individual differences are not balanced across treatment conditions by the assignment procedure.
Example: despite random assignment, subjects in the experimental group were more extroverted than those in the control group.

Evaluating Operational Definitions

Explain subject mortality threat.

Subject mortality threat occurs when subjects drop out of experimental conditions at different rates.
Example: even if subjects in each group started out with comparable anxiety scores, differential dropout could produce differences on this variable.

Evaluating Operational Definitions

Explain selection interactions.

Selection interactions occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing).

Evaluating Operational Definitions

What is the purpose of the Method section of an APA report?

The Method section of an APA research report describes the Participants, Apparatus or Materials, and Procedure of the experiment.
This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.

Planning the Method Section

When is an Apparatus section needed?

An Apparatus section of an APA research report is appropriate when the equipment used in a study was unique or specialized, or when we need to explain the capabilities of more common equipment so that the reader can better evaluate or replicate the experiment.

Planning the Method Section
