
CHAPTER 7: BASICS OF EXPERIMENTATION

Independent Variable
● dimensions that the experimenter intentionally manipulates. It is the antecedent the experimenter chooses to vary.

Levels of Independent Variable
● at least 2 possible values in every experiment; the researcher decides which values of the IV to use.

Dependent Variable
● behavior we expect to change because of our experimental treatment. It is the outcome we are trying to explain.
● sometimes called dependent measures

SOME RESEARCH EXAMPLES

Schachter
● Schachter tested the hypothesis: If people are anxious, then they will want to affiliate, or be with, others. The hypothesis states a potential relationship between two variables: anxiety and affiliation. The IV in the experiment is anxiety. Schachter created two levels of anxiety (high and low) by using different sets of instructions: he manipulated anxiety by giving subjects varying instructions, leading them to believe that they either would or would not be exposed to painful shock. Affiliation is the DV, dependent in the sense that its values are assumed to depend on the values of the independent variable. According to the hypothesis, anxious subjects will be less likely to want to wait alone. If anxiety has no effect on affiliation, all subjects, anxious or not, will be equally willing to wait alone. In fact, Schachter's experiment supported the hypothesis: he found that the subjects who expected painful shocks were less likely to want to wait alone.
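Schachter's design can be sketched as data: the IV (anxiety) takes the two levels the experimenter set, and the DV (affiliation) is each subject's waiting choice. A minimal sketch in Python; the counts below are invented for illustration and are not Schachter's actual results:

```python
# Hypothetical affiliation choices under two experimenter-set levels of
# the IV (anxiety). The DV is whether a subject chose to wait with
# others ("together") or alone. All counts below are made up.

def affiliation_rate(choices):
    """Proportion of subjects who chose to wait with others (the DV summary)."""
    return sum(1 for c in choices if c == "together") / len(choices)

high_anxiety = ["together"] * 12 + ["alone"] * 4   # told to expect painful shock
low_anxiety  = ["together"] * 6  + ["alone"] * 10  # told to expect mild shock

# If anxiety raises affiliation, the high-anxiety rate should be larger.
print(affiliation_rate(high_anxiety))  # 0.75
print(affiliation_rate(low_anxiety))   # 0.375
```

A real analysis would also test whether the difference between conditions is statistically reliable, for example with a chi-square test.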
Hess
● Hess tested the hypothesis: Large pupils make people attractive. The independent variable (IV) in the experiment was pupil size. Hess deliberately varied the size of the pupils so he could test the effects of pupil size on attractiveness. The dependent variable (DV) was attractiveness. If the hypothesis is correct, measures of attractiveness should depend on the size of the pupils. In fact, Hess found that his male subjects were more likely to attribute attractive traits to the women with large pupils.
● Whether a particular variable is an independent variable, a dependent variable, or neither depends on the hypothesis being tested.

Identifying Variables
● When you are working with your own hypothesis, you must ask the same type of questions that you ask about an experiment that has already been done:
○ What will you manipulate or vary to test the hypothesis?
○ What will you measure to find out whether your independent variable had an effect?
● If you do not need to manipulate the antecedent conditions by creating different treatment conditions, you do not have an experimental hypothesis.
● The independent variable (IV) in one experiment may function as the dependent variable in another.

OPERATIONAL DEFINITIONS

A variable being investigated may be defined in two ways:
● Conceptual definition
➔ defines a term according to how we use it in everyday language
● Operational definition
➔ definition used in carrying out the experiment
➔ specifies the precise meaning of a variable within an experiment
➔ defines a variable in terms of observable operations, procedures, and measurements

Defining the Independent Variable: Experimental Operational Definitions
● Explain the precise meaning of the independent variables.
● Describe exactly what was done to create the various treatment conditions of the experiment.
● Include all the steps that were followed to set up each value of the independent variable.
● Whether the IV is manipulated or selected, we need precise experimental definitions.

Defining the Dependent Variable: Measured Operational Definitions
● Describe exactly what procedures we follow to assess the impact of different treatment conditions.
● Include the exact descriptions of the specific behaviors or responses recorded and explain how those responses are scored.

Defining Constructs Operationally
● Hypothetical Constructs
➔ refer to concepts, which are unseen processes postulated to explain behavior
➔ we infer their existence from behaviors that we can observe

Defining Scales of Measurement
● In setting up experiments and formulating operational definitions, researchers also consider the available scales of measurement for each variable.

RELIABILITY
● consistency and dependability of experimental procedures and measurement

PROCEDURES FOR CHECKING THE RELIABILITY OF MEASUREMENT TECHNIQUES
● Interrater Reliability
➔ one way to assess the reliability of a measurement procedure is to have observers take measurements of the same response
● Test-Retest Reliability
➔ reliability of measures can also be checked by comparing the scores of people who have been measured twice with the same instrument
● Interitem Reliability
➔ the extent to which different parts of a questionnaire, test, or other instrument designed to assess the same variable attain consistent results
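The reliability checks above reduce to simple computations. A minimal sketch using invented scores rather than data from any real study: test-retest reliability is indexed here by the Pearson correlation between the two administrations, and interrater reliability by the raters' percent agreement:

```python
# Two common reliability indices, computed on made-up example data.

def pearson_r(x, y):
    """Pearson correlation: the usual index for test-retest reliability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def percent_agreement(rater1, rater2):
    """Simple interrater reliability: share of responses both raters scored alike."""
    matches = sum(1 for a, b in zip(rater1, rater2) if a == b)
    return matches / len(rater1)

# Test-retest: the same subjects measured twice with the same instrument.
first  = [10, 12, 9, 15, 11]
second = [11, 13, 9, 14, 12]
r = pearson_r(first, second)        # close to 1 -> consistent scores

# Interrater: two observers scoring the same responses.
rater_a = ["hit", "miss", "hit", "hit"]
rater_b = ["hit", "miss", "hit", "miss"]
agreement = percent_agreement(rater_a, rater_b)  # 0.75
```

Interitem reliability is checked analogously, by correlating scores on different parts of the same instrument (for example, odd versus even items).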
Validity
● refers to the principle of studying the variables we intend to study

Manipulation Check
● provides evidence for the validity of an experimental procedure

Face Validity
● the least stringent type of validity because it does not provide any real evidence

Content Validity
● depends on whether we are taking a fair sample of the variable we intend to measure

Predictive Validity
● performance on the test predicts performance in the condition being modeled. By extension, it allows the observed responses to be extrapolated to other species, testing, and clinical environments.

Concurrent Validity
● compares scores on the measuring instrument with an outside criterion
● is comparative rather than predictive
● reflects whether scores on the measuring device correlate with scores obtained from another method of measuring the same concept

Construct Validity
● deals with the transition from theory to research application

Evaluating the Experiment: Internal Validity

Internal Validity
● refers to the degree to which a researcher can state a causal relationship between antecedent conditions and the subsequent observed behavior

Extraneous Variables
● factors that are not the focus of the experiment but can influence the findings; they are neither intentionally manipulated independent variables nor dependent variables measured as indexes of the effect of the independent variables

Confounding Variables
● a situation wherein the value of an extraneous variable changes systematically across different conditions of an experiment

Classic Threats to Internal Validity
● Donald Campbell
➔ psychologist who identified eight kinds of extraneous variables that can threaten the internal validity of experiments
➔ designs using different subjects in each treatment group can be confounded if an extraneous variable affects some experimental groups but not others with regularity
➔ designs in which the same subjects are measured multiple times can be confounded if an extraneous variable is present in certain experimental conditions but not in others
History
● refers to the history of the experiment: some outside event occurring before a group's testing session could influence the responses of the entire group, and the effects produced by the event could be mistaken for effects of the IV

Maturation
● refers to any internal changes in subjects that might have affected scores on the dependent measure
● maturation effects can also be a problem in studies that take months or even years to finish

Testing
● a threat that refers to the effects on the dependent variable produced by a previous administration of the same test or other measuring instrument

Instrumentation
● a potential problem whenever human observers are used to record behavior, score questionnaires by hand, or perform content analyses

Statistical Regression
● also referred to as regression toward the mean
● occurs whenever subjects are assigned to conditions based on extreme scores on a test

Selection
● whenever the researcher does not assign subjects randomly to the different conditions of an experiment, a selection threat is present

Subject Mortality
● refers to the possibility, which must always be considered, that more subjects dropped out of one experimental condition than another
● dropout rates should always be stated in a research report so that the reader can be on the lookout for this threat

Selection Interactions
● selection can combine with another threat to form a selection interaction
● selection can interact with history, maturation, mortality, and so on to produce effects on the dependent variable
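The statistical regression threat above can be demonstrated with a short simulation: subjects selected for extreme pretest scores are retested with no treatment at all, yet their group mean moves back toward the population mean, because extreme scores are partly measurement error. All numbers below are arbitrary illustrations:

```python
# Simulate regression toward the mean: every subject has the same true
# ability (100); observed scores add random error. Selecting the lowest
# pretest scorers and retesting them (with no treatment) shifts the
# group mean back toward 100. All parameters here are made up.
import random

random.seed(0)  # fixed seed so the run is repeatable

TRUE_MEAN = 100

def observed_score():
    """True ability plus normally distributed measurement error."""
    return TRUE_MEAN + random.gauss(0, 15)

pretest = [observed_score() for _ in range(1000)]
cutoff = sorted(pretest)[100]                  # roughly the bottom 10%
selected = [s for s in pretest if s <= cutoff]

retest = [observed_score() for _ in selected]  # same ability, fresh error

pre_mean = sum(selected) / len(selected)       # far below 100
post_mean = sum(retest) / len(retest)          # close to 100 again
```

If such a selected group had received a treatment between the two tests, the apparent "improvement" could be mistaken for a treatment effect.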

PLANNING THE METHOD SECTION

The method section of the research report is the place to describe what you did in your experiment.

DIVISIONS OF THE METHOD SECTION

● Participants
- Describe your subjects.
● Materials
- Describe the items in a subsection that is labeled appropriately.
- You also need to describe any special or unusual equipment or software used in the study.
● Procedure
- Keep careful notes of everything that you do in an experiment, including verbal instructions given to subjects, because you will need to describe all of the procedures used in the experimental session.
