
Experimental Designs
Group Members:
• Atika Pakarti Linuih (18220080)
• Nurul Aulia (18220078)
• Rika Kartika (18220037)
• Rizky Zaskia Hilmy (18220303)
What Is an Experiment?

In an experiment, you test an idea (or practice or procedure) to determine whether it influences an outcome or dependent variable. You first decide on an idea with which to “experiment,” assign individuals to experience it (and have some individuals experience something different), and then determine whether those who experienced the idea (or practice or procedure) performed better on some outcome than those who did not experience it.
WHEN DO YOU USE AN EXPERIMENT?
You use an experiment when you want to establish possible cause and effect
between your independent and dependent variables. This means that you
attempt to control all variables that influence the outcome except for the
independent variable. Then, when the independent variable influences the
dependent variable, you can say the independent variable “caused” or
“probably caused” the change in the dependent variable.
WHEN DID EXPERIMENTS DEVELOP?

The ideas used in experiments today were mostly in place by the first few decades of the 20th century.
The procedures of comparing groups, assigning individuals to treatments, and statistically analyzing
group comparisons had been developed by 1940. During the 1960s and 1970s, the types of experimental
designs were identified, and by 1980 the strengths (e.g., control over potential threats) of these designs
had been specified. Since the 1980s, experiments have grown in sophistication and complexity, largely
because of computers and improved statistical procedures. Researchers now employ multiple independent
and dependent variables, compare more than two groups, and study different types of experimental units
of analysis, such as entire organizations, groups, and individuals (Boruch, 1998; Neuman, 2000).
WHAT ARE KEY CHARACTERISTICS OF EXPERIMENTS?

01 Random Assignment
02 Control Over Extraneous Variables
03 Manipulation of the Treatment Conditions
04 Outcome Measures
05 Group Comparisons
06 Threats to Validity
Control Over Extraneous Variables

• Extraneous factors are any influences in the selection of participants, the procedures, the statistics, or the design likely to affect the outcome and provide an alternative explanation for the results other than the one expected.

• A pretest provides a measure on some attribute or characteristic that you assess for participants in an experiment before they receive a treatment. After the treatment, you take another reading on the attribute or characteristic.

• A posttest is a measure on some attribute or characteristic that is assessed for participants in an experiment after a treatment.
• Covariates are variables that the researcher controls for using statistics; they relate to the dependent variable but not to the independent variable. The researcher needs to control for these variables, which have the potential to co-vary with the dependent variable.

• Matching is the process of identifying one or more personal characteristics that influence the outcome and assigning individuals with that characteristic equally to the experimental and control groups.

• Homogeneous sampling is selecting people who vary little in their personal characteristics.

• A blocking variable is a variable the researcher controls before the experiment starts by dividing (or “blocking”) the participants into subgroups (or categories) and analyzing the impact of each subgroup on the outcome.
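As a minimal sketch of matching (not from the source; the participant names and pretest scores are hypothetical), one might rank participants on the matching characteristic, pair adjacent individuals, and split each pair between the experimental and control groups:

```python
import random

def matched_assignment(participants, key, seed=0):
    """Sort participants by a matching characteristic, then split each
    adjacent pair between the experimental and control groups."""
    rng = random.Random(seed)
    ranked = sorted(participants, key=key)
    experimental, control = [], []
    # Walk through matched pairs; randomly decide which member of each
    # pair goes to the experimental group so assignment stays unbiased.
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)
        experimental.append(pair[0])
        control.append(pair[1])
    # An unpaired leftover participant is assigned at random.
    if len(ranked) % 2 == 1:
        (experimental if rng.random() < 0.5 else control).append(ranked[-1])
    return experimental, control

# Hypothetical participants matched on a pretest score.
people = [{"name": f"P{i}", "pretest": s}
          for i, s in enumerate([55, 90, 60, 88, 72, 75])]
exp, ctl = matched_assignment(people, key=lambda p: p["pretest"])
```

Because each pair is split between groups, the two groups end up with similar distributions on the matched characteristic.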
Manipulation of the Treatment Conditions

• In an experimental treatment, the researcher physically intervenes to alter the conditions experienced by the experimental unit (e.g., a reward for good spelling performance or a special type of classroom instruction, such as small-group discussion).

• In an experiment, levels are categories of a treatment variable.

Outcome Measures & Group Comparisons

• In experiments, the outcome (or response, criterion, or posttest) is the dependent variable that is the presumed effect of the treatment variable.

• A group comparison is the process of a researcher obtaining scores for individuals or groups on the dependent variable and comparing the means and variance both within the group and between the groups.
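As a minimal sketch of a group comparison (not from the source; the posttest scores below are hypothetical), one can compute each group's mean and variance and take the difference in means as the estimated treatment effect:

```python
from statistics import mean, variance

# Hypothetical posttest scores for two groups (illustrative numbers only).
experimental = [78, 85, 82, 90, 88]
control = [70, 75, 72, 80, 74]

# Within-group statistics: how scores vary inside each group.
exp_mean, exp_var = mean(experimental), variance(experimental)
ctl_mean, ctl_var = mean(control), variance(control)

# Between-group comparison: the difference in means is the estimated
# treatment effect, judged against the within-group variability.
effect = exp_mean - ctl_mean
print(f"experimental: mean={exp_mean:.1f}, variance={exp_var:.1f}")
print(f"control:      mean={ctl_mean:.1f}, variance={ctl_var:.1f}")
print(f"difference in means: {effect:.1f}")
```

In practice a researcher would follow this with a significance test (e.g., a t test) that weighs the between-group difference against the within-group variance.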
Threats to Validity
• Threats to validity refer to specific reasons why we can be wrong when we make an inference in an experiment because of covariation, causation, constructs, or whether the causal relationship holds over variations in persons, settings, treatments, and outcomes.

• Threats to internal validity are problems in drawing correct inferences about whether the covariation (i.e., the variation in one variable contributes to the variation in the other variable) between the treatment and the outcome reflects a causal relationship.

• Threats to external validity are problems that threaten our ability to draw correct inferences from the sample data to other persons, settings, treatment variables, and measures.
The Types of Experimental Designs
True Experiments (Between-Group Design)

1. The researcher randomly assigns participants to different conditions of the experimental variable.
2. One variation on this design is to obtain pretest as well as posttest measures or observations.
3. Randomization, or equating of the groups, minimizes the possibility of threats.
4. Instrumentation exists as a potential threat in most experiments.
Quasi-Experiments (Between-Group Design)

1. They include assignment, but not random assignment, of participants to groups.
2. We can also apply the pre- and posttest design approach.
3. The quasi-experimental approach introduces considerably more threats to internal validity.

(Diagram: each group's outcome measured as the mean rate of smoking.)
Time Series (Within-Group or Individual Design)

Researchers study a single group or a single individual.

1. The design consists of studying one group, over time, with multiple pretest and posttest observations made by the researcher.
2. There are two types: interrupted time series and equivalent time series.
3. However, threats to validity may occur because of the overall length of data collection.

Interrupted time series: Select participants for group → Pretest measure or observation (repeated) → Intervention → Posttest measure or observation (repeated).

Equivalent time series: Select participants for group → Measure or observation → Intervention → Measure or observation → Intervention → Measure or observation → Intervention.
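As a minimal sketch of an interrupted time series (not from the source; the observation values are hypothetical), one can compare the mean of the repeated pre-intervention observations with the mean of the post-intervention observations:

```python
from statistics import mean

# Hypothetical repeated observations of one group's behavior
# (illustrative numbers only): several measurements before the
# intervention, then several after it.
pretest_observations = [12, 11, 13, 12]   # stable baseline
posttest_observations = [8, 7, 9, 7]      # after the intervention

baseline = mean(pretest_observations)
post = mean(posttest_observations)

# In an interrupted time series, a shift between the pre- and
# post-intervention series suggests the intervention had an effect.
shift = post - baseline
print(f"baseline mean={baseline:.2f}, post mean={post:.2f}, shift={shift:.2f}")
```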
Repeated Measures (Within-Group or Individual Design)

Researchers study a single group or a single individual.

1. All participants in a single group participate in all experimental treatments.
2. After selecting participants, the researcher decides on the different experimental treatments.
3. This design is not affected by the internal validity threats related to comparing groups.

Diagram: Select participants for group → Measure or observation → Experimental treatment #1 → Measure or observation → Experimental treatment #2 → Measure or observation.
Single-Subject Designs (Within-Group or Individual Design)

The key characteristics of a single-subject study:

1. Prior to administering the intervention, the researcher establishes a stable baseline of information about the individual's behavior.
2. The researcher repeatedly and frequently measures behavior.
3. The researcher notes the pattern of behavior and plots it on a graph.
4. In a graphic analysis of the data, the single-subject researcher plots behavior for a specific individual.
Steps in the Implementation of Experiments
1. Decide if an Experiment Addresses Your Research Problem

The type of problem studied in an experiment is the need to know whether new practices affect results.

2. Form Hypotheses to Test Cause-and-Effect Relationships

A hypothesis advances a prediction about outcomes. The experimenter establishes this prediction
and then collects data to test the hypothesis.

3. Select an Experimental Unit and Identify Study Participants

An experimental unit of analysis is the smallest unit treated by the researcher during an experiment.
The experimental unit receiving a treatment may be a single individual, several individuals, a group,
several groups, or an entire organization.
Investigators may choose participants because they volunteered or they agreed to be involved.
Alternatively, the researcher may select participants who are available in well-defined, intact groups
that are easily studied.
Steps in the Implementation of Experiments
4. Select an Experimental Treatment and Introduce It

The key to any experimental design is to set levels of treatment and apply one
level to each group.

5. Choose a Type of Experimental Design

One aspect of preparing for the experiment is choosing the design. You need
to make several decisions based on your experience with experiments and the
availability of participants for the study.
Steps in the Implementation of Experiments
6. Conduct the Experiment

◆Administering a pretest, if you plan to use one


◆Introducing the experimental treatment to the experimental group or
relevant groups
◆Monitoring the process
◆Gathering posttest measures (the outcome or dependent variable
measures)
◆Using ethical practices by debriefing the participants by informing them
of the purpose and reasons for the experiment, such as asking them what
they thought was occurring (Neuman, 2000)
Steps in the Implementation of Experiments
7. Organize and Analyze the Data
Three major activities are required at the conclusion of the experiment:
coding the data, analyzing the data, and writing the experimental report.

8. Develop an Experimental Research Report
The experimental report follows a standard format. In the “Methods” or
“Procedures” section of an experiment, the researcher typically includes
information about:
◆Participants and their assignment
◆The experimental design
◆The intervention and materials
◆Control over extraneous variables
◆Dependent measures or observations
THANK YOU
