
Research Methods

Quantitative & Qualitative


Stimulus material for Paper 3 can come from the following types
of research:

1. Experiments (classic lab experiments, quasi, field, natural)
2. Correlation studies (data usually comes from questionnaires)
3. Surveys (data is often reported as percentages but can also be used for correlation studies)
4. Qualitative studies (interviews, observation, case studies)
5. Mixed methods (two qualitative methods in the same study, or a mix of qualitative and quantitative methods in the same study)
Differences between quantitative and qualitative
research methods
Theories & empirical studies
Building blocks of scientific psychology

A theory is an explanation for a psychological phenomenon. It is a statement used to summarize, organize, and explain observations.

Psychological theories are probable rather than certain, and therefore they are always open to some degree of doubt. Often, a single theory cannot explain all aspects of a psychological phenomenon.

A good theory is:

T - testable/falsifiable
E - empirical evidence
A - application (high heuristic validity)
C - concepts (measurable)
U - unbiased?
P - predictive
Thinking about testability: critical thinking

For each of the following statements, think about whether it is "testable" or not. If so, how would you be able to test it to see whether it is "true"? What are the problems with testing these claims?

1. Cold weather makes you sick.
2. Married couples are happier than single people.
3. Playing online games makes you smarter.
4. Eating foods containing high levels of sugar can affect your concentration on a test.
Scientific method
Empirical Evidence

Empiricism (founded by John Locke) states that the only source of knowledge comes through
our senses – e.g. sight, hearing etc.

● Refers to data being collected through direct observation or experiment.
● Empirical evidence does not rely on argument or belief.
● Instead, experiments and observations are carried out carefully and reported in detail so that other investigators can repeat and attempt to verify the work.

According to the Pennsylvania State University Libraries, there are some things one can look for when determining if evidence is empirical:

● Can the experiment be recreated and tested?
● Does the experiment have a statement about the methodology, tools and controls used?
● Is there a definition of the group or phenomena being studied?

Is this experiment "Gold" or "Rubbish"? https://youtu.be/ApoYwEeDNrc
Quantitative or Qualitative data?
Validity?
Counterbalancing?
Participant Bias?
Sample Size?
Researcher Bias?
Independent variable - the variable that is manipulated
by the researcher.
Dependent variable - the variable that is measured by
the researcher. It is assumed that this variable changes
as a result of the manipulation of the IV.
Controlled variables - variables that are kept constant in
order to avoid influencing the relationship between the
IV and the DV.
Standardized procedure - the idea that directions given
to participants during an experiment are exactly the
same. This is the most basic form of "control" for a
study.
Random allocation to conditions: in a true experiment, participants are randomly allocated to conditions in order to avoid bias in how the groups are formed and to control for participant variability.
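The definitions above describe random allocation conceptually; as a rough sketch only (the participant IDs are invented, and Python's standard random module is just one way to do this), here is what random allocation to two conditions could look like in practice:

```python
import random

# Hypothetical participant IDs - placeholders, not real data.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)                  # ordering no longer depends on any participant trait
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]  # e.g. reads the word list with music
control_group = participants[midpoint:]       # e.g. reads the word list in silence

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```

Because every ordering of the shuffled list is equally likely, each participant has the same chance of ending up in either condition - which is what protects the groups from being formed in a biased way.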
Derren Brown: an "experiment" in NYC:
https://youtu.be/dy75GtKsOAw

How is this "experiment" different from the cute aggression study?
Types of experiments

Lab experiment: an experiment done under highly controlled conditions.

Field experiment: an experiment done in a natural setting. There is less control over variables.

A true experiment: an IV is manipulated and a DV measured under controlled conditions. Participants are randomly allocated to conditions.

A quasi experiment: like the "experiment" by Derren Brown - no IV is manipulated and participants are not randomly allocated to conditions. Instead, it is their traits that set them apart - a fish seller, a hot dog vendor and a jeweler.

A natural experiment: an experiment that is the result of a "naturally occurring event." We will address these later in the course, but a natural experiment might address a question like: Did stress increase at our school after the introduction of the IB? Or: Did aggression increase in rural Canada after the introduction of television?
Operationalization

Both the independent and dependent variables must be operationalized. In other words, they need to be written in such a way that it is clear what is being measured: the variables are clearly defined so that the IV can be manipulated and the DV measured.

If we are testing the role of noise in one's ability to recall a list of words, one group would read the list while listening to music and another group would read the list in silence; noise is the independent variable. It could be operationalized as dissonant rock music played at a volume of 100 decibels.

An operationalized dependent variable could be the number of words remembered from a list of 30 words.

Challenges faced with operationalization:

Some variables are more difficult to operationalize - for instance, anger levels. Operationalization also means that only one aspect of a variable is being measured. However, without accurate operationalization, results will be unreliable and cannot be replicated to check their validity.
For example, if we are testing the role of noise in one's ability to recall
a list of words, one group would read the list while listening to
music. Another group would read the list in silence. Otherwise, there
should be no other difference between the groups.

Some examples of controls for this study would include:

• The list of words would be the same - the same words, the same
font, the same size font, the same order of the words.

• The conditions of the room should be the same. If one room has a
lot of posters on the wall with information, while the other room has
bare walls, this could theoretically influence the results.

• The temperature of the rooms should be the same.

• The time of day when the test is taken should be the same.
Hypothesis

The experimental method is based on hypothesis testing.

There are two types of hypotheses: a null hypothesis, which states that there will be no relationship between the independent and dependent variables, and the alternative hypothesis (aka the research hypothesis), which clearly predicts the relationship between the independent and dependent variables.

Task: https://www.thinkib.net/files/psychology/files/hypotheses-worksheet.pdf
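To make the null/alternative distinction concrete, here is a minimal sketch that is not part of the worksheet above: the recall scores are invented, and the independent-samples t-test from scipy is just one common way a researcher might decide between the two hypotheses in the noise-and-recall example.

```python
from scipy import stats

# Invented recall scores (words remembered out of 30) - for illustration only.
music_group   = [14, 12, 16, 11, 13, 15, 12, 14]   # read the list while listening to music
silence_group = [18, 17, 15, 19, 16, 20, 17, 18]   # read the list in silence

# Null hypothesis:        noise has no effect on recall (no difference between the means).
# Alternative hypothesis: noise affects recall (the means differ).
t_stat, p_value = stats.ttest_ind(music_group, silence_group)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the two conditions differ.")
else:
    print("Fail to reject the null hypothesis.")
```

The logic mirrors the definitions above: the test asks how likely the observed difference would be if the null hypothesis were true, and only if that likelihood is very small is the null hypothesis rejected in favour of the research hypothesis.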
Types of reasoning

https://www.youtube.com/watch?v=dBMksZGU6eU
TOK: How can I know anything at all?
Inductive reasoning takes specific observations and draws general conclusions from those observations.
Ex: You may look at 100 dogs and find that they all have fleas, and then declare that all dogs have fleas.
The problem, obviously, is that you have not examined all dogs, so as soon as one is found without fleas, your conclusion is proven wrong.
Inductive reasoning is tied to the representativeness heuristic, which encourages us to judge something purely based on how many features it shares with something else. This can be very faulty: a conclusion reached this way is not necessarily true.

Karl Popper on the problem of induction in the sciences:

https://youtu.be/wf-sGqBsWv4

Reflect:

CONJECTURE, REFUTATION, FALSIFICATION, DEMARCATION AND PSEUDOSCIENCE


When we talk about experiments, we talk about the design
that is used - in other words, what strategy was used for the
experiment? The design of an experiment should effectively
address the research problem that is being investigated.
In the IB psychology course, we usually discuss three
designs.

● A within-subjects design (repeated measures)
● A between-subjects design (independent samples)
● A matched pairs design (a rough code sketch contrasting the three designs follows below)
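Purely as a rough, hedged illustration (the participant IDs, conditions and pre-test scores below are invented), this sketch contrasts how participants meet the conditions under each of the three designs:

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
conditions = ["music", "silence"]

# Within-subjects (repeated measures): every participant does both conditions.
# Randomizing the order is a simple form of counterbalancing against practice/fatigue effects.
within = {p: random.sample(conditions, k=2) for p in participants}

# Between-subjects (independent samples): each participant does one condition only,
# with random allocation deciding which.
shuffled = random.sample(participants, k=len(participants))
between = {"music": shuffled[:3], "silence": shuffled[3:]}

# Matched pairs: participants are first paired on a relevant trait (here, hypothetical
# memory pre-test scores); one member of each pair then goes to each condition.
pretest = {"P1": 21, "P2": 20, "P3": 15, "P4": 16, "P5": 27, "P6": 26}
ranked = sorted(participants, key=lambda p: pretest[p])
matched_pairs = [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked), 2)]

print(within)
print(between)
print(matched_pairs)
```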
Evaluate the experimental designs

Activity:

Research question:

● Does raising one's level of self-esteem affect behaviour?

Design a presentation. The presentation should have the following slides:

Slide 1: An operationalized research question.
Slide 2: Statement of a null and research hypothesis.
Slide 3: Operationalization of the independent variable.
Slide 4: Operationalization of the dependent variable.
Slide 5: A description of the procedure.
Sampling

People who take part in a psychological study are called participants. Normally, psychologists define a target population - that is, a specific group of people whom they are interested in for their study.

The nature of the group of participants who are chosen from the target population to take part in the study, what psychologists call the sample, is very important in determining the usefulness of a piece of research.

The goal in sampling is to obtain a sample that is representative of the target population so that the results of the study can be generalized.

The extent to which a study can be generalized to the target population is referred to as external validity.

Sampling bias can occur when the sample is not representative of the target population.

William Schofield has said that because university students are often the participants in psychological studies, there is a YAVIS bias - that is, Young, Attractive, Verbal, Intelligent, and Successful.
Validity and reliability
Validity refers to whether the research does what it claims to do. Psychologists discuss two major types of validity - internal and external.
Internal validity refers to how well an experiment is done, especially whether it avoids the influence of
outside or extraneous variables on the outcome of the study. In order to achieve high internal validity,
studies must be well controlled, and the variables must be carefully defined.
When evaluating the internal validity of a study, it is important that we can agree on what is being
measured. For example, if the concept that is being studied has an agreed-upon definition and can be
measured, then we can say that it has a high level of construct validity. Often, however, there are
problems with construct validity in psychological research.
External validity is the extent to which the results of a study can be generalized to other situations and
to other people. External validity is usually split into two distinct types: population validity and ecological
validity.
Population validity is a type of external validity that describes how well the sample used can be
generalized to a population as a whole.
Ecological validity is a type of external validity that looks
at the experimental environment and determines how much it
influences behavior.
In order to determine the level of ecological validity, two
things must be considered. Firstly, the representativeness of
the testing situation. This is often called “mundane
realism” – or the level to which the situation represents a
real-life situation.
Secondly, ecological validity refers to the generalizability of the study to other settings or situations outside of the laboratory. Low ecological validity means that what was observed in the laboratory does not necessarily predict what will happen outside the laboratory.

Finally, reliability means that the results can be replicated. Usually, reliability is discussed in reference to experimental studies because the procedure is standardized and, theoretically, if another researcher uses exactly the same procedure, it should give the same results.
Replication problems in psychology

https://www.psychologytoday.com/intl/blog/straight-talk/201511/replication-problems-in-psychology
Bang Goes the Theory

https://youtu.be/x1pLHVMO4ho

● How valid do you think that the results of this study are?
● What are the limitations of the study?
Evaluating experiments
In an experiment, researchers attempt to control as many variables as possible. However, this is not always easy.

Extraneous variables (also called confounding variables) are undesirable variables that influence the relationship between the
independent and dependent variables.
Demand characteristics occur when participants act differently simply because they know that they are in a study. They may try
to guess the aims of the study and act accordingly.
Evaluating experiments
Researcher bias is when the experimenter sees what he or
she is looking for. In other words, the expectations of the
researcher consciously or unconsciously affect the findings of
the study. Using a double-blind control can help to avoid
this. In this design, not only do the participants not know
whether they are in the experimental or control group, but the
person carrying out the experiment does not know the aim of
the study, nor which group is the treatment group and which is the control group.
Participant variability is a limitation of a study when
characteristics of the sample affect the dependent variable. This
can be controlled for by selecting a random sample and
randomly allocating the participants to the treatment and
control groups.
One other consideration is artificiality. This is when the
situation created is so unlikely to occur that one has to wonder
if there is any validity in the findings.
Sampling bias
Henrich, Heine, and Norenzayan (2010) carried out an analysis of psychological research and found that 67% of
American psychological research uses undergraduate university students as participants. More shockingly, they found
that 96 percent of all psychology samples come from countries that make up only 12 percent of the world’s population.
The researchers coined the acronym WEIRD to describe the population studied by psychologists: Western Educated
Industrialized Rich Democratic. The researchers argue that this means that a significant amount of psychology
research has studied outliers - that is, the least representative populations one could find for generalizing to a global
population.

A recent study by Hanel and Vione (2016) of Cardiff University showed that the issue of student samples may be
more complex than we think. The researchers collected data from over 6000 students from over 50 countries. They
found that there were significant differences between students from different countries.
For example, in New Zealand students showed high respect for the elderly, but in Australia the opposite was found. In
China, students showed more confidence in political institutions than the general public, while in Germany, students
showed less confidence. Students in the USA saw dishonest behaviours, such as stealing, as more justifiable than the
public, but in India, students saw such acts as less justifiable than the public.
Hanel and Vione said their results “further support the claim that generalizing from students to the general
public within personal and social psychology is problematic, at least while we do not know what predicts
those differences.”
Exam TIP:
One of the problems that students often have is that they try to say that generalization of the study is not possible because of the nature of the sample. But this is often incorrect.
For example, if a study looks at eating habits of
working-class white Americans, it is not
appropriate to say that a limitation of the study is
that it cannot be generalized to all Americans.
That was never the goal of the study, so to make
this claim is actually incorrect.
Not all research attempts to make universal
claims.
Vocabulary

https://www.thinkib.net/files/psychology/files/limitations-of-exp-vocab-rev.pdf
Activity: Sampling techniques

Let’s work in pairs to understand the following techniques (a short code sketch of two of them appears after the list):
● Random sampling
● Self-selected sampling
● Opportunity sampling
● Purposive sampling
● Snowball sampling
● Stratified sampling
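As a hedged starting point for the activity (the "school population" below is invented), this sketch shows what two of the techniques - random sampling and stratified sampling - could look like in code:

```python
import random

# Invented target population: 60 students tagged by year group (40 Year 12, 20 Year 13).
population = [f"Y12-{i}" for i in range(40)] + [f"Y13-{i}" for i in range(20)]

# Random sampling: every member of the target population has an equal chance of selection.
random_sample = random.sample(population, k=12)

# Stratified sampling: sample from each subgroup in proportion to its size in the population
# (40/60 are Year 12 and 20/60 are Year 13, so a sample of 12 takes 8 and 4 respectively).
year_12 = [p for p in population if p.startswith("Y12")]
year_13 = [p for p in population if p.startswith("Y13")]
stratified_sample = random.sample(year_12, k=8) + random.sample(year_13, k=4)

print("Random:    ", random_sample)
print("Stratified:", stratified_sample)
```

The remaining techniques (self-selected, opportunity, purposive and snowball sampling) depend on how participants are recruited in the real world rather than on a selection rule, so they are better discussed than coded.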
Crash course: Psychological research

https://www.youtube.com/watch?v=hFV71QPvX2I&t=2s
Task
You have been hired by your local government as a health psychologist with the goal of
increasing exercise in the local community. You decide to carry out interviews at the local
fitness center to learn more about people’s motivation to engage in exercise.

1. What type of sample is this?

2. Your study may be criticized for having a sampling bias. Which group of people
may be over-represented? Which group may be under-represented?

3. How do you think that you could get a more representative sample for your
study?
References
● https://www.thinkib.net/psychology/page/23361/quantitative-research-methods
● https://www.thinkib.net/psychology/page/24226/unit-planning-research-methods
● https://www.youtube.com
