
Module 2: Research Methods in Psychology

Semester One:
2021/22 Academic Year
Module Syllabus
1. Module and Module Facilitator Information
• Module Information
– Module Title: Research Methods
– Module Code: PsyC501
– Credit Value of the Module: 7 ECTS
• Module Facilitator Information
– Name: Mulat Asnake (PhD)
– Email: mulat.asnake@aau.edu.et
– Office: NCR 386
2. Description of the Module

• The module will present issues in the research process ranging from problem identification to report writing, covering both qualitative & quantitative methods.
• The module emphasizes problem identification, ethical issues, research design, preparing a research proposal, developing instruments for data collection, sampling techniques, methods of data analysis, & the style of writing research reports.
3. Learning Outcomes
Upon successful completion of the module, students should be able to:
• Distinguish scientific methods of inquiry from nonscientific methods.
• Write a research proposal.
• Identify the major features that distinguish theoretical & operational definitions.
• Select appropriate tool(s) for data collection based on information provided about the general features of the planned/proposed study.
• Enumerate the main ethical issues that should be considered in conducting research.
3. Learning Outcomes…
• Identify strengths & limitations of tools of data
collection.
• Describe the main differences between probability & non-probability sampling techniques.
• Identify strengths & limitations of sampling techniques.
• Discuss some major points that distinguish experimental
research from non-experimental research.
• Write a research report following the APA style.
4. Content of the Module

4.1. Introduction
– Methods of Acquiring Knowledge
– Meaning and Goals of the Scientific Method
– Philosophical worldviews/positions of science
– Classification of Research

4.2. Proposal writing


– Meaning and purpose
– Components and Procedures
4.3. Research Designs/Paradigms

• Quantitative
– Survey, Correlational, Experimental Designs, Quasi-
experimental, Ex post facto, Single-subject
• Qualitative
– Case study, Phenomenology, Grounded theory, Ethnography, Narrative, etc.
• Mixed Methods
– Concurrent, Sequential
4.4. Sampling
– Concepts
– Techniques (probability & non-probability)
– Sample size determination
4.5. Ethical Issues in Psychological Research
– Informed consent
– Confidentiality
– Deception
– Do no harm
4.6. Tools of Data Collection

• General Considerations

• Questionnaires and Interviews

• Scales and Tests

• Observational Methods

• Psychometric properties
4.7. Data Analysis

– The Analysis of Quantitative Data

– The Analysis of Qualitative Data

4.8. Writing the Research Report


– Structure of a Research Report

– Writing Style

– Referencing
5. Teaching Methods

• Lecture

• Discussion

• Assignments and presentations in the form of independent and collaborative learning
6. Assessment Strategy

• (A) Attendance & classroom participation (10 points)
• (B) Assignments that include term paper/s (20 points) & presentation (20 points) – (40 points)
• (C) Final exam (50 points)
7. Roles and Responsibilities
(A) Roles and Responsibilities of Students
• Students are expected to:
– attend all lecture sessions,
– fully participate in class discussion,
– fully engage themselves in reading and other activities
during the module period,
– complete and submit assignments on time,
– participate in collaborative learning.
(B) Roles and Responsibilities of the Instructor(s)

• Conduct lecture sessions according to the schedule,

• Review students’ work,

• Provide feedback to students,

• Organize and lead students’ presentations.


References
• American Psychological Association (2010). Publication Manual of the American Psychological Association (6th ed.). Washington, DC: Author.
• Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Los Angeles: Sage.
4.1. Introduction

– Methods of Acquiring Knowledge

– Meaning and Goals of the Scientific Method

– Philosophical worldviews/positions of science

– Classification of Research
4.1.1. Methods of Acquiring Knowledge
• Why take a course on Research Methods?
A background in research has the following benefits. It:
• gives an opportunity to begin the process of learning how to do
research in psychology.
• provides students a solid foundation for understanding the
information relevant to their professions. Professionals must keep
up with advances in their fields.
• makes students more informed & critical thinkers.
• introduces students to scientific thinking.
• helps students become effective research consumers in everyday life.
• helps students to become authorities, not only on research
methodology, but also on particular topics.
Ways of Knowing

• What are the ways of knowing?

• How do people know about the world & themselves?


Ways of Knowing

– Tenacity

– Authority

– Reason

– Common sense

– Science

• None are without their flaws.


1. Tenacity
• It refers to the acceptance of a belief based on the idea that we have always known it to be this way.
• This is an all-too-common method of accepting information.
• These statements are presented over & over again & accepted as true, yet they are rarely examined & evaluated.
• Examples:
– You cannot teach an old dog new tricks.
– Science is always beneficial.
Tenacity…

• TV advertising & political campaigns use this technique when they present a single phrase or slogan repeatedly.
• If you repeat something often enough, people will believe it.

• Weaknesses
– The statement may be just an empty phrase & its accuracy may never have been evaluated.
– It offers no means of correcting erroneous beliefs.
2. Authority
• Whenever we accept the validity of information from a source
that we judge to be an expert, then we are relying on authority
as a source of our knowledge.
• The claims may be based on opinion, tradition, revelation,
direct experiences, etc.
• Examples:
– As children we are influenced by & believe what our parents tell us (at
least for a while),
– As students we generally accept the authority of textbooks &
professors,
– As patients we take the pills prescribed for us by doctors and believe
they will have beneficial effects, and so on.
Authority…
• Relying on the authority of others to establish our beliefs
overlooks the fact that authorities can be wrong.
– Some parents pass along harmful prejudices to their children,
– textbooks and professors are sometimes wrong or their
knowledge may be incomplete or biased, and
– doctors can miss a diagnosis or prescribe the wrong medicine.
• An important aspect of the attitude of a critical thinker is
the willingness to question authority.
• On the other hand, we do learn important things from
authority figures, especially those who are recognized as
experts in particular fields.
3. Use of Reason
• We sometimes arrive at conclusions by using logic & reason.
• Example
– Primates are capable of using language.
– Bozo the chimp is a primate.
– It is logical for us to conclude that Bozo the chimp has the ability to use language.
• Weakness: the conclusion is only as sound as the premises; if the first premise is false, valid logic still produces a false belief.
4. Common Sense
• Common sense is based on our own:
– past experiences
– perceptions of the world
• However, our experiences & perceptions of the world may be limited.
• We make different attributions depending on whether
we are participants or observers in a given situation.
5. Science: Meaning & Goals of the Scientific Method
Activity
• Imagine for a moment as vividly as you can, a
scientist at work. Let your imagination fill in as
many details as possible regarding this scene.
– What does the imagined scientist look like?
– Where is the person working?
– What is the scientist doing?
– What is science?
5. Psychologists as scientists
• Most people do not think of psychologists as scientists in
the same way that they think of physicists, chemists, &
biologists as scientists.
• Instead, people tend to think of psychologists primarily in
their roles as mental health professionals.
• Psychology is not only a profession that promotes human
welfare through counseling, education, & other activities,
but also is a scientific discipline that studies behavior &
mental processes.
• Just as biologists study living organisms & astronomers
study the stars, behavioral scientists conduct research
involving behavior and mental processes.
The Beginning of Behavioral Research
• People have asked questions about the causes of behavior throughout written history.
• Aristotle (384-322 BCE) is sometimes credited with being the first to systematically address basic questions about the nature of human beings & why they behave as they do.
• More ancient writings from India, including the teachings of Gautama Buddha (563-483 BCE), offer equally sophisticated psychological insights into human thought, emotion, and behavior.
The Beginning of Behavioral Research …
• For over two millennia, the approach to answering
questions about human behavior was speculative.
• People would simply concoct explanations of behavior
based on everyday observation, creative insight, or
religious doctrine.
• For many centuries, people who wrote about behavior
tended to be philosophers or theologians, & their
approach was not scientific.
• Some of these early insights into behavior were quite
accurate.
The Beginning of Behavioral Research …
• Many of these earliest explanations of behavior were
wrong.
• Early thinkers should not be faulted for having made mistakes, for even modern researchers sometimes draw incorrect conclusions.
• Unlike behavioral scientists today, early "psychologists" did not rely on scientific research to provide answers about behavior.
• As a result, they had no way to test the validity of their explanations &, thus, no way to discover whether or not their interpretations were accurate.
• Scientific psychology (and behavioral science more
broadly) was born during the last quarter of the
nineteenth century through the influence of early
researchers such as Wilhelm Wundt, William James,
John Watson, G. Stanley Hall, and others.
Science & scientific method
• Science is a way of knowing.
• The scientific method is a formalized way of:
– formulating & stating a research question,
– selecting a research approach & designing a study,
– collecting data,
– analyzing the data & drawing conclusions, &
– sharing the results

• The goal of using the scientific method is to systematically


identify & argue for one explanation for the behavior of
interest over other possible accounts.
Science & scientific method …
• Science's procedures allow us to know “real things, that are
independent of our opinions about them”.
• The chief advantage of science is its objectivity—meant to avoid
human bias or preconception.
• Modern philosophers of science recognize that, because scientists are just as human as everyone else, the ideal of pure objectivity among scientists is impossible.
– to some degree, they rely on authority,

– they often logically argue with each other in an a priori fashion,

– they are prone to social cognition biases in the process of learning from
their experiences.
Science & scientific method …
• How do psychologists go about conducting scientific
research that explains human social behavior?
– There is not one set approach—there are many, so that what
works to study one social phenomenon may be ill-suited to
another.
– Psychologists are flexible when they examine people’s affect,
cognition, and behavior.
– Generally, one shared principle guides psychologists:
Whenever possible, they rely on the scientific method.
Attributes of Scientific Method
• The way of knowing that constitutes science in
general & psychological science in particular involves
a number of interrelated assumptions &
characteristics.
– Science Assumes Determinism
– Science Makes Systematic Observations
– Science Produces Public Knowledge
– Science Produces Data‐Based Conclusions
– Science Produces Tentative Conclusions
– Science Asks Answerable Questions
– Science Develops Theories That Can Be Falsified
1. Science Assumes Determinism
• Determinism means that events, including psychological ones,
have causes.
• Discoverability means that by using agreed‐upon scientific
methods, these causes can be discovered with some degree of
confidence.
• Some philosophers have argued for a strict determinism, which
holds that the causal structure of the universe enables the
prediction of all events with 100% certainty, at least in principle.
• Most scientists, influenced by 20th‐century developments in
physics and the philosophy of science, take a more moderate view
that could be called probabilistic or statistical determinism.
Determinism…
• In psychology, we ultimately would like to know what
causes behavior (determinism), and it is with the tools of
science that we can discover those causes (discoverability).
• Even with the best of methods, psychologists do not expect
to predict psychological phenomena with 100% certainty,
but they have faith that psychological phenomena occur
with some regularity & that the regularities can be
investigated successfully.
2. Science Makes Systematic Observations/Systematic
Empiricism
• A major attribute of science as a way of knowing is the manner in which science goes about the business of searching for regularities in nature.
• All of us do a lot of observing in our daily lives, & we draw conclusions about things based on those observations.
• Science also bases its findings on observations, but they are made much more systematically than our everyday observations.
Systematic observation…
• The scientist’s systematic observations include using:
– (a) precise definitions of the phenomena being measured,
– (b) reliable & valid measuring tools that yield useful & interpretable data,
– (c) generally accepted research methodologies, and
– (d) a system of logic for drawing conclusions & fitting those conclusions into general theories.
Example
• A behavioral researcher who is interested in the effects
of exercise on stress is not likely simply to chat with
people who exercise about how much stress they feel.
• Rather, the researcher is likely to design a carefully
controlled study in which people are assigned randomly
to different exercise programs, then measure their stress
using reliable and valid techniques.
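The random-assignment step in a controlled study like the one above can be sketched in a few lines of code. This is only an illustrative sketch, not any particular study's procedure: the participant labels, condition names, and the `randomly_assign` helper are all invented for the example.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle the participants, then deal them into the conditions
    round-robin so that group sizes differ by at most one."""
    rng = random.Random(seed)          # seed only for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Hypothetical example: six participants, two exercise conditions
people = ["P1", "P2", "P3", "P4", "P5", "P6"]
groups = randomly_assign(people, ["aerobic exercise", "no-exercise control"], seed=42)
for condition, members in groups.items():
    print(condition, members)
```

Dealing the shuffled list round-robin (rather than assigning each person by an independent coin flip) keeps the groups equal in size, which is usually what a researcher wants in a small controlled study.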
3. Science Produces Public Knowledge/Public Verification
• The procedures of science result in knowledge that can be publicly verified.
• Science produces knowledge that is public knowledge.
• This takes the form of defining the terms and research procedures precisely enough so that any other person can repeat the study, presumably achieving the same observable outcome.
• This process of repeating a study to determine if its results occur reliably is called “replication”.
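The idea of replication can be illustrated with a toy simulation: re-running the same "study" on freshly drawn data should reproduce the direction of an effect even though the exact numbers vary from run to run. Everything below (the `run_study` helper, the effect size, the score distributions) is invented purely for illustration.

```python
import random

def run_study(seed, n=50, true_effect=5.0):
    """Simulate one 'study': draw stress scores for a treatment and a
    control group, and return the difference between group means."""
    rng = random.Random(seed)
    treatment = [rng.gauss(20 - true_effect, 4) for _ in range(n)]  # lower stress
    control = [rng.gauss(20, 4) for _ in range(n)]
    return sum(control) / n - sum(treatment) / n  # positive = treatment helped

# "Replications" with fresh samples: the exact difference varies,
# but the direction of the effect recurs across runs.
results = [run_study(seed) for seed in range(5)]
print([round(r, 2) for r in results])
```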
Science Produces Public Knowledge…

• As results are replicated, public confidence in the reality of some psychological phenomenon is increased.
• On the other hand, questions are raised when results
cannot be replicated.
• A failure to replicate is also how scientific fraud is
sometimes suspected and then uncovered.
Science Produces Public Knowledge…
• Research must be conducted in such a way that the
findings of one researcher can be observed,
replicated, and verified by others.
• There are two reasons for this.
• First, the requirement of public verification ensures
that the phenomena scientists study are real and
observable, & not one person's fabrications.
Scientists disregard claims that cannot be verified by
others.
Science Produces Public Knowledge…
• Second, public verification makes science self-correcting.
• When research is open to public scrutiny, errors in
methodology & interpretation can be discovered & corrected
by other researchers.
• The findings obtained from scientific research are not always
correct, but the requirement of public verification increases the
likelihood that errors and incorrect conclusions will be detected
and corrected.
• Public verification requires that researchers report their
methods & their findings to the scientific community, usually in
the form of journal articles or presentations of papers at
professional meetings.
• In this way, the methods, results, and conclusions of a study
can be examined and, possibly, challenged by others.
• Not only does replication catch errors, but it allows researchers
to build on and extend the work of others.
4. Science Produces Data‐Based Conclusions
• Another attribute of science as a way of knowing is that researchers are data driven.
• Research psychologists expect conclusions about
behavior to be supported by evidence gathered through
some systematic procedure.
5. Science Produces Tentative Conclusions
• Conclusions drawn from data are always tentative,
subject to revision based on future research.
• That is, science is a self‐correcting enterprise & its
conclusions are not absolute, yet there is confidence
that research will eventually get one ever closer to the
truth.
6. Science Asks Answerable Questions/Solvable Problems
• Science deals only with solvable problems.
• Empirical questions are those that can be answered through the systematic observations & techniques that characterize scientific methodology.
• They are questions that are precise enough to allow
specific predictions to be made.
6. Science Asks Answerable Questions/Solvable Problems…
• Researchers can investigate only those questions that are
answerable given current knowledge & research
techniques.
• This means many questions fall outside the realm of
scientific investigation.
• For example, consider the question: Is there life after death? No one has yet devised a way of studying this in empirical, systematic, and publicly verifiable ways.
• This does not necessarily imply that life does not exist after death or that the question is unimportant.
• It simply means that this question is beyond the scope of
scientific investigation.
7. Science Develops Theories That Can Be Falsified
• When designing research studies, an early step in the process is to reshape the empirical question into a hypothesis, which is a prediction about the study’s outcome.
• That is, prior to having empirical data, the hypothesis
is your best guess about the answer to your
empirical question.
4.1.2. Goals of Behavioral Research
• As behavioral researchers design and conduct studies, they generally do so with one of four goals in mind:
– description

– explanation

– prediction

– application

• That is, they design their research with the intent of describing
behavior, explaining behavior, predicting behavior, or applying
knowledge to solve behavioral problems.
1. Describing Behavior
• Some behavioral research focuses primarily on
describing patterns of behavior, thought, or emotion.
– Perceptions of clients of counseling services
– Provision of career counseling services in correctional settings.
– Coping skills of counseling clients
– Career maturity among university students
2. Explaining Behavior
• Most behavioral research goes beyond studying what
people do to attempting to understand why they do it.
• Most researchers regard explanation as the most
important goal of scientific research.
• Basic research is conducted to understand behavior
regardless of whether the knowledge is immediately
applicable.
• The immediate goal of basic research is to explain a
particular psychological phenomenon rather than to
solve a particular problem.
– Effectiveness of CBT in reducing symptoms of depression
3. Predicting Behavior

• Many behavioral researchers are interested in predicting people's behavior. For example:
– Career counselors try to predict students’ career
development using tests & interviews.
– Forensic psychologists are interested in predicting which
criminals are likely to be dangerous if released from prison.
– Developing ways to predict violent tendencies requires
considerable research.
4. Solving Behavioral Problems
• The goal of applied research is to find solutions for
certain problems rather than to understand
psychological processes. For example:
– School counselors are hired by schools to study and solve
problems related to students’ learning and achievement.
– community psychologists are sometimes asked to investigate
social problems such as racial tension, littering, and violence.
• Other applied researchers conduct evaluation research
(also called program evaluation) using behavioral
research methods to assess the effects of social or
institutional programs on behavior.
4.1.3. Philosophical worldviews/positions of science

• Researchers hold many assumptions that directly impinge on their work.
• The decisions that researchers make in the course of
designing & conducting research are guided by their
assumptions about:
– the world

– the nature of scientific investigation


4.1.3. Philosophical worldviews/positions of
science…
• The research design process begins with philosophical
assumptions that the inquirers make in deciding to
undertake a study.
• Researchers bring their own worldviews, paradigms, or sets of beliefs to the research project, & these inform the conduct & writing of the study.
• Good research requires making these assumptions,
paradigms, & frameworks explicit in the writing of a
study, & at a minimum, to be aware that they influence
the conduct of inquiry.
Philosophical assumptions
• The assumptions reflect a particular stance that researchers
make when they choose their research design.
• In the choice of their research, inquirers make certain
assumptions.
• These philosophical assumptions consist of a stance toward:
– the nature of reality (ontology): what do we investigate; objective versus subjective reality,
– how the researcher knows & what she or he knows (epistemology),
– the methods used in the process (methodology).
• Ontological assumptions give rise to epistemological
assumptions; these, in turn, give rise to methodological
considerations; & these, in turn, give rise to issues of
instrumentation & data collection.

• This view moves us beyond regarding research methods as simply a technical exercise.
Paradigms or Worldviews
• After researchers make this choice, they then further shape their research by bringing to the inquiry paradigms or worldviews.
• A paradigm or worldview is "a basic set of beliefs that guide action". The major types include:
• Positivism
• The Postpositivist Worldview
• The Constructivist Worldview
• The Transformative Worldview
• The Pragmatic Worldview
1. Positivism
• Positivism suggests that there is a straightforward relationship between the world (objects, events, phenomena) & our perception, & understanding, of it.
• There is a direct correspondence between things & their representation.
• Positivists believe that it is possible to describe what is
‘out there’ and to get it right.
• The goal of research is to produce objective knowledge, that is, knowledge that is impartial and unbiased, based on a view from ‘the outside’, without personal involvement or vested interests on the part of the researcher.
2. The Postpositivist Worldview
• Postpositivist assumptions hold true more for quantitative
research than qualitative research.

• The name post-positivism is used because it represents the thinking after positivism, challenging the traditional notion of the absolute truth of knowledge & recognizing that we cannot be positive about our claims of knowledge when studying the behavior & actions of humans.
3. The Constructivist Worldview
• Constructivism or social constructivism is seen as an approach to
qualitative research.
• Constructivists believe that individuals seek understanding of the
world in which they live & work.
• Individuals develop subjective meanings of their experiences—
meanings directed toward certain objects or things.
• These meanings are varied & multiple, leading the researcher to look
for the complexity of views rather than narrowing meanings into a few
categories or ideas.
• The goal of the research is to rely as much as possible on the
participants’ views of the situation being studied.
• Meanings are formed through:
– interaction with others
– historical and cultural norms that operate in individuals’ lives.

• Researchers recognize that their own backgrounds shape their


interpretation.
• The researcher’s intent is to make sense of (or interpret) the
meanings others have about the world.
• Rather than starting with a theory (as in postpositivism),
inquirers generate or inductively develop a theory or pattern
of meaning.
Assumptions of the Constructivist Worldview
1. Human beings construct meanings as they engage with the world they are interpreting.
– Qualitative researchers tend to use open-ended questions so that the participants can share their views.
2. Humans engage with their world and make sense of it based on their historical & social perspectives.
3. The basic generation of meaning is always social, arising in & out of interaction with a human community.
4. The Transformative Worldview
• This position arose during the 1980s & 1990s from individuals who felt that the postpositivist assumptions imposed structural laws & theories that did not fit marginalized individuals or issues of power & social justice, discrimination, & oppression that needed to be addressed.
– It includes groups of researchers such as: critical theorists; participatory action researchers; Marxists; feminists; racial and ethnic minorities; persons with disabilities; indigenous peoples; and members of the lesbian, gay, bisexual, transsexual, and queer communities.


• This worldview assumes that the constructivist stance did not go far
enough in advocating for an action agenda to help marginalized peoples.
• A transformative worldview holds that research inquiry needs to be
intertwined with politics and a political change agenda to confront social
oppression at whatever levels it occurs.
• Researchers try to address issues such as empowerment, inequality,
oppression, domination, suppression, and alienation.
• The researcher often begins with one of these issues as the focal point
of the study.
• The participants may help design questions, collect data, analyze
information, or reap the rewards of the research.
• Transformative research provides a voice for these participants, raising
their consciousness or advancing an agenda for change to improve their
lives.
• It becomes a united voice for reform and change.
5. The Pragmatic Worldview
• Pragmatism derives from the work of Peirce, James, Mead, and Dewey.

• There is a concern with applications—what works—and solutions to problems.
• Instead of focusing on methods, researchers emphasize the research
problem and use all approaches available to understand the problem.
• As a philosophical underpinning for mixed methods studies, pragmatism focuses attention on the research problem in social science research and then uses pluralistic approaches to derive knowledge about the problem.
Assumptions of Pragmatic Worldview
• Individual researchers have a freedom of choice.
– Researchers are free to choose the methods, techniques, and
procedures of research that best meet their needs and purposes.

• Truth is what works at the time. It is not based in a duality between reality independent of the mind or within the mind.
– Investigators use both quantitative and qualitative data because they
work to provide the best understanding of a research problem.
4.1.4. Classification of Research

• Research is defined by Kerlinger (1970) as the systematic, controlled, empirical & critical investigation of hypothetical propositions about the presumed relations among natural phenomena.
Varieties of Psychological Research

• Research in psychology can be classified along several dimensions.
• Research can be classified in terms of:

– its goals,

– setting, and

– type of data collected.


The Goals: Basic versus Applied Research

• Some research in psychology emphasizes describing, predicting, and explaining the fundamental principles of behavior and mental processes; such research goals define basic research.
• In contrast, applied research is so named because it
has direct and immediate relevance to the solution of
real‐world problems.
The Setting: Laboratory versus Field Research
• Another way of classifying studies is by location.

• The distinction hinges on whether the study occurs inside or outside the controlled environment of a laboratory.
• Laboratory research allows the researcher greater control;
conditions of the study can be specified more precisely, and
participants can be selected and placed in the different
conditions of the study more systematically.
• In contrast, in field research, the environment more closely
matches the situations we encounter in daily living.
The Data: Quantitative versus Qualitative Research
• The distinction is between quantitative methods and qualitative
methods of research.
• Most research in psychology is quantitative in nature.
• That is, with quantitative research, the data are collected and
presented in the form of numbers—average scores for different
groups on some task, percentages of people who do one thing or
another, graphs and tables of data, and so on.
• In recent years, however, a number of research psychologists have
begun doing what is known as qualitative research, sometimes
borrowing techniques from sociologists and anthropologists.
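As a small illustration of what data "in the form of numbers" looks like in practice, the sketch below computes an average score per group and a percentage across all participants. The scores themselves are invented for the example, not drawn from any real dataset.

```python
from statistics import mean

# Hypothetical stress scores (0-40 scale) for two groups; invented data
scores = {
    "exercise": [12, 15, 10, 14, 11],
    "control":  [22, 19, 25, 21, 23],
}

# Average score for each group
averages = {group: mean(values) for group, values in scores.items()}

# Percentage of all participants scoring above 18
all_scores = [s for values in scores.values() for s in values]
pct_high = 100 * sum(s > 18 for s in all_scores) / len(all_scores)

print(averages)                      # per-group averages
print(f"{pct_high:.0f}% scored above 18")
```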
4.2. Proposal writing

4.2.1. Meaning and purpose

4.2.2. Components and Procedures

4.2.3. Piloting
4.2.1. Meaning and purpose of a Research Proposal
• Having chosen the title, researchers still have to plan the research.
• This entails working through the sequence of activities in detail.
• The planning of research is not an arbitrary matter.
• The research community & consumers of research findings expect that research be conducted rigorously, scrupulously & in an ethically sound manner.
• All this necessitates careful planning.
• Researchers should embark on research ‘with their eyes open’.
4.2.1. Meaning and purpose of a research proposal …
• Researchers should identify what they need to:
– achieve in each area of activity,
– gain a realistic idea of how long it will take,
– know what resources (e.g. help from other people or financial support) they will need,
– consider regarding issues of sampling, reliability, & validity,
– consider what avenues of approach are open to them.


4.2.1. Meaning and purpose of a Research Proposal …

• Research planning involves the following issues:


– Designing the study – specifying what information is
needed, & how it will be collected, from whom, & when
and where.
– Preparing materials (including questionnaires,
laboratory space, interview schedules).
– Identifying participants for the study and then contacting
and gaining their agreement to be involved (including
timetabling information collection from them).
– Ensuring that what researchers intend to do is ethical.
– Collection of information.
– Collation and recording of information.
– Analysis of information – always decide on the analytical
approach before collecting the information.
4.2.2. Components and Procedures
I. Introduction
1.1. Background
1.2. Statement of the problem
1.3. Objectives
1.4. Significance of the study
1.5. Delimitation of the study
1.6. Operational definitions of terms

II. Related Literature Review


III. Methods
3.1. Design
3.2. Setting
3.3. Population, sample and sampling
3.4. Tools/instruments of data collection
3.5. Procedures of data collection
3.6. Methods of data analyses
3.7. Ethical issues

Work plan
Budget
References
Appendices
4.2.3. Piloting
• An important part of preparing the main information collection is the piloting of the research.
• Piloting is necessary irrespective of the method researchers use.
• Piloting entails:
– Checking whether the data collection techniques are actually doing what they are supposed to do.
– Running the information collection process with a small number of participants to see if there are unanticipated difficulties.

• The participants are often asked to give their feedback on how


they reacted to what happened to them & this is used to improve
the study.
4.2.3. Piloting …

• Pilot studies can prevent a lot of wasted time and effort.

• The pilot data should not be included in the main data.

• The pilot is a trial of the way the study is designed & is


being executed, so material from it should not be mixed
with information collected subsequently in the full study.
4.2.3. Piloting …
• Once the piloting has been completed, the overall plan for the
research can be outlined.
• Having the plan, & sticking to it as far as possible, is useful.
• Researchers should not allow circumstances to make them lose
track of what they should be doing.
• Plans often change as researchers get into a study and things do
not quite pan out as expected.
• Consciously adjusting the plan as things change is important.
• Revisit the plan and update it.
• It may be necessary to refine the means of collecting information
or change the participant group, or tweak research question.
Individual Assignment (1)
Instruction
•Identify a research topic in the area of counseling
psychology and develop a research proposal of five to seven
pages (12-point font, Times New Roman, & 1.5 spacing).
•The proposal needs to include all relevant components.
•The proposal needs to be submitted in soft copy (through
email) on or before January 29, 2022. It will be presented in
class for about 10 minutes each on the same date.
4.3. Research Designs
• Researchers not only select a qualitative, quantitative, or
mixed methods study; they also decide on a type of
study within these choices.
• Research designs are types/strategies of inquiry within
qualitative, quantitative, & mixed methods approaches
that provide specific direction.
4.3. Research Designs …
• Quantitative
– Survey, Correlational, Experimental Designs, Quasi-
experimental, Single-subject
• Qualitative
– Case study, Phenomenology, Grounded theory,
Ethnography, etc.
• Mixed Methods
– Concurrent, Sequential
Quantitative Designs
• During the late 19th and throughout the 20th century, strategies
of inquiry associated with quantitative research were those that
invoked the postpositivist worldview.

• Quantitative research is an approach for testing objective

theories by examining the relationship among variables.

• Variables need to be measured using instruments & the

numbered data can be analyzed using statistical procedures.


Quantitative Designs…

• The final written report has a set structure consisting of


introduction, literature & theory, methods, results, &
discussion.

• Those who engage in this form of inquiry have


assumptions about testing theories deductively, building
in protections against bias, controlling for alternative
explanations, & being able to generalize and replicate the
findings.
Types of Quantitative Research Designs
– Survey

– Correlational

– Experimental

– Quasi-experimental

– Single-subject
Survey Research Design
• Survey research provides a quantitative or numeric
description of trends, attitudes, or opinions of a
population by studying a sample of that population.
• It includes cross-sectional and longitudinal studies
using questionnaires or structured interviews for data
collection—with the intent of generalizing from a
sample to a population.
Correlational Research Design

• In correlational design, investigators use the


correlational statistic to describe and measure the
degree or association (or relationship) b/n two or
more variables or sets of scores.
True Experimental Research Design

• Experimental research seeks to determine if a specific


treatment influences an outcome.

• The researcher provides a specific treatment to one group
and withholds it from another, and then determines
how both groups score on an outcome.

• Experiments include true experiments, with the random


assignment of subjects to treatment conditions.
Quasi-experimental Research Design

• Quasi-experiments are less rigorous experiments.

• Quasi-experiments use nonrandomized assignments.


Single-subject Experimental Design

• Applied behavioral analysis or single-subject


experiments in which an experimental treatment is
administered over time to a single individual or a
small number of individuals.
Qualitative Research Designs
– Case study , Phenomenology, Grounded theory,
Ethnography
Qualitative Research Designs
• The research design begins with philosophical assumptions.
• Researchers bring their own worldviews, paradigms, or sets of
beliefs to the research project, and these inform the conduct
and writing of the qualitative study.
• Good research requires making these assumptions, paradigms,
and frameworks explicit in the writing of a study and, at a
minimum, being aware that they influence the conduct of
inquiry.
Five philosophical assumptions lead to an individual's choice
of qualitative research:
– Ontology: a stance toward the nature of reality
– Epistemology: how the researcher knows what she or he knows
– Axiology: the role of values in the research
– Rhetorical: the language of research
– Methodology: the methods used in the process
The Characteristics of Qualitative Research
• Natural setting

• Researcher as key instrument

• Multiple methods

• Complex reasoning through inductive and deductive logic

• Participants’ multiple perspectives and meanings

• Context-dependent

• Emergent design

• Reflexivity

• Holistic account
What a Qualitative Study Requires From Us
• Qualitative inquiry is for the researcher who is willing
to do the following:
– Commit to extensive time in the field.

– Engage in the complex, time-consuming process of data


analysis.
– Write lengthy and descriptive passages.

– Embrace dynamic and emergent procedures.

– Attend to anticipated and developing ethical issues.


Qualitative Constructivist/ Interpretivist Format
Introduction
•Statement of the problem (including literature about the problem)
•Purpose of the study
•The research questions
•Delimitations and limitations
Procedures
•Characteristics of qualitative research (optional)
•Qualitative research strategy
•Role of the researcher
•Data collection procedures
•Data analysis procedures
•Strategies for validating findings
•Narrative structure
•Anticipated ethical issues
•Significance of the study
•Preliminary pilot findings
•Expected outcomes
Appendices: Interview questions, observational forms, timeline, and proposed
budget
A Theoretical Lens Format
Introduction
•Overview
•Type and purpose
•Potential significance
•Framework and general research questions
•Limitations
Review of related literature
•Theoretical traditions
•Essays by informed experts
•Related research
Design and methodology
•Overall approach and rationale
•Site or population selection
•Data-gathering methods
•Data analysis procedures
•Trustworthiness
•Personal biography
•Ethics and political considerations
Appendices: Interview questions, observational forms, timeline, and proposed
budget
Approaches to Qualitative Inquiry
• Narrative

• Phenomenological

• Grounded Theory

• Case Study

• Ethnography
1. Narrative research
• Narrative research is a design of inquiry in which the researcher studies the
lives of individuals and asks one or more individuals to provide stories about
their lives.
• Narrative is a spoken or written text giving an account of an event/action or
series of events/actions, chronologically connected.
• This information is then often retold or restoried by the researcher into a
narrative chronology.
• It begins with the experiences as expressed in lived and told stories of
individuals.
• The narrative combines views from the participant’s life with those of the
researcher’s life in a collaborative narrative.
Procedures for Conducting Narrative Research
1. Determine if the research problem or question best fits narrative
research.

2. Select one or more individuals who have stories or life experiences


to tell, and spend considerable time with them gathering their
stories through multiple types of information.

3. Collect information about the context of these stories

4. Analyze the participants' stories, and then "restory" them into a


framework that makes sense.

5. Collaborate with participants by actively involving them in the


research.
2. Phenomenological
• Phenomenological research is a design of inquiry in which the
researcher describes the lived experiences of individuals about a
phenomenon as described by participants.
• Phenomenologists focus on describing what all participants have
in common as they experience a phenomenon (e.g., grief is
universally experienced).
• Its purpose is to reduce individual experiences with a
phenomenon to a description of the universal essence (a "grasp
of the very nature of the thing").
• The inquirer collects data from persons who have experienced
the phenomenon, and develops a composite description of the
essence of the experience for all of the individuals.
• The description consists of "what" they experienced and "how"
they experienced it.
Procedures for Conducting Phenomenological Research
• The major procedural steps in the process would be as follows:
– The researcher determines if the research problem is best examined using a
phenomenological approach.
– A phenomenon of interest is identified for study.
– The researcher recognizes and specifies the broad philosophical assumptions of
phenomenology.
– Data are collected from the individuals who have experienced the phenomenon.
– The participants are asked two broad, general questions: What have they experienced
in terms of the phenomenon? What contexts or situations have typically influenced or
affected their experiences of the phenomenon?
– Phenomenological data analysis
– Significant statements & themes are used to write a description of what the
participants experienced (textural description).
– The researcher writes a composite description that presents the "essence" of the
phenomenon, called the essential, invariant structure (or essence).
3. Grounded theory
• GT is a qualitative research design in which the inquirer generates a
general explanation (a theory) of a process, action, or interaction shaped
by the views of a large number of participants.

• The intent of a grounded theory study is to move beyond description and


to generate or discover a theory, an abstract analytical schema of a
process (or action or interaction).

• A key idea is that this theory-development does not come "off the shelf,"
but rather is generated or "grounded" in data from participants who have
experienced the process.

• This process involves using multiple stages of data collection and the
refinement and interrelationship of categories of information.
Procedures
• The researcher needs to begin by determining if grounded theory is
best suited to study his or her research problem
• The research questions that the inquirer asks of participants will
focus on understanding how individuals experience the process
and identifying the steps in the process.
• These questions are typically asked in interviews, although other
forms of data may also be collected, such as observations,
documents, and audiovisual materials.
• The analysis of the data proceeds in stages.
Stages of the Analysis
• In open coding, the researcher forms categories of information
about the phenomenon being studied by segmenting information.
• In axial coding, the investigator assembles the data in new ways
after open coding.
• In selective coding, the researcher may write a "story line" that
connects the categories.
• Develop and visually portray a conditional matrix that elucidates
the social, historical, and economic conditions influencing the
central phenomenon.
• The result of this process of data collection and analysis is a
theory, a substantive-level theory, written by a researcher close to
a specific problem or population of people.
4. Ethnography

• Ethnography is a qualitative design in which the


researcher describes and interprets an intact
cultural group in a natural setting over a
prolonged period of time.
Procedures for Conducting an Ethnography
• Determine if ethnography is the most appropriate design to use to
study the research problem.
• Identify and locate a culture-sharing group to study

• Select cultural themes or issues to study about the group.

• Determine which type of ethnography to use (realistic or critical)

• Gather information where the group works and lives

• Forge a working set of rules or patterns as the final product of this


analysis.
5. Case studies

• Case studies are a design of inquiry in which the

researcher develops an in-depth analysis of a case, often a

program, event, activity, process, or one or more

individuals.

• Case study research involves the study of an issue

explored through one or more cases within a bounded

system (i.e., a setting, a context).


Procedures for Conducting a Case Study
– Determine if a case study approach is appropriate to
the research problem.
– Identify their case or cases
– Collect extensive data, drawing on multiple
sources of information, such as observations,
interviews, documents, and audiovisual materials.
– Conduct a holistic analysis of the entire case or an
embedded analysis of a specific aspect of the case
Mixed Methods
• Mixed methods research is an approach to inquiry involving :
– collecting both quantitative & qualitative data,

– integrating the two forms of data, and

– using distinct designs that may involve philosophical assumptions


and theoretical frameworks

• The core assumption of this form of inquiry is that the


combination of qualitative and quantitative approaches
provides a more complete understanding of a research
problem than either approach alone.
Mixed Methods Designs…

• All methods have biases and weaknesses, and the collection
of both quantitative and qualitative data neutralizes the
weaknesses of each form of data.
• Triangulating data sources, a means for seeking
convergence across qualitative and quantitative
methods, has been found important.
Types of Mixed Methods Designs
• Although many designs exist in the mixed methods
field, we focus on the three primary models found in
the social sciences today:
– Convergent parallel mixed methods

– Explanatory sequential mixed methods

– Exploratory sequential mixed methods


Convergent parallel mixed methods
• This is a form of mixed methods design in which the researcher
converges or merges quantitative and qualitative data in order
to provide a comprehensive analysis of the research problem.
• In this design, the investigator typically collects both forms of
data at roughly the same time and then integrates the
information in the interpretation of the overall results.
• Contradictions or incongruent findings are explained or further
probed in this design.
Explanatory sequential mixed methods
• Explanatory sequential mixed methods is one in which the researcher
first conducts quantitative research, analyzes the results and then builds
on the results to explain them in more detail with qualitative research.
• It is considered explanatory because the initial quantitative data results
are explained further with the qualitative data.
• It is considered sequential because the initial quantitative phase is
followed by the qualitative phase.
• This type of design is popular in fields with a strong quantitative
orientation.
Exploratory sequential mixed methods
• It is the reverse sequence from the explanatory sequential design.

• In this approach the researcher first begins with a qualitative


research phase and explores the views of participants.
• The data are then analyzed, and the information used to build into a
second, quantitative phase.
• The qualitative phase may be used to build an instrument that best
fits the sample under study, to identify appropriate instruments to
use in the follow-up quantitative phase, or to specify variables that
need to go into a follow-up quantitative study.
4.4. Sampling

4.4.1. Meaning and Nature of Sampling

4.4.2. Sample size determination

4.4.3. Sampling Techniques


Activity in Class

• What is sampling?

• Why do we sample?

• How do we determine the sample size?

• What are the types of sampling techniques?


4.4.1. Meaning and Nature of Sampling
• Among the decisions that researchers face each time they design a study
is selecting research participants.
• Researchers can rarely examine every individual in the population who is
relevant to their interests
• There is absolutely no need to study every individual in the population of
interest.
• Researchers collect data from a subset, or sample, of individuals in the
population.
• Sampling is the process by which a researcher selects a sample of
participants for a study from the population of interest.
Participants or Subjects?

• It used to be common to refer to the people studied in


psychological research, especially in experiments, as ‘subjects’.
• There are objections to this.

• The British Psychological Society requires all publications to use the


term ‘research participants’ or simply ‘PARTICIPANTS’.
• The terminology used should reflect the relationship between
researcher and researched as a human interaction rather than as a
set of procedures carried out on a passive ‘subject’.
4.4.2. Sample size determination
• In general, yes: the larger the sample size, the better.

• This is simply because larger samples have less sampling error.

• There are a number of factors that should be considered to
determine the appropriate sample size:
– The kind of analysis that will be conducted on the data.
– The likely response rate for the study.
– The heterogeneity of the population from which we draw our sample.
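One widely used way of turning such considerations into a concrete number, for studies estimating a proportion, is Cochran's formula; the sketch below is an illustration, not part of the module text, and the default values are conventional choices:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula for estimating a proportion:
        n = z^2 * p * (1 - p) / e^2
    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: desired margin of error
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(cochran_sample_size())        # 385 for 95% confidence, +/-5% margin
print(cochran_sample_size(e=0.03))  # 1068 for a tighter +/-3% margin
```

Note how shrinking the acceptable margin of error sharply increases the required sample size; in practice the result is then inflated further to allow for the expected nonresponse rate.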
4.4.3. Sampling Techniques

1. Probability Sampling Techniques

2. Non probability Sampling Techniques


Probability Sampling
• When the purpose of a study is to accurately describe the
behavior, thoughts, or feelings of a particular group, as it is with
most descriptive research, researchers must ensure that the
sample they select is representative of the population at large.
• A representative sample is one from which we can draw accurate,
unbiased estimates of the characteristics of the larger population.
• We can draw accurate inferences about the population from data
obtained from a sample only if it is representative.
The Error of Estimation

• Samples rarely mirror their parent populations in every


respect.
• The characteristics of the individuals selected for the
sample always differ somewhat from the characteristics of
the general population.
• This difference, called sampling error, causes results
obtained from the sample to differ from what would have
been obtained had the entire population been studied.
Probability sampling…

• When probability sampling techniques are used,


researchers can estimate how much their results are
affected by sampling error.
• The error of estimation (also called the margin of error)
indicates the degree to which the data obtained from
the sample is expected to deviate from the population
as a whole.
Probability sampling…
• Researchers can calculate the error of estimation only if they
know the probability that a particular individual in the
population was included in the sample, and they can do this
only when they use a probability sample.
• The error of estimation is a function of three things:
– sample size,
– population size,
– and variance of the data
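For a sample mean, one common formulation combines exactly these three quantities: e = z · s/√n, optionally adjusted by a finite population correction. The sketch below is illustrative, with hypothetical values:

```python
import math

def margin_of_error(s, n, N=None, z=1.96):
    """Margin of error for a sample mean at ~95% confidence.
    s: sample standard deviation (the variance of the data)
    n: sample size
    N: population size, if finite (applies a finite population correction)
    """
    se = s / math.sqrt(n)                  # standard error of the mean
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * se

# Larger samples shrink the error of estimation:
print(round(margin_of_error(s=10, n=100), 2))  # 1.96
print(round(margin_of_error(s=10, n=400), 2))  # 0.98
```

Quadrupling the sample size halves the margin of error, which is why precision gains become progressively more expensive as samples grow.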
Types of Probability Sampling Techniques

• Simple random sampling

• Systematic Random Sampling

• Stratified sampling

• Cluster sampling
Random Sampling
• The simplest form of probability sampling is to take a simple
random sample.
• Each member of the population has an equal chance of
being selected as a member of the sample.
– For example, to select a random sample of 100 students from a
school, we could place all of their names in a large hat and pick
out 100.

• The procedure usually involves software that uses a random


number generator or table.
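The software procedure mentioned above can be sketched in a few lines of Python; the roster and sample size here are hypothetical:

```python
import random

# Hypothetical roster of 500 students.
population = [f"student_{i:03d}" for i in range(1, 501)]

random.seed(42)  # fixed seed only to make the illustration reproducible
sample = random.sample(population, k=100)  # each student has an equal chance

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- sampling is without replacement, so no duplicates
```

`random.sample` draws without replacement, which matches the "names in a hat" procedure: once a name is picked, it cannot be picked again.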
Advantages
• It is often an effective, practical way to create a
representative sample.
• It is sometimes the method of choice for ethical
reasons as well.
– In situations in which only a small group can receive
some benefit or must incur some cost, and there is no
other reasonable basis for decision‐making, random
sampling is the fairest method to use.
Limitations

• There are two problems with simple random sampling.


– There may be systematic features of the population we might
like to have reflected in our sample.
– The procedure may not be practical if the population is
extremely large.

• The first problem is solved by using stratified sampling;


cluster sampling solves the second difficulty.
Systematic Sampling

• Systematic sampling -we draw the sample from the


population at fixed intervals from the list.
• The starting point must be randomized; otherwise the sample
would always begin with the first person on the list.
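The fixed-interval procedure with a random start can be sketched as follows; the roster is hypothetical:

```python
import random

def systematic_sample(population, k):
    """Select every (N/k)-th member of the list after a random start."""
    interval = len(population) // k     # the fixed sampling interval
    start = random.randrange(interval)  # randomized starting point
    return population[start::interval][:k]

random.seed(1)
roster = list(range(1, 101))  # hypothetical list of 100 members
chosen = systematic_sample(roster, 10)
print(chosen)  # 10 members, exactly 10 positions apart
```

Because the start is randomized, every member of the list has the same chance of selection, even though only one random number is drawn.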
Stratified Sampling
• In a stratified sample, the proportions of important subgroups in the
population are represented precisely.
• Some judgment is required; the researcher must decide how many
layers (or strata) to use.
• It is up to the researcher to use good sense based on prior research or
the goals of the current study.
• Stratified random sampling is a variation of simple random sampling.
• Rather than selecting cases directly from the population, we first
divide the population into two or more strata.
• A stratum is a subset of the population that shares a particular
characteristic.
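The two-step procedure (divide into strata, then randomly sample within each) can be sketched as below; the strata and sizes are hypothetical, and for simplicity the allocation is proportional:

```python
import random

def stratified_sample(strata, n):
    """Draw a proportional random sample from each stratum.
    strata: dict mapping stratum name -> list of members
    n: total desired sample size
    """
    total = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        # Proportional allocation (rounding may need adjustment
        # when strata sizes do not divide evenly).
        k = round(n * len(members) / total)
        sample.extend(random.sample(members, k))
    return sample

random.seed(7)
school = {
    "freshmen": [f"F{i}" for i in range(300)],    # 60% of the population
    "sophomores": [f"S{i}" for i in range(200)],  # 40% of the population
}
sample = stratified_sample(school, n=50)
print(len(sample))  # 50 -- 30 freshmen and 20 sophomores, mirroring the population
```

The resulting sample reproduces the population's subgroup proportions exactly, which simple random sampling only approximates on average.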
Cluster Sampling

• The researcher randomly selects a cluster of people all having


some feature in common.
• Often, cluster sampling involves a multistage sampling process
in which we begin by sampling large clusters, then we sample
smaller clusters from within the large clusters, then we sample
even smaller clusters, and finally we obtain our sample of
participants.
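The multistage process can be sketched with a hypothetical two-stage design (randomly sample schools, then classes within those schools, then take everyone in each chosen class):

```python
import random

random.seed(3)

# Hypothetical population: 20 schools, each with 8 classes of 30 pupils.
schools = {
    f"school_{s}": {
        f"class_{c}": [f"s{s}c{c}p{p}" for p in range(30)] for c in range(8)
    }
    for s in range(20)
}

chosen_schools = random.sample(list(schools), 5)  # stage 1: large clusters
participants = []
for sch in chosen_schools:
    chosen_classes = random.sample(list(schools[sch]), 2)  # stage 2: smaller clusters
    for cls in chosen_classes:
        participants.extend(schools[sch][cls])  # everyone in each sampled class

print(len(participants))  # 5 schools x 2 classes x 30 pupils = 300
```

Only schools and classes need to be listed in advance, which is the practical appeal of cluster sampling: no complete roster of all individual pupils is required.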
Nonprobability Sampling

• With a nonprobability sample, researchers have no way of


knowing the probability that a particular case will be
chosen for the sample
• The primary types of nonprobability samples are
– Convenience sampling
– Quota sampling
– Purposive sampling
– Snowball sampling
Convenience Sampling

• In a convenience sample, researchers use whatever


participants are readily available.
• Individuals who meet the general requirements of the
study are recruited in a variety of nonrandom
ways.
Quota Sampling
• A quota sample is a convenience sample in which the researcher
takes steps to ensure that certain kinds of participants are
obtained in particular proportions.
• The researcher specifies in advance that the sample will contain
certain percentages of particular kinds of participants
• In quota sampling, the researcher attempts to accomplish the
same goal as stratified sampling—representing subgroups
proportionally—but does so in a nonrandom fashion.
Purposive Sampling
• For a purposive sample, researchers use their judgment to decide which
participants to include in the sample, trying to choose respondents who
are typical of the population.
– For instance, when Stanley Milgram first recruited participants for his
obedience studies, he placed ads in the local newspaper asking for
volunteers. He deliberately (i.e., purposely) avoided using college students
because he was concerned they might be “too homogeneous a group. . . .
[He] wanted a wide range of individuals drawn from a broad spectrum of
class backgrounds."
Snowball sampling

• In snowball sampling, once a member of a particular group


has been surveyed, the researcher asks that person to help
recruit additional subjects through a network of friends.
• This sometimes occurs when a survey is designed to
measure attitudes and beliefs of a relatively small group or
a group that generally wishes to remain hidden (e.g.,
prostitutes).
4.5. Ethical Issues in Psychological Research

4.5.1. Overview on Ethics in Psychological Research

4.5.2. Approaches to Ethical Decisions

4.5.3. Benefits and costs of research from the


ethical point of view

4.5.4. Major ethical issues


4.5.1. Overview on ethical issues in psychological
research
• Most ethical issues in research arise b/c behavioral scientists have
two sets of obligations that sometimes conflict.

1. The behavioral researcher's job is to provide information that


enhances our understanding of behavioral processes and
leads to the improvement of human or animal welfare.
– This obligation requires that scientists pursue research they believe
will be useful in extending knowledge or solving problems.

2. Behavioral scientists have an obligation to protect the rights


and welfare of the human & nonhuman participants that
they study.
4.5.2. Approaches to Ethical Decisions
• When the two obligations coincide, few ethical issues arise.

• When the researcher's obligations to science & society conflict


with obligations to protect the rights & welfare of research
participants, the researcher faces an ethical dilemma.
• People tend to adopt one of three general approaches to
resolving ethical issues about research.
• These three approaches differ in terms of the criteria that
people use to decide what is right and wrong.
Approaches to Ethical Decisions…
1. Deontology
– An individual operating from a position of deontology maintains
that ethics must be judged in light of a universal moral code.
– Certain actions are inherently unethical and should never be
performed regardless of the circumstances.
– A researcher who operates from a deontological
perspective might argue, for example, that lying is
immoral in all situations regardless of the consequences.
2. Ethical skepticism
– Ethical skepticism asserts that ethical rules are
arbitrary and relative to culture and time.
– According to ethical skepticism, ethical decisions must
be a matter of the individual's conscience: One
should do what one thinks is right and refrain from
doing what one thinks is wrong.
– The final arbiters on ethical questions are individuals
themselves.
3. Utilitarianism
• Utilitarianism maintains that judgments regarding the
ethics of a particular action depend on the
consequences of that action.
• An individual operating from a utilitarian perspective
believes that the potential benefits of a particular
action should be weighed against the potential costs.

• If the benefits are sufficiently large relative to the

costs, the action is ethically permissible.


Utilitarianism …
• The official approach to research ethics in the APA principles is
essentially a utilitarian or pragmatic one.
• Rather than specifying a rigid set of do's and don'ts, these
guidelines require that researchers weigh potential benefits of
the research against its potential costs and risks.
• In determining whether to conduct a study, researchers must
consider its likely benefits and costs.
• Weighing the pros & cons of a study is called a cost-benefit
analysis.
4.5.3. Potential Benefits and Costs in Research
Benefits
– Knowledge

– Improvement on research techniques

– Practical outcomes

– Benefits for the researcher

– Benefits for the participants


Potential Costs
• Research participants invest a certain amount of time &
effort
• Research costs money, equipment, & supplies, &
researchers must determine whether their research is
justified financially.
• Participants may suffer social discomfort, threats to
their self-esteem, stress, boredom, anxiety, pain, etc.
• Climate of distrust toward behavioral research
Balancing Benefits and Costs
• The issue is whether the benefits expected from a study are
sufficient to warrant the expected costs.
• A study with only limited benefits warrants only minimal
costs and risks, whereas a study that may make a potentially
important contribution may permit greater costs.
• Researchers themselves may not be the most objective
judges of the merits of a piece of research.
• Guidelines require that research be approved by an
Institutional Review Board (IRB)
4.5.4. Major Ethical Issues
• Six issues dominate the discussion of ethical issues in
research that involves human participants :
– lack of informed consent,
– invasion of privacy,
– coercion to participate,
– potential physical or/and mental harm,
– deception, and
– violation of confidentiality
1. Informed consent
• One of the primary ways of ensuring that participants'
rights are protected is to obtain their informed consent
prior to participating in a study.
• It involves informing research participants of the nature
of the study & obtaining their explicit agreement to
participate.
Elements of an Informed Consent Form
– a brief description of why the study is being conducted

– a description of the activities in which the participant will engage

– a brief description of the risks entailed in the study, if any

– a statement informing participants that they may refuse to participate in the study

or may withdraw from the study at any time without being penalized

– a statement regarding how the confidentiality of participants' responses will be

protected

– encouragement for participants to ask any questions they may have about their

participation in the study

– instructions regarding how to contact the researcher after the study is completed

– signature lines for both the researcher and the participant


Problems with Obtaining Informed Consent
• Compromising the Validity of the Study

• Participants Who Are Unable to Give Informed


Consent
• Ludicrous Cases of Informed Consent
2. Invasion of Privacy

• The right to privacy is a person's right to decide "when,


where, to whom, and to what extent his or her attitudes,
beliefs, & behavior will be revealed" to other people.
Which of the following have problems with privacy?
• Men using a public restroom are observed surreptitiously by a researcher

hidden in a toilet stall, who records the time they take to urinate

• Participants waiting for an experiment are videotaped without their prior

knowledge or consent.

• Researchers hide under dormitory beds and eavesdrop on college students'

conversations

• Researchers embarrass participants by asking them to sing

• Researchers approach members of the other sex on a college campus and

ask them to have sex


3. Coercion to Participate
• All ethical guidelines insist that potential participants
must not be coerced into participating in research.
• Coercion to participate occurs when participants agree
to participate b/c of real or implied pressure from some
individual who has authority or influence over them.
4. Physical and Mental Stress
• Researchers may design studies to investigate the effects of
unpleasant events such as stress, failure, fear, and pain.
• Researchers find it difficult to study such topics if they are
prevented from exposing their participants to at least small
amounts of physical or mental stress.
• Minimal risk is "risk that is no greater in probability and severity
than that ordinarily encountered in daily life or during the
performance of routine physical or psychological examinations or
tests."
5. Deception in Research
• Perhaps no research practice has evoked as much
controversy among behavioral researchers as deception.
• Deception may involve:

– using an experimental confederate who poses as another


participant or as an uninvolved bystander
– providing false feedback to participants

– presenting two related studies as unrelated,

– giving incorrect information regarding stimulus materials


Why Deception?
• Behavioral scientists use deception for a number of
reasons.
• The most common one is to prevent participants from
learning the true purpose of a study so that their
behavior will not be artificially affected
Debriefing

• Whenever deception is used, participants must be


informed about the subterfuge "as early as it is feasible."
• Usually participants are debriefed immediately after
they participate, but occasionally researchers wait until
the entire study is over and all of the data have been
collected.
Why debriefing?
• A good debriefing accomplishes four goals.
– 1. It clarifies the nature of the study for participants.
– 2. It removes any stress or other negative consequences that the study may have induced.
– 3. It enables the researcher to obtain participants' reactions to the study.
– 4. It enables the participants to feel good about their participation. Researchers should convey their genuine appreciation for participants' time and cooperation, and give participants the sense that their participation was important.
6. Confidentiality
• Confidentiality means that the data that participants
provide may be used only for purposes of the research
and may not be divulged to others.
Scientific misconduct
• In addition to principles governing the treatment of human
and animal participants, behavioral researchers are bound by
general ethical principles involving the conduct of scientific
research.
• Such principles are not specific to behavioral research but
apply to all scientists regardless of their discipline.
• Most scientific organizations have set ethical standards for
their members to guard against scientific misconduct.
Categories of scientific misconduct
1. Serious and blatant forms of scientific dishonesty, such as
fabrication, falsification, and plagiarism.
2. Questionable research practices that, although not
constituting scientific misconduct per se, are problematic.
3. Unethical behavior that is not unique to scientific
investigation, such as sexual harassment (of research
assistants or research participants), abuse of power,
discrimination, or failure to follow government regulations.
4.6. Tools of data collection
• General Considerations
• Questionnaires and Interviews
• Scales and Tests
• Observational Methods
• Psychometric properties
4.6.1. General Considerations:
Writing Questions
• Researchers spend a great deal of time working on the
wording of the questions that they use in their
questionnaires and interviews.
• Misconceived and poorly worded questions can doom a
study, so considerable work goes into the content and
phrasing of self-report questions.
Guidelines for writing good questions
• Be Specific and Precise in Phrasing the Questions.
• Write the Questions as Simply as Possible, Avoiding Difficult Words, Unnecessary Jargon, & Cumbersome Phrases.
• Avoid Making Unwarranted Assumptions About the Respondents.
• Do Not Use Double-Barreled Questions.
• Choose an Appropriate Response Format
– free-response format (open ended)
– a rating scale response format (Strongly agree, agree, neutral,
disagree, strongly disagree)
– the multiple choice or fixed-alternative response format
– The true-false response format
Questionnaires versus Interviews

Questionnaires:
• Less expensive
• Easier to administer
• May be administered in groups
• Less training of researchers
• Anonymity can be assured

Interviews:
• Necessary for illiterate respondents
• Necessary for young children and persons with low IQ
• Can ensure that respondents understand questions
• Allows for follow-up questions
• Can explore complex issues more fully
4.6.2. Interviewing
1. Meaning and Nature of Interview
•In interviews, an interviewer asks the questions and the participant responds orally.

•Research interviews are the most common type of interview that you will encounter as a
student of psychology.
•Their main function is to gather data from the participants about a topic that the
interviewer decides to study.
•Researchers talk to their interviewee face to face or on the telephone and use a series
of questions to try to gather information about the thoughts, feelings and beliefs that the
interviewee has about a particular topic
•With the increasing use of qualitative approaches in psychology the use of interviews to
collect data has grown.
•Interviews have always been a valuable method of data collection in psychology,
particularly social psychology.
2. Interview structures
• Interviews used to collect data in psychological research are generally categorized as one of three types:
– unstructured,
– semi-structured and
– structured
Unstructured interviewing
• Unstructured interviews are characterized by their lack of a
predetermined interview schedule.
• More unstructured approaches, using a ‘conversational’ style
with open-ended questions, tend to produce richer, fuller and
perhaps more genuine responses.
• They are usually exploratory and most useful where little is
known about the topic.
• They may be particularly appropriate for ethnography or life-
story research
Unstructured interviewing…

Advantages:
• Flexible
• Rich data
• Relaxes interviewee
• Should produce valid (meaningful) data

Disadvantages:
• Unsystematic
• Difficult to analyse data
• Strongly influenced by interpersonal variables
• Not reliable
Semi-structured interviewing
• It uses a standardized interview schedule.
• The interview schedule consists of a number of pre-set
questions in a mostly determined order.
• This type of interview is not completely reliant on the
rigorous application of the schedule.
• If the interviewee wanders off the question then the
interviewer would generally go with it rather than try to
return immediately to the next question in the schedule.
• The questions that make up the schedule are usually open-
ended to encourage the respondents to elaborate their
views about the topic.
• And to further focus attention on the interviewee and their
views the interviewer generally says very little.
• This is the most common type of interview used in
psychological research.
Semi-structured interviewing…

Advantages:
• Can compare responses and analyze data more easily
• No topics missed
• Reduction of interpersonal bias
• Respondents not constrained by fixed answers

Disadvantages:
• Some loss of flexibility for interviewer
• Question wording may reduce richness
• Less natural
• Coding responses still subject to bias
• Limits to generalization
Structured interviewing
• In a more STRUCTURED design, every RESPONDENT
(person who answers questions) receives exactly the same
questions, usually in much the same or exactly the same
order.
• Structured interviews rely on the application of a fixed and
ordered set of questions.
• The interviewer sticks to the precise schedule rigorously and will (politely!) keep the interviewee concentrated on the task at hand.
• There is also a set pattern provided for responses.
• In many ways this type of interview is more like a spoken questionnaire than a conversation.
Structured interviewing…

Advantages:
• Easy to administer
• Easily replicated
• Generalisable results (if the sample is adequate)
• Simple data analysis
• Reduced bias
• Lower influence of interpersonal variables
• High reliability

Disadvantages:
• Respondent constrained
• Reduced richness
• Information may be distorted through poor question wording
• Suffers from difficulties associated with questionnaires
3. Recording Interview data
• There are obviously a variety of methods of recording data from interviews, including:
– Note taking
– Audio recording
– Video recording
4. Interview techniques
• For an interview to be successful it is vital that the interviewer has:
– knowledge of (and sensitivity to) the effect of interpersonal variables,
– knowledge of techniques needed for successful interviewing,
– a good interview schedule and
– plenty of practice.
Effect of interpersonal variables
• It is important to be aware of the possible effects of interpersonal
variables on an interview and the data to be collected.
– Because interviews are generally conducted face to face, people being
interviewed are particularly susceptible to the influence that your sex,
ethnicity, use of language or formal role (as a researcher/academic/expert)
may have on them.
– People often feel the need to present themselves to others who are
perceived to be in positions of power (such as an academic) in as good a
way as possible (the social desirability effect).
Key skills for successful interviewing
• Give your respondent permission to speak
• Learn when not to talk
• Be comfortable with silence
• Do not trivialize responses
• Do not dominate
• Look interested!
• Use of appropriate language
• Establish good rapport
• Neutrality
• Confidentiality
A good interview schedule
• A good interview schedule is a key factor for successful interviewing.
• A well-constructed schedule will enable the researcher to gather much
more detailed information about a topic.
• Semi-structured interviews (the most common form used in psychology)
rely on a good schedule.
• Unstructured interviews generally require a list of questions or topics but
not really a schedule that you follow in the interview.
• Conversely, you cannot conduct a structured interview without a formal
list of questions and responses to communicate to your participants
(often looking more like a complicated questionnaire than an interview
schedule).
Writing interview schedules
• Avoid jargon
• Try to use open rather than closed questions
• Minimal encouragers/probes
Practice
• There is no substitute for plenty of practice if you wish to become proficient at interviewing.
• You will only develop the key skills you need through practice.
4.6.3. Group discussions
• Group discussions have become an increasingly popular & important method of data collection in psychology in recent years.
• Focus groups offer a number of advantages, the primary one being the collection of large amounts of data in a relatively short space of time.
Points for consideration
• Selection and grouping of participants
• The group discussion needs to be planned in advance so that the group facilitator is well placed to ensure that the group members feel comfortable and able to contribute to the discussion that takes place.
• Establishing group rules is vital when conducting group discussion research.
• At the most basic it would be usual to make clear to all participants that it is crucial to respect each other and ensure that all information will be kept confidential.
• The role of the group facilitator.
– Occasionally one or more people may dominate, and it may be appropriate to intervene to enable others to speak.
4.6.4. Questionnaires
• Questionnaires are:
– a valuable method of data collection, as they allow researchers to collect data from large numbers of people.
– useful if you want to know something about the prevalence of some behaviour or the opinions, beliefs or attitudes of large numbers or groups of people.
– often used as a convenient way of collecting background data (age, sex, ethnicity etc.)
• With good questionnaire design researchers should
be able to maximise the quality of data collected.
• Better to use already developed ones if their
psychometric characteristics are well established.
• A good questionnaire is one that has reliability and
validity.
• The term ‘survey’ is used to describe a study design
where the questionnaire is the primary focus.
• Questions in questionnaires are often called
questionnaire items, particularly in psychology.
• Personality scales & tests are said to contain ‘items’
rather than ‘questions’.
• The term scale is used to describe a set of items that
measure something psychological.
• Several scales might be published as an inventory.
General principles
• There are a variety of general principles that inform good questionnaire design.
– Keep the questionnaire as short as possible.
– Ensure that it is written in language appropriate for your respondents (readability).
• Many researchers overestimate the reading ability of the general population.
• Passive sentences are difficult to read, and should be avoided.
Attach a Participant Information Sheet
• All questionnaires should begin with a Participant Information Sheet,
which tells participants about the study.
• Explain clearly what it is about and why people should spend the time to
complete the questionnaire.
• It should explain who the researcher is and give contact details in case respondents have questions or concerns.
• Inform respondents about what the researcher intends to do with the data, and whether the researcher can assure them of anonymity and/or confidentiality.
• One final purpose of the introductory statement is to ‘sell’ the project.
Attach a consent form
• Researchers need to attach a separate consent form.
• A consent form records the date, name and signature of each participant, indicating that they have read the Participant Information Sheet and agree to take part in the study.
• Names that can be linked to the questionnaire (e.g. by ID
number) are a threat to anonymity.
Stages of design
• There are a number of stages that researchers need to go
through to carry out a questionnaire study.
– 1. Decide on a topic to research and then develop research question(s).
– 2. Careful planning and design of the questionnaire.
– 3. All questionnaires should be piloted – that is, tested out on a
small number of people – before being widely distributed.
– 4. Administer the questionnaire
Question types
Open versus closed questions
• Open-ended questions often generate more detailed information than closed questions, but at a cost.
• They increase the size of questionnaires and have a dramatic effect on how long it takes to complete a questionnaire.
• There is also a danger that many people will not complete open-ended questions.
• It is advisable to avoid using open-ended questions without good grounds for including them.
• Closed questions require very careful wording and thought about the appropriate type of response options.
Tips for preparing questionnaires
• Ask one question at a time
• Avoid ambiguous questions
• Avoid double negatives
• Tell respondents if your questions are going to be
sensitive
• Questions should be neutral rather than value-laden or
leading
• Avoid technical terms or jargon
Scales and Tests
• Psychologists develop psychological tests and scales as
measuring instruments.
• Psychometric tests are standardised forms of questionnaires
designed to measure particular traits or personality types.
• By standardised we mean they have been tested on large
numbers of participants, and had their reliability ascertained and
norms established for particular groups of people.
• The items will often be published as a set of scales called an
inventory.
• Whereas questionnaires often simply gather information, tests and
scales are seen as scientific measuring instruments.
• Questionnaires used in surveys are usually constructed for the
specific research topic and tend to test for current opinion or
patterns of behaviour.
• Attitude scales, ability tests and measures of intellectual reasoning
or personality are usually intended to be more permanent
measures
Types of scales
• Dichotomous scales
– Dichotomous-category scale items offer only two answer choices (e.g., yes or no).
• Multiple category scale items
– Multiple category scale items offer three or more choices for the respondent.
• Rating scales
– Rating scales rate some attribute from negative to positive, low to high, weak to strong.
– In the Likert scale approach, participants are asked to provide their level of agreement with a statement.
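As a sketch of how Likert-type responses become numbers, the snippet below maps agreement labels to scores and sums them per respondent. The labels, items and the reverse-scored item are invented for illustration (reverse-scoring negatively worded items is common practice, though not covered in the slides above).

```python
# Scoring Likert-type items: map each agreement label to a number
# and sum (or average) across the items of a scale.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score(answers, reverse=()):
    """Sum Likert answers; item indices in `reverse` are reverse-scored."""
    total = 0
    for i, answer in enumerate(answers):
        value = LIKERT[answer]
        # A negatively worded item is flipped: 5 becomes 1, 4 becomes 2, etc.
        total += (6 - value) if i in reverse else value
    return total

answers = ["agree", "strongly agree", "disagree", "neutral"]
# Suppose item 3 (index 2) is worded negatively, so it is reverse-scored.
print(score(answers, reverse={2}))  # 4 + 5 + (6 - 2) + 3 = 16
```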
4.6.5. Observational Methods
• Behavioral research involves the direct observation of human or nonhuman behavior.
• Researchers who use observational methods must make three decisions about how they will observe and record participants' behaviors:
– (1) Will the observation occur in a natural or contrived setting?
– (2) Will the participants know they are being observed?
– (3) How will participants' behavior be recorded?
Naturalistic observation
• Researchers observe & record behavior in real-world settings.
• Naturalistic observation involves the observation of ongoing behavior as it occurs naturally with no intrusion or intervention by the researcher.
• In naturalistic studies, the participants are observed as they engage in ordinary activities in settings that have not been arranged specifically for research purposes.
– For example, researchers have used naturalistic observation to study behavior during riots and mob events, littering, & nonverbal behavior.
Participant Observation
• Participant observation is one special type of naturalistic observation.
• In participant observation, the researcher engages in the same activities as the people he or she is observing.
Contrived observation
• Contrived observation involves the observation of behavior in settings
that are arranged specifically for observing and recording behavior.
• Studies are conducted in laboratory settings in which participants know
they are being observed, although the observers are usually concealed,
such as behind a one-way mirror.
• In other cases, researchers use contrived observation in the "real world."
– Researchers set up situations outside of the laboratory to observe people's
reactions.
– For example, field experiments on determinants of helping behavior have
been conducted in everyday settings.
Undisguised Observation
• The second decision a researcher must make when using observational methods is whether to let participants know they are being observed.
• Sometimes the individuals who are being studied know that the researcher is observing their behavior (undisguised observation).
• The problem with undisguised observation is that people often do not respond naturally when they know they are being scrutinized.
• When they react to the researcher's observation, their behaviors are affected, a phenomenon referred to as reactivity.
Disguised Observation…
• Researchers may conceal the fact that they are observing and recording
participants' behavior (disguised observation).
• Disguised observation raises ethical issues because researchers may invade
participants' privacy as well as violate participants' right to decide whether
to participate in the research (the right of informed consent).
• As long as the behaviors under observation occur in public & the researcher
does not unnecessarily upset the participants, the ethical considerations
are small.
• If the behaviors are not public or the researcher intrudes uninvited into
participants' everyday lives, then disguised observation may be
problematic.
Psychometric properties
• Psychological research requires precision in the ways that variables are measured.
• Without precision researchers cannot guarantee the quality of the findings they produce.
• Researchers expect to have confidence in a measure by seeing that:
– it produces consistent measures where we expect them (RELIABILITY)
– it measures what it is intended to measure (VALIDITY)
– it is applicable to the population of people participating in a study (STANDARDISATION)
Reliability
• Reliability concerns the stability of what researchers are measuring.
• Reliability has to do with the consistency of a measure either across different testings (external) or within itself (internal).
– External reliability – stability across time
– Internal reliability – internal consistency of the test

External reliability – stability across time
• To check the external reliability of a scale, researchers need to see whether it produces the same scores each time they use it on the same people.
• The method is known as TEST–RETEST RELIABILITY & to use it researchers test the same group of people once, then again some time later.
• The two sets of scores are correlated to see whether people tend to get the same sort of score on the second occasion as they did on the first.
• If they do, the test has high external reliability.
• Correlations are expected to be at least around .75 – .8.


• Rather than testing at two different times, it is also
possible to test the same group of people on two
PARALLEL FORMS of the same test, though these are
rare and expensive to create as well as raising doubts
as to whether two ‘versions’ of the same test really
can be entirely equivalent.
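The test–retest procedure can be sketched in a few lines: administer the scale twice to the same group, then correlate the two sets of totals. The scores below are hypothetical, and the Pearson correlation is computed from first principles rather than a statistics library.

```python
# Test-retest reliability: correlate scores from two administrations
# of the same scale to the same (hypothetical) group of people.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scale totals for six people, tested twice, four weeks apart.
time1 = [32, 41, 28, 37, 45, 30]
time2 = [34, 40, 27, 39, 44, 31]

r = pearson_r(time1, time2)
# Values of around .75-.8 or above suggest good external reliability.
print(f"test-retest r = {r:.2f}")
```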
Methods for checking internal reliability
Split-half method
• A psychological scale's items can be split so that items are divided randomly into two halves, or by putting odd-numbered items in one set & even-numbered ones in the other.
• If the test is reliable then people's scores on each half should be similar.
• The extent of similarity is assessed using correlation.
•Positive correlations of .75 upwards would be expected.
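A minimal sketch of the split-half method, using an odd/even split of invented item responses; each person's two half-scores are then correlated.

```python
# Split-half reliability: divide a scale's items into two halves
# (odd- vs even-numbered items here), score each half per person,
# and correlate the two sets of half-scores.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical responses: 5 people x 6 items, each scored 1-5.
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 2],
]

odd_half = [sum(person[0::2]) for person in responses]   # items 1, 3, 5
even_half = [sum(person[1::2]) for person in responses]  # items 2, 4, 6

r_half = pearson_r(odd_half, even_half)
print(f"split-half r = {r_half:.2f}")
```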


Cronbach’s alpha
• CRONBACH’S ALPHA is the most commonly used statistic for estimating a
test’s reliability.
• It depends largely on how people vary on individual items.

• If they tend to vary a lot on the individual items relative to how much
they vary overall on the test, then a low value for alpha is achieved and
the test is assessed as unreliable.
• Good reliability is represented with alpha values from around .75 up to 1.

• If the items in the scale are dichotomous (e.g., ‘yes’ or ‘no’ ), then a
simpler version is used, known as the KUDER–RICHARDSON measure.
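Cronbach's alpha can be computed directly from its standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), as in this sketch with invented responses:

```python
# Cronbach's alpha for a set of scale items.

def variance(xs):
    """Population variance of a sequence of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    """responses: list of per-person lists of item scores."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # transpose: one tuple per item
    totals = [sum(person) for person in responses]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

responses = [  # hypothetical: 5 people x 4 items, each scored 1-5
    [4, 5, 4, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
]

alpha = cronbach_alpha(responses)
# Values from around .75 up to 1 are usually taken as good reliability.
print(f"alpha = {alpha:.2f}")
```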
Validity
• Validity is about whether a test (or measure of any kind) is really measuring the thing we intended it to measure.
• A test may have high reliability but may not be measuring what was originally intended – that is, it may lack VALIDITY.
• There are various recognised means by which the validity of tests can be assessed.
Face validity
• Simply, a test has FACE VALIDITY if it is obvious to researchers and test-takers what it is measuring.
• On its own, face validity just refers to the test 'making sense' and is in no way a technically adequate measure of test validity.
Content validity
• A researcher may ask colleagues to evaluate the content of a test to ensure that it is representative of the area it is intended to cover.
• They will carry out this task using their expertise and
the literature in the topic area to judge whether the
collection of items has failed to test certain skills or is
unduly weighted towards some aspects of the
domain compared with others.
Criterion validity
• In general, with criterion validity procedures, researchers are looking for criteria on which scale scores successfully relate to other known data in ways that could be predicted, given the theoretical nature of the measure.
Concurrent validity
• If the new test is validated by comparison with a currently existing measure of the construct, we have CONCURRENT VALIDITY.
– A new IQ or personality test might be compared with an older but similar test known to have good validity already, or simply with an existing competitor.
• An issue here might be that if we are developing a test for a new theoretical construct, or are challenging the validity of an existing one, then we may well want to show that the new test does not correlate very well with previous relevant tests.
Predictive validity
• A prediction may be made, on the basis of a new intelligence test for instance, that high scorers at age 12 will be more likely to obtain university degrees or enter the professions several years later.
• If the prediction is borne out then the test has PREDICTIVE VALIDITY.
• There is virtually no difference between the concepts of predictive and concurrent validity except for the point of time involved in the prediction.
• However, 'concurrent validity' tends to be reserved in the literature for those occasions where performance on a new test is compared with performance on a similar test or older version of the same test.
Construct validity
• CONSTRUCT VALIDITY is a much wider concept than the previous
ones which are all mainly attempts to show that a scale is a good
measure of a relatively familiar construct.
• In another sense they can all be part of the process of construct
validity.
• What we are discussing with construct validity is the whole scientific
process of establishing that a psychological construct, such as
extroversion or achievement motivation, in fact exists or at least is a
theoretically sound concept that fits into surrounding theory.
4.7. Data Analyses
– The Analysis of Quantitative Data
– The Analysis of Qualitative Data


The Analysis of Quantitative Data
• Quantitative research is designed to generate numbers, and this is
the reason we need both descriptive and inferential statistics.
• The first thing we need to do with our data is to summarise some
basic features of the numbers that make up our set of data.
• Just re-presenting the numbers generated in a study is rarely
enough.
• We need to do some work on the data so that we can understand
what they mean in more detail.
• Descriptive statistics are usually the first stage in any quantitative
analysis, and concern the range of techniques that enable us to
summarise our data (‘descriptive statistics’ are sometimes called
‘summary statistics’ for this reason).
• The most fundamental and important descriptive statistics are those
concerned with identifying the central tendency (or typical value
such as the average) of a set of numbers, and those concerned with
how the remaining numbers are distributed (how much they vary)
around this central (or typical) value.
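The descriptive statistics just described can be illustrated with Python's standard-library statistics module on an invented set of scores: the mean, median and mode summarise central tendency, while the standard deviation summarises how the scores vary around it.

```python
# Central tendency and spread for a small, hypothetical set of scores.
import statistics

scores = [12, 15, 14, 10, 18, 15, 13, 16, 15, 11]

print("mean   =", statistics.mean(scores))
print("median =", statistics.median(scores))
print("mode   =", statistics.mode(scores))
print("sd     =", round(statistics.stdev(scores), 2))  # sample standard deviation
```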
• Inferential statistics are those ways of exploring data that enable us to
move beyond a simple description of individual variables to making
inferences about the likelihood of our findings occurring.
• We are particularly concerned with identifying whether our findings (relationships or differences between one or more variables) are likely to have occurred by chance alone.
• Or, what are the probabilities (likelihood) that the
relationships/differences occurring between variables within our data
set are significant, and not just the result of ‘meaningless’ chance
relationships/differences that occur with all variables?
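As one concrete example of an inferential technique, the sketch below computes an independent-samples t statistic (equal-variance form) from first principles on invented group data. The resulting t would then be compared against a critical value from a t table, or converted to a p value, to decide whether the group difference is plausibly due to chance.

```python
# A minimal independent-samples t test (equal-variance, pooled form):
# does the observed difference between two group means exceed what
# chance variation alone would plausibly produce?

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)        # sum of squares, group a
    ssb = sum((x - mb) ** 2 for x in b)        # sum of squares, group b
    pooled_var = (ssa + ssb) / (na + nb - 2)   # pooled variance estimate
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (ma - mb) / se

group1 = [14, 16, 13, 17, 15, 18, 14, 16]   # hypothetical experimental group
group2 = [11, 12, 14, 10, 13, 12, 11, 13]   # hypothetical control group

t = t_statistic(group1, group2)
# Compare against a critical value from a t table (here df = 14).
print(f"t = {t:.2f}")
```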
• The primary goal of data analysis is to determine whether
our observations support a claim about behavior
• Trusting we have obtained data based on a sound research
study, what should we do next?
• There are three distinct, but related stages of data analysis:
– getting to know the data,
– summarizing the data, and
– confirming what the data reveal


Getting to know the data
• In the first stage we want to become familiar with the data.
• This is an exploratory or investigative stage.
• We inspect the data carefully, get a feel for it, and even, as some experts have said, "make friends" with it.
• Questions we ask include:
– What is going on in this number set?
– Are there errors in the data?
– Do the data make sense or are there reasons for "suspecting fishiness"?
– What do the data look like?


• Visual displays of distributions of numbers are
important at this stage.
• Only when we have become familiar with the general
features of the data, have checked for errors, and
have assured ourselves that the data make sense,
should we proceed to the second stage.
Summarizing the data
• In the second stage we seek to summarize the data in a meaningful way.
• The use of descriptive statistics and creation of graphical displays are important at this stage.
– How should the data be organized?
– Which ways of describing and summarizing the data are most informative?
– What happened in this study as a function of the factors of interest?
– What trends and patterns do we see?
– Which graphical display best reveals these trends and patterns?
• When the data are appropriately summarized, we are ready to move to the confirmation stage.
Confirming What the Data Reveal
• In the third stage we decide what the data tell us about behavior.
– Do the data confirm our tentative claim (research hypothesis) made at the beginning of the study?
– What can we claim based on the evidence?


• At this stage we may use various statistical techniques to counter
arguments that our results are simply “due to chance.”
• Null hypothesis testing, when appropriate, is performed at this stage of
analysis.
• Our claim about behavior may be based on an evaluation of the probable
range of effect sizes for the variable of interest.
• The confirmation process actually begins at the first or exploratory stage
of data analysis, when we first get a feel for what our data are like.
• As we examine the general features of the data, we start to
appreciate what we found.
• In the summary stage we learn more about trends and patterns
among the observations.
• This provides feedback that helps to confirm our hypotheses.
• The final step in data analysis is called the confirmation stage to
emphasize that it is typically at this point when we come to a
decision about what the data mean.
The Analysis of Qualitative Data
• As for quantitative studies, qualitative studies can be carried out for a number of research purposes.
• A non-exhaustive list of these might include:
– descriptive/exploratory
– theory/hypothesis generation
– intervention and change/action research
Transcribing, coding & organizing textual data
• Once you have collected data through audio
recordings of semi- or unstructured interviews, the
next step in most qualitative research is to transcribe
the material.
• Transcription is the process of turning the spoken word
into written form.
– For a start, there are many different ways of transcribing,
and these vary from one methodological position to
another.
– Secondly, transcription is hard work. Ask anyone who has
had to transcribe just one or two interviews and they will
confirm this! Qualitative research is not an easy option.
– Finally, transcription is not simply a mechanical task, for
properly understood it may be thought to constitute the
first stage of analysis.
• Once you have some data in spoken, written or even visual
form, you will want to carry out some form of analysis.
• In most qualitative research the first stage would involve
coding the data.
• Coding is the process of describing and understanding your
data through the systematic categorisation of the data.
• Almost any thematic or discursive analysis begins with
coding in an attempt to organise and manage the data.
• However, there will be differences in the process of coding
depending on the methodology being employed.
• Many qualitative analyses are forms of thematic analysis
where the aim of the analysis is to draw out themes of
meaning.
Thematic analysis
• It is in fact only a short step from pattern coding (the third level
above) to the next stage in a thematic analysis, which is to draw
out the overarching themes in your data.
• Pattern coding sometimes results in this situation anyway, which
is why some people do not recognise three distinct levels of
coding.
• Once you have gone through the systematic process of coding
your data, you will then want to draw out the higher-level
concepts or themes.
• There will usually be only a few (often three or four) themes in a
transcript on a particular topic, and these will represent more
general concepts within your analysis and subsume your lower
level codes.
• In a sense you were always building to this level of analysis.
4.8. Writing the Research Report
– Structure of a Research Report
– Writing Style
– Referencing
Structure of a Research Report
• The APA‐style report describing the outcome of an empirical
study includes each of the following elements, in this order:
• Title page
• Abstract
• Introduction
• Method
• Results
• Discussion
• References
• Appendices
• Tables
• Figures
1. Title Page
• A title page is required for all APA Style papers.
• There are professional & student versions of the title page.
• Professional Title Page includes the following elements:
– title of the paper,
– name of each author of the paper (the byline; the order of authorship and formatting the byline),
– affiliation for each author,
– author note,
– running head, and
– page number
Student Title Page
• Students should follow the guidelines of their instructor or
institution when determining which title page format is most
appropriate to use. If not instructed otherwise, students should
include the following elements on the title page:
– title of the paper;
– name of each author of the paper;
– affiliation for each author, typically the university attended
(including the name of any department or division);
– course number & name for which the paper is being submitted;
– instructor name (check with the instructor for the preferred
form; e.g., Dr. Hülya F. Akış; Professor Levin; Kwame Osei, PhD);
– assignment due date, written in the month, date, and year
format used in your country (usually November 4, 2020, or 4
November 2020); and
– page number (also included on all pages).
TITLE
• By reading the title alone, readers should understand the main
idea of the article. A title:
– should communicate the purpose of the research in about 12
words or less;
– does not typically contain abbreviations;
– does not use the words “method,” “results,” “a study of,” or
“an experimental investigation of”;
– is centered and presented in uppercase and lowercase letters;
– is not boldfaced.
2. Abstract
• The abstract is a summary of the content.
• It does not present the details of the research.
• It should clearly communicate the report’s :
– main research question,
– the methods used,
– the major results, &
– an indication of why the results are important—for example, how
the results support or shape a theory.
• The abstract should be concise, accurate, and clear.
• Readers should be able to get a good sense of the research
question & the results obtained by reading the abstract alone.
• The abstract is the first section of your paper that most readers
will encounter, but it is often the last thing you will write.
ABSTRACT CHECKLIST
• An abstract clearly and accurately summarizes the research question,
methods used, primary results, and interpretation of the results in terms
of some theory or application.
• An abstract should be about 250 words long.
• In an APA-style paper, the abstract is presented on its own page.
• The page is labeled with the word Abstract in plain text at the top center.
• The abstract itself should be typed as a single paragraph with no
indentation.
3. Introduction
• The first major section of the paper is the introduction.
• This is where the main narrative of the research report begins.
• It describes and explains:
– the problems studied,
– why they are important,
– the theories the study is testing, &
– past research pertinent to the problem studied & its relevance.
• For many students, it is the hardest section to write.
• Writing about the sources to create a coherent introduction is a
challenge.
Introduction…
• In writing an introduction, focus on the following points.
– Introduce the main area of research & the topic in a
creative way to describe why this area of research is
potentially interesting.
– Write about the method used & the variables studied.
– State the research question or hypothesis.
Introduction…
• Link the topic to contemporary events, but it is not appropriate to
explain why you personally became interested in the research topic.
• Use past studies to build an argument that leads to the hypothesis.
• Do not generally use direct quotes when describing work by other
researchers; paraphrase instead, putting the material into your own words.
• Alternate b/n summaries of past research & statements that reflect
your interpretations and arguments.
INTRODUCTION CHECKLIST
• Follow the typical introduction format:
– The 1st paragraph of the introduction describes the general area
of research.
– The middle paragraphs summarize past research studies and
give an interpretation of their meaning and importance, arranged
in a way that logically leads to your hypothesis.
– The last paragraphs briefly describe the method used & the
variables studied, & they state the hypothesis or research
question.
Introduction…
• Document the sources of summaries by listing the authors’ last names
& year of publication, using parentheses or a signal phrase.
• Be careful to avoid plagiarizing, & be sure to cite each source.
• Describe completed research in the past tense.
• In general, it is appropriate to write the entire introduction in the past
tense.
• If needed for organizing a long introduction, use subheadings.
• In the opening paragraph, avoid phrases that are vague and
undocumentable, such as “in our society today” and “since the
beginning of time.”
4. METHOD
• The Method section explains, in concise and accurate detail, the
procedures followed in conducting your study.
• A conventional Method section contains the following subsections:
– Design,
– Participants,
– Measures,
– Materials (or, alternatively, Apparatus), &
– Procedure.
METHOD CHECKLIST
• The reader should be able to use the Method section to conduct
an exact replication study, without asking any further questions.
• Do not put the same information in more than one section.
• Describe each tool in its own short paragraph, citing its source,
sample items, relevant computations, and response options.
• Indicate reliability and validity results for each questionnaire.
• In an APA-style paper, the Method section is labeled Method in boldface,
centered.
5. RESULTS
• The Results section presents the study’s numerical results,
including any statistical tests & their significance, sometimes in the
form of a table or figure.
• Do not report values for individual participants; present
group means or overall associations.
• Report the results and the symbols for any statistical tests.
– The symbols for statistical computations such as M for mean, SD for
standard deviation, t for the t test, and F for ANOVA are presented in italics, but
the numerals themselves are not (e.g., M = 3.25).
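In a LaTeX manuscript, the italic-symbol convention above can be typeset by placing the statistical symbols in math mode; a minimal sketch, with all numerical values invented purely for illustration:

```latex
% APA convention: statistical symbols ($M$, $SD$, $t$, $p$) in math
% italics, the numerals themselves upright. Values are invented.
Participants in the experimental group reported higher scores
($M = 3.25$, $SD = 0.50$) than those in the control group
($M = 2.10$, $SD = 0.45$), $t(58) = 4.12$, $p < .001$.
```

In a word processor, the same effect is achieved by applying italics to the symbol only, not to the equals sign or the numeral.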
Results…
• A well-organized Results section is systematic & its
sentence structure may even be a little repetitive.
• It is best to begin with simple results & then move to
more complicated ones.
• It might be appropriate to refer to the study’s
hypothesis when writing the Results section.
Results…
• In an APA-style manuscript, the Results section begins right after
the Method section.
• It is labeled with the word Results in boldface.
• Researchers may insert additional subheadings to organize a long
Results section; such subheads should be boldfaced, capitalized,
and flush left.
• Check the APA Manual or the sample paper on how to present
statistical results.
Tables
• A table can summarize a larger set of descriptive statistics.
• Use a table to present multiple values such as means, correlations,
or multiple inferential tests.
• Tables must follow APA style guidelines.
• They may be single- or double-spaced, but they must include only
horizontal lines as spacers, not vertical lines.
• Never copy & paste tables of output from a statistical program into
the manuscript.
• Tables are not included within the main body of the text.
• They are placed near the end of the paper, after the References.
• Number each one consecutively (Table 1, Table 2, and so on), and
place each on its own page.
• A title for each table appears at the top of the page.
• The label (e.g., Table 1) is in plain text, and the title itself appears
on the next line, italicized, in upper- and lowercase letters.
• In the text of the Results section, refer to each table by its number.
Figures
• A figure can often highlight the data’s strongest result.
• Don’t overdo it; most papers contain only one to three figures.
• Provide a descriptive caption for each figure, typed on the same
page as the figure, appearing below it.
• The label (e.g., Figure 1) appears in italics followed by a period.
• The caption follows on the same line, in plain text.
6. Discussion
• The Discussion section starts directly after the Results section.
• Head the section with the word Discussion, boldfaced & centered.
• Additional subheadings could be inserted; such subheads should
be boldfaced, capitalized, and flush left.
• Do not report new statistical findings in the Discussion section;
numerals and statistics belong in the Results section.
Discussion…
• A well-written Discussion section achieves three goals.
1. It summarizes the results & describes the extent to which the results
support the hypothesis or answer the research questions.
– Tell the reader how well the results fit with the theory or
background literature described in the introduction.
2. It evaluates the study, advocating for its strengths & addressing its
weaknesses.
3. It suggests what the next step might be for the theory-data cycle.
Summarizing and Linking to the Theory-Data Cycle
• The first paragraph or two of the Discussion section summarizes the
hypotheses and major results of the study.
• Indicate which results supported the hypothesis & which did not.
• Tie the results back to the literature & theories mentioned in the
introduction.
• Describe how your results support the broader theory and why (or why
not).
• If the results support the theory, explain how.
• If the results do not support the theory, it means one of two things:
either the theory is incorrect (and therefore must be modified in some
way), or your study was flawed (and therefore a better study should be
conducted).
• The Discussion is the place to explore these options and explain what you
think is going on.
Evaluating Your Study
• In the next paragraphs of the Discussion section,
evaluate the choices made in conducting the study.
• Generally speaking, authors advocate for the strengths
of their own studies & anticipate criticisms others might
make so they can deflect them in advance.
• An excellent strategy is to write about the validities.
Specifying the Next Step
• In the last paragraph or two, write about some directions for further
research.
• It’s not sufficient to conclude with a vague statement, such as
“Future research is needed.”
• Suggest a specific, theory-driven direction for the next study to take.
• If a study was flawed in some way, could specific steps be taken to
correct the flaws, and what results would be expected?
7. References
• In the course of writing the introduction & the
literature review sections, researchers consulted &
summarized articles written by other authors.
• Researchers cite these papers within the text using the
authors’ last names and the year of publication.
• Near the end of the paper, in a section titled References,
researchers provide the full bibliographic information
for each of the sources cited, in an alphabetized list.
Formatting an APA-Style Manuscript
• When researchers prepare a research report in APA style, they
follow a number of specific guidelines.
• It can be hard to assimilate all the rules the first time.
• A good strategy is to use the sample papers.
• Pay attention to every detail, such as:
– line spacing,
– font size and style, &
– the use of boldface and italic terms.
• APA format rules that apply to all sections of the
research report include:
– Everything is double-spaced with the exception of tables and
figures, which may contain single-spaced text.
– All text is in the same-sized font (usually 12-point) & printed in
black.
– Section headings are the same font size & color as the main
text.
– Margins on all sides should be 1 inch (2.54 centimeters).
– Do not right-justify the text; leave the right side ragged.
• The paper’s title is not boldfaced but is centered & capitalized.
• Headings are boldfaced.
– The 1st-level heading (used for section headings such as Method, Results,
and Discussion) is centered and capitalized.
– The 2nd-level heading is boldfaced and flush left, and the first letters of
major words are capitalized; the text following the heading begins on the
next line.
– The 3rd-level heading is boldfaced & indented & is followed by a period;
only the first letter of the heading & any proper nouns are capitalized, &
the text following the heading begins on the same line.
• The order of pages is as follows:
– title page,
– abstract,
– main body of the paper (introduction, Method, Results, & Discussion sections),
– References,
– appendices,
– tables, &
– figures.
• The top of each page includes the three- to four-word shortened title
(running head), printed in all capitals and flush left, as well as the
page number, which goes on the same line but flush right.
Citing Sources in APA Style
• When researchers describe another researcher’s ideas,
words, methods, instruments, or research findings in their
research report, they cite the source by indicating the:
– author’s last name, &
– year of publication.
• There are two ways to provide in-text documentation:
– by using a signal phrase, or
– by placing the entire citation in parentheses.
• When using a signal phrase, present the last names as part of the
sentence & place only the year of publication in parentheses:
– Results by Strayer and Drews (2004) indicate . . .
– According to Brummelman and his colleagues (2015), . . .
• Alternatively, provide in-text documentation by putting both the
author name(s) and the date in parentheses:
– One study showed that both older and younger drivers brake more slowly
when talking on the phone (Strayer & Drews, 2004).
• With the second method, remember to use an ampersand (&) and
to place the sentence’s period outside the closing parenthesis.
• In APA-style papers, you will not usually quote directly;
instead, you paraphrase the research descriptions in your own words.
• However, if you do quote directly from another author, use
quotation marks and indicate the page number:
– “It is also important to note that performance decrements for cell phone
drivers were obtained even when there was no possible contribution from
the manual manipulation of the cell phone” (Strayer & Drews, 2004, p. 648).
• When a source is written by either one or two authors, cite their
names and the date every time you refer to that source.
• When a source is written by three or more authors, cite all the
names and the date the first time.
• The next time you cite the source, use the first author’s name followed
by “et al.” and the date:
– Parental overvaluation predicted higher levels of child narcissism over time
(Brummelman, Thomaes, Nelemans, de Castro, Overbeek, & Bushman,
2015). However, parental warmth predicted higher levels of child self-
esteem (Brummelman et al., 2015).
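The author-formatting rules above can be sketched as a small function. This is a minimal illustration of the rule set stated on this slide (one or two authors always in full; three or more in full the first time, then “et al.”); the function name and signature are invented for this example, not part of any real library.

```python
def apa_intext_authors(last_names, first_citation=True):
    """Format the author portion of an APA in-text parenthetical citation,
    following the rules described above. Illustrative only."""
    if len(last_names) == 1:
        return last_names[0]
    if len(last_names) == 2:
        # Two authors: always cite both, joined by an ampersand.
        return f"{last_names[0]} & {last_names[1]}"
    if first_citation:
        # Three or more authors: list everyone the first time.
        return ", ".join(last_names[:-1]) + ", & " + last_names[-1]
    # Subsequent citations: first author plus "et al."
    return f"{last_names[0]} et al."

print(apa_intext_authors(["Strayer", "Drews"]))       # Strayer & Drews
print(apa_intext_authors(["Brummelman", "Thomaes", "Bushman"],
                         first_citation=False))       # Brummelman et al.
```

Combined with the year (e.g., `f"({authors}, 2015)"`), this reproduces the parenthetical citations shown in the examples above.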
Citation…
• These are the rules for sources with obvious authors,
the most common types of sources used in psychology.
• You might need to cite other sources, such as websites
with no author or government agency reports.
• In those cases, consult the APA Manual for the correct
documentation style.
FULL DOCUMENTATION IN THE REFERENCES
• The References section contains an alphabetized list of all the sources
cited in the paper—and only those sources.
• If you did not cite an article, chapter, or book in the main body of your
text, it does not belong in the References.
• Do not list the sources in the order in which you cited them in the text;
alphabetize them by the first author’s last name.
• Notice the capitalization patterns for the titles of different types of
publications—articles, journals, books, and book chapters—& whether
they are italicized or formatted as plain text.
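The alphabetizing rule can be shown with a short sketch; the entries below are simplified (author, year) pairs drawn from the examples in this module, not a real bibliography API:

```python
# Alphabetize reference entries by the first author's last name,
# as the References section requires -- not by citation order.
entries = [
    ("McNulty, J. K.", 2010),
    ("Brummelman, E.", 2015),
    ("Gernsbacher, M. A.", 2003),
]
references = sorted(entries, key=lambda entry: entry[0].lower())
for author, year in references:
    print(author, year)  # Brummelman first, then Gernsbacher, then McNulty
```

Sorting on the lowercased author string keeps the ordering case-insensitive, which matches how an alphabetized reference list reads.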
Journal Articles with One Author
• Notice that only the volume number of a journal, not the issue,
is included, & that the volume number is italicized along with
the journal title.
– Gernsbacher, M. A. (2003). Is one style of autism early
intervention “scientifically proven”? Journal of
Developmental and Learning Disorders, 7, 19–25.
– McNulty, J. K. (2010). When positive processes hurt
relationships. Current Directions in Psychological Science, 19,
167–171. doi: 10.1177/0963721410370298
Journal Articles with Two or More Authors
• These follow the same pattern as a single-authored article.
• Notice how a list of authors is separated with commas in APA style.
– Brummelman, E., Thomaes, S., Nelemans, S. A., Orobio de Castro,
B., Overbeek, G., & Bushman, B. J. (2015). Origins of narcissism in
children. Proceedings of the National Academy of Sciences of the
United States of America, 112, 3659–3662.
doi:10.1073/pnas.1420870112
– Mueller, C. M., & Dweck, C. S. (1998). Praise for intelligence can
undermine children’s motivation and performance. Journal of Personality
and Social Psychology, 75, 33–52. doi:10.1037/0022-3514.75.1.33
Books
• Pay attention to how the publisher & its location are listed.
– Tomasello, M. (1999). The cultural origins of human cognition.
Cambridge, MA: Harvard University Press.
– Heine, S. J. (2017). DNA is not destiny: The remarkable,
completely misunderstood relationship between you and your
genes. New York: W. W. Norton.
Chapters in Edited Books
• Compare the way the chapter authors’ names & those of
the book editors are given, & notice how and where the
page numbers appear.
– Geen, R. G., & Bushman, B. J. (1989). The arousing effects of
social presence. In H. Wagner & A. Manstead (Eds.), Handbook of
psychophysiology (pp. 261–281). New York, NY: John Wiley.
– Kitayama, S., & Bowman, N. A. (2010). Cultural consequences of
voluntary settlement in the frontier: Evidence and implications. In
M. Schaller, A. Norenzayan, S. J. Heine, T. Yamagishi, & T. Kameda
(Eds.), Evolution, culture, and the human mind (pp. 205–227).
New York, NY: Psychology Press.
CHECKLIST FOR DOCUMENTING SOURCES
• In the References, entries are listed in alphabetical order
by the author’s last name.
– They are not listed in the order in which you mentioned them in
the paper.
• Within a source entry, the order of the authors matters;
often the first author contributed the most to the paper,
and the last author contributed the least.
– Therefore, if an article or book chapter has multiple authors,
list them all in the same order that they appeared in the
publication.
• The entry for each source starts on a new line, using a
hanging indent format (see the sample paper).
• Do not insert extra line spacing between entries.
• The list is double-spaced and starts on a new page.
– The heading References appears at the top of the page, in
plain text and centered.
– Do not label this section Bibliography or Works Cited.
Reading assignments
1. What are research questions and how are they stated in
qualitative and quantitative studies?
2. What are research objectives and how are they stated in
quantitative and qualitative studies?
3. What is a hypothesis & how is it stated in quantitative
research?
4. What is a variable? Describe types of variables.
5. Describe the meaning, nature & types of scientific misconduct.
6. What are the major ethical considerations in conducting
research with animals?