
Introduction to Research Methods

CRIJ 3378.05
Study Guide - Midterm
Ghady Hbeilini
Last Week
• Sample Planning
• Sampling Components and the Population
• Evaluating Generalizability
• Sampling Methods
• Probability and Non-Probability Sampling
• Multistage Cluster Sampling
• Snowball Sampling
• Generalizability in Qualitative Methods
• Sampling Distribution
Errors in Reasoning
• Overgeneralization: assuming that what is observed for one case is true
for all cases
• Observations:
• Selective: observe based on preferences or beliefs
(a form of bias)
• Inaccurate: observe based on false perceptions of
reality
• Illogical Reasoning: jump to conclusions, argue based on
assumptions
• Resistance to Change: refuse to change ideas in light of
new information
• Ego, tradition, culture, disagreement
Social Science vs.
Pseudoscience
• Scientific method:
• Epistemology: study how knowledge is gained or
acquired
• Transparent: of the procedures, methods, data,
and analyses to allow replication
• Peer Review: Accepted, Revise & Resubmit, Reject

• Pseudoscience:
• Findings based on intuition, reactions, or
experiences
• Sold as “scientifically valid”
• Not based on the scientific method
Four Types of Social Research

• Descriptive: Describe and define phenomena. Descriptive research allows us to see how statistics are distributed.
• Exploratory: Investigate social phenomena without direct experience in the matter; "explore" the new ideas.
• Explanatory: Identify causes and effects of social phenomena.
• Evaluation: Determine the effects of programs; a type of explanatory research.
An Integrated Approach
• Mixed-methods
• Combines both qualitative and quantitative methods
• To study one or more related research questions
• Triangulation
• Provides the researcher with a clearer picture of what is being studied from several
different perspectives
• Use multiple methods to study one research question
• Can mean multiple measures for the same variable
Strengths and Limitations of Social
Research
• Limits some of the reasoning errors
• Allows clarity and broader visualization

• Will not answer every question


• Prone to varying opinions
Research is mainly an effort to connect theory and empirical
data

Social Research Strategies
• Deductive Reasoning
• Moves from general to specific
• Starting with a theory and testing its components
• Largely used for quantitative methodology
• Inductive Reasoning
• Moves from specific to general
• Starting with the data and then developing the theory
• Usually used for qualitative methodology
• Serendipitous (anomalous) findings
• Unexpected patterns in data, which stimulate new ideas or theoretical approaches
The Research Circle
• Deductive research: The type of research
in which a specific expectation is deduced
from a general premise and is then tested.
• Hypothesis: A tentative statement about
empirical reality involving the relationship
between two or more variables.
• Variable: Characteristics or properties that
can vary (take on different values or
attributes).
• Constant: A variable that has a fixed value
in a given situation; a characteristic or
value that does not change.
Hypothesis
• Independent variable: A variable that is hypothesized
to cause or lead to variation in another variable.
• Dependent variable: A variable that is hypothesized to
change or vary depending on the variation in another
variable.
• "If the independent variable increases (or decreases),
then the dependent variable increases (or decreases)."
• Direction of association: When the values of variables
tend to change consistently in relation to change in the
other variable. Direction of association can be either
positive or negative.
• Positive relationship: The independent and
dependent variables move in the same direction;
as one increases, the other increases.
• Negative relationship: The independent and
dependent variables move in opposite directions;
as one increases, the other decreases.
The Research Circle

• Inductive research: The type of research in


which specific data are used to develop (induce)
a general explanation.
• Empirical generalizations: Statements that
describe patterns found in data.
• The inductive researcher begins with specific
data, which are then used to develop (induce) a
general explanation (a theory) to account for the
data.
The
Research
Spiral
Controversial
experiments
• Milgram - Obedience Experiment
• Zimbardo - Stanford Prison Experiment
• Tuskegee - Syphilis Study
The Belmont Report
• A 1979 National Commission for the Protection
of Human Subjects of Biomedical and
Behavioral Research report that established three
basic ethical principles for the protection of
human subjects, including respect for persons,
beneficence, and justice.
• Three Basic Ethical Principles:
1. Respect for persons: treating persons as
autonomous agents and protecting those
with diminished autonomy
2. Beneficence: minimizing possible harms
and maximizing benefits
3. Justice: distributing benefits and risks of
research fairly
Protection of Human Subjects
• Federal Policy for the Protection of Human Subjects (Common
Rule): Federal regulations established in 1991 that are based on
the principles of the Belmont Report.
• The Common Rule generally requires that researchers get
informed consent from those who participate in research
• Academy of Criminal Justice Sciences (ACJS) Code of Ethics:
The Code of Ethics of the Academy of Criminal Justice
Sciences (ACJS) sets forth
1. General Principles
2. Ethical Standards that underlie members of the Academy’s
professional responsibilities and conduct, along with
3. the Policies and Procedures for enforcing those principles
and standards.
• Membership in the Academy of Criminal Justice Sciences
commits individual members to adhere to the ACJS Code of
Ethics in determining ethical behavior in the context of their
everyday professional activities.
Ethical Principles
• Achieving valid results is the necessary starting point for ethical
research practice.
• It is the pursuit of objective knowledge about human behavior that
motivates and justifies our investigations and participation.
• The scientific concern with validity requires in turn that scientists
be open in disclosing their methods and honest in presenting their
findings.
• Research distorted by pressures to find particular outcomes, or the
most marketable results, is unlikely to be carried out honestly.
• Openness about research procedures and results goes hand in hand
with honesty in research design.
• Openness is also essential if researchers are to learn from the work
of others.
Protecting Research Participants
• Debriefing: A researcher’s informing subjects after an experiment
about the experiment’s purposes and methods and evaluating
subjects’ personal reactions to the experiment.
• Deception: Used in social experiments to create more realistic
treatments in which the true purpose of the research is not
disclosed to participants, often within the confines of a laboratory.
• Privacy Certificate: National Institute of Justice document that
protects researchers from being legally required to disclose
confidential information.
• Certificate of Confidentiality: National Institutes of Health
document that protects the privacy of research participants by
prohibiting disclosure of identifiable information.
Guidelines for References

• Reference Guide
• Reference Example Guide
• Reference Examples

Research Paper Layout
• Title page, Abstract, Keywords
• Introduction, Literature Review
• Methods
• Data
• Variables
• Analytic Strategy
• Discussion and Conclusion
• Footnotes and Appendices
• References
Conceptualization
• Concept: A mental image that summarizes a set
of similar observations, feelings, or ideas

• Conceptualization
• The process of specifying what we mean by
a term.

• In deductive research, conceptualization


helps translate portions of an abstract
theory into testable hypotheses involving
specific variables.

• In inductive research, conceptualization is


an important part of the process used to
make sense of related observations.
Measurement

• To help in the conceptualization


process, the use of variables as
measures is important
• Operationalization: The process of
specifying the operations that will
indicate the value of a variable for each
case
• Operational definition: The set of rules
and operations used to find the value
of cases on a variable
• Indicator: The question or other
operation used to indicate the value of
cases on a variable
The Process

• Conceptualize: Identify a set of common observations into a concept, and
define the specific term (e.g., terrorism)
• Operationalize: Specify the operations that will indicate the value of a
variable (e.g., assassination, bombing, cyber attacks)
• Indicate: Question indicating the value of cases for each variable (e.g.,
“How many terrorist attacks have there been within the last year?” “What
are the recent methods that terrorists have been using?”)
Nominal Level of Measurement
• Variables whose values have no mathematical
interpretation; they vary in kind or quality but not in
amount.
• The attributes we use to measure (categorize) cases
must be mutually exclusive and exhaustive.
• Mutually exclusive attributes: A variable’s attributes
or values are mutually exclusive if every case can
have only one attribute.
• Exhaustive attributes: A variable’s attributes or
values in which every case can be classified as having
one attribute.
• When a variable’s attributes are mutually exclusive and
exhaustive, every case corresponds to one, and only one,
attribute.
• Categorized into discrete measures: a measure that
classifies cases in distinct categories.
Ordinal Level of Measurement
• A measurement of a variable in which the numbers indicating a variable’s
values specify only the order of the cases, permitting greater-than and
less-than distinctions
• As with nominal variables, the different values of a variable measured at the
ordinal level must be mutually exclusive and exhaustive.
• Variables measured at the ordinal level in this way classify cases in discrete
categories and so are termed discrete measures
Indexes
• A series of similar questions may be used instead of one question to
measure the same concept.
• Index: The sum or average of responses to a set of questions about a
concept.
• A multi-item index, or scale; numbers are assigned to reflect the order
of the responses, which are then summed or averaged to create the
index score.
• These scores all address the same overarching concept
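The summing or averaging of an index is simple arithmetic. A minimal sketch in Python, assuming a hypothetical four-item scale scored 1-5:

```python
# Hypothetical responses of one respondent to a four-item
# fear-of-crime scale (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4]

# A summed index score
index_sum = sum(responses)

# An averaged index score, which stays on the original 1-5 metric
index_mean = sum(responses) / len(responses)

print(index_sum, index_mean)  # 16 4.0
```

Averaging is often preferred when items may be missing, since the score remains comparable across respondents.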
Interval Level
of Measurement
• A measurement of a
variable in which the
numbers indicating a
variable’s values
represent fixed
measurement units but
have no absolute or fixed
zero point.
• Values must be mutually
exclusive and exhaustive
Ratio Level of
Measurement
• A measurement of a variable in which
the numbers indicating a variable’s
values represent fixed measuring units,
and there is an absolute zero point.
• There does not actually have to be any
group with a size of 0; what is important
is that the numbering scheme begins at
an absolute zero
Continuous and
Dichotomous Measures
• In addition to having numerical values, both
the interval and ratio levels also involve
continuous measures
• A measure with numbers indicating the
values of variables as points on a continuum,
not discrete categories
• Variables having only two values are
dichotomous
• Variables with only two categories are
generally thought of as nominally measured
• We can also think of a dichotomy as
indicating the presence or absence of an
attribute
Comparing Levels of Measurement
• All four levels of measurement allow researchers to assign different
values to different cases
• All three quantitative levels (ordinal, interval, and ratio) allow
researchers to rank cases in order
• Researchers choose levels of measurement in the process of
operationalizing the variables; the level of measurement is not inherent
in the variable itself
• Interval-ratio level of measurement: A measurement of a variable in
which the numbers indicating the variable’s values represent fixed
measurement units, but there may be no absolute or fixed zero point.
Measurement Validity
• Face validity
• Content validity
• Criterion validity
• Construct validity
Measurement
validity
• The type of validity that is achieved when a measure
measures what it is presumed to measure.
• The extent to which measures indicate what they are
intended to measure can be assessed with one or more of
four basic approaches
• Face validation
• Content validation
• Criterion validation
• Construct validation.
• No one measure will be valid for all times and places
Face validity

• The type of validity that exists when an


inspection of the items used to measure a
concept suggests that they are appropriate
“on their face.”
• Disadvantages:
• Face validation on its own is not the
gold standard of measurement
validity
• A question or measure might seem as
though it has good face validity, yet
still fail to cover all that it needs to,
making it an invalid measure
Content validity

• The type of validity that establishes that


a measure covers the full range of the
concept’s meaning.
• To determine that range of
meaning, the researcher may solicit
the opinions of experts and review
literature that identifies the
different aspects of the concept
Criterion validity
• The type of validity that is established by
comparing the scores obtained on the measure
being validated to those obtained with a more
direct or already validated measure of the
same phenomenon (the criterion).
• Disadvantages
• Inconsistent findings
• Such inconsistent findings can occur
because of differences in the adequacy of
measures across settings and populations
• You cannot assume that a measure that
was validated in one study is also valid in
another setting or with a different
population
Construct validity
• The type of validity that is established by showing that a
measure is related to other measures as specified in a theory.
• Disadvantages
• The distinction between criterion and construct
validation is not always clear
• Opinions can differ about whether a particular indicator
is indeed a criterion for the concept that is to be
measured
• What construct and criterion validation have in common
is the comparison of scores on one measure to scores on
other measures that are predicted to be related
Measurement Reliability
• Test-retest reliability
• Interitem reliability
• Alternate-forms reliability
• Intraobserver and interobserver reliability
Measurement Reliability
• A measure is reliable when it yields consistent scores
or observations of a given phenomenon on different
occasions.
• Reliability is a prerequisite for measurement validity.
• Problems in reliability can occur when inconsistent
measurements are obtained after the same
phenomenon is measured multiple times, with
multiple indicators, or by multiple observers
• To assess these different inconsistencies, there are
four possible methods:
• Test-retest reliability
• Interitem reliability
• Alternate-forms reliability
• Intraobserver and interobserver reliability
Test-Retest
Reliability
• A measurement showing that measures of a
phenomenon at two points in time are highly
correlated, if the phenomenon has not changed or
the measures have changed only as much as the
phenomenon itself.
• Of course, if events between the test and the retest
have changed the variable being measured, then the
difference between the test and retest scores should
reflect that change
Interitem
Reliability
• Also known as Internal Consistency
• An approach that calculates reliability based on
the correlation among multiple items used to
measure a single concept.
• This is used when researchers use different items
to measure a single concept
• The stronger the association between the
individual items and the more items included, the
higher the reliability of the index
• Cronbach’s alpha: A statistic that measures the
reliability of items in an index or scale, thus
measuring interitem reliability.
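Cronbach's alpha can be computed directly from the item scores: it compares the sum of the individual item variances with the variance of the total index score. A sketch with hypothetical data (3 items, 4 respondents):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Interitem reliability of an index.

    items: one list of scores per item, each list covering
    the same respondents in the same order.
    """
    k = len(items)
    sum_item_variances = sum(pvariance(item) for item in items)
    # Total index score for each respondent
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum_item_variances / pvariance(totals))

# Hypothetical responses: 3 items measuring one concept, 4 respondents
items = [[2, 4, 3, 5],
         [3, 5, 2, 4],
         [2, 4, 3, 5]]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Higher values (a common rule of thumb is 0.70 or above) indicate that the items hang together well enough to be combined into one index.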
Alternate-Forms Reliability
• A procedure for testing the reliability of responses to
survey questions in which subjects’ answers are
compared after the subjects have been asked slightly
different versions of the questions or when randomly
selected halves of the sample have been administered
slightly different versions of the questions.
• If the two sets of responses are not too different,
alternate-forms reliability is established
• A similar test of reliability is Split-halves reliability
• Reliability achieved when responses to the same
questions by two randomly selected halves of a
sample are about the same.
Intraobserver and
Interobserver
Reliability
• Intraobserver/Intrarater reliability:
Consistency of ratings by an observer
of an unchanging phenomenon at two
or more points in time.
• Interobserver/Interrater reliability:
When similar measurements are
obtained by different observers rating
the same persons, events, or places.
• Intercoder reliability: When the same
codes are entered by different coders
who are recording the same data.
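A basic way to quantify intercoder reliability is percent agreement, the share of cases both coders coded identically (more refined statistics, such as Cohen's kappa, additionally correct for chance agreement). A sketch with hypothetical offense codes:

```python
# Hypothetical offense codes assigned to the same four cases
# by two independent coders
coder1 = ["violent", "property", "violent", "drug"]
coder2 = ["violent", "property", "drug", "drug"]

# Percent agreement: matching codes divided by total cases
matches = sum(a == b for a, b in zip(coder1, coder2))
agreement = matches / len(coder1)
print(agreement)  # 0.75
```

Here the coders agree on three of four cases, so intercoder agreement is 75 percent.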
Sample Planning

• The Purpose of Sampling: generate a set of


individuals or other entities that give us a
valid picture of all such individuals or
other entities
• Population: The entire set of elements
(e.g., individuals, cities, states, countries,
prisons, schools) in which we are
interested.
• Sample: A subset of elements from the
larger population.
• Elements: The individual members of the
population whose characteristics are to be
measured.
Sampling Components
and the Population

• Sampling frame: A list of the elements


of a population from which a sample
actually is selected

• Enumeration units: Units that contain


one or more elements and that are listed
in a sampling frame

• Sampling units: The units actually


selected in each stage of sampling
Evaluating Generalizability
• Sampling error: Any difference between the characteristics of a sample and the
characteristics of the population from which it was drawn. The larger the sampling error,
the less representative the sample is of the population

• Target population: A set of elements larger than or different from the population
sampled and to which the researcher would like to generalize study findings

• Census: Research in which information is obtained through the responses that all
available members of an entire population give to questions
• Representative sample: A sample that looks like the population from which it
was selected in all respects that are potentially relevant to the study. The
distribution of characteristics among the elements of a representative sample is
the same as the distribution of those characteristics among the total population. In
an unrepresentative sample, some characteristics are overrepresented or
underrepresented, and sampling error emerges
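Sampling error can be seen directly by comparing a sample statistic with the corresponding population parameter. A sketch with an invented population (all values hypothetical):

```python
import random
from statistics import mean

# Hypothetical population: ages of 1,000 inmates
random.seed(1)  # fixed seed so the sketch is repeatable
population = [random.randint(18, 70) for _ in range(1000)]

# Draw a sample and compare its mean with the population mean
sample = random.sample(population, 100)
sampling_error = abs(mean(sample) - mean(population))
print(round(sampling_error, 2))  # small, but rarely exactly zero
```

The difference between the two means is the sampling error for this draw; larger, well-drawn samples tend to produce smaller errors.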
Sampling Methods
Probability sampling methods: Sampling methods that rely on a random, or chance, selection
method so that the probability of selection of population elements is known

Nonprobability sampling methods: Sampling methods in which the probability of selection of
population elements is unknown

Nonresponse: People or other entities who do not participate in a study although they are
selected for the sample

Systematic bias: Overrepresentation or underrepresentation of some population characteristics


in a sample resulting from the method used to select the sample; a sample shaped by
systematic sampling error is a biased sample.
Random Sampling
• Simple random sampling: A method of sampling in which
every sample element is selected only on the basis of chance
through a random process
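A random number generator implements exactly this chance-based selection. A minimal sketch, assuming a hypothetical numbered sampling frame:

```python
import random

# Hypothetical sampling frame: 500 numbered case files
sampling_frame = list(range(1, 501))

# Each element has an equal, known chance of selection
sample = random.sample(sampling_frame, 25)

print(len(sample))       # 25
print(len(set(sample)))  # 25 -- drawn without replacement, no duplicates
```

Because every element in the frame has the same known probability of selection (here 25/500), the result is a probability sample.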
Multistage Cluster Sampling
• Multistage cluster sampling: Sampling in which elements
are selected in two or more stages, with the first stage being
the random selection of naturally occurring clusters and the
last stage being the random selection of elements within
clusters
• Cluster: A naturally occurring, mixed aggregate of
elements of the population
Non-Probability Sampling
• Availability sampling:
Sampling in which elements
are selected on the basis of
convenience
• Quota sampling: A
nonprobability sampling
method in which elements are
selected to ensure that the
sample represents certain
characteristics in proportion to
their prevalence in the
population
Non-Probability Sampling
• Purposive sampling: A
nonprobability sampling
method in which elements
are selected for a purpose,
usually because of their
unique position. Sometimes
referred to as judgment
sampling
Snowball
Sampling
• Snowball sampling: A
method of sampling in
which sample elements are
selected as they are
identified by successive
informants or interviewees
Generalizability in Qualitative Research
• Studying the Typical
• Multisite Studies
Midterm
• 25 Questions Total
• 18 Multiple Choice
• 2 Open Ended
• 4 True or False
• Bonus Justification
• 1 Match
• Out of 50 points
• Each question is worth 2 pts
• Bonus Questions are an extra 1 pt
Contact
information
• Ghady Hbeilini
• gxh040@shsu.edu
• Office hours: Mon. 2:00-3:00 pm;
Wed. 2:00-3:00 pm; or by
appointment
