1. Surveys:
Surveys involve collecting data from a sample of individuals using
standardized questionnaires or structured interviews.
Surveys can be conducted through various modes, including online
surveys, telephone interviews, or paper-and-pencil surveys.
Surveys are useful for collecting data on attitudes, opinions, behaviors,
and demographic characteristics of participants.
2. Experiments:
Experiments involve manipulating one or more variables to observe
their effects on another variable while controlling for potential
confounding factors.
Experiments often employ random assignment of participants to
experimental and control groups to ensure the validity of causal
inferences.
Experiments are commonly used in psychology, medicine, and social
sciences to establish cause-and-effect relationships.
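The random-assignment step described above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed procedure; the participant IDs and the 50/50 split are made-up assumptions.

```python
# Randomly assign participants to experimental and control groups.
# Minimal sketch using only the standard library; participant IDs
# and the even split are illustrative assumptions.
import random

def randomly_assign(participants, seed=None):
    rng = random.Random(seed)        # a seed makes the split reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)            # every ordering is equally likely
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

participants = [f"P{i:02d}" for i in range(1, 21)]
experimental, control = randomly_assign(participants, seed=42)
print(len(experimental), len(control))  # 10 10
```

Because assignment depends only on the shuffle, no participant characteristic can systematically influence which group a person lands in, which is what supports causal inference.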
3. Observational Studies:
Observational studies involve observing and recording behaviors,
events, or phenomena in naturalistic settings without intervention or
manipulation by the researcher.
Observational studies can be conducted in various contexts, such as
participant observation, naturalistic observation, or archival research.
Observational studies are valuable for studying behaviors, social
interactions, and phenomena that cannot be ethically or practically
manipulated in experiments.
4. Meta-analysis:
Meta-analysis involves synthesizing data from multiple independent
studies to draw conclusions or identify patterns across a body of
research.
Meta-analysis uses statistical techniques to quantitatively analyze effect
sizes and assess the overall magnitude and consistency of research
findings.
Meta-analysis is particularly useful for summarizing findings from
disparate studies and providing more robust estimates of effect sizes.
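The quantitative pooling step of a meta-analysis can be illustrated with fixed-effect inverse-variance weighting, one common technique (a random-effects model would be needed if studies vary substantially). The effect sizes and standard errors below are hypothetical, not from any real studies.

```python
# Fixed-effect inverse-variance pooling of effect sizes -- a minimal
# sketch with hypothetical study results, not a full meta-analysis.
import math

def pooled_effect(effects, std_errors):
    """Weight each study's effect by 1/SE^2, so precise studies count more."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardized mean differences (Cohen's d) from 4 studies
effects = [0.30, 0.55, 0.20, 0.45]
std_errors = [0.10, 0.15, 0.08, 0.12]
d, se = pooled_effect(effects, std_errors)
print(f"pooled d = {d:.3f}, 95% CI = [{d - 1.96*se:.3f}, {d + 1.96*se:.3f}]")
```

The pooled estimate has a smaller standard error than any single study, which is why meta-analysis yields more robust effect-size estimates.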
5. Longitudinal Studies:
Longitudinal studies involve collecting data from the same individuals
or groups over an extended period to examine changes or trends over
time.
Longitudinal studies can provide insights into developmental
trajectories, lifespan changes, and the effects of interventions or
treatments over time.
Longitudinal studies require careful planning and management to
minimize attrition and ensure the validity of longitudinal data.
6. Cross-sectional Studies:
Cross-sectional studies involve collecting data from a diverse sample of
individuals or groups at a single point in time.
Cross-sectional studies provide a snapshot of a population's
characteristics, behaviors, or attitudes at a specific moment, allowing
for comparisons across different groups or variables.
Cross-sectional studies are commonly used in epidemiology, public
health, and social sciences to examine prevalence, correlates, and
associations among variables.
Data analysis is a critical phase in quantitative research, in which researchers
examine the collected numerical data to draw conclusions, identify patterns, test
hypotheses, and answer research questions. Various techniques and approaches are
used, beginning with checks on the reliability and validity of the measures
involved.
Reliability (quantitative)
Inter-rater reliability
It is concerned with the consistency of observations and ways of recording data
across the people who are involved (the ‘raters’), in studies where there is more than
one.
For now, simply think of inter-rater reliability in terms of exams and graders. To
grade an exam reliably, the grades given by two or more examiners can be
compared: the more they agree on the same exams, the higher the inter-rater
reliability.
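The grader example can be put in numbers: percent agreement counts how often two raters give the same grade, and Cohen's kappa (an addition here, not part of the notes above) corrects that figure for agreement expected by chance. The grades below are made up.

```python
# Percent agreement and Cohen's kappa for two graders -- a minimal
# sketch with made-up grades, not data from any real study.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    # Chance agreement: product of each rater's marginal proportions,
    # summed over all grade categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] / n * counts_b[c] / n
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

grades_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
grades_b = ["pass", "pass", "fail", "fail", "fail", "pass"]
print(round(percent_agreement(grades_a, grades_b), 3))  # 0.833
print(round(cohens_kappa(grades_a, grades_b), 3))       # 0.667
```

Here the raters agree on five of six exams; kappa is lower than raw agreement because some of that agreement would occur even if both graded at random.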
Validity
- Accuracy: is the scale showing the CORRECT weight?
It involves…
- Construct validity: does the chosen measure (e.g. age) really capture the construct of interest (e.g. physical ability)?
- Content validity: do the items (e.g. on political position) cover the whole concept (e.g. a "green attitude")?
- Internal validity: are the causal relationships indeed causal?
- External validity: are the causal relationships generalizable to other settings?
Validity
Measurement validity is concerned with whether a measure of a concept really
measures that concept (see Key concept 7.4). Recall the weighing scales we
discussed earlier under reliability: they could be consistent in their measurement
yet always over-report weight by 2 kilograms. Such scales would be reliable, in
that they are consistent, but not valid, because they do not correspond to the
accepted conventions for measuring weight.
When people argue about whether a person’s IQ score really measures or reflects that
person’s level of intelligence, they are raising questions about the measurement
validity of the IQ test in relation to the concept of intelligence.
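The weighing-scales example can be made concrete in a few lines; the 70 kg true weight is made up.

```python
# Reliable but not valid: a scale that always over-reports by 2 kg,
# as in the weighing-scales example above. The weight is made up.
def biased_scale(true_weight_kg):
    return true_weight_kg + 2.0   # consistent, systematic +2 kg bias

readings = [biased_scale(70.0) for _ in range(3)]
print(readings)             # [72.0, 72.0, 72.0] -- perfectly consistent
print(readings[0] - 70.0)   # 2.0 -- but 2 kg off the true value
```

Repeated readings agree exactly (reliability) while every reading misses the true value (no validity), which is why consistency alone cannot establish that a measure is valid.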
Face validity
At the very minimum, a researcher developing a new measure should establish that it
has face validity—that is, that the measure apparently reflects the content of the
concept explored. Face validity might be identified by asking other people, preferably
with experience or expertise in the field, whether the measure appropriately
represents the concept that is the focus of attention. Establishing face validity is an
intuitive process.
Predictive validity
Another possible test for the validity of a new measure is predictive validity, in
which the researcher uses a future criterion measure, rather than a current one as is
the case for concurrent validity. With predictive validity, future levels of absenteeism
would be used as the criterion against which the validity of a new measure of job
satisfaction would be examined. Research in focus 7.7 provides an example of testing
for predictive validity.
Construct validity
1. Definition of Constructs:
Constructs are abstract concepts or variables that cannot be directly
observed but can be inferred from observable indicators or behaviors.
Examples of constructs include intelligence, anxiety, self-esteem, and
job satisfaction.
2. Types of Construct Validity:
Content Validity: Content validity refers to the extent to which the
items or indicators included in a measurement instrument represent
the entire domain of the construct. It involves ensuring that the
measurement instrument covers all relevant aspects of the construct.
Criterion-Related Validity: Criterion-related validity assesses the
degree to which the scores obtained from a measurement instrument
correlate with scores from another established measure (concurrent
validity) or predict future outcomes (predictive validity) related to the
same construct.
Convergent and Discriminant Validity: Convergent validity refers to
the degree to which scores obtained from different measures that
theoretically should be related are indeed positively correlated.
Discriminant validity, on the other hand, examines the extent to which
scores from measures that should not be related are not correlated.
Construct-Related Nomological Validity: This form of validity
assesses whether the relationships between constructs conform to
theoretical expectations. It involves examining the relationships
between the construct of interest and other related constructs as
predicted by theory.
There are a number of different ways of evaluating the measures that are used to
capture concepts. In quantitative research it is important that measures are valid and
reliable. When new measures are developed, these should be tested for both validity
and reliability. In practice, this often involves fairly straightforward but limited steps
to ensure that a measure is reliable and/or valid, such as testing for internal
reliability when a multiple-indicator measure has been devised (as in Research in
focus 7.8) and examining face validity.
Although reliability and validity can be easily distinguished in terms of the analysis
they involve, they are related because validity presumes reliability: if your measure is
not reliable, it cannot be valid. This point can be made with respect to each of the
three criteria of reliability that we have discussed:
If the measure is not stable over time, it cannot be providing a valid measure: a
fluctuating measure cannot be tapping the concept it is supposed to measure, and
may be measuring different things on different occasions.
If a measure lacks internal reliability, it means that a multiple-indicator
measure is actually measuring two or more different things, so the measure
cannot be valid.
If there is a lack of inter-rater consistency, it means that observers do not
agree on the meaning of what they are observing, which in turn means that a
measure cannot be valid.
Random sampling and probability sampling are two common methods used in
research to select participants from a population. Both aim to give every member of
the population a known, non-zero chance of being selected for inclusion in the
sample (an equal chance, in the case of simple random sampling), thus increasing
the generalizability of the findings. However, there are
differences in how these methods are implemented:
1. Random Sampling:
Random sampling involves selecting participants from a population in
such a way that each member has an equal probability of being chosen.
This method is often used when the population is relatively
homogeneous and well-defined.
Random sampling techniques include simple random sampling,
systematic random sampling, and cluster random sampling.
Simple Random Sampling: Every member of the population has an
equal chance of being selected, and each selection is independent of
every other selection. This can be done using random number
generators or random sampling tables.
Systematic Random Sampling: Researchers select every nth member
from the population after randomly determining a starting point. For
example, if the population size is 1000 and the desired sample size is
100, researchers might select every 10th person from a list of the
population.
Cluster Random Sampling: The population is divided into clusters (e.g.,
geographic areas, schools, households), and then a random sample of
clusters is selected. Data are then collected from all members within the
selected clusters.
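The three techniques above can be sketched with Python's standard library. The population of 1,000 numbered people and the cluster size of 50 are illustrative assumptions.

```python
# Minimal sketches of simple, systematic, and cluster random sampling,
# run on a hypothetical population of 1,000 numbered people.
import random

rng = random.Random(0)                  # fixed seed for reproducibility
population = list(range(1000))

# Simple random sampling: every member has an equal chance of selection
simple = rng.sample(population, 100)

# Systematic random sampling: every nth member after a random start
n = len(population) // 100              # sampling interval: 10
start = rng.randrange(n)
systematic = population[start::n]       # yields exactly 100 people

# Cluster random sampling: randomly pick whole clusters, then collect
# data from every member of the chosen clusters
clusters = [population[i:i + 50] for i in range(0, 1000, 50)]  # 20 clusters
chosen = rng.sample(clusters, 4)
cluster_sample = [person for cluster in chosen for person in cluster]

print(len(simple), len(systematic), len(cluster_sample))  # 100 100 200
```

Note the practical trade-off visible in the code: cluster sampling needs a list of clusters rather than a full list of individuals, which is often cheaper to obtain.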
2. Probability Sampling:
Probability sampling is a broader term that encompasses random
sampling techniques as well as other sampling methods that involve
the use of probability theory to determine the likelihood of selection.
In addition to random sampling techniques, probability sampling
methods include stratified sampling, proportional sampling, and
multistage sampling.
Stratified Sampling: The population is divided into subgroups (strata)
based on certain characteristics (e.g., age, gender, income), and then
random samples are selected from each stratum. This ensures that each
subgroup is represented proportionally in the sample.
Proportional Sampling: Similar to stratified sampling, but the sample
sizes from each stratum are determined proportionally to their
representation in the population.
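Proportional stratified sampling, as described above, can be sketched as follows. The age bands and stratum sizes are made-up assumptions.

```python
# Proportional stratified sampling -- a minimal sketch with a
# hypothetical population grouped into made-up age bands.
import random

def stratified_sample(strata, total_n, seed=None):
    """Sample each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        k = round(total_n * len(members) / pop_size)  # proportional quota
        sample[name] = rng.sample(members, k)
    return sample

strata = {
    "18-34": [f"young_{i}" for i in range(600)],
    "35-54": [f"mid_{i}" for i in range(300)],
    "55+":   [f"older_{i}" for i in range(100)],
}
sample = stratified_sample(strata, total_n=50, seed=7)
print({k: len(v) for k, v in sample.items()})
# {'18-34': 30, '35-54': 15, '55+': 5}
```

Because each stratum's quota mirrors its population share, the sample reproduces the population's age structure by construction, rather than by chance.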
Qualitative (sampling)
1. Purposive Sampling:
Purposive sampling, also known as judgmental or selective sampling,
involves selecting participants based on specific criteria or
characteristics relevant to the research question.
Researchers intentionally choose participants who possess certain
attributes or experiences that are deemed essential for addressing the
research objectives.
Purposive sampling allows researchers to target individuals or groups
who can provide rich and relevant information, maximizing the depth
and richness of the data collected.
Example: In a study examining the experiences of frontline healthcare
workers during the COVID-19 pandemic, researchers might purposively
sample healthcare professionals with diverse roles (e.g., nurses, doctors,
paramedics) and experiences (e.g., working in COVID wards, vaccination
centers) to capture a comprehensive range of perspectives.
2. Snowball Sampling:
Snowball sampling, also known as chain referral sampling, involves
identifying initial participants who meet the research criteria and then
asking them to refer other potential participants.
As the study progresses, new participants are recruited through
referrals from existing participants, creating a "snowball" effect.
Snowball sampling is particularly useful for accessing hard-to-reach or
marginalized populations, as well as for studying phenomena where
participant networks are relevant.
Example: In a study exploring the experiences of LGBTQ+ individuals in
a conservative community, researchers might start by recruiting a few
LGBTQ+ individuals known to them and then ask these participants to
refer others from their social networks who may be willing to
participate.
3. Convenience Sampling:
Convenience sampling involves selecting participants based on their
availability and accessibility to the researcher.
Researchers typically recruit participants who are convenient to access
or readily available, often using methods such as approaching
individuals in public places, soliciting volunteers from specific settings,
or recruiting participants from existing organizational networks.
Convenience sampling is quick, cost-effective, and convenient but may
introduce bias, as the sample may not be representative of the
population of interest.
Example: In a study conducted at a university, researchers might recruit
undergraduate students as participants by posting recruitment flyers
around campus or sending out email invitations to students enrolled in
specific courses. While this method is convenient for the researcher, it
may not capture the perspectives of non-student populations.
Data collection
In qualitative research, data collection methods aim to gather rich, in-depth
insights into participants' experiences, perspectives, and behaviors. Two common
methods for collecting qualitative data are semi-structured interviews and
participant observation:
1. Semi-Structured Interviews:
Explanation: Semi-structured interviews are guided conversations
between the researcher and participants, where the researcher has a
predefined set of open-ended questions but also allows flexibility for
follow-up questions and probing based on participants' responses. This
approach combines the benefits of structured interviews (ensuring
consistency across interviews) with the flexibility of unstructured
interviews (allowing for exploration of unexpected topics).
Example: Suppose a researcher is conducting a study on the
experiences of first-generation college students adjusting to university
life. They may use semi-structured interviews to explore various aspects
of the participants' experiences, such as academic challenges, social
interactions, and support systems. The researcher might start with
broad questions like "Can you tell me about your transition to college?"
and then ask follow-up questions based on the participant's responses,
such as "How did you navigate the academic workload in your first
semester?"
2. Participant Observation:
Explanation: Participant observation involves the researcher immersing
themselves in the natural setting or context of the study as an active
participant or observer. By observing participants' behaviors,
interactions, and social dynamics firsthand, the researcher gains a
deeper understanding of the phenomena under study. Participant
observation often involves detailed field notes or journaling to record
observations and reflections.
Example: Imagine a researcher studying group dynamics in a corporate
team environment. They might engage in participant observation by
joining the team as a member or observer, attending meetings, and
participating in team activities. Throughout the observation period, the
researcher might take detailed notes on communication patterns,
leadership dynamics, and decision-making processes. For example, they
might note instances of dominant personalities overshadowing quieter
team members during brainstorming sessions.
Data analysis
1. Thematic Coding:
Explanation: Thematic coding involves systematically identifying,
organizing, and analyzing themes or patterns within qualitative data.
Researchers code segments of text (e.g., interview transcripts, field
notes) based on recurring ideas, concepts, or topics, which are then
grouped into overarching themes. Thematic coding allows researchers
to identify key patterns, trends, and meanings in the data.
Example: Let's say a researcher is conducting interviews with cancer
survivors to explore their experiences with coping strategies. After
transcribing the interviews, the researcher reads through the data and
identifies recurring topics such as social support, emotional coping
mechanisms, and changes in lifestyle. The researcher then codes
segments of text related to each topic and identifies broader themes
such as "Social Support Networks," "Coping Strategies," and
"Adaptation to Change."
2. Narrative Coding:
Explanation: Narrative coding focuses on analyzing the structure,
content, and meaning of stories or narratives shared by participants.
Researchers examine the narrative elements (e.g., plot, characters,
setting) and interpret how individuals construct and convey their
experiences through storytelling. Narrative coding helps researchers
understand how individuals make sense of their lived experiences and
construct coherent narratives.
Example: Suppose a researcher is studying the narratives of refugees
fleeing conflict zones. They analyze interview transcripts or written
narratives from refugees, paying attention to how individuals describe
their journeys, challenges faced, and hopes for the future. The
researcher identifies narrative elements such as protagonists, plot
developments, and themes of resilience or trauma. By analyzing these
narratives, the researcher gains insights into the refugees' experiences
and the meanings they attribute to their journeys.
3. Content Analysis:
Explanation: Content analysis involves systematically analyzing the
content of textual data to identify patterns, themes, or trends.
Researchers categorize and quantify specific elements of the text (e.g.,
words, phrases, concepts) to uncover underlying meanings or
relationships. Content analysis can be deductive (applying pre-
established categories) or inductive (allowing categories to emerge
from the data).
Example: Let's consider a study analyzing media coverage of climate
change. Researchers collect articles from newspapers and online
sources and systematically analyze the content to identify key themes
and frames related to climate change discourse. They might code
articles based on categories such as "Causes of Climate Change,"
"Impacts on the Environment," and "Policy Responses." By quantifying
the frequency of specific themes and analyzing their representation in
the media, researchers gain insights into public perceptions and
discourses surrounding climate change.
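The deductive, quantifying side of content analysis can be sketched as keyword counting against a pre-established coding scheme. The codebook and article snippets below are made up; a real codebook would be developed and validated by the researchers.

```python
# Deductive content analysis sketch: counting pre-defined category
# keywords in a tiny corpus of made-up article snippets.
from collections import Counter

# Hypothetical coding scheme: category -> indicator keywords
codebook = {
    "Causes": ["emissions", "fossil", "deforestation"],
    "Impacts": ["flooding", "drought", "heatwave"],
    "Policy": ["tax", "treaty", "regulation"],
}

articles = [
    "rising emissions and deforestation drive warming",
    "record heatwave and drought hit the region",
    "a new carbon tax and emissions regulation proposed",
]

counts = Counter()
for text in articles:
    words = text.lower().split()
    for category, keywords in codebook.items():
        counts[category] += sum(words.count(k) for k in keywords)

print(dict(counts))  # {'Causes': 3, 'Impacts': 2, 'Policy': 2}
```

In practice researchers would also check inter-coder reliability and refine the categories; the point here is only that coded frequencies turn textual data into comparable numbers.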
The following are methods of evaluating quality that sit between quantitative and
qualitative research criteria.
External reliability is taken to refer to the degree to which a study can be replicated.
This is a difficult criterion to meet in qualitative research because, as LeCompte and
Goetz recognize, it is impossible to ‘freeze’ a social setting and the circumstances of
an initial study to make it replicable in the sense we discussed in Chapter 7.
Researchers are increasingly conscious of the impact of both their own and their
participants' values and social positions on the research process, and it may be
impossible to reproduce the specific characteristics of a project.
Internal reliability is the extent to which, when there is more than one observer,
members of the research team agree about what they see and hear. This is similar to
inter-rater reliability (see Key concept 7.3).
Trustworthiness
KEY CONCEPT 16.4: What is triangulation?
This emphasis on multiple accounts of social reality is especially clear in the criterion of
credibility. After all, if there can be several possible accounts of an aspect of social reality, it
is the credibility of the account that determines whether it is acceptable to others. There are a
number of ways to establish credibility: making sure there is prolonged engagement ‘in the
field’; analysing negative (divergent) cases; and the triangulation of data, analysis, and
findings.
Triangulation may also include submitting research findings to the members of the social
world who were studied so that they can confirm that the investigator has correctly
understood what they saw and/or heard. This technique is often referred to as respondent
validation or member validation.
Lincoln and Guba propose the idea of dependability as a parallel to reliability in quantitative
research. They suggest that researchers should adopt an ‘auditing’ approach in order to
establish the merit of research. This idea requires researchers to keep an audit trail of
complete records for all phases of the research process, including problem formulation,
selection of research participants, fieldwork notes, interview transcripts, data analysis
decisions, and so on. Keeping these records allows peers to act as auditors, possibly during
the course of the research and certainly at the end, checking how far appropriate research
procedures have been followed. This would also include assessing the degree to which
theoretical inferences can be justified.
Research designs
Which research design you use depends on your research question:
If you are looking at the impact of an intervention, then you might consider conducting an
experiment;
if you are interested in social change over time, then a longitudinal design might be
appropriate.
Research questions that are concerned with particular communities, organizations, or groups
might use a case study design,
while describing current attitudes or behaviours at a single point in time could use a cross-
sectional design.
Experimental and quasi-experimental research designs are both used in empirical
research to investigate cause-and-effect relationships. The sections below explain
several other common designs, beginning with cross-sectional research, along with
examples:
Cross-sectional research design
1. Explanation:
Snapshot in Time: Cross-sectional research collects data from
participants at a single point in time, providing a snapshot of their
characteristics, attitudes, behaviors, or outcomes.
Multiple Variables: Researchers collect data on multiple variables of
interest simultaneously, allowing for the examination of associations,
correlations, or differences among variables.
No Follow-Up: Unlike longitudinal studies that track the same
participants over time, cross-sectional studies do not involve follow-up
assessments, making them less resource-intensive and time-
consuming.
2. Examples:
Health Surveys: A health survey conducted to assess the prevalence of
various health conditions and risk factors in a community is an example
of a cross-sectional study. Researchers administer questionnaires or
conduct interviews with a sample of participants to collect data on
demographics, health behaviors, medical history, and current health
status. The data collected provides insights into the distribution of
health outcomes and risk factors within the population at a specific
point in time.
Market Research: Market researchers often use cross-sectional surveys
to gather information about consumers' preferences, purchasing
behaviors, and demographic characteristics. For example, a company
conducting a cross-sectional study may survey customers at a shopping
mall to understand their preferences for different product features,
brands, or pricing strategies. The data collected helps inform marketing
strategies and product development decisions.
Educational Assessments: Cross-sectional research designs are
commonly used in educational research to assess students' academic
achievement, attitudes, and learning outcomes. Researchers administer
standardized tests or surveys to a sample of students from different
grade levels or educational settings to examine differences in academic
performance, motivation, or learning styles. The findings from cross-
sectional studies can inform educational policies, curriculum
development, and instructional practices.
Longitudinal research design
1. Explanation:
Tracking Change Over Time: Longitudinal research design involves
collecting data from participants at multiple time points, enabling
researchers to track changes, trends, or patterns in variables of interest
over an extended period.
Follow-Up Assessments: Participants are typically assessed or
surveyed at regular intervals (e.g., annually, biennially) to measure
changes in their characteristics, behaviors, or outcomes over time.
Cohort Comparisons: Longitudinal studies may involve following a
specific cohort (group of individuals born or experiencing an event
during the same time period) over time or comparing multiple cohorts
to examine cohort effects or generational differences.
2. Examples:
Panel Studies: Panel studies are longitudinal studies that follow the
same individuals or households over time, collecting data at multiple
waves of assessment. For example, the National Longitudinal Survey of
Youth (NLSY) in the United States tracks a nationally representative
sample of individuals from adolescence into adulthood, collecting data
on employment, education, family dynamics, and health outcomes at
regular intervals.
Birth Cohort Studies: Birth cohort studies track individuals born during
a specific time period (birth cohort) to examine how factors such as
prenatal exposures, early life experiences, and socio-economic
conditions influence health and development over the life course. For
instance, the Avon Longitudinal Study of Parents and Children
(ALSPAC) in the UK follows a cohort of children born in the early 1990s,
collecting data on their physical, cognitive, and social development
from birth through adolescence and adulthood.
Panel Surveys: Panel surveys are longitudinal studies that follow the
same sample of participants over time, collecting data on various topics
such as employment, income, political attitudes, or social networks. For
example, the Panel Study of Income Dynamics (PSID) in the United
States has been tracking a nationally representative sample of
individuals and families since 1968, providing insights into economic
mobility, intergenerational transfers, and household dynamics over
several decades.
A case study research design is a qualitative research method that involves an in-
depth examination of a single case or a small number of cases to gain a
comprehensive understanding of a particular phenomenon. Case studies are
particularly useful for exploring complex, contextually rich, and understudied topics.
Here's an explanation of case study research design along with examples:
1. Explanation:
In-Depth Exploration: Case studies involve a detailed investigation of
a specific case, which could be an individual, group, organization,
community, or event. Researchers collect and analyze data from
multiple sources, such as interviews, observations, documents, and
artifacts, to provide a holistic view of the case.
Contextual Understanding: Case studies emphasize understanding
the unique context and dynamics surrounding the case under
investigation. Researchers pay close attention to the historical, social,
cultural, and environmental factors that shape the case, allowing for
rich, nuanced insights.
Multiple Data Sources: Case studies often involve triangulating data
from various sources to enhance validity and reliability. Researchers
may use multiple methods of data collection, such as interviews,
observations, document analysis, and archival research, to capture
different perspectives and dimensions of the case.
2. Examples:
Clinical Case Study: In psychology and medicine, case studies are
commonly used to examine individual patients' experiences, symptoms,
diagnoses, and treatment outcomes. For example, a psychologist might
conduct a case study of a patient with a rare psychological disorder to
explore the symptoms, underlying causes, and treatment approaches.
The case study provides valuable insights into the unique
manifestations and treatment challenges of the disorder.
Business Case Study: In business and management research, case
studies are often used to analyze organizations' strategies, practices,
and performance. For instance, a researcher might conduct a case study
of a successful company to investigate its business model, competitive
advantage, leadership style, and organizational culture. The case study
helps identify key factors contributing to the company's success and
extract lessons for other organizations.
Comparative research design involves comparing two or more groups, populations,
or variables to identify similarities, differences, patterns, or relationships between
them. This approach allows researchers to examine how different factors may
influence outcomes or phenomena of interest across different contexts. Here's an
explanation of comparative research design along with examples:
1. Explanation:
Comparing Groups or Variables: Comparative research involves
comparing two or more groups, populations, or variables on one or
more dimensions of interest.
Understanding Differences or Similarities: The goal of comparative
research is to identify and understand differences, similarities, patterns,
or relationships between the groups or variables under study.
Contextual Understanding: Comparative research provides insights
into how factors such as culture, environment, or policy influence
outcomes or phenomena across different settings or populations.
2. Examples:
Cross-Cultural Studies: Comparative research is commonly used in
cross-cultural studies to compare behaviors, attitudes, or beliefs across
different cultural groups. For example, researchers may compare
parenting practices, communication styles, or social norms between
Western and non-Western cultures to understand cultural differences in
child development.
Policy Analysis: Comparative research is also used to evaluate the
effectiveness of policies or interventions across different regions or
jurisdictions. For instance, researchers may compare the
implementation and outcomes of healthcare policies in different
countries to identify best practices and areas for improvement.
Educational Research: Comparative research is prevalent in
educational research to compare educational systems, teaching
methods, or student outcomes across different countries or educational
settings. Researchers may compare academic achievement, graduation
rates, or teaching practices between public and private schools to
assess the impact of educational policies.
Social Sciences: Comparative research is widely used in the social
sciences to examine social phenomena, such as inequality, poverty, or
crime, across different groups or regions. For example, researchers may
compare income inequality levels between urban and rural areas to
understand the socioeconomic disparities within a society.