Rigour in Research &

Qualitative Data Analysis

NURS 3515
Week 9
November 2023
Learning Outcomes (1 of 4)
 Discuss the purposes of reliability and validity.
 Define reliability.
 Discuss the concepts of stability, equivalence,
and homogeneity as they relate to reliability.
 Compare the estimates of reliability.
 Define validity.
 Compare content, criterion-related, and construct validity.
Learning Outcomes (2 of 4)
 Discuss how measurement error can affect
the outcomes of a research study.
 Identify the criteria for critiquing the reliability
and validity of measurement tools.
 Use the critiquing criteria to evaluate the
reliability and validity of measurement tools.
 Understand how to evaluate the quality of
qualitative research
Learning Outcomes (3 of 4)
 Discuss the purpose of credibility, auditability,
and fittingness.
 Apply the critiquing criteria to evaluate the
rigour in a qualitative report.
 Discuss how evidence related to research
rigour contributes to clinical decision making.
Learning Outcomes (4 of 4)
 Examine the processes of qualitative data analysis.
 Outline the steps common to qualitative data
analysis.
 Describe how data are reduced to meaningful
units (themes).
 Summarize the process of identifying themes and
categories and the relationships between them.
 Compare the process of creating and presenting interpretations from select qualitative methods.
 Assess the integrity of data analysis from a
qualitative study.

Rigour in Research (1 of 2)

 Ideally, research results are transferable and generalizable.
 Rigour is necessary for this to occur.
 Rigour = the quality, believability, and
trustworthiness of the study findings.

Rigour in Research (2 of 2)

 Quantitative rigour—psychometric measures used to ensure instrument reliability and validity. (Psychometrics generally refers to specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities.)

 Qualitative rigour—shown by credibility, auditability, and fittingness.
Reliability and Validity Broadly Defined
• Reliability = Consistency
• Validity = Accuracy
• Reliability and validity are assessed both for individual measures and at the study level (in quantitative research).
• Reliability: the extent to which the instrument yields the same results on repeated measures: stability, consistency, accuracy, equivalence.
Reliability and Validity of Measurement
• Measurement reliability: the consistency of a measure.
• Measurement validity: the ability of an instrument to measure accurately what it is intended to measure.
• A study or measure CAN be reliable but not valid; it CANNOT be valid without being reliable.

• To assess these, you must first clearly define:


• Constructs: A construct is an abstract, theoretical concept, theme, or idea based on empirical observations; it cannot be directly observed or measured. Psychologists develop and research constructs to understand individual and group differences; common examples include motivation, pain, self-esteem, and happiness. "Higher level of abstraction" means the concept is not concrete or tangible: it is not something you can touch or see directly, but a more general, theoretical idea. When researchers define a construct at this higher level of abstraction, they work to articulate a precise, agreed-upon definition of the abstract concept they want to investigate, which in turn guides how its reliability and validity are assessed.

• Operational Definitions: The explicit and clear explanation of a variable in terms of how it
is measured or manipulated. It refers to a detailed explanation of the technical terms and
measurements used during data collection.
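As a loose illustration of the construct-versus-operational-definition distinction, the pairing can be written down as a small data structure. This is a hypothetical sketch; the construct and the instrument described are invented for the example:

import dataclasses

@dataclasses.dataclass
class Measure:
    construct: str               # the abstract concept being studied
    operational_definition: str  # exactly how the concept is measured
    score_range: tuple           # values the instrument can yield

# Hypothetical example: "anxiety" operationalized as the total score
# on a 20-item self-report scale, each item rated 1-4.
anxiety = Measure(
    construct="anxiety",
    operational_definition="total score on a 20-item self-report scale, items rated 1-4",
    score_range=(20, 80),
)
print(anxiety.operational_definition)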


How to Measure Your Constructs


• Qualitative measures: Non-numerical

• Quantitative measures: Numerical

Reliability (Quantitative)

• Extent to which an instrument gives the same results on repeated measures:
• Stability: test–retest reliability, parallel- or alternate-form reliability
• Internal consistency (homogeneity): item-to-total correlation, split-half reliability, Kuder–Richardson coefficient, Cronbach's alpha
• Accuracy
• Equivalence (sameness): parallel- or alternate-form reliability, interrater reliability

Reliability: Consistency
• Consistency—the absence of variation in measuring a stable attribute for an individual. (An attribute is a characteristic or quality of the person or thing under study; smoking and drinking habits are both examples of attributes.)

• Reliability assessments involve computing a reliability coefficient (a numerical value between 0.0 and 1.0 representing the accuracy of a test, research instrument, or rating).
• Most reliability coefficients are based on correlation coefficients. A reliability coefficient expresses the relationship between:

• Error variance: the extent to which the variance in test scores is attributable to error rather than to a true measure of behaviours. It is made up of individual differences between participants, experimenter errors, and equipment variations. When the error variance in a measurement instrument is low, the reliability coefficient is closer to 1; the closer to 1 the coefficient is, the more reliable the tool. Sources of error include:
1. Transient human conditions, such as hunger, fatigue, health, lack of motivation, and anxiety, which are often beyond the awareness and control of the examiner.
2. Variations in the measurement procedure, such as misplacement of the blood pressure cuff or not waiting a specified time before taking the blood pressure.
A systematic error (constant error) is a measurement error attributable to relatively stable characteristics of the study population that may bias their behaviour, cause incorrect instrument calibration, or both (e.g., level of education, socioeconomic status, social desirability, response patterns).

• True variance (actual variation): naturally occurring variability within or among research participants. This variance is inherent in the nature of individual participants and is not due to measurement error.

• Observed score: the score derived from a set of items, consisting of the true score plus error (Figure 14.2). The error may be either chance (random) error or systematic error. A chance (random) error is difficult to control (e.g., a respondent's anxiety at the time of testing); such errors are unsystematic and unpredictable and thus cannot be corrected.

• The reliability coefficient (r) can range from 0 to 1:
0 = no relationship
1 = perfect relationship (better reliability)
e.g., 0.89 = strong relationship
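The relationships above can be summarized in standard classical test theory notation (a general formulation, not taken verbatim from the course text):

\[
X = T + E, \qquad
r_{XX} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
\]

Here \(X\) is the observed score, \(T\) the true score, and \(E\) the error; reliability \(r_{XX}\) is the share of observed-score variance \(\sigma_X^2\) that is true variance \(\sigma_T^2\). As the error variance \(\sigma_E^2\) shrinks toward 0, \(r_{XX}\) rises toward 1, which is exactly the pattern described above.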

Reliability: Stability
• Replication approaches
• Test–retest reliability: administration of the same measure to the same
people on two occasions
• Interrater reliability: measurements by two or more observers or raters
using the same instrument or measurements by the same observer or
rater on two or more occasions
• Parallel test reliability: measurements of the same attribute using
alternate versions of the same instrument, with the same people.
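A minimal sketch of how a stability (test–retest) coefficient can be computed, assuming the same instrument was administered twice to the same people; the scores below are invented for illustration:

import numpy as np

# Total scores for the same five participants on two occasions
time1 = np.array([12, 18, 9, 22, 15])
time2 = np.array([13, 17, 10, 21, 16])

# The Pearson correlation between the two administrations serves
# as the test-retest reliability coefficient.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # close to 1 suggests a stable measure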
Internal Consistency (Homogeneity)
• Cronbach's alpha (most commonly used): a measure of how closely related a set of items are as a group. Each item in the scale is simultaneously compared with the others, and a total score is then used to analyze the data. In a Likert-type scale format, the participant responds to a question on a scale of varying degrees of intensity between two extremes, anchored by responses ranging from, for example, "strongly agree" to "strongly disagree" or from "most like me" to "least like me." The points between the two extremes may range from 1 to 5 or 1 to 7. Participants are asked to circle the response that most closely represents what they believe.

• Split-half reliability: administer the first half of the items and the second half of the items to the same respondents, then correlate the scores on the two halves; a measure of internal consistency.
• Item-to-total correlation: a measure of the relationship between each scale item and the total scale. When item-to-total correlations are calculated, a correlation for each item on the scale is generated.

• Kuder–Richardson coefficient: used with a dichotomous response format, one in which the answer to a question is either "yes" or "no," or either "true" or "false." The technique yields a correlation that is based on the consistency of responses to all items of a single form of a test that is administered once.
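As a rough illustration, Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). The Likert-style data below are invented; applied to 0/1 (dichotomous) items, the same formula yields the Kuder–Richardson (KR-20) coefficient:

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering a 4-item Likert scale (1-5)
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 3, 4],
]
print(round(cronbach_alpha(scores), 2))  # near 1 indicates high internal consistency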



Validity: Accuracy
• Refers to whether a measurement instrument accurately
measures what it is supposed to measure.
• Precision
• Accuracy
• Consistency
The three major kinds of validity:
1. Content validity: When an investigator is developing a tool and issues of content validity arise, the concern is whether the measurement tool and the items it contains are representative of the universe of content that the researcher intends to measure. The researcher begins by defining the concept and identifying the dimensions that are the components of the concept.

2. Criterion-related validity: the degree of relationship between the participant's performance on the measurement tool and the participant's actual behaviour. The criterion is usually a second measure, used to assess the same concept being studied.

3. Construct validity: concerns whether a test really measures what it is supposed to measure. To establish it, researchers examine the theory behind the test and check whether the expected relationships between different concepts match up with real-world evidence.

Face Validity
Refers to whether the instrument looks as though it is measuring the appropriate construct; a type of validity in which the instrument intuitively gives the appearance of measuring the concept.

• Based on judgment, with no objective criteria for assessment: face validity is the extent to which a test or measure appears, on the surface, to measure what it is intended to measure. It is a subjective judgment about whether the test looks like it is measuring the right thing.
Construct Validity
• Concerned with these questions (for a construct such as self-efficacy):
• What is this instrument really measuring?
• Does it adequately measure the construct of interest?
• To establish this type of validity, the researcher attempts to validate a body of
theory underlying the measurement and testing of the hypothesized relationships.
Empirical testing confirms or fails to confirm the relationships that would be predicted
among concepts and, as such, provides more or less support for the construct validity
of the instruments measuring those concepts.
• In plain terms, researchers want to see whether the expected relationships between concepts, as suggested by the theory, show up in real-world tests. If the actual results match what the theory predicts, that gives more confidence that the test validly measures those concepts; a mismatch raises doubts about how well the test captures what it is meant to.

Methods of Assessing Construct Validity
• Hypothesis-testing validity: the investigator uses the theory or concept underlying the measurement instrument to validate the instrument: first by developing hypotheses about the behaviour of individuals with varying scores on the measure; then by gathering data to test the hypotheses; and finally, on the basis of the findings, by making inferences about whether the rationale underlying the instrument's construction is adequate to explain the findings. Example: (1) women who had previously breastfed would have higher breastfeeding self-efficacy than those who had not, and (2) women who had depressive symptoms would have lower breastfeeding self-efficacy.

• Convergent validity: exists when two or more tools intended to measure the same construct are administered to participants and are found to be positively correlated.
• Known-groups validity: identifies two groups of individuals expected to score
extremely high or extremely low in the characteristic being measured by the instrument.

• Divergent validity (discriminant validity): differentiates one construct from others that may be similar.
• Multitrait–multimethod matrix method (MTMM): examines relationships between instruments intended to measure the same construct and between those intended to measure different constructs (see the sketch after this list).
• Structural validity
• Cross-cultural validity
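As promised above, here is a loose numeric sketch of an MTMM-style reading, using an invented correlation matrix for two traits each measured by two methods; the traits, methods, and all numbers are hypothetical:

import numpy as np

# Invented correlations among four measures: two traits (anxiety, depression),
# each measured by two methods (self-report, clinician rating).
labels = ["anx_self", "anx_clin", "dep_self", "dep_clin"]
r = np.array([
    [1.00, 0.75, 0.30, 0.25],  # anx_self
    [0.75, 1.00, 0.28, 0.32],  # anx_clin
    [0.30, 0.28, 1.00, 0.70],  # dep_self
    [0.25, 0.32, 0.70, 1.00],  # dep_clin
])

# Convergent evidence: same trait, different methods -> high correlations
print("convergent:", r[0, 1], r[2, 3])
# Discriminant (divergent) evidence: different traits -> low correlations
print("discriminant:", r[0, 2], r[1, 3])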
Content Validity
• Represents the universe or domain of content of a given behaviour.
• The degree to which an instrument has an appropriate sample of items for the construct being measured. The researcher begins by defining the concept and identifying the dimensions that are its components; items that reflect the concept and its dimensions are then formulated.

• Relevance
• Comprehensiveness
• Balance
• Evaluated by expert evaluation, via the content validity index (CVI)
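A hedged sketch of how an item-level content validity index (I-CVI) is often computed: each expert rates an item's relevance on a 4-point scale, and the I-CVI is the proportion of experts who rate the item 3 or 4. The ratings here are invented:

def item_cvi(ratings):
    # Proportion of expert ratings of 3 or 4 on a 1-4 relevance scale
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Five experts rate one scale item (1 = not relevant ... 4 = highly relevant)
print(item_cvi([4, 3, 4, 4, 2]))  # 0.8; values of roughly 0.78+ are a commonly cited benchmark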

Criterion-Related Validity
• The degree to which the participant's performance on a measurement tool and the participant's actual behaviour are related.
• Concurrent validity: the instrument's ability to distinguish individuals who differ on a present criterion. It is the degree of correlation between two measures of the same construct administered at the same time; a high correlation coefficient indicates agreement between the two measures. For example, a therapist may use two separate depression scales with a patient to confirm a diagnosis.

o Specificity, sensitivity
o Predictive values
o Likelihood ratios
o Receiver operating characteristic (ROC) curve, area under the curve (AUC)
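A minimal sketch of the first three statistics in the list above, computed from an invented 2x2 table comparing an instrument's classifications against a criterion ("gold standard"); plotting a full ROC curve would additionally require scores at many cut-points:

# Invented 2x2 table: instrument result vs. criterion (gold standard)
tp, fp = 40, 10   # instrument positive: true positives, false positives
fn, tn = 5, 45    # instrument negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # proportion of true cases the instrument detects
specificity = tn / (tn + fp)   # proportion of non-cases it correctly rules out
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
lr_plus = sensitivity / (1 - specificity)  # positive likelihood ratio

print(sensitivity, specificity, ppv, npv, round(lr_plus, 2))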

• Predictive validity: the instrument's ability to distinguish people whose performance differs on a future criterion; the degree of correlation between the measure of the concept and a future measure of the same concept. Because of the passage of time, correlation coefficients are likely to be lower in predictive validity studies. For example, an honesty test has predictive validity if persons who score high are later shown by their behaviour to be honest.


Qualitative Rigour
Qualitative researchers seek to achieve two goals: (1) to account for the method and the data, which must be independent enough that another researcher could analyze the same data in the same way and reach the same conclusions, and (2) to produce a credible and reasoned explanation of the phenomenon under study.

Rigour in qualitative methodology is judged by unique criteria appropriate to the research approach and is called trustworthiness. Credibility, auditability, and fittingness are among the scientific criteria for trustworthiness.

 Credibility—analogous to internal validity; supported by member checks, peer debriefing, prolonged data collection, persistent observation, and triangulation. Credibility is the truth of the results as judged by the people who took part and by other experts in the field. For instance, the researcher might go back to the participants to discuss how they see the results and confirm that the account is accurate from the point of view of the people actually having the experience.
 Auditability—establishes trustworthiness by leaving an audit trail that others can replicate and clearly understand. The audit trail documents the investigator's research process, allowing another researcher or a reader to follow the investigator's thinking and conclusions. Auditability is judged by how well the reader can follow the information from the research question and the initial data through the steps of analysis to the interpretation of the results. For example, you should be able to follow the researcher's thought process step by step through clear examples of data, explanations, and summaries.


 Fittingness—how well the study "fits" other contexts and settings; the findings "ring true." It is the degree to which study findings are applicable outside the study situation and the degree to which the results are meaningful to individuals not involved in the research. Fittingness demands faithfulness to the everyday reality of the participants, and enough information that others in the field can work out its relevance for their own practice, research, and theory development. For instance, a reader should learn enough about the human experience being described to judge whether it "rings true" and can inform their own work.

Authenticity
Authenticity refers to fairness in the presentation in that all value conflicts, differences,
and views of the participants are noted in the analysis.

Critical Thinking Challenges
 Is it necessary to establish more than one measure of reliability for each instrument used in a study? Yes: to ensure that data are sound and replicable and that results are accurate. Evidence of validity and reliability is a prerequisite for assuring the integrity and quality of a measurement instrument, and establishing the reliability and validity of the instruments used to measure variables is one indicator of a study's excellence.

 Which do you think is the most essential measure of reliability? Cronbach's alpha (sometimes called coefficient alpha): it measures how consistently participants respond to one set of items.

 Is it possible to have a valid instrument that is not reliable? No. A measurement can be reliable without being valid, but it cannot be valid without being reliable.
 What are some ways in which credibility, auditability, and fittingness can be evaluated? Through the criteria for qualitative rigour (e.g., member checks, an audit trail, and thick description).

https://quizlet.com/ca/343422172/chapter-14-rigour-in-
research-flash-cards/
Key points
• Reliability and validity are crucial aspects of
conducting and critiquing research.
• Validity refers to whether an instrument measures
what it is purported to measure. It is a crucial aspect of
evaluating a tool.
• Three types of validity are content validity, criterion-
related validity, and construct validity.
• The choice of a validation method is important and is
made by the researcher on the basis of the characteristics
of the measurement device in question and its use.
• Reliability refers to the ratio between accuracy and
inaccuracy in a measurement device.
• The major tests of reliability are test-retest reliability,
parallel- or alternate-form reliability, split-half reliability,
item-to-total correlation, the Kuder-Richardson
coefficient, Cronbach’s alpha, and interrater reliability.
• The selection of a method for establishing reliability
depends on the characteristics of the tool, the testing
method that is used for collecting data from the
standardization sample, and the kinds of data that are
obtained.
• Credibility, auditability, and fittingness are criteria for
judging the scientific rigour of a qualitative research
study.

Qualitative Data Analysis


Learning Outcomes
 Examine the processes of qualitative data analysis.
 Outline the steps common to qualitative data
analysis.
 Describe how data are reduced to meaningful units
(themes).
 Summarize the process of identifying themes and
categories and the relationships between them.
 Compare the process of creating and presenting interpretations from select qualitative methods.
 Assess the integrity of data analysis from a
qualitative study.
Qualitative Data Analysis
 The overall goal is to look for “insight,
meaning, understanding, and larger patterns
of knowledge, intent, and action” in the data
(Averill, 2015, p. 1).
 Many methods are available, but in the end each study is unique and relies on the creativity, intellect, style, and experience of the researcher.

UNDERSTAND AND INTERPRET QUALITATIVE
RESEARCH
“No one ever made a decision because of a number. They need a story.”
—Daniel Kahneman

• Key characteristics of qualitative research:
• No universal rules; there is no one correct way to do an analysis.
• Frequently a massive amount of data, in the form of words, meaning a great deal of intensive work.
• A need for strong inductive powers and creativity.
• Content analysis is a common method of analyzing data.

The characteristics of qualitative research:


• A theoretical basis or set of assumptions.
• People’s varying realities
• The importance of participants’ perspectives
• Acknowledgment that the researcher is part of the study.
• The importance of the context
• The collection of narrative data that are used to describe, analyze, and interpret
participants’ perspectives.
• An ongoing analysis of the data being collected.
Methodologies used in qualitative research
• Ethnography.
• Grounded theory.
• Phenomenology.
• Interpretive description.
• Case studies.
Qualitative Data Analysis
• Content analysis.
• Narrative analysis.
• Discourse analysis.
• Framework analysis.
• Grounded theory.

Steps in qualitative data analysis

Step 1: Developing and applying codes.
▪ Open coding: data are examined carefully line by line, categorized into discrete parts, and compared for similarities and differences. Data are compared with other data continuously as they are acquired during research; this process is called the constant comparative method.
▪ Axial coding: relating categories and subcategories to one another.
▪ Selective coding: integrating the categories around a core category.

Steps in qualitative data analysis

Step 2: Identifying themes, patterns, and relationships.
▪ Word and phrase repetitions
▪ Primary and secondary data comparisons
▪ Searching for missing information
▪ Metaphors and analogues
The end product of this stage is a detailed index of the data, which labels the data into manageable chunks for subsequent retrieval and exploration.

Difference Between "Themes," "Codes," and "Categories"

Example research question: How do young children engage with technology as part of early literacy learning?
Steps in qualitative data analysis

Step 3: Summarizing the data.
▪ Charting: rearranging the data according to the appropriate part of the thematic framework to which they relate and forming charts. The charting process involves a considerable amount of abstraction and synthesis.
▪ Linking research findings to hypotheses or to the research aim and objectives.
▪ Mapping and interpretation, using the charts to:
• define concepts,
• map the range and nature of phenomena,
• create typologies, and
• find associations between themes with a view to providing explanations for the findings.
Data Interpretation
• Answer these questions
• What is important in the data?
• Why is it important?
• What can be learned from it?
• So what?

Flow of analysis: collected data → categories and themes → interpretations

Questions to Ask When Reading Qualitative Research Papers


Trustworthiness and Integrity in Qualitative
Research
• Rigour (Trustworthiness) in Qualitative Research:

▪ Credibility
▪ Dependability
▪ Confirmability
▪ Transferability (fittingness)
Appraisal of rigour in qualitative research
FACTS for Appraising Rigour in Qualitative Research Papers
F Fittingness
A Auditability
C Credibility
T Trustworthiness
S Saturation
Source: Based upon “Assessing the FACTS: A Mnemonic for Teaching and Learning the Rapid Assessment of Rigor in
Qualitative Research Studies,” by M. El Hussein et al., 2015, The Qualitative Report, 20(8), pp. 1182–1184.
(F)ittingness (also termed transferability) is the ability of the researcher to demonstrate that the
findings have meaning to others in similar situations. Transferability is dependent on the degree of
similarity between two contexts (Koch, 1994). It is not considered the responsibility of the qualitative
researcher to provide an “index of transferability”; rather, the researcher should assume the responsibility of exhibiting, through data and analysis, the transferability in their research paper.

(A)uditability is the systematic record keeping of all methodological decisions, such as a record of
the sources of data, sampling, decisions, and analytical procedures and their implementation. This is
sometimes referred to as confirmability.

(C)redibility: a study is credible when it presents such a vivid and faithful description that people who have had that experience would immediately recognize it as their own.
(T)rustworthiness is a concept in qualitative research that encompasses all of the above-mentioned
steps and refers to the degree of confidence one can have in the data. This assessment addresses the
quality or credibility of the data and/or the research paper as a whole. The believability of the overall findings is another aspect of trustworthiness: confidence in the truth of the findings.
(S)aturation: Data saturation in qualitative research occurs when the researcher is no longer hearing
or seeing new information – that there is sufficient data. This can also be referred to as informational
redundancy.
Data analysis is the process used to answer the research question.

Qualitative researchers gather data from a variety of sources, including interviews, observations, narratives, and focus groups; the most common is the interview.

Common features among different approaches to qualitative data analysis:

• Affixing codes or themes to a set of field notes, interview transcripts, or documents

• Sorting and sifting through these coded materials to identify similar phrases, relationships between variables, patterns, themes, distinct differences between subgroups, and common sequences

• Isolating these patterns and processes, and commonalities and differences, and taking them out to the field in the next wave of data collection

• Noting reflections or other remarks in the margins

• Gradually elaborating a small set of assertions, propositions, and generalizations that cover the consistencies discerned in the database

• Confronting those generalizations with a formalized body of knowledge in the form of constructs or theories

Data reduction is “the process of selecting, focusing, simplifying, abstracting, and transforming the data that appear in written-up field notes or transcriptions.” This process is ongoing as data are collected. Initially, the data can be organized into meaningful clusters by grouping related or similar data; often, these clusters or groups of data are labelled as themes. Thematic analysis—the process of recognizing and recovering the emergent themes—is an important aspect of organizing data.

Three foundational types of first cycle coding include the following (see the sketch below):

• Descriptive coding: labels are assigned, composed as a short phrase or word

• In vivo coding: short phrases or words are drawn from the participants’ own language

• Process coding: “-ing” words are used to describe observations or actions (e.g., knowing)
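As a loose illustration of these three coding styles (not a substitute for analytic judgement), first-cycle codes can be managed as a simple mapping from code labels to transcript excerpts; the codes and quotations below are invented:

from collections import defaultdict

codes = defaultdict(list)  # code label -> list of transcript excerpts

# Descriptive code: a short phrase naming the topic of the excerpt
codes["family support"].append("My daughter drives me to every appointment.")

# In vivo code: the participant's own words become the label
codes["'just getting by'"].append("Honestly, we're just getting by most weeks.")

# Process code: a gerund capturing an action
codes["navigating the system"].append("I spent all morning on hold with the clinic.")

for label, excerpts in codes.items():
    print(label, "->", len(excerpts), "excerpt(s)")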

The coding process itself is analysis.

Data display is “a visual format that presents information systematically so the user can draw conclusions and take needed action.”

In grounded theory and many other qualitative methods, researchers use the constant
comparative method, in which new data are compared as they emerge with data previously
analyzed.
Rigour in qualitative research is determined by credibility, auditability, and fittingness as the criteria for evaluation. Trustworthiness is also important for determining the validity of the data interpretation or analysis. Ask the following questions:

• What do you notice? The researcher has captured some impressions about the data; however, information may be missing. Detailed or thick descriptions of the phenomenon also allow the reader to assess whether the account “rings true.”

• Why do you notice what you notice? Researchers must consider their own biases and
predispositions as they interpret the data to produce trustworthy interpretations. Many
researchers use a journal to document their reflections to monitor their own developing
interpretations.

• How can you interpret what you notice? Credibility stems from prolonged engagement
and persistent observation. To be able to complete a full interpretation, the researcher must
spend a sufficient amount of time in the field to build sound relationships with the
participants.
• How can you know that your interpretation is the “right” one? The quickest way to know
whether the interpretation is accurate is through sharing the findings with the participants
(member checking)

In addition, some researchers analyze their data from several different frameworks (a form of
triangulation) to increase the trustworthiness of the data analysis. Finally, it is important to
consider the limitations of the study.

Key points
• Qualitative data are text derived from transcripts of interviews, narratives, documents,
media such as newspapers and movies, and field notes.

• Computer software can be used to simplify the storage and retrieval of data.
• Qualitative research data can be managed through the use of computers, but the researcher
must interpret the data.

• Data analysis and data collection are parallel processes.

• Qualitative analysis is not a linear process; rather, it is a cyclical and iterative process.

• The three discrete stages of data analysis are data reduction, data display, and conclusion
drawing and verification.

• Data are organized into meaningful chunks of data through a clustering of related or
similar data and are labelled as themes.

• Coding is the process of progressively marking, sorting, resorting, and defining and
redefining the collected data.

• Data display involves the use of graphs, flowcharts, matrices, or any other visual
representation to assemble data and to allow for conclusion drawing.
• Grounded theorists use the constant comparative method, in which new data are
compared with data previously analyzed.

• Member checking is the process of sharing findings with the participants in order to check
whether the interpretation of the findings is accurate.
