NURS 3515
Week 9
November 2023
Learning Outcomes
Discuss the purposes of reliability and validity.
Define reliability.
Discuss the concepts of stability, equivalence,
and homogeneity as they relate to reliability.
Compare the estimates of reliability.
Define validity.
Compare content, criterion-related, and
construct validity.
Discuss how measurement error can affect
the outcomes of a research study.
Identify the criteria for critiquing the reliability
and validity of measurement tools.
Use the critiquing criteria to evaluate the
reliability and validity of measurement tools.
Understand how to evaluate the quality of
qualitative research.
Discuss the purpose of credibility, auditability,
and fittingness.
Apply the critiquing criteria to evaluate the
rigour in a qualitative report.
Discuss how evidence related to research
rigour contributes to clinical decision making.
Outline the steps common to qualitative data
analysis.
Describe how data are reduced to meaningful
units (themes).
Summarize the process of identifying themes and
categories and the relationships between them.
Compare the process of creating and presenting
interpretations from select qualitative methods.
Assess the integrity of data analysis from a
qualitative study.
Rigour in Research
Reliability and Validity of Measurement
• Measurement reliability: the consistency of a measure.
• Measurement validity: the ability of an instrument to accurately measure the concept it is intended to measure.
• A measure CAN be reliable but not valid; it CANNOT be valid but not reliable.
• Operational definition: the explicit and clear explanation of a variable in terms of how it is measured or manipulated; that is, a detailed explanation of the technical terms and measurements used during data collection.
Adams, Research Methods, Statistics, and Applications 2e. © SAGE Publishing, 2019.
Reliability (Quantitative)
Reliability: Consistency
• Consistency: the absence of variation in measuring a stable attribute for an individual. (An attribute is a characteristic or quality of the person or thing under study; habits such as smoking or drinking are examples of attributes.)
Sources of measurement error include:
1. Transient human conditions, such as hunger, fatigue, health, lack of motivation, and anxiety, which are often beyond the awareness and control of the examiner.
2. Variations in the measurement procedure, such as misplacing the blood pressure cuff or not waiting a specified time before taking the blood pressure.
A systematic (constant) error is a measurement error attributable to relatively stable characteristics of the study population that may bias their behaviour, cause incorrect instrument calibration, or both; examples include level of education, socioeconomic status, social desirability, and response patterns.
• True variance (actual variation): naturally occurring variability within or among research participants. This variance is inherent in the nature of the individual participants and is not due to measurement error.
• Observed score: the score derived from a set of items; it consists of the true score plus error (Figure 14.2). The error may be either chance (random) error or systematic error. A chance (random) error is difficult to control (e.g., a respondent’s anxiety at the time of testing); such errors are unsystematic and unpredictable, so they cannot be corrected.
• Reliability coefficient (r): can range from 0 to 1.
0 = no relationship
1 = perfect relationship (best reliability)
0.89 = strong relationship
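As an illustration (all variance figures below are invented, not from any real instrument), the reliability coefficient can be read as the share of observed-score variance that is true variance rather than error:

```python
# Sketch only: hypothetical variance figures chosen for illustration.
# Observed-score variance = true-score variance + error variance;
# the reliability coefficient is the proportion that is "true" variance.
true_var = 40.0                      # hypothetical true-score variance
error_var = 5.0                      # hypothetical random-error variance

observed_var = true_var + error_var  # 45.0
reliability = true_var / observed_var

print(round(reliability, 2))         # prints 0.89, a strong coefficient
```

Note how the coefficient rises as error variance shrinks, which is why controlling random error improves reliability.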
Reliability: Stability
• Replication approaches
• Test–retest reliability: administration of the same measure to the same
people on two occasions
• Interrater reliability: measurements by two or more observers or raters
using the same instrument or measurements by the same observer or
rater on two or more occasions
• Parallel-form (alternate-form) reliability: measurement of the same
attribute using alternate versions of the same instrument, with the same people.
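Each of these stability estimates is usually reported as a correlation coefficient. A minimal sketch of test–retest reliability, with invented scores for five participants measured on two occasions:

```python
from statistics import mean, stdev

# Hypothetical scores: the same scale administered twice to five people.
time1 = [12, 18, 25, 30, 22]
time2 = [13, 17, 26, 29, 21]

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(time1, time2)
print(round(r, 2))  # close to 1, suggesting stable scores across occasions
```

The same correlation logic underlies interrater reliability (correlating two raters' scores) and parallel-form reliability (correlating the two versions).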
Internal Consistency (Homogeneity)
• Cronbach’s alpha (the most commonly used estimate): indicates how closely related a set of items is as a group. Each item in the scale is simultaneously compared with the others, and a total score is then used to analyze the data. In a Likert-type scale format, the participant responds to a question on a scale of varying degrees of intensity between two extremes, anchored by responses ranging from, for example, “strongly agree” to “strongly disagree” or from “most like me” to “least like me.” The points between the two extremes may range from 1 to 5 or 1 to 7; participants are asked to circle the response that most closely represents what they believe.
• Split-half reliability: the items are divided into two halves, each half is scored separately, and the two half-scores are correlated as a measure of internal consistency.
• Item-to-total correlation: a measure of the relationship between each scale item and the total scale; when item-to-total correlations are calculated, a correlation is generated for each item on the scale.
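These internal-consistency estimates can be computed from an item-by-participant score matrix. A sketch of Cronbach's alpha with invented Likert responses (five participants, four items), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / total-score variance):

```python
from statistics import variance

# Invented responses: each row is one participant's answers to 4 items.
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = len(rows[0])                                      # number of items
    item_vars = sum(variance(col) for col in zip(*rows))  # per-item variance
    total_var = variance([sum(row) for row in rows])      # total-score variance
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # a value of about 0.7 or higher is usually acceptable
```

With these invented numbers alpha is high because every item tracks the same underlying pattern of high and low scorers.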
Validity
1. Content validity: the degree to which an instrument has an appropriate sample of items for the construct being measured.
2. Criterion-related validity: the degree of relationship between the participant’s performance on the measurement tool and the participant’s actual behaviour. The criterion is usually a second measure, used to assess the same concept being studied.
3. Construct validity: whether a test really measures what it is supposed to measure. To make sure the test is valid, researchers examine the theory behind it and test whether the expected relationships between different concepts match up with real-world evidence.
Copyright © 2018 Elsevier Canada, a division of Reed Elsevier Canada, Ltd.
Face Validity
Refers to whether the instrument looks as though it is measuring the appropriate construct; the instrument intuitively gives the appearance of measuring the concept.
• Convergent validity: exists when two or more tools intended to measure the same construct are administered to participants and are found to be positively correlated.
• Known-groups validity: identifies two groups of individuals expected to score
extremely high or extremely low in the characteristic being measured by the instrument.
• Divergent validity (discriminant validity): differentiate one construct from others that
may be similar.
• Multitrait–multimethod matrix method (MTMM): relationships between
instruments that are intended to measure the same construct and between those that are
intended to measure different constructs.
• Structural validity
• Cross-cultural validity
Content Validity
• Represents the universe or domain of content of a given behaviour.
• The degree to which an instrument has an appropriate sample of items for the construct being measured. The researcher begins by defining the concept and identifying the dimensions that are its components; items that reflect the concept and its dimensions are then formulated.
• Relevance
• Comprehensiveness
• Balance
• Evaluated by expert evaluation, via the content validity index (CVI)
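One common way a content validity index is computed (the expert ratings below are hypothetical): each expert rates an item's relevance on a 4-point scale, ratings of 3 or 4 count as "relevant," and the item-level CVI is the proportion of experts rating the item relevant:

```python
# Hypothetical ratings: five experts rate each item's relevance (1-4).
ratings = {
    "item 1": [4, 4, 3, 4, 3],
    "item 2": [3, 4, 4, 2, 4],
    "item 3": [2, 3, 2, 3, 2],
}

def item_cvi(scores):
    """Proportion of experts rating the item 3 or 4 (i.e., relevant)."""
    return sum(1 for s in scores if s >= 3) / len(scores)

for item, scores in ratings.items():
    print(item, item_cvi(scores))           # item-level CVI (I-CVI)

# Scale-level CVI, averaging approach: mean of the item-level CVIs.
scale_cvi = sum(item_cvi(s) for s in ratings.values()) / len(ratings)
print(round(scale_cvi, 2))
```

Items with a low item-level CVI (like the hypothetical item 3 here) would typically be flagged for revision or removal before the scale is used.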
Criterion-Related Validity
• The degree to which the participant’s performance on a measurement tool and the participant’s actual behaviour are related.
• Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion; the degree of correlation between two measures of the same construct administered at the same time. A high correlation coefficient indicates agreement between the two measures. For example, a therapist may use two separate depression scales with a patient to confirm a diagnosis.
• Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion; the degree of correlation between the measure of the concept and a future measure of the same concept. Because of the passage of time, correlation coefficients are likely to be lower in predictive validity studies. For example, an honesty test has predictive validity if persons who score high are later shown by their behaviour to be honest.
Qualitative Rigour
Qualitative researchers seek to achieve two goals: (1) to account for the method and the data,
which must be independent so that another researcher can analyze the same data in the same
way and make the same conclusions, and (2) to produce a credible and reasoned explanation
of the phenomenon under study.
Rigour in qualitative methodology is judged by unique criteria appropriate for the research approach and is called trustworthiness. Credibility, auditability, and fittingness are some of the scientific criteria for trustworthiness.
Fittingness
Faithfulness to the everyday reality of the participants, and the degree to which the results are meaningful to individuals not involved in the research. The report gives enough information that others in the field can determine its relevance to their own practice, research, and theory development; for instance, a reader knows enough about the human experience described to judge whether it “rings true” and can be applied in their own work.
Authenticity
Authenticity refers to fairness in the presentation in that all value conflicts, differences,
and views of the participants are noted in the analysis.
Critical Thinking Challenges
Is it necessary to establish more than one measure of reliability for each instrument used in a study? Yes, to ensure that the data are sound and replicable and the results are accurate. Evidence of validity and reliability is a prerequisite for assuring the integrity and quality of a measurement instrument; establishing the reliability and validity of the instruments used to measure variables is an indicator of a study's excellence.
Which do you think is the most essential measure of reliability? Cronbach's alpha (sometimes called coefficient alpha), which measures how consistently participants respond to one set of items.
https://quizlet.com/ca/343422172/chapter-14-rigour-in-
research-flash-cards/
Key points
• Reliability and validity are crucial aspects of
conducting and critiquing research.
• Validity refers to whether an instrument measures
what it is purported to measure. It is a crucial aspect of
evaluating a tool.
• Three types of validity are content validity, criterion-
related validity, and construct validity.
• The choice of a validation method is important and is
made by the researcher on the basis of the characteristics
of the measurement device in question and its use.
• Reliability refers to the ratio between accuracy and
inaccuracy in a measurement device.
• The major tests of reliability are test-retest reliability,
parallel- or alternate-form reliability, split-half reliability,
item-to-total correlation, the Kuder-Richardson
coefficient, Cronbach’s alpha, and interrater reliability.
• The selection of a method for establishing reliability
depends on the characteristics of the tool, the testing
method that is used for collecting data from the
standardization sample, and the kinds of data that are
obtained.
• Credibility, auditability, and fittingness are criteria for
judging the scientific rigour of a qualitative research
study.
UNDERSTAND AND INTERPRET QUALITATIVE
RESEARCH
“No one ever made a decision because of a number. They need a story.”
—Daniel Kahneman
Open coding: Data are examined carefully line by line, categorized into discrete parts, and compared for similarities and differences. Data are compared with other data continuously as they are acquired during research; this process is called the constant comparative method.
▪ Axial coding
▪ Selective coding
Collected data → categories and themes → interpretations
▪ Credibility
▪ Dependability
▪ Confirmability
▪ Transferability (fittingness)
Appraisal of rigour in qualitative research
FACTS for Appraising Rigour in Qualitative Research Papers
F Fittingness
A Auditability
C Credibility
T Transferability
S Saturation
Source: Based upon “Assessing the FACTS: A Mnemonic for Teaching and Learning the Rapid Assessment of Rigor in
Qualitative Research Studies,” by M. El Hussein et al., 2015, The Qualitative Report, 20(8), pp. 1182–1184.
(F)ittingness (also termed transferability) is the ability of the researcher to demonstrate that the
findings have meaning to others in similar situations. Transferability is dependent on the degree of
similarity between two contexts (Koch, 1994). It is not considered the responsibility of the qualitative
researcher to provide an “index of transferability”; rather, the researcher should assume the
responsibility of exhibiting, through data and analysis, the transferability in their research paper.
(A)uditability is the systematic record keeping of all methodological decisions, such as a record of
the sources of data, sampling, decisions, and analytical procedures and their implementation. This is
sometimes referred to as confirmability.
(C)redibility: a study is credible when it presents such a vivid and faithful description
that people who had that experience would immediately recognize it as their own.
(T)rustworthiness is a concept in qualitative research that encompasses all of the above-mentioned
steps and refers to the degree of confidence one can have in the data. This assessment addresses the
quality or credibility of the data and/or the research paper as a whole. The believability of the overall
findings is another aspect of trustworthiness – confidence in the truth of the findings.
(S)aturation: Data saturation in qualitative research occurs when the researcher is no longer hearing
or seeing new information – that there is sufficient data. This can also be referred to as informational
redundancy.
Data analysis is the process used to answer the research question.
• Sorting and sifting through the coded materials to identify similar phrases, relationships
between variables, patterns, themes, distinct differences between subgroups, and common
sequences
• Isolating these patterns and processes, and commonalities and differences, and taking
them out to the field in the next wave of data collection
• Noting reflections of other remarks in the margins
• In vivo coding: short phrases or words are drawn from the participants’ own language
• Data display: “a visual format that presents information systematically so the user can
draw conclusions and take needed action”
In grounded theory and many other qualitative methods, researchers use the constant
comparative method, in which new data are compared as they emerge with data previously
analyzed.
Rigour in qualitative research is determined by credibility, auditability, and fittingness as
the criteria for evaluation. Trustworthiness is also important for determining the validity of
the data interpretation or analysis:
• What do you notice? The researcher has captured some impressions about the data; however,
information may be missing. Detailed or thick descriptions of the phenomenon also allow the
reader to assess whether the account “rings true.”
• Why do you notice what you notice? Researchers must consider their own biases and
predispositions as they interpret the data to produce trustworthy interpretations. Many
researchers use a journal to document their reflections to monitor their own developing
interpretations.
• How can you interpret what you notice? Credibility stems from prolonged engagement
and persistent observation. To be able to complete a full interpretation, the researcher must
spend a sufficient amount of time in the field to build sound relationships with the
participants.
• How can you know that your interpretation is the “right” one? The quickest way to know
whether the interpretation is accurate is to share the findings with the participants
(member checking).
In addition, some researchers analyze their data from several different frameworks (a form of
triangulation) to increase the trustworthiness of the data analysis. Finally, it is important to
consider the limitations of the study.
Key points
• Qualitative data are text derived from transcripts of interviews, narratives, documents,
media such as newspapers and movies, and field notes.
• Computer software can be used to simplify the storage and retrieval of data.
• Qualitative research data can be managed through the use of computers, but the researcher
must interpret the data.
• Qualitative analysis is not a linear process; rather, it is a cyclical and iterative process.
• The three discrete stages of data analysis are data reduction, data display, and conclusion
drawing and verification.
• Data are organized into meaningful chunks of data through a clustering of related or
similar data and are labelled as themes.
• Coding is the process of progressively marking, sorting, resorting, and defining and
redefining the collected data.
• Data display involves the use of graphs, flowcharts, matrices, or any other visual
representation to assemble data and to allow for conclusion drawing.
• Grounded theorists use the constant comparative method, in which new data are
compared with data previously analyzed.
• Member checking is the process of sharing findings with the participants in order to check
whether the interpretation of the findings is accurate.