Quantitative
o You have to control the environment so that other factors do not interfere
with the results
Literature review
o Quick review of the relevant literature
o Some qualitative researchers conduct limited reviews because they want
to remain open and unbiased about the phenomena under study
Qualitative - knowledge is developed and applied once there is
enough information, to build theory and guide practice
Quantitative - you want to test hypotheses, often drawing on a
systematic review
Purpose of qualitative research methods - based on description and
understanding
o Description and understanding - in diverse fields
o Instrument development (coping strategies)
Something to measure, such as a questionnaire on coping, requires a
fundamental understanding of the phenomena
o Theory building - Indigenous populations, COVID-19
GENERATING REAL-WORLD CONTEXT
o Guide practice - knowledge of pain and pain management
The meaning of an observation is defined by its conditions or context.
NEW TERMS
Bracketing
o Means that the researcher identifies personal biases about the
phenomena of interest to clarify how personal experiences and beliefs
may color what is heard and reported
o The researcher is expected to set aside personal biases - bracket them-
when engaged with the participants
Phenomenological method
-This method is used to understand an experience in the way that people having the
experience understand it, and it is well suited to the study of phenomena important to
nursing.
- For example, what is the experience of men facing prostate surgery? What is the
meaning of pain for people with chronic arthritis?
Various phenomenological methods exist, including the following:
Research question. The research question is not exactly the same as the question
used to initiate dialogue with the participant, but often the research question and the
question used to begin dialogue are very similar.
Researcher's perspective:
Sample selection.
You will find that the selected participant is either living the experience the
researcher is querying about or has lived the experience in the past. Because
phenomenologists believe that each individual’s history is a dimension of the present, a
past experience exists in the present moment. Even when a participant is describing an
experience occurring in the present, remembered information is being gathered.
Don’t forget that qualitative studies often involve the use of purposive sampling.
Written or oral data may be collected when the phenomenological method is
used. (The researcher may pose the query in writing and ask for a written
response or may schedule a time to interview the participant and record the
interaction.) Either way, the researcher can return to the participant to ask for
clarification of transcribed or written information.
Various analysis techniques require different numbers of interviews. In general,
open-ended questions—such as “What comes to mind when you think of . . .
?”—guide the participants to describe their lived experience. Researchers
continue with clarifying questions.
Data saturation usually guides decisions regarding how many interviews are
enough.
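The idea of saturation can be pictured as simple bookkeeping: stop interviewing when new interviews stop yielding new codes. This is only an illustration; the interview codes, the `is_saturated` helper, and the two-interview window are all invented for the example, not a real decision rule.

```python
def new_codes_per_interview(interviews):
    """Count how many previously unseen codes each interview contributes."""
    seen, counts = set(), []
    for codes in interviews:
        counts.append(len(set(codes) - seen))
        seen |= set(codes)
    return counts

def is_saturated(interviews, window=2):
    """Suggest saturation when the last `window` interviews add nothing new."""
    counts = new_codes_per_interview(interviews)
    return len(counts) >= window and all(c == 0 for c in counts[-window:])

# Invented codes assigned to four successive interviews.
interviews = [
    {"pain", "fatigue", "coping"},
    {"pain", "isolation"},
    {"coping", "pain"},
    {"fatigue", "isolation"},
]
print(new_codes_per_interview(interviews))  # [3, 1, 0, 0]
print(is_saturated(interviews))             # True
```

In practice the judgement is the researcher's, of course; the sketch only shows why saturation is framed as "the information becomes repetitive."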
Data analysis
As data are collected, data analysis begins. Several techniques exist:
1. Thorough reading of, and sensitive presence with, the entire transcription of the
participant’s description
2. Identification of shifts in participant thought, resulting in division of the transcription
into thought segments
3. Specification of significant phrases in each thought segment, using the participant’s
own words
4. Distillation of each significant phrase to express the central meaning of the segment
in the researcher’s words
5. Grouping together of segments that contain similar central meanings for each
participant
6. Preliminary synthesis of grouped segments for each participant with a focus on the
essence of the phenomenon being studied
7. Final synthesis of the essences that have surfaced in all participants’ descriptions,
resulting in an exhaustive description of lived experiences.
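Steps 5 and 6 above (grouping segments that share a central meaning, then synthesizing each group) can be pictured with a toy sketch. The transcript excerpts and meaning labels are invented; in real analysis the grouping reflects the researcher's judgement, not a lookup.

```python
from collections import defaultdict

# Invented (excerpt, central meaning) pairs produced by the earlier steps.
segments = [
    ("I couldn't sleep the night before surgery", "anticipatory fear"),
    ("My wife kept telling me it would be fine", "family reassurance"),
    ("I kept imagining the worst outcome", "anticipatory fear"),
]

# Step 5: group segments that contain similar central meanings.
grouped = defaultdict(list)
for text, central_meaning in segments:
    grouped[central_meaning].append(text)

# Step 6: a preliminary synthesis would now summarize each group.
for meaning, texts in grouped.items():
    print(f"{meaning}: {len(texts)} segment(s)")
```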
three techniques developed by van Manen (1997) to isolate themes during data
analysis:
(1) the “wholistic approach” of reading each transcript in its entirety to understand the
overall meaning,
(2) the “selective (highlighting) approach” of marking statements or phrases that seem
particularly essential to the experience, and
(3) the “line-by-line approach” to discover what each line reveals about the
participant’s experience.
When reading the report of a phenomenological study, you should find that detailed
descriptive language is used to convey the complex meaning of the lived experience.
These direct quotations from participants enable the reader to evaluate the connection
between what the participant said and how the researcher labelled it.
Narrative analysis
When narrative inquiry is used as a form of qualitative research, stories of people are
collected and examined as the primary source of data. The hermeneutic tradition is
extended to include in-depth interview transcripts, memoirs, stories, and creative
nonfiction.
-On the basis of the “stories” of people, at times including those of the researcher,
researchers using narrative analysis attempt to interpret and understand
experiences in terms of cultural and social meanings
-The story is data, whereas the narrative analysis “involves interpreting the story,
placing it in context, and comparing it with other stories”
-for example: Queer theory, which emerged from feminist theory, focuses on lived
experience and practice through the lens of gender identity and sexual orientation.
Grounded theory method
-Grounded theory is a systematic set of procedures used to explore the social
processes that guide human interaction and to inductively develop a theory on the
basis of those observations.
-The philosophical spectrum of grounded theory ranges from the post-positivist view to
the constructivist view
- The grounded theory method is based on the sociological tradition of the Chicago
School of Symbolic Interactionism, a tradition that reflects on issues related to human
behavior. Glaser and Strauss (1967) developed the method of grounded theory and
published the classic first text describing the methodology: The Discovery of Grounded
Theory. They say: grounded theory is discovered, developed, and provisionally verified
through systematic data collection and analysis of data pertaining to that
phenomenon. Therefore, data collection, analysis, and theory stand in reciprocal relationship
with each other. One does not begin with a theory and then prove it; rather, one begins with
an area of study, and what is relevant to that area is allowed to emerge.
-grounded theory is distinctive from the other traditional qualitative research methods
because its primary focus is on generating theory about dominant social
processes. The three major premises that continue to underlie grounded theory
research are as follows:
1.Humans act toward objects on the basis of the meaning that those objects have for
them.
2. Social meanings arise from social interactions with other people over time and
are embedded socially, historically, culturally, and contextually (social interaction).
3. People use interpretive processes to handle and change meanings in dealing
with their situations.
Grounded theory has contributed substantially to the body of knowledge in the
field of nursing. Often, the theories generated from grounded theory research are then
tested empirically. Qualitative data are gathered through interviews and observation.
Through analysis of the data, substantive codes are generated and then are
clustered into categories. Propositions link the concepts to create a foundation that
guides further data collection.
-The idea of creating a theory means that you believe there are rules that explain part
of how things work in the real world. To find these rules, you need to gather information
from people who are directly involved, like patients who are seriously ill.
Researchers typically use the grounded theory method when they are interested in
either social processes from the perspective of human interactions or patterns of action
and interaction between and among various types of social units
Research question. It addresses basic social processes that shape human behaviour. It
can be a statement or a broad question that permits in-depth exploration of the
phenomenon. The researcher does not always need to identify a problem or research
question but may instead choose an area of interest.
Researcher’s perspective. Theory emerges directly from data and reflects the
contextual values that are integral to the social processes being studied. Thus, the
theory product that emerges is “grounded in” the data.
Sample selection. It involves:
(1) choosing participants for a purposive sample who are experiencing the
circumstance and “are judged to have good knowledge of the study domain”
(2) selecting events and incidents that are related to the social process under
investigation.
Data gathering: In the grounded theory method, data are collected through interviews
and through skilled observations of individuals interacting in a social setting.
-Interviews are audio-recorded and then transcribed, and observations are recorded as
field notes. Open-ended questions are used initially to identify concepts for further
focus.
Data analysis
-The process requires systematic, detailed record keeping through the use of field
notes and transcribed interview recordings. Hunches about emerging patterns in the
data are noted in memos, and the researcher directs activities in the field by
pursuing these hunches.
-The initial analytical process is called open coding (Strauss, 1987). Data are
examined carefully line by line, categorized into discrete parts, and compared for
similarities and differences. Data are compared with other data continuously as they
are acquired during research. This process is called the constant comparative
method.
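The constant comparative bookkeeping can be caricatured in a few lines: each new excerpt is compared with existing categories and either filed under one or left uncoded for later categorization. Keyword matching here merely stands in for the researcher's line-by-line judgement; the categories, keywords, and excerpts are all invented.

```python
categories = {}  # category label -> list of filed excerpts

def compare_and_file(excerpt, keywords_by_category):
    """Compare an excerpt with existing categories; file it or leave it uncoded."""
    for label, keywords in keywords_by_category.items():
        if any(k in excerpt.lower() for k in keywords):
            categories.setdefault(label, []).append(excerpt)
            return label
    categories.setdefault("uncoded", []).append(excerpt)
    return "uncoded"

# Invented category keywords and interview lines.
keywords = {
    "managing uncertainty": ["wait", "unknown", "unsure"],
    "seeking support": ["family", "nurse", "friend"],
}
for line in ["The waiting was the hardest part",
             "My family sat with me every day"]:
    print(compare_and_file(line, keywords))
```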
-When you think about the evidence generated by the grounded theory method,
consider whether the theory is useful in explaining, interpreting, or predicting the study
phenomenon of interest.
- In a report of research in which the grounded theory method was used, you can
expect to find a diagrammed model of a theory in which the researcher’s findings are
synthesized in a systematic way.
Ethnographic method
-The cognitive perspective is the view that culture consists of the beliefs, knowledge,
and ideas that people use as they live.
-Culture refers to the structures of meaning through which people shape experiences.
(holistic perspective)
-The aim of ethnographic research is to combine the emic perspective (the insider’s
view of the world) with the etic perspective (the view of the researcher [outsider]) to
develop a scientific generalization about different societies. In other words,
generalizations are drawn from special examples or details from participant
observation.
-Critical ethnography focuses on beliefs and practices that limit human freedom,
justice, and democracy. Critical ethnographers document tacit rules that govern human
interaction and behaviour. They also explore how dominant social groups oppress those
in the minority or those without power (e.g., feminist researchers).
-Ethnogeriatrics focuses on examining the “health and aging issues in the context of
cultural beliefs, values, and practices among racial and ethnic minority elders.”
-Ethnography has clinical utility in describing the “local world” of groups of patients who
are experiencing a particular phenomenon, such as suffering. The local worlds of
patients have cultural, political, economic, institutional, and social-relational
dimensions in much the same way that larger, complex societies do.
Structuring the study
Research question. Ethnographic nursing studies address questions that concern how
cultural knowledge, norms, values, and other contextual variables influence a person’s
health care experience. Ethnographers have a broader definition of culture, whereby a
particular social context is conceptualized as a culture. For example, in a hospital-based
study of nurse and patient perceptions of illicit substance use, the medical unit in a large
urban inner-city hospital is seen as a culture appropriate for ethnographic study.
Researcher’s perspective.
Data gathering
-In nursing, interviewing in the natural setting is the major data-collection method.
Three types of questions are used:
1. descriptive questions, which are broad and open ended;
2. structural, or in-depth, questions that expand and verify the unit of analysis; and
3. contrast questions, which further clarify and provide criteria for exclusion.
Data analysis
As with other qualitative methods, data are collected and analyzed simultaneously.
Analysis begins with a search for domains, or symbolic categories that include
smaller categories. Language is analyzed for semantic relationships, and
structural questions are formulated to expand and verify data.
Analysis proceeds through increasing levels of complexity until the data,
grounded in the informant’s reality and synthesized by the researcher, lead to
hypothetical propositions about the cultural phenomenon under investigation.
In one study, for example, the researchers coded the interviews, field notes, and
documents. The data were reviewed to identify patterns of interaction, key concepts,
and emerging themes. The emerging themes and data were further conceptualized to
higher levels. Each researcher read the data independently, and then, with the
involvement of advisory groups with expertise in the field, the emergent themes were
confirmed.
Qualitative descriptive method
“Researchers stay close to their data and to the surface of words and events. . . .
Qualitative descriptive study is the method of choice when straight descriptions of
phenomena are desired”
Case study method
-This is what differentiates a case study from other qualitative inquiries: the unit of
analysis is intrinsically bounded, and a limited number of participants can be
included.
-The case study method is about studying the peculiarities and commonalities of a
specific case over time to provide an in-depth description of the essential dimensions
and processes of the phenomenon—familiar ground for practising nurses.
-In an instrumental case study, the researcher is pursuing insight into an issue or wants
to challenge some generalization.
-Nurses have a long and continuing tradition of using case studies for teaching and
learning about patients. Nightingale (1858/1969) stressed the importance of coming to
know patients and of basing practice on experience.
Research question.
Research questions evolve over time and are re-created in case study research.
Researcher’s perspective.
Sample selection.
Sample selection is one of the areas in which scholars in the field present differing
views, ranging from choosing only the most common cases to choosing only the most
unusual cases. No choice is perfect when a case is selected.
Data gathering
Data are gathered through the use of interview, observation, document review, and any
other methods by which researchers accumulate evidence that enables understanding
of the complexity of the case.
Reflecting and revising meanings are the work of the case study researcher, who has
recorded data, searched for patterns, linked data from multiple sources, and devised
preliminary thoughts regarding the meaning of collected data. This reflective dynamic
evolution is the iterative process of creating the case study story that can be
thought of as the evidence.
As the reader of a case study, you may have difficulty determining how data analysis
was conducted because the research report generally does not list research activities.
-Findings are embedded in (1) a chronological development of the case, (2) the
researcher’s story of coming to know the case, (3) the descriptions of individual case
dimensions, and (4) vignettes that highlight case qualities
-Case studies are a way of providing in-depth, evidence-informed discussion of clinical
topics that can be used to guide practice
The historical research method is a systematic approach for understanding the past
through collection, organization, and critical appraisal of facts.
-Through historical research, we can better understand how nurses in the present can
assume control of their practice, education, and roles in the contemporary healthcare
system.
-In the historical method, expect to find the research question embedded in the
phenomenon to be studied. The question is stated implicitly rather than explicitly.
-The three levels used to establish the credibility of historical evidence are as follows:
Fact
Two independent primary sources that agree with each other, OR one
independent primary source that receives critical evaluation plus one independent
secondary source that is in agreement, receives critical evaluation, and presents no
substantive conflicting data
Probability
One primary source that receives critical evaluation and no substantive conflicting data,
OR two primary sources that disagree about particular points
Possibility
One primary source that provides information but is not adequate to receive critical
evaluation, OR only secondary or tertiary sources
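The three levels above read like a small decision rule, which can be sketched as follows. The encoding (counts of agreeing or critiqued sources, flags for conflicting data) is my own simplification of the notes, not a formula from the historical method itself.

```python
def evidence_level(primary_agreeing=0, primary_critiqued=0,
                   secondary_critiqued=0, conflicting=False,
                   primary_disagree=False, only_secondary_or_tertiary=False):
    """Classify historical evidence as fact, probability, or possibility."""
    if primary_agreeing >= 2 and not conflicting:
        return "fact"
    if primary_critiqued >= 1 and secondary_critiqued >= 1 and not conflicting:
        return "fact"
    if primary_critiqued >= 1 and not conflicting:
        return "probability"
    if primary_disagree:
        return "probability"
    return "possibility"  # e.g., only secondary or tertiary sources

print(evidence_level(primary_agreeing=2))               # fact
print(evidence_level(primary_critiqued=1))              # probability
print(evidence_level(only_secondary_or_tertiary=True))  # possibility
```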
-When you critique a study based on the historical method, expect not to find a report of
data analysis but simply a description of findings synthesized into a continuous narrative
-The presentation of a historical study should be logical, consistent, and easy to follow.
Participatory action research (PAR)
We start by really understanding the people involved (the "look" phase). This
helps us define the problem in a way that makes sense to those people.
Then, in the "think" phase, we take what we've learned and analyze it. We try
to connect the ideas from the people involved so that they make sense to the
community.
Finally, in the "act" phase, we use all this information to make a plan, put it
into action, and then evaluate how well it works. It's all based on what we found
out in the "look" and "think" phases.
PAR methods are used to improve health care services in communities. PAR has been
applied to health and wellness programs, program evaluation, care plans, community
nursing, and health care delivery and policy. Studies have ranged from issues and
conditions stemming from chronic illness, pregnancy and childbirth, pain management,
and incontinence to rehabilitation. PAR investigators, like
ethnographers, immerse themselves in the field for deep understanding and to build
trust and credibility.
Sample selection.
Data gathering
In the “look” phase, data are gathered from a variety of sources; interviews are the
principal means for understanding the experiences of the participants. A literature review
may add information to enhance the understanding of the data emerging from the
interviews and other sources.
Data analysis
In the “think” phase, the researchers think about and reflect on all of the data
gathered. The purpose of data analysis is to distill and reduce the volume of
information into a manageable and organized set of concepts or ideas.
Two approaches:
A second process involves the categorization and coding of data to reveal patterns
and themes.
Key points
• Data saturation occurs when the information being shared with the researcher
becomes repetitive.
• Qualitative research methods include five basic elements: identifying the phenomenon,
structuring the study, gathering the data, analyzing the data, and describing the
findings.
• The historical research method is the systematic compilation of data and the critical
presentation, appraisal, and interpretation of facts regarding people, events, and
occurrences of the past.
CHAPTER 9
The purpose of the research design is to provide the plan for answering research questions.
A question may arise because a researcher wants to answer a theoretical question (this is
called basic research). On the other hand, applied research is about solving practical
problems to improve patients' health care.
In quantitative research, the design is how we test our ideas or answer questions, whether
they are basic or applied.
- The main goals of research design are to help solve research problems and keep things
controlled to avoid mistakes that could affect the results.
- The design gives us a plan, structure, and strategy for writing our questions, doing the research,
and analyzing the data.
-Control means the steps the researcher takes to keep everything in the study the same and
prevent any bias (distortion of results) in what we measure.
-Several things need to be considered when conducting a study. These include the design of the
study, being objective (using facts without personal bias), accuracy, practicality, control of the
experiment, and the study's reliability within and outside the research setting.
-The type of design used in a study also affects how it can be applied in real-world practice.
-In the conceptualization of the problem, being objective means relying on a review of
existing knowledge and developing a theoretical framework.
-By examining the literature, the researcher evaluates how much we already know about the
problem. The literature review and theoretical framework should demonstrate that the researcher
assessed existing literature critically and without personal bias, which is crucial because it
influences the choice of research design.
-The literature review should cover details such as 1. when the problem was studied, 2. what
aspects of the problem were investigated, 3. where it was studied, 4. who did the research, 5. any
gaps or inconsistencies in the existing literature.
-For instance, if the research question is about the impact of the length of a breastfeeding
education program, the literature may suggest using an experimental or correlational design.
On the other hand, if the question is about physical changes in a woman's body during
pregnancy and how mothers perceive their unborn child, a survey or correlational study
might be more appropriate.
Accuracy
For new researchers, it's a good idea to start with questions that involve fewer variables
and don't require complex designs.
-If there's not much existing research on a particular clinical issue, it's a good practice to conduct
a preliminary or pilot study before a larger one. A pilot study is a small, basic study that helps
test the accuracy, validity, and objectivity of the research design. The key is that the researcher
uses sound methods to answer the research question.
Feasibility
Feasibility means thinking about whether the study can actually be carried out successfully. It
involves factors like the availability of participants, timing, participant commitment, costs, and
data analysis. Studies designed to test feasibility are sometimes referred to as pilot studies.
-Before conducting a large experimental study, like a randomized clinical trial, it's a good idea to
start with a pilot study. Practical considerations (recruiting participants, carrying out the
intervention, data collection, ensuring that participants complete the study, testing new
measurement tools, and assessing the study's costs) may not be formal steps in the research
process like developing a theoretical framework or methods, but they impact every part of the
research and should be taken into account when evaluating a study.
-When critiquing a study, it's important to consider the author's credentials and whether the
study was part of a student project or a fully funded grant project. Student projects might be
evaluated with a bit more leniency than those conducted by experienced researchers. Ultimately,
these practical issues influence the scope and general applicability of a study.
Control
Control means keeping the conditions of the study constant and setting specific criteria for
who can participate. For example, in one study, researchers wanted to validate a Breastfeeding
Self-Efficacy Scale among Indigenous women. They included Indigenous women who could
read and speak English and had access to a telephone for follow-up. They excluded women if
their babies were born prematurely, if there were health issues, or if there were multiple births or
complications during childbirth. An efficient design can help get better results, reduce mistakes,
and manage conditions that might affect the outcome.
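Inclusion and exclusion criteria like those in the example amount to a screening checklist, which might be sketched like this. The field names and the `eligible` helper are invented for illustration.

```python
def eligible(p):
    """Apply the (invented) inclusion and exclusion criteria to one record."""
    included = p["indigenous"] and p["reads_english"] and p["has_phone"]
    excluded = (p["premature_birth"] or p["health_issues"]
                or p["multiple_birth"] or p["birth_complications"])
    return included and not excluded

participant = {
    "indigenous": True, "reads_english": True, "has_phone": True,
    "premature_birth": False, "health_issues": False,
    "multiple_birth": False, "birth_complications": False,
}
print(eligible(participant))  # True
```

Screening every candidate against the same checklist is one way the researcher keeps the study conditions constant.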
-Control in research means getting rid of any extra factors that could confuse the results. These
extra factors, also called extraneous variables (or mediating variables), interfere with the
things the study is trying to understand (for example, factors like age and gender). Some ways to
control these factors include using a similar group of people (homogeneous sample), using the
same (consistent) methods to collect data, manipulating the independent variable, and using
randomization.
-As you read a report, assess whether the study includes a tested intervention and whether the
report contains a clear description of the intervention and how it was controlled. If the details are
not clear, the intervention may have been administered differently among the participants, which
would affect the interpretation of the results.
Homogeneous sampling
-Homogeneous sampling means selecting participants who are similar with regard to the
extraneous variables relevant to the particular study.
For example, in a smoking cessation study, factors like age, gender, how long someone has
smoked, how much they smoke, and even smoking rules (like where smoking is allowed) can
affect the outcome, even though they are not part of the main study.
- creating a homogeneous sample, which means that the participants have similarities in terms
of relevant extraneous variables. However, making the sample too similar can limit how
widely the results can be applied to other populations.
-It's important to note that this kind of control can limit the applicability of the results to other
populations. What's crucial is that the researcher should identify, plan for, and control these
extraneous variables before collecting data.
- remember that it is better to have a “clean” study, whose results can be used to make
generalizations about a specific population, than a “messy” study, whose results may be poorly
or not at all generalizable.
Constancy
Constancy means making sure that the way data are collected in a study is consistent and
follows a standard procedure. This includes ensuring that all participants are exposed to the
same environmental conditions, data collection timing, data collection tools, and data collection
steps.
-This type of control helps researchers draw conclusions, discuss their findings, and highlight the
need for further research in the field. For those reviewing the study, constancy ensures that data
were collected clearly, consistently, and in a specific way, and that limitations are discussed.
When interventions are part of a study, researchers often describe how the interventionists and
data collectors were trained and supervised to ensure that data collection remains constant. While
all study designs should demonstrate constancy in data collection, it's especially critical in
studies testing interventions.
-Another method of control in research involves manipulating the independent variable. This
means administering a program, treatment, or intervention to one group of participants in the
study but not to the others.
-The group that gets the intervention is called the experimental group, while the other group is
known as the control group or comparison group. In the control group, the variables being
studied are kept constant or at a standard level.
-This type of manipulation is typically seen in experimental and quasi-experimental designs, but
it's not used in nonexperimental designs where the independent variable is not manipulated
-Be aware that the lack of manipulation of the independent variable does not mean that the study
is weaker. The level of the problem, the amount of theoretical work, and the research that has
preceded the project affect the researcher’s choice of the design. If the problem is amenable to a
design in which the independent variable can be manipulated, the power of a researcher to draw
conclusions will increase, provided that all of the considerations of control are equally addressed.
Randomization
-The level of control may vary depending on the type of research design.
In exploratory studies, where little or no existing literature on a concept exists, the researcher's
aim is to describe or categorize a phenomenon among a group of individuals. Control in such
studies should be applied more flexibly due to the preliminary nature of the work.
-Experimental designs demand the highest level of control. Researchers using experimental
designs must ensure that the conditions of the research remain constant throughout the study, that
participant assignment is random, and that both experimental and control groups are used. The
rigorous control in experimental studies helps address extraneous variables, and this control
should be evident in the research report.
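Random assignment of participants to experimental and control groups, one of the control techniques described above, can be sketched minimally. The participant IDs and the even split are invented; real trials use more careful allocation schemes (e.g., blocked or stratified randomization).

```python
import random

def randomize(ids, seed=None):
    """Randomly split participant IDs into two groups of (near-)equal size."""
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs; a fixed seed makes the example repeatable.
experimental, control = randomize(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print("experimental:", experimental)
print("control:", control)
```

The point of the shuffle is that each participant has the same chance of landing in either group, which helps equalize extraneous variables across groups.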
-Remember that establishing evidence for practice is determined by assessing the validity of each
step of the study, assessing whether the evidence assists in planning patient care, and assessing
whether patients respond to the evidence-informed care.
Two crucial criteria for evaluating the credibility and dependability of research results are internal
validity and external validity:
Internal Validity: Internal validity is concerned with the extent to which the observed effects in
a study are genuinely caused by the experimental treatment rather than some other
uncontrolled factors.
To establish internal validity, researchers must rule out alternative explanations for the
relationship between the variables. Threats to internal validity are particularly relevant to
experimental designs but should also be considered in various quantitative research designs.
The threats to internal validity include:
-History Threats: These are events that occur outside the study and could influence the results.
-Maturation Effects: Changes that naturally happen over time and might impact the
participants' responses.
-Testing Effects: The act of being tested or measured multiple times may affect how participants
respond.
-Instrumentation Threats: Changes in the measurement tools or methods can introduce bias
into the results.
-Mortality (Attrition): When participants drop out of a study, it can affect the
representativeness of the remaining sample.
-Selection Bias: This occurs when participants in different groups are not equivalent at the
beginning of the study, potentially affecting the results.
External Validity: External validity is about how well the findings of a study can be applied or
generalized to other populations, settings, or conditions. Threats to external validity include:
-Selection Effects: If the sample used in the study is not representative of the larger population,
it may limit the generalizability of the findings.
-Reactive Effects: Participants may react differently because they know they are part of a study
(the so-called "Hawthorne effect").
-Measurement Effects: The measurement instruments or procedures used in the study may not
accurately represent real-world situations.
-In essence, internal validity is about ensuring that the observed effects can be attributed to the
treatment being studied, while external validity is about how well those findings can be applied
in real-world scenarios. Researchers and consumers of research need to be aware of these
potential threats to validity to make reliable and meaningful interpretations of study results.
History threats
Not only the independent variable but also another specific event may affect the dependent
variable, either inside or outside the experimental setting. This threat to internal validity is
referred to as the history threat.
Maturation effects
Maturation effects could also occur in a study of the relationship between two methods of
teaching about children’s knowledge of self-care measures.
Testing effects
Taking the same test repeatedly can influence participants’ responses the next time the test is completed. For example, the effect of having taken a pretest on a participant’s posttest score is known as a testing effect. Individuals generally score higher when they take a test a second time, regardless of the treatment, so the differences between posttest and pretest scores may result not from the independent variable but from the experience gained through testing.
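The logic of a testing effect can be sketched in a small simulation; the 5-point `practice_gain` and all other numbers are assumptions chosen for illustration. Even with a treatment that does nothing, the pretested group outscores the unpretested group:

```python
import random

random.seed(0)  # reproducible illustration

def mean_posttest(n, pretested, practice_gain=5.0, treatment_gain=0.0):
    """Hypothetical model: taking a pretest adds a practice gain to the
    posttest score; the treatment itself contributes treatment_gain."""
    total = 0.0
    for _ in range(n):
        base = random.gauss(70, 5)  # underlying knowledge, out of 100
        total += base + (practice_gain if pretested else 0.0) + treatment_gain
    return total / n

# The treatment has zero true effect, yet the pretested group scores higher.
pretested_mean = mean_posttest(1000, pretested=True)
unpretested_mean = mean_posttest(1000, pretested=False)
print(round(pretested_mean - unpretested_mean, 1))  # close to 5: a pure testing effect
```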
Instrumentation threats
Instrumentation threats are changes in the variables or observational techniques that may
account for changes in the obtained measurement.
For example, a researcher may wish to study various types of thermometers (e.g., tympanic,
digital, electronic, chemical indicator, plastic strip, and mercury) to compare the accuracy of
the mercury thermometer with the other temperature-taking methods. To prevent
instrumentation threats, the researcher must check the calibration of the
thermometers according to the manufacturer’s specifications before and after data collection.
Although the researcher can take steps to prevent problems of instrumentation, the threat of
instrumentation may still occur. When a reader critiquing a study finds such a threat, it must be
evaluated within the total context of the study.
Mortality/attrition
Mortality, or attrition, is the loss of study participants between the first data-collection point (pretest) and the second data-collection point (posttest). If the participants who remain in the study are not similar to those who dropped out, the results can be affected.
- The loss of participants may be from the sample as a whole, or, in a study that has both an
experimental group and a control group, more of the participants may drop out from one group
than from the other group; this effect is known as differential loss of participants.
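A small numeric sketch, with scores invented for illustration, shows how differential loss of participants can manufacture an apparent treatment effect where none exists:

```python
import statistics

# Both groups start with identical score distributions, so the true
# treatment effect is zero. Scores are invented for illustration.
experimental = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]
control = list(experimental)

# Differential attrition: only the experimental group loses its lowest
# scorers (e.g., they found the intervention too demanding and dropped out).
experimental_completers = [s for s in experimental if s >= 65]

bias = statistics.mean(experimental_completers) - statistics.mean(control)
print(bias)  # 7.5: the apparent "effect" is produced entirely by who dropped out
```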
Selection bias
If precautions are not taken to obtain a representative sample, selection bias, the threat to internal validity that arises when pretreatment differences exist between the experimental group and the control group, can result from the way the participants were chosen. Selection effects are a particular problem in studies in which the individuals themselves decide whether to participate: volunteers may differ systematically from nonvolunteers, so an apparent treatment effect may reflect who was selected rather than the treatment itself.
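Self-selection can be sketched the same way; the motivation model below is an assumption made up for illustration. When more motivated people opt into the treatment arm and motivation itself drives the outcome, a large group difference appears even though the treatment does nothing:

```python
import random

random.seed(1)  # reproducible illustration

def outcome(motivation):
    # Hypothetical model: the outcome depends on motivation alone;
    # the treatment contributes nothing.
    return 60 + 20 * motivation + random.gauss(0, 2)

treated, untreated = [], []
for _ in range(2000):
    motivation = random.random()
    volunteers = motivation > 0.5  # more motivated people self-select in
    (treated if volunteers else untreated).append(outcome(motivation))

gap = sum(treated) / len(treated) - sum(untreated) / len(untreated)
print(round(gap, 1))  # close to 10: a pretreatment difference, not a treatment effect
```

Random assignment breaks this link between motivation and group membership, which is why it is the standard defense against selection bias.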
-The list of threats to internal validity is not exhaustive. More than one threat can be found in a
study, depending on the type of study design. Finding a threat to internal validity in a study does
not invalidate the results and is usually acknowledged by the investigator in the “Results” or
“Discussion” section of the study.
-Avoiding threats to internal validity in clinical research can be difficult. However, this reality does not render studies that have such threats useless. It is important to take the threats into consideration and to weigh the total evidence of a study for not only its statistical meaningfulness but also its clinical meaningfulness.
External validity
External validity concerns how well the findings of a study can be generalized to other groups of people and other settings. For external validity to be strong, the results should hold when the conditions and the types of participants involved are varied.
-Factors that can affect external validity are linked to the selection of people for the study, the conditions under which the study is conducted, and the way observations are made. These factors are called selection effects, reactive effects, and measurement effects. Their names resemble those of the threats to internal validity, but the perspective differs: when considering factors as internal threats, the reader assesses them as they relate to the independent and dependent variables within the study; when assessing them as external threats, the reader considers them in terms of generalizability, that is, their use outside the study with other populations and settings.
Selection effects
Selection effects occur when the sample studied is not representative of the larger population to which the researcher wishes to generalize; if so, the generalizability of the findings is limited.
Reactive effects
Reactivity is defined as the participants’ responses to being studied. Participants may behave
in a certain way with the investigator not because of the study procedures but merely as an
independent response to being studied. This response is also known as the Hawthorne effect,
named for the Western Electric Corporation’s Hawthorne plant, where studies of worker
productivity were conducted. The researchers created several different working conditions,
such as turning up the lights, piping in music loudly or softly, and changing work hours. They
found that no matter what was done, the workers’ productivity increased. They concluded that
production increased because the workers knew they were being studied rather than because of
the experimental conditions.
Measurement effects
-Administering a pretest can affect not only the results obtained within a study but also how well those results apply to other situations or groups of people outside the study. In other words, the pretest can influence both the study’s internal validity (the credibility of the results within the study) and its external validity (how well the results generalize to other settings and populations).
-When you review a study, be aware of the internal and external threats to validity. These
threats do not render a study useless; instead, they make it more useful to you.
Recognition of the threats allows researchers to build on data and allows consumers to think
through what part of the study can be applied to practice. Specific threats to validity depend on
the type of design and generalizations that the researcher hopes to make.
-Minimizing threats to internal and external validity enhances the strength of evidence for any
quantitative design.
-The concept of the research design is all-inclusive and parallels the concept of the theoretical
framework. The research design is similar to the theoretical framework in that it deals with a
piece of the research study that affects the whole.
-The research outcome should demonstrate that an objective review of the literature and the establishment of a theoretical framework guided the choice of the design. Often no explicit statement regarding these areas is made in a research article; instead, a consumer can evaluate the design by critiquing the theoretical framework and the literature review.
-The research consumer should be alert for the methods that investigators use to maintain
control (e.g., homogeneity in the sample, consistent data-collection procedures, manipulation of
the independent variable, and randomization).
-Once you have established whether the necessary control or uniformity of conditions has been maintained, you must determine whether the study is believable, or valid. You should ask whether the findings are the result of the variables tested, and thus internally valid, or whether another explanation is possible. To assess this aspect, review the threats to internal validity. If the investigator’s study was systematic, was well grounded in theory, and followed the criteria for each of the processes, you will probably conclude that the study is internally valid.
-In addition, you must know whether a study has external validity, or generalizability to other populations or environmental conditions. External validity can be claimed only after internal validity has been established; if the credibility of a study (internal validity) has not been established, the study has no generalizability to other populations (external validity). Determination of external validity is related directly to the sampling method: if the sample is not representative of the group or phenomenon of interest, external validity may be limited or absent.
-The establishment of internal and external validity requires not only knowledge of the threats to internal and external validity but also knowledge of the phenomena being studied, which allows critical judgements to be made about the linkage of theories and variables for testing. You should find that the design follows from the theoretical framework, literature review, research question, and hypotheses. You should believe, on the basis of clinical knowledge and knowledge of the research process, that the investigators are not, as the expression goes, comparing apples with oranges.
As a consumer of research, you recognize that control is an important concept in the issue
of research design. You are critiquing an assigned experimental study as part of your
“open-book” midterm examination. From what is written, you cannot determine how the
researchers kept the conditions of the study constant.
-You are critiquing the research design of an assigned study as a consumer of research. How
does the research design influence the findings of evidence in the study?
-How do threats to external validity contribute to the strength and quality of evidence provided
by the findings of a research study?
Key points
• The purpose of the design is to provide the format for masterful and accurate research.
• Many types of designs exist. No matter which type of design the researcher uses, the purpose
remains the same.
• The research consumer should be able to locate within the study a sense of the question that the
researcher wished to answer. The question should be proposed with a plan or scheme for the
accomplishment of the investigation. Depending on the question, the consumer should be able to
recognize the steps taken by the investigator to ensure control.
• The choice of the specific design depends on the nature of the question. To specify the nature
of the research question, the design must reflect the investigator’s attempts to maintain
objectivity, accuracy, pragmatic considerations, and, most important, control.
• Control affects not only the outcome of a study but also its future use. The design should also
reflect how the investigator attempted to control threats to both internal and external validity.
• Internal validity must be established before external validity can be established. Both are
considered within the sampling structure.
• No matter which design the researcher chooses, it should be evident to the reader that the
choice was based on a thorough examination of the research question within a theoretical
framework.
• The design, research question, literature review, theoretical framework, and hypothesis should
all be interrelated.
• The choice of the design is affected by pragmatic issues. At times, two different designs may be
equally valid for the same question.
-Problems with internal validity are usually easier to deal with because they concern the relationships between variables within the study itself. External validity is harder to establish because it assumes that what is found in one group will hold true for other groups and settings.