
Southern Luzon State University

College of Allied Medicine

MODULE 4
DESIGN AND PLANNING PHASE

"Strategically, I Know You Can Handle the Study"

"If we knew what we were doing, it would not be called research, would it?"
― Albert Einstein

Dear students,

This module is dedicated to the students of Southern Luzon State University, College of Allied Medicine, in support of distance learning during this time of pandemic. We hope that this module will prepare the students who read it to shape their future in health care.

Southern Luzon State University
Brgy. Kulapi, Lucban, Quezon
fazogue@slsu.edu.ph
arabuy@slsu.edu.ph
CP - 09183733614
CP - 09178851252
https://meet.google.com/lookup/czuzcjiiwv
https://meet.google.com/lookup/bvs25dpolj

NCM 111-Nursing Research 1 | Prepared by: FRITZIE L. AZOGUE RN,RM,MAN/ DR. ROSALINDA A. ABUY –
SY 2021-2022


OVERVIEW

As a young researcher, you must have a clear understanding of the various types of research design in order to select which model to implement for your study. Like research itself, the design of your study can be broadly classified into quantitative and qualitative. So what is your perception of research design? To give you an overview, the research design is the framework of research methods and techniques chosen by a researcher. It allows you to hone in on research methods that are suitable for the subject matter and to set your study up for success. The design of a research topic explains the type of research (experimental, survey, correlational, semi-experimental, review) and also its sub-type (experimental design, research problem, descriptive case-study), and it specifies the methods used to measure, gather, and analyze the data or variables. The design phase of a study determines which tools to use and how they are used. Additionally, an impactful research design usually creates minimal bias in the data and increases trust in the accuracy of the collected data. A design that produces the least margin of error in experimental research is generally considered the desired outcome. Therefore, the design should include an accurate purpose statement, techniques for collecting and analyzing data, the type of research methodology, the method applied for analyzing collected details, the setting for the research study, the timeline, and the measurement of analysis.

In this module, you will see that a proper research design sets your study up for success, and successful research studies provide insights that are accurate and unbiased. As part of your self-assessment or evaluation, you are advised to submit the requirements. Please include the title of the research, a short overview, and the research design. Please use legal size paper, 1-2 pages, in Microsoft Word, Times New Roman, font size 12, with 1.5 spacing. The same applies to the diagram: legal size, 1-2 pages, in Microsoft Word, Times New Roman, font size 12. Kindly submit your requirements on or before 12 midnight of the specified date. Submit your requirements with the rubrics; you will be evaluated using the rubrics. Keep safe and healthy.

LEARNING OBJECTIVES

At the end of the module you will be able to:

o Determine appropriate research design and sampling plan.


o Select appropriate data, measurement tools, and analysis method.
o Develop a research design and sampling plan guide and diagram for qualitative and quantitative research.


DISCUSSION

RESEARCH DESIGN
The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that the research problem determines the type of design you should use, not the other way around!
I. QUANTITATIVE DESIGN
Quantitative designs develop from a strong theoretical base, emerge from questions regarding explanations of and relationships between and among variables, and generally derive from evolving knowledge in the area of inquiry. Quantitative designs focus on approaches that emphasize explanations of variables, verification of data, measuring variables, use of instruments for data collection, statistical significance, and internal validity.

I. EXPERIMENTAL
Experimental research is concerned with cause-and-effect relationships. A cause-and-effect relationship occurs when one object or event makes some other object or event happen. All experimental studies involve manipulation or control of the independent variable (cause) and measurement of the dependent variable (effect). Although experimental research designs are highly respected in the scientific world, causal relations are difficult to establish, and, as discussed elsewhere in this book, researchers should avoid using the word prove when discussing research results. Controls are difficult to apply when experimental research is conducted with human beings.

Characteristics of True Experiments

A true experimental design or randomized controlled trial (RCT) is characterized by the following properties:
Manipulation—the experimenter does something to some subjects—that is, there is some type of intervention.
Control—the experimenter introduces controls into the study, including devising a good approximation of a counterfactual—usually a control group that does not receive the intervention.
Randomization—the experimenter assigns subjects to a control or experimental condition on a random basis.

Using manipulation, experimenters consciously vary the independent variable and then observe its effect on the dependent variable. Researchers manipulate the independent variable by administering an experimental treatment (or intervention) to some subjects while withholding it from others. To illustrate, suppose we were investigating the effect of physical exertion on mood in healthy young adults. One experimental design for this research problem is a pretest–posttest design (or before–after design). This design involves the observation of the dependent variable (mood) at two points in time: before and after the treatment. Participants in the experimental group are subjected to a physically demanding exercise routine, whereas those in the control group undertake a sedentary activity. This design permits us to examine what changes in mood were caused by the exertion because only some people were subjected to it, providing an important comparison. In this example, we met the first criterion of a true experiment by manipulating physical exertion, the independent variable.
This example also meets the second requirement for experiments, the use of a control group. For example, if we were to supplement the diet of premature neonates with special nutrients for 2 weeks, the infants' weight at the end of 2 weeks would tell us nothing about the treatment's effectiveness. At a minimum, we would need to compare their post-treatment weight with their pre-treatment weight to
determine whether, at least, their weights had increased. But suppose we find an average weight gain of half a pound. Does this finding support an inference of a causal relationship between the nutritional intervention (the independent variable) and weight gain (the dependent variable)? No, it does not. Infants normally gain weight as they mature. Without a control group—a group that does not receive the supplements—it is impossible to separate the effects of maturation from those of the treatment. The term control group refers to a group of participants whose performance on a dependent variable is used to evaluate the performance of the experimental group (the group receiving the intervention) on the same dependent variable. The control group condition used as a basis of comparison represents a proxy for the ideal counterfactual as previously described.
Experimental designs also involve placing subjects in groups at random. Through randomization (random assignment), every participant has an equal chance of being included in any group. If people are randomly assigned, there is no systematic bias in the groups with regard to attributes that may affect the dependent variable. Randomly assigned groups are expected to be comparable, on average, with respect to an infinite number of biologic, psychological, and social traits at the outset of the study. Group differences on outcomes observed after random assignment can therefore be inferred as being caused by the treatment.
Random assignment can be accomplished by flipping a coin or pulling names from a hat. Researchers typically either use computers to perform the randomization or rely on a table of random numbers, a table displaying hundreds of digits arranged in a random order.

VALIDITY OF EXPERIMENTAL DESIGN


In experimental studies, as well as in other types of research, the researcher is interested in controlling extraneous variables that can influence study results. Extraneous variables are those variables the researcher is unable to control, or does not choose to control, and which can influence the results of a study.
Other names for extraneous variables are confounding and intervening variables. These variables are also called study limitations. The researcher acknowledges these study limitations in the discussion section of a research report. In experimental studies, the extraneous variables, or competing explanations for the results, are labeled threats to internal and external validity.
In an experimental study, the researcher is trying to establish a cause-and-effect relationship. The internal validity of an experimental design concerns the degree to which changes in the dependent variable (effect) can be attributed to the independent variable (cause). Threats to internal validity are factors other than the independent variable that influence the dependent variable. These factors constitute rival explanations or competing hypotheses that might explain the study results.
External validity concerns the degree to which study results can be generalized to other people and other settings. These kinds of questions should be answered about external validity: With what degree of confidence can the study findings be transferred from the sample to the entire population? Will these study findings hold true with other groups in other times and places?
Internal and external validity are related in that as the researcher attempts to control for internal validity, external validity is usually decreased. Conversely, when the researcher is concerned with external validity or the generalizability of the findings to other settings and other people, the strict control necessary for high internal validity may be affected. Therefore, the researcher must decide how to balance internal and external validity.

THREATS TO INTERNAL VALIDITY

SELECTION BIAS The selection bias threat occurs when study results are attributed to the experimental treatment or the researcher's manipulation of the independent variable when, in fact, the results are related to subject differences before the independent variable was manipulated. This selection threat
should be considered in experimental studies when subjects are not randomly assigned to experimental and comparison groups. Can you give us an example?
HISTORY The threat of history (history threat) occurs when some event besides the experimental treatment occurs during the course of a study, and this event influences the dependent variable. History is controlled by the inclusion of at least one simultaneous control or comparison group in a study. Additionally, random assignment of subjects to groups helps control the threat of history. Environmental events (history) would, therefore, be as likely to occur for subjects in one group as in another. If a difference is found between the groups at the conclusion of the study, the researcher is much more confident that the manipulation of the independent variable is the cause of this difference.
MATURATION Maturation becomes a threat when changes that occur within the subjects during an experimental study influence the study results. People may become older, taller, or sleepier from the time of the pretest to the posttest. If a school nurse were interested in the weight gain of children who receive a hot breakfast at school each day, she would have to keep in mind that changes may occur in these children during the course of the study that are not related to the experimental treatment. The children will probably gain some weight as they eat more and grow older, regardless of whether they eat a hot breakfast at school or not. Again, a comparison group of similar children helps control for this threat. Maturation processes are then as likely to occur in one group as in another.
TESTING The testing threat may occur in studies where a pretest is given or where subjects have knowledge of baseline data. Testing refers to the influence of the pretest or knowledge of baseline data on the posttest scores. Subjects may remember the answers they put on the pretest and put the same answers on the posttest. Also, subjects' scores may be altered on the posttest as a result of their knowledge of baseline data.
INSTRUMENTATION CHANGE When mechanical instruments or judges are used in the pretest and posttest phases of a study, the threat of instrumentation must be considered. Instrumentation change involves the difference between the pretest and posttest measurement caused by a change in the accuracy of the instrument or the judges' ratings, rather than as a result of the experimental treatment. Judges may become more adept at the ratings or, on the contrary, become tired and make less exact observations. Training sessions for judges and trial runs to check for fatigue factors may help control for instrumentation changes. Also, if mechanical instruments are used, such as sphygmomanometers, these instruments should be checked for their accuracy throughout the study.
MORTALITY The mortality threat occurs when the subjects do not complete a study. Attrition or drop-out may occur in any research study. However, the term mortality is reserved for experimental studies in which the dropout rate is different between the experimental and the comparison groups. The observed effects may occur because the subjects who dropped out of a particular group are different from those who remained in the study. The researcher might falsely conclude that the treatment really worked well. There is no research design that will control for mortality because, for ethical reasons, participants can never be forced to remain in a study. The longer a study lasts, the more likely it is that subject dropout will occur. It is suggested that subjects drop out of studies due to lack of interest and motivation. This may be of particular concern in randomization where subjects drop out of one group at a rate higher than another group.

THREATS TO EXTERNAL VALIDITY


HAWTHORNE EFFECT The Hawthorne effect occurs when study participants respond in a certain manner because they are aware that they are being observed. In the original Hawthorne studies, working conditions, such as the length of the working day, were varied. Worker productivity was found to increase no matter what changes were made. The increase in productivity was finally determined to be the result of the subjects' knowledge that they were involved in a research study, and that someone was interested in them. The Hawthorne effect may also be considered a threat to internal validity. It might be possible to control this threat by using a double-blind experiment. In a double-blind experiment, neither the researcher nor the research
participants are aware of which participants are in the experimental group and which participants are in the control group.
EXPERIMENTER EFFECT The experimenter effect is a threat to study results that occurs when researcher characteristics or behaviors influence subject behaviors. Examples of researcher characteristics or behaviors that may be influential are facial expressions, clothing, age, gender, and body build. Although the term experimenter effect is appropriate to use only when discussing experimental research, a term with a similar meaning is used in non-experimental research. The Rosenthal effect, named after the person who identified this phenomenon, is used to indicate the influence of an interviewer on respondents' answers. It has been shown that researcher characteristics such as gender, dress, and type of jewelry may influence study participants' answers to questions in non-experimental studies.
REACTIVE EFFECTS OF THE PRETEST When a pretest and a posttest are used in an experimental study, the researcher must be aware not only of the internal validity threat that may occur, but also of the external validity threat that may exist. The reactive effects of the pretest occur when subjects have been sensitized to the treatment because they took the pretest. This sensitization may affect the posttest results. People might not respond to the treatment in the same manner if they had not received a pretest. The pretest does not have to be a questionnaire-like test. As mentioned previously, if study participants were told their weight prior to a weight-reduction study, this knowledge of baseline data would be considered a pretest.

SYMBOLIC PRESENTATION OF RESEARCH DESIGN

Research designs are often easier to understand when seen in a symbolic form.

R = random assignment of subjects to groups
X = experimental treatment or intervention
O = observation or measurement of dependent variable

The Xs and Os on one line apply to a specific group. The time sequence of events is read from left to right. If an X appears first and then an O, this means the intervention occurred first and then an observation was made. If a subscript appears after an X or O (X1; X2; O1; O2), the numbers indicate the first treatment, second treatment, first observation, second observation, and so forth.
R O1 X O2 (Experimental group)
R O1 O2 (Comparison group)
This example has two groups, both of which were formed through random assignment (R) of subjects to groups. Random assignment is a procedure that ensures that each subject has an equal chance of being assigned or placed in any of the groups in an experimental study. Both groups in the example were measured or given a pretest (O1) on the phenomenon of interest (dependent variable). The experimental group was exposed to an experimental treatment (independent variable); the comparison group was not exposed to this treatment. Then, both groups were again measured or given the posttest (O2) on the phenomenon of interest (dependent variable).
There are a number of ways of carrying out random assignment. Today, it is often done through a computer-generated process.
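The module notes that random assignment is often computer generated. As a concrete illustration, here is a minimal Python sketch of assigning subjects to an experimental and a comparison group; the function name, subject IDs, and sample size are invented for illustration and are not part of the module.

```python
import random

def randomly_assign(subject_ids, seed=None):
    """Randomly split subjects into an experimental and a comparison group.

    Shuffling the list gives every subject an equal chance of landing in
    either group, which is the defining feature of random assignment.
    """
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "comparison": shuffled[half:]}

# Example: 10 hypothetical subjects identified by number.
print(randomly_assign(range(1, 11), seed=42))
```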

A. EXPERIMENTAL DESIGN

a. Basic Designs
The most basic experimental design involves randomizing subjects to different groups and then measuring the dependent variable. This design is sometimes called a posttest-only (or after-only) design. A more widely used design, discussed previously, is the pretest–posttest design, which involves collecting pretest
data (also called baseline data) on the dependent variable before the intervention and posttest (outcome) data after it.
1. PRETEST-POSTTEST CONTROL GROUP DESIGN The pretest-posttest control group design is probably the most frequently used experimental design. In this design, (a) the subjects are randomly assigned to groups, (b) a pretest is given to both groups, (c) the experimental group receives the experimental treatment and the comparison group receives the routine treatment or no treatment, and (d) a posttest is given to both groups.
R O1 X O2 (Experimental group)
R O1 O2 (Comparison group)
The researcher is able to determine if the groups were equal before the treatment was administered. If the groups were not equivalent, the posttest scores may be adjusted statistically to control for the initial differences between the two groups that were reflected in the pretest scores.
The pretest-posttest control group design controls for all threats to internal validity. The disadvantage of this design concerns the external threat of the reactive effects of the pretest. The results of the study can be generalized only to situations in which a pretest would be administered before the treatment.
2. POSTTEST-ONLY CONTROL GROUP DESIGN In the posttest-only control group design, (a) subjects are randomly assigned to groups, (b) the experimental group receives the experimental treatment and the comparison group receives the routine treatment or no treatment, and (c) a posttest is given to both groups.
R X O1 (Experimental group)
R O1 (Comparison group)
The posttest-only control group design is easier to carry out and superior to the pretest-posttest design. The researcher does not have to be concerned with the reactive effects of the pretest on the posttest. The generalizability of the results would be more extensive. A study similar to the example described regarding the pretest-posttest control group design could be developed. The only difference would be that the two groups would not receive a pretest on their knowledge of diabetes.
Random assignment of subjects into groups in the posttest-only control group design helps ensure equality of the groups. The use of a large sample size will increase the effectiveness of random assignment. Although random assignment should ensure equality of groups, researchers seem to be fearful that the groups may not, in fact, be similar. Therefore, they sometimes choose to administer a pretest. The posttest-only control group design should be used when it is not possible to administer a pretest or when it would not make sense to administer a pretest.
3. SOLOMON FOUR-GROUP DESIGN In the Solomon four-group design, (a) subjects are randomly assigned to one of the four groups; (b) two of the groups, experimental group 1 and comparison group 1, are pretested; (c) two of the groups, experimental group 1 and experimental group 2, receive the experimental treatment, whereas two of the groups, comparison group 1 and comparison group 2, receive the routine treatment or no treatment; and (d) a posttest is given to all four groups.
R O1 X O2 (Experimental group 1)
R O1 O2 (Comparison group 1)
R X O2 (Experimental group 2)
R O2 (Comparison group 2)
The Solomon four-group design is considered to be the most prestigious experimental design because it minimizes threats to internal and external validity. This design not only controls for all of the threats to internal validity, but also controls for the reactive effects of the pretest. Any differences between the experimental and the comparison groups can be more confidently associated with the experimental treatment. Unfortunately, this design requires a large sample, and statistical analysis of the data is complicated.

b. Factorial Design

Researchers sometimes manipulate two or more independent variables simultaneously. Suppose we were interested in comparing two therapeutic strategies for premature infants: tactile stimulation versus auditory stimulation. We are also interested in learning whether the daily amount of stimulation affects infants' progress. This factorial design allows us to address three questions: (1) Does auditory stimulation cause different effects on infant development than tactile stimulation? (2) Does the amount of stimulation (independent of modality) affect infant development? and (3) Is auditory stimulation most effective when linked to a certain dose and tactile stimulation most effective when coupled with a different dose?
The third question demonstrates an important strength of factorial designs: they permit us to evaluate not only main effects (effects resulting from the manipulated variables, as exemplified in questions 1 and 2) but also interaction effects (effects resulting from combining the treatments). Our results may indicate, for example, that 15 minutes of tactile stimulation and 45 minutes of auditory stimulation are the most beneficial treatments. We could not have learned this by conducting two separate experiments that manipulated one independent variable at a time.
In factorial experiments, subjects are assigned at random to a combination of treatments. In our example, premature infants would be assigned randomly to one of the six cells. The term cell refers to a treatment condition and is represented in a diagram as a box. In this factorial design, type of stimulation is factor A and amount of exposure is factor B. Each factor must have two or more levels. Level one of factor A is auditory, and level two of factor A is tactile.
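To make the idea of assigning infants at random to the six cells concrete, here is a small Python sketch of a 2 x 3 factorial assignment. The two types of stimulation follow the example above; the three exposure amounts (15, 30, and 45 minutes), the function name, and the sample size are assumptions made only for illustration, since the text specifies only that there are six cells.

```python
import itertools
import random

# Factor A: type of stimulation; Factor B: daily amount of exposure (minutes, assumed).
FACTOR_A = ["auditory", "tactile"]
FACTOR_B = [15, 30, 45]
CELLS = list(itertools.product(FACTOR_A, FACTOR_B))  # 2 x 3 = 6 treatment cells

def assign_to_cells(subject_ids, seed=None):
    """Randomly deal subjects into the six cells as evenly as possible."""
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    assignment = {cell: [] for cell in CELLS}
    for i, subject in enumerate(shuffled):
        assignment[CELLS[i % len(CELLS)]].append(subject)
    return assignment

# Example: 12 hypothetical premature infants, two per cell.
print(assign_to_cells(range(1, 13), seed=7))
```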

c. Crossover Design
Thus far, we have described experiments in which the subjects who are randomly assigned to treatments are different people. For instance, the infants given 15 minutes of auditory stimulation in the factorial experiment are not the same infants as those exposed to other treatment conditions. This broad class of designs is called between-subjects designs because the comparisons are between different people. When the same subjects are compared, the designs are within-subjects designs.
A crossover design involves exposing participants to more than one treatment. Such studies are true experiments only if participants are randomly assigned to different orderings of treatment. For example, if a crossover design were used to compare the effects of auditory and tactile stimulation on infants, some would be randomly assigned to receive auditory stimulation first followed by tactile stimulation, and others would receive tactile stimulation first. In such a study, the three conditions for an experiment have been met: there is manipulation, randomization, and control—with subjects serving as their own control group.
A crossover design has the advantage of ensuring the highest possible equivalence among the subjects exposed to different conditions. Such designs are inappropriate for certain research questions, however, because of possible carryover effects. When subjects are exposed to two different treatments, they may be influenced in the second condition by their experience in the first. However, when carryover effects are implausible, as when treatment effects are immediate and short-lived, a crossover design is extremely powerful.

Experimental and Control Conditions

In designing experiments, researchers make many decisions about what the experimental and control conditions entail, and these decisions can affect the results.
To give an experimental intervention a fair test, researchers need to carefully design one that is appropriate to the problem and of sufficient intensity and duration that effects on the dependent variable might reasonably be expected. Researchers delineate the full nature of the intervention in formal protocols that stipulate exactly what the treatment is for those in the experimental group.
The control group condition (the counterfactual) must also be carefully conceptualized. Researchers have choices about what to use as the counterfactual, and the decision has implications for the interpretation of the findings. Among the possibilities for the counterfactual are the following:

No intervention—the control group gets no treatment at all
An alternative treatment (e.g., auditory versus tactile stimulation)
A placebo or pseudo-intervention presumed to have no therapeutic value
"Usual care"—standard or normal procedures used to treat patients
An attention control condition—the control group gets researchers' attention but not the active ingredients of the intervention
A lower dose or intensity of treatment, or only parts of the treatment
Delayed treatment (i.e., control group members are wait-listed and exposed to the experimental treatment at a later point).

QUASI-EXPERIMENTS

Quasi-experiments (called controlled trials without randomization in the medical literature) also involve an intervention; however, quasi-experimental designs lack randomization, the signature of a true experiment. Some quasi-experiments even lack a control group. The signature of a quasi-experimental design, then, is an intervention in the absence of randomization.

1. NONEQUIVALENT CONTROL GROUP DESIGN

The nonequivalent control group design is similar to the pretest-posttest control group design except that there is no random assignment of subjects to the experimental and comparison groups.
O1 X O2 (Experimental group)
O1 O2 (Comparison group)

A researcher might choose a group of patients with diabetes on one hospital floor for the experimental group and a group of patients with diabetes on another floor for the comparison group. The experimental treatment would be administered to the experimental group; the comparison group would receive the routine treatment or some alternative treatment.
Threats to internal validity controlled by the nonequivalent control group design are history, testing, maturation, and instrumentation change. The biggest threat to internal validity is selection bias. Because the two groups may not be similar at the beginning of the study, it is possible to test statistically for differences between the groups. For example, it could be determined whether the ages and educational backgrounds of the subjects in both groups were similar. If the groups were similar at the beginning of the study, more confidence could be placed in a cause-and-effect relationship between variables. A statistical test called analysis of covariance (ANCOVA) can be used to help control for differences that might have existed, through chance, between the experimental and control groups at the beginning of the study.
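As a hedged illustration of how an ANCOVA adjustment of posttest scores for pretest differences might be carried out, here is a short sketch using Python's statsmodels library. The variable names (pretest, posttest, group) and the data values are hypothetical; this shows one common way of running an ANCOVA, not a procedure prescribed by this module.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per subject with pretest score, posttest score,
# and group membership.
data = pd.DataFrame({
    "pretest":  [52, 48, 55, 60, 47, 58, 50, 53],
    "posttest": [70, 61, 72, 75, 55, 66, 58, 62],
    "group":    ["experimental"] * 4 + ["comparison"] * 4,
})

# ANCOVA: model the posttest score from group membership while adjusting for
# the pretest score (the covariate). The coefficient on the group term
# estimates the treatment effect after controlling for initial differences.
model = smf.ols("posttest ~ pretest + C(group)", data=data).fit()
print(model.summary())
```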

2. TIME-SERIES DESIGN

In a time-series design, the researcher periodically observes or measures the subjects. The experimental treatment is administered between two of the observations.

O1 O2 O3 X O4 O5 O6

A researcher might assess the pain levels of a group of clients with low back pain. After three weeks of pain assessment (O1, O2, O3), subjects could be taught a special exercise to alleviate low back pain. During the next three weeks, pain levels would again be measured (O4, O5, O6). The results of this study
would help the researcher determine if low back pain persists, if a specific exercise is effective in reducing low back pain, and if the effectiveness of the exercise persists.
The time-series design, with its numerous observations or measurements of the dependent variable, helps strengthen the validity of the design. The greatest threats to validity are history and testing.
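As a rough numerical sketch of how data from this time-series design might be summarized, the snippet below compares mean pain scores before and after the exercise intervention; the pain values are invented purely for illustration, and a real analysis would use more formal time-series or repeated-measures methods.

```python
from statistics import mean

# Hypothetical weekly pain scores (0-10) for one group of clients.
before = [7.2, 7.0, 7.1]   # O1, O2, O3: three weeks before the exercise is taught
after = [6.1, 5.5, 5.2]    # O4, O5, O6: three weeks after the exercise is taught

print(f"Mean pain before intervention: {mean(before):.2f}")
print(f"Mean pain after intervention:  {mean(after):.2f}")
print(f"Mean change: {mean(after) - mean(before):+.2f}")
```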

RANDOMIZED CLINICAL TRIALS

Randomized clinical trials (RCTs) are the most widely accepted approach to evaluating the effectiveness of an intervention/treatment in a sample of patients. RCTs require the researcher to adhere strictly to principles of experimental design. Clinical trials typically use a pretest and posttest control group design, follow subjects in time (prospective), collect outcome data after an extended period, and frequently sample from multiple sites. Essential features of clinical trials include use of an experimental group and a control group, randomization, blinding of patients and health-care providers, and sufficient sample sizes.
Blinding of patients and health-care providers is important because it reduces biasing of perceptions or ways of acting on everyone's part. Blinding means that the treatment assignment is not known to certain individuals. On the one hand, patients who know they have been assigned to receive a particular treatment/intervention might anticipate or look forward to favorable outcomes. On the other hand, those patients not assigned to receive a particular treatment/intervention might feel deprived or even relieved. In either case, knowledge of being assigned to a particular group could possibly affect outcomes. In a single-blinded study, the treatment assignment is unknown to patients; in a double-blinded study, the treatment assignment is unknown to the patients and to the health-care providers. In double-blinded studies, the treatment assignment is revealed only if there are serious or unexpected side effects from the intervention/treatment. Blinding is sometimes referred to as masking.

B. NON-EXPERIMENTAL DESIGN

In non-experimental designs, the researcher observes the actions of variables as they occur in the natural state. The major purpose of non-experimental research is to uncover new knowledge and describe relationships among variables.
a. Descriptive Designs
Descriptive designs gather information about conditions, attitudes, or characteristics of individuals or groups of individuals. The purpose of descriptive research is to describe the meaning of existing phenomena. Enumeration and brief description of characteristics are the key elements in this design. For example, a researcher decides to survey the religious affiliations of patients because there has been little pastoral care at a particular institution. A brief description and enumeration of affiliations will help support the need for further religious support within the institution.
b. Correlational Designs
A correlational design arises from a level of knowledge needing further refinement of variable measurement or clarification of relationships among variables. Unlike the experimental and quasi-experimental designs, the independent variable is not manipulated. However, this design gives support to later work using more sophisticated designs that explicate causal relationships. Correlational research investigates the relationship between or among two or more variables. The researcher can use this approach to describe relationships, predict relationships, or test relationships supported by clinical theory. In correlational studies, no attempt is made to control or manipulate the variables under study.
Correlational research can be conducted retrospectively or prospectively. Retrospective design research involves examination of data collected in the past, often obtained from medical records or surveys. Many epidemiological studies use retrospective data. For example, in retrospective lung cancer research, researchers begin with some people who have lung cancer and others who do not, and then look for
differences in antecedent behaviors or conditions, such as smoking habits. Such a retrospective design is sometimes called a case-control design—that is, cases with a certain condition such as lung cancer are compared with controls without it. In designing a case-control study, researchers try to identify controls without the disease who are as similar as possible to the cases with regard to key confounding variables (e.g., age, gender). To the degree that researchers can demonstrate comparability between cases and controls with regard to extraneous traits, causal inferences are enhanced. The difficulty, however, is that the two groups are almost never comparable with respect to all potential factors influencing the outcome variable.
Prospective design research (called a cohort design by medical researchers) involves the examination of variables through direct recording in the present. Prospective studies are more reliable than retrospective studies because of the potential for greater control of data collection methods. For example, in prospective lung cancer studies, researchers start with samples of smokers and nonsmokers and later compare the two groups in terms of lung cancer incidence. Prospective studies are more costly, but much stronger, than retrospective studies. For one thing, any ambiguity about the temporal sequence of phenomena is resolved in prospective research (i.e., smoking is known to precede the lung cancer). In addition, samples are more likely to be representative of smokers and nonsmokers, and investigators may be better able to impose controls to rule out competing explanations for observed effects.
Descriptive Correlational Studies: the purpose of a descriptive correlational study is to describe and explain the nature and magnitude of existing relationships, without necessarily clarifying the underlying causal factors in the relationship. The nature of a relationship explains the type of relationship that exists (that is, whether the relationship is positive or negative). The relationship between the number of cigarettes smoked and lung function has been shown to be negative, referred to as an inverse relationship. People who smoke more cigarettes have lower lung function, and vice versa (people who smoke a lower number of cigarettes have higher lung function). Descriptive correlational designs are essentially exploratory and often target the generation of hypotheses.
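As a simple numerical illustration of a negative (inverse) relationship like the one between cigarettes smoked and lung function, here is a short Python sketch; the numbers are invented solely to show how a correlation coefficient is computed and do not come from any real study.

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical data: cigarettes smoked per day and a lung-function score.
cigarettes_per_day = [0, 5, 10, 15, 20, 25]
lung_function_score = [95, 90, 84, 78, 70, 64]

# A value of r close to -1 indicates a strong inverse relationship:
# as smoking increases, lung function decreases.
r = correlation(cigarettes_per_day, lung_function_score)
print(f"Pearson's r = {r:.2f}")
```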
Other Types of Research Designs
Historical Research
Historical research systematically investigates and critically evaluates information related to past events. Its purpose is not to review literature about past events but to shed light on present events through analysis of the causes and effects of, and trends in, historical events. The goal is to create new insights, not to rehash historical information. Qualitative and quantitative approaches and data collection techniques are used.
Secondary Analysis
Secondary analysis uses previously gathered data to test new hypotheses, explore new relationships among variables, or create new insights. Because the process of data collection is time-consuming and expensive, use of these data is an efficient way to create new insights and develop new knowledge. However, the data may be deficient or problematic with regard to the variables or populations studied.
Meta-Analysis
Meta-analysis is a technique that uses the findings from several studies to create a data set that may be analyzed as a single piece of datum. By applying statistical procedures to these findings, this technique offers a unique approach to integrating knowledge. Meta-analyses are common in the nursing literature.
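To show what applying statistical procedures to findings from several studies can look like in its simplest form, here is a minimal sketch of fixed-effect, inverse-variance pooling of effect sizes in Python. The study estimates and standard errors are invented for illustration, and real meta-analyses involve many further steps (heterogeneity assessment, random-effects models, publication-bias checks).

```python
import math

# Hypothetical effect sizes (e.g., mean differences) and their standard errors
# from three separate studies.
effects = [0.40, 0.25, 0.55]
std_errors = [0.10, 0.15, 0.20]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```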
Epidemiological Research
Epidemiology studies the distribution and determinants of disease and injury frequency in human populations. Epidemiological questions often arise out of clinical experience and public health concerns about the relationship between social practices and disease outcomes. Epidemiology began as the study of epidemics, concerned primarily with mortality and morbidity from acute infectious disease. As medical cures and treatments were developed to control many of these problems and as patterns of disease have changed, chronic diseases and disabilities have gained increased prominence in epidemiology. Today, epidemiology encompasses a broader context of "epidemic" and "disease," including studies of AIDS,
outbreaks of controlled diseases such as measles, and more chronic conditions such as cardiac disease, arthritis, diabetes, traumatic injuries, and birth defects.

QUALITATIVE DESIGN
Qualitative studies use an emergent design, that is, a design that emerges as researchers make ongoing decisions reflecting what has already been learned. An emergent design in qualitative studies is not the result of laziness on the part of researchers, but rather a reflection of their desire to have the inquiry based on the realities and viewpoints of those under study, realities and viewpoints that are not known or understood at the outset.

Characteristics of Qualitative Research Design

Is flexible and elastic, capable of adjusting to what is being learned during the course of data collection
Often involves a merging together of various data collection strategies (i.e., triangulation)
Tends to be holistic, striving for an understanding of the whole
Requires researchers to become intensely involved, often remaining in the field for lengthy periods of time
Requires ongoing analysis of the data to formulate subsequent strategies and to determine when field work is done

Qualitative Design and Planning

Selecting a broad framework or tradition to guide certain design decisions
Determining the maximum amount of time for the study, given costs and other constraints
Developing a broad data collection strategy (e.g., will interviews be conducted?)
Selecting the study site and identifying appropriate settings
Taking steps to gain entrée into the site through negotiations with key "gate-keepers"
Identifying the types of equipment that could aid in the collection and analysis of data in the field (e.g., recording equipment, laptop computers)
Identifying personal biases, views, and presuppositions vis-à-vis the phenomenon or the study site (reflexivity)

Qualitative Design Features

Intervention, Control, and Masking - Qualitative research is almost always non-experimental—although a qualitative sub-study may be embedded in an experiment. Qualitative researchers do not conceptualize their studies as having independent and dependent variables, and they rarely control or manipulate any aspect of the people or environment under study. Masking is also not a strategy used by qualitative researchers because there is no intervention or hypothesis to conceal. The goal is to develop a rich understanding of a phenomenon as it exists and as it is constructed by individuals within their own context.

Comparisons - Qualitative researchers typically do not plan in advance to make group comparisons because the intent is to describe and explain a phenomenon thoroughly. Nevertheless, patterns emerging in the data sometimes suggest comparisons that are illuminating. Indeed, all description requires comparisons. In analyzing qualitative data and in determining whether categories are saturated, there is a need to compare "this" to "that."

Research Settings - Qualitative researchers usually collect their data in real-world, naturalistic settings. And, whereas a quantitative researcher usually strives to collect data in one type of setting to maintain control over the environment (e.g., conducting all interviews in study participants' homes), qualitative researchers may deliberately strive to study phenomena in a variety of natural contexts.


Timeframes - Qualitative research, like quantitative research, can be either cross-sectional, with one data collection point, or longitudinal, with multiple data collection points over an extended period, to observe the evolution of a phenomenon. Sometimes qualitative researchers plan in advance for a longitudinal design, but, in other cases, the decision to study a phenomenon longitudinally may be made in the field after preliminary data have been collected and analyzed.

PHENOMENOLOGICAL STUDIES
Phenomenological studies examine human experiences through the descriptions provided by the people involved. These experiences are called lived experiences. The goal of phenomenological studies is to describe the meaning that experiences hold for each participant.
In phenomenological research, participants are asked to describe their experiences as they perceive them. They may write about their experiences (for example, in diaries), but information is generally obtained through interviews.
The two main schools of thought are descriptive phenomenology and interpretive phenomenology (hermeneutics).

Descriptive Phenomenology
Descriptive phenomenology emphasizes descriptions of human experience. Descriptive phenomenologists insist on the careful portrayal of ordinary conscious experience of everyday life—a depiction of "things" as people experience them. These "things" include hearing, seeing, believing, feeling, remembering, deciding, and evaluating.
Descriptive phenomenological studies often involve the following four steps: bracketing, intuiting, analyzing, and describing.
a. Bracketing - refers to the process of identifying and holding in abeyance preconceived beliefs and opinions about the phenomenon under study. Although bracketing can never be achieved totally, researchers strive to bracket out any presuppositions in an effort to confront the data in pure form. Bracketing is an iterative process that involves preparing, evaluating, and providing systematic ongoing feedback about the effectiveness of the bracketing. Phenomenological researchers (as well as other qualitative researchers) often maintain a reflexive journal in their efforts to bracket.
b. Intuiting - the second step in descriptive phenomenology, occurs when researchers remain open to the meanings attributed to the phenomenon by those who have experienced it.
c. Analyzing - extracting significant statements, categorizing, and making sense of the essential meanings of the phenomenon.
d. Describing - the descriptive phase occurs when researchers come to understand and define the phenomenon.

Interpretive Phenomenology
Interpretive phenomenology, also termed hermeneutics, holds that understanding is a basic characteristic of human existence. Indeed, the term hermeneutics refers to the art and philosophy of interpreting the meaning of an object (e.g., a text, work of art, and so on). The goals of interpretive phenomenological research are to enter another's world and to discover the wisdom, possibilities, and understandings found there. The interpretive process involves a circular relationship known as the hermeneutic circle, in which one understands the whole of a text (e.g., a transcribed interview) in terms of its parts and the parts in terms of the whole. In this view, researchers enter into a dialogue with the text, in which the researcher continually questions its meaning. In an interpretive phenomenological study, bracketing does not occur because it is not possible to bracket one's being-in-the-world. Hermeneutics presupposes prior understanding on the part of the researcher.
Interpretive phenomenologists, like descriptive phenomenologists, rely primarily on in-depth interviews with individuals who have experienced the phenomenon of interest, but they may go beyond a traditional approach to gathering and analyzing data. For example, interpretive phenomenologists sometimes
augment their understandings of the phenomenon through an analysis of supplementary texts, such as novels, poetry, or other artistic expressions—or they use such materials in their conversations with study participants.

ETHNOGRAPHY
Ethnography is a type of qualitative inquiry that involves the description and interpretation of a culture and cultural behavior. Culture refers to the way a group of people live—the patterns of human activity and the symbolic structures (e.g., the values and norms) that give such activity significance. Ethnographies are a blend of a process and a product, field work and a written text. Field work is the process by which the ethnographer inevitably comes to understand a culture, and the ethnographic text is how that culture is communicated and portrayed. Because culture is, in itself, not visible or tangible, it must be constructed through ethnographic writing.
Ethnographers seek to learn from, rather than to study, members of a cultural group—to understand their world view. Ethnographic researchers sometimes refer to "emic" and "etic" perspectives. An emic perspective refers to the way the members of the culture regard their world—it is the insiders' view. The emic is the local language, concepts, or means of expression that are used by the members of the group under study to name and characterize their experiences. The etic perspective, by contrast, is the outsiders' interpretation of the experiences of that culture—the words and concepts they use to refer to the same phenomena. Ethnographers strive to acquire an emic perspective of a culture under study. Moreover, they strive to reveal what has been referred to as tacit knowledge, information about the culture that is so deeply embedded in cultural experiences that members do not talk about it or may not even be consciously aware of it.
Three broad types of information are usually sought by ethnographers: cultural behavior (what members of the culture do), cultural artifacts (what members of the culture make and use), and cultural speech (what people say). This implies that ethnographers rely on a wide variety of data sources, including observations, in-depth interviews, records, and other types of physical evidence (e.g., photographs, diaries). Ethnographers typically use a strategy known as participant observation, in which they make observations of the culture under study while participating in its activities. Ethnographers observe people day after day in their natural environments to observe behavior in a wide array of circumstances. Ethnographers also enlist the help of key informants to help them understand and interpret the events and activities being observed.
Ethnographic research typically is a labor-intensive and time-consuming endeavor—months and even years of fieldwork may be required to learn about the cultural group of interest. The study of a culture requires a certain level of intimacy with members of the cultural group, and such intimacy can be developed only over time and by working directly with those members as active participants.
The product of ethnographic research usually is a rich and holistic description of the culture under study. Ethnographers also interpret the culture, describing normative behavioral and social patterns. Among health care researchers, ethnography provides access to the health beliefs and health practices of a culture or subculture. Ethnographic inquiry can thus help to facilitate understanding of behaviors affecting health and illness. Indeed, Leininger has coined the phrase ethnonursing research, which she defined as "the study and analysis of the local or indigenous people's viewpoints, beliefs, and practices about nursing care behavior and processes of designated cultures".

GROUNDED THEORY
Grounded theory has become an important research method for nurse researchers and has contributed to the development of many middle-range theories of phenomena relevant to nurses. Grounded theory was developed in the 1960s by two sociologists, Glaser and Strauss, whose theoretic roots were in symbolic interaction, which focuses on the manner in which people make sense of social interactions and the interpretations they attach to social symbols.


Grounded theory tries to account for people's actions from the perspective of those involved. Grounded theory researchers seek to understand the actions by first discovering the main concern or problem and then the individuals' behavior that is designed to resolve it. The manner in which people resolve this main concern is called the core variable. One type of core variable is called a basic social process (BSP). The goal of grounded theory is to discover this main concern and the basic social process that explains how people resolve it. The main concern or problem must be discovered from the data. Grounded theory researchers generate emergent conceptual categories and their properties and integrate them into a substantive theory grounded in the data.
Grounded Theory Methods
Grounded theory methods constitute an entire approach to the conduct of field research. In grounded theory, both the research problem and the process used to resolve it are discovered during the study. A fundamental feature of grounded theory research is that data collection, data analysis, and sampling of participants occur simultaneously. The grounded theory process is recursive: researchers collect data, categorize them, describe the emerging central phenomenon, and then recycle earlier steps.
A procedure referred to as constant comparison is used to develop and refine theoretically relevant concepts and categories. Categories elicited from the data are constantly compared with data obtained earlier in the data collection process so that commonalities and variations can be determined. As data collection proceeds, the inquiry becomes increasingly focused on emerging theoretical concerns.
In-depth interviews and participant observation are the most common data sources in grounded theory studies, but existing documents and other data sources may also be used. Typically, a grounded theory study involves interviews with a sample of about 20 to 40 informants.

CASE STUDIES
Case studies are in-depth investigations of a single entity or a small number of entities. The entity may be an individual, family, institution, community, or other social unit. In a case study, researchers obtain a wealth of descriptive information and may examine relationships among different phenomena, or may examine trends over time. Case study researchers attempt to analyze and understand issues that are important to the history, development, or circumstances of the entity under study.
One way to think of a case study is to consider what is at center stage. In most studies, whether qualitative or quantitative, certain phenomena or variables are the core of the inquiry. In a case study, the case itself is central. The focus of case studies is typically on determining the dynamics of why an individual thinks, behaves, or develops in a particular manner rather than on what his or her status, progress, or actions are. It is not unusual for probing research of this type to require detailed study over a considerable period. Data are often collected that relate not only to the person's present state but also to past experiences and situational factors relevant to the problem being examined.
Case studies are sometimes a useful way to explore phenomena that have not been rigorously researched. Information obtained in case studies can be used to develop hypotheses to be tested more rigorously in subsequent research. The intensive probing that characterizes case studies often leads to insights about previously unsuspected relationships. Case studies also may serve the important role of clarifying concepts or of elucidating ways to capture them.
The greatest strength of case studies is the depth that is possible when a limited number of individuals, institutions, or groups is being investigated. Case studies provide researchers with opportunities of having an intimate knowledge of a person's condition, feelings, actions (past and present), intentions, and environment.

NARRATIVE ANALYSES
Narrative analysis focuses on story as the object of inquiry, to determine how individuals make sense of
events in their lives. Narratives are viewed as a type of “cultural envelope” into which people pour their
experiences and relate their importance to others. What distinguishes narrative analysis from other types
of qualitative research designs is its focus on the broad contours of a narrative; stories are not fractured
and dissected. The broad underlying premise of narrative research is that people most effectively make
sense of their world—and communicate these meanings—by constructing, reconstructing, and narrating
stories. Individuals construct stories when they wish to understand specific events and situations that
require linking an inner world of desire and motive to an external world of observable actions. Analyzing
stories opens up forms of telling about experience, and is more than just content. Narrative analysts ask,
“Why was the story told that way?”

CRITICAL THEORY
Critical social science is typically action oriented. Its broad aim is to integrate theory and practice such that
people become aware of contradictions and disparities in their beliefs and social practices, and become
inspired to change them. Critical researchers reject the idea of an objective and disinterested inquirer and
are oriented toward a transformation process. Critical theory calls for inquiries that foster enlightened
self-knowledge and sociopolitical action. Moreover, critical theory involves a self-reflective aspect. To
prevent a critical theory of society from becoming yet another self-serving ideology, critical theorists must
account for their own transformative effects. Essentially, a critical researcher is concerned with a critique
of society and with envisioning new possibilities.

FEMINIST RESEARCH
Feminist research is similar to critical theory research, but the focus is sharply on gender domination and
discrimination within patriarchal societies. Similar to critical researchers, feminist researchers seek to
establish collaborative and nonexploitative relationships with their informants, to place themselves within
the study to avoid objectification, and to conduct research that is transformative.
Gender is the organizing principle in feminist research, and investigators seek to understand how gender
and a gendered social order have shaped women's lives and their consciousness. The aim is to facilitate
change in ways relevant to ending women's unequal social position.
The scope of feminist research ranges from studies of the subjective views of individual women, to studies
of social movements, structures, and broad policies that affect (and often exclude) women. Feminist
research methods typically include in-depth, interactive, and collaborative individual interviews or group
interviews that offer the possibility of reciprocally educational encounters. Feminists usually seek to
negotiate the meanings of the results with those participating in the study and to be self-reflective about
what they themselves are experiencing and learning. Feminist research, like other research that has an
ideologic perspective, has raised the bar for the conduct of ethical research. With the emphasis on trust,
empathy, and nonexploitative relationships, proponents of these newer modes of inquiry view any type
of deception or manipulation as abhorrent.

PARTICIPATORY ACTION RESEARCH


Participatory action research (PAR) is, as the name implies, participatory. There is collaboration between
researchers and study participants in the definition of the problem, the selection of an approach and
research methods, the analysis of the data, and the use to which findings are put. The aim of PAR is to
produce not only knowledge, but action and consciousness-raising as well. Researchers specifically seek to
empower people through the process of constructing and using knowledge. The PAR tradition has as its
starting point a concern for the powerlessness of the group under study. Thus, a key objective is to
produce an impetus that is directly used to make improvements through education and sociopolitical
action.
In PAR, the research methods are designed to facilitate emergent processes of collaboration and dialogue
that can motivate, increase self-esteem, and generate community solidarity. Thus, “data-gathering”
strategies used are not only the traditional methods of interview and observation (including both
qualitative and quantitative approaches), but may include storytelling, sociodrama, drawing, plays and
skits, and other activities designed to encourage people to find creative ways to explore their lives, tell
their stories, and recognize their own strengths.

III. MIXED METHOD RESEARCH


A growing trend in nursing research is the planned integration of qualitative and quantitative data within
single studies or coordinated clusters of studies.

Rationale for Mixed Method Research


The dichotomy between quantitative and qualitative data represents a key methodologic distinction in the
social, behavioral, and health sciences. Some argue that the paradigms that underpin qualitative and
quantitative research are fundamentally incompatible. Others, however, believe that many areas of
inquiry can be enriched and the evidence base enhanced through the judicious triangulation of qualitative
and quantitative data. The advantages of a mixed method design include the following:
* Complementarity. Qualitative and quantitative approaches are complementary—they
represent words and numbers, the two fundamental languages of human communication. By using mixed
methods, researchers can allow each to do what it does best, possibly avoiding the limitations of a single
approach.

* Incrementality. Progress on a topic tends to be incremental, relying on feedback loops.


Qualitative findings can generate hypotheses to be tested quantitatively, and quantitative findings
sometimes need clarification through in-depth probing. It can be productive to build such a loop into the
design of a study.

* Enhanced validity. When a hypothesis or model is supported by multiple and complementary
types of data, researchers can be more confident about their inferences and the validity of their results.
The triangulation of methods can provide opportunities for testing alternative interpretations of the data,
and for examining the extent to which the context helped to shape the results.

Applications of Mixed Method Research


Instrumentation
Researchers sometimes collect qualitative data that are used in the development of formal, quantitative
instruments used in research or clinical applications. The questions for a formal instrument are sometimes
derived from clinical experience or prior research. When a construct is new, however, these mechanisms
may be inadequate to capture its full complexity and dimensionality. Thus, nurse researchers sometimes
gather qualitative data as the basis for generating and wording the questions on quantitative scales that
are subsequently subjected to rigorous testing.

Hypothesis Generation and Testing


In-depth qualitative studies are often fertile with insights about constructs or relationships among them.
These insights then can be tested and confirmed with larger samples in quantitative studies. This most
often happens in the context of discrete investigations. One problem, however, is that it usually takes years
to do a study and publish the results, which means that considerable time may elapse between the
qualitative insights and the formal quantitative testing of hypotheses based on those insights. A research
team interested in a phenomenon might wish to collaborate in a project that has hypothesis generation
and testing as an explicit goal.

Qualitative data are sometimes used to explicate the meaning of quantitative descriptions or
relationships. Quantitative methods can demonstrate that variables are systematically related but may fail
to provide insights about why they are related. Such explications help to clarify important concepts and to
corroborate the findings from the statistical analysis; they also help to illuminate the analysis and give
guidance to the interpretation of results. Qualitative materials can be used to explicate specific statistical
findings and to provide more global and dynamic views of the phenomena under study, sometimes in the
form of illustrative case studies.

Theory Building, Testing, and Refinement


A particularly ambitious application of mixed method research is in the area of theory construction. A
theory gains acceptance as it escapes disconfirmation, and the use of multiple methods provides great
opportunity for potential disconfirmation of a theory. If the theory can survive these assaults, it can
provide a stronger context for the organization of clinical and intellectual work.

Intervention Development
Qualitative research is beginning to play an increasingly important role in the development of promising
nursing interventions and in efforts to test their efficacy and effectiveness. There is also a growing
recognition that the development of effective interventions must be based on some understanding of why
people might not adhere to intervention protocols, or of the barriers they face in participating in a
treatment. Intervention research is increasingly likely to be mixed method research.

Mixed Method Designs and Strategies


The growth in interest in mixed methods research in recent years has given rise to numerous typologies of
mixed methods designs. For example, one typology contrasts component designs and integrated designs.
In studies with a component design, the qualitative and quantitative aspects are implemented as discrete
components of the overall inquiry, and remain distinct during data collection and analysis.
In mixed method studies with an integrated design, there is greater integration of the method types at all
phases of the project, from the development of research questions, through data collection and analysis,
to the interpretation of the results. The blending of data occurs in ways that integrate the elements from
the different paradigms and offers the possibility of yielding more insightful understandings of the
phenomenon under study.
Another scheme focuses on two key design dimensions for a mixed method study:
* which approach (qualitative or quantitative) has priority
* how the approaches are sequenced in a study.
In some cases (especially in component studies), data collection for the two approaches occurs more or
less concurrently. In others, however, there are important advantages to a sequential approach, so that
the second phase builds on knowledge gained in the first.
Although mixed method approaches can be used in a variety of contexts, they are becoming increasingly
valuable in research involving interventions, which we discuss next.

Clinical Trials
Clinical trials are studies designed to assess clinical interventions. Methods associated with clinical trials
were developed primarily in the context of medical research, but the vocabulary is being used by many
nurse researchers.
Clinical trials undertaken to test a new drug or an innovative therapy are often designed in a series of four
phases, as described by the U.S. National Institutes of Health, as follows:
* Phase I of the trial occurs after the initial development of the drug or therapy and is designed
primarily to establish safety and tolerance, and to determine optimal dose or strength of the therapy. This
phase typically involves small-scale studies using simple designs (e.g., before–after with no control group).
The focus is not on efficacy, but on developing the best possible (and safest) treatment.

* Phase II of the trial involves seeking preliminary evidence of controlled effectiveness. During
this phase, researchers ascertain the feasibility of launching a more rigorous test, seek evidence that the
treatment holds promise, look for signs of possible side effects, and identify refinements to improve the
intervention. This phase is sometimes considered a pilot test of the treatment, and may be designed
either as a small-scale experiment or as a quasi-experiment.

* Phase III is a full experimental test of the treatment—a randomized controlled trial (RCT)
involving random assignment to treatment conditions under tightly controlled conditions. The objective of
this phase is to develop evidence about the treatment's efficacy (i.e., whether the innovation is more
efficacious than the standard treatment or an alternative counterfactual). Adverse effects are also
monitored. When the term clinical trial is used in the nursing literature, it most often is referring to a
phase III trial, which may also be called an efficacy study. Phase III RCTs often involve the use of a large
sample of subjects, sometimes selected from multiple sites to ensure that findings are not unique to a
single setting, and to increase the sample size and hence the power of the statistical tests.

* Phase IV of clinical trials involves studies of the effectiveness of an intervention in the general
population. The emphasis in these effectiveness studies is on the external validity of an intervention that
has shown promise of efficacy under controlled (but often artificial) conditions. Phase IV efforts often
examine the cost-effectiveness and clinical utility of the new treatment. In pharmaceutical research, phase
IV trials typically focus on post-approval safety surveillance and on long-term consequences over a larger
population and timescale than was possible during earlier phases.
The recent emphasis on evidence-based practice (EBP) has underscored the importance of research
evidence that can be used in real-world clinical situations. One problem with traditional phase III RCTs is
that, in an effort to enhance internal validity and the ability to infer causal pathways, the designs are so
tightly controlled that their relevance to real-life applications comes into question. Concern about this
situation has led to a call for practical (or pragmatic) clinical trials (i.e., trials for which the study design is
formulated based on information needed to make a decision). Pragmatic trials address practical questions
about the benefits and risks of an intervention—as well as its costs—as they would unfold in routine
clinical practice.

Evaluation Research
Evaluation research focuses on developing useful information about a program, practice, procedure, or
policy—information that decision-makers need on whether to adopt, modify, or abandon a practice or
program. Usually (but not always), the evaluation is of a new intervention.
The term evaluation research is most often used when researchers want to assess the effectiveness of a
complex program, rather than when they are evaluating a specific entity (e.g., alternative sterilizing
solutions). Evaluation researchers tend to evaluate a program, practice, or intervention that is embedded
in a political or organizational context. Evaluations often try to answer broader questions than simply
whether an intervention is more effective than care as usual—for example, they often seek ways to
improve the program (as in phase II of a clinical trial) or to learn how the program actually “works” in
practice.
Evaluations are undertaken to answer a variety of questions. Some questions involve the use of an
experimental (or quasi-experimental) design, but others do not. Because of the complexity of evaluations
and the programs on which they typically focus, many evaluations are mixed method studies using a
component design.
A process or implementation analysis is undertaken when a need exists for descriptive information about
the process by which a program gets implemented and how it actually functions. A process analysis is
typically designed to address such questions as the following: “Does the program operate the way its
designers intended?” “What are the strongest and weakest aspects of the program?” “What exactly is the
treatment, and how does it differ (if at all) from traditional practices?” “What were the barriers to
implementing the program successfully?” “How do staff and clients feel about the intervention?”
Many evaluations focus on whether a program or policy is meeting its objectives, and evaluation
researchers sometimes distinguish between an outcome analysis and an impact analysis. An outcome
analysis tends to be descriptive and does not use a rigorous experimental design. Such an analysis simply
documents the extent to which the goals of the program are attained—the extent to which positive
outcomes occur—without rigorous comparisons. Before-and-after designs without a control group are
especially common.
An impact analysis attempts to identify the net impacts of a program, that is, the impacts that can be
attributed to the program, over and above the effects of the counterfactual (e.g., usual care). Impact
analyses use an experimental or strong quasi-experimental design because the aim is to make a causal
inference about the benefits of the special program. Many nursing evaluations are impact analyses,
although they are not necessarily labeled as such.
In the current situation of spiraling health care costs, program evaluations may also include an economic
(cost) analysis to determine whether the benefits of the program outweigh the monetary costs.
Administrators and public policy officials make decisions about resource allocations for health services not
only on the basis of whether something “works,” but also on the basis of whether it is economically
viable. Cost analyses are typically done in conjunction with impact analyses (or phase III clinical trials),
that is, when researchers establish persuasive evidence regarding program efficacy.

SYSTEMATIC REVIEWS
Systematic reviews (sometimes called integrative reviews) may take the form of meta-analyses and meta-
syntheses.
Until fairly recently, the most common type of systematic review was narrative integration, using non-
statistical methods to synthesize research findings. Such reviews continue to be published in the nursing
literature. More recently, however, meta-analytic techniques that use a common metric for combining
study results statistically are being increasingly used to integrate quantitative evidence. Most reviews in
the Cochrane Collaboration, for example, are meta-analyses. Statistical integration, however, is not always
appropriate, as we shall see.
Qualitative researchers also are developing techniques to integrate findings across studies. Many terms
exist for such endeavors (e.g., meta-study, meta-method, meta-summary, meta-ethnography, qualitative
meta-analysis, formal grounded theory), but the one that appears to be emerging as the leading term
among nurse researchers is meta-synthesis.
META-ANALYSIS
Meta-analyses of randomized clinical trials (RCTs) are at the pinnacle of traditional evidence hierarchies.
The essence of a meta-analysis is that information from various studies is used to develop a common
metric, an effect size. Effect sizes are averaged across studies, yielding information about both the
existence of a relationship between variables in many studies and an estimate of its magnitude.
Advantages of Meta-Analyses
Meta-analysis offers a simple yet powerful advantage as a method of integration: objectivity. It is often
difficult to draw objective conclusions about a body of evidence using narrative methods when results are
disparate, as they often are. Narrative reviewers make subjective decisions about how much weight to
give findings from different studies, and thus different reviewers may come to different conclusions about
the state of the evidence in reviewing the same set of studies. Meta-analysts also make decisions—
sometimes based on personal preferences—but in a meta-analysis these decisions are explicit, and
readers of the report can evaluate the impact of the decisions. Moreover, the integration itself is objective
because it relies on statistical formulas. Readers of a meta-analysis can be confident that another analyst
using the same data set and making the same analytic decisions would come to the same conclusions.
Criteria for Using Meta-Analytic Techniques in a Systematic Review
In doing a synthesis project, reviewers need to decide whether it is appropriate to use statistical
integration. One basic criterion is that the research question being addressed or hypothesis being tested
across studies should be nearly identical. This means that the independent and the dependent variables,
and the study populations, are sufficiently similar to merit integration. The variables may be
operationalized differently, to be sure. A second criterion concerns whether there is a sufficient base of
knowledge for statistical integration. If there are only a few studies, or if all of the studies are weakly
designed and harbor extensive bias, it usually would not make sense to compute an “average” effect. One
final issue concerns the consistency of the evidence.
Steps in a Meta-Analysis
Problem Formulation
As with any scientific endeavor, a systematic review begins with a problem statement and a research
question or hypothesis. Data cannot be meaningfully collected and integrated until there is a clear sense
of what question is being addressed. As with a primary study, the reviewers should take care to develop a
problem statement and research questions that are clearly worded and specific. Key constructs should be
conceptually defined, and the definitions should indicate the boundaries of the inquiry. The definitions are
critical for deciding whether a primary study qualifies for the synthesis.
The Design of a Meta-Analysis
Meta-analysts, as do other researchers, make many decisions that affect the rigor and validity of the study
findings. Ideally, these decisions are made in a conscious, planned manner before the study gets
underway, and are communicated to readers of the review.
Sampling is a critical design issue. In a systematic review, the sample consists of the primary studies that
have addressed the research question. Reviewers must formulate exclusion or inclusion criteria for the
search, which typically encompass substantive, methodologic, and practical elements. Methodologically,
the criteria might specify that only studies that used a true experimental design will be included.
A related issue concerns the quality of the primary studies, a topic that has stirred some controversy.
Researchers sometimes use quality as a sampling criterion, either directly or indirectly. Screening out
studies of lower quality can occur indirectly if the meta-analyst excludes studies that did not use a
randomized design, or studies that were not published in a peer-reviewed journal. More directly, each
potential primary study can be rated for quality, and excluded if the quality score falls below a certain
threshold.
The Search for Data in the Literature
Before a search for primary studies begins, reviewers must decide whether the review will cover both
published and unpublished results. Some disagree about whether reviewers should limit their sample to
published studies, or should cast as wide a net as possible and include grey literature—that is, studies
with a more limited distribution, such as dissertations, unpublished reports, and so on. Some people
restrict their sample to reports in peer-reviewed journals, arguing that the peer review system is an
important, tried-and-true screen for findings worthy of consideration as evidence.
Meta-analysts can use various aggressive search strategies to locate grey literature, in addition to the
usual tactics used in a literature review. These include hand searching journals known to publish relevant
content, contacting key researchers in the field to see if they have done studies that have not (yet) been
published or if they know of such studies, and contacting funders of relevant research.
Evaluations of Study Quality
In systematic reviews, the evidence from primary studies needs to be evaluated to determine how much
confidence to place in the findings. Strong studies clearly should be given more weight than weaker ones
in coming to conclusions about a body of evidence. In meta-analyses, evaluations of study quality often
involve quantitative ratings of each study in terms of the strength of evidence it yields. Quality criteria
vary widely from instrument to instrument, and the result is that study quality can be rated quite
differently with two different assessment tools.
Some meta-analysts use an alternative or supplementary strategy of identifying individual methodologic
features that they judge to be of critical importance for the type of question being addressed in the
review: a component approach, as opposed to a scale approach.
Extraction and Encoding of Data for Analysis
The next step in a systematic review is to extract and record relevant information about the findings,
methods, and study characteristics. The goal of these tasks is to produce a data set amenable to statistical
analysis.
Basic source information must be recorded, including year of publication, country where data were
collected, and so on. In terms of methodologic information, the most critical element across all studies is
sample size. Other important attributes that should be recorded vary by study question. Examples of
features likely to be important include whether subjects were randomly assigned to treatments, whether
subjects, agents, or data collectors were blinded, and the response or attrition rates. Information about
the timing of the measurements (i.e., the period of follow-up) is also likely to be critical. Characteristics of
study participants must be encoded as well (e.g., the percentage of the sample that was female, the mean
age of participants). Finally, information about findings must be extracted. Reviewers must either calculate
effect sizes (discussed in the next section) or must enter sufficient statistical information that the
computer program can compute them.
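As a rough illustration of such a coding step (the study names and values below are invented, and the fields shown are only examples of what might be recorded), a simple coding sheet could be represented as follows:

    # Hypothetical coding sheet for a meta-analysis of a two-group intervention.
    # Each record captures source, methodologic, and outcome information for one study.
    coded_studies = [
        {"study": "Study A", "year": 2018, "randomized": True,
         "n_treat": 60, "mean_treat": 42.1, "sd_treat": 9.8,
         "n_ctrl": 58, "mean_ctrl": 46.7, "sd_ctrl": 10.2},
        {"study": "Study B", "year": 2020, "randomized": True,
         "n_treat": 35, "mean_treat": 40.3, "sd_treat": 8.5,
         "n_ctrl": 33, "mean_ctrl": 44.0, "sd_ctrl": 9.1},
    ]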
Calculation of Effects
Meta-analyses depend on the calculation of an index that encapsulates the relationship between the
independent and dependent variable in each study. Because effects are captured differently depending on
the level of measurement of variables, there is no single formula for calculating an effect size.
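One common index, offered here only as a hedged example (the appropriate index depends on how the variables were measured), is the standardized mean difference (Cohen's d) for studies that compare two group means; the numbers reuse the hypothetical "Study A" record above:

    import math

    def cohens_d(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
        """Standardized mean difference based on the pooled standard deviation."""
        pooled_sd = math.sqrt(((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                              / (n_treat + n_ctrl - 2))
        return (mean_treat - mean_ctrl) / pooled_sd

    # Hypothetical "Study A" values; a negative d favors the treatment if lower scores are better.
    d = cohens_d(42.1, 9.8, 60, 46.7, 10.2, 58)   # about -0.46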
Data Analysis
Meta-analysis is often described as a two-step analytic process. In the first step, a summary statistic that
captures an effect is computed for each study, as just described. In the second step, a pooled effect
estimate is computed as a weighted average of the individual effects. The bigger the weight given to any
study, the more that study will contribute to the weighted average. Thus, weights should reflect the
amount of information that each study provides. One widely used approach in meta-analysis is called the
inverse variance method, which involves using the standard error to calculate the weight. Larger studies,
which have smaller standard errors, are given greater weight than smaller ones.
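A minimal sketch of this two-step logic, assuming the effect sizes and standard errors below have already been computed from three hypothetical studies:

    # Illustrative effect sizes (d) and standard errors from three hypothetical studies.
    effects = [-0.46, -0.30, -0.55]
    std_errors = [0.19, 0.25, 0.21]

    # Inverse-variance weights: larger studies (smaller standard errors) carry more weight.
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    print(f"Fixed-effect pooled estimate: {pooled:.2f} (SE {pooled_se:.2f})")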
In formulating an analytic strategy, meta-analysts must make many decisions. In this brief overview, we
touch on a few of them. One concerns the heterogeneity of findings (i.e., differences from one study to
another in the magnitude and direction of the effect size). As discussed earlier, heterogeneity across
studies may rule out the possibility that a meta-analysis can be done. Unless it is obvious that effects are
consistent in magnitude and direction based on a subjective perusal, heterogeneity should be formally
tested, and meta-analysts should report the results of that test.
Heterogeneity affects not only whether a meta-analysis is appropriate, but also which of two statistical
models should be used in the analysis. Although this is too complex a topic for this book, suffice it to say
that when heterogeneity is fairly low, the researchers may use a fixed effects model. When study results
are more varied, it is usually better to use what is called a random effects model. Some argue that a
random effects model is almost always more tenable. One solution is to perform a sensitivity analysis,
which, in general, refers to an effort to test how sensitive the results of an analysis are to changes in the
way the analysis was done. In this case, it would involve using both statistical models to determine how
the results are affected. If the results differ, estimates from the random effects model would be preferred.
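A sketch of one such sensitivity analysis, under the assumption that the effects and variances below are illustrative values (more heterogeneous than in the previous sketch): the between-study variance (tau-squared) is estimated with the commonly described DerSimonian and Laird method, and the fixed- and random-effects estimates are then compared.

    # Illustrative effects and within-study variances (squared standard errors).
    effects = [-0.46, -0.05, -0.80]
    variances = [0.19 ** 2, 0.25 ** 2, 0.21 ** 2]

    fe_weights = [1 / v for v in variances]
    pooled_fe = sum(w * d for w, d in zip(fe_weights, effects)) / sum(fe_weights)

    # Cochran's Q quantifies heterogeneity around the fixed-effect estimate.
    Q = sum(w * (d - pooled_fe) ** 2 for w, d in zip(fe_weights, effects))
    df = len(effects) - 1
    C = sum(fe_weights) - sum(w ** 2 for w in fe_weights) / sum(fe_weights)
    tau_sq = max(0.0, (Q - df) / C)      # DerSimonian-Laird between-study variance

    # Random-effects weights add tau-squared to each study's variance before pooling.
    re_weights = [1 / (v + tau_sq) for v in variances]
    pooled_re = sum(w * d for w, d in zip(re_weights, effects)) / sum(re_weights)
    print(f"Fixed-effect: {pooled_fe:.2f}   Random-effects: {pooled_re:.2f}")

If the two estimates differ noticeably, the random-effects result would generally be preferred, as noted above.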
A random effects meta-analysis is intended primarily to address study-by-study variation that cannot be
explained. Many meta-analysts seek to understand the determinants of effect size heterogeneity through
formal analyses. Variation across studies could reflect systematic differences with regard to important
clinical or methodologic characteristics.
One strategy for exploring moderating effects on effect size is to perform subgroup analyses. A second
strategy is to undertake sensitivity analyses to determine whether the exclusion of lower-quality studies
changes the results, compared with analyses based only on the most rigorous studies. A mix of strategies,
together with appropriate sensitivity analyses, is probably the most prudent approach to dealing with
variation in study quality.
One final analytic issue concerns publication bias. Even a strong commitment to a comprehensive
search for reports on a given research question is unlikely to result in the identification of all studies.
Meta-synthesis
Meta-synthesis is not a literature review (i.e., not the collating or aggregation of research findings) nor is it
a concept analysis.

Three-category typology of meta-syntheses


1. Theory-building meta-syntheses are inquiries that extend the level of theory beyond what could be
achieved in individual investigations.
2. In theory explication meta-syntheses, researchers “flesh out” and re-conceptualize abstract concepts.
3. Descriptive meta-synthesis involves a comprehensive analysis of a phenomenon based on a synthesis of
qualitative findings; findings are not typically deconstructed and then reconstructed as they are in theory-
related inquiries.
Steps in a Meta-synthesis
Many of the steps in a meta-synthesis are similar to ones we described in connection with a meta-
analysis, and so many details will not be repeated here. However, we point out some distinctive issues
relating to qualitative integration that are relevant in the various steps.
In meta-synthesis, researchers begin with a research question or a focus of investigation, and a key issue
concerns the scope of the inquiry.
Design of a Meta-synthesis
Like a quantitative systematic review, a meta-synthesis requires considerable advance planning. Having a
team of at least two researchers to design and implement the study is often advantageous because of the
highly subjective nature of interpretive efforts. Just as in a primary study, the design of a qualitative meta-
synthesis should involve efforts to enhance integrity and rigor, and investigator triangulation is one such
strategy.
Like meta-analysts, meta-synthesists must also make upfront decisions about sampling, and they face the
same issue of deciding whether to include only findings from peer-reviewed journals in the analysis.
Search for Data in the Literature
It is generally more difficult to find qualitative than quantitative studies using mainstream approaches
such as searching electronic databases.
Evaluations of Study Quality
Formal evaluations of primary study quality are not as common in meta-synthesis as in meta-analysis, and
developing quality criteria about which there would be widespread agreement is likely to prove
challenging. One available instrument covers such aspects of a qualitative study as the sampling
procedure, data analysis, researcher credentials, and researcher reflexivity. The tool was designed to be
used to screen primary studies for inclusion in a meta-synthesis.
Extraction of Data for Analysis
Information about various features of the study needs to be abstracted and coded as part of the project.
Just as in quantitative integration, the meta-synthesist records features of the data source (e.g., year of
publication, country), characteristics of the sample (e.g., age, sex, number of participants), and
methodologic features (e.g., research tradition). Most important, of course, information about the study
findings must be extracted and recorded.
Data Analysis
Strategies for meta-synthesis diverge most markedly at the analysis stage.
Three approaches:
1. The Noblit and Hare Approach
Noblit and Hare's methods of integration have been highly influential among nurse researchers. Noblit and
Hare, who referred to their approach as meta-ethnography, argued that the integration should be
interpretive and not aggregative (i.e., that the synthesis should focus on constructing interpretations
rather than descriptions).
2. The Paterson, Thorne, Canam, and Jillings Approach
The method developed by Paterson and a team of Canadian colleagues involves three components: meta-
data analysis, meta-method, and meta-theory. These components often are conducted concurrently, and
the meta-synthesis results from the integration of findings from these three analytic components.
3. The Sandelowski and Barroso Approach
The strategies developed by Sandelowski and Barroso are likely to inspire meta-synthesis efforts in the
years ahead. In their multiyear methodologic project, they dichotomized integration studies based on
level of synthesis and interpretation. Reports are called summaries if the findings are descriptive synopses
of the qualitative data, usually with lists and frequencies of themes, without any conceptual reframing.
Syntheses, by contrast, are findings that are more interpretive and that involve conceptual or metaphoric
reframing.

SAMPLING PLAN
Quantitative and qualitative researchers have different approaches to sampling. Quantitative researchers
desire samples that will allow them to achieve statistical conclusion validity and to generalize their results.
They develop a sampling plan that specifies in advance how participants are to be selected and how many
to include. Qualitative researchers are interested in developing a rich, holistic understanding of a
phenomenon. They make sampling decisions during the course of the study based on informational and
theoretical needs, and typically do not develop a formal sampling plan in advance.

BASIC SAMPLING CONCEPTS

Populations
A population is the entire aggregation of cases in which a researcher is interested. A population may be
broadly defined, involving thousands of individuals, or may be narrowly specified to include only
hundreds.
Populations are not restricted to human subjects. Researchers (especially quantitative researchers) specify
the characteristics that delimit the study population through the eligibility criteria (or inclusion criteria).
Researchers establish criteria to determine whether a person qualifies as a member of the population.
Quantitative researchers sample from an accessible population in the hope of generalizing to a target
population. The target population is the entire population in which a researcher is interested. The
accessible population is composed of cases from the target population that are accessible to the
researcher as study participants.

Samples and Sampling


Sampling is the process of selecting a portion of the population to represent the entire population. A
sample is a subset of population elements. In nursing research, the elements (basic units) are usually
humans.
Information from samples can, however, lead to erroneous conclusions, and this is especially a concern in
quantitative studies. In quantitative studies, a key criterion of adequacy is a sample's representativeness.
A representative sample is one whose main characteristics closely approximate those of the population.
Unfortunately, there is no method for ensuring that a sample is representative. Some sampling plans are
less likely to result in biased samples than others, but there is never a guarantee of a representative
sample. Researchers operate under conditions in which error is possible, but quantitative researchers
strive to minimize or control those errors. Consumers must assess their success in having done so—their
success in minimizing sampling bias.
Sampling bias is the systematic overrepresentation or underrepresentation of some segment of the
population in terms of a characteristic relevant to the research question. Sampling bias is affected by
many things, including the homogeneity of the population. If the elements in a population were all
identical on the critical attribute, any sample would be as good as any other. Indeed, if the population
were completely homogeneous (i.e., exhibited no variability at all), a single element would be a sufficient
sample for drawing conclusions about the population. For many physical or physiologic attributes, it may
be safe to assume a reasonable degree of homogeneity. For example, the blood in a person's veins is
relatively homogeneous, and so a single blood sample chosen haphazardly from a patient is adequate for
clinical purposes. Most human attributes, however, are not homogeneous. Variables, after all, derive their
name from the fact that traits vary from one person to the next. Age, blood pressure, and stress level are
all attributes that reflect the heterogeneity of humans.

Strata
Populations consist of subpopulations, or strata. Strata are mutually exclusive segments of a population
based on a specific characteristic. For instance, a population consisting of all RNs could be divided into
two strata based on gender. Alternatively, we could specify three strata consisting of nurses younger than
30 years of age, nurses aged 30 to 45 years, and nurses 46 years or older. Strata are often used in sample
selection to enhance the sample's representativeness.

QUANTITATIVE SAMPLING DESIGN

The two main sampling design issues in quantitative studies are how the sample is selected and how many
elements are included. The two broad types of sampling designs in quantitative research are (1)
probability sampling and (2) non-probability sampling.

1. Non-probability Sampling
In non-probability sampling, researchers select elements by nonrandom methods. There is no way to
estimate the probability of including each element in a non-probability sample, and every element usually
does not have a chance for inclusion. Non-probability sampling is less likely than probability sampling to
produce representative samples, yet most research samples in nursing and other disciplines are
non-probability samples.
Four primary methods of non-probability sampling in quantitative studies are convenience, quota,
consecutive, and purposive.

a. Convenience Sampling
Convenience sampling entails using the most conveniently available people as participants. The problem
with convenience sampling is that available subjects might be atypical of the population, and so the price
of convenience is the risk of bias.
Snowball sampling (also called network sampling or chain sampling) is a variant of convenience sampling.
With this approach, early sample members are asked to refer other people who meet the eligibility
criteria. This method is most often used when the population consists of people with characteristics who
might be difficult to identify (e.g., people who are afraid of hospitals).
Convenience sampling is the weakest form of sampling. It is also the most commonly used sampling
method in many disciplines. In heterogeneous populations, there is no other sampling approach in which
the risk of sampling bias is greater.

b. Quota Sampling
In quota sampling, researchers identify population strata and determine how many participants are
needed from each stratum. By using information about population characteristics, researchers can ensure
that diverse segments are adequately represented in the sample. Despite its problems, however, quota
sampling is an important improvement over convenience sampling for quantitative studies. Quota
sampling is a relatively easy way to enhance the representativeness of a non-probability sample, and does
not require sophisticated skills or a lot of effort. Surprisingly, few researchers use this strategy.
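As a small sketch of the bookkeeping involved (the strata and percentages are invented for illustration), quotas can be derived from known population proportions and then filled by convenience recruitment:

    # Hypothetical population information: 70% of the accessible nurses are female, 30% male.
    population_proportions = {"female": 0.70, "male": 0.30}
    desired_sample_size = 100

    # Quota per stratum, proportional to the population make-up.
    quotas = {stratum: round(share * desired_sample_size)
              for stratum, share in population_proportions.items()}
    # -> {'female': 70, 'male': 30}; recruitment by convenience stops once each quota is filled.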

c. Consecutive Sampling
Consecutive sampling involves recruiting all of the people from an accessible population who meet the
eligibility criteria over a specific time interval, or for a specified sample size. Consecutive sampling is a far
better approach than sampling by convenience, especially if the sampling period is sufficiently long to deal
with potential biases that reflect seasonal or other time-related fluctuations. When all members of an
accessible population are invited to participate in a study over a fixed time period, the risk of bias is
greatly reduced. Consecutive sampling is often the best possible choice when there is “rolling enrollment”
into an accessible population.

d. Purposive Sampling
Purposive sampling or judgmental sampling is based on the belief that researchers' knowledge about the
population can be used to hand-pick sample members. Researchers might decide purposely to select
subjects who are judged to be typical of the population or particularly knowledgeable about the issues
under study. Sampling in this subjective manner, however, provides no external, objective method for
assessing the typicalness of the selected subjects. Nevertheless, this method can be used to advantage in
certain situations. For example, purposive sampling is often used when researchers want a sample of
experts. Purposive sampling is also used productively by qualitative researchers.

Evaluation of Non-probability Sampling


Non-probability samples are rarely representative of the population. When every element in the
population does not have a chance of being included in the sample, it is likely that some segment of it will
be systematically underrepresented. And, when there is sampling bias, there is always a chance that the
results could be misleading. Why, then, are non-probability samples used in most nursing studies? Clearly,
the advantages of these sampling designs lie in their convenience and economy. Probability sampling
requires skill and resources. There is often no option but to use a non-probability approach. Quantitative
researchers using non-probability samples must be cautious about the inferences and conclusions drawn
from the data, and you as a reader should be alert to the possibility of sampling bias.

2. Probability Sampling
Probability sampling involves the random selection of elements from a population. Random assignment
refers to the process of allocating subjects to different treatment conditions at random. A random
selection process is one in which each element in the population has an equal, independent chance of
being selected.
The four most commonly used probability sampling designs are simple random, stratified random, cluster,
and systematic sampling.
a. Simple Random Sampling
Simple random sampling is the most basic probability sampling design. Because more complex probability
sampling designs incorporate features of simple random sampling, the procedures involved are briefly
described here so you can understand what is involved.
Samples selected randomly in such a fashion are not subject to researcher biases. There is no guarantee
that the sample will be representative of the population, but random selection does guarantee that
differences between the sample and the population are purely a function of chance. The probability of
selecting a markedly deviant sample through random sampling is low, and this probability decreases as
the sample size increases.
Simple random sampling is a laborious process. Developing the sampling frame, enumerating all the
elements, and selecting the sample elements are time-consuming chores, particularly with a large
population.
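As a brief sketch (assuming an enumerated sampling frame is already available as a list; the frame and sample sizes are invented), the random draw itself can be done with Python's standard library instead of a table of random numbers:

    import random

    # Hypothetical sampling frame: 500 enumerated members of the accessible population.
    sampling_frame = [f"element_{i}" for i in range(1, 501)]

    # Draw a simple random sample of 50 elements without replacement.
    simple_random_sample = random.sample(sampling_frame, k=50)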
b. Stratified Random Sampling
In stratified random sampling, the population is first divided into two or more strata. As with quota
sampling, the aim of stratified sampling is to enhance representativeness. Stratified sampling designs
subdivide the population into subsets from which elements are selected at random. Stratification is often
based on such demographic attributes as age or gender.
Stratifying variables usually divide the population into unequal subpopulations. Researchers may sample
either proportionately (in relation to the size of the stratum) or disproportionately. By using stratified
random sampling, researchers can sharpen the representativeness of their samples. Furthermore, a
stratified sample requires even more work than simple random sampling because the sample must be
drawn from multiple enumerated listings.
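A minimal sketch of proportionate stratified random sampling, assuming the frame has already been divided into age strata (the strata and counts are invented): each stratum contributes to the sample in proportion to its share of the population.

    import random

    # Hypothetical frame already divided into three age strata.
    strata = {
        "under_30": [f"nurse_{i}" for i in range(1, 301)],     # 300 members
        "30_to_45": [f"nurse_{i}" for i in range(301, 801)],   # 500 members
        "46_plus":  [f"nurse_{i}" for i in range(801, 1001)],  # 200 members
    }
    total = sum(len(members) for members in strata.values())
    sample_size = 100

    # Proportionate allocation: draw at random from each stratum according to its size.
    stratified_sample = []
    for members in strata.values():
        n_from_stratum = round(sample_size * len(members) / total)
        stratified_sample.extend(random.sample(members, k=n_from_stratum))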
c. Cluster Sampling
In cluster sampling or multistage sampling, there is a successive random sampling of units. The first units
sampled are large groupings, or clusters. For a given number of cases, cluster sampling tends to be less accurate than
simple or stratified random sampling. Despite this disadvantage, cluster sampling is more economical and
practical than other types of probability sampling, particularly when the population is large and widely
dispersed.
d. Systematic Sampling
Systematic sampling involves the selection of every kth case from a list, such as every 10th person on a
patient list. Systematic sampling designs can be applied in such a way that an essentially random sample is
drawn. First, the size of the population is divided by the size of the desired sample to obtain the sampling
interval width. The sampling interval is the standard distance between the selected elements. For
instance, if we wanted a sample of 50 from a population of 5,000, our sampling interval would be 100
(5,000/50 = 100). In other words, every 100th case on the sampling frame would be sampled. Next, the
first case would be selected randomly (e.g., by using a table of random numbers). If the random number
chosen were 73, the people corresponding to numbers 73, 173, 273, and so forth would be included in the
sample.
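The same arithmetic can be sketched directly (using the hypothetical frame of 5,000 cases and a desired sample of 50 described above): compute the interval, choose a random start within the first interval, and then select every kth case.

    import random

    population_size = 5_000
    sample_size = 50
    interval = population_size // sample_size        # sampling interval k = 100

    start = random.randint(1, interval)              # random start, e.g., 73
    selected_positions = list(range(start, population_size + 1, interval))
    # With start = 73 this yields positions 73, 173, 273, ... (50 cases in all).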

Evaluation of Probability Sampling


Probability sampling is the only viable method of obtaining representative samples. If all the elements in
the population have an equal probability of being selected, then the resulting sample is likely to do a good
job of representing the population. A further advantage is that probability sampling allows researchers to
estimate the magnitude of sampling error. Sampling error refers to differences between population values
(e.g., the average age of the population) and sample values (e.g., the average age of the sample).
Probability sampling is the preferred and most respected method of obtaining sample elements, but it is
often impractical.

Sample Size in Quantitative Studies


Sample size, the number of subjects in a sample, is a major issue in conducting and evaluating quantitative
research. No simple equation can determine how large a sample is needed, but quantitative researchers
often strive for the largest sample possible. The larger the sample, the more representative it is likely to
be. Every time researchers calculate a percentage or an average based on sample data, the purpose is to
estimate a population value. The larger the sample, the smaller the sampling error.
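This relationship can be made concrete with the familiar formula for the standard error of a sample mean (offered here as general statistical background rather than material from this module): for a population standard deviation \sigma and sample size n,

    SE_{\bar{x}} = \frac{\sigma}{\sqrt{n}}

so, for example, quadrupling the sample size cuts the expected sampling error in half.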
Statistical conclusion validity is threatened when samples are too small, and researchers run the risk of
gathering data that will not support their hypotheses—even when those hypotheses are correct. Large
samples are no assurance of accuracy, however. With non-probability sampling, even a large sample can
harbor extensive bias. A large sample cannot correct for a faulty sampling design; nevertheless, a large
non-probability sample is preferable to a small one. When critiquing quantitative studies, you must assess
both the sample size and the sample selection method to judge how representative the sample likely was.

QUALITATIVE SAMPLING DESIGN

Types of Qualitative Sampling


Qualitative researchers usually eschew probability samples. A random sample is not the best method of
selecting people who will make good informants, that is, people who are knowledgeable, articulate,
reflective, and willing to talk at length with researchers. Various non-probability sampling designs have
been used by qualitative researchers.
a. Convenience and Snowball Sampling
Qualitative researchers often begin with a convenience sample, which is sometimes referred to as a
volunteer sample. Volunteer samples are especially likely to be used when researchers need to have
potential participants come forward and identify themselves. For example, if we wanted to study the
experiences of people with frequent nightmares, we might have difficulty readily identifying potential
participants. In such a situation, we might recruit sample members by placing a notice on a bulletin board,
in a newspaper, or on the Internet, requesting people with nightmares to contact us. In this situation, we
would be less interested in obtaining a representative sample of people with nightmares, than in
obtaining a diverse group representing various experiences with nightmares.
Sampling by convenience is often efficient, but it is not usually a preferred sampling approach, even in
qualitative studies. The key aim in qualitative studies is to extract the greatest possible information from
the small number of informants in the sample, and a convenience sample may not provide the most
information-rich sources. However, a convenience sample may be an economical way to begin the sampling
process.
Qualitative researchers also use snowball sampling, asking early informants to make referrals for other
study participants. This method is sometimes referred to as nominated sampling because it relies on the
nominations of others already in the sample. A weakness of this approach is that the eventual sample
might be restricted to a rather small network of acquaintances. Moreover, the quality of the referrals may
be affected by whether the referring sample member trusted the researcher and truly wanted to
cooperate.
b. Purposive Sampling
Qualitative sampling may begin with volunteer informants and may be supplemented with new
participants through snowballing, but many qualitative studies eventually evolve to a purposive (or
purposeful) sampling strategy—to a strategy in which researchers deliberately choose the cases or types
of cases that will best contribute to the information needs of the study. That is, regardless of how initial
participants are selected, qualitative researchers often strive to select sample members purposefully
based on the information needs emerging from the early findings. Who to sample next depends on who
has been sampled already.
Within purposive sampling, there are several strategies that qualitative researchers have adopted to meet
the conceptual needs of their research:
* Maximum variation sampling involves deliberately selecting cases with a wide range of variation on
dimensions of interest.
* Extreme (deviant) case sampling provides opportunities for learning from the most unusual and extreme
informants (e.g., outstanding successes and notable failures).
* Typical case sampling involves the selection of participants who illustrate or highlight what is typical or
average.
* Criterion sampling involves studying cases that meet a predetermined criterion of importance.
Maximum variation sampling is often the sampling mode of choice in qualitative research because it is
useful in documenting the scope of a phenomenon and in identifying important patterns that cut across
variations. Other strategies can also be used advantageously, however, depending on the nature of the
research question.
A strategy of sampling confirming and disconfirming cases is another purposive strategy that is often used
toward the end of data collection in qualitative studies. As researchers note trends and patterns in the
data, emerging conceptualizations may need to be checked. Confirming cases are additional cases that fit
researchers' conceptualizations and strengthen credibility. Disconfirming cases are new cases that do not
fit and serve to challenge researchers' interpretations. These “negative” cases may offer new insights
about how the original conceptualization needs to be revised or expanded.
c. Theore cal Sampling
Theore cal sampling is a method of sampling that is most o en used in grounded theory studies.
Theore cal sampling involves decisions about what data to collect next and where to nd those data to
develop an emerging theory op mally. The basic ques on in theore cal sampling is: “What groups or

subgroups should the researcher turn to next?" Groups are chosen as they are needed for their relevance in furthering the emerging conceptualization. These groups are not chosen before the research begins but only as they are needed for their theoretical relevance in developing further emerging categories. Theoretical sampling is not the same as purposeful sampling. The objective of theoretical sampling is to discover categories and their properties and to offer new insights about interrelationships that occur in the substantive theory.
Sample Size in Qualitative Research
There are no fixed rules for sample size in qualitative research; the sample size is usually determined based on informational needs. Hence, a guiding principle in sampling is data saturation, that is, sampling to the point at which no new information is obtained and redundancy is achieved. The number of participants needed to reach saturation depends on a number of factors. For example, the broader the scope of the research question, the more participants will likely be needed. Data quality can also affect sample size. If participants are good informants who are able to reflect on their experiences and communicate effectively, saturation can be achieved with a relatively small sample. Also, if longitudinal data are collected, fewer participants may be needed, because each will provide a greater amount of information. The type of sampling strategy may also be relevant. For example, a larger sample is likely to be needed with maximum variation sampling than with typical case sampling. Sample size also depends on the type of qualitative inquiry, as discussed next.
Sampling in the Three Main Qualitative Traditions
There are similarities among the various qualitative traditions with regard to sampling: samples are usually small, probability sampling is not used, and final sampling decisions usually take place in the field during data collection.
Sampling in Ethnography
Ethnographers may begin by initially adopting a "big net" approach, that is, mingling with and having conversations with as many members of the culture under study as possible. Although they may converse with many people (usually 25 to 50), ethnographers often rely heavily on a smaller number of key informants, who are highly knowledgeable about the culture and who develop special, ongoing relationships with the researcher. These key informants are often the researcher's main link to the "inside."
Key informants are chosen purposively, guided by the ethnographer's informed judgments, although sampling may become more theoretical as the study progresses. Developing a pool of potential key informants often depends on the ethnographer's ability to construct a relevant framework. For example, an ethnographer might decide to seek out different types of key informants based on their roles (e.g., health care practitioners, advocates). Once a pool of potential key informants is developed, the key considerations for final selection are their level of knowledge about the culture and how willing they are to collaborate with the ethnographer in revealing and interpreting the culture.
Sampling in ethnography typically involves more than selecting informants. To understand a culture, ethnographers have to decide not only whom to sample, but what to sample as well. For example, ethnographers make decisions about observing events and activities, about examining records and artifacts, and about exploring places that provide clues about the culture. Key informants can play an important role in helping ethnographers decide what to sample.
Sampling in Phenomenological Studies
Phenomenologists tend to rely on very small samples of participants, typically 10 or fewer. There is one guiding principle in selecting the sample for a phenomenological study: all participants must have experienced the phenomenon and must be able to articulate what it is like to have lived that experience. Although phenomenological researchers seek participants who have had the targeted experiences, they also want to explore the diversity of individual experiences. Thus, they may specifically look for people with demographic or other differences who have shared a common experience.
Interpretive phenomenologists may, in addition to sampling people, sample artistic or literary sources. Experiential descriptions of the phenomenon may be selected from a wide array of literature, such as

poetry, novels, biographies, autobiographies, diaries, and journals. These sources can help increase phenomenologists' insight into the phenomena under study. Art, including paintings, sculpture, film, photographs, and music, is viewed as another source of lived experience. Each artistic medium is viewed as having its own specific language or way of expressing the experience of the phenomenon.
Sampling in Grounded Theory Studies
Grounded theory research is typically done with samples of about 20 to 40 people, using theoretical sampling. The goal in a grounded theory study is to select informants who can best contribute to the evolving theory. Sampling, data collection, data analysis, and theory construction occur concurrently, so study participants are selected serially and contingently (i.e., contingent on the emerging conceptualization). Sampling might evolve as follows:
The researcher begins with a general notion of where and with whom to start. The first few cases may be solicited purposively, by convenience, or through snowballing.
In the early part of the study, a strategy such as maximum variation sampling might be used to gain insights into the range and complexity of the phenomenon under study.
The sample is adjusted in an ongoing fashion. Emerging conceptualizations help to inform the sampling process.
Sampling continues until saturation is achieved.
Final sampling often includes a search for confirming and disconfirming cases to test, refine, and strengthen the theory.

MEASUREMENT
An understanding of measurement principles is crucial in the data-collection phase of a study. Research variables must be operationally defined. As stated elsewhere, operational definitions indicate how variables will be observed or measured. An operational definition should not be confused with a conceptual definition, which is a dictionary-like definition of the abstract idea being studied by the researcher.
Measurement is the process of assigning numbers to variables. Ways to assign these numbers include counting, ranking, and comparing objects or events. Measurement, as used in research, implies the quantification of information; that is, numbers are assigned to the data. Some qualitative studies gather data in narrative form, and numbers are not associated with these types of data. Thus, the narrative data collected in qualitative studies are not included in the concept of measurement as it is discussed here. However, you should know that if qualitative data were summarized and placed into categories, they would then fit the criteria for measurement. In the classic sense, measurement implies that some kind of comparison is made between pieces of information. Numbers are the means of comparing this information.
LEVEL OF MEASUREMENT
The types of mathematical calculations that can be made with data depend on the level of measurement of the data. The terms level of measurement and measurement scale are frequently used interchangeably. Four levels of measurement, or measurement scales, have been identified: nominal, ordinal, interval, and ratio.
a. NOMINAL
At the nominal level of measurement, objects or events are named or categorized. The categories must be distinct from each other (mutually exclusive categories) and must include all of the possible ways of categorizing the data (exhaustive categories). There may be only two categories, or there may be many. Numbers are obtained for this type of data by counting the frequency or percentage of objects or events in each of the categories.
Examples of nominal data are gender, religious affiliation, marital status, and political party membership. Some types of nominal data may appear to contain what we call real numbers. The nominal level of measurement is considered the lowest, or least rigorous, of the measurement levels.
b. ORDINAL

Data that can be rank ordered as well as placed into categories are considered to be at the ordinal level of measurement. The exact differences between the ranks cannot be specified with this type of data. The numbers obtained from this measurement process indicate the order rather than the exact quantity of the variables. For example, anxiety levels of people in a therapy group might be categorized as mild, moderate, and severe. It would be appropriate to conclude that individuals with severe anxiety are more anxious than individuals with moderate anxiety. In turn, those with moderate anxiety could be considered more anxious than group members with mild anxiety. You could not, however, determine the exact difference in anxiety levels of any individual within each of the categories. Frequency distributions and percentages are used with this type of data, as well as some statistical tests.
c. INTERVAL
Interval data consist of real numbers. The interval level of measurement concerns data that not only can be placed in categories and ranked, but for which the distance between the ranks can also be specified. The categories in interval data are the actual numbers on the scale, such as on a thermometer. If body temperature were being measured, a reading of 37°C might be one category, 37.2°C might be another, and 37.4°C might constitute a third. The researcher would be correct in saying that there is a 0.2°C difference between the first and second categories, and between the second and third categories. The researcher could even go one step further and find the average temperature reading.
d. RATIO
Data collected at the ratio level of measurement are considered the highest or most precise level of data. The ratio level of measurement includes data that can be categorized and ranked; in addition, the distance between ranks can be specified, and what is referred to as a true or natural zero point can be identified. The zero point on a ratio scale means there is a total absence of the quantity being measured. The amount of money in your bank account could be considered ratio data because it is possible for it to be zero. Similarly, if a researcher wanted to determine the number of pain medication requests made by patients, it would be possible for some patients' requests to fall at the natural zero point because they request no pain medications. This type of data would be considered ratio data.

CONVERTING DATA TO A LOWER LEVEL OF MEASUREMENT

Data can always be converted from one level to a lower level of measurement, but not to a higher level. Interval and ratio data can be converted to ordinal or nominal data, and ordinal data can be converted to nominal data. For example, the number of requests by patients for pain medication could be converted to ordinal data. Requests could be categorized as follows: more than 10 requests per day, 5-10 requests per day, and 0-4 requests per day. This would be an instance of converting ratio data to ordinal data.
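To make the conversion concrete for students who will later code their own data, here is a minimal sketch in Python (used purely for illustration; the cut-points and category labels simply mirror the example above and are not fixed rules):

def to_ordinal(requests_per_day):
    """Collapse a ratio-level count into an ordinal category."""
    if requests_per_day <= 4:
        return "0-4 per day"
    elif requests_per_day <= 10:
        return "5-10 per day"
    return "more than 10 per day"

daily_requests = [0, 3, 7, 12, 5]                          # ratio-level counts (true zero point)
ordinal_categories = [to_ordinal(r) for r in daily_requests]
print(ordinal_categories)

Notice that once the counts are collapsed into categories, only ordinal-level analyses remain appropriate; the conversion should therefore be a deliberate decision, not an accident of data coding.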

DETERMINING THE APPROPRIATE LEVEL OF MEASUREMENT

Now that you are familiar with the levels of measurement, you may wonder how to determine which level of measurement to use in a study. If the researcher is very concerned about the precision of the data, the interval or ratio level of measurement should be selected when possible. If ranked or categorized data will be sufficient to answer the research questions or test the research hypotheses, ordinal data may be used. Finally, if categories of data are all that is called for, nominal data will be appropriate.
If the researcher were trying to determine the differences in the number of complications experienced by patients with diabetes who have varying blood glucose levels, accuracy would be very important. The two categories of elevated and nonelevated blood glucose levels (nominal data) would not be precise enough for making comparisons among the patients. The operational definition of the variable will determine the level of data that will be gathered.
Some variables, by their very nature, can be measured at only one level. For example, gender is typically measured only at the nominal level, with each person placed into a single category. The main considerations in determining the level of measurement for data are the level of measurement appropriate for the type of

data that are being sought, and the degree of precision desired when it is possible to consider the data at more than one level of measurement.

DATA COLLECTION PROCESS

There are five important questions to ask when the researcher is in the process of collecting data: Who? When? Where? What? How? Use the acronym WWWWH.
Who will collect the data? If the researcher is going to collect all of the data, this question is easy to answer. However, scientific investigations frequently involve a team of researchers, so a decision will need to be made about who will collect the data. People outside the research team may also be used in the data-collection phase; sometimes data collectors are paid for their services. Any time more than one person is involved, assurances must be made that the data are being gathered in the same manner. Training will be needed for the data collectors, and checks should be made on the reliability of the collected data.
When will the data be collected? A determination will need to be made about the month, day, and sometimes even the hour of data collection. Also, how long will data collection take? Frequently, the only way to answer this question is through a trial run of the procedure by the researcher. If questionnaires will be used, they should be pretested with people similar to the potential research participants to determine how long the instrument takes to complete. The decision may be made to revise the instrument if it seems to take too long. Unfortunately, data collection usually takes longer than envisioned.
Where will the data be collected? The setting for data collection must be carefully determined, and optimum conditions should be sought. Having participants fill out questionnaires in the middle of a hallway while leaning against a wall would definitely not provide an optimum setting. Sometimes it is difficult to decide on the setting. If questionnaires are being used, a researcher might ask respondents to complete the questionnaire while the researcher remains in the same immediate or general area. This procedure helps ensure return of the questionnaires. However, if the participants happen to be tired or the room is too hot or too cold, the answers that are provided may not be valid. If respondents are allowed to complete the questionnaires at leisure, their answers may be more accurate; a disadvantage of this procedure may be a reduction in the return rate of the questionnaires.
What data will be collected? This question calls for a decision about the type of data being sought. For example, is the study designed to measure knowledge, attitudes, or behaviors? The type of data needed to answer the research questions or to test the research hypotheses should be the main consideration in data collection. If the researcher is concerned with the way crises affect people, the "what" of data collection becomes persons' behaviors or responses in crises.
How will the data be collected? Some type of research instrument will be needed to gather the data. This can vary from a self-report questionnaire to the most sophisticated of physiological instruments. Choosing a data-collection instrument is a major decision that should be made only after careful consideration of the possible alternatives.

DATA COLLECTION INSTRUMENT

Research instruments, also called research tools, are the devices used to collect data. The instrument facilitates the observation and measurement of the variables of interest. The type of instrument used in a study is determined by the data-collection method selected. If physiological data are sought, some type of physiological instrument will be needed. If observational data are needed to measure the variable of interest, some type of observational schedule or checklist will be called for. One area of the research over which the investigator has a great deal of control is the choice of the data-collection instrument. Great care should be taken to select the most appropriate instruments.

A. USE OF EXISTING INSTRUMENTS

While conducting a review of the literature on the topic of interest, a researcher may discover that an instrument is already available to measure the research variables. The use of an already tested instrument helps connect the present study with the existing body of knowledge on the variables. Of course, the instrument selected must be appropriate for measuring the study variables.
Many research instruments are available to nurse researchers. Some of the best sources are published compilations of instruments. These compilations are particularly useful because they contain discussions of the instruments, such as the reliability and validity of the tools. In some cases, the instrument is printed in its entirety in these sources. If not, information is provided about where a copy of the tool can be obtained.
The oldest and most well-known sources of research instruments are the Mental Measurements Yearbooks (MMYs). There are currently 19 volumes; the first was published in 1938, and the most recent in 2014. To be reviewed in the MMY, a test must be commercially available, be published in the English language, and be new, revised, or widely used since it last appeared in the MMY series.
Many existing instruments are copyrighted. The copyright holder must be contacted to obtain permission to use such an instrument. Sometimes this permission is given without cost; at other times the researcher has to pay for permission to use the instrument or purchase copies of the tool. Instruments developed in research projects supported by public funding generally remain in the public domain, and investigators have free access to these types of instruments.
If an existing instrument will be used, it may be desirable to contact the developer of the instrument to obtain information on its use in past research. This information is usually provided freely. Tool developers are generally pleased when other researchers want to use their creations. Frequently, the only request that will be made is that a copy of the study results and the data, particularly data on the reliability and validity of the instrument, be forwarded to the person who developed the instrument.

B. DEVELOPING AN INSTRUMENT
If no instrument can be found that is appropriate for a particular study, the researcher is faced with developing a new instrument. It may also be possible to revise an existing instrument, but caution must be exercised when this approach is used. If any items are altered or deleted, or new items are added to an existing instrument, the reliability and validity of the tool might be altered, and new reliability and validity testing will need to be conducted. Permission to revise the instrument will also have to be obtained from the developer of the tool.
The development of a completely new instrument is a demanding task. Volumes of books have been written concerning tool development; consult some of these sources for further information on this subject.

C. PILOT STUDIES
One of the primary reasons a pilot study is conducted is to pretest a newly designed instrument. Whenever a new instrument is being used in a study, or a preexisting instrument is being used with people who have different characteristics from those for whom the instrument was originally developed, a pilot study should be conducted.
A pilot study is a small-scale trial run of the actual research project. A group of individuals similar to the proposed study subjects should be tested in conditions similar to those that will be used in the actual study. No set number of persons is needed for a pilot study; factors such as time, cost, and the availability of persons similar to the study subjects help determine the size of the pilot group. The readability, accuracy, and comprehensibility of the questionnaire or scale items can be assessed. Pilot studies can also be used to determine whether the instruments are valid measurements, measuring what they are intended to measure, and whether they reliably measure the intended concept in populations different from those in which they were developed.

CRITERIA FOR SELECTION OF A DATA COLLECTION INSTRUMENT

Several criteria must be considered when deciding on a data-collection instrument; these include the practicality, reliability, and validity of the instrument.
A. Practicality of the Instrument
Before the researcher examines the reliability and validity of an instrument, questions should be asked about the practicality of the tool for the particular study being planned. The practicality of an instrument concerns its cost and appropriateness for the study population. How much will the instrument cost? How long will it take to administer? Will the population have the physical and mental stamina to complete the instrument? Are special motor skills or language abilities required of participants? Does the researcher require special training to administer or score the instrument? If so, is this training available? Is someone available who is qualified to analyze the data? These are very important questions; the researcher must attend to the practicality of the instrument before considering its reliability and validity.
B. Reliability of the Instrument
The researcher is always interested in collecting data that are reliable. The reliability of an instrument concerns its consistency and stability. If you are using a thermometer to measure body temperature, you would expect it to provide the same reading each time it is placed in a constant-temperature water bath.
Regardless of the type of research, the reliability of the study instruments is always of concern. Reliability needs to be determined whether the instrument is a mechanical device, a written questionnaire, or a human observer. The degree of reliability is usually determined by the use of correlational procedures. A correlation coefficient is computed between two sets of scores or between the ratings of two judges: the higher the correlation coefficient, the more reliable the instrument or the ratings of the judges. Although correlation coefficients can range between -1.00 and +1.00, correlation coefficients computed to test the reliability of an instrument are expected to be positive. It is risky to use an instrument with a reliability lower than .70. Researchers have been cautioned to check for reliability as a routine step in all studies that involve observational tools, self-report measures, or knowledge tests because of their susceptibility to measurement error.
Correlation coefficients are frequently used to determine the reliability of an instrument. However, when observers or raters are used in a study, the percentage or rate of agreement may also be used to determine the reliability of their observations or ratings.
In general, the more items an instrument contains, the more reliable it will be. The likelihood of coming closer to obtaining a true measurement increases as the sample of items used to measure a variable increases. If a test becomes too long, however, subjects may get tired or bored.
Be cautious about the reliability of instruments. Reliability is not a property of the instrument that, once established, remains forever. Reliability must continually be assessed as the instrument is used with different subjects and under different environmental conditions. An instrument to measure patient autonomy might be highly reliable when administered to patients in their hospital rooms, but very unreliable when administered to the same patients while they are lying on a stretcher outside the operating room waiting for surgery.
Researchers should choose the most appropriate type of reliability for their particular studies. Three different types of reliability are discussed here: stability, equivalence, and internal consistency.
a. STABILITY RELIABILITY
The stability reliability of an instrument refers to its consistency over time. A physiological instrument, such as a thermometer, should be very stable and accurate. If a thermometer were to be used in a study, it would need to be checked for reliability before the study began and probably again during the study (test-retest reliability).
Questionnaires can also be checked for their stability. A questionnaire might be administered to a group of people and, after a time, administered again to the same people. If subjects' responses were almost identical both times, the instrument would be determined to have high test-retest

reliability. If the scores were perfectly correlated, the correlation coefficient (coefficient of stability) would be 1.00. The interval between the two testing periods may vary from a few days to several months or even longer. This period is a very important consideration when trying to determine the stability of an instrument. The period should be long enough for the subjects to forget their original answers on the questionnaire, but not so long that real changes may have occurred in the subjects' responses.
If you were interested in developing a test to measure a personality trait, such as assertiveness, you might expect stability of responses. But because there has been a great deal of emphasis on assertiveness training in recent years, subjects might not score the same on an assertiveness test if the period between administrations is more than a few days. Many nursing studies are concerned with attitudes and behaviors that are not stable, and changes would be expected across two administrations of the same questionnaire. Stability over time (test-retest reliability), therefore, may not be the appropriate type of reliability for a research instrument when you anticipate a change among the respondents over time.
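As a minimal sketch (the scores below are invented for illustration), the coefficient of stability is simply the correlation between the two administrations of the same questionnaire to the same people:

from scipy.stats import pearsonr

time1_scores = [32, 45, 28, 50, 41, 37, 44, 30]   # first administration
time2_scores = [34, 44, 27, 52, 40, 36, 45, 31]   # second administration, same subjects

coefficient_of_stability, _ = pearsonr(time1_scores, time2_scores)
print(f"Test-retest (stability) reliability: {coefficient_of_stability:.2f}")   # values near 1.00 suggest a stable instrument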
b. EQUIVALENCE RELIABILITY
Equivalence reliability concerns the degree to which two different forms of an instrument obtain the same results, or the degree to which two or more observers using a single instrument obtain the same results. Alternate forms reliability and parallel forms reliability are the terms used when two forms of the same instrument are compared. Interrater reliability and interobserver reliability are the terms applied to comparisons of raters or observers using the same instrument; this type of reliability is determined by the degree to which two or more independent raters or observers are in agreement.
When two forms of a test are used, both forms should contain the same number of items, have the same level of difficulty, and so forth. One form of the test is administered to a group of people; the other form is administered either at the same time or shortly thereafter to the same people. A correlation coefficient (coefficient of equivalence) is obtained between the two forms. The higher the correlation, the more confidence the researcher can have that the two forms of the test are gathering the same information. Whenever two forms of an instrument can be developed, this is the preferred means of assessing reliability. However, researchers may find it difficult to develop even one form of an instrument, much less two.
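When the two "forms" are actually two raters observing the same subjects, equivalence is often expressed as percent agreement or as Cohen's kappa, which corrects agreement for chance. A minimal sketch with invented ratings:

from sklearn.metrics import cohen_kappa_score

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)       # simple rate of agreement
kappa = cohen_kappa_score(rater_a, rater_b)         # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")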
c. INTERNAL CONSISTENCY RELIABILITY
Internal consistency reliability, or scale homogeneity, addresses the extent to which all items on an instrument measure the same variable. This type of reliability is appropriate only when the instrument is examining one concept or construct at a time, and it is concerned with the sample of items used to measure the variable of interest.
If an instrument is supposed to measure depression, for example, all of the items on the instrument must consistently measure depression. If some items measure guilt, the instrument is not an internally consistent tool. This type of reliability is of concern to nurse researchers because of the emphasis on measuring concepts such as assertiveness, autonomy, and self-esteem. However, in the case of an instrument that contains subscales, such as an instrument that measures anxiety in terms of both state anxiety and trait anxiety, you could calculate the internal consistency of each subscale.
Before computers, internal consistency was tedious to calculate. Today, it is a simple process, and accurate split-half procedures have been developed. A common type of internal consistency procedure used today is the coefficient alpha (α), or Cronbach's alpha, which provides an estimate of the reliability of all possible ways of dividing an instrument into two halves.
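The arithmetic behind coefficient alpha can be written in a few lines. The sketch below (with invented item scores) treats each row as a respondent and each column as a scale item:

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: respondents x items array of numeric scores."""
    k = item_scores.shape[1]                                  # number of items
    item_variances = item_scores.var(axis=0, ddof=1).sum()    # sum of the item variances
    total_variance = item_scores.sum(axis=1).var(ddof=1)      # variance of the total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")      # .70 or higher is commonly treated as acceptable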
C. VALIDITY OF THE INSTRUMENT
The validity of an instrument concerns its ability to gather the data that it is intended to gather. The content of the instrument is of prime importance in validity testing. If an instrument is expected to measure assertiveness, does it, in fact, measure assertiveness? It is not difficult to determine that validity is the most important characteristic of an instrument.
The greater the validity of an instrument, the more confidence you can have that the instrument will obtain data that will answer the research questions or test the research hypotheses. Just as the reliability

of an instrument does not remain constant, neither does an instrument necessarily retain its level of validity when used with other research participants or in other environmental settings. An instrument might accurately measure assertiveness in a group of participants from one cultural group. The same instrument might actually measure authoritarianism in another cultural group because assertiveness, to this group, means that a person is trying to act as an authority figure.
When attempting to establish the reliability of an instrument, all of the procedures are based on data obtained by using the instrument with a group of respondents. Conversely, some of the procedures for establishing the validity of an instrument are not based on administration of the instrument to a group of respondents. Validity may be established through the use of a panel of experts or an examination of the existing literature on the topic. Statistical procedures, therefore, may not always be used in trying to establish validity as they are when trying to establish reliability. When statistical procedures are used in trying to establish validity, they generally are correlational procedures.
Four broad categories of validity are considered here: face, content, criterion, and construct. Face and content validity are concerned only with the instrument that is under consideration. Criterion and construct validity are concerned with how well the instrument under consideration compares with other measures of the variable of interest.
a. FACE VALIDITY
An instrument is said to have face validity when a preliminary examination shows that it is measuring what it is supposed to measure. In other words, on the surface or the face of the instrument, it appears to be an adequate means of obtaining the data needed for the research project. The face validity of an instrument can be examined through the use of experts in the content area or through the use of individuals who have characteristics similar to those of the potential research participants. Because of the subjective nature of face validity, this type of validity is rarely used alone.
b. CONTENT VALIDITY
Content validity is concerned with the scope or range of items used to measure the variable. In other words, are the number and type of items adequate to measure the concept or construct of interest? Is there an adequate sampling of all the possible items that could be used to secure the desired data? There are several methods of evaluating the content validity of an instrument.
The first method is accomplished by comparing the content of the instrument with material available in the literature on the topic. A determination can then be made of the adequacy of the measurement tool in light of existing knowledge in the content area. For example, if a new instrument were being developed to measure the empathic levels of nurses in hospice settings, the researcher would need to be familiar with the literature on both empathy and the hospice setting.
A second way to examine the content validity of an instrument is through the use of a panel of experts, a group of people who have expertise in a given subject area. These experts are given copies of the instrument and the purpose and objectives of the study. They then evaluate the instrument, usually individually rather than in a group. Comparisons are made between these evaluations, and the researcher then determines whether additions, deletions, or other changes need to be made.
A third method is used when knowledge tests are being developed. The researcher develops a test blueprint designed around the objectives for the content being taught and the level of knowledge that is expected (e.g., retention, recall, and synthesis).
The actual degree of content validity is never established; an instrument is said to possess some degree of validity that can only be estimated.
c. CRITERION VALIDITY
Criterion validity is concerned with the extent to which an instrument corresponds to, or is correlated with, some criterion measure of the variable of interest. Criterion validity assesses the ability of an instrument to determine research participants' responses at the present time or to predict participants' responses in the future. These two types of criterion validity are called concurrent and predictive validity, respectively.
c.1. Concurrent validity
36

NCM 111-Nursing Research 1 | Prepared by: FRITZIE L. AZOGUE RN,RM,MAN/ DR. ROSALINDA A. ABUY –
SY 2021-2022

fi
ti
tt
ti

ti
ti

ti

ti

ti
ti
ti
ti
ti
ti
ti
ti

ti
ti
ti
ti
ti

tti
ti
tti
ti
ti

fi
ti

ti
ti

ti
ti
ti
ti
ti
ti
tti

ti
ti
ti
ti
ti
Southern Luzon State University
College of Allied Medicine

Concurrent validity compares an instrument's ability to obtain a measurement of participants' behavior with some other criterion of that behavior. Does the instrument under consideration correlate with another instrument that measures the same behavior or responses? For example, a researcher might want to develop a short instrument to help evaluate the suicidal potential of people when they call in to a suicide crisis intervention center. A short, easily administered interview instrument would be of great help to the staff, but the researcher would want to be sure this instrument was a valid diagnostic instrument for assessing suicide potential. Responses received on the short instrument could be compared with those received when using an already validated, but longer, suicide assessment tool. If both instruments seem to be obtaining the essential information necessary to make a decision about the suicide potential of a person, the new, shorter instrument might be considered to have criterion validity. The degree of validity would be determined through correlation of the results of the two tests administered to a number of people. The correlation coefficient must be at least .70 to conclude that the two instruments are obtaining similar data.
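A minimal sketch of that decision rule (the scores are invented; short_tool and long_tool stand for the new screening instrument and the already validated assessment):

from scipy.stats import pearsonr

short_tool = [12, 20, 8, 15, 25, 10, 18, 22]    # scores on the new, brief instrument
long_tool  = [40, 62, 30, 50, 78, 35, 58, 70]   # scores on the validated, longer instrument

r, _ = pearsonr(short_tool, long_tool)
print(f"Concurrent validity coefficient: {r:.2f}")
print("Meets the .70 criterion" if r >= 0.70 else "Below the .70 criterion")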
c.2. Predictive validity
The second type of criterion validity, predictive validity, is concerned with the ability of an instrument to predict the behavior or responses of subjects in the future. If the predictive validity of an instrument is established, it can be used with confidence to discriminate between people, at the present time, in relation to their future behavior. This would be a very valuable quality for an instrument to possess. For example, a researcher might be interested in knowing whether a suicidal potential assessment tool would be useful in predicting actual suicidal behavior in the future.
d. CONSTRUCT VALIDITY
Construct validity is the most difficult type of validity to measure. It is concerned with the degree to which an instrument measures the construct it is supposed to measure. A construct is a concept or abstraction created or "constructed" by the researcher. Construct validity involves the measurement of a variable that is not directly observable, but rather is an abstract concept derived from observable behavior. Construct validity is derived from the underlying theory that is used to describe or explain the construct.
Many of the variables measured in research are labeled constructs. Nursing is concerned with constructs such as anxiety, assertiveness, and androgyny.
d.1. Known-groups procedure
In the known-groups procedure, the instrument under consideration is administered to two groups of people whose responses are expected to differ on the variable of interest. For example, if you were developing an instrument to measure depression, the theory used to explain depression would indicate the types of behavior expected in depressed people. If the tool were administered to a group of supposedly depressed subjects and to a group of supposedly happy subjects, you would expect the two groups to score quite differently on the tool. If differences were not found, you might suspect that the instrument was not really measuring depression.
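A minimal sketch of the known-groups logic (the scores are invented): administer the depression instrument to both groups and test whether their mean scores differ.

from scipy.stats import ttest_ind

depressed_group = [42, 38, 45, 40, 44, 39]   # group expected to score high
happy_group     = [15, 18, 12, 20, 16, 14]   # group expected to score low

t_statistic, p_value = ttest_ind(depressed_group, happy_group)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# A clear, statistically significant difference supports construct validity;
# no difference would cast doubt on what the instrument is really measuring.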
d.2. Factor analysis
Factor analysis is a method used to identify clusters of related items on an instrument or scale. This type of procedure helps the researcher determine whether the tool is measuring only one construct or several constructs. Correlational procedures are used to determine whether items cluster together.
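As a rough sketch of how items "cluster," the example below generates made-up responses driven by two latent traits and fits a two-factor model; scikit-learn's FactorAnalysis is only one of several tools that could be used for this purpose.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
trait_one = rng.normal(size=(100, 1))                       # e.g., an anxiety-like trait
trait_two = rng.normal(size=(100, 1))                       # e.g., a self-esteem-like trait
items = np.hstack([
    trait_one + rng.normal(scale=0.3, size=(100, 3)),       # items 1-3 driven by trait one
    trait_two + rng.normal(scale=0.3, size=(100, 3)),       # items 4-6 driven by trait two
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))   # items loading on the same factor cluster together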

RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY


Reliability and validity are closely associated, and both of these qualities are considered when selecting a research instrument. Reliability is usually considered first because it is a necessary condition for validity: an instrument cannot be valid unless it is reliable. However, the reliability of an instrument tells nothing about its degree of validity; in fact, an instrument can be very reliable and still have low validity.
Reliability was considered first in this discussion of reliability and validity. In actuality, validity is often considered first in the construction of an instrument. Face validity and content validity may be examined, then some type of reliability is considered. Next, another type of validity may be considered.

The process is not always the same. The desired type of validity and the type of reliability are decided, and then the procedures for establishing these criteria for the instrument are determined.

UTILIZING THE DATA

Variations are usually expected in data that are collected from participants in a study. If the researcher did not expect to find some type of variation in the data, there would probably be no interest in conducting the study. Ideally, the variations or differences that are found are real rather than artificial. Every researcher must recognize that some error component is likely to exist in the data that are obtained, especially when the data are being collected from human beings and the degree of control that can be placed on the research situation is limited. Data-collection errors can arise from instrument inadequacies, instrument administration biases, environmental variations, and temporary subject characteristics during the data-collection process.

SOURCES OF ERROR IN DATA COLLECTION

Instrument inadequacies concern the items used to collect data and the instructions to participants that are contained within the instrument, such as a questionnaire. Are the items appropriate for collecting the data that are being sought? Do the items adequately cover the range of content? Will the order of the items influence subjects' responses? Are the items and the directions for completing them clear and unbiased?
Even when there are no errors in the research instrument, biases or errors may occur in the administration of the instrument. Is the instrument administered in the same fashion to all participants? Are observers collecting data in the same manner?
Environmental conditions during data collection can also influence the data that are gathered. Is the location for data collection the same for all participants? Are conditions such as temperature, noise levels, and lighting kept consistent for all participants?
Finally, the characteristics of the participants during data collection can be a source of error in research data. Are any personal characteristics of the participants, such as anxiety levels, hunger, or tiredness, influencing responses? This source of error may be called transitory personal factors.

Preparing Data for Analysis

Once data have been gathered, this information must be prepared for analysis. If a computer will be used to analyze data, it is very important that the data are in a form that facilitates entry into the computer. Quantitative data, such as age and weight, may be entered directly into the computer. Qualitative data, such as information obtained from open-ended questions, will need to be transformed into data that the computer can process if computer analysis will be used. It is important to have data ready for speedy entry, and not to be shuffling pieces of paper searching for data. Data coding should be considered, and decisions about missing data should be made before data entry begins.
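A minimal sketch of what "coding the data" can look like before entry (the variable names, numeric codes, and the -9 missing-data flag are illustrative choices, not module requirements):

import pandas as pd

raw = pd.DataFrame({
    "gender":  ["female", "male", "female", None],
    "anxiety": ["mild", "severe", None, "moderate"],
    "age":     [34, 29, 41, 37],                     # quantitative data can be entered directly
})

raw["gender_code"]  = raw["gender"].map({"male": 1, "female": 2})                    # nominal coding
raw["anxiety_code"] = raw["anxiety"].map({"mild": 1, "moderate": 2, "severe": 3})    # ordinal coding
raw[["gender_code", "anxiety_code"]] = raw[["gender_code", "anxiety_code"]].fillna(-9)  # decide up front how missing data are flagged
print(raw)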
Some researchers visit a statistician after their data are collected to find out what to do with their data. This is not the proper way to use the statistician's talents. The time to seek help is in the early planning stages of a study. A statistician can not only help determine which instruments, methods of data collection, and statistical analyses are most appropriate for the researcher's study purposes, but can also assist in determining which study design and sample size will most effectively ensure outcomes that will lead to valid conclusions.

Critiquing Data-Collection Procedures

It is important for the reader of a research report to determine whether the measurement and collection of data have been conducted appropriately. This may be a difficult task because the reader does not get to see the instruments; even when questionnaires are used, they are rarely contained in the research article or report.

However, there are some guidelines that may be used and some questions that may be asked when critiquing the data-collection section of a research report.
The reader first tries to find a section in the research report where the measurement and collection of data are reported. The information sought concerns who collected the data, when the data were collected, where the data were collected, what data were collected, and how the data were collected.
A determination is made of the level of measurement that would be appropriate to test the research hypothesis or answer the research question. For example, if compliance with a diabetic regimen was the dependent variable, has the researcher used a physiological measure of compliance, or has the patient been asked to self-report compliance?
The research instruments should be described clearly and thoroughly. Information should be provided about the reliability and validity of the instruments, and the types and degree of reliability and validity should be reported.
Finally, the results of the pilot study should be reported. If a pilot study was not conducted, the rationale for not doing so should be discussed.

God Bless!!!

EVALUATION

Activity:

1. Choose one (1) quantitative and one (1) qualitative research study and thoroughly analyze the research design/method used. Create a research design and sampling plan guide and diagram for the qualitative and quantitative studies.
Note:
Please include the title of the research, a short overview, and the research design. Please use legal size, 1-2 pages, in Microsoft Word, Times New Roman, font size 12, with 1.5 spacing. Kindly submit your requirement on or before 12 midnight of the specified date. The same applies to your diagram: you can use legal size, 1-2 pages, in Microsoft Word, Times New Roman, font size 12. You can send it through Google Classroom. Submit your requirements with the rubrics. You will be evaluated using the rubrics.

NAME:
YEAR/SECTION:
NURSING RESEARCH 1
Critical Analysis Essay Rubrics

This rubric is designed to make clear the grading process for written communication by informing you what key elements are expected by the university in a "good" piece of written work. Your written work will be evaluated by the criteria below in order to give you specific feedback to help guide your development.

Rating scale: Excellent = 4, Competent = 3, Good = 2, Needs Improvement = 1, Not Applicable = 0

Presentation
1. The purpose and focus are clear and consistent.
2. The main claim is clear, significant, and challenging.
3. Organization is purposeful, effective, and appropriate.
4. Sentence form and word choice are varied and appropriate.
5. Punctuation, grammar, spelling, and mechanics are appropriate.

Content
6. Information and evidence are accurate, appropriate, and integrated effectively.
7. Claims and ideas are supported and elaborated.
8. Alternative perspectives are carefully considered and represented.

Thinking
9. Connections between and among ideas are made.
10. Analysis/synthesis/evaluation/interpretation are effective and consistent.
11. Independent thinking is evident.
12. Creativity/originality is evident.

Assignment Specific Criteria
13. Responds to all aspects of the assignment.
14. Documents evidence appropriately.
15. Self-discipline through timeliness is acquired.

REFERENCES

Textbooks:
Fain, J. (2017). Reading, Understanding, and Applying Nursing Research (5th ed.). Philadelphia: F. A. Davis Company.
Nieswiadomy, R. & Bailey, C. (2018). Foundations of Nursing Research (5th ed.). Pearson Education, Inc.
Polit, D. & Beck, C. (2014). Essentials of Nursing Research: Appraising Evidence for Nursing Practice (8th ed.). Philadelphia: Lippincott Williams & Wilkins.

LINKAGES
The following Web sites offer students some helpful information on research, funding possibilities, and networking opportunities:
Agency for Healthcare Research and Quality
http://www.ahrq.gov
Centers for Disease Control and Prevention
http://www.cdc.gov/
Eastern Nursing Research Society
http://www.enrs-go.org/
Midwest Nursing Research Society
http://www.mnrs.org/
National Institutes of Health
http://www.nih.gov/
National Institute of Nursing Research
http://www.ninr.nih.gov
Sigma Theta Tau International
http://www.nursingsociety.org/
Southern Nursing Research Society
http://www.snrs.org/
SLSU E-Library
