
INTERPRETATION OF RESULTS

The analysis of research data provides the results of the study. These results need to be evaluated
and interpreted, giving consideration to the aims of the project, its theoretical basis, the existing
body of related research knowledge, and limitations of the adopted research methods. The
interpretive task involves a consideration of five aspects of the results: (1) their credibility, (2)
their meaning, (3) their importance, (4) the extent to which they can be generalized, and (5) their
implications.

1. CREDIBILITY OF THE RESULTS

One of the first interpretive tasks is assessing whether the results are accurate. This assessment,
in turn, requires a careful analysis of the study’s methodologic and conceptual limitations.
Regardless of whether one’s hypotheses are supported, the validity and meaning of the results
depend on a full understanding of the study’s strengths and shortcomings.

Such an assessment relies heavily on researchers’ critical thinking skills and on their ability to be
reasonably objective. Researchers should carefully evaluate the major methodologic decisions
they made in planning and executing the study and consider whether different decisions might
have yielded different results.

In assessing the credibility of results, researchers seek to assemble different types of evidence.
One type of evidence comes from prior research on the topic. Investigators should examine
whether their results are consistent with those of other studies; if there are discrepancies, a
careful analysis of the reasons for any differences should be undertaken. Evidence can often be
developed through peripheral data analyses. For example, researchers can have greater
confidence in the accuracy of their findings if they have established that their measures are
reliable and have ruled out biases. Another recommended strategy is to conduct a power analysis.

Researchers can also compute the actual power of their analyses to determine the probability of having committed a Type II error. It is especially useful to perform a power analysis when the
results of statistical tests are not statistically significant. For example, suppose we were testing
the effectiveness of an intervention to reduce patients’ pain. The two groups in the sample of 200 subjects (100 subjects each in an experimental and a control group) are compared in terms of pain scores, using a t-test.

Suppose further that the mean pain score for the experimental group was 7.90 (standard deviation [SD] = 1.3), whereas the mean for the control group was 8.29 (SD = 1.3), indicating lower pain among experimental subjects. Although the results are in the hypothesized direction, the t-test was non-significant. We can provide a context for interpreting the accuracy of the non-significant results by performing a power analysis. First, we must estimate the effect size as the difference between the group means divided by the pooled standard deviation: d = (8.29 − 7.90) / 1.3 = .30.

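
Assuming these figures, the effect size and approximate power can be sketched in Python. This is a simplified normal approximation to the power of a two-tailed, two-sample t-test, not the exact noncentral t calculation that a power table or dedicated software would use:

```python
from math import erf, sqrt

def normal_cdf(x):
    # cumulative distribution function of the standard normal
    return 0.5 * (1 + erf(x / sqrt(2)))

def cohens_d(mean1, mean2, pooled_sd):
    # standardized difference between two group means
    return abs(mean1 - mean2) / pooled_sd

def approx_power(d, n_per_group, z_crit=1.96):
    # normal approximation: power = Phi(d * sqrt(n/2) - z_crit),
    # with z_crit = 1.96 for a two-tailed test at alpha = .05
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

d = cohens_d(7.90, 8.29, 1.3)   # effect size, about .30
power = approx_power(d, 100)    # roughly .56 with 100 subjects per group
```

With power near .56, the risk of a Type II error is roughly 44%, so the non-significant result may simply reflect an underpowered test rather than a truly ineffective intervention.
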
A critical analysis of the research methods and a scrutiny of various types of external and
internal evidence almost inevitably indicates some limitations. These limitations must be taken
into account in interpreting the results.

2. MEANING OF THE RESULTS

In qualitative studies, interpretation and analysis occur virtually simultaneously. In quantitative studies, however, results are in the form of test statistics and probability levels, to which
researchers need to attach meaning. This sometimes involves supplementary analyses that were
not originally planned. For example, if research findings are contrary to the hypotheses, other
information in the data set sometimes can be examined to help researchers understand what the
findings mean. In this section, we discuss the interpretation of various research outcomes within
a hypothesis testing context.

Interpreting Hypothesized Results

Interpreting results is easiest when hypotheses are supported. Such an interpretation has been
partly accomplished beforehand because researchers have already brought together prior
findings, a theoretical framework, and logical reasoning in developing the hypotheses. This
groundwork forms the context within which more specific interpretations are made. Naturally,
researchers are gratified when the results of many hours of effort support their predictions. There
is a decided preference on the part of individual researchers, advisers, and journal reviewers for
studies whose hypotheses have been supported. This preference is understandable, but it is
important not to let personal preferences interfere with the critical appraisal appropriate to all
interpretive situations. A few caveats should be kept in mind.

First, it is best to be conservative in drawing conclusions from the data. It may be tempting to go
beyond the data in developing explanations for what results mean, but this should be avoided. An
example might help to explain what we mean by “going beyond” the data. Suppose we
hypothesized that pregnant women’s anxiety level about labor and delivery is correlated with the
number of children they have already borne. The data reveal that a significant negative
relationship between anxiety levels and parity exists. We conclude that increased experience
with childbirth results in decreased anxiety. Is this conclusion supported by the data? The
conclusion appears to be logical, but in fact, there is nothing in the data that leads directly to this
interpretation.

An important, indeed critical, research precept is: correlation does not prove causation. The
finding that two variables are related offers no evidence suggesting which of the two variables—
if either—caused the other. In our example, perhaps causality runs in the opposite direction, that
is, that a woman’s anxiety level influences how many children she bears. Or perhaps a third
variable not examined in the study, such as the woman’s relationship with her husband, causes or
influences both anxiety and number of children.

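
The third-variable possibility can be illustrated with a small simulation (hypothetical variables, not data from any study): when a single confounder drives two variables, the two correlate substantially even though neither causes the other.

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# a confounder z influences both x and y; x and y never affect each other
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

r = pearson_r(x, y)  # substantial positive r despite no direct causal link
```

A naive reading of r here would infer that x causes y (or vice versa), which is exactly the interpretive error the correlation-causation precept warns against.
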
Alternative explanations for the findings should always be considered and, if possible, tested
directly. If competing interpretations can be ruled out, so much the better, but every angle should
be examined to see if one’s own explanation has been given adequate competition. Empirical
evidence supporting research hypotheses never constitutes proof of their veracity. Hypothesis
testing is probabilistic. There is always a possibility that observed relationships resulted from
chance. Researchers must be tentative about their results and about interpretations of them. In
summary, even when the results are in line with expectations, researchers should draw
conclusions with restraint and should give due consideration to limitations identified in assessing
the accuracy of the results.

Example of hypothesized significant results:

Steele, French, Gatherer-Boyles, Newman, and Leclaire (2001) tested four hypotheses about the
effect of acupressure by Sea-Bands on nausea and vomiting during pregnancy. All four
hypotheses were supported.

Interpreting Non-significant Results

Failure to reject a null hypothesis is problematic from an interpretive point of view. Statistical
procedures are geared toward disconfirmation of the null hypothesis. Failure to reject a null
hypothesis can occur for many reasons, and researchers do not know which one applies. The null
hypothesis could actually be true, for example. The non-significant result, in this case, accurately
reflects the absence of a relationship among research variables. On the other hand, the null
hypothesis could be false, in which case a Type II error has been committed.

Retaining a false null hypothesis can result from such problems as poor internal validity, an
anomalous sample, a weak statistical procedure, unreliable measures, or too small a sample.
Unless the researcher has special justification for attributing the non-significant findings to one
of these factors, interpreting such results is tricky.

We suspect that failure to reject null hypotheses is often a consequence of insufficient power,
usually reflecting too small a sample size. For this reason, conducting a power analysis can help
researchers in interpreting non-significant results, as indicated earlier.

In any event, researchers are never justified in interpreting a retained null hypothesis as proof of
the absence of relationships among variables. Non-significant results provide no evidence of the
truth or the falsity of the hypothesis. Thus, if the research hypothesis is that there are no group
differences or no relationships, traditional hypothesis testing procedures will not permit the
required inferences.

When significant results are not obtained, there may be a tendency to be overcritical of the research methods and undercritical of the theory or reasoning on which the hypotheses were based.
This is understandable: It is easier to say, “My ideas were sound, I just didn’t use the right
approach,” than to admit to faulty reasoning. It is important to look for and identify flaws in the
research methods, but it is equally important to search for theoretical shortcomings. The result of
such endeavors should be recommendations for how the methods, the theory, or an experimental
intervention could be improved.

Example of non-significant results:

Vines, Ng, Breggia, and Mahoney (2000) did a pilot study to test the effect of a multimodal pain
rehabilitation program on the immune function, pain and depression levels, and health behaviors
of patients with chronic back pain. Changes in these measures between baseline and the last
week of the treatment program were not significant, which the researchers speculated might
reflect the small sample size (N = 23).

Interpreting Un-hypothesized Significant Results

Un-hypothesized significant results can occur in two situations. The first involves finding
relationships that were not considered while designing the study.

For example, in examining correlations among variables in the data set, we might notice that two
variables that were not central to our research questions were nevertheless significantly
correlated—and interesting. To interpret this finding, we would need to evaluate whether the
relationship is real or spurious.

There may be information in the data set that sheds light on this issue, but we might also need to
consult the literature to determine if other investigators have observed similar relationships. The
second situation is more perplexing: obtaining results opposite to those hypothesized. For
instance, we might hypothesize that individualized teaching about AIDS risks is more effective
than group instruction, but the results might indicate that the group method was better. Or a
positive relationship might be predicted between a nurse’s age and level of job satisfaction, but a
negative relationship might be found.

It is, of course, unethical to alter a hypothesis after the results are “in.” Some researchers view
such situations as awkward or embarrassing, but there is little basis for such feelings. The
purpose of research is not to corroborate researchers’ notions, but to arrive at truth and enhance
understanding. There is no such thing as a study whose results “came out the wrong way,” if the
“wrong way” is the truth.

When significant findings are opposite to what was hypothesized, it is less likely that the
methods are flawed than that the reasoning or theory is incorrect. As always, the interpretation of
the findings should involve comparisons with other research, a consideration of alternate
theories, and a critical scrutiny of data collection and analysis procedures.

The result of such an examination should be a tentative explanation for the unexpected findings,
together with suggestions for how such explanations could be tested in other research projects.

Example of un-hypothesized significant results:

Pellino (1997) tested the Theory of Planned Behavior to predict postoperative analgesic use after
elective orthopedic surgery. The theory predicted that patients with high levels of perceived
control would be more likely to intend to use analgesics than those with low levels of perceived
control. Just the opposite was found to be true, however, leading Pellino to conclude that the
findings “raise issues for research in the use of medications”.

Interpreting Mixed Results

Interpretation is often complicated by mixed results: Some hypotheses are supported by the data,
whereas others are not. Or a hypothesis may be accepted when one measure of the dependent
variable is used but rejected with a different measure. When only some results run counter to a
theoretical position or conceptual scheme, the research methods are the first aspect of the study
deserving critical scrutiny. Differences in the validity and reliability of the various measures may
account for such discrepancies, for example. On the other hand, mixed results may suggest that a
theory needs to be qualified, or that certain constructs within the theory need to be re-
conceptualized. Mixed results sometimes present opportunities for making conceptual advances
because efforts to make sense of disparate pieces of evidence may lead to key breakthroughs.

Example of mixed results:

Neitzel, Miller, Shepherd, and Belgrade (1999) tested the effects of implementing evidence
based postoperative pain management strategies on patient, provider, and fiscal outcomes. As
hypothesized, there were significant effects on provider behavior (e.g., decreased use of
meperidine) and nurses’ knowledge, as well as on costs. However, patient outcomes (e.g.,
decreased pain intensity, increased satisfaction) were not affected by the intervention. In
summary, interpreting research results is a demanding task, but it offers the possibility of unique
intellectual rewards. Researchers must in essence play the role of scientific detectives, trying to
make pieces of the puzzle fit together so that a coherent picture emerges.

TIP: In interpreting your data, remember that others will be reviewing your interpretation with a
critical and perhaps even a skeptical eye. Part of your job is to convince others of your
interpretation, in a fashion similar to convincing a jury in a judicial proceeding. Your hypotheses
and interpretations are, essentially, on trial—and the null hypothesis is considered “innocent”
until proven “guilty.” Look for opportunities to gather and present supplementary evidence in
support of your conclusion. This includes evidence from other studies as well as internal
evidence from your own study. Scrutinize your data set to see if there are further analyses that
could lend support to your explanation of the findings.

3. IMPORTANCE OF THE RESULTS

In quantitative studies, results that support the researcher’s hypotheses are described as
significant. A careful analysis of study results involves an evaluation of whether, in addition to
being statistically significant, they are important.

Attaining statistical significance does not necessarily mean that the results are meaningful to
nurses and their clients. Statistical significance indicates that the results were unlikely to be a
function of chance. This means that observed group differences or relationships were probably
real, but not necessarily important. With large samples, even modest relationships are
statistically significant. For instance, with a sample of 500, a correlation coefficient of .10 is
significant at the .05 level, but a relationship this weak may have little practical value.
Researchers must pay attention to the numeric values obtained in an analysis in addition to
significance levels when assessing the importance of the findings.
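
The claim about a sample of 500 can be checked directly: the significance test for a Pearson r computes t = r·√(n − 2) / √(1 − r²) and compares it with the critical value (about 1.96 for a two-tailed test at the .05 level with large degrees of freedom). A quick sketch using the values from the text:

```python
from math import sqrt

def t_for_correlation(r, n):
    # t statistic for testing H0: population correlation = 0
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

t = t_for_correlation(0.10, 500)  # about 2.24
significant = t > 1.96            # clears the two-tailed .05 threshold
```

So r = .10 does attain significance with n = 500, yet it implies that the two variables share only 1% of their variance (r² = .01), underscoring the gap between statistical and practical significance.
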

Conversely, the absence of statistically significant results does not mean that the results are
unimportant—although because of the difficulty in interpreting non-significant results, the case
is more complex. Suppose we compared two alternative procedures for making a clinical
assessment (e.g., body temperature). Suppose further that we retained the null hypothesis, that is,
found no statistically significant differences between the two methods. If a power analysis
revealed an extremely low probability of a Type II error (e.g., power = .99, a 1% risk of a Type II
error), we might be justified in concluding that the two procedures yield equally accurate
assessments. If one of these procedures is more efficient or less painful than the other, non-
significant findings could indeed be clinically important.

4. GENERALIZABILITY OF THE RESULTS

Researchers also should assess the generalizability of their results. Researchers are rarely
interested in discovering relationships among variables for a specific group of people at a
specific point in time. The aim of research is typically to reveal relationships for broad groups of
people. If a new nursing intervention is found to be successful, others will want to adopt it.
Therefore, an important interpretive question is whether the intervention will “work” or whether
the relationships will “hold” in other settings, with other people. Part of the interpretive process
involves asking the question, “To what groups, environments, and conditions can the results of
the study reasonably be applied?”

5. IMPLICATIONS OF THE RESULTS

Once researchers have drawn conclusions about the credibility, meaning, importance, and
generalizability of the results, they are in a good position to make recommendations for using
and building on the study findings. They should consider the implications with respect to future
research, theory development, and nursing practice. Study results are often used as a springboard
for additional research, and researchers themselves often can readily recommend “next steps.”
Armed with an understanding of the study’s limitations and strengths, researchers can pave the
way for new studies that would avoid known pitfalls or capitalize on known strengths. Moreover,
researchers are in a good position to assess how a new study might move a topic area forward. Is
a replication needed, and, if so, with what groups? If observed relationships are significant, what
do we need to know next for the information to be maximally useful?

For studies based on a theoretical or conceptual model, researchers should also consider the
study’s theoretical implications. Research results should be used to document support for the
theory, suggest ways in which the theory ought to be modified, or discredit the theory as a useful
approach for studying the topic under investigation.

Finally, researchers should carefully consider the implications of the findings for nursing
practice and nursing education. How do the results contribute to a base of evidence to improve
nursing? Specific suggestions for implementing the results of the study in a real nursing context
are extremely valuable in the utilization process.

INTERPRETATION OF QUALITATIVE FINDINGS

In qualitative studies, interpretation and analysis of the data occur virtually simultaneously. That
is, researchers interpret the data as they categorize it, develop a thematic analysis, and integrate
the themes into a unified whole. Efforts to validate the qualitative analysis are necessarily efforts
to validate interpretations as well. Thus, unlike quantitative analyses, the meaning of the data
flows from qualitative analysis.

Nevertheless, prudent qualitative researchers hold their interpretations up for closer scrutiny—
self-scrutiny as well as review by peers and outside reviewers. Even when researchers have
undertaken member checks and peer debriefings, these procedures do not constitute proof that
results and interpretations are credible. For example, in member checks, many participants might
be too polite to disagree with researchers’ interpretations, or they may become intrigued with a
conceptualization that they themselves would never have developed on their own—a
conceptualization that is not necessarily accurate. Thus, for qualitative researchers as well as
quantitative researchers, it is important to consider possible alternative explanations for the
findings and to take into account methodologic or other limitations that could have affected study
results.

Example of seeking alternative explanations:

Cheek and Ballantyne (2001) studied family members’ selection process for a long-term care
facility for elderly patients being discharged from an acute setting. In describing their methods,
they explicitly noted that “attention throughout was paid to looking for rival or competing
themes or explanations”. In their analysis, they made a conscious effort to weigh alternatives.

In drawing conclusions, qualitative researchers should also consider the transferability of the
findings. Although qualitative researchers rarely seek to make generalizations, they often strive
to develop an understanding of how the study results can be usefully applied. The central
question is: In what other types of settings and contexts could one expect the phenomena under
study to be manifested in a similar fashion?

The implications of the findings of qualitative studies, as with quantitative ones, are often
multidimensional. First, there are implications for further research: Should the study be
replicated? Could the study be expanded (or circumscribed) in meaningful and productive ways?
Do the results suggest that an important construct has been identified that merits the development
of a more formal instrument?

Does the emerging theory suggest hypotheses that could be tested through more controlled,
quantitative research? Second, do the findings have implications for nursing practice? For
example, could the health care needs of a subculture (e.g., the homeless) be identified and
addressed more effectively as a result of the study? Finally, do the findings shed light on the
fundamental processes that are incorporated into nursing theory?
