

Commentary

Commentary—After Triangulation, What Next?

Journal of Mixed Methods Research, 2019, Vol. 13(1), 6-14
© The Author(s) 2017
DOI: 10.1177/1558689818780596
journals.sagepub.com/home/mmr

David L. Morgan¹

Abstract
This commentary agrees with the editors' recent decision to do away with triangulation as a term in mixed methods research, but before doing so, it argues for a review of its original popularity and a careful consideration of what should replace it. Triangulation depends on the comparison of results from qualitative and quantitative studies that attempt to answer the same research question(s), so there are three possible outcomes: convergence, complementarity, and divergence. After reviewing each of these alternatives, I present an approach that cross-tabulates tests of hypotheses as quantitative results and themes as qualitative results, based on the extent to which those results are convergent, complementary, or divergent.

Keywords
triangulation, convergence, complementarity, divergence

Let me begin by agreeing with Fetters and Molina-Azorin's (2017a) proposal that we divest ourselves of triangulation as a term for research designs in mixed methods research (MMR). Even though they offer six reasons for eliminating triangulation from our terminology, I believe their fourth is sufficient: triangulation "has multiple meanings and lacks sufficient clarity and precision." This problem has been recognized since Greene, Caracelli, and Graham (1989) examined more than 50 studies to assess the match between the stated reasons for doing MMR versus what the studies did. They found that although triangulation was the most frequently stated reason, less than a third of such studies actually used triangulation as intended. So triangulation has a long history of multiple meanings and insufficient clarity.
Yet simply saying good-bye to triangulation is not enough. Instead, we need to understand why it was so popular, in terms of both its initial purpose and the various other purposes that were assigned to it. Here, Fetters and Molina-Azorin (2017a) are mistaken in saying that triangulation "developed within, and is virtually synonymous with the field of qualitative research" (p. 7), since the concept originated in the work of Donald Campbell and his coauthors (Campbell & Fiske, 1959; Webb, Campbell, Schwartz, & Sechrest, 1966). There, the term came from an analogy to navigation, with two separate lines of sight converging on a single point and forming the tip of a triangle. For Campbell and colleagues, comparing the results from multiple methods aimed to minimize the chance that the weaknesses of any single method might produce "invalid" conclusions.

¹Department of Sociology, Portland State University, OR, USA

Corresponding Author:
David L. Morgan, 2513 NE Skidmore St, Portland, OR 97211, USA.
Email: morgand@pdx.edu

Concerns about validity were at the center of much of Campbell's career. One such concern involved problems in measurement (Campbell & Fiske, 1959), where the research results actually derived from deficiencies in how the data were captured. A different concern with validity was highlighted in his work on threats to inference in experimental and quasi-experimental design (Campbell & Stanley, 1963; Cook & Campbell, 1979). A third concern was the limitations inherent in any given method, so that using only one method to do multiple studies of the same topic might produce similar results due to shared biases in the method itself. Seen in this light, his work on unobtrusive measures in Webb et al. (1966) was devoted to producing a new method that was not subject to the "reactivity" that occurred when participants knew they were being studied, as in methods such as interviews, self-reports, and participant observation.
This last approach to validity issues is important because Webb et al. (1966) relied on the concept of triangulation to counteract the limitations of single methods. In particular, they noted the importance of cross-validating results by using multiple methods: "Once a proposition has been confirmed by two or more independent measurement processes, the uncertainty of its interpretation is greatly reduced" (p. 3); and "When a hypothesis can survive the confrontation of a series of complementary methods of testing, it contains a degree of validity unattainable by one test within the more constricted framework of a single method" (p. 196). Denzin (1970) relied explicitly on Webb et al. (1966) to promote the goal of comparing the results of multiple methods. This in turn produced the version of triangulation that was so widely used as the justification for MMR in the 1970s and 1980s.
But if triangulation initially meant assessing the convergence of different methods, how did
the other interpretations arise? I believe the key insight here is that there are multiple possible
outcomes in the comparison of different methods. Beyond convergence, there is the possibility
that each method will target a different aspect of the underlying phenomenon, leading to results
that are complementary to each other. There is also the obvious possibility of divergence when
multiple methods produce distinctly different outcomes.
Convergence, complementarity, and divergence summarize the three possible alternatives from comparing qualitative and quantitative results, thus leading to three different reasons for doing MMR. Of course, these are not the only reasons to do MMR; at a minimum, they omit all of the "sequential" design formats. Still, the direct comparison of the results from multiple methods remains an important element of MMR, even if we abandon triangulation as a label for this work. In the core section of this commentary, I will describe convergence, complementarity, and divergence, along with an assessment of the strengths and limitations of each as a goal for doing MMR. I will then offer a specific proposal for presenting the more complex results that can come from combining two or more of these goals, followed by some brief concluding remarks.

Convergence to Increase the Credibility of Similar Results (QUAL = QUANT)
Convergence replaces the original meaning of triangulation, as presented above. Following Morgan (2013), I have used the "=" sign to indicate the goal of producing nearly identical results from different methods. In this case, both the qualitative and the quantitative studies should be complete in themselves, rather than one supplementing the other, and they should be independent, so that the results of one do not influence the design or conduct of the other. These two portions of the overall project thus proceed separately until they are compared after the completion of data collection.
The main advantage for convergence, as originally offered by both Webb et al. (1966) and Denzin (1970), was to enhance the credibility of the research results by minimizing the chance that those results were due to the biases of any one method. Note that I have replaced the quantitatively oriented term validity with the broader criterion of credibility (literally, "believability"), as proposed by Lincoln and Guba (1985), who advocated triangulation as a way to enhance such credibility. One of the strengths of this approach is its direct link to issues of integration in mixed methods (Fetters & Molina-Azorin, 2017b), because it proposes a direct comparison of the qualitative and quantitative results to determine their similarity.
In contrast to these strengths, major problems can arise if the actual results produce either outright divergence or a muddled interpretation where each method targets different aspects of the research goal. In either of those cases, studies that were exclusively aimed at convergence may yield very little in the way of usable conclusions. Furthermore, even when there is clear convergence, that still amounts to answering the same research question twice. This duplication of effort is worthwhile only when the need for additional credibility is important enough to justify the expense and effort of conducting separate qualitative and quantitative studies.

Complementarity to Cover Multiple Aspects of a Topic (QUAL + QUANT)
Complementarity assigns different goals to the qualitative and quantitative portions of a project, according to the strengths each method offers for a particular purpose (Fielding & Fielding, 1986; Flick, 1992). Hence, the basic strategy in complementarity is to create a division of labor, so that each method offers something that would be difficult for the other to produce. I have used the "+" sign to join the two components (Morgan, 2013), because this captures the underlying goal of using the qualitative and quantitative components to cover more content than would be possible by either method alone. Although I have stated this purpose in terms of two self-sufficient studies (QUAL + QUANT), the same principles would also apply if one of the studies played a supplementary role (i.e., QUAL + quant or QUANT + qual designs). It is also worth noting that there are other names for this purpose, such as completeness or comprehensiveness, but I wanted to avoid the implication that any combination of methods could ever be truly complete or comprehensive.
The advantages of complementarity are most apparent when the integration of the results from the methods (Fetters & Molina-Azorin, 2017b) can be well specified in advance of the data collection. In that case, there is a clear separation between methods, where each contributes its own unique strengths to meeting a composite goal. The analogy here is two pieces of a jigsaw puzzle that fit together in a seamless fashion.
In terms of limitations, research that begins with a predetermined set of complementary goals may encounter problems because it cannot avoid the kinds of comparisons that were also central to convergence. This problem occurs because the methods in complementary designs often follow "parallel" procedures, where the qualitative and quantitative components proceed separately. This means that, just as in convergence, the methods are only integrated at the end. Unfortunately, this lack of communication between the methods raises the possibility that the results may not fit together as planned. At that point, little can be done to correct the problem, given that data collection has been completed.

Divergence to Initiate Complex Comparisons of Results (QUAL ≠ QUANT)
The goal in divergence is to use differences in the qualitative and quantitative results to create a dialog around those contradictions. I have introduced the "≠" notation specifically for this commentary. Historically, divergence is undoubtedly the rarest of the three possible alternatives for comparing results, and as such it has not received as many competing labels. The other major option is "initiation," which was used by Greene et al. (1989). The reason for preferring divergence as a label is not only that initiation never caught on but also that divergence is a possible outcome from comparing the results of qualitative and quantitative studies, while initiating new research is a choice that might be made in the face of divergence.
The main advantage of divergence is not the differences that it generates but the opportunities that it provides for investigating those differences. This typically involves moving back and forth between the qualitative and quantitative results to produce a richer interpretation of the original contradictions. Maxwell and Loomis (2003) called this an "interactive model of design," and they provided a number of detailed examples to demonstrate how pursuing divergent results can produce insights that go well beyond the initial recognition of difference. In this case, the point where integration occurs can be somewhat indeterminate. On the one hand, the research may cease with the discovery of divergence, producing only hypotheses about the sources of the different results. On the other hand, the divergent results may lead to further data collection and analysis in an attempt to resolve the discrepancy.
Divergence has limitations because it requires differences that are both theoretically interesting and empirically addressable, but there currently are no protocols for producing such results. Interestingly, these problems are also demonstrated in the detailed examples provided by Maxwell and Loomis (2003), since a number of those studies began with failed attempts at convergence. In other words, much of the work that exemplifies research based on divergence also indicates how hard it is to design a study around divergence as an explicit goal. In addition, when further research is undertaken to resolve discrepancies, it is difficult to predict in advance how much effort it will take to produce meaningful results.

Combining the Alternatives


Although these three alternatives, taken together, exhaust the logical possibilities for a direct comparison of qualitative and quantitative studies, a research project may well involve more than one of these outcomes. This is particularly likely to be the case with the multiple possible outcomes from attempting to resolve divergence, but it can occur in any situation where a comparison of results yields a complex pattern. What is needed is a format for reporting more complicated kinds of results, and I propose a cross-tabulation, as illustrated in Table 1.
The obvious goal of Table 1 is to facilitate more complex comparisons, so that convergence, complementarity, and divergence can all be given an equal footing in reporting the comparison of qualitative and quantitative results. In its most specific form, what I propose is a framework for comparing quantitative results from tests of hypotheses with qualitative results in the form of themes. Rather than trying to create some intermediate or hybrid form of results as a basis for comparison, Table 1 relies on the most common ways of expressing both qualitative and quantitative results. Connecting quantitative hypothesis tests with qualitative themes has the advantage of relying on two widely used types of outcomes, but the format in Table 1 will work with any mechanism that systematically cross-tabulates results across methods.
Note, however, that any judgment about whether two results are convergent, complementary, or divergent will always be a matter of degree. For example, it would be difficult to define an ironclad set of criteria for when a set of qualitative and quantitative methods do or do not converge. But it is important not to let the perfect be the enemy of the good. Instead, rather than demanding complete precision in the specification of whether results are convergent, complementary, or divergent, it is more reasonable to require authors to justify their assertions about those outcomes. Thus, rather than defining airtight, universal definitions of convergence, complementarity, and divergence, the current approach is to hold authors accountable for their claims in this regard.

Table 1. A System for Comparing Qualitative and Quantitative Results.a

                       Convergent   Complementary          Complementary          Divergent
                       results      qualitative results    quantitative results   results
Qualitative results    ——————       ——————                                        ——————
                       ——————       ——————                                        ——————
Quantitative results   ——————                              ——————                 ——————
                       ——————                              ——————                 ——————

a. Note that this table could also be converted to a vertical layout, to help accommodate multiple items in the four basic categories that currently form the columns.
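To illustrate how this framework might be put into practice, the sketch below assembles a Table 1-style cross-tabulation from a set of themes, hypothesis tests, and author-supplied pairing judgments. It is a minimal sketch in Python (the article prescribes no software), and every theme, hypothesis, and outcome label in it is hypothetical.

```python
# A minimal sketch of the Table 1 cross-tabulation, in Python.
# Everything below is hypothetical: the themes, the hypothesis tests,
# and the outcome judgments are invented for illustration only.
from collections import defaultdict

# Hypothetical qualitative themes and quantitative hypothesis-test results.
qual_themes = ["peer support eases onboarding", "workload drives turnover"]
quant_tests = [
    "H1: mentoring predicts retention (supported)",
    "H2: pay level predicts turnover (not supported)",
    "H3: tenure predicts satisfaction (supported)",
]

# Each pairing is a judgment of degree that the authors must justify (see text).
judgments = {
    ("peer support eases onboarding",
     "H1: mentoring predicts retention (supported)"): "convergent",
    ("workload drives turnover",
     "H2: pay level predicts turnover (not supported)"): "divergent",
}

# Build the four columns of Table 1: paired results go to their judged
# category; unpaired results from either strand count as complementary.
crosstab = defaultdict(list)
paired_qual = {theme for theme, _ in judgments}
paired_quant = {test for _, test in judgments}

for (theme, test), outcome in judgments.items():
    crosstab[outcome + " results"].append((theme, test))
for theme in qual_themes:
    if theme not in paired_qual:
        crosstab["complementary qualitative results"].append((theme, None))
for test in quant_tests:
    if test not in paired_quant:
        crosstab["complementary quantitative results"].append((None, test))

for column, pairs in crosstab.items():
    print(column)
    for theme, test in pairs:
        print(f"  qual: {theme} | quant: {test}")
```

Treating the pairing judgments as explicit inputs, rather than computing them, mirrors the point above that classifying results as convergent, complementary, or divergent is a matter of degree that authors must justify.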

Conclusions
In many ways, triangulation was a victim of its own success. From the 1970s into the 1990s, it
was by far the best-known reason for doing MMR. Hence, anyone doing MMR during that
period might have been tempted to use triangulation as a justification, if only because there
were few obvious alternatives. As a result, triangulation came to mean too many things. Yet
that does not imply that the original purposes of triangulation have disappeared; instead, those
purposes have been clarified and expanded.
Today, we still have the goal of comparing the results of qualitative and quantitative studies on the same phenomena, but we have developed a better understanding of the alternative reasons for making such comparisons. Furthermore, as Table 1 indicates, we now realize that there may be multiple outcomes from comparing the results from qualitative and quantitative methods. Building on these advances creates greater clarity about the differences between convergence, complementarity, and divergence, and that provides a much better chance of laying triangulation to rest.

Declaration of Conflicting Interests


The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author received no financial support for the research, authorship, and/or publication of this article.

References
Campbell, D., & Fiske, D. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Belmont,
CA: Wadsworth.
Cook, T., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings.
Boston, MA: Houghton Mifflin.
Denzin, N. (1970). The research act. Chicago, IL: Aldine.
Fetters, M. D., & Molina-Azorin, J. F. (2017a). The Journal of Mixed Methods Research starts a new decade: Principles for bringing in the new and divesting of the old language of the field. Journal of Mixed Methods Research, 11(1), 3-10.
Fetters, M. D., & Molina-Azorin, J. F. (2017b). The Journal of Mixed Methods Research starts a new decade: The mixed methods integration trilogy and its dimensions. Journal of Mixed Methods Research, 11(3), 291-307.

Fielding, N., & Fielding, J. (1986). Linking data. Thousand Oaks, CA: Sage.
Flick, U. (1992). Triangulation revisited: Strategy of validation or alternative? Journal for the Theory of Social Behaviour, 22, 175-197.
Greene, J., Caracelli, V., & Graham, W. (1989). Toward a conceptual framework for mixed methods
evaluation designs. Educational Evaluation and Policy Analysis, 11, 259-274.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.
Maxwell, J., & Loomis, D. (2003). Mixed methods design: An alternative approach. In A. Tashakkori & C.
Teddlie (Eds.), Handbook of mixed methods in social & behavioral research (pp. 241-271). Thousand
Oaks, CA: Sage.
Morgan, D. (2013). Integrating qualitative and quantitative methods: A pragmatic approach. Thousand
Oaks, CA: Sage.
Webb, E., Campbell, D., Schwartz, R., & Sechrest, L. (1966). Unobtrusive measures. New York, NY:
Guilford.
