
Article

Journal of Mixed Methods Research
2020, Vol. 14(2) 184–206
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1558689819872599
journals.sagepub.com/home/mmr

Modeling the Use of Mixed Methods–Grounded Theory: Developing Scales for a New Measurement Model

Michelle C. Howell Smith1, Wayne A. Babchuk1, Jared Stevens1, Amanda L. Garrett1, Sherry C. Wang1,3, and Timothy C. Guetterman2

Abstract
Mixed methods–grounded theory (MM–GT) has emerged as a promising methodology that
intersects the value of mixed methods with rigorous qualitative design. However, recent reviews
have found that MM–GT empirical studies tend to lack procedural details. The purpose of this
article is to apply the "best practices" for conducting MM–GT in a study designed to develop
and then test a theoretical model for how undergraduate engineering students develop interest
in the engineering PhD. This study contributes to the field of mixed methods research by (a)
illustrating best practices for MM–GT, (b) providing an MM–GT scale development example, (c)
demonstrating how an MM–GT scale could potentially bypass exploratory factor analysis and
proceed directly to confirmatory factor analysis for testing psychometric properties, and (d)
showing how a joint display for data collection planning can be used to strengthen integration
in an instrument development study.

Keywords
mixed methods, grounded theory, qualitative, research design, engineering education

Introduction
There has been increased attention in the mixed methods research (MMR) community to inte-
grate mixed methods and grounded theory to yield a potentially powerful hybrid approach com-
bining salient elements of each. Referred to in the literature as mixed methods–grounded
theory (MM–GT; Guetterman, Babchuk, Howell Smith, & Stevens, 2019; Johnson, McGowan,
& Turner, 2010), pragmatist grounded theory (Babchuk, 2015), and mixed grounded theory

1 University of Nebraska–Lincoln, Lincoln, NE, USA
2 University of Michigan, Ann Arbor, MI, USA
3 Santa Clara University, Santa Clara, CA, USA

Corresponding Author:
Michelle C. Howell Smith, Department of Educational Psychology, Nebraska Center for Research on Children, Youth,
Families & Schools, University of Nebraska–Lincoln, 201 Louise Pound Hall, Lincoln, NE 68588-0365, USA.
Email: mhowellsmith@unl.edu

(MGT; Johnson & Walsh, 2019), this approach has gained momentum in the fields of education
and the social and health sciences as a particularly fruitful and potentially rewarding methodol-
ogy with wide applications across research contexts and practice settings (Birks & Mills, 2015;
Charmaz, 2014; Guetterman et al., 2019; Holton & Walsh, 2017; Johnson et al., 2010; Walsh,
2015). Although a well-articulated and persuasive argument has been advanced by Johnson and
Walsh (2019) that the umbrella term mixed grounded theory may best represent a wide array of
epistemological and procedural integrations of this hybrid approach, here we retain the
term mixed methods–grounded theory coined by Johnson et al. (2010) to remain consistent with
previous literature surrounding this newly formed methodology, temporarily sidestepping the
nuances of this emerging debate.
There are two primary methodological goals for this article. First, we provide an overview
of an MM–GT dissertation aimed at developing and then testing a theoretical model for how
undergraduate engineering students develop interest in the engineering PhD. This MM–GT
study broadens previous conceptualizations of MMR sequential exploratory instrument devel-
opment designs where priority is usually given to the quantitative testing strand. By integrating
grounded theory across the instrument development process, we elevated the typical role of the
qualitative strand. This research also extends MMR methodology by providing a pragmatic
model for mixed methods instrument development designs focusing on the translation of the
exploratory qualitative findings into a quantitative instrument (Howell Smith, 2011).
A second methodological goal of this article is to tie this MM–GT dissertation into the larger
MM–GT literature to further underscore our shared belief that MM–GT is a particularly effec-
tive social research strategy to examine a wide range of research problems across disciplines.
In particular, we applied a recently developed set of best practices for MM–GT (Guetterman
et al., 2019) to provide a useful example for novice and experienced methodologists interested
in employing MM–GT. We argue that, although it might not be necessary to formally canonize
these procedures for all applications of MM–GT, they can serve to help researchers think
through the decision-making processes across domains in the design and implementation phases
of the study. It is our hope that this study will contribute to collective efforts to advance the use
of MM–GT as this approach becomes increasingly popular in the research landscape.
We begin by providing a brief overview of MM–GT. We then present the MM–GT instru-
ment development dissertation, which was an exploratory sequential design (Howell Smith,
2011). The dissertation presentation ends with recommendations for conducting MM–GT
instrument development studies. We extend our discussion of the dissertation to the current
MM–GT literature by highlighting the use of best practices for MM–GT throughout the study
(Guetterman et al., 2019) and implications for mixed methods instrument development designs.

Mixed Methods–Grounded Theory


Mixed methods and grounded theory are particularly complementary. First, conceptualizations
of grounded theory have been open to both quantitative and qualitative data sources for the pur-
pose of theory development (Glaser, 2008; Glaser & Strauss, 1967; Holton & Walsh, 2017).
However, MMR offers a methodology to systematically collect and integrate both qualitative
and quantitative data, which can be applied toward the goal of theory development. Second,
while grounded theory offers an approach to develop theory, intersecting grounded theory with
MMR can offer the possibility of testing the theory quantitatively. Finally, grounded theory in
the qualitative component of a mixed methods study can add a theory or model that provides
an explanation for quantitative results.
As MM–GT has become increasingly popular across disciplines, several scholars have
lauded the potential of MM–GT as a particularly effective hybrid approach, useful in diverse
fields and settings. In the first article in this newly emerging tradition, Johnson et al. (2010)
coined the term mixed methods–grounded theory and specifically detailed epistemological,
methodological, and practical considerations needed to successfully conduct this form of
research. Here, they argued that qualitative dominant mixed methods is linked to a qualitative
epistemology ("constructivist"), quantitative dominant mixed methods to a quantitative
epistemology ("postpositivist"), and equal status "pure" (p. 68) mixed methods is linked to a mixed
methods epistemology of pragmatism or dialectical pluralism. The authors appeared to view
equal-status mixed methods as potentially the most fruitful of these approaches but at that time
had not identified any studies that fully met these criteria, although they listed several "that are
heading in that promising direction in varying degrees" (p. 74).
In their study of the contemporary use of MM–GT, Guetterman et al. (2019) found that pub-
lished MM–GT studies provided little methodological detail regarding the use of grounded the-
ory, mixed methods, and MM–GT. Importantly, they noted the lack of theoretical development
among the MM–GT studies in their sample (67% of the 61 reviewed studies did not mention the devel-
opment of a theory, model, typology, etc.) and that the majority used convergent designs in
which grounded theory analysis was used to develop interventions, analyze survey data, or
explore secondary data. These authors extended the work of others (Babchuk, 2015; Birks &
Mills, 2015; Charmaz, 2014; Johnson et al., 2010; Walsh, 2015) to explicate the advantages of
MM–GT and develop "recommendations for best practices" (pp. 191-192) useful for both
novice and experienced researchers interested in using this methodology.
Walsh and colleagues have also made invaluable contributions to the development of MM–
GT in a series of publications (Holton & Walsh, 2017; Johnson & Walsh, 2019; Walsh, 2015).
Walsh (2015) advocates for the use of mixed-design grounded theory as being particularly
effective for theoretical development and elaboration. This approach is framed through the lens
of classic grounded theory viewed as a "meta-theory of research design" (p. 531) and Glaser
and Strauss's (1967) clear mandate on the use of grounded theory with quantitative and
qualitative data. She concludes that "this research shows the importance for grounded theorists not to
limit themselves to qualitative data, as doing so might hinder the emergence of the resulting
theory" (Walsh, 2015, p. 551). In an extension of Johnson et al. (2010) and Walsh's (2015)
work, Johnson and Walsh (2019) advance their brand of MM–GT labeled mixed grounded the-
ory, effectively combining salient aspects of grounded theory and mixed and multimethod
research. They provide a succinct overview of epistemological and methodological aspects of
grounded theory, mixed methods, and multimethod research. Elucidating how these approaches
can be successfully integrated, they defined MGT as "a research approach that includes the
development of a grounded theory using qualitative and/or quantitative data and uses elements,
logics, and strategies from both GT (grounded theory) and MR (mixed research) traditions" (p. 9).
Johnson and Walsh also include a set of six basic MGT designs with exemplars from the
literature to "help beginning researchers get started in their conduct of MGT" (p. 15).
Focusing on intervention research, Creamer (2018) has advanced a fully integrated mixed
methods approach to grounded theory. Her approach uses mixed methods to generate a formal
theory that can illuminate the mechanisms of action for an intervention by capitalizing
on the strengths of qualitative and quantitative methods to corroborate and elaborate a
grounded theory. Creamer identified three major value-added aspects of MM–GT: (a) construct-
ing a multilayered theory, (b) pursuing discordance between qualitative and quantitative results
to develop a more complex theory, and (c) testing and generating a theory with a study.
To ascertain the current acceptance of MM–GT, we conducted a search on ProQuest to
examine the use of MM–GT within dissertations, as they are often a leading indicator of metho-
dological innovation (McKim, 2017). We identified 482 dissertations using the search terms
mixed method* AND grounded theory in the full text. Figure 1 demonstrates the increasing use
of MM–GT over time in dissertations.

Figure 1. Frequency of mixed methods–grounded theory dissertations by year, 2001-2017.

Not only are MM–GT dissertations growing in number, but they are also being recognized
by leading mixed methods professional associations (i.e., the American Educational Research
Association’s Mixed Methods Special Interest Group and the Mixed Methods International
Research Association) for the quality of their methodological rigor (Howell Smith, 2011; Shim,
2015). In this article, Howell Smith’s (2011) dissertation serves not only as an exemplar of the
potential value of MM–GT but also as the basis for advancing "best practices for MM–GT"
(Guetterman et al., 2019).

An MM–GT Study to Develop and Test a Theoretical Model


In 2008, the National Science Foundation issued a request for proposals for exploratory projects
that addressed why so few domestic students pursue a PhD in engineering. While existing
research had identified a shortage of domestic engineers with advanced education, it had not
focused on the process of how engineers come to cultivate their interest in doctoral-level engi-
neering education and how they turned this interest into action and pursued such a degree. We
responded to this call by proposing an MM–GT study to develop and test a theoretical model
for interest in the engineering PhD. As noted by Seymour and Hewitt (1997), "without systemic
investigation, we could not know whether all of the pertinent issues had been raised, or
which elements matter more than the others" (p. 6). A validated theoretical model would pro-
vide engineering educators and administrators with accurate information to more effectively
guide their efforts in increasing the interest of domestic students in the engineering PhD and
ultimately increasing enrollments.
The purpose of this MM-GT exploratory sequential instrument development design study
was to develop and test a theoretical model for how domestic undergraduate engineering stu-
dents develop interest in the engineering PhD. The study consisted of four phases. The first
phase was a qualitative grounded theory study to develop a theoretical model of constructs related to
interest in the engineering PhD. The second phase, instrument development, produced the
Exploring Engineering Interest Inventory (EEII), a measurement instrument designed with
sound psychometric properties to test a series of preliminary hypotheses related to the theory
generated in the qualitative phase, thus serving as the first point of integration in this study. In
the third phase, the EEII was used to collect data from a larger sample of junior and senior
engineering majors at five institutions to test the theoretical model of the EEII. The fourth and
final phase of the study integrated the quantitative model-fit results with the grounded theory
model to finalize an empirically tested theoretical model. By basing the instrument's development
on the theory generated from a grounded theory study, the intent was to develop an instrument
that more accurately measured the phenomenon than if it had been based on the scant
amount of information currently available in the literature.

Figure 2. Mixed methods–grounded theory exploratory sequential instrument development design.

Qualitative, quantitative, and MMR questions guided all aspects of the study.

Phase I: Qualitative Research Questions

1. What is the theory that explains the process that facilitates or inhibits interest in the engineering
PhD among domestic engineering students?
Phase III: Quantitative Research Questions

1. What is the reliability and validity evidence of the instrument scores as analyzed through
exploratory and confirmatory factor analysis?
Phase IV: Integration Synthesis Mixed Methods Research Questions

1. Does the factor structure of the instrument confirm the theoretical model?
2. How does the instrument that has been designed based on grounded theory provide a better mea-
sure of the phenomenon?

Method
Overview of Mixed Methods Instrument Development Designs
This study used an MM-GT exploratory instrument development design (Figure 2). Exploratory
designs are characterized by an initial qualitative phase that explores the central phenomenon
which then informs a second quantitative phase (Creswell & Plano Clark, 2018). Traditionally,
instrument development designs place more emphasis on the quantitative phase of the study
(Creswell & Plano Clark, 2018). However, in developing the Instrument Development and
Construct Validation model, Onwuegbuzie, Bustamante, and Nelson (2010) make explicit the
value of both quantitative and qualitative data in scale development procedures. Whereas the
Instrument Development and Construct Validation model is a 10-step instrument development
framework that incorporates crossover analyses, this study integrated grounded theory throughout
the MMR design, thus elevating the role of the qualitative strand to be equal with the quantitative
strand and adding to the overall rigor of the study. The design of this study also placed
emphasis on the instrument development process by conceptualizing it as a distinct phase of
the study. Finally, this design was enhanced by an integration synthesis phase where the
qualitative findings and quantitative results were integrated, the methodology was reviewed, and the
results were disseminated.

Table 1. Legitimation Strategies Implemented in a Mixed Methods–Grounded Theory Instrument Development Study.

Legitimation strategy (Onwuegbuzie & Johnson, 2006): Implementation (Howell Smith, 2011)

Sample integration: The sample for the initial grounded theory study was drawn from the same pool as the quantitative follow-up testing.

Inside outside: The grounded theory model and resulting instrument were based on the emic perspective of the target audience, while the feedback was provided by etic experts.

Weakness minimization: The sequential exploratory design was selected to maximize the strengths of the initial grounded theory strand to develop the instrument and the quantitative strand to test it.

Multiple validities: Validity information for both the qualitative grounded theory and the quantitative strands is presented.

We primarily used a hybrid sequential parallel/quasi-nested mixed methods sampling design
for this study (Collins, Onwuegbuzie, & Jiao, 2007). A parallel sampling relationship indicates
that the participants in one strand were drawn from the same underlying population as the other
strand. This approach was reflected in the recruitment of undergraduate engineering students at
seven universities for the qualitative strand and five of those universities for the quantitative
strand. A nested sampling relationship refers to the sample of one strand being drawn from the
participants of the other strand. Generally, a nested sampling relationship implies that the sub-
sample (e.g., qualitative participants) was drawn from among a larger sample (e.g., quantitative
participants). However, in this study, the qualitative data collection occurred before the quantitative
data collection. The participants in the qualitative strand were all potential participants in
the quantitative strand, but due to the anonymous survey data collection, we have no way to
know if any of them did, in fact, complete the survey. Hence, our sampling design was only
quasi-nested. Engineering graduate students, faculty, and industry PhDs were recruited outside
this sampling design to supplement the qualitative data collection. The institutional review
board at the first author’s institution provided oversight of this study and collaborated with par-
ticipating universities to safeguard all human subject participants.
To ensure that this study represented the highest design quality and interpretive rigor to pro-
duce transferable inferences (Tashakkori & Teddlie, 2003), four legitimation strategies were
implemented: sample integration, inside outside, weakness minimization, and multiple validities
(Onwuegbuzie & Johnson, 2006). Table 1 summarizes the legitimation strategies used through-
out this study.

Phase I: Qualitative Grounded Theory


Grounded Theory. We selected a grounded theory approach for this study, because the intent
was to develop a theoretical model to explain the process of becoming interested in the
engineering PhD among domestic students. Grounded theory methods allowed the research
team to examine the statements of engineers, engineering students, and faculty to produce a the-
oretical explanation solidly grounded in the data. The theoretical model would have utility
beyond identifying the constructs for the instrument by guiding the development of interventions
by engineering faculty and administrators that could affect the number of engineers
who earn doctoral degrees.
Although all approaches to grounded theory research are designed to support the develop-
ment of a theoretical model, we adopted a primarily constructivist approach to grounded theory
(Charmaz, 2014), which complemented the pragmatic use of mixed methods to develop an
instrument. Our stance as a research team was not as neutral and value-free researchers; we
actively engaged in co-constructing the theoretical model with our participants.

Sampling Method. First, we used maximum variation sampling to identify seven public and pri-
vate institutions (ranging in size from 2,000 to 30,000 students) offering undergraduate engi-
neering programs that participated in this study. Then, we used theoretical sampling to identify
potential participant roles that would be most relevant to inform the theoretical model: under-
graduate engineering students, PhD engineering students, engineering faculty, and industry
PhDs. Within these four roles, gender, ethnicity, major/discipline, and status (e.g., junior/senior
or assistant/associate/full professor) were all initially considered when selecting individuals to
invite to participate in the study, in order to ensure maximal variation within each role. As the
theoretical model began to emerge, we selected participants within particular roles to provide
additional perspectives related to questions that arose during data analysis or to provide feed-
back on the developing model. Given that there were no national or regional organizations from
which to recruit nonacademic engineering PhDs, we used a snowball sampling approach to
identify industry PhD participants.

Data Collection. Approximately 140 undergraduate students participated in semistructured focus


groups (two at each of the seven sites). Semistructured individual interviews were conducted with
32 faculty and 16 doctoral students across the seven sites, while six industry PhDs participated in
phone interviews. Interview protocols were developed based on issues noted in the engineering
education literature and feedback from engineering faculty to elicit data related to perceptions of
the engineering PhD and elements that facilitate or inhibit interest in the engineering PhD. While
the core protocol did not change significantly throughout the course of the study, the areas in
which the interviewer prompted for more information differed, depending on the nature of the
interview, as well as previously collected data, to ensure that all constructs were fully explored.
Interviews were digitally recorded and transcribed verbatim by a professional transcriptionist.

Data Analysis. The coding process used initial and selective codes to identify, reduce, and orga-
nize the data; selective codes were then evaluated and shaped into an "interpretive theory,"
which calls for "the imaginative understanding of the studied phenomenon" (Charmaz, 2014,
p. 126). MAXQDA 10 (Verbi GmbH, 2017), a qualitative data analysis software package, was
used to facilitate the coding and analysis process. The initial and selective coding were com-
pleted in an iterative process such that small groups of transcripts (based on participant type and
location) were initially coded with brief, often in vivo, labels applied to meaningful units of text
in MAXQDA. After each wave of initial coding, all previous waves were revised and synthe-
sized through selective coding, using numerous iterative coding retrieval searches in MAXQDA
to compare data within and across codes as well as within and across participant type and loca-
tion. Using this constant comparison approach, we were able to relate and refine codes within
and across waves of data analysis and identify emerging theoretical constructs. Memos were
written to record the definition, description, and evolution of codes, constructs, and models, as
well as to capture researcher reflections, questions, and insights throughout the coding and
analysis process.

Figure 3. Engineering PhD interest grounded theory model.

Developing the theoretical model occurred concurrently with the coding process. Once the
initial code list was developed, each code was written on a brightly colored Post-It note and
placed on a large poster board that was brought to each research team meeting. The Post-It notes
allowed the research team to easily group and regroup the codes and to move groups of codes
in relation to one another. As the organization and hierarchy of the codes became more settled,
the codes were rewritten onto new Post-It notes, which were color coded by groups to represent
emerging constructs. This process provided an early and ongoing visual representation of the
evolving theoretical model. Once the theoretical model seemed to stabilize, the visual model
moved to a computer graphic program, and additional refinements were made in this format.
The central concept of the theoretical model, pathways to the engineering PhD, emerged
early in the coding process, while the code list was being created and refined. Throughout the
transcripts, large sections of text were thought to be descriptive of each participant’s pathway in
direct and indirect ways. In essence, each interview encapsulated one person’s journey. Since
the pathway was seen as a pervasive element, the research team conducted additional analyses
to code salient elements that either supported or undermined an individual's interest in doctoral
education. Through reading memos, drafting diagrams, and discussing categories in research
team meetings, the pathway concept was revisited and put forward as the central concept of the
study because it appeared frequently in the data, offered a logical justification, was abstract, had
the capacity to provide a strong explanation through its connections with other categories, and
was able to incorporate variation (Strauss & Corbin, 1998).

Findings. The engineering PhD interest grounded theory model identified constructs that influ-
enced interest in pursuing an engineering PhD, described the relationships among the constructs,
and explained the process of developing interest in the engineering PhD. This visual diagram is
presented in Figure 3.
"Pathways to the engineering PhD" emerged as an integral focused category that served as
the foundation for the emerging theoretical model. This pathway was composed of the following
constructs: misperceptions of graduate education, environmental influences, and personal
characteristics. Engineers’ unique experiences with these constructs, along with their own per-
sonal reflection and career considerations, led to their level of interest in advanced education.
Undergraduate engineers had many misperceptions regarding the engineering PhD. In turn,
their incorrect and incomplete beliefs regarding graduate education, their financial futures, and
their careers, shaped their career trajectory and interest in advanced education. Because these
misperceptions were believed to be true, they often served as major barriers to pursuing the
PhD. For those who were pursuing PhDs or became PhD engineers, their interest in an engi-
neering PhD degree was profoundly influenced by their experiences and the people in their
environment. Personal characteristics, such as belief in themselves and personal interests and
skills, propelled individuals to be interested in pursuing a PhD in engineering. These three con-
structs are not necessarily independent from one another. For example, undergraduate students
who are in an environment that actively discourages graduate education in engineering may
never seek out experiences that might correct misperceptions they have about an engineering
PhD. Through exposure to and reflection on the constructs (misperceptions, environment, and
personal characteristics), they determined if they had interest in a PhD degree in engineering.
Engineers who had more accurate perceptions of graduate education, positive environmental
influences, and strong internal personal characteristics were most likely to develop interest in a
PhD. Conversely, engineers with misperceptions, negative environmental influences, and personal
characteristics that did not align with graduate education were least likely to develop interest in a
PhD. However, each individual engineer assigned different values or weights to the constructs
to estimate the benefits and costs associated with advanced education. Hence, the relative
importance of each construct was critical to the decision to pursue doctoral-level engineering
education, the ultimate outcome of the engineering PhD interest model.

Credibility. We employed several validation strategies to ensure the findings were an accurate
representation of the participants’ lived experience: prolonged engagement by visiting several
different engineering programs over the course of the year; data triangulation by collecting data
from undergraduates, graduate students, faculty, and industry engineers; investigator triangula-
tion by developing the theory across three researchers; rich description by providing evidence
from participants’ quotes throughout the analysis; and member checking by soliciting feedback
from students, faculty, and engineers on the emerging engineering PhD interest theoretical
model (Creswell & Miller, 2000; Creswell & Poth, 2018). Additionally, a leading mixed meth-
ods scholar served as an external reviewer providing peer review and debriefing throughout the
data collection and analysis process.

Phase II: Integration: Instrument Development


In a mixed methods framework, the instrument development phase represents the "mixing" of
the qualitative and quantitative phases, or the explicit integration of the qualitative and
quantitative data (Creswell & Plano Clark, 2018). In an instrument development design, the data are not
"mixed" in the literal sense, as the qualitative data analysis serves as the foundation for the
quantitative data collection. The phases, however, "build" through the process of transforming
the engineering PhD interest theoretical model into quantitative items on the EEII instrument. A
review of 102 published articles describing the development of 76 instruments using qualitative
and quantitative approaches noted that the report quality was generally low due to unclear or
limited details about methodological decisions and integration strategies (Howell Smith, Arthur,
Hawley, & Koziol, 2017). Therefore, we conceptualized integration as its own unique phase to
place an increased emphasis on the process of translating qualitative findings (in this case, the
engineering PhD interest theoretical model) into quantitative items. Instrument development
was conducted by the lead author and the fourth author. Statistical analysis of the reliability and
validity of the scores obtained by this scale, in addition to analysis of predictors, is presented
in the Phase III: Quantitative Section.

Table 2. Scale Development Process.

Steps From DeVellis (2017): Procedures in Howell Smith's (2011) Dissertation

1. Determine clearly what you want to measure: The three primary constructs from the grounded theory (misperceptions, environment, and personal characteristics) were expanded into five subscales for the Exploring Engineering Interest Inventory to allow for greater specificity in key areas. The five subscales included (1) misperceptions of economic and personal costs, (2) misperceptions of engineering work, (3) educational environment, (4) interpersonal environment, and (5) personal characteristics.

2. Generate an item pool: An item development matrix (see Table 3) showing the theoretical constructs, related subconstructs, and representational quotes was developed to guide the item writing process and to ensure that all elements of each construct were included. Items were written using in vivo language when possible to correspond to each construct and subconstruct.

3. Determine the format of the measure: Items were written with Likert-type response options. The survey was administered via an online survey platform.

4. Have experts review the initial item pool: Items were reviewed by the research team and colleagues in a psychometric graduate program. Representative undergraduate engineers participated in iterative feedback interviews, described below.

5. Consider the inclusion of validation items: Validation items were not included to minimize participant burden.

The framework for developing the EEII generally followed the first five of the eight steps for
developing measurement scales identified by DeVellis (2017): (1) determine clearly what you want to
measure, (2) generate an item pool, (3) determine the format of the measure, (4) have experts
review the initial item pool, and (5) consider the inclusion of validation items. The remaining
three steps will be discussed in the Phase III Quantitative Section: (6) administer items to devel-
opment sample, (7) evaluate the items, and (8) optimize scale length. Table 2 summarizes how
Steps 1 to 5 were enacted in this study. Step 2 (generate an item pool) and Step 4 (have experts
review the initial item pool) featured specific integration techniques, which are described in fur-
ther detail.

Step 1: Determine Clearly What You Want to Measure. The engineering PhD interest grounded
theory model guided the content and the structure of the EEII measure. The EEII was designed
to measure interest in the engineering PhD both as a reflective first-order construct that is
directly measured by items and as an aggregate multidimensional construct with items
related to the individual constructs of the theoretical model that are summed together to reflect
a total score (Edwards, 2011).

Step 2: Generate an Item Pool. To facilitate translating the theoretical model into items that
would measure the constructs in the model, we created a joint display for data collection
planning to serve as a matrix for generating items that placed each construct and subconstruct
side by side with representative quotes. We developed potential items for the instrument across
all the components of the theoretical model, using in vivo language from the quotes as much as
possible. Hence, the matrix served as a bridge, providing an explicit connection from the
qualitative grounded theory to the quantitative testing of that theory. Through this process, we
identified two constructs that each had two distinct subconstructs embedded within them:
Misperceptions included misperceptions of economic and personal costs and misperceptions of
engineering work, and environment included educational environment and interpersonal
environment. These subconstructs were developed as separate subscales to ensure that they
were adequately covered by items in the instrument. Table 3 provides a sample of the joint
display for data collection planning.

Table 3. Joint Display for Data Collection Planning: Item Generation Matrix.

Construct and subconstruct: Personal characteristics: balance of work, school, and family life.
Quote: ‘‘Most PhD programs are not structured for people that work. They just aren't. And that
makes it extremely difficult to pursue it. If you're someone that has a family and has a job, and
things go along with that like a house. It's extremely difficult.''
Scale items: Family responsibilities would make it difficult for me to pursue a PhD in
engineering. / Balancing school, work, and family time would be a factor in considering a
PhD. / I could work full time while earning a PhD part time.

Construct and subconstruct: Personal characteristics: confidence and self-efficacy.
Quote: ‘‘And when you think that it's unreachable or unattainable or you couldn't—you know,
it seems too hard or I'm not smart enough or something like—even though you're doing fine.''
Scale items: I am smart enough to complete a PhD. / My GPA is good enough to get admitted
to a PhD program. / I feel confident in my academic abilities.

Construct and subconstruct: Personal characteristics: confidence and self-efficacy.
Quote: ‘‘If you could take away the scare from like the big dissertation.''
Scale items: I am intimidated by the thought of writing a dissertation.

Construct and subconstruct: Interpersonal environment: family influence.
Quote: ‘‘My father-in-law has a PhD. I think that was helpful at least in a sense to me. I'm
thinking well if he can do it I can too, and it's worked out well for him.''
Scale items: What is the highest level of education completed by your parents or guardians? /
Growing up was there anyone important to you who had earned a PhD in any field? / I know
people who are pursuing or have a PhD in engineering.

Construct and subconstruct: Educational environment: institutional programs and services.
Quote: ‘‘So we put together a workshop on ‘what the heck is this grad school thing?' It
includes things like, what's the difference between a masters and a PhD, and what's the
difference between an RA and a TA and a fellowship.''
Scale items: I have attended a graduate school workshop.

Construct and subconstruct: Educational environment: institutional programs and services.
Quote: ‘‘I think if you had some of the current PhD students work with the undergrads and
involve them in their research and maybe get them more interested in that and just let them see
more what different things are out there, it would help.''
Scale items: I have interacted with engineering graduate students.

Construct and subconstruct: Nature of work: PhD-level engineering work.
Quote: ‘‘I guess, um, they could make it more interesting to me if they could show a reason, a
difference between being a, just a PE, or being a PE and having a PhD. Like I can't see, I don't
know what difference there is adding your PhD, they pretty much can do the same thing.''
Scale items: A professional engineering license is more valued by industry than a PhD. / I
understand the kind of work that engineers with PhDs do. / I think people with a PhD in
engineering are overqualified for most engineering jobs. / I can do the same kind of work with
a bachelor's degree that an engineer with a PhD can do.

Construct and subconstruct: Interpersonal environment: professors/mentors.
Quote: ‘‘I think the teachers themselves are the best, uh, advocates for continuing to get a
PhD. . . . I think, if they talk more about it, you guys get your PhD, your doctor's, even more
students would be interested in it.''
Scale items: No one at my undergraduate program ever talked about earning a PhD as a
possibility. / Professors have described the importance of the PhD in the engineering field. /
Professors have discussed earning a PhD as an option in one or more of my classes. /
Professors in my undergraduate program encouraged me to pursue a PhD in engineering.

Step 3: Determine the Format of the Measure. We selected Survey Monkey, an online survey plat-
form, to administer the instrument. Items related to the theoretical constructs were all formatted
with Likert-type response options.

Step 4: Have Experts Review the Initial Item Pool. Once the items were developed and formatted
for an online survey, we used qualitative approaches to solicit feedback from a variety of
experts. In addition to consulting with engineering faculty, psychometric colleagues, and sur-
vey research experts, we recruited undergraduate engineering students to provide feedback
on the EEII scale. Involving undergraduate engineering students as experts in the scale
review not only provided valuable feedback in selecting items that would resonate with the
target population but also provided an additional opportunity for member checking and
increasing the validity of the qualitative phase of the study. We developed an iterative feed-
back interview that streamlined a traditional cognitive interview. Instead of probing for feed-
back on every item, we asked students to examine the instrument for items that they thought
were too long, irritating, embarrassing, or confusing, that contained unclear wording, or that
included words they did not understand. Students could also nominate questions they wanted to delete,
add, or rewrite. At the conclusion of the instrument, students were asked additional questions
regarding the content, the relevance of the experiences described in the questions, the scale,
the organization, their comfort level in answering the questions, their ability to give honest
answers, the length, and any other important issues that may have been overlooked as sug-
gested by Iarossi (2006).
After each session, the scale was modified to address the concerns raised by students. With
each iteration of the scale, fewer concerns emerged, and students were able to make more subtle
suggestions since major flaws were corrected early in the process. In all, the scale was reduced
from 106 items at the outset of this process to 72 items. The completion time of 8 minutes was
deemed appropriate for the time constraints of this population.
Changes based on expert review. The review process of the EEII scale significantly shaped
the content, wording, formatting, and survey design. In general, these changes related to the fol-
lowing categories: rewording items to use more familiar terms for the field, deleting or merging
items that were redundant or unclear to obtain a reasonable length for an instrument, shortening
items to make them more direct and readable for the students, adding items that addressed
important constructs but were missing in the original item pool, and improving layout and
formatting.

Step 5: Consider the Inclusion of Validation Items. Given the length of the pilot instrument, we
decided against including additional validation items.

Phase III: Quantitative Correlation Study


The quantitative phase was based loosely on DeVellis’ (2017) final three steps of developing an
instrument: (6) administer items to development sample, (7) evaluate the items, and (8) optimize
scale length (pp. 88-101).

Data Collection. The EEII was administered using Survey Monkey, an online data collection
platform. The dean of engineering at each of the five universities participating in the quantitative study
sent an introductory e-mail to all junior and senior undergraduate engineering students who
were U.S. citizens (n = 8,432). Four of the five participating universities provided student
e-mail addresses to the research team; the remaining university forwarded the survey invitation and
link to students. The survey link was emailed to students the next day, and two follow-up reminder
emails were sent to nonresponders. A total of 1,459 students responded to the survey. We removed
171 surveys because they were completed by non-U.S. citizens, nonengineering majors, or freshmen
or sophomore students. We also removed surveys with nonresponse to 10 or more of the core items
from analysis (n = 384). The remaining 904 surveys were used in our analyses. The final sample
was 28% female, 40% racial and ethnic minority, and 32% eligible for a Pell Grant, a need-based
subsidy provided by the U.S. government for postsecondary education, which is sometimes used as
a proxy for low socioeconomic status. Females and racial and ethnic minorities were overrepre-
sented in our sample as national demographic data for undergraduate engineering majors in 2011
was 19.2% female and 3.1% racial and ethnic minority (Center for Online Education, n.d.). Data on
Pell Grant eligibility are not available disaggregated by majors; however, across all majors, 41% of
undergraduate students received Pell Grants in 2011 (Janice & Voight, 2016), which is higher than
the percentage of students eligible for Pell Grants in our pool. We randomly assigned the completed
surveys into two groups: 300 cases were used for the exploratory factor analysis (EFA), and 604
cases were used for the confirmatory factor analysis (CFA).
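The random assignment into EFA and CFA subsamples can be sketched as follows. This is a hypothetical illustration with an arbitrary seed, not the authors' original code:

```python
import numpy as np

# Hypothetical sketch of the holdout split: 904 retained surveys are
# shuffled and divided into an EFA subsample (300) and a CFA subsample (604).
rng = np.random.default_rng(2011)  # arbitrary seed for reproducibility
case_ids = rng.permutation(np.arange(904))
efa_cases, cfa_cases = case_ids[:300], case_ids[300:]
print(len(efa_cases), len(cfa_cases))  # -> 300 604
```

Holding out a separate subsample for the CFA avoids testing the factor structure on the same cases that suggested it.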

Exploratory Factor Analysis. The purpose of the EFA was to identify the factor structure of a scale
(Worthington & Whittaker, 2006). Although the EEII scale was built to reflect the structure of
the engineering PhD interest theoretical model generated during the qualitative grounded theory
phase, conducting an EFA provided key information, such as the underlying factor structure of
the data, necessary to test the replication of the factor structure with a CFA (Worthington &
Whittaker, 2006).
Factor retention. Common-factor analysis (FA) was used as the extraction method with an
orthogonal (VARIMAX) rotation. Initially a five-factor solution was selected because
the EEII was based on five constructs from the grounded theory. However, there were four fac-
tors that had eigenvalues greater than 3, and a fifth factor with an eigenvalue of 2.8. The four
factors with eigenvalues greater than 3 were examined for theoretical relevance by mapping the
items that loaded on each of the four factors back to the expanded engineering PhD interest the-
oretical model. Through this process, we noted that the ‘‘interpersonal environment’’ factor had
broken apart, and the ‘‘people’’ referred to in those items drove the item loadings on other fac-
tors: faculty loaded with educational environment, family with personal characteristics, and
employers/coworkers with engineering work. The fifth factor lacked theoretical coherence as it
contained unrelated items from four of the five hypothesized factors. Based on both the statisti-
cal and the theoretical information, the EFA was run with a four-factor solution (FA/
VARIMAX) and was further refined through item deletion.
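The eigenvalue-based retention step can be illustrated with a short sketch. Note that this uses simulated two-factor data and the common Kaiser criterion (eigenvalue greater than 1) rather than the higher cutoff applied in the study; it is an illustration of the logic, not the EEII analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate responses to 12 items loading on two uncorrelated latent factors
# (hypothetical data, NOT the EEII responses).
n, half = 500, 6
f1, f2 = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
items = np.hstack([f1 + 0.5 * rng.normal(size=(n, half)),
                   f2 + 0.5 * rng.normal(size=(n, half))])

# Eigenvalues of the item correlation matrix, sorted in descending order
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Retain factors whose eigenvalue clears the chosen cutoff; retained factors
# are then checked for theoretical coherence against the grounded theory model.
retained = int(np.sum(eigenvalues > 1.0))
print(retained)  # -> 2
```

In the study, the statistical cutoff was only the first filter; each candidate factor was also mapped back to the theoretical model before retention.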

Item Deletion. Additional EFAs (FA/PROMAX) were conducted to further refine the factor
structure of the instrument. Items with communalities less than .40 (Costello & Osborne, 2005)
and loadings of less than .32 or cross-loadings larger than .32 (Tabachnick & Fidell, 2012) were
considered for deletion. Of the 49 items deleted in this phase, 19 performed poorly and did not
provide any useful information. The remaining 30 items were deleted due to confusing wording,
lack of relevance to traditional undergraduate engineering students, or because they were
pseudo double-barreled, in that they were long items with multiple clauses that tended to
cross-load.

Table 4. Factor Correlation Matrix.

Factor        1        2        3        4
1         1.000
2          .258    1.000
3         −.222    −.247    1.000
4         −.118    −.087     .282    1.000

Note. Extraction method: principal axis factoring. Rotation method: Promax with Kaiser
normalization.

Table 5. Confirmatory Factor Analysis Test of Model Fit (Robust Maximum Likelihood).

Fit statistic                               Result
Chi-square value                            562.024
Degrees of freedom                          224
Chi-square p value                          0.0000
Comparative fit index                       .90
Tucker–Lewis index                          .89
Root mean square error of approximation     .05
Standardized root mean square residual      .05
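The communality and loading screening rules can be expressed concretely. The sketch below uses a hypothetical loading matrix (not the EEII results) and approximates communalities as sums of squared loadings, which is exact only for orthogonal solutions:

```python
import numpy as np

def flag_for_deletion(loadings, communality_min=0.40, loading_min=0.32):
    """Return indices of items failing the communality, primary-loading,
    or cross-loading screening rules."""
    flags = []
    for j, row in enumerate(np.abs(loadings)):
        communality = float((row ** 2).sum())  # orthogonal-solution approximation
        primary = row.max()                    # strongest loading
        cross = np.sort(row)[-2]               # second-strongest loading
        if communality < communality_min or primary < loading_min or cross > loading_min:
            flags.append(j)
    return flags

# Hypothetical 4-factor loading matrix for five items
load_mat = np.array([
    [0.75, 0.05, 0.02, 0.10],  # clean, well-defined item -> retain
    [0.28, 0.12, 0.05, 0.08],  # weak primary loading -> flag
    [0.55, 0.45, 0.03, 0.02],  # cross-loading -> flag
    [0.40, 0.10, 0.10, 0.05],  # low communality (~.18) -> flag
    [0.70, 0.10, 0.08, 0.04],  # retain
])
print(flag_for_deletion(load_mat))  # -> [1, 2, 3]
```

In the study, flagged items were only candidates for deletion; final decisions also weighed each item's place in the theoretical model.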
After three rounds of EFA analyses, preliminary CFA analyses were conducted to obtain the additional
information provided by the modification indices. Two items had significant correlations with
other items in their respective factor, and since the content of the items was very similar, these
items were deleted. A sixth and final EFA containing 23 items was conducted using an oblique
(PROMAX) rotation to generate the factor correlation matrix (Table 4). Using the criteria of
KMO (Kaiser–Meyer–Olkin) values greater than 0.6 as recommended by Fabrigar, Wegener,
MacCallum, and Strahan (1999), we decided that the four-factor/23-item EFA (KMO = .789)
was the best model. The total variance explained (R2) of 30.427% in the original five-factor/72-
item model improved to 50.575% in the final four-factor model.
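The KMO measure of sampling adequacy referenced above can be computed directly from the item correlation matrix. This is a minimal sketch on simulated data (hypothetical items, not the EEII responses):

```python
import numpy as np

def kmo(corr):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy:
    sum of squared correlations relative to squared correlations
    plus squared partial (anti-image) correlations."""
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                 # partial correlations from the inverse
    np.fill_diagonal(partial, 0.0)
    r = corr.copy()
    np.fill_diagonal(r, 0.0)
    return (r ** 2).sum() / ((r ** 2).sum() + (partial ** 2).sum())

# Hypothetical data: eight items sharing one common factor
rng = np.random.default_rng(3)
f = rng.normal(size=(400, 1))
X = np.hstack([f + 0.6 * rng.normal(size=(400, 4)),
               f + 0.6 * rng.normal(size=(400, 4))])
value = kmo(np.corrcoef(X, rowvar=False))
print(value > 0.6)  # -> True, meeting the Fabrigar et al. (1999) cutoff
```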

Reliability Analysis. To establish the reliability of the scores obtained by this instrument, the
Cronbach’s alpha if item deleted was reviewed for each factor as retained by the EFA proce-
dure. All factors had Cronbach’s alpha values greater than .7, which is considered to be ade-
quate, as suggested by George and Mallery (2003). Only two items had alpha-if-deleted values
above the factor alpha value, but the increase was negligible, and the items’ content was rele-
vant to the theoretical structure of the instrument. Therefore, no items were deleted based on
review of the reliability of the scores.
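Cronbach's alpha and the alpha-if-item-deleted diagnostic used in this step can be computed as follows (simulated scores for illustration, not the EEII data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn; an item whose removal
    raises alpha above the factor's alpha is a deletion candidate."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

# Simulated five-item factor with a single underlying trait
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
scores = latent + 0.8 * rng.normal(size=(300, 5))
alpha = cronbach_alpha(scores)
print(alpha > 0.7)  # -> True: adequate by the .7 rule of thumb
```

As in the study, a negligible alpha-if-deleted gain is not by itself grounds for deletion when the item's content is theoretically relevant.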

Confirmatory Factor Analysis. The CFA used a robust maximum likelihood (MLR) estimator to
account for non-normality (Satorra & Bentler, 1994). The following fit indices (see Table 5)
using MLR were obtained when evaluating model fit of the CFA.
Although the chi-square value was significant, its sensitivity to sample size lessens the
importance of this particular test of model fit (Byrne, 2016). However, the approach of
accepting a model in which the chi-square value is less than twice the degrees of freedom also
did not demonstrate good fit (Newcomb, 1994). The remaining tests, comparative fit index
(CFI), Tucker–Lewis index (TLI), root mean square error of approximation (RMSEA), and
standardized root mean square residual (SRMR), all demonstrated that the four-factor/23-item
model adequately fit the data. The final EEII is available from the corresponding author.
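As a plausibility check on the reported indices, the RMSEA point estimate can be recomputed from the chi-square value, degrees of freedom, and CFA subsample size. Note that software reporting MLR fit typically uses a scaled chi-square; the conventional maximum likelihood formula is used here only as an approximation:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate from a chi-square test of model fit (ML formula)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 562.024, 224, 604  # Table 5 values and the CFA subsample size
print(round(rmsea(chi2, df, n), 3))  # -> 0.05, matching the reported RMSEA
print(chi2 < 2 * df)                 # chi-square < 2df rule -> False, as noted above
```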

Phase IV: Integration Synthesis


The synthesis phase of the study involved integrating the findings from the qualitative and the
quantitative phases of the study, reflecting on the lessons learned from the second phase of the
study, and planning for the dissemination of the findings to engineering educators and research
methodologists.
In answering the qualitative research question, this study identified a four-construct model
that describes the pathways to the engineering PhD: Personal characteristics represent the initial
conditions of undergraduate engineering students; the educational environment represents the
context; and misperceptions, or rather the correction of misperceptions about the nature of PhD-
level engineering work and the economic and personal costs of pursuing a PhD, represent the
intervening conditions. Once the inputs are synthesized through a process of reflection and
career alignment, the result is either continued disinterest or consideration of the engineering
PhD. The reliability and validity evidence of the final, revised theoretical model related to the
quantitative research question includes Cronbach’s alpha values greater than .7 with none of the
scores of the 23 items significantly detracting from the reliability of the scores for each factor.
The CFA tests, CFI, TLI, RMSEA, and SRMR, all demonstrated that the four-factor/23-item
model adequately fit the data. The quantitative analysis informed changes to the theoretical
model; therefore, the answer to our first MMR research question was that the four-factor struc-
ture confirmed the four-construct theoretical model.
The second MMR question, ‘‘How does the instrument that has been designed based on
grounded theory provide a better measure of the phenomenon?’’ does not have a direct answer.
However, there are a variety of reasons why the mixed methods approach used in this study
developed a better measure than one that could have been produced using other approaches.
First and foremost, developing an instrument using mixed methods allows the process to capita-
lize on the strength of each method while minimizing the weaknesses, thus providing an advan-
tage over monomethod development approaches (Creswell & Plano Clark, 2018). Second, the
methodological focus of this sequential exploratory mixed methods study led to a number of
decisions that underscored the attention to rigor across the study. Namely, a grounded theory
approach was selected for the qualitative strand to provide a systematic and iterative process for
developing a theoretical model and then using that model as the basis for developing the instru-
ment throughout subsequent development activities. This approach allowed the research team to
develop a thorough understanding of the domain (interest in the engineering PhD) and the rela-
tionships between constructs within that domain. We acknowledge that some authors advocate
that conducting crossover analysis (e.g., quantitizing qualitative data) can extract more meaning
by incorporating an additional level of analysis (Onwuegbuzie & Combs, 2010). However, for
this study, we chose to focus on the richness of each methodology in its own right. Integration
strategies explicitly bridged the qualitatively developed theoretical model, and successive quan-
titative testing were evaluated not only in the context of the currently accepted statistical stan-
dards but also within the context of the theoretical model. Finally, each phase, and each step
within each phase, was conceptualized as an iterative process. As the project progressed and
new information was discovered, or new interpretations were suggested, the research team was
able to move fluidly across the phases of the study to adjust prior decisions and determinations
to reflect the most informed and up-to-date understanding of the phenomenon.

Recommendations for Mixed Methods Instrument Development Studies. Based on the methods used
in this study to develop the EEII, the following recommendations (see Table 6) give guidance to
other researchers who may be considering using MM–GT to develop an instrument.

Table 6. Best Practices for Mixed Methods–Grounded Theory Instrument Development Studies
Checklist.

☐ Be explicit about the rationale for using a mixed methods–grounded theory approach.
☐ Clearly describe the theoretical model development process.
☐ Clearly describe the protocol for translating themes or theoretical constructs into an item pool.
☐ Clearly describe the protocol for refining the items or scales before conducting a pilot study.
☐ Consult survey research literature for resources regarding scale formatting and maximizing
data collection efficacy.
☐ Provide details regarding the pilot study and scale evaluation process.
☐ Consider forgoing exploratory factor analysis procedures when the instrument is developed
based on a grounded theory.

Limitations
Given that the participating universities were purposefully selected to represent a diversity of
undergraduate engineering programs and not randomly selected, the findings of this study may
not generalize to all engineering programs or engineering students. We also failed to recruit a
top-tier engineering PhD-granting institution or a historically Black college or university as a
site for data collection, further biasing our sampling design. Additionally, the demographic
characteristics of participants in the quantitative testing strand do not mirror national enrollment
data: our sample had 9% more women, 37% more racial and ethnic minorities, and 9% fewer
students eligible for a Pell Grant. Another limitation was the volume of nonrandom missing
students eligible for a Pell Grant. Another limitation was the volume of nonrandom missing
data; there was a clear pattern of participants leaving the survey after each page. The final
instrument is short enough to have all items randomized on one page, thereby diminishing
participant fatigue and nonrandom missing data.

Discussion
Contributions to Mixed Methods Research Literature
This study presents four methodological contributions to the field of MMR: (a) illustrating a
study that incorporated the best practices for MM–GT as identified by Guetterman et al. (2019),
(b) providing a design of MM–GT used for scale development, (c) demonstrating how a scale
developed from a grounded theory model could potentially bypass EFA and go directly to CFA
for testing its psychometric properties, and (d) providing an example of a joint display for data
collection planning.

Example of MM-GT Best Practices. First, this research provides extensive methodological details
to illustrate how best practices for MM–GT can be incorporated and articulated. Although best
practices for MM–GT have been advanced, publications seldom include all elements
(Guetterman et al., 2019). This exemplar shows that it is possible to include all 10 best practice
MM–GT elements within a publication. Table 7 maps the best practices for MM–GT elements
to their illustration in this MM–GT study. These elements further highlight the value of
intersecting mixed methods with grounded theory, including the development of a formal
grounded theory that is further modified and iterated through subsequent quantitative phases of
research.

Table 7. Best Practices for Mixed Methods–Grounded Theory Elements in This Study
(evidence from Howell Smith's, 2011, dissertation).

1. Read and cite appropriate mixed methods methodological literature. Evidence: Cites
Creswell & Plano Clark (2018) and Plano Clark & Ivankova (2016), among others.
2. Read and cite appropriate grounded theory methodological literature. Evidence: Cites Glaser
& Strauss (1967) and Charmaz (2014), among others.
3. Ensure that methods match the research question from both mixed methods and grounded
theory perspectives. Evidence: The research questions call for the development of a theoretical
model (grounded theory) as well as the triangulation between the psychometric properties of an
instrument developed based on that model and the grounded theory itself (mixed methods).
4. Describe the reason for using mixed methods and specify which design. Evidence: Specifies
a sequential exploratory design and indicates that the rationale for using mixed methods
research was development and offsetting weaknesses with strengths.
5. Describe the reason for using grounded theory and specify which approach. Evidence:
Specifies the use of constructivist grounded theory for the purpose of developing a theoretical
model that would provide the foundation for the development of a new instrument.
6. Clearly identify the mixed methods procedures being used. Evidence: Provides a procedural
diagram and describes methodological procedures throughout the study.
7. Clearly identify the grounded theory procedures being used. Evidence: Describes the use of
theoretical sampling, emergent coding, constant comparison, memo writing, theoretical
saturation, and the development of a theoretical model.
8. Employ mixed methods legitimation strategies to address potential threats to validity.
Evidence: Employs four legitimation strategies to address potential threats to validity.
9. Employ strategies for validating the grounded theory findings. Evidence: Uses prolonged
engagement, triangulation, rich description, member checking, clarifying researcher biases, and
peer review and debriefing.
10. Use standards for evaluating quality of the mixed methods components. Evidence:
Addresses all five guidelines for Good Reporting of a Mixed Methods Study (GRAMMS;
O'Cathain, Murphy, & Nicholl, 2008).
Through this study, grounded theory and MMR complemented one another in multiple ways.
Perhaps the most salient best practices for intersecting MMR and grounded theory were the pre-
sentation of a formal theory and the integration of qualitative and quantitative research. The
theoretical model was critical to the study. Validating a grounded theory model is potentially
challenging, but intersecting with a mixed methods approach offers a way to test the theory
quantitatively. From a mixed methods perspective, achieving meaningful integration is a chal-
lenge. However, in this study, the grounded theory model provided a theoretical basis and per-
mitted building integration (Fetters, Curry, & Creswell, 2013) to use findings to develop the
instrument items.

Example of MM-GT for Instrument Development. The second methodological contribution of this
article is to provide an MM–GT design for the purpose of scale development. While others have
discussed MM–GT use for intervention studies (Creamer, 2018; Shim, 2015), we present an
example that develops an instrument that is grounded in a theoretical model based on the
perspectives of the target audience. Our study featured multiple points of integration of qualitative
and quantitative approaches: building the initial instrument on the theoretical model, using the
iterative feedback interview approach to gather recommendations to improve the instrument,
revising the theoretical model based on the factor loadings of the EFA, consulting the theoretical
model when making decisions about item retention during the EFA, and validating the model
with the CFA. By knitting together the quantitative and qualitative strands throughout the study,
the theoretical model evolved considerably throughout the process. Figure 4 documents the
evolution in a joint display of the initial three-factor model, the expanded five-factor model, and
the revised four-factor model, alongside the relevant research study phase. This juxtaposition
captures the added value MM–GT brought to this instrument development study in terms of
facilitating the theoretical evolution and strengthening the validity evidence of the instrument.

Figure 4. Joint display of the evolution of a mixed methods–grounded theory theoretical model.

MM-GT May Provide Grounding for CFA. On a third and related point, this study raises the
question of whether it is necessary to begin with EFA before proceeding to CFA when the
instrument is clearly grounded in a theoretical model. In this case, the model was developed through
the study using grounded theory procedures. We argue that the use of MM–GT for instrument
development can enable researchers to proceed directly to CFA. Of course, some judgment is
necessary, but a well-developed grounded theory model can provide enough theoretical ground-
ing for CFA.

Example of a Joint Display for Data Collection Planning. A final contribution of this study is the
example of a joint display for data collection planning. This side-by-side item generation matrix
provides a tangible structure to link the theoretical components from the grounded theory model
with the items developed to measure them, thus strengthening the integration between the quali-
tative and quantitative strands.

Conclusion
MM–GT has emerged as a powerful hybrid approach that combines quantitative and qualitative
data to more thoroughly study real-world phenomena across disciplines and settings. Based on
the epistemological and theoretical foundations along with procedural aspects of its original
formulation, we believe that it is an ideal methodology to be used in tandem with mixed meth-
ods design and application. In this study, we extend previous work on the best practices for
MM–GT that underscores the intersection of grounded theory with mixed methods and its
potential for social research. Whereas some may view any form of checklist approach that involves
qualitative methodology as potentially restrictive and inhibiting, we believe that all researchers,
regardless of their epistemological or theoretical bent or disciplinary orientation or training, can
benefit by thinking through the key elements we have outlined in the design and implementa-
tion of an MM–GT study.
The exemplar we presented employs MM–GT as a distinct mixed methods design for inter-
secting MMR and grounded theory, as introduced by Plano Clark and Ivankova (2016). At the
heart of this discussion, we present the added value of MM–GT for both iterative theory devel-
opment and instrument development. Other literature has argued for merging qualitative and
quantitative data to build theory (Creamer, 2018), and we have presented a research project that
both models the use of mixed methods instrument development design and serves as an exem-
plar of MM–GT. Regarding instrument development, we argue that an instrument based on a
grounded theory can forgo an EFA and proceed to a CFA directly. In this article, we took steps
to contribute to the active and ongoing conversation that has emerged surrounding the use of
MM–GT and welcome future research and discussion that can further advance and refine this
methodology.

Acknowledgments
We would like to thank Vicki Plano Clark, Department of Educational Psychology, University of
Nebraska-Lincoln, now at School of Education, University of Cincinnati, for her thoughtful advice and
peer debriefing during this study and Chaorong Wu, Department of Educational Psychology, University of
Nebraska-Lincoln, now at Institute for Clinical and Translational Science, University of Iowa, for his
assistance in programming the quantitative analysis.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or pub-
lication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or pub-
lication of this article: This work was supported in part by the National Science Foundation (award EEC-
0935108). The opinions, views, and conclusions expressed in this article may not reflect those of the fund-
ing agency.
ORCID iDs
Michelle C. Howell Smith https://orcid.org/0000-0001-7397-4912
Timothy C. Guetterman https://orcid.org/0000-0002-0093-858X
References
Babchuk, W.A. (2015). Pragmatist grounded theory: Advancing mixed methods for educational inquiry. In
B. Chang (Ed.), Proceedings of the 34th Annual Research-to-Practice in Adult and Higher Education
(pp. 10-16). Oklahoma City, OK: University of Central Oklahoma.
Birks, M., & Mills, J. (2015). Grounded theory: A practical guide (2nd ed.). Thousand Oaks, CA: Sage.
Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and
programming (3rd ed.). New York, NY: Routledge.
Center for Online Education. (n.d.). STEM opportunities for women and minorities. Retrieved from https://
www.onlinecolleges.net/for-students/women-and-minorities-stem/
Charmaz, K. (2014). Constructing grounded theory: A practical guide through qualitative analysis (2nd
ed.). Thousand Oaks, CA: Sage.
Collins, K. M., Onwuegbuzie, A. J., & Jiao, Q. G. (2007). A mixed methods investigation of mixed
methods sampling designs in social and health science research. Journal of Mixed Methods Research,
1(3), 267-294. doi:10.1177/1558689807299526
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four
recommendations for getting the most from your analysis. Practical Assessment, Research &
Evaluation, 10(7), 1-9.
Creamer, E. G. (2018). Enlarging the conceptualization of mixed method approaches to grounded
theory with intervention research. American Behavioral Scientist, 62, 919-934. doi:10.1177/
0002764218772642
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice,
39, 124-130. doi:10.1207/s15430421tip3903_2
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.).
Thousand Oaks, CA: Sage.
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five
approaches (4th ed.). Thousand Oaks, CA: Sage.
DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). Thousand Oaks, CA: Sage.
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: An integrative
analytical framework. Organizational Research Methods, 4, 144-192. doi:10.1177/109442810142004
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of
exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs:
Principles and practices. Health Services Research, 48, 2134-2156. doi:10.1111/1475-6773.12117
George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference 11.0
update (4th ed.). Boston, MA: Allyn & Bacon.
Glaser, B. G. (2008). Doing quantitative grounded theory. Mill Valley, CA: Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative
research. Chicago, IL: Aldine.
Guetterman, T. C., Babchuk, W. A., Howell Smith, M. C., & Stevens, J. (2019). Contemporary approaches
to mixed methods-grounded theory research (MM-GT): A field-based analysis. Journal of Mixed
Methods Research, 13, 179-195. doi:10.1177/1558689817710877
Holton, J. A., & Walsh, I. (2017). Classic grounded theory: Applications with qualitative and quantitative
data. Thousand Oaks, CA: Sage.
Howell Smith, M. C., Arthur, A. M., Hawley, L. R., & Koziol, N. A. (2017, April). Expanding mixed
methods instrument development designs. Paper presented at the American Educational Research
Association Conference, San Antonio, TX.
Howell Smith, M. C. (2011). Factors that facilitate or inhibit interest of domestic students in the
engineering PhD: A mixed methods study (Unpublished doctoral dissertation). University of
Nebraska, Lincoln, NE. Retrieved from https://digitalcommons.unl.edu/cehsdiss/121/
Iarossi, G. (2006). The power of survey design: A user’s guide for managing surveys, interpreting results,
and influencing respondents. Washington, DC: The World Bank.
Janice, A., & Voight, M. (2016). Toward convergence: A technical guide for the postsecondary metrics
framework. Washington, DC: Institute for Higher Education Policy. Retrieved from http://
www.ihep.org/sites/default/files/uploads/postsecdata/docs/resources/ihep_toward_convergence_ch5_med
_2.pdf
Johnson, R., McGowan, M., & Turner, L. (2010). Grounded theory in practice: Is it inherently a mixed
method? Research in the Schools, 17, 65-78.
Johnson, R., & Walsh, I. (2019). Mixed grounded theory: Merging grounded theory with mixed methods
and multimethod research. In A. Bryant & K. Charmaz (Eds.), The SAGE handbook of current
developments in grounded theory (pp. 517-531). Thousand Oaks, CA: Sage.
McKim, C. A. (2017). The value of mixed methods research: A mixed methods study. Journal of Mixed
Methods Research, 11(2), 202-222. doi:10.1177/1558689815607096
Newcomb, M. D. (1994). Drug use and intimate relationships among women and men: Separating specific
from general effects in prospective data using structural equation models. Journal of Consulting and
Clinical Psychology, 62, 463-467. doi:10.1037/0022-006X.62.3.463
O’Cathain, A., Murphy, E., & Nicholl, J. (2008). Multidisciplinary, interdisciplinary, or dysfunctional?
Team working in mixed-methods research. Qualitative Health Research, 18, 1574-1585. doi:
10.1177/1049732308325535
Onwuegbuzie, A. J., Bustamante, R. M., & Nelson, J. A. (2010). Mixed research as a tool for
developing quantitative instruments. Journal of Mixed Methods Research, 4(1), 56-78. doi:
10.1177/1558689809355805
Onwuegbuzie, A. J., & Combs, J. P. (2010). Emergent data analysis techniques in mixed methods research:
A synthesis. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social and
behavioral research (2nd ed., pp. 397-430). Thousand Oaks, CA: Sage.
Onwuegbuzie, A. J., & Johnson, R. (2006). The validity issue in mixed research. Research in the Schools,
13, 48-63.
Plano Clark, V. L., & Ivankova, N. V. (2016). Mixed methods research: A guide to the field. Thousand
Oaks, CA: Sage.
Satorra, A., & Bentler, P. M. (1994). Corrections to test statistics and standard errors in covariance
structure analysis. In A. von Eye & C. C. Clogg (Eds.), Latent variables analysis: Applications for
developmental research (pp. 399-419). Thousand Oaks, CA: Sage.
Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences.
Boulder, CO: Westview.
Shim, M. (2015). A model of dance/movement therapy for resilience-building in people living with chronic
pain: A mixed methods grounded theory study (Unpublished doctoral dissertation). Philadelphia, PA:
Drexel University. Retrieved from https://idea.library.drexel.edu/islandora/object/idea%3A6802
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing
grounded theory (2nd ed.). Thousand Oaks, CA: Sage.
Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). Boston, MA: Allyn &
Bacon/Pearson Education.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social & behavioral
research. Thousand Oaks, CA: Sage.
Verbi GmbH. (2017). MAXQDA [Software]. Berlin, Germany: Author. Retrieved from http://
www.maxqda.com/
Walsh, I. (2015). Using quantitative data in mixed-design grounded theory studies: An enhanced path to
formal grounded theory in information systems. European Journal of Information Systems, 24,
531-557. doi:10.1057/ejis.2014.23
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and
recommendations for best practices. The Counseling Psychologist, 34, 806-838. doi:
10.1177/0011000006288127