
Thinking Skills and Creativity 38 (2020) 100734


The relationship of the quality of creative problem solving stages to overall creativity in engineering students

Lamies J. Nazzal a, James C. Kaufman b,*

a California State University, San Bernardino, United States
b Neag School of Education, University of Connecticut, United States

ARTICLE INFO

Keywords: Engineering; Creative problem solving; Education; Creativity; STEM

ABSTRACT

Despite the growing call for enhancing creativity in STEM fields, creativity is still largely absent in contemporary engineering education. Developing creativity-supportive environments in engineering education requires an understanding of the creative process itself, particularly as it applies to engineering. Engineering students from a public university in the Northeastern region of the United States were asked to solve an engineering-related problem across four stages of the creative problem solving process. The Consensual Assessment Technique was then used to assess the quality of each stage as well as the overall creativity of the entire response. Two path analysis models were conducted to investigate the association between the creative problem solving stages and overall creativity. The first path analysis model computed the mean quality of the ideas generated, whereas the second path analysis considered the total quality of the ideas generated. The different interpretations of idea generation greatly impacted which stages were most related to each other and overall creativity.

1. Introduction

The engineering profession revolves around designing solutions to problems. Hence, creative problem solving is a vital tool for
engineers who are responsible for developing these solutions (Charyton & Merrill, 2009; Cropley & Cropley, 2005; Passow & Passow,
2017). Driven by existing needs, engineers identify problems and generate many solutions before implementing the optimal one.
Despite its importance, engineering education does not always foster creativity (Cropley, 2015a; Terkowsky et al., 2016). Yet there
has been a surge of scholarship on this topic over the last decade. Some studies have examined specific ways of measuring engineering
creativity. These have ranged from new scoring approaches on engineering idea generation (Kershaw, Bhowmick, Seepersad, &
Hölttä-Otto, 2019) to comparing engineers' and industrial designers' judgments of products (Cropley & Kaufman, 2019). Other studies
have examined how different instructions or teaching techniques can inspire higher levels of creative problem-solving in engineering
students (Bourgeois-Bougrine, Buisine, Vandendriessche, Glaveanu, & Lubart, 2017; Dumas, Schmidt, & Alexander, 2016).
Creative problem solving is a vast field that encompasses more stages than are necessarily studied in most empirical investigations
(in engineering or other fields). Indeed, the concept of creative problem solving (Wallas, 1926) predates the field of creativity itself
(Guilford, 1950). Many modern models consider creative problem solving to be an everyday process that is obtainable with sufficient

* Corresponding author at: Neag School of Education, University of Connecticut, 2131 Hillside Road, Unit 3007, Storrs, CT, 06269-3007, United
States.
E-mail address: james.kaufman@uconn.edu (J.C. Kaufman).

https://doi.org/10.1016/j.tsc.2020.100734
Received 7 January 2020; Received in revised form 19 September 2020; Accepted 27 September 2020
Available online 4 October 2020
1871-1871/© 2020 Elsevier Ltd. All rights reserved.

dedication and work (Mumford, Reiter-Palmon, & Redmond, 1994; Reiter-Palmon & Illies, 2004).
Although a variety of models have been proposed (see Sawyer, 2012, for an overview), many are at least partially rooted in the split
between divergent and convergent thinking. For example, the Geneplore model (Finke, Ward, & Smith, 1992) has two primary phases,
Generation and Exploration. In the generative phase, a person gathers mental representations of possible solutions, whereas in the
explorative phase, these possibilities are explored until one is chosen to be pursued. More recent models include the initial stage of
problem recognition (Mumford, Mobley, Uhlman, Reiter-Palmon, & Doares, 1991; Reiter-Palmon & Robinson, 2009), which is when
someone determines the particular problem that needs to be solved.
Although the exact number of stages can vary widely, four stages are present in most modern theories: problem recognition, idea
generation, idea evaluation, and then solution validation, in which the idea is tested. These four stages were articulated by Cropley
(2015b), drawing heavily from Guilford (1959), to be the foundation for understanding the main stages needed for creative problem
solving in engineering. Likewise, as presented in Table 1, these four stages correspond nicely to the four criteria emphasized in the Next Generation Science Standards (NGSS) to define engineering design: identifying the problem in terms of criteria for success and constraints, generating potential solutions, evaluating the solutions in light of the criteria and constraints, and choosing the best solution to implement (NGSS Lead States, 2013). Further, they tie into the phases of the design thinking concept (e.g., Brown, 2008) that have been adopted by several businesses and institutions as an approach for investigating ill-defined problems, focusing on the process of problem solving rather than the product per se.
The process of problem solving is considered the core of engineering practice, wherein creativity is a vital tool for human development (Cropley, 2015b). We will now briefly review the four stages outlined by Cropley (2015b) and others for creative problem solving.

1.1. Four-stage creative problem solving model

The engineering profession involves dealing with real-world problems. However, real-world problems are rarely presented in a completely straightforward manner (Mumford, Baughman, & Sager, 2003). Thus, engineering students should be trained to creatively solve open-ended, ill-defined problems. The following four stages represent a process to solve such problems and increase innovation.

1.2. Problem recognition

Creativity often emerges when individuals work on unspecified problems (Csikszentmihalyi, 1965). Many creativity theorists
therefore believe that problem finding is as important as problem solving for the creative process (Reiter-Palmon, Mumford, O’Connor
Boes, & Runco, 1997). Cropley (2015b) includes this stage as the first in the creative process: the identification of a problem from among several candidates. This phase requires one to precisely recognize and define the problem that needs to be solved. Problem recognition entails finding good problems (Getzels & Csikszentmihalyi, 1976). A "good" problem is one that will be helpful to the situation at hand and will yield generalizable solutions beyond the present need. Although this stage is sometimes overlooked, in real life a problem is not simply presented to someone, ready to be solved. The time and effort spent on this stage often result in notably better and more creative solutions (Reiter-Palmon, Mumford, & Threlfall, 1998).
Prior research shows that undergraduates with higher problem recognition ability produced solutions of higher quality and
originality in response to leadership, social, and school problems (Reiter-Palmon et al., 1997). In particular, engineering education
research emphasizes the impact of problem framing on innovative ideation, creative design, and solution shifts (Silk, Daly, Jablokow,
Yilmaz, & Berg, 2014; Studer, Daly, McKilligan, & Seifert, 2018; Wright, Rutgers, Daly, Jablokow, & Yilmaz, 2015). In addition, viable
framing of an ill-defined problem was positively associated with the next stages of the creative process, including generating potential
ideas (Mumford, Baughman, Threlfall, Supinski, & Costanza, 1996).

1.3. Idea generation

After the problem has been recognized, different solutions are suggested as possible answers. The task for this stage is to generate as many different ideas as possible that might serve as potential solutions to the problem identified in the first stage. Cropley (2015b) argues that this
stage is when the creator moves from having one identified problem (convergence) to a variety of ideas (divergence).
Although this stage consists primarily of coming up with ideas, it nonetheless also requires knowledge and expertise in the specific

Table 1
Four-Stage Model of Creative Problem Solving, Engineering Design, and Design Thinking.

| Four-Stage Model (Cropley, 2015b) | Description | Engineering Design (NGSS Lead States, 2013) | Design Thinking (Brown, 2008) |
| --- | --- | --- | --- |
| Problem recognition | Recognition that a problem exists | Identifying the problem | Problem finding and framing |
| Idea generation | Production of a variety of relevant ideas | Generating potential solutions | Ideation/brainstorming |
| Idea evaluation | Evaluation of the various possibilities produced | Evaluating the solutions | Prototyping |
| Solution validation | Drawing of appropriate conclusions that lead to the solution of the problem | Choosing the best solution | Testing & reiterating |


domain (Vincent, Decker, & Mumford, 2002). The goal at this stage is not only to generate ideas, but also to come up with as many
good potential solutions to the problem as possible. Information about the problem structure (from the first stage) along with concepts
of the given domain provide the basis for generating alternative ideas (Runco & Chand, 1994).
Creativity can be enriched by a close linkage between divergent and convergent thinking; real-world creativity requires both idea
generation and idea evaluation abilities (Kaufman & Beghetto, 2013). Silvia (2008) found a correlation between idea generation and
evaluation abilities; people who generated more ideas were better at evaluating them. Hence, doing well at this stage may enable
strong performance on the next.

1.4. Idea evaluation

In the third stage, the problem solver evaluates the generated potential solutions from the previous stage in terms of criteria for
success and then tests them against constraints to choose the optimal one. Cropley (2015b) asserted that this phase is when the
problem-solver needs to eliminate nearly all of the ideas generated in the second phase to arrive at a single solution; this stage is
convergent in nature. He further argued that whereas the potential for success drives the stage of “idea generation,” the constraints are
typically what play a vital role in the “idea evaluation” stage—they provide the criteria against which the alternatives would be
judged.
In this phase, the creator needs to draw on his or her knowledge about the domain in order to evaluate the ideas (Runco & Chand,
1995), in addition to other evaluative criteria such as novelty and appropriateness (Sawyer, 2012). Accordingly, the novel solution that
both addresses the need and meets the constraints will be chosen. Blair and Mumford (2007) identified different attributes that people
use when evaluating ideas. These attributes included fitting social norms, producing the desired outcome quickly, being complex to
implement yet easy to understand, and benefitting many people. A related study revealed that people consider two issues when they
evaluate ideas: the resources needed to implement the idea, and the consequences of implementing the idea (Dailey & Mumford,
2006). Indeed, these components represent the last stage of the creative process.

1.5. Solution validation

Given that the ultimate purpose is to successfully solve the problem, the final stage of most models is validating the best solution
and then implementing it. This stage is convergent in nature and requires connecting the chosen solution back to the initial problem; it
simply brings all the stages together to confirm that all stages were executed correctly and were integrated into an actual solution of the
problem at hand (Cropley, 2015b). Although creativity research tends to focus on earlier stages of the creative process, a great deal of
creativity occurs at this stage (Sawyer, 2012).
Creators, particularly those in applied domains (such as technological invention, engineering, and entrepreneurship), need to be
skilled at executing ideas, identifying the necessary resources to make them successful, forming plans for implementation, and
anticipating how to adjust the plans in the future (Mumford, 2003; Policastro & Gardner, 1999). Sawyer (2012) argues this phase
requires more than a straightforward execution of the idea; it entails creatively making the idea a reality as well as generating potential
follow-up ideas. He further notes that for successful creators, this stage is essential to the problem-finding phase. This view ties all of
the four stages together.

2. Quality of the stages vs overall creativity

Multiple studies and theories (e.g., Reiter-Palmon et al., 1997; Sawyer, 2012; Silvia, 2008) discuss the relationships of individual
stages to overall creativity. Traditionally, it is the creativity of these specific stages that is measured (Mumford et al., 1996). Yet if one
takes a domain-general perspective of creativity (e.g., Kaufman & Baer, 2004; Plucker & Beghetto, 2004), then the connection between
creativity on a specific stage and the overall product may be a result of someone simply being a more creative person. One way of
accounting for this issue is to look at each stage of the creative problem solving process, comparing the quality at each stage to the
overall creativity of the solved problem.
The goal of this study, therefore, was to use a realistic problem scenario and have engineering students engage in all four stages of
the creative problem solving process: problem recognition, idea generation, idea evaluation, and solution validation. Each stage was
then rated for quality by quasi-experts (following procedures from Kaufman & Baer, 2012; Kaufman, Baer, Cropley, Reiter-Palmon, &
Sinnett, 2013) using the Consensual Assessment Technique (CAT, Amabile, 1996). Overall creativity in the process was then evaluated
by a different group of quasi-expert raters.
The two research questions were:

(a) Does the quality of each stage of the creative problem solving process predict the next stage?
(b) Does the additive quality of all stages predict the overall creativity of the response?

3. Method

3.1. Participants

As part of a larger study on different facets of creative problem solving in engineering education, 505 engineering students at a


public university in the Northeastern region of the United States were recruited across different engineering disciplines and at different levels in their programs. A convenience sampling method was used. All students were directed to a website where they filled out the online consent form and then completed all measures through the online survey. Of the 505 participants (average age 18.71 years, SD = 1.43), 74.7% were male, 24.6% were female, and 0.7% did not indicate. The sample was predominantly first-year students (73.5% first-year, 11.5% second-year, 6.3% third-year, and 8.7% fourth-year).

3.2. Instruments/Procedures

3.2.1. Measures of stages of creative problem solving process


The measures of creative thinking skills were derived from the engineering-relevant multi-stage problem. This problem was created
through a multi-step process. First, a panel of several engineering professors created different scenarios involving general engineering
problems. Next, the panel reviewed all scenarios with regard to three criteria: (a) real-world applications, (b) relevance to multiple
engineering disciplines, and (c) potentially open-ended with room for creativity. Additionally, the problems were checked for clarity
and realism. This process led to the selection of one prompt that was used in the study:
“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The electricity bill was twice
what I am used to; I had to pay triple what I was paying in my last place for oil. What can I do?”
This problem included four stages in which students were asked to write their responses to each stage (the exact prompt is included
as Appendix A). Each response for each stage was later rated as part of the measurement of the creative process. Participants were
asked to read through the scenario and then assume the role of the responsible engineer for their response. After reading the scenario,
students were presented with four questions that led them through the four stages of the creative problem solving process. The
questions were as follows:

1 Identify an engineering-related problem that you find in this scenario and explain it in detail in one or two sentences.
2 Think of potential solutions to this problem. List as many different ideas as you can that might solve it.
3 Out of all of your potential solutions, which idea would you select as your best idea that you would choose to implement to solve the
problem?
4 Finally, think of how you would validate and carry out your solution. In two or three sentences, explain what your plans are for implementation.

The questions mapped onto the four creative problem-solving stages targeted for assessment: (a) Question 1 (problem recognition), (b) Question 2 (idea generation), (c) Question 3 (idea evaluation), and (d) Question 4 (solution validation).
Scoring Protocol. After data collection, all responses were entered into a spreadsheet. Participants’ responses
were scored using the Consensual Assessment Technique (Amabile, 1996). According to this method, experts from the relevant field
are the best and the most appropriate judges of creativity in the domain (Kaufman, Baer, & Cole, 2009). However, the question of what
constitutes an expert is still being studied and debated. Given the difficulty in obtaining high-level experts to serve as raters for a large
sample, Kaufman and Baer (2012) proposed the use of quasi-experts – that is, people with experience in the particular field who are
clearly above the level of a novice yet who may not have completed the ten years of deliberate practice typically needed to be qualified
as an expert (Ericsson, 1996; Plucker, Kaufman, Temple, & Qian, 2009). Studies have shown that advanced students in a field tend to
agree at a reasonable level with actual experts (Kaufman et al., 2013; Kaufman, Gentile, & Baer, 2005), and both experts and
quasi-experts in the domains of teaching and creativity research also agree with domain experts when judging student work (Baer,
Kaufman, & Riggs, 2009; Kaufman et al., 2013).
Consistent with the methodology proposed by Kaufman and Baer (2012) and demonstrated by Kaufman et al. (2013), this study
used several quasi-experts from within the fields of education, creativity research, and the domain in question (engineering).
Twelve expert and quasi-expert engineers (engineering graduate students and professional engineers from different disciplines) rated
the quality of creative problem solving stages (four rated the quality of problem recognition stage; a different four raters assessed the
quality of idea generation stage; and a group of four different raters assessed the quality of solution validation stage). In addition, five
graduate students in the field of education with expertise in creativity research rated the entire set of responses to all four stages for
overall creativity (called the “snapshot” method; Silvia, Martin, & Nusbaum, 2009).
Consistent with past Consensual Assessment Technique work (Amabile, 1996; Kaufman & Baer, 2012), raters assigned scores to each of the responses based on specific criteria. Following the original guidelines established by Amabile (1996), the raters relied on their own beliefs about creativity and the domain to assign scores; no definition of creativity was provided. Also in keeping with Amabile's (1996) methods, there was no communication between raters at any point; all judgments were made without conferring with the researchers or fellow raters.
Because, ideally, each set of experts rates all products within a particular category (i.e., all of the Idea Generation responses), multiple groups of experts were used to avoid rater fatigue. The overall creativity ratings were assigned by quasi-experts in creativity scholarship (as
opposed to engineering) for several reasons. First, Cropley (2015a) has specifically questioned how well some engineers may perceive
creativity in a larger, holistic manner. By using specific experts in creativity, this potential issue was avoided. Second, it is easy to
conflate related constructs (such as quality and creativity) if someone is assigning multiple scores at the same time. We wanted to


ensure that different groups would be considering quality and creativity. In addition, the creativity quasi-experts had experience using
Silvia et al.’s (2009) snapshot scoring system, which was specifically used for the overall creativity rating. Finally, at a more practical
level, it was difficult to get so many quasi-expert engineers to assign ratings to a large amount of material; we wanted to make sure that
there would be ample raters for each stage and construct, and the creativity quasi-experts seemed qualified to assign the holistic
creativity scores.
Inter-rater reliability (consistency among the raters) was evaluated with Cronbach's (1951) alpha coefficient. Cronbach's alpha is a standard measure of internal consistency and has been used in creativity research as a measure of inter-rater reliability, treating raters as items (see Kaufman, Plucker, & Baer, 2008). Procedures for scoring the four stages differed slightly; the following sections give more details about how the raters scored each stage. For all questions, raters were given a copy of the original problem and the specific question being rated. Scores were given across a six-point Likert scale (1 = lowest quality, 6 = highest quality).
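As a rough illustration of treating raters as items, Cronbach's alpha can be computed directly from a participants × raters matrix. The ratings below are invented for the example and are not the study's data:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha over a participants x raters matrix,
    treating each rater as an 'item'."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of raters
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-6 quality ratings from four raters for five responses.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [6, 5, 6, 6],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
])
alpha = cronbach_alpha(scores)  # high agreement for these made-up ratings
```

Raters who rank responses consistently yield a value near 1, which is the sense in which alpha serves as inter-rater reliability here.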

3.2.2. Question 1—Problem recognition stage


Cronbach’s inter-rater reliability was r = .80, indicating high agreement among the four raters. A “problem recognition” score was
thus generated based on the average (mean) score across all raters. All specific instructions that raters received for scoring each stage
are presented in Appendix B.

3.2.3. Question 2—Idea generation stage


Raters were given every idea generated by the participants. Each response to question 2 was considered its own idea. To facilitate
ratings, ideas were first separated into distinct entries when multiple ideas were listed in one thought. This procedure was done
because the quality of ideas may vary even when initially merged into one long concept. The raters then rated every idea generated by
the participants.
Inter-rater reliability was computed using Cronbach’s alpha, which was r = .778 among the four raters, indicating substantial
agreement. The scores were averaged across all raters to produce a quality score for each idea, and the scores for all of a participant’s
ideas were averaged to produce an aggregate score for idea generation. A second, additional score was calculated for the idea generation stage to account for participant fluency. This score was produced by taking the total quality score for all the ideas generated by a
student. As a result, consistent with past work on divergent thinking (Plucker, Qian, & Wang, 2011), two idea generation scores were
produced: Idea Generation (Average) score (thereby giving more value to participants with particularly high quality ideas), and Idea
Generation (Total) score (thereby giving more value to participants with higher fluency).
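The two idea generation scores differ only in the final aggregation step. A minimal sketch, using hypothetical rater-averaged idea scores for one participant:

```python
import numpy as np

# Hypothetical rater-averaged quality scores (1-6 scale) for one
# participant's separated ideas; not actual study data.
idea_scores = np.array([4.25, 3.50, 5.00, 2.75])

ig_average = idea_scores.mean()  # rewards particularly high-quality ideas
ig_total = idea_scores.sum()     # rewards fluency: each extra idea raises it
```

A participant who adds a mediocre fifth idea would raise `ig_total` but lower `ig_average`, which is why the two scores can behave differently in the path models.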

3.2.4. Question 3—Idea evaluation stage


For this stage, the participant was shown all of the ideas generated in the second stage and was then asked to select his or her best
idea. The choice was evaluated as a ratio of the highest possible score. That is, the rater-assigned score of the selected pick was divided
by the highest rater-assigned score that would have been possible for each participant. So, for example, if a person chose an idea that
was given an average score of “4” by the raters and no other idea was given a higher score, then that person’s score for the Idea
Evaluation stage would be 1 (4 divided by 4). If, however, the person chose the same idea given a “4” score but had a different idea that
received a “6,” then their score for the Idea Evaluation stage would be .667 (4 divided by 6).
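The ratio scoring described above can be written directly; the two calls reproduce the worked examples from the text (the surrounding idea lists are hypothetical):

```python
def idea_evaluation_score(chosen_score: float, all_scores: list) -> float:
    """Ratio of the rater-assigned score of the participant's chosen idea
    to the highest rater-assigned score among all of that participant's ideas."""
    return chosen_score / max(all_scores)

# Chose the best available idea (rated 4, no idea rated higher) -> 1.0
best_pick = idea_evaluation_score(4.0, [4.0, 3.0, 2.5])

# Chose a 4-rated idea while a 6-rated idea existed -> 4/6 = .667
suboptimal_pick = idea_evaluation_score(4.0, [4.0, 6.0])
```

The score is therefore bounded by 1, with lower values indicating that the participant passed over a better-rated idea.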

3.2.5. Question 4—Solution validation stage


As with the first two questions, the raters were given a copy of the original problem and specific questions to use as a core basis,
along with their expertise, for scoring participants’ responses. Cronbach’s alpha coefficient was r = .774, indicating substantial
agreement among the four raters. A “solution validation” score was thus generated based on the average (mean) score across all raters.
Finally, a completely different set of raters assessed the holistic creativity of each participant’s responses to all four
questions using the same six-point scale. The Cronbach’s alpha coefficient of r = .704 indicated substantial agreement among the
raters. An overall creativity score was thus generated for each participant.
Each participant received a total of six scores. Five were quality scores from the different stages of the creative problem solving
process: problem recognition quality score, idea generation quality average score, idea generation quality total score, idea evaluation
quality score, and a solution validation quality score. Finally, each participant also received an overall creativity score.

3.2.6. Data analyses


Two path analysis models were conducted (using Amos software) to answer our two research questions by investigating the relationships among stages of the creative problem solving process, as well as the association between the quality of each stage of the creative problem solving process and overall creativity. The first path analysis model computed the mean quality of the ideas generated in the second stage (thereby giving more value to participants with particularly high quality ideas), whereas the second path analysis considered the total quality of the ideas generated (thereby giving more value to participants with higher fluency).
Since the sample was so predominantly first-year students (73.5% first-year, compared to 11.5% second-year, 6.3% third-year, and 8.7% fourth-year), the variable of year was coded as First and Advanced (second, third, and fourth years). In an effort to examine potential differences in the relationships found in the models across year of school (First vs. Advanced), further analyses were conducted to test group differences within each of the previous path analysis models in AMOS using the Chi-Square difference test (Gaskin, 2011).
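Although the study fit its models in Amos, a fully recursive path model of this kind can be sketched as a series of OLS regressions on standardized variables, with each endogenous stage regressed on all upstream stages. The data below are simulated placeholders (not the study data), and the generating coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 505

# Simulated stand-ins for the five observed variables; the coefficients
# used to generate them are arbitrary, purely for illustration.
pr = rng.normal(size=n)                          # problem recognition quality
ig = 0.3 * pr + rng.normal(size=n)               # idea generation quality
ie = 0.2 * ig + rng.normal(size=n)               # idea evaluation quality
sv = 0.2 * pr + rng.normal(size=n)               # solution validation quality
creativity = 0.2 * pr + 0.35 * sv + rng.normal(size=n)

def std_betas(y, *predictors):
    """Standardized path coefficients: OLS of z-scored y on z-scored predictors."""
    z = lambda v: (v - v.mean()) / v.std(ddof=1)
    X = np.column_stack([z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

# Each endogenous variable is regressed on every upstream stage, mirroring
# the fully recursive structure of the reported models.
b_ig = std_betas(ig, pr)
b_ie = std_betas(ie, pr, ig)
b_sv = std_betas(sv, pr, ig, ie)
b_cr = std_betas(creativity, pr, ig, ie, sv)
```

For a recursive model with no latent variables, these equation-by-equation estimates correspond to the standardized path coefficients an SEM package would report.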


4. Results

4.1. Descriptive statistics

The creative problem-solving processes that were targeted for assessment were: (a) problem recognition (Question 1); (b) idea
generation (Question 2); (c) idea evaluation (Question 3); and (d) solution validation (Question 4). Several scores were generated for
each participant: problem recognition, idea generation (average), idea generation (total), idea evaluation, solution validation, and
overall creativity. Table 2 gives the descriptive analyses of all the variables.

4.2. Path analysis of quality of creative problem solving stages (average IG) and overall creativity

A path analysis model was conducted to investigate the predictive relationships among the four measures of creative problem
solving stages (problem recognition, average idea generation, idea evaluation, and solution validation) and overall creativity.
Fig. 1 provides a visual representation of the results of this path model (all path coefficients represented in the model are standardized estimates).
The findings of this path model indicate that the quality of the problem recognition stage significantly predicted the quality of the average idea generation stage (β = .29, p < .001), and the quality of the average idea generation stage significantly predicted the quality of the idea evaluation stage (β = .21, p < .001). However, the quality of the idea evaluation stage did not predict the quality of the solution validation stage (β = .04, p > .05). In addition, the model indicated that the quality of the problem recognition stage significantly predicted the quality of the solution validation stage (β = .20, p < .001). The findings also suggest that the quality of the problem recognition stage and the quality of the solution validation stage positively predicted overall creativity (β = .18, p < .001 and β = .35, p < .001, respectively), whereas the quality of the average idea generation stage and the quality of the idea evaluation stage negatively predicted overall creativity (β = −.10, p < .05 and β = −.20, p < .001, respectively).
Furthermore, the findings of this path model offer details about the indirect predictive relationships between creative problem solving stages and overall creativity. For example, the problem recognition stage had a significant direct effect on overall creativity (β = .18, p < .001), as well as indirect effects through idea generation, idea evaluation, and solution validation. The total effect of the problem recognition stage on overall creativity can be calculated as the sum of all direct and indirect effects, where each indirect effect is estimated as the product of all direct effects along that path (Kline, 2005). Table 3 shows all of the direct and indirect effects of the quality scores of the creative problem solving stages on overall creativity, as well as the total effects.
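Kline's (2005) decomposition can be sketched in a few lines using the direct effects printed in Table 3 (PR/IG/IE/SV/CR abbreviate the five variables). Note that summing the tabled paths this way yields roughly .338 rather than the printed total of .332, apparently because the reported products are rounded:

```python
# Direct (standardized) effects read off the first path model,
# using the values printed in Table 3.
effect = {
    ("PR", "IG"): .29, ("PR", "IE"): -.06, ("PR", "SV"): .20, ("PR", "CR"): .29,
    ("IG", "IE"): .21, ("IG", "SV"): .07, ("IG", "CR"): -.10,
    ("IE", "SV"): .04, ("IE", "CR"): -.20,
    ("SV", "CR"): .35,
}

def path_effect(path):
    """Product of the direct effects along one path (Kline, 2005)."""
    product = 1.0
    for a, b in zip(path, path[1:]):
        product *= effect[(a, b)]
    return product

# Every directed path from problem recognition to overall creativity.
paths = [
    ("PR", "CR"),
    ("PR", "IG", "CR"),
    ("PR", "IE", "CR"),
    ("PR", "SV", "CR"),
    ("PR", "IG", "IE", "CR"),
    ("PR", "IG", "SV", "CR"),
    ("PR", "IE", "SV", "CR"),
    ("PR", "IG", "IE", "SV", "CR"),
]
total_effect = sum(path_effect(p) for p in paths)
```

Each entry of `paths` corresponds to one row of the effects decomposition, and the total effect is simply their sum.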

4.3. Path analysis of quality of creative problem solving stages (total IG) and overall creativity

The second path analysis model was conducted to investigate the predictive relationships among the four measures of creative problem solving stages (where the idea generation stage was computed as the total of the ideas generated) and overall creativity. This model gives more value to participants with higher fluency (participants who generated a large number of ideas).
Fig. 2 provides a visual representation of the results of this path model (all path coefficients represented in the model are standardized estimates).
The findings of this path model indicate that the quality of the problem recognition stage significantly predicted the quality of the total idea generation stage (β = .22, p < .001). However, the quality of the total idea generation stage negatively predicted the quality of the idea evaluation stage (β = −.13, p < .01), and the quality of the idea evaluation stage did not predict the quality of the solution validation stage (β = .08, p > .05). In addition, the model indicates that the quality of the problem recognition stage significantly predicted the quality of the solution validation stage (β = .17, p < .001), and the quality of the total idea generation stage significantly predicted the quality of the solution validation stage (β = .21, p < .001). The findings also suggest that the quality of the idea generation stage (taken as a total) and the quality of the solution validation stage positively predicted overall creativity (β = .44, p < .001 and β = .25, p < .001, respectively), whereas the quality of the problem recognition stage did not significantly predict overall creativity (β = .07, p > .05), and the quality of the idea evaluation stage negatively predicted overall creativity (β = −.16, p < .001).
Furthermore, the findings of this path model offer details about the indirect predictive relationships between creative problem
solving stages and overall creativity. For example, although the problem recognition stage had no significant direct effect on overall
creativity in this model, the total effect of problem recognition on the overall creativity can be calculated by the sum of all direct and
indirect effects (through idea generation, idea evaluation, and solution validation). Table 4 shows all of the direct and indirect effects

Table 2
Descriptive Statistics.
                          Minimum   Maximum    Mean   Std. Deviation

Problem Recognition          1.00      5.50    3.15     .86
Idea Generation-Average      1.58      5.25    3.83     .60
Idea Generation-Total        1.50     87.00   18.10   10.64
Idea Evaluation               .21      1.00     .90     .14
Solution Validation          1.00      5.25    2.63     .80
Overall Creativity           1.00      5.60    2.90     .82
Note. N = 505

L.J. Nazzal and J.C. Kaufman Thinking Skills and Creativity 38 (2020) 100734

Fig. 1. First Path Model of Creative Problem Solving Stages and Overall Creativity.

Table 3
Effects Decomposition for a Path Model of Creative Problem Solving Stages and Overall Creativity.
Overall Creativity
Causal Variable Standardized Estimates

Problem Recognition Quality


Direct Effect (PR→Creativity) .29
Indirect Effect 1 (PR→IG→Creativity) (.29)(− .10) = − .029
Indirect Effect 2 (PR→IE→Creativity) (− .06)(− .20) = .012
Indirect Effect 3 (PR→SV→Creativity) (.20)(.35) = .07
Indirect Effect 4 (PR→IG→IE→Creativity) (.29)(.21)(− .20) = − .0121
Indirect Effect 5 (PR→IG→SV→Creativity) (.29)(.07)(.35) = .002
Indirect Effect 6 (PR→IE→SV→Creativity) (− .06)(.04)(.35) = − .00084
Indirect Effect 7 (PR→IG→IE→SV→Creativity) (.29)(.21)(.04)(.35) = .00085
Total Effect .332
Idea Generation Quality
Direct Effect (IG→Creativity) − .10
Indirect Effect 1 (IG→IE→Creativity) (.21)(− .20) = − .042
Indirect Effect 2 (IG→SV→Creativity) (.07)(.35) = .0245
Indirect Effect 3 (IG→IE→SV→Creativity) (.21)(.04)(.35) = 0.0029
Total Effect −.115
Idea Evaluation Quality
Direct Effect (IE→Creativity) − .20
Indirect Effect 1 (IE→SV→Creativity) (.04)(.35) = .014
Total Effect −.186
Solution Validation Quality
Direct Effect (SV→Creativity) .35
Total Effect .35

of the quality scores of creative problem solving stages on overall creativity as well as the total effects.
The findings of the second path analysis model of creative problem solving stages and overall creativity showed some similar results,
with the central difference being the quality of idea generation. The quality of the solution validation stage and the idea evaluation
stage had the same effect on overall creativity (SV positively predicted overall creativity, and IE negatively predicted it), whereas the
quality of the idea generation stage (when measured using a total score) had a positive effect on overall creativity in this model. In
addition, the quality of the problem recognition stage had no significant direct effect on overall creativity. Furthermore, the total
effects of these stages revealed that the quality of idea generation (total) had the strongest effect on overall creativity, followed by the
quality of solution validation, the quality of problem recognition, and finally the quality of idea evaluation. This finding is reasonable,
as it reinforces the association between high fluency and overall creativity.


Fig. 2. Second Path Model of Creative Problem Solving Stages and Overall Creativity.

Table 4
Effects Decomposition for a Path Model of Creative Problem Solving Stages and Overall Creativity.
Overall Creativity
Causal Variable Standardized Estimates

Problem Recognition Quality


Direct Effect (PR→Creativity) .07
Indirect Effect 1 (PR→IG→Creativity) (.22)(.44) = .0968
Indirect Effect 2 (PR→IE→Creativity) (.02)(− .17) = − .0034
Indirect Effect 3 (PR→SV→Creativity) (.17)(.25) = .0425
Indirect Effect 4 (PR→IG→IE→Creativity) (.22)(− .13)(− .17) = .00486
Indirect Effect 5 (PR→IG→SV→Creativity) (.22)(.21)(.25) = .01155
Indirect Effect 6 (PR→IE→SV→Creativity) (.02)(.08)(.25) = .0004
Indirect Effect 7 (PR→IG→IE→SV→Creativity) (.22)(− .13)(.08)(.25) = − .00057
Total Effect .222
Idea Generation Quality
Direct Effect (IG→Creativity) .44
Indirect Effect 1 (IG→IE→Creativity) (− .13)(− .17) = .0221
Indirect Effect 2 (IG→SV→Creativity) (.21)(.25) = .0525
Indirect Effect 3 (IG→IE→SV→Creativity) (− .13)(.08)(.25) = − .0026
Total Effect .512
Idea Evaluation Quality
Direct Effect (IE→Creativity) − .17
Indirect Effect 1 (IE→SV→Creativity) (.08)(.25) = .02
Total Effect −.15
Solution Validation Quality
Direct Effect (SV→Creativity) .25
Total Effect .25

4.4. Group differences across year (first vs. advanced)

The models were tested for group differences based on year in school (first year students versus more advanced students). The
IG-Average model had one significant difference, in the pathway from solution validation to overall creativity. This path was significant
(β = 0.35, p < .001) in the overall model for all students. However, the strength of this path differed by group: it was stronger for first
year students (β = 0.42, p < .001) than for the advanced students (β = 0.28, p < .01). No other path in the IG-Average model differed
significantly between the two groups. See Fig. 3 below.
For the IG-Total model, there was also a sole significant difference between the groups (first vs. advanced), in the pathway from
problem recognition to overall creativity. This path was not significant (β = 0.07, p > 0.05) in the overall model derived from all
students. When the first year and advanced students were separated, this link was significant for the first year students (β = 0.12,
p < .01) but non-significant for the advanced students (β = −0.06, p > 0.05). See Fig. 4.


Fig. 3. Group Differences Across Year in the First Path Model.

5. Discussion

The relationship between the quality of the four stages of the creative problem solving process and overall creativity varied
depending on whether idea generation was averaged (thereby giving more weight to high quality) or summed (thereby giving more
weight to fluency). In the first path analysis model of creative problem solving stages and overall creativity, using the average score,
the quality of the problem recognition and solution validation stages positively predicted overall creativity, whereas the quality of idea
generation and idea evaluation stages negatively predicted overall creativity. The total effects of these stages revealed that quality of
solution validation had the strongest effect on overall creativity, followed by the effect of quality of problem recognition, the quality of
idea evaluation, and finally the quality of idea generation.
The second path analysis model, which used the summed score for idea generation, revealed slightly different results. The quality of
solution validation stage and idea evaluation stage still had the same effect on overall creativity (SV positively predicted overall
creativity and IE negatively predicted overall creativity). One difference in the model is that idea generation had a positive effect on
overall creativity. In addition, the quality of the problem recognition stage had no significant direct effect on overall creativity for all
students. It is important to note that when analyzed separately, this pathway was significant for first year students (β = 0.12, p < .01)
but not for advanced students (β = −0.06, p > 0.05). The total effects of these stages revealed that the quality of idea generation had
the strongest effect on overall creativity, followed by the quality of solution validation, the quality of problem recognition, and finally
the quality of idea evaluation.

Fig. 4. Group Differences Across Year in the Second Path Model.
Both path models emphasized the importance of solution validation and problem recognition in the creative process. Problem
recognition has often been shown to have strong predictive power for overall creativity (Reiter-Palmon & Robinson, 2009). Solution
validation’s relevance for engineering creativity is particularly salient (Cropley, 2015b). The importance of these two stages (regardless
of how idea generation was calculated) reinforces the connection between domain-specific knowledge and creativity. In order to be
creative in an area, one needs domain-specific knowledge to understand which ideas to pursue (Kaufman & Baer, 2002). The ability to
understand how to implement an idea is rooted in knowledge of that domain. In engineering, this connection is particularly important.
Idea evaluation was negatively associated with creativity in both path models. Given that each stage was rated for quality, not
creativity, this finding is not as counter-intuitive as it seems. Idea evaluation requires analytic and convergent thinking. Convergent
thinking is a key attribute in the big picture of creativity and innovation (Cropley, 2006). However, it is more focused on the
appropriateness aspect of the definition of creativity than the originality aspect. Convergent thinking and divergent thinking often
show no relationship (Claridge & McDonald, 2009). People who score higher on idea evaluation may well be more convergent thinkers,
so it is not surprising that they ultimately received lower creativity scores than more divergent thinkers. In addition, the fewer ideas
someone has, the easier it is to evaluate them and thus to score highly on idea evaluation.
Both path analysis models were further tested for group difference across year of school (First vs. Advanced) to investigate different
patterns in the creative problem solving process. This decision was made because previous studies on engineering design processes and
problem solving have exhibited different patterns across different levels of students and experts (see, e.g., Adams & Wieman, 2007,
2015; Atman, Chimka, Bursic, & Nachtmann, 1999; Atman, Cardella, Turns, & Adams, 2005; Atman et al., 2007; Chi & Glaser, 1985).
In particular, across several in-depth studies, Atman and colleagues (Atman et al., 1999, 2005) showed that senior engineering
students produced higher quality solutions, spent more time solving problems, and considered more alternative solutions than
freshmen. Adams and Wieman (2007, 2015) found that students utilize different skills during the problem solving process. Chi and
Glaser (1985) found that knowledgeable people differed from less experienced people in the representations they adopted to solve
problems. Yet our test for group differences across year of school revealed relatively few differences between the two groups (as noted
in the results). Such broad similarity could be due to the nature of the engineering
task used for this study; it was deliberately developed as a general task and did not require much sophisticated engineering knowledge.
Another difference we wanted to explore was to distinguish fluency from consistent quality, even though both would traditionally
fall under the larger Idea Generation category. Idea generation as conceptualized by divergent thinking tests produces several different
scores, perhaps most notably fluency (number of responses) and originality (rarity of responses). Summing the scores would give a
stronger emphasis to a person’s fluency abilities and averaging them would emphasize a person’s originality. Because both are
considered important (Runco & Acar, 2019), models were constructed with idea generation summed (IG-Total) and averaged
(IG-Average).
Indeed, the nature of the direct relationship between the quality of idea generation and overall creativity varied depending on whether
the summed score or the average score was used in the path model. A summed score highlighted a person’s fluency: many different
responses all count toward the final score. With this method, idea generation was positively related to overall creativity. An average
score emphasized consistently high quality: two very high quality responses would earn a higher score than those same two responses
plus five lower quality ones. With this method, idea generation was negatively related to overall creativity.
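The contrast can be illustrated with a small sketch using hypothetical per-idea quality ratings on the study’s 1–6 scale; the ratings and function names below are invented for illustration, not taken from the data.

```python
# Two ways to aggregate per-idea quality ratings into an idea generation
# score: summing rewards fluency (every extra idea adds to the score),
# while averaging rewards consistent quality (weak ideas drag it down).

def ig_total(ratings):
    """IG-Total: summed quality of all ideas a participant generated."""
    return sum(ratings)

def ig_average(ratings):
    """IG-Average: mean quality of all ideas a participant generated."""
    return sum(ratings) / len(ratings)

few_strong = [5.5, 5.0]                           # two very strong ideas
many_mixed = [5.5, 5.0, 2.0, 2.0, 1.5, 2.0, 1.5]  # same two plus five weaker

print(ig_average(few_strong), round(ig_average(many_mixed), 2))  # 5.25 2.79
print(ig_total(few_strong), ig_total(many_mixed))                # 10.5 19.5
```

The high-fluency participant wins under IG-Total but loses badly under IG-Average, which is exactly the divergence the two path models capture.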
Why would such a discrepancy occur? One possibility is that high quality ideas are not necessarily creative ones. Something may be
considered creative because of very strong originality and only marginal appropriateness. As mentioned earlier, real world problems
are usually not straightforward (Mumford et al., 2003). However, the first course of action would still be to follow set protocols.
Consider someone who has a hole in their wall. The first response – indeed, likely the first several responses – will be standard and
unoriginal, because most problems can be solved with one of several core solutions. They are practical and have been repeatedly shown
to work. The clearest solution in this circumstance is to use spackle and fill the hole. It is not necessarily a creative solution, but it is
likely the highest quality.
It is when initial attempts do not work that creativity is most important. If someone does not have spackle (or if the spackle does not
work), the ability to think of novel solutions (such as making spackle out of unusual materials) is essential. In this study, the highest
quality ideas may have been those that would be a first line of defense in real life. Yet since such ideas are less original, they may have
been scored as being less creative. Given there were two sets of raters, it is also important to note that the creativity raters may have
given more credit to particularly original ideas, whereas the quality raters were likely more focused on relevance and feasibility. We
return to this idea in the limitations section.
In contrast, the summed scores placed more emphasis on fluency. Being able to come up with many different solutions is distinct but
highly related to how original those solutions are (Dumas & Dunbar, 2014). Therefore, the high-fluency participants who would have
received higher scores on idea generation when the summed total was used were more likely to also have higher originality. The
high-quality participants who would have received higher scores on idea generation when the average was used would not necessarily
be more original or creative.
The two different ways of calculating the idea generation stage have important practical implications for future studies as well as for
engineering education. Studies focusing on the quality of ideas and the importance of the solution validation stage should use the first
model (where ideas are averaged, thereby giving more weight to consistently high quality even with few ideas). In contrast, the second
model should be used if the focus is the relationship between idea generation (scored as the total of ideas, thereby giving more weight
to fluency) and overall creativity; such a system gives more weight to people with many different ideas, even if they are of inconsistent
quality.

5.1. Limitations

This study had several limitations that need to be addressed. One limitation, which mirrors a limitation in the engineering field, is
that there were many more men in the sample (377) compared to women (124). In addition, first year students were overrepresented in
this study.
Other limitations were embedded in the task of our study. The task was a theoretical engineering problem developed to assess the
participants’ creative thinking skills: engineering students were asked to assume the role of the responsible engineer and offer their
thoughts on solving the problem. First, the students were asked to propose potential solutions without being able to test them (as they
would in real life). As a result, students were not asked (or able) to test and re-evaluate ideas; hence, what was measured was closer to
potential creative problem solving. Further, the four questions
preset the stages for the participants, who could not go back and forth between stages of the problem. Problem solving and engineering
design occur over non-linear progressions of collecting data, evaluating ideas, and updating ideas until finding the optimal solution
(Adams & Atman, 2000; Cropley, 2015b). Such a process was not possible in the current study. Future studies are needed to disentangle
the potentially different patterns of the creative problem solving process across different levels and specific domains of engineering
students.
Another limitation is how the different raters may have viewed “quality” vis-à-vis “creativity.” Quality is more related to one
component of creativity, namely appropriateness. Since previous studies have shown an inverse relationship between originality
ratings and appropriateness ratings (e.g., Runco, Illies, & Eisenman, 2005), researchers have found it useful to disentangle these two
aspects of creativity and study them as the two fundamental predictors of creative thinking and creative problem solving (Diedrich,
Benedek, Jauk, & Neubauer, 2015; Huang, Tang, Sun, & Luo, 2018; Illies & Reiter-Palmon, 2004; Long, 2014; Runco & Charles, 1993;
Runco et al., 2005). Results showed that in creativity research, novelty can be regarded as more important than appropriateness
(Diedrich et al., 2015; Runco & Charles, 1993). In our study, the usefulness component is especially needed; a creative solution is more
than a novel idea (Cropley & Cropley, 2005). Therefore, although it was important to rate the quality of the generated solutions, it is
suggested for future studies that raters evaluate the two components of creativity (originality and appropriateness) separately, and
look for potential patterns of relationships among the quality of the solution, its novelty, its appropriateness, and its overall creativity.
A further limitation in this study is that the measures of creative thinking skills were derived from a problem that was created for
this study, in which students were asked to write their responses to each stage of the creative problem solving process. Although it was
designed to be relevant to engineering, it nonetheless was an artificial task in which students had to write their responses. Some
students may not have been engaged in the task, whereas others may have been less able to express their ideas verbally. In real life,
these students may perform in a more creative manner.
In addition, the measurement of idea evaluation may have been problematic and the root of some of the discrepancies between the
two models. Participants were told to select their best idea, not their most creative idea. Someone with a high fluency score would be
more likely to have a poor idea evaluation score simply because there are more options from which to choose. In other words, someone
with twenty ideas has a 1 in 20 chance of accidentally selecting the best idea, whereas someone with two ideas has a 1 in 2 chance.
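The selection-odds point can be made concrete with a tiny sketch, assuming (purely for illustration) that the best idea is picked at random from equally plausible candidates:

```python
# Under random selection among n candidate ideas, the probability of
# landing on the single best one is 1/n, so high-fluency participants
# face worse odds on the idea evaluation measure by chance alone.
from fractions import Fraction

def p_best_by_chance(n_ideas):
    """Chance of selecting the single best idea from n ideas at random."""
    return Fraction(1, n_ideas)

print(p_best_by_chance(2), p_best_by_chance(20))  # 1/2 1/20
```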
Finally, this study is limited in its generalizability. This sample was of engineering students at one public university. The likely
similarity in both academic experience and ability may have led to restriction of range. Further studies – with a more diverse
population, with a more experienced population, with different stimuli – are needed before these findings can be considered
generalizable.
In addition, the engineering department at this university has a notable interest in creativity (with several prominent members of the
department being specifically interested in the topic or studying it as a research area). It is quite possible that this engineering program
created a pro-creative atmosphere that may not be found in comparable schools.

6. Conclusion

The findings of this study reinforced the role of the quality of each stage of the creative problem solving process. Most notably (and as
expected), the quality of idea generation (calculated as the total quality of the ideas generated) was the strongest predictor of overall
creativity. The quality of solution validation was the second strongest predictor of overall creativity, highlighting the connection
between domain-specific knowledge and creativity. Idea evaluation was negatively related to overall creativity, since this stage
requires analytic and convergent thinking. These findings add to our understanding of the distinct roles that quality plays in the
creative process. They further emphasize the strong association between fluency and overall creativity.
Creativity in engineering education is a growing field (Cropley, 2015b), and this study aimed to offer insight into the stages of the
creative problem-solving process. Our goal was to examine how each stage interrelates to both other stages and an overall assessment
of the creativity of the final solution. Our results emphasize the importance of problem recognition, idea generation (when fluency is
emphasized, as opposed to originality), and solution validation. These findings support past work on the creative problem solving
stages, apply them to the domain of engineering, and explore additional nuances (such as the distinction between the quality of the
stages and the creativity of the overall solution). With additional work, we will continue to gain information on the best ways to help
enhance creativity in engineers and engineering students.


CRediT authorship contribution statement

Lamies J. Nazzal: Conceptualization, Methodology, Formal analysis, Data curation, Visualization, Investigation, Writing - original
draft, Writing - review & editing. James C. Kaufman: Conceptualization, Methodology, Supervision, Writing - review & editing.

Acknowledgements

The authors would like to acknowledge the valuable help of Ronald Beghetto, Daniel Burkey, David Cropley, Catherine Little, and
Jonathan Plucker.

Appendix A. Engineering-Related Problem

Please read the following scenario carefully and wait until you are asked to respond to each of the following questions. Please feel free to be
creative!

“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The electricity bill was twice
what I am used to; I had to pay triple what I was paying in my last place for oil. What can I do?”

• Identify an engineering-related problem that you find in this scenario and explain it in detail in one or two sentences.
• Think of potential solutions to this problem. List as many different ideas as you can that might solve it.
• Out of all of your potential solutions, which idea would you select as your best idea that you would choose to implement to solve the
problem?
• Finally, think of how you would validate and carry out your solution. In two or three sentences, explain what your plans for
implementation.

Appendix B. Raters Instructions

Problem Recognition Scoring Instructions

Undergraduate engineering students were asked to write responses to the following general
open-ended engineering problem.

Please read the following scenario carefully and answer each of the following questions. Please
feel free to be creative!
“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The
electricity bill was twice what I am used to; I had to pay triple what I was paying in my last place for oil.
What can I do?”
1. Identify an engineering-related problem that you find in this scenario and explain it in detail in one
or two sentences.
2. Think of potential solutions to this problem. List as many different ideas as you can that might solve
it.
3. Out of all of your potential solutions, which idea would you select as your best idea that you would
choose to implement to solve the problem?
4. Finally, think of how you would validate and carry out your solution. In two or three sentences,
explain what your plans for implementation.

Raters:
Please note that you will be scoring the responses to Question #1. It is the one given to you in this excel
file (sheet 1). Please rate the responses to the question that is given to you using a six-point Likert
scale (1 = lowest quality, 6 = highest quality).
You will be rating the overall quality of the participants’ responses based on your own definition of
what entails a high-quality response. Try to use the full range of the scale as much as possible (for
example, try to not give just 1s and 2s or just 5s and 6s).
You can change your ratings as much as you wish, but there is no need to spend a large amount of time
on this – just give your best expert judgment of the quality of each response.

Thank you!!
Idea Generation Scoring Instructions

For this project, undergraduate engineering students responded to the following multi-part,
open-ended engineering problem, as directly reproduced below:

Please read the following scenario carefully and answer each of the following questions. Please
feel free to be creative!

“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The
electricity bill was twice what I am used to; I had to pay triple what I was paying in my last place for oil.
What can I do?”
1. Identify an engineering-related problem that you find in this scenario and explain it in detail in one
or two sentences.
2. Think of potential solutions to this problem. List as many different ideas as you can that might solve
it.
3. Out of all of your potential solutions, which idea would you select as your best idea that you would
choose to implement to solve the problem?
4. Finally, think of how you would validate and carry out your solution. In two or three sentences,
explain what your plans for implementation.

Instructions for Raters:


Please note that you will only be scoring the responses to Question #2. These responses are located in
this same excel file (sheet 1). Please rate *each specific idea* using a six-point scale. A score of “1”
represents the lowest quality, and a score of “6” represents the highest quality.
You will be rating the quality of the participants’ ideas based on your own definition of what entails a
high-quality response. Try to use the full range of the scale as much as possible (for example, try to
not give just 1s and 2s or just 5s and 6s).
You can change your ratings as much as you wish, but there is no need to spend a large amount of time
on this – just give your best expert judgment of the quality of each response. You may wish to read
a certain number of ideas to get a feeling for the responses. Please give your rating by comparing
the ideas to each other, as opposed to an ideal answer.
Please note that the red color indicates a new participant’s response.
Ideas were sometimes separated when multiple ideas were listed in one thought, so that the wording of
some ideas may sound repetitive. This was done because the quality of ideas may vary even when
initially given in a list form. For example, imagine if you were asked how to feed yourself. If a
person wrote “Someone might buy paper bags and apples at the grocery store,” it would be
reformatted so that one idea read “Someone might buy paper bags at the grocery store” and
“Someone might buy apples at the grocery store.” Even though they were listed together, one idea
(apples) is of higher quality than the other (paper bags). For this reason, some of the responses
from a participant may sound repetitive; please do not let this negatively influence your ratings of
the idea’s quality.

Thank you!!
Solution Validation Scoring Instructions

Undergraduate engineering students were asked to write responses to the following general
open-ended engineering problem.

Please read the following scenario carefully and answer each of the following questions. Please
feel free to be creative!
“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The
electricity bill was twice what I am used to; I had to pay triple what I was paying in my last place for oil.
What can I do?”
1. Identify an engineering-related problem that you find in this scenario and explain it in detail in one
or two sentences.
2. Think of potential solutions to this problem. List as many different ideas as you can that might solve
it.
3. Out of all of your potential solutions, which idea would you select as your best idea that you would
choose to implement to solve the problem?
4. Finally, think of how you would validate and carry out your solution. In two or three sentences,
explain what your plans for implementation.

Raters:
Please note that you will be scoring the responses to Question #4. It is the one given to you in this excel
file (sheet 1). Please rate the responses to the question that is given to you using a six-point Likert
scale (1 = lowest quality, 6 = highest quality).
You will be rating the overall quality of the participants’ responses based on your own definition of
what entails a high-quality response. Try to use the full range of the scale as much as possible (for
example, try to not give just 1s and 2s or just 5s and 6s).
You can change your ratings as much as you wish, but there is no need to spend a large amount of time
on this – just give your best expert judgment of the quality of each response.

Thank you!!
Overall Creativity Scoring Instructions

Undergraduate engineering students were asked to write responses to the following general
open-ended engineering problem.

Please read the following scenario carefully and answer each of the following questions. Please
feel free to be creative!
“I recently moved into an old farm house and I was horrified when I received the bills for the first quarter. The
electricity bill was twice what I am used to; I had to pay triple what I was paying in my last place for oil.
What can I do?”
1. Identify an engineering-related problem that you find in this scenario and explain it in detail in one
or two sentences.
2. Think of potential solutions to this problem. List as many different ideas as you can that might solve
it.
3. Out of all of your potential solutions, which idea would you select as your best idea that you would
choose to implement to solve the problem?
4. Finally, think of how you would validate and carry out your solution. In two or three sentences,
explain what your plans for implementation.

Raters:
Please note that you will be scoring the responses to the OVERALL CREATIVITY of the responses to all
4 questions. Please rate the responses using a six-point Likert scale (1 = least creative, 6 = most
creative) to assess the holistic creativity of each participant’s responses to all four questions.
You will be rating the overall creativity of the participants’ responses based on your own definition of
what entails a creative response. Try to use the full range of the scale as much as possible (for
example, try to not give just 1s and 2s or just 5s and 6s).
You can change your ratings as much as you wish, but there is no need to spend a large amount of time
on this – just give your best expert judgment of the creativity of each overall response of a
participant.

Thank you!!
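The Consensual Assessment Technique scores described above are typically averaged across raters and checked for inter-rater consistency, for example with Cronbach's alpha (Cronbach, 1951, in the references). The sketch below is purely illustrative: the function and the 5-participant x 3-rater rating matrix are hypothetical, not the study's actual data or analysis.

```python
# Illustrative sketch: aggregating CAT ratings and computing Cronbach's alpha.
# Ratings are hypothetical 6-point Likert scores (rows = participants, columns = raters).

def cronbach_alpha(ratings):
    """Cronbach's alpha for a participants-by-raters matrix of scores."""
    k = len(ratings[0])  # number of raters
    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    rater_vars = [var([row[j] for row in ratings]) for j in range(k)]
    totals = [sum(row) for row in ratings]
    return (k / (k - 1)) * (1 - sum(rater_vars) / var(totals))

ratings = [
    [2, 3, 2],
    [5, 5, 6],
    [3, 3, 4],
    [1, 2, 1],
    [4, 5, 4],
]
alpha = cronbach_alpha(ratings)                       # inter-rater consistency
mean_scores = [sum(row) / len(row) for row in ratings]  # each participant's CAT score
```

A high alpha (conventionally above .70) would justify averaging the raters' scores into a single creativity score per participant, as CAT studies generally do.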

References

Adams, R. S., & Atman, C. J. (2000). Characterizing engineering student design processes: An illustration of iteration. American Society for Engineering Education, 1–11.
Adams, W. K., & Wieman, C. E. (2007). Problem solving skill evaluation instrument—Validation studies. AIP Conference Proceedings, 883(1), 18–21.
Adams, W. K., & Wieman, C. E. (2015). Analyzing the many skills involved in solving complex physics problems. American Journal of Physics, 83(5), 459–467.
Amabile, T. M. (1996). The social psychology of creativity. Boulder, CO: Westview Press.
Atman, C. J., Adams, R. S., Cardella, M. E., Turns, J., Mosborg, S., & Saleem, J. (2007). Engineering design processes: A comparison of students and expert
practitioners. Journal of Engineering Education, 96(4), 359–379.
Atman, C. J., Cardella, M. E., Turns, J., & Adams, R. (2005). Comparing freshman and senior engineering design processes: An in-depth follow-up study. Design Studies,
26(4), 325–357.
Atman, C. J., Chimka, J. R., Bursic, K. M., & Nachtmann, H. L. (1999). A comparison of freshman and senior engineering design processes. Design Studies, 20(2),
131–152.
Baer, J., Kaufman, J. C., & Riggs, M. (2009). Rater-domain interactions in the consensual assessment technique. International Journal of Creativity and Problem Solving,
19, 87–92.
Blair, C. S., & Mumford, M. D. (2007). Errors in idea evaluation: Preference for the unoriginal? The Journal of Creative Behavior, 41, 197–222.
Bourgeois-Bougrine, S., Buisine, S., Vandendriessche, C., Glaveanu, V., & Lubart, T. (2017). Engineering students’ use of creativity and development tools in
conceptual product design: What, when and how? Thinking Skills and Creativity, 24, 104–117.
Brown, T. (2008). Design thinking. Harvard Business Review, 86, 84.
Charyton, C., & Merrill, J. A. (2009). Assessing general creativity and creative engineering design in first year engineering students. Journal of Engineering Education,
98, 145–156.
Chi, M. T., & Glaser, R. (1985). Problem solving ability. In R. J. Sternberg (Ed.), Human abilities: An information processing approach (pp. 227–248). New York, NY:
Freeman.
Claridge, G., & McDonald, A. (2009). An investigation into the relationships between convergent and divergent thinking, schizotypy, and autistic traits. Personality and
Individual Differences, 46, 794–799.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Cropley, A. J. (2006). In praise of convergent thinking. Creativity Research Journal, 18, 391–404.
Cropley, D. H. (2015a). Promoting creativity and innovation in engineering education. Psychology of Aesthetics, Creativity, and the Arts, 9(2), 161–171.
Cropley, D. H. (2015b). Creativity in engineering: Novel solutions to complex problems. San Diego: Academic Press.
Cropley, D. H., & Cropley, A. J. (2005). Engineering creativity: A systems concept of functional creativity. In J. C. Kaufman, & J. Baer (Eds.), Creativity across domains:
Faces of the muse (pp. 169–185). New Jersey: Lawrence Erlbaum Associates Inc.
Cropley, D. H., & Kaufman, J. C. (2019). The siren song of aesthetics? Domain differences and creativity in technology. Journal of Mechanical Engineering Science, 2,
451–464.
Csikszentmihalyi, M. (1965). Artistic problems and their solutions: An exploration of creativity in the arts. Doctoral dissertation. University of Chicago, Committee on
Human Development.
Dailey, L., & Mumford, M. D. (2006). Evaluative aspects of creative thought: Errors in appraising the implications of new ideas. Creativity Research Journal, 18,
385–390.
Diedrich, J., Benedek, M., Jauk, E., & Neubauer, A. C. (2015). Are creative ideas novel and useful? Psychology of Aesthetics, Creativity, and the Arts, 9(1), 35.
Dumas, D., & Dunbar, K. N. (2014). Understanding fluency and originality: A latent variable perspective. Thinking Skills and Creativity, 14, 56–67.
Dumas, D., Schmidt, L. C., & Alexander, P. A. (2016). Predicting creative problem solving in engineering design. Thinking Skills and Creativity, 21, 50–66.
Ericsson, K. A. (Ed.). (1996). The road to expert performance: Empirical evidence from the arts and sciences, sports, and games. Mahwah, NJ: Erlbaum.
Finke, R. A., Ward, T. B., & Smith, S. M. (1992). Creative cognition: Theory, research, and applications. Cambridge, MA: MIT Press.
Gaskin, J. (2011). Multigroup moderation in AMOS. Gaskination’s statistics. http://youtube.com/Gaskination.
Getzels, J. W., & Csikszentmihalyi, M. (1976). The creative vision: A longitudinal study of problem finding in art. New York: Wiley.


Guilford, J. P. (1950). Creativity. The American Psychologist, 5, 444–454.
Guilford, J. P. (1959). Traits of creativity. In H. H. Anderson (Ed.), Creativity and its cultivation (pp. 142–161). New York: Harper.
Huang, F., Tang, S., Sun, P., & Luo, J. (2018). Neural correlates of novelty and appropriateness processing in externally induced constraint relaxation. Neuroimage,
172, 381–389.
Illies, J. J., & Reiter-Palmon, R. (2004). The effects of type and level of personal involvement on information search and problem solving 1. Journal of Applied Social
Psychology, 34(8), 1709–1729.
Kaufman, J. C., & Baer, J. (2002). Could Steven Spielberg manage the Yankees?: Creative thinking in different domains. Korean Journal of Thinking & Problem Solving,
12, 5–15.
Kaufman, J. C., & Baer, J. (2004). The Amusement Park Theoretical (APT) Model of creativity. Korean Journal of Thinking and Problem Solving, 14, 15–25.
Kaufman, J. C., & Baer, J. (2012). Beyond new and appropriate: Who decides what is creative? Creativity Research Journal, 24, 83–91.
Kaufman, J. C., & Beghetto, R. A. (2013). In praise of Clark Kent: Creative metacognition and the importance of teaching kids when (not) to be creative. Roeper Review,
35, 155–165.
Kaufman, J. C., Baer, J., & Cole, J. C. (2009). Expertise, domains, and the consensual assessment technique. The Journal of Creative Behavior, 43, 223–233.
Kaufman, J. C., Baer, J., Cropley, D. H., Reiter-Palmon, R., & Sinnett, S. (2013). Furious activity vs. understanding: How much expertise is needed to evaluate creative
work? Psychology of Aesthetics, Creativity, and the Arts, 7, 332.
Kaufman, J. C., Gentile, C. A., & Baer, J. (2005). Do gifted student writers and creative writing experts rate creativity the same way? The Gifted Child Quarterly, 49,
260–265.
Kaufman, J. C., Plucker, J. A., & Baer, J. (2008). Essentials of creativity assessment. New York: Wiley.
Kershaw, T. C., Bhowmick, S., Seepersad, C. C., & Hölttä-Otto, K. (2019). A decision tree based methodology for evaluating creativity in engineering design. Frontiers
in Psychology, 10.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford Press.
Long, H. (2014). More than appropriateness and novelty: Judges’ criteria of assessing creative products in science tasks. Thinking Skills and Creativity, 13, 183–194.
Mumford, M. D. (2003). Where have we been, where are we going? Taking stock in creativity research. Creativity Research Journal, 15, 107–120.
Mumford, M. D., Baughman, W. A., & Sager, C. E. (2003). Picking the right material: Cognitive processing skills and their role in creative thought. In M. A. Runco
(Ed.), Critical creative processes (pp. 19–68). Cresskill, NJ: Hampton Press.
Mumford, M. D., Baughman, W. A., Threlfall, K. V., Supinski, E. P., & Costanza, D. P. (1996). Process-based measures of creative problem-solving skills: I. Problem
construction. Creativity Research Journal, 9, 63–76.
Mumford, M. D., Mobley, M. I., Uhlman, C. E., Reiter-Palmon, R., & Doares, L. M. (1991). Process analytic models of creative capacities. Creativity Research Journal, 4,
91–122.
Mumford, M. D., Reiter-Palmon, R., & Redmond, M. R. (1994). Problem construction and cognition: Applying problem representations in ill-defined domains. In
M. A. Runco (Ed.), Problem finding, problem solving, and creativity (pp. 3–39). Westport, CT: Ablex Publishing.
NGSS Lead States. (2013). Next generation science standards: For states, by states. Washington, DC: The National Academies Press.
Passow, H. J., & Passow, C. H. (2017). What competencies should undergraduate engineering programs emphasize? A systematic review. Journal of Engineering
Education, 106(3), 475–526. https://doi.org/10.1002/jee.20171.
Plucker, J. A., & Beghetto, R. A. (2004). Why creativity is domain general, why it looks domain specific, and why the distinction does not matter. In R. J. Sternberg,
E. L. Grigorenko, & J. L. Singer (Eds.), Creativity: From potential to realization (pp. 153–167). Washington DC: American Psychological Association.
Plucker, J. A., Kaufman, J. C., Temple, J. S., & Qian, M. (2009). Do experts and novices evaluate movies the same way? Psychology & Marketing, 26, 470–478.
Plucker, J. A., Qian, M., & Wang, S. (2011). Is originality in the eye of the beholder? Comparison of scoring techniques in the assessment of divergent thinking. The
Journal of Creative Behavior, 45, 1–22.
Policastro, E., & Gardner, H. (1999). From case studies to robust generalizations: An approach to the study of creativity. In R. J. Sternberg (Ed.), Handbook of creativity.
Cambridge, UK: Cambridge University Press.
Reiter-Palmon, R., & Illies, J. J. (2004). Leadership and creativity: Understanding leadership from a creative problem-solving perspective. The Leadership Quarterly, 15,
55–77.
Reiter-Palmon, R., & Robinson, E. J. (2009). Problem identification and construction: What do we know, what is the future? Psychology of Aesthetics, Creativity, and the
Arts, 3, 43–47.
Reiter-Palmon, R., Mumford, M. D., O’Connor Boes, J., & Runco, M. A. (1997). Problem construction and creativity: The role of ability, cue consistency, and active
processing. Creativity Research Journal, 10, 9–23.
Reiter-Palmon, R., Mumford, M. D., & Threlfall, K. V. (1998). Solving everyday problems creatively: The role of problem construction and personality type. Creativity
Research Journal, 11, 187–197.
Runco, M. A., & Acar, S. (2019). Divergent thinking. In J. C. Kaufman, & R. J. Sternberg (Eds.), Cambridge handbook of creativity (2nd ed, pp. 224–254). New York:
Cambridge University Press.
Runco, M. A., & Chand, I. (1994). Conclusions concerning problem finding, problem solving, and creativity. In M. A. Runco (Ed.), Problem finding, problem solving, and
creativity (pp. 217–290). NJ: Ablex Publishing Corporation.
Runco, M. A., & Chand, I. (1995). Cognition and creativity. Educational Psychology Review, 7, 243–267.
Runco, M. A., & Charles, R. E. (1993). Judgments of originality and appropriateness as predictors of creativity. Personality and Individual Differences, 15(5), 537–546.
Runco, M. A., Illies, J. J., & Eisenman, R. (2005). Creativity, originality, and appropriateness: What do explicit instructions tell us about their relationships? The
Journal of Creative Behavior, 39(2), 137–148.
Sawyer, R. K. (2012). Explaining creativity: The science of human innovation (2nd ed). New York: Oxford University Press.
Silk, E. M., Daly, S. R., Jablokow, K., Yilmaz, S., & Berg, M. N. (2014). The design problem framework: Using adaption-innovation theory to construct design problem
statements.
Silvia, P. J. (2008). Discernment and creativity: How well can people identify their most creative ideas? Psychology of Aesthetics, Creativity, and the Arts, 2, 139–146.
Silvia, P. J., Martin, C., & Nusbaum, E. C. (2009). A snapshot of creativity: Evaluating a quick and simple method for assessing divergent thinking. Thinking Skills and
Creativity, 4, 79–85.
Studer, J. A., Daly, S. R., McKilligan, S., & Seifert, C. M. (2018). Evidence of problem exploration in creative designs. AI EDAM, 32(4), 415–430.
Terkowsky, C., Haertel, T., Ortelt, T., Radtke, M., May, D., & Tekkaya, A. E. (2016). Creating a place to bore or a place to explore? Investigating possibilities to foster
students’ creativity in the manufacturing engineering lab. The International Journal of Creativity & Problem Solving, 26(2), 23–45.
Vincent, A. S., Decker, B. P., & Mumford, M. D. (2002). Divergent thinking, intelligence, and expertise: A test of alternative models. Creativity Research Journal, 14,
163–178.
Wallas, G. (1926). The art of thought. New York: Harcourt Brace.
Wright, S. M., Rutgers, E. M., Daly, S. R., Jablokow, K. W., & Yilmaz, S. (2015). Exploring the effects of problem framing on solution shifts: A case study. Industrial
Design Conference Presentations, Posters and Proceedings, 11. http://lib.dr.iastate.edu/industrialdesign_conf/11.
