
Technological Forecasting & Social Change 165 (2021) 120530


The boundary of crowdsourcing in the domain of creativity


Jie Ren a, Yue Han b, Yegin Genc c, William Yeoh d, Aleš Popovič e,*

a Gabelli School of Business, Fordham University, 140 West 62nd Street, New York, NY 10023, United States
b Faculty of Information Systems, Le Moyne, 1419 Salt Springs Rd, Syracuse, NY 13214, United States
c Seidenberg School of CSIS, Pace University, New York, NY 10038, United States
d Faculty of Business and Law, Deakin University, 70 Elgar Road, Burwood, Victoria 3125, Australia
e NEOMA Business School, 1 Rue du Maréchal Juin, 76130 Mont-Saint-Aignan, France

ARTICLE INFO

Keywords: Crowdsourcing; Boundary; Creativity; Specialist task; Generalist task; System design; Componential theory of creativity

ABSTRACT

Studies promote crowdsourcing as an alternative source of creativity for companies. By investigating whether a boundary exists in crowdsourcing for innovation, we aim to identify the conditions under which the generic crowd (mainly consisting of novices, instead of professionals) is less creative. Based on the componential theory of creativity, we compare the crowd's and professionals' creativity, focusing on generalist versus specialist tasks. Leveraging online experiments and semantic analysis, we find that the crowd is more creative than professionals in solving generalist tasks. However, the crowd is less innovative than professionals in solving specialist tasks, thereby suggesting a boundary to crowdsourcing. Nevertheless, to solve specialist tasks, members of the crowd can gain relevant knowledge by exposing themselves to each other's ideas, thereby suggesting an attempt to break the boundary. This study offers new insights into the boundary of crowdsourcing for innovation.

The authors acknowledge the financial support from the Slovenian Research Agency (research core funding No. P5–0410) and from the National Natural Science Foundation of China (Grant # 71802204).
* Corresponding author.
E-mail addresses: jren11@fordham.edu (J. Ren), hany@lemoyne.edu (Y. Han), ygenc@pace.edu (Y. Genc), william.yeoh@deakin.edu.au (W. Yeoh), ales.popovic@neoma-bs.fr (A. Popovič).

https://doi.org/10.1016/j.techfore.2020.120530
Received 13 February 2019; Received in revised form 11 December 2020; Accepted 12 December 2020
Available online 26 December 2020
0040-1625/© 2020 Elsevier Inc. All rights reserved.

1. Introduction

The practice of crowdsourcing has become increasingly prevalent in various industries, with approximately 85% of the most successful global brands using crowdsourcing in the past two decades (Roth et al., 2015). Crowdsourcing is "a form of open innovation that aims to boost idea generation in innovation processes" (Cappa et al., 2019). The underlying rationale is that "the collective intelligence" of a large number of contributors outside the firm's boundaries increases the likelihood of achieving high-quality ideas with exceptional business potential (Cappa et al., 2019; Howe, 2006). For instance, Netflix has used crowdsourcing to improve its algorithms (Hallinan and Striphas, 2014), Starbucks has generated product ideas from customers (Hossain and Islam, 2015), and Dell (Bayus, 2013) and IBM (Bjelland and Wood, 2008) have adopted crowdsourcing for new product design.

The literature has mainly looked at how to attract valuable contributions from the crowd for innovative ideas (e.g., Blohm et al., 2013; Chiu et al., 2014; Ren et al., 2014), at the factors that affect crowd members' willingness to submit ideas (Piezunka and Dahlander, 2018), or at the conditions that can foster knowledge exchange among members of the crowd (Garcia Martinez, 2017). However, it is unclear whether crowdsourcing in the domain of creativity/innovation has any boundary or cap. We focus on the creative application of crowdsourcing, explore the boundary of crowdsourcing in the domain of creativity, and investigate the means to break such a boundary. In particular, we aim to compare the creativity of the generic crowd with that of experts and attempt to improve the crowd's creativity.

Prior studies comparing the crowd with experts in the domain of creativity have provided anecdotal evidence for this comparison in one or two product designs (Nishikawa et al., 2013; Poetz and Schreier, 2012), and as suggested for these specific products (e.g., baby products), the generic crowd performed more creatively. In reality, we have also seen examples of professionals outperforming the generic crowd—an example is NASA's crowdsourcing campaign to name a new node of the International Space Station, even though NASA did not use any names from the crowd eventually. These various comparison outcomes call for a study that systematically and theoretically compares the crowd and experts in the domain of creativity. In response to this, we aim to answer the following research questions: 1) Under what circumstances does the crowd outperform professionals creativity-wise and under what

circumstances otherwise? 2) If professionals outperform the crowd creativity-wise, how can the crowd improve its creativity to catch up?

We define creativity in our study as "the production of novel and useful solutions" (London, 2019), and we measure it through the practicality and novelty of generated ideas from the crowd versus professionals (Brem and Bilgram, 2015; London, 2019). Since the crowd and professionals are all individuals, we adopt a classic factorial theory (Wang and Nickerson, 2017) that taps into individual creativity—that is, the componential theory of creativity (Amabile, 2011). According to this theory, there are three components of individual creativity: 1) domain-relevant skills, 2) creativity-relevant processes, and 3) motivations to contribute. In our study, we focus on the first two components, as we assume that both parties have comparable motivations to create and contribute. In our later experiments, set against professionals, we gave our participants, who were members of the crowd, equal incentives to control for the third component.

We use the task type to measure the two components of creativity in our study (Amabile, 2011). Specifically, we employ the task type of a specialist task and generalist task to trigger the different domain knowledge and various creativity-relevant processes between the generic crowd and professionals, which may consequently lead to different creativity outcomes. In particular, specialist tasks (e.g., designing nuclear plants) require high specificity of the necessary domain knowledge and can limit the solution space for people to explore diverse perspectives. Meanwhile, generalist tasks require a broader range of domain knowledge (Hossain and Islam, 2015; L. Yu and Nickerson, 2011), and the solution space for people to explore diverse perspectives is wider as well.

We argue, given the crowd's and professionals' inherent differences in these two components, that the generic crowd and professionals may have different levels of creativity. If the inherent difference between the generic crowd and professionals can be reduced in some way, the creativity performance difference between them would also be reduced. Moreover, we maintain that the learning process (Janz and Prasarnphanich, 2003) can help the generic crowd (who are by nature novices) become experts, reducing the inherent differences (especially the first component) between the crowd and professionals. Specifically, we conjecture that, for specialist tasks, the crowd's creativity can be potentially enhanced to a level close to professionals' creativity as long as members of the crowd participate in this learning process.

To explore these conjectures driven by the componential theory of creativity, we conducted two studies, each with an online experiment, through which we directly identified causality among variables (Shadish et al., 2002; Winston and Blais, 1996) and excluded or controlled for the confounding effect of other contextual variables (e.g., motivations of the participants). For idea collection, we posted two open calls: one for a specialist task (the design of a cybersecurity channel) and one for a generalist task (the design of an iPhone application). To collect the ideas of professionals, we requested professionals with doctoral degrees in the field of computer science to develop relevant ideas. Professionals in the subfields of cybersecurity and the design of iPhone applications participated in our study to make sure they have domain knowledge for both the specialist task and the generalist task. To test the creativity of crowds, we asked members of a Mechanical Turk crowd (who have domain knowledge for the generalist task but probably not for the specialist task) to produce their ideas independently. This way, these two parties vary in their domain-relevant skills. Moreover, we used semantic analysis to measure the creativity processes of crowd members versus professionals in their generated ideas (Egozi et al., 2011; Genc, 2014, 2011). These two parties vary in their creativity-relevant processes.

We found that the generic crowd outperforms professionals creativity-wise only for generalist tasks, and it is the other way around for specialist tasks. However, when crowd members learn from each other's generated ideas regarding the specialist tasks, their creativity level can increase to a level that is significantly higher than that of the crowd's solutions without learning.

The contributions of this study to the crowdsourcing literature are fourfold. First, this study is among the first attempts to focus on the boundary of crowdsourcing for innovation, in comparison to professionals. Identifying this boundary provides more insight into the crowd's limitations and thus sheds light on the discussion of overcoming these limitations. This understanding also significantly contributes to the design of crowdsourcing systems. Second, we highlight the importance of knowledge to creativity. Combined, the literature on creativity and crowdsourcing suggests that the diverse backgrounds of crowd members may not contribute to their creativity if they do not have sufficient knowledge to understand the open call or the boundary of the context. As a result, the creativity of the crowd may be compromised. Realizing this relationship between knowledge and creativity, our study explores potential solutions to address this issue, which complements existing research (e.g., Nishikawa et al., 2013; Poetz and Schreier, 2012). Third, we provide empirical evidence that crowd members can learn from each other to gain more knowledge about specialist tasks and to gradually understand the boundary of the context. As a consequence, their diverse backgrounds could contribute to collective creativity, thereby eventually breaking the boundary of crowdsourcing. Fourth, this study introduces a novel mixed-methods design (experiment and semantic analysis) to examine crowd members' thinking process for innovation in a crowdsourcing setting. We use semantic analysis to calculate the semantic distance between the topics in ideas and task definitions, and this provides a retrospective measure of how idea creators utilize their knowledge for creativity.

The remainder of this paper is structured as follows. The next section reviews the literature on the crowd's and professionals' creativity and introduces the theoretical basis for our work. In Section 3, we formalize our hypotheses. Next, we report on our two experimental studies. Section 5 elaborates on the findings and implications before outlining the limitations and avenues for future research. Our concluding remarks follow in Section 6.

2. Literature review

2.1. Creativity

In the past, many organizations have shifted their innovation model from solely relying on in-house experts to a private-collective model (von Hippel and Krogh, 2003). These organizations conduct innovation search and generate creative product and service ideas with external resources such as customers, suppliers, and universities (Gassmann et al., 2006; Harryson et al., 2008; Schiele, 2010; Zhang et al., 2020). These general crowd members are recognized as a vital source of external innovation (Cappa et al., 2019) and are invited by organizations for creativity co-creation (Elmquist et al., 2009; Feldmann and Teuteberg, 2020; Johnson et al., 2019; Kristensson et al., 2002). Among the different collaboration models, crowdsourcing has been one of the most widely used methods for the external search for innovation (Bayus, 2013; Kyriakou et al., 2017).

There are two streams of literature on innovation/creativity. The first deals with the measurement of creativity/innovation. Most studies focus on the practicality and novelty dimensions of creative solutions. For example, London (2019) viewed creativity as the production of novel and useful solutions. Brem et al. (2016) suggested that creativity has multiple dimensions, with special recognition of newness and usefulness. The second stream of research relates to increasing the creativity level of firms or individuals. For example, on the one hand, London (2019) focused on the link between technology fit and creative performance within firms. Brem et al. (2016) summarized three factors that increase the creativity performance within firms: environmental factors, leadership competencies, and methods for creative thinking. Hughes et al. (2018) explored how leadership impacts creativity in teams. On the other hand, at the individual level, Brem et al. (2016) suggested that


most studies explore the creativity of individuals via self-reports, experiments, and qualitative studies. Wang and Nickerson (2017) summarized approaches supporting creativity in different stages from finding the problem to finding the solution, and they summarized relevant theories from factorial theory (creativity based on factors), to associative theory (creativity based on associations), to stage theory (creativity based on stages).

2.2. Crowdsourcing for innovative ideas

The literature has demonstrated that the crowd can provide both "black and white" answers to micro tasks, such as tagging images or adding up the total of a receipt, and provide creative solutions (Kittur, 2010). Scholars have examined ways to attract contributions of good ideas from the crowd. Piezunka and Dahlander (2018) found that receiving rejections, especially with explanations, can positively affect crowd members' willingness to submit ideas in the future. Brem and Bilgram (2015) compared techniques of identifying lead users in contributing ideas for innovation. Gillier et al. (2018) analyzed the impacts of different types of task instructions on innovative idea quality in the setting of crowdsourcing. Garcia Martinez (2017) explored trust in the crowdsourcing platform and its role in knowledge exchange among crowd members for better ideas. On a systematic level, using the simulation approach, Natalicchio et al. (2017) and Schenk et al. (2019) studied the interplay of the crowd, the problem, and the platform in solving the problem and providing solutions. Accordingly, they also offered guidelines for firms to use the crowd for innovative ideas.

Moreover, scholars have focused on different creativity techniques and applied these techniques to the crowdsourcing context (Boons and Stam, 2019; Jiang and Wang, 2020). Morris et al. (2013) and Wang et al. (2018) examined the effect of idea priming on generated ideas to analyze the dimensions of idea quality and idea volume. In addition, scholars have been inspired by genetic algorithms and adopted these algorithms to organize the crowd as if each crowd member's idea were a gene to be combined, modified, or selected to remain in the following generation (Ren et al., 2014; L. Yu and Nickerson, 2011). Leveraging the benefits of crowdsourcing innovation, scholars have also tapped into how to integrate crowdsourcing into the internal R&D processes. For example, de Mattos et al. (2018) and Christensen and Karlsson (2019) explored how firms integrated crowdsourcing into open innovation processes.

Crowd intelligence and crowd-level collective action have also shown broader sociotechnical implications, particularly for entrepreneurial and innovative processes (Elia et al., 2020) and solving difficult social or cultural problems (Elia and Margherita, 2018). In the case of entrepreneurial and innovative processes, the advent of digital technologies that offer greater coordination and collaboration democratized such processes, leading to the formation of ecosystems (Elia et al., 2020) (for a detailed review on innovation and entrepreneurial ecosystems, see Scaringella and Radziwon (2018)). In the early stages of the creation of innovation ecosystems, users have direct value creation roles (Dedehayir et al., 2018). In addition to contributing to these ecosystems by defining problems or needs to be addressed, users can also be a source of "the innovation ideas, around which ecosystems are created" (Dedehayir et al., 2018). Studies on digital entrepreneurship ecosystems are likely to continue to investigate the ways that crowds can be actively and effectively involved in the entrepreneurial and innovative processes as new technologies and digital actors (e.g., software agents) are introduced (Elia et al., 2020; Mazzola et al., 2020; Ogink and Dong, 2019).

Crowd-level activities have produced unintended spillover effects on entrepreneurial processes. For example, crowdfunding activities legitimize entrepreneurial identity as successful campaigns signal the quality of projects being funded (De Luca, Margherita, and Passiante, 2019). Furthermore, crowdfunding activities help professionals and participants be involved in the development processes (De Luca et al., 2019) and facilitate the knowledge accumulated during the crowdfunding experience transfer to the society (Martínez-Climent et al., 2020).

In the problem-solving domain, crowd-level collaboration can help address challenging multifaceted social problems that are widely known as "wicked problems." Wicked problems, such as disaster prevention (Mileski et al., 2018) or port resilience during major disruptions (Becker, 2017; Gharehgozli et al., 2017) require innovative and interactive collaboration from diverse stakeholders (Elia and Margherita, 2018). Similar to the entrepreneurial activities described above, crowds can generate ideas to solve these problems, and experts can evaluate them. A significant challenge to overcome is to identify the boundaries between the role of idea crowdsourcing and expert decisions or at least provide platforms and technologies that help "identify, capture and aggregate multi-stakeholder contribution through a structured approach" (Elia and Margherita, 2018).

With all the papers highlighting the benefit of using the crowd (Blohm et al., 2013; Chiu et al., 2014; Ren et al., 2014; Zhao and Zhu, 2014), a natural question is, does crowdsourcing in the domain of creativity have a boundary? If so, under what circumstances does the crowd underperform in providing innovative ideas?

2.3. Creativity and crowdsourcing

Crowdsourcing outsources an innovation task to a large group of people in the form of an open call (Estellés-Arolas and González-Ladrón-de-Guevara, 2012; Howe, 2006; Renard and Davis, 2019; Steils and Hanine, 2019). Crowdsourcing harnesses the wisdom of the crowd and promotes collective intelligence by providing members with platforms to design and create user-generated content (Blohm et al., 2013; Han et al., 2020; Riedl and Seidel, 2018). Crowd members can contribute to creativity tasks individually (e.g., InnoCentive, MyStarbucksIdea, and Threadless) and collaboratively (e.g., Thingiverse, Scratch, and Climate CoLab) (Acar, 2019). Previous studies suggest that one of the significant benefits of using crowdsourcing for creativity generation is access to massive, diverse ideas (Campos-Blázquez et al., 2020; Chiu et al., 2014; Gimpel et al., 2020; Majchrzak and Malhotra, 2013). Another advantage of crowdsourcing creativity is low cost (Afuah and Tucci, 2012; Ogink and Dong, 2019). Nevertheless, there have also been critiques of the efficiency of crowdsourcing creativity due to a considerable amount of superficial or redundant outcomes (Bjelland and Wood, 2008; Cheng et al., 2020).

To improve and optimize crowdsourcing creativity, prior research has identified three research directions: technology development, creative design processes, and evaluating creativity (Hwang et al., 2019; Maher, 2011). Some researchers collaborated with online platforms to explore new technological supports for crowdsourcing creativity, such as automatic attribution and recommendation systems (Geiger and Schader, 2014; Malhotra et al., 2020). Others focus on the creative design process and study how to organize the crowds better, divide a task, or aid the crowd to collectively explore the design space (Han et al., 2020; Kittur, 2010; Malone et al., 2017; Wang and Nickerson, 2017; Zhu et al., 2019). Prior research also discussed the evaluation of creativity and proposed several commonly agreed measurements, such as novelty and practicality (Boden, 2004; Brem et al., 2016; London, 2019; Maher, 2011; Malhotra and Majchrzak, 2019). This study aims to contribute to the improvement of the creative design process and to explore the boundary of crowdsourcing creativity.

2.4. Anecdotal evidence from comparing the crowd's creativity with professionals'

In line with exploring the boundary of crowdsourcing in the domain of creativity, Poetz and Schreier (2012) and Nishikawa et al. (2013) compared the crowd's and professionals' creativity, respectively, and focused on one incident each. Poetz and Schreier (2012) examined a company selling baby products as the case to study, while Nishikawa et al. (2013) focused on the design of furniture. Both studies were

descriptive and reported the outcome of creativity per se. Poetz and Schreier (2012) suggested that the crowd's ideas score significantly higher on novelty and customer benefits than professionals' ideas, yet they score lower on feasibility. Nishikawa et al. (2013) found that user-generated furniture products were more novel than designer products.

2.5. Our focus: theoretically comparing the crowd's creativity with professionals'

Thus far, the literature has promoted crowdsourcing for innovation, and some studies have provided anecdotal evidence indicating that the crowd can be more creative than professionals. Our study theoretically explores the boundary of crowdsourcing for innovation by comparing the crowd's creativity with professionals' and potentially providing ways to improve the crowd's creativity. Table 1 shows some exemplary papers from our literature review.

Table 1
Literature Review on the Focus of This Research (with Example Papers).

Authors | Focus | Findings

Crowdsourcing for Innovation
Ren et al. (2014) | Organizing the crowd via a genetic algorithm | The crowd in the modification system (mutation) performs more innovatively than the combination system (crossover) in generating creative Facebook ads.
Brem and Bilgram (2015) | Lead users | Techniques of identifying lead users in contributing ideas for innovation were compared.
Garcia Martinez (2017) | Trust | Trust can foster knowledge exchange among crowd members for better ideas.
Natalicchio et al. (2017) | Simulation model among the crowd, the problem, and the platform | Guidelines for firms using the crowd for innovative ideas were provided via a simulation approach.
Gillier et al. (2018) | Task instruction | Either unbounded or prohibitive task instructions can lead to better quality ideas.
Piezunka and Dahlander (2019) | Motivation: the rejection of crowd members' ideas | Receiving rejections, especially with explanations, can positively affect crowd members' willingness to submit ideas in the future.

Anecdotal Evidence from Comparing the Crowd's Creativity with Professionals'
Poetz and Schreier (2012) | Baby products for this comparison | The crowd's ideas scored significantly higher on novelty and customer benefits than the professionals' ideas, yet they scored lower on feasibility.
Nishikawa et al. (2013) | Furniture design | User-generated furniture products were more novel than designer products.

Our Focus: Theoretically Comparing the Crowd's Creativity with Professionals'
Our paper | The componential theory of creativity | We systematically and theoretically compare the crowd's creativity with professionals' and explore potential methods to improve the crowd's creativity.

2.6. Componential theory of creativity

Since the crowd and professionals both consist of individuals, we applied one of the factorial theories explaining individual creativity (Wang and Nickerson, 2017). Specifically, to theoretically compare the crowd with professionals in the domain of creativity, we employed Amabile's componential theory of creativity (Amabile, 1983, 2011) as the theoretical foundation of this research.

Individual creativity depends on many factors. According to the componential theory of creativity (Amabile, 1983, 2011), there are three components of creativity: domain-relevant skills, creativity-relevant processes, and motivations. Domain-relevant skills mainly include knowledge and expertise that a problem-solver has acquired in the particular domain where he or she is working, such as product design or electrical engineering. These skills are highly profession related. For example, a lawyer's domain-relevant skill is law and the techniques to practice law. However, creativity-relevant processes are mainly conducive to adopting new perspectives on problems, which is divergent thinking. Finally, motivations are related to what drives individuals to offer their creative ideas.

In this paper, we focus on the domain-relevant skills and creativity-relevant processes and assume that the crowd's motivation to contribute to a crowdsourcing campaign is comparable to professionals' motivation to contribute.

3. Hypotheses development

We argue that the generic crowd and professionals vary in two components of creativity: domain-relevant skills and creativity-relevant processes. First, in general, professionals, given their years of experience of learning and practicing in a specific domain, have more domain-relevant skills than the members of the crowd. The latter are usually novices in this assigned domain. Second, each member of the generic crowd, given his or her lack of domain knowledge in the assigned task (in many cases, like Mechanical Turk), is more likely to apply the knowledge of his or her professions, which is distant to the assigned domain, thus creating the opportunity to adopt new perspectives on problems. Therefore, in general, the crowd has a higher value in creativity-relevant processes, specific to introducing new perspectives, than professionals.

Facing different task types, the inherent differences between these two components between the generic crowd and professionals may trigger them to generate creative solutions that vary in creativity. A specialist task, such as designing nuclear plants, requires high specificity of the necessary domain knowledge and can limit the solution space for people to explore. Hence, this type of task may have a higher requirement for in-depth expertise but a lower requirement for new perspectives from task-solvers. Meanwhile, a generalist task, such as designing a chair, requires a broader range of domain knowledge (Hossain and Islam, 2015; L. Yu and Nickerson, 2011), and the solution space for people to explore is wider as well. Thus, this type of task may have a lower requirement for in-depth knowledge but a higher requirement for new perspectives from task-solvers.

Since domain-relevant skills and creativity-relevant processes are two important components of creativity, we argue the following: Generalist tasks require low levels of domain-relevant skills but high levels of creativity-relevant processes. The crowd members, compared to professionals, have a relatively lower expertise level, but the crowd may have sufficient knowledge in general to solve generalist tasks. At the same time, the crowd members can apply knowledge of their professions that is usually distant from the domain of the generalist tasks. Specialist tasks require high levels of domain-relevant skills but low levels of creativity-relevant processes, and professionals, compared to the crowd, have a much higher expertise level to solve specialist tasks. However, they often fixate on their expertise domain, which is usually very close or equal to the specialist tasks' domain (Purcell and Gero, 1996). Table 2 describes how the two components of creativity contribute to the crowd's versus professionals' creativity performance difference when facing different tasks (as a result of the innate characteristics of generalist tasks versus specialist tasks).

Therefore, we propose the following two hypotheses:

H1. The crowd is less creative than professionals in solving specialist tasks.

H2. The crowd is more creative than professionals in solving generalist


tasks.

Table 2
Theoretical Argument for H1 and H2.

                                          | Specialist Tasks (H1)     | Generalist Tasks (H2)
                                          | The crowd | Professionals | The crowd | Professionals
Two components of creativity:
  Domain knowledge                        | Low       | High          | High      | High
  Creativity process (divergent thinking) | Medium    | Medium        | High      | Medium
Creativity performance                    | Medium    | Low           | High      | High

Next, we focus on the reasoning that leads to the inferior creativity performance by the crowd to solve specialist tasks that we mentioned above. There, we said that specialist tasks require high domain-relevant skills, but the generic crowd, in general, is composed of novices in the assigned domain (see Table 2). Therefore, if members of the crowd can increase their knowledge level, perhaps this boundary can be broken. We thus focus on learning.

People acquire knowledge through education. Previous studies have examined different techniques for individuals receiving knowledge from other people, either via explicit formats (including texts, audios, and videos) or through implicit learning from mentors (Morris et al., 2013; X. Yu, Shi, Zhang, Nie, and Huang, 2014). The explicit or implicit formats are the extended knowledge base for people to search distantly to acquire information. In general, crowdsourcing has evolved from the crowd addressing an open call without being influenced by other crowd members. For instance, on Wikipedia, members can edit the contributions of others. Many scholars have studied this new form of collective contributions to trace the emergence and evolution of overall creativity that accumulates on crowdsourcing sites (Kyriakou et al., 2017). Some researchers have studied how young people generate creative projects via interactions using the Scratch language (Hill and Monroy-Hernández, 2012; Resnick et al., 2009). Other studies have discussed collective innovation in a 3D-printing design community (Kyriakou et al., 2017).

We argue that when members of the crowd are under the influence of each other—such as being exposed to each other's ideas—they can treat each other's ideas as an extended knowledge base to search for knowledge relevant to the task. Knowledge sharing is a set of behaviors that involve the exchange of information or provision of assistance to others (Janz and Prasarnphanich, 2003), and it occurs when individuals assist and learn from one another to develop new competencies (Yang, 2007). Applied to the crowdsourcing setting, the interactions among members of the crowd can potentially enable them to search in each other's "exposed minds" for more relevant knowledge, therefore increasing the value in the domain-knowledge skills. In particular, by learning from each other in the crowd, although each individual may estimate the relevant knowledge for the specialist task, as a group, each estimate is an

Table 3
Theoretical Argument for H3.

Specialist Tasks (H3)                     | The crowd being exposed to each other's ideas | The crowd submitting ideas without such influence
Two components of creativity:
  Domain knowledge                        | Medium | Low
  Creativity process (divergent thinking) | Medium | Medium
Creativity performance                    | Medium | Low

H3. To solve specialist tasks, members of the crowd who are exposed to each other's ideas are more creative than members who submit ideas without such influence.

We conducted two studies, with each having an experiment to test the three hypotheses.

4. Methodology: experiments

4.1. Study 1: testing hypotheses 1 and 2

4.1.1. Stimuli construction and data collection

In this experiment, to measure a specialist task, we asked participants to provide creative solutions to create a cybersecurity channel. We assumed that people, in general, are unfamiliar with this task; thus, we designed the instructions for this task in a specific manner (see Table 4). We selected the task of creating an iPhone application to measure the generalist task. We assumed that all participants would have some experience using iPhones or, at least, smartphones. In addition, considering the nature of generalist tasks (Clarkson et al., 2013), we designed the instructions for this task generically.

We conducted a two-by-two experiment—that is, there were two types of tasks and two groups of idea generators. We defined the professionals for both tasks as individuals with doctoral degrees in the field
incremental move toward the required range of knowledge for the task. Table 4
Therefore, with more exposure to crowd members’ ideas, by searching Stimuli Construction in Study 1.
the extended knowledge base built upon the collective intelligence, an
Task Type Specific Task Task Description
individual can eventually understand the open call and suggest solutions
that make sense. Specialist Cybersecurity idea A firm aims to provide secure real-time
task creation communication channels among different
Since learning from the crowd itself may not lead the members to office branches—for example, against man-
become experts all at once, we only aim to explore how learning can in-the-middle attacks. Please generate a
improve the creativity of the crowd’s solutions for specialist tasks where creative idea to help this company.
the crowd initially lacked knowledge. More specifically, we aim to Generalist iPhone application A firm wishes to develop an iPhone
task idea creation application. Please generate a creative idea
reduce the knowledge difference between the crowd and professionals in
to help this company.
solving specialist tasks. Specialist tasks require high domain-relevant
skills, but low creativity-relevant processes—that is, the creativity of
the crowd with exposure to each other’s ideas—should be improved.
Table 3 describes theoretically how one component of creativity—the
domain knowledge—contributes to the creativity performance differ­
ence of the generic crowd being exposed to each other’s ideas versus
submitting ideas without such influence to solve specialist tasks.
Therefore, we argue the following:


of computer science.1 Holding a doctoral degree in a domain typically indicates that an individual is a professional in that field (Wellington and Sikes, 2006). Professionals in the subfields of cybersecurity and the design of iPhone applications separately participated in our study for the two tasks. To eliminate the confounding effect of the sampling process on data collection, we collected ideas from doctoral alumni of four different universities (three from the United States and one from Australia). We asked these universities to send our SurveyMonkey links (one for the specialist task and one for the generalist task) to their doctoral alumni in the field of computer science who had a professional background in cybersecurity or the design of iPhone applications. Each expert participant could respond to only one survey—that is, complete only one task. The instruction for clicking links was the same. To check for any sampling bias, we compared the professional samples and performed multiple t-tests on the variables, including respondents' demographics and the creativity of the generated ideas. The results indicate no significant differences, thereby suggesting there was no sampling bias.

1 Professionals for the generalist task could include people from the field of computer science, as well as people from other fields, such as design and marketing. We selected professionals from computer science because we wished to control for the confounding effect of discipline on the quality of generated ideas for the generalist and specialist tasks.

We considered Mechanical Turk workers as the crowd, in which members had different backgrounds and were located worldwide. Numerous researchers have adopted Mechanical Turk for data collection (Buhrmester et al., 2011; Kittur, 2010). Each crowd participant in these two idea generator groups was asked to come up with only one creative idea in response to the tasks described in Table 4. We kept the "HIT" (metadata of the task) instruction the same for the two tasks. The Mechanical Turk participants saw this instruction before reading the task instruction. For both tasks, in the HIT instruction, we stated, "Please provide a creative idea." We also stated that the payment for each completed task was 20 cents, which is the standard payment in Mechanical Turk. We posted two HITs simultaneously and repeated this idea collection three times with each HIT for both tasks. Fig. 1 records how we assigned tasks to the crowd versus professionals.
To check the sampling bias for the two crowds in both tasks, we also conducted t-tests on the variables, including demographics and the creativity of the generated ideas. The results indicate no significant difference. We collected a comparable sample size for professionals' solutions. This choice of the crowd's and professionals' sample sizes was consistent with Poetz and Schreier's (2012) research approach. In total, we collected 160 ideas: 30 ideas from professionals and 50 ideas from the crowd for the specialist task and 30 ideas from professionals and 50 ideas from the crowd for the generalist task. Table 5 summarizes the demographic characteristics of all participants.
For both professionals and crowds, we used the same task design and followed the same acceptance criteria to control for participants' attentiveness: 1) We reversed the scale of some of the questions and excluded submissions that gave the same answer to each question; 2) we excluded submissions with an idea of less than 100 words; and 3) we removed submissions with a task completion time of less than 5 min. To verify our reasoning for the hypotheses, which is based on 1) domain-relevant skills and 2) creativity-relevant processes that are specific to adopting new perspectives on problems, we conducted two pretests.

4.1.2. Pretest 1: domain-relevant skills
First, in each task condition, we asked participants about their levels of knowledge of the task. To measure the levels of knowledge, we adopted the measurement scale of expertise that Ohanian (1990) developed (see Table 6). This measurement was performed after the participants completed the tasks because we did not want their awareness of their task domain knowledge to affect how they completed them. We conducted an analysis of variance with task type and idea generator group as factors and domain knowledge as the dependent variable. We also conducted two t-tests. These tests indicated that, in specialist tasks, the professionals are more knowledgeable than the crowd (p < 0.001), and, in generalist tasks, crowd members are less knowledgeable than professionals, but this difference is not significant (p > 0.05). Fig. 2 presents the results.

4.1.3. Pretest 2: creativity-relevant processes
Second, we present how we conducted a semantic analysis to examine the crowd's ideas versus professionals' ideas and to understand how they applied new perspectives. We do this to validate our reasoning for our hypotheses. We used the knowledge representation and similarity defined by the semantic analysis theories (Landauer and Dumais, 1997)—that is, mathematically based theories of meaning (Deerwester et al., 1990). Our goal was to determine the knowledge distance to the relevant task during idea generation to complete the task. We formulated knowledge distance detection in a generated idea for a particular task as a problem of finding the semantic similarity of the idea to the core concepts of the task definition. To that end, we aimed to measure the semantic distance from each generated idea to the task definition provided to the participants.
Semantic similarity models rely on studying word co-occurrence patterns. These models can be generated from the collection of documents being analyzed (corpus)—corpus-based models (Blei, 2012; Landauer, 2007)—by extracting the multi-dimensional word vector spaces generated from the documents being analyzed. However, these models require training with a large number of documents: The longer the documents are, the better they perform. Our documents (idea texts and task definitions) were relatively short and few, which meant that generating statistical models—such as a latent semantic analysis or latent Dirichlet allocation—would not be particularly useful. As such, we expanded our model using a knowledge-based approach. Knowledge-based approaches, as opposed to corpus-based approaches, rely on human-crafted explicit knowledge bases, such as WordNet (Miller, 1995) or Wikipedia (Gabrilovich and Markovitch, 2007). Our model mapped task definitions to the relevant Wikipedia pages and used the semantic distance of the selected Wikipedia pages from the idea texts. This method was adopted from Genc (2014), who used Wikipedia to classify short and elliptic text into its topics.
For our analysis, we first extracted concepts from task definitions by matching the noun phrases (extracted from the definitions) with Wikipedia pages. That is, if a noun phrase in the task definition matched a Wikipedia page title, we considered this phrase to represent a concept of the task definition. Table 7 presents how the two task definitions and the concepts were mapped to each other.
For each of the concepts in the task definitions, we created word-frequency distributions by counting the word occurrences in the associated Wikipedia page and removing stop words. For example, for the concept of "information security," we extracted the text from the "Information Security" page in Wikipedia and calculated the relative frequencies of non-stop words. The process yielded the following vector (only the top six words are presented):

C_information_security = 〈 0.042 * information, 0.031 * security, 0.013 * business, 0.012 * change, 0.010 * management, 0.008 * access, … 〉

Similarly, we created relative word-frequency-distribution vectors for each of the ideas. Ideas were then considered to be a distribution over the definition concepts, Θ = (θj,k), such that element (j,k) represented the relevance of idea j to concept k of the task definition. By observing the word-frequency vectors of both task definition concepts (W) and ideas (D), we could calculate the concept frequencies of ideas as follows:

Θ = tfidf(D)ᵀ tfidf(W),

where V = {v1, v2, … , vn} is the vocabulary of ideas in the idea


Fig. 1. Condition Construction in Study 1


Note: the last layer of boxes represents the ideas generated in each condition.

Table 5
Summary of Participants' Demographics.

                                         Conditions
                                         Professionals &   Professionals &   Crowd &           Crowd &
                                         Specialist Task   Generalist Task   Specialist Task   Generalist Task
Gender        Male                       22                21                24                20
              Female                     8                 9                 26                30
Age           Average age                31.77             31.53             31.62             31.34
Education     High school or below       0                 0                 13                11
              Bachelor's degree          0                 0                 27                26
              Master's degree            0                 0                 10                13
              PhD                        30                30                0                 0
Professional  Computer science           30                30                0                 0
background    Business                                                       6                 5
              Psychology                                                     4                 3
              Biology                                                       0                 4
              Electrical engineering                                        2                 2
              English                                                       2                 1
              Others                                                        36                35

Table 6
Measurement of Participants' Levels of Knowledge.

Item            Question
Expertise       How much do you consider yourself to be an expert in the domain of the displayed topic?
Experience      How experienced are you in the domain of the displayed topic?
Knowledge       How knowledgeable are you in the domain of the displayed topic?
Qualifications  How qualified are you in the domain of the displayed topic?
Skill           How skilled are you in the domain of the displayed topic?
Familiarity     How familiar are you with the displayed topic?

Table 7
Operationalization.

Task Definition                                              Mapped Concepts
Provide secure real-time communication channels among        "Real-time communication," "Man-in-the-middle attack,"
different office branches—for example, against               "Information security"
man-in-the-middle attacks
Develop an iPhone application                                "iPhone," "Software development," "Application software"

collection; D = (di,j) is the idea word-frequency matrix, where element (i,j) represents the frequency of term i (i ∈ V) in idea j; and W = (wi,k) is the concept word-frequency matrix, where element (i,k) represents the frequency of term i (i ∈ V) in concept k. To address the noise from commonly used words across all documents and Wikipedia pages, we applied a term frequency–inverse document frequency (tfidf) transformation to D and W. The semantic distance of idea j from the task definition was then expressed as follows:

dist(idea_j, task definition) = 1 − similarity(idea_j, task definition) = 1 − Σk θj,k

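The distance computation described above (relative word frequencies, a tfidf transformation, and Θ = tfidf(D)ᵀ tfidf(W)) can be sketched in plain Python. This is our minimal illustration, not the authors' implementation: the concept texts stand in for the Wikipedia page contents, the stop-word list is abbreviated, the idf is smoothed so that shared terms keep weight, the vectors are unit-normalized, and θ is averaged over concepts so that the distance stays within [0, 1].

```python
import math
from collections import Counter

# Abbreviated stop-word list; the study removed standard stop words.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "is", "are", "with", "from"}

def tokens(text):
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def tfidf_vectors(docs, vocab):
    """Unit-normalized tf-idf vectors (smoothed idf, never zero)."""
    n = len(docs)
    counts = [Counter(tokens(d)) for d in docs]
    df = {t: sum(1 for c in counts if t in c) for t in vocab}
    vecs = []
    for c in counts:
        v = {t: c[t] * (math.log((1 + n) / (1 + df[t])) + 1.0) for t in c}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vecs.append({t: x / norm for t, x in v.items()})
    return vecs

def knowledge_distance(idea, concept_texts):
    """dist(idea, task definition) = 1 - mean_k theta_k, where theta_k is the
    tf-idf overlap of the idea with concept k (averaged to stay in [0, 1])."""
    docs = [idea] + concept_texts
    vocab = sorted({t for d in docs for t in tokens(d)})
    vecs = tfidf_vectors(docs, vocab)
    idea_vec, concept_vecs = vecs[0], vecs[1:]
    theta = [sum(idea_vec.get(t, 0.0) * cv[t] for t in cv) for cv in concept_vecs]
    return 1.0 - sum(theta) / len(theta)
```

An idea that reuses the task definition's concept vocabulary therefore scores a smaller distance than an unrelated idea, which mirrors how "fixating on the task definition" registers as low knowledge distance in the pretest.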
We conducted two independent-sample tests to compare the knowledge that the crowd versus professionals used to solve a specialist task versus a generalist task. The t-tests showed that, for the specialist task, the crowd applied a significantly greater degree of knowledge distance than professionals (0.74 versus 0.55, p < 0.001). The crowd also applied a greater degree of knowledge distance than professionals for the generalist task; still, this difference is not significant (0.74 versus 0.73, p > 0.1). Fig. 3 demonstrates the same pattern of results. These two

Fig. 2. Self-Reported Domain Knowledge by Group and Task
Note: Error bars represent standard errors.


Table 9
Idea Practicality by Group and Task.

DV: Idea Practicality              Sum of Squares   df    Mean Square   F Value   Sig.
Idea generator group               1.26             1     1.257         1.015     ns
Task type                          27.46            1     27.460        22.167    ***
Idea generator group × task type   19.74            1     19.743        15.937    ***
Gender                             0.61             1     0.615         0.496     ns
Age                                5.29             1     5.291         4.271     *
Residuals                          188.30           152   1.239

Note: ns not significant, * p < 0.05, *** p < 0.001. For the idea generator group, professionals were coded as 1 and the crowd as 0; for task type, specialist task was coded as 1 and generalist task as 0; for gender, male was coded as 1 and female as 0.

Fig. 3. Knowledge Distance by Group and Task


Note: Error bars represent standard errors.

pretests’ results support our reasoning for our hypotheses.
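The independent-sample comparisons used throughout both pretests follow the standard two-sample t-test. A minimal plain-Python version of Welch's (unequal-variance) variant is sketched below; the variant choice and the toy samples are ours, not the study's:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's unequal-variance t statistic and its approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a) / na, variance(sample_b) / nb   # sample variances / n
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df
```

The resulting t and df would then be compared against the t distribution to obtain the p-values reported in the text.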

4.1.4. Idea creativity evaluation and main results


To measure the creativity of the crowd versus professionals,
following the literature, we used the practicality and novelty dimensions
of the solutions that the two groups of people generated (Brem et al.,
2016; London, 2019). Specifically, London (2019) called creativity “the
production of novel and useful solutions.” Brem et al. (2016), in their
review of creativity, stated that creativity has multiple dimensions, and
most of the research agreed on the dimensions of newness and
usefulness.
To test the hypotheses, we first ran a two-way analysis of covariance (ANCOVA), with age as a continuous control variable. We also reported interaction plots. In these tests and plots, we can see the direction (positive, negative, or neutral) of the effects on the dependent variables.

Fig. 4. Idea Novelty Comparison
Note: Error bars represent standard errors.
To test H1 and H2, and to compare the crowd with professionals in their creativity performance for the specialist task versus the generalist task, we repeated the ANCOVA with the dependent variables of idea novelty and idea practicality and the factors of the idea generator binary variable (where 1 indicated professionals, and 0 indicated the crowd) and the task-type binary variable (where 1 indicated the specialist task, and 0 indicated the generalist task). Tables 8 and 9 demonstrate that the interaction term of idea generator and task type is significant (p < 0.05; p < 0.001). This suggests a moderating effect of task type on the creativity levels of professionals and the crowd.
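Under the dummy coding just described, the interaction coefficient of a saturated two-by-two model is simply the difference-in-differences of the four cell means. The sketch below is ours (it omits the gender and age covariates), and the example cell means are read from the in-text t-test comparisons, taking the first value in each pair as the professionals' mean:

```python
def interaction_coefficients(cell_means):
    """Coefficients of a saturated 2x2 model under the paper's dummy coding
    (idea generator: professionals = 1, crowd = 0; task type: specialist = 1,
    generalist = 0)."""
    b0 = cell_means[(0, 0)]                        # crowd on the generalist task
    b_group = cell_means[(1, 0)] - b0              # professionals vs. crowd, generalist
    b_task = cell_means[(0, 1)] - b0               # specialist vs. generalist, crowd
    # interaction: how much the group gap changes when the task turns specialist
    b_inter = (cell_means[(1, 1)] - cell_means[(1, 0)]) - (cell_means[(0, 1)] - b0)
    return b0, b_group, b_task, b_inter

# Illustrative practicality means (professionals 3.90 / crowd 4.45 generalist;
# professionals 3.98 / crowd 3.08 specialist), assuming that pair ordering:
means = {(0, 0): 4.45, (1, 0): 3.90, (0, 1): 3.08, (1, 1): 3.98}
```

A positive interaction coefficient here matches the pattern in the plots: professionals gain (relative to the crowd) precisely when the task is a specialist one.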
We also conducted multiple independent-sample t-tests and generated interaction plots on both dimensions of creativity: novelty and practicality (Figs. 4 and 5). These tests showed that professionals' ideas were more practical compared with those of the crowd for the specialist
task (3.98 versus 3.08, p < 0.001). Professionals' ideas were slightly more novel compared with those of the crowd for the specialist task but not significantly more novel (2.69 versus 2.45, p > 0.1). These results partially support H1. However, for the generalist task as the baseline, the ideas of the crowd were more novel and practical compared with those of the professionals (2.63 versus 3.09, p < 0.05; 3.9 versus 4.45, p < 0.1). Therefore, H2 is supported.

Table 8
Idea Novelty by Group and Task.

DV: Idea Novelty                   Sum of Squares   df    Mean Square   F Value   Sig.
Idea generator group               0.20             1     0.196         0.256     ns
Task type                          6.39             1     6.392         8.343     **
Idea generator group × task type   4.07             1     4.072         5.315     *
Gender                             0.80             1     0.797         1.040     ns
Age                                0.36             1     0.364         0.475     ns
Residuals                          116.45           152   0.766

Note: ns not significant, * p < 0.05, ** p < 0.01. For the idea generator group, professionals were coded as 1 and the crowd as 0; for task type, specialist task was coded as 1 and generalist task as 0; for gender, male was coded as 1 and female as 0.

Fig. 5. Idea Practicality Comparison
Note: Error bars represent standard errors.


4.2. Study 2: testing hypothesis 3

4.2.1. Stimuli construction
We used the cybersecurity task in Table 4 in Study 1 to measure the specialist task again in Study 2. We again used Mechanical Turk users to measure the crowd. We conducted Study 2 in parallel with Study 1 and followed the same control process for participants' attentiveness. We organized these users in three rounds to measure the crowd that was exposed to one another's ideas. Specifically, in the first round, we asked one crowd of 50 Mechanical Turk users to independently generate one idea addressing the specialist task. When all responses had been collected from this group of crowd members in the first round, we randomly presented three responses to each group of crowd members to act as a starting point of reference. Specifically, we provided the following instructions: "Three fellow MTurkers have submitted the following three ideas, respectively, to help this company." We then asked the latter group of members to generate a creative idea to help the company. For two more rounds, we repeated the process of requesting ideas from crowd members. Each crowd member participated only once by providing one proposal. We asked participants about their levels of knowledge of the topic by applying the measurement scale of expertise that Ohanian (1990) developed (see Table 6). This was undertaken after the participants had completed the tasks. We also stated that the payment for each completed task was 20 cents, which is the standard payment in Mechanical Turk. Fig. 6 records how we exposed ideas to crowd members via two rounds compared with asking the crowd to submit ideas without such exposure.
Table 10 lists the demographics of the Mechanical Turk participants.

Table 10
Summary of Participants' Demographics.

                                        Conditions
                                        Round 1   Round 2   Round 3
Gender        Male                      26        25        24
              Female                    24        25        26
Age           Average age               30.22     34.67     32.80
Education     High school or below      14        19        10
              Bachelor's degree         28        23        31
              Master's degree           8         8         9
              PhD                       0         0         0
Professional  Computer science          0         0         0
background    Business                  4         4         7
              Psychology                3         3         4
              Biology                   2         2         2
              Electrical engineering    4         0         2
              English                   3         1         3
              Others                    34        40        32

4.2.2. Idea evaluation
We focused on the ideas generated in the third round (Ren et al., 2014). Three experts evaluated each of the 50 ideas (PhDs in the field of computer science—the same experts from Study 1). The experts who conducted the evaluations were unaware of the source of each idea. They evaluated the ideas according to the novelty and practicality dimensions of creativity—the same as in Study 1. For analysis, we averaged the ratings from experts to address the condition in which the participants were asked to answer the cybersecurity open call.
To validate our reasoning for H3, we also conducted a pretest. Through the application of independent-sample t-tests, the results confirm the learning effect of idea building in the crowd. With regard to the specialist task, as the process of priming other crowd members with ideas proceeded from Rounds 1 to 3, the crowd members' knowledge levels increased significantly (p < 0.001), as shown in Fig. 7. Interestingly, professionals' knowledge (from Study 1) was still higher than that of the crowd from Round 3 (p < 0.001). However, the knowledge of the crowd that was exposed to one another's ideas from Round 3 was significantly higher than the knowledge of the crowd that lacked such an influence in Study 1 (p < 0.001). Therefore, our reasoning for H3 is validated.

4.2.3. Main results
Moreover, H3 is also supported. We conducted multiple independent-sample t-tests to compare the creativity performance of the ideas of professionals (in Study 1), the ideas of the crowd (in Study 1),

Fig. 6. Condition Construction in Study 2


Note: the last layer of boxes represents the ideas generated in each condition.
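The three-round exposure procedure can be sketched as a small simulation of the assignment logic. This is our own illustrative sketch, not the platform's code; the function name, parameters, and placeholder idea ids are hypothetical:

```python
import random

def run_rounds(n_rounds=3, crowd_size=50, seen_per_worker=3, seed=42):
    """Round 1 workers submit independently; each worker in a later round is
    shown `seen_per_worker` randomly chosen ideas from the previous round."""
    rng = random.Random(seed)
    rounds, exposure_log = [], []
    for r in range(1, n_rounds + 1):
        prev_ideas = rounds[-1] if rounds else []
        ideas = []
        for worker in range(crowd_size):
            shown = rng.sample(prev_ideas, seen_per_worker) if prev_ideas else []
            exposure_log.append({"round": r, "worker": worker, "saw": shown})
            ideas.append(f"idea-r{r}-w{worker}")  # placeholder for the submitted text
        rounds.append(ideas)
    return rounds, exposure_log
```

Each worker appears exactly once and sees either nothing (Round 1) or three prior ideas, matching the condition construction shown in Fig. 6.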


and the ideas of the crowd that was exposed to one another's ideas (third round in Study 2—see Fig. 8). Fig. 8 shows that the ideas of the crowd whom others influenced were more novel and practical than those of the crowd that lacked such an influence (2.83 versus 2.45, p < 0.05; 3.56 versus 3.08, p < 0.05). The ideas of the crowd whom others influenced surpassed professionals' ideas in terms of novelty, but the difference was not significant (2.83 versus 2.69, p > 0.1). However, professionals' ideas were more practical than those of the crowd whom others influenced (3.98 versus 3.56, p < 0.1).

Fig. 7. Knowledge Level Comparison between Groups to Solve the Specialist Task
Note: CWI R1 = crowd with influence, Round 1; CWI R3 = crowd with influence, Round 3. Error bars represent standard errors.

Fig. 8. Comparisons of Creativity Performance of Specialist Tasks
Note: CWI R3 = crowd with influence, Round 3. Error bars represent standard errors.

5. Discussion

This work focuses on the creativity application of crowdsourcing. Unlike the literature exploring the benefits of crowdsourcing in the creativity domain, this study is among the first attempts to systematically and theoretically explore the capacity of the crowd to create by comparing the crowd's creativity with professionals'. In addition, we explore how to improve the crowd's capacity to create by exposing the crowd's ideas to its members and by fostering learning domain knowledge for the relevant task. Specifically, we find that when it comes to solving generalist tasks, the crowd is more creative than professionals, whereas when it comes to solving specialist tasks, the crowd is less creative than professionals, thereby suggesting a boundary with crowdsourcing. However, we also find that, by solving specialist tasks, crowd members can gain relevant knowledge as a result of being exposed to one another's ideas, thereby suggesting an attempt to break through the boundary.

5.1. Implications for research

Our study makes several significant contributions to the literature. First, it complements the literature promoting the benefits of the crowd: The crowd not only has the advantage of being large scale and diverse but also helps organizations retain distant searches at a low cost (Afuah and Tucci, 2012; Chiu et al., 2014; Majchrzak, Faraj, Kane, and Azad, 2013). However, the crowd cannot do everything, especially with such benchmarks as professional-level creativity. To our best knowledge, only anecdotal evidence targets one task at a time to understand whether the crowd performs more creatively than professionals. Our study, in contrast, leverages a prominent theory on individual creativity to show that the crowd's and professionals' capacities to create depend on the task type (specialist versus generalist tasks).
Second, our paper contributes to the literature on how to increase the crowd's capacity to create. Many studies have identified important organizational and incentive structures that can improve the crowd's creative capacity (Boudreau et al., 2011; Lakhani et al., 2013). Meanwhile, our study uses the componential theory of creativity (Amabile, 2011), especially the first component—domain knowledge—to explain that learning can help increase the crowd's capacity to create. Specifically, this study suggests a way for crowd members to foster learning when they do not initially have the necessary knowledge for the specialist task. By doing so, they can search one another's "exposed minds" via examining their generated ideas to increase their knowledge levels relevant to the task. Once members of the crowd gain relevant knowledge, the inherently diverse nature of crowds can lead to higher creativity. Our findings indicate that if the crowd members lack the knowledge needed to perform a creativity task (a specialist task), they can learn from the ideas of other members. However, although their ideas may be as creative as the ideas of professionals, their knowledge levels are still lower than those of professionals on the topic (Fig. 7, p < 0.001). Thus, arguably, if we can create an environment in which the crowd members can increase their learning, their outcomes could be even more creative. Along these lines, our paper contributes to the platforms of crowdsourcing for innovation, especially platforms that foster learning among members of the crowd (Kyriakou et al., 2017; Malone et al., 2017; Resnick et al., 2009). Our study suggests that the interaction among crowd members can foster learning among its members to increase the crowd's collective creativity.
Third, we identified that the principal reason for the crowd's creativity is not just the diversity of backgrounds, as the literature argued (Blohm et al., 2013; Chiu et al., 2014; Zhao and Zhu, 2014). It is also because crowd members possess knowledge relevant to the open call. Our study tests the commonly held assumption that members of the crowd can match their knowledge base with a task's requirements (Afuah and Tucci, 2012). It revealed that if this assumption is not met, the crowd's diversity of backgrounds cannot function, and the crowd will be less practical than professionals. This suggests the boundary that exists with crowdsourcing. In general, it is likely that this assumption will not be met because many crowd members—particularly those in crowdsourcing marketplaces that have thousands of tasks available—may manipulate the system and select tasks based on cues in the task description, or based on monetary rewards (Downs et al., 2010). This can lead to mismatches between the crowd's knowledge and the knowledge that the task requires, which can cause the requester to have to exert a considerable amount of effort to filter out a few high-quality solutions from many low-quality solutions (Ren, 2011; Ren et al., 2014). The investigation presented in this paper reveals the importance of finding a crowd whose knowledge base matches the knowledge that


the task requires.
Fourth, our paper contributes to the literature on how tasks affect creativity in general and the crowd's creativity in particular. The literature (e.g., Füller et al., 2014; Zheng et al., 2011) explored how task attributes, which are autonomy, variety, tacitness, analyzability, and variability, affect the crowd's contribution of creative ideas by affecting their motivations. Our study uses tasks—generalist versus specialist tasks—to trigger the crowd's versus professionals' two components of creativity, which are domain knowledge and the creativity process. We argue that specialist tasks require a high specificity of the necessary domain knowledge and can limit the solution space for people to explore diverse perspectives. Therefore, when facing this type of task, professionals may mainly apply their expertise, whereas the crowd members, due to their lack of domain knowledge, may mainly apply divergent thinking. Generalist tasks, however, require a broader range of domain knowledge (Hossain and Islam, 2015; L. Yu and Nickerson, 2011), and the solution space for people to explore diverse perspectives is wider as well. Because both professionals and the crowd have sufficient domain knowledge for generalist tasks, their different tendencies for divergent thinking may contribute to differences in creative performance.
Fifth, this study is among the first attempts to compare retrospectively how the crowd versus professionals use their knowledge to answer open calls for creativity. This novel measure can be applied to other settings in which knowledge is used and ideas are generated. In particular, we employed the average of the semantic distance between the concepts in the task definition and the generated ideas to measure the distance of knowledge from the task. Our results demonstrated that professionals are more likely than the crowd to fixate on the task definition, and they are subsequently less likely to use distant knowledge.

5.2. Implications for practice

Our findings have significant practical implications. First, our findings suggest that practitioners can design information systems that increase the chance of matching the crowd's knowledge with the task requirements. An example is a social network system that bridges the gap between crowd members (Bechter et al., 2011). For example, a recommendation system can be created to match crowd members with tasks based on the text analysis of a crowd member's background and completed task history, as well as the requirements for a task. Practitioners can also encourage user-generated tagging to identify the crowd's generated knowledge (Nonaka and Konno, 1998).
Third, our findings suggest that practitioners should carefully select the tasks to be crowdsourced. Our findings show that specialist tasks should be solved through sourcing the firm's expertise. If the firm needs to outsource the task to the crowd, it needs to train the crowd to acquire relevant knowledge—for example, via the information systems suggested earlier. They could also divide a task into various sub-tasks based on the knowledge domain or expertise level, which could reduce the difficulty level of matching a task with a suitable crowd member. Table 11 summarizes the implications of our study for research and practice.

5.3. Limitations and future research

When it comes to assessing this study's contributions, it is important to examine the limitations, which also offer opportunities for future research. First, we captured limited dimensions of the demographic information for our study subjects, such as age, gender, education, and so on. Even though we performed t-tests on the collected demographics and identified no significant sampling bias, we were not able to examine other demographic dimensions that were not captured in our study, such as income and socioeconomic status. It is possible that these uncaptured dimensions might be unbalanced among various idea generator groups, which would influence a person's usage of knowledge in creativity generation. However, considering the sample size and the unbiased sample of the collected demographic information, this possibility is relatively low. Future studies could examine the impact of these uncaptured demographics.
Second, although we conducted two experiments to observe the causality between variables, the research setting was inherently artificial. Thus, future research could replicate our study in a real-world setting. However, studying the phenomena revealed in our study in a real-world setting could be difficult, as companies might focus on only one or two products or services for crowdsourcing, which could impair

Table 11
Summary of Implications for Research and Practice.

Implications for research
By leveraging the componential theory of creativity, we show that the crowd and professionals' capacities to create depend on the task type.
We explicate that learning can increase the crowd's capacity to create and suggest a way for crowd members to foster learning when they do not initially have the necessary knowledge for the specialist task.
domain of a task, which may help a crowd member identify tasks that
We test the commonly held assumption that members of the crowd can match their
match his or her knowledge domain. knowledge base with a task’s requirements. We demonstrate that if this assumption
Second, our study suggests that system designers need to consider is not met, the crowd’s diversity of backgrounds cannot function, and the crowd will
increasing the interaction among crowd members to foster learning. One be less practical than professionals. This suggests the boundary that exists with
example is using Climate CoLab to nurture a community to collectively crowdsourcing.
We argue that specialist tasks require a high specificity of the necessary domain
curate the crowd’s wisdom regarding a specialist task so as to solve a knowledge and limit the solution space for people to explore diverse perspectives.
sophisticated problem (climate change) (Introne et al., 2011; Malone When facing this type of task, professionals may mainly apply their expertise,
et al., 2017). Practitioners also need to devote close attention to whereas the crowd members, due to their lack of domain knowledge, may mainly
designing information systems that provide descriptions of the related apply divergent thinking. However, generalist tasks require a broader range of
domain knowledge, and the solution space for people to explore for diverse
knowledge to the crowd so that the crowd can learn and gain new
perspectives is also more extensive.
knowledge that is relevant to the task. Our study is among the first attempts to compare retrospectively how the crowd versus
In particular, practitioners may include professionals who interact on professionals use their knowledge to answer open calls for creativity.
the platform to ensure that the crowd learns specialist tasks for the Implications for practice
following reasons. The crowd may be unfamiliar with the R&D details Practitioners should design information systems that increase the chance of matching
the crowd’s knowledge with the task requirements. Practitioners can also encourage
that play a role in the eventual production of the product. Thus, the user-generated tagging to identify a task’s domain, which may help a crowd
crowd members’ organic interactions may reach a limit with the member identify tasks that match his or her knowledge domain.
knowledge capacity they obtain from one another. One way in which to System designers need to consider increasing the interaction among crowd members
avoid this restriction is to include inputs from professionals in crowd to foster learning. They also need to devote close attention to designing information
systems that provide descriptions of the related knowledge to the crowd so that the
deliberations. This artificial intervention may help accelerate learning in
crowd can learn and gain new knowledge relevant to the task.
general and could provide specific knowledge details that the crowd Practitioners should carefully select the tasks to be crowdsourced. Specialist tasks
otherwise would not encounter (Nonaka and Konno, 1998). This is should be solved by sourcing the firm’s expertise. If the firm needs to outsource the
consistent with the SECI model (Socialization, Externalization, Combi­ task to the crowd, it needs to train the crowd to acquire relevant knowledge.
nation, and Internalization) that Nonaka and Takeuchi (1995) devised. Dividing a task into various sub-tasks based on the knowledge domain or expertise
level can reduce the difficulty of matching a task with a suitable crowd member.
Perhaps an intervention using a learning tool could also increase the
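The semantic-distance measure and the member-task matching system discussed above rest on the same primitive: a vector-space distance between two pieces of text. The paper's actual measure builds on semantic analysis over a knowledge base (e.g., Landauer and Dumais, 1997; Gabrilovich and Markovitch, 2007); the sketch below substitutes a plain bag-of-words cosine similarity, so the function names (`semantic_distance`, `match_tasks`) and the sample profile and task descriptions are illustrative assumptions, not the authors' implementation.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase word tokens; a stand-in for a real NLP pipeline."""
    return re.findall(r"[a-z]+", text.lower())


def cosine(a, b):
    """Cosine similarity between two bag-of-words term vectors (Counters)."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def semantic_distance(task_definition, idea):
    """Distance of an idea's wording from the task definition (0 = identical terms).

    Higher values suggest the idea draws on knowledge distant from the task."""
    return 1.0 - cosine(Counter(tokenize(task_definition)), Counter(tokenize(idea)))


def match_tasks(member_profile, tasks):
    """Rank candidate tasks by textual similarity to a crowd member's background."""
    profile = Counter(tokenize(member_profile))
    return sorted(tasks,
                  key=lambda name: cosine(profile, Counter(tokenize(tasks[name]))),
                  reverse=True)


# Illustrative data: one generalist and one specialist task description.
tasks = {
    "slogan": "write an advertising slogan for a new soft drink",
    "sensor": "design a low power wireless sensor circuit for industrial monitoring",
}
profile = "electrical engineer experienced in circuit and sensor design"
ranking = match_tasks(profile, tasks)  # the specialist "sensor" task ranks first
```

A production recommender would replace the bag-of-words vectors with LSA, ESA, or neural embeddings so that related but non-identical terms (e.g., "circuit" and "electronics") also count as close, but the ranking logic stays the same.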


the study's generalizability.

Third, we discussed the knowledge that both groups used; however, creativity is a complex process (Woodman et al., 1993) that may involve more factors. For example, creativity might involve the ability to break down complexity (Campbell, 1988). This dimension is challenging to study via semantic analysis or self-reported surveys; future research could study it by introducing a series of experiments.

Fourth, as a first step toward studying the boundary of crowdsourcing, we focused on a generic crowd that was not specific to any knowledge domain. However, a crowd can also refer to an expert crowd (Bozzon et al., 2013). Expert crowd members may have acquired knowledge via their professions or hobbies and may use crowdsourcing websites as freelancers to earn extra money. An example is Upwork, where crowd members showcase their design portfolios. Here, the specification of tasks is central to helping the crowd members understand the tasks and the firm context. In the case of an expert crowd, outsider expert crowd members may possess only explicit knowledge related to what the firm wants, yet lack the tacit knowledge that can be acquired only by immersing themselves in the company. Future research could compare the generic crowd's creative performance with that of the expert crowd and insider professionals.

6. Conclusion

Crowdsourcing—the collective intelligence of a large number of contributors outside the firm's boundaries—is key to increasing the likelihood of achieving high-quality ideas with exceptional business potential. Drawing on the componential theory of creativity, we explored the boundary of crowdsourcing in the domain of creativity and examined the creativity of the crowd and of professionals on specialist versus generalist tasks.

Our findings indicate that a boundary exists when crowd members solve specialist tasks for which they lack the domain knowledge needed to understand the context; in this situation, their high diversity of backgrounds can hurt their creativity, and especially their practicality. As a result, professionals perform more practically than the crowd when solving specialist tasks. Our study also reveals how to increase the crowd's capacity to create when solving specialist tasks: when crowd members are exposed to one another's ideas, their high diversity of backgrounds can help them gain relevant knowledge and eventually improve both their novelty and their practicality.

This study represents an early attempt at understanding the boundary of crowdsourcing in the domain of creativity, as well as at providing a solution to this boundary, enabled by learning, and thereby aims to advance the crowdsourcing literature. The results also provide instrumental insights for managers seeking to foster learning. We hope that this work inspires future attempts at a more elaborate and comprehensive understanding of such capability-building phenomena.

References

Acar, O.A., 2019. Motivations and solution appropriateness in crowdsourcing challenges for innovation. Res Policy 48 (8), 103716. https://doi.org/10.1016/j.respol.2018.11.010.
Afuah, A., Tucci, C.L., 2012. Crowdsourcing as a solution to distant search. Academy of Management Review 37 (3), 355–375. https://doi.org/10.5465/amr.2010.0146.
Amabile, T.M., 1983. The social psychology of creativity: a componential conceptualization. J Pers Soc Psychol 45 (2), 357–376. https://doi.org/10.1037/0022-3514.45.2.357.
Amabile, T.M., 2011. Componential Theory of Creativity. Harvard Business School.
Bayus, B.L., 2013. Crowdsourcing New Product Ideas over Time: an Analysis of the Dell IdeaStorm Community. Manage Sci 59 (1), 226–244. https://doi.org/10.1287/mnsc.1120.1599.
Bechter, C., Jentzsch, S., Frey, M., 2011. From wisdom of the crowd to crowdfunding. J Communication and Computer 8, 951–957. https://doi.org/10.17265/1548-7709/2011.11.005.
Becker, A., 2017. Using boundary objects to stimulate transformational thinking: storm resilience for the Port of Providence, Rhode Island (USA). Sustainability Science 12 (3), 477–501. https://doi.org/10.1007/s11625-016-0416-y.
Bjelland, O.M., Wood, R.C., 2008. An inside view of IBM's 'Innovation Jam'. MIT Sloan Management Review 50 (1), 32.
Blei, D.M., 2012. Probabilistic topic models. Commun ACM 55 (4). https://doi.org/10.1145/2133806.2133826.
Blohm, I., Leimeister, J.M., Krcmar, H., 2013. Crowdsourcing: how to benefit from (too) many great ideas. MIS Quarterly Executive 12 (4), 199–211.
Boden, M.A., 2004. The Creative Mind: Myths and Mechanisms. Psychology Press.
Boons, M., Stam, D., 2019. Crowdsourcing for innovation: how related and unrelated perspectives interact to increase creative performance. Res Policy 48 (7), 1758–1770. https://doi.org/10.1016/j.respol.2019.04.005.
Boudreau, K.J., Lacetera, N., Lakhani, K.R., 2011. Incentives and Problem Uncertainty in Innovation Contests: an Empirical Analysis. Manage Sci 57 (5), 843–863. https://doi.org/10.1287/mnsc.1110.1322.
Bozzon, A., Brambilla, M., Ceri, S., Silvestri, M., Vesci, G., 2013. Choosing the right crowd. In: Paper presented at the 16th International Conference on Extending Database Technology - EDBT '13. Genoa, Italy.
Brem, A., Bilgram, V., 2015. The search for innovative partners in co-creation: identifying lead users in social media through netnography and crowdsourcing. J Engineering and Technology Management 37, 40–51. https://doi.org/10.1016/j.jengtecman.2015.08.004.
Brem, A., Puente-Diaz, R., Agogue, M., 2016. Creativity and Innovation: State of the Art and Future Perspectives for Research. Int J Innovation Management 20 (04), 1602001. https://doi.org/10.1142/S1363919616020011.
Buhrmester, M., Kwang, T., Gosling, S.D., 2011. Amazon's Mechanical Turk: a New Source of Inexpensive, Yet High-Quality, Data? Perspect Psychol Sci 6 (1), 3–5. https://doi.org/10.1177/1745691610393980.
Campbell, D.J., 1988. Task Complexity: a Review and Analysis. The Academy of Management Review 13 (1), 40–52. https://doi.org/10.2307/258353.
Campos-Blázquez, J.R., Morcillo, P., Rubio-Andrada, L., 2020. Employee Innovation Using Ideation Contests: Seven-Step Process to Align Strategic Challenges with the Innovation Process. Research-Technology Management 63 (5), 20–28. https://doi.org/10.1080/08956308.2020.1790237.
Cappa, F., Oriani, R., Pinelli, M., De Massis, A., 2019. When does crowdsourcing benefit firm stock market performance? Res Policy 48 (9), 103825. https://doi.org/10.1016/j.respol.2019.103825.
Cheng, X., Fu, S., de Vreede, T., de Vreede, G.-J., Seeber, I., Maier, R., Weber, B., 2020. Idea Convergence Quality in Open Innovation Crowdsourcing: a Cognitive Load Perspective. J Management Information Systems 37 (2), 349–376. https://doi.org/10.1080/07421222.2020.1759344.
Chiu, C.-M., Liang, T.-P., Turban, E., 2014. What can crowdsourcing do for decision support? Decis Support Syst 65, 40–49. https://doi.org/10.1016/j.dss.2014.05.010.
Christensen, I., Karlsson, C., 2019. Open innovation and the effects of crowdsourcing in a pharma ecosystem. J Innovation & Knowledge 4 (4), 240–247. https://doi.org/10.1016/j.jik.2018.03.008.
Clarkson, J.J., Janiszewski, C., Cinelli, M.D., 2013. The Desire for Consumption Knowledge. J Consumer Research 39 (6), 1313–1329. https://doi.org/10.1086/668535.
De Luca, V.V., Margherita, A., Passiante, G., 2019. Crowdfunding: a systemic framework of benefits. Int J Entrepreneurial Behavior & Research 25 (6), 1321–1339. https://doi.org/10.1108/IJEBR-11-2018-0755.
Dedehayir, O., Mäkinen, S.J., Roland Ortt, J., 2018. Roles during innovation ecosystem genesis: a literature review. Technol Forecast Soc Change 136, 18–29. https://doi.org/10.1016/j.techfore.2016.11.028.
Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R., 1990. Indexing by latent semantic analysis. J American Society for Information Science 41 (6), 391.
Downs, J.S., Holbrook, M.B., Sheng, S., Cranor, L.F., 2010. Are your participants gaming the system? In: Paper presented at the 28th International Conference on Human Factors in Computing Systems - CHI '10. Atlanta, Georgia, USA.
Egozi, O., Markovitch, S., Gabrilovich, E., 2011. Concept-based information retrieval using explicit semantic analysis. ACM Transactions on Information Systems 29 (2), 1–34. https://doi.org/10.1145/1961209.1961211.
Elia, G., Margherita, A., 2018. Can we solve wicked problems? A conceptual framework and a collective intelligence system to support problem analysis and solution design for complex social issues. Technol Forecast Soc Change 133, 279–286. https://doi.org/10.1016/j.techfore.2018.03.010.
Elia, G., Margherita, A., Passiante, G., 2020. Digital entrepreneurship ecosystem: how digital technologies and collective intelligence are reshaping the entrepreneurial process. Technol Forecast Soc Change 150, 119791. https://doi.org/10.1016/j.techfore.2019.119791.
Elmquist, M., Fredberg, T., Ollila, S., 2009. Exploring the field of open innovation. European J Innovation Management 12 (3), 326–345. https://doi.org/10.1108/14601060910974219.
Estellés-Arolas, E., González-Ladrón-de-Guevara, F., 2012. Towards an integrated crowdsourcing definition. J Information Science 38 (2), 189–200. https://doi.org/10.1177/0165551512437638.
Feldmann, A., Teuteberg, F., 2020. Understanding the factors affecting employees' motivation to engage in co-creation in the banking industry. Int J Innovation and Technology Management 17 (02), 2050015. https://doi.org/10.1142/S0219877020500157.
Füller, J., Hutter, K., Hautz, J., Matzler, K., 2014. User roles and contributions in innovation-contest communities. J Management Information Systems 31 (1), 273–308. https://doi.org/10.2753/MIS0742-1222310111.


Gabrilovich, E., Markovitch, S., 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In: Paper presented at the 20th International Joint Conference on Artificial Intelligence. Hyderabad, India.
Garcia Martinez, M., 2017. Inspiring crowdsourcing communities to create novel solutions: competition design and the mediating role of trust. Technol Forecast Soc Change 117, 296–304. https://doi.org/10.1016/j.techfore.2016.11.015.
Gassmann, O., Sandmeier, P., Wecht, C.H., 2006. Extreme customer innovation in the front-end: learning from a new software paradigm. Int J Technology Management 22 (1), 33. https://doi.org/10.1504/ijtm.2006.008191.
Geiger, D., Schader, M., 2014. Personalized task recommendation in crowdsourcing information systems — current state of the art. Decis Support Syst 65, 3–16. https://doi.org/10.1016/j.dss.2014.05.007.
Genc, Y., 2014. Exploratory search with semantic transformations using collaborative knowledge bases. In: Paper presented at the 7th ACM International Conference on Web Search and Data Mining - WSDM '14. New York, New York, USA.
Genc, Y., Sakamoto, Y., Nickerson, J.V., 2011. Discovering Context: classifying Tweets through a Semantic Transform Based on Wikipedia. In: Paper presented at Foundations of Augmented Cognition. Directing the Future of Adaptive Systems. Berlin, Heidelberg.
Gharehgozli, A.H., Mileski, J., Adams, A., von Zharen, W., 2017. Evaluating a "wicked problem": a conceptual framework on seaport resiliency in the event of weather disruptions. Technol Forecast Soc Change 121, 65–75. https://doi.org/10.1016/j.techfore.2016.11.006.
Gillier, T., Chaffois, C., Belkhouja, M., Roth, Y., Bayus, B.L., 2018. The effects of task instructions in crowdsourcing innovative ideas. Technol Forecast Soc Change 134, 35–44. https://doi.org/10.1016/j.techfore.2018.05.005.
Gimpel, H., Graf-Drasch, V., Laubacher, R.J., Wöhl, M., 2020. Facilitating like Darwin: supporting cross-fertilisation in crowdsourcing. Decis Support Syst 132, 113282. https://doi.org/10.1016/j.dss.2020.113282.
Hallinan, B., Striphas, T., 2014. Recommended for you: the Netflix Prize and the production of algorithmic culture. New Media & Society 18 (1), 117–137. https://doi.org/10.1177/1461444814538646.
Han, Y., Ozturk, P., Nickerson, J.V., 2020. Leveraging the Wisdom of the Crowd to Address Societal Challenges: revisiting the Knowledge Reuse for Innovation Process through Analytics. J Association for Information Systems 21 (5), 1128–1152.
Harryson, S., Kliknaite, S., Dudkowski, R., 2008. Flexibility in innovation through external learning: exploring two models for enhanced industry-university collaboration. Int J Technology Management 41 (1/2). https://doi.org/10.1504/ijtm.2008.015987.
Hill, B.M., Monroy-Hernández, A., 2012. The Remixing Dilemma: the Trade-Off Between Generativity and Originality. American Behavioral Scientist 57 (5), 643–663. https://doi.org/10.1177/0002764212469359.
Hossain, M., Islam, K.M.Z., 2015. Generating Ideas on Online Platforms: a Case Study of "My Starbucks Idea". Arab Economic and Business Journal 10 (2), 102–111. https://doi.org/10.1016/j.aebj.2015.09.001.
Howe, J., 2006. The rise of crowdsourcing. Wired Magazine 14 (6), 1–4.
Hughes, D.J., Lee, A., Tian, A.W., Newman, A., Legood, A., 2018. Leadership, creativity, and innovation: a critical review and practical recommendations. Leadersh Q 29 (5), 549–569. https://doi.org/10.1016/j.leaqua.2018.03.001.
Hwang, E.H., Singh, P.V., Argote, L., 2019. Jack of All, Master of Some: information Network and Innovation in Crowdsourcing Communities. Information Systems Research 30 (2), 389–410. https://doi.org/10.1287/isre.2018.0804.
Introne, J., Laubacher, R., Olson, G., Malone, T., 2011. The Climate CoLab: large scale model-based collaborative planning. In: Paper presented at the 2011 International Conference on Collaboration Technologies and Systems (CTS).
Janz, B.D., Prasarnphanich, P., 2003. Understanding the Antecedents of Effective Knowledge Management: the Importance of a Knowledge-Centered Culture. Decision Sciences 34 (2), 351–384. https://doi.org/10.1111/1540-5915.02328.
Jiang, J., Wang, Y., 2020. A theoretical and empirical investigation of feedback in ideation contests. Production and Operations Management 29 (2), 481–500. https://doi.org/10.1111/poms.13127.
Johnson, J.S., Fisher, G.J., Friend, S.B., 2019. Crowdsourcing Service Innovation Creativity: environmental Influences and Contingencies. J Marketing Theory and Practice 27 (3), 251–268. https://doi.org/10.1080/10696679.2019.1615842.
Kittur, A., 2010. Crowdsourcing, collaboration and creativity. XRDS: Crossroads, The ACM Magazine for Students 17 (2), 22–26. https://doi.org/10.1145/1869086.1869096.
Kristensson, P., Magnusson, P.R., Matthing, J., 2002. Users as a Hidden Resource for Creativity: findings from an Experimental Study on User Involvement. Creativity and Innovation Management 11 (1), 55–61. https://doi.org/10.1111/1467-8691.00236.
Kyriakou, H., Nickerson, J.V., Sabnis, G., 2017. Knowledge Reuse for Customization: metamodels in an Open Design Community for 3D Printing. MIS Quarterly 41 (1), 315–322.
Lakhani, K.R., Boudreau, K.J., Loh, P.R., Backstrom, L., Baldwin, C., Lonstein, E., Guinan, E.C., 2013. Prize-based contests can provide solutions to computational biology problems. Nat Biotechnol 31 (2), 108–111. https://doi.org/10.1038/nbt.2495.
Landauer, T.K., 2007. LSA as a theory of meaning. In: Handbook of Latent Semantic Analysis. Lawrence Erlbaum Associates Publishers, Mahwah, NJ, US, pp. 3–34.
Landauer, T.K., Dumais, S.T., 1997. A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychol Rev 104 (2), 211–240. https://doi.org/10.1037/0033-295X.104.2.211.
London Jr., J.P., 2019. Creativity and Information Systems: A Theoretical and Empirical Investigation of Creativity in IS. Ph.D. dissertation, Clemson University, Ann Arbor. ProQuest Dissertations & Theses Global database (22584721).
Maher, M.L., 2011. Design Creativity Research: from the Individual to the Crowd. In: Paper presented at Design Creativity 2010. London.
Majchrzak, A., Faraj, S., Kane, G.C., Azad, B., 2013. The contradictory influence of social media affordances on online communal knowledge sharing. J Computer-Mediated Communication 19 (1), 38–55. https://doi.org/10.1111/jcc4.12030.
Majchrzak, A., Malhotra, A., 2013. Towards an information systems perspective and research agenda on crowdsourcing for innovation. J Strategic Information Systems 22 (4), 257–268. https://doi.org/10.1016/j.jsis.2013.07.004.
Malhotra, A., Majchrzak, A., 2019. Greater associative knowledge variety in crowdsourcing platforms leads to generation of novel solutions by crowds. J Knowledge Management 23 (8), 1628–1651. https://doi.org/10.1108/JKM-02-2019-0094.
Malhotra, A., Majchrzak, A., Bonfield, W., Myers, S., 2020. Engaging customer care employees in internal collaborative crowdsourcing: managing the inherent tensions and associated challenges. Hum Resour Manage 59 (2), 121–134. https://doi.org/10.1002/hrm.21952.
Malone, T.W., Nickerson, J.V., Laubacher, R.J., Fisher, L.H., de Boer, P., Han, Y., Towne, W.B., 2017. Putting the Pieces Back Together Again. In: Paper presented at the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW '17. Portland, Oregon, USA.
Martínez-Climent, C., Mastrangelo, L., Ribeiro-Soriano, D., 2020. The knowledge spillover effect of crowdfunding. Knowledge Management Research & Practice 1–11. https://doi.org/10.1080/14778238.2020.1768168.
Mazzola, E., Piazza, M., Acur, N., Perrone, G., 2020. Treating the crowd fairly: increasing the solvers' self-selection in idea innovation contests. Industrial Marketing Management 91, 16–29. https://doi.org/10.1016/j.indmarman.2020.07.019.
Mileski, J., Gharehgozli, A., Ghoram, L., Swaney, R., 2018. Cooperation in developing a disaster prevention and response plan for Arctic shipping. Mar Policy 92, 131–137. https://doi.org/10.1016/j.marpol.2018.03.003.
Miller, G.A., 1995. WordNet: a lexical database for English. Commun ACM 38 (11), 39–41. https://doi.org/10.1145/219717.219748.
Morris, R.R., Dontcheva, M., Finkelstein, A., Gerber, E., 2013. Affect and creative performance on crowdsourcing platforms. In: Paper presented at the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.
Natalicchio, A., Messeni Petruzzelli, A., Garavelli, A.C., 2017. Innovation problems and search for solutions in crowdsourcing platforms – a simulation approach. Technovation 64-65, 28–42. https://doi.org/10.1016/j.technovation.2017.05.002.
Nishikawa, H., Schreier, M., Ogawa, S., 2013. User-generated versus designer-generated products: a performance assessment at Muji. Int J Research in Marketing 30 (2), 160–167. https://doi.org/10.1016/j.ijresmar.2012.09.002.
Nonaka, I., Konno, N., 1998. The Concept of "Ba": building a Foundation for Knowledge Creation. Calif Manage Rev 40 (3), 40–54. https://doi.org/10.2307/41165942.
Nonaka, I., Takeuchi, H., 1995. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press.
Ogink, T., Dong, J.Q., 2019. Stimulating innovation by user feedback on social media: the case of an online user innovation community. Technol Forecast Soc Change 144, 295–302. https://doi.org/10.1016/j.techfore.2017.07.029.
Ohanian, R., 1990. Construction and Validation of a Scale to Measure Celebrity Endorsers' Perceived Expertise, Trustworthiness, and Attractiveness. J Advert 19 (3), 39–52.
Piezunka, H., Dahlander, L., 2018. Idea Rejected, Tie Formed: organizations' Feedback on Crowdsourced Ideas. Academy of Management J 62 (2), 503–530. https://doi.org/10.5465/amj.2016.0703.
Poetz, M.K., Schreier, M., 2012. The Value of Crowdsourcing: can Users Really Compete with Professionals in Generating New Product Ideas? J Product Innovation Management 29 (2), 245–256. https://doi.org/10.1111/j.1540-5885.2011.00893.x.
Purcell, A.T., Gero, J.S., 1996. Design and other types of fixation. Design Studies 17 (4), 363–383. https://doi.org/10.1016/S0142-694X(96)00023-3.
Ren, J., 2011. Exploring the process of web-based crowdsourcing innovation. In: Paper presented at the 17th Americas Conference on Information Systems. Detroit, Michigan.
Ren, J., Nickerson, J.V., Mason, W., Sakamoto, Y., Graber, B., 2014. Increasing the crowd's capacity to create: how alternative generation affects the diversity, relevance and effectiveness of generated ads. Decis Support Syst 65, 28–39. https://doi.org/10.1016/j.dss.2014.05.009.
Renard, D., Davis, J.G., 2019. Social interdependence on crowdsourcing platforms. J Bus Res 103, 186–194. https://doi.org/10.1016/j.jbusres.2019.06.033.
Resnick, M., Silverman, B., Kafai, Y., Maloney, J., Monroy-Hernández, A., Rusk, N., Silver, J., 2009. Scratch. Commun ACM 52 (11), 60–67. https://doi.org/10.1145/1592761.1592779.
Riedl, C., Seidel, V.P., 2018. Learning from mixed signals in online innovation communities. Organization Science 29 (6), 1010–1032. https://doi.org/10.1287/orsc.2018.1219.
Roth, Y., Pétavy, F., Céré, J., 2015. The State of Crowdsourcing in 2015. Retrieved from https://en.eyeka.com/resources/reports.
Scaringella, L., Radziwon, A., 2018. Innovation, entrepreneurial, knowledge, and business ecosystems: old wine in new bottles? Technol Forecast Soc Change 136, 59–87. https://doi.org/10.1016/j.techfore.2017.09.023.
Schenk, E., Guittard, C., Pénin, J., 2019. Open or proprietary? Choosing the right crowdsourcing platform for innovation. Technol Forecast Soc Change 144, 303–310. https://doi.org/10.1016/j.techfore.2017.11.021.
Schiele, H., 2010. Early supplier integration: the dual role of purchasing in new product development. R&D Management 40 (2), 138–153. https://doi.org/10.1111/j.1467-9310.2010.00602.x.
Shadish, W., Cook, T.D., Campbell, D.T., 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, Boston.


Steils, N., Hanine, S., 2019. Recruiting valuable participants in online IDEA generation: the role of brief instructions. J Bus Res 96, 14–25. https://doi.org/10.1016/j.jbusres.2018.10.038.
von Hippel, E., Krogh, G.v., 2003. Open Source Software and the "Private-Collective" Innovation Model: issues for Organization Science. Organization Science 14 (2), 209–223. https://doi.org/10.1287/orsc.14.2.209.14992.
Wang, K., Nickerson, J., Sakamoto, Y., 2018. Crowdsourced idea generation: the effect of exposure to an original idea. Creativity and Innovation Management. https://doi.org/10.1111/caim.12264.
Wang, K., Nickerson, J.V., 2017. A literature review on individual creativity support systems. Comput Human Behav 74, 139–151. https://doi.org/10.1016/j.chb.2017.04.035.
Wellington, J., Sikes, P., 2006. 'A doctorate in a tight compartment': why do students choose a professional doctorate and what impact does it have on their personal and professional lives? Studies in Higher Education 31 (6), 723–734. https://doi.org/10.1080/03075070601004358.
Winston, A.S., Blais, D.J., 1996. What Counts as an Experiment? A Transdisciplinary Analysis of Textbooks, 1930-1970. Am J Psychol 109 (4), 599–616. https://doi.org/10.2307/1423397.
Woodman, R.W., Sawyer, J.E., Griffin, R.W., 1993. Toward a Theory of Organizational Creativity. The Academy of Management Review 18 (2), 293–321. https://doi.org/10.2307/258761.
Yang, J.T., 2007. The impact of knowledge sharing on organizational learning and effectiveness. J Knowledge Management 11 (2), 83–90. https://doi.org/10.1108/13673270710738933.
Yu, L., Nickerson, J.V., 2011. Cooks or cobblers? Crowd creativity through combination. In: Paper presented at the SIGCHI Conference on Human Factors in Computing Systems. Vancouver, BC, Canada.
Yu, X., Shi, Y., Zhang, L., Nie, G., Huang, A., 2014. Intelligent Knowledge Beyond Data Mining: influences of Habitual Domains. Communications of the Association for Information Systems 34, 53.
Zhang, S., Pan, S.L., Ouyang, T.H., 2020. Building social translucence in a crowdsourcing process: a case study of Miui.com. Information & Management 57 (2), 103172. https://doi.org/10.1016/j.im.2019.103172.
Zhao, Y., Zhu, Q., 2014. Evaluation on crowdsourcing research: current status and future direction. Information Systems Frontiers 16 (3), 417–434. https://doi.org/10.1007/s10796-012-9350-4.
Zheng, H., Li, D., Hou, W., 2011. Task Design, Motivation, and Participation in Crowdsourcing Contests. Int J Electronic Commerce 15 (4), 57–88. https://doi.org/10.2753/JEC1086-4415150402.
Zhu, H., Kock, A., Wentker, M., Leker, J., 2019. How does online interaction affect idea quality? The effect of feedback in firm-internal idea competitions. J Product Innovation Management 36 (1), 24–40. https://doi.org/10.1111/jpim.12442.

Jie Ren is an Assistant Professor of Information Systems at the Gabelli School of Business at Fordham University. She strives to understand the business impact of collective online behaviors, that is, how the crowd helps organizations innovate, market, and make financial decisions. Specifically, she studies crowdsourcing, online reviews, and social media. Her research has been published in leading IS journals such as the European Journal of Information Systems, Decision Support Systems, and the Journal of the Association for Information Science and Technology, as well as in leading IS conference proceedings such as the International Conference on Information Systems and the Americas Conference on Information Systems. She often presents her research not only at conferences and workshops but also at other universities as invited talks.

Yue Han is an Assistant Professor of Information Systems at the Madden School of Business at Le Moyne College. Her research examines collective intelligence in online communities, discovering how people reuse knowledge for innovation. She also studies crowdsourcing creativity and information diffusion in social networks. She has published and presented her research at major IS conferences, such as the International Conference on Information Systems, the ACM Conference on Computer-Supported Cooperative Work and Social Computing, and the ACM Collective Intelligence Conference.

Yegin Genc is an Assistant Professor of Information Systems at the Seidenberg School of Computer Science and Information Systems at Pace University. His research interests include understanding digital innovation and making sense of unstructured data. He holds an M.S. in Software Engineering from the University of Central Missouri and a Ph.D. in Information Management from Stevens Institute of Technology.

William Yeoh is the director of the IBM Center of Excellence in Business Analytics at Deakin University, Australia. He received his PhD from the University of South Australia. His research is supported by various funding bodies and has appeared in high-tier journals and the most competitive IS conferences (e.g., ICIS). His mentored team was crowned World Champion at the 2016 IBM Watson Analytics Global Competition held in Las Vegas. He was the recipient of the IBM Faculty Award and the Australian ICT Educator of the Year Gold Award (awarded by the Australian Computer Society). He is also the Editor-in-Chief Emeritus of the International Journal of Business Intelligence Research.

Aleš Popovič is Professor of Information Systems at NEOMA Business School, France, and the School of Business and Economics at the University of Ljubljana, Slovenia. His research interests include understanding IS value, success, and related business process change, both within and between organizations. He has published his research in a variety of academic journals, such as the Journal of the Association for Information Systems, The Journal of Strategic Information Systems, Decision Support Systems, Expert Systems with Applications, Information Systems Frontiers, Government Information Quarterly, and the Journal of Business Research, among others.

