
Evaluation and Program Planning 102 (2024) 102366


Evaluation in design science: A framework to support project studies in the context of University Research Centres
José da Assunção Moutinho a,b,*, Gabriela Fernandes c, Roque Rabechini Jr. b

a State University of Rio de Janeiro – UERJ, Rua São Francisco Xavier, 524 - Maracanã, 20550-013 Rio de Janeiro, RJ, Brazil
b University Nove de Julho – UNINOVE, Rua Vergueiro, 235/249 - Liberdade, 01156-080 São Paulo, SP, Brazil
c University of Coimbra, CEMMPRE, Pólo II, Rua Luis Reis Santos, 3030-788 Coimbra, Portugal

A R T I C L E  I N F O

Keywords:
Design science
Design science research
Framework
Project studies
Delphi method
Project management

A B S T R A C T

This paper proposes a discussion of the evaluation of an artefact developed under the Design Science paradigm using the Delphi method. It evaluates the Ecosystem framework of University Research Centres in Project Studies against a set of criteria pre-established in the literature. The Delphi method was implemented on an electronic platform and involved twenty-one participants: academics, practitioners, and PhD candidates in the field of project management. It reached consensus and stability in two rounds: the results indicate consensus among the participants on the applicability, novelty, simplicity, completeness, fidelity to modelled phenomena, consistency and internal coherence, scalability, flexibility, interest, elegance, and reusability criteria. Usability was the only criterion that did not attain the predefined percentage of consensus among the participants (70%). Given the framework's characteristics, the Delphi participants indicated the need to produce complementary guidelines for its implementation.

* Corresponding author at: State University of Rio de Janeiro – UERJ, Rua São Francisco Xavier, 524, Maracanã, 20550-013 Rio de Janeiro, RJ, Brazil.
E-mail address: moutinho_pmp@yahoo.com.br (J.A. Moutinho).

https://doi.org/10.1016/j.evalprogplan.2023.102366
Received 5 October 2022; Received in revised form 10 July 2023; Accepted 28 August 2023; Available online 29 August 2023
0149-7189/© 2023 Elsevier Ltd. All rights reserved.

1. Introduction

The need to produce impactful research in organisations has been an ongoing concern for numerous scholars (Carton & Mouricou, 2017; Kieser, Nicolai, & Seidl, 2015). Unfortunately, the project management field is not exempt from this concern (Brunet, 2021; Walker et al., 2008). What can be perceived is that academic research has contributed little to the reduction of the gap between theory and practice (Konstantinou, 2015). A possible limiting factor for this state of affairs is the choice of traditional research paradigms in project management, which lead to the production of knowledge that is not always directly applicable to the real-world problems of organisations (Clegg et al., 2018; Söderlund & Maylor, 2012; Walker & Lloyd-Walker, 2016).

One way to address these constraints is by considering the Design Science paradigm. This paradigm focuses on artefacts produced or invented by humans, or those influenced by human intervention. Design Science is concerned with how things should be to achieve a specific goal, whether that is solving a known problem or designing something entirely new (Simon, 1996). Design Science is a problem-solving paradigm that focuses on providing prescriptive solutions that serve human purposes through the production of artefacts (March & Smith, 1995). Design Science emphasises the construction and accumulation of knowledge and the learning process (Akoka et al., 2023; Reining et al., 2022; Vom Brocke et al., 2020) rather than the discovery of immutable knowledge (Kuechler & Vaishnavi, 2008; Vries et al., 2013). Its application covers various fields, including project management (Hofmann et al., 2020; Secundo et al., 2022). As an inherent characteristic, research developed under the Design Science paradigm highlights the importance of rigour and relevance (Romme, 2003; Van Aken & Romme, 2009). This research is therefore operationalised through the Design Science Research (DSR) method.

A method oriented to prescribing solutions to real-world problems, DSR aims, from the understanding of a given problem, to design, develop and evaluate artefacts that enable the modification of contexts for the better (Baskerville et al., 2018; Manson, 2006; Nagle et al., 2022; Peffers et al., 2007). Indeed, research developed using the DSR method should evidence the artefact's usefulness in solving real-world problems (Tremblay et al., 2010). Hence, evaluation is considered crucial for scientific research in DSR to rigorously confirm the relevance of an artefact to the practical sphere (Peffers et al., 2012; Sonnenberg & vom Brocke, 2012). It is a critical but fundamental phase that, when carefully conducted, ensures, in addition to the rigour of the research, feedback for further developments (Tremblay et al., 2010).


Depending on characteristics such as the type of artefact, the functional purpose and the evaluation paradigm concerning its development, the evaluation can be made using various methods and techniques (Venable et al., 2016). In dealing with artificial and ex-post evaluation strategies, previous studies have used the Delphi method (e.g., Coetzee, 2019; Ebel et al., 2022), as recommended by Peffers et al. (2012). The central characteristic of the Delphi method is anonymity among participants. The method is systematic and iterative and strives for both the convergent and divergent opinions of a group of experts on a particular issue, by means of several rounds of questionnaires followed by their ensuing feedback.

Given its practical nature, the project management field has frequently used the DSR method for artefact generation, such as for frameworks and their instantiation. Többen and Opdenakker (2022), for example, developed a framework to integrate circularity into the early stages of construction projects, while Secundo et al. (2022) proposed a framework for knowledge visualisation in strategic decision-making in project management. Narazaki et al. (2020) instantiated a framework to guide the integrated use of social media in project management, and Silvius and Schipper (2019) proposed an integrative framework for stakeholder engagement in projects from a sustainable development perspective.

By the same token, Moutinho (2022) used DSR to generate the "Ecosystem of a University Research Centre in Project Studies" framework, hereafter referred to as the URC-PS framework. This collaborative environment brings together academics, students, and practitioners, fully dedicated to co-creating knowledge in Project Studies. The Impact Generation Process, which is a central part of the framework, leads to what Galan-Muros and Davey (2019) have identified as outcomes indirectly experienced by individuals, organisations and society, or unintended outcomes that influence stakeholders, including society (Albats et al., 2018). Indeed, collaborative research developed within the URC-PS generates several impacts, i.e., positive or negative changes in macro-level aspects, including societal aspects (Siemieniako et al., 2021). The societal impact of collaborative research developed by the URC promotes social change by creating knowledge grounded in scientific knowledge (Bornmann, 2013; Fini et al., 2018).

This methodological paper intends to fill the existing gap in the specialised literature on the evaluation of artefacts developed with the Design Science Research method (Venable et al., 2012). Therefore, this study aims to evaluate the URC-PS framework in the light of criteria pre-established in the literature: applicability, novelty, simplicity, completeness, elegance, usability, fidelity to modelled phenomena, consistency and internal coherence, scalability, flexibility, interest, and reusability (Bondi, 2000; Davis, 1989; Gill & Hevner, 2013; ISO/IEC/IEEE, 2010; March & Smith, 1995; Prat et al., 2015; Venable et al., 2012).

The Delphi method was applied to achieve the research aim and involved twenty-one participants: eleven academics, five practitioners, and five PhD candidates. After two rounds, there was consensus of agreement on all criteria except usability. The results indicate that the effective implementation of the framework, and its consequent institutionalisation, requires the development of a practical guide detailing the steps for implementing the URC-PS framework.

The paper is structured as follows. The following section provides the theoretical background: the development of the URC-PS framework using DSR, artefact evaluation, and the Delphi method. Next, the research methodology identifies the objectives for evaluation, chooses the evaluation strategy, determines the properties to evaluate, and designs the evaluation episodes. Following this, the findings of the two rounds of the Delphi method are presented and discussed in light of the literature. Finally, the study delivers conclusions, along with its methodological contributions.

2. Background

2.1. Design science research: URC-PS framework development

Underpinned by the Theory of Organisational Knowledge Creation (Nonaka, 1991) and its concept of 'Ba', representing the shared space where actors create knowledge through interaction (Nonaka & Toyama, 2007), Moutinho, Fernandes and Rabechini (2023) generated the URC-PS Ecosystem framework (Fig. 1). The framework represents a space for academics, students and practitioners to discuss theoretical and practical issues in Project Studies with a view to producing impactful research within organisations. The framework provides a holistic, integrative and procedural approach. In the holistic view, the framework considers the context in which it is embedded; in the integrative view, it includes the management and governance processes relative to the circumstances; and in the procedural view, it describes the macro-elements (Project Studies, Impact Generation Process, Circumstances, Governance & Management, and Context) and the elements (Project Design, Level of Analysis, Strategy, Governance Structure, among others) that lead to the impact generation of co-created knowledge.

As indicated in the introductory section of this paper, the URC-PS framework was developed using DSR, following precisely the steps defined by Peffers et al. (2007). Table 1 illustrates the actions carried out. The idea of creating the framework arose in the academic environment of the Postgraduate Programme in Project Management (PPPM) at Nove de Julho University, from the need to develop research into Project Studies with greater impact on and benefits for organisations (Moutinho, 2022). This understanding corroborates the literature on project management, which suggests creating space for the co-creation of knowledge in research involving academics and practitioners (Berggren & Söderlund, 2011; Söderlund & Maylor, 2012). Since the early stages of its conception, there has been consensus among PPPM academics as to the need for the engagement of practitioners in the co-creation of knowledge in all stages of research, from the identification phase, which defines problems of a practical nature, through to the institutionalisation of the research results. The URC-PS framework appears to provide a means to support the production of research with the required academic rigour, practical relevance, and impact on organisations.

2.2. Artefact evaluation in DSR: rigour and relevance

The importance of evaluating artefacts produced under the Design Science paradigm is consensual and widely supported by the literature (Peffers et al., 2012; Tremblay et al., 2010; Venable et al., 2012, 2016). Even so, guidance on what would be desirable, acceptable, or customary to include in evaluation is still a point for debate. For Hevner et al. (2004), artefacts should be evaluated for relevant quality attributes based on the requirements of the context of their implementation, and should expose evidence that the artefacts can solve real problems. They present five ways to evaluate an artefact: observational, analytical, experimental, testing and descriptive. For each form of evaluation, Hevner et al. (2004) propose associated methods and techniques. Venable et al. (2016) propose a framework for evaluation in Design Science comprising four steps: explain the evaluation aims, choose the evaluation strategy or strategies, determine the properties to be evaluated, and design the individual evaluation episode(s). The framework provides a strategic view of evaluation in DSR according to two dimensions: the functional aim of the evaluation (formative 'ex-ante' or summative 'ex-post') and the evaluation paradigm (naturalistic or artificial). Based on the strategy selected, the framework guides the user to possible appropriate methods.

[Fig. 1. URC-PS Ecosystem Framework. The figure depicts the macro-elements and their elements: Governance & Management (strategy, benefits management, assessment, students and alumni management, governance structure, leadership, communication, knowledge management, stakeholder management, agreements, internationalisation, e-platform); the Impact Generation Process chain of Partners, Resources, Activities, Outputs, Outcomes and Impacts; Project Studies (project design, level of analysis); Circumstances (drivers such as complementarity, trust, motivation, personal relationships and previous experiences, and barriers such as orientation asymmetry, people's availability, confidentiality and staff with multiple roles); and Context (environmental, organisational and individual factors). Source: Moutinho, Fernandes and Rabechini (2023).]

Naturalistic evaluation explores the artefact in a natural environment, mainly conducted within an organisation; as such, it is always empirical. The dominant interpretive paradigm provides a naturalistic evaluation of DSR with the benefit of more substantial internal validity (Larsen et al., 2020). Naturalistic evaluation methods typically include case studies, field studies, field experiments, surveys, ethnography, phenomenology, hermeneutic methods, and action research (Venable et al., 2016).

Artificial evaluation, on the other hand, can be empirical or non-empirical. It is usually positivist, but interpretive techniques can also be used to better understand why an artefact performs. In addition, the dominant scientific/rational paradigm brings to artificial evaluation the benefit of far sounder scientific reliability by means of its improved repeatability and falsifiability (Baskerville et al., 2015). Artificial evaluation includes laboratory experiments, simulations, theoretical arguments, mathematical proofs, and criterion-based analysis. One way to implement criterion-based analysis is using the Delphi method, as described below.

Table 1
Development evolution of the URC-PS framework.

DSR step (Peffers et al., 2007) | Method | Data collection and analysis | Results
Identify problem and motivate | Expert-based qualitative study | Focus group and content analysis | Issues to be overcome for research projects to have a more significant impact on organisations
Identify problem and motivate | Systematic literature review | Bibliographic, integrative, and interpretative analysis | Characterisation of the collaborative environment formed by URCs and external actors (Moutinho & Rabechini, 2021)
Define objectives of the solution | Expert-based qualitative study | Focus group and content analysis | Definition of artefact functionalities
Design and development | Systematic literature review | Bibliographic, integrative, and interpretative analysis | Search for elements that could solve the research problem; conceptual proposal of the URC-PS Ecosystem framework (Moutinho, Rabechini, & Fernandes, 2023)
Design and development | Expert-based qualitative study | Semi-structured interviews and thematic analysis | Development of the URC-PS Ecosystem framework (Moutinho, Fernandes, & Rabechini, 2023)
Demonstration | Expert-based qualitative study | Semi-structured interviews and content analysis | Description of the scenario of usage for the URC-PS Ecosystem framework in the Postgraduate Programme in Project Management

2.3. The Delphi method: the search for consensus

The Delphi method was developed in the 1950s by the RAND Corporation for the purposes of United States military forecasts. At the time, RAND Corporation scientists investigated the scientific use of expert opinions in groups, demonstrating their superiority when compared with individual opinions (Landeta, 2006). In the following decade, the method became popular and was applied in several other fields (Linstone & Turoff, 2002). The Delphi method has been widely used in situations where theoretical knowledge is insufficient, where there is no accurate historical information, or where new ideas need to be stimulated (Wright & Giovinazzo, 2006).

The Delphi method is a systematic process based on communication, centred on anonymity and multiple iterations. It gathers participants' opinions on a complex problem via consecutive rounds of questionnaire replies and controlled collective feedback in search of consensus (Young & Jamieson, 2001). Authors such as Linstone and Turoff (2002) suggest a maximum of three rounds, given that further rounds tend to reveal little change.

Since its development, the Delphi method has undergone several changes so as to be adaptable to different scenarios. Nevertheless, its original concepts form the basis for the variations that have been developed and used, given the method's inherent flexibility and the specific needs of each study (Linstone & Turoff, 2002).

The definition of the specialists participating in a Delphi study is fundamental. Heterogeneous groups tend to lead to solutions of higher quality and acceptance, given that the absence of a variety of information to be shared does not contribute to the gains of a procedure of this kind (Powell, 2003). The inclusion of academics and practitioners in the panel may be a solution that helps to fulfil these criteria (Grisham, 2009). The number of participants varies considerably from study to study (Hsu & Sandford, 2007). However, it is recommended that the number should not be less than ten (Delbecq et al., 1975). For Grisham (2009), a higher number of participants generates massive amounts of data, making administration and analysis very complex, while fewer than ten participants compromise the results insofar as effective consensus and the relevance of the information obtained are concerned.

The questionnaires used during the first round may be open, allowing participants to express themselves freely on the topic under discussion, or closed, starting in a more directed way (Powell, 2003). Whatever the format, it is essential that participants can comment on their answers or even argue in favour of their positions (Kayo & Securato, 1997). Once responses have been analysed and comments synthesised, the Delphi mediator is responsible for providing feedback to the participants (given that participants do not communicate directly), thus laying the groundwork for the next round.

Consensus is considered to be a high concentration in the distribution of responses to a particular item (Von der Gracht, 2012). Stability is characterised by the absence of new contributions and little variation in responses between successive rounds (Osborne et al., 2003). Although these are two subjective criteria (Powell, 2003), the literature considers that consensus has been attained when, for example, more than two-thirds of the participants rated an item with a four or a five on a five-point Likert scale (Osborne et al., 2003), while stability is achieved when there is a low coefficient of variation in responses between rounds (Dajani et al., 1979; Von der Gracht, 2012).
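To make these two criteria concrete, the following minimal sketch shows one way of operationalising them for a single item. The two-thirds cut-off follows Osborne et al. (2003) and the coefficient of variation follows Dajani et al. (1979) and Von der Gracht (2012); the function names, the stability threshold of 0.2 and the sample ratings are illustrative assumptions only, not part of the cited studies.

```python
import statistics

def consensus_reached(ratings, cutoff=2 / 3):
    """Consensus: more than two-thirds of participants rated the item 4 or 5."""
    return sum(1 for r in ratings if r >= 4) / len(ratings) > cutoff

def coefficient_of_variation(ratings):
    """Relative dispersion of the ratings; lower values signal agreement."""
    return statistics.pstdev(ratings) / statistics.mean(ratings)

def stable(previous_round, current_round, max_cv_change=0.2):
    """Stability: the coefficient of variation barely moves between rounds."""
    change = abs(coefficient_of_variation(current_round)
                 - coefficient_of_variation(previous_round))
    return change <= max_cv_change

# Hypothetical ratings for one item on a five-point Likert scale.
round_1 = [5, 4, 4, 3, 5, 4, 2, 4, 5, 4, 3, 4]
round_2 = [5, 4, 4, 4, 5, 4, 3, 4, 5, 4, 3, 4]
print(consensus_reached(round_2), stable(round_1, round_2))
```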
3. Research methodology

The research philosophy is anchored in the social constructivist paradigm, because it recognises the contribution of each individual in the context of the development of group opinion (Denzin & Lincoln, 2018). This study draws on the Framework for Evaluation in Design Science (FEDS) (Venable et al., 2016), as presented in Fig. 2. It includes a two-dimensional characterisation of DSR evaluation episodes: the functional purpose of the evaluation (formative or summative) and the paradigm of the evaluation (artificial or naturalistic). According to the taxonomy proposed by Venable et al. (2012), the evaluation performed in this study can be classified as summative: it occurred after the development of the artefact and followed the artificial approach, involving interpretative techniques to evaluate the relevant quality attributes of the URC-PS framework.

The methodological approach considers the nature of the research and uses the Delphi method (Brady, 2015), which is traditionally used to seek consensus while maintaining the respondents' anonymity (Dalkey & Helmer, 1963). It allows barriers to be overcome, such as the persuasive ability and psychological dominance of some members of the Delphi panel and a natural reluctance to express unpopular opinions (Yousuf, 2007).

The Delphi method was materialised using the welphi e-platform (www.welphi.com), as in Costa et al. (2019). The interface, illustrated in Fig. 3, combines multiple tools, such as participant management (individual registration, access control and sending emails), round management (round parameters, individual statistics, and round approval rules), and question management (criteria definition and scale levels).

At the design phase of the study, eleven criteria were initially defined to evaluate the URC-PS framework. These criteria support the artificial form of evaluation and align with interpretative techniques. A pre-test conducted with two academics with experience in artefact evaluation led to the inclusion of one additional criterion, resulting in a total of twelve criteria for evaluation. The sample of participants was defined by convenience (Saunders et al., 2019), since they needed prior knowledge of the topics under discussion (Powell, 2003; Trevelyan & Robinson, 2015).

[Fig. 2. Methodological approach - FEDS (Venable et al., 2016). The figure maps the four FEDS steps onto this study: identify the goals of the evaluation (summative and artificial evaluation of the URC-PS Ecosystem framework); choose the evaluation strategy (Delphi method on an e-platform with academics, practitioners and students participating); determine the properties to evaluate (the elements and macro-elements of the URC-PS Ecosystem framework); and design the individual evaluation episode(s) (customised welphi environment over n rounds to reach consensus on the criteria among participants), leading to the evaluation of the relevant quality attributes of the framework.]


[Fig. 3. Illustration from the welphi website: the twelve criterion statements (applicability, novelty, simplicity, completeness, elegance, usability, fidelity to modelled phenomena, consistency and internal coherence, scalability, flexibility, interest, and reusability) presented with the five-point response scale from "Strongly disagree" to "Strongly agree".]

As shown in Table 2, the participant set does not have homogeneous characteristics: indeed, a certain level of diversity in the demographic characteristics of the participants, and in elements related to professional experience, is beneficial (Förster & von der Gracht, 2014; Hussler et al., 2011; Powell, 2003). For the participants who were practitioners, however, there was one requirement: a previous relationship with the PPPM.

Table 2
Participant characterization.

Roles | Home organisation
Academics 52% | Higher Education Institution 52%
Practitioners 24% | Company 38%
PhD Candidates 24% | Government 10%

Years of experience in project management | Educational level
Over 15: 52% | PhD 71%
5-15: 43% | Master's degree 29%
Under 5: 5% |

Table 3
Relevant quality attributes of the URC-PS framework.

Criterion | Description | Reference
Applicability | The URC-PS framework applies to knowledge co-creation in a collaborative context. | Gill and Hevner (2013)
Novelty | The URC-PS framework is a novel proposal for Project Studies and knowledge creation. | Gill and Hevner (2013)
Simplicity | The URC-PS framework comprises the necessary and sufficient number of elements and their relationships and is free of complex concepts. | ISO/IEC/IEEE (2010)
Completeness | The URC-PS framework includes all the essential elements and the relationships between the elements. | Prat et al. (2015)
Elegance | The framework clearly represents the URC-PS. | March and Smith (1995)
Usability | The URC-PS framework is easy to use. | Davis (1989)
Fidelity to modelled phenomena | The URC-PS framework corresponds to the modelled reality. | Prat et al. (2015)
Consistency and internal coherence | The URC-PS framework is uniform, standardised, and free of contradictions between its elements. | ISO/IEC/IEEE (2010)
Scalability | The URC-PS framework can support growing numbers of elements without losing the characteristics that add value to it. | Bondi (2000)
Flexibility | The URC-PS framework can be tailored or changed to meet the needs of specific contexts. | Gill and Hevner (2013)
Interest | The URC-PS framework is apt to be spread in the project studies field. | Gill and Hevner (2013)
Reusability | The URC-PS framework can lead to the creation of a new artefact. | Prat et al. (2015)

Although the URC-PS framework has not yet been applied in the real world (for example, in the PPPM), its relevant quality attributes have been evaluated by PPPM academics, PPPM students, alumni, and practitioners. The evaluation took place in April 2022. Once the evaluation had begun, the participants first viewed an explanatory video of the URC-PS framework (9'30" long), whereupon they evaluated the framework according to the twelve criteria defined above.

The method was designed to stop after no more than three rounds, or upon a consensus ≥ 70% for each criterion, or upon stability (variation ≤ 20%) between two consecutive rounds combined with the absence of new relevant contributions. In each round, participants had the opportunity to indicate their level of agreement with the statement made for each criterion (Table 3) using a five-point Likert scale: "strongly disagree"; "disagree"; "neither disagree nor agree"; "agree"; and "strongly agree". On choosing one of the first three points on the scale, each participant was required to indicate the reasons for their disagreement; participants were also allowed to include comments when they agreed with the statement. Once the first round was over, the participants were presented with the statistical results and comments from the previous round; in the second round, they had the opportunity to maintain or revise their answers.
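The stopping logic just described (at most three rounds; consensus of at least 70% for every criterion; or stability, with a variation of no more than 20% between consecutive rounds) can be sketched as follows. This is an illustration under stated assumptions only: the function, its parameters, and the aggregation of stability into a single share of changed opinions are ours, not features of the welphi platform.

```python
def delphi_should_stop(round_number, agreement_by_criterion, share_changed,
                       max_rounds=3, consensus_cutoff=0.70,
                       stability_cutoff=0.20):
    """Stopping-rule sketch: stop at the round cap, when every criterion
    reaches the consensus cut-off, or when opinions are stable between
    two consecutive rounds (few participants changed their answers)."""
    if round_number >= max_rounds:
        return True
    if all(a >= consensus_cutoff for a in agreement_by_criterion.values()):
        return True
    return share_changed is not None and share_changed <= stability_cutoff

# After the second round of this study: simplicity reached 77%, usability
# stayed at 52%, and only 3 of the 21 participants changed their opinions.
print(delphi_should_stop(2, {"simplicity": 0.77, "usability": 0.52}, 3 / 21))
```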
The reliability of responses among participants was verified using Cronbach's alpha coefficient (Cronbach, 1951).


The study was concerned about the heterogeneous composition of the sample of participants (including academics, students and practitioners) in relation to the concepts under evaluation, since any homogeneity could lead to low variance between the responses and, consequently, a low level of reliability of the results (Creswell, 2010; Hayes, 1995).

To evaluate the similarity between the academics, practitioners and students, the answers given to the twelve criteria, using the Likert scale, were organised in a contingency table in order to perform the χ2 test (Rana & Singhal, 2015). The hypotheses were defined to test whether or not the participants come from the same population:

H0: Academics, practitioners, and students come from the same population.

H1: Academics, practitioners, and students do not come from the same population.

4. Results

4.1. The participants

Professional postgraduate courses have specific characteristics when compared with academic courses (Jones, 2018). They offer a differentiated form of management education, which aims to contribute to professional practice, providing relevance to organisational practice and academic knowledge (Creaton & Anderson, 2021; Watermeyer & Chubb, 2018). Consequently, the dissertations and theses produced on these courses can result in artefacts (Simon, 1996) incorporated into the everyday life of organisations. Therefore, in addition to academics and students (master's and doctoral), practitioners play an essential role in designing the products generated or institutionalising the products developed.

Thus, twenty-five participants (thirteen academics, six PhD candidates and six practitioners) were invited by email to participate in evaluating the URC-PS framework. All those contacted accepted the invitation to participate and agreed to the process. A consensus level of agreement (strongly agree and partially agree) of 70%, with less than 15% strongly or partially disagreeing, was established. The participants received an e-mail with the link to the welphi site for the URC-PS framework evaluation, along with initial instructions and a deadline of seven days to carry out the evaluation. On the seventh day, nine people had still not evaluated the framework; they received an e-mail extending the deadline by two days. Despite the extension, two academics, one student and one practitioner did not evaluate the framework and were automatically excluded from the process. As a result, twenty-one participants (eleven academics, five PhD candidates and five practitioners) concluded the first evaluation round.

After the first round, the reliability measure of internal consistency, Cronbach's α, was calculated from the responses of the twenty-one participants on the twelve criteria evaluated. The result (α = 0.905) indicated slight variation and a high level of reliability (Pallant, 2001). To check whether academics, practitioners, and students come from the same population, the χ2 test was performed. Given the degrees of freedom of the contingency table (Table 4; df = 8), the critical value for p < 0.001 is χ2 = 26.12, while the calculated value was χ2 = 13.21. As a result, H0 cannot be rejected: the results indicate that academics, practitioners, and students have similarities; in other words, they come from the same population.
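The two checks reported above can be reproduced in a few lines. The sketch below is an illustration assuming NumPy and SciPy: the χ2 test of homogeneity uses the observed frequencies from Table 4 (below), and the Cronbach's α function implements the standard formula for a participants × items matrix (the study's 21 × 12 rating matrix is not reproduced here, so that function is shown for reference only).

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def cronbach_alpha(scores):
    """Standard Cronbach's alpha for a participants x items rating matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Observed frequencies from Table 4 (rows: Likert levels from "strongly
# agree" to "strongly disagree"; columns: academics, students, practitioners).
observed = np.array([
    [55, 27, 32],
    [42, 26, 20],
    [14,  5,  6],
    [20,  2,  2],
    [ 1,  0,  0],
])

statistic, p_value, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(1 - 0.001, df=dof)  # critical value at p < 0.001, df = 8
print(f"chi2 = {statistic:.2f} (df = {dof}), critical = {critical:.2f}")
# chi2 is roughly 13.2 < 26.12, so H0 (a single population) is not rejected.
```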
Table 4 shows that academics had a more critical perception of the relevant quality attributes compared to students and practitioners. This may be due to the proximity of the academics to both the elements and macro-elements that make up the framework, since they come across them in their daily working lives.

Table 4
Frequency distribution of degrees of agreement.

Degree of agreement (Likert scale) | Academics (abs. / rel. freq.) | Students (abs. / rel. freq.) | Practitioners (abs. / rel. freq.)
Strongly Agree | 55 / 41.7% | 27 / 45.0% | 32 / 53.4%
Agree | 42 / 31.8% | 26 / 43.4% | 20 / 33.3%
Neither Agree nor Disagree | 14 / 10.6% | 5 / 8.3% | 6 / 10.0%
Disagree | 20 / 15.1% | 2 / 3.3% | 2 / 3.3%
Strongly Disagree | 1 / 0.8% | 0 / 0.0% | 0 / 0.0%

4.2. First round analysis

Of the twelve evaluation criteria for the URC-PS framework, ten already attained the desired level of agreement (at least 70%) in the first round, as shown in Fig. 4.

[Fig. 4. Agreement level with the criteria used in the URC-PS framework evaluation (radar chart of first-round agreement: applicability 95%, novelty 77%, simplicity 67%, completeness 77%, elegance 90%, usability 52%, fidelity to modelled phenomena 91%, consistency and internal coherence 81%, scalability 72%, flexibility 91%, interest 77%, reusability 95%).]

For each of the criteria evaluated, the results of the first round show that:

• Applicability: 95% of the participants agreed with the applicability of the URC-PS framework to the co-creation of knowledge in collaborative environments. Only one participant partially disagreed, arguing that, in order to agree, it would have been necessary to go into more prescriptive factors.
• Novelty: 77% of the participants agreed. The participants' principal counter-arguments include the following: this is not the first framework with this function (although the URC-PS has different characteristics); there are frameworks proposed by bodies such as PMI, as well as scientific studies that provide opportunities for considerations such as this; the elements presented in the framework are typical, although the compilation of information is interesting; and the framework is not a new proposal but rather an agglomeration of concepts, with a specific purpose, that was already available.
• Completeness: 77% of the participants agreed on the completeness of the URC-PS framework. The arguments of those who disagreed are based on the lack of explicit relationships among the elements of the framework, as well as among the macro-elements. The lack of detailing of the framework's elements was also highlighted as a restriction to its completeness, as was the scope of its solution, given the heterogeneity of the organisations involved.


• Elegance: In the opinion of 90% of the participants, the elegance criterion was achieved, since the framework represents the URC-PS Ecosystem clearly. In the opinion of the two participants who disagreed, the statement was couched in such a way that the excess of words and the lack of clarity of the processes and their flows clouded the issue.
• Fidelity to modelled phenomena: The criterion is present in the framework for 91% of respondents. For those who disagreed, functions such as changes in project scope, manager, and team were not represented.
• Internal consistency/coherence: 81% of the respondents agreed that the framework has internal consistency/coherence. In the opinion of those who disagreed, some points are presented generically and would need to be further explained, as would the interface between the macro-elements.
• Scalability: This criterion obtained 72% agreement. For the participants who disagreed, including new elements may make the framework complex and compromised, or even make its application unfeasible. Additionally, there was a group who neither agreed nor disagreed, either claiming a lack of grounds for a conclusive evaluation or noting that the framework has not been used in a real context.
• Flexibility: The criterion was accepted by 91% of the participants. The two participants who disagreed with the affirmation indicated the need for the incorporation of elements linked to change and configuration management.
• Interest: 77% of the participants agreed that the framework could engender interest among both academics and practitioners. The participants who disagreed stated that there needs to be a specific context with a minimum structure for implementation in order to arouse interest. Similarly, the use of frameworks representing research centres is not common practice, which may hinder people's interest. In any case, dissemination of the URC-PS framework depends on the form of communication and the predisposition of the URC to receive it.
• Reusability: The criterion indicating the framework's reusability potential was consensual for 95% of the participants. Only one participant expressed doubt that the framework could lead to the creation of new artefacts.
• Simplicity: 67% of the participants felt the framework was simple. Those who neither agreed nor disagreed claimed that the macro-element Project Studies should be explored further, with particular reference to specific terminology addressing the research paradigm; that the purpose for which the framework was created (representing a collaborative environment) is naturally complex; that at some point the framework becomes complex given the number of elements; and that the number of relationships between elements and macro-elements is very large, which makes the framework complex.
• Usability: This criterion, according to the participants, had the lowest agreement level (52%), with 42% agreeing and 10% totally agreeing. For 28% of the participants, the framework's use is still unclear; the number of variables and macro-elements may hinder its use; it may perhaps be only partially possible to apply it; and there is a need for additional explanatory material to guide its implementation.

Only two criteria, simplicity and usability, did not reach the minimum percentage (70%) of consensus among the twenty-one participants during the first evaluation round.

4.3. Second round analysis

As soon as the first round was completed, the welphi e-platform was customised for the second round. It took into account the results reached during the first round, notably the percentages of consensus and the participants' comments, and it considered the definition of the approval rules for the second round (degree of consensus ≥ 70%, or variation ≤ 20% compared to the previous round's results). The second round focused on the two criteria that did not reach a consensus in the first round: simplicity and usability. The twenty-one participants received an email informing them of the start of the second round, with a deadline of seven days for its completion. During the second round, each participant had the opportunity to analyse the comments of the others and thus maintain their degree of agreement or change it.

The simplicity criterion attained 77% agreement. The four disagreeing participants argued that it is impossible to say that the framework is free of complex concepts, since some are too exhaustive and abstract; in addition, they could not indicate any missing elements. Even among those who agreed with the framework's simplicity, there was an indication that some items would require greater detailing due to the meaning they may assume in the framework's context. Some participants understood that the excess of elements might be particularly complex visually. However, considering the various macro-elements and the scope of the elements, it would be difficult to produce a more straightforward framework; they stated that the concepts in the framework are quite intelligible, even for people from other fields of expertise.

The usability criterion remained without consensus among the participants, and opinions remained divided. While 52% of the participants agreed that the framework is easy to use, 24% neither disagreed nor agreed, and another 24% disagreed. Those who agreed understood that the framework represents an environment whose core process leads to the impact of co-created knowledge; even so, they highlighted the need for an explanation regarding the flow of the framework. Those who neither agreed nor disagreed pointed to the need to develop complementary material, such as a manual or even a guide for its implementation; for this group of participants, the way the framework should be used is still not completely clear. Those who disagreed claimed that, without the opportunity to use the framework in practice, they could not give their opinion with certainty. They also emphasised the need for a gradual implementation of the elements that make up the Governance and Management macro-element.

Because participants reached a consensus on the simplicity criterion (77%) in the second round, and because of the stability of opinions on the usability criterion between the first and second rounds (52%), coupled with the lack of new relevant contributions, the researchers concluded that a new round would not produce different results. This decision agrees with Gallego et al. (2008), given that after stability is achieved between subsequent rounds, new variations tend to be marginal and do not compensate for the mobilisation and efforts of participants.

5. Discussion and conclusions

The positive results of evaluating the URC-PS framework enhance our understanding of the societal impact of university research (Dotti & Walczyk, 2022). Due to their specific objectives, research conducted by URCs often leads to even more societal impacts (Noe & Alrøe, 2023).

The evaluation of the URC-PS framework is aligned with Tremblay et al. (2010) in its manifestation of evidence that the artefact is endowed with quality attributes that solve real problems. Moreover, as seen in the literature, the Delphi method can be used at different moments of evaluation in the context of artefact development in research using the DSR method, both ex-ante (Henriques et al., 2021; Smits & Van Hillegersberg, 2014) and ex-post (Coetzee, 2019; Ebel et al., 2022), as in this study.

In the present methodological paper, the Delphi method aimed at consensus among participants, as it does in most studies using this method (Diamond et al., 2014). However, some studies use variations of the method, such as the Policy Delphi, where dissensus prevails in the search for a wide range of opinions (Steinert, 2009).


This study restricted itself, a priori, to a maximum of three rounds, as Linstone and Turoff (2002) suggested, given that additional rounds tend to reveal little change and the number of participant dropouts could increase considerably. However, this study provides additional empirical evidence that the recommended number of rounds for a Delphi study is between two and three (Gallego et al., 2008).

The agreement objective for each of the evaluated criteria was defined, a priori, as 70%. This percentage is within the range indicated by the literature, which is between 50% and 97% (Diamond et al., 2014). It is important to emphasise that the alleged arbitrariness of the established percentage does not interfere with the research result. Unlike other studies that may exclude a criterion for not reaching the specified limit, this study merely does not recognise the criterion as consensual in that particular round, taking it on to a new round of evaluation. The convergence rate at the end of the first evaluation round of the URC-PS framework can be attributed to the fact that the relevance cycle was effectively incorporated from the early stages of the study. The generation of the URC-PS framework took into account feedback from several key stakeholders, collected during a focus group on problem identification and motivation and during another on the definition of objectives for a solution, while, later, in the artefact development step, twenty-eight interviews with academics, practitioners and PhD candidates guaranteed appropriate levels of feedback.

Even though there is no consensus on how to give feedback to participants in the Delphi method, variations range between argumentative comments and statistical summaries (Rowe et al., 2005). In this study, both forms of feedback were used. While Rowe and Wright (1996) identified that statistical summaries elicit fewer changes of opinion among participants than argumentative comments, in a post-Delphi questionnaire study, Turnbull et al. (2018) identified that argumentative comments and statistical summaries influence changes of opinion similarly.

Stability between rounds also needs to be analysed (Dajani et al., 1979). In this study, the variation between the first and second rounds was considered acceptable, since only 3 out of 21 participants (14%) changed their opinions - a percentage below the 20% recommended by Novakowski and Wellar (2008). If, on the one hand, the anonymity of the participants is a central feature of the method, reducing the effect of dominant individuals in the group, on the other hand, majority opinions also tend to influence changes of view (Makkonen et al., 2016; Meijering & Tobi, 2018), although such influence also depends on the composition of the group and the perceived importance of the participants (Turnbull et al., 2018).

Applying scientific methods in the evaluation step of artefacts developed under the Design Science paradigm is imperative to legitimise the process of producing an artefact using DSR (Venable et al., 2016). However, even though the literature has explored this crucial component, it still provides limited guidance on its implementation (Peffers et al., 2012). Thus, this study aimed to evaluate the URC-PS Ecosystem framework using the Delphi method in light of pre-defined criteria. In addition to the specific results achieved, the study offers methodological contributions and lessons learned, and provides opportunities for further research.

There are two main methodological contributions: the first to DSR and the second to the Delphi method. In the case of DSR, the study supports using an anonymous qualitative method (Delphi) to conduct artificial summative evaluation, in addition to the already existing methods. The methodological rigour adhered to in the evaluation demonstrated the quality of the knowledge produced by Design Science. For the Delphi method, using an e-platform such as welphi broadens its application possibilities by transposing barriers such as displacement restrictions and the incompatibility of participants' schedules, thus increasing the method's efficiency.

5.1. Lessons learned

The URC-PS framework was evaluated by a set of relevant quality attributes based on the requirements of a context for its future implementation. As a lesson learned, this study demonstrates that stakeholder engagement during all phases of DSR - from problem identification to evaluating the generated artefact - is critical. Its contribution to practice is proven in terms of its relevance to the Postgraduate Programme in Project Management, which acknowledges the artefact's importance as a tool for managing the collaborative environment formed by academics and practitioners to co-create knowledge in Project Studies.

5.2. Limitations and suggestions for further research

The study presents limitations, such as the range of the Likert scale used, which may have influenced changes of opinion and how consensus was reached (Makkonen et al., 2016). The research also does not provide details as to the reasons that motivated participants to change their opinions, although comments made during the second round may give some indication (Turnbull et al., 2018).

With this study, the artefact (the URC-PS Ecosystem framework) is considered evaluated (Venable et al., 2016), given that the application of the Delphi method was conducted as expected and achieved its objective (Brady, 2015). As a result, it is pertinent to consider further research, such as conducting a naturalistic ex-post evaluation, exploring the artefact in the environment of its practical use, and using other research strategies (leading to a possible methodological triangulation), which would further increase its external validity.

The suggestion to develop an implementation guide also deserves special attention, since the framework's implementation tends to involve different actors depending on the context in which the URC is inserted. Developing a user guide would make perfect sense, since the framework is composed of many elements and there are multiple relationships between them.

After the framework's implementation, a longitudinal study is suggested in order to analyse the results arising from the application of the artefact, as well as its impact on the individuals and organisations involved.

In the same vein, as the societal impact assessment of R&D collaborations between universities and industries is still an underexplored topic (Cohen et al., 2022), the evaluation and measurement of the societal impact of collaborative research conducted within the URC-PS context also warrant special attention.

Further investigations may also test the framework in URCs in fields of knowledge other than Project Studies, as well as in numerous other contexts, which will legitimise its generalisation to the "collaborative academic environments" class of problems.

CRediT authorship contribution statement

José da Assunção Moutinho: Conceptualization, Methodology, Formal analysis, Investigation, Writing – original draft, Visualization. Gabriela Fernandes: Resources, Data curation, Writing – review & editing, Supervision, Funding acquisition. Roque Rabechini Jr.: Validation, Resources, Data curation, Writing – review & editing, Supervision, Project administration, Funding acquisition.

Acknowledgments

This research is sponsored by national funds through FCT – Fundação para a Ciência e a Tecnologia, under the projects UIDB/00285/2020 and LA/P/0112/2020, and was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES).


References

Akoka, J., Comyn-Wattiau, I., Prat, N., & Storey, V. C. (2023). Knowledge contributions in design science research: Paths of knowledge types. Decision Support Systems, 166, Article 113898.
Albats, E., Fiegenbaum, I., & Cunningham, J. A. (2018). A micro level study of university industry collaborative lifecycle key performance indicators. Journal of Technology Transfer, 43(2), 389–431.
Baskerville, R., Baiyere, A., Gregor, S., Hevner, A., & Rossi, M. (2018). Design science research contributions: Finding a balance between artifact and theory. Journal of the Association for Information Systems, 19(5), 358–376.
Baskerville, R., Kaul, M., & Storey, V. C. (2015). Genres of inquiry in design-science research: Justification and evaluation of knowledge production. MIS Quarterly, 39(3), 541–564.
Berggren, C., & Söderlund, J. (2011). Management education for practicing managers: Combining academic rigor with personal change and organizational action. Journal of Management Education, 35(3), 377–405.
Bondi, A. B. (2000). Characteristics of scalability and their impact on performance. In Proceedings of the Second International Workshop on Software and Performance (WOSP 2000) (pp. 195–203). Ottawa: ACM.
Bornmann, L. (2013). What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology, 64(2), 217–233.
Brady, S. R. (2015). Utilizing and adapting the Delphi method for use in qualitative research. International Journal of Qualitative Methods, 1–6.
Brunet, M. (2021). On the relevance of theory and practice in project studies. International Journal of Project Management.
Carton, G., & Mouricou, P. (2017). Is management research relevant? A systematic analysis of the rigor-relevance debate in top-tier journals (1994–2013). M@n@gement, 20(2), 166–203.
Clegg, S., Killen, C. P., Biesenthal, C., & Sankaran, S. (2018). Practices, projects and portfolios: Current research trends and new directions. International Journal of Project Management, 36(5), 762–772.
Coetzee, R. (2019). Towards designing an artefact evaluation strategy for human factors engineering: A lean implementation model case study. South African Journal of Industrial Engineering, 30(3), 289–303.
Cohen, M., Fernandes, G., & Godinho, P. (2022). Measuring the societal impacts of university-industry R&D collaborations. Procedia Computer Science, 2022.
Costa, C. A. B., Vieira, A. C. L., Nóbrega, M., Quintino, A., Oliveira, M. D., & Costa, J. B. (2019). Collaborative value modelling in corporate contexts with MACBETH. Procedia Computer Science, 162, 786–794.
Creaton, J., & Anderson, V. (2021). The impact of the professional doctorate on managers' professional practice. The International Journal of Management Education, 19(1), Article 100461.
Creswell, J. W. (2010). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). New Jersey: Pearson Merrill Prentice Hall.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
Dajani, J. S., Sincoff, M. Z., & Talley, W. K. (1979). Stability and agreement criteria for the termination of Delphi studies. Technological Forecasting and Social Change, 13(1), 83–90.
Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458–467.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Delbecq, A. L., Van de Ven, A. H., & Gustafson, D. H. (1975). Group techniques for program planning. Glenview, IL: Scott, Foresman and Co.
Denzin, N. K., & Lincoln, Y. S. (2018). The Sage handbook of qualitative research (5th ed.). London: Sage.
Diamond, I. R., Grant, R. C., Feldman, B. M., Pencharz, P. B., Ling, S. C., Moore, A. M., & Wales, P. W. (2014). Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. Journal of Clinical Epidemiology, 67(4), 401–409.
Dotti, N. F., & Walczyk, J. (2022). What is the societal impact of university research? A policy-oriented review to map approaches, identify monitoring methods and success factors. Evaluation and Program Planning, 95.
Ebel, M., Jaspert, D., & Poeppelbuss, J. (2022). Smart already at design time – Pattern-based smart service innovation in manufacturing. Computers in Industry, 138, Article 103625.
Grisham, T. (2009). The Delphi technique: A method for testing complex and multifaceted topics. International Journal of Managing Projects in Business, 2(1), 112–130.
Hayes, B. E. (1995). Medindo a satisfação do cliente [Measuring customer satisfaction] (p. 228). Rio de Janeiro: Editora Qualitymark.
Henriques, D., Pereira, R., Bianchi, I. S., Almeida, R., & Silva, M. M. da (2021). How IT governance can assist IoT project implementation. International Journal of Information Systems and Project Management, 8(3), 25–45.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.
Hofmann, P., Jöhnk, J., Protschky, D., & Urbach, N. (2020). Developing purposeful AI use cases - A structured method and its application in project management. In Proceedings of the 15th International Conference on Business Information Systems 2020 "Developments, Opportunities and Challenges of Digitization".
Hsu, C.-C., & Sandford, B. A. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research, and Evaluation, 12, Article 10.
Hussler, C., Muller, P., & Rondé, P. (2011). Is diversity in Delphi panelist groups useful? Evidence from a French forecasting exercise on the future of nuclear energy. Technological Forecasting and Social Change, 78(9), 1642–1653.
ISO/IEC/IEEE. (2010). Systems and software engineering: Vocabulary (ISO/IEC/IEEE 24765:2010(E)), December 2010, 1–418.
Jones, M. (2018). Contemporary trends in professional doctorates. Studies in Higher Education, 43(5), 814–825.
Kayo, E. K., & Securato, J. R. (1997). Método Delphi: Fundamentos, críticas e vieses [The Delphi method: Fundamentals, criticisms and biases]. Cadernos de Pesquisa em Administração, 1, 51–61.
Kieser, A., Nicolai, A., & Seidl, D. (2015). The practical relevance of management research: Turning the debate on relevance into a rigorous scientific research program. Academy of Management Annals, 9(1), 143–233.
Konstantinou, E. (2015). Professionalism in project management: Redefining the role of the project practitioner. Project Management Journal, 46(2), 21–35.
Kuechler, B., & Vaishnavi, V. (2008). Theory development in design science research: Anatomy of a research project. In Proceedings of the Third International Conference on Design Science Research in Information Systems and Technology, Atlanta, Georgia.
Landeta, J. (2006). Current validity of the Delphi method in social sciences. Technological Forecasting and Social Change, 73(5), 467–482.
Larsen, K. R., Lukyanenko, R., Muller, R., Storey, V. C., VanderMeer, D., Parsons, J., & Hovorka, D. S. (2020). Validity in design science research. In International Conference on Design Science Research in Information Systems and Technology (DESRIST 2020), 1–15.
Linstone, H. A., & Turoff, M. (2002). The Delphi method: Techniques and applications. Newark, NJ: New Jersey Institute of Technology.
Makkonen, M., Hujala, T., & Uusivuori, J. (2016). Policy experts' propensity to change their opinion along Delphi rounds. Technological Forecasting and Social Change, 109, 61–68.
Manson, N. J. (2006). Is operations research really research? ORiON, 22(2), 155–180.
March, S. T., & Smith, G. F. (1995). Design and natural science research on information technology. Decision Support Systems, 15(4), 251–266.
Meijering, J. V., & Tobi, H. (2018). The effects of feeding back experts' own initial ratings in Delphi studies: A randomized trial. International Journal of Forecasting, 34(2), 216–224.
Moutinho, J. A. (2022). Ecosystem of a University Research Centre in Project Studies [Doctoral dissertation]. Universidade Nove de Julho.
Moutinho, J. A., Fernandes, G., & Rabechini, R., Jr. (2023). Knowledge co-creation in project studies: The research context. Project Leadership and Society, 4, Article 100090.
Moutinho, J. A., & Rabechini, R., Jr. (2021). Centro de pesquisa universitária: Caracterização do ambiente de pesquisa [University research centre: Characterisation of the research environment]. Cadernos EBAPE.BR, 19(4), 887–900.
Moutinho, J. A., Rabechini, R., Jr., & Fernandes, G. (2023). Ecossistema de centro de pesquisa colaborativa em Project Studies: Um framework conceitual [Ecosystem of a collaborative research centre in Project Studies: A conceptual framework]. Revista de Administração Mackenzie, 24(5), 1–31.
Nagle, T., Doyle, C., Alhassan, I. M., & Sammon, D. (2022). The research method we need or deserve? A literature review of the design science research landscape. Communications of the Association for Information Systems, 50.
Narazaki, R. S., Chaves, M. S., & Pedron, C. D. (2020). A project knowledge management framework grounded in design science research. Knowledge and Process Management, 1–14.
Noe, E. B., & Alrøe, H. F. (2023). University research centres, scientific freedom, and the Jester's paradox. Systemic Practice and Action Research. https://doi.org/10.1007/s11213-023-09655-x
Nonaka, I. (1991). The knowledge-creating company. Harvard Business Review, 69, 96–104.
Nonaka, I., & Toyama, R. (2007). Why do firms differ? The theory of the knowledge
Fini, R., Rasmussen, E., Siegel, D., & Wiklund, J. (2018). Rethinking the
creating firm. In K. Ichijo, & I. Nonaka (Eds.), Knowledge creation and management.
commercialization of public science: From entrepreneurial outcomes to societal
New challenges for managers (pp. 13–31). Oxford: Oxford University Press.
impacts. Academy of Management Perspectives, 32(1), 4–20.
Novakowski, N., & Wellar, B. (2008). Using the Delphi technique in normative planning
Förster, B., & von der Gracht, H. (2014). Assessing Delphi panel composition for strategic
research: methodological design considerations. Environment and Planning A, 40(6),
foresight - A comparison of panels based on company-internal and external
1485–1500.
participants. Technological Forecasting and Social Change, 84, 215–229.
Osborne, J., Collins, S., Ratcliffe, M., Millar, R., & Duschl, R. (2003). What "Ideas-about-
Galan-Muros, V., & Davey, T. (2019). The UBC ecosystem: putting together a
Science" should be taught in school science? A Delphi study of the expert
comprehensive framework for university-business cooperation. The Journal of
community. Journal of Research in Science Teaching, 40(7), 692–720.
Technology Transfer, 44(4), 1311–1346.
Pallant, J. (2001). SPSS survival manual - a step by step guide to data analysis using SPSS for
Gallego, M. D., Luna, P., & Bueno, S. (2008). Designing a forecasting analysis to
windows (version 10). Buckingham Open University Press,.
understand the diffusion of open-soURCe software in the year 2010. Technological
Peffers, K., Rothenberger, M., Tuunanen, T., & Vaezi, R. (2012). Design science research
Forecasting and Social Change, 75(5), 672–686.
evaluation. Lecture Notes in Computer Science (including Subseries Lecture Notes in
Gill, T. G., & Hevner, A. R. (2013). A fitness-utility model for design science research.
Artificial Intelligence and Lecture Notes in Bioinformatics), 398–410.
ACM Trans Management Information System, 4(2), 1–24.
Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77.
Powell, C. (2003). The Delphi technique: Myths and realities. Journal of Advanced Nursing, 41(4), 376–382.
Prat, N., Comyn-Wattiau, I., & Akoka, J. (2015). A taxonomy of evaluation methods for information systems artifacts. Journal of Management Information Systems, 32(3), 229–267.
Rana, R., & Singhal, R. (2015). Chi-square test and its application in hypothesis testing. Journal of the Practice of Cardiovascular Sciences, 1(1), 69–71.
Reining, S., Ahlemann, F., Mueller, B., & Thakurta, R. (2022). Knowledge accumulation in design science research: Ways to foster scientific progress. SIGMIS Database, 53(1), 10–24.
Romme, A. G. L. (2003). Making a difference: Organization as design. Organization Science, 14(5), 558–573.
Rowe, G., & Wright, G. (1996). The impact of task characteristics on the performance of structured group forecasting techniques. International Journal of Forecasting, 12(1), 73–89.
Rowe, G., Wright, G., & McColl, A. (2005). Judgment change during Delphi-like procedures: The role of majority influence, expertise, and confidence. Technological Forecasting and Social Change, 72(4), 377–399.
Saunders, M., Lewis, P., & Thornhill, A. (2019). Research methods for business students (eighth ed.). Essex: Pearson.
Secundo, G., Elia, G., Margherita, A., & Leitner, K.-H. (2022). Strategic decision making in project management: A knowledge visualization framework. Management Decision, 60(4), 1159–1181.
Siemieniako, D., Kubacki, K., & Mitręga, M. (2021). Inter-organisational relationships for social impact: A systematic literature review. Journal of Business Research, 132, 453–469.
Silvius, G., & Schipper, R. (2019). Planning project stakeholder engagement from a sustainable development perspective. Administrative Sciences, 9(2), 46.
Simon, H. A. (1996). The sciences of the artificial (third ed.). Cambridge: MIT Press.
Smits, D., & Van Hillegersberg, J. (2014). The development of an IT governance maturity model for hard and soft governance. Proceedings of the 8th European Conference on Information Management and Evaluation, 347–355.
Söderlund, J., & Maylor, H. (2012). Project management scholarship: Relevance, impact and five integrative challenges for business and management schools. International Journal of Project Management, 30(6), 686–696.
Sonnenberg, C., & vom Brocke, J. (2012). Evaluation patterns for design science research artefacts. Communications in Computer and Information Science, 286, 171–177.
Steinert, M. (2009). A dissensus based online Delphi approach: An explorative research tool. Technological Forecasting and Social Change, 76, 291–300.
Többen, J., & Opdenakker, R. (2022). Developing a framework to integrate circularity into construction projects. Sustainability (Switzerland), 14(9), 5136.
Tremblay, M. C., Hevner, A. R., & Berndt, D. J. (2010). Focus group for artifact refinement and evaluation in design research. Communications of the Association for Information Systems, 26, 599–618.
Trevelyan, E. G., & Robinson, N. (2015). Delphi methodology in health research: How to do it? European Journal of Integrative Medicine, 7(4), 423–428.
Turnbull, A. E., Dinglas, V. D., Friedman, L. A., Chessare, C. M., Sepúlveda, K. A., Bingham, C. O., & Needham, D. M. (2018). A survey of Delphi panelists after core outcome set development revealed positive feedback and methods to facilitate panel member participation. Journal of Clinical Epidemiology, 102, 99–106.
Van Aken, J. E., & Romme, G. (2009). Reinventing the future: Adding design science to the repertoire of organization and management studies. Organization Management Journal, 6(1), 5–12.
Venable, J., Pries-Heje, J., & Baskerville, R. (2012). A comprehensive framework for evaluation in design science research. In K. Peffers, M. Rothenberger, & B. Kuechler (Eds.), Proceedings of the Seventh International Conference on Design Science Research in Information Systems and Technology (DESRIST 2012) (pp. 423–438). Las Vegas: Springer Verlag.
Venable, J., Pries-Heje, J., & Baskerville, R. (2016). FEDS: A framework for evaluation in design science research. European Journal of Information Systems, 25(1), 77–89.
Vom Brocke, J., Winter, R., Hevner, A., & Maedche, A. (2020). Special issue editorial – Accumulation and evolution of design knowledge in design science research: A journey through time and space. Journal of the Association for Information Systems, 21(3), 520–544.
Von der Gracht, H. A. (2012). Consensus measurement in Delphi studies: Review and implications for future quality assurance. Technological Forecasting and Social Change, 79(8), 1525–1536.
Vries, M. D., Gerber, A., & van der Merwe, A. (2013). A framework for the identification of reusable processes. Enterprise Information Systems, 7(4), 424–469.
Walker, D. H. T., Anbari, F. T., Bredillet, C., Söderlund, J., Cicmil, S., & Thomas, J. (2008). Collaborative academic/practitioner research in project management: Examples and applications. International Journal of Managing Projects in Business, 1, 168–192.
Walker, D. H. T., & Lloyd-Walker, B. (2016). Rethinking project management. International Journal of Managing Projects in Business, 9(4), 716–743.
Watermeyer, R., & Chubb, J. (2018). Evaluating ‘impact’ in the UK’s research excellence framework (REF): Liminality, looseness, and new modalities of scholarly distinction. Studies in Higher Education, 44(9), 1554–1566.
Wright, J. T. C., & Giovinazzo, R. (2006). O país no futuro: aspectos metodológicos e cenários. Estudos Avançados, 20(56), 13–28.
Young, S. J., & Jamieson, L. M. (2001). Delivery methodology of the Delphi: A comparison of two approaches. Journal of Park and Recreation Administration, 19(1), 42–58.
Yousuf, M. I. (2007). Using experts’ opinions through Delphi technique. Practical Assessment, Research & Evaluation, 12(4), 1–9.

José da Assunção Moutinho holds a PhD in project management from Nove de Julho University (2022). He is a project manager at the State University of Rio de Janeiro and a researcher in the Graduate Program in Project Management at Nove de Julho University. His research interests include collaborative R&D research, the implementation of project management practices in public-sector environments, and knowledge management in projects. He has worked as a business consultant and project manager for over 20 years.

Gabriela Fernandes holds a PhD in management from the University of Southampton (2014). She is an assistant professor at the University of Coimbra and a researcher at the CEMMPRE and ALGORITMI Research Centres. Her research interests are in Organizational Project Management and Innovation Management, particularly in the context of University-Industry R&D Collaborations. She spent ten years coordinating and managing projects in different industries. She has developed and taught several project management training courses and, as a consultant, coordinated the implementation of project management systems and Project Management Office structures.

Roque Rabechini Junior holds a PhD in Production Engineering from the Polytechnic School of the University of São Paulo (2003). He is a professor in the Graduate Program in Project Management at Nove de Julho University. He is a researcher in technology, project management and business strategy, and a consultant on projects involving Project Management Office implementation, the structuring of new product launch processes and the creation of project management methodologies, among others.