
Scientometrics (2015) 105:2109–2135

DOI 10.1007/s11192-015-1744-x

Methodi Ordinatio: a proposed methodology to select and rank relevant scientific papers encompassing the impact factor, number of citation, and year of publication

Regina Negri Pagani¹ • João Luiz Kovaleski¹,² • Luis Mauricio Resende¹,²

Received: 18 May 2015 / Published online: 12 September 2015


© Akadémiai Kiadó, Budapest, Hungary 2015

Abstract The increase in the number of scientific publications in recent years, accompanied by a proportional growth in the number of journals, has made the researchers' job of selecting bibliographic material to support their research increasingly complex and extensive. Not only is it a time-consuming task, it also requires suitable criteria, since researchers need to select the most relevant works systematically. Thus, the objective of this paper is to propose a methodology called Methodi Ordinatio, which presents criteria to select scientific articles. The methodology employs an adaptation of ProKnow-C for the selection of publications and the InOrdinatio, an index that ranks the selected works by relevance. This index combines the three main factors under evaluation in a paper: impact factor, year of publication and number of citations. By applying the equation, researchers identify, among the works selected, the most relevant ones for their bibliographic portfolio. As a practical application, a sample search on the theme technology transfer models, comprising papers from 1990 to 2015, is provided. The results indicate that the methodology is efficient regarding the proposed objectives, and the most relevant papers on technology transfer models are presented.

Keywords Bibliographic portfolio · Research methodology · Methodi Ordinatio · InOrdinatio · Technology transfer models

Corresponding author: Regina Negri Pagani
reginapagani@utfpr.edu.br

João Luiz Kovaleski
kovaleski@utfpr.edu.br

Luis Mauricio Resende
lmresende@utfpr.edu.br

1 Department of Industrial Engineering, Federal University of Technology - Paraná (UTFPR), Câmpus Ponta Grossa, Av. Monteiro Lobato, s/n - Km 04, Ponta Grossa, PR, CEP 84016-210, Brazil

2 Department of Post-graduation in Production Engineering, Federal University of Technology - Paraná (UTFPR), Câmpus Ponta Grossa, Av. Monteiro Lobato, s/n - Km 04, Ponta Grossa, PR, CEP 84016-210, Brazil


Introduction

Sharing information throughout the research process provides the basis for the accumulation of knowledge production and scientific progress (Haeussler et al. 2014). In this regard, the number of scientific publications and the number of journals have increased considerably in the last few years. Two factors appear to contribute to this increase: first, the new technologies that enable research and provide new ways of scientific investigation, favoring the appearance of new studies; second, the need for specialization and construction of new knowledge, imposed by the markets and the knowledge society, which leads to the search for and spread of new scientific and technological knowledge. The result is an increase in the world scientific literature as a whole, found in the several databases which have been made available recently (Bhupatiraju et al. 2012).
Therefore, there are countless possible sources of information for producing new knowledge, and it is the researchers' job to select those sources as well as the most relevant information for their research. This wide offer of works requires selecting the most significant ones (Small et al. 2014) to compose the portfolio.
The concern with establishing a process that points out the quality of the best works is highlighted in the scientific literature. Early works in this area (Irvine and Martin 1986; Vinkler 1986b; Martin 1996; De Greve and Frijdal 1989) approached the quality dimension of the work, represented by the impact factor and the number of citations of the works under analysis.
More recently, due to the increase in the number of publications, attention has also turned to the selection of the most relevant works and the elimination of those which are not so relevant for a specific research project. In Afonso et al. (2012), Vaz et al. (2013), and Lacerda et al. (2012), for instance, the methodology ProKnow-C is presented. Firstly, the selection of papers is conducted by searching the available databases for publications related to the theme of the researchers' interest. The researcher collects the papers and then analyzes the title, abstract, keywords, and the combinations of keywords that, according to his/her understanding, represent the subject to be investigated, in order to verify the alignment of the papers; finally, the full publication is considered, and the most cited papers are included in the portfolio. This thorough and systematic search requires time and proper techniques, involving both selection issues and the value or quality of the scientific papers. The task might become complex and exhausting, demanding a large amount of time from the researchers (Barham et al. 2014).
In such a scenario, the following problem is proposed: how can researchers select a consistent bibliographic portfolio for the elaboration of a research work, one that makes an actual contribution to science and to the advance of knowledge, in a faster and more effective way? Over the last decades, a considerable number of multiple criteria decision aid (MCDA) tools have been developed to support the decision making process in several different areas. Considering that the researcher also needs to make decisions, such as which papers to read or not in order to accomplish his/her research, the objective of this paper is to propose an MCDA methodology, based on the existing methodology ProKnow-C, to select and rank scientific works according to their relevance and thus create a bibliographic portfolio, considering the three most important aspects: impact factor, number of citations and year of publication. An example is presented on the theme technology transfer models.


Literature review

This section reviews existing methodologies for the selection of bibliographic portfolios and multiple criteria methods in decision aiding.

Methodologies to evaluate, select and build bibliographic portfolios

Eugene Garfield started a new era in the evaluation and measurement of scientific publications with his radical invention, the Science Citation Index, which enabled large-scale statistical analysis of the scientific literature (van Raan 2004). Since the early 1970s the literature has shown a great increase in quantitative material regarding the state of the art in science and technology (van Raan 2004). Several methodologies have been proposed for the evaluation of scientific works. Some proposed evaluating the quality of a work through its impact on the scientific community (Irvine and Martin 1983; Vinkler 1986b, 1996, 2004, 2009, 2010, 2012; Martin 1996; De Greve and Frijdal 1989), while others (Afonso et al. 2012; Vaz et al. 2013; Lacerda et al. 2012) selected works through a process of elimination of the papers whose content is not aligned with the subject or which lack scientific recognition.
For this study, three works aiming to evaluate, select and build bibliographic portfolios of scientific production were identified in the literature: the Management System of the Central Research Institute, the Cochrane Collaboration model, and the ProKnow-C. They are described below.

The Management System of the Central Research Institute (MSCRI)

This methodology originated at the Central Research Institute for Chemistry of the Hungarian Academy of Sciences. Founded in 1954, the Institute had a large group of researchers covering several areas of investigation in Biology and Chemistry. The Institute needed to evaluate the scientific publication of its workers, aiming at a better management of its financial resources and at rewarding its researchers in a fair and impartial way. In order to achieve this aim, Vinkler (1986a) proposed a method called the Management System of the Central Research Institute (MSCRI). The methodology took into consideration important aspects such as: review of papers performed by the Institute members; evaluation of scientific publication; number of workers in scientific committees or editorial boards; number of science awards; number and impact of scientific lectures; number of lectures in international conferences; number of doctoral theses; book chapters; and patents (Vinkler 1986a). Although this methodology was developed with the objective of evaluating the scientific production of one specific institution, the system proposed for the Institute can be used to evaluate the scientific production of other institutions that might be interested in measuring the relevance of each of their scientists.
Building on this method developed for the Institute, other studies were developed by Vinkler (1986b, 1996, 2009, 2010, 2012) to discuss the criteria used to attribute impact factor to scientific papers.

The Cochrane Collaboration

The Cochrane Collaboration, founded in 1993 and named after Archie Cochrane, is an international not-for-profit organization that produces systematic literature reviews of interventions in healthcare (Nightingale 2009).


Chart 1 Sections of a Cochrane review


Title*
Review information
Authors*
Contact person*
Dates*
What’s new
History
Abstract
Background*
Objectives*
Search methods*
Data collection and analysis*
Results*
Authors’ conclusions*
Plain language summary
Plain language title*
Summary text*
The review
Background*
Objectives*
Methods
Criteria for selecting studies for this review
Types of studies*
Types of participants*
Types of interventions*
Types of outcome measures*
Search methods for identification of studies*
Data collection and analysis*
Results
Description of studies*
Risk of bias in included studies*
Effects of interventions*
Discussion*
Authors’ conclusions
Implication for practice*
Implication for research*
Acknowledgements
References
References to studies
Included studies
Excluded studies
Studies awaiting classification
Ongoing studies
Other references
Additional references


Other published versions of this review


Tables and figures
Characteristics of studies
Characteristics of included studies (includes ‘Risk of bias’ tables)
Characteristics of excluded studies
Characteristics of studies awaiting assessment
Characteristics of ongoing studies
‘Summary of findings’ tables
Additional tables
Figures
Supplementary information
Data and analyses
Appendices
Feedback
Title
Summary
Reply
Contributors
About the article
Contributions of authors
Declarations of interest*
Differences between protocol and review
Sources of support
Internal sources
External sources
Published notes
Source: Higgins and Green (2011)

It is an organization whose primary aim is to help people make well-informed decisions about healthcare by preparing, maintaining and promoting
the accessibility of systematic reviews of the evidence that underpins them (Higgins and
Green 2011).
The first stage of The Cochrane Collaboration's procedure for conducting a systematic review is to develop a protocol that clearly defines: (1) the aims and objectives of the review; (2) the inclusion and exclusion criteria for studies; (3) the way in which studies will be identified; and (4) the plan of analysis (Nightingale 2009). The strategy used aims to reach all the studies that have been published on one specific topic, considering that a review that includes no studies is not helpful for clinicians trying to base their clinical practice on the best available evidence. The methodology accounts for heterogeneity—variability in the results of studies included in a meta-analysis that may arise from differences in study populations, interventions, or methodology—which means that all published studies, and unpublished ones as well (relevant conference papers, for instance), should be reached, systematically read and analyzed
(Nightingale 2009). The work of The Cochrane Collaboration relies on around 53 Cochrane Review Groups (CRGs), responsible for preparing and maintaining reviews that cover specific areas of health care (Higgins and Green 2011), so that no work is left aside.
A systematic review of The Cochrane Collaboration must provide a list of elements which define a complete Cochrane review (Higgins and Green 2011). Chart 1 presents the elements and indicates how the review is likely to appear.
The asterisks indicate mandatory fields, and the reviewers must provide the required information to continue the process.
Although the Cochrane Collaboration model was especially designed for the healthcare field, the same core principles may be applied to systematic literature reviews in other fields, considering that the main characteristic of the methodology is that all papers should be read and analyzed and there is no procedure to eliminate works that are not relevant.

ProKnow-C

The methodology ProKnow-C, described by Afonso et al. (2012), Vaz et al. (2013) and Lacerda et al. (2012), is a knowledge construction methodology used to compose a bibliographic portfolio of research, organized in four stages. Similarly to the Cochrane Collaboration model, the first stage of the ProKnow-C consists in selecting a bibliographic portfolio of articles aligned with the theme of interest, as perceived by the researcher, and with scientific recognition. In the second stage, a bibliometric analysis of the portfolio is performed. In the third stage, a systematic analysis is performed to identify the existing gaps and thereby research opportunities. In the fourth stage of the ProKnow-C, all the knowledge developed is used to propose the research question and objectives (Vaz et al. 2013). The main basis for establishing the scientific relevance of an article after applying the filtering procedures—that is, after defining whether it is aligned with the theme or not—is its scientific recognition through the number of citations, according to Lacerda et al. (2012, pp. 65–66, 75).

Multiple criteria methods in decision aiding

Since the methodology proposed in this paper uses a multiple criteria decision making model, some considerations must be made on this topic by reviewing some of the many existing MCDA methods.
Decisions have prompted reflection among thinkers since ancient times. They are present in the daily lives of human beings, who are required to express a preference for an alternative considering the scenario presented and the different aspects involved in the problem. However, in some situations decision-making becomes extremely complex, involving different alternatives of action, distinct points of view among policy makers, and specific evaluation criteria, which constitute multiple criteria that compete with each other (Roy 2005).
Decisions related to complex problems are common to several areas—economics, engineering, production, politics, society—and are present in a multitude of activities, whether public or private; most of these situations are characterized by the existence of multiple objectives to be achieved (Roy 2005).
Faced with such problems, decision-makers have sought support in multiple criteria methodologies that can aid in the decision making (DM) process. The role of methodologies to support multiple criteria (MC) decisions is to establish agreement on the best alternative, selected from a set of several potential alternatives to solve the problem, subject to various tangible or intangible attributes or 'criteria', with the ability to give special treatment to the peculiarities of the problem (Cho 2003).
Decision aiding (DA) is achieved through models that help provide elements of answers to questions posed by stakeholders in a process in which a decision or choice is required. Such elements help clarify a decision and give more consistency to the process. DA contributes, among other things, to elaborating recommendations using results taken from models and computational procedures, and to participating in the legitimization of the final decision (Roy 2005).
DA is more often multicriteria than monocriterion, because even when DA is provided for a single decision maker, it is rare for him/her to have in mind a single criterion; when DA occurs in a multi-actor DM process, it is even more difficult to establish a single criterion that is accepted by all the actors. Generally, each one will have his/her own priorities and different points of view, and these preferences should be taken into consideration (Roy 2005).
The most frequently used DA methods are based on mathematical multicriteria aggregation procedures, which bring into play various inter-criteria parameters, such as weights and scaling constants, among others, which allow defining the specific role that each criterion can play with respect to the others (Roy 2005).
Several MCDA methods were developed during the last decades to support the process of decision making, and they have been largely used and broadly discussed in the literature; a single paper would not provide enough space even to mention all of them. Cinelli et al. (2014) present a didactic division of the MCDA methods into three families: utility-based theory, outranking relation theory, and the sets of decision rules theory. Five well-known multicriteria decision methods will be briefly described below. In addition, to link the methodology proposed in this paper with MCDA, a sixth method, little explored in the literature according to Cinelli et al. (2014), will also be described.

The utility-based theory

Utility functions are widely used in MCDA for preferential modeling purposes. Each
marginal utility function provides a mechanism for transforming the scale of the corre-
sponding criterion into utility/value terms. The major advantage of using such a trans-
formation mechanism is that it enables the consideration of both quantitative and
qualitative criteria (Zopounidis and Doumpos 2002).
The analytic hierarchy process (AHP), proposed by Saaty (1990), involves an importance-ratio assessment procedure and uses a hierarchy to establish preferences and orderings; then, a linear model is derived and used to rank alternatives; by changing weights, sensitivity analysis is possible (Dyer et al. 1992). The standard process requires firstly the identification of a set of alternatives and a hierarchy of evaluation criteria (value tree), followed by pairwise comparisons to evaluate the performance of alternatives on the criteria (scoring) and the criteria among themselves (Cinelli et al. 2014).
Multi-attribute utility theory (MAUT) is a performance-aggregation-based approach, which requires the identification of utility functions and weights for each attribute; these can then be assembled into a unique synthesizing criterion (Keeney and Raiffa 1993). It takes into consideration the preferences of the decision-maker in the form of a utility function defined over a set of attributes (Pohekar and Ramachandran 2004).
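As an illustration of this aggregation idea (our sketch, not taken from the works cited above), the short Python example below shows a minimal MAUT-style additive model; the attributes, weights and marginal utility functions are hypothetical.

def additive_utility(alternative, weights, utilities):
    # Aggregate marginal utilities into a single synthesizing criterion
    return sum(weights[attr] * utilities[attr](value)
               for attr, value in alternative.items())

# Hypothetical example: two papers evaluated on access cost and perceived relevance
weights = {"cost": 0.4, "relevance": 0.6}
utilities = {
    "cost": lambda c: 1 - min(c, 100) / 100,   # cheaper access is better
    "relevance": lambda r: r / 10,             # relevance judged on a 0-10 scale
}
paper_a = {"cost": 30, "relevance": 9}
paper_b = {"cost": 0, "relevance": 6}

ranked = sorted([("A", paper_a), ("B", paper_b)],
                key=lambda item: additive_utility(item[1], weights, utilities),
                reverse=True)
print(ranked[0][0], "is preferred")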


The outranking relation theory

The outranking relation is a binary relation that enables the assessment of the degree to which an alternative a_i outranks an alternative a_p; it allows one to conclude that a_i outranks a_p if there are enough arguments to confirm that a_i is at least as good as a_p (concordance), while there is no essential reason to refute this statement (discordance) (Zopounidis and Doumpos 2002).
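To make the concordance/discordance idea concrete, the following minimal sketch (our illustration, in the spirit of ELECTRE-type tests rather than an implementation of any specific variant) checks whether one paper outranks another; the criteria, weights and thresholds are hypothetical.

def outranks(a_i, a_p, weights, concordance_threshold=0.6, veto=None):
    # a_i outranks a_p if enough weighted criteria agree that a_i is at least as
    # good as a_p (concordance) and no criterion refutes this too strongly (veto)
    veto = veto or {}
    total = sum(weights.values())
    concordance = sum(w for crit, w in weights.items()
                      if a_i[crit] >= a_p[crit]) / total
    vetoed = any(a_p[crit] - a_i[crit] > veto.get(crit, float("inf"))
                 for crit in weights)
    return concordance >= concordance_threshold and not vetoed

weights = {"citations": 0.5, "impact_factor": 0.3, "recency": 0.2}
paper_i = {"citations": 120, "impact_factor": 1.8, "recency": 2012}
paper_p = {"citations": 40, "impact_factor": 2.1, "recency": 2010}
print(outranks(paper_i, paper_p, weights, veto={"impact_factor": 1.0}))  # True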
The outranking method of elimination and choice expressing the reality (ELECTRE)
uses cardinal scales with dominance concept based on graph theory to determine the best
alternative when there is one, and does not assume anything about rank preservation (Cho
2003). ELECTRE methods are relevant when facing decision situations where: the deci-
sion-maker wants to include in the model at least three criteria; actions are evaluated (for at
least one criterion) on an ordinal scale; a strong heterogeneity related with the nature of
evaluations exists among criteria (e.g., duration, noise, distance, security, cultural sites,
monuments etc.); compensation of the loss on a given criterion by a gain on another one
may not be acceptable for the decision-maker; small differences of evaluations are not
significant in terms of preferences, while the accumulation of several small differences
may become significant (Figueira et al. 2005).
The preference ranking organization method for enrichment of evaluations (PRO-
METHEE) uses the outranking principle to rank the alternatives, combined with the ease of
use and decreased complexity. It performs a pair-wise comparison of alternatives in order
to rank them with respect to a number of criteria (Pohekar and Ramachandran 2004).

The sets of decision rules theory

Decision rules derived from rough approximations constitute a preference model. Each 'if… then…' decision rule is composed of a condition part, specifying a partial profile on a subset of criteria to which an alternative is compared using the dominance relation, and a decision part, suggesting an assignment of the alternative to 'at least' or 'at most' a given class (Zopounidis and Doumpos 2002).
The dominance-based rough set approach (DRSA) extends rough set theory, introduced by Pawlak, which proved to be an excellent mathematical tool for the analysis of vague descriptions of objects. Its philosophy is based on the assumption that with every object of the universe there is associated a certain amount of information, such as data, knowledge etc. (Greco et al. 2001a). It is particularly useful for dealing with inconsistencies in the input information. The original rough set approach did not consider attributes with preference-ordered domains; in DRSA, the categories are ordered from the best to the worst and the approximations are constructed using a dominance relation instead of an indiscernibility relation (Greco et al. 2001b).

Other methods

Trade-offs may be present when using a multicriteria decision making process. In order to find a better decision, the best trade-offs have to be found, eventually leading to a ranking (Cinelli et al. 2014). A method that concentrates on solving major conflicts is the partial order scalogram analysis with coordinates (POSAC) method. The idea behind this methodology is that conflicts should be made evident, and its main utility is to aid communication in solving a conflict situation. One useful feature of POSAC is the ability to


examine sequences of behavior by connecting episodes from a single case in temporal order (Taylor 2002). The general idea behind POSAC is to break down the incomparability into the contributions of each indicator, so that a decision about a possible trade-off can be reached (Cinelli et al. 2014).
POSAC theory thus shows a way of obtaining a ranking while dealing with the trade-offs (Bruggemann and Carlsen 2012). Even though it deals with incomparable factors—such as political, domestic, and prison-related ones—it still allows a ranking to be built, which proved useful (Taylor 2002).
The most interesting aspect of this method concerning its contribution to the present paper is the fact that it deals with incomparable factors. The methodology proposed in this paper also deals with factors that cannot be compared: years, number of citations, and impact factor.
Another broadly used method, which also encompasses factors that cannot be compared, is the Human Development Index (HDI), created by Mahbub ul Haq in 1990 with the collaboration of Amartya Sen (UNDP 2015). Its first conception dealt with three main aspects, or criteria: per capita income, life expectancy at birth, and education (Ul Haq 2003). The equation has evolved and a new formulation was presented in 2010. However, the central idea remains to obtain a summary measure of average achievement in key dimensions of human development: a long and healthy life, being knowledgeable and having a decent standard of living (UNDP 2015). The HDI may be considered an MCDA method, since governments might want to analyze the results in order to propose public policies for the factors considered by the index.
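For reference only (the expression below reflects the standard UNDP formulation and is not part of the works cited above), since 2010 the HDI has aggregated its three dimension indices through a geometric mean,

HDI = (I_health × I_education × I_income)^(1/3)

where each dimension index is itself normalized between predefined minimum and maximum reference values.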
With these contexts in mind, a new methodology to select and rank a bibliographic portfolio was conceived; it is presented in Sect. 4.

Methodological strategy

The methodological strategy is based on the development of a methodology to select, collect, rank and systematically read scientific papers published in journals, in such a way that three criteria can be used for the ranking: year of publication, number of citations, and impact factor.
The methodological procedures are half constituted by theory and half by practice. The first part consisted in performing the research tasks using the methodology ProKnow-C. Considering how complex the process turned out to be, new ideas for improving it started to come from the researchers.
The second part was performed by searching the literature for support for a multiple criteria methodology that could facilitate the process of building a portfolio. For this purpose, an exploratory, non-systematic bibliographic search was done, and insights were obtained from which the new methodology was conceived.
The third part consisted in proposing the new methodology, for which nine very concise steps were set. In order to propose the 7th phase—described in the next section—more than ten forms of the equation were constructed and tested. Finally, the researchers came to the conclusion that the final InOrdinatio proposed, using a factor α that can vary from 1 to 10 to establish the weight of the year, was the most suitable equation to compose the complete Methodi Ordinatio, which can select a bibliographic portfolio for any desired research.


The last part of the strategy was the practical sample application, presented in Sect. 5.
For the sample application, a systematic bibliographic search was done, using the Methodi
Ordinatio phases.

Proposal of a methodology to select and rank a bibliographic portfolio: Methodi Ordinatio

The portfolio selection methodology Methodi Ordinatio, which employs an equation, the Index Ordinatio (InOrdinatio), to rank papers, aims to select and rank papers according to their scientific relevance, taking into consideration the main factors to be considered in a scientific paper: the impact factor of the journal in which the paper was published, the number of citations and the year of publication. The ranking task is carried out before the systematic analysis, so that the importance of each paper is recognized in the initial phases of the process. The methodology comprises nine phases, described in the sequence. The first five phases of the Methodi Ordinatio are an adaptation of the ProKnow-C step of selection of the gross bibliographic portfolio. The last four phases replace the ProKnow-C criterion of number of citations for prioritizing the publications with the multicriteria evaluation model (InOrdinatio), which orders the publications according to a set of criteria (impact factor, year of publication and number of citations).
Phase 1—Establishing the intention of research The ideal condition is for the researcher to already have a problem, for according to Feyerabend in Roy (1993, p. 189), "Scientific investigation, says Popper, starts with a problem and proceeds by solving it." However, if the researcher has not yet delimited the research problem, he/she can start the process from random words and build his/her intention while exploring the databases.
Phase 2—Exploratory preliminary research with keywords in data bases Once the
research intention is known, the keywords and their possible combinations are set. Next, an
exploratory preliminary search is carried out in the databases listed by the researchers,
aiming to evaluate and test the adherence of words and combinations selected, as well as to
detect the existence of other terms linked to the research objective. At this moment the
researchers will ‘play’ with the keywords and filters available in each of the databases.
Since each database has its own specific search mechanisms, the researcher must carefully observe the search instructions in each of them, aiming to set search criteria as close as possible across the bases and to achieve the best possible uniformity in the search. The time frame is also tested in this step. Some papers are 'classics' in their field because they have been cited a great number of times over the years and therefore should not be left aside. When the researcher uses a broad time frame—more than 10 years long—he/she ensures that no classic works will be left out. Choosing the time frame is an activity to be done using the researcher's own values and criteria, since there is no rule for setting it. His/her judgment will establish the best time frame. Thorough work in this phase, paying attention to details and using the filters properly, leads to the elimination of papers which are not related to the research intention.
Phase 3—Definition and combination of keywords and data bases In this brief phase,
researchers define and limit the relevant keywords and combinations, as well as the most
significant databases to be used in the systematic search. In order to be considered significant, the databases must contain a great number of works about the theme and provide access to the published material. It seems important to emphasize that, even after having defined the keywords, combinations and databases, this is the best time to go back on some decisions, since "[…] problems may be wrongly formulated, that one may inquire about properties of things and processes which later views declare to be non-existent. Problems of this kind are not solved; they are dissolved and removed from the domain of legitimate inquiry" (Feyerabend in Roy 1993, p. 189), and "[…] we do not discover a problem as we would a pre-existing object; the formulation we give to it cannot be generally totally objective, but is expected to evolve throughout the decision-making process" (Roy in Roy 1993, p. 189). That is to say, if the researcher considers it important to rethink his/her problem, he/she should go back to Phase 2.
Phase 4—Final search in the data bases In this phase, a reference manager tool should be employed (e.g. Mendeley, EndNote, Zotero etc.) to collect the papers. The search for the terms, combinations and parameters previously selected is carried out in each database, and the data are exported to the reference manager chosen by the researchers. The result of this phase is the gross portfolio.
Phase 5—Filtering procedures A well-developed systematic search will enable good filtering and quality results. However, some works from non-related areas might appear among the papers selected. Therefore, another filtering procedure is applied to eliminate repeated works or papers that do not belong to the research area of interest. This procedure consists in analyzing the title, keywords and abstract. If any doubt remains as to whether the paper is of real interest for the researcher, a quick look into its topics might help to check whether or not its content is related to the research. This process might eliminate a good number of papers. Taking Roy's (1993, p. 188) words into consideration—"[…] the perceptions of reality held by an individual, what he says and what he writes on the subject, the questions he brings up about it, etc. constitute a way of interacting with the real situation which may well contribute to changing"—not only this procedure in phase 5, but all the others that recommend the use of the researcher's own judgment are justified. The result of this phase is the final portfolio.
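As a rough illustration of how this filtering could be operationalized (a minimal sketch, not part of the original methodology; the field names and topic terms are hypothetical and assume the gross portfolio was exported from the reference manager as structured records):

def filter_portfolio(records, topic_terms):
    # Drop duplicates and papers whose title/abstract/keywords mention none of
    # the topic terms; borderline cases still require the researcher's judgment
    seen_titles = set()
    kept = []
    for rec in records:
        title = rec.get("title", "").strip().lower()
        if not title or title in seen_titles:
            continue                      # repeated work
        seen_titles.add(title)
        text = " ".join([title,
                         rec.get("abstract", ""),
                         " ".join(rec.get("keywords", []))]).lower()
        if any(term.lower() in text for term in topic_terms):
            kept.append(rec)              # aligned with the research intention
    return kept

gross_portfolio = [
    {"title": "A Model for Technology Transfer in Practice",
     "abstract": "...", "keywords": ["technology transfer"]},
    {"title": "A Model for Technology Transfer in Practice",   # duplicate entry
     "abstract": "...", "keywords": []},
    {"title": "Heat transfer in pipes", "abstract": "...", "keywords": []},
]
print(len(filter_portfolio(gross_portfolio, ["technology transfer"])))  # prints 1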
Phase 6—Identification of impact factor, year of publication and number of citations When evaluating scientific publications, Vinkler (1986b) considers two important aspects of a paper: the impact factor and the number of citations of the individual paper. The impact factor indicates the relevance of the journal in which the paper was published; the higher the factor, the more highly the paper is regarded. The number of citations indicates the scientific recognition of the paper and its authors. However, when the search is carried out, it is possible to observe that there are papers without an impact factor which have a high number of citations, while others with a high impact factor show a small number of citations. There are also papers with a high number of citations and a high impact factor which are, however, old rather than current. At this point, in order to eliminate doubt regarding which aspect is the most relevant in a paper, the analysis of three main aspects is proposed: the relevance of the journal, evaluated through the impact factor; the scientific recognition of the paper, evaluated through the number of citations; and how recent the article is, evaluated through the year of publication. The importance of these three aspects is explained below:
(a) Impact factor: "[…] the impact factor for a periodical is a measure of the frequency with which the average article published in two consecutive years" is cited (Vinkler 1986a, p. 78). Due to its importance, this factor has been studied throughout the last two decades (Vinkler 1986b, 1996, 2009, 2010, 2012). The metrics used to express impact factor vary among journals. The most employed are: (a) Source Normalized Impact per Paper (SNIP); (b) SCImago Journal Rank (SJR); (c) Impact Factor (previous-year JCR); and (d) 5-Year Impact Factor (JCR). The last one seems to be the ideal, as it presents the average of the last 5 years, which might represent a better evaluation of the journal.
(b) Number of citations: the number of times a paper is cited demonstrates its relevance
and recognition by the scientific community, and this should be thoroughly observed
(Bornmann 2010). However, a recent paper might have a low number of citations.
Therefore, it would be a mistake to attribute this paper lower scientific relevance
only based on the criterion number of citations, since this isolated aspect cannot
represent the global scientific relevance of a paper. Another factor that affects the number of citations is the availability of the paper, since "freely available articles do have a greater research impact" (Antelman 2004): researchers have more access to them and, therefore, they have more chances of being cited. Papers which can only be accessed upon payment end up having a limited number of readings, which also limits citations. Thus, an excellent article that is not freely available might be read and cited fewer times than an average paper whose access is free. Therefore, it is important to apply the criterion number of citations together with the other evaluation criteria.
(c) Year of publication: the year of publication indicates how current the data is; the
more recent the research, the more likely it is that new advances have been reached,
and the higher the probability of the paper to contribute to some innovation in the
knowledge area. Also, there is great likelihood that more recent papers are based on
methodologies which have been already validated, which makes them even more
valuable. Besides that, the probability of a paper being cited decreases with time
(Dieks and Chang 1976), which reinforces the importance of valuing the most recent
papers.
Taking all that into consideration, the task of this 6th phase is to identify and register each paper's year of publication, the number of times it was cited and the impact factor of the journal in which it was published. This phase can be carried out simultaneously with the 8th phase—the search for the paper's full version—aiming to save time overall, since several complete papers can be easily located when the impact factor is searched. However, as the full version of some papers might not be easily found, that task should preferably be carried out after identifying the paper's relevance through the InOrdinatio.
Phase 7—Ranking the papers using the InOrdinatio After carrying out phases 1–6, the equation InOrdinatio (1) is applied to identify the scientific works' ranking,

InOrdinatio = (IF / 1000) + α × [10 − (ResearchYear − PublishYear)] + Σ Ci    (1)

where IF is the impact factor; α is a weighting factor ranging from 1 to 10, to be attributed by the researcher; ResearchYear is the year in which the research was developed; PublishYear is the year in which the paper was published; and Σ Ci is the number of times the paper has been cited. The equation InOrdinatio presents the following dynamics:
(a) The impact factor is divided by 1000 (one thousand), aiming to normalize its value relative to the other criteria.
(b) The equation includes the weighting factor α, whose value, attributed by the researchers, may vary from 1 to 10. The closer the number is to 1, the lower the importance the researcher attributes to the criterion year; the closer to 10, the higher the importance of this criterion. For themes like technology transfer, the criterion year is relevant, due to the high number of new publications available. The time frame should also be broader in this case, considering the theme has been approached in the literature for more than a decade.
(c) The number of citations is the gross count found in the data collected during the portfolio construction.
After treating these data, the InOrdinatio of each paper is obtained; from this point, it is possible to rank the papers according to their scientific relevance: the higher the InOrdinatio value, the more relevant the paper is for the portfolio. With the papers ranked, the researcher can define for how many papers he/she will search for the full version, according to his/her priorities (for instance, the first 10, the first 50, and so on).
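A minimal sketch in Python of how Eq. (1) and the subsequent ranking could be computed is shown below; the paper records, field names and values are hypothetical and purely illustrative.

def in_ordinatio(impact_factor, citations, publish_year, research_year, alpha):
    # InOrdinatio = (IF / 1000) + alpha * [10 - (ResearchYear - PublishYear)] + Ci
    return (impact_factor / 1000) + alpha * (10 - (research_year - publish_year)) + citations

papers = [
    {"title": "Paper A", "impact_factor": 2.5, "citations": 150, "year": 2010},
    {"title": "Paper B", "impact_factor": 0.8, "citations": 40, "year": 2014},
    {"title": "Paper C", "impact_factor": 1.2, "citations": 300, "year": 1998},
]

research_year, alpha = 2015, 10   # alpha = 10 gives maximum weight to the year criterion
for p in papers:
    p["in_ordinatio"] = in_ordinatio(p["impact_factor"], p["citations"],
                                     p["year"], research_year, alpha)

ranking = sorted(papers, key=lambda p: p["in_ordinatio"], reverse=True)
for position, p in enumerate(ranking, start=1):
    print(position, p["title"], round(p["in_ordinatio"], 2))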
Phase 8—Finding the full papers After ranking the papers using the InOrdinatio, the
complete version of those selected papers should be found. If the article is not freely
available, but is relevant to the research, it is advisable that it should be purchased.
Phase 9—Final reading and systematic analysis of the papers Depending on the number of papers selected through the Methodi Ordinatio, it may not be possible for the researchers to read all of them. For this reason, it is important to use the InOrdinatio, as this index provides the scientific criteria for selecting the most relevant papers to be read and systematically analyzed. The number of papers to be read is decided by the researcher.
During this phase, the researcher will search for those aspects considered relevant for his/her work, such as main authors, variables identified, results achieved, models proposed, comparisons, research gaps etc. Researchers might want to use The Cochrane Collaboration model of systematic review to perform this phase, presented by Higgins and Green (2011) and earlier in this paper in Chart 1. Figure 1 presents the flow of activities proposed in the methodology, adapted from ProKnow-C.
Next, in order to illustrate the dynamics of the Methodi Ordinatio, a practical example is presented on the theme technology transfer models.

Methodi Ordinatio: a sample application

Phase 1—Establishing the intention of research The research intention was technology transfer models.
Phase 2—Preliminary exploratory search of keywords in data bases The combination technology transfer model was tested in databases with which the researchers usually work and are familiar.
Phase 3—Definition and combination of keywords and data bases Among the bases tested, the ones selected for this research were Science Direct, Web of Knowledge and Scopus, since they presented a large number of publications with the keywords searched, higher availability of access to the published material, and higher consistency in the search. The remaining bases did not offer the expected access, and some did not show consistency during the exploratory search, returning different results each time they were tested, which prevented the research from being developed in a reliable way.
Articles were searched with the combination technology transfer model*. The search was limited to the period 01/01/1990 to 31/01/2015, aiming at a broader coverage of papers. After the final decision about the databases to be used, the keyword combination and the time limit, final tests were carried out to ensure the consistency and efficacy of the search.


Fig. 1 Phases of the methodology Methodi Ordinatio: a flowchart of the nine phases, from establishing the intention of research through final reading and systematic analysis of the papers, including the decision points on keyword adherence, alignment/duplication filtering, full-paper availability and InOrdinatio relevance. Source: Adapted from ProKnow-C

Phase 4—Final search in the data bases The definitive data search resulted in a gross
total of 352 results. Taking into consideration the different search tools in the different
bases, the application of a standard filtering procedure for all of them was not possible.
Phase 5—Filtering procedures In this phase, all the papers retrieved from all databases were put together. It is important to emphasize that some filters could not be used in some bases and, because of that, some papers were collected whose theme was not related to the theme searched. The following filtering and elimination procedures were then applied: removal of repeated papers; of papers whose title, abstract or keywords were not related to the theme searched; and of conference papers and book chapters (which do not have an impact factor; the researcher may use his/her own values and criteria to select other material, such as books, book chapters, conference papers etc., as a complement to the articles). The filtering eliminated a large number of papers, since the objective was to collect only articles, leaving a total of 93 articles.
Phase 6—Identifying impact factor, year and number of citations This phase was partially carried out simultaneously with phase 8, that is, for some articles it was possible to find the full text while searching for this information. The sources used in this phase were Google Scholar and the journals' websites. Some papers were not found and, for this reason, were eliminated, which resulted in a final total of 61 papers. Of these, 12 had SJR and 49 had JCR impact factors. The two groups were treated separately in the next phase, but were later incorporated into the same table, since no incompatibility was verified between the results. The papers were organized in a spreadsheet in the following order of columns: paper title, impact factor (last-year JCR and SJR), number of citations, and year.
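As an illustration of how such a spreadsheet could be processed (a sketch only; the file name and column names are assumptions based on the columns listed above), the ranking calculation of the next phase can be applied directly to the tabulated data, for example with pandas:

import pandas as pd

# columns assumed: title, impact_factor, citations, year (file name is hypothetical)
df = pd.read_csv("portfolio_phase6.csv")

research_year, alpha = 2015, 10
df["in_ordinatio"] = (df["impact_factor"] / 1000
                      + alpha * (10 - (research_year - df["year"]))
                      + df["citations"])
df = df.sort_values("in_ordinatio", ascending=False).reset_index(drop=True)
print(df.head(10))   # the ten highest-ranked papers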
Phase 7—Ranking the papers using the InOrdinatio The equation InOrdinatio was employed. In this research, α was given the value 10, considering that the factor year is relevant to the theme under study; neither the newest works nor the classic ones should be left aside. Table 1 shows the final articles resulting from the application of phases 1–7.
It is possible to observe that the first seven papers present a balance of the three aspects considered important. When analyzing, for example, the data for the first article in the table, its currentness was seen to be balanced with its impact factor and number of citations. The same occurred with paper 2. When analyzing the data for article 8, a high impact factor was seen and, although no citations were found, its currentness had to be considered.
Another example is paper 34. Although it presented the highest impact factor when
compared to the others and was relatively current, it was only cited five times.
Analyzing the rank in general, some articles were seen to present a negative InOrdinatio value. This is due to the fact that the search involved a period of time longer than 10 years. Thus, besides not being current, these papers did not present a high impact factor or number of citations, resulting in very low or even negative InOrdinatio values.
The results revealed that the first papers in the table had at least two relevant criteria to be highlighted, which proved the efficacy of the equation. The last papers, however, presented more than two unfavorable factors, which placed them in the last positions.
Phase 8—Finding the full papers This phase was partially carried out simultaneously with phase 6. Only the papers whose full text had not been found before needed to be located now. Among all the papers searched, only paper 31 was not found. As it did not present a very high InOrdinatio, it was replaced by the next paper in the list.
Phase 9—Reading and systematic analysis of the papers Again, at this step, the researcher may use his/her own values and criteria to establish how many articles should be read. Our advice is that the researcher establish as broad a time frame as possible in the initial phases. By doing this, he/she will make sure the classic papers are included, and the papers to be read can be limited to those with a positive InOrdinatio. Papers will present a negative InOrdinatio when the time frame is over 10 years and their citations are few, considering that the maximum value of α is 10, according to Eq. (1) in phase 7. Therefore, for the sample application presented, systematic reading is recommended for the papers that presented a positive InOrdinatio, the first 36 papers.
Considering that the objective of this paper was to present the methodology Methodi Ordinatio and the equation InOrdinatio, the results of the systematic reading on technology transfer models will be presented in another paper, whose scope comprises other objectives the authors have for the theme.


Table 1 Final papers on technology transfer model after the application of phase 8 of Methodi Ordinatio
Columns: Ranking number | Articles on technology transfer model (authors, year, journal) | Impact factor (obtained in phase 6) | Citations (phase 6) | Year (phase 6) | InOrdinatio (phase 7)

1 Bozeman, Barry, 2000. Technology 2.598 1124 2000 1076.60


Transfer and Public Policy: A
Review of Research and Theory.
Research Policy
2 Wang, Jian-Ye, and Magnus 1.364 813 1992 684.36
Blomström, 1992. Foreign
Investment and Technology
Transfer: A Simple Model. European
Economic Review
3 Siegel, Donald S., David A. Waldman, 2.106 469 2004 461.11
Leanne E. Atwater, and Albert N.
Link, 2004. Toward a Model of the
Effective Transfer of Scientific
Knowledge from Academicians to
Practitioners: Qualitative Evidence
from the Commercialization of
University Technologies. Research
on the Human Connection in
Technological Innovation
4 Gorschek, T., P. Garre, S. Larsson, and 1.23 120 2006 130.00
C. Wohlin, 2006. A Model for
Technology Transfer in Practice.
IEEE Software
5 Harmon, Brian, Alexander Ardishvili, 3.265 198 1997 121.27
Richard Cardozo, Tait Elder, John
Leuthold, John Parshall, Michael
Raghian, and Donald Smith, 1997.
Mapping the University Technology
Transfer Process. Journal of Business
Venturing
6 Colyvas, Jeannette A., 2007. From 2.598 96 2007 118.60
Divergent Meanings to Common
Practices: The Early
Institutionalization of Technology
Transfer in the Life Sciences at
Stanford University. Biotechnology:
Its Origins, Organization, and
Outputs
7 Bozeman, Barry, Heather Rimes, and 2.598 0 2015 102.60
Jan Youtie, 2015. The Evolving
State-of-the-Art in Technology
Transfer Research: Revisiting the
Contingent Effectiveness
Model. Research Policy
8 Cavalheiro, Gabriel M. do Canto, and 2.033 1 2014 93.03
Luiz A. Joia, 2014. Towards a
Heuristic Frame for Transferring
E-Government
Technology. Government
Information Quarterly


9 Landry, R., N. Amara, J.-S. Cloutier 2.704 9 2013 91.70


and N. Halilem, 2013. Technology
transfer organizations: Services and
business models. Technovation
10 Genet, C., K. Errabi and C. Gauthier, 2.704 18 2012 90.70
2012. Which model of technology
transfer for nanotechnology? A
comparison with biotech and
microelectronics. Technovation
11* Nguyen, Nguyen Thi Duc, and Atsushi 0 0 2014 90.00
Aoyama, 2014. Achieving Efficient
Technology Transfer through a
Specific Corporate Culture
Facilitated by Management
Practices. The Journal of High
Technology Management Research
12 Heinzl, J., A.-L. Kor, G. Orange, and 1.305 3 2013 84.31
H.R. Kaufmann, 2013. Technology
Transfer Model for Austrian Higher
Education Institutions. Journal of
Technology Transfer
13* Festel, G., 2013. Technology Transfer 0.42 0 2013 80.00
Models between Industrial
Biotechnology Companies and
Academic Spin-Offs. Industrial
Biotechnology
14* Sun, Z.-Y., W. Yu, H.-F. Wei, Q.-P. 0.28 0 2013 80.00
Liang, and H. Qian, 2013. A Study
on the Contract Arrangement of
Technology Transfer Model in China
Information Technology Industry.
Information Technology Journal
15* Necoechea-Mondragón, H., D. Pineda- 0.25 0 2013 80.00
Domı́nguez, and R. Soto-Flores,
2013. A Conceptual Model of
Technology Transfer for Public
Universities in Mexico. Journal of
Technology Management and
Innovation
16* Da Silva, R.C., M. Vieira Junior, and 0.164 0 2013 80.00
W.C. Lucato, 2013. Recent
technology transfer models and an
evaluation of their relevant
characteristics. Espacios
17* Khabiri, Navid, Sadegh Rast, and 0.15 8 2012 78.00
Aslan Amat Senin, 2012. Identifying
Main Influential Elements in
Technology Transfer Process: A
Conceptual Model. Asia Pacific
Business Innovation And Technology
Management Society


18 Mohamed, A. S., S. M. Sapuan, M. 2.020 3 2012 75.02


M. H. M. Ahmad, A. M. S. Hamouda
and B. T. H. T. Bin Baharudin, 2012.
Modeling the technology transfer
process in the petroleum industry:
Evidence from Libya. Mathematical
and Computer Modelling
19* Pérez, M. T. A., & Carrasco, F. R. C. 0.149 0 2012 70.00
(2012). Los modelos europeos de
transferencia de tecnologı́a
universidad-empresa. Revista de
Economı́a Mundial
20 ATTC, 2011. Research to Practice in 1.867 7 2011 68.87
Addiction Treatment: Key Terms and
a Field-Driven Model of Technology
Transfer. Journal of Substance Abuse
Treatment
21 Khalozadeh, F., S.A. Kazemi, M. 0 8 2011 68.00
Movahedi, and G. Jandaghi, 2011.
Reengineering University-Industry
Interactions: Knowledge-Based
Technology Transfer
Model. European Journal of
Economics, Finance and
Administrative Sciences
22 Aronsson, T., K. Backlund and L. 1.404 15 2010 66.40
Sahlen, 2010. Technology transfers
and the clean development
mechanism in a North–South general
equilibrium model. Resource and
Energy Economics
23* Hidalgo, A., and J. Albors, 2011. 0.25 4 2011 64.00
University-Industry Technology
Transfer Models: An Empirical
Analysis. International Journal of
Innovation and Learning
24* Wahab, S.A., R.C. Rose, J. Uli, and H. 0.13 22 2009 62.00
Abdullah, 2009. A Review on the
Technology Transfer Models,
Knowledge-Based and
Organizational Learning Models on
Technology Transfer. European
Journal of Social Sciences
25 Di Benedetto, C. A., R. J. Calantone 1.778 73 2003 54.78
and C. Zhang, 2003. International
technology transfer—Model and
exploratory study in the People’s
Republic of China. International
Marketing Review


26 Mohamed, A. S., S. M. Sapuan, M. 0 3 2010 53.00


M. H. M. Ahmad, A. M. S. Hamouda
and B. T. H. T. Bin Baharudin, 2010.
Modeling technology transfer for
petroleum industry in Libya: An
overview. Scientific Research and
Essays
27 Fosfuri, Andrea, 2000. Patent 0.947 101 2000 51.00
Protection, Imitation and the Mode
of Technology
Transfer. International Journal of
Industrial Organization
28 Warren, A., R. Hanke and D. Trotzer, 0.696 21 2008 51.00
2008. Models for university
technology transfer: resolving
conflicts between mission and
methods and the dependency on
geographic location. Cambridge
Journal of Regions Economy and
Society
29 Malik, K., 2002. Aiding the technology 2.704 75 2002 47.70
manager: a conceptual model for
intra-firm technology transfer.
Technovation
30 Baek, D.-H., W. Sul, K.-P. Hong and 1.266 17 2007 38.27
H. Kim, 2007. A technology
valuation model to support
technology transfer negotiations. R &
D Management
31 Ferguson, K. M., 2005. Beyond 0.451 34 2005 34.00
indigenization and
reconceptualization—Towards a
global, multidirectional model of
technology transfer. International
Social Work
32* Coppola, H.W., and H. Elliot, 2007. 0.429 12 2007 32.00
A Technology Transfer Model for
Program Assessment in Technical
Communication. Technical
Communication
33 Morrissey, Michael T., and Sergio 2.576 27 2005 29.58
Almonacid, 2005. Rethinking
Technology Transfer. Journal of
Food Engineering
34 Gross, C. M., 2003. U2B: A new 39.100 5 2003 24.10
model for technology transfer.
Nature Biotechnology.


35 Seaton, R. A. F. and M. Cordeyhayes, 2.704 122 1993 4.70


1993. The development and
application of interactive models of
industrial-technology transfer.
Technovation
36 Jayaraman, V, M.I Bhatti, and H. 1.600 9 2004 0.60
Saber, 2004. Towards Optimal
Testing of an Hypothesis Based on
Dynamic Technology Transfer
Model. Applied Mathematics and
Computation
37 Todo, Y., 2003. Empirically consistent 0.710 13 2003 -7.00
scale effects: An endogenous growth
model with technology transfer to
developing countries. Journal of
Macroeconomics
38 Kingsley, Gordon, Barrt Bozeman, and 2.598 77 1996 -10.40
Karen Coker, 1996. Technology
Transfer and Absorption: An ‘R & D
Value-Mapping’ Approach to
Evaluation. Research Policy
39 Mohan, S. R. and A. R. Rao, 2003. 0.500 1 2003 -19.00
Early identification of innovative and
market acceptable technologies—A
model for improving technology
transfer capabilities of public
research institutes. Journal of
Scientific & Industrial Research
40* Takahashi, V. P. and J. B. Sacomano, 0.16 10 2002 -20.00
2002. Proposta de um modelo
conceitual para análise do sucesso de
projetos de transferência de
tecnologia: estudo em empresas
farmacêuticas. Gestão & Produção
41 Hussain, S., 1998. Technology 18 1998 -52.00
Transfer Models across Cultures:
Brunei-Japan Joint
Ventures. International Journal of
Social Economics
42 Caldwell, James L., 1998. Formal 0.404 9 1998 -61.00
Methods Technology Transfer: A
View from NASA. Formal Methods
in System Design
43 Mejia, Luis R., 1998. A Brief Look at a 1.959 6 1998 -62.04
Market-Driven Approach to
University Technology Transfer:
One Model for a Rapidly Changing
Global Economy. Technological
Forecasting and Social Change


44 Gupta, M. R., 1998. Foreign capital 0.588 7 1998 -63.00


and technology transfer in a dynamic
model. Journal of Economics-
Zeitschrift Fur Nationalokonomie.
45 Madu, C.N., Lin Chinho, and C.-H. 2.020 3 1998 -64.98
Kuei, 1998. A Goal Compatibility
Model for Technology
Transfers. Mathematical and
Computer Modelling
46 Wong, J. K., 1995. Technology 0 35 1995 -65.00
transfer in Thailand descriptive
validation of a technology transfer
model. International Journal of
Technology Management
47 Séror, Ann C., 1996. Action Research 2.704 17 1996 -70.30
for International Information
Technology Transfer: A
Methodology and a Network
Model. Technovation
48 Chaudhuri, P. R., 1997. Generalized 1.814 0 1997 -78.19
assignment models: with an
application to technology transfer.
Economic Theory
49 De Castro, Julio O., and Willams S. 0 13 1995 -87.00
Schulze, 1995. The Transfer of
Technology to Less Developed
Countries: A Model from the
Perspective of the Technology
Recipient. Special Issue Technology
and Entrepreneurship
50 Padmanabhan, V. and W. E. Souder, 1.379 19 1994 -89.62
1994. A Brownian-motion model for
technology transfer application to a
machine maintenance expert-system.
Journal of Product Innovation
Management
51* Climént, J.B., C. Palmer, and S. Ruiz, 1.558 3 1995 -95.44
1995. Omissions Relevant to the
Contextual Domains of Technology
Transfer Models. The Journal of
Technology Transfer
52 Haug, P., 1992. An international 1.323 31 1992 -97.68
location and production transfer
model for high technology
multinational-enterprises.
International Journal of Production
Research

123
2130 Scientometrics (2015) 105:2109–2135

Table 1 continued

Ranking Articles on technology transfer model Impact Citations Year InOrdinatio


number (authors, year, journal) factor (phase 6) (phase 6) (phase 7)
(obtained in (phase 6)
phase 7)

53 Marjit, S., 1994. A competitive general 0.588 8 1994 -102.00


equilibrium-model of technology-
transfer, innovation, and
obsolescence. Journal of Economics-
Zeitschrift Fur Nationalokonomie
54 Liu, Win G., 1993. A Quantitative 1.959 9 1993 -109.04
Technology Transfer Model and Its
Application to Aircraft
Engines. Technological Forecasting
and Social Change
55 Climent, J.B., 1993. From Linearity to 1.305 6 1993 -112.70
Holism in Technology-Transfer
Models. The Journal of Technology
Transfer
56 De La Garza, Jesus M., and Panagiotis 0.870 6 1992 -124.00
Mitropoulos, 1992. Flavors and
Mixins of Expert Systems
Technology Transfer Model for AEC
Industry. Journal of Construction
Engineering and Management
57 Delagarza, J. M. and P. Mitropoulos, 0.870 16 1991 -124.00
1991. Technology-transfer (t2)
model for expert systems. Journal of
Construction Engineering and
Management-Asce
58 Bommer, M. R. W., R. E. Janaro and 1.959 14 1991 -124.04
D. C. Luper, 1991. A manufacturing
strategy model for international
technology transfer. Technological
Forecasting and Social Change
59 Myllyntaus, T., 1990. The Finnish 0 18 1990 -132.00
model of technology-transfer.
Economic Development and Cultural
Change
60 Goodman, D., 1990. A new model for 0 4 1990 -146.00
federal-state-industry cooperation—
technology-transfer lessons from the
new-jersey experience. SRA-Journal
of the Society of Research
Administrators

* SJR papers

Discussion on the methodologies presented

Scientific publication mechanisms, as well as the tools employed in the search for scientific papers, have grown both in quantity and in quality. Faced with this increasing volume of papers, researchers may struggle to develop their studies, given the amount of material to be read and analyzed. Amid the abundance of databases, journals and papers lie the works that are most relevant and that should compose the bibliographic portfolio of a specific research project; to find them, however, researchers need to carry out some 'mining' work. Such is the discussion proposed in this paper.
The first methodology presented, the Management System of the Central Research Institute (MSCRI), was developed specifically to evaluate the scientific production of an institution, aiming at a better and fairer management of the financial resources allocated to its scientists. The methodology can be applied to institutions with the same purpose, namely to evaluate their scientific production in general, including books, book chapters, papers, conference presentations and so on, or even to evaluate a single individual's scientific production. However, it cannot be employed for the selection and ranking of papers for a bibliographic portfolio, since its set of tools does not meet that purpose. A researcher needs scientific material from several other researchers and scientists to compose his/her portfolio, rather than from a single source. Therefore, this system cannot be used to select and rank papers for a specific research portfolio.
The second methodology presented, the Cochrane Collaboration model, was specifically designed for the health care area; for this reason, all works on a specific theme must be found and systematically read by a group of researchers, given the amount of work involved in the systematic analysis. There is no proposed adaptation for other fields in the sense of filtering out the most relevant papers. Therefore, researchers are left with the task of reading each and every paper on their theme, which brings them back to the difficulties mentioned at the beginning of this paper concerning the great volume of publications and the researchers' shortage of time.
The third methodology, ProKnow-C, consumes a great deal of the researcher's time if applied in full, with the bibliometric analysis and the systematic analysis of contents and opportunities for new investigations. If, however, the researcher only wants the bibliographic portfolio on a given theme, the time required is equivalent to that of the Methodi Ordinatio.
The Methodi Ordinatio offers a solution to aid the decision-making process when choosing a portfolio, and its main advantage is that it defines the scientific relevance of every paper using three criteria (impact factor, year of publication and number of citations, through the process proposed) instead of a single criterion (number of citations), as in ProKnow-C. The relevance is scientifically established before the systematic reading.
This new methodology presents two mechanisms, which can be used together or separately: the Methodi Ordinatio, which is the complete methodology comprising nine phases, and the InOrdinatio, which consists of one phase only, the seventh. When the researchers are only interested in surveying the papers, without ranking them according to their scientific relevance, they may simply skip phase 7, the ranking of papers through the InOrdinatio. On the other hand, if the researchers only wish to assign scientific value to the papers collected without employing the full Methodi Ordinatio, they can do so using only its seventh phase, the InOrdinatio. However, in order to apply scientific criteria suited to the selection of a portfolio, the use of the whole methodology, comprising the Methodi Ordinatio and the InOrdinatio, is recommended.
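
To make the ranking step concrete, the minimal sketch below scores and orders a set of papers with an InOrdinatio-style index. It assumes the phase 7 equation takes the form InOrdinatio = (IF / 1000) + alpha x [10 - (research year - publication year)] + sum of citations, where alpha is a weight from 1 to 10 expressing how much the researcher values recency; the Paper class, the function names and the two sample entries drawn from Table 1 are illustrative only.

```python
# Minimal sketch of the ranking step (phase 7), under the assumed formula
# InOrdinatio = (IF / 1000) + alpha * [10 - (research_year - pub_year)] + citations.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    impact_factor: float  # JCR or SJR value; use 0.0 when no journal metric exists
    citations: int        # citation count retrieved in phase 6
    year: int             # year of publication


def in_ordinatio(paper: Paper, research_year: int, alpha: int = 10) -> float:
    """Score one paper; alpha (1-10) weights the importance of recency."""
    recency = alpha * (10 - (research_year - paper.year))
    return paper.impact_factor / 1000 + recency + paper.citations


def rank_portfolio(papers: list[Paper], research_year: int, alpha: int = 10) -> list[Paper]:
    """Return the portfolio ordered from the most to the least relevant paper."""
    return sorted(papers, key=lambda p: in_ordinatio(p, research_year, alpha), reverse=True)


if __name__ == "__main__":
    portfolio = [
        Paper("Patent Protection, Imitation and the Mode of Technology Transfer", 0.947, 101, 2000),
        Paper("Empirically consistent scale effects", 0.710, 13, 2003),
    ]
    for p in rank_portfolio(portfolio, research_year=2015):
        print(f"{in_ordinatio(p, research_year=2015):8.2f}  {p.title}")
```

Under these assumptions, with alpha = 10 and 2015 as the research year, the first sample entry scores approximately 51.00 and the second approximately -7.00, consistent with their positions in Table 1.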
Chart 2 presents a clear comparison of the methodologies discussed and the approach each one takes. Each approach carries with it a set of assumptions, and the choice of a methodological approach should be linked to the objectives of the scientific research. In this sense, no approach is better than another; nevertheless, some research questions are better suited to be answered by a specific methodology (Lacerda et al. 2015).

Chart 2 Comparing the methodologies presented

The MSCRI (1985)
Approach: Realistic Descriptive. Based on the search for relationships between the decisions made by practitioners in the past, the available variables, and the results collected from the past. The researcher's task is to observe the environment and discover which variables interfere with the results expected by decision-makers (Lacerda et al. 2015).
Characteristics: Assesses the scientific production of a specific institution or researcher (Vinkler 1986a).

The Cochrane Collaboration (1993)
Approach: Axiomatic (Prescriptive). "The axiomatic path within the context of a problem which aims to combine elements, to aggregate points of view, to take a position in the presence of risks, etc. consists of transcribing, in formal terms, those demands reflecting a form of rationality in order to investigate its logical consequences" (Roy 1993, p. 192).
Characteristics: Offers a strategy to collect and perform the systematic reading of all works (published papers and conference papers) that are healthcare related. It can be used in other areas of study, but there is no filtering procedure to eliminate works of no interest to the researcher (Nightingale 2009; Higgins and Green 2011).

ProKnow-C (2010)
Approach: Constructivist. "Taking the path of constructivism consists of considering concepts, models, procedures and results to be keys capable (or not) of opening certain locks likely (or not) to be appropriate for organizing a situation or causing it to develop" (Roy 1993, p. 194). In this approach the researcher's values and preferences are used to expand his/her knowledge of the subject.
Characteristics: Offers a strategy to collect papers on a specific theme; works that are not relevant or not aligned are filtered out. Bibliometric analysis and systematic reading are performed before scientific relevance is defined (for this reason, many papers of little relevance end up being systematically analyzed). The final rank of scientific relevance is defined using the papers' scientific recognition (number of citations) (Afonso et al. 2012; Vaz et al. 2012; Lacerda et al. 2012).

Methodi Ordinatio (2015)
Approach: Realistic Normative. The decision-maker decides by rationality, that is, operating according to principles that reason itself creates and that are consistent with reality as accepted by a rational being, devoid of emotions (Lacerda et al. 2015). The researcher delegates to a universal model the decision on which articles are relevant.
Characteristics: Offers a strategy to collect papers on a specific theme; works that are not relevant or not aligned are filtered out. Systematic reading is performed after scientific relevance is defined by the InOrdinatio. Scientific relevance is defined by the InOrdinatio, which uses three factors: number of citations, year of publication and impact factor (see phase 7 of Methodi Ordinatio).

Source: Elaborated by the authors from research data, 2015

The strength of ProKnow-C is that the researcher must perform the bibliometric analysis and systematic reading before reaching a decision on a paper's scientific relevance. This is also the weakness of the method, since a considerable amount of work must be carried out before the researcher knows whether a paper is relevant to his/her research or not. In its turn, the strength of Methodi Ordinatio is that the researcher knows, before performing the systematic reading, the scientific relevance of the paper; moreover, a complete bibliometric analysis is not necessary, since the relevance is defined on the basis of the three factors (year of publication, impact factor and number of citations).
The methodology proposed does not impose an extremely rational path in which the researcher's own judgment is left aside. It can be observed that in phases 2, 5 and 9 the researchers are expected to use their own values and criteria to carry out the activities.
The methodology is intended for papers published in journals, due to the use of the impact factor criterion. However, it is known that books and book chapters are essential to a research project. In order to select which books and chapters to use, the researcher may apply the whole methodology and, in phase 7, leave the impact factor value blank in the equation. Other kinds of complementary material, such as conference papers, can be handled in the same way if the researcher considers them relevant for the research. For methodological purposes, it is advisable to detail them in the methodological strategy section.
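
For such material, under the same assumed form of the equation, leaving the impact factor blank simply removes that term, so the index reduces to the recency term plus the citation count; the small self-contained snippet below, with a made-up book chapter, only illustrates this point.

```python
# Illustrative only: a book chapter has no journal impact factor, so under the
# assumed formula the index is just alpha * [10 - (research_year - pub_year)] + citations.
def in_ordinatio_without_if(citations: int, pub_year: int, research_year: int, alpha: int = 10) -> float:
    return alpha * (10 - (research_year - pub_year)) + citations


print(in_ordinatio_without_if(citations=40, pub_year=2011, research_year=2015))  # 10 * 6 + 40 = 100
```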
The contribution of this manuscript is to replace the final prioritization criterion of ProKnow-C, which relies on the number of citations alone, with a multicriteria evaluation model (the InOrdinatio) that orders the publications according to a set of criteria (impact factor, year of publication and number of citations).

Final considerations

Methodological decision aiding tools, when adequately built, with appropriate models and procedures, are very helpful in finding the better decision. Any methodology employed in the evaluation of variables with different dimensions may present limitations, since it cannot fully translate reality. Therefore, some limitations are also expected in the methodology proposed here.
A first limitation is the fact that two different metrics, the JCR and the SJR, were used to calculate the InOrdinatio, and the results were presented in the same table, ranked by the value found. Despite relying on different metric systems, tests showed that they can be used conjointly. Whatever metric is used, we suggest that the journal metric for the year of publication be made available together with the paper's record, alongside the ISSN and DOI, for instance.
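
As an illustration of this suggestion, a paper's bibliographic record could carry the journal metric valid in its publication year next to the usual identifiers; the field names and values below are hypothetical placeholders.

```python
# Hypothetical record layout: the journal metric of the publishing year travels with
# the paper's identifiers, so the InOrdinatio can be computed without extra lookups.
paper_record = {
    "doi": "10.1000/example-doi",   # placeholder identifier
    "issn": "0000-0000",            # placeholder identifier
    "publication_year": 2012,
    "journal_metric": {"system": "JCR", "year": 2012, "value": 1.234},  # metric of that year
    "citations": 77,
}
```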
Another limitation is that the InOrdinatio equation is, in principle, intended to aid the construction of portfolios with a large number of works. When the research involves themes that are not abundant in the literature, the suggested solution is to apply all the phases of the methodology except the seventh, provided the researchers intend to read all the papers found and do not need to know the scientific value of each one.
The limitations presented do not affect the importance or validity of the methodology, which takes into account the three essential aspects in the evaluation of a scientific work and offers a solution to rank the papers effectively. This simplicity is a fundamental feature of the research work since, according to van Raan (2004, p. 26), "Scientists are fascinated by basic features such as simplicity, symmetry, harmony, and order", and these characteristics are provided by this methodology.
Finally, while proposing solutions related to the time and quality of the researchers' work, this article also invites reflection on the management of databases, in order to promote homogeneity in the way data and information about journals are made available, to the benefit of the advancement of science in general.

Acknowledgments We thank the Brazilian Government, the Ministry of Education, and UTFPR for supporting this research.

References
Afonso, M. H. F., de Souza, J. V., Ensslin, S. R., & Ensslin, L. (2012). Como construir conhecimento sobre
o tema de pesquisa? Aplicação do processo proknow-c na busca de literatura sobre avaliação do
desenvolvimento sustentável. Revista de Gestão Social e Ambiental. doi:10.5773/rgsa.v5i2.424.
Antelman, K. (2004). Do open-access articles have a greater research impact? College & Research
Libraries, 65(5), 372–382. doi:10.5860/crl.65.5.372.
Barham, B. L., Foltz, J. D., & Prager, D. L. (2014). Making time for science. Research Policy, 43(1), 21–31.
doi:10.1016/j.respol.2013.08.007.
Bhupatiraju, S., et al. (2012). Knowledge flows: analyzing the core literature of innovation, entrepreneurship
and science and technology studies. Research Policy, 41, 1205–1218. doi:10.1016/j.respol.2012.03.
011.
Bornmann, L. (2010). Towards an ideal method of measuring research performance: Some comments to the
Opthof and Leydesdorff (2010) paper. Journal of Informetrics, 4(3), 441–443. doi:10.1016/j.joi.2010.
04.004.
Bruggemann, R., & Carlsen, L. (2012). Multi-criteria decision analyses. Viewing MCDA in terms of both
process and aggregation methods: Some thoughts, motivated by the paper of Huang, Keisler and
Linkov. Science of the Total Environment, 425, 293–295.
Cho, K. T. (2003). Multicriteria decision methods: An attempt to evaluate and unify. Mathematical and
Computer Modelling, 37(9–10), 1099–1119. doi:10.1016/S0895-7177(03)00122-5.
Cinelli, M., Coles, S. R., & Kirwan, K. (2014). Analysis of the potentials of multicriteria decision analysis
methods to conduct sustainability assessment. Ecological Indicators, 46, 138–148.
De Greve, J. P., & Frijdal, A. (1989). Evaluation of scientific research profile analysis: a mixed method.
Higher Education Management, 1, 83–90.
Dieks, D., & Chang, H. (1976). Differences in impact of scientific publications: Some indices derived from a
citation analysis. Social Studies of Science, 6, 247–267. doi:10.1177/030631277600600204.
Dyer, J. S., Fishburn, P. C., Steuer, R. E., Wallenius, J., & Zionts, S. (1992). Multiple criteria decision
making, multiattribute utility theory: The next ten years. Management Science, 38(5), 645–654.
Figueira, J., Mousseau, V., & Roy, B. (2005). Electre methods. In Multiple criteria decision analysis: State
of the art surveys. International series in operations research and management science (Vol. 78,
pp. 133–153). United States: Springer.
Greco, S., Matarazzo, B., & Slowinski, R. (2001a). Rough sets theory for multicriteria decision analysis.
European Journal of Operational Research, 129, 1–47.
Greco, S., Matarazzo, B., Slowinski, R. & Stefanowski, J. (2001b). Variable consistency model of domi-
nance-based rough sets approach. In Rough sets and current trends in computing. Lecture notes in
computer science, 2005, 170–18.
Haeussler, C., Jiang, L., Thursby, J., & Thursby, M. (2014). Specific and general information sharing among
competing academic researchers. Research Policy, 43(3), 465–475. doi:10.1016/j.respol.2013.08.017.
Higgins, J.P.T. & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions
version 5.1.0 [updated March 2011]. The Cochrane Collaboration, Retrieved July 8, 2015, from www.
cochrane-handbook.org.
Irvine, J., & Martin, B. R. (1983). Assessing basic research: The case of the Isaac Newton Telescope. Social
Studies of Science, 13, 49–86. doi:10.1177/030631283013001004.
Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value trade-offs.
Cambridge: Cambridge University Press.
Lacerda, R. T. O., Ensslin, L. & Ensslin, S. R. (2012). A bibliometric analysis of strategy and performance
measurement. Gestão & Produção, 19(1), 59–78. Retrieved July 7, 2015, from http://www.scielo.br/
pdf/gp/v19n1/a05v19n1.
Lacerda, R. T. O., Ensslin, L., & Ensslin, S. R. (2015). Research methods and success meaning in project
management. In B. Pasian (Ed.), Designs, methods and practices for research of project management.
England: Gower Publishing Ltd.
Martin, B. R. (1996). The use of multiple indicators in the assessment of basic research. Scientometrics,
36(3), 343–362. doi:10.1007/BF02129599.
Nightingale, A. (2009). A guide to systematic literature reviews. Surgery (Oxford), 27(9), 381–384.

Pohekar, S. D., & Ramachandran, M. (2004). Application of multi-criteria decision making to sustainable
energy planning: A review. Renewable and Sustainable Energy Reviews, 8, 365–381.
Roy, B. (1993). Decision science or decision-aid science? European Journal of Operational Research, 66,
184–203.
Roy, B. (2005). Paradigms and challenges. In J. Figueira, S. Greco, & M. Ehrgott (Eds.), Multiple criteria
decision analysis: State of the art surveys. Berlin: Springer.
Saaty, T. L. (1990). How to make a decision? The analytic hierarchy process. European Journal of
Operational Research, 48, 9–26.
Small, H., Boyack, K. W., & Klavans, R. (2014). Identifying emerging topics in science and technology.
Research Policy, 43(8), 1450–1467. doi:10.1016/j.respol.2014.02.005.
Taylor, P. J. (2002). A partial order scalogram analysis of communication behavior in crisis negotiation with
the prediction of outcome. The International Journal of Conflict Management, 13(1), 4–37.
Ul Haq, M. (2003). The birth of the Human Development Index. In A. Kumar (Ed.), Readings in human
development (pp. 127–137). Oxford: Oxford University Press.
UNDP (2015). United Nations Development Program. Human Development Reports. Retrieved July 9,
2015, from http://hdr.undp.org/.
van Raan, A. F. J. (2004). Measuring science. Capita selecta of current main issues. In H. F. Moed, W.
Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. The use of
publication and patent statistics in studies of S&T systems (pp. 19–50). Kluwer Academic Publishers:
Dordrecht.
Vaz, C. R., Tasca, J. E., Ensslin, L., Ensslin, S. R., & Selig, P. M. (2013). Avaliação de desempenho na
gestão estratégica organizacional: seleção de um referencial teórico de pesquisa e análise bib-
liométrica. Revista Gestão Industrial. doi:10.3895/S1808-04482012000400008.
Vinkler, P. (1986a). Management system for a scientific research institute based on the assessment of
scientific publications. Research Policy, 15(2), 77–87. doi:10.1016/0048-7333(86)90003-X.
Vinkler, P. (1986b). Evaluation of some methods for the relative assessment of scientific publications.
Scientometrics, 10, 157–177. doi:10.1007/BF02026039.
Vinkler, P. (1996). The use of multiple indicators in the assessment of basic research. Scientometrics, 36(3),
343–362. doi:10.1007/BF02129599.
Vinkler, P. (2004). Characterization of the impact of sets of scientific papers: The Garfield (Impact) Factor.
Journal of the American Society for Information Science and Technology, 55, 431–435. doi:10.1002/
asi.10391.
Vinkler, P. (2009). pv-index: A new indicator for assessing scientific impact. Journal of Information
Science, 35, 602–612. doi:10.1177/0165551509103601.
Vinkler, P. (2010). The pv-index: A new indicator to characterize the impact of journals. Scientometrics, 82,
461–475. doi:10.1007/s11192-010-0182-z.
Vinkler, P. (2012). The case of scientometricians with the ‘‘absolute relative’’ impact Indicator. Journal of
Informetrics, 6, 254–264. doi:10.1016/j.joi.2011.12.004.
Zopounidis, C., & Doumpos, M. (2002). Multicriteria classification and sorting methods: A literature
review. European Journal of Operational Research, 138, 229–246.
