2015 - Pagani - Article - Methodi Ordinatio A Proposed Metho PDF
DOI 10.1007/s11192-015-1744-x
Abstract An increase in the number of scientific publications in the last few years, which
is directly proportional to the appearance of new journals, has made the researchers’ job
increasingly complex and extensive regarding the selection of bibliographic material to
support their research. Not only is it a time-consuming task, but it also requires suitable
criteria, since researchers need to select the most relevant works systematically. Thus,
the objective of this paper is to propose a methodology called Methodi
Ordinatio, which presents criteria to select scientific articles. This methodology employs an
adaptation of the ProKnow-C for selection of publications and the InOrdinatio, which is an
index to rank by relevance the works selected. This index crosses the three main factors
under evaluation in a paper: impact factor, year of publication and number of citations.
When applying the equation, the researchers identify, among the works selected, the most
relevant ones for their bibliographic portfolio. As a practical application, a research
sample on the theme of technology transfer models is provided, comprising papers from 1990
to 2015. The results indicated that the methodology is efficient regarding the proposed
objectives, and the most relevant papers on technology transfer models are presented.
2110 Scientometrics (2015) 105:2109–2135
Introduction
Sharing information throughout the research process provides the basis for the accumu-
lation of knowledge production and scientific progress (Haeussler et al. 2014). In this
regard, the number of scientific publications and the number of journals have increased
considerably in the last few years. Two factors have been noticed to contribute to this
increase: first, the new technologies which enable research and provide new ways of
scientific investigation, favoring the appearance of new studies; second, the need for
specialization and construction of new knowledge, imposed by the markets and the
knowledge society, which leads to the search and spread of new scientific and techno-
logical knowledge. The result is an increase in the world scientific literature as a whole
found in several databases which have been made available recently (Bhupatiraju et al.
2012).
Therefore, there are countless possibilities of finding information sources aiming to
produce new knowledge, and it is the researchers’ job to accomplish the task of selecting
such sources as well as the most relevant information for their research. This wide offer of
works requires selection of those which are the most significant (Small et al. 2014) to
compose the portfolio.
The concern about establishing a process that points out the quality of the best works is
highlighted in the scientific literature. Early works in this area (Irvine and Martin 1986;
Vinkler 1986b; Martin 1996; De Greve and Frijdal 1989) approached the quality dimension
of the work, represented by the impact factor and the number of citations of the works
under analysis.
More recently, due to the increase in the number of publications, some concern was also
focused on the issues of selection of the most relevant works and elimination of those
which were not so relevant for a specific research. In Afonso et al. (2012), Vaz et al.
(2013), and Lacerda et al. (2012), for instance, the methodology ProKnow-C is presented.
Firstly, the selection of papers is conducted by searching the available databases for
publications related to the theme of the researchers' interest. The researcher collects the
papers and then analyzes the title, abstract, keywords, and the combinations of keywords
that, in his/her understanding, represent the subject under investigation, in order to
verify the alignment of the papers; finally, the full publication is considered, and the
most cited papers are included in the portfolio. This thorough and
systematic search requires time and proper techniques, involving both selection issues and
the value or quality of the scientific papers. This task might become complex and
exhausting, demanding a large amount of time from the researchers (Barham et al. 2014).
In such a scenario, the following problem is proposed: how can researchers select a
consistent bibliographic portfolio for the elaboration of a research work, aiming to produce
an actual contribution to science and the advancement of knowledge in a faster and more
effective way? Over the last decades, a considerable number of multiple criteria decision aid
(MCDA) tools have been developed to help in the decision-making process in several
different areas. Considering that the researcher also needs to make decisions, such as which
papers to read (or not) to accomplish his/her research, the objective of this paper is to propose
an MCDA methodology, based on the existing methodology ProKnow-C, to select and rank
scientific works according to their relevance to create a bibliographic portfolio, considering
the three most important aspects: impact factor, number of citations and year of publica-
tion. An example is presented on the theme technology transfer models.
Literature review
Eugene Garfield started a new era in the scientific publication evaluation and measurement
processes with his radical invention, the Science Citation Index, which enabled large-scale
scientific literature statistical analysis (van Raan 2004). Since the early 1970s, the literature
has shown great increase in the quantitative material regarding the state-of-the-art in
sciences and technology (van Raan 2004). Several methodologies have been proposed
regarding the evaluation of scientific works. Some proposed the work quality evaluation
through its impact in the scientific community (Irvine and Martin 1983; Vinkler 1986b,
1996, 2004, 2009, 2010, 2012; Martin 1996; De Greve and Frijdal 1989), while others
(Afonso et al. 2012; Vaz et al. 2013; Lacerda et al. 2012) selected works through a process
of elimination of the papers whose content is not aligned with the subject or do not have
scientific recognition.
For this study, three works aiming to evaluate, select and build bibliographic portfolios
with scientific production were identified in the literature: The Management System of the
Central Research Institute; The Cochrane Collaboration model; and the ProKnow-C. They
are described in the sequence.
The Management System of the Central Research Institute

This methodology originated at the Central Research Institute for Chemistry of the Hungarian
Academy of Sciences. Founded in 1954, the Institute had a large group of
researchers, covering several areas of investigation in Biology and Chemistry. The Institute
needed to evaluate the scientific publication of its workers aiming at better management of
its financial resources, rewarding its researchers in a fair and impartial way. In order to
achieve this aim, Vinkler (1986a) proposed a method called The Management System of
the Central Research Institute (MSCRI). The methodology proposed took into considera-
tion some important aspects such as: review of papers performed by the Institute members;
evaluation of scientific publication; number of workers in scientific committees or editorial
boards; number of science awards; number and impact of scientific lectures; number of
lectures at international conferences; number of doctoral theses; book chapters; and
patents (Vinkler 1986a). This methodology was developed with the objective to evaluate
the scientific production of a specific institution. Thus, the system proposed for the Institute
can be used to evaluate the scientific production of other institutions that might be inter-
ested in measuring the relevance of each of their scientists.
From this method developed for the Institute, other studies were developed by Vinkler
(1986b, 1996, 2009, 2010, 2012) aiming to discuss the criteria used to attribute impact
factor to scientific papers.
[Chart 1, continued over several pages in the original, lists the elements of a complete Cochrane review.]
specific areas of health care (Higgins and Green 2011), in such a way no work will be left
aside.
The systematic review of the Cochrane Collaboration must provide a list of elements
which define a complete Cochrane review (Higgins and Green 2011). Chart 1 presents the
elements that indicate how the review is likely to appear.
The asterisks indicate a mandatory field, and the reviewers must provide the required
information to continue the process.
Although the Cochrane Collaboration model was especially designed for the healthcare
field, the same core principles may be applied to a systematic literature review in other
fields, considering that the main characteristic of the methodology is that all papers should
be read and analyzed and there is no procedure to eliminate non-relevant works.
ProKnow-C
The methodology ProKnow-C, described by Afonso et al. (2012), Vaz et al. (2013) and
Lacerda et al. (2012), is a knowledge construction methodology used to compose a
bibliographic portfolio of research, organized in four stages. Similarly to the Cochrane
Collaboration model, the first stage of the ProKnow-C consists in selecting a bibliographic
portfolio of articles aligned with the theme of interest as perceived by the researcher and
having scientific recognition. In the second stage the bibliometric analysis of the portfolio
must be provided. In the third stage a systematic analysis is performed to identify the gaps
that exist in order to identify research opportunities. In the fourth stage of the ProKnow-C
all the knowledge developed is used to propose the research question and objectives (Vaz
et al. 2013). The main base for establishing the scientific relevance of the article after
applying the filtering procedures, that is, defining whether it is aligned with the theme or
not, is the scientific recognition through the number of citations, according to Lacerda et al.
(2012, pp. 65–66, 75).
Since the methodology proposed in this paper uses a multiple criteria decision making
model, some considerations must be built on this concern by reviewing some of the several
existing MCDA.
Decisions have prompted reflection among thinkers since ancient times. They are present
in the daily lives of human beings, who are required to express a preference for an
alternative considering the scenario presented and the different aspects involved in the
problem. However, in some situations decision-making can present an extremely
complex scenario, involving different alternatives of action, distinct points of view among
policy makers, and specific evaluation criteria, which give rise to multiple criteria that
compete with each other (Roy 2005).
The decisions related to complex problems are common to several areas, such as economics,
engineering, production, politics, and the social sphere; they are present in a multitude of
activities, whether public or private, and most of these situations are characterized by the
existence of multiple objectives to be achieved (Roy 2005).
Given these challenges, decision-makers have sought help in multiple criteria
methodologies that can aid in the decision making (DM) process. The role of
methodologies to support multiple criteria (MC) decisions is to establish agreement on the best
alternative decisions regarding the selection within a set of several potential alternatives
to solve the problem, subject to various tangible or intangible attributes or 'criteria', with
the ability to provide special treatment to the peculiarities of the problem (Cho 2003).
Decision aiding (DA) is achieved through models that help obtain answer elements to
questions raised by stakeholders in a process in which a decision or choice is required. Such
elements help clarify a decision and give more consistency to the process. DA
contributes, among other things, to elaborating recommendations using results taken from
models and computational procedures, and participating in the final decision legitimization
(Roy 2005).
DA is more often multicriteria than monocriterion, because even when DA is provided
for a single decision maker, it is rare for him/her to have in mind a single criterion; when
DA occurs in a multi-actor DM process, it is even more difficult to establish a single
criterion that is accepted by all the actors. Generally, each one will have his/her own
priorities and different points of view, and these preferences should be taken into
consideration (Roy 2005).
The most frequently used DA methods are based on mathematical multicriteria
aggregation procedures, which bring into play various inter-criteria parameters, such as
weights and scaling constants, which allow defining the specific role that
each criterion can play with respect to the others (Roy 2005).
Several MCDA methods were developed during the last decades to help in the
process of decision making, and they have been largely used and broadly discussed in the
literature; a single paper would not provide enough space even to mention all of them.
Cinelli et al. (2014) present a didactic division of the MCDA methods into three families:
the utility-based theory, the outranking relation theory, and the sets of decision rules
theory. Five well-known multicriteria decision methods will be briefly described in the
sequence. And, to link the methodology proposed in this paper with MCDAs, a sixth
method, which according to Cinelli et al. (2014) has received little attention in the
literature, will also be described.
Utility functions are widely used in MCDA for preferential modeling purposes. Each
marginal utility function provides a mechanism for transforming the scale of the corre-
sponding criterion into utility/value terms. The major advantage of using such a trans-
formation mechanism is that it enables the consideration of both quantitative and
qualitative criteria (Zopounidis and Doumpos 2002).
The analytic hierarchy process (AHP), proposed by Saaty (1990), involves an impor-
tance-ratio assessment procedure and uses a hierarchy to establish preferences and
orderings; then, a linear model is derived and used to rank the alternatives; by changing
weights, sensitivity analysis is possible (Dyer et al. 1992). The standard process requires
firstly the identification of a set of alternatives and a hierarchy of evaluation criteria (value
tree), followed by pairwise comparisons to evaluate the alternatives' performance on the
criteria (scoring) and the criteria among themselves (Cinelli et al. 2014).
The multi-attribute utility theory (MAUT) is a performance-aggregation-based approach,
which requires the identification of utility functions and weights for each attribute that can
then be assembled in a unique synthesizing criterion (Keeney and Raiffa 1993). It takes
into consideration the preferences of the decision-maker in the form of the utility function
which is defined over a set of attributes (Pohekar and Ramachandran 2004).
The outranking relation is a binary relation that enables the assessment of the outranking
degree of an alternative ai over an alternative ap; it allows one to conclude that ai outranks ap if
there are enough arguments to confirm that ai is at least as good as ap (concordance), while
there is no essential reason to refute this statement (discordance) (Zopounidis and
Doumpos 2002).
The outranking method of elimination and choice expressing the reality (ELECTRE)
uses cardinal scales with dominance concept based on graph theory to determine the best
alternative when there is one, and does not assume anything about rank preservation (Cho
2003). ELECTRE methods are relevant when facing decision situations where: the deci-
sion-maker wants to include in the model at least three criteria; actions are evaluated (for at
least one criterion) on an ordinal scale; a strong heterogeneity related with the nature of
evaluations exists among criteria (e.g., duration, noise, distance, security, cultural sites,
monuments etc.); compensation of the loss on a given criterion by a gain on another one
may not be acceptable for the decision-maker; small differences of evaluations are not
significant in terms of preferences, while the accumulation of several small differences
may become significant (Figueira et al. 2005).
The preference ranking organization method for enrichment of evaluations (PRO-
METHEE) uses the outranking principle to rank the alternatives, combined with the ease of
use and decreased complexity. It performs a pair-wise comparison of alternatives in order
to rank them with respect to a number of criteria (Pohekar and Ramachandran 2004).
Decision rules derived from these approximations constitute a preference model. Each
‘if… then…’ decision rule is composed of a condition part specifying a partial profile on a
subset of criteria to which an alternative is compared using the dominance relation, and a
decision part suggesting an assignment of the alternative to ‘at least’ or ‘at most’ a given
class (Zopounidis and Doumpos 2002).
The rough set theory was introduced by Pawlak and
proved to be an excellent mathematical tool for the analysis of a vague description of
objects. Its philosophy is based on the assumption that with every object of the universe
there is associated a certain amount of information, such as data, knowledge etc. (Greco
et al. 2001a). It is particularly useful to deal with inconsistencies in the input information. The
original rough set approach did not consider attributes with preference-ordered
domains. In the dominance-based rough set approach (DRSA), the categories are ordered
from the best to the worst and the approximations are constructed using a dominance
relation instead of an indiscernibility relation (Greco et al. 2001b).
Other methods
Trade-offs may be present when using a multicriteria decision making process. In order to
find a better decision, the best ‘trade-offs’ have to be found, eventually to reach a ranking
(Cinelli et al. 2014). A method that concentrates on solving major conflicts is the partial
order scalogram analysis with coordinates (POSAC) method. The idea behind this
methodology is that conflicts should be made evident, and its main utility is to aid
communication in solving a conflict situation. One useful feature of POSAC is the ability to […]
Methodological strategy
The last part of the strategy was the practical sample application, presented in Sect. 5.
For the sample application, a systematic bibliographic search was done, using the Methodi
Ordinatio phases.
to the access of the published material. It seems important to emphasize that, even after
having defined the keywords, combinations and databases, this is the best time to go back
on some decisions, since ''[…] problems may be wrongly formulated, that one may inquire
about properties of things and processes which later views declare to be non-existent.
Problems of this kind are not solved; they are dissolved and removed from the domain of
legitimate inquiry’’ (Feyerabend in Roy 1993, 189), and ‘‘[…] we do not discover a
problem as we would a pre-existing object; the formulation we give to it cannot be
generally totally objective, but is expected to evolve throughout the decision-making
process’’ (Roy in Roy 1993, p. 189). That is to say, if the researcher consider that it is
important to rethink his/her problem, go back to Phase 2.
Phase 4—Final search in the databases In this phase, a reference manager tool should
be employed (e.g. Mendeley, EndNote, Zotero etc.) to collect the papers. The search with the
terms, combinations and parameters previously selected is carried out in each database, and
the data are exported to the reference manager selected by the researchers. The result of this
phase is the gross portfolio.
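The assembly of the gross portfolio described above can be sketched as follows. This is a minimal illustration, not part of the Methodi Ordinatio itself: the file names, the CSV format, and the record fields are assumptions for the example (real database exports in RIS or BibTeX would be parsed by the reference manager).

```python
import csv

def build_gross_portfolio(csv_paths):
    """Merge records exported from several databases into one gross portfolio."""
    portfolio = []
    for path in csv_paths:
        with open(path, newline="", encoding="utf-8") as fh:
            # one record (dict) per paper, keyed by the export's column names
            portfolio.extend(csv.DictReader(fh))
    return portfolio
```

Duplicates across databases are intentionally kept at this stage; they are eliminated later, in the Phase 5 filtering procedures.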
Phase 5—Filtering procedures A well-developed systematic search will enable good
filtering, yielding quality results. However, some works from non-related areas might
appear among the papers selected. Therefore, another filtering procedure is applied to
eliminate repeated works or papers that do not belong to the research area of interest. This
procedure consists in analyzing the title, keywords and abstract. If any doubt remains
whether the paper is of real interest for the researcher or not, a quick look into its topics
might help to check whether or not its content is related to the research. This process might
eliminate a good number of papers. When taking Roy's (1993, p. 188) words into
consideration: ''[…] the perceptions of reality held by an individual, what he says and what he
writes on the subject, the questions he brings up about it, etc. constitute a way of
interacting with the real situation which may well contribute to changing'', not only this
procedure in phase 5, but all the others that recommend the use of the researcher's own
judgment are justified. The result of this phase is the final portfolio.
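The mechanical part of this filtering (duplicate removal and title/abstract/keyword screening) can be sketched as below. The record fields and the screening terms are illustrative assumptions; the researcher's own judgment, as discussed above, still decides the borderline cases.

```python
def filter_portfolio(gross_portfolio, required_terms):
    """Phase 5 sketch: drop duplicates, keep papers whose title, abstract or
    keywords mention at least one of the theme's terms."""
    seen_titles = set()
    final = []
    for paper in gross_portfolio:
        title = paper["title"].strip().lower()
        if title in seen_titles:
            continue  # eliminate repeated works
        seen_titles.add(title)
        text = " ".join(
            [title, paper.get("abstract", ""), paper.get("keywords", "")]
        ).lower()
        # keep only papers aligned with the theme of interest
        if any(term in text for term in required_terms):
            final.append(paper)
    return final
```

For the practical application in this paper, a call such as `filter_portfolio(gross, ["technology transfer"])` would express the screening step; in practice the alignment check remains a manual reading, not a plain substring match.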
Phase 6—Identification of impact factor, year of publication and number of citations
When evaluating scientific publications, Vinkler (1986b) considers two important aspects
of a paper: the impact factor and the number of citations of individual papers. The impact
factor indicates the relevance of the journal in which the paper was published; the higher
the factor, the more highly the paper is regarded. The number of citations indicates the
paper's and its authors' scientific recognition. However, when the search is carried out, it is
possible to observe that there are papers without impact factor which have a high number
of citations, while others with high impact factor show a small number of citations. There
are also papers with a high number of citations and high impact factor which are, however,
old—not current—papers. At this point, in order to eliminate doubt regarding which aspect
is the most relevant in a paper, the analysis of three main aspects is proposed: the journal
relevance, evaluated through the impact factor; the paper's scientific recognition, evaluated
through the number of citations; and how recent the article is, evaluated through the year of
publication. The importance of these three aspects is explained below:
(a) Impact factor: ‘‘[…] the impact factor for a periodical is a measure of the frequency
with which the average article published in two consecutive years’’ (Vinkler 1986a,
p. 78). Due to its importance, this factor has been studied throughout the last two
decades (Vinkler 1986b, 1996, 2009, 2010, 2012). The metrics used to identify
impact factor vary among the journals. The most employed are: (a) Source
Normalized Impact per Paper (SNIP); (b) SCImago Journal Rank (SJR); (c) Impact
Factor (previous year JCR); and (d) 5-Year Impact Factor (JCR). The last one seems
to be the ideal, as it presents the average of the last 5 years, which might represent a
better evaluation of the journal.
(b) Number of citations: the number of times a paper is cited demonstrates its relevance
and recognition by the scientific community, and this should be thoroughly observed
(Bornmann 2010). However, a recent paper might have a low number of citations.
Therefore, it would be a mistake to attribute this paper lower scientific relevance
only based on the criterion number of citations, since this isolated aspect cannot
represent the global scientific relevance of a paper. Another factor that affects the
number of citations is the availability of the paper, since freely available articles ''do
have a greater research impact'' (Antelman 2004), as researchers have more
access to them and, therefore, they have more chances of being cited. Papers which can only
be accessed upon payment end up having a limited number of readings, which also
limits the citations. Thus, an excellent article that is not freely available might be read
and cited fewer times than an average paper whose access is free. Therefore, it is
important to apply the criterion number of citations together with the other
evaluation criteria.
(c) Year of publication: the year of publication indicates how current the data is; the
more recent the research, the more likely it is that new advances have been reached,
and the higher the probability that the paper contributes to some innovation in the
knowledge area. Also, there is a great likelihood that more recent papers are based on
methodologies which have been already validated, which makes them even more
valuable. Besides that, the probability of a paper being cited decreases with time
(Dieks and Chang 1976), which reinforces the importance of valuing the most recent
papers.
Taking all that into consideration, the task of this 6th phase is to identify and register
the paper's year of publication, the number of times it was cited and the impact factor of
the journal in which it was published. This phase can be carried out simultaneously
with the 8th phase—the search for the papers' full versions—aiming to save time as a
whole, since several complete papers can be easily located when the impact factor is
searched. However, as the full version of some papers might not be easily found, that
task should be carried out after identifying the papers' relevance through the
InOrdinatio.
Phase 7—Ranking the papers using the InOrdinatio After carrying out phases 1–6, the
InOrdinatio equation (1) is applied to rank the scientific works:

InOrdinatio = (IF/1000) + α × [10 − (ResearchYear − PublishYear)] + (Σ Ci)    (1)

where IF is the impact factor; α is a weighting factor ranging from 1 to 10, to be attributed
by the researcher; ResearchYear is the year in which the research was developed;
PublishYear is the year in which the paper was published; and Σ Ci is the number of times the
paper has been cited. The InOrdinatio equation presents the following dynamics:
(a) Impact factor is divided by 1000 (one thousand), aiming to normalize its value
concerning the other criteria.
(b) The equation presents the weighting factor α, whose value, attributed by the
researchers, might vary from 1 to 10. The closer the number is to 1, the lower the
importance the researcher attributes to the criterion year; the closer to 10,
the higher the importance of this criterion. For themes like Technology Transfer,
the criterion year is relevant, due to the higher number of new publications
available. Also, the time frame should be broader in this case, considering the theme
has been approached in the literature for more than a decade.
(c) Σ Ci: this criterion is the gross number of citations found in the data collected
during the portfolio construction.
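The equation and the ranking it induces can be sketched as follows. The paper records here are hypothetical, invented only for illustration; the function itself is a direct transcription of Eq. (1), with α defaulting to 10 as in the practical application later in the paper.

```python
def in_ordinatio(impact_factor, research_year, publish_year, citations, alpha=10):
    """InOrdinatio = (IF/1000) + alpha * [10 - (ResearchYear - PublishYear)] + citations."""
    return (impact_factor / 1000.0) \
        + alpha * (10 - (research_year - publish_year)) \
        + citations

# Illustrative records only (title, journal impact factor, publication year, citations).
papers = [
    {"title": "A", "if": 2.5, "year": 2014, "cites": 12},
    {"title": "B", "if": 0.8, "year": 1995, "cites": 40},
]
ranked = sorted(
    papers,
    key=lambda p: in_ordinatio(p["if"], 2015, p["year"], p["cites"]),
    reverse=True,  # higher InOrdinatio = more relevant for the portfolio
)
```

Note how the division of IF by 1000 makes the impact factor a tiebreaker rather than a dominant term, while the year term can turn negative for papers older than ten years relative to the research year.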
After treating this data, the InOrdinatio of each paper is obtained, and from this point, it is
possible to rank the papers according to their scientific relevance: the higher the
InOrdinatio value, the more relevant the paper is for the portfolio. With the papers ranked, the
researcher can define how many papers he/she will search for in full, according to his/her
priorities (for instance, the first 10, or the first 50, and so on).
Phase 8—Finding the full papers After ranking the papers using the InOrdinatio, the
complete versions of the selected papers should be found. If an article is not freely
available but is relevant to the research, it is advisable to purchase it.
Phase 9—Final reading and systematic analysis of the papers Depending on the number
of papers selected through the Methodi Ordinatio, it might not be possible for the researchers to
read all of them. For this reason, it is important to use the InOrdinatio, as this index
provides the scientific criteria for the selection of the most relevant papers to be read and
systematically analyzed. The number of papers to be read is decided by the researcher.
During this phase, the researcher will search for those aspects considered relevant for
his work, such as main authors, variables identified, results achieved, models proposed,
comparisons, research gaps etc. Researchers might want to use the Cochrane Collaboration
model of systematic review, presented by Higgins and Green (2011) and earlier in this
paper in Chart 1, to perform this phase. Figure 1 presents the flow of activities
proposed in the methodology, adapted from ProKnow-C.
Next, in order to illustrate the Methodi Ordinatio dynamics, a practical application is
presented on the theme of technology transfer models.
Phase 1—Establishing the intention of research This research intention was technology
transfer models.
Phase 2—Preliminary exploratory search of keywords in databases The combination
technology transfer model was tested in databases with which the researchers usually
work and are familiar.
Phase 3—Definition and combination of keywords and databases Among the bases
tested, the ones selected for this research were Science Direct, Web of Knowledge and
Scopus, since they present a large number of publications with the keywords searched,
higher availability of access to the published material, and higher consistency in the
search. The remaining bases did not offer the access expected, and some did not present
consistency during the exploratory search, reaching different results each time they were
tested, which prevented the research from being developed in a reliable way.
Articles with the combination technology transfer model* were found. The search was
limited to the period 01/01/1990 to 31/01/2015, aiming at a broader coverage of papers.
After the final decision about the databases to be used, the keyword combination and the
time limit, the final tests were carried out, aiming to ensure the consistency and efficacy
of the search.
Fig. 1 Phases of the methodology Methodi Ordinatio. Source: Adapted from ProKnow-C
Phase 4—Final search in the databases The definitive data search resulted in a gross
total of 352 results. Taking into consideration the different search tools in the different
bases, the application of a standard filtering procedure to all of them was not possible.
Phase 5—Filtering procedures In this phase, all the papers retrieved from all databases
were put together. It is important to emphasize that some filters could not be used in some
bases and, because of that, some papers whose theme was not related to the theme
searched were collected. Then, the following filtering and elimination procedures were
applied: repeated papers; papers whose title, abstract or keywords were not related to the
theme searched; and papers presented in conferences and book chapters (these do not have
impact factor; the researcher may use his/her own values and criteria to select other
material, such as books, book chapters, conference papers etc., which will be complementary
to the articles). The filtering resulted in a large number of papers being eliminated, since the
objective was to collect only articles. This resulted in a total of 93 articles left.
Phase 6—Identifying impact factor, year and number of citations This phase was
partially carried out simultaneously with phase 8; that is, for some articles it was possible
to find the full text when searching for this information. The sources used in this phase
were Google Scholar and the journals' websites. Some papers were not found and, for this
reason, were eliminated, which resulted in a final total of 61 papers. Of these, 12 had
SJR and 49 had JCR impact factors. The two groups were treated separately in the next phase,
but later incorporated into the same table, since no incompatibility was verified between the
results. The papers were organized in a spreadsheet in the following order of columns:
paper title, impact factor (last year JCR and SJR), number of citations, and year.
Phase 7—Ranking the papers using the InOrdinatio The InOrdinatio equation was
applied. In this research, α was set to 10, considering that the year factor is relevant
to the theme under study: neither the newest works nor the classics should be left
aside. Table 1 shows the final articles resulting from the application of phases 1–7.
The first seven papers can be observed to balance the three aspects considered
important. For the first article in the table, for example, recency was balanced against
impact factor and number of citations. The same occurred with paper 2. Article 8, in
turn, presented a high impact factor and, although no citation was found for it, its
recency had to be considered.
Another example is paper 34. Although it presented the highest impact factor of all and
was relatively recent, it was cited only five times.
Analyzing the ranking as a whole, some articles presented a negative InOrdinatio value.
This is because the search covered a period longer than 10 years: besides not being
recent, these papers presented neither a high impact factor nor a high number of
citations, which resulted in very low or even negative InOrdinatio values.
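The ranking step can be sketched in a few lines. This assumes that Eq. (1) from phase 7 takes the form IF/1000 + α·[10 − (research_year − pub_year)] + citations; the paper labels and figures below are illustrative, not the study's data:

```python
# Hedged sketch of phase 7: ranking papers by InOrdinatio.
# Assumption: Eq. (1) is IF/1000 + alpha*[10 - (research_year - pub_year)]
# + citations; alpha = 10 gives the year factor its maximum weight.

def in_ordinatio(impact_factor, citations, pub_year,
                 research_year=2015, alpha=10):
    """InOrdinatio score for one paper."""
    return (impact_factor / 1000
            + alpha * (10 - (research_year - pub_year))
            + citations)

papers = [
    ("recent, balanced", 2.5, 30, 2013),
    ("high IF, uncited", 4.0, 0, 2014),
    ("old, low impact",  1.0, 5, 2000),  # >10 years old: score goes negative
]
ranked = sorted(papers, key=lambda p: in_ordinatio(*p[1:]), reverse=True)
for name, fi, ci, yr in ranked:
    print(f"{name}: {in_ordinatio(fi, ci, yr):.1f}")
```

The 1/1000 scaling keeps the impact factor from dominating, so it acts mainly as a tiebreaker between papers of similar age and citation count, which matches the balance described for the top-ranked papers above.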
The results revealed that the first papers in the table had at least two relevant criteria to
be highlighted, which proved the efficacy of the equation. The last papers, however,
presented more than two unfavorable factors, which placed them in the last positions.
Phase 8—Finding the full papers This phase was partially carried out simultaneously
with phase 6; only the papers whose full text had not been found before needed to be
located now. Among all the papers searched, only paper 31 was not found. As it did not
present a very high InOrdinatio, it was replaced by the next paper in the list.
Phase 9—Reading and systematic analysis of the papers Again, at this step, the
researcher may use his/her own values and criteria to establish how many articles should
be read. Our advice is to establish as broad a time frame as possible in the initial
phases. By doing so, the researcher ensures that the classic papers are included, and
the papers to be read are limited to those with a positive InOrdinatio. Papers present a
negative InOrdinatio when the time frame exceeds 10 years, since the limit value of α is
10, according to Eq. (1) in phase 7. Therefore, for the sample application presented,
systematic reading is recommended for the papers with a positive InOrdinatio, the first
36 papers.
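The 10-year boundary can be checked numerically. Under the same assumed form of Eq. (1), with α = 10 the year term is α·(10 − age), so an uncited paper with a negligible impact factor turns negative as soon as it is more than 10 years old, while a heavily cited classic can stay positive; the figures below are illustrative assumptions:

```python
# Hedged numeric check of the phase 9 cut-off. Assumption: Eq. (1) is
# IF/1000 + alpha*(10 - age) + citations, with age in years since publication.

def in_ordinatio(impact_factor, citations, age, alpha=10):
    return impact_factor / 1000 + alpha * (10 - age) + citations

print(in_ordinatio(0.5, 0, 10))    # at the boundary: barely positive
print(in_ordinatio(0.5, 0, 11))    # one year past it: negative
print(in_ordinatio(0.5, 200, 25))  # old but heavily cited classic: positive
```

This is why a broad initial time frame is safe: classics accumulate enough citations to offset the age penalty, while forgettable old papers fall below zero and out of the reading list.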
Considering that the objective of this paper was to present the Methodi Ordinatio
methodology and the InOrdinatio equation, the results of the systematic reading on
technology transfer models will be presented in another paper, whose scope comprises the
authors' other objectives for the theme.
Table 1 Final papers on technology transfer model after the application of phase 8 of Methodi Ordinatio
Columns: Ranking number | Articles on technology transfer model (authors, year, journal) | Impact factor (phase 6) | Citations (phase 6) | Year (phase 6) | InOrdinatio (phase 7)
[Table body spans pp. 2124–2130 and is not reproduced here]
* SJR papers
The scientific publication mechanisms, as well as the tools used to search for
scientific papers, have increased both in quantity and in quality. Faced with this
growing volume of papers, researchers may have difficulty developing their studies,
given how much material must be read and analyzed. Amidst the abundant databases and
journals are the papers that are most relevant and should compose the bibliographic
portfolio of a specific research work; to find them, however, researchers need to carry
out some 'mining' work. Such is the discussion proposed in this paper.
The first methodology presented, the Management System of the Central Research
Institute (MSCRI), was developed specifically to evaluate the scientific production of
an institution, aiming at better and fairer management of the financial resources for
its scientists. The methodology can be applied to institutions with the same purposes—to
evaluate their scientific production in general, including books, book chapters, papers,
conference presentations etc.—or even to evaluate a single individual's scientific
production. However, it cannot be employed for the selection and ranking of papers for a
bibliographic portfolio, since its set of tools does not meet that purpose. A researcher
needs scientific material from many different researchers and scientists to compose
his/her portfolio, rather than from a single source. Therefore, this system cannot be
used to select and rank papers for a specific research portfolio.
The second methodology presented, the Cochrane Collaboration model, was designed
specifically for the health care area; for this reason, all works on a given theme must
be found and systematically read by a group of researchers, given the amount of work
involved in the systematic analysis. There is no proposed adaptation for other fields in
the sense of filtering out the most relevant papers. The researcher is therefore left
with the task of reading each and every paper on his/her theme, that is, back to the
difficulties mentioned at the beginning of this paper concerning the great volume of
publications nowadays and researchers' shortage of time.
The third methodology, called ProKnow-C, consumes a great deal of the researcher's time
if applied in full, with the bibliometric analysis and the systematic analysis of
contents and of opportunities for new investigations. If, however, the researcher wants
only the bibliographic portfolio of a given theme, the time required is equivalent to
that of the Methodi Ordinatio.
The Methodi Ordinatio offers a solution to aid the decision-making process when
choosing a portfolio. Its main advantage is that it defines the scientific relevance of
every paper using three criteria (impact factor, year of publication and number of
citations, crossed by the proposed process) instead of a single criterion (number of
citations), as ProKnow-C does. The relevance is thus scientifically established before
the systematic reading.
This new methodology presents two mechanisms, which can be used together or separately:
the Methodi Ordinatio, which is the complete nine-phase methodology; and the
InOrdinatio, which consists of one phase only, the seventh. When the researchers are
interested only in surveying the papers, without ranking them by scientific relevance,
they may simply skip phase 7, the ranking of papers through the InOrdinatio. On the
other hand, if the researchers only wish to assign scientific value to the papers
collected without employing the full Methodi Ordinatio, they can do so using its seventh
phase alone, the InOrdinatio. However, in order to set scientific criteria suitable for
the selection of a portfolio, the use of the whole methodology is recommended,
comprising both the Methodi Ordinatio and the InOrdinatio.
Chart 2 presents a comparison of the methodologies discussed and the approach of each
one. Each approach carries with it a set of assumptions, and the choice of a
methodological approach should be linked to the scientific research objectives. In this
sense, no approach is better than another; nevertheless, some research questions are
better suited to a specific methodology (Lacerda et al. 2015).
The MSCRI (1985)
Approach—Realistic Descriptive: based on the search for relationships between the decisions made by practitioners in the past, the available variables, and the results collected from the past. The researchers' task is to observe the environment and discover which variables interfere with the results expected by decision-makers (Lacerda et al. 2015).
Characteristics—Assesses the scientific production of a specific institution or researcher (Vinkler 1986a).

The Cochrane Collaboration (1993)
Approach—Axiomatic (Prescriptive): "The axiomatic path within the context of a problem which aims to combine elements, to aggregate points of view, to take a position in the presence of risks, etc. It consists of transcribing, in formal terms, those demands reflecting a form of rationality in order to investigate its logical consequences (Roy 1993, p. 192)".
Characteristics—Offers a strategy to collect and perform the systematic reading of all healthcare-related works (published papers and conference papers). It can be used in other areas of study, but there is no filtering procedure to eliminate works of no interest to the researcher (Nightingale 2009; Higgins and Green 2011).

ProKnow-C (2010)
Approach—Constructivist: "Taking the path of constructivism consists of considering concepts, models, procedures and results to be keys capable (or not) of opening certain locks likely (or not) to be appropriate for organizing a situation or causing it to develop (Roy 1993, p. 194)". In this approach the researcher's values and preferences are used to expand his/her knowledge of the subject.
Characteristics—Offers a strategy to collect papers on a specific theme; works that are not relevant or not aligned are filtered out. Bibliometric analysis and systematic reading are performed before scientific relevance is defined (for this reason, many papers of little relevance end up being systematically analyzed). The final rank of scientific relevance is defined by the papers' scientific recognition, i.e., number of citations (Afonso et al. 2012; Vaz et al. 2012; Lacerda et al. 2012).

Methodi Ordinatio (2015)
Approach—Realistic Normative: the decision-maker decides by rationality, that is, operating according to principles that reason itself creates and that are consistent with reality as accepted by a rational being, devoid of emotions (Lacerda et al. 2015). The researcher delegates to a universal model the decision on which articles are relevant.
Characteristics—Offers a strategy to collect papers on a specific theme; works that are not relevant or not aligned are filtered out. Systematic reading is performed after scientific relevance is defined by the InOrdinatio, which uses three factors: number of citations, year of publication and impact factor (see phase 7 of Methodi Ordinatio).
The strength of ProKnow-C is that the researcher performs the bibliometric analysis and
systematic reading before making a decision on a paper's scientific relevance. This is
also the weakness of the method, since a great deal of work must be done by the
researcher before knowing whether a paper is relevant for his/her research or not. In
turn, the strength of the Methodi Ordinatio is that the researcher knows in advance,
before performing the systematic reading, the scientific relevance of the papers.
Final considerations
Methodological decision-aiding tools, when adequately built, with appropriate models and
procedures, are of great help in finding the best decision. Any methodology employed to
evaluate variables with different dimensions may present limitations, since no model can
totally translate reality. Therefore, some limitations are also expected in the
methodology proposed here.
A first limitation is the fact that two different metrics were used to calculate the
InOrdinatio, the JCR and the SJR. The results were presented in the same table, ranked
by the value found. Despite being different metric systems, tests showed they can be
used conjointly. Regardless of the metrics used, we suggest that the journal metrics for
the year of publication be included with the paper, together with the ISSN and DOI, for
instance.
Another limitation is that the InOrdinatio equation is intended, in principle, to aid
the search for portfolios with a large number of works. When the research involves
themes that are scarce in the literature, the suggested solution is to apply all phases
of the methodology except the seventh, if the researchers wish to read all the papers
found and do not need to know the scientific value of each one.
The limitations presented do not affect the importance or validity of the methodology,
which takes into account three essential aspects in evaluating scientific work and
offers an effective solution to rank the papers. Simplicity is a fundamental feature of
research work since, according to van Raan (2004, p. 26), "Scientists are fascinated by
basic features such as simplicity, symmetry, harmony, and order", and these
characteristics are provided by this methodology.
Finally, while proposing solutions related to the time and quality of researchers' work,
this article also invites reflection on the management of databases, with a view to
homogenizing the way data and information about journals are made available, to the
benefit of the advancement of science in general.
Acknowledgments We thank the Brazilian Government, the Ministry of Education, and UTFPR that
supported this research.
References
Afonso, M. H. F., de Souza, J. V., Ensslin, S. R., & Ensslin, L. (2012). Como construir conhecimento sobre
o tema de pesquisa? Aplicação do processo ProKnow-C na busca de literatura sobre avaliação do
desenvolvimento sustentável. Revista de Gestão Social e Ambiental. doi:10.5773/rgsa.v5i2.424.
Antelman, K. (2004). Do open-access articles have a greater research impact? College & Research
Libraries, 65(5), 372–382. doi:10.5860/crl.65.5.372.
Barham, B. L., Foltz, J. D., & Prager, D. L. (2014). Making time for science. Research Policy, 43(1), 21–31.
doi:10.1016/j.respol.2013.08.007.
Bhupatiraju, S., et al. (2012). Knowledge flows: Analyzing the core literature of innovation, entrepreneurship
and science and technology studies. Research Policy, 41, 1205–1218. doi:10.1016/j.respol.2012.03.011.
Bornmann, L. (2010). Towards an ideal method of measuring research performance: Some comments to the
Opthof and Leydesdorff (2010) paper. Journal of Informetrics, 4(3), 441–443. doi:10.1016/j.joi.2010.04.004.
Bruggemann, R., & Carlsen, L. (2012). Multi-criteria decision analyses. Viewing MCDA in terms of both
process and aggregation methods: Some thoughts, motivated by the paper of Huang, Keisler and
Linkov. Science of the Total Environment, 425, 293–295.
Cho, K. T. (2003). Multicriteria decision methods: An attempt to evaluate and unify. Mathematical and
Computer Modelling, 37(9–10), 1099–1119. doi:10.1016/S0895-7177(03)00122-5.
Cinelli, M., Coles, S. R., & Kirwan, K. (2014). Analysis of the potentials of multicriteria decision analysis
methods to conduct sustainability assessment. Ecological Indicators, 46, 138–148.
De Greve, J. P., & Frijdal, A. (1989). Evaluation of scientific research profile analysis: a mixed method.
Higher Education Management, 1, 83–90.
Dieks, D., & Chang, H. (1976). Differences in impact of scientific publications: Some indices derived from a
citation analysis. Social Studies of Science, 6, 247–267. doi:10.1177/030631277600600204.
Dyer, J. S., Fishburn, P. C., Steuer, R. E., Wallenius, J., & Zionts, S. (1992). Multiple criteria decision
making, multiattribute utility theory: The next ten years. Management Science, 38(5), 645–654.
Figueira, J., Mousseau, V., & Roy, B. (2005). Electre methods. In Multiple criteria decision analysis: State
of the art surveys. International series in operations research and management science (Vol. 78,
pp. 133–153). United States: Springer.
Greco, S., Matarazzo, B., & Slowinski, R. (2001a). Rough sets theory for multicriteria decision analysis.
European Journal of Operational Research, 129, 1–47.
Greco, S., Matarazzo, B., Slowinski, R., & Stefanowski, J. (2001b). Variable consistency model of
dominance-based rough sets approach. In Rough sets and current trends in computing. Lecture notes in
computer science, 2005, 170–18.
Haeussler, C., Jiang, L., Thursby, J., & Thursby, M. (2014). Specific and general information sharing among
competing academic researchers. Research Policy, 43(3), 465–475. doi:10.1016/j.respol.2013.08.017.
Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions
version 5.1.0 [updated March 2011]. The Cochrane Collaboration. Retrieved July 8, 2015, from
www.cochrane-handbook.org.
Irvine, J., & Martin, B. R. (1983). Assessing basic research: The case of the Isaac Newton Telescope. Social
Studies of Science, 13, 49–86. doi:10.1177/030631283013001004.
Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value trade-offs.
Cambridge: Cambridge University Press.
Lacerda, R. T. O., Ensslin, L., & Ensslin, S. R. (2012). A bibliometric analysis of strategy and performance
measurement. Gestão & Produção, 19(1), 59–78. Retrieved July 7, 2015, from
http://www.scielo.br/pdf/gp/v19n1/a05v19n1.
Lacerda, R. T. O., Ensslin, L., & Ensslin, S. R. (2015). Research methods and success meaning in project
management. In B. Pasian (Ed.), Designs, methods and practices for research of project management.
England: Gower Publishing Ltd.
Martin, B. R. (1996). The use of multiple indicators in the assessment of basic research. Scientometrics,
36(3), 343–362. doi:10.1007/BF02129599.
Nightingale, A. (2009). A guide to systematic literature reviews. Surgery (Oxford), 27(9), 381–384.
Pohekar, S. D., & Ramachandran, M. (2004). Application of multi-criteria decision making to sustainable
energy planning: A review. Renewable and Sustainable Energy Reviews, 8, 365–381.
Roy, B. (1993). Decision science or decision-aid science? European Journal of Operational Research, 66,
184–203.
Roy, B. (2005). Paradigms and challenges. In J. Figueira, S. Greco, & M. Ehrgott (Eds.), Multiple criteria
decision analysis: State of the art surveys. Berlin: Springer.
Saaty, T. L. (1990). How to make a decision? The analytic hierarchy process. European Journal of
Operational Research, 48, 9–26.
Small, H., Boyack, K. W., & Klavans, R. (2014). Identifying emerging topics in science and technology.
Research Policy, 43(8), 1450–1467. doi:10.1016/j.respol.2014.02.005.
Taylor, P. J. (2002). A partial order scalogram analysis of communication behavior in crisis negotiation with
the prediction of outcome. The International Journal of Conflict Management, 13(1), 4–37.
Ul Haq, M. (2003). The birth of the Human Development Index. In A. Kumar (Ed.), Readings in human
development (pp. 127–137). Oxford: Oxford University Press.
UNDP (2015). United Nations Development Program. Human Development Reports. Retrieved July 9,
2015, from http://hdr.undp.org/.
van Raan, A. F. J. (2004). Measuring science. Capita selecta of current main issues. In H. F. Moed, W.
Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. The use of
publication and patent statistics in studies of S&T systems (pp. 19–50). Dordrecht: Kluwer Academic
Publishers.
Vaz, C. R., Tasca, J. E., Ensslin, L., Ensslin, S. R., & Selig, P. M. (2013). Avaliação de desempenho na
gestão estratégica organizacional: seleção de um referencial teórico de pesquisa e análise
bibliométrica. Revista Gestão Industrial. doi:10.3895/S1808-04482012000400008.
Vinkler, P. (1986a). Management system for a scientific research institute based on the assessment of
scientific publications. Research Policy, 15(2), 77–87. doi:10.1016/0048-7333(86)90003-X.
Vinkler, P. (1986b). Evaluation of some methods for the relative assessment of scientific publications.
Scientometrics, 10, 157–177. doi:10.1007/BF02026039.
Vinkler, P. (1996). The use of multiple indicators in the assessment of basic research. Scientometrics, 36(3),
343–362. doi:10.1007/BF02129599.
Vinkler, P. (2004). Characterization of the impact of sets of scientific papers: The Garfield (Impact) Factor.
Journal of the American Society for Information Science and Technology, 55, 431–435.
doi:10.1002/asi.10391.
Vinkler, P. (2009). pv-index: A new indicator for assessing scientific impact. Journal of Information
Science, 35, 602–612. doi:10.1177/0165551509103601.
Vinkler, P. (2010). The pv-index: A new indicator to characterize the impact of journals. Scientometrics, 82,
461–475. doi:10.1007/s11192-010-0182-z.
Vinkler, P. (2012). The case of scientometricians with the ‘‘absolute relative’’ impact Indicator. Journal of
Informetrics, 6, 254–264. doi:10.1016/j.joi.2011.12.004.
Zopounidis, C., & Doumpos, M. (2002). Multicriteria classification and sorting methods: A literature
review. European Journal of Operational Research, 138, 229–246.