
Scientometrics (2023) 128:363–377

https://doi.org/10.1007/s11192-022-04582-5

Labor productivity, labor impact, and co-authorship of research institutions: publications and citations per full-time equivalents

Wolfgang G. Stock1,2 · Isabelle Dorsch1 · Gerhard Reichmann2 · Christian Schlögl2

Received: 18 December 2021 / Accepted: 27 October 2022 / Published online: 14 November 2022
© The Author(s) 2022

Abstract
Indicators of productivity and impact of research institutions are based on counts of the institution members' publications and the citations those publications attracted. How can scientometricians count publications and citations on the meso-level (here, institution level)? There are three variables: the institution's scientific staff in the observed time frame, their publications in that time, and the publications' citations. Considering co-authorship of the publications, one can count 1 for every author (whole counting) or 1/n for n co-authors (fractional counting). One can apply this procedure to publications as well as citations. New in this article is the consideration of complete lists of scientific staff members, which include the exact extent of employment, to calculate the labor input based on full-time equivalents (FTE), and also of complete lists of publications by those staff members. This approach enables a size-independent calculation of labor productivity (number of publications per FTE) and labor impact (number of citations per FTE) on the meso-level. Additionally, we experiment with the difference and the quotient between summarizing values from the micro-level (person level) and aggregating whole counting values directly on the meso-level as an indicator for the institution's predominant internal or external co-authorship.

Keywords: Research institutions · Meso-level · Labor productivity · Labor impact · Co-authorship · Full-time equivalents (FTE)

* Wolfgang G. Stock
wolfgang.stock@hhu.de

Isabelle Dorsch
isabelle.dorsch@hhu.de

Gerhard Reichmann
gerhard.reichmann@uni-graz.at

Christian Schlögl
christian.schloegl@uni-graz.at

1 Department of Information Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
2 Department of Operations and Information Systems, Karl Franzens University Graz, Graz, Austria


JEL classification: I23 · O32

Introduction

It is common practice in scientometrics and research evaluation of institutions to rely on the staff's productivity and impact (Altanopoulou et al., 2012; Edgar & Geare, 2013). Productivity is a scientometric indicator working with publication numbers, and impact is an indicator based on citations. Research on departments or institutions is on the meso-level of scientometrics (Rousseau et al., 2018, p. 247). When we want to conduct research evaluation on the meso-level in a scientifically profound way, we must have detailed and justified answers to the following questions: How to determine who is part of the institution in a specific time interval? How to collect and quantitatively describe the complete set of an institution's publications and their citations? And how to count publications and citations for multi-authored papers, given that the counting methods greatly influence the results, for instance, on rankings of institutions (Lin et al., 2013)? In contrast to descriptions and evaluations of single researchers on the micro-level, scientometric studies on the meso-level face the challenge of capturing the exact number of staff members in the considered time frame (Toutkoushian et al., 2003) as the "underlying production capability" (Thelwall & Fairclough, 2017, p. 1142) to guarantee a fair ranking (Vavryčuk, 2018), if something like that is ever possible.
But, stop! What exactly does "productivity" mean? In economics and business administration, productivity "describes the relationship between output and the inputs required to generate that output" (Schreyer & Pilat, 2001, p. 128). Productivity in economics is, therefore, no simple output measure but a relation between the amount of output and the respective input. In research, the output is the number of publications. Input is the totality of factors necessary to generate the output, such as the number of researchers (including their qualifications and academic grades, their positions, their salaries, and working hours), the technical and administrative staff, the institution's equipment (laboratories, computers, etc.), and other cost sources (for instance, rents, accessories, and supplies). In scientometric analyses, it would be very difficult or even impossible to collect data on all cost factors due to the related effort or for data privacy reasons (e.g., on salaries of individual researchers). This is why we do not relate productivity to monetary capital. This clear limitation can only be overcome if sufficient cost data are at hand (see, for instance, Abramo et al., 2010, who had access to some Italian cost data).
So we have to reduce the complex productivity analysis to labor (or workforce) productivity. With the term "labor productivity," we follow a common definition in economics: the "amount of output per unit of labor" (Samuelson & Nordhaus, 2009, p. 116). Labor productivity is the "performance" of a unit of labor, e.g., of an employee, a department, or a company (Mihalic, 2007, p. 104). It is determined by measuring the quantitative performance which a unit of labor produces in a certain period of time. In economics, labor productivity is a simple measure relating output to input; it is by no means a criterion of the quality or workload of a unit of labor, and it does not say anything about the produced goods. Therefore, labor productivity should be applied in combination with other indicators.

In scientometrics, we can define "labor productivity" as the relation between the number of publications (as output) and the number of an institution's scientific staff (as input) in a time frame (say, a year) (so did, e.g., Akbash et al., 2021, or Abramo & D'Angelo, 2014) as a size-independent measure (Docampo & Bessoule, 2019). For Abramo et al. (2010, p. 606), labor productivity "is a fundamental indicator of research efficiency." As we do not consider cost data (we were not able to collect such data) and consider neither qualifications nor academic ranks of the researchers, we rate all researchers and their labor input as uniform (Abramo et al., 2010). However, we add "labor impact" to our toolbox in order to arrive at a more comprehensive picture. Labor (or workforce) impact is the number of citations of the staff's publications per researcher (Abramo & D'Angelo, 2016a, 2016b).
As not all researchers work full-time, and some of them do not work the entire year, it is necessary to calculate full-time equivalents (FTE) (Kutlača et al., 2015, p. 250). For example, a full-time researcher working all year round counts as 1.0 FTE, a half-time employee as 0.5 FTE, and a full-time faculty member working for three months as 0.25 FTE. Considering FTE, a size-independent comparison between institutions with different scientific workforces and, therefore, a fairer ranking is possible. That is why we calculate a research institution's productivity as publications per FTE and the institution's relative impact as citations per FTE. Additionally, we work with different counting methods (i.e., summarized whole counting, aggregated whole counting, and fractional counting). The effect of changing the counting method in the calculation of publications or citations per FTE has not been analyzed in much detail in the scientometric research literature.
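As an illustration, the following sketch (our own, not part of the study's tooling; the employment spells are the toy values from the example above, and the publication and citation counts are hypothetical) shows how FTE-based labor input and the per-FTE indicators can be computed:

```python
# Minimal sketch of FTE-based normalization (hypothetical helper, toy data).
# An employment spell is described by the fraction of a full-time position
# and the number of months worked within the observed year.

def fte(extent: float, months: float) -> float:
    """Full-time equivalent of one employment spell within one year."""
    return extent * months / 12.0

# A full-time researcher all year, a half-time employee all year,
# and a full-time faculty member working for three months:
spells = [(1.0, 12), (0.5, 12), (1.0, 3)]
total_fte = sum(fte(extent, months) for extent, months in spells)
print(total_fte)  # 1.0 + 0.5 + 0.25 = 1.75 FTE

# Size-independent indicators (hypothetical aggregated counts):
publications, citations = 7, 12
print(publications / total_fte)  # labor productivity: publications per FTE
print(citations / total_fte)     # labor impact: citations per FTE
```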
Research output (absolute number of publications) and impact (absolute number of citations) are size-dependent indicators; they cover the contribution and impact of a research unit to science, irrespective of the size of the research unit, while labor productivity (number of publications per FTE) and labor impact (number of citations per FTE) are size-independent indicators, which are "about the contribution of a research unit to science relative to the size of the unit" (Waltman et al., 2016).
If a research institution employs different staff groups, e.g., professors, post-docs, and pre-docs, individual researchers have different labor costs. In Austria, for example, according to the collective agreement for university employees, a professor's salary is around 30 to 80% (80 to 240%) higher than that of a post-doc (pre-doc) (Kollektivvertrag, 2022, §§47–49). The differences are often even greater in practice since many professors are offered a higher salary than the collective agreement in appointment negotiations. Various studies have shown that (full) professors are also more productive: Abramo et al. (2011) studied the performance differences between full, associate, and assistant professors in Italy; Ventura and Mombrú (2006) published a similar study on full and associate professors in Uruguay; finally, Blackburn et al. (1978) analyzed the productivity of researchers by age and sex, by their activity in the department, and by the work environment, especially the reputation of the university, all showing that there are indeed differences. Data from our project also exhibit differences in the productivity of professors and other staff members in terms of publications (Reichmann & Schlögl, 2021). For the concrete calculation of the labor costs of every staff member, it is necessary to have access to salary data, which was not possible in our case due to the protection of personal data. Therefore, we had to abstain from calculating labor productivity in terms of money and focus on counting full-time equivalents.

For counting publications with multi-authorship, we can use whole counting (each publication counts "1" for every co-author), fractional counting (calculating 1/n given n co-authors), or other counting methods (Gauffriau, 2017), which we skip here. For counting citations, both mentioned counting methods are also possible (Leydesdorff & Shin, 2011). If field normalization is necessary, not all citations may count "1" or "1/n," depending on the concrete normalization factor.
Gauffriau et al. (2007) also discuss whole counting versus fractional counting. Waltman and van Eck (2015) recommend fractional counting, especially at the levels of countries and research organizations and whenever the study requires field normalization. Gauffriau (2021) presents a comprehensive review of (no less than 32) counting methods in scientometrics. Different counting methods may lead to different results when it comes to ranking units of analysis such as individual scientists, institutions, or countries (Gauffriau et al., 2008). Following Gauffriau and Larsen (2005, p. 85), counting methods are "decisive for rankings based on publication and citation studies."
If one applies whole counting on the meso-level (and also on the macro-level, for example, in a comparison by countries), we have to distinguish between summarizing the counts of researchers of the same institution (or, on the macro-level, of the same country) and aggregating those counts. Summarizing (called "complete counting" in the terminology of Gauffriau et al., 2007) means adding up the figures for every co-author, also for co-authors of the same institution. Assume that there are two co-authors from one institution (both with the whole count of "1"); this method will lead to a value of "2" for the institution, which is very problematic as it is duplicate counting on the meso-level. In contrast, aggregating always considers the affiliation of the co-authors and counts "1" in our example. To show the differences between summarizing and aggregating, we will calculate the two values for our example institutions. For Gauffriau et al. (2007), whole counting with summarization on the meso-level is not allowed, as such an indicator is "non-additive." However, could summarized values be useful if compared with aggregated values? In the case of co-authorship at the institutional level, a slight difference between summarizing and aggregating whole counting values indicates predominantly external co-authorship (i.e., collaboration with authors from other institutions), while a significant difference indicates predominantly internal co-authorship (i.e., collaboration with other members of the same institution) (see also Gauffriau et al., 2007).
The following example should clarify the three counting methods applied in this study: (1) summarized whole counting, (2) aggregated whole counting, and (3) fractional counting (M. Gauffriau, personal communication, 2022-06-30). Consider an article written by three researchers X, Y, and Z. X works at institution A, and Y and Z at institution B.

Counting method (1): summarized whole counting

Author X from institution A: 1 credit, Author Y from institution B: 1 credit, Author Z from
institution B: 1 credit.
Institution A: 1 credit, Institution B: 1 + 1 = 2 credits.

Counting method (2): aggregated whole counting

Institution A: 1 credit, Institution B: 1 credit.

Counting method (3): fractional counting

Author X from institution A: 1/3 credit, Author Y from institution B: 1/3 credit, Author Z
from institution B: 1/3 credit.
Institution A: 1/3 credit, Institution B: 2/3 credits.
In the case of aggregated whole counting (counting method 2), we look directly at the institution level; therefore, there are no values for individual authors.
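For illustration, here is a minimal sketch (ours, not the study's actual tooling) that reproduces the three counting methods for this worked example:

```python
from collections import Counter

# A publication's by-line as (author, institution) pairs;
# the worked example: X at institution A, Y and Z at institution B.
paper = [("X", "A"), ("Y", "B"), ("Z", "B")]

def summarized_whole(papers):
    """(1) Whole counting, summarized: 1 credit per author occurrence,
    added up per institution (internal co-authors are counted multiply)."""
    credits = Counter()
    for byline in papers:
        for _author, institution in byline:
            credits[institution] += 1
    return credits

def aggregated_whole(papers):
    """(2) Whole counting, aggregated: 1 credit per distinct institution
    on the by-line, however many of its members co-author the paper."""
    credits = Counter()
    for byline in papers:
        for institution in {inst for _author, inst in byline}:
            credits[institution] += 1
    return credits

def fractional(papers):
    """(3) Fractional counting: 1/n per author for n co-authors."""
    credits = Counter()
    for byline in papers:
        n = len(byline)
        for _author, institution in byline:
            credits[institution] += 1 / n
    return credits

print(summarized_whole([paper]))  # A: 1, B: 2
print(aggregated_whole([paper]))  # A: 1, B: 1
print(fractional([paper]))        # A: 1/3, B: 2/3
```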
Based on what we have discussed above, we formulate three research questions (RQs):


RQ1: To what extent do indicators of production and impact (absolute numbers of publications and citations) on the one hand and labor productivity and labor impact (numbers of publications and citations per FTE) on the other hand differ at the meso-level? Are the rankings of the institutions affected?

RQ2: To what extent do indicators using aggregated whole counting and indicators using fractional counting differ at the meso-level, again for production/impact and labor productivity/labor impact? And, in turn, are the rankings of the institutions affected?

RQ3: Can the difference or the quotient between summarizing and aggregating whole counting values indicate an institution's internal or external co-authorship preference?

We will count values for an institution's output and its absolute impact, i.e., absolute numbers of publications and citations (whole and fractional counting), as well as its labor productivity and relative impact (values per FTE, again whole and fractional counting). For whole counting, we differentiate between summarizing and aggregating. In the end, we arrive at twelve different scientometric indicators on the meso-level (Table 1): two classical indicator sets counting absolute numbers of publications and citations (1 and 2) and two new indicator sets relating the numbers of publications and citations to FTE (3 and 4). How will values and rankings change when we vary the indicators? We will show differences in values and rankings using the example of two research departments.
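To make the structure of the indicator set explicit, here is a compact sketch (our illustration with hypothetical counts, following the layout of Table 1 below) that derives all twelve indicators by crossing the three counting variants with the absolute and per-FTE measures:

```python
# Hypothetical counts per counting method (P: publications, C: citations).
counts = {
    "Sum":  {"P": 10,  "C": 20},   # summarized whole counting
    "Agg":  {"P": 6,   "C": 12},   # aggregated whole counting
    "Frac": {"P": 4.5, "C": 7.0},  # fractional counting
}
fte = 2.0  # labor input of the institution in full-time equivalents

indicators = {}
for method, values in counts.items():
    for measure, value in values.items():
        indicators[f"{method}({measure})"] = value            # sets (1) and (2)
        indicators[f"{method}({measure})/FTE"] = value / fte  # sets (3) and (4)

for name in sorted(indicators):
    print(f"{name}: {indicators[name]}")  # the twelve meso-level indicators
```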

Methods

Our research method is case study research (Flyvbjerg, 2006; Hays, 2004). We analyze two paradigmatic institutions of information science in German-speaking countries (Friedländer, 2014): the Department of Information Science at Heinrich Heine University Düsseldorf in Germany (Gust von Loh & Stock, 2008) and the Institute for Information Science and Information Systems at Karl Franzens University Graz in Austria (Reichmann et al., 2021; Reichmann & Schlögl, 2022), since 2020 part of the Department of Operations and Information Systems, for the ten years 2009 to 2018.

Table 1  Scientometric indicators on the meso-level

| Indicator | Whole counting: Sum | Whole counting: Aggregate | Fractional counting |
|---|---|---|---|
| (1) Production / output (absolute number of publications): #P | Sum(P) | Agg(P) | Sum(PFC) |
| (2) Absolute impact (absolute number of citations): #C | Sum(C) | Agg(C) | Sum(CFC) |
| (3) Labor productivity (number of publications per FTE): #P/FTE | Sum(P)/FTE | Agg(P)/FTE | Sum(PFC)/FTE |
| (4) Labor impact (number of citations per FTE): #C/FTE | Sum(C)/FTE | Agg(C)/FTE | Sum(CFC)/FTE |

#P: number of publications; #C: number of citations; Sum: whole counting with summarizing; Sum(PFC): fractional counting with summarizing publication points; Sum(CFC): fractional counting with summarizing citation points; Agg: whole counting with aggregating; FTE: full-time equivalent


The research topics of both departments are similar; they work on information systems, especially mobile systems, science communication, citation analysis, university libraries, and scientific journals (Graz), as well as on social media, information literacy, informational (smart) cities, information behavior, information retrieval, and knowledge organization (Düsseldorf). While information scientists in Graz have a stronger focus on information systems, their colleagues from Düsseldorf do more research on information services (including social media services) (Dorsch et al., 2017). However, there are significant differences in the number of all researchers (in favor of Düsseldorf) and the number of professors (in favor of Graz). The mean number of co-authors for the publications from Graz is 2.02; for Düsseldorf, the corresponding value is 2.69, so the counting method for co-authored publications seems very important. At the beginning of our research, we were confronted with two problems: How to get a complete list of all staff members, and how to get a complete list of their publications and the citations of these publications? It is essential on this scientometric level to guarantee the completeness of all indicators to avoid misleading figures or incomparable data sets (Reichmann & Schlögl, 2021; Reichmann et al., 2022).

The evaluated institutions and their members

What is the unit of evaluation at the meso-level? How to determine who is part of the institution? As units of evaluation, we chose a department and an institute of information science. The sources for the scientific staff's employment, including the period and the extent (hours per week) of their employment, could be personnel records; however, those official documents are confidential and by no means open data. So we had to use published information (e.g., on personal or institutional websites) and ask our colleagues personally (in 2021). We considered scientific staff in temporary projects and research assistants only if they were employed and held an academic degree. Furthermore, we excluded technical and administrative staff. In some scientific disciplines, technical staff is mentioned in the articles' by-line; this is only rarely the case in information science.

Both full-time and part-time jobs were considered, and we also captured the months in a year the researchers were employed. We did not consider visiting scholars. Pseudonyms (in our case study, "Mathilde B. Friedländer") were resolved. Our lists of the institutions' members covered 26 researchers from Düsseldorf and eight from Graz.

The evaluated institutions’ publications

How to gather a complete set of all research publications of the institution's members? As all sources are incomplete (Hilbert et al., 2015), we worked not only with publication data from Web of Science (WoS), Scopus, and Google Scholar but also with personal or corporate publication lists (Dorsch, 2017; Dorsch & Frommelius, 2015; Dorsch et al., 2018). However, we had to rely on the primary multidisciplinary bibliographic information services for citation data. For all 26 and eight researchers in Düsseldorf and Graz, we searched WoS, Scopus, and Google Scholar for publications and citations.

The country of the institution has proven to be essential for the method of collecting publication data. Due to Austria's university regulations, including the duty to report the institutions' intellectual capital statements (German: Wissensbilanzen) annually (§ 13 (6) Universitätsgesetz, 2002), we had an excellent institutional repository for the institution in Graz. There is no such regulation in Germany, so we had to collect the publication data for Düsseldorf from the researchers' personal publication lists on their websites.


Table 2  Scientometric indicators for the Information Science Dept. in Düsseldorf

| Information Science Düsseldorf 2009–2018 (114.3 FTE) | Whole counting: Sum | Whole counting: Aggregate | Fractional counting |
|---|---|---|---|
| (1) Production: Sum(P), Agg(P), Sum(PFC) | 634 | 345 | 271.7 |
| (2) Absolute impact: Sum(C), Agg(C), Sum(CFC) | 1132 | 705 | 385.0 |
| (3) Labor productivity: Sum(P)/FTE, Agg(P)/FTE, Sum(PFC)/FTE | 5.54 | 3.02 | 2.38 |
| (4) Labor impact: Sum(C)/FTE, Agg(C)/FTE, Sum(CFC)/FTE | 9.90 | 6.17 | 3.37 |

N = 345 publications

Table 3  Scientometric indicators for the Information Science and Information Systems Institute in Graz

| Information Science Graz 2009–2018 (51.3 FTE) | Whole counting: Sum | Whole counting: Aggregate | Fractional counting |
|---|---|---|---|
| (1) Production: Sum(P), Agg(P), Sum(PFC) | 258 | 228 | 173.3 |
| (2) Absolute impact: Sum(C), Agg(C), Sum(CFC) | 206 | 204 | 86.0 |
| (3) Labor productivity: Sum(P)/FTE, Agg(P)/FTE, Sum(PFC)/FTE | 5.03 | 4.44 | 3.38 |
| (4) Labor impact: Sum(C)/FTE, Agg(C)/FTE, Sum(CFC)/FTE | 4.02 | 3.98 | 1.68 |

N = 228 publications

In the end, we evaluated all publication data critically—partly in co-operation with the researchers. For data storage, retrieval, and most parts of the calculations, we used Microsoft Access.

Results

Tables 2 and 3 exhibit the values of our twelve meso-level indicators for both paradigmatic Information Science institutions. In the analyzed 10 years, Düsseldorf's institution formally published 345 documents and Graz 228. In the entire 10 years, Düsseldorf's labor input was 114.3 FTE, while Graz's was only 51.3 FTE. Papers of Düsseldorf's institution were cited 705 times in WoS, 1532 times in Scopus, and 5491 times in Google Scholar, and Graz's papers received 204 (WoS), 294 (Scopus), and 879 (Google Scholar) citations (all data as of September 2021). In this paper, we only consider the citation figures in WoS.

Production and impact versus labor productivity and labor impact (RQ1)

Düsseldorf’s production (Agg(P)) in the observed 10 years covers 345 publications, while
Graz’s is 228 papers. Düsseldorf’s 345 documents received (till September 2021) 705 cita-
tions (Agg(C)) in WoS, and the Graz papers got 204 citations. Due to the non-additivity
of whole counting values of the researchers, the application of Sum(P) and Sum(C) does
not make sense at the meso-level. (However, we will use Sum(P) and Sum(C) in order to

13
370 Scientometrics (2023) 128:363–377

answer RQ3.) So, Düsseldorf published 117 documents more than their colleagues from
Graz and received 501 more citations in WoS. In an institutional ranking, Düsseldorf
comes first and Graz—far behind—second with only 61.1% of Düsseldorf’s production
and 28.9% of Düsseldorf’s absolute impact (Table 4).
Now we turn to labor productivity and labor impact. Düsseldorf's labor productivity (Agg(P)/FTE) equals 3.02, while Graz's is 4.44. As can be seen, the ranking between the institutions changes dramatically. On average, each member of the Information Science Dept. in Düsseldorf (related to FTE) published 3.0 papers every year and each information scientist from Graz 4.4 papers, i.e., 1.42 papers more than a researcher from Düsseldorf. So Graz comes to 147.0% of Düsseldorf's labor productivity. Düsseldorf's labor impact (Agg(C)/FTE) is 6.17 and still higher than that of Graz (3.98), but in contrast to the absolute impact, the difference between the two institutions decreases. Concerning absolute impact, Graz shows only about 28.9% of Düsseldorf's value (204 citations in Graz in relation to 705 in Düsseldorf), but concentrating on labor impact, Graz reaches 64.5% of Düsseldorf's value (3.98 in relation to 6.17).
Considering our case study, we arrive at a clear result for answering RQ1: There are indeed significant differences between production and absolute impact indicators on the one side and labor productivity and labor impact on the other. Also, the ranking order of the institutions changes for some indicators. As size-independent indicators, labor productivity and labor impact have benefits for scientometrics and allow for an additional view on research institutions.

Whole counting versus fractional counting (RQ2)

Whole counting (of course, only aggregated values) does not consider the number of co-authors, but fractional counting does. In our case, we calculated the proportion of one author using the simple formula 1/n, assuming that there are n co-authors. How do the example institutions' whole and fractional counting values differ? As both institutions practice co-authorship, the fractional counting values are necessarily lower than the whole counting values.

Considering fractional counting of publications (Sum(PFC)), Düsseldorf published 271.7 papers (or, better, publication points) compared to 345 when applying whole counting. They received 385.0 citation points (Sum(CFC)) instead of 705 for whole counting. Using fractional counting (Sum(PFC)), Graz arrived at 173.3 publication points (for 228 publications) and 86.0 citation points (Sum(CFC)) (derived from 204 citations). Comparing both institutions under the lens of fractional counting, Düsseldorf ranks first for publication points (a difference of 98.4 in favor of Düsseldorf) and citation points (a difference of 299 points).

Table 4  Düsseldorf and Graz in comparison

| | Whole counting: Sum | Whole counting: Aggregate | Fractional counting (%) |
|---|---|---|---|
| (1) Production: Sum(P), Agg(P), Sum(PFC) | – | 66.1% | 63.8 |
| (2) Absolute impact: Sum(C), Agg(C), Sum(CFC) | – | 28.9% | 22.3 |
| (3) Labor productivity: Sum(P)/FTE, Agg(P)/FTE, Sum(PFC)/FTE | – | 147.0% | 142.0 |
| (4) Labor impact: Sum(C)/FTE, Agg(C)/FTE, Sum(CFC)/FTE | – | 64.5% | 49.9 |

Düsseldorf's values = 100%. N(Düsseldorf) = 345 publications; N(Graz) = 228 publications.


Regarding fractional counting of labor productivity and labor impact, Düsseldorf has a value of 2.38 for labor productivity and 3.37 for labor impact, and Graz comes to 3.38 for labor productivity and 1.68 for labor impact. Again, the institutions change their ranking positions when we differentiate between production and labor productivity. However, these changes are due to the differences between production and labor productivity as well as between absolute impact and labor impact, but not due to the differences between whole and fractional counting, which are indeed observable (and all in favor of Düsseldorf) but rather small (Table 4). The difference between whole and fractional counting depends on the institution's culture of co-operating inside the institution and with external researchers.

Internal or external co-authorship (RQ3)

Summarizing the whole counting values of the individual researchers from the same institution does not make sense if a research analysis takes place at the institution level. However, we can apply the differences and quotients between summarizing and aggregating as scientometric indicators of internal co-operation on the meso-level.

If we look at Table 5, we see much larger values for Düsseldorf than for Graz for all our indicators, i.e., for production (publications) and absolute impact (citations) as well as for labor productivity and labor impact. For production, the difference indicates the number of internal co-authorships, and the quotient the (average) rate of internal authors per publication. If all co-authors are from other institutions, the difference between summarizing and aggregating is zero, and the quotient equals 1. The higher these values, the more internal co-authors (counted twice or multiple times) are involved. In Table 5, we can see that internal co-authorship is relatively high in Düsseldorf: there are 289 internal co-authorships, and one paper is, on average, co-authored by 1.84 internal staff members. Internal co-operation is much lower in Graz, where there are only 30 internal co-authorships; on average, a paper is co-published by 1.13 colleagues from Graz. The average number of authors (internal and external) per paper is 2.69 for Düsseldorf and 2.02 for Graz. If the proportion of internal authorship is to be calculated, one must first be subtracted from these values. Accordingly, the proportion of internal co-authorships is approximately 50% for Düsseldorf (0.84/1.69 × 100) and roughly 13% for Graz (0.13/1.02 × 100).
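Stated compactly (our restatement of the calculation just described): with the quotient q = Sum(P)/Agg(P), i.e., the average number of internal authors per paper, and the mean number of authors per paper ā, the proportion of internal co-authorship is

```latex
s_{\mathrm{int}} = \frac{q - 1}{\bar{a} - 1},
\qquad q = \frac{\mathrm{Sum}(P)}{\mathrm{Agg}(P)},
```

which gives (1.84 − 1)/(2.69 − 1) ≈ 0.50 for Düsseldorf and (1.13 − 1)/(2.02 − 1) ≈ 0.13 for Graz.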

Table 5  Absolute differences and quotients between summarizing and aggregating whole counting values for the Information Science institutions in Düsseldorf and Graz

| | Düsseldorf: Diff | Düsseldorf: Quotient | Graz: Diff | Graz: Quotient |
|---|---|---|---|---|
| (1) Production: Sum(P) − Agg(P), Sum(P)/Agg(P) | 289 | 1.84 | 30 | 1.13 |
| (2) Absolute impact: Sum(C) − Agg(C), Sum(C)/Agg(C) | 427 | 1.61 | 2 | 1.01 |
| (3) Labor productivity: Sum(P)/FTE − Agg(P)/FTE, (Sum(P)/FTE)/(Agg(P)/FTE) | 2.53 | 1.84 | 0.58 | 1.13 |
| (4) Labor impact: Sum(C)/FTE − Agg(C)/FTE, (Sum(C)/FTE)/(Agg(C)/FTE) | 3.74 | 1.61 | 0.04 | 1.01 |

N(Düsseldorf) = 345 publications; N(Graz) = 228 publications. Diff: absolute difference.

In the case of absolute impact, the difference informs about the (additional) number of citations from internally co-authored papers, while the quotient gives the factor by which the citations increase due to the internal authorship (and, as a consequence, multiple counting). It must, however, be noted that the citations to an internally co-published paper are counted multiple times if it is co-authored by more than two staff members. According to Table 5, Düsseldorf received 427 citations for internally co-published papers, which is 61% of the total citations. Graz attracted only two citations for internally co-authored papers; this is only 1% of the total citations received.

Labor productivity relates the internal publication output to 1 FTE. Accordingly, in Düsseldorf, 1 FTE is, on average, involved in 2.53 internal co-authorships; in Graz, one FTE has, on average, 0.58 internal co-authorships. The difference between Düsseldorf and Graz is again higher for the labor impact: in Düsseldorf, internally co-authored papers by 1 FTE are, on average, cited 3.74 times (repeatedly), while the corresponding value for Graz is only 0.04.

Can the difference or the quotient (as a percentage) between summarizing and aggregating whole counting values indicate an institution's extent of internal or external co-operation? Based on our case study, the answer is "yes," as we found clear indications of more internal co-operation in Düsseldorf and more external co-operation in Graz.

Discussion

Main results

In this article, we discussed the influence of applying absolute publication and citation numbers compared to numbers per FTE and the influence of whole versus fractional counting on evaluating a research institution. Our method was case study research. For two exemplary Information Science institutions, we collected data on the extent and period of employment of the institutions' researchers, data on their publications, and, finally, data on citations of those publications.

While absolute numbers of publications and citations are size-dependent measures and work as indicators for research production (output) and research impact, publications per FTE and citations per FTE are size-independent indicators for labor productivity and labor impact. Labor productivity and labor impact may be captured by whole counting as well as by fractional counting.
The choice of indicators and counting methods depends on the purpose of a scientometric study. The purpose of our study is a comparative analysis of two Information Science institutions. Both institutions work in the same scientific field but differ in the number of faculty members and their research co-operation strategies (different numbers and affiliations of co-authors). Therefore, for our case study, the most appropriate indicators at the meso-level are fractional counting values for labor productivity (number of publications per FTE) and labor impact (number of citations per FTE), for the following reasons:

• Values per FTE reflect the size of an institution and normalize the publication and citation counts by the employee years invested in the institution. Comparisons between institutions of different sizes thus become possible.
• Fractional counting seems to be fairer than whole counting, a proposition predominantly accepted in scientometrics (see, e.g., Perianes-Rodriguez et al., 2016; Leydesdorff & Park, 2017). This is particularly true if the co-operation behavior differs (different co-authorship numbers, stronger internal/external co-operation).
• Fractional counting and values per FTE make sense for publication and citation measures, as both depend on the size of an institution and the extent of co-operation.

When using size-dependent counting methods, Düsseldorf takes first place for all indicators. When moving to the size-independent productivity indicator (i.e., publications per FTE), Graz is on top. And when switching to the fractional counting of publications, Graz is number one again. The change in the rankings when using size-independent instead of size-dependent indicators is due to the higher number of faculty in Düsseldorf; the difference in the rankings concerning whole and fractional counting results from the institutions' co-operation strategies, whereby Graz prefers external research partners more than Düsseldorf does. Thus, specific characteristics of the two institutions lead to changes in the scientometric portrayal of their publication activities, including the very important positioning in a ranking. Labor productivity and labor impact are valuable size-independent indicators for the scientometric evaluation of research institutions. They do not replace size-dependent indicators but complement them.
As a by-product of our research, we found that, in particular, the ratios between the sum of the institutions' researchers' publication and citation numbers (on the micro-level) and the aggregation of the values on the meso-level are a hint concerning the co-operation policy of an institution. The greater the ratio, the more an institution is oriented towards internal publication teams. However, we may also count the number of all authors of all the institution's publications (#A), the number of internal co-authors (#AInt), and the number of external co-authors (#AExt). The quotients #AInt/#A and #AExt/#A form indicators of internal and external co-operation prevalence. Such indicators give the basis for further investigations on research co-operation (Silva et al., 2019), for instance, the extent of labor impact of internal versus external co-authorship.
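A possible operationalization of these prevalence quotients (our sketch; the staff set, author names, and by-lines are hypothetical):

```python
# Hypothetical sketch: prevalence of internal vs. external co-authorship.
# Each by-line is a list of author names; `staff` is the set of the
# institution's members in the observed time frame.

def cooperation_prevalence(bylines, staff):
    """Return (#AInt/#A, #AExt/#A) over all author slots of all papers."""
    total = internal = 0
    for byline in bylines:
        for author in byline:
            total += 1
            if author in staff:
                internal += 1
    return internal / total, (total - internal) / total

staff = {"Alpha", "Beta"}  # hypothetical institution members
bylines = [["Alpha", "Beta"], ["Beta", "Gamma"], ["Alpha"]]
internal_share, external_share = cooperation_prevalence(bylines, staff)
print(internal_share, external_share)  # 0.8 0.2 (4 of 5 author slots internal)
```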

Limitations, outlook, and recommendations

A limitation of our study is the small database. However, the aim was not to present a lot of data on many institutions but to draw attention to particular scientometric problems at the meso-level and to indicators that promise a solution to FTE-related challenges. Of course, there should be more extensive studies with much more data. However, the more data are used, the greater the problems with the collection of complete institutional or even country-wide publication sets, especially concerning personal data and realistic citation sets.
In this paper, we only considered citation impact (Abramo, 2018) and skipped other impacts, such as on social media, which are partially captured by altmetrics. Likewise, for the calculation of fractional counting, we only considered the formula 1/n and bypassed alternative calculation methods (Gauffriau, 2017). We did not apply field-specific citation normalization and ignored different document types (e.g., research papers vs. review articles).
The indicators for labor productivity, labor impact, and internal/external co-operation orientation work not only at the level of single research institutions (as in our example) but also for all other meso-level institutions (e.g., universities) and, additionally, for all entities at the macro-level (e.g., countries or world regions), i.e., they can be applied to any entity where size matters. Much more study is needed here as well.


In our study, complete data sets regarding the researchers and their publications formed the basis for the calculations. The data set for citations is incomplete, as we only considered citation numbers from WoS. It can be (even very) problematic to collect all data on researchers' employment if there are no trustworthy sources and strict data protection laws apply. It is also difficult to collect all publication data. But with the union of data from WoS, Scopus, Google Scholar, field-specific services (e.g., ACM Digital Library), other information services such as Dimensions, and, very importantly, personal or institutional publication lists, this should be realizable. Since the required data are known to the analyzed institutions, the optimal solution to ensure complete data sets, both in terms of employment data and publication data, would be for the institutions to self-report and archive these data annually.
Changing the counting method in the calculation of (1) production and impact (absolute numbers of publications and citations) and (2) labor productivity and labor impact (numbers of publications and citations per FTE) has not been analyzed in detail in scientometrics so far. Since there are indeed differences between "raw" publication and citation values and the numbers of publications and citations per FTE, and also considerable differences in the ranking of institutions, the introduction of FTEs into scientometrics seems to be very promising and forward-looking.
Acknowledgements We would like to thank an anonymous reviewer for valuable hints for optimizing our paper. Special thanks go to Marianne Gauffriau for detailed information on counting methods of publications and citations and for critical remarks on an earlier version of this article.

Author contributions All authors contributed to the study conception, study design and data collection.
CS performed data preparation and analysis. WGS and GR wrote the first draft of the manuscript, and all
authors commented on previous versions. All authors read and approved the final manuscript.

Funding  Open Access funding enabled and organized by Projekt DEAL. No funds, grants, or other support
were received.

Data availability The data of our study are publicly available in Zenodo at https://doi.org/10.5281/zenodo.7263222.

Declarations 
Conflict of interest  The authors have no financial or proprietary interests in any material discussed in this
article.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Abramo, G. (2018). Revisiting the scientometric conceptualization of impact and its measurements. Journal of Informetrics, 12(3), 590–597. https://doi.org/10.1016/j.joi.2018.05.001
Abramo, G., & D'Angelo, C. A. (2014). How do you define and measure research productivity? Scientometrics, 101, 1129–1144. https://doi.org/10.1007/s11192-014-1269-8
Abramo, G., & D'Angelo, C. A. (2016a). A farewell to the MNCS and like size-independent indicators. Journal of Informetrics, 10(2), 646–651. https://doi.org/10.1016/j.joi.2016.04.006
Abramo, G., & D'Angelo, C. A. (2016b). A farewell to the MNCS and like size-independent indicators: Rejoinder. Journal of Informetrics, 10(2), 679–683. https://doi.org/10.1016/j.joi.2016.01.011
Abramo, G., D'Angelo, C. A., & Di Costa, F. (2011). Research productivity: Are higher academic ranks more productive than lower ones? Scientometrics, 88, 915–928. https://doi.org/10.1007/s11192-011-0426-6
Abramo, G., D'Angelo, C. A., & Solazzi, M. (2010). National research assessment exercises: A measure of the distortion of performance rankings when labor input is treated as uniform. Scientometrics, 84, 605–619. https://doi.org/10.1007/s11192-010-0164-1
Akbash, K. S., Pasichnyk, N. O., & Rizniak, R. Y. (2021). Analysis of key factors of influence on scientometric indicators of higher education institutions of Ukraine. International Journal of Educational Development, 81, 102330. https://doi.org/10.1016/j.ijedudev.2020.102330
Altanopoulou, P., Dontsidou, M., & Tselios, N. (2012). Evaluation of ninety-three major Greek university departments using Google Scholar. Quality in Higher Education, 18(1), 111–137. https://doi.org/10.1080/13538322.2012.670918
Blackburn, R. T., Behymer, C. E., & Hall, D. E. (1978). Correlates of faculty publications. Sociology of Education, 51(2), 132–141. https://www.jstor.org/stable/2112245
Docampo, D., & Bessoule, J. J. (2019). A new approach to the analysis and evaluation of the research output of countries and institutions. Scientometrics, 119(2), 1207–1225. https://doi.org/10.1007/s11192-019-03089-w
Dorsch, I. (2017). Relative visibility of authors' publications in different information services. Scientometrics, 112(2), 917–925. https://doi.org/10.1007/s11192-017-2416-9
Dorsch, I., Askeridis, J., & Stock, W. G. (2018). Truebounded, overbounded, or underbounded? Scientists' personal publication lists versus lists generated through bibliographic information services. Publications, 6(1), 1–9. https://doi.org/10.3390/publications6010007
Dorsch, I., & Frommelius, N. (2015). A scientometric approach to determine and analyze productivity, impact and topics based upon personal publication lists. In F. Pehar, C. Schlögl, & C. Wolff (Eds.), Re:inventing information science in the networked society. Proceedings of the 14th International Symposium on Information Science (ISI 2015), Zadar, Croatia, 19th–21st May 2015 (pp. 578–580). Hülsbusch. https://zenodo.org/record/17978
Dorsch, I., Schlögl, C., Stock, W. G., & Rauch, W. (2017). Forschungsthemen der Düsseldorfer und Grazer Informationswissenschaft (2010 bis 2016). Information: Wissenschaft & Praxis, 68(5–6), 320–328. https://doi.org/10.1515/iwp-2017-0060
Edgar, F., & Geare, A. (2013). Factors influencing university research performance. Studies in Higher Education, 38(5), 774–792. https://doi.org/10.1080/03075079.2011.601811
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363
Friedländer, M. B. (2014). Informationswissenschaft an deutschsprachigen Universitäten: Eine komparative informetrische Analyse. Information: Wissenschaft und Praxis, 65(2), 109–119. https://doi.org/10.1515/iwp-2014-0018
Gauffriau, M. (2017). A categorization of arguments for counting methods for publication and citation indicators. Journal of Informetrics, 11(3), 672–684. https://doi.org/10.1016/j.joi.2017.05.009
Gauffriau, M. (2021). Counting methods introduced into the bibliometric research literature 1970–2018: A review. Quantitative Science Studies, 2(3), 932–975. https://doi.org/10.1162/qss_a_00141
Gauffriau, M., & Larsen, P. O. (2005). Counting methods are decisive for rankings based on publication and citation studies. Scientometrics, 64, 85–93. https://doi.org/10.1007/s11192-005-0239-6
Gauffriau, M., Larsen, P. O., Maye, I., Roulin-Perriard, A., & von Ins, M. (2007). Publication, cooperation and productivity measures in scientific research. Scientometrics, 73(2), 175–214. https://doi.org/10.1007/s11192-007-1800-2
Gauffriau, M., Larsen, P. O., Maye, I., Roulin-Perriard, A., & von Ins, M. (2008). Comparisons of results of publication counting using different methods. Scientometrics, 77, 147–176. https://doi.org/10.1007/s11192-007-1934-2
Gust von Loh, S., & Stock, W. G. (2008). Wissensrepräsentation—Information Retrieval—Wissensmanagement. Das Forschungsprogramm der Düsseldorfer Informationswissenschaft. Information: Wissenschaft und Praxis, 59(2), 73–74. https://www.isi.hhu.de/fileadmin/redaktion/Fakultaeten/Philosophische_Fakultaet/Sprache_und_Information/Informationswissenschaft/Dateien/Wolfgang_G._Stock/Daten_vor_2011/1204543980editorial_.pdf
Hays, P. A. (2004). Case study research. In K. deMarrais & S. D. Lapan (Eds.), Foundations for research (pp. 217–234). Routledge.
Hilbert, F., Barth, J., Gremm, J., Gros, D., Haiter, J., Henkel, M., Reinhardt, W., & Stock, W. G. (2015). Coverage of academic citation databases compared with coverage of social media: Personal publication lists as calibration parameters. Online Information Review, 39(2), 255–264. https://doi.org/10.1108/OIR-07-2014-0159
Kollektivvertrag. (2022). Kollektivvertrag für die ArbeitnehmerInnen der Universitäten. Dachverband der Universitäten. https://www.aau.at/wp-content/uploads/2018/06/Kollektivvertrag-Universitaeten.pdf
Kutlača, D., Babić, D., Živković, L., & Štrbac, D. (2015). Analysis of quantitative and qualitative indicators of SEE countries scientific output. Scientometrics, 102, 247–265. https://doi.org/10.1007/s11192-014-1290-y
Leydesdorff, L., & Park, H. W. (2017). Full and fractional counting in bibliometric networks. Journal of Informetrics, 11(1), 117–120. https://doi.org/10.1016/j.joi.2016.11.007
Leydesdorff, L., & Shin, J. C. (2011). How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines. Journal of the American Society for Information Science and Technology, 62(6), 1146–1155. https://doi.org/10.1002/asi.21511
Lin, C. S., Huang, M. H., & Chen, D. Z. (2013). The influence of counting methods on university rankings based on paper count and citation count. Journal of Informetrics, 7(3), 611–621. https://doi.org/10.1016/j.joi.2013.03.007
Mihalic, V. (2007). ABC der Betriebswirtschaft (7th ed.). Linde.
Perianes-Rodriguez, A., Waltman, L., & van Eck, N. J. (2016). Constructing bibliometric networks: A comparison between full and fractional counting. Journal of Informetrics, 10(4), 1178–1195. https://doi.org/10.1016/j.joi.2016.10.006
Reichmann, G., & Schlögl, C. (2021). Möglichkeiten zur Steuerung der Ergebnisse einer Forschungsevaluation. Information: Wissenschaft und Praxis, 72(4), 212–220. https://doi.org/10.1515/iwp-2021-2148
Reichmann, G., & Schlögl, C. (2022). On the possibilities of presenting the research performance of an institute over a long period of time: The case of the Institute of Information Science at the University of Graz in Austria. Scientometrics, 127, 3193–3223. https://doi.org/10.1007/s11192-022-04377-8
Reichmann, G., Schlögl, C., Stock, W. G., & Dorsch, I. (2022). Forschungsevaluation auf Institutsebene: Der Einfluss der gewählten Methodik auf die Ergebnisse. Beiträge zur Hochschulforschung, 44(1), 74–97. https://www.bzh.bayern.de/fileadmin/user_upload/Publikationen/Beitraege_zur_Hochschulforschung/2022/2022-1-Reichmann-Schloegl-Stock-Dorsch.pdf
Reichmann, G., Schlögl, C., & Thalmann, S. (2021). Das Institut für Informationswissenschaft an der Universität Graz: 1987–2020. Information: Wissenschaft und Praxis, 72(1), 1–9. https://doi.org/10.1515/iwp-2020-2132
Rousseau, R., Egghe, L., & Guns, R. (2018). Becoming metric-wise: A bibliometric guide for researchers. Chandos. https://www.sciencedirect.com/book/9780081024744/becoming-metric-wise
Samuelson, P. A., & Nordhaus, W. D. (2009). Economics (19th ed.). McGraw-Hill.
Schreyer, P., & Pilat, D. (2001). Measuring productivity. OECD Economic Studies, 33(II), 128–170. https://www.oecd.org/employment/labour/1959006.pdf
Silva, F. S. V., Schulz, P. A., & Noyons, E. C. M. (2019). Co-authorship networks and research impact in large research facilities: Benchmarking internal reports and bibliometric databases. Scientometrics, 118, 93–108. https://doi.org/10.1007/s11192-018-2967-4
Thelwall, M., & Fairclough, R. (2017). The research production of nations and departments: A statistical model for the share of publications. Journal of Informetrics, 11(4), 1142–1157. https://doi.org/10.1016/j.joi.2017.10.001
Toutkoushian, R. K., Porter, S. R., Danielson, C., & Hollis, P. R. (2003). Using publications counts to measure an institution's research productivity. Research in Higher Education, 44(2), 121–148. https://doi.org/10.1023/A:1022070227966
Universitätsgesetz. (2002). Bundesgesetz über die Organisation der Universitäten und ihre Studien (Universitätsgesetz 2002). Bundesgesetzblatt für die Republik Österreich, Teil I, Nr. 120/2002. https://www.ris.bka.gv.at/Dokumente/BgblPdf/2002_120_1/2002_120_1.pdf
Vavryčuk, V. (2018). Fair ranking of researchers and research teams. PLoS ONE, 13(4), e0195509. https://doi.org/10.1371/journal.pone.0195509
Ventura, O., & Mombrú, A. (2006). Use of bibliometric information to assist research policy making. A comparison of publication and citation profiles of full and associate professors at a School of Chemistry in Uruguay. Scientometrics, 69, 287–313. https://doi.org/10.1007/s11192-006-0154-5
Waltman, L., & van Eck, N. J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Journal of Informetrics, 9(4), 872–894. https://doi.org/10.1016/j.joi.2015.08.001
Waltman, L., van Eck, N. J., Visser, M., & Wouters, P. (2016). The elephant in the room: The problem of quantifying productivity in evaluative scientometrics. Journal of Informetrics, 10(2), 671–674. https://doi.org/10.1016/j.joi.2015.12.008
