
When measuring research, we must remember that engagement and impact are not the same thing
April 11, 2016 6.02am AEST
Stephen Taylor
Professor of Accounting, University of Technology Sydney

What is the purpose of measuring engagement, impact or quality?

In the Innovation Statement late last year, the federal government indicated a strong belief
that more collaboration should occur between industry and university researchers.
At the same time, government, education and industry groupings have made numerous
recommendations for the impact of university research to be assessed alongside or in
addition to the existing assessment of the quality of research.

How should we measure research?

But what should we measure and, more importantly, why should we measure it?
In accounting, we stress that the measurement basis of something inevitably reflects the
purpose for which that measure is to be used.
So what is the purpose of measuring engagement, impact or, for that matter, quality?
The primary reason for measuring quality seems fairly self-evident: as a major stakeholder in
terms of funding (especially dedicated research-only funding), the government wants an
assessment of just how good, by academic standards, such research really is.


Looking ahead, measures of quality such as the Excellence in Research for Australia (ERA)
rankings are speculated to influence future funding via prestigious competitive schemes
(such as those run by the Australian Research Council), block funding for infrastructure,
and the availability of government support for doctoral students via Australian
Postgraduate Awards.
So the demand for a measure of research quality and the potential uses of such a measure
are pretty clear.
But what valid reasons are there for investing significant resources in the measurement of
research impact or engagement?
If high-quality research addresses important practical problems (large or small), surely we
would expect impact to follow.
In this sense, the extent of impact is really a joint product of the quality (or robustness) of
research and the choice of topic (ie, practical versus more esoteric).

Research impact needs time

But over what period should impact be measured?
Recent exercises such as that conducted by the Australian Technology Network and Group of
Eight have a relatively short-term focus, as would any impact assessment tied to the
corresponding period covered by the existing ERA time frame (say the last six years).
I and many others maintain that impact can only be assessed over much longer periods, and
that in many cases short-term impact is potentially misleading.
How often have supposedly impactful results subsequently been rejected or overturned?
Such examples inevitably turn out to reflect low-quality (and in some cases outright fraudulent) research.

Ranking impact
Finally, how can impact be ranked? Is there a viable measure that can distinguish between
high and low impact? Existing case-study approaches are unlikely to yield any form of
quantifiable measurement of research impact.
Equally puzzling is the call to measure research engagement. What is the purpose of such an
exercise? Surely in a financially constrained research environment, universities readily
recognise the importance of such engagement and pursue it constantly.
We don't need a national assessment of engagement to encourage universities to engage.
Motive aside, one approach canvassed is the quantum of non-government investment in
research (ie, non-government research income).
This is arguably one rather limited way to measure engagement, and is focused on input
rather than output. If the purpose of any measurement is to capture outcomes, does it make
sense to focus exclusively on inputs? The logic of this escapes me.

Engagement and impact are not the same thing

Even more worryingly, some use the terms engagement and impact interchangeably.


They would have us believe that a simple (but useful) measure of impact is the extent to which
university researchers receive industry funding. Surely this is, at best, a measure of
engagement, not impact.
Although the two are likely correlated, the strength of that correlation will vary greatly across discipline areas.
Further, in business disciplines, much of the knowledge transfer that occurs via education
(including areas such as executive programs) reflects the impact of the constant process of
researching better business practices across areas such as accounting, finance, economics,
marketing and so on.
Discretionary expenditure on such programs by business is surely an indication of the extent
to which business schools and industry are engaged, yet this would be ignored if we focused
on research income alone.
We must not lose sight of the fact that quality (ie, rigour and innovativeness) is a necessary
but not sufficient condition for broader research impact.
Engagement is not impact, and simple measures such as non-government research income
tell us very little about genuine external engagement between universities and industry.
As accountants know, performance measurement reflects its purpose. What we need before
any further national assessment of attributes such as impact or engagement is clear
understanding of the purpose of such an exercise.
Only when the purpose is clearly specified can we have a sensible debate about
measurement principles.
