
PR Metrics

Research for Planning & Evaluation

of PR & Corporate Communication

Jim R. Macnamara BA, MA, PhD*, FPRIA, AFAMI, CPM

1st floor, 87-97 Regent St

Chippendale, NSW, 2008
Fax: 61 2 9698 1032
Tel: 61 2 9698 1033
Incorporating: CARMA International (Asia Pacific) Pty Limited
MASS Software MASS Communication Consulting
PR METRICS Research for Planning & Evaluation of PR & Corporate Communication


Today, in both the public and private sectors, accountability and, therefore, measurability are key
principles of management. Increasingly, measurement and evaluation need to be more than
anecdotal and informal. Objective, rigorous methods are required that deliver credible proof of
results and Return on Investment (ROI) to management, shareholders and other key stakeholders.

The environment in which public relations and corporate communication operate today is
characterised by management practices such as:

Key Performance Indicators (KPIs) or Key Results Areas (KRAs);

Balanced Scorecard; and
Other systems of tracking key metrics to evaluate activities.

Furthermore, communication campaigns today are increasingly planned and based on research.
What do target audiences know already? What awareness exists? What are their perceptions?
What media do they rely on or prefer for information?

Research before a communication campaign or activity is termed formative research, while

research to measure effectiveness is termed evaluative research. Evaluative research was
traditionally conducted after a communication campaign or activity. However, Best
Practice thinking indicates that measurement and evaluation should begin early and
occur throughout communication programs as a continuous process. Undertaken this way,
formative and evaluative research inter-relate and merge and, therefore, are discussed together.

PR and corporate communication have met the growing requirements for formative and evaluative
research with a patchy track record, and this is widely viewed as a major area for future focus.

Public Relations Research: An Historical Perspective

A review of academic and industry studies worldwide shows growing recognition of the need for
research and evaluation, but a slow and low uptake by practitioners.

In 1983, James Grunig concluded: 'Although considerable lip service is paid to the importance of
program evaluation in public relations, the rhetorical line is much more enthusiastic than actual
utilisation.' Grunig added: 'I have begun to feel more and more like a fundamentalist minister
railing against sin; the difference being that I have railed for evaluation in public relations practice.
Just as everyone is against sin, so most public relations people I talk to are for evaluation. People
keep on sinning ... and PR people continue not to do evaluation research.' 1

1 James Grunig, 'Basic research provides knowledge that makes evaluation possible' in "Public Relations
Quarterly" Number 28, 1983, pp. 28-32.

A study by Dr Lloyd Kirban in 1983 among Public Relations Society of America (PRSA) members
in the Chicago chapter found that more than half the practitioners expressed a fear of being
measured. 2

In Managing Public Relations (1984), James Grunig and Todd Hunt, commented:

'The majority of practitioners ... still prefer to fly by the seat of their pants and use intuition rather
than intellectual procedures to solve public relations problems.' 3

A Syracuse University study conducted by public relations educator, Judy Van Slyke, compared
public relations to Jerome Ravetz's model of an immature and ineffective science and concluded
that public relations fits the model. 4

Professor James Bissland found in a 1986 study of public relations that while the amount of
evaluation had increased, the quality of research has been slow to improve. 5

In his book on PR research, Public Relations - What Research Tells Us, John Pavlik commented
in 1987 that measuring the effectiveness of PR 'has proved almost as elusive as finding the Holy
Grail'. 6

A landmark 1988 study developed by Dr Walter Lindenmann of Ketchum Public Relations

(Ketchum Nationwide Survey on Public Relations Research, Measurement and Evaluation)
surveyed 945 practitioners in the US and concluded that most public relations research was
'casual and informal, rather than scientific and precise' and that 'most public relations research
today is done by individuals trained in public relations rather than by individuals trained as
researchers'. However, the Ketchum study also found that 54 per cent of the 253 respondents to
the survey strongly agreed that PR research for evaluation and measurement would grow during
the 1990s, and nine out of 10 practitioners surveyed felt that PR research needed to become
'more sophisticated than has been the case up to now'. 7

A study by Smythe, Dorward and Lambert in the UK in 1991 found 83 per cent of practitioners
agreed with the statement: There is a growing emphasis on planning and measuring the
effectiveness of communications activity. 8

In a 1992 survey by the Counselors Academy of the Public Relations Society of America, 70 per
cent of its 1,000 plus members identified demand for measured accountability as one of the
leading industry challenges. 9

In 1993, Gael Walker from the University of Technology Sydney replicated the Lindenmann
survey in Australia and found 90 per cent of practitioners expressed a belief that research is now
widely accepted as a necessary and integral part of the planning, program development, and
evaluation process. 10

2 John V. Pavlik, Public Relations - What Research Tells Us, Sage Publications, 1987, p. 65.
3 James E. Grunig, and Todd Hunt, Managing Public Relations, Holt Rinehart & Winston, Inc., 1984, p.
4 ibid, p. 77.
5 John V. Pavlik, Public Relations - What Research Tells Us, Sage Publications, 1987, p. 68.
6 ibid, p. 65.
7 Walter K. Lindenmann, 'Research, Evaluation and Measurement: A National Perspective' in "Public
Relations Review", Volume 16, Number 2, 1990, pp. 3-24.
8 Public Relations Evaluation: Professional Accountability, International Public Relations Association
(IPRA) Gold Paper No. 11, 1994, p. 5.
9 ibid, p. 5.
10 Gael Walker, Public relations practitioners use of research, measurement and evaluation in
Australian Journal of Communication, Vol. 24 (2), 1997, Queensland University of Technology, p. 101.

The International Public Relations Association (IPRA) used a section of Lindenmann's survey in
an international poll of public relations practitioners in 1994 and confirmed wide recognition of the
importance of research for evaluation and measurement.

In the same year, a Delphi study undertaken by Gae Synott from Edith Cowan University in
Western Australia found that, of an extensive list of issues identified as important to public
relations, evaluation ranked as number one. 11

White and Blamphin also found evaluation ranked number one among a list of public relations
research priorities in a UK study of practitioners and academics. 12

Notwithstanding this worldwide philosophical consensus, the application of evaluation research

has been low in PR and corporate communication. A survey of 311 members of the Public
Relations Institute of Australia in Sydney and Melbourne and 50 public relations consultancies
undertaken as part of an MA research thesis in 1992, found that only 13 per cent of in-house
practitioners and only nine per cent of consultants regularly used objective evaluation research. 13

Gael Walker examined the planning and evaluation methods described in submissions to the
Public Relations Institute of Australia Golden Target Awards from 1988 to 1992 and found 51 per
cent of 124 PR programs and projects entered in the 1990 awards had no comment in the
mandatory research section of the entry submission. The majority of campaigns referred to
research and evaluation 'in vague and sketchy terms', Walker reported. 14

Walker similarly found that 177 entries in 1991 and 1992 listed inquiry rates, attendance at
functions and media coverage (clippings) as methods of evaluation but, she added, the latter
'rarely included any analysis of the significance of the coverage, simply its extent'. 15

David Dozier refers to simplistic counting of news placements and other communication activities
as pseudo-evaluation. 16

Tom Watson, as part of post-graduate study in the UK in 1992, found that 75 per cent of PR
practitioners spent less than 5 per cent of their budget on evaluation. He also found that while 76
per cent undertake some form of review, the two main methods used were monitoring (not
evaluating) press clippings and intuition and professional judgement. 17

As well as examining attitudes towards evaluation, the 1994 IPRA study explored implementation
and found a major gap between what PR practitioners thought and what they did. IPRA found 14
per cent of PR practitioners in Australia; 16 per cent in the US; and 18.6 per cent of its members
internationally regularly undertook evaluation research. 18

11 Jim Macnamara, Measuring Public Relations & Public Affairs, speech to IIR conference, Sydney,
12 Jon White and John Blamphin, Priorities for Research into Public Relations Practice in the United
Kingdom, London, City University Business School/Rapier Marketing, 1994.
13 Jim Macnamara, Public Relations & The Media A New Influence in Agenda-Setting and Content,
unpublished MA thesis, Deakin University, 1993.
14 Gael Walker, Communicating Public Relations Research in Journal of Public Relations Research,
1994, 6 (3), p. 145.
15 ibid, p. 147.
16 Jon White, How to Understand and Manage Public Relations, Business Books, London, 1991, p. 18.
17 Public Relations Evaluation: Professional Accountability, International Public Relations Association
(IPRA) Gold Paper No. 11, 1994, p. 5.
18 ibid, p. 4.


Research finding                                     USA      Australia   South Africa   IPRA members
Evaluation recognised as necessary                   75.9%    90%         89.1%          89.8%
Frequently undertake research aimed at evaluating    16%      14%         25.4%          18.6%

Figure 1. International Public Relations Association (IPRA) survey conclusions, 1994.

There is little evidence that the usage of research has increased significantly in public relations
since 1994 and some surveys suggest that, even if more research is being used, there is still a
long way to go.

A survey of Marketing Directors in the UK in 2000 found only 28 per cent were satisfied with
the level of evaluation of their public relations, compared with 68 per cent satisfied with
evaluation of sales promotion; 67 per cent with advertising evaluation; and 65 per cent with
measurement of direct marketing results. 19


[Chart: per cent of UK Marketing Directors satisfied with evaluation, Dec 2000:
Advertising 67%; Sales Promotion 68%; Direct Marketing 65%; Public Relations 28%]

Figure 2. Test Research survey of UK Marketing Directors, 2000.

Recent research in the US also confirms that clients and employers of PR practitioners want more
research and evaluation. The annual Harris/Impulse Public Relations Client Survey in the US,
which polls over 4,000 Fortune 500 marketing communication managers, found in 2000 that the
ability to measure results ranked 15th among 26 leading criteria for assessing their PR consultancy.
More importantly, 'measures results' was the equal fastest-growing criterion, with a nine per cent
increase since 1998. 20

A 2001 Public Relations Society of America Internet survey of 4,200 members found press
clippings (used by over 80 per cent) and gut feel/intuition (used by over 50 per cent) were the
two most used research tools or methods of evaluation in the previous 24 months. Media
content analysis was used by around one-third (with many citing Advertising Value Equivalents as
a key metric). Surveys and focus groups were used by less than 25 per cent of PRSA members. 21

This extensive body of research sends a clear message to PR and corporate communication
practitioners. Planning and evaluation research is poorly used and practices need major reform to
achieve professionalism and the levels of accountability required by modern management.

19 Survey of Marketing Directors, Test Research, UK, 2000.

20 Thomas L. Harris/Impulse Research 2000 Public Relations Client Survey, Impulse Research
Corporation, Los Angeles, 25 October, 2000.
21 Media Relations Reality Check, Public Relations Society of America Internet survey of 4,200
members, 2001.

Why isn't PR Researched and Evaluated?

Lack of budget and lack of time are the two most commonly given reasons for research and
evaluation not being carried out in PR and corporate communication. However, a number of
studies point to other key factors affecting the use of research and evaluation.

1. Communication as a Two-Way Outcomes-Focussed Process

At the core of the PR industry's approach to research and evaluation is the history and
nature of the practice itself. Grunig's Four Models of Public Relations, shown in Figure 3,
provide a concise and useful summary of the evolution of PR and an insight into why
research has not been widely applied. 22



Press Agentry
Purpose: propaganda. Nature of communication: one-way, truth not essential.
Research: little; press clippings only. Historical figure: P.T. Barnum.
Where practised: sports, theatre, product promotion.

Public Information
Purpose: dissemination of information. Nature of communication: one-way, truth important.
Research: little; readability tests, sometimes readership surveys.
Historical figure: Ivy Lee. Where practised: government, non-profit organisations, business.

Two-Way Asymmetric
Purpose: scientific persuasion. Nature of communication: two-way, imbalanced.
Research: feedback; possibly formative research; evaluation of attitudes.
Historical figure: Edward Bernays. Where practised: competitive business, structured companies.

Two-Way Symmetric
Purpose: mutual understanding. Nature of communication: two-way, balanced.
Research: formative research; evaluation of understanding.
Historical figures: Bernays, educators. Where practised: regulated business, modern flat-structure companies.

Figure 3. James Grunig's Four Models of Public Relations, 1984.

PR had its birth in Press Agentry, which focussed almost exclusively on publicity in an
illustrious era that spawned the phrase 'any publicity is good publicity'. This was followed by
evolution of the Public Information model of PR, which became the dominant paradigm from
the 1950s onwards. The Public Information model looked beyond publicity to publications,
events and other communication activities, but still focussed on disseminating information.
Information disseminated can be measured simply by counting media articles, column
inches/centimetres of publicity, the number of publications distributed, and so on, an approach
commonly referred to as 'measurement by kilogram'.

22 James E. Grunig, and Todd Hunt, Managing Public Relations, Holt, Rinehart & Winston, Inc., 1984, p.


Dissemination of information, the focus of the Press Agentry and Public Information models
of PR, is one-way information flow and focusses on producing outputs not on achieving
outcomes. Under these approaches, responsibility for what happens with information
disseminated whether it is read, understood, believed, remembered or acted upon is not
seen as part of the ambit of the PR practitioner or corporate communicator. Hence, under
these approaches, little need has been seen for evaluative research.

However, as shown in Grunig's Four Models of Public Relations, Best Practice PR is
evolving to Two-Way Asymmetric and, ultimately, Two-Way Symmetric relationships with
target audiences and stakeholders. In a two-way model (true communication), emphasis
shifts from simply disseminating information (outputs) to persuading and creating
understanding which involve changing attitudes, awareness and/or behaviour (outcomes).
While such changes can occasionally be observed, achievement of such objectives, and
quantification of the degree of change, usually requires research.

Most private and public sector clients today expect and require persuasion rather than
simple Public Information from their PR/corporate communication function whether it is to
consider or buy a product or service, give blood or donations to charity, or motivate people
to drive safely. Noble and Watson declare: The dominant paradigm of practice is the
equation of public relations with persuasion. 23

Drawing from communication psychology, McGuire's Output Analysis of the
Communication/Persuasion process lists six stages of persuasion as (1) Presentation; (2)
Attention; (3) Comprehension; (4) Acceptance; (5) Retention; and (6) Action. 24

Public Information focuses only on (1) presentation. To achieve persuasion, PR and
corporate communication need to create stages 2 to 6 within target audiences. In most
cases, the only way that audience attention, comprehension, acceptance, retention and
resulting action can be monitored and demonstrated is through research.
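The stage-by-stage logic of McGuire's model can be sketched as a simple funnel. This is an illustrative sketch only: the pass-through rates and audience size below are invented for demonstration and come neither from the text nor from McGuire.

```python
# Illustrative sketch only: McGuire's six persuasion stages modelled as a
# funnel. All rates and figures are hypothetical.

STAGES = ["presentation", "attention", "comprehension",
          "acceptance", "retention", "action"]

def funnel(audience, rates):
    """Estimate how many people remain at each stage.

    `rates` maps each stage after 'presentation' to the fraction of the
    previous stage's audience carried through to it.
    """
    remaining = [float(audience)]
    for stage in STAGES[1:]:
        remaining.append(remaining[-1] * rates[stage])
    return dict(zip(STAGES, remaining))

# Hypothetical campaign: 100,000 people presented with a message.
rates = {"attention": 0.6, "comprehension": 0.7, "acceptance": 0.4,
         "retention": 0.5, "action": 0.3}
result = funnel(100_000, rates)
# The audience shrinks at every stage, so outcome (action) objectives
# must be far more modest than reach (presentation) figures.
```

The design point is that action is a product of all upstream rates, which is why outcome results are always a small fraction of reach and why each intermediate stage needs its own research measure.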

Nevertheless, PR and corporate communication remain largely one-way information
dissemination functions focussed on outputs, which presents a barrier to PR becoming a
strategic management function and blinkers practitioners to the necessity and value of
research.
New interactive communication technologies such as Web sites, chat rooms and online
forums offer the potential to improve two-way interaction. However, an alarming finding of a
2000 survey of 540 Australian PR practitioners was that, while 78.4 per cent believe new
communication technologies make it easier to enter into dialogue with stakeholders and gain
feedback, 76.4 per cent indicated that the most important benefit of new communication
technologies was that they provided new ways to disseminate information. 25

PR practitioners need to adopt modern strategic approaches to communication which

recognise it as a two-way process and focus on outcomes, rather than simply producing
outputs. With this will come a new appreciation of research both formative research for
identifying audience attitudes, awareness and needs, and evaluative research for measuring
changes effected through communication.

23 Paul Noble and Tom Watson, Applying a Unified Public Relations Evaluation model in a European
Context, paper presented to Transnational Communication in Europe: Practice and Research,
International Congress, Berlin, 1999, p. 2.
24 W.J. McGuire, Attitudes and Attitude Change in Lindzey, Gardner & Aronson, The Handbook of
Social Psychology Vol II, (3rd ed), Random House, New York, 1984.
25 Elizabeth Dougall and Andrew Fox, New Communication Technologies and Public Relations: the
findings of a national survey undertaken in December 2000, University of Southern Queensland, June,
2001, p. 18.

2. SMART Objectives
A second major factor affecting evaluation of PR and corporate communication is lack of
SMART objectives, that is, objectives that are Specific, Measurable, Achievable, Relevant
and Timely.

Many PR and corporate communication programs have broad, vague and imprecise
objectives which are unmeasurable even if a six-figure research budget is available. Plans
too frequently have stated objectives such as:

To increase awareness of a policy or program;

To successfully launch a product or service;
To improve employee morale;
To improve the image of a company or organisation.

Such objectives are open to wide interpretation and lack any baseline for measurement.
What level of awareness currently exists? Within what target audience is greater awareness
required? What comprises a successful launch and, therefore, what should be measured?
What is known about employee morale currently? What does management want staff to feel
good about? What is the image of the organisation currently and what image is desired
among whom and by when?

Many leading PR and communication academics point to lack of clear objectives as one of
the major stumbling blocks to evaluation of PR and corporate communication. Grunig refers
to the typical set of ill-defined, unreasonable, and unmeasurable communication effects
that public relations people generally state as their objectives. 26

Pavlik comments: PR campaigns, unlike their advertising counterparts, have been plagued
by vague, ambiguous objectives. 27

Wilcox et al note: Before any public relations program can be properly evaluated, it is
important to have a clearly established set of measurable objectives. 28

Relevance is also an important factor in setting objectives that are measurable. Most
companies and organisations have over-arching corporate and marketing objectives and PR
and corporate communication may inherit these as part of its brief along with advertising,
direct marketing and other disciplines. While PR and corporate communication objectives
must complement and contribute to overall objectives, they should be sufficiently different or
discrete so outcomes are traceable and attributable. If PR and corporate communication
share broad objectives with advertising or direct marketing, for example, their outcomes will
not be able to be identified as they will be diffused within overall outcomes. Specific PR and
corporate communication objectives and/or sub-objectives must be written to ensure
relevance and the specificity needed for evaluation.

The principle of micro-measuring and macro-measuring is a useful approach to resolve

this broad versus narrow objectives dilemma. Macro-measuring refers to the overall
determination of outcomes for the company or organisation. Micro-measuring refers to the
determination of the results of specific activities such as events, product launches, media
publicity, analyst briefings, etc. While measurement at the macro level is ultimately
important, micro-measuring to establish the return from specific communication activities is
important (a) in itself to determine their success or otherwise and whether they should be

26 James Grunig, and Todd Hunt, Managing Public Relations, Holt Rinehart & Winston, Inc., 1984, p.
27 John V. Pavlik, Public Relations - What Research Tells Us, Sage Publications, 1987, p. 20.
28 Dennis Wilcox, Philip Ault & Warren Agee, Public Relations Strategies and Tactics (5th ed), Longman,
New York, 1998, p. 193.

continued, and (b) to progressively track cumulative contributions to overall outcomes, as

communication is usually a long-term multi-faceted task.
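The micro/macro distinction can be sketched in a few lines: each activity is scored against its own target (micro) while a roll-up figure is kept for the overall program (macro). This is a minimal illustrative sketch; the activity names, targets and results below are hypothetical, not from the text.

```python
# Illustrative sketch only: micro-measuring individual communication
# activities while rolling results up to one macro-level figure.
# Activity names, targets and results are hypothetical.

activities = [
    # (activity, target, achieved)
    ("product launch media items", 50, 64),
    ("analyst briefings held", 12, 9),
    ("event attendance", 800, 950),
]

def micro_report(items):
    """Micro level: per-activity achievement ratio (1.0 = target met exactly)."""
    return {name: achieved / target for name, target, achieved in items}

def macro_score(items):
    """Macro level: the share of activities that met or beat their target."""
    met = sum(1 for _, target, achieved in items if achieved >= target)
    return met / len(items)
```

Tracking both views lets underperforming activities (here, analyst briefings at 75 per cent of target) be identified and continued or dropped, while the macro score tracks cumulative contribution over time.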

Utilising micro-measuring and macro-measuring approaches, and recognising that

measurable objectives usually require numbers, percentages and dates, PR and corporate
communication objectives could include:

Gain at least a 10 per cent increase in favourable media publicity compared with last year;
Improve the perception of the organisation among key journalists;
Generate a minimum of 40,000 visits per month to the organisation's Web site;
Increase attendance at specified events by a minimum of 20 per cent;
Gain at least a 3.75 average satisfaction level (on a five-point scale) among attendees at events;
Reduce complaints within a stakeholder group by 25 per cent.

These objectives provide the basis for both output and outcome measurement and are
indicative of Key Performance Indicators (KPIs) or Key Results Areas (KRAs) that can be
set for PR and corporate communication.
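Objectives written with numbers, percentages and dates can be checked mechanically at review time. The following is an illustrative sketch only: the targets mirror the examples above, while the measured figures are invented for demonstration.

```python
# Illustrative sketch only: SMART objectives recast as KPI targets that one
# reporting period's measured results can be checked against. Targets mirror
# the examples in the text; the measured figures are hypothetical.

def kpi_met(measured, target, direction):
    """direction 'min': measured must be at least target;
    direction 'max': measured must not exceed target."""
    return measured >= target if direction == "min" else measured <= target

kpis = {
    "favourable_coverage_growth_pct": (10, "min"),
    "monthly_web_site_visits": (40_000, "min"),
    "event_attendance_growth_pct": (20, "min"),
    "attendee_satisfaction_avg": (3.75, "min"),
    "stakeholder_complaint_change_pct": (-25, "max"),  # i.e. a 25% reduction
}

measured = {  # hypothetical results for one reporting period
    "favourable_coverage_growth_pct": 12,
    "monthly_web_site_visits": 37_500,
    "event_attendance_growth_pct": 24,
    "attendee_satisfaction_avg": 3.9,
    "stakeholder_complaint_change_pct": -30,
}

report = {name: kpi_met(measured[name], target, direction)
          for name, (target, direction) in kpis.items()}
# Each objective resolves unambiguously to met / not met, which is the
# practical payoff of writing objectives with numbers, percentages and dates.
```

A vague objective such as 'improve the image of the organisation' cannot be expressed this way, which is precisely the problem the SMART discipline addresses.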

A further important point about objectives is that they must be agreed by management.
Particularly where PR-specific sub-objectives or micro-measuring criteria are set as above,
PR and corporate communicators must secure management buy-in and agreement that
achievement of the objectives set will contribute to overall corporate and/or marketing
objectives. Too often PR practitioners devise a set of objectives which are not shared by
management, resulting in inevitable disappointment on all sides.

To ensure objectives are achievable, communication practitioners need to have at least a

rudimentary understanding of communication theory. Assumptions about what
communication can achieve lead to misguided and overly optimistic claims in some plans
which make evaluation risky and problematic.

Pavlik makes the sobering comment that '... much of what PR efforts traditionally have been
designed to achieve may be unrealistic'. 29

A comprehensive review of communication theory is not within the scope of this analysis,
but some of the key learnings are noted as they directly impact the way PR and corporate
communication programs are structured and, therefore, how they can be evaluated.

Communication theory has evolved from the early, simplistic Information Processing
Model which identified a source, message, channel and receiver. As Flay and a number of
others point out, the Information Processing Model assumes that changes in knowledge will
automatically lead to changes in attitudes, which will automatically lead to changes in
behaviour. 30

This linear thinking was also reflected in the self-explanatory Domino and Injection
concepts of communication and the Hierarchy of Effects model which saw awareness,
comprehension, conviction and action as a series of steps of communication where one
logically led to the next. Another variation of the Hierarchy of Effects model, used
extensively in advertising for many years, termed the steps awareness, interest, desire
and action. These theories assumed a simple progression from cognitive (thinking or
becoming aware) to affective (evaluating or forming an attitude) to conative (acting).

29 Op cit, p. 119.
30 B.R. Flay, 'On improving the chances of mass media health promotion programs causing meaningful
changes in behaviour' in M. Meyer (Ed.), Health Education by Television and Radio, 1981, Munich,
Germany, pp. 56-91.

However, an extensive body of research questions these basic assumptions and these
models. The influential Theory of Cognitive Dissonance developed by social psychologist,
Dr Leon Festinger, in the late 1950s proposed that attitudes could be changed if they were
juxtaposed with a dissonant attitude but, importantly, dissonance theory held that receivers
actively resisted messages that were dissonant with their existing attitudes.

The view of communication as a simple injection of information was also challenged by

broadcaster, Joseph Klapper, whose mass media research in 1960 led to his 'law of
minimal consequences' and recast notions on the power of the Press and
communication effects. 31

Festinger's Theory of Cognitive Dissonance and Klapper's seminal work contributed to a
significant change from a view of communication as a linear process which progressed
automatically from one stage to the next to a minimal effects view of communication. This
has been built on by more recent research such as Hedging and Wedging Theory
developed by Professors Keith Stamm and James Grunig, which has major implications for
public relations and corporate communication. 32

PR and corporate communication programs frequently propose to change negative attitudes

to positive attitudes. But, according to Hedging and Wedging Theory, when a person with a
firmly held (wedged) view is faced with a contrary view, he or she will, at best, hedge.
Hedging is defined by Stamm and Grunig as 'a cognitive strategy in which a person holds
two or more conflicting views at the same time'. This research suggests that it may be
improbable or impossible for attitudes to be changed diametrically from negative to positive
or vice versa. Attitudes can be moved from wedging to hedging, or hedging to wedging,
but not wedging to wedging. Yet, PR programs frequently propose to do this.

For example, a company with a poor environmental record is highly unlikely to create a
good environmental image in a relatively short period such as 12 months or even a few
years. Hedging and Wedging Theory suggests that, if the company can demonstrate
change for the better (action) as well as good intentions, it may at best persuade audiences
to reconsider and take a more balanced or open-minded position. Full support for the
company will take much longer.

A significant contribution to communication theory is James Grunig's Situational Theory of
communication. In contrast to the simplistic Domino Theory, Situational Theory holds that
the relationship between knowledge (awareness), attitudes and behaviour is contingent on a
number of situational factors. Grunig lists four key situational factors: (1) the level of
problem recognition; (2) the level of constraint recognition (does the person see the issue or
problem as within their control or ability to do something); (3) the presence of a referent
criterion (a prior experience or prior knowledge); and (4) level of involvement. 33

Outcomes of communication may be cognitive (simply getting people to think about

something), attitudinal (form an opinion), or behavioural. PR and corporate communication
practitioners should note that results are less likely the further one moves out along the axis
from cognition to behaviour. If overly optimistic objectives are set, particularly for
behavioural change, evaluation will be a difficult and frequently disappointing experience.

PR and corporate communication programs need to have SMART objectives (Specific,
Measurable, Achievable, Relevant and Timely) and be agreed with management.

31 John V. Pavlik, Public Relations - What Research Tells Us, Sage Publications, 1987, p. 93.
32 James E. Grunig, and Todd Hunt, Managing Public Relations, Holt, Rinehart & Winston, Inc., 1984,
p.131 and p. 329.
33 Op cit, pp. 77-80.

3. Understanding Research
A third factor affecting formative and evaluation research in PR and corporate
communication has been the predominantly arts and humanities background of most
practitioners, with little formal training in research. PR practitioners have largely come from
journalism, media studies, communication, PR and liberal arts courses, with only a
smattering of technical entrants such as scientists, engineers, accountants, etc. In the main,
it can be said that PR is rhetoric rather than numeric in orientation and training.

Until quite recently, research was not included in public relations and communication
courses, resulting in practitioners entering the field without knowledge of how to plan,
undertake or manage research and little training in related areas such as statistics,
psychology and sociology.

In his UK studies, Tom Watson observed that lack of knowledge and, possibly, disinclination
to learn about evaluation techniques, were key factors in PR programs not being formally
evaluated. 34

At a pure or basic research level, public relations needs to build its body of theory and
knowledge. There are, at the core of public relations, fundamental questions over the nature
of PR and what it does in society. The Edward Bernays paradigm, outlined in his influential
1920s book, Crystallising Public Opinion, and expanded in his classic 1955 PR text, The
Engineering of Consent, on which much public relations thinking is based, is under
challenge from new approaches such as Co-orientation Theory and the Two-Way
Symmetric Model of public relations developed by Grunig.

The Bernays paradigm defines public relations as a form of persuasive communication
which bends public thinking to that of an organisation, a concept that some, such as
Marvin Olasky, say has destructive practical applications and continued use of which will
speed up PR's descent into disrepute. 35

Co-orientation Theory, as the name suggests, uses Grunig's two-way symmetrical or
asymmetrical approaches to communication in which the organisation and its publics meet
in the middle, or at least somewhere in between their poles of opinion, a concept
supported by modern mediation and relationship-building thinking, but not yet widely
understood and accepted in boardrooms and by many communication practitioners.

At an applied level, public relations and corporate communication practitioners need to
greatly expand their knowledge of both formative and evaluative research. Most practitioners
have only a basic understanding of Otto Lerbinger's four basic types of PR research:
environmental monitoring (or scanning), public relations audits, communications audits, and
social audits. Many use the terms interchangeably and incorrectly, have little knowledge
of survey design, questionnaire construction, sampling or basic statistics and are, therefore,
hamstrung in their ability to plan and manage research functions.

PR and corporate communication practitioners need to gain increased knowledge of
research techniques and methodologies, and increased numeric skills, to present
qualitative and quantitative data to management in support of recommendations.

34 Tom Watson, Integrating Planning and Evaluation Evaluating the Public Relations Process and
Public Relations Programs, in Handbook of Public Relations, Sage Publications Inc, London, pp. 259-
35 Marvin N. Olasky, "The Aborted Debate Within Public Relations: An Approach Through Kuhn's
Paradigm". Paper presented to the Qualitative Studies Division of the Association for Education in
Journalism and Mass Communication, Gainsville, Florida, August, 1984.

4. The Multi-Disciplined Multi-Media Nature of PR

Another of the challenges or hurdles particularly affecting evaluative research is the multi-
disciplined and multi-media nature of PR and corporate communication. This description is
not related to multimedia technology, but to the multiple sub-sets or sub-disciplines that
comprise public relations and the multiple media or channels which programs use.

PR and corporate communication comprises a number of sub-disciplines such as
media relations, employee relations, community relations, government relations,
shareholder relations, and so on. Within these sub-disciplines, practitioners employ a wide
range of communication media including publicity, publications, video and multimedia
programs, events, Web sites, sponsorships, etc, to interface with target audiences in the
client organisation's macro environment. The extensive range of communication activities
that can comprise PR and corporate communication is illustrated in the Macro
Communication Model shown in Figure 4. 36

[Figure 4 shows the client organisation at the centre, surrounded by communication channels including advertising, telephone reception, customer service, on-line services, events, direct marketing, internal relations, sales promotion, bulletins, annual reports, publicity, video and AV, and brochures and booklets.]
Figure 4. Jim Macnamara's Macro Communication Model, 1992.

Some studies have explored a 'PR Index' as an overall rating system for PR and corporate
communication programs, or looked for other simple solutions. However, most academics
and researchers believe that the different sub-disciplines and media require different
research methodologies appropriate to their objectives and audiences. There is no single
planning, research or evaluation methodology that can meet all needs.

PR and corporate communication practitioners need to recognise and use a range of
research methodologies relevant to the various sub-disciplines of PR and corporate
communication and the various media used, such as publicity, publications, events,
etc. No single research methodology can be used for all purposes.

5. When to Measure and Evaluate?

A fifth key factor in implementing measurement and evaluation has been the traditional and
long-held view that evaluation is conducted at the end of an activity. Management theory
through most of the 20th century advocated the PIE Model: Plan, Implement, Evaluate.

36 Jim Macnamara, The Asia Pacific Public Relations Handbook, Archipelago Press, Sydney, 1992, p.43.

However, the PIE Model is redundant and ineffective as a research and evaluation guide.
For a start, it makes no provision for formative research that should occur before activity
starts as a basis for planning. Furthermore, evaluation at the end of an activity runs into
three fundamental problems which render it ineffective:

1. At a practical level, in most cases practitioners will be out of time and out of budget at
the end of programs and projects, so research will simply not get done;

2. Management will often not wait till the end of long-running programs to see evidence of
effectiveness and they may cancel or change activities without a research basis;

3. Most importantly, results of research conducted after an activity has been completed
come too late to be of any strategic benefit. What is the point of finding out after
publishing a newsletter that it is not what readers want? What use is finding out that
employees would have preferred an Intranet after the budget has been expended on an
expensive multimedia presentation?

Post-program evaluation, therefore, offers little benefit and significant risk of bad news,
which has led to the 'fear of evaluation' discussed by Lloyd Kirban and others.

Identification of these weaknesses in the traditional 'evaluate at the end' approach has
given rise to a new approach to evaluation which tips the previous concept on its head.
Modern thinking is that research should begin before undertaking communication activities.
Marston provided the RACE formula for public relations which identified four stages as
research, action, communication and evaluation. Cutlip and Center provided their own
formula based on this which they expressed as fact-finding, planning, communication and
evaluation. 37

Borrowing from systems theory, Richard Carter coined the term 'behavioural molecule' for a
model that describes actions that continue endlessly in a chain reaction. In the context of a
behavioural molecule, Grunig describes the elements of public relations as 'detect,
construct, define, select, confirm, behave, detect'. 38

Craig Aronoff and Otis Baskin take this thinking further, advancing the view that '...
evaluation is not the final stage of the public relations process. In actual practice, evaluation
is frequently the beginning of a new effort. The research function overlaps the planning,
action and evaluation functions. It is an interdependent process that, once set in motion, has
no beginning or end'. 39

Thus, when carried out early before or at the beginning of activities, evaluative research
merges with formative research.

This approach is in line with Patton's definition of evaluation. He says: 'The practice of
evaluation involves the systematic collection of information about the activities,
characteristics and outcomes of programs, personnel and products for use by specific
people to reduce uncertainties, improve effectiveness, and make decisions with regard to
what those programs, personnel, or products are doing and effecting'. In expanding on his
definition of evaluation in 1982, Patton said: 'The central focus is on evaluation studies and
consulting processes that aim to improve program effectiveness'. 40

37 James E. Grunig, and Todd Hunt, Managing Public Relations, Holt, Rinehart & Winston, Inc., 1984, pp.
38 James E. Grunig, and Todd Hunt, Managing Public Relations, Holt, Rinehart & Winston, Inc., 1984, pp.
39 Otis Baskin, and Craig Aronoff, Public Relations - The Profession and The Practice (3rd edition), Wm.
C. Brown Publishers, 1983, p. 179.
40 Michael Patton, Practical Evaluation, Sage Publications, 1982, p. 15.

Paul Noble has also argued that evaluation should be a proactive, forward-looking activity.
He says: 'Naturally the collection of historical data is an essential prerequisite, but
evaluation is not restricted to making conclusions on past activity. The emphasis on
improving programme effectiveness strongly indicates that the information collected on
previous activity is used as feedback to adapt the nature of future activities'. 41

This shift in focus is important because it can resolve the 'fear of being evaluated' which
has kept many practitioners from embracing evaluation more enthusiastically. When
perceived simply as looking back to measure past performance, evaluation can be
threatening to practitioners. It appears to be a system for checking up on them. But, when
used as a process to gather information in order to advise management on and plan future
activity, evaluation takes on a new, much more positive persona. Also, in this context, it is
more likely to attract funding from management.

This suggests that a re-positioning of evaluation is required within the PR and corporate
communication field: from post-implementation reviewing of past performance to a process
of continuous systematic gathering of information in order to plan future activity more
effectively.

Evaluative research should be carried out to identify existing awareness,
perceptions, information needs and media preferences before communication
activities are planned and implemented, thus blending and integrating with formative
research in a continuous cycle of gathering information as part of two-way
communication.

6. Cost
Cost, in terms of both money and time, can be a barrier to undertaking research but not
nearly to the extent claimed. As has been shown, there are other more significant barriers
which communication practitioners need to overcome to be able to evaluate their work and
show Return on Investment for the organisation from their activities.

If practitioners do not recognise the importance of two-way communication and focus on
creating outcomes such as increased understanding, changing attitudes or behaviour
and building relationships, evaluation will appear irrelevant and they will remain 'post
offices' for disseminating information, or what Grunig calls 'communication technicians';

Without clear specific objectives, activities will be unmeasurable irrespective of budget;

Without some knowledge of research techniques and methodologies relevant to the
various sub-disciplines and different media used in PR and corporate communication,
formative and evaluative research will not be able to be implemented by practitioners
and will either remain not done, or be imposed from outside;

If evaluation is left to the end, it will either not be done through lack of time, budget or
both or, if done, results will be too late to be incorporated into strategic planning.

Furthermore, there are many low-cost and even no-cost research methodologies which
will be outlined in this text that render the cost argument an excuse rather than an obstacle.

41 Paul Noble, 'A Proper Role for Media Evaluation' in "International Public Relations Academic Review",
No. 1, March, 1995, p. 2.


Models for PR Research and Evaluation

A number of models have been developed to explain how and when to apply research and
evaluation in PR and corporate communication. Five leading models have been identified and
reviewed by UK academics Paul Noble and Tom Watson:

1. The PII Model (1985) developed by Cutlip et al;

2. The Macro Model of PR Evaluation (1992), renamed the Pyramid Model of PR Research
(1999) developed by Jim Macnamara;
3. The PR Effectiveness Yardstick Model (1993) developed by Dr Walter Lindenmann;
4. The Continuing Model of Evaluation (1997) developed by Tom Watson;
5. The Unified Evaluation Model (1999) outlined by Paul Noble and Tom Watson. 42

PII Model
Cutlip, Center and Broom's PII Model, outlined in their widely used text, Effective Public
Relations, takes its name from three levels or steps of research which they term Preparation,
Implementation and Impact. 43

Specific research questions arise at each step in the PII Model, illustrated in Figure 5. Answering
these questions with research contributes to increased understanding and adds information for
assessing effectiveness. Noble and Watson explain: The bottom rung (step) of preparation
evaluation assesses the information and strategic planning; implementation evaluation considers
tactics and effort; while impact evaluation gives feedback on the outcome. 44

[Figure 5 depicts the PII Model as a ladder of research steps, read from the bottom up:

IMPACT: Social and cultural change; Number who repeat behaviour; Number who behave as desired; Number who change attitudes; Number who change opinions; Number who learn message content

IMPLEMENTATION: Number who attend to messages and activities; Number who receive messages and activities; Number of messages placed and activities implemented; Number of messages sent to media and activities designed

PREPARATION: Quality of messages and activity presentation; Appropriateness of message and activity content; Adequacy of background information base for designing program]
Figure 5. PII Model of Evaluation by Cutlip, Center & Broom, 1993.

42 Paul Noble and Tom Watson, Applying a Unified Public Relations Evaluation model in a European
Context, paper presented to Transnational Communication in Europe: Practice and Research,
International Congress, Berlin, 1999, pp. 8-24.
43 Scott M. Cutlip, Allen H. Center and Glen M. Broom, Effective Public Relations, 6th Edition, Prentice-
Hall, Inc., 1985, p. 296.
44 Op Cit, p. 9.

A noteworthy and pioneering concept of the PII Model was the separation of outputs from impact
or outcomes and identification that these different stages need to be researched with different
methodologies.

However, the PII Model does not prescribe methodologies, but assumes that programs and
campaigns will be measured by social science methodologies that will be properly funded by
clients/employers. 45 Perhaps this is easier said than done.

Pyramid Model of PR Research

Evaluation: The Achilles Heel of the Public Relations Profession, an MA thesis extract published
in IPRA Review in 1992 46 and the 1994 International Public Relations Association (IPRA) Gold
Paper (No. 11) built on the PII Model, advocating recognition of communication projects and
programs in terms of inputs, outputs and outcomes and recommending that each stage should be
researched and evaluated.
The Pyramid Model of PR Research, a revised version of the Macro Model of PR Evaluation, is
intended to be read from the bottom up, the base representing 'ground zero' of the strategic
planning process, culminating in achievement of a desired outcome (attitudinal or behavioural).
The pyramid metaphor is useful in conveying that, at the base when communication planning
begins, practitioners have a large amount of information to assemble and a wide range of options
in terms of media and activities. Selections and choices are made to direct certain messages at
certain target audiences through certain media and, ultimately, achieve a specific defined
objective (the peak of the program or project).

In this model, inputs are the strategic and physical components of communication programs or
projects such as the choice of medium (eg. event, publication, Web, etc), content (such as text
and images), and format. Outputs are the physical materials and activities produced (ie. media
publicity, events, publications, Intranets, etc) and the processes to produce them (writing, design,
etc). Outcomes are the impacts of communication, both attitudinal and behavioural.

The Pyramid Model of PR Research endeavours to be instructive and practical by providing a list
of suggested research methodologies for each stage. 47

Within the pyramid, key steps in the communication process are shown, with research
methodologies aligned with their appropriate purpose, suggesting to practitioners what to use and
when. Appropriate methodology selection is an important element of implementation. For
instance, a methodology for measuring the number of messages sent cannot be used to draw
conclusions about the number of messages received, considered, retained or understood by
target audiences. Not every step and every methodology are shown. However, this model
provides considerable detail and a practical overview of applied research for public relations.
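The stage-and-step alignment the model insists on can be sketched as a simple lookup. This is an illustrative sketch only, in Python; the stage and step names are shorthand drawn from the model's labels, and the data structure is our own, not part of the model itself.

```python
# Illustrative sketch only: pairing Pyramid Model stages and steps with the
# methodologies the model suggests for them. Step names are our shorthand.
PYRAMID_METHODS = {
    "inputs": {
        "medium appropriateness": ["case studies", "interviews", "pre-testing"],
        "audience knowledge and needs": ["secondary data", "advisory groups", "databases"],
    },
    "outputs": {
        "messages sent": ["distribution statistics"],
        "messages in the media": ["media monitoring"],
        "messages received": ["circulations", "event attendances", "web statistics"],
    },
    "outcomes": {
        "attitude change": ["focus groups", "targeted surveys", "reputation studies"],
        "behaviour change": ["large-scale surveys", "sales", "adoption rates", "observation"],
    },
}

def methods_for(stage, step):
    """Return methodologies aligned with a given stage and step.

    Asking for a step under the wrong stage raises KeyError, mirroring the
    caveat that a measure of messages sent cannot support conclusions about
    messages received or attitudes changed."""
    return PYRAMID_METHODS[stage][step]
```

Looking up `methods_for("outputs", "messages in the media")` returns the media monitoring entry, while `methods_for("outputs", "attitude change")` fails, because attitude change is an outcome, not an output.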

The model deliberately combines formative and evaluative research in the belief that the two
types of research must be integrated and work as a continuum of information gathering and
feedback in the communication process, not as separate discrete functions. The model suggests
that research should be undertaken before, during and after communication activities in order to
identify, understand and address audience needs, interests and attitudes; to track progress; and
to benchmark key metrics pre and post implementation. This fits with the 'scientific management
of public relations' approach to research recommended by Glen Broom and David Dozier. 48

45 Ibid., p. 9.
46 Jim Macnamara, Evaluation: The Achilles Heel of the Public Relations Profession in IPRA Review,
Vol. 15, No. 2, 1992, p. 19.
47 Jim Macnamara, "Public Relations and the Media: A New Influence in 'Agenda-Setting' and Content",
unpublished Masters Degree (MA) thesis, Deakin University, 1993.
48 G. M. Broom & D. M. Dozier, Using Research in Public Relations: Applications to Program
Management, Englewood Cliffs, NJ, Prentice-Hall, 1990, pp. 14-20.

Pyramid Model of PR Research

[Figure 6 presents the Pyramid Model as a table of what is measured (key stages and steps of communication, read from the bottom up) and the applicable methodologies for each step:

OUTCOMES (functional and organisational evaluation)
- Number who change behaviour: Quantitative surveys (large scale, structured); Sales; Voting results; Adoption rates; Observation
- Number who change attitudes: Focus groups; Targeted surveys (eg. customer, employee or shareholder satisfaction); Reputation studies

OUT-TAKES (proposed as a fourth stage of communication in some models)
- Number who understand messages: Focus groups; Interviews; Complaint decline; Experiments
- Number who retain messages: Interviews; Focus groups; Mini-surveys; Experiments
- Number who consider messages: Response mechanisms (1800 numbers, coupons); Inquiries

OUTPUTS (process and program evaluation)
- Number and type of messages reaching target audience: Media content analysis; Communication audits
- Number of messages in the media: Media monitoring (clippings, tapes, transcripts)
- Number who received messages: Circulations; Event attendances; Web visits and downloads
- Number of messages sent: Distribution statistics; Web pages posted

INPUTS (formative research)
- Quality of message presentation: Expert analysis; Peer review; Feedback; Awards
- Appropriateness of message content: Feedback; Readability tests (eg. Fog, Flesch); Pre-testing
- Appropriateness of the medium selected: Case studies; Feedback; Interviews; Pre-testing (eg. PDFs)
- How does the target audience prefer to receive information? Academic papers; Feedback; Interviews; Focus groups
- What does the target audience know, think, feel? What do they need/want? Observations; Secondary data; Advisory groups; Chat rooms and online forums; Databases (eg. customer complaints)

Copyright Jim R. Macnamara 1992 & 2001]

Figure 6. Pyramid Model of PR Research, Jim Macnamara, 1992. Revised 1999.


The research methodologies listed in the Pyramid Model are not intended to be exhaustive, but
representative of the informal and formal research methods available to PR and corporate
communication practitioners. The suggested list does, however, show the wide range of informal
and formal tools available for research.

Of particular note in this model is the large number of research and evaluation methodologies
available to practitioners which are no cost or low cost including:

Secondary data (existing research) which can be accessed within the organisation (eg.
market research) or from the Web, the media, research services such as LEXIS-NEXIS,
academic journals or professional organisations;
Existing databases (eg. customer complaints);
Advisory or Consultative Groups;
Online chat rooms and other feedback mechanisms;
Unstructured and semi-structured interviews;
Readability tests on copy (eg. Fog Index, Dale-Chall, Flesch Formula);
Pre-testing (eg. PDF files of publications);
Tabulation of distribution and circulation data; and
Response mechanisms (coupons, fax-backs, 1800 toll free numbers, Web surveys and
feedback forms, etc).

The range of low-cost and no-cost research methodologies demonstrates that some level of
formative and evaluative research can be undertaken within any budget.

The Pyramid Model of PR Research is practical in that it suggests the highest level of evaluation
possible, but recognises that this will not always be feasible. By identifying a menu of evaluation
methodologies at the communication practitioner's disposal from basic to advanced, or what
David Dozier calls a 'cluster of technologies' 49, some evaluation is possible in every program
and project. With this approach, there is no excuse for having no research.

Rigorous, objective evaluation methodologies are highly recommended for evaluation wherever
possible. It needs to be recognised that some of the basic methodologies, on their own, will not
provide reliable evaluation. However, a combination of a number of basic evaluation methods at
an input and output level will substantially increase the probability of outputs achieving the
desired outcomes.

For instance, if (a) the proposed story list for a newsletter has been pre-tested with potential
readers in advance to ensure it is what they want to read about; (b) the name has been selected
through a competition involving readers, and (c) readability tests have been conducted on draft
copy to ensure it is easy to understand for the level of readers receiving it, there is a high
likelihood that the publication will be well-accepted, read and understood by the target audience.

Progressing further up the Pyramid Model and evaluating outputs such as (d) measuring
circulation to see if it is increasing; (e) tracking responses through a response mechanism such
as a coupon or competition and (f) conducting an occasional reader survey, further increases the
chances that the newsletter will have the desired impact on its audience.
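Those output-level checks amount to simple arithmetic on figures the practitioner already holds. A minimal sketch, with circulation and coupon-return numbers invented purely for illustration:

```python
# Hypothetical newsletter figures, invented purely for illustration.
def circulation_growing(history):
    """True if circulation never falls from one issue to the next."""
    return all(later >= earlier for earlier, later in zip(history, history[1:]))

def response_rate(responses, distributed):
    """Coupon or competition responses as a percentage of copies distributed."""
    return 100.0 * responses / distributed if distributed else 0.0

issues = [4200, 4350, 4400, 4600]   # assumed circulation of the last four issues
rate = response_rate(138, 4600)     # assumed coupon returns from the latest issue
print(circulation_growing(issues), round(rate, 1))  # prints: True 3.0
```

Tracked issue by issue, these two numbers give an early, no-cost signal of whether the newsletter is being received and acted upon, well before a formal reader survey.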

The Pyramid Model of PR Research applies both closed system evaluation and open system
evaluation. As outlined by Otis Baskin and Craig Aronoff, closed system evaluation focuses on
the messages and events planned in a campaign and their effects on intended publics. Closed
system evaluation relies on pre-testing messages and media and then comparing these to
post-test results to see if activities achieved the planned effects. 50

49 David M. Dozier, Program evaluation and the roles of practitioners in Public Relations Review, No.
10, 1984, pp. 13-21.
50 Otis Baskin, and Craig Aronoff, Public Relations: The Profession and the Practice (Third Edition), Wm.
C. Brown Publishers, 1988 and 1992, p. 191.

Open system evaluation recognises that factors outside the control of the communication
program influence results and, as the name suggests, looks at wider considerations. This method
considers communication in overall organisational effectiveness. A combination of closed and
open system evaluation is desirable in most circumstances.
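The closed system comparison can be as simple as benchmarking each planned effect before the campaign and re-measuring it afterwards. In the sketch below, the effect names and survey percentages are invented for illustration only:

```python
# Invented pre-test benchmarks and post-test results (percentages of a
# surveyed audience), purely to illustrate the closed system comparison.
benchmark = {"message recall": 18.0, "favourable attitude": 41.0}
post_test = {"message recall": 29.5, "favourable attitude": 46.0}

# Shift in percentage points for each planned effect.
shifts = {effect: post_test[effect] - benchmark[effect] for effect in benchmark}

for effect, shift in shifts.items():
    print(f"{effect}: {shift:+.1f} percentage points")
```

An open system reading of the same figures would go a step further and ask what else, outside the campaign, might explain the shifts before crediting them to the communication program.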

PR Effectiveness Yardstick Model

Respected US practitioner Dr Walter Lindenmann has proposed an approach to research and
evaluation based on three levels of sophistication and depth, rather than the chronological
process of communication from planning through implementation to achievement of objectives.

Lindenmann sees Level One as evaluation of outputs, such as measuring media placements or
impressions (total audience reached). He terms Level Two 'Intermediate' and describes this level
as measuring comprehension, retention, awareness and reception. Level Three is described as
'Advanced' and focuses on measuring opinion change, attitude change or, at the highest level,
behavioural change.

Level One output evaluation is the low cost, basic level, 'but even this should be more detailed
than counting up media clippings or using gut reactions which are informal judgements lacking
any rigour in terms of methodology', Watson explains. 51

Intermediate measurement criteria in Lindenmann's PR Effectiveness Yardstick introduce a
possible fourth stage of communication, 'outgrowths', also referred to as 'out-takes' by Michael
Fairchild. 52 This stage refers to what audiences receive or take out of communication activities.
Several academics and researchers support identification of this additional stage in
communication after inputs and outputs because, before audiences change their opinion,
attitudes or behaviour, they first have to receive, retain and understand messages. They point out
that outgrowths or out-takes are cognitive and suggest a different term for behavioural impact.

However, Lindenmann omits inputs as a stage in communication, and splits outcomes across his
Intermediate and Advanced levels. Therefore, this model has the advantage of separating
cognitive and behavioural impact objectives, but it is not as clear that research should begin
before outputs are produced.

Like the Cutlip et al PII Model, Lindenmann's Effectiveness Yardstick Model does not specify
research methodologies to use. However, he broadly outlines a mix of qualitative and quantitative
data collection techniques such as media content analysis at Level One; focus groups, interviews
with opinion leaders and polling of target groups at Level Two and, at Level Three (Advanced), he
suggests before and after polling, observational methods, psychographic analysis and other
social science techniques such as surveys. 53

In presenting his model, Lindenmann supports the concept of a 'cluster of technologies' (Dozier,
1984) or 'menu of methodologies' (Macnamara, 1992) for PR research:

... it is important to recognise that there is no one simplistic method for measuring PR effectiveness.
Depending upon which level of effectiveness is required, an array of different tools and techniques is
needed to properly assess PR impact. 54

51 Paul Noble and Tom Watson, Applying a Unified Public Relations Evaluation model in a European
Context, paper presented to Transnational Communication in Europe: Practice and Research,
International Congress, Berlin, 1999, p. 13.
52 ibid, p. 19.
53 ibid, p. 13.
54 Walter K. Lindenmann, An Effectiveness Yardstick to Measure Public Relations Success, in PR
Quarterly 38, (1), 1993, pp. 7-9


[Figure 7 depicts the Yardstick as three ascending levels:

Level 3 (Advanced): Opinion change; Attitude change; Behaviour change
Level 2 (Intermediate): Reception; Awareness; Retention; Comprehension
Level 1 (Basic): Media placements; Impressions; Targeted audiences]
Figure 7. PR Effectiveness Yardstick Model, Walter K. Lindenmann, 1993.


Continuing Model of PR Evaluation

Watson draws on elements of other models but comments that the PII Model and the earlier
Macro Model of PR Evaluation are too complex and lack a dynamic element of feedback. 55

What Watson is referring to is the apparent end of research once impact or outcomes are
achieved, suggested by the lack of a loop back to the beginning of the preparation/input stage in
the PII Model and even the revised Pyramid Model of PR Research. This conclusion is more the
result of graphic representation than intent, as it is implicit in both Cutlip et al's step model and the
Pyramid Model of PR Research that research findings are constantly fed back into the planning
process, with research continuing in a rolling program, and these models should be read this way.
The PII step model and the Pyramid Model of PR Research arrange research and evaluation
activities in chronological order from preparation/inputs to impact/outcomes for practical illustration
but, in reality, input, output and outcome research is dynamic and continuous.

The lack of an obvious dynamic quality in other models led Watson to develop the Continuing
Model of PR Evaluation (illustrated in Figure 8) of which the central element is a series of loops
which reflect Van Leuven's effects-based planning approach and emphasise that research and
evaluation are continuous. 56

Figure 8. Continuing Model of Evaluation, Tom Watson, 1999.

55 Tom Watson, Measuring the Success Rate: Evaluating the PR Process and PR Programmes, in P. J.
Kitchen (Ed), Public Relations Principles and Practice , International Thomson Business Press, 1997,
pp. 293-294.
56 Paul Noble and Tom Watson, Applying a Unified Public Relations Evaluation model in a European
Context, paper presented to Transnational Communication in Europe: Practice and Research,
International Congress, Berlin, 1999, p. 16.


The elements of the Continuing Model of PR Evaluation are: (1) an initial stage of research,
setting of objectives and choice of program effects; followed by (2) strategy selection and tactical
choices; and, as the program rolls out, (3) multiple levels of formal and informal analysis from
which judgements can be made on progress in terms of either (a) success or (b) what Watson
calls 'staying alive'. These judgements are fed back through iterative loops to each of the
program elements. This feedback assists practitioners to validate initial research and adds new
data to allow streamlining of objectives and strategy and adjustment or variation of tactics as required.

However, while this model graphically shows the flow of research results back into planning, it
provides no details on what comprises the 'multiple levels of formal and informal analysis'.

Unified Evaluation Model

Drawing on all previously developed and published models, Paul Noble and Tom Watson
developed the Unified Evaluation Model in an attempt to combine the best of each and produce a
definitive approach.

The Unified Evaluation Model (illustrated in Figure 9) identifies four stages in communication by
adding Lindenmann's and Fairchild's concept of out-takes or outgrowths to the three-stage
concept advanced by other models. Noble and Watson prefer to call the four stages or levels
Input, Output, Impact and Effect.

This supports inputs and outputs thinking in other models, but separates outcomes into two types:
cognitive, which they call 'impact', and behavioural, which they term 'effect'.

Recognition of the need for different research methodologies to measure cognitive and
behavioural outcomes is important, but it is not certain whether the substitution of terms clarifies
or confuses. In many cases, cognitive change such as increased awareness or a change of
attitude (which Noble and Watson call impact) can be seen as an effect, while behavioural
change such as reduced road deaths (which they term effect) can be legitimately described as an
impact of communication. A case can be made for the terminology used in all models and
distinctions may be splitting hairs rather than providing clarification for practitioners.

As with many of the other models, research methodologies are not spelled out in the Unified
Evaluation Model. Watson points out that 'the research methodology required should be
governed by the particular research problem in the particular circumstances that apply.
Consequently, any listing would simply be a collection of likely approaches rather than something
of universal applicability'. 57

All researchers would undoubtedly agree with Watson's statement that there are no universally
applicable research methodologies. However, because no attempt is made to list methodologies
applicable to various stages of communication, practitioners are left with theoretical frameworks
and a lack of practical information on what they can do to implement the theory.

IPR PRE Process

The UK Institute of Public Relations issued a second edition of its Public Relations Research and
Evaluation Toolkit in 2001, presenting a five-step PRE (Planning, Research and Evaluation)
Process which provides yet another model of how research and evaluation should be integrated
into PR and corporate communication. The PRE Process is presented in a 42-page manual that
contains considerable practical advice on the steps and methodologies for planning research and
evaluation. 58

57 ibid, p. 19.
58 Michael Fairchild, The IPR Toolkit: Planning, research and evaluation for public relations success,
Institute of Public Relations, UK, 2001, pp. 6-24.

[Figure 9 comprises four stages in sequence, each feeding Tactical Feedback and Management
Feedback loops: Input Stage (Planning and Preparation), Output Stage (Messages and Targets),
Impact Stage (Awareness and Information) and Effect Stage (Motivation and Behaviour).]
Figure 9. Unified Model of Public Relations Evaluation, Paul Noble and Tom Watson, 1999.


The PRE Process is a five-step model as shown in Figure 10. This model lists the steps of
undertaking PRE as (1) setting objectives; (2) developing a strategy and plan; (3) conducting
ongoing measurement; (4) evaluating results and (5) conducting an audit to review.

[Figure 10 depicts the PRE Process as a continuous cycle: Audit (Where are we now?), Setting
objectives (Where do we want to be?), Strategy & plan (How do we get there?), Ongoing
measurement (Are we getting there?) and Results & evaluation (How did we do?).]

Figure 10. PRE Process in IPR Toolkit, Institute of Public Relations, UK, 2001.

Also, the IPR PRE model lists a wide range of relevant planning and research methodologies, a
summary of which is presented in the following table.


1. Audit (Where are we now?): analyse existing data; audit of existing communication;
attitudes research (loyalty, etc); media audit; desk research; gather inputs; establish
benchmarks.

2. Setting objectives (Where do we want to be?): align PR with strategic objectives; break
down into specific, measurable PR objectives.

3. Strategy & plan (How do we get there?): decide strategy; decide tactics; decide type and
level of research to measure outputs (media analysis, literature uptake, etc), out-takes
(focus groups, surveys, etc) and outcomes (share price, sales, audience attitude research,
behavioural change).

4. Ongoing measurement (Are we getting there?): media content analysis; audience
research; focus groups; inquiries, sales, etc.

5. Results & evaluation (How did we do?): evaluate results; capture experiences and
lessons; review strategy; feed back into continuous PRE.

The Pyramid Model of Evaluation and the IPR PRE Process are the only models which
comprehensively list research methodologies and, while these must be appropriately selected,
this information is important to practitioners to apply research and evaluation. Both models give
some guidance on the applicability of methodologies, particularly the PRE Process.

The IPR Toolkit manual describes a number of methodologies in detail and presents sample data,
charts and graphs from research, making it a useful practical guide for PR practitioners. Further
case study information and details of research suppliers are listed on a related IPR Web site.

The key point common to all models is that public relations and corporate communication should
be research-based and that research should be applied as part of the communication process,
not as an add-on or ad hoc element done periodically. In particular, research should not be left to
the end. PR and corporate communication practitioners should research inputs to planning and
preparation (such as decisions on media, content suitability for audiences, etc); outputs produced
(such as publications, publicity and events); the awareness and attitudinal impact or out-takes of
those communication outputs; and the ultimate effect or outcome in terms of attitudinal or
behavioural change.

The loop-back arrows of the Continuing and Unified models of communication evaluation are
useful in emphasising that learnings from research at each stage should be fed back into planning
and preparation, i.e. they are an input for the next activity or stage. Other models should be read
as implicitly including constant feedback from research into planning.

There are other models and approaches, including the Short Term Model of Evaluation
proposed by Tom Watson and Michael Fairchild's Three Measures. But these present partial
research and evaluation solutions and complement, rather than offer alternatives to, the six
major models outlined.

PR Research Methodologies
Noting the large number of informal and formal research methodologies suggested in the Pyramid
Model of Evaluation, the PRE Process, and proposed by James Grunig and others, it is not
possible to describe each in detail. However, some of the most common and important research
methodologies applicable to PR and corporate communication are briefly described.


Secondary Data
Secondary data refers to information gained from sources other than primary (i.e. new, original)
research. Many existing research studies are available that can assist PR and corporate
communication practitioners. These include:

Market research and customer satisfaction research that may be available in the organisation;
Online research services such as LEXIS-NEXIS;
Research undertaken by professional organisations such as a Human Resources Society (eg.
on employee communication), advertising or marketing bodies;
Publicly released polls such as Gallup;
Social indicators research available through subscription services or publicly released.

Also, organisations such as the Institute of Public Relations in the UK (IPR), the Public Relations
Society of America (PRSA), the Public Relations Institute of Australia (PRIA) and the International
Public Relations Association (IPRA) advise their members of relevant research studies and post
copies of or links to substantial amounts of research relevant to their members on their Web sites.

The World Wide Web has revolutionised research. However, the focus of communication
practitioners in using the Web has again been principally information dissemination. As well as
being another channel or medium for communication, the Web is the world's largest database
containing thousands of pages of research and information, most of which are free.

Case Studies
Case studies can be collected and used systematically and, when applied this way, comprise
formal research useful for Best Practice comparison or for identifying the most effective strategies.

For instance, an organisation undergoing a name change and lacking budget to commission
research to identify target audience attitudes and needs, collected case studies of a number of
organisations which had undergone name changes and then emulated the strategies which were
effective for them.

Case studies are readily and usually freely available on PR organisations' Web sites as well as
numerous commercial PR Web sites, in books, awards compendiums and in university libraries.

Readability Tests
While not conclusive, readability tests typically estimate the number of years of education that a
reader needs in order to easily understand text. Methods include the Gunning Fog Index, the
Flesch Reading Ease Formula, the Dale-Chall method, Cloze Procedure or Signalled Stopping
Techniques (SST). These are very simple and can be self-administered by practitioners with no
training and at no cost to ensure text messages are able to be understood by intended audiences.
Methods are outlined in texts such as Cutlip et al. 59
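As an illustration only, the Gunning Fog Index can be approximated in a few lines of Python. The syllable counter below is a crude vowel-group heuristic, an assumption of this sketch rather than part of the published method, which has more careful rules:

```python
import re

def gunning_fog(text: str) -> float:
    """Approximate the Gunning Fog Index:
    0.4 * (average sentence length + percentage of 'complex' words).
    A complex word is taken here as one of three or more syllables,
    with syllables crudely counted as groups of vowels."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))

simple = gunning_fog("The cat sat. The dog ran. We all laughed.")
dense = gunning_fog(
    "Organisational accountability necessitates comprehensive evaluation "
    "methodologies demonstrating communication effectiveness systematically."
)
# simple scores far lower than dense, reflecting easier reading.
```

Short, plain sentences score low; long sentences full of polysyllabic words push the estimated years of education required sharply upward.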

Pre-Testing
Rather than relying on intuition (gut feel) or experience alone to plan communication activities,
ideas and even designs can be pre-tested. Pre-testing is an excellent example of an evaluation
methodology which can be carried out at the beginning before implementation.

With the proliferation of e-mail, a fast and highly cost-effective method of pre-testing designs and
draft copy for proposed publications, event programs, or concepts is to e-mail a PDF file to a
sample of target audience members for comment or voting on their preferences. Pre-testing can
also be done by fax or in face-to-face meetings.

59 Scott M. Cutlip, Allen H. Center, and Glen M. Broom, Effective Public Relations, 6th Edition, Prentice-
Hall, Inc., 1985, pp. 298-299; James Grunig, Managing Public Relations, Holt, Rinehart & Winston,
Inc., 1984, pp. 195-198; and Otis Baskin and Craig Aronoff, Public Relations - The Profession and the
Practice, Wm C. Brown, 1983, p. 186.

Pre-testing, widely used in advertising, can play a key role within the advocated continuous model
of research and evaluation beginning with inputs. It is better to know, for example, that the stories
or design proposed for a newsletter are not what the intended recipients want before publication,
or to know that they would prefer an Intranet rather than a publication before undertaking
expensive production.

Response Mechanisms
Response mechanisms such as toll-free numbers, coupons and competitions can be used to
track audience receipt and consideration of communication. Also, analysis of Web site visits and
downloads can be used to track and demonstrate out-takes and sometimes outcomes (eg. orders
placed or bookings made on an e-commerce site). Even though responses do not always
indicate whether audiences retain or act on information, high or growing levels of response
indicate receipt, acceptance and, to some extent, understanding of messages.

Media Content Analysis

Media monitoring is widely used in public relations to track editorial publicity. However, collecting
press clippings and transcripts or tapes of broadcast stories and programs is data collection, not
research. As Gael Walker from the University of Technology Sydney notes: "... collection of data
is only the beginning of research". 60

As well as being rudimentary, presenting press clippings is an entirely quantitative way to
measure. Articles clipped may be critical, supportive of competitors, or in media that do not reach
key target audiences. Hence, presenting these as evidence of results is misleading.

In an attempt to provide qualitative assessment of media coverage, PR practitioners have
accepted that negative publicity is unwelcome and unlikely to achieve objectives, and have begun
to adopt various forms of analysis. Media analysis has become the most widely used research
methodology in PR and corporate communication and, accordingly, will be examined in some
detail.

Systems adopted have ranged from do-it-yourself methods using a spreadsheet and applying
subjective ratings of articles to outsourced professional analysis by specialist research firms
which have formed in major markets.

The most basic form of media content analysis is positive/negative/neutral ratings. These are
applied based on the belief that positive coverage supports achievement of objectives, while
neutral coverage at least raises awareness. However, positive/negative/neutral ratings face a
number of major challenges to their validity and value. Firstly, they are usually applied arbitrarily
without objective or consistent criteria. Secondly, they are subjective, often applied by junior
media monitoring agency staff or communication practitioners themselves. Thirdly, they are
simplistic and imprecise as, in reality, some articles are slightly positive or negative while others
are very positive or negative. Fourthly, and most importantly, they are an unreliable method of
qualitative assessment because positive articles may appear in media which do not reach target
audiences or which, while generally positive, may not contain key messages. As such, they do
not contribute to achieving objectives.

With the focus of management on bottom line results, PR and corporate communication
practitioners have also sought ways to show a dollar value of their efforts. One approach to this
has been the practice of calculating Advertising Value Equivalents (AVEs), also referred to as
EAVs, which involves counting column centimetres or inches of press publicity and seconds of air
time gained and multiplying the total by the advertising rate of the media in which the coverage
appeared. It is not uncommon, using this method, to find PR campaigns valued as the equivalent
of many hundreds of thousands or even millions of dollars of advertising.

60 Gael Walker, "Communicating Public Relations Research". Paper to University of Technology Sydney.
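Purely to make the arithmetic concrete (not to endorse it; as discussed below, the method is widely rejected), an AVE calculation reduces to a multiplication. The rate figure here is an invented example:

```python
def advertising_value_equivalent(column_cm: float, rate_per_cm: float,
                                 multiplier: float = 1.0) -> float:
    """Space gained multiplied by the advertising rate card, optionally
    inflated by a 'credibility multiplier'. Shown only to illustrate the
    arithmetic this document criticises."""
    return column_cm * rate_per_cm * multiplier

# 120 column cm of publicity at an assumed rate card of $95 per column cm:
base = advertising_value_equivalent(120, 95)         # 11400.0
inflated = advertising_value_equivalent(120, 95, 3)  # 34200.0 with a 3x multiplier
```

The ease of producing large dollar figures this way, regardless of whether the coverage reached target audiences or carried key messages, is precisely the problem the critics identify.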

Many PR practitioners and some monitoring and analysis firms use this method of evaluation, but
it is widely regarded as invalid and, in some cases, unethical and dishonest.

A summary of expert views and research on the subject shows that, on one hand:

Advertising is identified as a paid-for/sponsored message from the advertiser (i.e. self-interested),
while editorial appears as independent comment under the imprimatur of the editor or writer;
Advertising appears separated from the news of the day, and some people pay less attention
to advertising than to news or other programs, while editorial appears as news (fact).

On the other hand:

Advertising content is never critical or inaccurate as it is written by the client, while editorial
may contain criticism and inaccuracies;
Advertising never mentions competitors, except in product comparisons favouring the client,
while editorial may contain competitor coverage, even unfavourable comparisons;
Advertising is placed in selected media strategically important to the client, while editorial
may be in unimportant media, including some not relevant to target audiences;
Advertising positioning is often controlled, while editorial may be well or poorly positioned;
Advertising layout and design is client-determined for impact, including use of headlines and
logos, while editorial is laid out by sub-editors, with no control of headlines or photos, and
rarely uses logos.

Thus, advertising comparisons grossly undervalue favourable publicity on one hand. On the other
hand, they may grossly overstate and misrepresent the value of publicity, particularly when it
contains criticisms or competitor coverage, when it does not contain key messages and when it
appears in media that do not reach target audiences.


The Public Relations Institute of Australia (PRIA) issued a Position Paper in 1999 on research
and evaluation which states:

The PRIA does not recognise Advertising Value Equivalents (AVEs) of editorial media coverage as a
reliable or valid evaluation methodology. Editorial and advertising cannot be directly compared. 61

Wilcox et al support this view, describing the methodology as "a bit like comparing apples and
oranges". 62

The 2001 UK Institute of Public Relations IPR Toolkit on planning, research and evaluation states:

Despite their widespread use, advertising value equivalents (AVEs) are flawed by the fact that
advertising and PR use quite different methodologies. Valid comparison is therefore difficult, if not
impossible. Opportunities to see (OTS) provide a more useful quick hit quantitative measure (but only
of output, not outcome). The public relations industry must get better at proving the worth of PR in its
own right, and the value of more in-depth use of research, in order to wean both practitioners and
clients away from AVEs. 63

Guidelines and Standards for Measuring and Evaluating PR Effectiveness published by the
Institute for Public Relations in the US, says:

Advertising Equivalency is often an issue that is raised in connection with Media Content Analysis
studies. Basically, advertising equivalency is a means of converting editorial space into advertising
costs, by measuring the amount of editorial coverage and then calculating what it would have cost to
buy that space, if it had been advertising.

Most reputable researchers contend that advertising equivalency computations are of questionable
validity. In many cases, it may not even be possible to assign an advertising equivalency score to a
given amount of editorial coverage ... 64

The advertising industry also has condemned the use of AVEs for measuring publicity. The
Advertising Federation of Australia (AFA) issued a policy on AVEs in 2001 stating:

The AFA does not support the practice of using Advertising Value Equivalents (AVEs) as a
measurement of editorial publicity. Well targeted, creative and strategically focussed advertising is
inherently different to the editorial gained from public relations activities. Both forms of communication
have their distinct benefits and cannot be benchmarked against each other. 65

Also, the Australian Association of National Advertisers (AANA) has circulated a policy statement
to its members which said in part:

AANA notes that professional PR organisations including the Institute of Public Relations in the UK
and leading PR academics in the US and UK have condemned the practice as "of questionable
validity" and "flawed".

AANA concurs with these views and believes this matter should be brought to the attention of
members in the interests of Best Practice and to inform our members of more reliable and credible
methods for evaluating PR. 66

61 Research and Evaluation, Position Paper, Public Relations Institute of Australia, 1999.
62 Dennis Wilcox, Philip Ault & Warren Agee, Public Relations: Strategies and Tactics (3rd ed), Harper
Collins, NY, 1992, p. 211.
63 Michael Fairchild, The IPR Toolkit: Planning, research and evaluation for public relations success,
Institute of Public Relations, UK, 2001, p. 37.
64 Guidelines and Standards for Measuring and Evaluating PR Effectiveness, Institute for Public
Relations, Florida, USA, 2000.
65 Letter to PRIA, Advertising Federation of Australia, 13 February, 2001.
66 Policy statement issued by Australian Association of National Advertisers, March, 2001.

James Grunig adds his considerable academic weight to the argument against AVEs and
particularly attacks multipliers or weightings which are applied on an assumption that the third
party endorsement of editorial is more credible than advertising. Grunig says:

Most reputable researchers contend that advertising equivalency computations are of questionable
validity. In many cases, it may not even be possible to assign an advertising equivalency score to a
given amount of editorial coverage (for example, many newspapers and/or magazines do not sell
advertising space on their front pages or their front covers)

... such a measure only measures what stories placed in the media would have cost if the space had
been purchased. They do not even measure exposure because it is entirely possible that no one in a
strategic public read your story ... Also, the weightings for third party endorsement are totally made
up. Research does not support the idea that there is such a thing as third-party endorsement. 67

Credibility multipliers or weightings ranging from three times up to nine times are applied to the
equivalent advertising cost by some PR practitioners. Users of this method usually include the
total volume of editorial coverage, including negative, neutral and competitor coverage, making it
doubly spurious. The Institute for Public Relations in the US describes this practice in its
guidelines as follows:

Some organizations artificially multiply the estimated value of a possible editorial placement in
comparison to advertising by a factor of 2, 3, 5, 8 or whatever other inflated number they might wish to
come up with, to take into account their own perception that editorial space is always of more value
than is advertising space. Most reputable researchers view such arbitrary weighting schemes aimed
at enhancing the alleged value of editorial coverage as unethical, dishonest, and not at all supported
by the research literature. 68

Credible media content analysis methodologies evaluate media coverage based on a number of
criteria such as:

The medium in which articles appear (circulation and target audience reach/relevance);
Positioning (eg. front page or headline news, main page, run of paper, etc);
Size or length;
Headline or photo mentions;
Issues discussed;
Messages contained;
Sources quoted (share of voice).
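As a hypothetical sketch of how such criteria might be combined into a single article score, the weightings and field names below are invented assumptions for illustration, not a published methodology:

```python
from dataclasses import dataclass

@dataclass
class Article:
    reaches_target_audience: bool  # medium's circulation reaches relevant audiences
    front_page: bool               # positioning
    headline_mention: bool         # headline or photo mention
    key_messages_present: int      # how many campaign key messages appear
    organisation_quoted: bool      # contributes to share of voice

def score_article(a: Article, total_key_messages: int) -> float:
    """Combine criteria into a single indicative 0-100 score. Coverage in
    media that do not reach the target audience scores zero, reflecting
    the point that such articles do not contribute to objectives."""
    if not a.reaches_target_audience:
        return 0.0
    score = 40.0 * a.key_messages_present / total_key_messages
    score += 20.0 if a.organisation_quoted else 0.0
    score += 20.0 if a.front_page else 10.0
    score += 20.0 if a.headline_mention else 0.0
    return score

strong = Article(True, True, True, 3, True)
off_target = Article(False, True, True, 3, True)
# score_article(strong, 3) -> 100.0; score_article(off_target, 3) -> 0.0
```

The key design point, whatever weights are chosen, is that audience reach gates everything else: a glowing front-page article in the wrong medium scores nothing.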

A number of research companies offer media content analysis services. In the UK, there are 15-
20 media content analysis firms, while in the US and Australia there were at least four in 2001.
There are also software programs now available, such as MASS MEDIAudit, part of the MASS
COMaudit suite, which allow PR and corporate communication practitioners to analyse media
content and present statistical summaries as well as charts and graphs to management showing
target audiences reached, key messages communicated, and share of voice compared with
competitors or other sources.

Paul Noble argues that media evaluation "is not the answer to the evaluation of public relations
programs as a whole, although one might be tempted to think otherwise given some of the
hyperbole surrounding it". However, he acknowledges that media evaluation "does have a very
positive role to play". 69

67 Jim Grunig, Evaluation in International Public Relations Association e-group, 4 August, 2000.
68 Guidelines and Standards for Measuring and Evaluating PR Effectiveness, Institute for Public
Relations, Florida, USA, 2000.
69 Paul Noble, 'A Proper Role for Media Evaluation' in "International Public Relations Academic Review",
No. 1, March, 1995, p. 1.

The Pyramid Model of PR Research clearly suggests that media content analysis is but one
methodology or tool for evaluating PR and corporate communication. But, given the high
proportion of PR activity that is still focussed on media relations and publicity (42 per cent of total
PR budgets, according to the 2000 Harris/Impulse PR Client Survey 70), and the ability of
computer-aided media content analysis systems to produce data, charts and graphs that meet the
numerically-orientated information needs of management, this field of evaluation is likely to
continue to grow.

[Figure 11 charts leading messages by volume (mentions) and favourability, plotting favourable
and unfavourable mentions for five messages: "Acme has fair fees & charges", "Acme offers
competitive interest rates", "Acme offers good customer service", "Acme is an innovator in
products & services" and "Acme is a leading business bank".]
Figure 11. Sample chart from Media Content Analysis by CARMA International.

Media analysis can be a powerful research tool for not only looking back, but also for forward
planning. Figure 11 shows that the fictitious company, Acme Bank, has successfully
communicated that it is a leading bank with fair fees and charges and attractive interest rates, but
lack of innovation and customer service are unfavourable messages that it needs to address in
future. Effort can be focussed on these areas, with follow-up analysis to evaluate results.
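The ranking behind that kind of forward planning can be sketched with invented mention counts in the style of Figure 11: compute net favourability (favourable minus unfavourable mentions) per message and list the weakest first.

```python
# Hypothetical mention counts per message; the numbers are invented
# for illustration, not taken from any actual analysis.
mentions = {
    "Acme is a leading business bank":             {"favourable": 22, "unfavourable": 3},
    "Acme has fair fees & charges":                {"favourable": 15, "unfavourable": 5},
    "Acme offers competitive interest rates":      {"favourable": 12, "unfavourable": 4},
    "Acme offers good customer service":           {"favourable": 6,  "unfavourable": 14},
    "Acme is an innovator in products & services": {"favourable": 4,  "unfavourable": 11},
}

def net_favourability(counts: dict) -> list:
    """Return (message, favourable - unfavourable) pairs, worst first,
    so follow-up communication effort can target the weakest messages."""
    net = [(msg, c["favourable"] - c["unfavourable"]) for msg, c in counts.items()]
    return sorted(net, key=lambda pair: pair[1])

priorities = net_favourability(mentions)
# priorities[0] is the message most in need of attention.
```

With these invented figures, customer service and innovation surface at the top of the priority list, mirroring the Acme example in the text.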

Also, media content analysis is not only a methodology for evaluating an organisation's own
coverage. The media report on competitors, issues and trends, providing a vast database of
information that media content analysis can "data mine". John Naisbitt demonstrated in his
popular book, Megatrends, that media content analysis can provide valuable insights into what is
likely to be on the public agenda in the future. 71

Surveys
Surveys are one of the most commonly used research instruments, employed for market
research, customer satisfaction studies and social research. Customised surveys can be used in
PR and corporate communication for a wide range of formative and evaluative research
purposes. As well as attitude and perceptions surveys to gain insight into target audiences,
survey questionnaires can be used to evaluate the effectiveness of:

70 Thomas L. Harris/Impulse Research 2000 Public Relations Client Survey, Impulse Research
Corporation, Los Angeles, 25 October, 2000.
71 Scot M. Cutlip, Allen H. Center and Glen M. Broom, Effective Public Relations, 6th Edition, Prentice-
Hall Inc., 1985, p. 215.

Publications (reader surveys);

Events (audience surveys);
Presentations (audience surveys);
Employee communication;
Shareholder communication;
Member communication in organisations;
Intranet, Extranet or Web sites (online surveys);
Community relations programs (local community surveys).

Surveys are also a key evaluation tool for use in new, emerging practices such as reputation
measurement.
With the proliferation of e-mail and growing use of the Web, e-surveys are revolutionising
research and lowering costs by eliminating printing of questionnaires, reply-paid postage costs
and partly automating data entry. Distributed by either e-mail or on Web sites, e-surveys utilise
interactive questionnaires into which respondents directly enter their responses. In sophisticated
examples, data is automatically collated in servers when respondents click Submit.

As well as costing much less than equivalent "dead tree" surveys, e-surveys often gain faster
responses, and time taken in data entry and collation is dramatically reduced with automated
systems. Advanced programming skills are needed to design e-surveys, however.
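The automated collation step described above might, assuming responses arrive as simple question-to-answer records parsed from each submitted form, be sketched as:

```python
from collections import Counter

def collate(responses: list) -> dict:
    """Tally answers per question across all submitted questionnaires.
    Each response is a dict of question -> answer, as might be parsed
    from a web form submission when the respondent clicks Submit."""
    tallies = {}
    for response in responses:
        for question, answer in response.items():
            tallies.setdefault(question, Counter())[answer] += 1
    return tallies

# Hypothetical responses to a reader survey (field names invented):
submitted = [
    {"prefer_intranet": "yes", "newsletter_rating": "good"},
    {"prefer_intranet": "yes", "newsletter_rating": "poor"},
    {"prefer_intranet": "no",  "newsletter_rating": "good"},
]
results = collate(submitted)
# results["prefer_intranet"] tallies to 2 "yes" and 1 "no".
```

A real e-survey system adds form parsing, validation and storage around this, but the collation itself is this simple, which is why automated systems cut data-entry time so dramatically.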

Surveys can be self-administered if budget is not available to engage a professional research firm
or consultant. However, it is important that survey questionnaires are properly constructed using
reliable question techniques.

Focus Groups
PR and corporate communicators can also use focus groups for qualitative research and these
can be conducted quite cost-effectively. The only major cost is for a moderator who, ideally,
should be independent and trained in psychology and focus group discussion techniques.

Focus groups, which usually involve small groups of eight to 12 people, can be used to identify
attitudes, perceptions and pre-test ideas or strategies. In simple terms, qualitative research such
as focus groups identifies what people think; quantitative research is required to identify how
many people think that way.

Ethnographic Studies
Ethnographic research, widely used in sociology and anthropology, is based on observation,
participation and role-playing techniques by impartial researchers, usually more than one, who
immerse themselves in a community or target group. A key benefit of ethnographic studies is that
they identify how individuals and groups think, behave and inter-relate in their natural
environment. After a "settling in" period, studied individuals and groups often forget that
ethnographic researchers are there, allowing researchers to use their "fly on the wall" techniques
to observe and record free of the intervention effects common with other research instruments
such as survey questionnaires or interviews.

Evaluating Relationships
Linda Childers and James Grunig have done considerable work in looking beyond communication
outputs and even beyond traditional outcomes or impacts such as changes of attitude or
behaviour to pinpoint how PR/corporate communication adds value to the organisation as a
whole. They identify that public relations makes an organisation more effective when it
identifies the most strategic publics as part of strategic management processes and conducts
communication programs to develop and maintain effective long-term relationships between
management and those publics. 72

Childers and Grunig argue that relationships with key constituencies are a key goal that adds value
to an organisation and, therefore, that relationships should be a key focus of public relations. In a
sense, this is a truism and a return to the original meaning of the term public relations. This view
is supported by the focus on and growth of relationship marketing in preference to transaction
marketing which seeks short-term sales. Childers and Grunig report that sound relationships lead
to greater co-operation between companies and organisations and their key stakeholders, as well
as benefits through conflict avoidance effects such as reduced litigation, fewer complaints and
less interference by government.

The International Association of Business Communicators Excellence Study provided evidence
that there is a correlation between communication effects and maintaining quality long-term
relationships. 73 Therefore, communication practitioners who build and maintain relationships with
key stakeholders can, if they track and measure these, demonstrate their value to their company
or organisation overall.

A number of methodologies are applicable to evaluating relationships, including surveys, focus
groups and structured and unstructured interviews.

The Ethics of Research & Evaluation

It may seem out of context for ethics to be discussed in relation to research and evaluation. But
there are major ethical issues at both a philosophical and methodological level in research and
evaluation.

At a philosophical level, PR and corporate communication practitioners make recommendations
to their employers and clients every day proposing expenditure of hundreds of thousands or
sometimes millions of dollars, and suggesting changes to the strategic direction of companies and
organisations. As has been shown, this is done with little or no research basis in many cases.
Furthermore, when recommendations are accepted, programs are often implemented with scant
tracking and measurement to determine whether they are effective.

Is that responsible? Is that moral? Is that ethical?

To glibly commit shareholders' or taxpayers' funds to programs and campaigns based only on
intuition and personal experience, without an objectively researched basis to recommendations
and rigorous evaluation of the effectiveness of expenditure and activities, has to be viewed as
highly questionable from an ethical standpoint.

At a methodological level, when research is conducted, it needs to be undertaken with rigour and
integrity. Presenting misleading or erroneous data intentionally is clearly unethical. However,
doing so through ignorance and failure to consult readily available literature or seek advice is also
ethically questionable.

A research culture needs to be developed in PR and corporate communication, applying informal
and formal research tools systematically and rigorously as part of professional communication
practice, and "smoke and mirrors" methods such as Advertising Value Equivalents using
"credibility multipliers" should be avoided.

72 Dr Linda Childers and Dr James Grunig, Guidelines for Measuring Relationships in Public Relations,
Institute for Public Relations, Florida, USA, 1999, p.9.
73 David Dozier with Larissa Grunig & James Grunig, Managers Guide to Excellence in Public Relations
and Communication Management, Lawrence Erlbaum Associates, NJ, 1995, pp. 226-230.

Pressure from clients and employers is cited in defence of invalid and simplistic evaluation
methodologies. "My client wants it" or "my boss likes to see dollar values" are common
justifications for AVEs and other such evaluation shortcuts. However, would it be acceptable for
accountants to falsify tax returns because their clients exert pressure to reduce their taxation, or
for lawyers to arrange for witnesses to lie in court because their clients pressure them to prove
their innocence? Such behaviour from professionals is widely viewed as unacceptable, and public
relations needs to set and meet no lesser standards if it is to become and be recognised as a
profession and gain credibility and respect.

The Benefits of Using Planning Research & Evaluation

If implemented from the ground up as a continuous process, planning research and evaluation
offer major benefits to PR and corporate communication practitioners: in their pursuit of
professionalism and acceptance as part of senior management, and at a practical level in gaining
adequate budgets and support.

Australian marketing writer Neil Shoebridge said in a Business Review Weekly column in April
1989: 'For public relations to be widely accepted as a serious marketing tool, it needs to
develop new ways to prove its worth and make its actions accountable ... Pointing to a pile of
press clippings is not enough.' 74

The importance of research in gaining access to senior management and having influence in
organisational decision-making is underlined in the following comment by James A. Koten, then
Vice-President for Corporate Communications at Illinois Bell:

'To be influential, you have to be at the decision table and be part of corporate governance. You can
be there if the things you are doing are supported by facts. That is where the public relations person
has generally been weak and why, in most organisations, public relations functions at a lower level.
The idea is to be where the decisions are made in order to impact the future of the company. To do
so, you have to be like the lawyer or financial officer, the personnel officer or operations person. You
have to have hard data.' 75

Professor David Dozier of San Diego State University says: 'The power-control perspective
suggests that public relations program research is a tool (a weapon, perhaps) in the political
struggle to demonstrate the impact of public relations programs and to contribute to decision-
making ... Success in this struggle means greater financial and personnel resources for the public
relations unit and greater power over decisions of the dominant coalition.' 76

Dominant coalition theory, developed by professors of industrial administration Johannes
Pennings and Paul Goodman at the University of Pittsburgh, provides an effective model for
understanding why public relations is often remote from the centre of decision-making and
policy-making in organisations. 77

Research shows that the dominant coalition in modern companies and organisations comprises
numerically oriented executives such as accountants, engineers, technologists, sales and
marketing heads, and sometimes lawyers. Their vocational orientation is exacerbated by time
pressures due to their busy schedules. Studies of dominant coalitions show that the language of
the dominant coalition is principally numbers, including dollar amounts and percentages; charts
and graphs; diagrams such as flow charts; and illustrations which simplify and summarise, with
text well down the list of what has most impact. Ironically, and perhaps tellingly, PR and corporate
communication proposals and reports have traditionally been text-based.

74 Neil Shoebridge, column in Business Review Weekly, April 1989.
75 David Dozier, 'The environmental scanning function of public relations practitioners and participation in
management decision making'. Paper presented to the Public Relations Division, Association for
Education in Journalism and Mass Communication, Norman, Oklahoma, August 1986, p. 2.
76 David M. Dozier, 'The Innovation of Research in Public Relations Practice: Review of Program Studies'
in Public Relations Research Annual, Vol. 2, Lawrence Erlbaum Associates, 1990, p. 12.
77 James E. Grunig and Todd Hunt, Managing Public Relations, Holt, Rinehart & Winston, 1984, p.

A 1985 survey of Public Relations Society of America (PRSA) and International Association of
Business Communicators (IABC) members in the US and Canada showed that scanning research
is positively associated with participation in management decision-making and membership of the
dominant coalition. 78

In their 1979 and 1985 surveys of 208 PRSA members, Professors Glen Broom and David Dozier
also found that increases in overall evaluation research activities were associated with increased
participation in management decision-making. 79

Formative and evaluative research tools and methodologies provide PR and corporate
communication practitioners with the data to plan more accurately and the metrics to implement
Best Practice comparison, benchmarking, Key Performance Indicators (KPIs) or Key Results
Areas (KRAs), and other modern management reporting techniques such as Balanced Score
Card. This brings PR and corporate communication into line with management strategies and
expectations in the Age of Accountability.

In this context, research can provide the long-sought key to the boardroom door for PR and
corporate communication.


* Jim Macnamara is a prominent international authority on measurement and evaluation of public
relations and corporate communication. He holds a BA in journalism, media and literature and an
MA by research in media and public relations, and in 2005 will have completed a PhD in media
research. He was a contributing author to the landmark International Public Relations Association
(IPRA) Gold Paper on Evaluation in 1994, and he is the author of nine books on media, PR and
communication as well as numerous papers published in professional and academic journals,
including IPRA Academic Review, Asia Pacific Public Relations Journal and Strategic
Communication Measurement in the US, and the UK Journal of Communication Management.
After more than 20 years working in the media and public relations, he founded and is CEO of
The MASS Communication Group, an independent communication research and consulting
group, and operates the Asia Pacific franchise of the global media analysis firm CARMA
International.

Copyright MASS Communication Group Pty Limited, 2002

1st floor, 87-97 Regent St, Chippendale, NSW, 2008, Australia
Telephone: 61 2 9698 1110

78 David M. Dozier, op. cit., p. 19.

79 John V. Pavlik, Public Relations - What Research Tells Us, Sage Publications, 1987, p. 69.


Case Studies: Informal and Formal Research

A leading industry association planned to produce a poster containing important industry
messages for display in retail stores. A number of creative graphic designs were mocked up and
reviewed. However, rather than select one based on intuition or personal preference, the
association e-mailed a PDF of each design to a sample of retailers for their feedback and vote.
The retailers' choice differed entirely from the association's, for reasons they explained. This
valuable pre-testing resulted in a poster that met the needs of the target audience and cost only
one hour of time.

Case Studies
A company facing a serious crisis involving food contamination needed to produce a PR strategy
over one weekend. No time for research? Wrong. Enterprising PR consultants spent their Friday
night searching the Web and the CompuServe Marketing Forum for case studies, as well as
posting questions in various online chat rooms. The result was that, by Monday morning, they
had received more than 20 documented case studies of food contamination crises and how they
were handled from around the world. They then emulated the strategies which appeared to be
effective in these case studies and avoided strategies which were reported to be ineffective. This
was, in effect, an international Best Practice study and captured valuable research at little cost.

A major Federal Government agency needed to determine how well its communication was
working with key stakeholders, which included State departments, local councils and regional
field officers. Only a limited budget was available for research. PR staff noted that most
stakeholders used e-mail, so an e-survey was designed with a specialist research firm and
distributed electronically. The result was dramatically reduced cost, as there was no requirement
for printing, postage or reply-paid envelopes, and faster responses. Many stakeholders reported
the e-survey was more convenient than printed questionnaires, which had to be completed by
hand and mailed or faxed. Around 700 responses were received, collated, analysed and reported
in less than 10 days at approximately half the cost of a traditional printed survey.

Informal Focus Groups & Unstructured Interviews

A PR executive for a major corporation, briefed to improve the effectiveness of internal
communication, asked for budget to conduct an employee survey. However, this was denied on
the basis that staff were 'surveyed out'. Rather than be defeated and resort to 'gut feel', the PR
executive noted that the company had a large staff canteen that was busy at lunch time every
day. He and his staff visited the canteen over 10 days, informally interviewing employees. Staff
were more than happy to share their views. The result was an extensive compilation of data and
comments on staff information needs and media preferences, which the PR executive used as
the basis for recommendations to management. Quotes from staff, without names, were used to
back up his recommendations, which were accepted on the weight of evidence.