Implementing the Balanced Scorecard Using the Analytic Hierarchy Process

BY B. DOUGLAS CLINTON, PH.D., CPA; SALLY A. WEBBER, PH.D., CPA; AND JOHN M. HASSELL, PH.D., CPA

Management Accounting Quarterly, Spring 2002, Vol. 3, No. 3

THOMAS SAATY'S INNOVATIVE TECHNIQUE PROVIDES A SYSTEMATIC PROCESS FOR CHOOSING METRICS.
Companies using the balanced scorecard have achieved mixed success. We believe the difficulty lies with the process of choosing metrics and using them appropriately. To solve the dilemmas of how to systematically choose appropriate metrics and how to compare divisions with differing metrics, we suggest using a tool called the Analytic Hierarchy Process (AHP).

The AHP is a unique method that provides a means of choosing appropriate metrics in virtually any context. It uses a hierarchical approach to organize data for proper decision making. The method is easy to implement, simple to understand, and able to satisfy every requirement of metric choice and scorecard construction. The process can neatly capture the consensus of a potentially divergent group of managers and can be quickly and easily updated as desired. The AHP is a powerful yet simple tool that provides great promise toward implementing the balanced scorecard for practical use.

BALANCED SCORECARD CONCEPT
In the September 1997 issue of Management Accounting (now Strategic Finance), B. Douglas Clinton and Ko Cheng Hsu illustrated the importance of the balanced scorecard concept developed by Robert Kaplan and David Norton.1 Since the publication of Kaplan and Norton's 1992 Harvard Business Review article, companies have implemented the balanced scorecard with mixed results.2 They seem to have a difficult time choosing the proper metrics and then using them appropriately.

"The point to using the balanced scorecard as a management tool is not to adopt a specific set of metrics by cloning them from a particular list," wrote Clinton and Hsu.3 "The idea is to analyze each of these components (relationships and management perspectives) and consider how they link to strategy and link together to support a meaningful continuous improvement and assessment effort."

The balanced scorecard concept is the balanced pursuit of objectives in four key areas: customers, financial, internal business processes, and innovation and learning. The balanced scorecard provides summary-level data emphasizing only the metrics most critical to the company in each of the four areas. Kaplan and Norton are quick to point out that success in using the balanced scorecard is highly contingent on the ability of the chosen metrics to encourage action and appropriate change for the company.

In Kaplan and Norton's second balanced scorecard article, they emphasized the importance of tying the metrics to strategy.4 In their discussion, they presented a typical project profile for building a balanced scorecard, but they showed that the ways in which particular companies use the balanced scorecard often differ considerably, as do the benefits derived from the different approaches.

In more recent years, the balanced scorecard has grown into a tool used as a basis for developing a strategic management system. By observing how companies applied the balanced scorecard, Kaplan and Norton recognized how useful the approach can be in coupling strategy development with strategy implementation. Different methods of employing the balanced scorecard are used to clarify and update strategy, to communicate intentions, to align organization and individual goals, and to learn about and improve strategy. It is this very diversity in choosing metrics, however, that provides the central focus of problems in implementing and using the balanced scorecard. Unfortunately, no one really knows the best way to choose the metrics.

SYSTEMATICALLY CHOOSING METRICS
How should a company choose the metrics to use in each of the four categories? This question is made more difficult because it is almost certain that different metrics are appropriate for different divisions within a company. Although a useful goal is "balance," when, if ever, does one of the four areas become more important than another? If one of the four areas is treated as more important than another, how will the relative importance of the four areas be determined and communicated? How should top management go about meaningfully comparing divisions that are using different metrics and weight the relative importance of the four areas? Will managers earning performance-based compensation tied to their individualized balanced scorecard perceive they are being treated fairly when compared to managers of other divisions? With each division using different metrics for bonus calculations, how can a company simplify the compensation process and provide important motivational incentives yet ensure the process is equitable and congruent with the overall goals of the company as a whole?

To solve the dilemma of implementing the balanced scorecard based on a method of systematically choosing metrics, several important issues need to be addressed and satisfied. For example, the scope of the method must include the following requirements:
♦ Handle subjective, qualitative judgments as well as quantitative data simultaneously;
♦ Be used with any metric—financial as well as nonfinancial, historical as well as future oriented—and allow comparison between categories;
♦ Be accurately descriptive in capturing what metrics users believe are critically important yet not dictate to them what should be important;
♦ Be easy enough for anyone involved in choosing metrics to use;
♦ Handle direct resource allocation issues, conduct cost/benefit analysis, and be used in designing and optimizing the strategic management system;
♦ Remain simple yet effective as a procedure, even in group decision making where diverse expertise and preferences must be considered;
♦ Be applicable in negotiation and conflict resolution by focusing on the relationships between relative benefits and costs for each of the parties involved;
♦ Be relevant and applicable to a dynamic, constantly changing organization;
♦ Be used for guiding and evaluating strategy, and for establishing a basis for performance evaluation and incentive-based compensation;
♦ Be mathematically rigorous enough to satisfy strict academicians and mathematically oriented practitioners and compensation specialists.

The tool that can satisfy all of these requirements is the Analytic Hierarchy Process (AHP). In fact, the issues listed above were adapted from the seminal book written by the creator of the AHP, Thomas L. Saaty, explaining what AHP can do.5

PROBLEMS CHOOSING METRICS
Perhaps the most candid discussion of the successes and failures of the balanced scorecard was presented in 1999 by Arthur M. Schneiderman, former vice president of quality and productivity at Analog Devices, Inc. He was responsible for the development in 1987 of the first balanced scorecard that the company used, and he offers advice from his 12 years of experience as a "balanced scorecard process owner."6 He states that "a good scorecard can be the single most important management tool in Western organizations," but "the vast majority of so-called balanced scorecards fail over time to meet the expectations of their creators."7

Schneiderman lists six reasons why most balanced scorecard implementations fail (see Table 1). First on the list is the incorrect choice of variables as drivers of stakeholder satisfaction. In addressing this issue, Schneiderman states:

    The difficulty in identifying scorecard metrics is compounded by the emerging requirements of nonowner stakeholders: employees, customers, suppliers, communities, and even future generations.8

Table 1. Why Most Balanced Scorecards Fail
1. The independent (i.e., nonfinancial) variables on the scorecard are incorrectly identified as the primary drivers of future stakeholder satisfaction.
2. The metrics are poorly defined.
3. Improvement goals are negotiated rather than based on stakeholder requirements, fundamental process limits, and improvement process capabilities.
4. There is no deployment system that breaks high-level goals down to the subprocess level where actual improvement activities reside.
5. A state-of-the-art improvement system is not used.
6. There is not, and cannot be, a quantitative linkage between nonfinancial and expected financial results.
Source: A. M. Schneiderman, "Why Balanced Scorecards Fail," Journal of Strategic Performance Measurement, January 1999.

Schneiderman mentions how difficult it is for organizations to clearly identify social responsibilities. At least part of the problem is the subjectivity involved. The subjectivity issue is also likely to create difficulties in defining the metrics. Even so, Schneiderman appears to recognize that value creation takes place at the activity level and recommends pushing metrics downward to lower organization levels. He argues that:

    There is great value in even subjective agreement that if all of the goals of subordinate scorecards are achieved, then a higher-level goal will also be achieved, almost with certainty.9

Schneiderman is not the only balanced scorecard advisor to lament the difficulty with metric choice and the problems that result. About the same time, Mark Frigo and Kip Krumwiede published a study that rated performance metrics in the balanced scorecard framework. After examining survey data, they concluded that scorecard users rated about a third of customer and internal process area metrics as between "less than adequate" and "poor." In addition, "Only 16.8% rated customer metrics as 'very good to excellent,' and only 12.3% said their internal process metrics were 'very good to excellent.'"10

Like Schneiderman, Frigo and Krumwiede did not conclude that the balanced scorecard was a poor method or an inadequate tool. In fact, they concluded that firms using a balanced scorecard for performance measurement rated their performance measurement systems much higher on average than other companies.11 Thus we see even more evidence that choosing and properly defining metrics is a difficult task. Moreover, the vast majority of companies that have performance management systems are dissatisfied with their metrics—including those companies that employ the balanced scorecard. Clearly, a systematic process is needed to help companies choose metrics that measure results appropriately. This is where the AHP can help.


THE AHP METHOD
According to Saaty, the AHP is "a framework of logic and problem solving" that organizes data into a hierarchy of forces that influence decision results.12 The method is mathematically rigorous yet easy to understand because it focuses on making a series of simple paired comparisons. These comparisons are used to compute the relative importance of items in a hierarchy. Accordingly, the AHP can help decision makers understand which metrics are more important by indicating which metrics weigh more heavily on a decision than others.

Managers typically will not use a method that they do not understand, and if they are forced to use such a method, they will likely not be happy about it. Unfortunately, the best methods (e.g., most accurate or useful) are often the most complex as well. In contrast, the AHP is simple, adaptable to groups and individuals, natural to intuition and general thinking, encourages compromise and consensus building, and does not require a great deal of specialized knowledge to master. The AHP is able to deal with the subjectivity involved with attempting to define vague, judgmentally based metrics such as those associated with social responsibility. When a large number of metrics are possible, the AHP is able to help decision makers simplify the number of possible metrics.

Here is a general description of the AHP pertaining specifically to choosing balanced scorecard metrics.13 The AHP uses a series of paired comparisons in which users provide judgments about the relative dominance of two items. The term "dominance" here is a greater-than type of measure for a given attribute or quality. Dominance can be expressed in terms of importance, preference, equity, or some other criterion that causes the individual to believe that one item is better or more appropriate than another. This criterion need not be specified or readily definable to use the method; that is, the dominance decision could be based on experience, intuition, or some subjective judgment that perhaps the user cannot articulate.

The AHP assumes hierarchical relationships without compromising the relative balance that the balanced scorecard intrinsically encourages. In a balanced scorecard task, the first level of the hierarchy contains the four balanced scorecard categories. The second level of the hierarchy contains the metrics that are used to measure performance for each of the four categories.

The AHP can be used in two ways to implement a balanced scorecard: (1) at the beginning of the process, to help choose metrics, and (2) after the metrics are chosen, to help understand their relative importance to a firm's managers and employees. We illustrate both in the following sections.

AHP AND METRIC SELECTION
Determining which metrics to use in a balanced scorecard requires selecting the set of metrics for each of the four areas mentioned earlier: customers, financial, internal business processes, and innovation and learning. The first decision is to choose who will be involved in selecting the metrics. It is likely that different personnel should be used in the metric identification process for each separate area. The participant selection process also requires deciding which levels in the organization will be used to select the metrics (e.g., plant level, division level, corporate level, or all three). The number of participants should be large enough to ensure that a sufficient number of potentially appropriate metrics are identified but small enough that the metric selection process does not become unwieldy. The metric selection process in each of the four areas can proceed either simultaneously or one area at a time.

Participants initially should be encouraged to brainstorm and use their experiences and expertise to identify all possible metrics in each area. After they identify the set of possible metrics, the next step is to reduce the list to a smaller number of metrics. If possible, all participants should meet in person or via conference call to discuss which metrics are most important.

After the list has been whittled down, the AHP facilitator uses the reduced set of metrics and has participants respond to the AHP task. Each participant is asked to compare all possible pairs of metrics in each of the four areas as to their relative importance using a written survey. From the participants' responses, the AHP facilitator computes a decision model for each participant that reflects the relative importance of each metric (the decision weights sum to 100%).14 After the AHP decision models are obtained, participants are given the decision models of the other participants and asked to reflect on their original metric choices. Then the group meets again to determine the final set of metrics for the scorecard.

An important feature of the AHP is that it produces an inconsistency index for each participant's decision model. AHP software points out where an individual's responses in making paired comparisons are inconsistent. The participant can then rethink the relative importance of possible factors.

Another strength of the AHP is that it keeps some possible metrics from being discarded prematurely. For example, assume that 12 participants identified seven possible metrics related to internal business processes. After the AHP task, assume that nine of the 12 participants rated metric number three as not important, but the other three participants rated the metric as very important. These decision weights would presumably cause much discussion among the participants. Though most of the participants initially believed that metric number three was not important, the AHP might suggest it be included in the balanced scorecard because of its relative importance to the other participants.

Once the balanced scorecard metrics are chosen, a balanced scorecard hierarchy is established. This hierarchy should be revisited during the strategic planning process and adjusted commensurate with changes in strategic direction and operational tactics. Figure 1 presents a hypothetical structure for a balanced scorecard that comes from using the AHP. The strategic objective on which this balanced scorecard is based is success in pursuing a differentiation strategy. Presenting an overall strategic objective on the face of the scorecard serves as a reminder and reality check. For those ultimately using the scorecard, this information can be helpful to keep in mind in the final evaluation of the chosen metrics. As shown, the four scorecard categories are not required to have a prescribed number of metrics per category or the same number of metrics across categories.

Figure 1. Balanced Scorecard Hierarchy
Strategic Objective: Success in Pursuing a Differentiation Strategy
Level 1: Categories — Customer, Financial, Internal Business Processes, Innovation and Learning
Level 2: Metrics — each category lists its own metrics (Metric 1, Metric 2, Metric 3, ...); in the example the number of metrics ranges from three to six and differs across categories.

The two-step procedure used to determine the overall performance involves (1) using the AHP to compute decision weights of relative importance for each metric and (2) using an algorithm to compute an overall performance score. The following example illustrates these two steps in detail.

THE ANALYTIC HIERARCHY PROCESS
Step one uses the AHP to compute decision weights of relative importance for each metric. While a relatively small number of individuals may have participated in the metric selection process, determining how to weight the relative importance of the categories and metrics can involve as many individuals as desired from all levels within the organization. These individuals proceed through the AHP paired-comparison procedure, first comparing the relative importance of the four balanced scorecard categories in the first level of the AHP hierarchy. The respondents may want to consider the current product life-cycle stage when making this determination. For example, while in the product introduction stage, formalizing internal business processes may be of considerable relative importance, but when dealing with a mature or declining product, the desire to minimize variable cost per unit may dictate the financial category to be of greater importance than the three other scorecard categories. An illustration of the paired-comparison process for level one—the category level—is presented in Figure 2.

For the second level of the AHP hierarchy—identifying the relative importance of the metrics for the respective scorecard categories—the process is identical to that used for level one. Paired comparisons are made between all combinations of the metrics proposed within each scorecard category. A sample survey instrument showing a complete set of questions for both levels is presented in Figure 3.
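To make the survey mechanics concrete, here is a minimal Python sketch that enumerates every paired comparison a two-level survey like the one in Figures 2 and 3 would require. The hierarchy contents below are illustrative assumptions drawn from the figures' examples, not a prescribed list of metrics.

```python
from itertools import combinations

# Illustrative two-level hierarchy (categories -> candidate metrics),
# loosely following the examples used in Figures 1 and 3.
hierarchy = {
    "Innovation and Learning": ["Market Share", "Number of New Products",
                                "Revenue from New Products and Services"],
    "Internal Business Processes": ["Number of Good Units Produced",
                                    "Minimizing Variable Costs Per Unit",
                                    "Number of On-Time Deliveries"],
    "Customer": ["Revenue", "Market Share", "QFD Score"],
    "Financial": ["Cash Value-Added", "Residual Income", "Cash Flow ROI"],
}

# Level one: every pair of categories is compared once.
level_one_pairs = list(combinations(hierarchy.keys(), 2))

# Level two: within each category, every pair of its metrics is compared once.
level_two_pairs = {
    category: list(combinations(metrics, 2))
    for category, metrics in hierarchy.items()
}

# n items require n*(n-1)/2 comparisons, so four categories produce six
# level-one questions and each three-metric category adds three more.
print(len(level_one_pairs))                           # 6
print(sum(len(p) for p in level_two_pairs.values()))  # 12
```

Keeping the candidate list short matters in practice: a single category with seven surviving metrics would already require 21 comparisons from each participant.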

Figure 2. AHP Paired-Comparison Process


A scale is presented listing two balanced scorecard categories on opposite ends. The first decision is to select which of
the two categories is considered to be more important in the balanced scorecard context. The second decision is to
indicate how much more important one believes the selected category to be. The degree of importance is measured on
the nine-point scale shown. Even numbers are not defined, but they can be used to represent intensities between the
odd numbers.

1=EQUAL (the categories are of equal importance in the scorecard)


3=MODERATE (one of the categories is slightly more important than the other)
5=STRONG (one of the categories is strongly favored over the other)
7=VERY STRONG (one of the categories is strongly favored over the other, and its dominance is
demonstrated in practice)
9=EXTREME (the difference in importance between the two categories is so extreme that the cate-
gories are on the verge of not being comparable)

Survey Question: In measuring success in pursuing a differentiation strategy, for each pair, indicate which of the two
balanced scorecard categories is more important. If you believe that the categories being compared are equally impor-
tant in the balanced scorecard process, then you should mark “1.” Otherwise, mark the box with the number that corre-
sponds to the intensity on the side that you consider more important described in the above scale.

Consider the following examples:

Customer 9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9 Financial

In this example, the customer category is judged to be strongly more important than the financial category.

Customer    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Internal Business Processes

In this example, the customer category is judged to be equally important to the internal business processes category.

Note: There are no right or wrong answers to the paired comparisons.
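The article leaves the weight computation to the AHP software, but the standard AHP arithmetic is straightforward: the 1-to-9 judgments are placed in a reciprocal comparison matrix (if item i is judged five times as important as item j, cell (i, j) holds 5 and cell (j, i) holds 1/5), and the priority weights are read from the matrix's principal eigenvector. The sketch below uses invented judgments for the four categories and also computes the consistency ratio that AHP software uses to flag contradictory answers; it illustrates the general technique, not any particular vendor's implementation.

```python
import numpy as np

categories = ["Innovation and Learning", "Internal Business Processes",
              "Customer", "Financial"]

# Hypothetical judgments on the 1-9 scale: judgments[(i, j)] = how many
# times more important item i is than item j (values here are invented).
judgments = {(0, 1): 2, (0, 2): 3, (0, 3): 2, (1, 2): 1, (1, 3): 1, (2, 3): 1}

n = len(categories)
A = np.ones((n, n))
for (i, j), value in judgments.items():
    A[i, j] = value
    A[j, i] = 1.0 / value          # reciprocal entry

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()

# Consistency index CI = (lambda_max - n) / (n - 1), compared with Saaty's
# published random index RI for a matrix of this size.
lambda_max = eigenvalues.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
cr = ci / ri

for name, w in zip(categories, weights):
    print(f"{name}: {w:.2f}")
print(f"consistency ratio: {cr:.3f} (values above about 0.10 suggest rethinking answers)")
```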



By examining the scales in Figure 3, you will notice that all possible paired comparisons are made. For level one, this comparison includes all paired combinations of the four scorecard categories. For level two, the comparisons are between each paired combination of three metrics proposed for each of the respective four scorecard categories. In Figure 3, we use three metrics in each balanced scorecard category. The choice of three metrics and the specific metrics chosen are merely examples.

Figure 3. Sample Survey Instrument


Level One (Scorecard Category)
Please place an “X” in the appropriate box to indicate which item in each pair, if any, is more important than the other
for the company to succeed in pursuing a differentiation strategy.

1=EQUAL 3=MODERATE 5=STRONG 7=VERY STRONG 9=EXTREME

Innovation and Learning        9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Internal Business Processes

Innovation and Learning        9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Customer

Innovation and Learning        9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Financial

Internal Business Processes    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Customer

Internal Business Processes    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Financial

Customer                       9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Financial

Level Two (Metrics to Use in Each Scorecard Category)


Innovation and Learning:
Please place an “X” in the appropriate box to indicate which item in each pair, if any, is more important than the other
for the company to succeed in the innovation and learning category.

1=EQUAL 3=MODERATE 5=STRONG 7=VERY STRONG 9=EXTREME

Market Share              9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Number of New Products

Market Share              9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Revenue from New Products and Services

Number of New Products    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Revenue from New Products and Services



Internal Business Processes:
Please place an “X” in the appropriate box to indicate which item in each pair, if any, is more important than the other
for the company to succeed in the internal business processes category.

1=EQUAL 3=MODERATE 5=STRONG 7=VERY STRONG 9=EXTREME

Number of Good Units Produced         9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Minimizing Variable Costs Per Unit

Number of Good Units Produced         9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Number of On-Time Deliveries

Minimizing Variable Costs Per Unit    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Number of On-Time Deliveries

Customer:
Please place an “X” in the appropriate box to indicate which item in each pair, if any, is more important than the other
for the company to succeed in the customer category.

1=EQUAL 3=MODERATE 5=STRONG 7=VERY STRONG 9=EXTREME

Revenue 9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9 Market Share

Revenue         9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    QFD (Quality Function Deployment) Score

Market Share 9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9 QFD Score

Financial:
Please place an “X” in the appropriate box to indicate which item in each pair, if any, is more important than the other
for the company to succeed in the financial category.

1=EQUAL 3=MODERATE 5=STRONG 7=VERY STRONG 9=EXTREME

Cash Value-Added    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Residual Income

Cash Value-Added    9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9    Cash Flow ROI

Residual Income 9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9 Cash Flow ROI



Limiting our choices to three metrics for each category is simply for convenience in using a small number of paired comparisons in our illustration, and the number of metrics in each category is not constrained to three. As shown in Figure 1, the number of metrics can vary across categories. Potential metrics can be obtained through benchmarking processes, brainstorming sessions, or other methods such as historical analysis. For companies already using scorecards, their existing list of metrics is a likely place to start.

OBTAINING OUTPUT
After the decision makers complete the survey instruments, the values are put into the AHP software, called Expert Choice.15 The software computes both local weights (the relative importance of each metric within a category) and global weights (the relative importance of each metric to the overall goal) and shows the relative importance in percentage terms of all metrics and scorecard categories. A sample outcome is shown in Figure 4.

The AHP produces a separate set of decision weights for each participant. The usual process to obtain a consensus weighting for all participants is to average the decision weights across participants. Management can be confident that this summary reflects the collective consensual view of all of the respondents to the survey. For example, as shown in Figure 4, market share in the innovation and learning category is the most important metric to achieving the goal of success by pursuing a differentiation strategy.

Step two in determining overall performance involves using an algorithm to compute an overall performance score. The AHP weights developed by this summary process could be used to compare the performance of divisions that use different sets of metrics.
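The consensus step described above (averaging each participant's decision weights) is plain arithmetic. A minimal sketch follows, assuming two hypothetical respondents whose level-one weights are invented so that their averages match the category weights used in Figure 4.

```python
# Hypothetical level-one decision weights for two survey respondents
# (each participant's weights already sum to 1.00).
participant_weights = [
    {"Innovation and Learning": 0.36, "Internal Business Processes": 0.22,
     "Customer": 0.20, "Financial": 0.22},
    {"Innovation and Learning": 0.28, "Internal Business Processes": 0.28,
     "Customer": 0.22, "Financial": 0.22},
]

categories = participant_weights[0].keys()

# Consensus weight for each category = average across participants.
consensus = {c: sum(p[c] for p in participant_weights) / len(participant_weights)
             for c in categories}

for category, weight in consensus.items():
    print(f"{category}: {weight:.2f}")
# Averages of weight sets that each sum to 1.00 also sum to 1.00.
print("total:", round(sum(consensus.values()), 2))
```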



Figure 4. Sample Survey Outcome

Example: Weights are computed for each set of paired comparisons that are made. The weights for each set of comparisons are referred to as local weights, and the overall weights for the decision model are called global weights. Each set of local weights sums to 1.00, as do the global weights.

Assume that the average local weights for the scorecard categories (Level One) are as follows:

Category                         Relative Weight
Innovation and Learning          0.32
Internal Business Processes     0.25
Customer                         0.21
Financial                        0.22
Total                            1.00

This outcome would mean that the respondents believe the most important category is innovation and learning, followed by internal business processes, with the customer and financial categories about equally weighted.

Assume that the average local weights for the metrics used in each category (Level Two) are as follows:

Innovation and Learning: Market Share .40; No. of New Products .35; Revenue from New Products .25 (total 1.00)
Internal Business Processes: No. of Product Units Produced .33; Minimizing Variable Cost Per Unit .33; No. of On-Time Deliveries .33 (total 1.00)
Customer: Revenue .20; Market Share .38; QFD (Quality Function Deployment) Score .42 (total 1.00)
Financial: Cash Value-Added .28; Residual Income .32; Cash Flow ROI .40 (total 1.00)

Balanced Scorecard
Strategic Objective: Success in Pursuing a Differentiation Strategy

Multiplying the local decision weights from Level One by the local decision weights for the Level Two metrics develops the global outcome:

Categories and Metrics                       Level Two * Level One    Global Outcome
Innovation and Learning
  Market Share                               (.40 * .32)              .128
  No. of New Products                        (.35 * .32)              .112
  Revenue from New Products                  (.25 * .32)              .080
  Total: Innovation and Learning                                      .320
Internal Business Processes
  No. of Product Units Produced              (.33 * .25)              .0833
  Minimizing Variable Cost Per Unit          (.33 * .25)              .0833
  No. of On-Time Deliveries                  (.33 * .25)              .0833
  Total: Internal Business Processes                                  .250
Customer
  Revenue                                    (.20 * .21)              .042
  Market Share                               (.38 * .21)              .080
  QFD (Quality Function Deployment) Score    (.42 * .21)              .088
  Total: Customer                                                     .210
Financial
  Cash Value-Added                           (.28 * .22)              .062
  Residual Income                            (.32 * .22)              .070
  Cash Flow ROI                              (.40 * .22)              .088
  Total: Financial                                                    .220
Sum of the Global Weights                                             1.000

These results indicate that the least important metric is revenue (customer category) and the most important is market share (innovation and learning category).
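To make the arithmetic in Figure 4 easy to verify, the following sketch multiplies each metric's local (level-two) weight by its category's level-one weight, using the weights shown in the figure, and confirms that the resulting global weights sum to approximately 1.0.

```python
level_one = {"Innovation and Learning": 0.32, "Internal Business Processes": 0.25,
             "Customer": 0.21, "Financial": 0.22}

level_two = {
    "Innovation and Learning": {"Market Share": 0.40, "No. of New Products": 0.35,
                                "Revenue from New Products": 0.25},
    "Internal Business Processes": {"No. of Product Units Produced": 0.33,
                                    "Minimizing Variable Cost Per Unit": 0.33,
                                    "No. of On-Time Deliveries": 0.33},
    "Customer": {"Revenue": 0.20, "Market Share": 0.38, "QFD Score": 0.42},
    "Financial": {"Cash Value-Added": 0.28, "Residual Income": 0.32,
                  "Cash Flow ROI": 0.40},
}

# Global weight of a metric = its local (level-two) weight * its category's
# level-one weight, e.g., Market Share: 0.40 * 0.32 = 0.128.
global_weights = {(cat, metric): round(local * level_one[cat], 4)
                  for cat, metrics in level_two.items()
                  for metric, local in metrics.items()}

for key, weight in global_weights.items():
    print(key, weight)
# Total is roughly 1.0; the 0.33 local weights in the figure are rounded thirds.
print("sum of global weights:", round(sum(global_weights.values()), 4))
```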

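Step two, the overall performance score, can then be a weighted sum of a division's metric ratings. A minimal sketch follows, assuming the illustrative rating scale described in the discussion below (excellent = 1.00, good = 0.75, fair = 0.50, poor = 0.25) and an invented set of ratings for one hypothetical division; the global weights are the ones from Figure 4.

```python
# Global weights from Figure 4 (each metric's weight within the whole scorecard).
global_weights = {
    "Market Share (Innovation and Learning)": 0.128,
    "No. of New Products": 0.112,
    "Revenue from New Products": 0.080,
    "No. of Product Units Produced": 0.0833,
    "Minimizing Variable Cost Per Unit": 0.0833,
    "No. of On-Time Deliveries": 0.0833,
    "Revenue (Customer)": 0.042,
    "Market Share (Customer)": 0.080,
    "QFD Score": 0.088,
    "Cash Value-Added": 0.062,
    "Residual Income": 0.070,
    "Cash Flow ROI": 0.088,
}

# Illustrative rating scale from the discussion below; the ratings for this
# hypothetical division are invented for the example.
RATING = {"excellent": 1.00, "good": 0.75, "fair": 0.50, "poor": 0.25}
division_ratings = {metric: "good" for metric in global_weights}
division_ratings["Market Share (Innovation and Learning)"] = "excellent"
division_ratings["Cash Flow ROI"] = "fair"

# Overall score = sum of (global weight * rating); a division rated
# excellent on every metric would score 1.00 (100%).
score = sum(weight * RATING[division_ratings[metric]]
            for metric, weight in global_weights.items())
print(f"overall performance score: {score:.3f}")
```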


Because the relative AHP weights for each set of metrics sum to 1.00 (or 100%), the company could devise a performance measurement system that makes use of the weights to derive an overall performance score. For example, a division might be rated on each metric as excellent (1.00), good (0.75), fair (0.50), or poor (0.25), and those ratings then could be weighted using the AHP model. A division that received excellent ratings for all scorecard metrics would thus have an overall performance score of 1.00 (or 100%). Computing an overall performance score would work to alleviate the tendency of managers to ignore some metrics. Marlys Lipe and Steven Salterio showed that when balanced scorecards included different metrics for each division, managers who were asked to evaluate the various divisions seemed to consider only the metrics that were common to all divisions' scorecards. They also did not appear to include or consider the information related to unique scorecard metrics.16

STRENGTH OF AHP METHOD
The AHP approach is well suited to implementing the balanced scorecard process. It can be used to help companies choose appropriate metrics and derive weights of relative importance for both categories and metrics. The AHP is able to satisfy the requirements of metric choice and scorecard construction mentioned earlier. The method is simple to understand and easy to implement, and a relatively large number of possible metrics could be included. Even if the decision maker has a good understanding of the metrics and their features, the task of narrowing this list can get confusing when trying to make trade-offs among many metrics at once. As the AHP always makes comparisons in pairs, this task is reduced to a manageable level.

The AHP easily handles qualitative and quantitative metrics simultaneously while incorporating subjective elements of the choice process that may perhaps be so deeply latent to the respondents' underlying thought processes that the respondents are unable to articulate them. The AHP also is able to neatly capture the consensus of a potentially divergent group of managers and can be quickly and easily updated as desired. These facets make it a powerful yet simple tool for choosing balanced scorecard metrics. Using AHP is likely to lead to better balanced scorecard outcomes. ■

B. Douglas Clinton, Ph.D., CPA, is an associate professor of accountancy at Northern Illinois University. He can be reached at (815) 753-6804 or clinton@niu.edu.

Sally A. Webber, Ph.D., CPA, is an associate professor of accountancy at Northern Illinois University. She can be reached at (815) 753-6212 or swebber@niu.edu.

John M. Hassell, Ph.D., is a professor of accountancy at Indiana University, Indianapolis. He can be reached at (317) 274-4805 or jhassell@iupui.edu.

1 B. D. Clinton and K. C. Hsu, "JIT and the Balanced Scorecard: Linking Manufacturing Control to Management Control," Management Accounting, September 1997, pp. 18-24.
2 R. S. Kaplan and D. P. Norton, "The Balanced Scorecard—Measures that Drive Performance," Harvard Business Review, January-February 1992.
3 Clinton and Hsu, Management Accounting, p. 24.
4 R. S. Kaplan and D. P. Norton, "Putting the Balanced Scorecard to Work," Harvard Business Review, September-October 1993.
5 T. L. Saaty, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process, RWS Publications, Pittsburgh, Pa., 1994.
6 A. M. Schneiderman, "Why Balanced Scorecards Fail," Journal of Strategic Performance Measurement, January 1999.
7 Schneiderman, Journal of Strategic Performance Measurement, pp. 6-7.
8 Schneiderman, Journal of Strategic Performance Measurement, p. 7.
9 Schneiderman, Journal of Strategic Performance Measurement, p. 9.
10 M. L. Frigo and K. R. Krumwiede, "Balanced Scorecards: A Rising Trend in Strategic Performance Measurement," Journal of Strategic Performance Measurement, February-March 1999, p. 44.
11 Frigo and Krumwiede, Journal of Strategic Performance Measurement, p. 44.
12 Saaty, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process, p. 5.
13 The method described reflects a sample of how the AHP could be used in choosing metrics. Any mistakes in interpretation of the method or its use are the sole responsibility of the authors of this article and do not necessarily represent the endorsement of the creator of the AHP, Thomas L. Saaty.
14 Software used to compute the AHP decision weights is provided by Expert Choice, Inc., Pittsburgh, Pa.
15 Expert Choice, Inc., Pittsburgh, Pa., 1992.
16 M. G. Lipe and S. E. Salterio, "The Balanced Scorecard: Judgmental Effects of Common and Unique Performance Measures," Accounting Review, July 2000, pp. 283-298.

