
Research

Publication Date: 7 July 2009 ID Number: G00168499

CRM Performance Metrics That Matter Most in the Contact Center
Michael Maoz

To arrive at a list of the most critical performance metrics for customer service, both a
bottom-up approach and a top-down approach must be employed. The bottom-up
dimension examines whether or not the existing metrics are sufficient to improve the
customer experience, while the top-down approach examines the opportunities for an
enterprise to differentiate itself, and how to prioritize these opportunities.

Key Findings
• On the executive/board level, a maximum of seven key metrics should be followed, such
as revenue, shipments, orders, earnings per share and profit margins.

• Most organizations find it difficult to link CRM metrics (for example, customer
satisfaction) to basic operational metrics, such as average handling time (that is, cost),
and instead settle for cost cutting.

• The central issue of customer service representative (CSR) morale is often overlooked,
and customer satisfaction and cost savings suffer as a result.

• Social CRM, or the support of communities, can help with customer service, but few
organizations have the skills required to create a business case.

• The bottom line in business success is the degree of customer loyalty and customers'
willingness to promote your brand. For this reason, customer experience measures
should drive CRM process improvement efforts in customer service.

Recommendations

• Arrive at a list of critical CRM performance metrics before evaluating software and
technology.

• Use a cross-channel program management approach to customer metrics to avoid an
incomplete understanding of the true customer experience.

© 2009 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form
without prior written permission is forbidden. The information contained herein has been obtained from sources believed to
be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although
Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal
advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors,
omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein
are subject to change without notice.

TABLE OF CONTENTS

Analysis
1.0 Harnessing Customer Service Technology to Improve Business Performance
2.0 Setting Performance Metrics in Line With Enterprise Business Priorities
3.0 How to Get the Basic Performance Metrics Right
4.0 Agent Satisfaction Is Key to Improving Customer Service Delivery
5.0 Performance Metrics for a Multichannel Service Delivery Strategy Are Complex to Measure
6.0 Measure How Well the Service Channel Performed to Design Standards
7.0 How Contact Centers Move to Profit Centers
8.0 Securing a Budget for Community Analysis Requires a Business Case
9.0 Final Thoughts: Consider a Net Promoter Score or Similar Approach

LIST OF FIGURES

Figure 1. Setting Performance Metrics in Line With Enterprise Business Priorities

ANALYSIS

A recurring question posed by Gartner clients is: What are the right key performance indicators
(KPIs) and metrics by which we should evaluate our contact center, in particular, and customer
service, overall? This is not a simple question to answer, because there are several types of
measures of success, none of which should be left out.
There should be a bottom-up approach to metrics and a top-down approach. The bottom-up
dimension looks at what the enterprise measures already, and makes sure that these
measurements collectively lead to improvements in the customer experience as a whole. Think
of this as a contextual approach (see "The Benefits of Contextual Performance Management in
the Contact Center"). The new, contextual performance metrics that need to be incorporated into
customer service and a CRM program are:

• Multichannel success

• Service process success relative to design

• Successful management of communities

• CSR job satisfaction level

• Overall customer experience level


Our research in customer experience focuses on four tiers of metrics:
1. Executive/board level: Here, we see an average of seven key metrics. They tend to all
be financial, such as revenue, shipments, orders, earnings per share and profit margins.
2. CRM or call center level: These are more tactical, and include upselling and cross-
selling, customer satisfaction levels, the cost of service delivery, the cost of marketing,
campaign response rates, new customers acquired, wallet retention and customer
retention. These are key indicators of the health of a channel or, sometimes, of a single
process.
3. Measuring processes across multiple interaction channels (contextual metrics): We look
at these in detail in this piece of research.
4. Basic/operational metrics, such as customer wait time in the contact center (see
Section 3.0, How to Get the Basic Performance Metrics Right).
Although many businesses and organizations succeed in tying executive metrics (No. 1 above) to
the CRM level (No. 2 above), it is very difficult to tie the CRM metrics to the basic operational
metrics, such as average handling time (No. 4 above). As an example, look at the scenario in
which the cost of service is ranked as the second most vital metric to the customer service
organization, while customer satisfaction is ranked the top priority. (See "Fifty Things to Do Right
Now to Improve the Customer Experience" for more in-depth coverage on moments of truth and
CRM.)
The primary driver is internal: identifying the biggest contributors to the cost of service. The
second is external, and very different: the drivers of customer satisfaction. Both sets of drivers
need to be outlined at Level 3, where they manifest as measures such as billing process
accuracy, on-time delivery, complaint handling, case setup and policy adherence. There would
also be input from Level 4 metrics. Clearly, many of the Level 3 metrics are not under the
complete control of the customer service organization. Therefore, multichannel,
multidepartmental root cause analysis becomes key.
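
Because this kind of root cause analysis spans departments and data sources, a minimal sketch
may help make it concrete. The Python snippet below uses hypothetical metric names and monthly
values, and simple correlation stands in for whatever driver analysis an organization actually
uses; it ranks Level 3 process metrics by their association with a Level 2 outcome such as
customer satisfaction.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly Level 3 process metrics and a Level 2 outcome (CSAT).
level3 = {
    "billing_accuracy_pct":    [96.1, 95.4, 97.2, 94.8, 96.5, 95.0],
    "on_time_delivery_pct":    [91.0, 89.5, 93.2, 88.1, 92.4, 90.3],
    "complaints_reopened_pct": [7.2, 8.1, 5.9, 9.4, 6.3, 8.0],
}
csat = [78.0, 75.5, 81.2, 73.9, 80.1, 76.2]  # Level 2 customer satisfaction

# Rank the process metrics by strength of association with satisfaction.
for name, series in sorted(level3.items(),
                           key=lambda kv: -abs(pearson(kv[1], csat))):
    print(f"{name:25s} r = {pearson(series, csat):+.2f}")
```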

1.0 Harnessing Customer Service Technology to Improve Business Performance

Engineering 101 always begins with the words, "form follows function." Essentially, a square
wheel is not optimal for riding smoothly down a highway. From the top down, organizations should
always be posing the business questions: What are our opportunities for differentiation, and how
do we prioritize these opportunities? For example, an enterprise may decide that its priorities are:

• Customer satisfaction

• Profitability

• Agent churn

• Cost to serve

• Client retention

• Reputation/net promoter

• Upsell/cross-sell
The next step is to establish exact values for each of the priorities. Unless there are discrete
metrics that can be measured, current ability and need for improvement will remain vague. The
metrics are often (but not always) based on industry standards (see Figure 1).

Figure 1. Setting Performance Metrics in Line With Enterprise Business Priorities

[Figure: a chart rating examples of opportunities for differentiation (decision support, agent
navigation, WFO, business process/rules engine, customer community, predictive analytics and
collaboration tools) against the competition on a 0-to-5 scale (ratings ranging from 3.7 to 4.5),
shown alongside enterprise business priorities such as profitability, agent churn, cost to serve,
retention, net promoter, upsell and customer satisfaction.]

Source: Gartner (July 2009)

2.0 Setting Performance Metrics in Line With Enterprise
Business Priorities
Many of these values can and should be set in consultation with customers and prospects.
Leading companies look at the top five to seven areas for differentiation, and then go out and
measure current capabilities on a scale of 1 to 5. The scores correspond to performance as
perceived by the customer through surveys, focus groups and community feedback.
Once the current level of competency is decided upon, you can examine each of the priorities to
determine how you are performing against your required/desired state. For example, one
consideration is how well your competitors are doing in a similar area. Once this work is done,
using the ideas discussed, you are in a position to examine how much work needs to be put into
improving any KPI. Often a simple process change is enough to bolster or dramatically improve a
metric. But, in other cases, technology is required to automate, streamline or enable a capability.
For example, surveys might reveal that customer satisfaction is negatively impacted because
there is no simple way to click and connect with a CSR while navigating the corporate website.
Collaboration tools might be required to address this issue and support the process improvement.
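
To make the gap analysis tangible, here is a hedged Python sketch under assumed inputs: the
current and target ratings on the 1-to-5 scale, the competitor benchmarks and the weights are all
illustrative, and the scoring rule is one possible convention rather than a standard formula.

```python
# Hypothetical ratings on the 1-to-5 scale for the priorities listed earlier;
# current, target and competitor values, and the weights, are illustrative only.
priorities = [
    # (name,                  current, target, competitor, weight)
    ("Customer satisfaction",    3.2,    4.5,     4.0,       0.20),
    ("Profitability",            3.6,    4.0,     3.8,       0.15),
    ("Agent churn",              2.8,    3.8,     3.4,       0.10),
    ("Cost to serve",            3.8,    4.0,     3.6,       0.15),
    ("Client retention",         3.5,    4.3,     4.1,       0.15),
    ("Reputation/net promoter",  2.9,    4.0,     3.8,       0.15),
    ("Upsell/cross-sell",        3.0,    3.5,     3.3,       0.10),
]

def gap_score(current, target, competitor, weight):
    """Weight the gap to the desired state; add pressure where competitors lead."""
    gap = max(target - current, 0.0)
    competitive_pressure = max(competitor - current, 0.0)
    return weight * (gap + 0.5 * competitive_pressure)

# Rank the priorities by how much improvement work they appear to need.
for name, cur, tgt, comp, w in sorted(priorities, key=lambda p: -gap_score(*p[1:])):
    print(f"{name:25s} current={cur:.1f} target={tgt:.1f} "
          f"improvement score={gap_score(cur, tgt, comp, w):.2f}")
```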
By posting the most important KPI for your service organization, showing its alignment with
enterprise objectives, and demonstrating the current state of proficiency for each of the
measures, you are making a strong statement about what is most important to the organization,
why you are making the investments that you are making, and how you expect those investments
to help you improve the business (see "CRM Contact Center Customer Service Tools That Drive
Business Benefit").
Before examining the correct mix of KPIs, the enterprise needs to decide who is responsible for
running a performance measurement program. We recommend combined responsibility among
marketing, sales, customer service and IT, because it reflects the changing nature of how customers
and competitors rate customer service delivery and the customer experience in CRM programs.

3.0 How to Get the Basic Performance Metrics Right


Most organizations rely on the director of customer service to gather KPIs that are primarily
operational and tactical (a brief computation sketch follows the list below), such as:

• Average wait time

• Average handling time and average wrap-up time

• Labor cost per call

• Number of calls handled per agent per hour

• Distribution of calls over the day/month/year

• Number of calls handled per team per shift

• Percentage of calls that were "one and done" versus handed off/escalated

• Number of agent touches until case was resolved

• Customer satisfaction level with a call's resolution

• Customer satisfaction with agent knowledge/attitude

• Workforce utilization rate (correct number of agents per shift)
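
As promised above, here is a brief, hypothetical sketch of how several of the KPIs in the list
could be computed from raw call records; the record fields, sample data and shift length are
illustrative assumptions, not a reference implementation.

```python
from statistics import mean

# Hypothetical call records:
# (agent_id, wait_sec, handle_sec, wrap_sec, resolved_on_first_contact)
calls = [
    ("a01", 45, 310, 40, True),
    ("a01", 80, 420, 55, False),
    ("a02", 30, 250, 35, True),
    ("a02", 65, 500, 60, True),
    ("a03", 20, 280, 30, False),
]

avg_wait = mean(c[1] for c in calls)
avg_handle = mean(c[2] + c[3] for c in calls)          # handling + wrap-up time
one_and_done = sum(c[4] for c in calls) / len(calls)   # "one and done" rate

shift_hours = 8                                        # assumed shift length
agents = {c[0] for c in calls}
calls_per_agent_hour = len(calls) / (len(agents) * shift_hours)

print(f"Average wait time:        {avg_wait:.0f} sec")
print(f"Average handle + wrap-up: {avg_handle:.0f} sec")
print(f"One-and-done rate:        {one_and_done:.0%}")
print(f"Calls per agent per hour: {calls_per_agent_hour:.2f}")
```
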
We see a significant shift from general KPI measures to contextual performance management
metrics. Contextual means that a given operational KPI will be weighed against a business
objective to understand the value of the metric. For example, average call talk time may improve
by 30 seconds or 12%, which appears good, until the broader context of measuring the change in
customer defection, average customer spending and customer satisfaction is examined. This
requires a broader team, including sales, marketing and IT, because the way that this data is
arrived at, and the steps that will need to be taken, cut across and impact customer service and
these other functional areas.
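
One possible way to make that contextual check concrete is sketched below: it compares
before/after snapshots of an operational KPI (average talk time) against business-level
measures. The field names, figures and tolerance threshold are assumptions for illustration only.

```python
# Hypothetical quarter-over-quarter snapshots; field names are illustrative.
before = {"talk_time_sec": 250, "defection_rate": 0.040, "avg_spend": 118.0, "csat": 0.82}
after  = {"talk_time_sec": 220, "defection_rate": 0.047, "avg_spend": 112.0, "csat": 0.79}

# For each business measure, record whether a lower value is better.
business_measures = {"defection_rate": True, "avg_spend": False, "csat": False}

def contextual_review(before, after, tolerance=0.02):
    """Report the operational gain and any business measures that worsened
    by more than the tolerance (as a fraction of their prior value)."""
    talk_gain = (before["talk_time_sec"] - after["talk_time_sec"]) / before["talk_time_sec"]
    worsened = []
    for name, lower_is_better in business_measures.items():
        change = after[name] - before[name]
        if not lower_is_better:
            change = -change  # a drop in spend or CSAT counts as worsening
        if change > tolerance * abs(before[name]):
            worsened.append(name)
    return talk_gain, worsened

gain, worsened = contextual_review(before, after)
print(f"Talk time improved {gain:.0%}; business measures that worsened: {worsened or 'none'}")
```

In this illustration, the 12% talk-time gain from the example above coincides with worse
defection, spending and satisfaction figures, which is precisely the signal a contextual review is
designed to surface.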

4.0 Agent Satisfaction Is Key to Improving Customer Service Delivery

Many organizations either do not measure the satisfaction of CSRs, or measure but do little with
the information. The increased use of self-service technologies has resulted in less opportunity
for an enterprise to hear directly from the customer in the form of real-time telephone
conversations. That makes these remaining calls much more critical to maintaining a rapport with
the customer and best reflecting corporate goals.
When customers perceive that the CSR is rushed, unmotivated, poorly equipped or trained to
manage the interaction, or slow in responding, customer satisfaction drops. Correspondingly,
Gartner benchmark data shows that well-trained and motivated CSRs drive customer loyalty. We
suggest that KPIs include agent job satisfaction and a record of agents completing specific
coursework to maintain their skills. You will want to compare all the agent populations. Many
large organizations have multiple contact centers, some insourced and others outsourced. There
is also a growing use of home-sourced (or work-at-home) CSRs.
Organizations need to compare the performance of the work-at-home agents against the
performance levels of the agents in formal contact centers to uncover any anomalies.
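
A minimal sketch of such a comparison, assuming hypothetical per-agent averages and arbitrary
anomaly thresholds, might look like the following: it groups agents by population (in-house,
outsourced, home-based) and flags any population whose satisfaction or handling time deviates
markedly from the overall average.

```python
from statistics import mean

# Hypothetical per-agent averages: (population, csat_score, handle_time_sec)
agents = [
    ("in_house",   0.84, 310), ("in_house",   0.81, 330),
    ("outsourced", 0.74, 290), ("outsourced", 0.77, 300),
    ("home_based", 0.86, 345), ("home_based", 0.83, 335),
]

overall_csat = mean(a[1] for a in agents)
overall_aht  = mean(a[2] for a in agents)

for pop in sorted({p for p, _, _ in agents}):
    rows = [a for a in agents if a[0] == pop]
    csat, aht = mean(r[1] for r in rows), mean(r[2] for r in rows)
    flags = []
    if abs(csat - overall_csat) > 0.05:            # arbitrary illustrative threshold
        flags.append("CSAT anomaly")
    if abs(aht - overall_aht) > 0.10 * overall_aht:
        flags.append("handle-time anomaly")
    print(f"{pop:11s} csat={csat:.2f} aht={aht:.0f}s {', '.join(flags) or 'ok'}")
```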

5.0 Performance Metrics for a Multichannel Service Delivery Strategy Are Complex to Measure

The emergence of multichannel customer service strategies complicates the analysis of customer
service KPIs, because a specific channel may be providing high customer satisfaction, while the
overall service experience is not. The assistance of team members from sales, marketing and IT
is needed to create and maintain a broader program of KPI analysis. For example, many
telephone wrap-up surveys ask the customer about his or her feelings regarding a recent service
call. This is tactical questioning. Whereas the customer may give high marks for the specific,
isolated service experience, he or she may not have been asked about the broader context.
Consider a case wherein the customer had started the process of problem resolution or inquiry on
the Internet. He or she may have searched there for an answer to a problem and failed. The
customer then launched a frustrating chat session. That, too, may have failed, so the customer service
contact center was called. Here, the customer may have worked through an interactive voice
response (IVR) set of key choices, then waited in queue for a service agent who, upon
connection, asked the customer to repeat his or her reason for calling. Let us assume that the
CSR provided an excellent service. The customer is happy with the CSR and the call. When the
customer agrees to take the telephone survey specifically about the success of the recently
completed telephone interaction, he or she is likely to answer the question with a positive rating of
the agent. However, the customer may be negatively inclined to the business and unhappy
overall with the service experience, but was not asked this broader question. This is an example
of contextual performance versus tactical customer feedback management.
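
A small illustration of the difference follows, using an assumed journey data model and an
arbitrary penalty for each failed hand-off; it is a sketch of the idea, not a recommended
scoring method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Touchpoint:
    channel: str                          # "web_search", "chat", "ivr", "agent"
    resolved: bool                        # did this channel end the journey?
    satisfaction: Optional[float] = None  # survey score, asked only on some channels

# The scenario from the text: failed web search, failed chat, IVR, then a good call.
journey = [
    Touchpoint("web_search", resolved=False),
    Touchpoint("chat", resolved=False),
    Touchpoint("ivr", resolved=False),
    Touchpoint("agent", resolved=True, satisfaction=0.95),
]

# Tactical view: the wrap-up survey only sees the final telephone interaction.
tactical_score = journey[-1].satisfaction or 0.0

# Contextual view: apply an (arbitrary) penalty for each failed hand-off beforehand.
failed_hops = sum(1 for t in journey if not t.resolved)
contextual_score = max(0.0, tactical_score - 0.2 * failed_hops)

print(f"Tactical (call-only) satisfaction:   {tactical_score:.2f}")
print(f"Contextual (whole-journey) estimate: {contextual_score:.2f}")
```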

6.0 Measure How Well the Service Channel Performed to Design
Standards
Multichannel design is difficult to engineer, so constant feedback from customers is necessary. It
is one thing to measure the client's satisfaction with a specific interaction, but another thing to set
a KPI that exposes how well each service channel performed to the design standard for an
overall improved customer experience. For example, if the website has an advanced search
engine that was designed to eliminate the need to chat or call into the contact center, then a KPI
needs to graphically illustrate how often that was (or was not) the case. The same concept
applies to each aspect of service delivery. Some examples include:

• How often did website navigation and accurate content succeed in eliminating a call? In
Web-based knowledge management, this is referred to as "relevance of response" (see
"Use Contact Center Agent Knowledge for Self-Service Cost Savings" and "Gartner's
Strategic CRM Framework for E-Services").

• How often did a searchable FAQ solve the problem?

• Which customer searches failed? Could there have been content created to solve the
issue? Was it created? What was the impact?

• How often was the chat session the final stop in resolving a service issue?

• How often did the IVR resolve the issue? In the cases where it failed, exactly where did
it fail and why?
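
One way to sketch a performed-to-design KPI such as call deflection or channel containment is
shown below; the session log fields and figures are assumptions for illustration, and the report
simply shows how often each channel was the final stop rather than escalating to a costlier one.

```python
from collections import Counter

# Hypothetical self-service sessions: (entry_channel, escalated_to), where
# escalated_to is None when the entry channel resolved the issue ("final stop").
sessions = [
    ("web_search", None), ("web_search", "chat"), ("web_search", None),
    ("faq",        None), ("faq",        "call"),
    ("chat",       None), ("chat",       "call"), ("chat", None),
    ("ivr",        None), ("ivr",        "agent"), ("ivr", "agent"),
]

total = Counter(s[0] for s in sessions)
contained = Counter(s[0] for s in sessions if s[1] is None)

for channel in sorted(total):
    rate = contained[channel] / total[channel]
    print(f"{channel:10s} containment (performed to design): {rate:.0%}")
```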

7.0 How Contact Centers Move to Profit Centers


In the transition from cost center to profit center, some KPIs will be in the context of customer
segments. Rather than view the service level received by all customers and the impact on the
business, some organizations will want to cross-reference the analysis by customer type (for
example, a Tier 1 customer versus a Tier 2 or Tier 3 value customer). You will also need to run
an analysis of what customer needs are and how (and whether) the needs/expectations differ by
segment, and then decide what action to take given the outcome of the surveys. Only the
participation of marketing and sales, together with customer service and IT, can make this
possible. Most service interactions may require a certain score, regardless of the customer
segment, while for a small number of interactions, there could be processes reserved for a
particular customer segment (for example, a segment that fits a profile for profitable cross-selling,
or a segment identified as at risk of defecting that requires extra attention to nudge it back to
higher satisfaction).
There are emerging KPIs that are derived from the increased visibility and proliferation of online
communities, and the ongoing desire to convert the contact center from a cost center to a profit
center. One of these is reputation. For example, what is the reputation of the contact center in the
eyes of online communities or forums?
There are two parts to creating useful, actionable knowledge from communities. The first is
deciding whether to listen to (or observe) existing communities that are independent of your
control, or to establish and support a community of your own. When you listen only, you will have to
dedicate a resource from within the customer service organization, usually in coordination with
marketing.
If you are monitoring external communities, then you will likely want to consider a third-party
expert (for example, J.D. Power: www.jdpowerwebintelligence.com/index.php). It requires
training and experience to know where to look for conversations about your company. Google

search, Twitter, Facebook, Jigsaw, LinkedIn, SecondLife and MySpace are only some of the
more public tools and platforms where you can find, or seek out, comments about your
company. There are many others, but you will have to decide what
constitutes meaningful data from which you can draw insight and what is extraneous chatter.
Analyzing and creating insight from communities is a new field, so finding trained resources
remains a challenge.

8.0 Securing a Budget for Community Analysis Requires a Business Case

Social media is emerging as an important means of gauging business pattern sensitivity, or how
responsive an enterprise is to customer need. An effective type of social media is the online
community, either a private (branded, invitation-only) community or an external community not
under the control of the enterprise. The ability to leverage a community to focus on customer
service process improvement is proving as impactful as complex analytical approaches.
If you have your own community (or communities), then you will want to work with whoever
manages that community to examine posts, or even actively pose questions to the community as
a way of ascertaining the consensus about your service levels. You can use service
improvements as a basis for building a larger business case for the funding of communities where
the goals span service, marketing and sales functions. This work will require a dedicated
resource, which, in turn, requires budgeting.
To win support for such a community initiative, you will need to use language appropriate to a
business problem. For example, if you are considering using the community to collaborate on the
building of service solutions, then you can use hard economics. For example: the anticipated cost
to maintain this function is $56,000 per year; we expect to deflect 5% of customer service cases
this year; we have 100,000 cases per year, at a cost of $18 per case to analyze, solve and post;
we therefore anticipate saving $90,000 (5,000 cases x $18/case), while improving customer
satisfaction and generating ideas from the community. The closer the figures are to actual budget numbers, the greater the
likelihood of project approval.
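
For readers who want to reproduce the arithmetic, a minimal sketch using the figures cited above
follows; the net-benefit line is an addition for presentation purposes rather than part of the
original case.

```python
annual_cost = 56_000          # cost to maintain the community function per year
cases_per_year = 100_000
deflection_rate = 0.05        # 5% of cases deflected to the community
cost_per_case = 18            # cost to analyze, solve and post a case

deflected_cases = cases_per_year * deflection_rate        # 5,000 cases
gross_savings = deflected_cases * cost_per_case           # $90,000
net_benefit = gross_savings - annual_cost                 # $34,000

print(f"Deflected cases: {deflected_cases:,.0f}")
print(f"Gross savings:   ${gross_savings:,.0f}")
print(f"Net benefit:     ${net_benefit:,.0f}")
```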

9.0 Final Thoughts: Consider a Net Promoter Score or Similar Approach

One of the most basic results of delivering customer service is to win customer loyalty. The
concept behind the Net Promoter Score (NPS) is to keep an accurate gauge of each customer's
likelihood of staying loyal and suggesting to his or her friends, family or cohort group that they,
too, would benefit from your product and service. The power of social networks magnifies the
impact of a customer who is a willing (and unsolicited) promoter of your product/service. As such,
a measure of a customer's loyalty (such as NPS) should be considered part of your overall KPIs.
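
As a reference point, the standard NPS calculation classifies 0-to-10 "likelihood to recommend"
responses into promoters (9 or 10), passives (7 or 8) and detractors (0 through 6), and subtracts
the percentage of detractors from the percentage of promoters. A minimal sketch with made-up
survey responses:

```python
def net_promoter_score(responses):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6) on 0-10 responses."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Hypothetical post-service survey responses.
survey = [10, 9, 9, 8, 7, 7, 6, 5, 10, 4, 9, 8]
print(f"NPS: {net_promoter_score(survey):+.0f}")
```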

REGIONAL HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
U.S.A.
+1 203 964 0096

European Headquarters
Tamesis
The Glanty
Egham
Surrey, TW20 9AW
UNITED KINGDOM
+44 1784 431611

Asia/Pacific Headquarters
Gartner Australasia Pty. Ltd.
Level 9, 141 Walker Street
North Sydney
New South Wales 2060
AUSTRALIA
+61 2 9459 4600

Japan Headquarters
Gartner Japan Ltd.
Aobadai Hills, 6F
7-7, Aobadai, 4-chome
Meguro-ku, Tokyo 153-0042
JAPAN
+81 3 3481 3670

Latin America Headquarters


Gartner do Brazil
Av. das Nações Unidas, 12551
9° andar—World Trade Center
04578-903—São Paulo SP
BRAZIL
+55 11 3443 1509

