
Service Desk Metrics

By Daniel Wood
Head of Research
SDI

©Copyright SDI 2010 All Rights Reserved


Table of Contents

WHY SHOULD WE CARE?

THE COMMON PROBLEMS

HOW TO MEASURE METRICS

BEFORE YOU BEGIN

TAKE OWNERSHIP

WHAT METRICS TO MEASURE

HOW TO MEASURE THE INTANGIBLES

STATUS DATA

AVERAGE COST PER CALL/PER E-MAIL

HOW TO CALCULATE COST PER CALL/PER E-MAIL

CALL ABANDON RATE

LONGEST DELAY IN QUEUE (LDQ)

SELF SERVICE AVAILABILITY

TIME TO RESPOND AND RESOLUTION DATA

FIRST TIME FIX RATE

FIRST LEVEL FIX (FLF)

CALL DURATION

INCIDENT TURNAROUND TIME (TAT)

COMMUNICATIONS ETIQUETTE

ANALYST AVAILABILITY

STAFF SHRINKAGE

FINANCIAL METRICS

INDUSTRY METRICS

REPORTING THE BUSINESS VALUE OF YOUR SERVICE DESK

THE FUTURE



Introduction

One of the areas in which Service Desks have traditionally been very weak is reporting
their metrics. But what exactly do we mean by metrics? The most common and literal
understanding is a measurement of common Service Desk measures: first time fix rate,
call volume and call duration, for example, are the ones that desks use the most.
Metrics can also pertain to other measures, though, as there are financial and
performance measures that desks can use. Examples of performance metrics are
customer satisfaction and user productivity. Financial metrics take on a more strategic
perspective and include measures such as direct costs, investment and ongoing project
costs. Financial metrics may seem to many Service Desks to be beyond their immediate
remit but, as will be demonstrated, desks need to take control of these metrics.



Why should we care?

The simple answer to this question is that Service Desks need to know how much the
services they provide actually cost: the reality is that very few desks have a tangible grasp
of these figures. The recent SDI benchmarking survey found that 74 percent of
respondents did not know the average cost per e-mail (fully loaded) and 81 percent had
no idea how much each incoming call was costing their Desk. These two channels are the
most popular for customers to contact the Desk, so without a firm handle on how much
each interaction costs, there is no possibility of the Desk being able to communicate its
value to the rest of the organisation.

One may ask: why is it important to understand the value of IT to the business? The
answer here is simple; if you don’t know what your value is then it becomes much harder
to justify any budget increases for your Desk, or tangible reasons why you need to retain
staff (or indeed employ more). If desks are unable to measure and communicate their
value, then the stigma of IT being a ‘cash drain’ will remain in vogue. Furthermore, with
companies tightening their belts, reducing spending on IT might be near the top of the
cull list – if companies don’t know the value of IT then it will be more likely that Service
Desk budgets will be cut in preference to areas of the business that do provide tangible
evidence of their value.

There are two main reasons why Desks find it difficult to establish their business value.
Firstly, they find it hard to establish what metrics they should be measuring, how to
measure them, and what they should do with the results. Secondly, the majority of Desks
are concerned with reporting how they spend money, not on determining the value that
these expenditures actually provide. The good news is that the fixes to these problems
are relatively straightforward: with a few simple metrics measurements the business will
have a much greater understanding and appreciation of the value of the Service Desk.

A further reason is that the Service Desk – as its name implies – is primarily
concerned with delivering a service to its users and customers. One of the ways to
achieve this is to manage their expectations by setting in place contracts. In doing so,
both parties know what they are expected to deliver and when the resolution can be
expected. Measuring this data will demonstrate whether these deadlines are consistently being
met, and therefore if a good service is being delivered. If targets are consistently
breached then this indicates that there are some key problems that need to be addressed
as a matter of urgency – if you are not measuring this data then you will not know if this
is indeed the case.

The ultimate answer to the question of ‘why should we care?’ is that if you don’t know
how much your services cost or if you are delivering a good service that meets
expectations then you can expect some tricky times ahead. With IT being viewed as the
perennial cash-drain it has never been more important to demonstrate your value both
from a financial and customer perspective.



The Common Problems

It is surprising how many desks share the same problems when it comes to the measuring
and reporting of metrics. Some of these are:

• Confusion over how to translate customer satisfaction into a tangible business
value.

• Difficulty in proving the value of the services provided and demonstrating how
and why the Service Desk operates as it does.

• Problems in forecasting costs and expenditure due to too many variables.

• Conflict between Service Desk KPIs and customer expectations.

• Hard to appreciate service improvement opportunities when consumed with the
day-to-day running of the Service Desk.

• Problems arising from the lack of a common terminology and use of the wrong
words and meanings.

• Analysts reporting metrics in different ways or not reporting them at all.

• No clear owner for reports and for metrics reporting.

• Lack of communication with the business and difficulty in translating metrics into
terminology that is useful to the business.

• Difficulty in understanding what metrics should be measured and what to do
with the information when it has been collected.



How to Measure Metrics

Part of the problem for Desks in measuring metrics is that they measure them at a fixed
point in time. The value of IT at any one point in time is essentially zero at best and often
negative. This is because the actual value of IT is very hard to measure and convey
because ongoing projects may not yet be completed – thus all that is demonstrated is
expenditure and not cost-savings and other long-term benefits. It is little wonder,
therefore, that IT is stigmatised as a cash drain when all it seems to do is spend with
scant return on investment. Determining a monetary figure for getting users back to
work and productive is also difficult, as it involves so many factors, both tangible
and intangible. Rather than attempting to define how they are helping the business and
conveying the value that they provide, Service Desks are usually concerned with
demonstrating that IT is not wasting money and is not disrupting the business.

The real value of the Service Desk is better determined over a period of time. Value
comes from a sustained change in business performance rather than a single
transformation. This will only become apparent by comparing performance over different
periods of time. For example, you could compare call abandon rates, or cost per e-mail
and see how these tally with results a year ago. If there are improvements, then these
findings can be communicated to the rest of the business. If there is no change, or results
are actually down on last year, then it will highlight the areas that are in need of
improvement. Either way, a comparison of different periods is the only practical way of
gauging changes in value – a fixed point in time will not provide the results that are
required. Comparing different periods also enables desks to see the impact of new
business capabilities on business performance.
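The period-over-period comparison described above can be sketched as a simple percentage-change calculation. The figures here are invented for illustration only:

```python
# Sketch of comparing a metric across two reporting periods. A negative
# result means the metric has fallen since the previous period.

def percent_change(previous, current):
    """Percentage change from the previous period to the current one."""
    return 100.0 * (current - previous) / previous

# e.g. a call abandon rate of 8% a year ago versus 6% now (invented figures)
change = percent_change(8.0, 6.0)
print(round(change, 1))  # -25.0: abandon rate is down a quarter on last year
```

For an improvement metric such as abandon rate, a negative change is the finding you would communicate to the business; the same function works for cost per e-mail, call duration, or any other comparable measure.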

Demonstrating value will have to be the top priority for many Service Desks with budgets
set to be slashed, and companies looking to make significant savings. Thus, it has never
been more important to make sure that you know what metrics to measure and how and
understand how they can be used to demonstrate your value. Perhaps the government’s
austerity measures will offer a wake-up call to many Service Desks, and this guide will
help you to navigate the minefield.



Before you Begin

Before we delve into the area of metrics, it is important to assess why we want to
measure them in the first place, and how they can be of importance to the business. In
a sense then, we are beginning with the end in mind. Below are some key steps to take
before embarking on measuring metrics:

• Determine business needs: It must be determined what the business expectations
of the Service Desk are and, vice versa, what the Service Desk expects from the
business. Once this is determined, it will be much easier to see where the desk is
failing or where it is performing particularly well. It will also facilitate a debate as
to whether expectations on both sides are unrealistic and, if so, how more
appropriate assessments can be made.

• Review previous experience: This is important as it will determine the previous
performance of the Service Desk. If metrics have been measured before then a
new audit will reveal whether the desk is improving or going backwards. If no
metrics have been taken before then a history of the desk can be charted,
including problems and difficulties that the desk has encountered. This will form the
basis of the business and account plan.

• Review existing Service Level Agreements: This is vital in assessing whether the
desk has been performing admirably or poorly. The results of this review will
provide input into an account profile and evaluation of processes. Outlined below
is a summary of what the audit should include.

1. Business unit or subunit profiled
2. Applications supported and associated SLAs
3. Other services provided
4. Composite or summary service satisfaction query responses
5. SLA audit results
6. Gaps between service expectations and actual services provided
7. Future service needs

• Determine Satisfaction Level: The purpose of this procedure is to develop a broad
view of the overall satisfaction the user is experiencing with the Service Desk.

• Determine Expectations Gap: The purpose of this procedure is to complete the
account profile by analysing the information it contains and highlighting the gaps
uncovered between the user’s expectations and the services provided by the
Service Desk.

Once the above steps have been undertaken then a full analysis of the desk’s
performance can be created, using a variety of metrics measurements.



Take Ownership

One of the most common problems with reporting metrics is making someone
responsible. One individual member of the team (not necessarily the Service Desk
Manager, although this is probably the most likely and pragmatic candidate) needs to be
accountable for the measuring and reporting of metrics. This is a large task, but it is
important that it is not a shared role: if responsibility is shared, there is no single point
of contact and it becomes difficult to determine who is ultimately accountable.

What Metrics to Measure

It should come as little surprise that metrics is a hard subject to tackle. One of the reasons
that it is so difficult is because there are many tangible and intangible measures that are
involved in the service delivered by the Service Desk. It is the intangible measures that
provide the greatest challenges. For example, how do we affix a quantitative measure to
customer satisfaction? Exactly how can Service Desks communicate the business benefits
of happy customers to the rest of the organisation? Service Desks may use customer
satisfaction as a way of meeting their own targets and KPIs, but they have no way of
translating it into a statistic that conveys its value to the business in the way that ROI can,
so, more often than not, the intangibles are left out of the mix completely.

Standard

Standard metrics are the ones that most desks will be familiar with such as cost per call/e-
mail, call duration, and first time fix rate. They are termed standard metrics because they
are the ones that every desk should be measuring. At the bare minimum, desks need to
know the cost of every interaction and the average time that it takes to resolve an
incident.

Performance

These metrics are ones that delve deeper into the actual impact your Service Desk has on
your customers. The most important of these are customer satisfaction and user downtime.
They measure how satisfied users are with the service provided by your desk and how
quickly they can get up and running again after being assisted by the Service Desk. Whilst
the majority of desks measure these metrics, very few assign them a monetary value.
Having a high customer satisfaction rate is important, but it is much better to present this
to the business in money terms. A method of achieving this is detailed in the next
section.



Financial

Financial metrics appear to many desks to be outside of their remit as they are considered
to be the domain of financial directors. However, there is considerable scope for the desk
to become more involved in measuring the financial side of their service delivery. For
example, they can report on staffing costs, investment costs and depreciation. Being
cognizant of these figures will make a much stronger business case for future investment
in technology or staff.

How to Measure the Intangibles

The best way of measuring intangibles like customer satisfaction or perception of the
Service Desk is to assign weighting criteria that include such things as customer need,
business and technical risks, strategic fit, revenue potential, and level of required
investment. In this way, each aspect can be assigned a different numerical score and the
weighted totals can be added to give a quantitative total for its business value.
Weightings are assigned based on each criterion’s importance given the ongoing
business strategy and business environment. So, for example, in the case of customer
satisfaction, customer need would be given a higher weighting than business and
technical risks. However, strategic fit, revenue potential (external desks only) and required
level of investment would also receive a sizeable weighting as they are critical to
customer satisfaction – if customer satisfaction is low, but it is a business priority to
increase this measure, then the strategic fit will receive a low rating. Companies can
determine their own weighting system and the categories included in it, but this offers a
practical way for the intangibles to be afforded a measure and rating.
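The weighted-scoring approach above can be sketched in a few lines. The criterion names, weights, and scores below are purely illustrative assumptions — as the text says, each company determines its own weighting system:

```python
# Hypothetical weighted-scoring sketch for an intangible measure such as
# customer satisfaction. Criteria, weights, and scores are invented.

def weighted_business_value(scores, weights):
    """Combine per-criterion scores (0-10) using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[c] * weights[c] for c in weights)

weights = {
    "customer_need": 0.35,            # weighted highest, per the example above
    "strategic_fit": 0.25,
    "revenue_potential": 0.20,        # external desks only
    "required_investment": 0.15,
    "business_technical_risk": 0.05,  # weighted lowest, per the example above
}
scores = {
    "customer_need": 8,
    "strategic_fit": 6,
    "revenue_potential": 4,
    "required_investment": 7,
    "business_technical_risk": 5,
}
value = weighted_business_value(scores, weights)
print(round(value, 2))  # a single quantitative total for business value
```

The single total this produces is what makes the intangible reportable: it can be tracked period over period like any other metric.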



Status Data

Status data provides a snapshot of the data being handled by the Service Desk at any
given point in time. It is useful as it provides an overview of the conditions currently faced
by the Service Desk. Recording status data at set dates and times will allow you to gain a
better understanding of the value of your Service Desk as it will allow you to report on
performance and note any changes or transformations. It is also useful because, as
noted, the value of IT is often negative because of ongoing investment that has yet to
bear the fruits of financial returns. Therefore, a snapshot provides a much better
understanding of what is actually going on and the current challenges facing the desk.

Some of the metrics that might be measured in the status data process are:

• The number of active incidents, problems, and changes.
• The status of active incidents (i.e. awaiting customer feedback, sent to second
line, waiting for component etc.)
• The number of incidents assigned per analyst or to each member of the second
and third line.
• The number of current major incidents or problems.
• How current incidents, problems and changes are assigned throughout the team.
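A status-data snapshot like the one listed above is essentially a set of tallies taken at a point in time. A minimal sketch, with invented status labels and analyst names:

```python
# Minimal status-data snapshot: tally active incidents by status and by
# assigned analyst. Incident records and field names are illustrative.

from collections import Counter

active_incidents = [
    {"status": "awaiting customer feedback", "assignee": "amy"},
    {"status": "sent to second line", "assignee": "ben"},
    {"status": "waiting for component", "assignee": "amy"},
    {"status": "sent to second line", "assignee": "cal"},
]

by_status = Counter(i["status"] for i in active_incidents)
by_analyst = Counter(i["assignee"] for i in active_incidents)

print(len(active_incidents))              # total active incidents right now
print(by_status["sent to second line"])   # incidents escalated to second line
print(by_analyst["amy"])                  # incidents assigned to one analyst
```

Recording these tallies at the same set times each day or week is what turns a one-off snapshot into the comparable series the text recommends.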



Measuring the Tangibles

There are some key metrics which, if measured correctly, will prove invaluable in
determining the value of your Service Desk. The fact that over 70 percent of respondents
in our benchmarking survey stated that they did not measure these metrics is a cause for
concern. Listed below are some of the metrics that all desks should be measuring.

Average Cost per call/per e-mail

Cost per call, along with cost per e-mail, is the essential metric to grapple with if you
want to determine the value of your Service Desk, yet 74 percent of desks do not measure it.1

Things to consider:

• What costs should be included in this calculation? To give an accurate and fair
measurement the cost of second and third line support should be included.

• It must be determined what measures will be incorporated to give the final figure.
For example, it might be decided that some intangible measures should be given a
weighting and added to the final total, such as call waiting time or informal peer
support.

• You will also need to know what your staff costs are to get an accurate
handle on call costs.

How to calculate cost per call/per e-mail

Some companies use the actual budget of the Service Desk to calculate their cost per call.
In essence, they include every cost that is involved in running their Service Desk and
divide this by the number of calls that they receive. This method is a little too simplistic
for what we’re really looking for in the cost per call metric, but it does highlight why a
comparison of metrics is so difficult – desks who use this system will have a much larger
cost than desks who measure it in one of the ways detailed below.

Others will include every cost involved in taking the call. For example, they will include
postage costs if hardware needs to be replaced to rectify the user’s problem. They might
also include the cost of using technicians or field agents. Some will also include the cost
associated with the loss of productivity created by the user being on the telephone. This
is why there is such a high variation in the reporting of this metric. This is a much more
involved way of measuring the metric but it may also be more informative. If a value can
be placed on productivity loss then it will be clear how vital the Service Desk is to the
operation of the business. If you can report that your desk saved x amount of productivity
then this will place your desk in a very strong position.

1 According to the 2009 SDI Benchmarking Survey (available at www.sdi-europe.com)
The Formula

As noted there are lots of different ways in which this metric can be measured, but one
of the best all-encompassing ways is outlined here.

1. Understand how much your Service Desk staff cost, broken down into as small a
unit as possible. Your HR department will be able to tell you all the components
you will need to measure this, such as salary, benefits, heating, lighting,
equipment and any other measures that you think should be included. From this
data you can then work out how much an analyst costs to employ per minute.
2. Add to this figure the lifetime cost of your software, including support and
maintenance. You can split the costs over three years to give you some idea of what it
actually costs to run the systems that you do.
3. You might want to add hardware costs, and the cost of using second and third
line (although of course you could have analyst cost per call, second line cost per
call etc.).

Adding up the above will give you the cost per call/e-mail per minute, which then needs
to be multiplied by the time duration of the call/e-mail.
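The three steps above can be sketched as follows. Every figure here is invented for illustration — your HR department supplies the real components:

```python
# Illustrative cost-per-call calculation following the formula above.
# All monetary figures and working-time assumptions are invented.

def cost_per_minute(annual_staff_cost, annual_software_cost,
                    annual_hardware_cost, minutes_worked_per_year):
    """Fully loaded cost of one analyst-minute."""
    total = annual_staff_cost + annual_software_cost + annual_hardware_cost
    return total / minutes_worked_per_year

# Step 1: one analyst's salary, benefits, heating, lighting, equipment
staff = 32_000
# Step 2: the analyst's share of software support, spread over three years
software = 9_000 / 3
# Step 3: the analyst's share of hardware costs
hardware = 1_500

# assume ~220 working days of 7.5 hours each
minutes = 220 * 7.5 * 60
per_minute = cost_per_minute(staff, software, hardware, minutes)

# multiply by the duration of the call (or time spent on the e-mail)
call_duration_minutes = 8
call_cost = per_minute * call_duration_minutes
print(round(call_cost, 2))
```

Note how sensitive the result is to call duration: this is exactly why, as the next paragraph explains, a log-and-refer desk and a technical desk cannot fairly be compared on raw cost per call.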

You will want to consider what type of calls that the Service Desk receives. If there are a
high number of calls that require basic assistance - like password resets – then the cost
per call will be lower because the Desk can get through a higher number of calls. Calls
that involve more complex problems will necessarily take longer to resolve and thus incur
a higher cost per call. Thus any cost per call figures should take these factors into
account. It would be unfair to compare a log and refer desk to a technical one.

Other Considerations

Some other metrics will also be useful in gaining a comprehensive overview of your
telephone data. You might want to consider the time taken to answer the phone by your
analysts, broken down into different times of the day. This will allow you to see if your
staff are being productive and whether there is a need for more, or less, staff at different
times of the day. For example, if there is a deluge of calls at 9am but less from 4pm
onwards then this will help determine what shifts analysts should be working and
whether extra cover is required at different times of day. You may also want to consider
bringing in some extra second line staff to cover busy shifts and release some first-line
staff to work on more technical tasks later in the day.

Of course, if call waiting times are high or low throughout the day then this will indicate
that either more or less staff are needed. You might also want to break this data down
further and look at the number of calls answered per analyst. Whilst you could argue
that this will allow you to see which analysts are being the most productive, it may also
give a false impression as some calls will necessarily be longer. Doing this though will
highlight any anomalies in analysts’ performance – if analyst ‘x’ usually takes 50 calls a
day then the reason he only took 30 another day may be because his call duration was
much higher.



You might also want to look at the number of outbound calls made by analysts at
different times of the day. Whilst this is not a commonly used metric, it is useful to have
this data as it will allow you to see how often analysts contact customers to follow up on
issues. In many ways this is as important as measuring incoming calls – if customers find
that the phones are engaged then the volume and duration of outgoing calls may explain
why this is happening. If there are a high number of outgoing calls then you might also
want to consider suggesting that analysts contact customers by other methods such as
email or live chat in order to free up the phones.



Call Abandon Rate

This metric can be considered one of the most critical measures for a Service Desk. Even
recently, it was reported that the Inland Revenue failed to pick up 43% of its incoming
calls.2 If customers are not able to contact their Service Desk then this is obviously a cause
for concern, and indicates that either analysts are not being proactive enough in making
sure they are able to take calls, or the desk is understaffed. However, it is important to
note that this metric may be beyond the Service Desk’s control as some users will hang
up the phone because they have dialled the wrong number or their problem has fixed
itself. This is something to be aware of if you’re tempted to place too much emphasis on
this measure. Nevertheless, making sure that customers can contact the Service Desk is the
most important consideration for any desk, and this is why the measure is so
crucial. In short, if the Desk is not able to take a customer’s call then the service provided
is simply not good enough.

Given its importance, it is surprising that 17% of desks still don’t measure it.3 Of course,
some calls may be abandoned because customers expect the phone to be answered
instantaneously whereas the desk may be operating under different guidelines, or no
limits at all. To rectify this, the desk should have fixed guidelines as to when calls should
be answered (within 10 seconds, for example) and this should be conveyed to customers in
the SLA. This way, both customers and analysts understand what is required of them, which
may help rectify call abandon rates. Desks with high abandon rates may also want to
look at the feasibility of improving their self-help portals to allow users to fix their
incidents themselves.4 A large part of the problem with call abandonment is that there
are simply not enough analysts available to take the call. To rectify this it is important that
the desk is properly manned at all times with the optimum number of staff. This might
mean that staff have to be pulled away from other areas such as second line, but this is
an important step to make if you are serious about bringing down your call abandon
rate. Changing staffing patterns also helps to keep things fresh and removes the
boredom and familiarity of a typical 9-5 shift whilst also allowing staff to learn new skills,
be it technical or customer service focused.
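The abandon-rate calculation itself is straightforward; the judgment call is what to exclude. One common approach — an assumption here, not a standard — is to discount very short abandons (wrong numbers, problems that fixed themselves), as the text notes these are beyond the desk’s control:

```python
# Abandon-rate sketch. The 5-second exclusion threshold and all call
# figures are invented assumptions for illustration.

def abandon_rate(abandoned_wait_times, answered_calls, min_wait_seconds=5):
    """Percentage of offered calls abandoned after a genuine wait."""
    # discard abandons shorter than the threshold (likely wrong numbers)
    genuine = [t for t in abandoned_wait_times if t >= min_wait_seconds]
    offered = answered_calls + len(genuine)
    return 100.0 * len(genuine) / offered

# wait time in seconds before each abandoned caller hung up
abandons = [2, 12, 45, 3, 60, 30]
rate = abandon_rate(abandons, answered_calls=96)
print(round(rate, 1))
```

Here two of the six abandons fall under the threshold and are excluded, so the desk reports a rate based on four genuine abandons out of one hundred offered calls, rather than penalising itself for wrong numbers.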

2 http://news.bbc.co.uk/1/hi/uk_politics/8460182.stm
3 SDI/Avocent 2009 Benchmarking Survey, available from www.sdi-europe.com
4 More information on self help portals is available from www.sdi-europe.com
Longest Delay in Queue (LDQ)

This is closely linked to the call answering time and call abandon rate. From this
measure (which would ideally be located on a screen that was visible to all analysts) you
can tell at a glance if more analysts are required to man the desks and reduce this time.
LDQ will also allow you to measure the ‘worst case scenario’ for a customer. There are
two statistics that you can pull from this. One is the longest delay before a call was
finally handled (longest delay to answer) and the other is the longest delay before a call was
abandoned. These measures will allow you to manage your resources more effectively
(i.e. if the LDQ typically occurs at 9am Monday morning then this is when you need to
have more analysts available) and will provide a useful measure for your business. If you
can report that most calls are answered within x amount of time, but in a worst-case
scenario it will be y amount of time, then this helps to manage expectations. Relaying this
information to users will let them know how long they might have to wait, allowing
them to terminate the call if they wish to. In addition, if your longest times
are always at the same time of day then you could communicate this to your users –
something along the lines of: ‘9am on Monday mornings is a very busy time for us; if you
can wait an hour before calling you will experience much lower waiting times.’
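Extracting the two LDQ statistics from a day’s call records can be sketched as below. The record format is an illustrative assumption:

```python
# Sketch of pulling the two LDQ statistics from call records.
# Each record is (wait_seconds, outcome); the format is invented.

def longest_delays(calls):
    """Return (longest delay to answer, longest delay before abandon)."""
    answered = [wait for wait, outcome in calls if outcome == "answered"]
    abandoned = [wait for wait, outcome in calls if outcome == "abandoned"]
    return (max(answered, default=0), max(abandoned, default=0))

calls = [
    (12, "answered"),
    (95, "answered"),
    (40, "abandoned"),
    (130, "abandoned"),
    (8, "answered"),
]
ldq_answer, ldq_abandon = longest_delays(calls)
print(ldq_answer, ldq_abandon)  # worst-case wait for each outcome, in seconds
```

Grouping the records by hour of day before applying the same function would reveal when the worst waits occur, which is the staffing signal the paragraph above describes.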



Self Service Availability

In the industry today lots of Service Desks are utilising self help or self service portals to
enable users to gain access to information quickly and free up Service Desk resources.
Also included under this banner is live or web chat. It is important to get a handle on
how often these services are available and how accessible they are. This is typically done
by adding up the total time that the systems are available (this will vary depending on
whether the services are available 24/7). You can then assign this total a points score,
weighted by which self service sites were available (if they all were then this score would
be 100). For live/web chat the metrics are essentially the same as call statistics, so we
are looking at call waiting, abandon rate, first time fix and call duration. It is essential that
live chat is not treated as a separate entity to calls but is considered in the same way.
However, live chat should have its own target metrics as call duration will be different to
phone calls. Self service and live chat are also intrinsically linked – the lower the
availability of self help the higher the number of live chat interactions will be.
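The weighted availability score described above might be computed as follows. The portal names, weights, and hours are invented assumptions:

```python
# Hypothetical weighted availability score for self-service portals.
# 100 means every portal met its full scheduled availability this period.

def availability_score(portals):
    """portals: list of (weight, hours_available, hours_scheduled)."""
    total_weight = sum(weight for weight, _, _ in portals)
    achieved = sum(weight * (available / scheduled)
                   for weight, available, scheduled in portals)
    return 100.0 * achieved / total_weight

portals = [
    (3, 720, 720),   # knowledge base: 24/7, fully available all month
    (2, 700, 720),   # password-reset portal: 24/7, 20 hours of downtime
    (1, 180, 180),   # live chat: office hours only, fully available
]
score = availability_score(portals)
print(round(score, 1))
```

Weighting by portal importance means an outage on the heavily used knowledge base costs more points than the same outage on a minor service, which matches how users actually experience availability.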



Time to respond and resolution data

These measures are often referred to as SLAs. This data compares the agreed business response
and resolution times with those actually achieved.

Some examples of the metrics that could be measured are:

• Percentage of responses that are on time, an hour late, two hours late etc.
compared to the target.
• Percentage of resolutions on time compared to target.
• Spread of resolution times broken down by company, business unit, priority or
impact.

The results of these provide a good indication to the rest of the business as to the
performance and business value of the Service Desk. Just like any aspect of the business,
Service Desks have an obligation to meet customer expectations and adhere to contracts.
This metric applies equally to telephone and email communications and it is vital that
customers and analysts are both aware of the contracts in place so that they can manage
expectations.
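The first of the example metrics — responses on time, an hour late, two hours late, and so on — can be sketched as a simple bucketing of response times against the SLA target. The target and times below are invented:

```python
# Sketch of bucketing response times against an SLA target, as in the
# first example metric above. All times are minutes and are invented.

def sla_buckets(response_times, target, bucket_minutes=60):
    """Count responses on time, up to 1h late, up to 2h late, etc."""
    buckets = {}
    for t in response_times:
        late = max(0, t - target)
        # bucket 0 = on time, 1 = up to an hour late, and so on
        key = 0 if late == 0 else (late - 1) // bucket_minutes + 1
        buckets[key] = buckets.get(key, 0) + 1
    return dict(sorted(buckets.items()))

times = [20, 30, 75, 95, 190, 15]   # minutes taken to respond
print(sla_buckets(times, target=30))
```

The same function applied to resolution times, filtered by business unit or priority, yields the other two example metrics in the list.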

Many desks struggle to create proper SLAs and binding contracts, and even if they are in
place they are often ignored and allowed to pass their time limit. Furthermore, users are
often unaware of what the SLAs are for their incident and are not kept informed of when
they are about to breach. The solution to this is simple – communication.



First Time Fix Rate

Whilst one of the goals of ITSM is to stop incidents occurring, it is inevitable that they will
still happen. First time fix measures how many of these incidents are resolved at the first
point of contact: the analyst resolves the incident before the call is terminated.
However, this definition can have variations – a popular one is
‘Logged and resolved by the same analyst, without being assigned to anyone else, and
resolved within 30 minutes.’ It is important to assign a time limit when measuring this
metric as otherwise a fix may take a significant amount of time to administer. While the
definition of first time fix is straightforward, determining how it is measured and
recorded is decidedly more difficult. Some ITSM software allows you to flag a first time fix
and the data can then be reported from this. If your software does not have this
functionality, then analysts will have to record this manually.

Knowing the first time fix rate is important as this will give you an understanding of the
competency level of your analysts and the type and difficulty of the incidents that your
analysts are attempting to grapple with. The First Time Fix Rate provides a tangible
measure to report to the rest of the business but must also be used in context. If this
figure is low then it may not necessarily be a reflection on your desk; it might be the case
that the desk is understaffed or that the incidents reported by your customers are too
technical and beyond the capabilities of analysts. Either way, desks that want to gain a
better understanding of their business value must measure this metric so that it can
inform the staffing – and ultimately budget – levels that the desk requires to deliver the
standard of service that customers expect.
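The popular definition quoted above lends itself to a simple calculation. This sketch assumes hypothetical record fields (`logged_by`, `resolved_by`, `reassigned`, `resolve_mins`); a real ITSM export would use its own names.

```python
# Sketch: first time fix rate under the definition quoted above -
# logged and resolved by the same analyst, never reassigned,
# and resolved within 30 minutes. Field names are hypothetical.

def first_time_fix_rate(incidents, limit_mins=30):
    fixed = sum(
        1 for i in incidents
        if i["logged_by"] == i["resolved_by"]
        and not i["reassigned"]
        and i["resolve_mins"] <= limit_mins
    )
    return 100.0 * fixed / len(incidents)

incidents = [
    {"logged_by": "ana", "resolved_by": "ana", "reassigned": False, "resolve_mins": 12},
    {"logged_by": "ana", "resolved_by": "ben", "reassigned": True,  "resolve_mins": 90},
    {"logged_by": "cat", "resolved_by": "cat", "reassigned": False, "resolve_mins": 25},
    {"logged_by": "cat", "resolved_by": "cat", "reassigned": False, "resolve_mins": 45},
]
print(f"{first_time_fix_rate(incidents):.0f}%")  # 2 of 4 qualify: 50%
```

If your software lets analysts flag a first time fix directly, that flag replaces the three-condition test here.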



First Level Fix (FLF)

First level fix is fundamentally different to First Time Fix Rate in that it describes the
number of incidents that are fixed at the first level, not necessarily the first contact. This
fix will likely be administered to a relatively simple incident with the customer up and
running in a short period of time and with minimal disruption. Often desks become
enamoured of this metric and relentlessly pursue ways to push this figure up as, usually,
customer satisfaction is directly proportional to this rate. It can be measured in the same
way as First Time Fix Rate, i.e. either by ITSM software or manually.

There are a number of problems with an obsession with pushing the FLF rate up.
Primary among these is that the FLF rate is not necessarily indicative of the competency
and efficiency of your Service Desk. Whilst FLFs will be the cheapest of the support
options for your desk, they may not necessarily be the best solution. Analysts may
attempt to solve the incident only for the customer to then re-open the call when they
find that the problem re-occurs. To see whether this is indeed the case an examination of
re-opened incidents may prove telling. The question must also be raised as to whether a
FLF is the best solution for the customer. The analyst may pull out all the stops to ensure
that a fix is administered whereas it may be in the customer’s best interest to have
second or third line administer the fix. A first line analyst may take longer to solve the
incident, wasting the customer’s time and keeping the analyst from other tasks, which
may lead to call abandon rates increasing.

It should also be considered that a high FLF rate might indicate that the incidents being
raised by customers are so basic that they could conceivably fix them themselves by
utilising self-help. So whilst a high FLF may seem like good news for the Service Desk
Manager and provide something tangible to wow the bosses with, it could be that a real
opportunity is being missed to reduce the number of calls to the desk through the use of
self-help and provide a comprehensive knowledge base.
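Because a high FLF figure is only meaningful if those fixes stick, it is worth reporting it alongside the re-opened rate suggested above. A minimal sketch, with hypothetical field names:

```python
# Sketch: first level fix rate alongside the re-opened rate, since a high
# FLF figure is only meaningful if those fixes stick. Field names
# (resolved_at_level, reopened) are hypothetical illustrations.

def flf_and_reopen_rates(incidents):
    flf = sum(1 for i in incidents if i["resolved_at_level"] == 1)
    reopened = sum(1 for i in incidents if i["reopened"])
    n = len(incidents)
    return 100.0 * flf / n, 100.0 * reopened / n

incidents = [
    {"resolved_at_level": 1, "reopened": False},
    {"resolved_at_level": 1, "reopened": True},   # first level fix did not stick
    {"resolved_at_level": 2, "reopened": False},
    {"resolved_at_level": 1, "reopened": False},
]
flf, reopen = flf_and_reopen_rates(incidents)
print(f"FLF {flf:.0f}%, re-opened {reopen:.0f}%")  # FLF 75%, re-opened 25%
```

A rising FLF rate paired with a rising re-opened rate is exactly the warning sign discussed above.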



Call Duration

A lower call duration time is the Holy Grail for some service desks. They perceive that a
shorter time on calls demonstrates that their analysts are being efficient and that they are
highly competent. However, a shorter call time might simply indicate that there are a
higher number of repeat calls. In a paper produced for the contact centre industry, an
agent-led initiative revealed that the existence of an average call handling time target
was contributing to a significant number of repeat calls. When average call handling
targets were abolished, the repeat contact resolution figures improved considerably.5

Some organisations actually impose strict parameters on how long calls should last, with
penalties for analysts who regularly exceed these. Obviously, this is not the norm and is
only viable on pure log and refer desks. Even then, a mandatory limit can be to the
detriment of customer satisfaction as upon calling the service desk, customers will find
that all but the simplest incidents require referral to second and third lines of support.
Indeed, instead of fixed call duration limits, a better way of reducing call times would be
to ensure that analysts are given the correct training to enable them to deal with
incidents quickly and efficiently. This could be achieved by sending analysts on an
effective telephone skills course.

Call duration is also highly context-dependent. If a desk is highly technical then one
would expect call duration to be higher, given that analysts are trained to deal with a
wide variety of incidents. This is one of the reasons why call duration should not be held
in such high esteem: other measures are equally important. In addition, it is vital that call
duration statistics are placed in the correct context and include all the other measures
attached to this such as repeat calls and first time fix. You should also consider how long
an analyst has been in their current position, as new starters are more likely to have
longer call duration metrics. Call duration is not, in short, the be all and end all of service
desk metrics.

5 Defining the causal link between how you treat employees: its impact on customers and creating the
psychological climate to drive up profitability. A white paper submission for the CCA Business Briefing,
March 2006. Prepared by Michael Anderson, Capgemini UK plc, March 2006.
Incident Turnaround Time (TAT)

Incident turnaround time, as the name suggests, is the time taken to resolve an incident.
It includes the conversation with the customer, call logging and initial diagnosis, research,
resolution and follow-up. TAT will also include first level fix.

Obviously, a lower figure here is preferable as it indicates that customers are able to get
back up and running quickly. However, the TAT will vary immensely depending on the
severity of the incident, the complexity of the incident, the competency of the analyst
and the quality of the knowledge database. In addition, pressure on analysts to reduce
the TAT may lead to analysts closing incidents early which will serve to increase the
number of follow-up calls and the number of incidents that will be re-opened. To see
whether this is the case, a comparison between the TAT and the number of re-opened
calls should be drawn. If there proves to be a relationship, then managers may want to
consider imposing less stringent TATs.
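The comparison suggested above can be sketched as follows; if incidents that were later re-opened show a markedly shorter turnaround time, analysts may be closing calls prematurely. Field names are hypothetical.

```python
# Sketch: comparing average turnaround time (TAT) for incidents that were
# later re-opened against those that stayed closed. A much shorter TAT on
# re-opened incidents suggests premature closure. Fields are hypothetical.

def avg(xs):
    return sum(xs) / len(xs)

def tat_reopen_comparison(incidents):
    reopened = [i["tat_hours"] for i in incidents if i["reopened"]]
    closed = [i["tat_hours"] for i in incidents if not i["reopened"]]
    return avg(reopened), avg(closed)

incidents = [
    {"tat_hours": 0.5, "reopened": True},
    {"tat_hours": 1.0, "reopened": True},
    {"tat_hours": 3.0, "reopened": False},
    {"tat_hours": 4.5, "reopened": False},
]
re_avg, cl_avg = tat_reopen_comparison(incidents)
print(re_avg, cl_avg)  # 0.75 vs 3.75: early closure may be inflating re-opens
```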



Communications Etiquette

This is probably not seen as a typical metric but it is important if you want to understand
how your desk is dealing with customers and whether there are any potential problems. It
will also allow you to make sense of any poor customer feedback scores. The easiest way
to measure this metric is to listen in on calls or observe analysts. You will have to let
analysts know if you are listening in to them remotely. Every Service Desk will have its own
standard greeting and expectations about how analysts should treat their customers so it
will be these that they are measured against. It is important to gain a real understanding
about how customers are treated as this will help explain any problems with customer
satisfaction and feedback. If you can get to the root of the problem then it is much easier
to administer a fix.

Forms of communication that often go unmonitored are email and live chat. In many
ways, these channels are as important as phone calls. Emails should be periodically
reviewed to ensure that they meet standards, and web chats should be stored and
examined at regular intervals. Again, each desk will have its own standard format for
emails, but one point to consider is whether you allow informal communication between
analysts and customers they are friendly with, or whether emails and live chat must be
entirely professional.



Analyst Availability

This is a measure of the time that an analyst is busy compared to when they are idle or
available. To calculate it, divide workload hours by staffed hours. This is an important
measure as it gauges how well staff are scheduled and, ultimately, how efficiently the
desk is using its resources. If occupancy is low then analysts are sitting around with
nothing to do; if it is too high then analysts are being overworked. The real benefit of this
metric is that it can help you calculate staffing levels and understand when you need
more or fewer analysts available. It will also help to determine whether staff need to be
pulled in from second line to help man the phones, or vice versa.
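The workload-over-staffed-hours calculation described above is a one-liner; the figures here are purely illustrative.

```python
# Sketch: analyst occupancy as workload hours divided by staffed hours,
# per the definition above. Figures are illustrative.

def occupancy(workload_hours, staffed_hours):
    return 100.0 * workload_hours / staffed_hours

# Five analysts on a 7.5-hour shift, with 26.25 hours of logged work:
print(f"{occupancy(26.25, 5 * 7.5):.0f}%")  # 70%
```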

Staff Shrinkage

This is the amount of time that analysts are unavailable for reasons other than being
involved in another interaction. It relates to things such as staff holidays, sickness
absence, maternity leave and long-term sick leave. It may also include unexplained time
when analysts are away from their desks, e.g. tea and coffee breaks, cigarette breaks etc.
Again – like availability measures – it will help determine staffing levels and ensure that
the correct number of analysts is available. This will let you calculate the correct
levels for each half-hour period during the day.
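In staffing calculations, shrinkage is typically applied as a percentage discount on rostered hours. A minimal sketch with illustrative figures:

```python
# Sketch: applying a shrinkage factor to rostered hours to find the hours
# genuinely available for handling contacts. Percentages are illustrative.

def available_hours(rostered_hours, shrinkage_pct):
    return rostered_hours * (100 - shrinkage_pct) / 100

# Ten analysts rostered for 7.5 hours, with 30% shrinkage
# (holiday, sickness, breaks, training):
print(available_hours(10 * 7.5, 30))  # 52.5 hours of real availability
```

Run per half-hour interval, this gives the staffing picture the section above describes.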



Financial Metrics

ROI

ROI compares investment returns and costs by constructing a ratio, or percentage. In
most ROI methods, an ROI ratio greater than 0.00 (or a percentage greater than 0%)
means the investment returns more than its cost. When potential investments compete
for funds, and when other factors between the choices are truly equal, the investment –
or action, or business case scenario – with the higher ROI is considered the better choice,
or the better business decision.

One serious problem with using ROI as the sole basis for decision making is that ROI by
itself says nothing about the likelihood that expected returns and costs will appear as
predicted - ROI says nothing about the risk of an investment. ROI simply shows how
returns compare to costs if the action or investment brings the results hoped for. (The
same is also true of other financial metrics, such as Net Present Value, or Internal Rate of
Return – these can be reviewed at a later date). For that reason, a good business case or
a good investment analysis will also measure the probabilities of different ROI outcomes,
and wise decision makers will consider both the ROI magnitude and the risks that go with
it.

Decision makers will also expect practical suggestions from the ROI analyst on ways to
improve ROI by reducing costs, increasing gains, or accelerating gains.

Example: Simple ROI for Cash Flow and Investment Analysis

Return on investment is frequently derived as the “return” (incremental gain) from an
action divided by the cost of that action. That is “simple ROI,” as used in business case
analysis and other forms of cash flow analysis. For example, what is the ROI for a new
ACD system or Service Desk software package that is expected to cost £250,000 over
the next five years and deliver an additional £350,000 in increased profits during the
same time?

Simple ROI:

(Gains - Investment Costs) ÷ Investment Costs

Therefore:

(£350,000 - £250,000) ÷ £250,000 = 40%

Simple ROI is the most frequently used form of ROI and the most easily understood. With
simple ROI, incremental gains from the investment are divided by investment costs.
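The simple ROI calculation above can be sketched as:

```python
# Sketch: the simple ROI calculation shown above.

def simple_roi(gains, cost):
    """(Gains - Investment Costs) / Investment Costs, as a percentage."""
    return 100.0 * (gains - cost) / cost

# New ACD system: £250,000 cost, £350,000 gains over five years
print(f"{simple_roi(350_000, 250_000):.0f}%")  # 40%
```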



Alternative Measurements

Let’s assume that the total cost of your Service Desk is £700,000 and your Service Desk
takes 5,500 calls in January and 4,000 calls in February.

Example:
Cost per call for January = £700,000 ÷ 5,500 = £127.27 per call
Cost per call for February = £700,000 ÷ 4,000 = £175.00 per call

The following should be considered:

Why has the cost per call increased?

Pros (but in the interim a higher cost per call):

• SD staff efficiency – the number of calls resolved and the training given to the person
calling the SD (call resolution) has resulted in fewer calls: good practical service that
empowers the user to take responsibility for, and understand, their issues
• Technological advances – implementation of new systems, for example an automatic
password reset system
• Self help – the SD has marketed the self help portal and this is reducing the number of
calls to the SD

Cons:

• Staff sickness (fewer Service Desk staff available to take calls) – review abandon rates
• Loss of business (a major contract) due to poor service from SD staff – review SD staff
performance
• System failures – less time to assist customers

What is the solution?

• Review staffing levels
• Understand historical sickness levels – make provision to ensure there is cover if required
• Put a fair call distribution system in place
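The January and February figures above can be reproduced with the same division each time:

```python
# Sketch: the cost per call figures from the example above.

def cost_per_call(total_cost, calls):
    return total_cost / calls

print(round(cost_per_call(700_000, 5_500), 2))  # 127.27 for January
print(round(cost_per_call(700_000, 4_000), 2))  # 175.0 for February
```

Note how the per-call cost rises as call volume falls against a fixed total cost, which is exactly why the movement needs interpreting via the pros and cons listed above.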



Industry Metrics

One of the most common questions from Service Desk Managers is “How do I find
industry-wide metrics?” The question is common because Service Desk Managers want
to see for themselves how well their desk is performing in comparison to the rest of the
industry.6 If – as they hope – they are outperforming other desks in areas such as first
contact resolution or call abandon rates then this gives them something tangible to take
to upper management and provides demonstrable evidence of their desk’s proficiency.

The desire for industry-wide metrics is understandable, but there must be a caveat that
benchmarking by using industry metrics is often misleading. For every desk that equals or
outperforms industry metrics there will be just as many, if not more, who fall short of
these. Failing to measure up to these metrics does not necessarily mean that the desk is
failing or that there is cause for concern (although, of course, it may very well be the case
that the desk is not performing as well as it should be). As
noted in the consideration of metrics above, there are lots of reasons why a desk’s
metrics might not measure up. Of course, metrics might show up some inherent flaws in
the desks that are in need of urgent attention, but equally they may give a false
representation – desks may be seen to be performing badly but are in fact doing as well
as they possibly can; desks may be seen to be performing well but there may be
significant room for improvement.

Thus industry benchmarks should be taken with a pinch of salt. A much more accurate
way to assess how well your service desk is really performing is to arrange a site visit with
a similar-sized desk or a similar company. This will provide a much more telling assessment
of your own service desk and provide a wealth of ideas for improvements, or highlight
where you are performing particularly well.7

6 Those who are interested in benchmarking their desk should refer to the 2009 SDI/Avocent
benchmarking survey: www.sdi-europe.com
7 SDI provides a benchmark buddy system to help desks that want this level of interaction. We also
offer experience events where attendees can network with other service desks.
Reporting the Business Value of your Service Desk

How do you report the business value of your service desk? It is a question that has
consumed – and indeed troubled – Service Desk Managers across the land, but the good
news is that reporting value needn’t be an insurmountable challenge. The first and most
obvious step is to start measuring metrics!
Choose the metrics which will allow you to calculate this value – cost per e-mail and cost
per call are a good starting point – and get tangible figures from them. The crucial point
is to know why you are measuring certain metrics and know what you intend to do with
them. Otherwise it simply becomes measurement for measurement’s sake and this is of
no value to the desk or to the business.

How best to measure the capabilities of your desk and convey this to the rest of your
organisation? One of the crucial measures of a desk’s capabilities and performance is the
percentage of SLAs that are met. This is a key metric as this is the contract between your
desk and the customer – it is a document that informs customers how long they will have
to wait for a fix. If the fix is delivered within the timescale imposed in the SLA then the
desk is performing its obligations to the customer and this information can be relayed to
the rest of the organisation. If, however, SLAs are regularly being missed then this
indicates that the desk is not performing as it should do, or that SLAs need to be revised
in light of staffing issues or an increase in the number of incidents being reported to the
desk.8 SLAs are crucial as these are the agreements between your desk and your
customers – a desk is of no value if it does not meet its customers’ (and therefore its
company’s) expectations.

There are also other factors that should be considered by those who want to place a
tangible business value on their service desk. As mentioned above, customer service may
also provide a good way of conveying your value to the business. If customers are happy
with the service that they are receiving then this can be construed as having a positive
business impact. If SLAs are being met on a regular basis then the service desk is enabling
customers to get up and running quickly and reduces periods of downtime and inactivity.
Happy workers who have the tools to perform their jobs to a high standard are the
foundations for any successful organisation – if you can transmit that the service desk is
fostering and improving this, then it should be axiomatic to your organisation that your
desk sits at the heart of the business.

Of course, the other metrics that were outlined above can also be used to support the
business value of your service desk, but they should all be used in context and properly
explained in terms of business value. For example, a low call duration may demonstrate
that analysts are adept at their jobs and that employees are efficient in arranging for
incidents to be fixed. However, as shown above, the call duration figure alone is not
enough; we must also include the first time fix rate, first level fix rate and average
number of calls per analyst in addition to the ‘essential’ metrics such as cost per call, cost
per e-mail and SLAs met. While it may appear that the monetary metrics are the core of
business value, it should be noted that customer satisfaction should also feature

8 If this is the case then it follows that the number of calls the desk receives will also
increase as frustrated customers badger for incident updates. Thus, a high call volume
may be indicative of problems with SLAs and require further investigation.
prominently. As mentioned above, the value of IT at any one point in time is likely to be
negative or zero – it is only when customer and performance metrics are considered that
the ‘true’ value of the service desk is revealed. Only when these metrics are explained
together can the business properly grasp the value of the service desk and ensure that it
sits at the heart of the organisation.



The Future

The recent SDI benchmarking survey contained some revealing insights into the service
desk industry. We are seeing that businesses are finally realising that the service desk is a
core business function. This is evidenced by the increase in budgets afforded to the desk
which allows them to increase staffing levels and the amount of training. We are also
seeing that organisations are keen for analysts to continue their career post service desk
in another position in the company. Analysts then, like their desks, are becoming highly
valued commodities.

What accounts for this trend? There is probably no single answer, but it is clear that
measuring metrics and reporting their value effectively has played a significant role. Just
like any other aspect of the business, the service desk has to justify every aspect of its
operation and convey what resources it requires. Informing the business about first level
fix rates alone will not provide a true impression of your service desk, as the terms and
data will be too abstruse for them to understand. However, demonstrating why
first level fix rates are important (used in the context of other metrics) will give the
business a clear view of your service desk, explained in terms that they can understand
and process. This trend is already in place and its continuation will be key in ensuring that
the service desk continues to prosper and thrive in the future.

It should also be noted that no metric is isolated from the others: if you improve in one
area this will have a discernible impact on another. This is the thinking behind a balanced
scorecard, which requires you to understand the impact of one metric on another. For
example, if you improve your first time fix rate or achieve a lower call duration then it will
follow that cost per call will also be reduced. Thus it is important to bear in mind that
behaviour changes will have impacts beyond the metric that you are working on: a
balanced scorecard will let you see how your performance is changing various metrics
and whether those changes are having a beneficial or detrimental effect.
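The ripple effect described above can be sketched as a toy model. To be clear, the cost figures and the assumed relationship (each incident not fixed first time generates a costlier follow-up contact) are purely illustrative, not SDI data.

```python
# Toy balanced-scorecard sketch: how a change in first time fix ripples into
# cost. The cost figures and the relationship (each miss generates a costlier
# follow-up contact) are purely illustrative assumptions.

def cost_per_incident(ftf_pct, first_contact_cost=10.0, follow_up_cost=25.0):
    """Average cost of an incident when (100 - ftf_pct)% need a follow-up."""
    follow_up_share = (100 - ftf_pct) / 100
    return first_contact_cost + follow_up_share * follow_up_cost

# Raising first time fix trims follow-up contacts, lowering incident cost:
for ftf in (50, 70, 90):
    print(ftf, cost_per_incident(ftf))
```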



About The Service Desk Institute

SDI is the new driving force for the ITSM and service desk industry with a vision of
being recognised globally as the professional body that drives exceptional IT
service and support. As the leading authority on Service Desk and IT support
related issues, SDI will enable organisations to enhance the value of business and
IT integration through exceptional IT service and support. SDI is responsible for
setting global industry standards, delivering thought-leadership and knowledge,
and influencing service improvement for individuals and organisations. The
globally recognised SDI Service Desk Certification standard is the only best
practice standard that provides a clear and measurable set of standards for a
Service Desk operation.

Acting as an independent advisor, SDI captures and disseminates creative and
innovative ideas for tomorrow's service desk and support operation. SDI sets the
standards for the IT support industry and is the conduit for delivering knowledge
and career enhancing skills to the professional community.

It also offers the opportunity for international recognition of the support centre
operation through a site certification audit programme. Its members span
numerous industries and include AOL (UK), Barclays Bank, Computer Associates,
ITV, O2, T K Maxx, United Biscuits and E.On. Further information about SDI
can be found at www.sdi-europe.com

About Service Desk Institute Experience (SDIe)

Service Desk Institute Experience (SDIe) will be the organisation of choice for
businesses looking for engaging strategic and practical ways to improve IT service
management performance. SDIe aims to provide fun, engaging, motivational,
enthusiastic and positive learning and networking experiences through
membership, events, conferences and awards. SDIe aims to support its members
in improving their service desk performance and enhancing their careers. SDIe
members will have access to all the leading sources of industry intelligence, plus
exclusive cutting-edge analysis and regular industry updates and events
www.sdi-e.com



LANDesk

LANDesk creates innovative technologies and products for enterprise IT
management, including systems, security, IT service, and process management.

LANDesk helps customers streamline operations and maintenance tasks,
automate and standardise processes, reduce errors, and transition from a reactive
environment to one that’s more proactive and service oriented. This enables
organisations to reduce operating costs, simplify management, and increase the
availability of critical IT environments 24/7 via integrated, centralised software.

LANDesk solutions also enable organisations to discover, manage, update, and
protect all the deployed systems via a single, easy-to-use console that integrates
systems lifecycle management and endpoint security management.

The user is able to automate patch management and deployment, control and
encrypt USB and other devices to prevent data leakage, enforce endpoint security
policies for mobile users, and grant network access control to protect against
virus outbreaks and unauthorized access.

More information can be found at www.landesk.com
