Incident and
Problem
Management
LEGAL NOTICE
This publication is based on current information and resource allocations as of its date of publication and
is subject to change or withdrawal by CA at any time without notice. The information in this publication
could include typographical errors or technical inaccuracies. CA may make modifications to any CA
product, software program, method or procedure described in this publication at any time without
notice.
Any reference in this publication to non-CA products and non-CA websites is provided for convenience
only and shall not serve as CA’s endorsement of such products or websites. Your use of such products,
websites, and any information regarding such products or any materials provided with such products or
at such websites shall be at your own risk.
Notwithstanding anything in this publication to the contrary, this publication shall not (i) constitute
product documentation or specifications under any existing or future written license agreement or
services agreement relating to any CA software product, or be subject to any warranty set forth in any
such written agreement; (ii) serve to affect the rights and/or obligations of CA or its licensees under
any existing or future written license agreement or services agreement relating to any CA software
product; or (iii) serve to amend any product documentation or specifications for any CA software
product. The development, release and timing of any features or functionality described in this
publication remain at CA’s sole discretion.
The information in this publication is based upon CA’s experiences with the referenced software
products in a variety of development and customer environments. Past performance of the software
products in such development and customer environments is not indicative of the future performance of
such software products in identical, similar or different environments. CA does not warrant that the
software products will operate as specifically set forth in this publication. CA will support only the
referenced products in accordance with (i) the documentation and specifications provided with the
referenced product, and (ii) CA’s then-current maintenance and support policy for the referenced
product.
Certain information in this publication may outline CA’s general product direction. All information in this
publication is for your informational purposes only and may not be incorporated into any contract. CA
assumes no responsibility for the accuracy or completeness of the information. To the extent permitted
by applicable law, CA provides this document “AS IS” without warranty of any kind, including, without
limitation, any implied warranties of merchantability, fitness for a particular purpose, or non-
infringement. In no event will CA be liable for any loss or damage, direct or indirect, from the use of
this document, including, without limitation, lost profits, lost investment, business interruption, goodwill
or lost data, even if CA is expressly advised of the possibility of such damages.
This publication may contain sample application programming code and/or language which illustrate
programming techniques on various operating systems. Notwithstanding anything to the contrary
contained in this publication, such sample code does not constitute licensed products or software under
any CA license or services agreement. You may copy, modify and use this sample code for the
purposes of performing the installation methods and routines described in this document. These
samples have not been tested. CA does not make, and you may not rely on, any promise, express or
implied, of reliability, serviceability or function of the sample code.
Copyright © 2010 CA. All rights reserved. All trademarks, trade names, service marks and logos
referenced herein belong to their respective companies. Microsoft product screen shots reprinted with
permission from Microsoft Corporation.
Principal Authors
Peter Gilbert
Pamela Molennor
Marvin Waschke
Authors
Jason Albert
Gladys Beltran
Anita Cooke
Angela Tracey Lee Domingo
Christy Druzynski
Andrew Feldman
Michael Getz
Rich Graves
Wayne K. Hirabayashi
Brian Johnson
John Kampman
Alan Kasper
Richard Lankester
Randal Locke
Peter McKay
David A. Messineo
Crystal Miceli
Richard W. Philyaw
Terry Pisauro
Amy Spada
Julia Swanson
Craig Tootle
Christophe Trinquet
Steve Troy
The principal authors and CA would like to thank the following contributors:
Jacob Lamm
Fran Chock
Dale Clark
Terrence Clark
Connie Cobb
Brian Dempster
Edward Glover
Carolyn Jones
Anders Magnusson
Aparna Manda
Hazel Nisbett
Brian Poissant
Helge Scheil
Scott Scribner
Cindy Smith
Cheryl Stauffer
Freya Winsberg
CA CMDB Development Team
CA CMDB QA Team
CA CMDB Support Team
CA Unicenter Service Desk Development Team
CA Unicenter Service Desk Knowledge Tools Development Team
CA Unicenter Service Desk QA Team
CA Unicenter Service Desk Support Team
CA PRODUCT REFERENCES
This document references the following CA products:
■ CA CMDB™
■ CA SupportBridge™
FEEDBACK
Please email us at greenbooks@ca.com to share your feedback on this publication. Please
include the title of this publication in the subject of your email response. For technical
assistance with a CA product, please contact CA Technical Support at
http://ca.com/support. For assistance with support specific to Japanese operating systems,
please contact CA at http://www.casupport.jp.
DOCUMENTATION CHANGES
The following is a list of new chapters and new or revised topics in the May 24, 2010 update
to this Green Book:
■ Chapter 13: Integrations—Updated the link to Service Desk Integrations Green Book.
The following is a list of new chapters and new or revised topics in the November 30, 2007
update to this Green Book:
■ Chapter 10: Effective Use of the Status, Priority, Root Cause, Service Type, and
Category Fields—Updated the Priority Usage (see page 204) topic.
■ Chapter 14: Architecture Choices—Updated the High Availability (see page 301)
section.
■ Chapter 16: Security—Updated the Permissions and Access (see page 355) section.
Chapter 6: Knowledge Management 129
Why Knowledge Management? .....................................................................................129
KM for the Service Desk ............................................................................................129
Drivers ...................................................................................................................130
Objectives ...............................................................................................................130
Tactics ....................................................................................................................130
Outcomes ...............................................................................................................131
Best Practices .............................................................................................................131
Approaches .............................................................................................................131
Processes ................................................................................................................133
Measurement and Reporting ......................................................................................139
Leadership, Roles, and People ...................................................................................144
Managing Knowledge Documents ..................................................................................147
Knowledge Document vs. Knowledge Tree Document ...................................................148
The Anatomy of a Knowledge Document .....................................................................149
Categorization .........................................................................................................151
The IT Environment..................................................................................................153
Ways to Create or Capture Knowledge ........................................................................154
Authorship and Ownership ........................................................................................155
Permissions and Security ..........................................................................................156
Retrieving Knowledge from CA Unicenter Service Desk Knowledge Tools ............................157
Multiple Retrieval Paradigms......................................................................................157
Multiple Retrieval Interfaces ......................................................................................157
Knowledge Tree Documents ......................................................................................160
URL Launch .............................................................................................................160
Emailing a Document................................................................................................161
Web Services ..........................................................................................................161
Common Knowledge Base .........................................................................................161
Searching and Browsing...............................................................................................161
Keyword Search.......................................................................................................161
Knowledge Use Beyond the Service Desk .......................................................................164
Knowledge for Event Management—CA Unicenter Network and Systems Management (NSM) .....164
Chapter 10: Effective Use of the Status, Priority, Root Cause, Service
Type, and Category Fields 201
Responsibilities ...........................................................................................................201
Incident Management ...............................................................................................201
Problem Management ...............................................................................................202
Incident Status Field Usage ..........................................................................................203
Problem Status Field Usage ..........................................................................................204
Priority Usage .............................................................................................................204
Severity Usage ...........................................................................................................205
Incident and Problem Activities .....................................................................................206
Root Cause ................................................................................................................206
Service Types .............................................................................................................207
Categorization ............................................................................................................210
CA Unicenter Service Desk Dashboard ...........................................................................220
Benefits ..................................................................................................................221
Dashboard Defined...................................................................................................223
Architecture and Components ....................................................................................224
Customizing CA Unicenter Service Desk Dashboard ......................................................226
Graphs: Changing a Graph Type ................................................................................227
Graphs: Changing Colors ..........................................................................................228
Graphs: Changing a Title ..........................................................................................229
Graphs: Adding or Removing Point Labels ...................................................................234
Graphs: Other Formatting Changes ............................................................................235
Tables: Modifying Fonts or Colors...............................................................................235
Tables: Changing a Title ...........................................................................................236
Tables: Hiding a Column ...........................................................................................237
Tables: Adding Column Totals....................................................................................238
Using Alarms to Highlight Exceptions ..........................................................................239
Modifying Options to a "Time Period" Radio Button .......................................................241
Schedule Data Refreshes ..........................................................................................242
Finding Data for Reporting ...........................................................................................243
Predefined Reports ...................................................................................................243
Entity Relationship Diagram ......................................................................................244
Advanced Techniques - Understanding ddict.sch ..........................................................245
Working with Dates ..................................................................................................248
Reporting Tips and Tricks .............................................................................................249
Legible Universal Unique Identifiers ............................................................................249
Major Database Entities ...............................................................................................250
Incidents, Problems, and Requests .............................................................................250
Change Orders ........................................................................................................253
Other Database Entities ............................................................................................254
Chapter 17: Advanced Tuning 365
Introduction ...............................................................................................................365
Sizing and Scalability...................................................................................................365
Understanding the Architecture..................................................................................366
General Considerations .............................................................................................371
General Tuning Recommendations ................................................................................375
Know the Signs........................................................................................................375
Manage Performance ................................................................................................381
Index 389
From the beginning, an implementation is filled with choices. The first step is to look at
support operations and determine the role of CA Unicenter Service Desk in the target
environment. The introduction of new software is often an occasion for re-evaluating
practices and making improvements. ITIL® (Information Technology Infrastructure Library)
best practices are an excellent starting point. They have been adopted by enterprises all
over the world. ITIL best practices are often combined with practices and controls that
derive from principles of IT governance, such as CobIT® (Control Objectives for Information
Technology).
CA Unicenter Service Desk is designed to do more than simply record incidents and
problems. Beyond storing incident and problem records in a relational database, CA
Unicenter Service Desk provides an ecosystem where the service-desk call-taker,
technician, manager, and executive all can perform their jobs effectively. In this book, the
features of this ecosystem are discussed in detail.
Efficiency is important in business, and no less so for a service desk. Two of the
most significant developments are self service and knowledge management. Both of these
features move work from the service desk personnel to the clients of the service desk:
■ Self service users can create, review, and update their own entries, increasing the
number of service desk clients who can be supported by a single call-taker.
■ Knowledge management transfers knowledge to the end user without the intervention
of a service desk technician, freeing the technician for other tasks. It also helps service
desk staff share what they know with each other. CA Unicenter Service Desk performs
basic knowledge publication and administration, while its companion product, CA
Unicenter Service Desk Knowledge Tools, is an advanced system for complete
knowledge management.
These innovations have brought benefits to both service desk organizations and the end
users. Self service users are happy to avoid telephone queues, while service desk managers
are happy to see productivity improvements in their staff. Looking up a document and
fixing a problem instead of waiting for a technician saves time and gives users a satisfying
sense of accomplishment.
Automated and remote support is another means of reducing the cost of support while
increasing quality. Automated and remote support detects and corrects problems with
minimal human intervention. This level of automation has long been a goal of service
desks, but until fairly recently, it has been a goal, not a reality. CA now supplies
technologies and products that provide these capabilities in the form of CA SupportBridge
automated solutions.
ITIL has led the way to viewing the service desk as a service in and of itself. For every
service, there is always an expected level of service, although the expectation is often
implied and open to interpretation. Making those implicit expectations explicit and
supported by the service desk software increases the overall effectiveness of the service
desk and improves relations between the service desk and its customers. CA Unicenter
Service Desk contains flexible service level agreement support. There are many ways to
take advantage of this flexibility. This book provides the tools to take service level
agreements to a higher level.
Reporting is a vital part of every enterprise management application, and the service desk is no
exception. Reports are probably the most customized part of any system. They can take
many forms: paper, online, and dashboards. The CA Unicenter Service Desk Dashboard
provides flexible and dynamic graphic management reports. CA Unicenter Service Desk
handles reporting in several different ways, each specialized for specific purposes. Reports
can be generated and formatted with required information in varying presentations. This
book contains advice from experts who know service desk reporting inside and out and
have experience with a wide range of reporting requests.
A service desk technician is often an expert in many areas. Desktops, servers, peripherals,
applications, and networks all are touched by the service desk organization. In order to do
the job well, the service desk must be able to work quickly and conveniently with other
systems. CA Unicenter Service Desk expedites the job by integrating with other products.
But integration is not always easy. A customer may want something special. A legacy
installation may present special problems. This book contains notes about many of the
integrations currently available. These notes will also be a source of suggestions for solving
integration problems that are not addressed here.
Although architectural choices are typically made before implementation begins, the section
on architecture is found toward the end of this book. The first chapters all are related to
process decisions. Will we use self service? Which applications will we integrate with? How
should our security work? Each one of these decisions modifies architecture requirements.
Choosing the right architecture provides the means to implement the processes. Without
deciding on the processes to be implemented, the architecture is only a guess. CA
Unicenter Service Desk provides architectural alternatives, each suited to different
situations. The best way to plan an architecture is to understand your process decisions.
In sum, CA Unicenter Service Desk is a large product that can be configured and tailored to
fit varying requirements. This book will help you configure, tailor, and tune it to your needs.
This book provides the consultant, architect, or manager with best practices for
implementing CA products to support incident management, problem management, and the
service desk function.
Who Should Read This Book?
CA Unicenter Service Desk, CA Unicenter Service Desk Knowledge Tools, and CA Unicenter
Service Desk Dashboard are the primary products addressed. The complementary CA CMDB
and CA SupportBridge products are also discussed.
Readers of the more technical areas in this book will benefit greatly from some prior
familiarity with the primary products. Therefore, readers are encouraged to make use of the
standard product documentation and to attend the relevant CA Education courses.
Although this book is not designed for executives, some parts of it, especially some of the
higher-level best practices, may be of interest to senior management.
■ Some say that support success has nothing to do with the attitudes of the users the
service desk supports; the service desk exists to ensure that services are delivered. If
the service desk keeps vital services up and running during scheduled hours, it is
successful, regardless of the opinion of its users.
■ Other sites rely heavily on service level agreements to measure the success of the
service desk. If the service levels are met, the service desk is successful. No other
questions are asked.
There is no ideal measure. In reality, all service desk installations use all of these measures
to some extent. Most combine the measurements and implicitly assign each a weight that
reflects the goals and attitudes of the organization.
Even with an articulated and measured concept of success, there are many choices to make
in managing a service desk. CA Unicenter Service Desk has flexibility built into its
foundation to accommodate a wide range of these choices. More than anything else, it is
designed to accommodate different styles of support management. The CA Unicenter
Service Desk development team has worked very hard to meet two goals that often
conflict: maximize flexibility in accommodating different support settings and goals, but, at
the same time, minimize the difficulty and cost of upgrading sites that have been tailored
and tuned to site-specific requirements.
Support Roles
■ At most service desks, the bulk of incidents are opened by the end users of the services
that the service desk supports. However, end users are not the only source of
incidents. Ideally, a significant number of incidents are opened automatically through
monitoring systems such as CA Unicenter Network and Systems Management (CA
Unicenter NSM).
■ When the primary channel for user-opened incidents is the telephone, there may be a
specialized group on the service desk staff that only answers calls. Typically the call-
taker will exercise some form of triage: assigning a preliminary priority to each
incident, perhaps assigning the ticket to an analyst who specializes in a particular
subject area, and moving critical incidents to the head of the queue.
■ The next role is the core of the service desk staff. These people are called “analysts” in
the CA Unicenter Service Desk product and documentation, although they are also
frequently called service desk technicians, support representatives, or customer service
representatives (although “customer service representative” is usually limited to
external customer service rather than internal service desks and help desks).
Technicians are expected to resolve the bulk of incidents that come to the service desk.
They are usually generalists, expected to be able to handle typical user needs, but not
be experts in any specialized area. One of the challenges of service desk management
is training service desk technicians in the latest technology while balancing the load of
processing incidents.
■ Subject matter experts (SMEs) are used as a resource by the service desk for their
expertise in some area. Typically, these experts are not members of the service desk
staff and are generally not trained in service desk procedures. Often, SMEs are equated
to level two technicians, meaning that they are assigned incidents and problems that
cannot be resolved by the front line technicians.
■ Some sites distinguish a third level of technician. These are almost always SMEs who
work on extraordinarily difficult or urgent incidents or problems.
■ Service desk managers are responsible for carrying out the support directions given by
upper management and supervising the activities of the service desk staff. Support
managers are typically the heaviest users of service desk reports and dashboards. They
use these measures to decide how to allocate staff and assign priorities, and establish
policies in order to achieve support goals.
■ Upper management, the CIO, CFO, CEO, and often an IT committee, usually view
support as part of the IT portfolio that must be governed to conform to the overall
direction of the enterprise. Reports that clearly describe the costs and benefits of
service desk are an important part of an accurate presentation of the service desk in
the IT portfolio.
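The triage workflow described for the call-taker role above (assign a preliminary priority, optionally route to a specialist, and keep critical incidents at the head of the queue) can be sketched with a priority queue. This is an illustrative sketch only; the field names and priority scale are assumptions for the example, not CA Unicenter Service Desk's actual schema.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Incident:
    priority: int                              # assumption: 1 = critical ... 5 = low
    summary: str = field(compare=False)        # excluded from ordering
    assignee: str = field(compare=False, default="unassigned")

queue: list[Incident] = []

def triage(incident: Incident, specialist: str = "") -> None:
    """Route to a specialist if one is indicated, then push onto the heap;
    the heap keeps the lowest-numbered (most critical) incident at the head."""
    if specialist:
        incident.assignee = specialist
    heapq.heappush(queue, incident)

triage(Incident(3, "Printer offline"))
triage(Incident(1, "Email server down"), specialist="Messaging SME")

first = heapq.heappop(queue)   # the critical incident comes out first
```

Because only the priority field participates in ordering, two incidents with equal priority are simply served in an unspecified relative order, which mirrors a call-taker working through a queue of equally urgent tickets.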
Note: Following ITIL best practices, problem resolution is distinguished from incident
resolution. Sometimes separate staff is assigned to each area. The distinction between
incidents and problems is not the same as the distinction between level one and level two
technicians. Resolving an incident restores service, but there may be a longer-term root cause
that must be addressed with a problem resolution. Resolving a problem can require a high
level of expertise, but not always. Incident resolution can require enormous skill to restore
service quickly, while long-term problem resolution can be a very simple but time-consuming
task.
Support and Compliance
IT compliance means following the laws and regulations that affect IT. The Sarbanes-Oxley
statute on the integrity of financial reporting is an example of a law that affects IT.
An efficient and well-run service desk is an important part of compliance, because
incidents often jeopardize it. Incidents will occur, and when they do, they must be
responded to quickly and efficiently to remain in compliance. In addition to
quick resolution of incidents, compliance usually requires a record of when the incident
occurred, exactly what occurred, and the response taken. A service desk with a thorough
log of activity is important in keeping these records straight.
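The three elements the text calls for, when the incident occurred, exactly what occurred, and the response taken, can be captured in an append-only log of immutable records. This is a minimal sketch under assumed names, not a description of how CA Unicenter Service Desk stores its activity log.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    occurred_at: datetime   # when the incident (or response) occurred
    description: str        # exactly what occurred
    response: str           # the response taken

def log_event(log: list, description: str, response: str) -> AuditEntry:
    """Append an entry stamped in UTC; frozen=True means an entry
    cannot be modified in place after it is recorded."""
    entry = AuditEntry(datetime.now(timezone.utc), description, response)
    log.append(entry)
    return entry

activity_log: list[AuditEntry] = []
log_event(activity_log, "Payroll batch job failed",
          "Job restarted; incident opened")
```

Keeping entries immutable and the log append-only is what makes such a record useful for compliance: the history can be extended but not silently rewritten.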
In addition to record keeping, a service desk that can prompt users to follow appropriate
procedures is very helpful. Knowledge tools that present users with appropriate
documentation on compliance are also useful.
■ KGIs measure the achievement of enterprise financial and customer goals, which
appear in documents like financial and sales reports that are compiled after the actual
business is completed.
■ KPIs gauge internal processes and innovation. These measurements indicate the
likelihood of favorable business outcomes in the future.
Note that KGIs are all comparisons with a baseline, hopefully showing improvement over
time.
KPIs are measures without baselines. CobIT suggests looking at metrics such as the following
to gauge the likelihood that the KGIs above will be met:
■ Elapsed time from identifying a symptom to entry in the service desk system
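The elapsed-time metric above reduces to simple timestamp arithmetic. The function and parameter names here are illustrative, not part of any CA product API.

```python
from datetime import datetime

def symptom_to_entry_minutes(symptom_identified: datetime,
                             entered_in_system: datetime) -> float:
    """Elapsed time, in minutes, from identifying a symptom to its entry
    in the service desk system."""
    return (entered_in_system - symptom_identified).total_seconds() / 60.0

# A symptom noticed at 09:00 and logged at 09:45 gives a 45-minute lag.
delay = symptom_to_entry_minutes(datetime(2010, 5, 24, 9, 0),
                                 datetime(2010, 5, 24, 9, 45))
```

Tracked over many incidents, the trend of this number (rather than any single value) is what indicates whether the intake process is improving.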
Chapter 3: Best Practices
This chapter is an important part of this book, not because it can be copied, but because it can
studied for relevance and used as appropriate guidance for your organization. It is a myth
that generic guidance is there to be copied; it is there to be adapted, assimilated, and
automated. This can only be accomplished when your organization is familiar with the
policies and the impact of the cultural change that will be needed to ensure that everyone
wants to adhere to new practices.
This chapter is designed to help you identify the best practices of technology and
technology processes. It does not recreate best practices from ITIL or any other source, nor
does it demand blind adherence. It is designed for you to be able to put together a clear
understanding of requirements for your specific business needs that will benefit from a best
practices approach.
Subway Maps
In the 1850s, the engineers of Victorian London designed the immense underground
system for mass transport of the population around the capital, with a myriad of possible
entry and exit points. They designed a complex series of linked tracks, with careful
consideration given to details such as making sure that tunnels did not sit directly above
other tunnels so as to avoid collapse. They ensured that the even more complex above-
ground urban terrain did not complicate their selected solutions, and they made sure that if
you needed to ascend or descend in order to use their services, the underground tracks
could be reached by elevators to facilitate changing both direction and altitude without too
much inconvenience.
The London Underground now covers 253 miles and has 275 stations. It is both above and
below ground, as well as being multi-level under the ground. Rather cleverly, a “circle” line
was designed and built to ensure that many of the main tracks were linked at multiple
access points.
One thing these engineers did not do was consider how they might explain to
customers how to get from point A to point B. Eventually a map
was produced that simplified the underground by ignoring the multiple levels and largely
ignoring the above-ground topography. Essentially it took a three-dimensional model and
collapsed the view to two dimensions; it also abstracted the reality of the topography to a
form that was simple to construct, display, and understand.
IT Maps
We can apply the analogy of the map of the London Underground to explaining the IT
Infrastructure Library (ITIL). The complexity of ITIL is legendary. Fortunately, by borrowing
from the genius who simplified the engineering of the subway systems, it is possible to
explain ITIL in a rational manner, in a way that has depth and that shows the most
important interfaces, yet can be understood by the average person. If you can read a
subway map, you can understand the complexities of ITIL.
There are really only two concepts that you need to get straight to understand how ITSM
and ITIL can be explained in the same way as a nineteenth-century engineering paradigm.
■ First, all best practices are based on the Deming Cycle of Plan, Do, Check, Act. All focus
on gradual continuous improvement using this approach.
■ The second is that you need a means to assess and measure improvement, and in IT
the most common way is to assess levels of maturity using a capability maturity model
(CMM) analogous to that developed by Carnegie Mellon University.
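The two concepts above can be sketched as minimal data structures. This is a hypothetical illustration only; the names of maturity levels 1-3 are assumptions (the document itself defines only levels 4 and 5, "Integration" and "Optimization"):

```python
# Illustrative sketch: the Deming (PDCA) cycle and a CMM-style maturity scale.

PDCA = ["Plan", "Do", "Check", "Act"]

def next_phase(phase):
    """Return the phase that follows `phase`; the cycle repeats continuously."""
    i = PDCA.index(phase)
    return PDCA[(i + 1) % len(PDCA)]

# Level names 1-3 are assumed for illustration; 4 and 5 match the
# definitions given later in this document.
MATURITY_LEVELS = {
    1: "Initial",
    2: "Repeatable",
    3: "Defined",
    4: "Integration",
    5: "Optimization",
}
```

The wrap-around from "Act" back to "Plan" is what makes the cycle a model of gradual, continuous improvement rather than a one-time project.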
CA has created two maps that look like and can be used in the same way as a subway map:
Operational, tactical, and strategic processes are discussed at the outset, and proper
thought is given to how the “levels” will be achieved. The goals of both business and IT are
considered for the programs/projects. The “Plan, Do, Check, Act” cycle becomes, in effect,
our “circle line,” and the ITIL process “tracks” are located in the most appropriate location
to illustrate major interfaces and “direction.” ITIL is not definitive (nor can it be, because of
the broad nature of the guidance), but it can be illustrated in a way that lets us see the big
picture and not get distracted by the tunnels: the procedures, software, and organizational
changes that need to be addressed to make it all work.
24 Best Practices
Mapping the System
Incident Management
Here is a map of the most commonly implemented ITIL process, incident management:
The track shows the most important subprocesses, or stations, but does not confuse things
by going into detail on either the business issues or the IT software supporting it. It is
placed at the “check/act” point of the cycle since it is the check on all of the other
processes, and the major purpose is to take action to get things up and running for the
user.
ITSM is mature in the marketplace, and most organizations will want to link incident and
problem management. So the problem track starts out at the "act" corner but, because it
is a process focused on eradicating problems, it proceeds right through the P-D-C-A cycle.
This allows us to start connecting the stations logically in a manner that provides the
opportunity to consider other processes at the right time.
Additional References
A book by several CA authors, “Service Management Process Maps - Your Route to Service
Excellence,” which develops analogies for each ITIL process track, will be published in Q2
2007 by Van Haren publishers.
Comprehensive information about CA's best practices for Service management may be
found in the Service Management section of the Business Service Optimization pages on
www.ca.com (http://www.ca.com).
IT Capability Solutions
An IT Capability Solution is a set of processes and activities that are aligned to achieve a
specified business outcome. In other words, a Capability Solution is designed to address
IT's most pressing business requirements.
Capability Solutions were developed by CA to help companies optimize and automate their
IT processes and infrastructure. They are based on industry best practices and standards,
and address specific business needs by enabling the achievement of a desired future state.
Each Capability Solution comprises a set of artifacts that document the processes and
activities required to achieve a desired level of IT effectiveness. This artifact collection
provides the foundation to solve a common business need that many customers experience.
The incident and problem management Capability Solution does the following:
■ Automates IT processes to consolidate, log, track, manage, and escalate incidents and
problems.
This Capability Solution is the basis for delivery of a service desk that leverages certified
best practices, such as ITIL, to unify incident and problem management.
Maturity Models
Capability Solution Components
Assessments
Assessments are structured CA service offerings that are used to analyze an organization's
current maturity level, identify and prioritize process gaps, and define an organization's
desired "to be" state.
Solution Blueprints
Based on a four-step maturity model, the solution blueprints help achieve a desired future
state by charting a clear and practical course for IT infrastructure and IT business
processes. They help to lay out IT processes and technology capabilities that enable
movement to the desired future state. They outline business, information, application, and
infrastructure requirements. They also assess the potential impacts on people, process, and
technology, in relationship to an organization's goals. They define a plan that fulfills the
organization's needs and can be executed effectively.
CA Reference Architectures
A reference architecture has been created for each Capability Solution to speed up
solution implementation and ensure quality. Each reference architecture consists of the
Solution Architecture Overview (SAO), the Solution Architecture Specification (SAS), and a
Process Workflow Engine.
ROI Tools
The ROI tools project the business value of moving from a current IT environment to a
more automated and efficient IT state. The ROI tools are integrated with other CA delivery
methodology components such as the CA Profiler, the Maturity Model tools, and Solution
Blueprints. The ROI tools factor in operational, maintenance, and capital costs; quantifiable
productivity and efficiency gains; and the people and technology investments required to
achieve the desired future state.
Summary
CA's Capability Solutions provide a framework to help you systematically automate and
improve your IT infrastructure management operation. Based on the desired end state and
level of investment, you will have the foundation to create a business-driven IT
organization that enables you to do the following:
■ Identify and prioritize the most critical IT processes that can be optimized
■ Determine the right combination of technology, processes, and people for the desired
outcome
■ Chart a specific path based on best practices and pre-built solutions to automate critical
processes
Detailed information on the IT Capability Solutions may be found in the Service Support
section of the Service Solutions and Education pages of www.ca.com (http://www.ca.com).
CANEXION
One of the biggest challenges facing IT organizations is how to provide value to the
business and achieve “best practices” service management excellence while significantly
reducing implementation time and costs.
CA customers can download a demo version of CANEXION that includes best practices for
the service support processes. To perform the download, access the following
link: http://www.ca.com/Solutions/Collateral.aspx?CID-90120. For more information on the
solution, please refer to the following resources:
http://www.ca.com/Files/SolutionBriefs/canexion_solutionbrief.pdf
The following graphic provides an example of the details for an incident and problem
management best practices process:
When the Incident Recording box in the flowchart is clicked, work instructions for executing
this piece of the process, as implemented in CA Unicenter Service Desk, are shown:
Comprehensive information about CA's best practices for service management may be
found by choosing the Best Practices Programs link from the Service Management section
on http://www.ca.com/us/service-management.aspx.
Utilizing CA Technology
This section discusses the service desk function and how it relates to the ITIL processes in a
CA Unicenter Service Desk implementation. It also outlines the factors that assist in driving
points of integration with other toolsets, which can provide an additional layer of
effectiveness in your implementation. The section on enhancing the processes with the use
of technology shows how you can significantly improve communication between
support groups and increase the capabilities of the service desk.
■ The implementation of the problem management process. This leverages the
incident management process to identify true problems, then manages the
Root Cause Analysis and, finally, the remediation of the root cause via the
change management process, while creating the Known Error record determined in the
analysis.
Understanding the CIs affected by incidents, problems, and changes in the environment lets
you quickly determine the risk in any situation. Leveraging the information contained in the
CMDB also enhances the understanding of impact to the business services. The use of
technology lets you effectively manage customer expectations, track the functions occurring
at the service desk, and evaluate how the current processes are functioning so that
continual process improvement can occur.
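The idea of determining risk from affected CIs can be sketched as a walk over a CMDB-style dependency graph. This is a hypothetical illustration; the graph, service names, and CI names are invented, not taken from any CA product schema:

```python
# Illustrative sketch: which business services are at risk when a CI fails?
# Maps each business service to the CIs it depends on (invented data).
DEPENDS_ON = {
    "Email Service": ["mail-server-01", "dns-01"],
    "Web Shop": ["web-server-01", "db-server-01", "dns-01"],
}

def services_at_risk(failed_ci):
    """Return the business services whose dependencies include failed_ci."""
    return sorted(s for s, cis in DEPENDS_ON.items() if failed_ci in cis)
```

A shared CI such as the DNS server immediately shows a wider business impact than a CI used by a single service, which is exactly the understanding the CMDB is meant to provide.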
You must use technology effectively to get the most out of these processes. You need to
track not only that an end user had an outage, but also the actual effort
associated with that outage. Each step along the troubleshooting process should be
documented within that incident so that you can understand what effort was expended. This
allows problem management to look for more effective ways to streamline the root cause
analysis and resolution of problems in the environment.
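The effort tracking described above can be sketched as a per-step log on the incident record. The field names and sample data here are assumptions for illustration, not an actual product schema:

```python
# Illustrative sketch: recording effort against each troubleshooting step
# of an incident, so problem management can see where time actually goes.

def total_effort_minutes(incident):
    """Sum the effort recorded against each documented step."""
    return sum(step["effort_min"] for step in incident["steps"])

incident = {
    "id": "I-1001",
    "steps": [
        {"action": "triage call", "effort_min": 5},
        {"action": "reproduce outage", "effort_min": 20},
        {"action": "apply workaround", "effort_min": 10},
    ],
}
```

With per-step figures, problem management can see which stage of resolution consumes the most effort, rather than knowing only that an outage occurred.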
Also, by leveraging not only a Known Error database (normally housed in a knowledge
facility like CA Unicenter Service Desk Knowledge Tools) but also knowledge in general, a
service desk can function very effectively while extending self-service capabilities (call
avoidance) to the end-user population.
There are several areas that you must determine at the outset of implementing the incident
management process. First, you must define the incident statuses to be used before you
create them. Examples of effective incident statuses and their usage are given in the
chapter called “Effective Use of Statuses, Priorities, Root Causes, Service Types, and
Categories.”
Next, you must develop the categorization scheme that you want to follow. This is normally
determined by looking at the Key Performance Indicators (KPIs) you must maintain and at
the management reporting requirements. A good categorization scheme will assist in the
incident matching process, which is one of the stages in the problem management process.
The KPIs will analyze types of calls to verify continual process improvement, not only
in the number of incidents in each category, but also in the average time needed to
resolve the incidents in each category. This is the only way to measure whether the
functions provided are improving customer service, reducing costs, and enabling the
business to function more productively.
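The two KPIs just described — incident volume and average resolution time per category — can be sketched as follows. The incident records are invented sample data:

```python
# Illustrative sketch: per-category KPIs (count, average resolution time).
from collections import defaultdict

def kpis_by_category(incidents):
    """Return {category: (count, average_resolution_minutes)}."""
    buckets = defaultdict(list)
    for inc in incidents:
        buckets[inc["category"]].append(inc["resolution_min"])
    return {cat: (len(v), sum(v) / len(v)) for cat, v in buckets.items()}

incidents = [
    {"category": "Email", "resolution_min": 30},
    {"category": "Email", "resolution_min": 90},
    {"category": "Network", "resolution_min": 45},
]
```

Tracked period over period, a falling count or average per category is the measurable evidence of continual process improvement.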
The categorization scheme is one of the most critical decisions in the implementation of CA
Unicenter Service Desk. This category, which is called the Incident Area, has several
features that provide additional value to your organization. First is the ability to ask
questions based on the selection of an Incident Area. These questions, called Properties,
let an organization gather further data dynamically. You can even make
any or all of the Properties required when that Incident Area is selected.
An example of an Incident Area with a required Property might be Email Outage with a
required Property of “What Email Client are you using?” Selections could then be Webmail,
Outlook 2000, or Outlook 2003.
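The Incident Area / required Property idea, using the document's own Email Outage example, can be sketched like this. The structure is illustrative only, not the actual CA Unicenter Service Desk schema:

```python
# Illustrative sketch: each Incident Area can declare required Properties
# that must be answered before the incident is complete.
INCIDENT_AREAS = {
    "Email Outage": {
        "required_properties": ["What Email Client are you using?"],
    },
    "Password Reset": {"required_properties": []},
}

def missing_properties(area, answers):
    """Return the required Properties not yet answered for this area."""
    required = INCIDENT_AREAS[area]["required_properties"]
    return [p for p in required if p not in answers]
```

An incident would only be accepted once `missing_properties` returns an empty list, which is how a required Property enforces data gathering at logging time.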
An additional feature derived from the Incident Area is the automatic assignment of
the incident to the appropriate group based on the type of outage. This way the support
matrix is enforced by the service desk application rather than by having analysts read
through a document to determine the correct group ownership.
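Enforcing the support matrix in the tool amounts to a category-to-group routing table. This is a hypothetical sketch; the group names are invented:

```python
# Illustrative sketch: the support matrix as a routing table, so ownership
# is assigned by the application rather than looked up in a document.
ASSIGNMENT_MATRIX = {
    "Email Outage": "Messaging Support",
    "Network Outage": "Network Operations",
}

def assign_group(area, default="Service Desk Level 1"):
    """Return the owning group for an Incident Area, or a default if unmapped."""
    return ASSIGNMENT_MATRIX.get(area, default)
```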
Another important mechanism is the ability to associate a Service Type to the incident
based on the selected category. Service Types are escalation and notification rules that
assist the service desk in ensuring that the events that govern the Service Level associated
with an incident, problem, or change are managed effectively. This ensures that each
customer has the end user experience that was agreed to in a Service Level Agreement.
Service Types can be assigned based on the following areas: category, priority,
organization, affected CI, or end user. See the “Service Types” and “Effective Use” chapters
of this document for examples.
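Service Type selection across those areas can be sketched as an ordered rule list. This is a hypothetical illustration; the precedence order, rule values, and Service Type names are assumptions, not documented product behavior:

```python
# Illustrative sketch: attach a Service Type (escalation/notification rule
# set) to an incident. Rules are checked in order; the first match wins.
SERVICE_TYPE_RULES = [
    ("affected_ci", "payroll-db", "4-hour resolution"),
    ("priority", "1", "1-hour response"),
    ("category", "Email Outage", "8-hour resolution"),
]

def service_type(incident):
    """Return the first Service Type whose rule matches the incident."""
    for field, value, stype in SERVICE_TYPE_RULES:
        if incident.get(field) == value:
            return stype
    return "default"
```

Ordering the rules encodes which area (CI, priority, category, and so on) should dominate when several Service Types could apply to the same incident.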
Automatic generation of incidents via monitoring tools like CA Unicenter Network and
Systems management is very powerful. The ability to create incidents based on alerts or
events provides an increased understanding of outages, as well as the potential for
decreasing the Mean Time to Resolution due to an earlier notification on outages. Creating
incidents from other tools is also possible, such as creating incidents from policies in CA
Unicenter Asset Management or from rules being broken in CA Cohesion Application
Configuration Manager (ACM), which comes with the CA CMDB.
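The alert-to-incident integration can be sketched as a simple mapping from a monitoring event to an incident record. The event fields and the severity-to-priority table are invented for illustration, not an actual CA integration API:

```python
# Illustrative sketch: turning a monitoring alert into a service desk
# incident, for earlier notification of outages.
SEVERITY_TO_PRIORITY = {"critical": 1, "major": 2, "minor": 3}

def incident_from_alert(alert):
    """Build a minimal incident record from a monitoring event."""
    return {
        "summary": f"{alert['resource']}: {alert['message']}",
        "priority": SEVERITY_TO_PRIORITY.get(alert["severity"], 4),
        "source": "monitoring",
    }
```

Because the incident is created the moment the event fires, rather than when a user calls, the clock on Mean Time to Resolution starts earlier.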
Next is the determination of the priority scheme to be used within the organization. Within
the application itself is a priority that represents the overall criticality of the incident,
problem, or change. Within the incident management process, there is an incident priority
that is calculated from the impact and urgency of the call; this calculation can determine
the actual priority of the incident itself.
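A priority calculated from impact and urgency is conventionally expressed as a matrix. The exact matrix below is a hypothetical example; which mapping to use is a local policy decision for each organization:

```python
# Illustrative sketch: a priority matrix deriving incident priority from
# impact and urgency (1 = highest on every scale).
def priority(impact, urgency):
    """Map impact and urgency (each 1-3) to a priority from 1 to 5."""
    matrix = {
        (1, 1): 1, (1, 2): 2, (1, 3): 3,
        (2, 1): 2, (2, 2): 3, (2, 3): 4,
        (3, 1): 3, (3, 2): 4, (3, 3): 5,
    }
    return matrix[(impact, urgency)]
```

Letting the tool compute priority this way removes subjective judgment from the analyst at logging time.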
An analyst's knowledge and the resolution that was discovered for an outage can save a
significant amount of resolution time for other analysts who see this same type of outage.
An effective knowledge management process is also a great place to store the policies
and procedures for all of your ITIL processes and internal policies. Having a tool with
advanced searching, navigation, and knowledge lifecycle management capabilities
is a significant enabler of the ITIL processes.
Most organizations have been performing some type of incident and change management
processes in the past, whether or not they labeled them as such. The problem management
process is different from the other processes. This is where you must do trending analysis
and historical analysis, and determine the Root Cause of a specific failure type. The problem
management process can then keep this failure from recurring and create Known Error
documents that detail the remediation process so that if this failure ever returns it can be
eradicated quickly through incident management.
The major problem management activities are problem control, error control, proactive
problem management, reporting, and problem reviews. They enable the problem
management process to handle problems efficiently within the service support process.
Problem management is tightly integrated with the change management process as its
mechanism to eradicate errors in the organization via a Request for Change (RFC). Once
the RFC has been completed, the problem resolution can be verified, the problem can be
closed, and the Known Error document can be completed and made available for viewing by
the incident management process.
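The flow just described — problem identified, RFC raised and completed, resolution verified, problem closed with a Known Error document published — can be sketched as a simple state sequence. The state names are illustrative, not a product workflow:

```python
# Illustrative sketch of the problem lifecycle described above.
PROBLEM_FLOW = [
    "identified",           # trending / root cause analysis begins
    "rfc_raised",           # Request for Change submitted
    "rfc_completed",        # change implemented
    "resolution_verified",  # fix confirmed against the problem
    "closed",               # Known Error document completed and published
]

def advance(state):
    """Move a problem to the next state; 'closed' is terminal."""
    i = PROBLEM_FLOW.index(state)
    return PROBLEM_FLOW[min(i + 1, len(PROBLEM_FLOW) - 1)]
```

Making the Known Error document a product of the final transition ensures incident management never sees an unverified workaround.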
Most of the problem management process is managed within CA Unicenter Service Desk
and CA Unicenter Service Desk Knowledge Tools. The process is used to drive the root
cause analysis and build up the Known Error and Workaround knowledge database. The
effectiveness of the root cause process within problem management is measured
by the usage and effectiveness of the Known Errors and Workarounds contained in the
Known Error database.
The knowledge management facility within CA Unicenter Service Desk Knowledge Tools
provides a mechanism not only for standard technical and non-technical documents but also
for the Known Errors and Workarounds, which need the same search capabilities and
categorization features that knowledge management enables. The knowledge management
facility also allows for proper workflow around the management of the knowledge document
itself, from draft to retirement, thus ensuring that the Known Error and Workaround
documents are clearly articulated and follow proper knowledge-creation techniques.
Summary
From a technology perspective, the proper use of CA Unicenter Service Desk enables the
incident and problem management processes to function more effectively, by
automating as many functions as possible. By properly setting up the incident and problem
statuses that are being used, each person involved in these processes knows what stage
the incidents and problems are currently in. Also, using impact and urgency settings, you
can allow the technology to determine the actual priority of the outage. Technology also
gives you the ability to categorize the incidents and problems properly. This enables
automatic assignment to the appropriate support group while at the same time asking the
proper questions based on the Category selected. You can also leverage the service desk to
be able to set the appropriate Root Cause of a problem and report against the Root Cause
and the Category of the incident or problem.
Building the knowledge base with your Known Errors, Workarounds, and general knowledge
will significantly increase the effectiveness of your service desk. Finally, leveraging Service
Types to enable proper event escalation and notification rules will ensure each incident or
problem is handled effectively within the organization and that no incident or problem goes
unnoticed.
All of this information lets you pull the necessary information for the KPIs so you can
continually improve each of the processes and gain an understanding of how effectively the
service desk is performing.
There are several options when it comes to setting up the proper support model within your
ITIL implementation. ITIL version 2 identifies local, central, and virtual service desks. Each
of these has strengths and weaknesses. What you need to avoid is the “Log and Flog”
approach that some organizations have today, where you just take the calls and try to
process them; gather some data, but not all; and throw them over the fence to the next
support team with no ownership of the original call whatsoever.
Traditionally speaking, organizations with a single location have the support staff for each
area all in one place, making this easy to manage. However, you can have a local
service desk in large organizations as well; each area, region, or location
would have its own service desk and function on its own. The pitfalls of this approach
are keeping the processes consistent across each of these service desks and reporting
across them all.
One of the greatest strengths of the local service desk is that it is relatively easy to
manage effectively. However, be aware that this strength diminishes once you have
more than one site to support, and with each new site you see diminishing returns
on your investment. Many companies today have several local service desks, and their
major problem is that there is no consistency in the delivery of service to the business.
This is where other support models may provide a better support function than the local
service desk model. Also, multiple local service desks will not provide consistent service
reporting across those entities, making consolidated reporting difficult.
This model is easily supported with CA Unicenter Service Desk and can be enhanced by
adding CA Unicenter Service Desk Knowledge Tools, which provides advanced knowledge
management capabilities.
The centralized service desk is where all calls are logged in a central location, even though
the organization has multiple locations. This can provide benefits such as a reduction in
operating costs, a consolidated reporting structure, and possibly even improved resource
utilization. Many organizations have this structure in their environment today. They
leverage the ability to manage everything from one location, providing reduced personnel
costs and increased communication between support groups.
Having the centralized service desk model in place gives the organization one-stop
shopping, and thus the ability to provide greater visibility into the actual
capabilities of the IT organization and its support function. This support model also provides
the basis for understanding the actual utilization of your resources and shows you
where more resources are needed for improved delivery of services.
CA Unicenter Service Desk and Knowledge Tools can fully enable this support model. With
the tight integration between these two products, you can seamlessly segment the
necessary functions where they reside, as well as the necessary knowledge documents.
The virtual service desk is an open architecture model; it doesn't really matter that
everything is in one location, as long as everyone has access to the technology via the Web
or some other resource. In today's world this is becoming more and more prevalent; this is
the model that best fits the future, the "follow-the-sun" model. With the
increase in technology capabilities, the future is bright for the virtual service desk
model. In the near future, you will be able to connect, from anywhere,
to the resources you need to resolve incidents and problems on the fly.
With the capabilities of high speed wireless devices and wireless cards, you have the ability
to get to the information that you need, when you need it, providing the optimum
effectiveness for a follow-the-sun model.
The virtual service desk model allows a company that is very diverse in its locations and
functions to manage the policies, procedures, escalations, and notifications that
are necessary to ensure that the proper level of service is delivered back to the business.
Leveraging the proper incident and problem management procedures in a virtual service
desk model allows the organization to grow effectively while providing a substructure with
fewer personnel. Ultimately the remote worker will be able to effectively provide support
from anywhere in the world to those that need assistance.
Additionally, since CA Unicenter Service Desk is web-enabled, all functions can occur from
anywhere, at any time. This gives the virtual service desk the ability to secure the
necessary information globally with data partitions and form groups.
Form groups let you determine, at the form level, who can see which fields based on
their login. Data partitions provide, at an SQL level, the ability to segment the information
that populates the forms. Thus you can limit what information different groups can access
and what information they can change. This flexibility can also provide the
necessary metrics or KPIs to determine whether the virtual service desk is performing at
an adequate level and continually improving.
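The two mechanisms just described — field-level visibility via form groups and row-level visibility via data partitions — can be sketched as follows. The role names, field sets, and partition constraint are invented for illustration, not actual product configuration:

```python
# Illustrative sketch: form groups decide which FIELDS a role may see;
# data partitions decide which ROWS a role may see.
FORM_GROUPS = {
    "analyst": {"summary", "priority", "internal_notes"},
    "employee": {"summary", "priority"},
}

DATA_PARTITIONS = {
    # e.g. an EMEA analyst sees only incidents from the EMEA region
    "emea_analyst": lambda row: row["region"] == "EMEA",
}

def visible_rows(role, rows):
    """Apply the role's data partition; no partition means see everything."""
    keep = DATA_PARTITIONS.get(role, lambda row: True)
    return [r for r in rows if keep(r)]

rows = [{"id": 1, "region": "EMEA"}, {"id": 2, "region": "APAC"}]
```

Combining the two gives fine-grained security: the partition trims the result set, and the form group trims what each remaining record exposes.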
Summary
When you think about the proper support structure for your service desk, make sure that
you look at the end result you are working toward. Determine which metrics you need to
measure against and the best method for getting this accomplished.
As you create this support structure, you will start to understand the true
value of effective process management at the service desk. You will also learn how
technology enables these different support models to work as effectively as possible,
letting you grow the support services while maintaining an effective, efficient
support organization with the increased communication and escalation processes in place.
There is a misconception in the marketplace about the ITIL best practices and how they
relate to implementation. Since ITIL has release numbers, people think that they can order
a particular version of the best practice off the shelf and deploy it in their organizations.
ITIL best practices are not enabled by implementing a toolset that is ITIL compatible. The
real implementation of ITIL is one of People, Process, and Technology.
This section tells you how to enable or enhance your best practices implementation.
Additionally, there is information from a technology point of view that explains how CA
Unicenter Service Desk is capable of supporting this initiative.
What Needs to be Done?
Where Do We Start?
All best practices implementations start with a business requirement that answers the
question: why are we going down the best practices path with our organization? Every
organization is looking for improvement. Even if “Never change a winning team” might be
very attractive, there is always room for improvement. Reasons to implement ITIL best
practices include the following:
As you can see, there are various reasons for organizations to define their business
requirements and justify the implementation. The bottom line is that people want to
close the gap between IT and the business, and to make sure that both are aligned in order
to reach the common business goals.
Once you have defined the business requirements for the best practices implementation, it
is time to get management commitment. This is significant, because a tool alone is never
enough to enable ITIL; you may also require organizational change to ensure that the
requirements and objectives are met. Consider the "Friday afternoon golf
course syndrome."
Imagine your CIO at a golf course with his peers. Just before he tees off one of the other
CIOs tells him, “We do ITIL and we perform much better now.” Your CIO thinks, “Hmmm, I
never heard of that but it sounds great.”
The next thing he does is drive back to the office, where he tells the IT director that the
company is doing ITIL, and the IT Director is responsible for it. The IT Director asks, “But
how?” The CIO responds: “Just do it with what you have today.”
The next Friday the CIO is back on the golf course and when he swings and hits the ball he
says to the others: “Oh, by the way, we do ITIL as well and it is great!”
And what about the IT Director, you ask? Well, he is still trying to figure out how it
works and where to get the funding.
A story? Unfortunately, this is reality for many organizations, and it is not what we mean by
"management commitment." There is more that needs to be done to make best practices
effective.
What Is Next?
Making a start is probably the biggest challenge for all organizations. How do you
determine what you already have and what else you need to implement? Even more
interesting, how are you going to measure your progress, and how do you adjust?
The advice here is to keep it simple and structured. Remember that "Rome was not
built in a day." You will need to determine, as an organization, where to start and what is
most important. The key to the solution can be found in a structured approach. The figure
below shows the approach for enabling processes.
Reviewing your current way of working can lead to some very nice surprises. Believe it or
not, what you do today for processes bears a lot of resemblance to the ITIL best practices
as they are documented. Again, the best practices are descriptive, not prescriptive. They
provide you with a framework from which to select the options that you find valuable.
To determine where you are (your current state), you will need to conduct an analysis of
the ITIL processes to make sure you have a clear view on your current way of working and
where improvements can be made. These analytical steps look at people, process, and
technology, and rank an organization using the Capability Maturity Model developed by
the Carnegie Mellon Software Engineering Institute.
Where Are We Now? (The “As Is” Assessment Phase)
4. Integration: Identifies when at least two processes have reached a level of “control”
and the shared inputs and outputs have been defined and can be measured.
5. Optimization: Describes the ability of the ITIL process to receive and provide quality
data to external management and business processes such as Finance, Human Resources,
Sales, and Marketing.
The analysis provides the perspective and context for the implementation and support of
the ITIL best practice in the current environment.
In this phase, clear goals and objectives are set for the organization. Where is
the IT department heading? What are the business requirements? What achievements
do we want to strive for, now and in the near future? This part drives the objectives
for service improvement. For example, why does the IT department want better
incident management, or why does problem management need to be
introduced?
The objectives that are set here will subsequently be used as input to measure the success
of the best practices initiative.
This phase is the hardest part. It is normally where the Service Improvement Program is
defined: the encompassing main plan to get everything right. Here you will see
organizational change, process adjustment and implementation, tool selection, and training
initiatives. Here you will need the management buy-in, the funding, and the commitment of
people to make the change. Remember, tools and processes are very helpful, but at the end
of the day, it's people who make the difference.
Once the organization hits this stage of the best practices implementation, it is time to look
back and see if the benefits are really starting to add value. As stated before, these
measurements can only be validated when they have been defined as SMART (Specific,
Measurable, Attainable, Realistic, Timely) goals. Only then can you truly measure the effects
of the implementation and start planning for improvement of the processes and adjustment
of the technology that helps you drive the processes.
■ Incident Management
■ Problem Management
From a processes perspective, everything looks fine: people are allocated to roles and
tasks, and processes have been selected and implemented. Now, what about the tools?
Most often, tools go hand in hand with the implementation of the processes. Tools support
the processes and help automate them, so that you can implement repeatable
and consistent processes. In 95% of implementations, you could say that 80% of the best
practices implementation is centered on the people and processes and the remaining 20%
is technology.
This does not mean that technology is unimportant, but it is there to support the other
two, not to drive them. The diagram below shows a sample approach to the
implementation of best practices combined with technology. The main objective here is to
make sure that tools and best practices are effectively combined and support the IT
department as well as the business.
From the above it becomes clear that technology on its own does not provide
everything you need to enable ITIL best practices in your organization. You need a plan,
commitment, funding, and so on.
On the technology side, there is CA Unicenter Service Desk, which has strong support for
the ITIL service support processes: incident, problem, change, configuration, and release
management. When installing the solution in your infrastructure, you will have to decide
whether to enable the ITIL screens; in existing environments, you can still make that
decision. In this section we briefly discuss what to look for and what actions are
required to enable ITIL in the technology.
When you are installing CA Unicenter Service Desk for the first time, there are several
things to consider. As part of the ITIL initiative, you need to tell the CA Unicenter
Service Desk software that you are installing the ITIL version. This is done during
installation of the server application. Below is the screen shot taken from the
configuration phase of the installation. You simply select Use ITIL Methodology and you
are all set to begin the journey.
The remainder of the installation is no different from the standard installation. Selecting
the "Use ITIL" box tells the system to load different forms and fill the system
tables with ITIL terminology. After the installation program completes, the system is
ready to record your very first incident and leverage the defined process.
Simply log on to the CA Unicenter Service Desk using an analyst user ID, and select the
“New Incident” option from the menu below.
The following screen will display, showing all the attributes required to register a new
incident and to start the process of supporting the end users.
You have two basic options when you are already running CA Unicenter Service Desk and
want to enable ITIL. Both depend on the processes implemented and the level of ITIL
compatibility desired. Another question is whether you want to preserve the data collected
prior to the ITIL conversion.
Some companies moving to the ITIL processes choose not to keep their old data, since it
will not map cleanly to the new processes. In that case, you may want to consider starting
over with a new installation.
To maintain existing data, you can run the configuration program again and select the ITIL
checkbox. This loads the ITIL forms and sets up CA Unicenter Service Desk for ITIL-based
incident and problem management processes.
Prior to converting from a non-ITIL to an ITIL installation, complete the following steps
to ensure that no data loss occurs.
These modifications affect all servers in the infrastructure that are part of the
CA Unicenter Service Desk implementation.
Once all of the above steps are complete, rerun the CA Unicenter Service Desk
Configuration utility. When you select the checkbox for the ITIL forms, the configuration
program populates the database with the necessary forms and the form groups used to
access them. If you have adapted any of the forms and those adaptations are still needed,
you must make the same adaptations to the new ITIL forms, and then verify that everything
still works in the ITIL installation.
When converting from non-ITIL forms to ITIL forms, note that current requests do not
convert to incidents. Also note that all requests, incidents, and problems share the same
table (the call_req table); the primary distinction between them is the value of the TYPE
field, which the CA Unicenter Service Desk application examines to determine the proper
display form to use.
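Because requests, incidents, and problems share the call_req table, form selection reduces to a lookup on the TYPE field. The Python sketch below illustrates that idea; the single-letter type codes and form names are illustrative assumptions, not documented values.

```python
# Sketch: routing tickets to the proper display form based on the
# shared call_req table's TYPE field. The table and field names come
# from the text; the codes ('R', 'I', 'P') and form names are
# illustrative assumptions.

TYPE_TO_FORM = {
    "R": "request_detail_form",   # pre-ITIL request
    "I": "incident_detail_form",  # ITIL incident
    "P": "problem_detail_form",   # ITIL problem
}

def display_form_for(ticket: dict) -> str:
    """Pick the display form from a call_req-style row."""
    return TYPE_TO_FORM[ticket["type"]]

print(display_form_for({"id": 1001, "type": "I"}))  # incident_detail_form
```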
Summary
As you can see from this chapter, technology is not the biggest factor behind best
practices such as ITIL. However, it must not be neglected in the overall implementation.
Before you can begin to harvest the low-hanging fruit of an ITIL implementation, you will
need to look at least at the service desk (as an ITIL function) and at incident and problem
management. These three pieces give you the basis to move forward quickly. In fact, they
allow you to create a Single Point of Contact (SPOC), and this SPOC enables the IT
department to set up service support procedures and allocate the appropriate staff to
satisfy the needs of the end users.
The most important thing to remember here is that ITIL is not a one-off implementation.
Neither is it shelf-ware that you can buy at your local store. Together with the investment
in people and technology, it is part of the best practices journey a company undertakes.
ITIL is helpful because it does the following:
■ Lowers TCO
Using CA CMDB for configuration management focuses on mitigating operational risk, and
this in turn reduces cost. The goal of configuration management is to maintain control over
the IT infrastructure from a service perspective. It is designed to ensure that the core
business services are optimally available to the organization. What goes into CA CMDB
depends in part on the data found within the asset management system and how that data
is accessed using federation.
CA CMDB delivers significant value to those organizations that use it for both incident
avoidance and incident/problem management to effect reductions in downtime. Used in
conjunction with an effective change management system, CA CMDB makes a positive
difference by allowing you to understand the impact of a proposed change in advance of
executing the change. Understanding the change impact significantly reduces the
occurrence of incidents. Whether the incident is the result of a poorly managed change or
simply the failure of some IT component, CA CMDB provides the ability to quickly find the
problem through effective root cause analysis.
CA CMDB provides its users with the bottom-line ability to understand what components
(CIs) make up their most critical business services. Nearly all IT disciplines benefit from
knowledge of relationships among CIs and how CIs combine to provide services for the
organization.
CMDBs are commonly used to store the authorized versions of configuration items (CIs)
and the relationships among them. They provide critical information needed to address
incidents and to resolve problems within IT configurations. However, simply storing CIs,
their attributes, and their relationships is not valuable unless functions are available to
access the data in a meaningful way. A well-constructed CMDB does not import massive
amounts of data. Instead, it federates all the data known about an IT infrastructure's
configuration in a central place, forming a single source of truth about the infrastructure.
Such a properly constructed system can vastly improve the ability of a support staff to deal
with incidents, problems, and changes.
CI Attributes
With CA CMDB, all configuration items share a set of common attributes. In addition to
these attributes, each CI record is classified by a family designation which drives a second
unique set of attributes for the CI. Together, the common attributes and the family-specific
attributes provide a complete picture of the CI's configuration.
Every CI has a family attribute and a class attribute. The classification of a CI record drives
its family attribute value, which in turn determines which family-specific attributes the CI
has. Every CI also has a name and a description. A CI's name helps distinguish it from
other CIs in a list. Included in a CI's common group of attributes are attributes that are
used to reconcile the CI with other CIs in CA CMDB.
CA CMDB is able to reconcile data from multiple different applications into a single CI
record. For example, if the Microsoft SMS application has network information for
SERVER001, and a second application such as SAP has procurement information for
SERVER001, CA CMDB is able to take both data sets, and reconcile them into a single
record for SERVER001 in the CMDB. To do this, CA CMDB uses common attribute values
found in most applications for the CI in question. These include:
■ CI's Name
■ Host/System Name
■ Serial Number
■ Asset Tag ID
■ DNS name
■ MAC address.
48 Using CA CMDB
Common Objects and Functions
The Common Object Registration API, or CORA, automatically reconciles hardware CIs
based on different combinations of these specific attributes found in different data sources
(SMS, SAP, CA Unicenter APM, CA Unicenter NSM, and so on). CORA combines the data
from these different sources into a single CI record in CA CMDB and ensures that
duplication of CI records doesn't occur.
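The matching idea behind CORA can be sketched as follows, assuming simplified rules: two records describe the same hardware CI when any one of the identifying attributes above matches, and their data sets are then merged into a single record. Real CORA applies specific attribute combinations and precedence that this toy version does not model; the field names and sample values are invented.

```python
# Toy reconciliation in the spirit of CORA: match on any identifying
# attribute, then merge the two data sets into one CI record.

ID_ATTRS = ("host_name", "serial_number", "asset_tag", "dns_name", "mac_address")

def same_ci(a: dict, b: dict) -> bool:
    """Two records describe the same CI if any identifying attribute matches."""
    return any(a.get(k) and a.get(k) == b.get(k) for k in ID_ATTRS)

def reconcile(existing: dict, incoming: dict) -> dict:
    """Merge incoming data into the existing CI; existing values win."""
    merged = dict(incoming)
    merged.update({k: v for k, v in existing.items() if v is not None})
    return merged

# Network data from SMS and procurement data from SAP for SERVER001:
sms = {"host_name": "SERVER001", "mac_address": "00:1a:2b:3c:4d:5e", "ip": "10.0.0.5"}
sap = {"host_name": "SERVER001", "serial_number": "SN-4711", "po_number": "PO-981"}

if same_ci(sms, sap):
    ci = reconcile(sms, sap)  # one record carrying both data sets
```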
Some of the other common attributes for all CIs include their acquisition, installation, and
expiration dates as well as their warranty period. A CI's operational and administrative
statuses can be tracked in CA CMDB and play an important role in incident, problem, and
change management.
CA CMDB is implemented within the MDB, a management database which contains many
common tables for the CA r11 products. Organizations, Locations, Contacts and
Manufacturers/Models are shared between CA CMDB and the other r11 products. Every CI
can be associated with one or more MDB Contact and Organization objects. In addition,
every CI can be associated with an MDB Location and with a Manufacturer/Model.
Family-Specific CI Attributes
A CI's family designation determines the family-specific set of attributes that it contains.
For example, a CI in the Hardware.Server family has a different set of attributes from a CI
in the Software.Application family. Whereas a Hardware.Server CI has attributes that
represent its swap file size, its memory capacity, and the amount of memory actually
installed in the server (to name just a few), a Software.Application family CI has attributes
that describe the software's installation directory, the amount of storage it uses, and so on.
Relationships
CA CMDB relationships differ from those of network and systems management systems. CA
CMDB CI relationships have types. CA CMDB relationship types are critical to understanding
the meaning of a connection between two CIs. CA CMDB includes an extensive set of CI
relationship types, and new relationship types can be added to the system.
With CA CMDB there are two kinds of relationships: hierarchical and peer-to-peer. In
hierarchical relationships, one CI is a provider and the other CI is a dependent.
Hierarchical relationships have different names depending on their direction. For example,
the provider-to-dependent relationship might be "backs up," whereas the
dependent-to-provider relationship would be "is backed up by." You are probably familiar
with the concept of parent/child relationships; a parent/child relationship is a type of
hierarchical relationship where the parent is the provider and the child is the dependent.
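A small sketch of how the two kinds of relationship types could be represented. The "backs up"/"is backed up by" pair comes from the text; the peer-to-peer example name is an assumption.

```python
# Hierarchical relationship types carry a different label in each
# direction; peer-to-peer types read the same both ways.

from dataclasses import dataclass

@dataclass(frozen=True)
class RelationshipType:
    provider_to_dependent: str  # label read from the provider side
    dependent_to_provider: str  # label read from the dependent side

    @property
    def is_peer_to_peer(self) -> bool:
        # Peer-to-peer types use the same label in both directions.
        return self.provider_to_dependent == self.dependent_to_provider

backs_up = RelationshipType("backs up", "is backed up by")    # hierarchical (from the text)
connects_to = RelationshipType("connects to", "connects to")  # assumed peer-to-peer example

assert not backs_up.is_peer_to_peer
assert connects_to.is_peer_to_peer
```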
Federation
One of the main functions of CA CMDB is to aggregate CIs of interest related to one IT
configuration in a single place. Almost all IT organizations have data about the
configurations in separate management data repositories (MDRs). For example a network
and systems management system is an MDR, and so is an identity management system or
an application performance management system. CA CMDB can import CI information from
all of these sources and preserve the link from the imported data back to its source.
An MDR provider is a CA CMDB object that allows CIs to be associated with an external
MDR. MDR providers define the callback mechanism used to launch an MDR's web-based
User Interface in context. CIs which have been imported from a configured MDR are
automatically mapped back to their source. An MDR also can be configured manually to
map a CI to a federated web-based application. You only need to know the CI's federated
asset ID (by which the MDR knows the CI).
Scoreboard
In addition to providing “the single source of truth” about configurations, CA CMDB provides
a number of other beneficial functions such as the ability to find CIs quickly, to address
incidents and analyze problems with configurations, and to plan changes to configurations.
Many of these functions are provided through CA CMDB's scoreboard.
A scoreboard is the primary interface to the data contained in CA CMDB. It is the first page
a user sees when they log into CA CMDB. There are scoreboard items, for example, that
assist in locating CIs by type, CIs without maintenance, or CIs whose maintenance is due to
expire. Each scoreboard item shows a count of the CIs matching the scoreboard item's
stored query. By grouping a number of related scoreboard items into folders, CA CMDB's
scoreboard offers an overall view of an organization's configuration.
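Conceptually, a scoreboard item pairs a label with a stored query and displays the count of matching CIs, while folders group related items. A toy Python rendering of that idea, with invented queries and CI data:

```python
# Each scoreboard item pairs a label with a stored query (here a
# predicate function) and displays the count of CIs matching it;
# folders group related items.

cis = [
    {"name": "SERVER001", "family": "Hardware.Server", "maintenance": True},
    {"name": "RTR-01", "family": "Network.Router", "maintenance": False},
]

scoreboard = {
    "CI Status": {
        "Active-No Maintenance": lambda ci: not ci["maintenance"],
    },
    "Configuration Item Lists": {
        "Hardware.Server": lambda ci: ci["family"] == "Hardware.Server",
    },
}

for folder, items in scoreboard.items():
    for label, query in items.items():
        count = sum(1 for ci in cis if query(ci))
        print(f"{folder} / {label}: {count}")
```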
One group of useful CA CMDB Scoreboards is the group that lists CI changes. The CA CMDB
Changes scoreboard folder is organized into three subfolders showing CIs that have been
changed in the past day, past week and past month. These offer a way of identifying at a
glance what has changed in an IT configuration. The specific CI attributes which have
changed can be examined in the CI's audit log that's accessible from its log tab.
The CI Priorities scoreboard folder provides a way for an analyst to find important CIs for
which they are responsible, as well as CIs owned or maintained by their organization. The
CI Status scoreboard folder contains a number of useful queries permitting CIs that are in
“Disposed Of,” “In Service,” and “Active-No Maintenance” statuses to be quickly located.
All CA CMDB scoreboards are complemented by CA CMDB's flexible searching capability and
query language. The search function allows easy access to the most commonly used CI
attributes, while allowing more complex queries to be composed in the "additional search
arguments" field. If a useful query is not included in the CA CMDB scoreboard, it can be
added using the "Customize Scoreboard" file menu.
Managing Configuration Items
The impact analysis function, which is available when viewing a CI, makes it possible to
search for and generate reports on CIs that are related to the current CI but may be
separated by several layers of relationships. This function is critical when diagnosing
incidents and problems, or when performing impact and root cause analysis of changes.
Likewise, CA CMDB Visualizer provides a graphical depiction of a CI and its related CIs
including the types of the relationships. In the Visualizer, users can double click CIs to
follow relationship trails so that the impact one CI has on the other CIs in CA CMDB can be
accurately gauged.
With the Change Impact Analysis tool, analysts can quickly determine which Servers or,
more importantly, which Business Services may be impacted by the outage of a particular
CI.
In this section, we look at how to manage the enhanced capabilities that CA CMDB brings to
CA Unicenter Service Desk r11.2. This includes the CI Families, Classes, and Attributes, as
well as Models, Relationship Types, and Scoreboards.
The primary classification attributes of a configuration item (CI) are its Family and Class
attributes. Together, these two fields describe the CI type and drive associated CI
processes within the CA CMDB application.
When creating a new CI, you select the CI Class. The Class drives selection of the Family,
which in turn determines the associated family-specific attributes that are available to more
fully describe the CI.
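The classification chain described above (the Class implies the Family, and the Family implies the family-specific attributes) can be sketched as a pair of lookups. The class and family names come from this chapter; the attribute names are paraphrased for illustration, not exact schema names.

```python
# Sketch of the classification chain: selecting a Class determines the
# Family, and the Family determines the family-specific attributes.
# Attribute names are paraphrased from the chapter, not schema names.

CLASS_TO_FAMILY = {
    "AIX": "Hardware.Server",
    "Windows": "Hardware.Server",
    "Application": "Software.Application",
}

FAMILY_ATTRS = {
    "Hardware.Server": ["swap_file_size", "memory_capacity", "memory_installed"],
    "Software.Application": ["install_directory", "storage_used"],
}

def attributes_for(ci_class: str) -> list:
    """Family-specific attributes available for a CI of this class."""
    return FAMILY_ATTRS[CLASS_TO_FAMILY[ci_class]]

print(attributes_for("AIX"))  # the Hardware.Server attribute set
```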
CA CMDB includes a rich set of out-of-box content for the Families, Classes, and associated
CI attributes. It includes more than 50 Families along with over 225 Classes, and has MDB
extension tables that contain family-specific attributes for each CI Family. When
implementing CA CMDB, you need to determine the types of CIs that you want to manage,
as well as the attributes that you need to track for them. If you plan to manage CI types
that were not included in the out-of-box content, you can create new Families and Classes
to accommodate them. The process for creating a new CI Family is detailed in the CA CMDB
Administrator Guide.
The following table contains the Families and Classes that are included in the CA CMDB
r11.1 out-of-box content. It is immediately followed by a table that describes each Family.
Family Class
(May be multiple Classes per Family)
Cluster Cluster
Cluster.Resource Resource
Contact Executive
External contact
Managerial
Other contact
Technical
Other Contract
Warranty/Maintenance Contract
Other Document
User Guide
Hardware.Mainframe Cray
Group 80
System 390
System z
Tandem - Mainframe
Unisys - Mainframe
Vax - Mainframe
MVS
OS/390
Hardware.Monitor CRT
Flat Screen
Other Monitor
Terminal
Copier
Digital Camera
Electronic Whiteboard
Other Hardware
Projector
Shredder
Television
VCR/DVD
Video Camera
Ink Jet
Laser
Microfiche
Other Printer
Plotter
Hardware.Server AIX
HP UX
Linux
Server
Sun
Tandem
Unisys
Unix
Vax
VM
Windows
z/OS Server
Disk Array
DVD
File System
Hard Drive
Optical
Silo
Tape Array
Tape Library
Zip Drive
GSX Server
Hardware.Workstation Workstation
Location Building
Campus
City
Country
Datacenter
Floor
Network.Bridge Bridge
Network.Controller Controller
Network.Port Port
Network.Router Router
Organization External
Internal
Project Project
SAN.Interface Interface
SAN.Switch Hub
Switch
Building Security
Data Security
Other Security
Service Component
Document
Person
Practice
Process
Role
Service
Underpinning Contract
Software.Application Application
Software.Bespoke Bespoke
Software.COTS COTS
CICS
Websphere MQ
Software.Database CA-Datacom
CA-IDMS
DB2
IMS
Ingres
Oracle
SQL
Sybase
Software.In-House In-House
HP UX OS
Linux OS
MVS OS
OS/390 OS
Other Software OS
Sun OS
Tandem OS
Unisys OS
Unix OS
Vax OS
VM OS
Windows OS
z/OS
Satellite Link
Telecom.Other ACD
IVR
Other Telecom
PDA
Radio Handsets
Telecom.Voice Centrex
Desk Phone
PBX
Phone Card
Pager
Family Description
Telco PBX/Polycom
CI Attributes
The shared attributes for each CI are held in the ca_owned_resource record in the
MDB. These attributes may be viewed using the MDB Viewer on http://www.ca.com/support,
and are also available in the CA CMDB Data Model Reference Guide that is provided to
customers as part of the CA CMDB documentation.
■ Family-specific attributes
Each Family of CIs has a set of family-specific attributes that reside in an Extension
Table in the MDB. The family-specific attributes describe the unique characteristics of
each type of CI. For example, a CI in the Hardware.Server family has attributes that
represent its swap file size, its memory capacity, and the amount of memory actually
installed in the server (to name just a few), while a Software.Application family CI has
attributes that describe the software's installation directory, the amount of storage it
uses, and so on.
All of the family-specific attributes and their definitions, along with the corresponding
extension table names, are included in the CA CMDB Data Model Reference Guide.
If you need to add more attributes to the out-of-box content for a particular Family,
you can utilize Web Screen Painter to do so. See the CA Unicenter Service Desk
Implementation Guide for details.
Together, the shared attributes and the family-specific attributes provide a complete picture
of the CI's configuration.
CI attributes and the management of changes to the attributes are critical to the success of
the Incident and Problem Management processes. CA CMDB gives users the ability to
update any CI's attributes. In addition, CA CMDB's federation capabilities allow attribute
updates to occur automatically using inputs from federated data sources. CA CMDB tracks
attribute changes in an audit log, which gives users the ability to see who or what changed
the value of an attribute, and what its “before” and “after” values are.
This audit capability is powerful and provides a basis for compliance initiatives. In a
combined CA CMDB and CA Unicenter Service Desk implementation, users can see what changed
a CI attribute and where, and can then look at the CI Detail Screen to see whether there
was an associated change order for that data change.
Models are a shared resource with CA Unicenter Asset Portfolio Management (CA Unicenter
APM) and, through the use of a mapping file, are indirectly shared with CA Unicenter NSM.
You therefore want to be careful about adding additional Models if you have CA Unicenter
APM installed. The Model represents the Manufacturer and Type of CI being managed (mostly
for Hardware, Software, and Service CIs). This allows for greater reporting and incident
matching as it relates to Problem Management. Since the Model is tracked, you can search
based upon the Model of a CI and see whether any CIs need to be attached to a specific
problem prior to a Request For Change (RFC) being created.
The Model is associated to a CI Class. This provides for significantly more effective
reporting capabilities. Also, note that you must specify the Manufacturer when you create a
new Model. So be sure that you have the required Manufacturers built prior to the creation
of any new Models.
CA CMDB provides a central point for managing the types of relationships, or connections,
between CIs that define the Services, Processes, and other high-level structures which
contribute to the success of your business. As noted earlier, there are two kinds of
relationships:
■ Hierarchical
■ Peer-to-peer
The out-of-box relationships provided with CA CMDB, along with their descriptions, include
the following. Each is listed as Provider to Dependent / Dependent to Provider
(equivalently, Parent to Child / Child to Parent):

complies to / is complied to by: One entity abides by regulations (CobIT, SOX, etc.) set
forth by another entity.

is gateway for / has for gateway: A relationship in which one entity, a hardware
(computer) or network component, allows or controls access to another management device.

is high availability server for / has for high availability server: The high availability
server relationship usually uses clustering and database mirroring to provide very rapid
recovery from system failures.

is the parent of / is the child of: One entity is the parent of another entity if the
other entity cannot exist without the parent entity.
If you need to add relationship types to the out-of-box content, you can do so using a
CA CMDB administration function. Below is a screenshot of the Create New Relationship Type
screen. The most critical fields are the labels: "Parent to Child" and "Child to Parent."
Since the relationship type between CIs is one of the most powerful features of a CMDB,
you want to ensure proper labeling. This gives Analysts and Change Managers a way to
know exactly how CIs are linked together, and also gives them vital information regarding
the potential risk associated with an incident (outage) or problem within the Service Desk.
Note that for a peer-to-peer relationship, these fields must be the same and the
Peer-to-Peer flag must be set to Active. If the labels are not the same for a peer-to-peer
relationship, they will not make sense when you view these relationships in an impact
analysis or in the Visualizer.
Make sure that you are very careful when you create these new Relationship Types so that
they make sense in your environment. Don't just create relationships without knowing why
you have them.
The Scoreboard changes associated with CA CMDB are those related to the CI Families
themselves.
The first set of Scoreboard Queries is the Configuration Item Lists by CI Family. Each of
the 50+ CI Families has a stored query in the Scoreboard, so that you can easily see
summaries and details for each family of CIs.
Next is the CA CMDB Changes to CIs. This is one of the most useful sets of Scoreboard
Queries for both the Configuration Manager and the Change Manager. It also aids the
Incident and Problem Managers by showing the exceptions among the CIs in the environment.
This set of Scoreboard Queries shows the CA CMDB Changes by Day, Week, and Month, listed
by CI Family. This makes it much easier to focus on only those CIs that have had attribute
changes. These attribute changes could have occurred through a Federated Data Source from
a management data repository (MDR), or could have been made manually within the
CA Unicenter Service Desk/CA CMDB interface.
Understanding Federation
Federation Adapters provide a convenient way for customers to include configuration items
(CIs) from practically any data source into CA CMDB. This section outlines some best
practices to make it easier to import and manage those CIs. The information here is
intended to supplement the data already in the CA CMDB documentation as well as
information in CA Cohesion ACM and Advantage Data Transformer (ADT) documentation.
Note: This document is not intended to duplicate the information contained in existing
CA product documentation. The definitive source of information for how to work with ADT is
the ADT documentation, which is distributed with the CA CMDB product. ADT
documentation is accessed from the Start menu under Programs, Computer Associates,
Advantage, Data Transformer, Documentation. See the CA CMDB Implementation Guide for
information about GRLoader.
Importing data is a simple process with several steps. At a very high level, the import
process utilizes the following four components:
■ Mapper (ADT):
The Mapper component is used to define input and output data sources and to store
metadata about those data sources. This metadata includes information about the data
source as well as information about the fields or attributes contained in the input or
output table. The Mapper is so named because it enables the product user to define the
process for copying (or mapping) input attributes to output attributes. The map is
created using a highly visual interface that employs a drag and drop visual paradigm to
draw lines which connect input to output. The map, along with the metadata is stored
as a “program” in the MDB. From the program, a script is generated.
■ Script Manager (ADT):
The Script Manager is used to manage the scripts described above, as well as to assign
profiles to each input or output table in the program. Profiles store security and
database access information; the Script Manager has facilities to create and edit them.
■ Server (ADT):
The ADT Server reads the script and associated profiles, and reads the input data
source to create an XML document. Note that up to this point, the CA CMDB Server is
not involved.
■ GRLoader:
GRLoader reads in the above XML document and communicates with the CA CMDB Server to copy
the data stored in the XML document into CA CMDB.
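At a high level, the four components can be pictured as a toy pipeline in Python. Each function stands in for one component; the intermediate XML shape here is invented for illustration and is not the real GRLoader schema.

```python
# Toy stand-in for the Mapper -> Server -> GRLoader flow described
# above. The XML layout is invented; it only illustrates that an
# intermediate XML document carries the mapped rows to the loader.

import xml.etree.ElementTree as ET

def map_attributes(row: dict, mapping: dict) -> dict:
    """Mapper stand-in: copy input attributes to output attributes."""
    return {out: row[src] for src, out in mapping.items()}

def to_xml(rows: list) -> str:
    """ADT Server stand-in: emit the intermediate XML document."""
    root = ET.Element("cis")
    for row in rows:
        ci = ET.SubElement(root, "ci")
        for k, v in row.items():
            ET.SubElement(ci, k).text = str(v)
    return ET.tostring(root, encoding="unicode")

def load(xml_doc: str) -> int:
    """GRLoader stand-in: parse the XML and 'load' each CI."""
    return len(ET.fromstring(xml_doc).findall("ci"))

rows = [map_attributes({"HOSTNAME": "SERVER001"}, {"HOSTNAME": "name"})]
print(load(to_xml(rows)))  # 1
```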
Metadata
There is a collection of metadata for each family of CIs, describing the attributes of that
family. In addition, there are two “special” collections of metadata.
■ ALL - the set of attributes that spans all families (the union of all attributes from all
families).
■ Common Attributes - the set of attributes that are shared across all families (the
intersection of attributes from all families).
In the data tab of the mapper, metadata is organized in a tree: first by underlying
database access type (SQL, XML, Access, etc.), then by data location, and finally by
table name. The data here is actively used by ADT when a Federation Adapter is executed.
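The two special collections can be written directly as set operations over the per-family attribute sets. The families and attribute names below are illustrative stand-ins:

```python
# "ALL" is the union of every family's attribute set; "Common
# Attributes" is their intersection. Family and attribute names here
# are illustrative only.

families = {
    "Hardware.Server": {"name", "family", "class", "swap_file_size"},
    "Software.Application": {"name", "family", "class", "install_directory"},
}

all_attrs = set().union(*families.values())          # "ALL": union over families
common_attrs = set.intersection(*families.values())  # "Common Attributes": intersection

print(sorted(common_attrs))  # ['class', 'family', 'name']
```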
Federation Adapters
Finally, there are the Federation Adapters themselves, which are ADT programs. There are
two kinds of Federation Adapters: those provided out-of-the-box, and Custom Federation
Adapters, which may be customized to meet special needs.
CA CMDB supports input from a multitude of sources, referred to as management data
repositories (MDRs). MDR sources include flat files, ODBC (Open Database Connectivity)
databases, IBM DB2, Ingres, Lotus Notes, Microsoft SQL Server, Oracle, Sybase, XML, SAP,
and CA Unicenter APM.
The ADT documentation set, distributed with CA CMDB, lists the supported input data
sources.
One of the most common data sources used to load CIs into the CMDB is the Excel
spreadsheet. Excel spreadsheets are handled as Generic ODBC databases. This document
refers to named areas within Excel spreadsheets as tables.
While ADT includes database-specific support, the Generic ODBC interface is recommended
because it is the most general of the interfaces and is simple to implement. Use it unless
a specific attribute type needs to be imported, or a very large number of CIs must be
imported and the additional overhead of ODBC is an issue.
It is best if the input data is relational in its structure. Data about each family or set of CIs
with similar attributes should be located in a separate table. For example, one table may
have server information, while another has router information, and a third has information
about services. CI information should be stored in a denormalized format prior to input, as
the loading process normalizes data as required.
(If you are familiar with CA Unicenter Service Desk, you will notice that this is very
different from loading data using pdm_load, where the data must be normalized prior to
loading. CA CMDB GRLoader uses denormalized data and performs lookups to locate the
referred-to SREL attribute value, meaning that the input data should contain the data
value itself rather than the ID of the data value.)
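The lookup behavior described in the parenthetical above can be sketched as follows: the input row carries the human-readable value, and the loader resolves it to the ID of the referred-to record, in the spirit of an SREL lookup. The table contents and field names are invented for illustration.

```python
# Sketch of an SREL-style lookup: the denormalized input names the
# referred-to record by value, and the loader resolves it to an ID.
# The lookup table and field names are invented for illustration.

locations = {"New York DC": 101, "London DC": 102}  # name -> id, as if from the MDB

def resolve_srel(row: dict, field: str, lookup: dict) -> dict:
    """Replace a denormalized value with the ID of the record it names."""
    resolved = dict(row)
    resolved[field + "_id"] = lookup[resolved.pop(field)]
    return resolved

ci_row = {"name": "SERVER001", "location": "New York DC"}
print(resolve_srel(ci_row, "location", locations))
# {'name': 'SERVER001', 'location_id': 101}
```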
Formatted fields such as dates, MAC addresses, currencies, and so on should be altered as
necessary so that they are stored in a consistent format in the staging area. See the
CA CMDB Data Model Reference Guide for information about the proper formatting of data
such as localized dates and times.
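A staging-area normalization pass for such formatted fields might look like the following sketch. The canonical formats chosen here (lowercase colon-separated MAC pairs, ISO 8601 dates) are assumptions, not CA CMDB requirements.

```python
# Sketch of staging-area normalization: bring MAC addresses and dates
# into one consistent shape before the load. The canonical formats
# chosen here are assumptions for illustration.

import re
from datetime import datetime

def normalize_mac(raw: str) -> str:
    """Canonicalize to lowercase colon-separated hex pairs."""
    digits = re.sub(r"[^0-9a-fA-F]", "", raw).lower()
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def normalize_date(raw: str) -> str:
    """Accept a few common layouts, emit ISO 8601."""
    for fmt in ("%m/%d/%Y", "%d-%b-%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")

print(normalize_mac("00-1A-2B-3C-4D-5E"))  # 00:1a:2b:3c:4d:5e
print(normalize_date("12/31/2006"))        # 2006-12-31
```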
Creating tables or views which consolidate information about a family of CIs is outside the
scope of this document. However, this can often be done by joining tables using SQL, Excel
macros, or using various data transformation tools.
It is recommended that you create a staging area where views can easily be created and
data can be manipulated prior to running GRLoader. Each family of CIs should be broken
out into its own set of tables, as shown below:
While it is possible for information spanning multiple families of CIs to be stored in a
sparse database table, this is not the recommended input structure. The remainder of this
section discusses a single input table and assumes that the process is repeated for every
input table.
Each MDR contains many attributes, and not all of them need to be loaded into the CMDB.
For each attribute to be loaded, record the following:
■ Attribute name
■ Attribute type (character, numeric, date, date/time, person's name, mac address)
■ Length
■ Does the field need to be validated in any way during import? Examples of this might
be date fields, numeric values, and so on
■ Does the value in the field have a strong relationship with data in another table? A
classic example is the primary contact for a CI: the contact might need to exist before a
CI can reference it. In CA Unicenter Service Desk this is called an SREL relationship.
Excel spreadsheets are unique data repositories in that the columns of a spreadsheet are
not strongly typed (char, int, and so on). This contrasts with a traditional relational
database management system where the type and length of each field are clearly defined.
All fields in the Excel spreadsheet should be changed to format “text.”
Now that the input has been identified, it is necessary to determine the target location
for this data. This information should be added to the grid above (see the updated table
below).
For each attribute being copied from the input MDR to the MDB, it is necessary to
determine the corresponding MDB attribute. Note that the attributes of a CI are organized
in two collections: common attributes and family-specific attributes.
Locate the most likely candidate for the “family” for this group of CIs by referencing the
“Data Model Reference” and reviewing the section “Families and Classes.”
Once the family is selected, the attributes for the CIs in this family are the ones described
in the “Common CI Attributes” section of the Data Model Reference, together with the
family-specific attributes listed on the page for the selected family.
During the planning stage, you need to associate the attributes in the input data with the
attributes found in the MDB. You also need to identify the attributes in the out-of-the-box
solution that you do not wish to view, and those that you require but that are not provided
by default. Consider creating a document to organize the inputs and outputs.
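Such a planning document can be as simple as a mapping table. The sketch below, in Python, uses hypothetical source column names (HostName, SerialNo, RAM_MB are not from any real MDR) to show the shape of the grid:

```python
# Hypothetical planning worksheet: maps source MDR columns to target MDB
# attributes. The column names are invented for illustration only.
ATTRIBUTE_MAP = {
    # source column -> (MDB attribute, collection)
    "HostName": ("name",          "common"),
    "SerialNo": ("serial_number", "common"),
    "RAM_MB":   ("memory",        "family-specific"),
}

def map_row(source_row):
    """Rename the columns of one input row to their MDB attribute names."""
    return {
        mdb_attr: source_row[src]
        for src, (mdb_attr, _collection) in ATTRIBUTE_MAP.items()
        if src in source_row
    }
```

Attributes with no entry in the map are the ones you have decided not to import; target attributes that never appear as values are the ones you must supply some other way (for example, as constants).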
It is much easier to add attributes to an existing family than to create a new family.
To help make this determination, see the CI Families and Classes section earlier in this
chapter.
72 Using CA CMDB
Understanding Federation
■ Update the Class table (ca_resource_class) to define the classes which are contained in
this family.
These modifications should all be designed and documented in detail before the
implementation is begun.
Further details on creating new Families and Classes can be found in Chapter 8, “Managing
Configuration Items,” in the CA CMDB Administrator Guide.
■ Screen layouts
■ Federation Adapters
Further details on adding attributes can be found in Chapter 5, “Customizing the Schema
Using the Web Screen Painter,” in the CA Unicenter Service Desk Modification Guide.
2. Consolidate data about CIs in a single family into a single view, repeated for
each family.
■ Prepare ADT and CA CMDB Federation Adapters to accept the data (discussed in the
next section).
Excel spreadsheets are often used as sources of data for the CMDB. While the out-of-box
spreadsheet is the most complete, it probably contains some attributes that you may not
have available at the time of import; these can be ignored. In fact, it is common for the first
load of the CMDB to include the fewest attributes necessary to define the CIs. The CIs can
be updated with additional attribute data later.
2. Select all cells (click the upper left corner of the spreadsheet, left of and above cell R1C1).
3. Choose format/cells/text.
4. On the first row of the first sheet, enter column headings that are valid SQL names.
(Warning: stay away from SQL keywords.) The attribute names do not have to
exactly match the names in CA CMDB, but should easily map to CA CMDB names.
7. Important: Replace the “R1C1” text in the upper left box (above the spreadsheet
cells) with the name you wish to give to this table. In this illustration “servers” was
entered, resulting in:
8. Save the table. It is now available for processing by ADT using the generic ODBC
driver.
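The heading rule in step 4 (valid SQL names, no SQL keywords) can be sketched as a small helper. The keyword set below is a short illustrative subset, not the reserved-word list of any particular database:

```python
# Short illustrative subset of reserved words; a real check would use the
# full reserved-word list of the target database.
SQL_KEYWORDS = {"select", "from", "where", "table", "order", "group", "user"}

def safe_heading(name):
    """Turn a free-form spreadsheet heading into a plausible SQL identifier."""
    cleaned = "".join(ch if ch.isalnum() else "_" for ch in name.strip())
    if not cleaned or cleaned[0].isdigit():
        cleaned = "c_" + cleaned    # identifiers cannot start with a digit
    if cleaned.lower() in SQL_KEYWORDS:
        cleaned = cleaned + "_col"  # sidestep the keyword collision
    return cleaned
```

For example, safe_heading("Serial No") yields Serial_No, which still maps easily to the CA CMDB serial_number attribute.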
Begin Implementation
After the plan to modify the attribute set has been established, the next step is to begin the
implementation.
Important: Do not modify any content provided by CA. Instead, make a copy and modify
that copy. If this procedure is not followed, your modifications may be lost when the
software is upgraded.
All CA CMDB definitions are located in the CMDBFederationAdapters folder within ADT.
Customizations should be placed into another folder of your creation such as
“CustomFederationAdapters.”
Implementation Overview
The first step is to modify the metadata about the table input and the xml output. This
includes:
3. Create a Custom Federation Adapter which maps the input to the output and performs
any necessary lookups and/or transformations.
Implementation Details
It is strongly recommended that you define a separate folder for any set of related
custom Federation adapters or metadata that you create.
Start the ADT Mapper from the Windows Start menu to accomplish this:
2. Choose File/New/Application for every collection of related metadata and programs you
wish to create.
Metadata about the output target usually comes out-of-the-box and is modified per the
requirements uncovered during the planning stage.
Out-of-the-box, CA CMDB includes several sample input formats in the form of Federation
Adapters. These adapters utilize several input sources (Excel spreadsheet, Access database
containing SMS data, Access database containing UAM data, and so on).
A good starting point for out-of-the-box content is the sample spreadsheet provided with
CA CMDB, found at: c:\program files\ca\ca cmdb\data\federationAdapters\cidata.xls.
It includes the majority of the attribute data for all out-of-box families.
Custom Metadata:
It is rare for a customer to use the out-of-the-box input formats exactly as provided. Most
of the time, the input contains a subset of the data that the CMDB model provides out-of-
the-box, or the model has been extended to include additional family/class/attribute definitions.
When the source of data for the CMDB does not exactly match the out-of-the-box format, it
becomes necessary to create a new input format to match your data. Custom metadata
must be created so that Custom Federation Adapters can be built.
Likewise, if the underlying data structures in CA CMDB have been modified through the
addition of new classes, families, or attributes, it is necessary to update the metadata so
that the Custom Federation Adapters can be built.
To make this easier and less error-prone, ADT has facilities for automatically generating
metadata about tables. While ADT has support for multiple drivers to support many
databases, use the ODBC driver unless one of the database-specific drivers provides
functionality that is not available in the ODBC driver. The remainder of this section refers to
the ODBC driver. See the ADT documentation for information on how to use drivers for
other databases.
ODBC Driver:
The ODBC driver requires that an ODBC DSN (Data Source Name) be created for every
input database or spreadsheet in order to create the metadata. Multiple tables in the same
database or spreadsheet can share a single DSN.
DSNs are created using the "ODBC Data Source Administrator" program found in your
computer's Control Panel under the Administrative Tools menu in Windows 2000/2003/XP.
Make sure to create a System DSN.
(See the “Data Population” chapter of the CA CMDB Implementation Guide for additional
details.)
c. Click Add.
d. Select the type of database you wish to use for input. Some hints:
e. For an Excel spreadsheet, select “Microsoft Excel Driver,” not the “Treiber” or
the “Driver do Microsoft Excel” driver.
If you do not see a driver for your source input database, it is probably
because you have not installed the ODBC client software provided by
the database manufacturer.
f. The exact dialog that occurs next varies from driver to driver, but in general
you are asked to give a name to the DSN, and to provide additional
information about the source of data. See the database provider
documentation for information on how to create a DSN. Some hints:
For an Excel spreadsheet, click Select Workbook and select the file
that contains the input data.
After the DSN has been created, ADT has facilities to scan ODBC tables. This is
accomplished by executing the “scanner” function of the ADT Mapper.
The scanner interrogates the database to discover the structure of the selected table(s), the
attributes it contains, and the data types of each of those attributes. This process is
remarkably powerful and simple to use.
After the tables are scanned in, they behave exactly like any table provided in the
out-of-box content. The resulting metadata is used in the creation of a mapping program,
discussed later in this chapter.
For more information about scanning custom tables, review the Advantage Data
Transformer Getting Started.
Whenever the definition of a family has changed (attributes have been added, changed
or deleted), the metadata must be updated as well. The artifacts from the Planning
stage should document those changes.
Note that metadata for a family change should be updated in five places:
> Finally, both the family and the XML templates noted in the CIDATA.XLS
spreadsheet should be updated.
As with customizing input metadata, there are two ways to update the family
metadata:
XML Scan:
The quickest way to update the family metadata is to add the new attributes to the xml file
that contains all the family attributes, and then use the ADT Mapper to scan in the updated
XML. In this way, the new attributes are available for use in your custom federation
adapter.
You should put your updated files in a separate location so that your custom template is not
replaced when CA CMDB upgrades are installed. For example c:\Program Files\CA\CA
CMDB\cmdb\data\federationadapters\customTemplates.
Say, for example, that you wish to add the “widget_count” attribute to the hardware.server
family. You would follow these steps:
d. Select File/Scanners/XML.
You are prompted for the name of the input file. You should specify the same
name as in step b above.
e. When you click OK, a new object is created. It is named “GRLoader” and
appears in the metadata store tree (see below). It is necessary to rename this
object. For this example, this object holds metadata about hardware servers; it
makes sense to rename it to HardwareServer or even CustomHardwareServer
to distinguish it from the out-of-box metadata.
Note: To perform the rename, right-click GRLoader object in the tree, and
then click Properties. Change the name of the file to a more appropriate name.
g. After you have updated the metadata, verify that the desired attribute(s)
appear in the metadata by including the metadata definition in a program. In
the sample screen below, “widget_count” was added to all programs that
involve hardware servers (it can be seen in the box titled “HardwareServer”):
When the changes to the metadata are few, it may be practical to manually update the
existing XML metadata.
4. Use the XML editor built into the ADT Mapper to update the XML metadata.
Note: When adding CA CMDB attributes to the metadata, be sure to place the new
attribute in the proper place in the XML tree. Use the up and down arrows to move it to the
proper position. For more information about the XML editor, click Help in the XML editor.
Say, for example, that you wanted to add the attribute “cluster_date” to the cluster family.
Double-clicking the “Cluster” node opens up the XML Editor. From there you can click New
Component icon to add CA CMDB attributes (called Elements in ADT) and the XML
attributes update_if_null=, date_format=, and lookup= as needed.
The image below shows a new attribute being created. Note that the CA CMDB attribute
“cluster_date” is added as an “ELEMENT.” You should manually add the xml attributes of
dateformat and update_if_null underneath the cluster_date Element. (This process is not
illustrated in the image.)
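As a rough sketch of the XML this editing produces, the cluster_date Element would carry the XML attributes as shown below. The attribute values, and the exact spelling date_format, are assumptions that should be checked against your templates:

```python
import xml.etree.ElementTree as ET

# Sketch of the resulting XML: a cluster_date Element carrying the
# date_format and update_if_null XML attributes. Values are illustrative.
ci = ET.Element("ci")
cluster_date = ET.SubElement(
    ci, "cluster_date",
    {"date_format": "localtime", "update_if_null": "true"},
)
cluster_date.text = "2008-01-01 12:00:00"

xml_text = ET.tostring(ci, encoding="unicode")
```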
Static Transformations:
If, during the planning stage, there was an identified need to create fixed constant data for
inclusion in the resulting XML, ADT Data transformations should be created to provide those
fixed values.
In the worksheet example above, it was determined that the constant “Server” was not
available in the input, but needed to be included in the XML output. This value could either
be supplied by adding a column to the data while it was in the staging area and reading in
the text “Server” for every CI, or it could be provided as a fixed constant field when ADT is
generating the XML.
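Outside ADT, the second option (a fixed constant field) amounts to appending a constant column to every row. A minimal sketch, with illustrative column names:

```python
# Supply a constant "Server" value for every CI when the input has no such
# column. The column names are illustrative, not a Federation Adapter schema.
def add_constant(rows, column, value):
    """Return a copy of each row with a fixed extra column."""
    return [dict(row, **{column: value}) for row in rows]
```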
■ Set Lookup by userid - used when a contact is associated with a CI by userid instead of
by combo_name (lastname, firstname middle)
■ Set Update if null - this field needs to be set when a value in CA CMDB needs to be
cleared
■ Set Dateformat utc - needed when the format of the date is in UTC (UNIX Time Code)
format. The default is “localtime.”
Filtering:
Oftentimes, an MDR may contain more CIs than you wish to import into the CMDB. Let's
say that only CIs with a specific attribute value should be imported. (Actually, the filter
could be based on an attribute that may not even appear in the CMDB.)
In this case, the ability for ADT to join tables (in this case it would be an inner join) can
come in very handy.
Example:
The MDR contains data from several locations, but for a particular business reason only CIs
from the “Brooklyn” and “Santa Barbara” locations should be imported into the
HardwareServer table.
First, create a new table which contains the list of all valid locations, then add this table to
the load program. This new table is an input table.
Next, drag the location attribute to the validation table as shown below.
Now, only rows which appear in the validation table are copied to the HardwareServer
table.
Inputs:
Hardware_server_xls
Server1 Manhattan
Server2 Brooklyn
Server3 Islandia
ValidationTable
Valid_value
Santa Barbara
Brooklyn
Output:
Server2 Brooklyn
Note that the only row that made it past the filter was the one row that had a location equal
to any of the rows in the validation table.
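The same inner-join filter, expressed in plain Python over the example data above:

```python
# Inner join between the input rows and the validation table: only rows
# whose location appears in the table survive (data from the example).
hardware_server_xls = [
    ("Server1", "Manhattan"),
    ("Server2", "Brooklyn"),
    ("Server3", "Islandia"),
]
validation_table = {"Santa Barbara", "Brooklyn"}

filtered = [(name, loc) for name, loc in hardware_server_xls
            if loc in validation_table]
# filtered == [("Server2", "Brooklyn")]
```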
Look-Up Tables:
Oftentimes, it is necessary to look up one value in a table to obtain another value that is
stored in the CMDB.
A classic example of this is the CI status. One MDR might maintain status as one set of
integers, while another MDR would maintain status as a completely different set of integers.
When the CI is imported, it would be necessary to convert the status from the different
MDRs to the status values stored in the CMDB.
SQL Join:
One way that ADT provides this functionality is by doing a lookup using an SQL join.
In the simplified example below, the table “input” has a field “fielda” which contains a value
that needs to be looked up in the “lookupTable.” You should notice the dotted line
connecting the input table to the lookupTable, indicating a join. The value of the translated
fielda is stored in the output in the field1 field.
The key to making this work well is the properties of the input table. Notice that there is a
new tab, “ANSI Join” that only appears for tables involved in a join. In the following
example, a left outer join is used - allowing CIs to be created even if they do not have valid
values for “fielda.”
See the online help for more information about performing table joins.
In situations where ADT performance is important (note that ADT is rarely the bottleneck)
you may wish to use an in-memory transformation to preload the lookup table. The
program would have two phases: first, the lookup table is copied to the in-memory
version; second, the input is run against the in-memory table.
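A plain-Python sketch of the two phases, using hypothetical status codes rather than real MDR values:

```python
# Phase 1 copies the lookup table into a dict; phase 2 translates each
# input row against that dict instead of issuing a join per row.
lookup_table = [(1, "Installed"), (2, "In Stock")]  # (mdr_status, cmdb_status)

# Phase 1: preload the lookup table into memory
status_by_code = dict(lookup_table)

# Phase 2: run the input against the in-memory table
def translate(rows):
    return [(ci, status_by_code.get(code)) for ci, code in rows]
```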
For further information about using in-memory data transformations, see the ADT online
help.
Every CI in the CMDB can be associated with one or more data sources. While this is
not required, it is valuable to associate the CI back to its origin for several reasons:
■ Browsing/reporting.
MDRs are defined in the CA CMDB Administration user interface and must be defined before
importing any CI which is contained in that MDR. See the CA CMDB Implementation Guide
for details. The MDR definition contains two main sections:
■ The top part of the MDR definition describes the MDR itself (button name, MDR name,
MDR class, active, owner, description).
■ The bottom part of the MDR definition describes the MDR launcher parameters and how
the web browser URL is constructed, including substitutions. The Parameters field is
filled in with values as a reminder of the substitution values that are possible. You
should change this field to a value more appropriate to your MDR. Use the help
pulldown to learn details of what other values are available for substitution, or see the
CA CMDB Implementation Guide.
In raw XML, a CI is associated with an MDR by using the <mdr_class>, <mdr_name>, and
<federated_asset_id> XML tags. When using an ADT Federation Adapter, this is
accomplished by copying data to the mdr_name, mdr_class and federated_asset_id fields in
the output XML. You should consider using a static transformation to assign values for the
mdr_name and mdr_class when those values do not appear in the input tables.
Once a CI is associated with an MDR, the CI detail attributes tab window contains an MDR
Launcher button which initiates a web browser connection to the MDR.
Substitution is key in the launch process. It allows the CMDB to launch to the MDR in the
context of the current CI.
Each MDR has its own syntax for performing launch in context. When defining an MDR, the
best thing to do is to launch the MDR using a web browser and reverse engineer the URL.
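For example, a reverse-engineered URL with the CI-specific part replaced by a substitution token might look like the sketch below. The host, path, and parameter name are invented for illustration:

```python
# Hypothetical launch-in-context URL: the {federated_asset_id} token is
# replaced with the value stored on the CI at launch time.
TEMPLATE = "http://mdr.example.com/asset?id={federated_asset_id}"

def launch_url(ci):
    """Substitute the CI's federated asset id into the MDR URL template."""
    return TEMPLATE.format(federated_asset_id=ci["federated_asset_id"])
```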
Some example CI data and MDR definitions better illustrate the launch in context capability.
If you had the following CIs defined:
1 CA stock stockQuote CA
4. For CA software, linking to the support connect web site is very useful.
5. A single CI (CMDB Server) can be associated with multiple MDRs. In this case, multiple
MDR Launch buttons are provided, as shown below:
MDR Launch of thick clients such as the Microsoft SMS product can be accomplished using
Citrix or other web enabling technologies.
MDR Launch of local files (such as text, PowerPoint presentations, Excel spreadsheets, and
so forth) which reside on the client's hard drive is prohibited by default by most browsers,
as there is a potential exposure in launching something destructive to the client
workstation. There are tools on the web to unlock this capability. Some of these tools are
browser-specific; several are free.
If the web browser allows launch of local documents, URLs should be constructed in the
form:
See your web browser documentation for information about using file URLs.
If the input source is XML, ADT can be used to read that data and apply transformations,
just as if the input were in a database table. On the other hand, if the origin of the raw XML
is sufficiently flexible, it is perfectly acceptable to create the XML, perhaps applying an XSLT
transformation to get the XML document properly formatted, and then input that data
directly into GRLoader.
This chapter discusses creation of XML without using ADT. However, the discussion applies
to all CIs which are input to GRLoader, regardless of origin.
A good starting point for creating XML directly can be found in the following folder:
CA CMDB/cmdb/data/federationadapters/templates
There, examples of XML for each family are found. If the input requirements are very
simple, you may not even need those templates.
<ci>
<name>ci name</name>
</ci>
Note: For readability, end tags have been intentionally left off in all examples.
Alternately, if the class is not directly available but the manufacturer and the model are,
you can utilize information in the ca_model_def table to derive the class, based upon the
manufacturer and model. In this case, to create a new CI you would need:
■ <ci>
> <name>
> <manufacturer>
> <model>
For a discussion of models, see the section on using models to determine class in Managing
Configuration Items.
■ <ci>
> <class>
> <mdr_class>
> <mdr_name>
> <federated_asset_id>
To create a relationship, you need only provide identifying fields using CORA lookup. The
minimum is:
■ <relation>
> <provider>
<name>
■ <type>
■ <dependent>
> <name>
■ Name
■ Serial number
■ Asset ID
■ System name
■ DNS name
■ MAC address
Notes on Relationships:
1. The relationship type determines which CI is the provider and which is the dependent.
If the relationship type is a dependent/provider type relationship, then the CI roles are
reversed automatically.
2. If the CI was created with more identifying fields, then it is recommended that you
specify the same fields when creating a relationship involving that CI.
GRLoader automatically updates CIs when the -a parameter is provided on the command
line. If the identifying fields are provided, a matching CI is located, and all non-null
values in the input replace the values in the CMDB. Thus:
<ci>server1
<name>server1
<class>Server
followed by:
<ci>server1
<name>server1
<class>Server
<description>This is my server
Care should be taken to specify the correct identifying attributes. Otherwise, GRLoader may
not find the required CI. See the section on CORA for details.
Important! Use the system_name attribute ONLY with the CI that owns the system_name,
or else ensure that it is unique.
<ci>server1
<name>server1
<system_name>server1
<class>Server
The above description creates a Server CI. However, when the following XML is loaded:
<ci>
<name>harddrive1
<system_name>server1
<class>Hard Drive
This is an ERROR! Our intention was to create a second CI for the hard drive. Instead, this
XML locates the CI with system_name=server1 and updates the name attribute. This
effectively partially overlays the server1 CI, which is clearly not what was desired.
<ci>server1
<name>server1
<system_name>server1
<class>Server
<ci>
<name>server1 harddrive1
<system_name>
<class>Hard Drive
<ci>
<name>harddrive1
<system_name>server1 harddrive1
<class>Hard Drive
The important thing to realize here is that the system_name parameter, if used, must be
unique for the CIs to remain unique.
Another important issue here: a single CI identification standard should be decided, and all
imported CIs should follow that convention. Otherwise, you could lose data by ending up
with duplicate CIs, which are not permitted by CORA.
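A convention like the corrected example above can be captured in a tiny helper, so every imported CI builds its system_name the same way (the space separator is just the convention used in the example):

```python
# One possible naming convention: a component's system_name is the owning
# server's system_name plus the component name, keeping system_name unique.
def component_system_name(server, component):
    """Build a unique system_name for a component CI owned by a server."""
    return f"{server} {component}"
```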
Create a Custom Federation Adapter that maps the input to the output and performs any
necessary lookups and transformations.
During the planning phase, the mapping from the input tables to the output was
determined. Necessary metadata was created in the previous step, so we are now ready to
create the custom federation adapter.
Design Considerations
A simple Custom Federation Adapter may have one input table and one output XML file.
Following the recommendation that each input CI family has its own input table, it may
be helpful to have multiple input tables and multiple output tables in the same Federation
Adapter. Each input table should have its own output table, to help ensure that the correct
attribute names are used in the class/family mapping. The “ALL” output XML object is
convenient, but can produce incorrect results when data is mapped to attributes that are
incorrect for the particular class.
A design in which the data is broken down into separate input files might be implemented
as separate load programs or as a single load operation, as shown below. (The program
below also uses static transformations to set the class name.)
1. Create ODBC DSNs for any input databases (if you have not already done so).
The window on the right of the program tab changes color from dark grey to
light grey, indicating a program is open, and the program name appears in the
window title bar. This area is the program workspace.
a. Locate the input table metadata in the tree in the “Browse Metadata Store”
window.
b. Drag the input metadata table to the program workspace. It has a red title bar
indicating input.
a. Locate the output XML metadata in the Data tab of the Browse Metadata Store
window.
b. Drag the XML metadata to the program workspace (move it right of the input
table).
c. Right-click title bar of the XML table, and check “Target Table.” The title bar
should turn blue.
a. Note: Be sure that you drag the database fields to the green icons.
b. Make sure that the connecting lines are solid not dotted.
a. Drag any needed transformation from the transformation tree onto the
program workspace.
d. Set a profile name (we will create the profile itself later) to dbname_profile.
a. Right click each output table (xml file) and click properties.
13. Right-click gray area in the workspace and click “generate script.”
16. Create all profiles referenced in steps 11-12 in the script manager.
b. For Generic ODBC interface profiles, set the Server to the DSN created in
step 1.
19. Use the checklist in the CA CMDB Data Model Reference to verify that you have
completed all necessary steps to run the Federation Adapter.
20. From the script manager, find the compiled script and run it.
23. If not successful, use the checklist in the CA CMDB Data Model Reference.
24. If MDR_Name or MDR_CLASS were mapped in the data then ensure that a valid MDR
exists.
a. Use the CMDB UI admin tab to define the MDR_Name and MDR_Class (See the
CA CMDB Implementation Guide for a description of these fields).
a. Run GRLoader for each output xml file created. Be sure that the file containing
relationships is loaded last.
26. Log on to the CA CMDB user interface and verify that the data is correctly loaded and
that relationships are visible in the Visualizer.
After verifying that the program works in a test environment, you can copy your program to
the production staging area and run the program there.
Programs are copied using the export/import utility available from the file menu in the
mapper. (dtimport.exe and dtexport.exe are the command line versions of the utility.) This
utility should be used when sending programs to CA Support.
The program export/import utility copies not only the visual program, but also the
necessary metadata. Before running the programs copied from the test environment to the
production staging area, you need to:
> Families
> Classes
> Locations
> Contacts
> Manufacturers
After all this preparation is complete, GRLoader can be run. It can be run from any platform
that supports Java. See the CA CMDB Implementation Guide for information about running
it on a non-Windows platform.
Finally, after the data has been implemented in your test environment, you are now ready
to copy it into production. Don't forget, you will need to copy over any new
class/family/company/location/contact/mdr information prior to running GRLoader in your
new environment.
Getting Support
It is important to know where to turn for help when working with a heavily integrated
product such as CA CMDB. When interacting with the CA Support team, we recommend that
you have the following information readily available:
■ Technical descriptions of what is occurring and what you are trying to do.
■ Other details: Error messages, system logs, dumps, screen shots, steps taken to try to
address the issue, and so on.
When interacting with CA CMDB support staff, also provide the following additional items:
Federation Adapters CMDB Exported script, raw input data, generated XML
The asset is central to many CA products. Assets can be inserted in the MDB
by a variety of sources including discovery tools, such as CA Desktop Management System
for Windows (CA DMS) and CA Unicenter Network and Systems Management (CA Unicenter
NSM), or ownership tools such as CA Unicenter Service Desk or CA Unicenter APM.
Even though an asset can be discovered by multiple products, the asset schema is designed
to recognize that an asset coming from different sources is actually the same asset.
This reconciliation is accomplished via asset registration using CORA.
The asset schema accomplishes the goal of providing a set of common asset tables for
hardware and software assets. The schema allows for a cross-product view of assets and
meets the following requirements:
■ Allow for the discovery of an asset by multiple sources (for example, CA DMS and
CA Unicenter NSM)
■ Allow for the discovery of multiple values of an identifying asset property (for example,
multiple DNS names and/or MAC addresses)
■ Allow for multiple “virtual” assets to be discovered and reconciled to one physical asset
(for example, VMWare images or dual-boot scenarios)
Each of the products discussed here (CA DMS, CA Unicenter APM, CA Unicenter Service
Desk, and CA Unicenter NSM) leverages CORA to register an asset when it comes into view
using the product's particular operations. For example, when CA Unicenter NSM or CA DMS
“discovery” occurs, each discovered asset is registered. Similarly, when a CA Unicenter
Service Desk or CA Unicenter APM user enters asset information using the data entry forms
of those products, that asset is also registered.
Consider the scenario where we have run an initial first-level discovery with CA Unicenter
NSM, or where a new asset is registered due to discovery by CA Unicenter NSM Continuous
Discovery. That server or desktop asset is registered with the identifying properties known
to CA Unicenter NSM, typically DNS name and MAC address. Subsequently, a CA DMS scan
is performed for that same server. CA DMS also registers the asset, also using the DNS
name and MAC address, plus additional identifying properties. Because the DNS name/MAC
address pair matches the previously registered asset, the information held by CA Unicenter
NSM is now effectively joined to the information held and managed by CA DMS.
If a user creates a ticket in CA Unicenter Service Desk for that same server, when they
enter their information as prompted by the ticket creation web forms, CA Unicenter
Service Desk registers the server, and it matches with the previously discovered
information. Now, the additional information entered through CA Unicenter Service Desk is
also available through the unified view of the asset in the MDB.
See the section below on CORA for a detailed discussion of registration scenarios and how
the matching and reconciliation occurs and is represented.
All Unicenter r11 products utilize the common MDB schema to store and manage their data.
As the interface through which these assets are registered and as the only source for
updating these tables, the CORA ensures that asset data flows consistently, thereby
supporting the data and referential integrity of the MDB's master asset data model.
The master asset data model consists of the following three levels of asset references:
■ The asset source level, which consists of the ca_asset_source table, and is used to
track assets as they enter the system from different data sources, whether input
manually or through discovery.
■ The logical asset level, which consists of the ca_logical_asset and
ca_logical_asset_property tables, and represents the logical instances of a physical asset.
■ Finally, the physical asset level, which consists of the ca_asset table, stores the
identifiers that define the object as a distinct, physical asset.
After CORA is given a set of registration identifiers from the calling r11 application, it
performs one of the following actions:
■ Return the asset source reference if the registration identifiers match an existing asset,
thus preventing duplicate assets from being registered.
■ Insert a new physical, logical, logical property, or asset source record into the database
depending on where the mismatch occurs. This step also prevents duplication of data
by inserting records only at the appropriate levels. For instance, if there are no physical
assets that can be identified by the registration identifiers, a new physical asset is
created. However, if a physical asset can be identified by the registration identifiers,
but not a logical asset, then a new logical asset is created and linked to the existing
physical asset.
■ Update an existing identifier(s) in the database with one of the registration identifiers.
In this scenario, a single physical asset can be identified by the registration identifiers
and one or more identifiers need to be updated.
■ Merge two physical or logical assets together. In this scenario, CORA received
information indicating that two or more physical assets are, in fact, the same asset.
The existing physical assets are merged together to form one asset and information for
each asset is stored in ca_logical_asset_property table.
For r11.1, when a product registers an asset and CORA generates a UUID that matches an
existing asset, CORA also automatically links (reconciles) Owned and Discovered
information for that asset.
To determine which CORA version is being used by the product, execute the following
command:
coraver
When an asset is registered, CORA generates the asset uuid (ca_asset) by applying black-
box logic to the following six properties:
■ Serial Number
■ Alt Asset ID (Asset Tag)
■ Host Name
■ DNS Name
■ MAC Address
■ Asset Label (Name)
CORA applies the following weighting system to these properties to determine if a match
exists. Since certain properties are considered “more important” than others, CORA
recognizes a duplicate based on those values alone.
■ Serial Number is the most highly weighted field. Two assets with the same serial
number are always matched by CORA unless the Asset Tag or Host Names are
different.
■ Alt Asset ID is the second most highly weighted field (Serial Number and Alt Asset ID
appear at the highest level of the Asset Registration schema, in ca_asset). If Serial
Number and Asset Tag match, CORA can create a new asset only if the Host Name is
unique.
■ Host Name appears in the middle level (ca_logical_asset). If Serial Number and Alt
Asset ID are blank, the Host Name takes precedence over the DNS Name and MAC Address
values. Although more than one DNS/MAC pair can be specified for the same Host
Name, it is still considered to be the same asset.
■ DNS Name and MAC Address are weighted the same. CORA will recognize the same
asset if DNS or MAC address match and will create a new asset when they do not.
■ Finally, although Asset Label (Name) is required to create an asset, you can have
multiple assets with the same name as long as all the other CORA fields are empty.
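The weighting rules above can be sketched as follows. This is an illustrative Python model of the matching behavior described in this section, not CORA's actual black-box logic; the function and field names are assumptions.

```python
def asset(serial="", alt_asset_id="", host_name="", dns="", mac=""):
    """Build a simplified registration record (illustrative only)."""
    return {"serial": serial, "alt_asset_id": alt_asset_id,
            "host_name": host_name, "dns": dns, "mac": mac}

def same_asset(a, b):
    """Approximate CORA's weighting rules: Serial Number first,
    then Host Name, then DNS Name / MAC Address (equal weight)."""
    # Serial Number is the most highly weighted field: a match wins
    # unless Alt Asset ID or Host Name explicitly differ.
    if a["serial"] and a["serial"] == b["serial"]:
        for field in ("alt_asset_id", "host_name"):
            if a[field] and b[field] and a[field] != b[field]:
                return False
        return True
    # With Serial Number and Alt Asset ID blank, Host Name takes
    # precedence over the DNS Name and MAC Address values.
    if not (a["serial"] or a["alt_asset_id"] or b["serial"] or b["alt_asset_id"]):
        if a["host_name"] and a["host_name"] == b["host_name"]:
            return True
        # DNS Name and MAC Address are weighted the same: either match suffices.
        if (a["dns"] and a["dns"] == b["dns"]) or (a["mac"] and a["mac"] == b["mac"]):
            return True
    return False
```

For example, two records sharing a serial number but carrying different host names would be treated as different assets, while two records with blank serial numbers and the same host name would be matched even if their DNS/MAC pairs differ.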
Attributes for each asset are divided into “Discovered” and “Owned” in order to facilitate
reconciliation and verification capabilities. The primary tables used to identify data sources
are:
■ CA_DISCOVERED_HARDWARE
■ TNG_MANAGEDOBJECT
■ PD_MACHINE
■ CA_OWNED_RESOURCE
To understand how these tables relate to one another, consider the following diagram.
The ca_asset_source table contains the subschema_id column which identifies the origin of
the asset. The subschema_id values are maintained in ca_asset_subschema as shown with
the following query:
■ “ITSM” (IT Service Management) objects, which includes “Owned” sources such as CA
Unicenter APM, CA Unicenter Service Desk, and CA CMDB (ca_owned_resource), have a
subschema_id of “1.”
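The kind of lookup described above can be illustrated with an in-memory SQLite sketch. The two table layouts below are simplified assumptions for demonstration only; the real MDB columns differ.

```python
import sqlite3

# Minimal stand-ins for the MDB tables named in the text; only
# subschema_id and a descriptive name are modeled here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ca_asset_subschema (subschema_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ca_asset_source (asset_source_uuid TEXT, subschema_id INTEGER);
INSERT INTO ca_asset_subschema VALUES (1, 'ITSM (ca_owned_resource)');
INSERT INTO ca_asset_source VALUES ('asset-001', 1);
""")

# Identify the origin of each registered asset source by joining
# ca_asset_source to the subschema_id values in ca_asset_subschema.
rows = conn.execute("""
    SELECT s.asset_source_uuid, ss.name
    FROM ca_asset_source s
    JOIN ca_asset_subschema ss ON ss.subschema_id = s.subschema_id
""").fetchall()
print(rows)  # [('asset-001', 'ITSM (ca_owned_resource)')]
```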
If an asset is registered in the MDB by different products, CORA registers that asset
only once and then links the information from the different data sources. As a result, the
ca_asset table has a single unique entry for each asset.
The following screenshots provide a walk-through of the queries executed after a sample
asset is registered by CA Unicenter APM/CA Unicenter Service Desk, CA Unicenter NSM, and
CA DMS. Note that the order in which the products register the asset is not relevant to the
process.
The ca_logical_asset_property table shows the logical instances of the same asset. For instance,
if the same asset is registered with CORA under different DNS names and/or MAC addresses but
the same Host Name, CORA recognizes that it is the same asset and stores two logical instances
in this table:
Note: In this example, the DNS name input by CA Unicenter APM (CA Unicenter Service
Desk, CA CMDB) did not use the fully qualified name that was discovered by CA Unicenter
NSM (CA DMS). It is only an example.
Note: Here you can see the different data sources from the subschema_id value.
Although “class” is a mandatory field for asset registration, it is not a field that is used by
CORA. The concept of “class,” in fact, is interpreted differently by different products. For
example:
■ In CA Unicenter Service Desk, the concept of “family” is used to identify the highest
level of definition for a CI and each family can consist of one or more “classes” to allow
for a more granular categorization of CIs. Further, each family has an extension table
that defines the attributes that are visible in the CI Detail page. When CA CMDB is
implemented, it includes over 50 families and over 140 classes that are each stored in
the MDB and shared between CA Unicenter Service Desk and CA Unicenter APM.
■ When the MDB used by CA CMDB is shared with CA Unicenter APM, those CMDB
families are shared and are known to CA Unicenter APM as “asset types” for “models”
and “assets.” In other words, for CA Unicenter APM, the Asset Type is the family_id
field for ca_model_def and the resource_family field for ca_owned_resource.
■ CA DMS, on the other hand, does not use families and classes to register discovered
assets. However, if CA Unicenter Service Desk is also installed and integrated with
CA DMS, when CA DMS initiates the creation of a CA Unicenter Service Desk ticket and
■ CA CMDB content creates new families and classes; however, these classes are not the
same classes that are used by CA Unicenter NSM to classify discovered objects. In fact,
only a small number of CA Unicenter NSM classes match CA CMDB classes. However,
procedures are provided in the CA CMDB Administrator Guide for mapping CA Unicenter
Service Desk/CA CMDB classes to CA Unicenter NSM classes.
Note: Because multiple CA Unicenter Service Desk/CA CMDB classes can be mapped to
the same CA Unicenter NSM class, pdm_nsmimp cannot use the CA Unicenter NSM
class to determine which class to use when creating the asset; it has no way to pick the
correct one when multiple classes are mapped.
The full schema for the MDB is viewable through the Implementation Best Practices page
(formerly the “r11 Implementation CD”), which is available at http://www.ca.com/support.
The Common Asset Viewer (CAV), formerly known as the Asset Maintenance System, is a
collection of browser-based, view-only screens available to various CA applications so that
they can view details on any asset in the MDB. The CAV provides a common interface which
can be used for viewing owned and discovered asset information.
In r11, the CAV is embedded in the CA Unicenter Service Desk, CA CMDB, CA Unicenter APM,
and CA DMS applications. In CA Unicenter Service Desk and CA CMDB, when looking at a
CI, CAV provides a common interface through which the consolidated asset details relating
to the CI can be viewed. It also enables navigation from the asset data to other CA asset-related
applications and allows users to see data that is stored about the CI in these other
applications.
CAV contains three tabs: one for displaying Owned asset information, one for displaying
Discovered asset information, and one for displaying Network asset information. The asset
data contained in these tabs is typically read from and maintained by the following
CA applications:
CA CMDB
■ Launching CAV
> CAV can be configured to display any combination of the three CAV tabs
(Owned Asset, Discovered Asset, and Network Asset) by specifying the correct
parameter when invoking its URL.
> CAV is accessed using a URL. Configuration parameters are available on the
URL that allow users to configure how CAV appears (whether a pop-up window,
what type of data is displayed, and so on).
> Using the URL, CAV can also be configured to provide links to the CA Unicenter
Service Desk, CA Unicenter APM, or CA DMS applications. For example, you
can begin in CA Unicenter Service Desk and launch into CAV to view details on
an asset. This is accomplished by forwarding URL information from one
application to the other.
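A launch URL of the kind described might be assembled as in the sketch below. The host, path, and parameter names here are purely hypothetical placeholders; the actual CAV URL parameters are documented with the product.

```python
from urllib.parse import urlencode

def build_cav_url(base, asset_uuid, tabs, popup=True):
    """Assemble a hypothetical CAV launch URL.

    'tabs' selects any combination of the Owned, Discovered, and
    Network tabs; every parameter name here is illustrative only.
    """
    params = {
        "assetId": asset_uuid,           # hypothetical parameter name
        "tabs": ",".join(tabs),          # e.g. "owned,discovered,network"
        "popup": "true" if popup else "false",
    }
    return base + "?" + urlencode(params)

# A forwarding application would embed a URL like this in its own pages.
url = build_cav_url("http://cavhost/CAisd/cav", "abc-123", ["owned", "network"])
```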
■ Architecture
> CAV is a J2EE Struts/JSP application and connects to the MDB using JDBC.
> Each application that embeds CAV installs it into its own application directory.
The CAV directory is not shared by different applications. However, as long as
all instances of the CAV are pointing to the same MDB, they all display the
same data.
> CAV uses a silent installer which is invoked by the parent application.
> Note: CAV cannot support applications residing in multiple MDBs.
> CAV configuration information is stored in the AMS.properties file located in the
CAV installation directory. This file can be edited manually, but in most cases
it should be edited using the AMSConfig.java program included with CAV.
Instructions for using this program are at the top of the AMS.properties file
itself.
> Error logging is controlled using the log4j.properties file located in the CAV
installation directory. When experiencing problems with CAV, the first step to
take is to turn on enhanced logging by changing the 'rootcategory' property
specified in this file to DEBUG.
> When experiencing problems with CAV, make sure to note the CAV version
number. It can be found in the version.rel file located in the CAV installation
directory.
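The logging change described above is a one-line edit to log4j.properties. As a sketch, assuming the property line takes the common log4j 1.x form `log4j.rootCategory=<LEVEL>, <appenders>` (the exact property spelling in your file may differ), it could be automated like this:

```python
import re

def enable_debug(text):
    """Switch the rootCategory level to DEBUG, preserving the appender list.

    Assumes a log4j 1.x style line such as 'log4j.rootCategory=WARN, A1';
    check the actual property name used in your log4j.properties file.
    """
    return re.sub(r"(?m)^(log4j\.rootCategory=)\w+", r"\1DEBUG", text)

before = ("log4j.rootCategory=WARN, A1\n"
          "log4j.appender.A1=org.apache.log4j.ConsoleAppender\n")
after = enable_debug(before)
```

Remember to revert the level once diagnosis is complete, since DEBUG output can grow the log files quickly.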
Resolution of incidents and problems reported to the service desk often involves combining
data from many different sources. A support analyst may need to see hardware or software
details, contract information, recent changes, and other data that resides in disparate
management data repositories to resolve a reported failure. In today's environment, part
the required data may reside in a CMDB, while the rest may be in repositories maintained
by discovery tools, network management applications, asset management systems, change
management applications, and so on. The challenge is to provide an easy means to access
this data, regardless of where it resides.
CA, along with BMC, HP, IBM, and Fujitsu, is a founding member of the CMDB Federation
Standards Working Group (CMDBf). This group of major CMDB players, along with Microsoft,
a later participant, recognized the need for standards to address inter-operation of CMDBs
and other management data sources across vendor lines. The CMDBf has been working
since February 2006 to develop a set of specifications and supporting materials that
enable the creation of a federated CMDB that spans multiple authoritative data sources,
regardless of the origin of the data source.
In this effort, a CMDB is defined as a data repository that contains Configuration Items
(CIs) that have been authorized using a configuration management/change management
process. A CMDB contains a subset of the universe of attributes that describe a particular
CI, and also contains information about the relationships between and among CIs.
Along with the CMDB itself, the CMDBf has defined two more pieces of the federation
picture:
■ A management data repository (MDR) is a definitive data source that has additional
information about the CIs in the CMDB that may be of interest to a group of users or
another application. Examples are discovery applications, network management
applications, asset management systems, and so on.
■ Transaction artifacts (TAs) are process outputs like incidents, problems, change orders,
alerts, and so on.
A Federated CMDB may contain any combination of CMDB, Management Data Repositories,
and Transaction Artifacts. A typical example of a federation scenario could include a CMDB,
a network discovery tool, a network management tool, and a service desk. In the real
world, this could be a total CA solution, or could be a mix of CA and other third party
products.
Federation is the system that ties all of these data sources together into a virtual database.
To participate, an MDR must register with the Federation. When it registers, the MDR tells
the Federation how to connect with it, what capabilities it provides, and what data
it would like to consume.
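The registration handshake just described can be modeled as a simple data structure. The field names below are illustrative only and are not part of the CMDBf specification.

```python
from dataclasses import dataclass, field

@dataclass
class MDRRegistration:
    """What an MDR tells the Federation when it registers (illustrative)."""
    name: str
    endpoint: str                                       # how to connect to the MDR
    capabilities: list = field(default_factory=list)    # what data it provides
    consumes: list = field(default_factory=list)        # what data it wants

registry = []

def register(mdr):
    """Record an MDR with the (hypothetical) Federation registry."""
    registry.append(mdr)
    return mdr.name

register(MDRRegistration(
    name="NetworkDiscovery",
    endpoint="http://mdr1.example/query",
    capabilities=["discovered-hardware"],
    consumes=["change-orders"],
))
```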
CA Unicenter Service Desk typically provides transaction artifacts (incidents and problems)
to applications like CA Service Level Management and CA Unicenter® Service Accounting,
and it consumes data from the CA Unicenter® Service Catalog, Discovery applications,
Network Management applications, and so on.
The overall Federation specification being defined by CMDBf is being completed in stages
and includes:
■ Document formats for exchanging meta-data about and instances of resources, process
artifacts, and relationships.
For more details on the direction CMDBf is taking and the phases in which the specification
will be created, the White Paper describing the Federated CMDB Vision can be found at
http://www.cmdbf.org (http://www.cmdbf.org).
Self service lets customers address their own support or service needs, without having to
interact with an agent.
For CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools, self service
is usually delivered through the medium of web-based self help.
Consumers
There are two main consumers of CA Unicenter Service Desk or IT self service:
■ Employees of the organization (who are provided with IT support or other types of
business service)
■ Customers (who are consumers of the organization's products or services and are not
employed by the organization)
Other people, like vendors and partners, are likely to benefit from self service, but this
chapter is focused on the two main consumers: employees and customers.
In this chapter, unless further qualified, the term “customer” is used to refer to both external
non-employees and internal employees of the organization.
Support Levels
The traditional support model structures the provision of service and the organization of
teams into tiers or levels. Therefore, traditionally, the initial contact is with first-line agents
who are usually generalists. If their expertise is insufficient to resolve the incident, they
escalate the incident to a higher-level support tier where a specialist can address it. In ITIL
terms, this is called “functional escalation” to differentiate it from “hierarchical escalation”
in which supervisors or managers must make the decisions.
Functional escalation may result in an incident being raised through a number of tiers, each
of which offers a greater level of specialized expertise. Surveys have shown that the
estimated costs of support increase significantly as incidents are escalated. In cost terms, it
is therefore advantageous to resolve incidents effectively at the lowest appropriate tier.
Level 0
Organizations implementing self service support use a new tier for support, level 0.
As can be seen from the chart above, significant cost savings can be obtained by deflecting
calls from level 1 to level 0 self service channels. When implemented effectively, average
costs at level 0 are not zero (investment is required to build and maintain the self service
capability), but average costs can be reduced. The marginal cost of highly repeatable
incidents can be reduced tremendously, making those repetitive incidents with a common
solution particularly apt for deflection to level 0.
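The savings argument can be made concrete with a simple cost model. The per-contact costs and deflection rate below are invented figures for illustration only.

```python
def blended_cost(contacts, level0_cost, level1_cost, deflection_rate):
    """Average cost per contact when a fraction is deflected to level 0.

    All monetary figures used with this function are hypothetical
    illustration values, not survey data.
    """
    deflected = contacts * deflection_rate
    handled = contacts - deflected
    total = deflected * level0_cost + handled * level1_cost
    return total / contacts

# Example: 1,000 contacts, $2 per contact at level 0, $20 at level 1,
# 40% of contacts deflected: 0.4 * 2 + 0.6 * 20 = 12.8 per contact.
avg = blended_cost(1000, 2.0, 20.0, 0.40)
```

Even a modest deflection rate lowers the blended average markedly, which is why highly repeatable incidents are the prime candidates for level 0.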
■ Increased support efficiency. By deflecting calls away from costly support channels,
the efficiency of the support organization is likely to be improved if an effective and
well-targeted self service capacity is provided. This provides opportunities for “more to
be done with the same” resources.
■ Improved allocation of resources. Analyst effort can be focused towards more
productive ends. This improved allocation of resources changes the role of analysts,
making their work less repetitive and more engaging.
Capabilities
The CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools solutions
provide the following out-of-the-box self service capabilities:
■ Dynamic FAQ listing to push (bubble up) the top knowledge documents to users
■ Announcements of service alerts and important messages, which can provide links to
knowledge articles and self service automation
When augmented with complementary CA products, other options can be added, including
the following:
■ Password Reset by end users, with self-authentication. This can be done with a range
of password reset tools, including CA Identity Manager - Password
Management/Password Reset functionality.
■ Service Catalog offering subscription to service offerings-with rate plans and service
level selection-through to approval and subsequent fulfillment. CA Unicenter® Service
Catalog can support this capability.
For more information on the above integrations, see the chapter on integration. For
an overview of CA Unicenter Service Desk Knowledge Tools and Knowledge Management,
see Chapter 6, Knowledge Management. For an overview of CA SupportBridge and CA
SupportBridge Self Service Automation, see Chapter 7, Support Automation.
Self service is not merely a technology. While technology is a critical enabler for most self
service initiatives, success also depends on the effective contribution of people, process,
and content.
Management Commitment
As is the case with most programs that result in change, adopting self service requires
executive commitment. Deflecting incidents effectively to the point of self-support requires
a quality of interaction at the new interface that reproduces, in another form, several of
those elements that would have otherwise been provided by a live agent. The redirection of
focus and attention necessary to maintaining the quality of that interaction, especially over
time, requires some measure of change in the support organization. Merely switching on a
self service tool, even a powerful integrated solution, is unlikely to bring the customers to
that channel. And that solution, alone, is unlikely to be sufficient to keep them there.
Management needs to drive the initial change (for example, in process support and in
knowledge management) and provide the sustained focus for success.
Ease of Use
Usability is critical to the success of self service. A poor usability experience can result in a
user defecting to a more costly phone call or, even worse, ending up with two complaints:
the first is the problem they were trying to address when they attempted to use self service,
and the second is not being able to use the self service tools.
The functionality of the tools presented to the user should be self-evident. It is reasonable
to expect the self service site to be publicized and marketed to customers. It is usually not
reasonable for the users to be expected to have been trained in the use of the site, or to
require them to have read a user guide.
Ease of use also extends to content. Knowledge content, self service automation options,
categorization structures, and catalog entries all need to be expressed in terms that are
simple and meaningful to customers.
To that end, simplicity and obviousness are goals of web self service. For more details on
ways to simplify users' web site experience in general terms, the following book is a good
background source, and a strong statement on the need to avoid complexity: “Don't Make
Me Think: A Common Sense Approach to Web Usability” by Steve Krug (2nd Edition, 2005).
For those interested in a broader discussion about the usability of any kind of product or
system and some of the important considerations in the design of everyday objects, see the
book The Design of Everyday Things by Donald A. Norman (2002).
Quality of Content
The content of the self service site must be relevant and usable. It must be pertinent to
customers' current needs and be expressed in terms customers can easily understand.
Supporting Processes
Just providing an interface is not sufficient. The following processes should be supported:
■ Knowledge Management
The knowledge content provided through self service should be useful and timely. This
is critically dependent on effective supporting knowledge lifecycle management
processes. See Chapter 6, Knowledge Management, for details.
To promote the opening of requests or incidents through the web, customers must be
able to trust that the web will provide them with a reasonable degree of service almost
as quickly as telephone support. A common cause of failure for web-based incident
management is the provision of web-based access without a corresponding
commitment to deal with web-generated requests and incidents quickly. Support
organizations must measure the speed of response to web-generated support.
Organizations that are not prepared to adequately support the web channel and are
tied to longstanding telephone-oriented metrics alone, such as Average Speed to
Answer (ASA), risk neglecting the web, to the detriment of the web initiative. See
Chapter 8, “Service Levels,” for more details on defining service levels.
■ Availability Management
Since the self service system is usually not limited to business-hours-only support, it
needs to be made available continuously, on a 24x7 basis, and needs to be reliable.
This system should be monitored to ensure that it is supporting agreed user loads and
maintaining the required levels of service.
Unlike business-oriented web sites, which often generate closely-watched rich metrics,
internal IT web support sites (in common with most inward-facing intranet-based systems)
often lack extensive metrics, measurements, and reporting.
Careful attention should be paid to metrics and measurements for self service. Useful areas
to consider are:
Key Functions
Interfaces
CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools have two out-of-the-box
interfaces that are relevant to self service: the employee interface and the
customer interface. Out-of-the-box form groups exist for these two interfaces, and these, in
turn, are used by the out-of-the-box Employee and Customer Access Types. These interfaces
can be changed through adaptation of the relevant HTMPL pages, and through changes in
JavaScript. Nevertheless, many adjustments can be made by configuring existing functions,
and many disparate functions are already highly integrated. It is best practice to use these
out-of-the-box approaches where they map to organizational needs. This section
illustrates these major capabilities.
Announcements
A major objective of the CA Unicenter Service Desk is to act as a single point of contact for
the customer, and a key role is to keep the customer informed. The announcements feature
can achieve this. When customers seek assistance and visit the customer support web
interface, they see a series of announcements.
Knowledge Search
The various default search settings are determined by the following Administrator setting:
Some sites use the option of not searching by default on the Resolution field, or limiting the
default search further to the Problem field alone. This can be useful in mapping search to a
specific controllable field. More information on this can be found in Chapter 6, Knowledge
Management.
The My Bookmarks and Submit Knowledge links can be removed from this interface by
adjusting the knowledge privileges in the Access Type of the user accessing this screen. The
Add Bookmarks and Create Document privileges, respectively, control access to these
functions.
Top Solutions
The Top Solutions list in the self service interface is sometimes referred to as Dynamic
FAQs. This list is normally ordered by FAQ rating, and is, therefore, a reflection of
usefulness of the documents in the knowledge base (based on links to knowledge
documents, votes, and, normally, hits). Since the FAQ rating for a document will decay over
time (unless it is used further) an element of timeliness is incorporated into the FAQ rating.
Normally it is preferable to display a few Top Solutions on this page rather than a large
number, in order to avoid a lot of potentially distracting elements competing for user
attention on the self service landing page.
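The decay behavior described above, where ratings fall off over time unless a document keeps being used, can be sketched with a simple exponential model. The formula and half-life below are illustrative assumptions, not the product's actual FAQ rating algorithm.

```python
def faq_rating(base_score, days_since_last_use, half_life_days=30.0):
    """Illustrative time-decayed rating: the score halves for every
    'half_life_days' of inactivity, so recently used documents bubble
    to the top of the Dynamic FAQ list while stale ones sink."""
    return base_score * 0.5 ** (days_since_last_use / half_life_days)

fresh = faq_rating(100.0, 0)    # unused decay: full score
stale = faq_rating(100.0, 60)   # two half-lives of inactivity
```

Under this model, links, votes, and hits would raise the base score, while the time term supplies the element of timeliness the text mentions.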
Rather than having customers call into the service desk to check the current status of their
incident or request, self service provides a cost-effective and continuously available means
of reviewing activity being performed on their behalf. It is therefore critical that the service
desk analysts log meaningful status information in the incidents and requests.
The list of Open Requests, Closed Requests, and Open Change Orders is generated by the
scoreboard. To add, remove, or change the items in this list of links, use the Customize
Scoreboard feature to adjust the scoreboard or stored queries for the Access Type being
used.
Organizations that publish request, incident, and problem details to customers need to
ensure that activity log entries that are not set as internal will contain only those remarks
suitable for customer review. Confidential information must be omitted. Activity log entries
that have been marked as internal are flagged in the analyst's view with a sunglasses icon,
as follows:
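The publishing rule above amounts to a filter on the internal flag. As a minimal sketch (the entry structure here is an assumption, not Service Desk's actual activity-log schema):

```python
def customer_visible(entries):
    """Return only activity-log entries safe to show customers:
    anything flagged internal is withheld from the self service view."""
    return [e for e in entries if not e.get("internal", False)]

log = [
    {"text": "Replaced faulty NIC", "internal": False},
    {"text": "Vendor escalation notes", "internal": True},
]
visible = customer_visible(log)
```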
This combined customer searching and category browsing interface is used after an initial
search is launched, or when the user decides to browse more solutions.
Customer Bookmarks
This feature allows customers to bookmark knowledge documents for fast reference at a
future date. Bookmarks are added and removed by clicking a link inside the knowledge
document.
Even when a service transaction might ultimately involve agent interaction, the mere fact
that a request or incident is defined through the web provides many opportunities for
saving time and effort. These include the following:
■ Use of properties. The use of properties for a Request Area or Incident Area helps
define the request or incident. The correct use of mandatory and non-mandatory
properties ensures that important data is provided that will minimize the need to collect
extra information before the analyst can begin working on the request or incident. In
this way, customers share the burden of recording and classification (although this may
need analyst refinement or correction).
■ Use of priority. The customer has the option of setting a Priority for the request or
incident. Priority values that are available in the self service interface are set by editing
the text segments of the web.cfg file that begin with the following remarks:
Please note that these settings apply to both requests and incidents.
Search Pop-Up
A search box can be made to pop-up in the self-service interface when a request or incident
is opened.
This prompt is activated by installing the following option using Options Manager:
prompt_for_knowledge
Note: When this option is installed, the activity log for the incident or request automatically
records whether or not the user searched the knowledge base.
Whether to use this pop-up is up to your organization. Opinions vary about the extent to which
knowledge search should be pushed to people who need to open a request or incident.
After opening a knowledge document, the user is able to read the document and access a
series of functions. One of these is an opportunity to indicate whether customers feel that
their problem has been solved. This is enabled by checking the setting in Administrator
called “Make voting mandatory by displaying a Solution Survey popup if the user did not
vote.” The voting is not required; the user can opt out of the vote by closing the pop-up,
but the use of this pop-up provides an additional prompt if the user did not vote before
closing the knowledge document window. This option can be found at the following location:
Note: Opinions vary about the effect of this extra pop-up on customer navigation and
overall self service satisfaction. Customers who are reviewing several documents may find
the extra window cumbersome, and it could discourage use of the system. Please consider
the pros and cons of this option before implementing it.
Action Content
Knowledge documents contain rich content and graphics. A new feature in CA Unicenter
Service Desk r11 provides an easy way to bring “action content” to a knowledge document.
In this example, a hyperlink has been inserted into a knowledge document that, when
clicked by the end user, creates a new incident based on an existing incident template.
Therefore, by following the knowledge document, a substantial degree of definition and
classification can be achieved without the user even realizing it.
The steps needed to set up a template and insert a “Create Ticket link” into the above
knowledge document are easy. No coding is required. The CA Unicenter Service Desk
Knowledge Tools HTML editor handles the generation of the HTML code, and simple
configuration is all that is needed.
User Management
A question that often arises when implementing a self service site is whether members of
the user community can be consistently identified as unique users and, therefore,
authenticated appropriately by the application:
■ Organizations that have a stable end-user community often re-use data stored in an
existing user directory when addressing how to authenticate users into CA Unicenter
Service Desk. Various options are available for authentication. (More details are
available in the authentication section in Chapter 16, Security, of this book.)
■ In organizations where there has been significant change, merger and acquisition
activity, or a lack of consistent user-naming conventions and identity standards, the
deployment of self service to the entire user community may prompt some choices
about how to represent users.
CA Unicenter Service Desk has the ability to log in a “guest” user, but this login cannot be
differentiated easily, so features like personalization of screen appearance, auto-filling of
the user name, and knowledge segmentation (where knowledge content is authorized to a
particular group of users) are not easy to accomplish.
The guest user may, however, be of significant value in providing self service access to
users who need support precisely because they are unable to supply adequate credentials
(for example, when they forget their password) or are prevented for some reason from
accessing the network resources to which they are normally entitled.
As this chapter has explained, CA provides powerful self service functionality out-of-the-
box. Different features and interfaces are tightly integrated: users, knowledge,
announcements, incidents, and so on.
■ Less integration effort required and simpler maintenance of system and upgrades,
promoting lower TCO
This chapter provides best practice approaches to KM from process, people, and technology
perspectives, as well as guidance on how to use CA technology to support these
approaches.
For more information about product functionality mentioned in this chapter, refer to the
documentation set for CA Unicenter Service Desk, CA Unicenter Service Desk Knowledge
Tools, and CA Unicenter Service Desk Dashboard. In particular, the CA Unicenter Service
Desk Knowledge Tools Administrator Guide (available via the Technical Support selection at
www.ca.com (http://www.ca.com)) and the online help files for CA Unicenter Service Desk
Knowledge Tools provide additional relevant information.
As with many other terms, knowledge management means different things, depending on
the context. KM has been used to denote things as diverse as data warehousing, corporate
learning and training systems, and portal-accessible databases that document re-usable
consulting experiences and re-usable best practices. Since the mid-1990s, KM has also
often been used to refer to a strategic approach that focuses on knowledge as an inherently
valuable organizational asset, resulting in a variety of organizational initiatives and
innovations such as Chief Knowledge Officers and Communities of Practice.
The focus in this book is narrower and more defined: knowledge management for the
service desk. Here, KM involves the creation and sharing of useful and reusable knowledge
that helps IT provide effective technical support.
Such knowledge is typically made directly accessible to people who are employees of the IT
organization. Increasingly, this knowledge is also made available, through self service, to
other employees and to customers.
Drivers
The main business drivers of service desk KM typically include the following:
■ Cost constraints
Objectives
Service desk KM objectives are usually based on a desire to be more efficient with
resources, while continuing to deliver on key high-level goals. Common objectives are as
follows:
■ Increase efficiency
Tactics
The most frequently used tactic when pursuing service desk KM objectives is the rollout of
knowledge management to IT support analysts. This is often followed (but rarely preceded)
by the deployment of self service to end users or customers.
The introduction of knowledge technologies and knowledge processes for analysts, end
users, and customers has become an accepted industry practice. The third approach listed,
embedding knowledge into a variety of applications, is especially promoted by CA so as to
make applications more self-supporting by embedding supportability into the applications
themselves. Where knowledge is concerned, this means seamlessly providing relevant
knowledge from the service desk, but within the context of those applications. (More details
about this approach are available in the chapter of this book on service-oriented
architecture and web services.)
Outcomes
KM programs that target knowledge for IT analysts typically target the following types of
outcome:
This list will differ depending on organizational goals. Additionally, many of the outcomes
will be different when introducing KM in a self service initiative because KM can, in some
cases, create counter-intuitive results in certain measurements. For example, the
deployment of an effective knowledge base for analysts and end users might be expected to
reduce average talk time per call. However, despite an increase in support efficiency and a
reduction in the average cost of supporting an end-user enquiry, average talk time per call
may increase. This is because the self service knowledge base may achieve substantial
levels of call deflection (where a large number of simpler commonplace questions that
previously required telephone interaction are resolved through self service), but the
remaining types of incidents that are not solved through self service and still result in a call
to the service desk are now more difficult. Although the net result may be positive, some
measurements could be misinterpreted unless a broad and balanced set of metrics is
collected before and after the implementation.
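The arithmetic behind this effect is worth seeing once. The sketch below uses entirely hypothetical call volumes and talk times (they are illustrative assumptions, not benchmarks) to show how average talk time per call can rise even as the total phone workload falls:

```python
# Hypothetical call mix before self service: mostly simple, short calls.
simple_calls, simple_minutes = 700, 3.0
complex_calls, complex_minutes = 300, 10.0

total_before = simple_calls * simple_minutes + complex_calls * complex_minutes
avg_before = total_before / (simple_calls + complex_calls)

# Assume self service deflects 80% of the simple calls; the complex
# incidents still result in a call to the service desk.
deflection_rate = 0.8
simple_remaining = simple_calls * (1 - deflection_rate)

total_after = simple_remaining * simple_minutes + complex_calls * complex_minutes
avg_after = total_after / (simple_remaining + complex_calls)

# Total phone minutes fell, yet average talk time per call rose,
# because the remaining call mix is skewed toward harder incidents.
print(f"avg talk time before: {avg_before:.1f} min, after: {avg_after:.1f} min")
print(f"total phone minutes before: {total_before:.0f}, after: {total_after:.0f}")
```

With these assumed figures, average talk time climbs from about 5 minutes to nearly 8, even though the desk now handles a third less phone work overall, which is exactly why a single metric viewed in isolation can mislead.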
Best Practices
Approaches
ITIL
ITIL version 3 elevates knowledge management, defining it as a process. The ITIL Service
Transition book, in particular, includes overview analysis and high-level guidance about
using knowledge management when instituting IT services.
Knowledge management can also be applied, in a number of ways, to the more operational
aspects of ITIL that are the main focus of this book. With ITIL v3, as has been the case with
ITIL v2, knowledge management can help the processes of incident management and
problem management. Organizations can benefit from CA Unicenter Service Desk
Knowledge Tools' powerful knowledge creation, management, retrieval, and measurement
capabilities to support or automate aspects of many ITIL-defined activities, including the
following:
■ Incident Management
> Diagnostic scripts and known error searching can be valuable during the initial
diagnosis of an incident (using powerful search, retrieval, and linking functions)
> Knowledge searches can be used to help find known errors during further
incident investigation and diagnosis
■ Problem Management
> Accessing information about known errors, and helping with problem matching
to obtain the resolution if the problem has occurred before
The following list covers elements of the solve and evolve loops of KCS, and highlights
some key points:
■ Searching is Creating: When a solution is not found, the way a search is framed
provides the basis for a new solution.
■ Performance Assessment: Measure activity, monitor quality, and reward added value.
Target outcomes, not activities.
■ Leadership: Vision and strong leadership are critical factors for KCS success.
Those interested in KCS can delve into much greater detail on the above, and many other
closely related topics, in a KCS Foundations Workshop. This training class covers areas
ranging from assigning roles and responsibilities to learning how to champion knowledge
management to relevant stakeholders.
Aside from ITIL and KCS, various industry bodies, industry analysts, and consulting firms
have provided guidance that deals directly or indirectly with knowledge management for
the service desk.
HDI, an organization whose members work in the field of IT service and support, has
published a book on service desk knowledge management, entitled Collective Wisdom:
Transforming Support with Knowledge, by Françoise Tourniaire and David Kay (2006). This
book surveys several different approaches to service desk knowledge management, and is a
useful collection of best practices and ideas that can be used during
and after a knowledge implementation project. The HDI website is www.thinkhdi.com.
Processes
In the early days of help desk knowledge bases, in a majority of cases, little attention was
given to identifying and capturing re-usable content, refining it, managing it over time, and
retiring it when it reached the end of its useful life. At that time, although a rudimentary
knowledge base was often created, executing a knowledge search from a ticket and linking
a solution back into the ticket was often mistakenly seen as the only important feature of
help desk KM. This frequently resulted in a knowledge base with limited value, because
there was little control and limited insight into what had been placed in the knowledge
base, and insufficient means of improving knowledge base content over time.
Some other more sophisticated approaches, also available to help desks for many years, had
been dependent on highly structured and complex diagnostic tools for problem resolution.
Although powerful, these tools often required time-consuming engineering efforts and
highly skilled maintenance in order to encapsulate the knowledge necessary to solve
incidents effectively.
An example of a high-level knowledge lifecycle is shown above. This chart does not
necessarily show a recommended approach; it merely illustrates one of several possible
variations on how an organization might choose to handle knowledge creation, depending
on organizational priorities and needs.
The phases shown in this sample lifecycle are primarily human activities, not technical
steps. Initially, the creator of knowledge identifies a current or anticipated need for content.
They author the content. They, or others, may revise and review the content before
it is published so that it is aligned, to an acceptable degree, with the required structure
and standards. The content is published. Thereafter, it receives periodic evaluation and
revision where needed. If the content is evaluated and deemed irrelevant to current needs
or is found to be obsolete, it is retired rather than retained in the knowledge base.
Although this is only an example, one aspect of the depicted lifecycle is common to almost
all lifecycle approaches: it is cyclical, or ongoing. The content in the knowledge base
must be managed over its lifetime, and this may involve many iterations. Therefore, the
knowledge base is never finished; it needs ongoing attention and commitment.
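The phases described above can be modelled as a simple cyclical state machine. The sketch below is generic; the state names follow the phases in the sample lifecycle and are not the states used by any particular product:

```python
# Allowed transitions in a simple, cyclical knowledge lifecycle.
TRANSITIONS = {
    "identified": {"drafted"},
    "drafted": {"in_review"},
    "in_review": {"drafted", "published"},  # review may send content back
    "published": {"in_review", "retired"},  # periodic evaluation loops back
    "retired": set(),                       # end of useful life
}

def advance(state: str, new_state: str) -> str:
    """Move a document to new_state, enforcing the lifecycle rules."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot move from {state} to {new_state}")
    return new_state

# A document may loop through review and publication several times
# before eventually being retired.
state = "identified"
for step in ("drafted", "in_review", "published",
             "in_review", "published", "retired"):
    state = advance(state, step)
print(state)  # retired
```

The key property the model captures is that "published" is not a terminal state: content loops back through evaluation and revision for as long as it remains useful.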
Just-In-Time Knowledge
Many people presume that a knowledge document must be perfected, carefully verified,
and thoroughly approved before it can be used. People with this viewpoint often consider
such steps important to avoid any risk of the content being less than ideally presented, or
perhaps as a careful guard against it containing any incorrect technical information (which
in a worst case scenario might even cause damage to the user's environment). This is
reflected in the approach that has been used by some organizations for many years: a
multi-step approval process which must be completed before any content is made available
in the knowledge base. Organizations wanting to exercise this kind of high level of control
over content often use two or three of the following types of review before access to a
knowledge document is provided to system users:
■ Technical review
■ Legal review (especially for special types of content or for external publication)
CA Unicenter Service Desk Knowledge Tools supports the use of definable multistage
approval processes, as required for this approach. Organizations that take this approach
often have an emergency “hot issue” procedure to expedite the publication of those
occasional knowledge documents that are particularly urgent, by means of special approval
by certain authorized managers.
It is interesting to note, however, that many organizations, even those which are
process-driven and quality-conscious, take a different approach. They bypass a multiphase
approval process in order to increase the speed with which content is routinely created.
Time-to-market is thus often seen as more important than thorough verification.
Under KCS, content is published early and, as it demonstrates its value through re-use,
its visibility is expanded to additional groups. Therefore, at the start of the process it may
only be available to certain analysts, but as its efficacy is demonstrated through effective
re-use, it may be made available throughout the IT support organization. Yet later, after
further successful re-use, it is made available to end users or customers through self
service. During this process, analysts who are licensed to do so can “fix” the content, when
needed, to improve it. Other analysts who lack this authority can “flag” the content, so that
it can be improved by others. KCS, therefore, uses demand to drive the creation and
refinement of content when that content is shown to be needed.
Regardless of whether KCS or another process approach has been adopted, many CA
Unicenter Service Desk Knowledge Tools sites want to push knowledge content out quickly.
We have found sites running CA Unicenter Service Desk Knowledge Tools which allow the
knowledge content team (which does not have specialist technical knowledge) to research
an issue and publish a knowledge document to end users even before the KD has
undergone a technical review or has received approval by domain specialists. In such
places, speed is a major goal, and the content team will, if necessary, fall back and
withdraw the content after publication if it is subsequently found to be erroneous.
Content
Knowledge content for use by service desk analysts or end users should be as follows:
■ Relevant to users' needs. It should target the incidents and problems that are actually
occurring, or those that are likely to occur based on reasonable predictions or past
patterns (for example, defects that were discovered during a beta process, common
questions that arose during the design or implementation of a system, or a small
number of high-impact questions that were anticipated by the system's designers).
■ Not too complex. Given a choice, it is preferable that content be a little too easy rather
than a little too difficult. Simpler knowledge documents and more complex ones can be
linked together, if needed, making the advanced content accessible, but allowing the
user to avoid detailed steps or explanations that they may not need or understand.
Sources of Content
■ Solutions generated from CA Unicenter Service Desk during incident management and
problem management workflow, in reaction to an incident or problem. This can be
achieved using the solution logging functionality.
■ Proactive or reactive content created by members of a content writing team (if one
exists, given the support model being used).
There is, therefore, a wide range of potential contributors. The kinds of contributors
that would apply to a specific implementation will depend on the type of knowledge
creation model that is used. In KCS, the first item in the list above, solution creation
during the incident management process, is likely to predominate. Organizations
following other approaches may use dedicated individuals or a special knowledge team
to author content. CA's solution can support a federated knowledge contribution model
which, by default, allows any contact defined in the system to submit a knowledge
document. Knowledge Report Card further supports a federated knowledge model,
since it can provide metrics to any analyst about the knowledge they have contributed.
A common mistake at the outset is for an organization to attempt to capture too much, or
what is described as “everything we know,” and transfer it into the knowledge base. This
can be an unending and ultimately futile task, given the huge quantities of material
involved, the lack of attention paid to weighing the costs of capturing it against the
benefits, and the question of whether the content is even up to date and truly reflective
of current needs.
Instead, an organization should focus on capturing content that will be valuable or highly
re-usable, and weigh it against the cost of incorporation in the new system.
Sources of content may be varied, and all types of knowledge, regardless of their current
medium, should be open for consideration (even if only high-value content is ultimately included).
Examples of content sources include the following:
■ FAQs
Sometimes, it is useful to conduct a short “mini knowledge audit” to list the different types
of knowledge that exist in the relevant domain. In this list, for each of the types of content,
indicate the following:
■ Location of content
■ Current format
■ Complexity of capture and conversion into CA Unicenter Service Desk Knowledge Tools
The outcome of this process might be a limited number of highly targeted knowledge
documents (for example, perhaps 10 to 30 in a small or medium-sized implementation)
that reflect high-value needs that can be used to “seed” the knowledge base.
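A mini knowledge audit of this kind can be kept as a simple structured inventory. The sketch below is illustrative only: the field names and the 1-to-5 value/complexity ratings are assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One row of a hypothetical mini knowledge audit."""
    content_type: str
    location: str
    current_format: str
    conversion_complexity: int  # assumed scale: 1 = trivial, 5 = very hard
    expected_value: int         # assumed scale: 1 = low re-use, 5 = high re-use

inventory = [
    AuditEntry("FAQs", "intranet wiki", "HTML", 2, 5),
    AuditEntry("Release notes", "file share", "PDF", 4, 2),
    AuditEntry("Escalation emails", "mailboxes", "email", 5, 3),
]

# Seed the knowledge base only with high-value, low-cost candidates,
# weighing re-use value against the cost of incorporation.
seeds = [e for e in inventory
         if e.expected_value >= 4 and e.conversion_complexity <= 3]
print([e.content_type for e in seeds])  # ['FAQs']
```

The filtering step is the point: most of the inventory is deliberately left out, and only the small set of high-value, easily converted items is used to seed the new system.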
Those advocating the adoption of KCS are not keen on wholesale conversion of legacy
content, as the KCS philosophy is for content to be demand-driven. Therefore, if legacy
content must be retained, it may be better to keep it in the legacy tool rather than convert
it. Then, if and when an old solution is needed to handle a new incident, only that solution
is linked or pointed to (from the new incident tracking system and knowledge base), or
recreated, using best practices, in the new system.
Consistency of Content
Content should be reasonably consistent. There should be a basic content style that is
generally adhered to, and a roughly consistent look and feel for each type of content.
Document templates can be used to standardize much of the layout of a knowledge
document. This helps make the appearance more consistent and can enable quicker and
easier reading.
Many organizations create a style guide to provide guidance on the content and style of a
knowledge document. The style guide typically addresses the structure and style of a KD.
It should be kept very brief so that it is actually read and referred to by most
contributors (unless the organization exclusively uses a team of dedicated writers, in
which case the document can be more detailed without deterring its intended readers).
When completed, the style guide is often made available within the CA Unicenter Service
Desk Knowledge Tools system itself.
It is important to ensure that the knowledge base remains relevant, and does not stagnate
through the retention of obsolete or incorrect content. Many organizations conduct regular
content reviews. Documents that are no longer suitable can be updated (and returned
to the approval process) or retired.
Rather than set a common review date for a large number of knowledge documents, which
risks resulting in a log-jam of content at one time, most sites spread this activity out so
that document reviews are staggered. It is a common practice among many organizations
to set an expiration date on the KD (often set six months out) to ensure that the content is
monitored in a timely way. Auto-notifications are sent prior to expiration.
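Staggering can be as simple as spreading expiration dates over a window instead of stamping one common date on the whole batch. The sketch below is a generic illustration (the six-month default and the 7-day notification lead follow the text above; the function and its parameters are hypothetical, not a product feature):

```python
from datetime import date, timedelta

def schedule(doc_ids, start=date(2024, 1, 1), spread_days=30):
    """Assign staggered expiration dates, roughly six months out,
    so reviews do not all fall due on the same day."""
    plans = []
    for i, doc_id in enumerate(doc_ids):
        # Spread documents across a window instead of one common date.
        expires = start + timedelta(days=182 + i % spread_days)
        # Auto-notification is sent 7 days before expiration.
        notify = expires - timedelta(days=7)
        plans.append((doc_id, notify, expires))
    return plans

for doc_id, notify, expires in schedule(["KD1", "KD2", "KD3"]):
    print(doc_id, "notify:", notify, "expires:", expires)
```

Even this crude modulo-based spread avoids the log-jam the text warns about: no two documents in a batch smaller than the window share a review date.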
Knowledge Retirement
■ Monitoring for low usage or low linkage (for example, using the Knowledge Report Card
metrics and the Published Document Analysis metrics from CA Unicenter Service Desk
Dashboard)
■ Users responding to notifications such as automatic Review Date emails and the
automatic emails prior to the expiration date (which are sent 7 days before expiration)
If, for some reason, a knowledge document is still published on its expiration date, it will be
automatically expired, and its status will change to Retired. Activity notifications can also be
used to send warning emails automatically once a KD expires.
KDs that are set to Retired are not deleted from the knowledge base; they remain in a
Retired status and are visible in the Knowledge Categories interface, although they are not
normally searchable through the standard search interfaces used for incident and problem
management. Users with appropriate edit rights can elect to inactivate KDs by deleting
them using the Knowledge Categories interface. These KDs will normally be inaccessible
from CA Unicenter Service Desk Knowledge Tools, but they are not deleted from the
underlying database tables. If deletion from the database is required, KDs can be purged
using the rule-based archive and purge functionality of CA Unicenter Service Desk.
■ To support operational activities. To provide visibility into data, monitor activities and
outcomes, and identify exceptions. To facilitate the day-to-day allocation of resources
and help determine the routine focus of employee, supervisory, and managerial
attention.
■ System Reports (for example, system tables, synonyms, access rights, and so on)
■ Knowledge Utilization (for example, searches, hits, linking to incidents, and so on)
■ Impact of knowledge on service management metrics and service desk metrics (for
example, average cost of support, mean time to resolution, first call resolution (FCR),
customer satisfaction, and so on)
Metrics—Key Considerations
Look at the results of knowledge activities, not just the activities themselves. For
example, the quantity of documents submitted is not meaningful without information
concerning the usefulness of the documents and the benefits derived from them. KCS
best practices help to distinguish between different types of measurements, such as
activities and outcomes, and encourage an approach whereby a balanced set of metrics
is measured and reported. The identification of suitable goals is important, too, as is
explained below. Setting goals on the wrong metrics can be counter-productive and
detrimental to overall success.
Ultimately, the most important measurements to compare against are ones that reflect
the goals of the IT function and the goals of the organization as a whole: the cost of
providing support, customer satisfaction, business service availability, and so on.
Mapping the knowledge initiative to these higher-level measures (not merely measuring
the more routine operational ones) will provide a greater opportunity to align to
business needs.
Without knowing where things started from and how far they have progressed, it is not
possible to quantify change or achievement. Therefore, it is important to produce
measurements before a knowledge initiative begins, and not wait until it is
implemented before collecting data. Create an initial baseline, and continue to measure
thereafter.
Changes in process, roles, and activities may cause negative trends in some familiar metrics
and positive improvements in others, especially during the transition. Take a broad approach,
and look at a range of metrics to help interpret the overall patterns, and to discern the
interplay between measures.
Customer surveys are valuable for collecting qualitative data regarding customers' opinions,
as well as producing metrics on customer satisfaction and other measurable items.
Customer satisfaction is usually a key measure, too. This data can be collected using the CA
Unicenter Service Desk customer survey automation functionality. A knowledge initiative,
especially where it has a self service component, may, therefore, benefit from well-targeted
surveys in a number of ways.
CA Unicenter Service Desk Knowledge Tools includes knowledge report card (KRC)
functionality. The report card indicates each individual's contribution of knowledge, and
illustrates how each KD they have contributed has been used.
Capabilities include automated email and web-based distribution of the report card. KRC
offers user-, team-, and category-based views. As well as providing key knowledge summary
metrics, KRC also provides drill-down detail.
Information provided by the CA Unicenter Service Desk Dashboard includes the following:
■ Knowledge content
> Major knowledge categories can be visualized with indications of the quantities
in each status. This shows content in various stages of the knowledge lifecycle.
> Content publication counts are broken out by category and month, indicating
knowledge creation volumes.
■ Knowledge activity
■ Knowledge gaps
> Display of knowledge documents that have remained unedited in a
pre-published status for the longest periods of time.
The Consortium for Service Innovation's KCS model is a good source of information on best
practices for knowledge metrics and reporting. (The Consortium for Service Innovation
website is www.serviceinnovation.org.)
Metrics for IT Service Management, by Peter Brooks (2006) provides a more general
perspective, and is useful for principles of metric design. It also includes metrics for various
ITIL processes.
During the planning of a service desk KM rollout, questions frequently arise concerning the
various roles and responsibilities involved, and what types of staffing will be needed to
make the implementation successful. Different staffing approaches exist, and these will be
outlined briefly in this section.
Leadership
Organizations that have implemented service desk KM successfully vary in size
and in the types of service or support they provide. When asked which success factor they
felt was most critical and which they wished they had known about before implementing,
one response was repeatedly mentioned: the need for executive “buy-in” from the outset.
Since knowledge management can, to some extent, transform the way in which the service
desk works, adjustments are necessary in various tasks people perform, and in how
technology is used. Changes must be made, too, in terms of how an individual's
participation and contribution is valued, how goals are promoted, and how some aspects of
work are evaluated and measured. This type of change needs strong leadership
involvement for several reasons:
■ To champion the program, especially during its rollout and early phases, and to promote
the successes of the initiative
For the above reasons, it is recommended that executive management be involved in the
project from the beginning.
There are several models for organizing roles and responsibilities for service desk KM. We
will focus on two broad approaches here. For more details, refer to other sources mentioned
in this chapter, particularly the Collective Wisdom book by Françoise Tourniaire and David
Kay, which is a good source of information on this topic and provides a framework for
evaluating the different approaches.
In cases where the knowledge specialists who create content are not domain experts, they
may depend considerably upon escalation groups or technical specialists in other areas of
the IT organization, and may delegate to them, collaborate with them, or interview them
when drafting new knowledge content. In organizations where the content team consists of
domain specialists—or in cases where the content is not particularly complex—there may be
a somewhat greater degree of technical self-sufficiency.
This model requires continued investment in the knowledge management staff and, in the
case of generalist authors, also requires a substantial level of co-operation from the
external groups who may be needed to assist them.
In other organizations the role of creating content falls on all members of the support
organization, who create content as an integral part of the incident management workflow.
A specialist dedicated knowledge team that writes content may not exist at all. KCS falls
within this second category: the knowledge is created by analysts and is demand-driven,
reflecting the context in which it is sought and mapping clearly to customer needs. This
approach requires a willingness to focus on a disciplined adoption of the relevant knowledge
management processes and needs a strong level of leadership.
There is no “one size fits all” model for organizing KM for a service desk, and one of several
approaches—or a mix of approaches—may be used. Factors such as the complexity of
incidents and problems, the volume of incidents handled at different support levels, the
capabilities of the organization to make and sustain changes to existing processes, and the
priorities of senior managers can all play a part in selecting the right approach. For further
details, refer to Collective Wisdom.
Experience with different organizations adopting service desk KM has identified three stages
of awareness that sometimes are revealed when it comes to employee goals and incentives.
They are illustrated in the following scenario:
At first the organization is focused on technology, staffing, and perhaps questions like
categories and content format. There is recognition that content must be created, and a
concern about whether enough content will be available when the system goes live. No
incentives are planned for now.
Someone has an idea: “Set a goal for everyone in support to create at least five
knowledge documents over the next two weeks.” This seems to make sense to
everyone; it seems to offer a fast way of documenting what the organization
collectively knows. Another idea is proposed: “Give a special reward to the person who
creates the largest number of knowledge documents. This will make people work even
harder to create a great knowledge base.”
The team looks at the count of documents in the system two weeks later, and the
results appear to be even better than they had expected: there are dozens of entries
(especially from several staff members who seem to have contributed a
disproportionate amount). Apparent success is broadcast to the support team and to
management; they announce that an “excellent” knowledge base has been built.
In this scenario, there is eventually a realization that the incentives failed to work. The goal
had been the production of as much content as possible: the quantity of activity (creating
documents), rather than the value of the outcome (usable solutions that cause incidents to
be closed rapidly) had been the stated objective. It had been backed up by incentives such
as a prize and recognition.
Goals and incentives can be a good idea, but they should be used carefully in order to meet
organizational needs.
Recommendations
■ Measure activities, but do not set goals on activities. Instead set goals on outcomes.
This is a KCS recommendation that can be applied to many types of situations, and it
could have helped identify the appropriate types of metrics to target.
■ Incentives can often be effective. Ensure they are aligned to appropriate goals.
Knowledge documents can contain many types of content, but in most implementations
they are used to represent problem resolution information, or what KCS refers to as a
solution. From an ITIL perspective, a knowledge document is an excellent way to capture
information about a known error or workaround.
Knowledge documents contain a set of standard fields that are normally used to provide a
consistent structure to the content. Fields such as Title and Summary are primarily
intended to make the content easily distinguishable, and the scope readily understandable,
when viewed in a list of items or when scanned quickly. The Resolution field typically stores
the body of the solution, in the form of rich text, tables, graphical images, and so on. These
fields are complemented by a series of other KD attributes that define such properties as categorization,
ownership, permissions, modification date, and a range of other metadata that can help
with management and retrieval. CA Unicenter Service Desk Knowledge Tools is, therefore,
said to be based primarily on structured content.
Knowledge documents can be used to link to other forms of content, stored either within or
outside CA Unicenter Service Desk Knowledge Tools, including unstructured content. Also, a
KD can be designed so that a person viewing it can perform a range of actions, by including
something called action content when creating the document. These linkages, for example,
allow the editor of a KD to insert a link into the Resolution field which, when clicked on, will
create an incident in context. This incident can be generated using an incident template,
providing pre-defined attributes that can automatically classify the incident and
consequently determine the policy applied to it. The insertion of this type of “create ticket”
link can typically be accomplished quickly without the need for coding.
Unlike most knowledge documents, where the main body of content is contained in a
Resolution field, there is a special type of knowledge document that is used quite
differently, the knowledge tree document.
When looking at a knowledge tree document, the user is prompted by a series of questions,
each with a list of multiple choice answers. By responding to the questions, users are
directed to the correct answer or to the information they require. Behind the scenes,
interaction is governed by a decision tree, which is designed by the creator of the
Knowledge Tree document.
Aside from having a decision tree in place of a Resolution field, a knowledge tree document
shares the same descriptive metadata (such as Title, Summary, Modify Date, and so on) as
a regular knowledge document.
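The decision-tree interaction described above can be pictured with a small generic sketch. The nested-dictionary structure, the `walk` helper, and the example questions are all illustrative assumptions; they are not how knowledge tree documents are stored or authored in the product:

```python
# A generic decision-tree sketch in the spirit of a knowledge tree document:
# each node asks a question with multiple-choice answers; leaves hold answers.
TREE = {
    "question": "Can you reach any website?",
    "answers": {
        "no": {"answer": "Check the network cable and Wi-Fi connection."},
        "yes": {
            "question": "Does the error mention a password?",
            "answers": {
                "yes": {"answer": "Reset your password via the portal."},
                "no": {"answer": "Clear the browser cache and retry."},
            },
        },
    },
}

def walk(node, choices):
    """Follow the user's choices down the tree until an answer is reached."""
    for choice in choices:
        node = node["answers"][choice]
        if "answer" in node:
            return node["answer"]
    return node["question"]  # more questions remain to be answered

print(walk(TREE, ["yes", "no"]))  # Clear the browser cache and retry.
```

Note where the effort lies: the traversal logic is trivial, but choosing the right questions and mapping the branches to a real diagnostic process is the part that, as the text notes, takes considerable planning and maintenance.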
In most knowledge bases, Knowledge Documents are typically used much more frequently
than Knowledge Tree Documents. The KT application provides straightforward visual tools
to build a tree, reducing the technical effort required (no coding is normally needed to
create a tree). However, the process of deciding the right questions to ask in the tree and
identifying the paths the tree needs to follow to map correctly to a business process (or to
perform an effective diagnosis) can involve considerable planning and maintenance.
Therefore, although Knowledge Documents may be more appropriate in many cases,
Knowledge Tree Documents may be well-suited to the following situations:
■ High value to the organization (because of high levels of reuse or high impact)
A knowledge document will appear differently depending on which of its fields are exposed
in the document template that applies to it. Therefore, there is some flexibility in terms of
how the fields of a KD are used. Nevertheless, most organizations seem to use the main
fields in knowledge documents in ways that are reasonably close to the out-of-the-box
design. This section provides an explanation of how those fields are typically used.
Title
By default, the Title field is unlabeled and appears at the top of the knowledge document. It
is typically a short description, in a few words, of what is contained in the KD. Not only does
it appear at the top of the KD, providing confirmation to the user that the content they are
looking at is relevant, but it also appears in search results lists (and in various other lists
used to retrieve a KD, such as the employee interface's dynamic FAQ list) and is, therefore,
the primary means by which the user distinguishes this KD from other KDs before deciding
to open and view it in detail. The careful selection of words in the title is therefore
important in establishing an easily discernible link from a list to the content of the KD.
As with all content, the Title field should be expressed using the words that users would
use, in terms users are likely to be aware of when they search for content. These
considerations can apply in many different ways depending on the type of content, but one
common observation is that a title that expresses the cause of a problem (which may be
unknown to the user) may be less effective than a title that refers to the symptom of the
problem (which the user is more likely to be aware of). The Title field is searchable using
the text search functions of CA Unicenter Service Desk Knowledge Tools.
Summary
The Summary field usually contains a brief statement summarizing the knowledge
document, in a couple of sentences or less. The Summary field is searchable using the text
search functions of CA Unicenter Service Desk Knowledge Tools.
Problem
The Problem field is searchable using the text search functions of CA Unicenter Service
Desk Knowledge Tools. This field is used differently, depending on the needs of the
implementation. Two general approaches are common:
■ Hide the field and use it for keywords:
> Remove the Problem field from the document templates used to display KDs
> Use the hidden Problem field to store explicit keywords (to allow extra control
over the keywords contained in KDs)
■ Display the field as part of the content:
> Retain the Problem field in the document templates used to display KDs, as is the
case with the default built-in template
> Use the field for a statement of the “problem” in the form of a question (for
example, to support NLS Search), to elaborate briefly on the “cause” or
“symptoms” of the problem, or for another purpose
Resolution
The Resolution field usually represents the main body of content contained in the
knowledge document. It usually contains a detailed explanation in the form of a variety of
rich text, bulleted and numbered lists, tables, images, and hyperlinks. Also, this field can
contain action content that can launch various functions.
Sometimes other multimedia content is linked to from the Resolution field of the KD,
allowing CA Unicenter Service Desk Knowledge Tools to support multimedia. Supportable
formats and standards for hyperlinked content are determined by the user's browser and
desktop.
Content in the resolution field can be added to and edited by inserting HTML tags and text
into this field or by using the built-in HTML editor.
The resolution field is searchable using the search functions of CA Unicenter Service Desk
Knowledge Tools. By default, the field can contain 32,768 characters (32k) of text,
including any HTML code. The administrator can adjust the limit on this field to 256,000
characters (by accessing the Knowledge | Documents | Documents Settings | Maximum
Resolution Size setting contained in the Administrator tab). The capacity of the resolution
can, therefore, be expanded to accommodate more content. However, only the first 32,768
characters of non-HTML text will be indexed. Therefore, when searching the Resolution
field, the
search engine attempts to perform matches only against content within the first 32,768
characters of Resolution field text for each knowledge document (as displayed in the User
View of the KD).
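The indexing cutoff described above can be sketched as follows (an illustrative simplification; the product's actual indexer and HTML handling are internal to CA Unicenter Service Desk Knowledge Tools):

```javascript
// Illustrative sketch only: models the documented rule that only the first
// 32,768 characters of non-HTML Resolution text are available to search.
const MAX_INDEXED_CHARS = 32768;

function indexableResolutionText(resolutionHtml) {
  // Strip HTML tags, leaving only the text a search would match against
  const plainText = resolutionHtml.replace(/<[^>]*>/g, "");
  // Only the first 32,768 characters of the remaining text are indexed
  return plainText.slice(0, MAX_INDEXED_CHARS);
}
```

Content beyond the cutoff is still displayed to the reader of the KD; it is only excluded from search matching.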
Categorization
In CA Unicenter Service Desk Knowledge Tools, knowledge documents are placed into
categories.
CA Unicenter Service Desk Knowledge Tools uses a multilevel categorization structure. Under
the top category (called TOP) many sub-categories can be set up. These sub-categories can
in turn have many sub-categories, and so on. Therefore, multiple levels of category can be
created.
Each knowledge document is associated with one category, which is called its primary
category.
If required, a knowledge document can also be associated with categories other than its
primary category using a feature called Category Links. This enables a single version of a
KD to simultaneously reside in multiple parts of the category structure, without a need to
duplicate the KD. This could be useful, for example, when a KD is about the security
aspects of a supported application: it can be placed both in a category relating to the
specific application concerned, and simultaneously into another category concerning general
security issues (that is, not limited to the application concerned). Regardless of how the
user browses, they will be able to access the same piece of content.
Note: Knowledge documents cannot be published if they are linked to the TOP category.
As well as aiding easy retrieval, categories are also used to help manage common types of
content consistently. A category can have a document template assigned to it, promoting a
consistent look and feel. A category can have an approval process associated with it,
helping to ensure that when a KD is created in a category, the approval process appropriate
to that type of content is utilized.
Some organizations assign owners to different areas of the knowledge base. They ensure
that content falling within their areas of responsibility is created and maintained
appropriately. Setting up category owners for the various categories in CA Unicenter
Service Desk Knowledge Tools causes new documents created in a particular category to be
assigned to that category's owner. (This assumes either that the users who create the
documents are not analysts who want to own the documents themselves or, if they are
analysts, that they create each document with the Assign to Category Owner check box on
the Submit Knowledge form checked.) Category ownership also applies to the Knowledge Report Card
(KRC), whereby viewers of the KRC can elect to see knowledge document metrics for the
various categories they are responsible for.
Note: If a KD is linked to multiple categories, it is the KD's primary category (rather than
other linked categories) that is able to determine the KD's permissions, its assignee, its
document template, and its approval process.
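The effect of the note above can be sketched as a simple data model (the field and category names here are hypothetical illustrations, not the actual KT schema):

```javascript
// Illustrative model of category links: a KD may be linked to several
// categories, but only the primary category supplies its permissions,
// assignee, document template, and approval process.
function effectiveSettings(kd) {
  const primary = kd.categories.find(c => c.id === kd.primaryCategoryId);
  return {
    permissions: primary.permissions,
    assignee: primary.owner,            // new KDs can be assigned to the category owner
    documentTemplate: primary.template,
    approvalProcess: primary.approvalProcess,
  };
}
```

For example, a security KD linked to both an application category (primary) and a general security category takes all of its settings from the application category.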
Knowledge categories are created and edited using the Knowledge Categories screen. This
interface can be launched from Knowledge Tab | View | Knowledge Categories, or
Administration Tab | Knowledge | Knowledge Categories. Normally, all analysts have access
to the Knowledge Categories interface.
Some organizations use a 4-tier hierarchy. The tiers may represent System-Type-
Element-Module (STEM) or Subject-Component-Item-Module (SCIM). This approach is
commonly used when the organization continues to use a support categorization
hierarchy that is mandated by well-established business rules or the philosophy of a
legacy application.
The categorization system depth, and the scope of what a category is used to
encompass, varies in different parts of the categorization structure.
■ Parallel Categories for CA Unicenter Service Desk and CA Unicenter Service Desk
Knowledge Tools.
> Category Browse. The Category Browse button in an incident can navigate the
user from a categorized incident directly to a corresponding knowledge
category with relevant content.
■ No Categorization Structure.
Unlike other best practices, Knowledge-Centered Support (KCSSM) does not mandate
the use of a categorization structure to define knowledge. Rather, the solution in KCS is
defined by a series of components framed in terms used by the user, which include a
description of the problem; the environment; and, when developed further, some
resolution detail.
An entire discipline called information architecture has built up over the past few years
around creating effective structures for managing and providing access to information. It is
recommended that the following advice be considered when designing a category structure:
■ Category structures can affect multiple groups of people. Think broadly, when
designing them.
■ Create a cross-departmental team that includes stakeholders from the various groups
that the categorization scheme will need to encompass. What seems ideal for one
department may not work well for another. Try to obtain some consensus.
■ Allocate time to perform several iterations of the category design. It is unusual to get
the structure right the first time.
■ Category design and naming for end-user navigation should be especially clear and
understandable.
The IT Environment
Knowledge documents contain a CI field. This is visible in the edit view of the KD. The CI
can be used for several purposes.
The CI value is not used by CA Unicenter Service Desk Knowledge Tools alone. The same
definition of a CI used by CA Unicenter Service Desk Knowledge Tools can also be used by
CA Unicenter Service Desk, CA Unicenter Asset Portfolio Management, and the CA CMDB
solutions, when these products use a common database.
To reveal the CI in the user view of KDs, modify the document template that those KDs
use. Follow these steps:
1. Edit the Document Template to which you want to add the CI reference
2. Select the place in the template where you want to add the CI reference
3. Select the Template Placeholder called TAG_SD_ASSET to place a reference into the
Document Template
■ Analysts can create knowledge from an Incident (or from a Problem, Request, or
Issue). Select Solution from the Activities menu in the Incident. The Accept and
Submit Knowledge button launches the Create New Document interface, in
context, capturing various attributes of the underlying Incident.
■ The Knowledge Categories interface launched from the Administrator Tab can be
used to create knowledge content. Right-click on a category to create a knowledge
document.
■ Employees and Customers can, where permitted, submit content via self service. See
Chapter 5 of this book for an illustration of the Submit Knowledge link in the self
service interface.
■ Knowledge documents can be imported using the knowledge import tool, pdm_kit.exe.
Information on this command line utility can be found in Appendix D of the CA
Unicenter Service Desk Administrator Guide.
■ Knowledge can be created using web services. Although it is not common to use web
services to capture knowledge in this way, web service methods do allow knowledge to
be created and modified. For more information on web services, refer to Chapter 15,
SOA and Web Services, in this book, and to the CA Unicenter Service Desk Web
Services User Guide.
A Knowledge Document has various attributes that indicate who is responsible for its
creation and maintenance. They are as follows:
KD Attribute Description
Initiator The initiator is the contact that created the KD. This is a read-only field. It
does not change during the life of the KD.
Assignee Initially, when the KD is created by analysts, they can leave it assigned to
themselves, or they can assign it to the category owner, if one exists. If
the KD is created by an employee or customer, then the Assignee is set as
the category owner (or the assignee is left unassigned, if no category
owner exists).
Author The rules for creating the initial Author are the same as for the initial
Assignee. Thereafter, the Author will not change, unless it is manually
updated.
The author field is used in the KRC, since My Documents refers to KDs that
are authored by the logged in user.
Owner The rules for creating the initial Owner are the same as for the initial
Assignee.
Thereafter, until being published, the Owner will remain the same, unless
it is manually updated.
Subject This field can be used to store the name of a contact with expertise in the
Expert document's subject matter.
The rules for creating the initial Subject Expert are the same as for the
initial Assignee.
Thereafter, the Subject Expert will remain the same, unless it is manually
updated.
A contact in CA Unicenter Service Desk may be made a member of one or more groups.
Permissions determine which groups can read a knowledge document and which groups
can write (in other words, edit) the knowledge document. Permissions for a specific KD
can be set using one of the following two approaches: document permissions, set on the
individual KD, or category permissions, inherited from the KD's category.
The above permissions model may be subject to further constraints: the KD status
(whereby normally only published knowledge documents are visible from the search tools)
and other factors such as approval process settings, the KD owner, the KD assignee, and
access type settings.
Organizations that wish to define permissions in a very granular way may opt to use
document permissions for all or part of their knowledge base. Organizations that wish to
aggregate permissions management and simplify administration may prefer to use category
permissions throughout.
To speed up the process of editing the live knowledge base (allowing authorized users to
edit a published knowledge document without needing to “unpublish” it first), consider
using the following configuration setting in Administrator: Knowledge | Approval Process
Manager | Approval Process Settings | Permissions for Document Edit after Publish
The permissions model described here is implemented using CA Unicenter Service Desk
data partitions functionality.
In some cases, an organization may wish to extend the knowledge permissions model
beyond what is described above. For example, an organization may wish to add a field to
the skeletons table (the table used to store knowledge documents) and then want to
reference this field in a data partition constraint, in order to further limit who can access
knowledge documents.
Note: In this example the name of the new field should be added to the
@EBR_FILTER_COLUMNS argument in the NX.ENV file, so that the new field is cached for
KT searching.
Note: This change should not be made by editing the NX.ENV file directly. To learn more
about how to make changes to NX.ENV, please refer to the Advanced Tuning chapter of this
book.
The rights to perform different knowledge activities, such as create a KD, bypass the
approval process, delete categories, and so on, are set by configuring the access type.
CA Unicenter Service Desk Knowledge Tools offers many different approaches for retrieving
content. These different paradigms are available side-by-side in the same system, and are
often interwoven. The paradigm selected will depend on the context: what the user is
doing, what they want to know, and, to some extent, what their personal preferences are.
They include the following:
■ Keyword Search
■ Using bookmarks
■ Navigating trees
Access to knowledge must be available where it is needed such as at the service desk, in
incident management, and in problem management. The user should not have to open a
new application. Owing to the strong out-of-the-box integration between CA Unicenter
Service Desk and CA Unicenter Service Desk Knowledge Tools, knowledge can be accessed
in a wide variety of ways: from requests, incidents, problems, the Profile Browser, and so
on. Knowledge is available not only for analysts, but for end users and others. Chapter 5 of
this book, explaining self service capabilities, provides details on retrieving knowledge
through self service interfaces. The following list provides some of the out-of-the-box
examples of those available to Analysts or those working within the IT organization:
Type of Retrieval / Example
Drop-Down Search Box: Search knowledge from the Search Box (upper right area of the
main analyst interface). Retrieval can be based on a text search or the Document ID.
Analyst Search from Incident: Search from an incident (or request or problem). The
results of the search and the selected KD are displayed within the incident itself.
Analyst Category Browse from Incident: Category Browse from the Knowledge Tab in a
Request, Incident, or Problem.
Analyst Search from Knowledge Tab: Analysts can search and filter in the main
Knowledge Tab.
Analyst Browse Category from Knowledge Tab: Browse into the category structure from
the main Knowledge Tab.
Knowledge tree documents are retrieved in the same results list, browse list, bookmark list,
and announcement link as regular knowledge documents.
As can be seen in the above screen shot, a knowledge tree document is found in the results
list, intermingled with other KDs. The difference can be seen by comparing the different
icons used in the list.
URL Launch
Knowledge can be launched from third party applications using a URL. Several URL
launch syntaxes are supported.
For information on the correct syntax, search for “Opening Knowledge Documents from a
URL” in the online help system of the CA Unicenter Service Desk Knowledge Tools
application.
Emailing a Document
Web Services
Knowledge can be accessed through web services. Various methods are available,
permitting the search, retrieval, creation and updating of documents, as well as a range of
other operations. For more information, refer to Chapter 15, Service-Oriented Architecture
and Web Services, of this book and the CA Unicenter Service Desk Web Services User
Guide.
As explained above, CA Unicenter Service Desk Knowledge Tools provides a rich variety of
paradigms and interfaces for the retrieval of knowledge, so that the user can use the most
appropriate one given their context and needs. CA believes that there is no one single
approach that fits all needs of all people: keyword search, browsing, natural language
search, answering questions through trees, links to knowledge. Some approaches will be
more effective than others, given the specific situation that applies. Therefore many
different choices are provided by the system.
The approach of CA Unicenter Service Desk Knowledge Tools is not, however, one of using
many different knowledge bases and diverse management systems for those differing
knowledge paradigms. On the contrary, a single knowledge base with a common
management system is used. This means that the administration and rules governing
things like permissions, categorization, ownership, and knowledge lifecycle management for
the various paradigms are unified into one common approach: CA Unicenter Service Desk
Knowledge Tools.
CA Unicenter Service Desk Knowledge Tools offers an option of two different search types:
Keyword Search and Natural Language Search (NLS).
Keyword Search
If the Keyword Search option is used, searches are based primarily on keywords supplied
by the user. Matching and ranking are based on the extent to which these keywords are
contained in the retrieved knowledge document. However, user-supplied keywords do not
define the search entirely: many additional features are used, including noise words,
synonyms, and special terms.
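The keyword preprocessing described above can be sketched as follows (the noise word and synonym lists are illustrative examples, not the lists shipped with the product):

```javascript
// Illustrative keyword preprocessing: drop noise words and expand synonyms.
// The word lists here are made-up examples, not the product's shipped lists.
const NOISE_WORDS = new Set(["the", "a", "for", "where", "can", "i", "find"]);
const SYNONYMS = { printer: ["inkjet"], error: ["fault", "failure"] };

function prepareKeywords(query) {
  // Lowercase, split on whitespace, and discard noise words
  const words = query.toLowerCase().split(/\s+/).filter(w => w && !NOISE_WORDS.has(w));
  // Expand each remaining keyword with any configured synonyms
  return words.flatMap(w => [w, ...(SYNONYMS[w] || [])]);
}
```

In this sketch, a noisy query collapses to its meaningful keywords plus synonym expansions, which is the set the search engine would then match against indexed fields.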
Natural Language Search
Searches that employ the NLS search option produce results based on the similarity
between the pattern of words used in the search string and the pattern of words contained in
the Problem field of the retrieved knowledge document. This approach is designed to
generate matches between a search enquiry and a knowledge document that contains a
problem statement with the same or similar meaning. Therefore, statistical properties
relating to the inclusion and ordering of terms are incorporated into the relevance
calculation. (Parsing features common to Keyword Search, such as noise words, synonyms,
special terms, and so on are also used). Since, for NLS, the search string submitted by the
user is typically formulated as a question, the corresponding Problem field in the knowledge
document is also designed to accommodate a problem statement that will be entered by
the document author. This problem statement should anticipate the commonly asked
question that is answered by the knowledge document.
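The idea of matching word patterns can be illustrated with a toy overlap measure (this is not the actual NLS relevance algorithm, which also incorporates term ordering and statistical properties):

```javascript
// Toy similarity between a user's question and a KD's Problem field:
// the fraction of query words that also appear in the problem statement.
// The real NLS relevance calculation is statistical and proprietary.
function overlapScore(query, problemStatement) {
  const words = s => s.toLowerCase().match(/[a-z0-9]+/g) || [];
  const problemWords = new Set(words(problemStatement));
  const queryWords = words(query);
  const hits = queryWords.filter(w => problemWords.has(w)).length;
  return queryWords.length ? hits / queryWords.length : 0;
}
```

A Problem field written as an anticipated question scores high against similarly phrased user questions, which is why authors are advised to phrase it that way.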
Most organizations use the Keyword Search option as the system default. Typical users of
CA Unicenter Service Desk Knowledge Tools are nowadays experienced users of popular
web-based search engines that generally use keyword-based searching (and a variety of
different search ranking approaches) to generate results. The Keyword Search option for CA
Unicenter Service Desk Knowledge Tools is based on users typically supplying just keywords
(or a combination of keywords and noise words) and does not expect a query in the form of
a question. For example, users of Keyword Search typically search for “Inkjet 5057A driver”
or “5057A driver” rather than “Where can I find a driver for the inkjet 5057A?”
Typical type of query performed: with Keyword Search, “5057A driver”; with NLS, “Where
can I find a driver for the inkjet 5057A?”
Although Keyword Search can search for text contained in the Title, Summary, Problem,
and Resolution fields of a knowledge document, matches against these fields are not
treated equally. The system applies a pre-determined weighting to each of the four fields.
By default, the field weights are defined in the NX.ENV file as follows:
# Text field weights for calculating of document relevance for Keywords Search
@EBR_TITLE_WEIGHT=16
@EBR_SUMMARY_WEIGHT=6
@EBR_PROBLEM_WEIGHT=4
@EBR_RESOLUTION_WEIGHT=1
This default setting means that, other things being equal, a matching word found in the
Title field (weight=16) will contribute more to the relevance ranking than the same
matching word found in the Resolution field (weight=1). These field weights can be adjusted if
needed. For example, if an organization uses the Problem field consistently to store
carefully chosen keywords, then it may be preferable to increase the relative field weight
for Problem.
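Other things being equal, the weighting combines roughly as sketched below (a simplification; the product's actual relevance calculation uses additional signals):

```javascript
// Default field weights from the NX.ENV EBR_*_WEIGHT settings
const FIELD_WEIGHTS = { title: 16, summary: 6, problem: 4, resolution: 1 };

// Simplified relevance: sum over fields of (keyword hits in field) x weight.
// Illustrative only; the product's real ranking is more sophisticated.
function relevance(keywordHitsPerField) {
  return Object.entries(keywordHitsPerField)
    .reduce((score, [field, hits]) => score + hits * (FIELD_WEIGHTS[field] || 0), 0);
}
```

Under this scheme, a single Title match outweighs up to fifteen Resolution matches, which is why carefully chosen title wording has such a large effect on ranking.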
If the schema of KT is adapted to add a new knowledge document field (to the skeletons
table), it may be desirable to add that field as an optional filter in the Advanced Search
interface. If this new field is added as a filter, then it is normally recommended that the
field is added to the @EBR_FILTER_COLUMNS argument in the NX.ENV file, so that the field
is cached during searches.
Note: This change should not be made by editing the NX.ENV file directly. To learn more
about how to make changes to NX.ENV, please refer to the Advanced Tuning chapter of this
book.
CA Unicenter Service Desk Knowledge Tools can be launched in context from the
CA Unicenter Network and Systems Management Event Console. This can help an Event
Management operator use CA Unicenter Service Desk Knowledge Tools to find out the
meaning of an alert message that may be documented in the knowledge base, or to
determine appropriate actions in response to a network condition.
■ CA SupportBridge Self Service Automation enables end users to easily help themselves
by executing common tasks simply by clicking a link.
These solutions integrate tightly with many service desk applications, including
CA Unicenter Service Desk, BMC/Remedy, HP/Peregrine, and others. Additionally, support
automation can empower knowledge management applications like CA Unicenter Service
Desk Knowledge Tools for the delivery of automated content, both in the context of
employee self service and technician-answer lookup.
The business benefits of a support automation solution include the following:
■ Call deflection - By automating the steps to fixing a problem with a PC, support
automation allows more issues to be deflected from the first level technicians. End
users can simply and effectively fix their own problems by running scripted wizards or
links that fix the problem for them, without having to know or follow technical
instructions.
■ Call prevention - Support automation solutions can monitor the desktop in ways that
watch specific actions on the PC that can potentially cause problems, and proactively
address issues. In this way, the computer “fixes” itself before small issues become big
issues.
Tools
CA SupportBridge Live Automation provides many tools for delivering remote support, and
can support this remote access through restrictive firewalls. These tools include the
following:
■ Live chat
■ Automated tasks
■ Desktop sharing
While technicians have gotten quite adept at providing support through the use of tools like
remote control, very little has been done to automate solutions. It is in this area that
support automation adds great value to a support operation. Virtually all of the steps a
technician takes manually to solve a problem can be automated in a script. Scripting
technologies like VBScript or JavaScript are very powerful in the actions that can be
executed on a remote PC, such as checking and resetting system services, gathering
diagnostic information, and applying configuration changes.
Scenario
An end user calls the service desk, telling the technician that the computer is behaving
oddly. By using CA SupportBridge Live Automation, the technician can remotely execute an
automated task to check specific system services and their default settings. A process that
may take 4 or 5 minutes manually, or even longer when dealing with a non-technical end
user, is now reduced to a 2-minute call by automating the delivery of the support action.
The technician just selects the automated task that will be run on the customer's PC to
retrieve all the system services and the current state of each service, and uses the
resulting screen to make the changes remotely.
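The comparison logic at the heart of such a task can be sketched in a platform-neutral way (a real automated task would query the operating system, for example via WMI on Windows; the service names and states below are hypothetical):

```javascript
// Compare retrieved service states against their expected defaults and
// report the services a technician would need to correct.
function servicesNeedingAttention(currentServices, defaults) {
  return currentServices
    .filter(svc => defaults[svc.name] && svc.state !== defaults[svc.name])
    .map(svc => ({ name: svc.name, state: svc.state, expected: defaults[svc.name] }));
}
```

The automated task gathers the data and performs this comparison in seconds, which is where the reduction from a 4-5 minute manual check to a 2-minute call comes from.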
CA SupportBridge Self Service Automation allows end users to support themselves without
the need to carry out complicated technical steps. Building on the foundation of scripting
that was described above, an end user can simply click a link to launch an automated
script. Such a link can be presented in various places, including:
■ A knowledge document
■ A chat window
End users are often challenged when it comes to helping themselves. By making the
solution to a problem as easy as clicking a link, end users will use CA SupportBridge Self
Service Automation instead of picking up the phone to log an incident with the service desk.
This has a tremendous impact on the volume of incidents that must be handled by the
service desk.
Script Executor
The script executor is the main .EXE component of the CA SupportBridge Self Service
Automation solution. Its purpose is simple: to download the required automated task,
execute it, and make the results available to the framework. It has no User Interface (UI).
When you reach a web page to execute an automated task, you have passed through a
workflow that may include login or other information gathering, and you have invoked a
WebLaunch process to download and run the Script Executor. Arguments that specify which
automated task to execute and which session to log results to are passed to the Script
Executor when it is invoked.
One of the attributes of an automated task is whether it provides its own user interface. It
is possible for automated tasks to provide their own user interface by using components
such as the Web Browser control in Windows and writing content to them to provide a user
interface. This would be desirable in cases where it is necessary to provide complex user
interface workflow such as wizards, or where tight control is required over branding and
over look and feel.
Automated tasks that do not provide their own user interface are presented in a container
user interface that is part of the CA SupportBridge Self Service Automation framework. The
self service framework provides a simple user interface that lets you escalate to CA
SupportBridge Live Automation or to save the Log. In the main panel of this user interface,
you are shown a progress animation while the script executes. When the script has
completed, the script results are shown. The script results are in HTML form, constructed by
the server by taking the return value of the script and applying the relevant XSL transform,
per the automated task configuration defined in the administration interface.
For scripts that provide their own UI, the framework container is not shown. Rather, the
script is simply invoked by the framework and can interact with the user or not, as it likes,
with no further interaction from the framework itself. Because the options to save the live
log and to escalate to live are not shown explicitly by the framework in this mode, the
functionality for these two functions is made available to the scripts through the Functions
library, so that a script author may present these options within the UI.
Form Filling
To a third party developer, this simply requires including one of the Self Service pages as a
hidden frame in the form page, making a JavaScript call into the self service frame and
implementing a JavaScript callback function in the page to which the self service code will
return the automated task results.
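This pattern can be sketched as follows; the function name runAutomatedTask and the shape of the results object are hypothetical placeholders, not the actual CA SupportBridge client API:

```javascript
// Hypothetical sketch of embedding Self Service in a third-party form page.
// The page hosts a hidden self-service frame, asks it to run an automated
// task, and receives the results in a callback that fills the form fields.
function requestTaskResults(selfServiceFrame, taskId, formFields) {
  // Callback the self-service code invokes with the automated task results
  function onTaskResults(results) {
    for (const [field, value] of Object.entries(results)) {
      if (field in formFields) formFields[field] = value; // fill matching fields
    }
  }
  // Illustrative call into the hidden frame; not a real API name
  selfServiceFrame.runAutomatedTask(taskId, onTaskResults);
  return formFields;
}
```

The form page never needs to know how the task gathered its data; it only implements the callback.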
This integration can also be exposed as a simple hyperlink on a web page that executes a
specific script from the CA SupportBridge server.
Scenario
Assume a user wants to set up a printer. By navigating to a support portal, and clicking a
link, the user can be walked through a simple process without having to do anything
technical. The user selects the printer, and the automated task takes care of everything
else.
Content creation is the key to leveraging the power of support automation. An automated
task can be built and delivered through multiple channels. Technicians can use automated
tasks in a remote, one-to-one, break-fix scenario through CA SupportBridge Live
Automation. Additionally, the same automated task could be delivered to a customer via a
CA SupportBridge Self Service Automation Solution. To this end, CA SupportBridge now
ships with an integrated development environment (IDE) to make content creation much
easier. The IDE allows for libraries to be created and re-used as often as necessary. The
IDE also allows for scripts to be tested outside of the CA SupportBridge application.
An automated task consists of a series of steps of different types; some steps execute on
the customer machine, while others gather input from or display results to the technician.
Thus, distributed workflows can be created for automated tasks. For example, telemetry
data is gathered from the customer machine and displayed to the technician. Then the
technician can be asked to provide input that controls additional actions to be taken on the
customer machine. The following diagram provides an example of this:
Automated tasks are authored, debugged, and tested in the CA SupportBridge Automated
Task Editor. They can be deployed directly from the Automated Task Editor to the CA
SupportBridge server for use in CA SupportBridge Live Automation or CA SupportBridge
Self Service Automation. Automated tasks can also be saved to a file format from the
Automated Task Editor for easy distribution between systems.
The following diagram shows what happens when an automated task is executed:
To help you achieve the full value of support automation, CA provides services to assist you
in the implementation of this process. They include the following:
■ Assessment services where CA Technical Services professionals guide you through the
process of identifying support automation opportunities
CA also provides pre-built content in the form of ready-to-use automated tasks for common
problems as well as components (automated task step templates and libraries) that can
form the building blocks of customer-authored tasks. This content is offered under a service
offering from CA Technology Services. Detailed information may be found in the Service
Support section of the Service Solutions and Education pages of www.ca.com
(http://www.ca.com).
The first step in the process is to identify service desk processes that are good candidates
for automation. These could be solutions to specific common problems or simply data
gathering and diagnostic processes.
One key driver for automating a task should be the cost savings that would result from
automating it versus not automating it. Thus, a short task may be worth automating even if
it only saves one minute per call if that task is repeated hundreds of times a day. Similarly
a rarely performed task might still be worth automating if it takes a long time to execute
manually and if the time needed can be substantially reduced through automation.
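This trade-off can be made concrete with a simple calculation (the figures are illustrative):

```javascript
// Daily minutes saved = minutes saved per execution x executions per day.
// Both a frequent quick task and a rare slow task can justify automation.
function dailyMinutesSaved(minutesSavedPerRun, runsPerDay) {
  return minutesSavedPerRun * runsPerDay;
}

// Illustrative figures: saving 1 minute on 300 calls/day yields more total
// time than saving 45 minutes on a task performed twice a day.
const frequentShortTask = dailyMinutesSaved(1, 300);
const rareLongTask = dailyMinutesSaved(45, 2);
```

Ranking candidate tasks by this kind of estimate is one simple way to build the automation short list.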
All of the core service desk metrics of Average Handle Time, First Call Resolution, and
Customer Satisfaction can be positively influenced by well managed automation. For
example, it may be worth automating a task purely because the automation removes the
potential for human error, even if the automated solution is not any quicker than the
manual equivalent.
Tasks and processes that are good candidates for automation meet some or all of the
following criteria: they occur frequently, they are costly or time consuming to perform
manually, and they can be reliably scripted.
At this first step of the process, the goal is to prepare a short list (5-10) of tasks that meet
these criteria.
The majority of diagnostic and repair processes can be accomplished through scripted
solutions and are easy to automate.
However, some tasks are simply not easy to automate. For example, some processes
involve the use of Graphical User Interface (GUI) tools that require user input. If no
scriptable (for example, command line) equivalent functionality exists, this is probably not a
good candidate for automation.
At this stage of the process, the goal is to revisit the short list of potential automated tasks
created in the previous stage, and remove any tasks that are simply not automatable or
would be too difficult or time consuming to automate.
Required Skills: Service Desk Analyst, Automated Task Author (training available)
For each of the tasks on the short list, the desired user experience should be designed. For
example, if the task is to install a network printer, the previous steps would have
determined that the task is worth implementing, and that installing a network printer is
something that can be done from a script.
Then you must specify how this automated task should execute, which involves answering
the following questions:
■ Does it need to present any user interface to the customer, either to display
information or to gather information?
The automated solution can be developed and tested using the Automated Task Editor, as
described elsewhere in this section.
CA provides training to equip you with the skills to do this yourself. In addition, CA offers
services to author automated tasks if you would prefer that option.
Deploying an automated task involves deciding which technician roles should have access
to the task (in the case of CA SupportBridge Live Automation) and where to expose links
to the automated task (in the case of CA SupportBridge Self Service Automation).
Also, for some automated tasks, it may be necessary to configure credentials as part of the
deployment process to enable the task to run with the requisite privileges.
Deployment also involves technician training in the use of the automated solution.
Once deployed, it is important to monitor the effectiveness of the automated task over time
to ensure that it:
■ Works as designed
If any of these criteria are not fully met, the automated task should be refined by
returning to the design phase of the process and iterating until the implementation delivers
the desired benefits.
Architecture
The CA SupportBridge server environment follows typical web application standards. There
are three standard components of the server environment: a web server, an application
server, and a database server. Each server can be scaled appropriately to handle different
loads.
Web Server
The web server can be any of a number of supported web servers, including Internet
Information Server (IIS) and Apache. The web server component performs the following
two roles:
■ Serves static content — The contents of the “static” directory in the CA SupportBridge
installation should be copied to a directory where it can be served directly by the web
server.
■ Forwards requests for dynamic content made over HTTP to the Application Server—
Some JSP pages, such as the automated task results viewer, the log viewer, and all of
the administration pages, are always served this way. The main communications
between the customer/technician .EXEs and the server usually occur directly to the
application server over a socket connection to the CA SupportBridge socket server. In
cases where the customer/technician .EXEs are behind more restrictive firewalls, HTTP
connections are used. In those cases, the web server strips HTTP data from the request
and passes the remaining body to the application server.
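The HTTP-tunnelling behavior just described can be sketched in a few lines. The function below is illustrative only (the name and the assumption that the payload follows the first blank line are mine, not CA's implementation):

```python
def unwrap_tunnelled_request(raw: str) -> str:
    """Strip the HTTP header block from a tunnelled client request.

    When the customer/technician executable is behind a restrictive
    firewall, its socket message arrives wrapped in an HTTP POST; the
    web tier discards everything up to the first blank line and
    forwards only the body to the application server.
    """
    _header, _sep, body = raw.partition("\r\n\r\n")
    return body
```

Clients that can open a direct socket connection skip this step entirely, which is where the overhead saving discussed later comes from.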
Application Server
The application server consists of any one of a number of supported J2EE Application
Servers such as Tomcat or WebLogic. CA SupportBridge is distributed as a Web ARchive
(WAR) file and is deployed within the chosen J2EE application server. The J2EE application
server is responsible for receiving HTTP requests for dynamic content forwarded from the
web server and making these requests available to the Java Servlets and Java Server Pages
(JSP) that comprise the CA SupportBridge presentation layer.
The servlets and JSPs of the presentation layer in turn utilize functionality in the form of
Java Beans that reside at the business logic layer within CA SupportBridge. CA
SupportBridge does not utilize the Enterprise Java Beans (EJB) model and therefore does
not require a J2EE application server with an EJB container.
The Java Beans at the business logic layer communicate with the database server through
JDBC with connection pooling. The application establishes these connections and manages
its own connection pool rather than using the database connection handling available in
some J2EE application servers; this design keeps it as portable between application
servers as possible.
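The application-managed pooling described here follows a common pattern. The sketch below uses invented names (it is not CA's code) and shows why such a pool is portable: it depends only on a driver-level connect call, not on any container facility:

```python
import queue


class ConnectionPool:
    """Minimal application-managed connection pool.

    Connections are created up front and borrowed/returned through a
    thread-safe queue, independent of any application server's own
    connection handling.
    """

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # eagerly open `size` connections

    def acquire(self):
        # Blocks until a connection is free, giving natural backpressure.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)
```

A caller simply wraps its work in `acquire()`/`release()`; swapping application servers requires no pooling changes because the pool never touches container APIs.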
Communications that go directly to the CA SupportBridge socket server therefore avoid the
significant server overhead associated with making requests through HTTP. This saving is
significant with respect to server environment sizing and hardware requirements.
Database Server
All configuration and historical data is stored in a Microsoft SQL Server or Oracle database. In large
deployments, with perhaps 100 or more concurrent support technicians handling in excess
of 10,000 incidents per month, it is usual to replicate historical data to a second database
server. The application supports the configuration of a “reporting” database, which is solely
responsible for all reporting requirements. This separate server removes the need to place
large query loads on the live database.
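Separating reporting from transactional load comes down to routing each query to the right server. A hedged sketch of how such routing might work (class and method names are invented for illustration, not part of the product):

```python
class DatabaseRouter:
    """Route reporting queries to a replicated 'reporting' database.

    Large report loads then never touch the live transactional server.
    When no replica is configured, everything falls back to the live
    database.
    """

    def __init__(self, live, reporting=None):
        self.live = live
        self.reporting = reporting or live  # fall back when no replica

    def connection_for(self, purpose):
        return self.reporting if purpose == "report" else self.live
```

The key design point is that callers declare intent ("report" vs. transactional work) and the router, not the caller, decides which server absorbs the load.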
Supported Technologies
OS Microsoft Windows XP
Microsoft Windows Vista
Microsoft Windows 2000
Microsoft Windows 2003
Red Hat Linux 3.0 x86
As supported platforms may change over time, check with CA for an up-to-date list of
supported platforms. Customers may log on to SupportConnect, select the Documentation
option, and then select CA SupportBridge Live Automation. The list of supported platforms
is in the CA SupportBridge Technical Overview PDF.
Sample Deployment
The following diagram summarizes the application architecture, breaking the application
into three layers: the business layer, the application layer, and the database layer.
One of the primary disciplines for properly addressing this issue of chaos is service level
management (SLM). SLM takes a broad approach to controlling the level of structure (and
therefore chaos) in an organization by focusing on three variables: cost, quality, and
value. Generally speaking, quality drives cost: the more quality desired, the higher the
cost. Quality also drives value: higher value is generally perceived when the quality of
the service is higher.
However, there is a break-even point: beyond it, quality costs more than the value it
creates, so a balance is required. This balance is always a work in progress. How is it
maintained over time? Through constant identification, resolution, improvement, and
learning from issues. This is where SLM and incident and problem management are
directly related.
One of the key activities of SLM is that of creating different forms of service agreements.
Service agreements can be divided into two types:
As service levels are measurable, by definition, the act of aligning customers' expectations
with the provider's capability comes down to aligning the metrics each is accountable for.
Making sure the risks of execution are properly assigned to customer and service provider
through these metrics (identified in both SLAs and OLAs) is the ultimate test of SLM as a
discipline.
As a mechanism for supporting the business, the SLM system you put in place also must
focus on realizing specific benefits: business, financial, employee, innovation, and internal.
These benefits are best focused around the three areas where ROI is typically measured:
customer satisfaction, productivity, and capacity planning.
SLM leverages SLAs and OLAs, working together to align both the customer perspective and
the service provider perspective. Tying these together requires a form of shared strategy.
To aid in this discussion we separate service-focused SLAs from support SLAs.
Understanding the differences between them becomes easier if you see the role of SLM
from a holistic perspective.
Each of these types of “contract” has its place within service level management, and within
service management overall. Often, however, the tools used to implement each contract
differ, as do the ITIL processes that support, or are supported by, those tools. Together,
however, SLM, in conjunction with incident and problem management, must support the
following goals:
■ Availability of services
Leveraging SLM when building an incident and problem management process with CA
Unicenter Service Desk and related technology must be staged appropriately. Technology
by itself, unsupported by the appropriate process capabilities and resource skills, will have
minimal effect on achieving the expected goals.
This section helps you implement service types, service entitlement, and service contracts.
It contains technical examples of how to create a service contract and how to create service
types when out-of-the-box content is not enough.
Additional information can be found in the service level agreements section in Chapter 4,
“Policy Implementation,” of the CA Unicenter Service Desk Administrator Guide.
Service Types
CA Unicenter Service Desk uses service types to assist in meeting defined service level
agreements (SLAs). Service types automate conditional checks on an incident, problem,
or change request, and trigger the appropriate notifications or actions when those
conditions are true.
Service types are composed of one or more service type events. Service type events
reference a condition macro, such as “req.status=open,” a delay time, and an action to
perform when true or false. Actions point to macros that define what to do when the
condition proves to be true. Many condition macros ship with CA Unicenter Service Desk
out-of-the-box, and when necessary site-defined condition macros can be created from the
Administration tab under the “Events and Macros” node.
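The event mechanics just described (a condition macro, a delay, and true/false actions) can be sketched as follows. All names here are illustrative, not actual CA Unicenter Service Desk APIs:

```python
def run_event(ticket, condition, delay_elapsed, on_true, on_false=None):
    """Sketch of a service type event.

    Once the configured delay has elapsed, evaluate the condition macro
    against the ticket and fire the matching action macro, if any.
    Returns the action that was fired (or None).
    """
    if not delay_elapsed:
        return None  # the event is not due yet
    action = on_true if condition(ticket) else on_false
    if action:
        action(ticket)
    return action
```

For example, a condition macro equivalent to “req.status=open” would be `lambda t: t["status"] == "open"`, and the true-side action might escalate the ticket's priority.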
1. Map out the process flow for the service type, including events, based on the SLA.
5. Create the service type events using the out-of-the-box events or the ones created in
step three.
Service type events are automatically assigned to an incident or problem by populating one
of the following objects on the incident or problem detail screens:
■ Priority
■ Category
■ Configuration item
Note: When the affected end user is populated on the incident or problem, the service type
will be assigned based on the service type of the affected end user or of the affected end
user's organization, whichever has the higher ranking.
All service type events that are associated with a priority, category, affected end user, or
configuration item will be applied to the incident, problem, or change request. The service
types applied are listed on the Service Type tab under Service Types in Effect, along with
the attached service type events.
Ranking Considerations
Only one service type is assigned to an incident or problem when the
“classic_sla_processing” option is installed in Options Manager. With that option installed,
ranking decides which object's service type is assigned to the incident, problem, or change
request: you can assign each service type a ranking so that, when multiple objects have a
service type linked to them, the one with the highest ranking is applied and its associated
conditions and macros are initiated.
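The ranking rule can be illustrated with a short sketch; the data structures are hypothetical, not the product's schema:

```python
def select_service_type(candidates):
    """Pick the single service type to apply under classic SLA processing.

    Each linked object (priority, category, end user, configuration
    item, ...) may contribute one candidate; only the highest-ranked
    candidate wins.
    """
    if not candidates:
        return None
    return max(candidates, key=lambda st: st["rank"])
```

This also shows why rankings must be unambiguous: two candidates with equal rank would make the outcome arbitrary.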
Action Macros
Action macros contain code to do things like update an incident's status, set a flag, or
increase the priority. They are the only type of macro that cannot be created from the
Administration tab in CA Unicenter Service Desk, because they were not intended to be
changed by end users. For this reason, changes to these macros are not supported by CA
Technical Support.
If you find that the out-of-the-box action macros do not allow you to complete the building
of your service types, it is possible to modify the spel code in existing macros by extracting
the macro's database record, making the required modification, and then reloading the
updated macro. It is also possible to load a new action macro into the MDB.
Note: Customizations of this nature must be done with care as this type of change cannot
be supported by CA Technical Support. You are responsible for testing and maintaining
these macros. Be sure to test thoroughly before placing the macros into production.
Important! Make such modifications with extreme care, since changes to spel macros will
not be supported by CA Technical Support. This is better attempted with the assistance of
CA Technical Services.
The following example shows how to add a new macro based on an existing macro. Out-of-
the-box, there are macros to update the status of an incident, but what happens if you've
created a new status and need a macro to use it? The easiest approach is to extract an
existing macro with similar functionality, update it, and load it into the database as a new
macro. The following shows how a new action macro would be created to update an
incident using a site-defined status called “Validate.”
3. Extract a similar existing action macro, such as the “Set Status = Open” by running the
following command:
TABLE Spell_Macro
del description fragment id last_mod_dt lock_object msg_html ob_type persid sym type
usr_integer1 usr_integer2 usr_integer3 usr_string2 usr_string3 usr_string4
{ "0" ,"set status of a request to open", "// set status = open\\0012// where status can
be assigned to one the following\\0012// CL = closed\\0012// CLUNRSLV = closed-
unresolved\\0012// FIP = fix in progress\\0012// OP = open\\0012// RSCH =
researching\\0012\\0012status = \"OP\";", "13013" ,"" ,"1" ,"" ,"cr" ,"macro:13013" ,"Set
Status = Open" ,"ACT" ,"", "" ,"" ,"" ,"" ,"" }
> Modify the description and fragment values to reference the new status
> Modify the sym value to reflect the new status name
TABLE Spell_Macro
{ "0" ,"set status of a request to validate", "// set status = validate\\0012// where status
can be assigned to one the following\\0012// CL = closed\\0012// CLUNRSLV = closed-
unresolved\\0012// FIP = fix in progress\\0012// OP = open\\0012// RSCH =
researching\\0012\\0012status = \"zVal\";", "" ,"1" ,"" ,"cr" ,"Set Status = Validate"
,"ACT" ,"", "" ,"" ,"" ,"" ,"" }
5. Load the modified macro into the TEST system's database and test
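The extract-modify-load steps above amount to a text substitution on the macro's fragment (and a matching change to its description and sym values). A minimal sketch of that substitution, assuming the fragment text shown earlier (the helper name is invented):

```python
def clone_status_macro(fragment, old_code, new_code, old_name, new_name):
    """Derive a new action-macro fragment from an extracted one.

    Substitutes the quoted status code (e.g. "OP" -> "zVal") and the
    human-readable status name in the comment text (e.g. open ->
    validate), mirroring the manual edit described in the example.
    """
    return (fragment
            .replace(f'"{old_code}"', f'"{new_code}"')
            .replace(old_name, new_name))
```

The same substitutions would be applied by hand (or by a script like this) before loading the record back into the test database.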
Before you decide to create a new action macro, explore other alternatives. For example,
you may want the "Notify Asset's Primary Contact" action macro to send a message
consisting only of the incident number, status, and description, rather than the default
message, which includes the priority, incident assignee, incident end user, and incident
description. Although this is an action macro, similar functionality can be attained by
creating a new Object Contact Notification from the Administration tab (browse to
Notifications). For example, Resource Contact refers to the primary contact of the
configuration item (affected_resource.resource_contact).
Next, create a new multiple notification macro containing the desired text to send when
called. Under the Object tab, select the object previously created, Resource Contact. The
macro can then be used in an event, and the notification will be sent to the asset's
primary contact.
Service Entitlement
Service entitlement assists in providing quality service to customers. To define and control
the nature of the service provided, CA Unicenter Service Desk providers usually have
separate agreements with each of their customers. A customer in this sense could be a
department or organization. Service entitlement addresses this requirement by providing a
highly scalable mechanism for managing support policy differences for varying groups of
end users, and by providing a virtual private database that defines the specifics of the
contracted support. This allows the service desk to focus its resources on customers entitled
to support, thereby saving money and making optimal use of scarce service and support
personnel.
Service entitlement is aimed at customers who are outsourcers or who employ complex
service level processes that need to be managed inside CA Unicenter Service Desk.
Unlike the basic service type ranking, service entitlement factors in all attributes such as
end user, asset, priority, organization, and category to help meet an SLA by aggregating
them on an incident or problem to provide a single, sequential list of actions to enforce all
policies. This works by assigning different service types, conditions, and events to
configuration items, priorities and categories for each customer base or organization,
resulting in the ability to track multiple conditions per ticket.
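The aggregation behavior might be sketched like this (the structures are illustrative only):

```python
def aggregate_events(service_types):
    """Merge the events of every applicable service type into one list.

    Service entitlement considers all attributes (end user, asset,
    priority, organization, category), collects every event they
    contribute, and orders them by delay into a single sequential
    action timeline for the ticket.
    """
    events = [event for st in service_types for event in st["events"]]
    return sorted(events, key=lambda e: e["delay"])
```

Contrast this with the classic ranking model sketched earlier, where only one service type's events would survive.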
Service entitlement also filters out irrelevant categories, so the analyst can choose from the
organization's list of approved categories instead of the entire list.
Service Contracts
When a contract is in effect, only the categories listed on the service contract may be
selected for the incident or problem. In addition, the only service types that may be
applied are the private ones listed on the contract. This ensures that one organization's
events are not accidentally mixed with another's.
Service contracts define the mappings of service types to assets, contacts, and priorities. At
runtime, only categories defined by the contract can be applied to an incident or problem.
The service types applied to an incident or problem are those defined and mapped in the
Service Contract detail screen.
1. Verify that the time-to-violation options are installed under Options Manager on the
Administration tab. These options control the projected violations parameters that will
be used. If any of these options are installed or de-installed, you must recycle the CA
Unicenter Service Desk services.
> ttv_enabled
> ttv_evaluation_delay
> ttv_highlight
> From the Administration tab, choose CA Unicenter Service Desk, Service
Contracts, Create New Contract.
> Fill out the default fields, including adding an Analyst as a Client Advocate and
an Employee as a Client Contact.
> Leave the organization's Service Type and Default Service Type blank for now.
> Leave the Service Contract window open, and navigate to the main CA
Unicenter Service Desk screen.
> From the main CA Unicenter Service Desk screen, choose Search,
Organizations.
> Edit the organization, and add the previously created contract into the Service
Contract field.
> Return to the contract detail screen. Be sure the organization is now displayed
on the form. (If not, choose View, Refresh.)
> From the Private Service Type tab, click Add Private Service Type.
> Fill out the default values for the new Private Service Type. Be sure that the
Service Contract field points to your new contract.
> Select Add Service Type Event from the Incidents tab.
> Create several events to add to the service type, setting the delay time for each
event (format 00:00:00). In this example, initial, warning, and violation tasks
were created using the 12hr cr resolution events. For testing, set the delay
times very short: for example, initial 0 seconds, warning 45 seconds, and
violation 1 minute.
> Save or accept the events, and navigate back to the Private Service Type.
> After all the events are saved, the event list will be shown under the Incident
Event List.
> Close the Private Service Type, and navigate back to the Service contract.
> For the second Private Service Type, add a different Service Type event (1 hr
unassigned) and set a different delay time. For example, use 30 seconds.
> Save the event and Service Type, and navigate back to the Contract detail.
> Private Incident Areas or Categories are used the same way as standard
categories. The main difference is these categories are restricted by
organization via the contract object.
> Click Copy Existing Incident Area. (Using the “Copy Existing Incident Area”
feature, the new Incident Area will take advantage of pre-populated data such
as the assignee and attached properties.)
> Search and select the Applications Incident area from the list screen.
> Fill out the Symbol field. Be sure to set the Service Type to the first Private
Service Type you created.
> Save the Incident Area, and navigate back to the contract. Be sure that you
see the Incident Area listed under the contract (choose View, Refresh if
necessary).
> Click Map Single Priority to link a priority to a Service Type within this contract.
> At the Map Priority screen, set the Service Type to Unassigned or to the second
Private Service Type you created.
> Save the Service Type Map, and navigate back to the contract detail screen.
> From the main CA Unicenter Service Desk window, create a new Incident
(choose File, New Incident), and set the following fields: Incident Area,
Priority, and Assignee.
> The organization's service contract should be automatically associated with the
incident if the contact selected belongs to that organization.
> Review the service type tab. You should see the service types attached. The
service types reflect those mapped to the attributes we set within the incident.
> You should also see the service type events shown that come from the service
types.
> Refresh the incident, and note the changes to the service type tab. You can
view any events triggered by going into Activities/Event History.
The primary goal of incident management is to resolve incidents quickly in order to restore
the use of IT services. The primary goal of problem management is to permanently
eliminate the underlying causes of incidents and prevent their recurrence.
For these goals to be achieved in a way that limits downtime for customers (internal and
external) and maximizes the productive time of the IT resources performing the work of
incident and problem management, effective supporting processes must be implemented.
Effective Prioritization
When an incident is presented to the service desk, certain decisions about how to handle it
must be made. One of the first is determining the incident's priority relative to the existing
incidents currently being managed.
In an earlier chapter, suggestions were made about values that could be used to prioritize
incidents and problems. It is important to have these clearly defined levels of priority and to
ensure that everyone, including IT resources and the user community, understands what
these levels are and the reasons behind them. If any one group perceives the priorities
differently than another, or if a user or group abuses higher-level priorities to gain a higher
level of service, there is a risk that truly critical incidents will not be addressed as promptly
as they deserve. If this occurs, the reputation of the service desk, as well as the
achievement of any SLAs or OLAs that apply to the resolution of the incident or problem,
could be compromised.
There are two tasks that must be accomplished for prioritization to be reliable and effective:
■ The definitions of the priorities must be agreed upon by the user community and IT,
and they must be clearly documented.
■ The service desk and other support analysts must be educated on what the resolution
procedure is for each level of priority.
Impact to the business should be the primary driver of priority selection. Therefore, it's
important to work with business users to define the priorities appropriate to the business
model. For example, in a call-center business, the telecom system is critical: the highest
priority level would be assigned to an incident or problem that affected the call center's
ability to take calls. The next priority level might be assigned to an incident affecting a
system that supports the primary business activity but is not required to provide at least
minimal service to the customer.
According to ITIL best practices, the priority levels should be not just clearly defined but
also automatically determined based on the urgency and impact of the outage. This
requires another level of definition (for urgency and impact) and similar communication and
consensus among all affected organizations, including the user community. CA Unicenter
Service Desk supports this method of priority determination.
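An urgency/impact mapping of the kind ITIL describes can be sketched as a simple lookup. The matrix values below are an example of such a definition, not CA's defaults:

```python
# Hypothetical 3x3 urgency/impact matrix (1 = highest priority).
PRIORITY_MATRIX = {
    ("high", "high"): 1, ("high", "medium"): 2, ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3, ("low", "medium"): 4, ("low", "low"): 5,
}


def derive_priority(urgency, impact):
    """Derive priority automatically from the agreed urgency and impact.

    The matrix itself is what the affected organizations must define
    and agree on; once defined, the derivation is mechanical.
    """
    return PRIORITY_MATRIX[(urgency, impact)]
```

Publishing the matrix alongside the priority definitions gives every group the same, non-negotiable answer for any given outage.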
In the case of tools with an end-user interface for users to submit incidents, the ability of
the users to make their own priority selection must be evaluated. If the users do not
understand the clearly defined priority levels or cannot be depended upon to make the
appropriate selection, the decision may be taken out of their hands.
In the case of CA Unicenter Service Desk, often the Priority field is replaced with the
Severity field in the end-user web interface to prevent unnecessary escalations and
notifications as a result of improperly selected priority levels. Another alternative is to
modify the web.cfg file or implement a data partition to limit the priority values available to
the end users when initiating an incident. If the decision about priority is not permitted at
the user level, it must be evaluated upon arrival in the service desk queue.
Once the priority is assigned, and the incident or problem is in the hands of the analysts,
the owner of the incident or problem must know what to do with it. Manual or automated
procedures should be implemented to direct the incidents and problems through their
lifecycles.
In one example, a business may implement a procedure where every Priority 1 incident is
immediately assigned to an individual owner rather than remaining in a queue. In
CA Unicenter Service Desk, this can be achieved through the use of service types (which
are used to enforce priority-related activities to support business service availability
agreements) and automatic assignment, which is covered later in this chapter.
Again, consistency is the key to success here. To ensure that service level agreements
(SLAs) and operational level agreements (OLAs) are met regularly, every incident or
problem must have a priority assigned and the predefined activities associated with that
priority must be executed via CA Unicenter Service Desk service types.
Sometimes a combination of factors may come into play to determine the activity required
for a particular incident or problem. Incidents may have a different set of service
expectations defined than problems, and the associated activities would therefore be very
different. As a general rule, the simpler and more consistent the priority-based activities
are across categorizations and organizations, the easier it will be to ensure that the
processes are being followed accurately. For example, it is simpler to know that all priority
1 incidents should be resolved in 2 hours, regardless of the possible SLAs associated, than
to know that priority 1 printer incidents have 8 hours but that priority 1 server incidents
have only 30 minutes.
Now that everything is prioritized appropriately, it's critical for management to have an
understanding of what incidents and problems exist at each priority level. In CA Unicenter
Service Desk, the out-of-the-box scoreboard in the analyst web interface displays records
based on priority. However, because priorities are defined specifically for your organization,
it is important to update the scoreboard nodes to reflect the names and numbers of
priorities appropriate to your business.
Appropriate Assignment
Once the priority is determined, and you know how fast you need to work through the
incident or problem, the next step is to decide who is going to work on it.
In CA Unicenter Service Desk, the assignment of incidents and problems generally takes
place at two levels: assignee and group. The group assignment is often driven by the
incident or problem area, which is the categorization of the particular type of problem being
experienced. There is also a capability to automatically assign the incident or problem to
members of the assigned group, based on work shift (the defined hours the analyst is
expected to be available, configurable by the service desk), physical location, availability,
and current workload.
Centralized Model
In the most common scenario, dedicated first-line support attempts to use published
knowledge, known errors, and workarounds, as well as their experience, to solve the
reported incident before assigning it to a higher level of support, usually one possessing a
more specific skill set. Even if this team is only able to handle very simple outages, the
triage that occurs here ensures that the next assignment is to the correct support group,
preventing delays in service.
When this is the case, the initial assignment of the incident is to the main service desk,
regardless of the categorization of the reported issue. First-line support people answering a
hotline would make an initial assignment to themselves, which can be automated in
CA Unicenter Service Desk. Data partitions, form modifications, or even limitations in the
selection of areas can be used to drive all user-submitted incidents directly to first-line
support resources.
If first-line support is incapable of solving the incident, it is then assigned to the next level
of support.
Decentralized Model
If few or no centralized service desk resources exist, it is even more critical that
categorization areas be defined properly. In this scenario, all users submit their own
incidents, selecting the appropriate incident area, which in turn drives group and
(potentially through automatic assignment) assignee selection.
A common challenge in this model is that users do not typically understand the larger issue
that could be causing the symptom they are experiencing. A user might select "Network" as
a categorization when they are unable to access the Internet, when the issue is actually the
configuration of their own workstation. An incorrect assignment of this kind causes delays
in resolution, as the incident could languish in the wrong queue (especially at a lower
priority) for an extended period before being transferred to the correct support
organization.
Note: Most of the above discussion is specific to incidents, because problems are generally
created within the IT organization, and often by the assignees themselves. This eliminates
most of the concerns about appropriate assignment, as the level of expertise to make an
accurate assignment usually exists and is applied.
Group Definitions
To ensure incidents and problems are being serviced appropriately, it is important to have
clearly delineated groups of resources defined by skill set. A single analyst may belong to
several groups, but each group should be specific enough that the right people are working
on the right things at the right time.
Groups should be specific enough to ensure efficient service, but not so specific as to cause
troublesome upkeep. Good examples are a Printer Team, a Windows Server Team, and a
UNIX Server Team. A Dot Matrix Printer Team would probably be too specific; an
Infrastructure Team would be too generic. The rule of thumb is that members of a group
should be interchangeable, so that each member can solve any issue assigned to the
group.
Groups are also commonly created by physical location, to reduce group size and
complexity through the provision of on-site support.
Automatic Assignment
CA Unicenter Service Desk provides the ability to automatically assign incidents and
problems to the individual members of the assigned group based on a few criteria.
■ Availability
> The Available checkbox on the group member contact record must be selected
■ Location
> If selected, the location of the end user determines the subset of group
members available for automatic assignment (those at the same locale)
■ Workshift
> If so configured, auto-assignment will only take place for each group member
during the active window of his or her work shift
■ Workload
> If all criteria are met for multiple members of a group, the selection of which
group member to assign the incident to is determined by finding the analyst
with the lowest count of incidents already assigned to them
> This does not take into account the complexity of the currently assigned
incidents or problems
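The four criteria above can be sketched as a filter followed by a minimum-workload selection. The names and fields below are illustrative, not the product's data model:

```python
def auto_assign(group, end_user_location, now_in_shift):
    """Pick the group member to receive a new incident.

    Filters members by availability, location, and work shift, then
    selects the member with the fewest open tickets. Note that a raw
    ticket count ignores the complexity of what is already assigned,
    exactly as described above.
    """
    eligible = [
        m for m in group
        if m["available"]
        and m["location"] == end_user_location
        and now_in_shift(m)
    ]
    if not eligible:
        return None  # no eligible member; the incident stays with the group
    return min(eligible, key=lambda m: m["open_tickets"])
```

The workshift check is passed in as a callable so the sketch stays independent of how shifts are stored.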
When analysts access the service desk, they usually do so to view their current assignments
and make updates accordingly. It is therefore critical to analyst efficiency to provide an
easy way for them to view their own assignments, both individually and for their groups.
Giving analysts the ability to view similar incidents being worked on by other members of
their team enables quick identification of trends that may require resolution through
problem management.
Notification
Once assignments of priority and responsibility are made, a process must exist to alert the
support groups or individual assignees that some action is expected on their part.
Notifications are also appropriate for management team members who need to be kept up
to date on the progress of highly visible incidents or problems.
As a general rule, notifications should only be sent when absolutely necessary to the
process of incident and problem management or to keep the affected end user informed.
Too often, an excessive influx of notification messages tires out the recipients, which results
in the creation of email client rules to automatically file or delete the messages from the
service desk, negating the purpose of the notifications.
Activity Notifications
CA Unicenter Service Desk provides many out-of-the-box message formats for common
activity notifications. Several of the most common usages of these notifications are listed
here.
Many more activities exist, and they can be added to or modified according to your specific
requirements. The customizable text of these notifications includes variable values that are
populated from the related incident or problem to provide as much detail as is desired to
the recipient.
Activity notifications occur for every incident or problem when they are enabled, regardless
of priority. To perform conditional notifications, service types must be utilized.
When notification requirements are different for organizations, types of issue, priority or
other factors, service types can be utilized to send notifications out on a defined schedule
based on conditional statements to limit the number and type of alerts issued.
For example, an organization may only want notifications upon creation of incidents or
problems with a higher level priority. This can be achieved through the following process.
■ Create a condition
> Determines if the priority or any other factor meets the requirements of the
business and returns true if conditions are met
■ Create a notification
> Defines the message format and recipients (such as end user or assignee for
the incident or problem)
■ Create an event
> Associates the condition to be met with the action to be taken, in this case the
notification we created previously
> Schedules the event to be run when the service type is associated with an
incident or problem
> Service Types can be associated most simply via the assigned priority but this
can also be achieved through association of service types to the following:
Individual Contacts
Configuration Items
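The condition, notification, and event steps above chain together as a rough sketch like this; all of the names are illustrative, not actual product objects.

```python
def high_priority(ticket):
    # Condition: returns True when the business rule is met
    # (here, only priority 1 or 2 tickets qualify).
    return ticket["priority"] <= 2

def notify(ticket):
    # Notification: defines the message format and its recipients.
    recipients = ", ".join([ticket["end_user"], ticket["assignee"]])
    return f"To {recipients}: incident {ticket['id']} opened at priority {ticket['priority']}"

def run_event(ticket, condition, action):
    # Event: associates the condition with the action and fires only on True.
    return action(ticket) if condition(ticket) else None

t = {"id": 4711, "priority": 1, "end_user": "pat", "assignee": "raj"}
print(run_event(t, high_priority, notify))
```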
The history of all incident and problem notifications is held in the CA Unicenter Service Desk
database and can be reviewed at any time from the Notification History menu option within
an incident or problem detail or from CA Unicenter Service Desk's main menu under the
View option.
Notifications can be sent to any defined email address, including those destined for mobile
devices. User preferences for where and when they would like their notifications delivered
are configurable at the individual contact level.
Escalation
Notification, assignment, and prioritization activities all support the reduction in mean time
to resolution; but without a clearly defined escalation procedure, management is not
engaged to intervene when the plan is not being executed appropriately.
When to Escalate
Escalations should be executed in time to make a difference, not after it is too late to
prevent impact, as in the case of an already violated business service level agreement.
Therefore, when setting up service types, it is important to initiate escalations prior to the
“point of no return,” after which the damage has already been done.
Service Types and their accompanying events are the typical way automated escalation
procedures are implemented. A detailed explanation of how to use service types is in
another chapter of this book.
Analysts can also initiate escalation activity notifications by simply increasing the priority of
an incident or problem, but this can skew reporting and is not advised for most
implementations unless there has been an actual change in the impact or the urgency of
the incident.
Manual escalations can also take place through the use of manual notifications. These allow
for an analyst to notify an interested person, such as the manager of the assigned group, of
the incident or problem, while logging the notification as part of the permanent record. This
alleviates the pain of searching email folders to prove a notification was sent and received.
Manual notifications can be sent via email to any email client, including mobile devices.
Acting on an Escalation
Because CA Unicenter Service Desk supports notification to any email client, and provides
as part of most notifications a link to the incident or problem that is the subject of the
notification, managers receiving escalations are able to act quickly upon escalations to
prevent the consequences of violating a service level agreement or annoying customers
with delays.
Even when a web browser is not available to access the application from a desk,
CA Unicenter Service Desk provides the capability to access and update an incident or
problem directly from a handheld mobile device through the out-of-the-box PDA interface.
This limited interface provides managers the ability to update, transfer, or close an incident
or problem in the field.
Responsibilities
Incident Management
Level 1 Analysts
Level 1 analysts at the service desk are the first line of contact between the end user and
IT. Their responsibilities are as follows:
■ Logging all calls from the end user when the phone rings, an email is sent, or someone
reports a problem in person, thus creating new incidents with all the necessary
information.
■ Using the knowledge tool to solve incidents with workarounds, known errors, or general
knowledge. By following the steps in the knowledge tool to troubleshoot and resolve
the incidents, the mean time to resolution decreases (even if the average call time
actually increases when all the steps are followed).
■ Updating the status of the incident based on the current phase of the troubleshooting
process. Updating each activity with comments and the amount of time spent in each
update.
■ Escalating the incident to the Level 2 analyst after exhausting normal troubleshooting
techniques and checking the knowledge base.
■ Contacting the end user when Level 2 analysts indicate that they have resolved the
incident.
■ Submitting candidate knowledge documents for review and subsequent publishing into
the knowledge base.
Note: Some organizations permit authorized Level 1 analysts to publish directly to the
knowledge base without requiring extra approval.
Level 2 Analysts
The Level 2 analyst is generally responsible for handling escalations from Level 1.
Responsibilities may include the following:
■ Creating workarounds, known errors, and general knowledge within the knowledge tool
to assist Level 1 in incident resolution by providing the necessary steps to resolve the
incident.
■ Updating the status of assigned incidents based on the current phase of the
troubleshooting process.
■ Updating each activity with comments and the amount of time spent in each update.
■ Changing the status to Resolved once the resolution has been found, so that Level 1
can verify from the end-user perspective that the incident can be set to Closed.
Note: Depending on your organization's policies, if Level 2 is contacted directly by the end
user, it may temporarily function as Level 1 by logging and then owning the call, while
reminding the end user to call Level 1 next time.
Problem Management
The analysts working in problem management have a completely different focus from those
who are working in incident management. The problem management analysts are
responsible for the following types of activities:
■ Incident matching, which means looking for problems in the environment and tying
them to existing or new problems.
■ Trending of incidents, which means taking a look at the history of incidents in the
environment and performing root cause analysis to see if the cause can be determined
based on the outages in the environment.
■ Determining the appropriate requests for change (RFCs) that would be required to
resolve the problems in the environment, as well as which CIs each RFC would affect.
■ Determining errors in the environment, assessing their impact, and documenting the
error resolution.
■ Presenting the known errors to management so that they can order remediation or
determine that the remediation is too costly at this time.
Effective Use of the Status, Priority, Root Cause, Service Type, and Category Fields
Incident Status Field Usage
If the incident Status field is used correctly and the analyst changes the status
appropriately, managers and end users can see from the status what is happening with an
incident.
With effective use of the incident Status, service type escalations and notifications easily
occur as part of conditional statements. Conditions on the service types can be used to
check for incident Status prior to the firing of an event. Thus if an analyst has
acknowledged the incident in the appropriate amount of time, there is no need to notify the
analyst again that the call needs to be worked on.
Open: An incident has just been opened and has not been seen by the analyst.
Acknowledged: The analyst changes the status to Acknowledged after seeing the incident.
Work In Progress: The analyst changes the status when beginning to work on the incident.
Hold: Work is delayed; the analyst is waiting for a part or a cable pull, for example. This
status can also be used when the end user is not available, such as on vacation or
traveling.
Transferred: The analyst needs to have another person own the incident. Also used when
handing off between Level 1 and Level 2 analysts.
Resolved (see note below): The analyst completes the incident, before the service desk
calls the end user for verification of completion. (This status stops all escalations and
notifications.)
Closed: The service desk has called the end user and verified that the incident is
considered complete.
Re-Open: If an incident wasn't really resolved, the end user or analyst can change the
status to this, and the process begins again.
Note: Service desks commonly use the Level 1 analyst to contact the end user to verify
that an incident can be set to Closed. Thus, the person resolving the incident can only put it
into a Resolved status, and the corresponding “Reported By” Level 1 analyst or a designee
would then verify with the end user and change the status to Closed as appropriate.
The use of these status codes is meant to align the use of the CA Unicenter Service Desk
application to the needs of the business. The lists presented here and in the application are
by no means all-inclusive and may need to be modified to meet client requirements.
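The status flow in the table above can be enforced as a simple state machine. The transition set below is one plausible reading of the table, not a rule defined by the product; adjust it to match your own process.

```python
# One plausible set of legal incident status transitions,
# derived from the status table above.
TRANSITIONS = {
    "Open": {"Acknowledged"},
    "Acknowledged": {"Work In Progress", "Transferred"},
    "Work In Progress": {"Hold", "Transferred", "Resolved"},
    "Hold": {"Work In Progress"},
    "Transferred": {"Acknowledged"},
    "Resolved": {"Closed", "Re-Open"},
    "Closed": {"Re-Open"},
    "Re-Open": {"Work In Progress"},
}

def change_status(current, new):
    """Raise if the requested transition is not in the table."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

print(change_status("Resolved", "Closed"))  # Closed
```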
Proper use of the problem Status field is similar to that of the incident. The main difference
is the addition of a few new statuses: Awaiting RFC, Known Error, and Workaround.
The problem Status can also be used in the conditions within the Service Type escalations
and notifications. However, in most cases, these escalation and notification timeframes are
much longer for problem management than for incident management.
Open: A problem has just been opened and has not been seen by the assigned analyst.
Work In Progress: The analyst changes the status when beginning work on the problem.
Known Error: The root cause has been determined and documented for remediation in the
rest of the environment.
Transferred: The analyst needs to have another person own the problem. Also used when
handing off between Level 1 and Level 2 analysts or other support groups.
Closed: The service desk has called the end user and verified that the problem has been
resolved.
Priority Usage
The priority of the incident or problem is critical to determining what level of effort will be
taken by the analyst, and to setting the escalations and notifications that will occur. It is
the priority that helps management understand what things in the environment have the
potential to negatively affect the site from a production standpoint.
A Field Developed Utility (FDU), available through CA Technical Services, allows the Impact
and Urgency to determine the priority of the incident or problem. Instead of just setting the
default 'Incident Priority' field, this FDU sets the priority of the call based on the
combination of both the Impact and the Urgency of the incident or problem. This can help
eliminate giving incidents and problems the wrong priority.
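The FDU's behavior can be approximated with an impact/urgency lookup. The matrix values below are hypothetical (1 being the most severe in every dimension); the actual mapping would be defined to suit the business.

```python
# Hypothetical 3x3 priority matrix: (impact, urgency) -> priority,
# where lower numbers are more severe.
PRIORITY_MATRIX = {
    (1, 1): 1, (1, 2): 2, (1, 3): 3,
    (2, 1): 2, (2, 2): 3, (2, 3): 4,
    (3, 1): 3, (3, 2): 4, (3, 3): 5,
}

def derive_priority(impact, urgency):
    """Set priority from the combination of impact and urgency,
    instead of letting the analyst pick a priority directly."""
    return PRIORITY_MATRIX[(impact, urgency)]

print(derive_priority(1, 2))  # 2
```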
Please note that FDUs are not supported by CA Support. To have this FDU supported, a
support agreement must be created via the CA Technology Services organization.
Otherwise, the support of this utility and any questions or errors related to its use will be
the responsibility of the customer organization.
2 High—Outages that, if not resolved quickly, will affect the ability of the
business to function properly
4 Low—Minor outages have occurred, but a root cause must be found in the
near future
5 Extremely low—The outage is very minor and can be fixed in the future
Severity Usage
Severity can also be used as an escalation mechanism. Using both priority and severity
together allows an analyst to determine which calls are critical. When placing escalations
via the Service Types, increasing the severity of the call provides more value than changing
the priority of the call. This will also help in understanding the metrics associated with the
calls for the number of times escalated and the average escalation of calls by priority. From
an automation standpoint, when integrating with Spectrum and CA Unicenter Network and
Systems Management (NSM) on critical calls, you can pass the priority as 1 and the severity
as 5, which will put the call at the top of the analyst's work list from a process perspective.
It is critical to understand what is occurring in both the incident and problem management
processes. To do this, you can use the Activity Menu Items. Each of the Activity Menu Items
is listed below with a description of when it is to be used. With each of these, the analyst
should record the effort expended in the Time Spent field, which updates the Total Activity
Time of the associated incident or problem and provides additional metrics on level of
effort. Having not only the Open Date, Closed Date, and Time, but also the actual amount
of effort involved in resolving the incident and/or problem helps management direct
resources efficiently. Additional client-defined activities can be
added from the administrative web interface.
Log Comment: Log the tasks you have been doing, your comments, and the time spent.
Transfer: Log the need to transfer the call to another analyst and why.
Researching: Record that you are researching information to solve the incident or
problem.
Close Incident: Log incident closure, after verification by the end user that the incident
can be closed.
Callback: Log that you talked to the end user—either you called them or they
called you—and the conversation and time spent.
Escalate: Record a manual increase in the priority of the incident or problem, and
why.
Manual Notify: Notify someone of what is going on with the incident or problem outside
of the normal notification methods.
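Summing Time Spent across the logged activities is what yields the Total Activity Time described above. A minimal sketch, with a hypothetical activity record layout:

```python
def total_activity_time(activities):
    """Total Activity Time = sum of Time Spent over all logged activities."""
    return sum(a["time_spent_min"] for a in activities)

log = [
    {"type": "Log Comment", "time_spent_min": 10},
    {"type": "Researching", "time_spent_min": 35},
    {"type": "Callback",    "time_spent_min": 5},
]
print(total_activity_time(log))  # 50
```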
Root Cause
Root cause can provide a significant amount of value in terms of reporting and trend
analysis. The root cause is the underlying reason that the original incidents had to be
opened. This could be User Error, Documentation, System Failure, or a host of other
reasons. Root causes should be standardized by each site and used properly so that
additional reporting can be accomplished. Root cause analysis also provides the ability to
know where the user community might benefit from additional FAQs being published, and
provides metrics that may identify areas where training or other action is necessary.
Below are sample root causes. As you can see, they can be very simple. Too many types of
root causes make it difficult to get the desired value out of this field, while too few
decrease the information provided by the field. Analysts and technicians should have a
thorough understanding of root cause types to use them effectively.
Unknown: When the root cause of the problem is unknown (use only rarely)
User.Error: When the end user wasn't performing the task properly
User.Training.Issue: When the end user doesn't know how to perform the task
Service Types
One of the powers of CA Unicenter Service Desk is the ability to escalate and notify the
necessary parties within the incident and problem processes. In most cases, these
escalation and notification rules are based on the priority of the incident or problem. In
some cases, however, the service type may be based either on the end user, the
organization, the category of the incident or problem, or the CI associated with the call. In
any combination of the above, the same type of events and escalations can occur. As noted
in the chapter on Service Types, there are many ways in which to set them up to provide
the substructure for helping you manage your service desk. Remember that the Service
Type itself should bring to the forefront those incidents or problems that need attention,
before they miss a level of service.
In the table below, you will find a sample of incident management escalation and
notification events based on the priority of the incident. For problem management, the
timeframes are significantly longer to encompass historical data in most cases. Note that
the name of the Service Type generally indicates resolution times.
Another flexible feature of CA Unicenter Service Desk is the ability to repeat events, which
reduces the number of events to manage in the event engine and allows it to scale easily.
As noted above in the effective use of the Status and the Priority fields, this will provide a
strong basis for the conditions of the events to determine the necessary actions to be
taken.
In the table below, we only show actions on a true condition; however, leveraging actions
on a false condition can provide value as well. For example, for a Priority 1 Service Type
named “2 Hr Resolution,” if the 2 hours has been exceeded, event #4 checks the status. If
it is not in a Resolved status, it will set the violation flag and execute notifications.
However, for a false condition, you may want to notify the “Reported By” analyst so he or
she can call the end user back and verify that the incident or problem is resolved, and then
change the status to Closed.
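The true/false branching described above might look like the following sketch. The “2 Hr Resolution” handling, field names, and return messages are illustrative only, not the product's event engine.

```python
def sla_check(ticket, elapsed_hours, limit_hours=2):
    """Event fired at the resolution deadline of a '2 Hr Resolution' type."""
    if elapsed_hours < limit_hours:
        return "pending"
    if ticket["status"] != "Resolved":
        # True condition: set the violation flag and execute notifications.
        ticket["sla_violated"] = True
        return "notify assignee and manager of the violation"
    # False condition: remind the reported-by analyst to verify and close.
    return "ask reported-by analyst to verify with end user and close"

t = {"status": "Work In Progress"}
print(sla_check(t, elapsed_hours=2.5))
```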
The Service Type events can be very powerful, but keep them minimal to be effective. If
you create too many notification and escalation rules, the analysts and managers will stop
looking at them. Provide the correct mix of notification and escalations to keep the process
moving forward, but not so many that they become a nuisance.
The table below sets up Service Type escalation and notification rules that help ensure
that not only the assignee but everyone involved is notified if an incident is getting close
to missing a service level.
Categorization
A very powerful feature in incident and problem management is the ability to properly
categorize the incident and let CA Unicenter Service Desk provide significant automated
assistance. These categories let the analyst determine what kinds of calls are being handled
by the service desk. If you properly align the categories with your knowledge categories,
you can get additional benefits, as described in the “Knowledge Management” chapter, such
as the Category Browse function.
■ Service Types can be assigned based on the categorization of the incident or problem,
which will set up the necessary escalation and notification rules that are in effect for the
incident or problem.
■ Different surveys can be based on the category of the incident or problem. Rather than
a single generic survey, this gives you a better understanding, by type of call, of how
the end user perceived the level of service. It also allows you to elicit more specific
information from the customer, based on the type of service that was provided.
Then Level 1 can capture this data for troubleshooting, and can start the
troubleshooting on incidents before they are transferred to Level 2. Using this feature
can save time as it allows you to gather all of the needed information up-front and
ensures that you don't have to contact the customer to ask the standard follow-up
questions.
The organization of these categories is critical as well. The tree can be multi-level with no
limit on depth (though the label field is limited to 128 characters). Effective organization of
these categories and the use of multiple levels of classification can significantly enhance
reporting. It can also enhance the ability to capture the right information from the end
user. As a byproduct, you can assign the call to the appropriate group.
Since most sites are looking for reporting based on types of calls, categorization is critical
to the success factors aligned to these types of KPIs. The end users have a significant
advantage when properties are associated with the categories, so that they can place all
pertinent information in the incident immediately. Then the analysts don't have to call back
for more information; they can begin troubleshooting immediately.
You do not want too many top-level categories. Somewhere between 8 and 12 is a good
number of top-level categories. Then, drill deeper into the category as needed to increase
the reporting capabilities, as well as properly assign the incident or problem. On the other
hand, avoid creating a tree that is too deep. Users and analysts become frustrated if they
have to take too many steps up and down a tree to find the right category. Unless there is
a true need for a category, do not create it. In the table below, you can see an example of
the top-level categories and some lower-level categories.
Note: Be careful when you begin using these categories. Make sure that the analysts are
using them properly and not just placing all calls into a generic category, like General. One
way of judging the quality of your category system is to watch how many incidents have a
generic classification. If categories are few or scattered, incidents are put in generic
categories because the right category does not exist. If there are too many categories,
incidents are classified as generic because the user gave up on navigating to the right one.
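That quality check can be computed directly: track the fraction of incidents landing in a catch-all category over time. The category names here are placeholders.

```python
def generic_ratio(incidents, generic=("General", "Unknown")):
    """Fraction of incidents filed under a catch-all category."""
    if not incidents:
        return 0.0
    hits = sum(1 for i in incidents if i["category"] in generic)
    return hits / len(incidents)

sample = [
    {"category": "Email.Outlook"},
    {"category": "General"},
    {"category": "Network.VPN"},
    {"category": "General"},
]
print(generic_ratio(sample))  # 0.5
```

A rising ratio suggests either missing categories or a tree too deep to navigate, per the note above.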
Each site is different in its focus and needs, and must determine its own Incident Areas
based on its reporting and assignment needs. These are suggested Incident Areas. (Note
that the name of the Incident Area should be short, but understandable. The description
can provide the details.)
User.Error: When the user doesn't know what to do or has done it incorrectly
Unknown: When the incident doesn't fit another category (rarely used)
There are several ways to retrieve key metrics and data about incident and problem
management directly from the CA Unicenter Service Desk and CA Unicenter Service Desk
Knowledge Tools web interface. The advantage of these report types is that they are all
quick and easy for a user to access directly from the interface.
Scoreboard Queries
The scoreboard allows users to organize key CA Unicenter Service Desk and CA Unicenter
Service Desk Knowledge Tools data and tasks into manageable queues on the CA Unicenter
Service Desk home tab. The scoreboard structure consists of folders for grouping together
nodes. Each node is a stored query against the system that returns a total count of the
objects returned by that query, as well as a link to the results.
A default scoreboard is provided for each user. The administrator can predefine scoreboards
by access type, as well as allow users to personalize their own scoreboards using the
Customize Scoreboard File menu option. Users can add and modify folders and nodes, as
well as create new stored queries to add to their scoreboard. They can use scoreboard
nodes for information they need to access frequently. Scoreboard nodes are great starting
points into a user's daily activities within the incident and problem management process.
An administrator can also create stored queries for users to access within the Administrator
tab.
Below are several examples of common questions that can be easily answered by using a
scoreboard query:
■ How many incidents are assigned to the mainframe group? Of those incidents, how
many are priority 1?
■ Are there any problem tickets in the Network problem area (category) that are
currently unassigned?
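The first question above reduces to a stored query that counts matching tickets. This sketch stands in for the real query engine; the data and criteria names are hypothetical.

```python
incidents = [
    {"group": "Mainframe", "priority": 1, "assignee": None},
    {"group": "Mainframe", "priority": 3, "assignee": "ana"},
    {"group": "Network",   "priority": 1, "assignee": None},
]

def node_count(data, **criteria):
    """A scoreboard node: a stored query returning the count of matches."""
    return sum(1 for row in data
               if all(row.get(k) == v for k, v in criteria.items()))

print(node_count(incidents, group="Mainframe"))              # 2
print(node_count(incidents, group="Mainframe", priority=1))  # 1
```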
Scoreboard Graphs
Each folder within the scoreboard has simple graphing capabilities that let you graph all the
nodes contained in that folder. Scoreboard graphs, accessed by right-clicking any folder and
choosing Graph Items, are very useful for a technician or manager to quickly review and
compare multiple nodes or queries at the same time. Scoreboard graphs are also helpful if
you need to quickly pull up incident and problem management data in a meeting, or copy
and paste into a document or email.
Often, clients design their scoreboards in such a way as to provide the most value from
this graphing capability. One way this is accomplished is by creating scoreboard folders for
managers containing nodes for each of their employees so they can get a quick idea of the
workload of each.
The above scoreboard graph shows the results of each of the scoreboard nodes under the
Incidents > Assigned folder.
Summary and detail reports are available from most CA Unicenter Service Desk and
CA Unicenter Service Desk Knowledge Tools list screens (also referred to as search/filter
screens). An analyst, manager, or administrator accessing a list screen has access to a
menu option called Reports. This menu lets you create a summary or detail report directly
from the list screen. These reports are useful when you need to print out or review the
consolidated output of a specific scoreboard query or search results list.
A summary report displays the key attributes for the object type displayed in the list. In the
case of an incident, the following attribute values are shown: Incident #, Customer
(affected end user), Assignee, Open Date, Status, Priority, and Summary.
A detail report is accessed from the same menu as the summary report, but it displays all
attributes for all the items in the list, including the activity log.
Note: Detail reports return all attributes for the entire list, and may take a long time to
process and display. Because of the processing time and the large amount of data they
return for review, detail reports are recommended only when a limited amount of data
has to be processed.
Summary and detail reports can be modified to meet customer specific needs. Details on
modifying these reports are covered in Chapter 3, “Custom Reports,” in the CA Unicenter
Service Desk Modification Guide.
■ An analyst wants to print out a list of all the incidents that are priority 1 in the network
incident area but have no assignee.
■ A manager needs a list of the problems that violated the SLA yesterday. This list
should include all the details on the ticket so the manager can have a full
understanding of the history and activity of each problem.
■ An administrator needs a list of all of the user IDs of inactive contacts in the system.
Analysis Reports
Analysis reports are available in the web interface from the Administration Tab in the
CA Unicenter Service Desk folder. These reports are used by administrators and managers
to get a perspective on the high-level activities within incident and problem management.
These reports are very useful for a support manager to glance at before going to a meeting
with management. Metrics such as the total number of priority one incidents in the last 30
days and their average time to close provide extremely valuable data to have at your
fingertips.
Below is an example of an analysis report that details the activity summary for the past
year's incidents, broken down by priority:
Incident and problem managers often need more complex reports to analyze trends within
their environment. Reporting tools such as Microsoft Access and Business Objects Crystal
Reports provide this level of complexity, as well as the ability to customize reports to meet
the organization's unique needs. CA Unicenter Service Desk provides more than sixty
runtime versions of commonly requested reports in both Access and Crystal Reports.
These reports cover a variety of topics, including request, change, issue, and knowledge
management. The reports can be customized using full versions of the Access and Crystal
Reports.
For a definition of all predefined reports, review Chapter 7, “Report Generation,” in the
CA Unicenter Service Desk Administrator Guide. Additional detail on customizing these
reports is included in Chapter 3, “Custom Reports,” in the CA Unicenter Service Desk
Modification Guide.
■ Provides information for the service desk manager and team lead roles
The CA Unicenter Service Desk Dashboard provides insight into incident management and
problem management, as well as other practices that relate to the service desks, such as
request management, change management, configuration management, knowledge
management, and issue management. It is an add-on to CA Unicenter Service Desk and
CA Unicenter Service Desk Knowledge Tools that presents visually rich representations of
critical metrics in CA Unicenter Service Desk Dashboard displays.
Note: If necessary, system administrators can use the CA CleverPath Forest & Trees
developer to develop additional metrics to present information not already included in a
CA Unicenter Service Desk Dashboard control.
Benefits
While dashboards are often considered to be tools for executives, most of the out-of-the-
box content in the dashboard is geared more towards the day-to-day duties of those people
directly managing the everyday activities at the CA Unicenter Service Desk. This would be
the CA Unicenter Service Desk manager or perhaps a team leader in the support
organization.
The standard implementation of CA Unicenter Service Desk Dashboard displays data from
the live database storing CA Unicenter Service Desk and CA Unicenter Service Desk
Knowledge Tools data (the MDB). Data is therefore very timely: whenever the user selects
Refresh, the displayed data is updated by querying the database (or, via a minor
customization, on a schedule). The data is thus described as “near-real time.”
CA Unicenter Service Desk Dashboard aims to provide its users with actionable information
that can be used to make decisions and manage operations effectively. Given the large
quantity of transactional data collected by a service desk, CA Unicenter Service Desk
Dashboard focuses on exposing exceptions rather than merely displaying routine data.
Management can then rapidly focus attention on those areas requiring decision-making and
action.
CA Unicenter Service Desk Dashboard provides drill-down capability from charts into the
supporting data that comprises a part of the chart (such as a list of incidents), and then
drill-down capability into CA Unicenter Service Desk itself, where more details are available
(such as all the details of a specific incident).
CA Unicenter Service Desk Dashboard comes prepackaged with rich content in the form of
standard metrics, tables, and charts. This content reflects common data requirements that
CA has identified over many years as a vendor of service desk technology to a variety of
different organizations.
The out-of-the-box content is a valuable starting point for a CA Unicenter Service Desk
Dashboard implementation. Organizations typically also want to customize and augment the CA Unicenter Service Desk Dashboard content to reflect their specific approaches and business needs. CA Unicenter Service Desk Dashboard has a powerful design environment
(in the form of CA CleverPath Forest & Trees developer) that can be used to perform
customizations. Some of the more commonly used customizations are presented later in
this chapter.
Dashboard Defined
What is a dashboard? Just think of the instrument display on the dashboard in a car. At a
glance, you can determine such things as the speed at which you are traveling and whether
the oil pressure is OK. The car's dashboard gives you information needed to measure
performance against some standard, like how fast you can go, whether you need gas, and
whether the car is running properly so that you can reach the destination.
In the business world a dashboard serves a similar purpose: providing information to those
responsible for accomplishing a goal so that they can make informed decisions in order to
reach their goal. Informed managers can ask better questions and make better decisions.
A dashboard supports learning about an organization, its work processes, and its interaction
with its environment and other processes. A dashboard should direct attention to
exceptions that need to be addressed, and then provide supporting information to identify
root causes and/or related exceptions. For example, you don't just notice that there are
many incidents that have had no recent activity; you ask why and then make decisions that
address the “why.”
An operational dashboard, such as the CA Unicenter Service Desk Dashboard, should also
provide timely access to information so that, for example, information about incident
inactivity arrives in time to address it before customers begin to complain.
Important! The installation procedures for the CA Unicenter Service Desk Dashboard are
described in detail in product-specific installation and administrator guides. You must use
these guides to correctly install the CA Unicenter Service Desk Dashboard before any of the
customizations described in the sections below can be completed. To locate product
documentation, log in to CA SupportConnect via the Technical Support link under
www.ca.com/support (http://www.ca.com/support), and then click Documentation under
Downloads.
The CA Unicenter Service Desk Dashboard is, in fact, a CA CleverPath Forest & Trees
application called uspDashboard.ftv. By default, it is installed in the following folder:
drive:\program files\ca\service desk Dashboard
The CA Unicenter Service Desk Dashboard queries data directly from the CA Unicenter Service Desk tables in the MDB database.
Database Connection
The CA Unicenter Service Desk Dashboard installation configures a data source named
CA_ServiceDesk that connects directly to the CA Unicenter Service Desk database. Depending
on the RDBMS used for the CA Unicenter Service Desk database, you may need to complete
additional tasks to establish a database connection. See the CA Unicenter Service Desk
Dashboard Administrator Guide for details.
Note: The CA Unicenter Service Desk Dashboard does not leverage the CA Unicenter
Service Desk object layer and, therefore, does not recognize CA Unicenter Service Desk
data partitions.
The installation of CA Unicenter Service Desk Dashboard on the user's desktop includes a
CA CleverPath Forest & Trees runtime, which supports the use of CA Unicenter Service Desk
Dashboard.
The CA Unicenter Service Desk Dashboard also leverages external files such as DLLs for
tasks such as date processing.
When drilling down from CA Unicenter Service Desk Dashboard into CA Unicenter Service
Desk, the launch of the CA Unicenter Service Desk web client may prompt you to enter
your username and password. CA Unicenter Service Desk authentication can, however, be set up so that users who are already logged into their local machine or domain are not prompted again, avoiding the additional login step. Further information on authentication
may be found in Chapter 15, Security.
CA Unicenter Service Desk Dashboard lets you build applications that visualize, analyze,
and monitor data. It provides a simplified, consistent interface to a large palette of
application controls that capture user input and display data, charts, reports, and pictures.
It can collect and combine data from nearly anywhere—relational databases,
multidimensional data sources, flat files, spreadsheets, XML or HTML documents—and
quickly present it in a single, completely customizable application that can be run on a LAN,
an intranet, or the Internet.
CA Unicenter Service Desk Dashboard has out-of-the-box filters that can be applied to
enable managers to see a subset of data that relates to their personal business needs.
These filters are as follows:
■ By Group Assigned
■ By Analyst Assigned
■ By Customer Organization
In larger organizations, especially those that have multiple service desks, the above filters
may provide data that is too broad for the needs of a local manager or a user with only a
limited frame of reference. Therefore, there may be a need to customize CA Unicenter
Service Desk Dashboard so that additional query constraints limit the data returned.
Historical Data
CA Unicenter Service Desk Dashboard mostly focuses on recent data, rather than historical
data. Although some items have a longer range, much of the content relates to activity
within the last 60 days or less. The queries that underpin the out-of-the-box content have
been optimized so that a reasonable balance exists between the quantity of data returned
and the processing overhead required to handle the data. In very large systems,
implementers may want to customize the longer-range or high-volume queries to return
shorter period data sets.
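Because MDB timestamps are stored as seconds since January 1, 1970 (as described later in this chapter), tightening a query's date range is a simple numeric comparison. The sketch below illustrates the idea with SQLite and invented table contents; the real dashboard queries live inside the .FTV file and target the MDB, so treat the table and column names (call_req, open_date) as assumptions for illustration only.

```python
import sqlite3
import time

# Illustrative only: a miniature call_req table with open_date stored as
# seconds since the epoch, mimicking the MDB convention described in this
# chapter. Real dashboard queries are defined in the .FTV file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call_req (id INTEGER, open_date INTEGER)")
now = int(time.time())
conn.executemany(
    "INSERT INTO call_req VALUES (?, ?)",
    [(1, now - 5 * 86400),     # opened 5 days ago
     (2, now - 90 * 86400)],   # opened 90 days ago
)

# Narrowing the window from 60 days to 30 reduces the rows returned,
# which is the essence of the customization suggested above.
cutoff = now - 30 * 86400
rows = conn.execute(
    "SELECT id FROM call_req WHERE open_date >= ?", (cutoff,)
).fetchall()
print(rows)  # [(1,)]
```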
Data Export
CA Unicenter Service Desk Dashboard can export data in a number of formats, including delimited text, HTML, and Microsoft Excel spreadsheet formats.
Important! Instructions contained in this document assume you have successfully opened
the CA Unicenter Service Desk Dashboard application using the CA CleverPath Forest &
Trees developer. We recommend that you save the original version of the .FTV file in a safe
location. It is also assumed you have experience working with CA CleverPath Forest &
Trees. To learn more about CA CleverPath Forest & Trees application building and
customization, attend the CA Education course FT320 CleverPath Forest & Trees: Building
Actionable Dashboards. Information on this class can be found at www.ca.com/education
(www.ca.com/education).
Helpful Tips
Version Control
Before customizing the CA Unicenter Service Desk Dashboard, we recommend that you
make a backup of the original out-of-the-box version of the application. This can be done in either of two ways:
■ Opening the CA Unicenter Service Desk Dashboard application in the developer version
of CA CleverPath Forest & Trees and choosing File, Save As from the menu.
■ Navigating to the program files\ca\service desk dashboard folder and making a copy of
the USPDashboard.ftv file.
Name the copy in a way that identifies it as the original version of the software, or place it
in a folder that identifies it as such. Because both the icon placed on the PC's desktop and the Startup Menu entry created during the CA Unicenter Service Desk Dashboard installation look for a file called USPDashboard.ftv, the file you modify (the application that users will ultimately use) must either be named USPDashboard.ftv, or the icon and Startup Menu entry must be modified to point to the new file name. If you choose to use a new file name, we recommend naming it z_USPDashboard.ftv, in keeping with the CA Unicenter Service Desk standard for customizations.
The CA Unicenter Service Desk Dashboard application is designed so that when you choose the File, Exit option from the menu with the blue background on the Home tab, the file is automatically saved before it is closed. If you do not want to save your changes, do not close the file that way. Instead, close the file from the menu on the CA CleverPath Forest & Trees developer toolbox. (Press F6 if needed to show the toolbox.) When choosing File, Exit from the developer toolbox, you are prompted whether to save your changes, and you can respond “no.”
Note: Using the File, Exit on the CA Unicenter Service Desk Dashboard Home tab will save
your changes under the existing filename. Do not exit the CA Unicenter Service Desk
Dashboard that way if you do not want your changes saved. If you want to exit CA
Unicenter Service Desk Dashboard without saving changes, exit using the File, Exit option
from the CA CleverPath Forest & Trees developer toolbox menu. If you want to save your
customizations under a different filename, use the File, Save As menu option on the CA
CleverPath Forest & Trees developer toolbox.
Changing a graph type (for example, from a bar to a pie graph) is easy with right-click
options available with the CA CleverPath Forest & Trees developer.
Steps:
1. Navigate to the group with the graph by clicking on the associated tab.
2. Right-click on the graph and choose Graph, Type from the pop-up menu.
3. Select the desired type and associated options (for example, 3D).
4. Click OK.
Changing graph colors is easy with right-click options available with the CA CleverPath
Forest & Trees developer.
Steps:
1. Navigate to the group with the graph by clicking on the associated tab.
2. Right-click on the graph and choose Series, Type from the pop-up menu.
3. Click on the series you want to change from the list on the left.
Note: If you cannot access this option, close this dialog, then press F7 to go into layout
mode; right-click on the graph and choose Graph from the pop-up menu. When the
graph is open, choose Graph, Modify from the menu and uncheck Rebuild Graph When
Data Columns Change.
5. Click OK.
Changing a graph title is easy with right-click options available with the CA CleverPath
Forest & Trees developer.
Steps:
1. Navigate to the group with the graph by clicking on the associated tab.
2. Right-click on the graph and choose Annotations, Text from the pop-up menu.
5. Click OK.
6. Exit CA CleverPath Forest & Trees developer and save your changes.
8. If, after following all of the above instructions and testing your screens, the changes are not reflected in CA Unicenter Service Desk Dashboard, try the following alternative steps:
Alternative Steps:
1. Navigate to the group with the graph by clicking on the associated tab.
In our example we will modify the Active by Incident Area graph under Requests /
Incidents / Problems, Request / Incident / Problem Areas. We will modify the title of
this graph to read Active by Incident Category.
3. From within layout mode, determine the name of the view that holds the graph of
interest.
Note: Views are the building blocks of a CA CleverPath Forest & Trees view file. For
more information on views, please refer to the CA CleverPath Forest & Trees online
help.
4. From the menu, choose File, Find. The View File Find dialog displays.
Note: The Find menu can be a valuable resource when making modifications to CA
Unicenter Service Desk Dashboard. The Find utility lets you search for, and in most cases edit, occurrences of a specified text string in any part of the current view file,
including formulas, queries, object definitions (views, groups, substitutions) and view
data.
6. Notice the many checkbox options available below the Find field. Preferred options to select here include the following:
a. Options: Select Use LIKE; clear 1st occurrence; clear Match case
8. Notice the Results window occupying the bottom of the View File Find dialog. Each row
in the Results window lists a unique occurrence of the string in the Find field.
9. Using the Object column in the Results window, open any entry with sub-type Formula.
10. Notice the Edit Formula window is displayed and the string is highlighted. Look for lines
of code that use the name of the view.
11. In the previous example, the variable tmpTitle holds the string that will be placed into
the Graph Title. As a result, you must find the variable assignment (usually
immediately above the SetProperty command), and change the string to the desired
title.
13. Double-click the Object column for the next line in the Results window, making any further changes if necessary.
14. Exit CA CleverPath Forest & Trees developer and save your changes.
15. Restart CA Unicenter Service Desk Dashboard and test your screens.
Adding or removing point labels (the values that show on or next to graph bars, pie slices,
or lines) is easy with right-click options available with the CA CleverPath Forest & Trees
developer.
Steps:
1. Navigate to the group with the graph by clicking on the associated tab.
2. Right-click on the graph and choose Annotations, Text from the pop-up menu.
4. Click the Visible checkbox. (When checked, point labels show; when unchecked, they do not.)
5. (If adding point labels) Choose the desired Location, Line Style, and Value Format from the drop-down lists and choose which labels to show from the check boxes.
6. Click OK.
Many other changes can be made to graph type views using the options available with the
right-click menu, including the graph background, axes title formats and fonts, and the
graph gridlines. Most changes are fairly intuitive. For help, refer to the CA CleverPath Forest
& Trees help file.
Note: You may find it easier to make graph changes while in the Edit Graph Window. To
open the Edit Graph Window, press F7 to go to Layout mode; then right-click on the graph
view and choose Graphs.
Changing table fonts and colors is easy with right-click options available with the CA
CleverPath Forest & Trees developer.
Steps:
1. Navigate to the group with the table by clicking on the associated tab.
2. Right-click the table somewhere other than a column title and choose Layout, Attributes from the pop-up menu. (Note that right-clicking a column title provides a different pop-up menu.)
3. Click on the attribute you want to change from the list on the left.
4. Click Change.
5. If appropriate, click the Font tab and choose the font desired.
6. If appropriate, click the Shading tab and choose the color desired.
7. Click OK.
9. Click OK.
Changing a table title is easy with right-click options available with the CA CleverPath
Forest & Trees developer.
Steps:
1. Navigate to the group with the table by clicking on the associated tab.
2. Right-click the table somewhere other than a column title and choose Layout, Title from
the pop-up menu. (Note that right-clicking a column title provides a different pop-up
menu)
5. Click OK.
Using a built-in feature of CA CleverPath Forest & Trees, you can choose not to show a
column in a table without having to change the query.
Steps:
2. Right-click on the heading (title) of the column you want to hide and choose Layout.
(You'll see a window similar to the one below.)
CA Unicenter Service Desk Dashboard users may find column totals (sums, averages,
minimums, and so on) helpful to their analysis. It's easy to add these.
Steps:
2. Right-click the title of the column and choose Layout (you'll see a window similar to the
one below).
4. Choose the desired total type (for example, Minimum Value, Average) from the Total
drop-down list.
Note: You can use a “formula” total type to place a word such as “Total” or “Minimum” to the left of the total value, clarifying what the value represents.
Another way to highlight exceptions is to change the way information is displayed when, say, a number has crossed a specified threshold. CA CleverPath Forest & Trees lets you define alarms on table views that do just that. It is a two-step process: define the alarm, and then apply the alarm to a column. Important Note: The instructions below apply only to tables of data, not to graphs.
1. Navigate to the desired CA Unicenter Service Desk Dashboard page by clicking on the
associated tab.
2. Right-click on the view to which you want to apply the exception alert and choose
Alarms.
3. Click on the 1 tab at the top (you will see a window similar to the one below) or the
next available alarm number if you are defining more than one alarm.
4. Enter the criteria for your alarm (for example, a range from 5 to 999999).
For more information on defining alarms or specifying alarm conditions, see the
CA CleverPath Forest & Trees developer help file.
1. Right-click on the column header of the column to which you want to apply the alarm
and choose Layout (you'll see a window similar to the one below).
3. Click OK.
Note: By default, CA CleverPath Forest & Trees applies all alarms to the first column. If you don't want an alarm applied to the first column, right-click on the column header of the first column, choose Layout, and uncheck the alarm.
Several of the tabs in CA Unicenter Service Desk Dashboard include radio button views for
choosing time periods. For example, the Trends, Unattended group has a radio button view
named rbnResponseHours that offers choices for viewing tickets unattended for a time
period that ranges from 1 hour to 30 days. Different organizations have different volumes
of tickets and differing thresholds for what defines an exception worth managing. To
customize the time period choices to better fit your organization:
2. Right-click on the view and choose Formula from the pop-up menu (you'll see a window
similar to the one below).
3. Comment out the options you no longer want to show by placing // in front of the
corresponding line of code.
4. To add new options, create new lines using the existing ones as an example.
5. Click the test button (the check mark) to confirm there are no errors.
6. Click OK.
CA CleverPath Forest & Trees lets you schedule view calculations by enabling a few options.
The design of CA Unicenter Service Desk Dashboard makes it possible to schedule a
calculation of just one view that will in turn calculate all others on a CA Unicenter Service
Desk Dashboard page (group/tab). Note that CA Unicenter Service Desk Dashboard is
designed to show near-real-time data and, therefore, automatically refreshes data as the
CA Unicenter Service Desk Dashboard user navigates to a page (group/tab) after the
application is opened. These scheduled calculations take place in addition to automatic
refreshes to reduce the need to click Refresh on a CA Unicenter Service Desk Dashboard
page. Note also that scheduled calculations, as created in the steps below, only happen
while the application is open.
Steps:
6. Click OK.
A primary concern for those involved in reporting is how to get to the data. This includes
how to connect a report to the data source, and how to get the necessary data elements
into the report.
CA Unicenter Service Desk's reporting options are designed to connect to and use
CA Unicenter Service Desk data in various ways, allowing flexibility for reports to address
different needs of the organization. To support various reporting options, there are various
ways to reference and understand the data. This section tells you about the resources you
can use to reach your reporting goals.
Predefined Reports
Once a report writer has gained confidence by experimenting with a new reporting tool for a while, the reporting documentation becomes much easier to apply to real-world requirements.
For summary, detail, and analysis reports, extensive documentation is provided in the
Custom Reports section of Chapter 3 in the CA Unicenter Service Desk Modification Guide.
For Crystal reports, the Crystal developer (a third-party tool) has a comprehensive online
help facility, coupled with an extensive knowledge base located on the Business Objects
website. Their website also provides forums, and there are several news groups and
independent forums on the internet.
Microsoft Access has experts in every corner of the globe and many companies. There is no
shortage of help when dealing with Microsoft products such as Access and Visual Basic for
Applications (VBA). Note that there are no predefined CA Unicenter Service Desk
Knowledge Tools reports in the MS Access format.
As the reporting tools become familiar, the following resources are available to help the report writer navigate CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools data and relationships.
CA Unicenter Service Desk tables are part of a larger database, CA's management database
(MDB). The MDB comprises a very large number of tables. For a report writer interested
specifically in CA Unicenter Service Desk, it is helpful to understand which subset of tables
is specific to CA Unicenter Service Desk.
The CA Unicenter Service Desk entity relationship diagram (ERD), available from CA, is the
perfect tool for understanding the table names, field names, and relationships between the
tables. It is a graphical representation of the principal tables and relationships that make up
the CA Unicenter Service Desk subset of the MDB. A PDF version of the ERD can be found
at the CA Unicenter Service Desk product home page on SupportConnect. A poster version
is also available. For assistance, contact Technical Support at http://ca.com/support.
The ERD is intended to provide an overall view of the prominent tables and relationships within the CA Unicenter Service Desk database. As is typical for an ERD, the image is made up of a series of boxes, representing tables, and lines, representing relationships between the tables. Each box includes a left-justified heading that is the table name, followed by the names of the fields that make up each table row. Each relationship line begins with a solid circle and terminates with an unfilled circle.
To find a relationship between two tables, note the name of the table (child) where the solid
circle begins, then follow the line until it terminates at another table (parent), with the
unfilled circle. The solid dot indicates there is a field in the table that holds the value of a
record identifier in the table at the unfilled circle. In this way, the ERD can guide the strategy for building a new report from scratch, or help a user understand the existing SQL in prepackaged CA Unicenter Service Desk reports.
In an attempt to keep the image readable, not all of the tables and relationships have been
included in the ERD. Therefore, the graphical depiction should not be mistaken for a
complete picture of the entire CA Unicenter Service Desk database. Noticeably absent from
the ERD are all tables relating to CA Unicenter Service Desk's complementary product,
CA Unicenter Service Desk Knowledge Tools. Also for legibility, only one relationship line is
included between any two tables, even if there are several fields within one table holding
pointers to the second table. For example, the diagram shows a single relationship line
beginning at the call_req table and terminating at the ca_contact table, even though there
are actually five different fields in the call_req table holding values for records in the
ca_contact table.
To understand—in great detail—the specific relationships between the fields within the
CA Unicenter Service Desk database, refer to the $NX_ROOT/site/ddict.sch file on
CA Unicenter Service Desk's primary server. It is from the text file ddict.sch that this
diagram was created, and in that file, foreign keys for CA Unicenter Service Desk and
CA Unicenter Service Desk Knowledge Tools are exhaustively and completely represented.
An excellent reference for report writers can be found on the CA Unicenter Service Desk
primary server at $NX_ROOT/site/ddict.sch. Ddict.sch is a text file containing all table
names, field names, and foreign key references addressed by CA Unicenter Service Desk
and CA Unicenter Service Desk Knowledge Tools. The ddict.sch file is generated and
managed by CA Unicenter Service Desk and its utilities and should never be changed
manually. The file's function within CA Unicenter Service Desk is to serve as an aliasing
bridge between CA Unicenter Service Desk and the database. The file is part of CA
Unicenter Service Desk's internal processing that allows many command line utilities, such
as pdm_extract, as well as summary, detail, and analysis reports, to use identical syntax no
matter the underlying database.
While the ERD, discussed previously, can be used to plan the tables and fields that will be
included in a report, ddict.sch can be used to plan the join syntax and field lengths
necessary for building and displaying the report. Following a few rules of thumb, users will
find ddict.sch a useful reference for understanding foreign keys and field lengths when
building summary, detail, and analysis reports, as well as Crystal and MS Access Reports.
Ddict.sch has three distinct sections: a table definition section that defines table structures; a "table info" section that defines indexes for each table; and a table/field aliasing section, recognizable because each alias entry begins with "p1"; this section defines aliases for all tables and some fields.
Relationships in ddict.sch
When starting from the database or from the ERD, a report writer will likely have noted the
names of the tables to be used in the report. As an example, a report listing active
incidents would likely use the call_req, ca_contact, and pri tables. The next step would be
to pinpoint the foreign key relationships between those tables. To find the relationships, the report writer would first find the table alias (every table in ddict.sch has an alias definition, even if the table name and alias name are exactly the same): open ddict.sch and search the document for the following string:
CURR_PROV call_req
The table name following the CURR_PROV keyword is the name of the table exactly as it is found in the database. Note the alias of the table, which is the table name immediately following the p1 keyword at the beginning of the line; in this case the alias is Call_Req. With the alias determined, search the ddict.sch file for the key words:
TABLE Call_Req
The entire definition of the call_req table is enclosed in the curly braces, {}, following the
key words TABLE Call_Req; each field definition is on its own line terminated by a
semicolon.
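As a sketch of how this structure can be navigated programmatically, the following extracts a table definition from ddict.sch-style text. The sample body is invented to match the description above (the TABLE keyword, curly braces, and semicolon-terminated field definitions) and is not copied from a real ddict.sch.

```python
import re

# Invented sample in the shape described above; not real ddict.sch content.
ddict_text = """
TABLE Call_Req
{
    id UUID ;
    summary STRING 240 ;
    customer UUID REF ca_contact ;
}
"""

# The definition follows "TABLE <Alias>" and is enclosed in curly braces;
# each field definition sits on its own line and ends with a semicolon.
match = re.search(r"TABLE\s+Call_Req\s*\{(.*?)\}", ddict_text, re.DOTALL)
fields = [line.strip().rstrip(";").strip()
          for line in match.group(1).splitlines()
          if ";" in line]
print(fields)  # ['id UUID', 'summary STRING 240', 'customer UUID REF ca_contact']
```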
Within the table definition, all foreign keys are defined by the keyword REF followed by the parent table name.
Searching for every instance of REF will reveal all foreign keys in the table. The relationship
is characterized by one of the following key words:
UUID
INTEGER
STRING nn
UUID means that the parent table will have a primary key called "id" defined as UUID in
ddict.sch. Here is an example:
Call_Req.customer = ca_contact.id
INTEGER means that the parent table has either an integer primary key field named id, or a field named enum that is defined as an integer. When the enum field is present, it takes precedence as the field that completes the foreign key relationship. For example, the relationships for Call_Req.priority and Call_Req.department are defined as follows:
Call_Req.priority = Priority.enum
Call_Req.department = ca_resource_department.id
The reason for the difference is that the priority table has an enum field defined in addition to its primary key "id," while the ca_resource_department table has no enum field defined, so the default field for completing the foreign key relationship is the primary key, "id."
The third type of foreign key definition, STRING nn, refers to a string field in the parent table called PERSID, unless there is a field called "code." When the "code" field is defined, it is used instead of the PERSID field. As a result, the relationships for Call_Req.category and Call_Req.status are as follows:
Call_Req.category = Prob_Category.persid
Call_Req.status = Cr_Status.code
The Cr_Status table includes a string field named code, so it supersedes the PERSID field in the foreign key relationship, whereas the Prob_Category table has no field named "code," leaving the default field PERSID as the relationship field.
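The precedence rules described above (UUID always joins to id; INTEGER prefers enum over id; STRING nn prefers code over persid) can be summarized in a short sketch. The parent field sets below are invented for illustration; only the precedence logic reflects the text.

```python
def foreign_key_target(ref_type: str, parent_fields: set) -> str:
    """Pick the parent-side field that completes a ddict.sch REF,
    following the precedence rules described in the text."""
    if ref_type == "UUID":
        return "id"
    if ref_type == "INTEGER":
        # enum takes precedence over the primary key id when present
        return "enum" if "enum" in parent_fields else "id"
    if ref_type.startswith("STRING"):
        # code takes precedence over persid when present
        return "code" if "code" in parent_fields else "persid"
    raise ValueError("unknown REF type: " + ref_type)

# Matches the four examples given in the text (field sets are invented):
print(foreign_key_target("INTEGER", {"id", "enum", "sym"}))   # enum (Priority)
print(foreign_key_target("INTEGER", {"id", "sym"}))           # id (ca_resource_department)
print(foreign_key_target("STRING 30", {"persid", "sym"}))     # persid (Prob_Category)
print(foreign_key_target("STRING 12", {"persid", "code"}))    # code (Cr_Status)
```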
Because summary, detail, and analysis reports access the database through native CA
Unicenter Service Desk connections, the aliased table and field definitions in ddict.sch are
the proper syntax to use within those reports. Alternatively, the Crystal and MS Access
reports connect to the database using ODBC technology, thus avoiding the native CA
Unicenter Service Desk architecture, and as a result must address the database with names
explicitly defined in the DBMS. Therefore, when Crystal and Access Reports are compared
to summary, detail, and analysis reports, there will be some notable differences in the
names of tables and fields. A good example of aliased field names vs. explicit DBMS names
is the primary key of many of the tables in the CA Unicenter Service Desk schema.
The primary key of every table in ddict.sch is always aliased as "id" if it is not already
named id. Here is an example of a query using the aliased names exactly as defined in
ddict.sch (this syntax is relevant for summary, detail, and analysis reports):
ON Call_Req.affected_rc = ca_owned_resource.id
Even though ddict.sch is a very good reference for figuring out the foreign key relationships
between tables, note that the syntax for SQL when using Crystal and MS Access reports
must match the un-aliased database, as opposed to the aliased ddict.sch names.
Note the difference between the aliases and the actual database names in this SQL
Statement (this syntax is relevant for Crystal and MS Access reports):
ON call_req.affected_rc = ca_owned_resource.own_resource_uuid
Keep in mind that all the table names are aliased in ddict.sch, as well as some field names.
The aliased fields can be found within the same syntax as the aliased tables, with field
names enclosed in curly braces. Here is an excerpt from ddict.sch defining the
ca_owned_resource table and its primary key:
id -> own_resource_uuid ;
The syntax above is the ddict.sch code that declares there is a table named
ca_owned_resource in the database whose definition includes a field named
own_resource_uuid; but the field name in ddict.sch (and thus in summary, detail, and analysis reports) will always be known as "id."
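The aliasing behavior can be pictured as a simple lookup: given a ddict.sch table/field pair, return the explicit DBMS column name that Crystal and MS Access reports must use. The one mapping shown is taken from the ddict.sch excerpt above; everything else in this sketch is illustrative.

```python
# One real mapping from the ddict.sch excerpt above; a full implementation
# would parse every "alias -> column" line in the aliasing section.
ALIASES = {
    ("ca_owned_resource", "id"): "own_resource_uuid",
}

def unalias(table: str, field: str) -> str:
    """Translate a ddict.sch field alias to the explicit DBMS column name;
    fields without an alias entry pass through unchanged."""
    return ALIASES.get((table, field), field)

# The join from the summary/detail/analysis form can be rewritten for
# ODBC-based reports by unaliasing the parent key:
print(unalias("ca_owned_resource", "id"))   # own_resource_uuid
print(unalias("call_req", "affected_rc"))   # affected_rc (no alias entry)
```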
When viewing any timestamp through the CA Unicenter Service Desk web interface or
reporting interfaces, the timestamp is presented in the local time and local format of the
viewing user. A user can compare the open date of an incident through the CA Unicenter
Service Desk web interface to the open date of the same incident in a detail report, to the
open date in a Crystal Report, and see a consistent timestamp, no matter the location of
the database server or the application server, or the state of daylight saving time, and so on. If two users in two different time zones view the same incident, the open date will be displayed in the local time of each user: for example, 1 hour apart in neighboring time zones, depending on daylight saving time.
Behind the scenes, products using the MDB are storing their dates as the number of
seconds since midnight, January 1, 1970. When a timestamp is displayed to a CA Unicenter
Service Desk user, there is a formula, a time adjustment, and a format applied to the
stored value, which allows the display to be relative to the person viewing the interface.
This is a large part of the reason the Crystal reports require the u2lpdmtime.dll file on the
local machine. u2lpdmtime.dll houses functions that interact with the operating system to
provide the exact time stamp for the user. Available functions are as follows:
■ PDMTimeToDateTime
■ DateTimeToPDMTime
These functions are provided to Crystal Report writers so that all timestamp handling will be
consistent with CA Unicenter Service Desk's timestamp handling. The functions can be
selected for use in the Formula Editor, listed in the Function Tree window, under expanded
headings: Functions->Additional Functions->pdmtime. They can be found throughout the
prepackaged Crystal Reports in the formula editor, used for date formulas.
Similar functions are provided in MS Access reporting, stored in the locally installed library
ctime.dll. The functions are as follows:
■ TimeToAsciiI
■ AsciiToTimeI
These functions are provided to MS Access application writers so that all timestamp
handling will be consistent with CA Unicenter Service Desk's timestamp handling. The
functions are declared in the MS Access database using VBA under the Modules object,
classified in the Misc Modules section, and are wrapped in public functions CvrtToUnixTime
and CvrtFromUnixTime. CvrtToUnixTime and CvrtFromUnixTime are used in queries
throughout the MS Access application to convert the timestamps to the users' local
requirements.
Normally, a customer will choose to implement either the MS Access or the Crystal reports
option, depending on the particular skill set available within the organization and depending
on the organization's enterprise-wide business intelligence strategy.
Taking the existing skill set into consideration is important because even though the
predesigned reports are useful to most CA Unicenter Service Desk implementations, it is
expected that many organizations will desire or even need to change the existing reports to
match intra-organization processes, match company-wide interface standards, or simply to
provide a degree of personalization to the reports.
No matter which reporting option is chosen, there are some lesser-known tips and tricks
that help in producing better reports and understanding the database. The following is a
selection of items that may prove helpful.
As a prerequisite, we recommend that the report writer request query access to the
database directly, so that the raw SQL can be tested directly against the data. This
approach allows the report writer to test the results of any queries directly and thus check
the underlying validity of the report without having to consider how the content or
appearance of a report might have been restructured or modified by any intermediate
layers introduced by the reporting tool.
Each database SQL tool acts differently when a binary field is displayed in its window or on
its command line. Because binary fields are often displayed in their raw format, it is useful
to know that the MDB is packaged with two formulas that convert the binary to
hexadecimal, making it possible to compare records with the naked eye.
The following syntax will return the requested records with the contact_uuid expressed in
hex, making visual comparison possible:
For Oracle:
For Ingres:
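The original Oracle and Ingres statements are not reproduced here. As a conceptual stand-in only, the same idea can be demonstrated with SQLite, whose built-in hex() plays the role of the vendor-specific conversion (Oracle's RAWTOHEX, for example); the table below is a tiny invented stand-in, not the real MDB schema:

```python
# Illustration of binary-to-hex conversion using SQLite's built-in hex();
# Oracle and Ingres provide comparable conversions (e.g. Oracle's
# RAWTOHEX). The table and sample row are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ca_contact (contact_uuid BLOB, last_name TEXT)")
conn.execute("INSERT INTO ca_contact VALUES (?, ?)",
             (bytes.fromhex("a1b2c3d4"), "Smith"))

# Expressing the binary key in hex makes records comparable by eye.
row = conn.execute(
    "SELECT last_name, hex(contact_uuid) FROM ca_contact").fetchone()
print(row)  # ('Smith', 'A1B2C3D4')
```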
Even though the individual objects in the MDB relating to CA Unicenter Service Desk and
CA Unicenter Service Desk Knowledge Tools are connected through a very large number of
relationships, it is possible to break this down into major logical subject areas. Visualizing
subject areas is useful for both getting a handle on the database and for understanding
which tables will most likely be involved when a report is requested. Most often, when
CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools application
users and administrators ask for reports to be written, they will be referring to one of these
major subject areas.
This section will help a report writer understand the vocabulary to use when conversing
with people or groups requesting reports, and help in mapping the requirements that come
out of those conversations to tables in the database. Note that terminology can be changed
during the CA Unicenter Service Desk/CA Unicenter Service Desk Knowledge Tools
implementation, so the vocabulary used here pertains to non-customized software.
The subject areas listed are not intended to be an exhaustive list of tables, but instead a
general guideline for researching which tables to review when planning a report.
Application users who are interested in incident management and problem management
often speak in terms of comparing and enumerating incidents, problems, and requests.
There is central interest in understanding the volume of incidents, requests, and problems
over time periods, in understanding parties responsible, in categorizing, prioritizing,
comparing lengths of time between the open, resolve, and close dates, and in analyzing the
history of events related to incidents, problems, and requests.
Although not an official CA Unicenter Service Desk term, often people refer to requests,
incidents, and problems collectively as tickets. Some implementations focus on only one or
two of the three entity types, so a report writer may find they are dealing with
only requests (for example, if they have yet to adopt ITIL) or only with incidents and
problems (if they choose not to use request definitions in the system).
This table includes some of the most important considerations when reporting on incidents,
problems, and requests.
call_req This table could be considered the core of the incident and problem
management system. Each record in the table represents a single incident,
problem, or request.
act_log This table holds the history of events related to a request, incident, or
problem. In the application, users refer to this as the activity log. The
activity log has a row for each field change made to a call_req record. As a
result, the activity log tracks status changes, re-categorizations, assignment
history, free-form comments, priority changes, and so on.
There are multiple activity logs for each call_req record.
act_log.call_req_id = call_req.persid
cr_prp Properties are fields optionally exposed to the user for data entry when a
related category is chosen. A report writer is concerned with these fields
when they exist because they complete the picture of the request record.
There can be multiple (or zero) rows of properties for each request.
cr_stat The status indicates the relative standing of the request, incident, or
problem record determined by an organization's process. A report writer will
be asked for both the current status of all the records and the history of
each record's status. Examples of status include Open, Work in Progress,
Resolved, Closed. Additionally, the field call_req.active_flag is an indicator of
the state of the request, incident, or problem record - it can be either 1
(active) or 0 (inactive). Often the active_flag and status fields are used in
conjunction to qualify or disqualify a record from a report.
call_req.status = cr_stat.code
urgncy The urgency of a request, incident, or problem record. This may or may not
be used by each implementation. From a reporting standpoint, whether or
not the urgency table is used is up to interpretation of the implementers.
call_req.urgency = urgncy.enum
crt The call_req type table is handled completely behind the scenes. This is not
something visible to end users. It is critical to a report writer because it is
the sole indicator of whether the record is an incident (I), problem (P), or
request (R or '').
call_req.type = crt.code
prob_ctg This table holds the details pertaining to the request area, incident area, or
problem area of the record (depending on the record type). This is
sometimes referred to as the category of the record. When users select a
category through the application interface, they are navigating a hierarchy
of terms that leads them to selecting the category that most closely
identifies the symptom of the problem that led them to create a record.
When a report writer looks at the prob_ctg table, however, he/she sees a
string of terms separated by the period symbol (that is,
Software.Microsoft.Excel.Formula Error). This can be parsed and used to
produce reports to answer questions as specific as "How many incidents
were created because users saw a Formula Error in Excel?"; or as general as
"How many incidents were generated because users saw errors in their
software?".
call_req.category = prob_ctg.persid
rootcause The Root Cause table has the same ability as the prob_ctg table to hold a
"dot" notation that can be parsed, but is usually implemented as a single
string. The root cause table holds terms that most closely describe the
actual root cause of the problem. Examples of root cause records could be
user error, defect, documentation, and so on.
Reporting on the root cause table will allow answering questions such as
"How many incidents were created as a result of documentation
deficiencies?" It can also give managers insight into cause-and-effect
impacts when combined with categories, such as "How many incidents were
created for Excel as a result of documentation deficiencies?"
Categories record a symptom while root causes record the root of the
problem.
call_req.rootcause = rootcause.id
pri The Priority indicates to a user in what order the request, incident, or
problem needs to be addressed. There will rarely be a report that does not
take priority into consideration. As a report writer, it is very important not to
confuse the pri.enum field with the pri.sym field. The pri.enum field, with values
ranging from 0 to 5, is an integer that is used as a foreign key value in many
tables. The pri.sym field is a text field, represented out-of-the-box by values
such as 5-None, that the user sees when selecting a priority through the
application interface.
call_req.priority = pri.enum
call_req.impact = impact.enum
interface Interface holds a value that describes the origin of the record in the call_req
table. The record can be created by a number of different interfaces such as
email, command line, graphical interface, web services, and so on.
call_req.created_via = interface.code
sevrty Severity is similar to impact and urgency in that it has a short list of values
that can be used to classify the call_req record.
call_req.severity = sevrty.enum
srv_desc This table is used for SLAs, but only if classic SLA processing is enabled. See
CA Unicenter Service Desk documentation for a discussion of classic SLA
processing.
call_req.support_lev = srv_desc.code
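The join conditions listed above can be combined into a single reporting query. As a sketch only, with SQLite standing in for the real MDB and a few invented sample rows, a typical active-ticket volume report looks like this:

```python
# Sketch of a ticket-volume query using the join conditions listed above
# (call_req.type = crt.code, call_req.status = cr_stat.code,
# call_req.priority = pri.enum). SQLite stands in for the MDB, and the
# sample rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE crt      (code TEXT, sym TEXT);
CREATE TABLE cr_stat  (code TEXT, sym TEXT);
CREATE TABLE pri      (enum INTEGER, sym TEXT);
CREATE TABLE call_req (type TEXT, status TEXT, priority INTEGER,
                       active_flag INTEGER);
INSERT INTO crt      VALUES ('I','Incident'),('P','Problem'),('R','Request');
INSERT INTO cr_stat  VALUES ('OP','Open'),('CL','Closed');
INSERT INTO pri      VALUES (1,'1-Highest'),(5,'5-None');
INSERT INTO call_req VALUES ('I','OP',1,1),('I','CL',1,0),('P','OP',5,1);
""")

# Count active tickets by type and priority, the shape of a typical
# volume report; active_flag qualifies records as described above.
rows = sorted(conn.execute("""
    SELECT crt.sym, pri.sym, COUNT(*)
    FROM call_req
    JOIN crt     ON call_req.type     = crt.code
    JOIN cr_stat ON call_req.status   = cr_stat.code
    JOIN pri     ON call_req.priority = pri.enum
    WHERE call_req.active_flag = 1
    GROUP BY crt.sym, pri.sym
""").fetchall())
print(rows)  # [('Incident', '1-Highest', 1), ('Problem', '5-None', 1)]
```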
Change Orders
A report writer will most likely focus on two main tables in the change order management
system, chg and wf. The structure of the change request tables is very similar to the
call_req set of tables, in that there is a status, category, template, properties, and so on.
Change requests also enable a workflow process, with a central table, wf, which is a slight
departure from the call_req tables. From a user standpoint, workflow is a series of tasks
that are completed in a particular (system enforced) order. The tasks are items such as
approvals, prerequisites to a change, the change task itself, post requisites, and so on.
A report writer will be asked for change request reports dealing with volume, assigned
parties, affected resources, and simultaneously scheduled change requests. A report writer
may also be asked for elapsed time between workflow tasks, or elapsed time between the
inception and completion of a change request.
The following tables make up the core of the change request focused area of CA Unicenter
Service Desk:
■ chg—Holds change request records. Each change has a record in this table.
■ chgalg—Holds the history of events related to each change request. In the application,
users refer to this as the change activity log. The change activity log has a row for each
field change made to a chg record. As a result, the change activity log tracks status
changes, recategorizations, assignment history, freeform comments, priority changes,
and so on. There are multiple change activity logs for each chg record. For example,
chgalg.change_id = chg.id.
■ chgcat—Holds the details pertaining to the change category of the record. When users
select a category through the application interface, they are navigating a hierarchy of
key words that leads them to selecting the category that most closely identifies the
need that led them to create a change request.
When looking at the chgcat table, the value stored is actually a single string of key
words separated by the period symbol (that is, Software.Microsoft.Excel.Upgrade). This
can be parsed and used to produce reports to answer questions as specific as "How
many change requests were created because users needed the latest version of
Excel?", or as general as "How many change requests have been created for Microsoft
products this quarter?" Here is an example: chg.category = chgcat.code.
■ chgstat—The status indicates the relative standing of the change order record
determined by an organization's process. A report writer will be asked for both the
current status of all the records and the history of each record's status. Examples of
status include Open, Abandoned, Resolved, Closed. Additionally, the field
chg.active_flag is an indicator of the state of the change record - it can be either 1
(active) or 0 (inactive). Often the active_flag and status fields are used in conjunction
to qualify or disqualify a record from a report. Here is an example: chg.status =
chgstat.code.
■ prp—Properties are fields optionally exposed to the user for data entry when a related
change category is chosen. A report writer is concerned with these fields when they
exist because they complete the picture of the chg record. There can be multiple (or
zero) rows of properties for each change. Here is an example: cr_prp.owning_cr =
call_req.persid.
■ toc, product, perscon, repmeth, pri, impact, interface—Each of these tables is used to
select information on a change request.
■ wf—Stores operational information related to the workflow. There are many workflow
tasks for each request, when the workflow is in use.
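The dot-separated category strings described for prob_ctg and chgcat can be parsed in the reporting layer to roll reports up at any depth of the hierarchy. A minimal sketch:

```python
# Sketch: split a prob_ctg/chgcat-style category string into its
# hierarchy levels so reports can aggregate at any depth. The sample
# value mirrors the example used in this chapter.
def category_levels(sym):
    """'Software.Microsoft.Excel.Formula Error' -> list of levels."""
    return sym.split(".")

levels = category_levels("Software.Microsoft.Excel.Formula Error")
print(levels[0])   # top-level rollup: 'Software'
print(levels[-1])  # most specific term: 'Formula Error'
```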
There are many other entities stored in the MDB that relate in some way to incident
management, problem management, or the CA Unicenter Service Desk. Here is a selection
of some additional tables that may be relevant to common reporting needs, categorized by
the topic or area concerned. For more details on these tables and their fields please review
“Appendix A” in the CA Unicenter Service Desk Modification Guide.
Issues
issue, issprp, issstat, isscat, iss_template, issalg, isswf, tskstat, wftpl, tskty, toc, product,
perscon, repmeth, pri, impact, interface, rootcause, srv_desc
Contacts
Configuration Items
Surveys
SLAs
srv_desc, attached_sla, bpwshft, svc_contract, sdsc_map, slatpl, evt, evt_dly, pri, call_req,
prob_ctg, chg, chgcat, issue, isscat, usp_contact, usp_organization, usp_owned_resource,
wftpl, wf, isswf
Security
There is no formal support for multilingual installations of CA Unicenter Service Desk r11.x.
Localized versions of CA Unicenter Service Desk are released to run in a single language on
an operating system that is released specifically for the language. Internationalization and
localization are formally supported in no other way.
CA Unicenter Service Desk does, however, contain features that can be used to simulate
some aspects of multilingual support. This chapter explains the methods used for these
simulations.
Note: These are field modification methods, not product features, and they are not
supported as product features. They have not been tested the way a released product
feature is tested. Their efficacy depends on the skill and knowledge of the implementer.
These methods have, however, worked satisfactorily at some sites.
For sites that want to try these techniques, they are explained here. Applying these
techniques is for experienced practitioners who are prepared to provide their own technical
support. Like any software project, good results will come from careful work and a
willingness to analyze and improvise as necessary.
These techniques must be installed manually, and language support depends on the
character sets and code pages supported by the relational database. More extensive
multilingual support is anticipated for future releases.
Note: Sites should test their work thoroughly before placing the modified product into
production.
Many organizations are required to support users in their native language. In Europe and
Canada, governments have passed laws to ensure that their citizens can communicate with
the government in the official language of their choice. In commercial enterprises, it makes
sense to provide service to customers in the language that they speak and understand. Not
doing so can result in miscommunication and frustration for users as they are forced to
navigate in an unfamiliar language.
This chapter explores two approaches to working with a multilingual user base and
examines some of the key issues to consider as you weigh your options and project scope.
As with all functionality, it is important to clearly establish the scope of your multilingual
requirements up front.
Specifically, the server's code page determines which languages can coexist in a single
installation. While you can have many Latin-based, single-byte languages working together,
you often cannot support double-byte languages like Chinese, Japanese, or Korean on the
same instance. The reverse is also true; that is, a server code page that supports double-
byte languages does not support single-byte languages. In either situation, characters that
are not in the original code page are stored or displayed inconsistently, potentially causing
garbled text and loss of data.
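The loss described above can be demonstrated with any single-byte code page. A short sketch using Windows-1252 (the code page discussed later in this chapter, which covers French and German accented characters but not Japanese):

```python
# Demonstration of code-page data loss: characters outside the server's
# code page cannot round-trip. Windows-1252 handles Latin-based accented
# characters but rejects double-byte scripts such as Japanese.
french = "problème résolu"
japanese = "解決済み"

# Latin-based text survives a cp1252 round trip intact.
assert french.encode("cp1252").decode("cp1252") == french

# Japanese text cannot even be encoded, which is the root of the
# garbling and data loss described above.
try:
    japanese.encode("cp1252")
except UnicodeEncodeError:
    print("characters not in cp1252 are lost or garbled")
```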
Three test fixes are available from CA Technical Support that provide improved UTF-8
character support on Microsoft Windows servers. These test fixes are T555150, T555164,
and T555167. See the documentation provided with them to learn about the known issues.
For instance, if you intend to support end users in only one of several languages, and you
do not plan to roll out a self service interface, focus your planning to ensure that your front
line staff can speak the required languages during the hours that support is offered. If you
intend to send email notifications, you will need to ensure that notifications are translated
appropriately. Simply ensuring that you have adequate coverage for all of your support
languages can become challenging if you need to span time zones and different business
hours.
Alternatively, if you intend to allow analysts and end users access to web interfaces in the
language of their choice, you need further planning to address the following:
■ How to provide appropriate telephone coverage for your customers that are calling in to
the Service Desk, while ensuring that any (typically unilingual) level 2 or level 3 expert
support teams are provided with ticket data in their language and that data is still
provided in the customer's language
■ How to provide error messages, alerts, and online help in all of the languages that you
are using
Clearly outline the corporate working language and the process for ensuring that support
teams are provided with the information necessary to perform their jobs.
Architectural Options
Two approaches for implementing multilingual functionality, with pros and cons for each
approach, follow:
The first approach for supporting a multilingual implementation focuses on the use of form
groups. This approach leverages the default functionality of CA Unicenter Service Desk form
groups to offer a user the service desk interface in one of several languages. Each user
defined in the CA Unicenter Service Desk system will be configured with an assigned form
group. This assigned form group determines which set of forms is displayed when the user
connects to, and interacts with, CA Unicenter Service Desk.
The form group approach is the most common for implementing multilingual functionality.
It has the benefit of being quick to implement if the scope of your implementation is limited
to a customer/employee self service interface, because self service has a limited number of
files that would need to be altered. As the project scope extends to include, for example,
the ability to switch between languages more dynamically, additional modifications would
need to be made and thus extend the time required to complete an implementation. The
following diagram shows how the form group option could be implemented:
The pros and cons of the form groups approach are shown in the following table:
Pros
3. May have success adding double-byte languages if working on a single
CA Unicenter Service Desk server implementation. For information about known
language issues, see the Limitations and Known Issues section.

Cons
3. Cannot easily choose to use another language. Must update the user's
contact record to specify a different form group (typically an Admin function).
4. Access to Analyst online help files in the language of choice requires
modifications.
The second approach, which uses the secondary server, is shown in the following diagram.
This approach leverages the availability of CA Unicenter Service Desk in multiple languages.
CA Unicenter Service Desk r11.1 versions in French and German and a CA Unicenter
Service Desk r11.2 version in English can be used together to achieve a multilingual
implementation.
Because of some minor changes from r11.1 to r11.2, some of the r11.1 files will need to
be configured to be compatible with r11.2. When all languages are available at the r11.2
level, compatibility will no longer be an issue.
In the secondary server approach, you would install the desired language as a service desk
secondary server and configure it to point to the service desk primary server. Then users
would simply point to the URL of the web server hosting the CA Unicenter Service Desk
interface in the language of their choice. If desired, this solution could be further enhanced
by leveraging the form group feature described in the previous section to add additional
language support and functionality.
Note: Of the four languages in which CA Unicenter Service Desk is currently available, the
secondary server approach would work for English, French, and German support. This
approach would not work for the Japanese language because it contains characters that are
not in the Microsoft Windows 1252 code page (the standard code page that supports
English, French, and German). These characters are not stored or displayed consistently
and, thus, can cause garbled text and loss of data.
The pros and cons of the secondary server approach are shown in the following table:
Pros Cons
While neither approach lets the user dynamically switch back and forth between languages
on demand, the secondary server approach lets the user start with a main page where they
choose the language that they want to use for the current session. This choice is
implemented by providing links to each web server, each labeled with the language
that server offers.
When planning the self service interface implementation, ensure that all interactions with
the end user are in his or her language. These interactions include the following:
■ Telephone support
■ Email notifications
■ Web interface
Telephone Support
Even in situations where you are providing a self service interface, there is likely to be a
requirement to provide telephone-based support (for example, for high priority issues). In
planning multilingual telephone-based support, the support hours must be understood and
appropriately staffed. You may decide to employ a follow-the-sun model and leverage an
available service desk that is in a different time zone to provide coverage during your
location's off-hours. In those cases, ensure that the remote location's scheduled front-line
personnel can provide the required language coverage or ensure that you have clearly
communicated to your user base what level of service can be expected in “off-hours”
situations.
The analysts that provide telephone support may or may not be provided with a
multilingual capable interface. In either case, the analysts need to take the information
provided by the end user and enter it into the system in a manner that can be used by the
level 2 or level 3 support teams and in a way that allows for the necessary management
reporting.
Email Notifications
Even if you are not intending to roll out a self service interface to end users, you may want
to send notifications to keep them informed about the current status of their
incidents/requests or the details of a proposed resolution to an issue. For this
communication to be effective, it must be in a language that can be understood by the
customer. Typically, a multilingual email message would be issued. The email is divided into
several sections with the same information repeated in each of the supported languages; a
note at the top of the email indicates that the message is translated below.
Where the type of email notification described in the preceding paragraph is not considered
an acceptable solution, and a notification containing a single language is required, a custom
notification script could be created to collect the formatted ticket information; then, based
on a flag setting in the user's contact record, the correct language template could be
selected and used as the basis for generating and sending the message content and
notification.
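The flag-driven template selection described above can be sketched as follows. The flag name, the template texts, and the ticket fields are all invented for illustration; a real implementation would live in a CA Unicenter Service Desk notification method script:

```python
# Sketch of single-language notification selection. The contact-record
# flag (here 'language'), the template texts, and the ticket fields are
# all hypothetical stand-ins for whatever the implementation defines.
TEMPLATES = {
    "en": "Ticket {ref} is now {status}.",
    "fr": "Le ticket {ref} est maintenant {status}.",
}

def build_notification(contact, ticket):
    """Pick the template matching the contact's language flag,
    falling back to English when no flag is set."""
    template = TEMPLATES.get(contact.get("language"), TEMPLATES["en"])
    return template.format(**ticket)

msg = build_notification({"language": "fr"},
                         {"ref": "cr:1234", "status": "Resolved"})
print(msg)  # Le ticket cr:1234 est maintenant Resolved.
```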
Careful planning is necessary to provide a multilingual self service web interface where a
user can do the following:
While all of these items are out-of-the-box features offered by the CA Unicenter Service
Desk self service interface, the following sections review areas to be considered to ensure a
positive and effective end-user experience in a multilingual context.
Knowledge Management
Regardless of the choice of approaches, the CA Unicenter Service Desk knowledge base is
designed with a single-language paradigm in mind. Specifically, this affects features like
noise words, where you can define the words from the knowledge document that should be
ignored, and search terms that can help create a better set of search results. However, a
multilingual implementation could produce a list of noise words that incorporate words from
other languages and omit words that represent a “non-noise” word from another supported
language. Other features categorized as parse settings that were not originally designed for
a multilingual approach will need to be reviewed and adjusted to produce the best results
possible.
Another factor to consider is that filtering could be used for any knowledge documents
written in a language other than the one being displayed to the end user. Filtering could be
accommodated by leveraging the View data partition functionality. A field or attribute in the
document would need to be set to identify the language of the knowledge document to
ensure that the View data partition can be built to filter the search results as required.
CA Workflow
The CA Workflow worklist interface does sense the language settings of the user's web
browser. The user is presented with screens translated into the selected language if it is
one of the following 10 languages:
■ English
■ Japanese
■ French
■ German
■ Italian
■ Spanish
■ Brazilian Portuguese
■ Simplified Chinese
■ Traditional Chinese
■ Korean
This minor adaptation involves adding a new attribute to the reference data table to store
a translated label for each language being supported, and adding a new factory that will
use this label field as the field to display when objects of this factory are selected. Three
key fields in the detail_cr.htmpl form in the employee interface require the use of this
functionality—status, priority, and category. The following example shows the MAJIC code
for the pri (Request/Incident Priority) object, which has been altered to add the following:
■ A new attribute (zfrench) for storing the French translation of the label or symbol field.
■ A new factory (pri_fr) for providing a version of the Request/Incident Priority object
that uses the zfrench field as the default COMMON_NAME value. This feature is used
when deciding which field to display when records of the pri_fr factory are selected in
the user interface.
More detail on MAJIC code syntax for attributes and factories is available in Appendix E:
“Object Definition Syntax” of the CA Unicenter Service Desk Modification Guide. In the
r11.2 version of the detail_cr.htmpl page, you can leverage this new factory functionality by
locating the default PDM_MACRO statement for the priority attribute shown (the following
line was extracted from the default detail_cr.htmpl file from the French installation).
In line with implementation best practices, be sure to copy the default detail_cr.htmpl form
from the <installation directory>\bopcfg\www\htmpl\web\employee directory to the
<installation directory>\site\mods\www\htmpl\web\employee directory prior to making the
suggested modifications.
Also be sure to copy the default dtlDropdown.mac macro from the <installation
directory>\bopcfg\www\macro directory to the <installation directory>\site\mods\
www\macro directory.
#args
For example:
"$args.&{attr}.&{common_name_attr}",
<PDM_FMT PAD=NO ESC_STYLE=JS2>"$args.&{attr}.&{common_name}"</PDM_FMT>,
Modify it as follows (noting the minor change to the third-to-last line only):
If you use the form group approach to provide multilingual functionality, the authentication
functionality can be configured to let users into the system based on their network
credentials without asking for login information. After a user has connected, the configured
access type controls which form group is used for presenting the language forms. If this
pass-through configuration is not possible to implement, the login.htmpl page must be
adapted to show the appropriate labels.
If you use the secondary server approach, it probably makes sense to start with an initial
page that provides the ability to select the language. This choice would trigger functionality
that routes the user to the correct URL of the web interface in the desired language.
Announcements
Either of two approaches can ensure that displayed announcements are in the correct
language based on the interface being viewed. These approaches are as follows:
■ Data partitions—Create a View type data partition constraint on the Note_Board table
that looks for some type of language identifier before displaying the announcement.
(Information on setting up Data Partitions is provided in Chapter 4: “Implementation
Policy” of the CA Unicenter Service Desk Administrator Guide in the section entitled
“Security.”) This could be an additional field or some predetermined text that prefixes
the announcement. Then, in the employee interface, you could strip off this prefix text
so that it is not displayed in the interface using the JavaScript substr() function as
shown.
In line with implementation best practices, be sure to copy the default home.htmpl
form from the <installation directory>\bopcfg\www\htmpl\web\employee directory to
the <installation directory>\site\mods\www\htmpl\web\employee directory prior to
making the suggested modifications.
p.appendChild(document.createTextNode(result[j]));
b. Use the following sample code to remove the first four characters of the
announcement's text:
if (j > 0)
    p.appendChild(document.createTextNode(result[j]));
else
    p.appendChild(document.createTextNode(result[j].substr(4)));
It is important to ensure that error messages and alerts are provided in the language being
used by the chosen interface. Both architectural options can accommodate displaying error
messages and labels in the correct language. These messages are primarily collected
together in the msg_cat.js file. By updating the msg_cat_site.js file for each interface with
the desired translations, you can ensure that users receive the labels and error messages in
the language that they have chosen to work in.
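The exact catalog format in msg_cat.js varies by release, so treat the following as a purely illustrative sketch of the override pattern rather than the product's actual file contents; the key names and object structure below are assumptions.

```javascript
// Illustrative only: site overrides (msg_cat_site.js) redefine entries
// established by the default catalog (msg_cat.js), so the last definition
// wins. The key names are invented; consult the shipped msg_cat.js for the
// real identifiers used by your release.
var msg_cat = {};

// Defaults, as they might appear in msg_cat.js
msg_cat["save_confirm"] = "Are you sure you want to save?";
msg_cat["required_field"] = "This field is required.";

// Site overrides with French translations, as a msg_cat_site.js maintained
// for a French-language interface might provide
msg_cat["save_confirm"] = "Voulez-vous vraiment enregistrer ?";
msg_cat["required_field"] = "Ce champ est obligatoire.";

console.log(msg_cat["save_confirm"]);
```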
Note: In line with implementation best practices, be sure to copy the default
msg_cat_site.js file from the <installation directory>\bopcfg\www\wwwroot\scripts
directory to the <installation directory>\site\mods\www\wwwroot\scripts directory
prior to making modifications.
Scoreboards
If you use the form group architectural approach to implement multilingual functionality,
setting up the scoreboards for the end-user groups is a straightforward matter of leveraging
the chosen access type/form group configuration to implement the desired scoreboard
labels and content.
If you use the secondary server approach, you can implement one of the following three
options:
■ A similar access type and form group approach as described in the previous paragraph
(that is, building a separate scoreboard for users in each language).
Note: In line with implementation best practices, be sure to copy the default
msg_cat_site.js and cst_fldrtree.js files from the <installation
directory>\bopcfg\www\wwwroot\scripts directory to the <installation
directory>\site\mods\www\wwwroot\scripts directory prior to making modifications.
When defining incident/request areas and their associated properties, you need an
approach for exposing the desired language-specific label/sample values for each property.
Some custom code can be assembled to take formatted delimited values from the default
property fields. These formatted values can then be parsed and used to populate the
appropriate label and sample value information into fields in the request/incident form of
the user interface for each of the supported languages.
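One way such custom code could parse formatted delimited values is sketched below. The "lang=text" delimiter convention is an assumption for illustration, not a product standard; you would choose and document your own format.

```javascript
// Illustrative only: parse a delimited, multi-language property label stored
// in a single default field, e.g. "en=Serial Number;fr=Num\u00e9ro de s\u00e9rie".
// The "lang=text" format is an assumed convention for this sketch.
function parsePropertyLabel(raw, lang) {
  var entries = raw.split(";");
  for (var i = 0; i < entries.length; i++) {
    var pair = entries[i].split("=");
    if (pair[0] === lang) return pair[1];
  }
  // Fall back to the first entry when the requested language is missing
  return entries[0].split("=")[1];
}

console.log(parsePropertyLabel("en=Serial Number;fr=Num\u00e9ro de s\u00e9rie", "fr"));
```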
Schema Additions
Regardless of the approach selected to offer multilingual user interface functionality to your
end users, it is advisable to add new fields to capture and display information in the
appropriate language. Besides those fields discussed in the previous Reference Data
section, and fields to help in the language classification of various items like
announcements and knowledge documents, it makes sense to add a description field for
each supported language. (You also may want to consider adding a summary field for each
supported language.) The description field corresponding to the language spoken by the
end user would then be displayed. The out-of-the-box description field (and summary
field) should be used to store the end-user's description of the request in the primary
corporate support language. This description field provides the typically unilingual
level 2 and level 3 support teams with a consistent place that holds the translated
information of the end-user's request, and it also provides a consistent source of data
for building management reports.
To assist in the population of this corporate support language field with a quick translation
of the customer's input request description, functionality offered by providers such as
www.freetranslation.com or www.babelfish.com can be integrated into the interface.
These sites offer in-line translation that can assist in quickly transferring the general idea
of a block of text from one language to another. While some limited functionality
(especially from www.freetranslation.com) is available at no charge, more useful
functionality that could be embedded and activated through a button on the service desk
interface would need to be purchased directly from those providers.
Activity Logs
To reduce the chance of exposing information to the end user in a language that they may
not understand, it makes sense to flag the majority of activity logs as “internal.” The
exception to this would be for activity logs that are written specifically for the end-user's
information such as Log Comments or Solutions. Using the internal flag causes the activity
logs to be hidden by default from the end-user's interface, thus reducing unintelligible
clutter on the screen. Level 2 and level 3 support groups attempting to post solutions to the
end user may need to initially post an internal comment to the ticket with the information
to be translated and receive assistance from another designated group (or tool) to help
translate the data and repost it for the end user.
The analyst and administrative interfaces are more complex and full-featured than the self
service interface. As such, if your multilingual implementation involves a requirement to
include these interfaces in the project scope, the project becomes significantly more
complex. We recommend that you reduce the scope of the project by excluding the
Administrative interface or by supporting only a subset of the languages planned for
the self service interface. This chapter assumes that only the web interface is targeted for a
multilingual implementation; the Java Client is excluded from the scope of this discussion.
Specifically, we will review the following:
■ Email notifications
Email Notifications
Where the type of email notification described in the preceding paragraph is not considered
an acceptable solution, and a notification containing a single language is required, a custom
notification script could be created to collect the formatted ticket information; then, based
on a flag setting in the user's contact record, the correct language template could be
selected and used as the basis for generating and sending the message content and
notification.
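A sketch of the template-selection step such a custom notification script might perform follows. The contact-record flag name (language_flag) and the template strings are assumptions for illustration; a real notification method would also read the ticket data and hand the result to a mailer.

```javascript
// Illustrative only: select a language-specific message template based on an
// assumed flag in the user's contact record, then fill in ticket details.
var templates = {
  en: "Ticket {ref} has been updated: {summary}",
  fr: "Le ticket {ref} a \u00e9t\u00e9 mis \u00e0 jour : {summary}"
};

function buildNotification(contact, ticket) {
  // Fall back to the corporate support language when no flag is set
  var tpl = templates[contact.language_flag] || templates.en;
  return tpl.replace("{ref}", ticket.ref_num)
            .replace("{summary}", ticket.summary);
}

console.log(buildNotification({ language_flag: "fr" },
                              { ref_num: "R1234", summary: "Imprimante en panne" }));
```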
The architectural options (form groups approach and secondary server approach) previously
discussed in the Architectural Options section of this chapter would also be valid options for
the Analyst and Administrative interfaces. Most of the discussion under the heading of the
Self Service Interface also applies to the Analyst and Administrator interfaces.
In this section, we briefly review the areas previously covered and then move on to
additional areas to consider when implementing a multilingual CA Unicenter Service Desk
solution.
Knowledge Management
Regardless of the choice of architectural option, the CA Unicenter Service Desk knowledge
base is designed with a single language paradigm in mind. The analyst interface is affected
in the same way as the employee interface with regard to features like noise words and
parse settings. Analysts who must support groups of customers working in more than one
language will probably not be able to leverage the simple data partition solution that
restricts the view to documents of a specific language. This can lead to cluttered returned
search results. To improve this situation, the knowledge document authors and
administrators can employ various techniques, such as providing language indicators in the
knowledge document content, to allow for obtaining improved results. In addition, tips and
training should be assembled by the knowledge authors to help equip users to obtain
optimal results when searching the knowledge base content. On-going analysis of the actual
search terms and associated results should also be a priority in providing guidance on fine-
tuning knowledge base searches. (This information is captured automatically by the system
and is available for this type of analysis.)
CA Workflow
In CA Workflow, analysts work with the system like the self service end users described
previously. The primary additional consideration is that Workflow administrators will need to
spend more time and effort to accommodate the multilingual nature of the workflow,
managing the forms and notifications that are generated during the workflow.
As with the employee interface, if you use the form group approach to provide multilingual
functionality, the authentication functionality can be configured to let analyst users into the
system based on their network credentials without asking for login information. After a user
has connected, the configured access type controls which form group is used for presenting
the language forms. If this pass-through configuration is not possible to implement, the
login.htmpl page must be adapted to show the appropriate labels.
If you use the secondary server approach, it probably makes sense to start with an initial
page that provides the ability to select the language. This choice would trigger functionality
that routes the user to the correct URL of the web interface in the desired language.
Announcements
Since analysts can create and deploy announcements, they must know how to create and
segment announcements for the self service interface. They must also be provided with an
updated analyst interface with the appropriate code to display the announcement records.
In the analyst interface, error messages and labels are set up and function like in the self
service interface.
Spell Check
Again, the out-of-the-box CA Unicenter Service Desk spell check functionality was designed
to work only in a single language mode. While you can add words from multiple languages
into the lexicon file using the pdm_lexutil.exe utility, this produces a dictionary file that
may not be optimal. To address this, you could remove the spell check feature from the
CA Unicenter Service Desk screens entirely or carefully construct the spell check's
dictionary lexicon file to produce a reasonable list of words that would benefit most users.
The primary limitation with using a shared dictionary for multiple languages is that words in
the dictionary for one language may interfere with correctly checking the spelling of words
from another language.
Given the additional complexity of the analyst interface as compared to the self service
interface, the simplest approach for providing a scoreboard in the appropriate language is
to use the access type approach and, for each different scoreboard that is required, create
a matching access type.
The same issue regarding the display of property labels and sample values that was
discussed for the self service interface also applies to the analyst interface. Also, given that
analysts and administrators must configure these entries, it makes sense to create a
customization to use JavaScript functionality in the analyst interface to support a solution
for storing, parsing, and effectively displaying the property information in a multilingual
fashion.
Schema Additions
As noted previously, new fields used to capture and display the ticket information in the
language spoken by the end user would need to be added to the schema and then
displayed in the interface (in addition to the out-of-the-box description field and perhaps a
summary field). The analyst would then see the customer's original description in the
customer's language and a translated version in the language the analyst is most familiar
with.
When analysts work with requests from customers who speak a different language, adding
the ability to do some level of in-line translation to quickly transfer the basic idea of a block
of text from one language to another would be very valuable. This functionality can be
achieved through custom integration with providers like www.freetranslation.com and
www.babelfish.com, or
by having in-house analysts assist in manually translating key pieces of ticket information.
Activity Logs
In the case of employee users it makes sense to reduce the activity log entries so that only
logs that are pertinent to the user and in their own language are displayed. In the case of
analyst users, reducing the activity logs that are displayed would not be recommended as
the logs may provide needed detail or clarification. In the situation that these logs contain
information in a language that is not fluently understood, the analyst would have the
opportunity to leverage colleagues or in-line translation tools to assist them in
understanding the provided information.
The Web Screen Painter tool is also designed to work in a single language environment.
However, when using the multiple form group approach to achieve a multilingual
implementation, the Web Screen Painter should support all languages that use a single
code page.
PDA interface
The PDA interface works similarly to the main analyst interface but offers limited
functionality by comparison; the required effort therefore involves similar activities on a
smaller scale and is not significant.
Personalized Responses
When defining personalized responses, it makes sense to decide how you want the end user
to be notified (language/format) and how you want to organize the personalized responses.
Ensure that this functionality remains useful to the analyst by making it a simple means of
locating the desired messages for the various key situations.
Asset Viewer
Out-of-the-box, the Common Asset Viewer that gets launched from the Asset
(Configuration Item) detail screen is designed to reside solely on the primary server. This
feature is in the <Service Desk installation
directory>\bopcfg\www\CATALINA_BASE\webapps\AMS directory and is configured in the
AMS.properties file that is located under the WEB-INF\classes subdirectory. Initial testing
indicates that hosting a different language version of the Common Asset Viewer on a
secondary server does not work. Until an approach becomes available to configure
additional instances of the Asset Viewer, integrating a customized launch to a chosen
language version of the Asset Viewer will not be supported.
Online Help
Using the secondary server approach for implementing a multilingual architecture provides
the benefit of online help, labels, and error messages all being translated into available
languages without effort when connected to the secondary server's URL. Using the form
group approach requires that customized launches to the language-specific help files be
built by customizing the help_on_form() JavaScript function from the popup.js file. It
should be noted that these help files would not be easy to create for languages that CA
Unicenter Service Desk has not yet been localized into.
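As a sketch of the routing such a customization might implement: the directory layout, the supported-language set, and the function shape below are all assumptions, and the real help_on_form() function in popup.js differs.

```javascript
// Illustrative only: map a user's language to a language-specific help
// directory, falling back to English for languages without localized help.
// The "/CAisd/help/<lang>/" layout is an invented convention for this sketch.
function helpUrlFor(lang, topic) {
  var supported = { en: true, fr: true, de: true };
  var dir = supported[lang] ? lang : "en";
  return "/CAisd/help/" + dir + "/" + topic + ".html";
}

console.log(helpUrlFor("fr", "detail_cr"));
```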
Summary
Multilingual functionality, especially with respect to end-user support in the self service
mode, is rapidly becoming commonplace in CA Unicenter Service Desk implementations.
The information covered in this chapter will help you to implement CA Unicenter Service
Desk to meet your major requirements for a multilingual implementation in advance of
additional functionality being built into future releases of the product.
Govern
■ IT organizations put IT governance processes in place to ensure they make the most
effective IT investment decisions to support their business strategy. CA's governance
solutions are designed to help IT organizations understand and account for the portfolio
of resources they have, and optimize the deployment of that portfolio in the most cost
efficient and effective way possible.
■ Important governance metrics, including insight into the quality and costs of providing
an end-to-end service rather than its individual piece parts, help management make
informed decisions to deliver superior business value in an optimized way.
Manage
Secure
■ CA security solutions can protect, monitor, and actively manage nearly every facet of
the enterprise, from business processes down to every infrastructure asset.
■ Increased focus on risk management, information integrity, and compliance has caused
security to evolve from a reactive technical discipline into a frontline business enabler.
■ The Unified Service Model is the centerpiece of CA's vision for delivering EITM and
provides a complete 360 degree or common view into the IT services delivered to the
business. The Unified Service Model defines the characteristics of a business service,
including component and relationship details, service levels, prices, costs, quality, risks
and exposures, identity and entitlement rights, and more.
■ The CA Integration Platform is the architectural foundation upon which CA's products
are integrated. It leverages a service-oriented architecture to deliver a set of shared,
modular services including an integrated workflow, common policy, consistent user
experience, and scheduling services.
The following illustration shows the CA Integration Platform, containing a rich set of
management and security services that deliver consistent definition and behavior.
The CA Unicenter Service Desk Integrations Green Book covers integration areas such as
the following:
■ Network Management
■ Desktop Management
■ Patch Management
■ Password Management
■ Change Management
■ Workflow Management
■ Accessibility Management
You can access the CA Unicenter Service Desk Integrations Green Book via the CA Green
Books link to Service and IT Asset Management at https://support.ca.com
(https://support.ca.com/irj/portal/anonymous/phpdocs?filePath=0/common/greenbooks/servmgt_greenbooks.html).
This section describes the CA Unicenter Service Desk components, and then compares
centralized, distributed, and global implementations so you can decide which one is right for
you.
There are several different CA Unicenter Service Desk components that run on a primary or
secondary server. It is beneficial to understand the purpose of these components,
especially during implementation planning. These components can be physically distributed
and are the same, regardless of platform. The main components are as follows:
■ Web Engine (webengine)—The Web Engine provides back end functionality for access
to CA Unicenter Service Desk via a browser. It is a daemon or service that responds to
CGI requests from a Microsoft IIS or Apache Tomcat web server. There must be a Web
Engine for WSP on the primary server so that the WSP Schema Designer can write schema
files. Web Engines are the true clients of an Object Manager, acting on behalf of the
users' web browsers. Web Engines maintain sessions and cache htmpl web forms for
connected users. You
can manipulate the cache using the pdm_webcache utility and see web client
connection statistics using the pdm_webstat utility.
■ Web Director—An optional process that provides load balancing among multiple Web
Engines. The most basic use of the Web Director is simple load balancing: it selects the
Web Engine with the fewest active users and redirects the user to that Web Engine.
Subsequent requests are handled by the Web Engine directly without involving the Web
Director. It is also possible to use the Web Director to provide enhanced security for login
while allowing most user interactions to use a higher performance standard connection.
The system administrator can configure the Web Director to direct login requests to a
specific Web Engine that uses the SSL (secure socket) protocol. Once a user has been
authenticated, subsequent requests are redirected to a different Web Engine using a
standard protocol. Web Director is a specially configured pdmweb cgi.
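The "fewest active users" selection can be illustrated with a short sketch. This is not the product's implementation, only the selection rule it applies; the engine names and counts are invented.

```javascript
// Illustrative only: pick the Web Engine with the fewest active users,
// as the Web Director does for its simple load-balancing mode.
function pickWebEngine(engines) {
  return engines.reduce(function (best, e) {
    return e.activeUsers < best.activeUsers ? e : best;
  });
}

var engines = [
  { name: "web:local:1", activeUsers: 212 },
  { name: "web:local:2", activeUsers: 147 },
  { name: "web:local:3", activeUsers: 305 }
];
console.log(pickWebEngine(engines).name); // → web:local:2
```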
Determining the optimal distribution of these components depends on the environment use,
supported number of users, available hardware and budget, as well as scalability,
availability, and failover requirements.
Centralized Implementation
When you perform a default installation of CA Unicenter Service Desk, you configure a
centralized implementation. In this type of implementation, all components are installed
and configured on one network-addressable entity, which is a primary server. You can
implement multiple Object Managers and Web Engines for client load balancing and failover,
but if the business continues to grow, you may encounter issues with all users connecting
back to the primary server. In this case, the architecture can be expanded into a distributed
environment.
One or more Object Manager and Web Engine pairs may exist if there is sufficient
CPU and memory available. (Guidelines follow in the Scalability section of this chapter.)
Distributed Implementation
The CA Unicenter Service Desk components can be distributed to exist on two or more
servers, so that there is a primary server, a separate database server for a Remote MDB,
and optional secondary servers with the following local components:
■ Database Server:
> CA IAM
■ Primary Server:
> CA Workflow
This environment would optimally support large implementations and have the most
flexibility to support scalability and availability, as well as failover configurations.
See the appendix on distributed processing for more information on the underlying
application architecture that supports the distribution of processes.
Global Implementation
You can implement a global service desk when you require a “follow-the-sun” service desk
or when network bandwidth is too limited to implement a distributed service desk. For
example, you may have business locations in different countries with only a slow link
between them.
Scalability
CA Unicenter Service Desk is a highly scalable solution. It has the flexibility to add
secondary servers to support additional connections as you grow, and the ability to
distribute the server and its components.
The first thing that must be decided when designing the service desk solution is the type of
implementation. The most scalable architectures are the Distributed Architecture and the
Global Implementation. In these designs, the primary server is the application server and
all web services are handled by one or more web servers.
The Web Director component plays an important role in an environment with multiple
secondary servers.
Scalability Advice
The rule of thumb for how many concurrent connections can be supported per Web Engine
is approximately 250 to 400 users, depending on the server load. The actual results are
heavily dependent on the hardware, overall system load, and what the users are doing on
the system. For example, a user who frequently queries and modifies or creates objects
requires more bandwidth and resources than a user who is only looking at a single object in
read-only mode.
Many environments have heavy reporting and querying requirements that put additional
processing loads on the server. To minimize the impact to the production database and
production application environment, a reporting database is often used. Rather than
directing reporting to the production database, the reporting database is created as a
mirror copy of the production database, with synchronization occurring nightly or according
to some other pre-defined schedule.
CA's Stress and Interoperability Lab is a world-class stress and integration service that
measures the scalability and performance of both individual products and integrated
solutions in environments that simulate our largest customers' environments.
In this environment, CA Unicenter Service Desk was able to sustain a 10,000 concurrent
user load for an 8-hour period, while maintaining a very responsive system, without taxing
any partition.
Many customers create scripts to load test CA Unicenter Service Desk. The load testing
information below is applicable to both Mercury LoadRunner® and Borland® Silk®.
Login SID/FID
To load test CA Unicenter Service Desk, various numbers and strings need to be parsed or
generated during script runtime and then fed back into the script. Therefore, the testing
tool needs adequate run-time parsing capabilities. The SID (SessionID) and FID (FormID)
are two such numbers. The autLogin.htmpl form exposes the SID and FID on the login page.
Other values may need to be parsed, depending on what the particular user models entail.
For every child popup window, a new FID is generated and must be parsed. If a ticket is
being created and saved, then the object's PERSID should be parsed. This value, along with
many others, is initially returned in the http response data for the CREATE NEW CR/ISS/CO
page.
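A load-test script's parsing step might look like the following sketch. The sample HTML fragment is invented for illustration, and the exact markup of autLogin.htmpl may differ; only the SID=/FID= URL convention is taken from the surrounding text.

```javascript
// Illustrative only: pull the SID and FID out of the HTTP response body
// returned for the login page, so a load-test script can feed them back
// into subsequent request URLs.
function parseSidFid(responseBody) {
  var sid = /SID=(\d+)/.exec(responseBody);
  var fid = /FID=(\d+)/.exec(responseBody);
  return { sid: sid && sid[1], fid: fid && fid[1] };
}

// Invented sample fragment; real autLogin.htmpl output will differ
var body = '<a href="/CAisd/pdmweb.exe?SID=581234567+FID=123+OP=JUSTGO">';
console.log(parseSidFid(body));
```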
Memory Management
Web Engine memory management has been designed, in part, by the use of special
CA Unicenter Service Desk window numbers. For every browser popup window, a unique
number is assigned to that initial popup window. When the child popup window is closed, a
remove cache statement is issued referencing the CA Unicenter Service Desk window
number, thereby freeing up Web Engine memory for Web Engine reuse. This number must
be generated on the client side (for example, from within the script).
Here is an example:
Original recorded script URL with hard-coded CA Unicenter Service Desk number:
"…. &KEEP.POPUP_NAME=USD1126125306923"
Pseudo code for generating the CA Unicenter Service Desk number in script:
usd_number_1 = random(1111111..9999999);
Using a variable in place of the hard-coded CA Unicenter Service Desk number, the
modified URL becomes:
"…&KEEP.POPUP_NAME=USD" + usd_number_1
The same variable is then used in the remove cache URL statement later in the script:
"…FID=0+REMOVECACHE=USD" + usd_number_1
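A runnable version of the pseudo code above, assuming a test tool whose script language supports JavaScript-style code:

```javascript
// Generate a client-side CA Unicenter Service Desk window number (random
// here; in a real run it must be unique per popup) and splice the same
// value into both the popup URL and the remove cache URL.
function usdNumber() {
  return Math.floor(1111111 + Math.random() * (9999999 - 1111111 + 1));
}

var usd_number_1 = usdNumber();
var popupUrl = "&KEEP.POPUP_NAME=USD" + usd_number_1;
var removeCacheUrl = "FID=0+REMOVECACHE=USD" + usd_number_1;
console.log(popupUrl, removeCacheUrl);
```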
Parsed data is used in two areas: in URL strings and in form data.
Notes
■ SID and FID are generated on the server side and must be parsed.
■ CA Unicenter Service Desk numbers are generated on the client side and can be
random, script-generated numbers, but they must be unique.
■ The values described thus far are parsed from HTTP RESPONSE BODY data (with the
exception of CA Unicenter Service Desk numbers).
Multitenancy
Multitenancy requirements are not only for outsourcers. These requirements can also come
from companies that have several service desks and need to consolidate them into a single
service desk. If the service desks cannot all be standardized on the same process definition
for incident and problem management, and the departments/organizations do not all share
the same policies, but you still want to run a single instance of CA Unicenter Service Desk,
your installation should be architected as a multitenancy installation.
To support multitenancy, we need to modify some of the out-of-box features and functions
in CA Unicenter Service Desk. This section discusses how and what to do in order to run
CA Unicenter Service Desk with multitenancy capabilities.
Qualifying Questions
Requirements for each tenant should be gathered and then compared. Based on the
requirements, the following questions should be answered in order to determine the right
multitenancy architecture:
■ Will there be standardizations on naming conventions for data? For example, assets,
categories, userids.
■ Will any of the processes be shared between tenants? For example, required fields,
categories, properties.
■ Will the analyst forms need to be modified to provide different views to each tenant?
■ Will there be any special needs for reporting (breakdown, sort order, format, and
presentation)?
■ Will the operators themselves be multitenant? For example, an operator working with
all the tenants instead of mono-tenant analysts.
■ Will the end user self service forms need to be modified to provide a different view to
each tenant?
■ Should we install a secondary server (Object Manager + Web Engine) for each tenant?
■ What will be the role of the tenant inside CA Unicenter Service Desk?
■ Will the tenants need the same integrations with CA and non CA-products?
This option will consist of one primary server, one MDB, and one or more secondary
servers.
■ Most likely, the primary server is implemented and administered at the outsourcer
location.
■ Data partitions, stored queries, and SLA events may have a significant impact on the
overall performance of the system.
■ At least one secondary server (Object Manager + Web Engine) will be placed at each
tenant location (250-400 users per Web Engine). This depends on the legal
relationships between the tenant and the outsourcer.
■ Error conditions exist that could affect the entire CA Unicenter Service Desk server.
■ YES
> If the self service interface will be the only interface deployed to each tenant,
this option may be a viable solution
> If there is a central administrator for CA Unicenter Service Desk for all tenants
■ NO
> If all data must be managed and kept separate for each tenant
In this section we will only discuss supported and documented adaptations. Rules for
multitenancy can be leveraged using in-house scripts to improve filtering or to match any
special policy required for a specific tenant.
All of these rules will be specified through the Administration portion of CA Unicenter
Service Desk, as specified by the Service Desk Administrator.
Outsourcers may have special requirements for archiving and purging incidents and
problems for each tenant. The Archive/Purge rules are specified to match each
tenant's requirements.
The outsourcer defines archive/purge rules according to the requirements of each tenant.
For example, the outsourcer might define an archive rule to automatically archive/purge
priority 1 incidents after 365 days, while Tenant B might want the outsourcer to purge
priority 5 tickets every 180 days.
Using Archive/Purge features, outsourcers can match each tenant's requirements regarding
archiving and purging.
Attachments Library
■ Repository location
In a multitenancy configuration, macros and events are used in conjunction with Service
Contracts. See the Service Contract Section on how to create specific events for any single
tenant. The Service Level Agreements section in Chapter 4: Policy Implementation of the
CA Unicenter Service Desk Administration Guide also may be consulted for more
information.
Here is an example of assigning permissions at the category level: Tenant-A can read and
write any document in this category, Tenant-B can only read documents in this category,
and Tenant-C users have no access to documents in this category.
For special cases, permissions can also be assigned at the document level or by using data
partitions (see the “Security” chapter). Data partitions can also be used to leverage
business policy in a knowledge document for a particular tenant.
Out-of-the-box, general settings of knowledge tools are global to the whole system. These
settings have to be set according to all tenants' requirements.
Notifications
■ Notification Events:
> Tenant-A wants to receive a notification for each step of the incident lifecycle
> Tenant-B wants to receive a notification only at the opening and closing of the
ticket
■ Message Templates:
> Tenant-B needs to receive only summary and ref_num. “ref_num” has to be in
Arial bold
> Tenant-C wants to have the out-of-the-box notification with its company
logo and its style sheet
To accomplish all of the tenant requirements, the multitenancy installation has to create
new notifications. These new notifications can be specific to a tenant or apply to the whole
installation. New notifications can be created, as needed, to send notifications using any
other method, such as pager, fax, or SMS, or simply to reformat messages for a particular
tenant.
See “Process and Workflow,” chapter 9 in this book, for details on how to create new
notification methods. A sample of a notification script is located in the CA Unicenter Service
Desk install directory in $NX_ROOT/samples/ntf_meth.
Options Manager
Most of the settings in the Options Manager are used to configure the service desk
application itself. Some of these settings affect the behavior of the whole system, and
therefore all of the users from all of the tenants. In cases where this is not desired, code
will be required to bypass this behavior.
Security
Security is a key feature for multitenancy, particularly if tenants (customers) are using CA
Unicenter Service Desk to open, update, and close tickets.
Outsourcers must provide appropriate connectivity for their tenants and analysts. (For more
detail, see the Firewalls section in this book.) Depending on the number of concurrent
customers per tenant, there may be secondary servers on the tenant sites.
Once a user (outsourcer or tenant) has been authorized to access the system, CA Unicenter
Service Desk will determine what level or levels of access the outsourcers and tenants may
have. Using the Access Type feature (see the Security section in Chapter 4: “Policy
Implementation” of the CA Unicenter Service Desk Administration Guide for more
information), outsourcers will be able to define access for every actor as follows:
> Most of the time tenants will use the Customer web interface. This may vary if
tenants use their own internal analyst web interface.
> In most cases, tenants only have access to Requests (Modify) and some
Inventory Data (View). Below is an example of how to define an access type
for a tenant (customer):
> An outsourcer analyst accesses all other functional areas except the
Administration / Security area.
■ Data Partitions
> Data partitions restrict user access to a subset of the data records in the
MDB based on their content. Typically in a multitenant environment, data
partitions range from one data partition per tenant to one data partition per access type.
Data partitions are very useful for defining who has access to particular
segments of data, based on their role as defined within the product.
Outsourcers implement data partitions to make sure that each of its tenants is
working only on its own data. If outsourcers have dedicated analysts for a
particular tenant, they can also implement data partitions to make sure the
analysts are only working on that tenant's tickets.
Data partitions are also useful to meet the requirements of outsourcing. One
caution is that they can sometimes be resource-intensive, depending on the
query.
One of the keys to customizing the interfaces is in defining form groups to organize the
forms. Form groups represent a collection of forms grouped together to allow different
users to have different versions of a window. You can create form groups, customize
the forms to suit the needs of your tenants, and then associate the form group
containing the customized forms to an access type for this tenant. This enables the
tenant to have its logos, captions, and style sheets in all of its screens. Form groups
also provide the ability to implement field level security for a particular group of users
assigned to the specific form group.
In a typical scenario, one tenant will require the outsourcer to have additional user
fields for incident creation, while another tenant will be satisfied with the standard
model. These two tenants can be supported by the same engine, with different incident
creation screens.
There is no known limit on the number of form groups a multitenant site may have, but a
large number of form groups increases the complexity of system administration and
maintenance.
For more details, see the “Policy Implementation” section of the CA Unicenter Service
Desk Administrator Guide and the Security chapter in this book.
Application Data
For an optimized multitenant service desk, outsourcers should consider the following:
■ Define Naming Conventions for all data objects. Standardization will result in easier
reporting and system data retrieval for each tenant. Partitioning and queries may be
based on naming conventions, so make sure adequate time is spent reviewing data,
scrubbing, and defining naming rules prior to loading.
■ Define Shared data. Shared data values need to be strictly managed. For example, the
Priority field will be shared by each tenant accessing the incident, problem, or change
order forms. The Priority field is made up of six priority values from “None” to “5.”
There cannot be multiple Priority 1 values for each tenant, so default values and
reports that are associated with Priority out of the box will affect all users of the
system. The same Priority 1 and its internal functionality will be used by each tenant.
■ Define Tenant Only Data. Document what data must only be visible to a particular
tenant. Determine whether additional fields will need to be added to existing tables to
partition against. You will want to do this to any table that will have subsets of data
available to certain tenants only.
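Naming conventions like these are easiest to enforce programmatically at data-load time. The sketch below uses a hypothetical tenant-prefix rule (a four-character tenant code followed by an underscore); neither the codes nor the rule are CA-defined.

```python
# Illustrative check for a tenant-prefix naming convention. The prefix
# scheme (four-character tenant code plus underscore) is a hypothetical
# example, not a CA-defined rule.
import re

TENANT_CODES = {"TENA", "TENB", "TENC"}  # hypothetical tenant codes

def is_well_named(object_name):
    """True if the object name starts with a registered tenant code."""
    match = re.match(r"([A-Z]{4})_", object_name)
    return bool(match) and match.group(1) in TENANT_CODES

if __name__ == "__main__":
    print(is_well_named("TENA_Printer_Queue"))  # conforming name
    print(is_well_named("PrinterQueue"))        # no tenant prefix
```

Running a check like this over data before loading catches naming violations while they are still cheap to fix.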
Depending on the outsourcing conditions and the tenants' requirements, it may be useful to
use the table below to determine what has to be shared totally, partially, or not at all. As
mentioned before, some tables, like priority, severity, and impact, will be considered
shared data, while tables like ca_location, ca_organization, or ca_contact will be
considered tenant-only data. To match tenant requirements, some tables that should in
theory be shared won't be, because Tenant X requires a private value in this particular
table (status, category, root cause, reporting, notification methods, work shift). In that
case, the table is 90% shared, and only 10% of the table is tenant-only.
■ Priority: shared
■ ca_Location: tenant-only
■ status: mostly shared, with some tenant-only values
Most of the time, installations in R11.X and R6 perform these operations with a strict
naming convention and data partition constraints. Due to the context of the outsourcing
contract and a tenant's complex requirements, filtering scripts sometimes have to be
developed.
Configuration Items
As mentioned above, correct naming conventions and data partitions are critical to a
multitenant environment.
Part of the implementation process is to determine how a tenant plans to communicate its
asset (Configuration Item) information to the outsourcer, and in turn to the MDB. See the
CORA and Common Asset Viewer sections of this book to see how CA shares/reconciles
asset information among multiple applications.
Service Contracts
■ Users from Tenant-A get support for their desktop hardware, but not the software
installed on the desktop
■ Users from Tenant-C get general software support but only break-fix hardware support
With Service Contracts, we provide low-level and highly scalable support for policy
differences among different groups of end users.
Our overall goal is to enhance CA Unicenter Service Desk's SLA functionality to help large
call centers support multiple organizations, for example a call center that acts as an
outsourcer for support. Each tenant may have negotiated different SLAs with the call center.
Typically an SLA exists between the service provider and service consumer. The SLA clearly
defines what level of service the tenant can expect from the outsourcer, and generally an
increase in support entails a greater price tag for the consumer. Penalties may apply if the
provider fails to comply with the SLA contract, making the “time to violation” a very
important runtime metric for classifying the priority of a ticket. Typical SLA terms include
the following:
■ All new tickets must be acknowledged and confirmed (that is, an email must be sent to
the consumer indicating the incident report is opened).
■ Password reset issues must be resolved and closed within 10 hours of opening.
■ Priority 2 through 5 issues are open only during normal business hours (that is, the
“SLA clock” stops ticking during non-business hours, holidays, and so on).
■ Only software issues are supported. Issues dealing with hardware failures are not the
support provider's responsibility.
The service contract is a new CA Unicenter Service Desk object. It introduces the ability to
define Service Types specific to a particular organization for each reference field. It also
eases service level administration by centralizing the service level management.
Contracts also define private categories. A ticket under a contract may only use the
categories defined in the contract.
This model is extensible; a site can add shared reference fields, like Urgency or Root Cause.
This option enables operations that require multiple, regional support sites to operate as
after-hours or overflow support for customers from other regions. This is true even if the
support center the customer uses has different support processes and Service Level
Agreements (SLAs).
Multisite support enables the CA Unicenter Service Desk interface to automatically connect
to the system where the end user is managed. This enforces work flows, processes, and
SLAs for the tenant's customers.
This technology enables the service desk database to be geographically distributed across
physical boundaries via a highly efficient architecture. Since only the contact list is
replicated, a minimum of data is moving across the network, thus minimizing the impact on
the infrastructure.
Scalability
■ One primary (regional) server is implemented and administered at each tenant
location.
■ One or multiple optional secondary server(s) can be used to support the server load at
each tenant location.
■ One master server will exist and be linked to each of the tenant primary regional
servers.
■ Each secondary server will have at least one Object Manager/Web Engine pair.
■ Each Object Manager/Web Engine pair can support up to 250 concurrent users if the CA
Unicenter Service Desk server manages a low number of automated events and
processes.
■ If user connections creep up into the 250-to-400 users-per-web-engine range,
a new Web Engine should be added.
■ One bad query can take down a single tenant's regional CA Unicenter Service Desk
server.
■ There is no fixed threshold for how many queries, data partitions, SLA events,
notifications, or other components will begin to erode server performance.
■ Only minimal contact and incident tracking information will be replicated to the master
server. This data is subsequently pulled down by the regional servers that have been
configured to communicate with the Master site.
■ By default, knowledge tools documents, CI data, and Workflow Tasks are not replicated
across regions.
■ YES, implement multisite:
> If all data must be managed and kept separate between tenants.
> If all tenants have their own business rules and processes that need to be
enforced.
> If global analysts need access to each tenant region to assist in incident or
problem management.
> To decrease the impact of a single primary server failure. If one region's CA
Unicenter Service Desk server goes down, the other regions are still
operational.
> If global call takers will take off-hour calls for remote tenant regions.
> If global analysts will work virtually on tickets in all tenant regions.
■ NO, do not implement multisite:
> If all data at each tenant location must be replicated to the master region
(outsourcer's location).
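The sizing guidance in the Scalability section above (roughly 250 concurrent users per Object Manager/Web Engine pair under a light automation load) can be sketched as a simple capacity estimate. The 250-user figure is the rule of thumb quoted above, not a guaranteed limit.

```python
import math

# Rule-of-thumb sizing: one Object Manager/Web Engine pair supports
# up to ~250 concurrent users when automation load is low. This is
# an estimate, not a guaranteed limit.
USERS_PER_PAIR = 250

def pairs_needed(concurrent_users):
    """Estimate how many Object Manager/Web Engine pairs a location needs."""
    return max(1, math.ceil(concurrent_users / USERS_PER_PAIR))

if __name__ == "__main__":
    for users in (120, 250, 400, 900):
        print(users, "users ->", pairs_needed(users), "pair(s)")
```

For example, a location expecting 400 concurrent users would plan for two pairs, consistent with the guidance to add a Web Engine as connections approach that range.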
Implementation Guidelines
The following are things you must do and must not do during your implementation:
■ Define naming conventions for all data objects. Standardization will result in easier
reporting and system data retrieval for each tenant. Partitioning and queries may be
based on naming conventions, so make sure adequate time is spent reviewing data,
scrubbing, and defining naming rules prior to loading.
■ Define shared data values, which need to be strictly managed. For example, the Priority
field will be shared by each tenant user accessing the incident, problem, or change
order forms. The Priority field is made up of six priority values from None to 5. There
cannot be multiple Priority 1 values for each tenant, so default values and reports that
are associated with Priority out of the box will affect all users of the system. The same
Priority 1 and its internal functionality will be used by each tenant.
■ Define tenant-only data by documenting what data must only be visible to each tenant.
Determine whether fields will need to be added to existing tables to partition against.
You will want to do this to any table that will have subsets of data available to certain
tenants only.
■ Define data partitions for each table that contains data that cannot be shared between
tenants. Not every table should be partitioned, because that leads to a complex
implementation. Some tables have shared values and functionality, such as Priority,
Urgency, and Impact.
■ Do partition using efficient data partition constraints. Where possible, avoid the use of
queries with the % character and extensive use of dotted notation. More detail can be
found in the Security section (“Establish Data Partitions” sub-section) of “Chapter 4:
Policy Implementation” in the CA Unicenter Service Desk Administrator Guide.
■ Define scoreboard queries needed to support each tenant. Poorly written scoreboard
queries have devastating effects on CA Unicenter Service Desk performance.
■ Do use the IN keyword to allow a stored query to reference two (or more) tables
without creating a join. This can result in significant efficiencies in executing the query.
■ Define form groups required by different tenants in order to have different views. The
level of effort to maintain the CA Unicenter Service Desk system will increase with each
form group added to the system. Keep these as minimal as possible.
■ Define a strict change control for the production environment. Each proposed
modification to the environment, no matter the perceived size or level of effort, must
first be done in a test environment that mimics the production environment exactly and
be signed off by each tenant before being moved into production.
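The partitioning guidance above (avoid % wildcards and deep dotted notation; prefer the IN keyword over implicit joins) can be illustrated with two hypothetical clauses. The table and attribute names below are invented for illustration and do not reflect the actual schema or stored-query syntax; the helper simply flags the patterns the guidance warns against.

```python
# Two hypothetical where-clauses. The table and attribute names are
# invented; only the patterns matter.
INEFFICIENT = "assignee.organization.name LIKE '%TenantA%'"  # deep dotted joins + wildcard
EFFICIENT = "assignee IN (SELECT id FROM ca_contact WHERE organization = 'TenantA')"

def flag_inefficient(clause):
    """Flag the patterns the guidance warns against: '%' wildcards and
    deeply dotted notation (which forces extra joins)."""
    return "'%" in clause or "%'" in clause or clause.split()[0].count(".") > 1

if __name__ == "__main__":
    print(flag_inefficient(INEFFICIENT))  # flagged
    print(flag_inefficient(EFFICIENT))    # passes
```

A simple lint pass like this over candidate constraints can catch the worst offenders before they reach production.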
Each regional server will run independent processes, and store and manage its own data.
Therefore, each regional server in a multisite environment should be implemented following
the best practice methodologies:
1. Define global analysts. They must have the same login id in all tenant regions.
High Availability
More and more companies, whether consolidating after mergers and buy-outs or simply
seeking economies of scale, want to consolidate separate service desks. This creates a new
need for an enterprise service desk solution. The enterprise service desk
will run 24 hours a day, seven days a week. This means that the service desk is expected to
maintain high availability (HA) to provide for the company's business needs.
High availability for CA Unicenter Service Desk begins by following the same principles as
any high availability project. This book covers only the issues specific to CA Unicenter
Service Desk; we encourage sites to investigate high availability best practices at all levels.
Consider the following points:
■ On the first level, a clean Uninterruptible Power System (UPS) will provide continuous
service.
■ An auxiliary generator added to the UPS will maintain power during blackouts.
■ The servers themselves should have dual power supplies, dual network interface
controllers (NICs), and RAID Level 5 or better for disk redundancy.
The existence of all this redundancy in the server components will not necessarily prevent
the failure of the server over time and subsequent downtime, but it will extend the life of
the server before a failure occurs.
CA Unicenter Service Desk is a web-based application. The MDB stores the data when
an incident or problem is generated. This environment will include a primary CA Unicenter
Service Desk application server, database server, and possibly a few secondary web
servers. The primary CA Unicenter Service Desk application server is not cluster aware, but
is cluster tolerant. Therefore, to achieve high availability on the next level, each server
must have its own redundancy.
This is a diagram of the CA Unicenter Service Desk high availability best practice. It
includes a primary application server and a backup primary server. There is an enterprise
database cluster server and the backup database node. The secondary web servers are by
location or region.
Windows Environment
The following is an example of how each server is designed for high availability.
The primary CA Unicenter Service Desk application server can be replicated to the backup
primary server by using Microsoft Windows 2003 clustering. The cluster servers need to be
in an active/passive mode configuration. As noted earlier, the primary server is not cluster
aware, but is cluster tolerant. This means that the primary servers will not automatically fail
over to the backup node on the cluster. Scripts will need to be in place to identify the
cluster name and to execute the failover. The clustered node that is now the live active
primary application server is recycled and attached to the Microsoft SQL Server database.
Documentation and scripts to build the HA cluster for the primary application server are
available on SupportConnect. Log on to SupportConnect from http://www.ca.com/support.
From the Support heading, click Technical Support. From the
SupportConnect home page, click Product Home Pages on the left menu bar. From the
Please Select a Product drop-down list, select CA Unicenter Service Desk. Scroll down to
Technical Information at the bottom of the CA Unicenter Service Desk page, locate the
Technical Information section, and click the Latest r11 Implementation Best Practices link.
This will bring you to the Implementation Best Practices page. Click Fault Tolerance on the
left menu bar. Scroll down to CA Unicenter Service Desk r11.x. Click on the link and it will
take you to the following document links. Each document has a brief summary of what is
contained in the document:
■ Preparing Unicenter USD r11.1 or 11.2 for Microsoft Cluster Server (MSCS)
■ Failover Considerations
This presentation details the installation of a Unicenter Service Desk Primary Server and
Remote Components in an MSCS environment:
■ Customization
■ Implementing Unicenter Service Desk (USD) r11.x with Microsoft Server Clusters
Note that each link contains a separate document that will show step-by-step
implementation instructions for the various CA Unicenter Service Desk HA MSCS
Environments.
In addition, remember to download the usdCluster.zip file referenced in both the Part 1
presentation and the PDF file. If you decide to use the Part 3 presentation, remember to
download the usdPSCluster.zip file. There are links there for the zip files.
If you are using CA Unicenter Service Desk r11.2, be aware that a change to the install
behavior might generate the following error:
As part of the install process, a simple check is made to determine if the pm.xml and wl.xml
files exist in the CATALINA_BASE\webapps directory. If they are not detected, the install copies the
epdc.jar file from the CATALINA_BASE\common\lib to the tomcat\4.1.31\common\lib
directory in Shared Components. This process is fine for the installation on the first node in
the cluster. On subsequent nodes, the install detects that the files already exist, and it will
perform an upgrade (rather than an install). Since the epdc.jar file copy step is not included
in an upgrade, the above error will be generated.
Note that this caveat only applies to CA Unicenter Service Desk r11.2 - it does not affect
r11.1.
The database uses Microsoft SQL Server clustering to maintain high availability. When using
Microsoft SQL Server 2005, the cluster must have only two separate nodes. The configuration of the SQL cluster
creates an active/passive mode. Fiber channels using dual paths attach the two cluster
nodes to the storage file system. The storage file system is usually a Storage Area Network
(SAN) that provides redundancy and shared storage.
Documentation to build the HA Microsoft SQL Server cluster is available on SupportConnect. You can follow
the instructions in the Primary Application Server paragraph above to navigate to the Fault
Tolerance section of the CA Unicenter Service Desk Implementation Best Practices page,
and then choose the “Unicenter Service Desk r11.x” link. This link will take you to the
document called “Best Practices for Implementing USD r11.x in an HA MSCS Environment-
Part 4.”
The secondary web servers are located in a single location or region where users can gain
access to the CA Unicenter Service Desk application. The secondary web server has one
connection that goes back to the primary application server. To create an HA environment
with the web servers, a minimum of two servers should be placed in each location. The
secondary web servers use the web director component. This component provides load
balancing between the web servers and maintains a single URL between the web servers. If
one of the two servers fails, those users will be notified and asked to log in. Any information
in progress at the time of the server failure will be lost. Users in that location will be able to
sign on again to the remaining web server. If the remaining web server also fails, the user
can log on to another location/region secondary web server.
The following is an example of how each server will be designed for high availability.
On a UNIX or Linux platform, the primary CA Unicenter Service Desk application server can
maintain an HA environment by doing the following:
> Build a script that renames the standby server to the production
application server name.
> Have the script change the standby IP address to the production
application server IP address.
■ Maintain strict change management on the production and backup server. Make sure all
maintenance and patches are applied to both servers.
Currently the CA Unicenter Service Desk application supports Oracle 10g. A
few options that maintain HA on the Oracle database are as follows:
> Provide fiber channels using dual paths to attach the two cluster nodes to the
storage file system. The storage file system is usually a Storage Area Network
(SAN) that provides redundancy and shared storage.
■ CA WANSync Oracle.
Parallel Implementation
■ There can be legal requirements to keep some records physically inside certain legal
boundaries, which necessitate two implementations.
■ For security purposes, service desks may be separated to avoid giving analysts a
security clearance.
In theory, SOA does not place any requirements on the platform on which an SOA system is
built. In practice, SOA is almost exclusively built using W3C-defined web services. Web
services are well suited to an SOA architecture because they are standardized, based on the
ubiquitous HTTP protocol, and have provisions for describing and publicizing available
services.
CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools provide web
services. These services were designed with two goals. The first goal was to
expose as much CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge
Tools functionality as possible. This was done by providing low-level services that expose
the building blocks from which CA Unicenter Service Desk was built. These services
constitute a tool kit from which higher level business services can be built for an SOA. In
addition, this tool kit can be used to build very specific interfaces, such as those used in
some product integrations that may even constitute new capabilities for a service desk.
The second goal was to develop high-level services that could fit directly into an SOA.
Examples of high-level services are createRequest or attachChangeToRequest. Working
with these high-level services and the lower level building blocks, CA Unicenter Service
Desk can be used to provide a loosely coupled service desk service that can be accessed
anywhere a web service can be used.
Important! Web services are a powerful technology that requires programming skills, and
custom web services code is not supported by CA Technical Support. Like other sections of
this book, this chapter ventures beyond the limits of product documentation. Testing and
maintenance of web-
services-based projects are the responsibility of the site. The samples here are samples,
not production code. We suggest that you follow sound software engineering practices, such
as source control and code reviews, while developing your project. We highly recommend
ITIL release management practices for web services projects. Be sure to test carefully
before going into production.
A web service is a collection of operations deployed over the web. Another application can
access these operations through the internet, using standard protocols, to perform a task.
The key standards are XML for data formatting, SOAP for
message exchange, and WSDL for describing the service.
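As a concrete illustration of those standards, the sketch below builds a minimal SOAP 1.1 envelope for a hypothetical login operation using Python's standard library. The operation namespace (urn:example:usd) and element names are placeholders, not the actual USD service definition.

```python
# Build a minimal SOAP 1.1 envelope for a hypothetical "login" call.
# The operation namespace is a placeholder, not the real USD namespace.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_login_envelope(username, password, ns="urn:example:usd"):
    ET.register_namespace("soapenv", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    login = ET.SubElement(body, f"{{{ns}}}login")
    ET.SubElement(login, f"{{{ns}}}username").text = username
    ET.SubElement(login, f"{{{ns}}}password").text = password
    return ET.tostring(envelope, encoding="unicode")

if __name__ == "__main__":
    print(build_login_envelope("analyst1", "secret"))
```

In practice a SOAP toolkit generates this envelope from the WSDL automatically; seeing the raw XML clarifies what actually travels over HTTP.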
The advantages of using a web service, in addition to those mentioned in the above SOA
section, are that they are free of vendor-specific libraries, are platform and language
independent, and are readily available for a variety of development platforms.
For a more detailed description of web services and the associated standards, review the
articles on web service basics at http://msdn2.microsoft.com/en-us/webservices/aa740691.aspx
or Wikipedia's definition at http://en.wikipedia.org/wiki/Web_services.
Service aware is a paradigm developed by CA that focuses on all the components within an
enterprise communicating with the service desk function. Service aware attempts to
achieve the holy grail of IT: a self-managing infrastructure.
The first task in a self-managing infrastructure is to have applications and devices report
problems directly to the service desk, thus removing the end user from the equation. This
significantly reduces reporting time, improves research and troubleshooting, and avoids
communication problems with the end user. Service aware is part of CA's vision for a self-
managing enterprise and part of the on-demand strategy.
Applications have communicated with CA Unicenter Service Desk for many years. For
example, network monitoring tools have often reported incidents directly to the service
desk. But this new level of integration is much tighter and provides an increased level of
automation.
The diagram below shows the virtual architecture of service aware. By using web services,
the core functions of the service desk and knowledge base are provided to the applications
and devices that consume the service.
As shown in the diagram, the traditional support model is still available, since exceptions
to automation still exist. The CA Unicenter Service Desk and CA Unicenter Service Desk
Knowledge Tools web services are the means of delivering on-demand services in this
model.
The following sections describe the functions available in the web service, the typical tasks
that they are used to perform, and the common pitfalls that implementers and developers
should be aware of.
Note: Although there are a variety of other APIs and integration alternatives provided, the
web services API should always be the first option when integrating other applications with
CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge Tools. The web
service offers much of the functionality that is available through the interface from outside
of the tool and allows customers to reuse many of the policies and business processes
already created.
For more information on the web services, see the CA Unicenter Service Desk Web Services
User Guide. Additionally, a Java example is provided with all r11 versions in the
NX_ROOT\samples\sdk\websvc directory.
Version r11 provides two distinct web services, one specific to r11 and another for
backwards compatibility with integrations that used the CA Unicenter Service Desk 6.0 web
service. Notice that the 6.0 web service does not cover CA Unicenter Service Desk
Knowledge Tools functions. Users who integrated with the CA Unicenter Service Desk
Knowledge Tools 6.0 web services should review and update those integrations to work
with the new r11 web service.
A list of all the methods available through both of these web services can be displayed on a
CA Unicenter Service Desk web server at the following URL:
http://localhost:8080/axis/servlet/AxisServlet
The WSDL for the r11 web service is available at:
http://localhost:8080/axis/services/USD_R11_WebService?wsdl
The WSDL is a key component when integrating through the web service. The WSDL tells
the other application where the web service is located and all the associated methods of
that service.
Authentication
To communicate with the web service (unless using certificate-based authentication,
described later in this chapter), a user must first log in with a valid CA Unicenter Service
Desk userid and password. The user's security and access is the same as it is in the CA
Unicenter Service Desk interface, defined by mechanisms such as access types and data
partitions. Once users are logged into the web service, they can only perform tasks and
retrieve data based on their access. This also applies to CA Unicenter Service Desk
Knowledge Tools, which use permission groups to segment the knowledge base. Users
cannot, therefore, retrieve knowledge documents that they do not have access to.
An analyst type user or a user performing analyst functions will use a concurrent user
license when accessing the web service. This is important to keep in mind when building
integrations so that your environment does not exceed its number of licenses.
The web service communicates with CA Unicenter Service Desk's object layer. By
interacting with this layer, updates made through the web service trigger the appropriate
business processes, escalations, and service levels within CA Unicenter Service Desk.
Additionally, using the object layer allows developers to work at a layer above the database
and not concern themselves with those low-level connections.
Every object within CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge
Tools has a unique ID. At the object layer, this id is referred to as a handle or persistent id
(PERSID). When accessing and updating objects, the developer is often required to use the
handle of an object to uniquely identify it in a method call.
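Handles follow a simple "object-name:id" convention, so a request handle might look like cr:400001. A small helper for splitting them is sketched below; the example handles are made-up illustrations of the convention, not real data.

```python
# Split a persistent id (handle) of the form "<object>:<id>" into its
# parts. The example handles are made-up illustrations of the
# convention, not real data.

def parse_handle(handle):
    """Return (object_name, object_id) from a handle like 'cr:400001'."""
    object_name, sep, object_id = handle.partition(":")
    if not sep or not object_id:
        raise ValueError(f"not a valid handle: {handle!r}")
    return object_name, object_id

if __name__ == "__main__":
    print(parse_handle("cr:400001"))   # a request handle
    print(parse_handle("cnt:12AB34"))  # a contact handle
```

Validating handles early, before passing them to a method call, yields clearer errors than a failed web service request.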
Technology
Both the 6.0 and r11 web services provided with all r11 versions are built on Java (J2EE)
technology and run on Apache Axis and Apache Tomcat. All the prerequisites are installed
with CA Unicenter Service Desk, and the web services can be hosted on any of CA Unicenter
Service Desk's supported server platforms. The use of Java, Apache Axis, and Apache
Tomcat does not limit the technology used to communicate with the web service. As
mentioned previously, the advantage of a web service is that it is vendor- and platform-
neutral, allowing a variety of platforms and development environments to communicate
with it.
Much of the same functionality available through the interface is accessible through the web
service. For example, there are functions to create and update contacts, assets, and tickets.
Tickets can also be transferred or escalated, users can be notified, and CA Unicenter
Service Desk tasks can be updated through the web services. Using the same API, CA
Unicenter Service Desk Knowledge Tools functions can be accessed including the ability to
create, search, retrieve, and rate documents.
■ login/logout
These two functions control the session with the web service. The userid used to log in
dictates the security and access throughout the rest of that session.
■ doSelect/doQuery
Both of these methods allow querying of specific objects to retrieve data. For example,
a query may retrieve the reference numbers (ref_num) of all open tickets for a given user.
■ createRequest
The most common integration with a service desk is to open a request or incident. This
method creates a request, incident, or problem based on the parameters passed to it.
■ getHandleForUserid
In many cases the web service requires the handle (or unique id) of an object to be
passed to a method call. This method retrieves the handle of a user given their unique
userid.
■ createTicket
Like the createRequest function, this method creates a ticket; however, the type of
ticket is dependent on the web service policy created by the CA Unicenter Service Desk
administrator. This method is designed to simplify the most common integration with
CA Unicenter Service Desk by hiding some of the CA Unicenter Service Desk-specific
knowledge and complexity associated with other method calls.
■ createAttachment
When reporting an incident with another application, it is often helpful to attach a log
(or other file) to the incident to improve troubleshooting.
■ Keyword search
Search performs a search for knowledge documents, just as a user would through the
web interface, by providing a search string.
■ getDocument
Use this method to retrieve the attributes of a knowledge document given its ID.
■ createDocument
Use this method to create a new knowledge document.
The last three methods are knowledge base functions that use the functionality provided
with either CA Unicenter Service Desk Knowledge Tools or Keyword Search. Keyword Search
is the embedded search and retrieval tool that comes with CA Unicenter Service Desk.
Customers who do not have CA Unicenter Service Desk Knowledge Tools can still use the
web service but will only have access to the basic embedded knowledge methods. When a
Keyword Search-only customer attempts to access a CA Unicenter Service Desk Knowledge
Tools-specific method, such as FAQ or getDecisionTrees, the user will see the following
error: Soap Exception: Keyword Search does not support this feature.
For more information on these and all available web service methods, see the chapter “Web
Services Methods” in the CA Unicenter Service Desk Web Services User Guide.
This section will cover the most common tasks performed using the web service, and
provide examples and tips on how to perform these tasks.
Creating an Incident
The most common task performed through the web service is creating a request, incident,
or problem. All of these ticket types can be created using the createRequest method while
other ticket types, such as change orders and issues, use their own methods
(createChangeOrder and createIssue, respectively). Like the web interface, the
createRequest method can create an incident and set all of the associated attributes and
properties of that ticket.
The example below walks through the steps of creating an incident and setting the affected
end user to employee123, priority to 2, and incident area or category to applications.
Before going through the example, review chapters 3, 4, and 5 in the CA Unicenter Service
Desk Web Services User Guide, which describe using the web service and its methods.
1. Login
The first step is to login to the web service using the login method with a userid that
has access not only to create an incident but to retrieve the key attribute data we need
to set the priority, category, and group fields. To verify that a user will have the correct
level of access, test the scenario in the web interface and validate the user's access
type. If the web service login is successful, an integer session id or SID will be returned
that has a value greater than zero. A call to login therefore fails when the returned SID
is less than or equal to zero. The SID is used in almost all other method calls to ensure
that a valid user is accessing the methods.
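As a sketch, the login call and the check for a valid SID might look like the following pseudocode (the userid and password shown are placeholders):

```
// login with a userid that has access to create incidents and query data
int SID = login("webserviceuser", "password")
if (SID <= 0)
    // the login failed; stop processing
```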
2. Get the handle for the affected end user
We need to set the affected end user or customer attribute in the incident and
therefore need to retrieve the handle (PERSID) of that user. As mentioned above, the
affected end user's userid is employee123 so we simply pass that string value along
with the SID to the getHandleForUserid method. This method will return the handle
for that user as a string that we will use later.
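A sketch of how this call might look in pseudocode:

```
// returns the handle (PERSID) of the contact whose userid is employee123
String customerHandle = getHandleForUserid(SID, "employee123")
```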
3. Get the handle for the incident area
Next we need to retrieve the handle for the incident area we want to assign this
incident to: applications. The doSelect method queries CA Unicenter Service Desk for
information that we are looking for. In this case we want to query all incident area
objects to find the handle for one that has a name (sym) “applications.” By reviewing
“Appendix B: Objects and Attributes” in the CA Unicenter Service Desk Modification
Guide you can locate the cr (call request) object which holds requests, incidents, and
problems. The incident area or category attribute of the cr object is the pcat object
(problem category). To retrieve the handle for the “applications” incident area, the
doSelect method call must go against the pcat object and have a where clause sym =
“applications.” The doSelect method will return an XML node that contains the results of
the query. XML parsing is necessary to retrieve the handle for the incident area.
This pseudocode shows how this actual method call would look:
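The sketch below assumes a doSelect signature along the lines described above; the exact parameter order is documented in the Web Services User Guide, and parseHandleFromXml is a hypothetical helper standing in for whatever XML parsing your platform provides:

```
// query the pcat object for the incident area named "applications";
// an empty attribute list returns the default attributes, which include
// the persistent_id (handle)
String[] attributes = new String[0]
String result = doSelect(SID, "pcat", "sym = 'applications'", -1, attributes)
// parse the returned XML node to extract the handle of the incident area
String categoryHandle = parseHandleFromXml(result)
```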
4. Get the handles for the priority and type
To retrieve these two handles, you do not have to call the web service, but instead can
use the documentation which covers commonly used objects (Chapter 3 of the CA
Unicenter Service Desk Web Services User Guide). In the section on Perform Common
Tasks we can find that priority 2 has a handle of pri:503. Recall that the cr object holds
requests, incidents and problems. By default the createRequest function creates a
request and we must explicitly tell it to create an incident. The type attribute in the
request object has a value of R for request, I for incident and P for problem. However,
the web service requires that we pass this value as a handle. This is again a commonly
used object and is provided in the documentation as crt:182. For more information on
setting the type attribute, review the “Note on Using the ITIL Methodology Installation”
in the CA Unicenter Service Desk Web Services User Guide.
5. Create the incident
Now that we have the handles for all the incident attributes we can create the incident.
To successfully call the createRequest function, the method requires the SID, any of the
required attributes for the request object passed in the attrVals[] array, and any
required properties based on the incident area (category) in the propertyValues[] array.
The last three parameters in the method call are used to retrieve attributes from the
ticket once it is created. The attributes[] array is used to specify specific attributes to
retrieve while newRequestHandle and newRequestNumber retrieve the handle and
ref_num of the new ticket as string values.
This pseudocode shows how this actual method call would look:
// Assign the name-value pairs to the attrVals array of strings. The
// customerHandle and categoryHandle variables hold the handles returned
// from previous function calls in this example.
// Create empty array-of-strings variables for the properties and return
// attributes, as we are not passing data. The method still requires passing
// array-of-string objects in the function call.
String newRequestHandle = ""
String newRequestNumber = ""
// In the actual function call, the creator handle is left blank, which tells
// the method to default to the logged-in user of the web service. The
// template parameter is left blank as we are not setting that value.
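Putting the pieces together, the call might look like the following pseudocode. The parameter order is illustrative, and the attribute names "customer" and "category" are assumed names for the affected end user and incident area attributes of the cr object; verify both against the Web Services User Guide and Appendix B of the Modification Guide:

```
// name-value pairs: pri:503 is priority 2, crt:182 is the incident type
String[] attrVals = { "customer", customerHandle,
                      "category", categoryHandle,
                      "priority", "pri:503",
                      "type",     "crt:182" }
String[] propertyValues = new String[0]
String[] attributes = new String[0]
// creator handle "" defaults to the logged-in web service user;
// template "" means no template is applied
createRequest(SID, "", attrVals, propertyValues, "", attributes,
              newRequestHandle, newRequestNumber)
```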
6. Logout
The logout method ends the session for the SID that was created during login. The SID
can no longer be used.
This example is now complete and a new priority 2 incident should have been created in CA
Unicenter Service Desk for employee123, and should have been assigned to the
applications category.
Updating an Incident
There are several different methods provided to update attributes and perform actions such
as transferring or escalating a ticket. These methods perform actions or events in CA
Unicenter Service Desk that can trigger the support process already defined in the tool.
Below is an example of using the three most often used action methods: transfer,
changeStatus, and createActivityLog. It walks through the steps of updating the incident
created in the previous example. The incident will be transferred to analyst123, a log
comment activity will be added, and the status will be changed to Work In Progress.
Before going through this example, review chapters 3, 4, and 5 in the CA Unicenter Service
Desk Web Services User Guide, which describes the web service and its methods.
1. Login
As noted in the previous example, a valid call to login is necessary to acquire the SID
to use in the other methods.
2. Get the handles for the assignee and creator
To transfer the ticket we first need to acquire the handle for the analyst that will be the
new assignee. The assignee's userid is analyst123 so we pass this string value along
with the SID to the getHandleForUserid method. This method will return the handle for
the assignee as a string that we will use later. Additionally we need the user's handle
that will perform this transfer activity, the creator. To demonstrate that it was done
through the web service, we'll use the userid webserviceuser. Another call to the
getHandleForUserid function is required to retrieve the handle for this second user.
3. Transfer the incident
With the assignee, creator, and incident handles, we now have all the information we
want to update during the transfer. The transfer method allows updating of the
assignee, group, and organization attributes. One or more of these attributes can be
updated at a time. There are several Boolean parameters to set that tell the method
what to update. Because we want to set only the assignee, we need to pass a Boolean
True value in the setAssignee parameter and False for setGroup and setOrganization.
This pseudocode shows how this actual method call would look:
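A sketch of the transfer call in pseudocode; the parameter order and the grouping of each Boolean flag with its value are illustrative, so check the method description in the Web Services User Guide before coding against it:

```
// transfer the incident to analyst123; only the assignee is being set
transfer(SID, creatorHandle, incidentHandle,
         "Transferred through the web service",
         true,  assigneeHandle,   // setAssignee = True, new assignee
         false, "",               // setGroup = False, no group change
         false, "")               // setOrganization = False, no change
```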
4. Log a comment
The next step in this example is to log a comment on the previously created incident.
The createActivityLog method can create a variety of activities. To tell the method
the type of activity to create, we need to pass the LogType. In the case of a comment
the LogType is LOG. This LogType and others are documented with the method
description in the CA Unicenter Service Desk Web Services User Guide. The method call
also has parameters for TimeSpent and Internal. The TimeSpent parameter takes an
integer value for the amount of time spent on this individual activity. In this example
we'll pass 0, which is the default, to show that it was part of an automated process
done through the web service. The internal parameter, when set to True, allows a
comment to be visible to analyst users only.
This pseudocode shows how this actual method call would look:
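A sketch of the createActivityLog call in pseudocode, following the parameters described above (the order is illustrative):

```
// LogType "LOG" designates a comment activity; TimeSpent 0 is the default;
// Internal = false makes the comment visible to all users, not just analysts
createActivityLog(SID, creatorHandle, incidentHandle,
                  "Comment added through the web service",
                  "LOG", 0, false)
```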
To update the status, we need to retrieve the handle of the new status using the
doSelect method. The new status name or sym is Work In Progress. See Appendix B in
the CA Unicenter Service Desk Modification Guide to locate the request status object:
crs.
This pseudocode shows how this actual method call would look:
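A sketch of the doSelect call against the crs object; as before, parseHandleFromXml is a hypothetical helper for extracting the handle from the returned XML:

```
// query the request status object (crs) for the "Work In Progress" status
String result = doSelect(SID, "crs", "sym = 'Work In Progress'", -1,
                         new String[0])
String statusHandle = parseHandleFromXml(result)
```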
The final step in this example is to change the status of the incident. From the previous
steps we have handles for the creator, incident, and status. This is all the information
we need to call the changeStatus method.
This pseudocode shows how this actual method call would look:
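A sketch of the changeStatus call in pseudocode (parameter order illustrative):

```
// change the incident's status to the handle retrieved in the previous step
changeStatus(SID, creatorHandle, incidentHandle,
             "Status changed through the web service", statusHandle)
```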
7. Logout
Logout of the web service using the logout method and invalidate the SID.
This example is now complete and the incident has a new assignee, a new status, and a logged comment.
The advantage of using the web service in this case is that all the notifications and
escalations will take place. For example, if a service event is configured to fire when an
incident remains unassigned for an hour, that event will not trigger, because the transfer
performed through the web service assigns the ticket just as an analyst would in the
web interface.
Searching the Knowledge Base
CA Unicenter Service Desk Knowledge Tools use the same web service as CA Unicenter
Service Desk and therefore leverage the same login/logout session parameters. The most
common task performed using the knowledge-related functions is a search of the
knowledge base. The search method takes a search string as input to query the knowledge
base. This method can be used by customers who are licensed for either Keyword Search or
CA Unicenter Service Desk Knowledge Tools; however, the natural language (NLS) type
search is only available with CA Unicenter Service Desk Knowledge Tools. The search
method takes a variety of input parameters, allowing a user to define the type of search
that will be done as well as how the results will be returned.
The pseudocode example below shows how a call to the search method is formatted,
followed by an explanation of the key parameters.
integer resultSize = 10
integer maxDocIDs = 20
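A sketch of the search call itself, following the order in which the parameters are explained below; the parameter names and positions are illustrative, so confirm the exact signature in the Web Services User Guide:

```
String propertyList = "id,Title,Summary"
// sort by relevance in ascending order; do not retrieve related categories;
// searchType, matchType, and searchFields control how the search is run;
// the two trailing empty strings accept the default knowledge categories
// and add no extra where clause
result = search(SID, "printer toner error", resultSize, maxDocIDs,
                propertyList, sortByRelevance, true, false,
                searchType, matchType, searchFields, "", "")
```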
The goal of this example is to return knowledge documents that are relevant to the search
string “printer toner error.” The search method call takes several key parameters that
define how the search will be performed as well as constrain the results that are returned.
The first parameter in the function call is the SID, which designates proper access to the
web service. The SID is followed by the search string, which in this example is
"printer toner error."
In this example the resultSize variable is used to set the limit of knowledge documents with
their appropriate attributes to return to 10. This parameter differs from the maxDocIDs
variable which sets the maximum number of knowledge document IDs to return based on
the search. The search method call in this example will return the first 20 relevant
document IDs, and, for the top 10 results, will return all the attributes listed in the
propertyList. The extra 10 document IDs could be useful later if additional results need to
be retrieved.
The propertyList variable specifies the knowledge document attributes to return for those
top 10 documents. In this example id, Title, and Summary are retrieved. For a list of all the
knowledge document (kd) object attributes, review Appendix B in the CA Unicenter Service
Desk Modification Guide.
The next two parameters in the function call set how the results will be sorted. The default
search orders results by their relevance. To ensure that the results are in ascending order,
the next value passed is True. Therefore the output of the search will show the top 10
documents ordered from most relevant to least relevant.
The next parameter passed is a Boolean False value to tell the search method not to
retrieve related categories for the returned documents. Passing True instead is helpful
if the results need to use or display the other categories that the documents reside in.
SearchType, matchType and searchFields all designate how the search will be performed
against the knowledge base. SearchType designates the type of search as either natural
language or keyword. MatchType is used to specify if the search should be an “Or,” “And,”
or “Exact Match” search. SearchFields is an integer value that tells the method which fields
should be searched. In this example the title, summary, and problem fields will be used to
find the keywords “printer toner error.”
The last two string parameters are left empty in this example to take the defaults. The first
empty string can be used to limit the search to one or more knowledge categories. The
default is to search all knowledge categories. The last empty string can be used to add an
additional “where” clause on the search.
Simplified Web Services Access
The first example showed how the web service can be used to create an incident. This can
become a tedious task if there are lots of attributes to set during the call. The createTicket
and createQuickTicket methods make this task much easier for simple tickets and defined
integrations.
createQuickTicket
This method is best used when a simple ticket needs to be created with only a brief
description. The ticket type is chosen based on the user's preferred ticket setting with their
access type. The method call only requires a SID, customer handle, and description. This
makes this method very straightforward and easy to use but lacks the detail that the more
advanced methods provide. createQuickTicket is best used for simple or test integrations
that only need to report a ticket to CA Unicenter Service Desk. This is not recommended for
large scale integrations as it creates tickets with little detail for analysts to use in
troubleshooting.
createTicket
Like createQuickTicket this is a method that simplifies the process of creating a ticket.
createTicket, however, uses the web service policy settings within CA Unicenter Service
Desk administration. These policies allow a CA Unicenter Service Desk administrator to
define the ticket type, frequency, and problem type that an integration can create. This not
only simplifies the integration but also can prevent ticket floods if a problem repeatedly
occurs with an integrated application.
A web services policy controls the access that the user (or integrated application) has when
communicating with CA Unicenter Service Desk and CA Unicenter Service Desk Knowledge
Tools through the web services. As seen in the above screen shot, the policy defines any
limitations on accessing or updating data with the web service. The Access Control tab
specifies the number of operations that can occur within an hour. For example a user under
this policy can create a maximum of 60 tickets an hour, has no limitation on creating
objects, and cannot create attachments, query data, or search the knowledge base.
This screen shows a sample web services problem type that defines how duplicate tickets
are handled.
A problem type links to a policy and defines the ticket type and template that will be
created when this problem occurs. Additionally, the problem type can define what to do
when a duplicate ticket is encountered. In this example a new ticket is not created, but an
activity log is added to the previously created ticket.
The example below walks through the steps of accessing the sample web services policy
and using it to report an account lockout error with the expense application.
1. Login using the loginService method
This is slightly different from the previous examples. When using policies, the user
must define the policy they are using at login using the loginService method. In
addition to the username and password the method requires the code of the policy that
will be used.
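A sketch of the loginService call in pseudocode; "SAMPLE_POLICY" is an illustrative policy code, and the parameter order should be confirmed against the Web Services User Guide:

```
// the third parameter is the code of the web services policy to use
int SID = loginService("webserviceuser", "password", "SAMPLE_POLICY")
```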
2. Create the ticket
To report the ticket, only one other method call is required. The createTicket method
only requires the SID and problem type to create the new ticket. This method utilizes
the web services policy and problem types defined in CA Unicenter Service Desk which
greatly simplifies this process. Additional information can be passed to this method
such as a description, asset, and end user but none of these fields are required.
// define the string variables for the new ticket number and handle
String newTicketHandle = ""
String newTicketNumber = ""
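The call itself might then look like the following pseudocode sketch. The parameter order is illustrative, and "EXPENSE_LOCKOUT" is a hypothetical problem type code that the CA Unicenter Service Desk administrator would have defined:

```
// the optional description, asset, and end user parameters are left blank;
// the policy and problem type definitions supply everything else
createTicket(SID, "EXPENSE_LOCKOUT",
             "Account lockout error in the expense application",
             "", "", newTicketHandle, newTicketNumber)
```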
This example is now complete and demonstrates the significantly reduced amount of work
that is necessary when using web services access policies. Policies are highly recommended
when an integration is being built by a user who has little or no experience with CA
Unicenter Service Desk. The policy hides the CA Unicenter Service Desk complexity and
allows the user to focus just on reporting the problem and letting the CA Unicenter Service
Desk administrator define how that integration will be interpreted.
For more information on web services policies and the createTicket method, review the
“Simplified Web Services Access” section in Chapter 2 of the CA Unicenter Service Desk
Web Services User Guide as well as the Online Help in the CA Unicenter Service Desk web
interface.
Using Public Key Infrastructure Authentication and Interacting with the Web
Interface
This section details how to use the web services advanced features for secure
authentication as well as interacting with the web interface. The web services Public Key
Infrastructure (PKI) features provide an additional layer of security when authenticating to
CA Unicenter Service Desk. For additional information on PKI authentication and the
loginServiceManaged method, see the CA Unicenter Service Desk Web Services User Guide.
This information is provided to show detailed steps and sample code that leverages the web
services ability to generate certificates and then use these generated certificates to access
the web services. Certificates are often used in environments that require a higher level of
security such as having external users or vendors accessing the web services.
In the example shown below we complete the login process using the CA Unicenter Service
Desk certificate and then conduct several common web services calls. The more interesting
of the two is the getBopsid() web services method call, which allows us to obtain a token
that is linked to a specific user. This token can be used to login to the CA Unicenter Service
Desk web interface as the linked user without being prompted for a password. This allows
seamless integration to be enabled between different applications. It should be noted that
the generated BOPSID token does expire after around 10 seconds, meaning that it must be
used promptly.
Prerequisite
Be sure to use the AXIS Tool known as WSDL2Java to generate the required stub classes. If
you haven't previously created the stub classes, then continue to the next section entitled
“Generating Stub Classes with AXIS Tool WSDL2Java” for a sample script along with the
steps for generating the stub classes. If you have already completed this step skip to the
following section entitled “Creating and using a PKI Certificate.”
If you need to generate the stub classes for the CA Unicenter Service Desk web services,
open a command prompt and change the directory (cd) to the X:\program files\CA
directory, then run the dir /x command to see what the short form of the CA Unicenter
Service Desk directory is. In the example screenshot below the short name is SERVIC~1.
Note what your CA Unicenter Service Desk directory is as we will need this information
later:
Search for javac.exe on all of the server's local drives. If you locate one, then note its path
as we will need to reference it in our bat file shortly. If you do not find one, go to
http://java.sun.com/j2se/1.4.2/download.html; locate the Java J2SE SDK; and click on
the link that says Download J2SE SDK. Then install the SDK. This step may require a
reboot.
@echo off
::##################################################################
::# Simple bat file to Build r11 USD Stub classes
::# Use it to create the required USD r11 Java Web Services classes
::#
::# Usage: build_wsdl
::#################################################################
@REM Update this with the PATH to USD NX_ROOT location
@SET USD_SHORT_PATH=C:\Progra~1/CA/Servic~1/
@REM Update this with the PATH to the JDK java.exe and javac.exe compiler
@REM (javac.exe is used in the 2nd part of this file)
@SET JAVA_EXE="C:\j2sdk1.4.2_13\bin\java.exe"
@SET JAVAC_EXE="C:\j2sdk1.4.2_13\bin\javac.exe"
@REM Update this to the path to the USD r11 NX_ROOT/java/lib location
@SET USD_TOMCAT=%USD_SHORT_PATH%java/lib
@SET CP=%USD_TOMCAT%/axis.jar;%USD_TOMCAT%/commons-discovery.jar;%USD_TOMCAT%/commons-logging.jar;%USD_TOMCAT%/jaxrpc.jar;%USD_TOMCAT%/saaj.jar;%USD_TOMCAT%/log4j-1.2.8.jar;%USD_TOMCAT%/xml-apis.jar;%USD_TOMCAT%/xercesImpl.jar;%USD_TOMCAT%/wsdl4j.jar;%USD_TOMCAT%/axis-ant.jar
@cd WEB-INF\classes
%JAVA_EXE% -cp %CP% org.apache.axis.wsdl.WSDL2Java
http://localhost:8080/axis/services/USD_R11_WebService?wsdl
@cd ..\..
::##################################################################
::# This next section compiles the Service Desk stub code
::# Once complete, you should recycle tomcat with the following
::# commands or by recycling Service Desk:
::# pdm_tomcat_nxd -c STOP
::# pdm_tomcat_nxd -c START
::##################################################################
@SET CP=".\classes;%CP%"
@SET STUBS_DIR=classes\com\ca\www\UnicenterServicePlus\ServiceDesk
@cd WEB-INF
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\ArrayOfInt.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\ArrayOfString.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\ListResult.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\USD_WebService.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\USD_WebServiceLocator.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\USD_WebServiceSoap.java
%JAVAC_EXE% -classpath %CP% -deprecation -d classes
%STUBS_DIR%\USD_WebServiceSoapSoapBindingStub.java
@cd ..
#!/bin/ksh
# Update this to the path of the USD Java lib directory
USD_JAVA_LIB=/opt/CAisd/java/lib
-----------End of Script---------------
After creating the batch file, run it from the command line. Ensure that you are running the
file from the NX_ROOT\bopcfg\www\CATALINA_BASE\webapps\axis\ directory. After
running this batch file from the command prompt you should now have the stub classes in
place and compiled as shown in the screen shot below. The last step will be to run the
following commands to recycle Apache Tomcat (or you can simply recycle CA Unicenter
Service Desk):
pdm_tomcat_nxd -c STOP
pdm_tomcat_nxd -c START
This command creates the DEFAULT.p12 certificate file in the current directory. The
certificate will have a password equal to the name of the web services policy that
already exists in CA Unicenter Service Desk (in this case DEFAULT). This command will
also add the certificate's public key to the pub_key field (public_key attribute) in the
sapolicy table/object.
3. Open the CA Unicenter Service Desk web interface and navigate to Administration ->
Web Services Policy-> Policies.
4. In the DEFAULT web services policy, insert the Proxy Contact (in this case ServiceDesk)
and confirm that the DEFAULT policy record field shows Has Key = YES.
6. Create a new file named pkilogin.htm in the
NX_ROOT\bopcfg\www\CATALINA_BASE\webapps\axis directory and copy the code
below into the new file:
<html>
<head>
<title>PKI Login</title>
<style type="text/css">
TD{font-family: Verdana;}
.font1{color:#336699;font:bold 14px;}
.font2{color:#6A7A94;text-decoration:none;font:bold 11px;padding: 0px 6px 0px 6px;}
</style>
</head>
<body>
<form name="frmLoginPKI" method="post" action="pkilogin.jsp">
<table width=99% cellpadding=0 cellspacing=0>
<tr><td colspan=2><hr></td></tr>
<tr>
<td CLASS="font1" colspan=2 valign="center" align="center">Log in
using PKI and lookup User Handle</td>
</tr>
<tr><td colspan=2><hr></td></tr>
<tr><td colspan=2> </td></tr>
<tr><td colspan=2> </td></tr>
<tr>
<td CLASS="font2" align=right>Server Name:</td>
<td><input type=text id=server
name=server value="localhost"></td>
</tr>
<tr>
<td CLASS="font2" align=right>Port:</td>
<td><input type=text id=port
name=port value="8080"></td>
</tr>
<tr>
<td CLASS="font2" align=right>Directory:</td>
<td><input type=text id=dir name=dir
value="C:\Program Files\CA\Service Desk\bopcfg\www\CATALINA_BASE\webapps\axis"></td>
</tr>
<tr>
<td CLASS="font2" align=right>Access Policy Name:</td>
<td><input type=text id=accessPolicy
name=accessPolicy value="DEFAULT"></td>
</tr>
<tr>
<td CLASS="font2" align=right>UserID to Lookup:</td>
<td><input type=text id=userId
name=userId value="ServiceDesk"></td>
</tr>
<tr>
<td CLASS="font2" align=right>Protocol (http/https):</td>
<td><input type=text id=protocol
name=protocol value="http"></td>
</tr>
<tr><td colspan=2> </td></tr>
<tr><td colspan=2> </td></tr>
<tr>
<td colspan=2 align=center><input type="submit" value="Log me
in!"></td>
</tr>
</table>
</form>
</body>
</html>
7. Create a new jsp file named pkilogin in the
NX_ROOT\bopcfg\www\CATALINA_BASE\webapps\axis directory and copy the code
below into the new file, pkilogin.jsp.
<%@ page
import="com.ca.www.UnicenterServicePlus.ServiceDesk.USD_WebServiceLocator" %>
<html>
<head>
<title>Login...</title>
<style type="text/css">
TD{font-family: Verdana;}
.font1{color:#336699;font:bold 14px;}
.font2{color:#6A7A94;text-decoration:none;font:bold 11px;padding:
0px 6px 0px 6px;}
</style>
</head>
<body>
<table width=99% cellpadding=0 cellspacing=0>
<tr><td colspan=2><hr></td></tr>
<tr>
<td width="10"> </td>
</tr>
<tr><td colspan=2><hr></td></tr>
<tr><td colspan=2> </td></tr>
<tr>
<td width=10> </td>
<td class=font2>
<p>
<%
int SID;
String userHandle;
String bopSid;
try
{
// Creating a password to be used when extracting the private key; it is the
// Access Policy name by default when the pdm_pki utility is used.
// Extracting the private key: the first parameter is the alias associated
// with the key, the second parameter is the password to extract the key
// (this defaults to the Access Policy name as well when using pdm_pki)
Signature s = Signature.getInstance("SHA1withRSA");
s.initSign(key);
s.update(accessPolicy.getBytes());
// Logging into Service desk using the access policy as the first parameter
SID=Integer.parseInt(sessionid);
out.print("Got user handle for " + userId + " of '" + userHandle + "'<p>");
out.print("<a href=" + protocol + "://" + server + ":" + port +
    "/CAisd/pdmweb.exe?BOPSID=" + bopSid +
    " target=_new>Click here VERY SOON to login seamlessly using the BOPSID as user " +
    userId + "</a><p>");
// Now logout
usd.logout(SID);
}
catch(Exception e)
{
out.print("Error: " + e.getMessage());
}
%>
</td>
</tr>
<tr><td colspan=2> </td></tr>
<tr><td colspan=2> </td></tr>
<tr>
<td colspan=2 align=center>
<input onclick="window.location='pkilogin.htm';"
type="button" value="Try Again" tabindex=1>
</td>
</tr>
</table>
</body>
</html>
Note: The Directory field is the location where the certificate file that was previously
created can be found.
9. After clicking the Log me in! button, a results page should be displayed that shows the
details of the PKI authentication and other web services calls. The URL that is
embedded in the results page is a hyperlink to the CA Unicenter Service Desk web
interface that will provide a seamless login (no login/password required) using the
BOPSID functionality that was called after the login process completed. The getBopsid
method provides for opening the web interface without a login and is often used in
integrations. Since the BOPSID is a limited life token that is linked to a specific user,
you need to click on the URL within a few seconds in order for the login process to
complete successfully. The format of a URL using a BOPSID is:
http://<server name>:<port>/CAisd/pdmweb.exe?BOPSID=<BOPSID value>
Common Pitfalls
This section covers several of the common mistakes that users encounter when using the
web services.
Using Handles
The most common error that users encounter is passing a sym value to a function rather
than a persistent_id or handle. For example when a customer calls the createRequest
function they pass a list of attributes that can include the request or incident's priority. The
value passed with priority must be a handle for that priority such as pri:502, instead of the
actual priority value of 3. This mistake often appears in the SOAP error message “Bad
Handle." For a list of out-of-the-box handles, consult the Perform Common Tasks section in
Chapter 3 of the CA Unicenter Service Desk Web Services User Guide.
Arrays of Strings
Many of the methods in the web service require an array of strings. An array of strings is a
data type that is used to pass data to the CA Unicenter Service Desk and CA Unicenter
Service Desk Knowledge Tools web service that can hold zero or more string values.
Note: When passing data to functions that require an array of strings, the function will
accept an empty array of strings but will not accept a single empty string. For example,
passing “” to a function that requires an array of strings will result in a data type error.
Below are several examples of how to define an empty array of strings that can be used to
pass to a method:
■ C#
■ Java
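As a sketch in Java (the C# equivalent is `new string[0]`), an empty array of strings can be defined with a zero-length array:

```java
public class EmptyArrayExample {
    public static void main(String[] args) {
        // a zero-length array of strings: accepted by web service methods
        // that require an array of strings, unlike a single empty string ""
        // which causes a data type error
        String[] emptyArray = new String[0];
        System.out.println(emptyArray.length); // prints 0
    }
}
```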
Below is an example of how an array of strings showing name value pairs is formatted in
XML:
<ArrayOfString>
<String>assignee</String>
<String>cnt:38293</String>
<String>description</String>
<String>description text</String>
<String>priority</String>
<String>pri:38903</String>
</ArrayOfString>
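A sketch of how such a name-value array might be built in Java before being passed to a method; the handle values shown are the illustrative ones from the XML above:

```java
public class AttrValsExample {
    public static void main(String[] args) {
        // name-value pairs: each attribute name is immediately followed
        // by its value (handles for reference attributes, free text otherwise)
        String[] attrVals = {
            "assignee",    "cnt:38293",        // handle of the assignee contact
            "description", "description text", // free-text value
            "priority",    "pri:38903"         // handle of the priority
        };
        System.out.println(attrVals.length); // prints 6
    }
}
```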
■ Setting up CA Unicenter Service Desk so that users can access the application across a
network (including, where required, through a firewall)
■ Controlling permissions and access rights within the CA Unicenter Service Desk
application itself
In today's corporate landscape, it has become more and more important for corporate data
networks to provide strong security. Technology resources must be protected and yet must
still provide the appropriate level of access to those who are entitled to use them. Many
techniques can be used to build security into the network architecture. Firewalls, network
address translation, and demilitarized zones are typical elements used to enforce a secure
network.
A service desk system must be able to work within the design of the network architecture
and abide by the policy of the security elements of the network. It must do this, and yet let
internal and—if need be—external users obtain access to such services. This section
discusses what network security considerations you may face when implementing CA
Unicenter Service Desk and how to deal with them.
Ports
CA Unicenter Service Desk uses a variety of ports for a multitude of distributed processes
that need to communicate with each other. Each of these processes must be able to
uniquely address any other processes that it may communicate with. Much of the
architectural design of CA Unicenter Service Desk, when dealing with network security, is
about managing port communication.
Listed below are the key elements of CA Unicenter Service Desk that use ports:
■ Slump
> The primary means for communication in CA Unicenter Service Desk is slump.
This messaging protocol is controlled by a daemon run on the Primary Server.
All processes will register with the slump daemon across the network using the
slump port. Details on how slump works are explained in the sections below.
> By default slump uses the following ports: 2300 (for proctor communication),
and a series of other ports starting with 2100 and higher, depending on how
many processes are running.
■ Fast Channel
> Once a process has registered with the slump, some processes are coded to
communicate directly with other specific CA Unicenter Service Desk processes
through Fast Channel. CA Unicenter Service Desk processes negotiate Fast
Channel connections on a variable range of ports. Some messaging, such as
'app_gone' termination messages and the slump heartbeat, still passes through
slump_nxd when a Fast Channel is open. This activity can be seen in the
message counts in slstat output. Compared to the data passing over the Fast
Channel, this activity is insignificant.
> By default Fast Channel uses a series of other ports starting with 2100 and
higher, depending on how many processes are running.
> Note that if CA Unicenter Service Desk servers are intended to communicate
across a firewall, the fixed sockets setting in the NX.env file should be
uncommented and set (details discussed later in this chapter). This allows for a
more predictable port range.
■ DBMS
> Primary Servers must also be able to communicate with the DBMS that is
hosting the MDB. The DBMS client connection also has a defined port.
> By default SQL Server uses port 1433, Oracle uses port 1521, and Ingres uses
port 1524.
■ User Interface
> CA Unicenter Service Desk primarily uses a web interface. All web users of
CA Unicenter Service Desk will connect via http, using the port defined by the
CA Unicenter Service Desk Web Server.
336 Security
Network Security Concepts
> By default most Web Servers use port 80 for http. Tomcat for
CA Unicenter Service Desk uses 8080.
> Web Servers using SSL by default use port 443 for https communication.
■ Web Services
> The SOAP based Web Services for CA Unicenter Service Desk and CA Unicenter
Service Desk Knowledge Tools also use http using the port defined by the
CA Unicenter Service Desk Web Server.
> By default CA Unicenter Service Desk Web Services use the Tomcat port 8080.
■ Email
> CA Unicenter Service Desk uses email in a couple of ways. Notifications can use
SMTP to send out email notifications. The mail eater can receive email. Email
protocols such as SMTP, IMAP, and POP3 have well-defined ports.
> By default SMTP uses port 25, IMAP uses port 143, and POP3 uses port 110.
■ LDAP
> The LDAP integration will need to communicate with the LDAP directory server.
This communication uses the port for LDAP.
■ Integrations
> CA Unicenter Service Desk may need to connect to other web services such as
CA Workflow. The ports used by the interfaces for such systems are defined by
those systems.
> Typically many applications' web services are served using http on port 80.
Although many of the ports listed here are commonly used with their default numbers, do
not assume that your implementation uses these ports. Corporate network security
standards often require using different ports for many applications. It is
also possible that other applications may already be using some ports required by
CA Unicenter Service Desk. Check with the appropriate corporate network and security
groups to see what ports are in use and what ports can be used.
Setting a non-default port for services and processes provided by CA Unicenter Service
Desk will be explained later in this chapter. For non-CA Unicenter Service Desk services and
processes, check with the system administrator for such systems to see what ports are
currently in use.
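When verifying port availability with the network team, a simple reachability probe can help. This Java sketch (the host name and port in `main` are placeholders, not values from this document) attempts a TCP connection with a timeout:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder host; 2100 is the default starting slump port.
        System.out.println(isOpen("sd-primary.example.com", 2100, 3000));
    }
}
```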
Firewalls
A firewall is a system or group of systems that enforces an access control policy between
two networks. A firewall typically takes one of two forms:
Both types of firewall allow the user to define access policies for inbound connections to the
computers they are protecting. Many also provide the ability to control what services
(ports) the protected computers are able to access on the Internet (outbound access).
If CA Unicenter Service Desk has processes that exist on both sides of a firewall, the
firewall must be configured to allow network traffic to flow on the necessary ports. Which
components exist on which side of the firewall determines what the port access policy
should be.
NAT
Network Address Translation (NAT) provides a way to hide the IP addresses of a private
network from a public network, while still allowing computers on that network to access the
public network. NAT can also be used to consolidate the use of public IP addresses. Using
NAT, one or more devices on the private network can be made to appear as a single IP
address to the outside public network. NAT is sometimes considered a poor man's firewall
since it obscures the private network from the public world. Quite often routers and
firewalls will include NAT as a feature.
In the diagram below, you can see how multiple IP addresses from the private network
(192.168.10.x) are translated to the public network (141.202.100.x). In this particular
example, all private network addresses are translated to a single public IP address. Access
from the private network to the public network in this example is straightforward. The
router understands which packets were sent from which private IP address and can route
any response back to the right private IP address. However, access from the public network
to the private network can be challenging, since there is no way to distinguish which private
IP address should be used. This can be solved using a technique known as port forwarding,
which can also map specific ports on the router's public network address to specific IP
addresses and ports on the private side.
[Diagram: NAT address translation]
192.168.10.100 translates to 141.202.100.72
192.168.10.102 translates to 141.202.100.72
CA Unicenter Service Desk sends messages between processes that need to absolutely
resolve the locations of both ends of the communication. If one process is on the private
side of NAT, the process on the public side may not be able to route any messages to the
private side process.
Port forwarding could solve this issue. Here the application on the public side of the router
is configured to send its network traffic to the IP address/DNS name of the router as if it
was the server on the private side. The application on the public side specifies a port that is
unique to the application on the private side. When the router sees the network traffic on
that port, it automatically forwards the traffic to the appropriate private side server. The
catch here is that the port used on the public network must be unique to the application on
the private network. The router may not know how to distinguish between packets intended
for other applications using that port. Consult with the corporate network group on how to
best configure NAT and port forwarding.
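Conceptually, a port-forwarding rule set is a map from a public-side port to a private-side target. The following Java sketch illustrates that lookup (the rules and addresses echo the NAT example above and are illustrative only, not router configuration):

```java
import java.util.Map;

public class PortForwardTable {
    // Conceptual lookup: public-side port -> private-side "host:port" target.
    static String lookup(Map<Integer, String> rules, int publicPort) {
        return rules.getOrDefault(publicPort, "drop");
    }

    public static void main(String[] args) {
        // Illustrative rules: each public port must be unique to one private server.
        Map<Integer, String> rules = Map.of(
            8080, "192.168.10.100:8080", // web engine on one private server
            2100, "192.168.10.102:2100"  // slump on another private server
        );
        System.out.println(lookup(rules, 8080)); // prints 192.168.10.100:8080
    }
}
```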
DMZ
A Demilitarized Zone (DMZ) is a network that allows private secure networks and public
unsecured networks to share resources and services while minimizing potential security threats.
A DMZ employs the use of multiple firewalls that allow access from public networks to the
DMZ and allow separately controlled access from the private networks to the DMZ. DMZs
may also employ the use of NAT to help manage IP addresses.
In the diagram below, you can see two firewalls that bookend the two access points into the
DMZ. Each firewall has its own policy for managing the network traffic that it will allow into
the DMZ. The policies of both firewalls should not allow any access from the public Internet
to the private corporate network yet allow services hosted in the DMZ to be accessible by
both. Quite often the security policy on the public firewall is much stricter than the security
policy on the private firewall.
When elements of CA Unicenter Service Desk are run within the confines of a DMZ, keep in
mind where the CA Unicenter Service Desk servers are in relation to the firewalls.
Understand what ports are going to be in use across that firewall, and make sure that the
firewall policy allows network traffic on the appropriate ports. For example, if a server
hosting the CA Unicenter Service Desk web engine is in the DMZ, the public firewall should
allow web traffic from external web browsers to the server hosting the web engine. The
private firewall should allow the server hosting the web engine in the DMZ to communicate
with CA Unicenter Service Desk servers in the private network.
Distributed Processes
The architecture for CA Unicenter Service Desk allows for various components of the
system to be distributed in a heterogeneous (that is, platform-neutral) infrastructure. The
various processes/daemons of CA Unicenter Service Desk do not have to reside and be
configured all on one server. This architecture allows CA Unicenter Service Desk to be
scalable and more fault tolerant. However, this can add much complexity to the
architecture, especially when implemented in a complex network.
There are two different types of CA Unicenter Service Desk servers that can be installed:
primary servers and secondary servers. The various processes that can run in CA Unicenter
Service Desk are contained in one of these two types of servers. In addition to the primary
and secondary servers, there can be additional servers in the architecture that run such
things as the database, email, or LDAP, as well as CA remote common components such as
CA Identity Manager and CA Workflow. These should also be considered part of the
CA Unicenter Service Desk architecture.
The architecture of CA Unicenter Service Desk allows you to combine primary or secondary
servers with some of these additional remote components, or distribute each of them onto
separate physical (or virtual) servers. It is also useful to note that, in a single instance of
CA Unicenter Service Desk, there can only be one primary server, but there can be many
secondary servers. To establish communication between the primary and its secondary
servers, each secondary server runs a process called proctor that listens for the primary server.
When a primary server comes online it will know about its secondary servers and make
initial contact using proctor. Once the link has been established, processes running on all
servers will use the slump and Fast Channel ports for further communication.
Slump and Fast Channel ports are opened as needed by the various processes running on
the primary and all the secondary servers. By default the ports are assigned somewhat
randomly. The first port assigned is the default slump port (typically 2100) and then any
available ports are used subsequently. However, there is a way to make this port
assignment more predictable using the fixed sockets setting in NX.env. When this setting is
enabled, slump will attempt to only open ports that are slightly higher than the starting
slump port. This includes Fast Channel ports that are simply slump ports that processes can
use to communicate with each other directly (as opposed to routing through the slump
daemon on the primary server).
When dealing with the corporate network architecture, it is important to understand where
each of these servers resides in relation to each other and in relation to the security
elements of the network. With that understanding, you can identify the network nodes
(represented by IP address or DNS name) and the ports used in communication between
processes that are affected by the network security elements. Share this information with
network security personnel, who can then configure the network security systems to allow
the appropriate traffic to flow.
In this example there are three key networks to consider: the Internet (public network),
the DMZ, and the private corporate network. CA Unicenter Service Desk Primary Server and
some secondary servers with web engines reside in the private network along with the
database server and an LDAP server. Internal users also have direct access to the corporate
network. Inside the DMZ is a secondary server with a web engine and a Microsoft Exchange
server providing email. All external users have access to the Internet. In the following
example, the option to use default ports has been chosen.
■ On the internal network the CA Unicenter Service Desk Primary Server, the SQL Server,
the CA Unicenter Service Desk Secondary Server, the Active Directory/LDAP Server and
all internal users have no firewalls between them. Therefore:
> The CA Unicenter Service Desk Primary Server can access the SQL Server on
port 1433.
> The CA Unicenter Service Desk Primary Server can access the Active Directory/LDAP
server on port 389.
> The CA Unicenter Service Desk Primary Server can communicate with the
internal CA Unicenter Service Desk Secondary Server on port 2300 for proctor
communication and 2100+ for all slump/Fast Channel communication.
> Internal users can access the web engine on the internal Secondary Server on
port 8080.
■ The private firewall between the DMZ and the internal network must allow for the
following traffic:
> Between the DMZ Secondary Server and the Primary Server: ports 2300 (proctor) and
2100-2200 (expected slump/Fast Channel ports).
Note that the slump fixed sockets setting on the CA Unicenter Service
Desk Primary Server has been enabled by the implementer to allow for
a more predictable port range (2100-2200).
> Between the Exchange Server and the Primary Server: port 25 for SMTP
communication.
> Between the Exchange Server and the Internal users: port 25 for SMTP and
110 for POP3.
■ The public firewall between the Internet and the DMZ must allow for the following
traffic:
> Between the external user web browsers and the web engine on the DMZ
Secondary Server: ports 80 and 8080 for http (for IIS and Tomcat
respectively).
■ The NAT between the Internet and the DMZ must perform the following:
> Reroute all traffic on ports 80 and 8080 bound for a particular DNS name (that
resolves to the router) to the DMZ CA Unicenter Service Desk Secondary
Server.
Most of the port configuration information for CA Unicenter Service Desk resides in the
NX.env file located in the root installation directory on the primary server. Typically this file
is never edited manually since there is an appropriate user interface for configuring most of
the settings contained within NX.env. It is strongly recommended that any change to
NX.env be made through the appropriate UI wherever possible as explained in the sections
below. In addition to the NX.env file, port information is also stored in the web engine
configuration files. These files are managed through a utility called pdm_edit which is
discussed below.
These ports are defined during the configuration process. The General Settings section of
the configuration utility (pdm_configure) allows you to define the starting slump port and
the Proctor port. These settings can also be found in NX.env under NX_SLUMP_NAME (for
slump port) and NX_MGR_PORTNUM (for Proctor).
There is no UI available for configuring the slump fixed sockets. This particular setting must
be enabled through manually editing the NX.env file. This is usually configured simply by
uncommenting the following section in NX.env:
! NX_SLUMP_FIXED_SOCKETS=1
Simply remove the exclamation point (!) and save the file. The CA Unicenter Service Desk
service should be recycled for the setting to take effect.
Note: It is strongly recommended that a backup of the NX.env file be taken before any sort
of manual edit is made.
The DBMS port is another setting that should be configured during the configuration
process using pdm_configure. Depending on the DBMS system used (SQL Server, Oracle, or
Ingres) an appropriate UI will appear in the configuration process.
Initially the port for a web engine installed on a primary server is set during the
configuration process using pdm_configure.
However as additional secondary servers with web engines (and web directors) are added
to the CA Unicenter Service Desk architecture, the ports are configured using the
pdm_edit.pl utility. This utility generates all the appropriate files where port information is
stored and used for system configuration. The utility is self-documenting and instructs the
person making the configuration change where to find and place all the appropriate files in
the last step of the utility's process.
The port used for the LDAP integration can be found in the administration UI of
CA Unicenter Service Desk under Options Manager. A CA Unicenter Service Desk user with
appropriate administration rights can access this section of the system. The LDAP port
information can be found in the LDAP section of Options Manager under the setting:
ldap_port. Also found here is the hostname of the LDAP server. Changing this setting
usually requires a recycle of the CA Unicenter Service Desk service.
The ports used for the email integrations can also be found in the administration UI of
CA Unicenter Service Desk under Options Manager. The email ports information can be
found in the email section of Options Manager under these settings:
maileater_imap_host_port, maileater_pop3_host_port, and mail_smtp_host_port. Also
found here are the hostnames of the email IMAP, POP3, and SMTP servers. Changing these
settings usually requires a recycle of the CA Unicenter Service Desk service.
The port used for the CA Workflow integration can be found in the administration UI of
CA Unicenter Service Desk under Options Manager. The CA Workflow port information can
be found in the CA Workflow section of Options Manager under these settings:
cawf_pm_location, cawf_pm_url, cawf_wl_location, and cawf_wl_url. Note that in these
settings the port is embedded in the URL to the process manager (pm) and the worklist
(wl). Changing these settings usually requires a recycle of the CA Unicenter Service Desk
service.
Authentication
CA Unicenter Service Desk has many ways to authenticate users. This section outlines the
advantages and disadvantages of each. It also explains the misconceptions regarding
authentication and CA Unicenter Service Desk's LDAP integration.
Authentication Methods
CA Unicenter Service Desk has various methods available for user authentication. They
include the following:
■ Open Authentication
■ PIN Authentication
■ Operating System Authentication
■ External Authentication
> HTTP
> CA SiteMinder
> CA Identity Manager
Open Authentication
Open authentication is open to anyone. This method is the least secure but the easiest to
implement. When implemented, any user, regardless of credentials, is granted access to
CA Unicenter Service Desk as long as the user name entered in response to the login
prompt is defined as a contact in the CA Unicenter Service Desk database. No checking is
done against any third-party authentication directory of any nature.
PIN Authentication
PIN authentication is a bit more secure than open authentication and is fairly easy to
implement. Users are granted access to CA Unicenter Service Desk as long as the user
name and corresponding PIN are defined in the CA Unicenter Service Desk contact record.
As with an open system authentication setup, no checking is done against any outside third
party authentication directory. The PIN is defined in the user's contact record as the
Contact ID. One drawback to note is that the data is in clear text and, therefore, is easily
modifiable by anyone who has access to end-user contact records.
Operating System Authentication
Operating system authentication is both secure and easy to implement. Users must have
the authority to log in to the “domain” where CA Unicenter Service Desk is located, and
must be defined in the CA Unicenter Service Desk contact record. One drawback of this
method, as well as the methods documented above, is that end users will always be
challenged for access credentials, even if they provided them earlier in order to access
the domain in the first place.
It is important to make a distinction between the CA Unicenter Service Desk LDAP interface
for importing contact data into CA Unicenter Service Desk and using LDAP for user
authentication. The LDAP data-importing integration affects the creation/updates of
CA Unicenter Service Desk contact information and is established through a combination of
Option Manager settings and contact detail screens. You do not need to set any Option
Manager options to be able to authenticate against an LDAP directory. In fact, the
CA Unicenter Service Desk authentication module (boplogin) actually has no idea what
authentication method is used behind the scenes. In the case of operating system
authentication, boplogin simply calls the operating system with whatever credentials the
user entered and allows/denies access based on the data returned.
If the operating system that boplogin resides on uses an LDAP directory, such as Microsoft's
Active Directory, for authentication of users, boplogin will authenticate to the LDAP-
compliant directory seamlessly via an operating system call.
External Authentication
External authentication is typically more secure than operating system authentication. The
main benefit with this method is that users will not be asked for credentials to access
CA Unicenter Service Desk if they have already been authenticated via an external
authentication method. One drawback is that it is more difficult to implement. But in the
eyes of most security analysts, it is a requirement, regardless of the degree of difficulty.
When external authentication is implemented, CA Unicenter Service Desk will not allow
access unless the end user has been previously authenticated by the configured external
authentication system.
HTTPD (IIS/Apache) and Tomcat are two of the most common applications that boplogin
interfaces with for external authentication. In this scenario, CA Unicenter Service Desk
obtains the user's identity from the HTTP header, specifically the HTTP_REMOTE_USER
variable. Note that if Active Directory is the authentication authority, both IIS and Tomcat
can be configured to use it as the access authority. Some sites have even used Apache in a
similar manner. The directions for setting up IIS or Tomcat to enable CA Unicenter Service
Desk to use them for external (also referred to as pass-through) authentication follow.
1. On the Microsoft server hosting the IIS Web Server, start Internet Information Services
(IIS) Manager from Start, All Programs, Administrative Tools.
2. From Internet Information Services, <IIS Server name>, Default Web Site, CAisd,
right-click and choose Properties.
3. Go to the Directory Security tab, under Anonymous access and authentication control,
and click Edit.
4. Uncheck Anonymous Access, and choose either Basic Authentication or Integrated
Windows Authentication.
Basic Authentication
This authentication will pop up a logon window when a user is trying to access the web site.
You can specify another trusted domain for authentication other than the domain where the
web server resides.
Because this authentication will transfer the password in clear text, SSL needs to be
considered to encrypt the password.
Integrated Windows Authentication
This authentication will not initially pop up a logon window. Instead, it uses the current
Windows logon user's credentials to authenticate. If the initial authentication fails, a logon
window will pop up for a valid user name and password.
This authentication is normally best suited for scenarios in which the web server and users
are in the same domain.
Note: If both Basic Authentication and Integrated Windows Authentication are checked,
Integrated Windows Authentication takes precedence.
The detailed description of these two authentication mechanisms can be found in the IIS
online help document from Microsoft.
1. Download the latest jcifs.jar file from http://jcifs.samba.org and copy it into here:
NX_ROOT\bopcfg\www\CATALINA_BASE\webapps\CAisd\WEB-INF\lib.
2. Add the following to the web.xml file located in
NX_ROOT\bopcfg\www\CATALINA_BASE\webapps\CAisd\WEB-INF. (Add the content
prior to the <!-- USD servlet mappings start --> section.):
<filter>
<filter-name>NtlmHttpFilter</filter-name>
<filter-class>jcifs.http.NtlmHttpFilter</filter-class>
<init-param>
<param-name>jcifs.smb.client.domain</param-name>
<param-value>domainname</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>NtlmHttpFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
3. Cycle Tomcat.
4. Log into the web interface and set the access types to allow external authentication.
5. Run the Tomcat URL and you should log into CA Unicenter Service Desk directly.
You can also use CA SiteMinder for external authentication. Using CA SiteMinder allows you
to continue to take full advantage of any existing CA SiteMinder Security policies you have
implemented at your site. In all three cases, IIS, Tomcat, and CA SiteMinder, the
integration provides “single signon” for an improved user experience.
1. Via the CA Unicenter Service Desk Web Client, log on to Unicenter Service Desk using
an ID that has CA Unicenter Service Desk administrative privileges.
4. From there, click on the Web authentication Tab; then check the Allow External
Authentication check box, and click Save.
5. Do the same for all other Access types in the list per your site's requirements, except
for the Administrator Access type.
When the CA Unicenter Service Desk configuration changes have been completed, the CA
SiteMinder web agent and web server have to be configured to ensure that the web server
populates the HTTP_REMOTE_USER header correctly. Follow the instructions outlined below
to make the appropriate CA SiteMinder configuration changes:
Using the IIS Server Admin Console, select the website instance. Right click to select
properties of the website. Then choose the Directory Security tab and ensure that the
checkbox to allow Basic Authentication is checked. Repeat the process for all web server
instances that the CA SiteMinder agent is set to secure.
Set up a user account on the IIS server host (if the system is stand-alone) or host domain
(if the system is attached to a domain controller) using the Windows User Manager tool.
This account is used by the CA SiteMinder Agent to assert an identity under which to run
requests that IIS receives. The account must be set to never expire and not be subject to
password-expiration requirements.
Using the Windows Security Settings tool, grant the proxy-user account created above the
privilege to “Act as part of the operating system” on the IIS server host.
Using Windows Explorer, browse to the IIS doc root folder used by the IIS web servers on
the IIS server host. Right click and select Properties, Security Settings for the folder
concerned. Add the proxy-user account above, with NTFS permissions to read/execute
content under that doc root folder. Repeat this step for all IIS web-server instance doc-root
folders.
Using the Policy server admin console, select the Agent Config Objects for the IIS web
agents concerned and ensure that the following parameters are configured:
DefaultUserName—Uncomment the parameter name (remove the “#” from the name) if
commented and set the value of this parameter to the name of the proxy-user account
defined on the IIS web-server host above.
DefaultPassword—Uncomment the parameter name (remove the “#” from the name) if
commented and set the value of this parameter to the password of the proxy-user account
defined on the IIS web-server host above.
Using the Policy server admin console, select the Agent Config Objects for the web-agents
concerned and ensure that the following parameter is configured:
SetRemoteUser—Set the value of this parameter to yes so that the web agent populates
the REMOTE_USER variable for authenticated requests.
Note for All Web Server Versions (IIS, Apache, Domino, SunOne/iPlanet)
By default, when the above web-agent configuration parameters are set, the web-server
will set the REMOTE_USER header to the value of the SM_USER/SMUSER header, which
contains the user's CA SiteMinder logonid by default.
If you wish the REMOTE_USER header to be set to the value of a user attribute other than
the user's CA SiteMinder logon ID, then configure an onAuthAccept response in the CA
SiteMinder realm Policies or as a Global Policy that will cause the web-agent to inject a
user-defined attribute value with an HTTP header of your choice into the request; for
example, “HTTP_SSN=<user's ssn-attribute name in the user-directory>.”
Be sure to bind this response to onAuthAccept rules in the relevant CA SiteMinder realms or
in the Global Policy realm. Then set the following attribute in the respective web-agent
configuration objects to ensure that the web-server sets the REMOTE_USER header using
your custom HTTP header value:
RemoteUserVar—Set this parameter value to the name of your custom HTTP header; for
example SSN, for the HTTP header example cited above. The REMOTE_USER header will
then be set by the web-server to the value of the user's SSN attribute rather than their
CA SiteMinder logon ID value, for the example cited above.
After completing the required CA Unicenter Service Desk and CA SiteMinder configuration
changes, validate that the integration is working properly by logging on users whose access
types are set for external authentication.
Once you have proven that the integration is working as expected, update the
Administrator Access type.
Caution: Do not update the Administrator Access type until you have proven that the
integration is working. Doing so ahead of time could prevent you from gaining
administrative access to CA Unicenter Service Desk.
One last external authentication method CA Unicenter Service Desk can use for
authentication is CA's Identity and Access Management (CA IAM) product, CA Identity
Manager. CA Identity Manager is a toolkit that is used to “embed” security into applications.
CA Unicenter Service Desk uses CA Identity Manager as an authentication/authorization
mechanism as well as an LDAP repository. It interfaces with CA Identity Manager
programmatically, rather than through the HTTP headers required by the other methods.
The biggest advantage you achieve by using CA Identity Manager over the other methods is
that one instance of CA Identity Manager can manage user authentication across multiple
directory stores. It is also the authentication provider and contact repository for
CA Workflow. One disadvantage is that CA Unicenter Service Desk will always prompt for
access credentials regardless of whether the user was authenticated previously.
The CA Identity Manager integration for CA Unicenter Service Desk is described in the
Integrations chapter of this book.
Permissions and Access: Access Types, Function Access, Form Groups, and Data Partitions
Configuring security within CA Unicenter Service Desk will allow an administrator to control
access to all aspects of the CA Unicenter Service Desk application. This includes
determining which contacts are granted and/or denied access at varying levels within the
application, what kind of access is given, and what forms are viewable to users.
For more information, please refer to the CA Unicenter Service Desk r11 Administrator
Guide, Chapter 4: Policy Implementation. In addition you can refer to the CA Unicenter
Service Desk online help, which is available through the application itself.
Other areas of security, such as web authentication and permissions for CA Unicenter
Service Desk Knowledge Tools, are discussed in separate areas of this book.
Security is important for any service desk for a number of different reasons. Because
different types of groups within your organization—or even multiple organizations—may all
share one individual CA Unicenter Service Desk instance, it is very important to map out
your security correctly to ensure data integrity and compliance with relevant regulations and
laws. Privacy is often a major consideration for service desks. Security needs to be
established so that users are given permission or access to only those areas of the
application to which they are authorized and to ensure they view the correct UI.
Configuring Security
Security within CA Unicenter Service Desk is controlled in a number of different ways. The
easiest way to start is conceptually—by first determining the business needs of your
organization, and then configuring the appropriate security within CA Unicenter Service
Desk. The recommended approach is top down, starting with the highest level of security
first, then getting more granular as you progress.
It is often expedient to discuss your service desk with your chief compliance officer early in
the security configuration process. Laws such as SOX (Public Company Accounting Reform
and Investor Protection Act of 2002) and HIPAA (Health Insurance Portability and
Accountability Act of 1996) have provisions that can apply directly to service desk security.
Addressing these issues early and correctly can save much time and trouble. The
regulations and requirements vary with the circumstances of each business. Experience has
shown that CA Unicenter Service Desk security is flexible enough to meet the vast majority
of regulatory requirements using the planning and implementation techniques described
here and in the product documentation. Nevertheless, starting out right is always an
advantage.
First and foremost, define roles and responsibilities from the perspective of your business.
Start with determining who will be using the application and separating these contacts into
logical groupings such as analysts, customers, employees, and so on. From there,
determine how each of these groups of users will be accessing the application. What kind of
access should be granted to each group of users? Will analysts have full access to the entire
application? Will employees only be able to view their own tickets? Will customers only be
able to view those tickets that are opened under their organization? Will internal users view
a different UI than external users? These are the types of questions that need to be
carefully thought out and documented before you set up your architecture.
Next, map all of the above back to the CA Unicenter Service Desk functionality. Determine
how you will associate access types to contacts. This will assign appropriate function
access, form groups, access and grant levels, and data partitions to accomplish your exact
business needs. These components are discussed in more detail later in this chapter.
Note: Access types also drive the type of authentication methods that are used, as well as
access to CA Unicenter Service Desk Knowledge Tools. These two topics are discussed in
earlier chapters of this Green Book.
Once you have mapped your security model back to CA Unicenter Service Desk
functionality, the next step is for an administrator to define
security within CA Unicenter Service Desk and assign the appropriate privileges to each
contact in your organization. All aspects of security can be configured using the CA
Unicenter Service Desk Web Client. There is no longer a requirement for a thick client in
CA Unicenter Service Desk r11.
Finally, you will also need to thoroughly test your configuration to verify you have set it up
correctly before going live with these changes.
As mentioned earlier, defining access types is a typical starting point when configuring
security within your system. Access types encompass all security related aspects of your
service desk including function access, form groups, grant and access levels, and data
partitions. They can be thought of as logical groupings of users who will have similar access
to areas within the application. There are several access types provided out-of-the-box with
CA Unicenter Service Desk: Administrator, Analyst, Customer, Employee, Knowledge
Engineer, and Knowledge Manager. You can modify the out-of-the-box access types or
create entirely new access types of your own. Each access type can then be associated with
a contact within each individual's contact record. By default, any time a new contact is
created, they are given an administrator access type unless this default setting is explicitly
changed. This default setting can be modified within the access type detail.
Function Access
An administrator on the system can specify, from a very high level, the functions in the
application to which contacts have modify, view, or no access rights. These function areas
include: Requests, Change Orders, Issues, Inventory (sites, organizations, contacts, assets,
and so on), Reference Data (announcements, priorities, severities, and so on),
Notifications, Administration, and Security.
Inventory
Includes the following records that are used to establish your configuration: sites, locations,
classes, models, vendor providers, organizations, groups, customers, contacts, analysts,
configuration items, and service level agreements.
Reference Data
Includes the following data that is referenced during processing: announcements, contact
types, impacts, priorities, severities, time zones, and urgencies.
Notify
Admin
Includes the following information related to your system's external set up: stored queries,
remote references, reports, options manager, notification urgencies, events, data
partitions, constraint types, and constraints.
Security
Function access can also drive access rights for Web Screen Painter. Web Screen Painter is
a component of CA Unicenter Service Desk that allows for simplified modification of the
CA Unicenter Service Desk database schema and UI. Function access can control which
access types have varying privileges within Web Screen Painter, including which contacts
can make modifications to the UI, which contacts can make modifications to the database
schema, and which contacts can actually publish those changes, making the modifications
publicly available.
Once you have set up your access types and function access, you can determine the type of
access that contacts will have to the web UI. CA Unicenter Service Desk can be modified to
have distinct web interfaces so that different types of contacts can view separate UIs.
Each web interface is defined as its own set of forms. Administrators on the system can
create and modify existing forms to customize the UI that users view within the application.
It is recommended to make use of Web Screen Painter to modify or add new forms. Not
only is it easier to modify forms using this tool, but modifications made here are saved in
the supported way. Supported UI adaptations are those which follow the explicitly
documented procedures, and are saved under the
$NX_ROOT/site/mods/www/htmpl/web/<interface type>/<form group name> directory
once complete. Any changes made to forms using Web Screen Painter are automatically
saved to this directory structure.
Once you have designed or modified your forms, you can add them to existing out-of-the-
box form groups, or alternatively you can group them together in your own custom form
groups. These custom form groups can then be associated to access types, as
'Customization Form Groups', allowing different types of users to view separate UIs.
Let's take the example of an employee who would like to open an incident (assuming
CA Unicenter Service Desk is configured to allow this). The form that defines an incident under
the covers is detail_in.htmpl. The system first looks to see if the employee's access type
points to a Customization Form Group. If it does, the system will look for
a modified detail_in.htmpl within
$NX_ROOT/site/mods/www/htmpl/web/employee/<customization_form_group>.
■ If a modified detail_in.htmpl is found within this directory, then the system displays the
customized incident form.
■ If no customized version of this form is found here, the system will display the default
detail_in.htmpl form found under $NX_ROOT/bopcfg/www/htmpl/web/employee. This
directory contains the default out-of-the-box forms for the employee interface. This
default form group is defined within the Access Type detail as the 'Web Interface Type'.
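The two-step form lookup just described can be sketched in Python. The directory layout and form file name come from the text above; the resolver function itself is a hypothetical illustration, not a product utility:

```python
import os

def resolve_form(nx_root, interface_type, form, custom_form_group=None):
    """Return the path of the HTMPL form that would be served, following
    the customization-first lookup described above."""
    if custom_form_group:
        # 1. Look for a customized copy saved by Web Screen Painter under
        #    $NX_ROOT/site/mods/www/htmpl/web/<interface>/<form group>.
        custom = os.path.join(nx_root, "site", "mods", "www", "htmpl",
                              "web", interface_type, custom_form_group, form)
        if os.path.exists(custom):
            return custom
    # 2. Fall back to the out-of-the-box form for this interface type.
    return os.path.join(nx_root, "bopcfg", "www", "htmpl", "web",
                        interface_type, form)
```

For an employee opening an incident, `resolve_form("/opt/CAisd", "employee", "detail_in.htmpl", "my_group")` returns the customized path if the file exists there, and the bopcfg default otherwise.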
Authorized views are granted for access types and are also a part of the form group
definition. There are views set up for a CA Unicenter Service Desk Analyst, System
Administrator, Knowledge Manager, and Knowledge System Administrator. These views
basically grant access to the main tabs in the CA Unicenter Service Desk Web Interface.
These include the Service Desk Tab, Administration Tab (including all CA Unicenter Service
Desk related entries), Knowledge Tab, and Administration Tab (including Knowledge
Related entries). As seen in the Authorized Views screen shown below, note that Employees
do not have access to any of the main tabs within CA Unicenter Service Desk. From an
alternate view, if we were to look up the access type defined for an administrator, we would
see that administrators have access to all Authorized Views.
Note: For more information on Security and Permissions for CA Unicenter Service Desk
Knowledge Tools please see the Knowledge Management Chapter.
The most granular way that security is defined is using data partition constraints which are
bundled up into data partitions. You can have an unlimited number of data partition
constraints assigned to a data partition. Data partition constraints control, right at the
object (table) and attribute (field) level, which specific records a user is granted or denied
access to, and which ones they can view, update, create, and delete. Data partition
constraints are similar to function access, only constraints get much more granular and
specific as they enforce row level security. Out-of-the-box, each access type within
CA Unicenter Service Desk is assigned to a data partition by default, which in turn holds a
number of different constraints. You can add to or modify these data partitions or
constraints.
You can associate a data partition to an individual contact and also to an access type. When
you are defining data partitions to both, you are given the option within the contact's
access type to 'override' the data partition set for the individual contact. When a
contact record has both, the data partition set for the access type always takes
precedence over the data partition defined for the individual contact.
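This precedence rule can be expressed as a small selection function; the function and parameter names below are illustrative only:

```python
def effective_data_partition(contact_partition, access_type_partition):
    """Pick the data partition that applies to a session: the partition on
    the access type takes precedence, and the one on the individual
    contact record applies only when the access type sets none."""
    if access_type_partition is not None:
        return access_type_partition
    return contact_partition
```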
You first determine to which table you would like to restrict access; these are referred to as
“controlled tables.” Within CA Unicenter Service Desk 11.2 there are 25 controlled tables
which all can be found under Administration. Data partition constraints can then be written
against these controlled tables. The setup of the constraint is similar in some ways to an
SQL select statement which includes a “where” clause. The select statement goes against
the controlled table, and the constraint can be considered the “where” clause. Aside from
different syntax, the main difference is that in an SQL select statement you refer to
CA Unicenter Service Desk database tables and fields defined as schema, and in data
partition constraints you refer to database objects and attributes. The CA Unicenter Service
Desk database objects and attributes are defined in MAJIC code, which maps each object to
an underlying database table and each attribute to a column.
MAJIC code is defined within files that have a .maj extension and are saved within
$NX_ROOT/bopcfg/majic.
Note: For more information on MAJIC code and associated objects and attributes within the
CA Unicenter Service Desk architecture, please refer to the Green Book chapter on
Architectural Choices. Additional documentation can be found in the CA Unicenter Service
Desk 11.2 Modification Guide appendix on Objects and Attributes.
A common view constraint that might be written against the Call_Req table is something
like:
customer=@root.id
The Call_Req table holds records for Requests, Incidents, and Problems. (These record
types are distinguished using the type parameter; in the discussion below, references to
Call Requests include Incidents and Problems.) This constraint is
provided out-of-the-box for employees. The above constraint is saying “allow the logged in
user to view only those Call Requests to which they are the customer” - in other words, the
affected end user. To break it down by component, customer on the left of the equal sign
means just that, the customer or affected end user of the Call Request. The @root means
the logged in or current user, and ID means the internal ID. The above dotted notation is
supported for view constraints only and it indicates a join to another table, in this case a
join between the Call_Req table and the ca_contact table. In the above constraint we are
comparing the ID of the Call Request's affected contact or customer, and the ID of the
logged in user.
If they do not match, the end user is shown an “access denied” error message. You can
specify the text of this error message within the constraint detail.
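A view constraint such as customer=@root.id can be pictured as a per-row predicate evaluated against the logged-in user. The sketch below is a simplification (real constraints are enforced in the query layer), and the record fields and sample data are invented for illustration:

```python
def row_visible(row, current_user_id):
    """Emulate the out-of-the-box employee view constraint
    'customer = @root.id' on the Call_Req table: a row is visible only
    when the logged-in user is its affected end user (the customer)."""
    return row.get("customer") == current_user_id

# Hypothetical Call_Req rows; 'customer' holds the affected end user's ID.
call_reqs = [
    {"ref_num": "cr:1001", "customer": 42},  # opened for user 42
    {"ref_num": "cr:1002", "customer": 7},   # opened for someone else
]
visible = [r["ref_num"] for r in call_reqs if row_visible(r, 42)]
# visible -> ["cr:1001"]; the other row triggers the "access denied" path
```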
We could add a constraint to the employee data partition such that all employees who log
into the CA Unicenter Service Desk Self Service Interface will create incidents by default
rather than call requests, enabling ITIL methodology. To do so we would create a data
partition constraint of type Default against the Call_Req table, and the where clause would
be:
type='I'
Note that the 'I' is case-sensitive and represents an incident type. At this point, the
administrator needs to customize the terminology on the Employee Web Interface so that
the term Incident displays instead of the term Request. This requires slight modification to
JavaScript and HTMPL code. Otherwise, the employee sees Request and refers to requests,
causing confusion for the analyst who sees them as incidents.
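A Default-type constraint can be pictured as pre-filling fields on newly created tickets. The sketch below assumes a dict-based ticket representation; the function name and fields are illustrative, not the product's internals:

```python
def apply_default_constraint(new_ticket, default_clause=None):
    """Pre-fill fields on a newly created Call_Req row, the way a data
    partition constraint of type Default does. The clause {'type': 'I'}
    mirrors the example above; 'I' (incident) is case-sensitive."""
    clause = default_clause or {"type": "I"}
    ticket = dict(new_ticket)
    for field, value in clause.items():
        # Only fill fields the caller has not set explicitly.
        ticket.setdefault(field, value)
    return ticket
```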
Access and grant levels are used when creating or modifying contact records, to determine
which access types can be granted to another contact. A user can assign an access type to
the contact record of another user only if the access level of the access type they are
attempting to assign is ranked the same as or lower than the grant level of their own
access type. In other words, if you are defined as an analyst within CA Unicenter Service
Desk, out-of-the-box, you can create other contacts that are analysts, customers, and
employees. However, you are restricted from creating a contact in the system that is an
administrator because this ranks higher than an analyst. If you are an employee, out-of-
the-box (as seen below) you do not have the ability to create contacts in general because
you have no grant level. However, as an administrator you can modify these access and
grant levels.
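The grant-level check above reduces to a simple comparison. The numeric ranks below are hypothetical placeholders (higher means more privileged), not the product's actual level values:

```python
def can_grant(granter_grant_level, target_access_level):
    """An access type can be assigned to another contact only when its
    access level ranks the same as or lower than the granter's grant
    level. A missing grant level means no contacts can be created."""
    if granter_grant_level is None:
        return False
    return target_access_level <= granter_grant_level

# Illustrative ranks only:
LEVELS = {"Administrator": 40, "Analyst": 30, "Customer": 20, "Employee": 10}
```

Under these assumed ranks, an analyst (grant level 30) can create other analysts, customers, and employees, but not administrators; an employee, having no grant level, cannot create contacts at all.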
At the database layer, most security information is defined in the MDB in the acctyp table.
You can reference $NX_ROOT/site/ddict.sch to see how the schema is structured, and how
the acctyp table references other tables in the MDB for information on form groups, data
partitions, and so on. At the object layer, most security information is defined in the
access_type object. This object is defined in MAJIC code in
$NX_ROOT/bopcfg/majic/bop.maj.
Note: For more information on MAJIC code, and associated objects and attributes within
the CA Unicenter Service Desk architecture, please refer to the chapter on Architectural
Choices.
In terms of the UI, the HTMPL code that displays the access type form is
detail_acctyp.htmpl. All notebook tab definitions for this form (Function Access, Web
Interface, Web Authentication, and CA Unicenter Service Desk Knowledge Tools) can be
referenced from here as well.
More information on the existing CA Unicenter Service Desk MDB schema and UI, and
instructions on how to make modifications can be found in the CA Unicenter Service Desk
r11 Modification Guide.
Chapter 17: Advanced Tuning
Introduction
The primary goal of the advanced tuning information in this chapter is to help ensure that
your CA Unicenter Service Desk deployment is performing as efficiently as possible. This
includes the following:
■ Signs that may indicate a potential performance problem, as well as the steps to take
to prevent or resolve that problem
In addition to this document and the documentation provided with the product itself,
further guidelines can be found on the Implementation Best Practices pages available
through the following:
https://support.ca.com/irj/portal/anonymous/phpdocs?filePath=0/common/impcd/r11/Com
mon/impcdfaqr11.mht&fromKBResultsScreen=T
Note: This document is a work in progress. Updates will be provided as they become
available.
An appropriately sized and regularly tuned CA Unicenter Service Desk implementation will
enable you to achieve the following:
■ Use existing resources wisely and efficiently, thereby saving time and money
Even if CA Unicenter Service Desk has already been deployed, familiarizing yourself with
these guidelines may help you better understand how your environment and business
requirements influence the ratio and placement of each CA Unicenter Service Desk
functional component, and what signs may indicate that it is time to reconsider that ratio.
http://supportconnectw.ca.com/public/impcd/r11/scalability/scalability_guidelines__usd.ht
m
The CA Unicenter Service Desk Architecture consists of a primary server which controls a
majority of the CA Unicenter Service Desk functions, several independent components, such
as the management database and optional workflow and embedded CA Identity and Access
Management (CA IAM) security interface, and one or more optional secondary servers
which are used to distribute the processing load.
Primary Server
The primary server consists of several “core processes” which are required and several
“optional processes” which are not. Core processes include but are not limited to:
■ Animator
CA Unicenter Service Desk uses processes called database agents [not to be confused with
CA Unicenter Network and Systems Management (CA Unicenter NSM) database agents] to
execute database transactions asynchronously. Various CA Unicenter Service Desk
processes start database agents as they are needed.
The Object Server (domsrvr process) uses the virtual database (bpvirtdb_srvr process) to
manage several database agents designated for handling database transactions. Upon
startup, a fixed number of database agents is started. In response to an increase in load
from the domsrvr, the bpvirtdb_srvr can start additional database agents, up to a defined
maximum, and when the load subsides, it will eventually stop those database agents to
return to the initial number of active agents. Both values (the number of initial agents and
the maximum number of agents) are controlled by variables which can be set and adjusted.
Further details are provided later in this chapter.
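The elastic pool behavior just described (a fixed number of agents at startup, growth under load up to a maximum, and a return to the initial count when load subsides) can be modeled in a few lines. This is a toy illustration, not the actual bpvirtdb_srvr implementation, and the names are invented:

```python
class AgentPool:
    """Toy model of how bpvirtdb_srvr manages database agents."""

    def __init__(self, initial_agents, max_agents):
        self.initial = initial_agents
        self.maximum = max_agents
        self.active = initial_agents

    def on_load(self, queued_transactions):
        # Grow toward the maximum while work is backing up...
        while self.active < self.maximum and queued_transactions > self.active:
            self.active += 1
        # ...and shrink back to the initial count once load subsides.
        if queued_transactions < self.initial:
            self.active = self.initial
        return self.active
```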
Secondary Server
Most CA Unicenter Service Desk architectures also include one or more secondary servers
to distribute the processing load and improve performance.
Each secondary server includes the proctor as its “core” process along with one or more
optional processes. The proctor serves as the “agent” for the pdm_d_mgr on the primary
server. It starts and waits for messages from SLUMP and starts required CA Unicenter
Service Desk processes as directed.
Optional Components
With the exception of the Object Manager, any of the “optional” components can be
installed on either the primary or a secondary server, depending on your individual site
requirements. In some cases, such as with the Web Engines, you can install multiple
copies.
When you install a primary server, two distributed object managers (domsrvr) are installed
by default: one for client connections and one dedicated to Web Screen Painter (WSP).
When you install a secondary server you can configure additional Object Managers to run,
and for scalability it is recommended that you configure multiple domsrvr\Web Engine pairs
either on your primary or secondary servers based on your expected end-user load.
Note: The object manager on a secondary server acts as a peer to its counterpart on the
primary server and retrieves data from the bpvirtdb_srvr on that server.
server in the architecture. It should be installed on the server that is part of the
authentication domain.
■ LDAP Virtual DB — which acts as an agent for communications with an LDAP server.
There can be only one LDAP Virtual DB in the architecture and it must be installed and
configured on a server that has access to LDAP.
■ Web Engines — which run the engine for the web servers, allowing access to CA
Unicenter Service Desk through a browser interface. A Web Engine is required for WSP
on the primary server so the WSP Schema designer can write schema files. There can
be multiple Web Engines and they are typically paired with an Object Manager.
■ Web Directors — which manage distribution of load to the Web Engines. These are
optional and there can be more than one.
■ NSM Event Converter — which interacts with CA Unicenter NSM to open CA Unicenter
Service Desk issues in response to designated NSM events. The NSM Event Converter
must be running on an NSM machine and must have access to the NSM COR.
For architectures in which there is a high concurrent load on Knowledge Tool functions—for
example, where there are significant periods when the aggregate number of system-wide
knowledge queries exceeds 2 per second—it is recommended that you locate the knowledge
search daemon (bpebr_nxd) and the knowledge indexing daemon (bpeid_nxd) on a
separate machine with its own secondary server. Please note that both components—the
search and indexing daemons—must be located together on the same server.
A similar approach may be required when there is a large number of knowledge documents.
Although it is difficult to establish precisely how many knowledge documents constitutes a
large number due to multiple factors, such as modes of use and document size, a system
that contains more than 50,000 documents is likely to benefit from this type of placement
consideration.
The main reason for locating the knowledge daemons on a separate secondary server, in
such a situation, is to help prevent any adverse performance impact of intensive knowledge
searching/indexing on core CA Unicenter Service Desk functions, and vice versa.
Independent Components
These components are referred to as independent because they can be installed separately
from the primary and secondary servers and can be used by other products. For example,
the management database (MDB) can also be used by CA Unicenter NSM.
■ MDB — which provides the central database for CA Unicenter Service Desk. This is a
required component and there can be only one MDB in the CA Unicenter Service Desk
architecture, although it may be shared with another CA solution, such as CA Unicenter
NSM.
■ CA IAM — which provides an alternative to the default validation performed by the host
operating system. CA IAM is only required for CA Workflow, and since the intention is
for CA IAM to act as a single repository of your organization's user and access policies,
you should only have a single copy of it installed in your architecture.
■ Common Services — which are used by multiple CA solutions to manage inter- and
cross-product communications and messaging. This includes CAF, CAM, and CCI.
Note: Common Services are not currently used by CA Unicenter Service Desk,
however, they may be installed by other CA solutions that are either integrating or co-
existing with CA Unicenter Service Desk.
General Considerations
■ Location of the MDB - local or remote (use of a remote MDB is typically considered best
practice for a larger deployment).
■ How the product is used. For example, the use of complex workflow and automation
will add a significant load to the system.
■ Pending or future changes to the environment or the business process that might
impact the server load.
The following sections identify scalability considerations for key CA Unicenter Service Desk
components, including the primary and secondary servers, Object Server and Web Engine
(which are typically paired), the MDB, and the client interfaces.
■ In most situations, multi-CPU or dual core servers provide a much greater benefit than
hyper-threaded CPUs.
■ Although the minimum disk space recommendation is 20GB, you must also factor in
additional space to accommodate growth, such as new MDB table entries and new
CA Unicenter Service Desk related documents. To tune disk access, the
recommendation is to use SATA drives configured to use RAID 0 or, preferably, RAID
0/1.
■ CA Unicenter Service Desk does not typically use a lot of network bandwidth; however,
it can be severely affected by a network with poor bandwidth or by a network with
large roundtrip latencies.
■ Only load contacts and assets that are required and in use. Importing assets that are
not used might add significant load to the environment.
■ If you are doing a significant amount of complex reporting you should consider
implementing a mirrored database against which to do reporting.
■ Do not increase the amount of logging or debugging information that is captured unless
there is a business need for it or unless you are specifically told to do so by
CA Support.
Note: You still need reliable and reasonably fast access from secondary to primary
server. Adding local secondary servers can reduce the WAN bandwidth needed to
connect to the primary but it won't fix poor or very slow connectivity.
■ Large geographic distribution of sites being serviced. Network bandwidth and latency
may still be an issue but there is less bandwidth consumed between primary and
secondary server than from secondary server to web client.
For large installations or installations in which there will be resource constraints for a single
secondary server, multiple secondary servers may be required. In determining how many
secondary servers to deploy (and where) you need to balance cost of adding a secondary
server with the savings that may result from improvements in performance.
Finally, the recommended best practice for secondary servers is to use multiple machines
or partitions rather than one single large machine.
As the name suggests, an Object Manager manages all CA Unicenter Service Desk objects.
There is always an Object Manager on the primary server and enterprise systems with
multiprocessor servers or secondary servers can add more Object Managers to the primary
server and secondary servers as needed.
Each Object Manager has a name, which it uses when communicating with other
objects. For example, the default Object Manager is always called
“domsrvr.”
Web Engines, on the other hand, prepare a web page for the web client. All systems have
one or more Web Engines and each Web Engine connects to an Object Manager in order to
process requests to CA Unicenter Service Desk objects. By default, the Web Engine
connects to the default Object Manager, but if you have multiple Object Managers you can
set this value to the name of any of the available Object Managers.
Note: Although not required, it is still considered best practice to deploy Web Engines and
Object Managers as pairs.
Although each Web Engine can be run and accessed directly, doing so requires the user to
enter the specific CGI interface address for that Web Engine in the browser. The result is
that system loading is determined solely by the users; in the worst case, all clients could
connect to a single Web Engine and overburden it while leaving other Web
Engines unused. A better approach is to assign Web Engines to web directors. In this case,
two or more Web Engines will specify a single web director. All requests that “initially” go to
one of these Web Engines will be directed to the web director for load balancing and then
redirected to the most available Web Engine in the group.
To determine how many users are connected to a particular Web Engine, execute the
following command:
pdm_webstat
The Currently Active Sessions total represents the number of users connected to a Web
Engine.
The basic rule of thumb for r11 is that if you have 200-400 users per Web Engine
(dependent upon usage), it is probably time to add a new Web Engine.
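That rule of thumb can be expressed as a quick check over the per-engine session counts reported by pdm_webstat. The 200-400 band comes from the text; the function and the engine names are hypothetical helpers, not product utilities:

```python
def needs_new_web_engine(active_sessions_per_engine, threshold=400):
    """Return the Web Engines whose 'Currently Active Sessions' count
    suggests it is time to add another engine. The threshold defaults to
    the top of the 200-400 user band; pick a lower value for heavier
    usage profiles."""
    return [engine
            for engine, sessions in active_sessions_per_engine.items()
            if sessions >= threshold]
```

For example, `needs_new_web_engine({"web:local": 120, "web:branch": 450})` flags only the overloaded engine.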
Note: The CA Unicenter Service Desk web interface uses many JavaScript, style sheets,
and image files which can be fairly large. You can improve the performance of the web
interface considerably by configuring your HTTP server so that these files are cached by the
user's browser, allowing them to be loaded only once a day. The default installation
automatically configures caching for Apache and IIS.
When a Web Engine is geographically separated from the primary server, it is often co-
located with an Object Server on a local secondary server. In general, a good rule of thumb
is to have one Object Server and Web Engine pair for every 200-400 concurrent users, but
this ratio can be adjusted based on business processes, response time, and other network
factors.
Once the deployment is complete you should continue to monitor CPU and memory usage
for Object Server and Web Engine pairs. In general, you should add more resources when a
pair is waiting on these resources, and add another pair if adequate memory and CPU are
already available.
Note: This guideline does not apply to all scenarios. For example, sometimes the true root
cause of the problem can stem from another area entirely, such as a bottleneck in I/O
subsystems or network (bandwidth or roundtrip latency) or, in some cases, even a missing
index in the MDB.
■ If response time is good there is no need to add CPUs, even if at 80% utilization.
■ If CPU utilization is consistently low and memory is available and there are no other
performance related issues (such as the DBMS), consider adding an additional Object
Server\Web Engine pair to improve response time (if improvement is needed).
■ Use only one Object Server\Web Engine pair per CPU. Allocate 1GB RAM per pair.
■ Use only one Object Server\Web Engine pair for each 300 simultaneous users (range is
200-400 based upon type of load).
■ Monitor CPU and memory consumption\availability on ALL servers. This is the best
indication of load or reserve capacity. Add more resources if an existing pair is already
nearly fully utilized. If CPU and memory resources are still available you can add more
pairs.
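The sizing bullets above can be combined into a rough capacity estimate. The 300-users-per-pair figure, the one-pair-per-CPU rule, and the 1 GB RAM per pair come from the guidelines; the function itself is an assumption-laden sketch, not an official sizing tool:

```python
import math

def size_pairs(concurrent_users, users_per_pair=300):
    """Estimate Object Server\\Web Engine pairs and the CPU/RAM they
    imply: one pair per CPU and 1 GB RAM per pair, with 300 users per
    pair as the midpoint of the 200-400 range."""
    pairs = max(1, math.ceil(concurrent_users / users_per_pair))
    return {"pairs": pairs, "cpus": pairs, "ram_gb": pairs}
```

A site expecting 900 concurrent users would thus budget roughly three pairs, three CPUs, and 3 GB of RAM, then adjust based on the monitoring guidance above.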
Refer to Monitor Key System Resources (see page 380) in the next section for additional
information on monitoring these resources.
MDB Guidelines
The Management Database (MDB) is a single database that, in addition to maintaining all
CA Unicenter Service Desk data, also contains several common tables and other
product-specific tables that were previously stored in separate product databases. For
example, CA Unicenter Service Desk could share an MDB with the Unicenter Asset Portfolio
Management solution, in which case data for both products would be maintained in the
same database.
MDB performance is affected by the amount of data it contains, the amount of available
disk space the server contains, the network connectivity between the MDB and the primary
and secondary servers, and the amount of tuning that is done both during installation and
on a regular basis. The MDB can be located locally or remote to the primary server, and
although there may be multiple MDBs in use in your environment, CA Unicenter Service
Desk will only connect to a single MDB. That MDB should be the enterprise or central MDB.
■ Use of the Reiser file system is NOT recommended, as it is not a suitable format for large
databases.
■ When the MDB is integrating information for multiple CA products, the stated hardware
and software prerequisites should be increased as necessary for an enterprise MDB.
Further discussion on MDB sizing, including formulas for estimating server sizing when
multiple solutions will be sharing the same MDB, can be found at the following link:
http://supportconnectw.ca.com/public/impcd/r11/scalability/doc/nsmdoc/MDB%20Sizing%
20Formulas.pdf
Client Interfaces
In general, client machines should meet the minimum requirements as stated in the
product documentation and should have a 1024x768 capable monitor.
If the Java Client interface will be installed on Citrix, the underlying Java infrastructure may
cause a significant amount of memory to be consumed for each client, and therefore, limit
the number of Java clients that can run on a single Citrix instance.
This section provides information on how to identify a potential problem and what you can
do to improve your CA Unicenter Service Desk performance.
Once your CA Unicenter Service Desk implementation is in place and running, how do you
know if you have a lurking performance issue? More importantly, how can you identify a
potential performance problem—before it becomes a real problem?
Some of the most common signs to watch for include the following:
Regularly monitoring key resource consumption and ensuring that routine maintenance is
conducted can further help spot smaller problems before they grow into larger problems.
The database agent will only process a single database request (for example, select,
update, or insert) at one time, and until a response is received from the database, any
access to the database through that database agent will be locked. All other database
requests are subsequently routed to other available database agents.
By default, all queries running longer than two seconds (2000 milliseconds) for selects and
one second for inserts will be written to the stdlog. Therefore, your first step is to review
stdlog to determine the following:
■ Are the long-running query messages grouped together during peak processing times or spread throughout the day?
■ What is the exact # milliseconds value? This represents how long the database took to
respond to the query. If it is just over the threshold and if there are multiple similar
messages that are not grouped in a cluster, it may not be a problem.
■ The complete query that took the DBMS a while to respond to.
■ To debug the long running query, take the query in question from the stdlog and run it
directly against the database in a SQL session to see what response time you achieve.
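The logging thresholds above (2000 milliseconds for selects, 1000 for inserts) can be applied when scanning a stdlog. The following hypothetical sketch flags lines that exceeded their threshold; the line fragment matched here is an assumption for demonstration, and the real stdlog layout may differ:

```python
import re

# Hypothetical stdlog fragment matcher; the real stdlog format may differ.
SLOW_RE = re.compile(r"(select|insert).*?(\d+)\s*milliseconds", re.IGNORECASE)

def flag_slow(line, select_ms=2000, insert_ms=1000):
    """Return True when a logged query exceeded its logging threshold:
    two seconds for selects, one second for inserts."""
    m = SLOW_RE.search(line)
    if not m:
        return False
    kind, ms = m.group(1).lower(), int(m.group(2))
    return ms > (select_ms if kind == "select" else insert_ms)

print(flag_slow("slow select took 2417 milliseconds"))   # True
print(flag_slow("insert completed in 800 milliseconds")) # False
```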
Note: When you are testing, be sure to have SQL Server start from the same state each
time. The cache (sometimes referred to as the "buffer") needs to be cleared out first;
otherwise, previously cached data and\or execution plans will skew the results of the next
test. To clear the SQL Server cache, execute the following:
DBCC DROPCLEANBUFFERS
This clears all the data from the cache. Then, execute the following:
DBCC FREEPROCCACHE
You can also use standard database tools to get the Query Execution Plan in order to
evaluate the costs of the query and identify where it is spending a majority of its time and
effort.
If neither of these options helps, you may need to consider modifying the users' behavior
through additional training and notification. Users often do not understand the strain their
searches can produce on the system. Provide them with a list of rules for optimizing their
search strategies (while minimizing the impact to the CA Unicenter Service Desk
performance) and include examples (and regular reminders). For example,
Like '%Smith%'
uses a double wildcard, which is very costly because the database must perform a complete
table scan to look for a match. In addition, it may return far more results, many of them
irrelevant. On the other hand,
Like 'Smith%'
uses only a trailing wildcard, which allows the database to use an index, and
LIKE 'Smith'
is better still. Generic queries that use double wildcards not only affect system
performance but will likely return too many results to be useful. Any query that returns
more than 100 results is expensive for CA Unicenter Service Desk, and more or less useless
for the user.
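The difference between the two patterns can be illustrated with a toy example in which a sorted list with binary search stands in for a database index: a leading wildcard forces every row to be examined, while a prefix search seeks directly to the matching range. This is purely illustrative and not CA Unicenter Service Desk code:

```python
from bisect import bisect_left, bisect_right

rows = sorted(["Adams", "Baker", "Goldsmith", "Smith", "Smithson", "Smythe"])

def scan_contains(term):
    """LIKE '%term%': every row must be examined (full table scan)."""
    return [r for r in rows if term in r], len(rows)

def index_prefix(term):
    """LIKE 'term%': an index can seek straight to the matching range."""
    lo = bisect_left(rows, term)
    hi = bisect_right(rows, term + "\uffff")
    return rows[lo:hi], hi - lo

print(scan_contains("Smith"))  # same matches, but all 6 rows were touched
print(index_prefix("Smith"))   # only the 2 matching rows were touched
```

On a six-row list the difference is trivial; on a multi-million-row table it is the difference between a sub-second response and a long-running query in the stdlog.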
The pdm_vdbinfo command produces a report that can provide you with a snapshot of what
each database agent is currently doing in your CA Unicenter Service Desk environment. To
run pdm_vdbinfo, simply enter pdm_vdbinfo from the command line on the primary
server and direct the output to a text file for review. For example:
pdm_vdbinfo > vdbinfo.txt
The primary function of this report is to identify the amount of work currently en route, or
queued to the database agents. To make the most of this information, review it alongside
the stdlog output and in the context of the users' perceived performance.
One of the most important pieces of information from the pdm_vdbinfo report is whether
work requests are being queued. For example, when a user's search request is backlogged
(queued) in the Virtual Database (bpvirtdb_srvr), that user will not get a response until the
backlog clears, and the longer the backlog, the longer the user has to wait for the response.
This type of delay can cause the user to interpret the performance as a slow response or
“hang” in the interim.
The pdm_vdbinfo command is one of the best tools for determining when to increase the number of
database agents. Since this report produces a snapshot in time, you will need to run this
command on more than one occasion to see how the system is performing over time. After
executing this report, carefully note the number of the following:
■ Pending requests.
■ Tables with a large number of I/O requests performed on the ID 00 database
agent. ID 00 is the generic update agent; it is used for updates to all tables that do not
have specific agents associated with them. If there are many updates to a specific
table on agent 00, it will be busy processing those updates and unavailable to process
updates for other tables. Moving such a table to a separate agent alleviates the
overhead caused by updates to that single table.
The report output is divided into several sections. For example, the header section contains
most of the information necessary for making preliminary judgments about the state of
the bpvirtdb_srvr process:
■ Min Config Agents - which is set by the bpvirtdb_srvr -n parameter in the pdm_startup
file. In this example bpvirtdb_srvr is set to start 25 SELECT database agents upon
startup.
■ Max Config Agents - which is set by the NX_MAX_DBAGENT variable. In this example,
the bpvirtdb_srvr is configured to start up a maximum of 40 SELECT database agents.
Upon startup the minimum number of agents will be started; when all of those agents
are busy, additional agents will be started (up to the maximum number of agents
specified). When agents are no longer busy for a period of time, they will be stopped
until the specified minimum number of agents is reached. At times during runtime, the
system reduces the maximum to one less than the configured number.
■ Tgt num idle - which represents how many agents must be idle for a period of time
before the bpvirtdb_srvr process starts terminating database agents down to the
minimum configuration limit. This value is hardcoded to “2.”
■ Num Agents running - which identifies the total number of agents currently connected
to the bpvirtdb_srvr process.
■ Num Agents starting - which indicates the number of database agents that
bpvirtdb_srvr is in the process of starting.
■ Num Requests pending - which indicates the total number of SELECT requests currently
queued in the bpvirtdb_srvr process waiting for an available database agent.
■ Actual num idle - which indicates the number of SELECT agents waiting for work. In
this example, there are no (0) agents waiting, which indicates that there is a backlog as
all agents are doing work.
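Interpreting a header snapshot can be sketched as follows. The field names follow the report sections above, but the dictionary layout, wording, and triage logic are assumptions for illustration only:

```python
def interpret_header(h):
    """Rough triage of a pdm_vdbinfo header snapshot.
    Field names follow the report; thresholds are illustrative."""
    if h["num_requests_pending"] > 0 and h["actual_num_idle"] == 0:
        return "backlog: all agents busy and requests are queuing"
    if h["num_agents_running"] >= h["max_config_agents"]:
        return "at maximum agents: investigate the cause before raising the limit"
    return "no queuing observed in this snapshot"

snapshot = {"num_requests_pending": 7, "actual_num_idle": 0,
            "num_agents_running": 25, "max_config_agents": 40}
print(interpret_header(snapshot))
```

Because each report is only a snapshot, apply this kind of triage to several runs taken under different load conditions before drawing conclusions.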
If there is no queuing, or if the queuing is only short term, there is generally no need to
increase the number of agents. However, if database requests are constantly queued during
peak times, this may indicate a valid need to increase the number of agents.
Note: If query requests are being queued, you need to determine the cause before making
any adjustments. Otherwise, if you increase the number of database agents without
knowing the actual cause of the increased queuing the result may be degraded rather than
improved performance. For example, if the database is already overworked as the result of
bad index statistics, and cannot handle more requests efficiently, then increasing the
number of requests sent to the database could result in slower response, and ultimately,
more queuing.
It may be useful to run pdm_vdbinfo under different load conditions to determine the effect
of load level on queuing. Compare the results to the stdlogs to determine whether long-running
queries are being executed at the time. Remember to look both forward and backward in
the log because long running queries are logged only when the query completes (which
may occur after you detect queuing). If throughput was acceptable and the backlog was
based on load, try increasing the number of database agents.
The pdm_vdbinfo report also includes a Delayed ID Queue section which identifies the
“select short” cache usage. A select short query is one that has the WHERE clause of
“WHERE id=?” and only returns one row. Consider the following example:
This listing displays any table that is accessed as part of a select short query. For each table
entry the following will be seen:
■ Queue - which indicates the actual number of select short responses that are currently
queued. A queue is limited to its MaxQueue value; on the (MaxQueue + 1)th select
short query, the oldest query is dropped and the queue "wraps." If the queue has
wrapped, this number equals the MaxQueue value.
■ MaxQueue - which is the configured maximum length of the queue for this table. The
default value is 101 rows.
■ Min - Which is the shortest time the queue has taken to wrap since the bpvirtdb_srvr
process started. If the queue has not wrapped, this value is 0 (zero).
■ Max - which is the longest time the queue has taken to wrap since the bpvirtdb_srvr
process started. If the queue has not wrapped, this value will be 0 (zero).
■ Cur - which indicates the age of the oldest entry in the queue. If both Min and Max
values are 0 (zero), this is the amount of time since the table was first referenced.
When an update occurs, the queue should not wrap until all the domsrvr processes have
requested the update. A good rule of thumb is to keep the Min value above two seconds. If
the Min value is above two, increasing the number of queued entries will only increase
memory usage and not improve performance.
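The rule of thumb above can be expressed as a simple decision helper. This is an illustrative sketch, not a CA utility; the function name and messages are invented for demonstration:

```python
def queue_tuning_hint(min_wrap_seconds, max_queue):
    """Apply the rule of thumb: keep the Min wrap time above two seconds."""
    if min_wrap_seconds == 0:
        return "queue has never wrapped; the current size is sufficient"
    if min_wrap_seconds > 2:
        return "Min above 2s; enlarging the queue only adds memory usage"
    return f"Min at or below 2s; consider raising MaxQueue above {max_queue}"

print(queue_tuning_hint(0, 101))  # queue never wrapped
print(queue_tuning_hint(1, 101))  # queue wrapping too fast
```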
System resources such as CPU, memory, I/O subsystem, and network usage (both
bandwidth and round-trip latency) should be monitored regularly. But before you add
resources to avert a seemingly obvious shortfall, keep in mind that these resources are
interrelated; a problem with one can easily mask a problem with another. For example, if
you add memory because SQL Server is doing too much I/O, SQL Server may cache disk
pages in memory and you will end up with excessive CPU use, while the real problem may
actually be a missing index.
If CPU consumption is the problem, consider the following ways to resolve the issue:
■ Add CPU(s).
■ Remove load (that is, fewer WebServer\DOMServer pairs) from the affected server
(you might need to move these to a new server).
■ Tune the load - look at customization and configuration changes and adjust
accordingly. For example, how practical (or important to your business) is it to log
everything? Have you defined 10M assets but only really need 10K?
■ Split the CA Unicenter Service Desk disk from other applications (if other resource
intensive applications are using the same disk).
■ Look at what I/O you are doing to determine if the cause is logging level, audit, excess
(or insufficient) data use, index maintenance, shared MDB, and so on.
The use of RAID I/O systems is highly recommended as a means to gain additional I/O
performance for highly utilized I/O subsystems.
■ Add memory.
■ Reduce load.
■ Look at why memory is being used. Is there a missing index? Is reporting being done
against an online transaction oriented database?
■ Add bandwidth or reduce latency, but fix the one that matters most to your environment
or business requirements.
■ Remove load.
Deploy a secondary server based on geographical distribution of load, but keep in mind that
CA Unicenter Service Desk is still constrained by poor networks, so make sure that you
have the best possible network connection between the secondary servers and the primary
server or MDB (the primary server and MDB server should be close to each other).
Manage Performance
In addition to monitoring resource usage, taking steps to ensure that those resources are
being used appropriately can also help improve performance.
Regular use of the new Archive and Purge facilities should be a key part of your CA
Unicenter Service Desk maintenance routine. By using these facilities to remove database
rows that are no longer required, you can significantly improve performance—especially
when complete table scans are being done at the request of end-user searches.
The use of a continuous rule-based archive or purge utility can be particularly effective in
larger installations.
Archive and Purge rules are configured through the web interface. For more complete
information on using Archive and Purge rules, consult the product documentation and
online help system.
Modifying the following CA Unicenter Service Desk environment variables can also provide a
degree of tuning. However, before you make any changes to these variables you should
first work closely with CA Technical Support or Technical Services to determine which
changes are appropriate for your individual environment.
■ NX_MAX_DBAGENTS sets the maximum number of Select database agents. When you
add an object manager (whether on a primary or secondary server), increase the
variable by four. For example, if the current value is six and you add one object
manager, change the NX_MAX_DBAGENT value to 10.
■ NX_VIRTDB_SS_QUEUE controls the size of select short cache. The default value for all
tables is 100 cache entries. To change the select short queue for a particular table, edit
the entry for NX_VIRTDB_SS_QUEUE_tablename=xxxxxx
Note: This may result in fewer database reads since the cache size is larger and the
data may be contained in the cache.
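The NX_MAX_DBAGENTS guideline (add four Select agents per additional object manager) reduces to simple arithmetic; the helper below is a hypothetical sketch of that calculation:

```python
def new_max_dbagents(current_max, object_managers_added):
    """Increase the maximum Select agents by four per added object manager."""
    return current_max + 4 * object_managers_added

# The example from the text: current value 6, one object manager added -> 10
print(new_max_dbagents(6, 1))
```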
Typically, you would use the Options Manager to control system behavior, but on occasion,
CA Technical Support might instruct you to modify a particular environment variable
“directly.” In that event, you can use the following procedure.
Important! Never modify the NX.env file directly. Instead, modify the NX.env.tpl
environment template file as instructed below, and allow the configuration process to apply
these changes to the NX.env file. In addition, do not update the NX.env file on your client
or secondary server installations as version control automatically updates those files.
1. Locate the NX.env.tpl environment template file in the CA Unicenter Service Desk
installation directory on the primary server.
2. View and modify this file using your favorite text editor (for example, Windows
Notepad).
Note: Always create a backup of this file before making changes to it.
3. Make the changes as instructed by your support technician, and save the changes.
4. Run the CA Unicenter Service Desk configuration utility on your primary server
installation to apply the changes you made to the environment variable template file to
the actual environment file. Refer to the installation chapter for your platform's
Implementation Guide for information on how to run the configuration utility.
These changes will take effect after CA Unicenter Service Desk has been stopped and
restarted.
Environment variables set in the NX.env file can be overridden by setting the environment
variable in the process space in which a process runs. Although convenient in some limited
cases, this is usually not desirable. Preceding a variable setting with an @ symbol prevents
the variable from being overridden by variables in the process space. Unless there is a
specific reason for allowing an override, the @ symbol should always precede the variable
name in the template file.
The comment characters for this file are # and !. The ! character is also used to disable an
option.
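Putting these conventions together, a template fragment might look like the following sketch. NX_MAX_DBAGENTS appears elsewhere in this document; the other variable names and all values are hypothetical:

```
# Sample NX.env.tpl fragment -- illustrative values only
@NX_MAX_DBAGENTS=40      # @ prevents overrides from the process space
NX_EXAMPLE_OPTION=yes    # hypothetical variable; can be overridden (no @)
!NX_DISABLED_OPTION=yes  # leading ! disables this (hypothetical) option
```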
The architectural choices for deploying CA Unicenter Service Desk are largely dependent on
its model for distributing processes. This model has been developed as the product evolved
and is unique to CA Unicenter Service Desk. An understanding of the model can be helpful
in understanding deployment design and is often critical to troubleshooting problems. The
CA Unicenter Service Desk server is designed to be distributed. All CA Unicenter Service
Desk processes communicate via TCP/IP asynchronous messaging, which is dispatched by a
process called slump. Inter- and intra-host process communication is transparent. In other
words, the code for a CA Unicenter Service Desk process does not change whether the
process it is communicating with is local or remote.
When a process starts up, it registers with the slump process. The location of the slump
process is found in the NX.env file. An important part of a secondary installation is writing
an NX.env file that has the host name and port number of the single slump process that
controls the entire distributed system. This is the only information the processes need to
plug into the distributed system.
When a process registers its presence, it sends its slump name to the slump server process.
The slump name is coded into the process at development time, often with a suffix that
differentiates it from other identical processes. The slump name indicates what the process
does and the types of messages it will send and respond to. The slump names of processes
appear in the stdlog in the fourth column.
The slstat command dumps a list of processes registered with slump, using their slump
names and the time they registered. This can be useful in tracing problems with the
distributed system.
The third column in the stdlog is the host name of the server on which the log was written.
Stdlog entries are always written locally, so the host name is always the same in a given
stdlog file. The host name becomes useful when stdlogs from different hosts are merged.
To be merged, the stdlogs from all the hosts have to be collected into the same directory.
This can be the log directory on either the primary or a secondary server, but CA Unicenter
Service Desk must be installed in some form on the host where the merge is run. The CA
Unicenter Service Desk service or daemons need not be running.
UNIX:
cat std* | pdm_logcvrt | sort > merged_log
Windows:
type std* | pdm_logcvrt | sort > merged_log
pdm_logcvrt uses timestamp information in the first line of each stdlog to adjust
timestamps, which may differ from server to server in the logs, so that they merge into a
consistent sequence. Merging stdlogs from primary and secondary servers can be helpful
when you are troubleshooting problems between hosts.
Note: Merging logs containing data from different years produces data that is not sorted
correctly.
The UNIX and Windows implementations of CA Unicenter Service Desk are nearly identical.
One area where they appear to differ is starting the system. On Windows, a service is
started using the Windows service manager. On UNIX, a script called pdm_d_mgr is run.
Actually, the service is a wrapper around the script, so the two are equivalent. The script
can be used on Windows, but only for troubleshooting. You must be careful not to run
pdm_d_mgr while the service is up, or start the service when the system has already been
started with pdm_d_mgr.
Note: Always start the service when bringing up a Windows system in production.
The overall configuration of the system is determined by which processes are started on
which host. The order in which processes are started is very important. A dependent
process cannot be started until its provider is started. This coordination comes from the
daemon manager. Its job is to coordinate orderly startups and shutdowns, and also to
respond when a process halts. This is usually detected by a broadcast message that is sent
out from the slump server when a process loses its connection. The daemon manager has
options when a process halts unexpectedly. It can restart the process, or halt and restart a
group of processes. In extreme cases, it can shut down the entire system. These options
are configured in the pdm_startup file. The options are set by development. Do not change
them. However, checking their configuration is occasionally helpful in deciphering what
happened during a crash.
The daemon manager is helped by a proctor process on each host where CA Unicenter
Service Desk is installed. The proctors act as agents for the daemon manager on remote
hosts, starting and stopping processes at the daemon manager's request.
In order for a secondary server to start, its pdm_proctor process must be running. Usually
pdm_proctors are configured to start at boot time. The proctors listen on the
$NX_MGR_PORTNUM (default 2300) for messages from the daemon manager. After the
proctor starts a process, that process registers with slump using the usual slump port
(default 2100).
By using the proctors, the configuration of each of the secondary servers can be controlled
from the primary server. The distribution of the CA Unicenter Service Desk server is
determined by what is installed on which box and by the pdm_startup file. The pdm_startup
configuration file determines which processes are started and in what order. The pdm_edit
perl script is a tool for modifying the pdm_startup file. The grammar and syntax of the
pdm_startup file is documented in the comments in the file.
The database is distributed differently. CA Unicenter Service Desk can run on a local or
remote database like other service desk components, but the database processes have
their own mechanisms for remote operations. Each database has its own way of
communicating, although ODBC and JDBC now provide some standard methods.
In theory, CA Unicenter Service Desk could be configured to run with every process on a
different computer. This would be a difficult configuration to manage. In practice,
configurations are limited to those supported by the regular installation scripts.
390 Index
CI field • 153 contracts • 295
CI with knowledge documents • 153 function of • 295
CISee Configuration Item • 30 service • 186
classattributes • 48 types • 181
classic_sla_processing option • 184 Control Objectives for Information
client connections • 279 TechnologySee CobIT • 15
client load balancing • 280 CORASee Common Object Registration
Close • 197 API • 48
Closed Date • 206 cost • 181
Closed Requests • 122 Create Document privileges • 121
cluster servers • 302 create tickets • 318
CMDBf • 113 createActivityLog • 315
CMDBSee configuration management createAttachment • 311
database • 47 createChangeOrder • 312
CMM (Capability Maturity Model) • 38 createDocument • 311
CobIT (Control Objectives for Information createIssue • 312
Technology) • 21, 61 createQuickTicket • 319
code page • 257 createRequest • 307, 311
Common Asset Viewer createTicket • 311, 319
multilingual considerations • 273 creating
Common Asset Viewer (CAV) • 111 incidents • 124
common attributes • 60 requests • 124
Common Object Registration API (CORA) creating incidents • 312, 318
• 48, 93, 103, 295 creating incidents from policies • 31
Common Services • 370 creating quick tickets • 318
compliance • 21 creating tickets • 319
components Crystal Reports • 219, 243
described • 279 Crystal ReportsSee Business Objects
condition macro • 183 Crystal Reports • 219
conditional macro • 197 customer interface • 120
conditional notifications • 197 customer satisfaction • 141
conditional statements • 197 customer service representatives • 19
conditions • 186 customer support • 101
configuration • 382 customer UI step • 170
configuration item • 183, 295 customers • 115
attributes • 48 customizable text • 197
creating with XML and GRLoader • 93 Customize Scoreboard • 50, 122
Lists • 65 Customize Scoreboard File menu • 215
tables • 255 customizing
configuration item (CI) • 47, 51, 113, CA Unicenter Service Desk Dashboard •
153 226
configuration management • 47
configuration management database
relationships • 49 D
configuration management database
(CMDB) • 47, 113 daemon manager • 382
Configuration utility • 44 Dashboard
connection pooling • 177 benefits • 221
connections • 335 defined • 223
consumers • 115 data
Contacts • 48, 100, 292 exporting • 226
tables • 255 for reports • 243
contactSee end user • 183 scheduling refreshes • 242
content data partitions • 292, 360
authors • 145 defining • 300
automated creation • 170 data sources • 48, 60, 69, 76, 94, 113,
creating • 146 225
knowledge • 136 database
of self service site • 118 distributing • 382
prebuilt • 173 server • 301
review • 139 dates • 248
sources • 136
392 Index
G impact • 192
on service management • 140
impact analysis • 47, 51
getBopsid • 321 Impact field • 204
getDecisionTrees • 311 impact table • 253
getDocument • 311 implementation
getHandleForUserid • 311, 315 centralized • 280
global distributed • 281
analysts • 301 global • 282
server • 301 multitenancy • 291
Global Implementation • 282 overview • 15
grant levels • 363 parallel • 306
graphs • 217 single server • 287
changing colors • 228 implementing
changing point labels • 234 incident management • 30
changing titles • 229 problem management • 30
changing types • 227 inactivity • 140
other changes • 235 inbound connections • 338
GRLoader • 67 incentives • 146
group • 193 incident
group assignment • 193 activities • 206
group definitions • 195 categorizing • 210
groups priority • 204
permissions • 291 root cause • 206
guestuser • 126 service type • 207
severity • 205
status • 204
H Incident Area • 31
Incident Areas • 210
HA (high availability) • 301, 304 incident management • 47
HA cluster • 302 analysts • 201
handles goals • 40
for assignee and creator • 315 implementation • 30
for incident areas • 312 process • 31
for priorities • 312 status • 203
for status • 315 incident matching process • 31
using • 332 Incident Recording box • 28
heterogeneous (that is, platform-neutral) incident resolution • 19
infrastructure • 340 incident statuses • 31
hierarchical escalation • 115 incident template • 125
hierarchical relationships • 49, 61 incidents
hierarchy • 152 assigning • 192
high availability (HA) • 301, 304 creating • 312, 318
highlight exceptions • 239 prioritizing • 191
historical data • 225 reporting on • 250
host name • 382 transferring • 315
HP/Peregrine • 165 increasing
HP-UX • 177 priority • 199
HTTP_REMOTE_USER • 352 Information Technology Infrastructure
HTTP_REMOTE_USER variable • 349 Library (ITIL) • 15, 47
HTTPD (IIS/Apache) • 349 Ingres uses port • 335
HTTPD user validation • 292 installation
existing • 44
integrated development environment
I (IDE) • 170
integrated solutions • 283
IBM AIX • 177 Integrated Windows Authentication • 351
IBM DB2 • 177 interface table • 253
IIS • 349 interfaces
IIS (Internet Information Server) • 176 multilingual considerations • 270
IMAP • 335 internal authentication options • 292
internationalization • 257
394 Index
live chat • 166 Microsoft Access • 219
load balancing • 304 Microsoft Excel • 143
load testing • 283 Microsoft IIS • 177, 279
LoadRunner • 283 Microsoft Internet Explorer • 177
localization • 257 Microsoft reports • 243
log entries • 122 Microsoft SQL Clustering • 304
logging in • 225 Microsoft SQL Server • 177
login Microsoft Vista • 177
call • 315 Microsoft Windows • 177
LOGIN SID/FID • 283 Microsoft Windows 2003 clustering • 302
login/logout • 311 Microsoft XP • 177
models • 61
modify the spel code • 184
M MSQL 2005 • 304
MSQL database • 302
macros multilingual
action • 184 implementation • 258
notification • 185 support • 257
maileater • 346 versions of product • 260
management multilingual considerations
service • 181 activity log • 269, 272
management commitment • 118 alerts • 267
management data repositories (MDRs) • announcements • 267, 271
50, 113 authentication • 266, 271
management database (MDB) • 48, 279 CA Workflow • 271
managing Common Asset Viewer • 273
availability • 119 error messages • 267, 271
knowledge • 119 fields • 269, 272
serviced levels • 119 interfacesSee • 270
support • 19 knowledge base • 263, 270
manual escalations • 199 labels • 271
manual notifications • 199 notifications • 270
Mapper component • 67 online help • 274
mapping properties • 269
ITIL • 23 reference data • 264
service delivery • 23 scoreboard • 272
service support • 23 support • 257
mappings of service types • 187 Web Screen Painter • 273
master asset data model • 103 multiple notification macro • 197
master server • 298 multisite
matchType • 317 support • 282
maturity models • 26 MultiSite Environment • 297
MDB (Management Database) • 279 multitenancy
MDBSee management database • 48 capabilities • 285
MDR definition • 76 inside knowledge tools • 291
MDR providers • 50 My Bookmarks link • 121
MDRsSee management data repositories
• 50
Mean Time to Resolution • 31, 191, 198 N
measurement • 144, 215
of knowledge base • 139 naming conventions • 294, 300
measurements • 119 native languages • 257
measurements of success • 19 Natural Language Search (NLS) • 157,
memory management • 284 162, 317
message formats • 197 near-real-time data • 222, 242
Message Templates • 292 Netscape • 177
messaging protocol • 335 network
metadata • 68 access • 335
metrics Network Address Translation (NAT) • 338
knowledge • 144 network interface controller (NIC) • 301
retrieving • 215 Network/Hardware firewall • 338
396 Index
problem
    activities • 206
    categorizing • 210
    priority • 204
    root cause • 206
    service type • 207
    severity • 205
Problem field • 121, 150
problem management • 47
    analysts • 202
    goals • 40
    implementation • 30
    process • 33
problem resolution • 19
problems
    prioritizing • 192
    reporting on • 250
Process Workflow Engine • 27
Proctor port • 343
proctors • 382
product table • 253
Properties • 31, 210, 250
    multilingual considerations • 269
Proxy Contact • 325
prp • 253
Public Key Infrastructure (PKI) • 321
public network • 338
purge rules • 289

Q

quality • 181
queries
    compare • 217
    stored • 215
    to access • 215
Query
    Constraints • 225
quick tickets
    creating • 318

R

RAID Level • 301
ranking • 184
Real Application Clustering (RAC) • 305
real-time data • 222
Red Hat Linux • 177
reducing
    mean time to resolution • 191
reducing mean time to resolution • 198
Reference Architecture • 27
reference data
    multilingual considerations • 264
regions • 282
    defining • 301
relationships
    creating • 65
    hierarchical • 49, 61
    peer-to-peer • 49, 61
relationships (CA CMDB) • 49
relationships between tables • 244, 245
remote registry • 166
REMOVECACHE statement • 284
repmeth table • 253
report card • 136
reporting
    on knowledge base • 139
reporting database • 177
reports
    analysis • 218
    change request • 253
    commonly requested • 219
    Crystal • 243
    customizing • 217, 249
    exporting • 143
    getting the data • 243
    Microsoft • 243
    of incidents, problems, and requests • 250
    options • 215
    predefined • 243
    summary and detail • 217
    syntax • 247
Request for Change (RFC) • 33, 202
requests
    for change • 253
    reporting on • 250
resolution
    of incidents • 19
    of problems • 19
Resolution field • 121, 150
resolution procedure • 191
RESPONSE BODY data • 285
responsibilities • 201
retrieving
    knowledge • 157
retrieving metrics • 215
RFC (Request for Change) • 33, 202
ROI
    measured • 181
ROI tools • 27
roles in support • 19
root cause • 19
    analysis • 206
    field • 201
Root Cause Analysis • 30, 33, 51
routers • 338

S

Sarbanes-Oxley statute • 21
Scalability • 282, 298
schema files • 279
SCIM (Subject-Component-Item-Module) • 152
SMTP • 335
SOA (service-oriented architecture) • 307
SOAP • 308, 335
socket server • 176
sockets • 335
Software firewall • 338
solution • 134, 135, 147
    survey • 125
Solution Architecture Overview (SAO) • 27
Solution Architecture Specification (SAS) • 27
Solution Blueprints • 27
solutions
    in knowledge base • 136
solve loop • 132
spel macros • 184
spell check functionality • 268, 272
SQL server • 177
SQL Server uses port • 335
SSL • 351
static directory • 176
Status field • 201, 203, 204
stdlog • 382
STEM (System-Type-Element Module) • 152
Storage Area Network (SAN) • 304
stored queries • 122, 215
Stress and Interoperability Lab • 283
stub classes • 322
style guide • 138
subject matter experts (SME) • 19
Subject-Component-Item-Module (SCIM) • 152
Submit Knowledge link • 121
subway maps • 23
summary reports • 217
SUN Java VM • 177
Sun Solaris • 177
support
    and compliance • 21
    for multilingual • 257
    key indicators • 21
    levels • 115
    management • 19
    managers • 19
    policy differences • 186
    representatives • 19
    roles • 19
    SLA • 181
support automation • 165
    process • 173
survey • 125
survey tables • 255
surveys • 141, 210
SUSE Linux Enterprise Server • 177
sym value • 332
System-Type-Element Module (STEM) • 152

T

tables
    adding columns • 238
    changing fonts and colors • 235
    changing titles • 236
    hiding columns • 237
taxonomy design • 152
TCP/IP asynchronous messaging • 382
technical support • 101
technician
    role of • 19
Technician to End User ratios • 165
technician UI step • 170
template
    for incidents • 125
tenancy requirements • 289
Tenant Only Data • 294
tenant-only data • 300
test fixes • 257
third-party service • 297
tickets
    creating • 319
    duplicate • 319
    reporting on • 250
Time • 206
time periods
    choosing • 241
Time Spent field • 206
time to violation • 295
timestamps • 248
TimeToAsciiI • 248
time-to-violation options • 187
Title field • 149
toc table • 253
Tomcat. See Apache Tomcat • 177
TOP category • 151
Top Solutions • 122
Total Activity Time • 206
trailing indicators • 21
transaction artifacts (TAs) • 113
Transfer • 197, 315
transfer method • 315
trends • 196
troubleshooting • 382
TYPE field • 44

U

u2lpdmtime.dll file • 248
UI scripts • 168
Unified Service Model • 276
Uninterruptible Power System (UPS) • 301
UNIX platform • 305
updating
    incidents • 315
updating incidents • 315
upper management • 19

W

W3C-defined web services • 307
WAR (Web ARchive file) • 177
web
    forms • 279
Web ARchive file (WAR) • 177
Web Director • 279, 368
Web Engine • 368
Web Engine (webengine) • 279
web interface • 292
Web Screen Painter
    multilingual considerations • 273
Web Screen Painter (WSP) • 279
web servers • 335
    supported • 176
web services • 335
    accessing • 310
    and knowledge tools • 161, 309
    and SOA • 307
    API • 309
    defined • 308
    for authentication • 321
    methods • 311
    search • 317
    tasks • 312
    W3C • 307
web.cfg file • 192
web-based self help • 115
webengine • 279
Webengine memory management • 284
WebLogic • 177
wf • 253