KNOWLEDGE MANAGEMENT SYSTEM IMPROVEMENT TOWARDS

SERVICE DESK OF IT OUTSOURCING IN BANKING BUSINESS













MR PADEJ PHOMASAKHA NA SAKOLNAKORN














A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN INFORMATION TECHNOLOGY
DEPARTMENT OF INFORMATION TECHNOLOGY
GRADUATE COLLEGE
KING MONGKUT'S UNIVERSITY OF TECHNOLOGY NORTH BANGKOK
ACADEMIC YEAR 2007
COPYRIGHT OF KING MONGKUT'S UNIVERSITY OF TECHNOLOGY NORTH BANGKOK



Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : Knowledge Management System Improvement towards
Service Desk of IT Outsourcing in Banking Business
Major Field : Information Technology
King Mongkut’s University of Technology North Bangkok
Thesis Advisor : Assistant Professor Dr. Phayung Meesad
Co-Advisor : Dr. Gareth Clayton
Academic Year : 2007
Abstract
In business, knowledge is an organizational asset that enables corporations to sustain
competitive advantage. As demand grows for IT outsourcing that delivers world-class
services, the Information Technology Infrastructure Library (ITIL) has become a key
concept for providing high-quality services, and the IT service desk is a crucial
function within IT service management as a whole.
Three current problems are addressed: 1) turnover among technical staff is very high;
2) more than sixty percent of all resolving time is spent on repeat incidents; and 3) the
resolver group assigned to deal with an incident may be incorrect due to human error.
This thesis therefore proposes a framework for a knowledge management system with
root cause analysis, called the KMRCA IT service desk system, and evaluates its
performance. The system is composed of two main functions: a knowledge searching
function and an automatic assignment function. The performance of the knowledge
searching function was evaluated using a simulation study, which showed that the
system could significantly reduce the time taken to resolve incidents. Moreover, the
thesis extends the framework to select the most suitable resolver group for each
incident using text mining discovery methods. The ID3 decision tree method could
increase productivity and decrease reassignment turnaround times. Furthermore, the
rules generated from the decision tree can be kept in a knowledge database to support
and assist with future assignments.
(Total 153 pages)
Keywords : knowledge management, service desk, outsourcing, text mining, ITIL,
performance evaluation, simulation study, and decision tree.

______________________________________________________________ Advisor



Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : A Knowledge Management System to Improve the IT Incident
Resolution Service Outsourced for the Banking Business
Major Field : Information Technology
King Mongkut's University of Technology North Bangkok
Thesis Advisor : Assistant Professor Dr. Phayung Meesad
Co-Advisor : Dr. Gareth Clayton
Academic Year : 2007 (B.E. 2550)
Abstract (in Thai)
In business, knowledge is regarded as an important organizational asset that drives
strategic competitive advantage. For IT outsourcing that delivers high-quality services,
ITIL is a key factor, and the incident resolution service desk is a crucial part of IT
service management.
Three main problems are identified: 1) the turnover rate of specialist staff is high;
2) more than 60% of total resolving time is spent on recurring incidents; and 3)
assignments are sometimes inappropriate due to human error. This research therefore
proposes a framework for a knowledge management system with root cause analysis
and evaluates the performance of the KMRCA IT service desk system. The system has
two main functions: knowledge searching and automatic assignment. The research
evaluated the performance of the knowledge searching function through a simulation
study, and the results show that the proposed system significantly reduces incident
resolution time. Furthermore, the framework was extended to cover automatic
assignment of incidents to resolver groups, using text mining techniques to find the
most suitable method based on decision trees. The ID3 decision tree gave the most
accurate results and led to automatic assignment of the appropriate resolver group for
each incident. In addition, the rules obtained from the decision tree are stored in a
knowledge database to support future assignments.
(Total 153 pages)
Keywords : knowledge management, service desk, outsourcing, ITIL, text mining,
performance evaluation, simulation study, and decision tree

___________________________________________________ Thesis Advisor


ACKNOWLEDGEMENTS

I wish to express my gratitude to a number of people who became involved with
this thesis. Foremost, I would like to thank my advisors, Assist. Prof. Dr. Phayung
Meesad and Dr. Gareth Clayton, for providing me with the opportunity to complete
my PhD thesis at King Mongkut's University of Technology North Bangkok.
I would especially like to thank my advisor, Assist. Prof. Dr. Phayung Meesad,
whose support and guidance made my thesis work possible. He has been actively
interested in my work and has always been available to advise me. I am very grateful
for his motivation, enthusiasm, and immense knowledge. He also helped bring my
work to international publication. I would like to thank Dr. Gareth Clayton, whose
advice on research methodology, in particular statistics and simulation techniques,
gave me both the concepts and the practical sense of how good is good enough in
experimental design; this makes him a great mentor. Moreover, I would like to express
my sincere thanks to Assoc. Prof. Dr. Utomporn Phalavonk, whose help with
scheduling and advice on the Graduate College's regulations enabled me to complete
my planning and administrative tasks.
I would like to sincerely thank Dr. Choochart Haruechaiyasak, whose knowledge
and technical suggestions about text mining discovery algorithms, in particular word
extraction and machine learning, facilitated the approach of automatic resolver group
assignment in place of the IT service desk agents' manual task.
Thanks to Taweesak Suwanjaritkul and Pisit Thongngok, whose knowledge of
Visual Basic programming and SQL Server 2005 database management made the
prototype of the KMRCA IT service desk system work effectively.
Thanks to the members of the IT administrative staff, who handled most of my
administrative documents during my study at the university.
This thesis could not have been completed without my wife and my family,
particularly Dad and Mom, who have supported me since I was born.

Padej Phomasakha Na Sakolnakorn


TABLE OF CONTENTS
Page
Abstract (in English) ii
Abstract (in Thai) iii
Acknowledgements iv
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Background and Statement of the Problem 1
1.2 Objectives 3
1.3 Hypothesis 3
1.4 Scope of the Study 3
1.5 Utilization of the Study 5
Chapter 2 Literature Review 7
2.1 Knowledge Management 7
2.2 Root Cause Analysis 10
2.3 Case-Based Reasoning 11
2.4 ITIL-Based IT Service Desk Function 14
2.5 Technologies for Service Desk 22
2.6 IT Service Desk Outsourcing 23
2.7 Decision Support System 24
2.8 Classification Trees 25
2.9 Summary 28
Chapter 3 Methodology 31
3.1 Research Process 31
3.2 Information Collection and Requirement Analysis 32
3.3 Constructing an Instrument for Data Collection 34
3.4 The Proposed KMRCA IT Service Desk Framework 39
3.5 Methodology of Automatic Resolver Assignment 53
3.6 Summary 59




TABLE OF CONTENTS (CONTINUED)
Page
Chapter 4 Experimental Results 61
4.1 The Results of Text Mining Discovery Methods of
Automatic Assign Function 61
4.2 The Results of Design of Experiment 63
4.3 The Results of Performance Evaluation 67
4.4 Summary 69
Chapter 5 Conclusion 71
5.1 Conclusion 71
5.2 Discussion 72
5.3 Future Work 73
References 75
Appendix A 81
Appendix B 89
Appendix C 129
Biography 153









LIST OF TABLES

Table Page
3-1 The Rate of Incident Calls during Time in Business Day and Holiday 33
3-2 Percentage of Incident Calls by Severity 33
3-3 Classification of Calls by Incident Category 34
3-4 Summary of Probability Distributions for Computer Simulation 35
3-5 Comparison of Square Error by Function 36
3-6 A Goodness-of-fit Test of Time in Resolving Incidents by Severity 38
3-7 The Number of Incidents of System Types and Resolver Groups 53
4-1 The Number and Percentage of Correct Incident for Various Types
of Decision Trees 62
4-2 The Speed Compared with the Accuracy of Classification 62
4-3 Assigned Factor Values for Two-Level 64
4-4 2^3 Full Factorial Design of DOE for Responses Y of O1 65
4-5 Coded Design Matrix of O1 65
4-6 Absolute Value of Coefficients for Average O1 and P-Value 66
4-7 Absolute Value of Coefficients for Average O4 and P-Value 66
4-8 Comparison Tests of KMRCA and Typical IT Service Desk Systems 68
4-9 Comparison Outputs of KMRCA and Typical IT Service Desk Systems 68












LIST OF FIGURES

Figure Page
2-1 The Case-Based Reasoning Cycle 12
2-2 Classification Hierarchy of Case-Based Reasoning Applications 13
2-3 Incident Management Process Overview 15
2-4 The Incident Life Cycle 17
2-5 First, Second, and Third Line Supports 18
2-6 Relationship between Incidents 19
2-7 Handling Incident Work-arounds and Resolutions 19
3-1 Input Analyzed Results 36
3-2 Probability Plot of Time between Arrivals 37
3-3 Probability Plot for Resolving Time by Severity 39
3-4 A Typical IT Service Desk Outsourcing Overview 40
3-5 Information Flow of IT Service Desk 41
3-6 A Conceptual Model of IT Service Desk System 42
3-7 A Proposed Framework of KMRCA IT Service Desk System 43
3-8 Information Flow of KMRCA IT Service Desk System 44
3-9 KMRCA IT Service Desk Process 45
3-10 Search Knowledge Procedure 46
3-11 Typical IT Service Desk and KMRCA IT Service Desk 48
3-12 The System Development Life Cycle (SDLC) 49
3-13 A Sample Display of Search Knowledge and Input Resolution 51
3-14 A Sample Display of Searching Results 52
3-15 A Sample Display of Assign Resolver Group 53
3-16 KMRCA IT Service Desk with Automatic Assignment Function 54
3-17 A Process of Automatic Resolver Group Assignment 54
3-18 Processes of Model Approach for Automatic Assignment 56
4-1 Pareto of Coefficients for Average Response Y of O1 66
4-2 Pareto of Coefficients for Average Response Y of O4 66

CHAPTER 1
INTRODUCTION

1.1 Background and Statement of the Problem
Knowledge management is the business process of managing the organization's
knowledge by means of systematic, organization-specific processes for acquiring,
organizing, sustaining, applying, sharing, and renewing both tacit knowledge and
explicit knowledge held by employees, not only to enhance organizational
performance but also to create value [1, 2, 3, 4].
Due to rapid technological change and competition among global financial
institutions, banks in Thailand also need to reduce costs and improve their quality of
service through strategic information technology (IT) outsourcing, such as contracting
data processing and system development to third parties. IT outsourcing is understood
as a process in which service providers external to the organization take over IT
functions formerly conducted within the boundaries of the firm [5, 6]. The IT service
desk is a crucial incident management function, driven by alignment with the business
objectives of the enterprise that requires IT support, balancing its operations and
achieving the desired service level targets, while the IT Infrastructure Library (ITIL)
has become a strategic tool for the efficiency, effectiveness, and competitiveness of IT
outsourcing providers. ITIL defines a set of best practice processes to align IT services
with business needs and constitutes the framework for IT service management [7, 8].
The primary objective of the IT service desk is to resolve IT-related incidents in
the organization. In the case study, it appears that the outsourced IT service desk's role
is not quite a single point of contact [9]. The bank retains ownership of the help desk
agents, called first level support (FLS), which acts as more than just an interface for
internal users and external customers. Consequently, the IT service desk, as second
level support (SLS), resolves the incidents assigned by the FLS, ensuring that each
incident is within the outsourcing scope and remains owned, tracked, and monitored
throughout its life cycle.

Regarding service desk technologies, many organizations have focused on
computer telephony integration (CTI). The basis of CTI is to integrate computers
and telephones so that they can work together seamlessly and intelligently [10].
The major hardware technologies are as follows: automatic call distributor (ACD),
voice response unit (VRU), and interactive voice response unit (IVR) [11]. These
technologies are used to make the existing process more efficient by focusing on
minimizing the agent's idle time. To resolve incidents effectively, IT service desk
agents must be very knowledgeable about their supported services, applications, and
support teams. Most efforts at improving service desk performance have aimed to
make the current system more efficient through applications of information
technology. Those technologies do not address the drop in resolving performance
caused by incorrect assignments.
This thesis identifies three problems as follows:
1.1.1 Employee turnover is very high, particularly among technical employees
[12]. Service desk staff hold significant knowledge about the systems, business
processes, and technologies, and if they leave, their knowledge often goes with them.
1.1.2 More than sixty percent of all resolving time is spent resolving repeat
incidents [13].
1.1.3 The resolver group assigned to deal with an incident may be incorrect due
to human error, because resolver group assignments are still performed manually by
IT service desk agents.
The first two problems can be addressed by retaining employees' knowledge
within the organization through a knowledge management approach and by preventing
recurring incidents using root cause analysis. These activities are becoming primary
internal IT service desk functions of the outsourcing provider, and they have the
potential to provide competitive advantages. The last problem, the incorrect resolver
group assignment, can be resolved by means of an automatic assignment approach.
Text mining discovery methods can identify suitable techniques, such as decision
trees, to support correct assignment, and the rules generated from the decision tree can
be kept in a knowledge database to support and assist with further assignments.


1.2 Objectives
The objectives of this dissertation are as follows:
1.2.1 To propose a framework for a knowledge management system with root
cause analysis, based on ITIL best practice, for outsourced IT service desks in the
banking business, called the KMRCA IT service desk system.
1.2.2 To evaluate the performance of the KMRCA IT service desk system before
and after its adoption, using experimental design and a simulation study.

1.3 Hypothesis
The performance of the KMRCA IT service desk system is expected to be higher
than that of the typical IT service desk system in terms of speed in resolving incidents.
The alternative hypothesis (H1) is therefore that the average time to resolve incidents,
for all calls except critical calls, is lower for the KMRCA IT service desk system than
for the current typical IT service desk system, and the null hypothesis (H0) is that the
average resolving times of the two systems are the same. The two rival hypotheses are
compared by a statistical hypothesis test.
H0 : µ1 = µ2, and
H1 : µ1 < µ2,
where µ1 and µ2 are the average time to resolve incidents for the KMRCA IT service
desk system and for the typical IT service desk system, respectively.
The statistical hypothesis test calculates the probability of the observed effect
occurring if the null hypothesis is true. In other words, if the p-value is small then the
result is called statistically significant and the null hypothesis is rejected in favour of
the alternative hypothesis; otherwise the null hypothesis is not rejected. Incorrectly
rejecting the null hypothesis is a Type I error; incorrectly failing to reject it is a
Type II error.
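As an illustration of this kind of one-sided two-sample test, the sketch below runs a Welch t-test on two hypothetical arrays of resolving times; the data, sample sizes, and significance level are assumptions for the example only, not the thesis's actual measurements or tooling.

```python
# Illustrative one-sided two-sample t-test on hypothetical resolving times (minutes).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
kmrca_times = rng.normal(loc=25.0, scale=8.0, size=200)    # KMRCA IT service desk
typical_times = rng.normal(loc=32.0, scale=9.0, size=200)  # typical IT service desk

# H0: mu1 = mu2  versus  H1: mu1 < mu2 (KMRCA resolves incidents faster on average).
t_stat, p_value = stats.ttest_ind(kmrca_times, typical_times,
                                  equal_var=False, alternative="less")

print(f"t = {t_stat:.3f}, one-sided p-value = {p_value:.4g}")
if p_value < 0.05:
    print("Reject H0: the KMRCA mean resolving time is significantly lower.")
else:
    print("Fail to reject H0.")
```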

1.4 Scope of the Study
The scope of this dissertation is as follows:
1.4.1 This study focuses on the performance evaluation in terms of throughput
and average time taken in resolving incidents.

1.4.2 The performance evaluation compares the situation before and after
employing the KMRCA IT service desk system, using a simulation study in the Arena
[56] software package and a 2^3 factorial design of experiment (a small sketch of the
coded design matrix is given at the end of this section).
1.4.3 For the framework, IT service desk outsourcing includes IT service desk
agents and five resolver groups, including EOS (enterprise operating service), IE-AMS
(application management service), NWS (network service), OS-EC (operation service),
and VEN (vendor service).
1.4.4 The ITIL-based KMRCA IT service desk processes include the IT service
desk function, the incident management process, and the problem management
process.
1.4.5 The proposed KMRCA IT service desk system is developed based on
system analysis and the system development life cycle (SDLC) method. In addition,
the system is composed of two main functions: a knowledge searching function based
on case-based reasoning, and an automatic resolver group assignment function based
on the method selected from text mining discovery algorithms.
1.4.6 The text mining discovery step finds the strongest method by comparing
seven decision trees in the WEKA [65] machine learning suite: Decision Stump, ID3,
J48, NBTree, Random Forest, Random Tree, and REPTree.
1.4.7 The resolver groups are always available when they receive the assigned
incidents from the IT service desk agents.
1.4.8 For the performance evaluation, a sample of incident data was collected
from the Tivoli CTI system of the outsourced IT service desk: 12,198 selected calls
(prime time on working days) over the four months from April to July 2006.
1.4.9 For the study of automatic resolver assignment, a sample of incident data
was collected from the same Tivoli CTI system: all 14,440 cases over the four months
from April to July 2006.
The sample sizes differ because they serve different study objectives. For the
performance evaluation using the simulation study, the sample of 12,198 calls during
prime time on working days was selected so that the simulation output would be as
realistic as possible. For automatic resolver group assignment, the sample is all 14,440
cases, because the main purpose of that study is to run all the data through the system,
regardless of when the calls occurred, and to determine whether incidents are assigned
correctly according to their symptoms.
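To make the 2^3 design mentioned in item 1.4.2 concrete, the short sketch below enumerates the eight coded runs of a two-level, three-factor full factorial design; the factor names are placeholders for illustration, not the factors actually used in the thesis.

```python
# Minimal sketch of a 2^3 full factorial design matrix in coded units (-1, +1).
# Factor names are hypothetical placeholders, not the thesis's actual factors.
from itertools import product

factors = ["A", "B", "C"]
runs = list(product([-1, 1], repeat=len(factors)))  # 2^3 = 8 runs

print("run  " + "  ".join(factors))
for i, levels in enumerate(runs, start=1):
    print(f"{i:>3}  " + "  ".join(f"{v:+d}" for v in levels))

# Each run would be simulated and a response Y (e.g. average resolving time)
# recorded, so that main effects and interactions can be estimated.
```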


1.5 Utilization of the Study
1.5.1 The performance evaluation using a simulation study and experimental
design can be adopted to determine the specification of the knowledge management
system. For example, the performance evaluation of the KMRCA IT service desk can
be applied to other service desk functions to identify the KMRCA specifications,
which can then be modified according to the organization's requirements.
1.5.2 The simulation study also evaluates the KMRCA IT service desk system's
performance without interrupting the service desk's daily operations. Moreover, the
simulation approach can be applied to time-critical processes in several industries in
order to manage constraints of the system.
1.5.3 The ITIL-based IT service desk function, within the incident management
and problem management processes, can be adopted and adapted by the outsourcing
organization to work towards ITIL certification.
1.5.4 The data preparation process and text mining discovery method can be
applied to empirical studies that need data pre-processing and transformation of the
results to find the strongest method for classification.
1.5.5 The suitable decision tree embedded in the IT service desk system provides
not only automatic resolver group assignment but also knowledge acquisition, in the
form of the rules generated from the decision tree method. The acquired knowledge
can be kept to support and assist with further assignments.
The remainder of this thesis is organized as follows. Chapter 2 presents the
literature review, including knowledge management (KM), root cause analysis (RCA),
case-based reasoning (CBR), the ITIL-based IT service desk, technologies for the IT
service desk, IT service desk outsourcing, decision support systems (DSS) for resource
assignment, and classification trees. The details of the proposed model frameworks are
illustrated in Chapter 3. Chapter 4 gives the results of the study and discussion.
Finally, conclusions and future work are presented in Chapter 5.

CHAPTER 2
LITERATURE REVIEW

This chapter reviews the literature relevant to the study. Knowledge management,
root cause analysis, and case-based reasoning are covered in Sections 2.1, 2.2, and 2.3.
Sections 2.4 and 2.5 describe the ITIL-based service desk function and technologies
for service desks. IT service desk outsourcing is described in Section 2.6. Decision
support systems for resource assignment and classification trees are covered in
Sections 2.7 and 2.8. Finally, a summary is given in Section 2.9.

2.1 Knowledge Management
The study of knowledge management started from Polanyi's Tacit Dimension.
His analysis emphasized several key concepts. Firstly, the ability to identify outside
objects, and thus to know, is learned through a process of personal experience.
Secondly, tacitness and explicitness are distinct dimensions; the increase of one does
not come at the decrease of the other. Thirdly, since tacit knowing is an essential
element of any kind of knowledge and is acquired through personal experience, called
indwelling, any effort to achieve absolute detachment as the objective of knowledge is
misdirected and self-defeating. Polanyi's work was situated in a philosophical context,
and focused on the definition of knowledge but not on the systematic effort of
managing it [14].
The conceptualization of KM was not developed until knowledge became central
to production and innovation in the 1990s. Peter Drucker [15] was among the first to
advocate the advent of a knowledge society. In Post-Capitalist Society [15], Drucker
documented the transformation from a capitalist society to a knowledge society, which
began shortly after World War II, noting that the foremost economic resource is no
longer capital, land, or labor; rather, it is and will be knowledge [15]. The field of
knowledge management has also been developed by the experience and philosophy
of Eastern society.


Nonaka and Takeuchi's Knowledge-Creating Company [1], based on experience
in Japanese companies, is a pioneering work in mapping explicit and implicit
knowledge, as well as individual, group, and organizational knowledge, into one
matrix describing the dynamics of knowledge creation. They introduced the
socialization, externalization, combination, and internalization processes through the
SECI model, which has become popular in knowledge management today. The SECI
model, or SECI processes, explains the theory of organizational knowledge creation
and serves as a method of understanding how an organization creates a new product,
new process, or new organisational structure. This concept is easily understood by
focusing on a project in the system solution business in which the creation of a new
product or new process leads to success. Though many success cases in business
activity indicate efficient and effective implementation of SECI, an innovative
organization does not simply solve existing problems or process external information
to adapt to environmental changes. In order to find the problem or solution, it recreates
a new environment while producing new knowledge or information from inside the
organization. For this reason, the SECI processes of knowledge management may be
considered comparable to project management in organizing a project and guiding it
to success [16].
Knowledge management (KM) is the process of managing the organization’s
knowledge by means of systematic and organizational processes conducted by
employees to enhance the organizational performance and create value [1, 2, 3]. The
development of KM, on the other hand, has been driven by practices and development
in information and data management [4]. Organizations should therefore seek and
share a combination of tacit and explicit knowledge with suppliers and other parties in
the value chain to satisfy customer needs in a highly competitive environment. KM is
more than just the application of technology such as intranets and the internet; it also
includes organizational issues and combines information resource management with
the cultural change that is important in the KM implementation process [17].
For organizations, knowledge management is about the acquisition and storage
of employees' knowledge and making that knowledge accessible to other employees
within the organization [3, 18, 19, 20]. Nonaka and Takeuchi [1] have extensively
studied knowledge in the organization and developed a model that







describes knowledge as existing in two forms. Tacit knowledge is defined as personal,
context-specific knowledge that is difficult to formalize and communicate. Explicit
knowledge is factual and easily codified so that it can be formally documented and
transmitted. Through knowledge management, a company changes individuals'
knowledge into organizational knowledge [21]. Organizational knowledge is
knowledge held by the organization. The organization maintains the organizational
knowledge in organizational knowledge resources which are operated on by human or
computer processes that manipulate the knowledge to create value for the
organization [22]. Nonaka and Takeuchi [1] defined organizational learning as, “a
process that amplifies the knowledge created by individuals and crystallizes it as part
of the knowledge network of the organization.” In a service desk environment, much
of the knowledge is from experiential learning [23, 24]. A challenge is how to transfer
the knowledge gained by individuals into organizational knowledge.
Phomasakha and Meesad [9] reviewed several knowledge management systems
(KMS) in the literature and proposed a KMS composed of five processes:
(1) knowledge capturing or knowledge discovery; (2) knowledge creation;
(3) knowledge inventory or knowledge storing; (4) knowledge sharing; and
(5) knowledge transfer. These processes work in a cycle, and knowledge sharing and
knowledge transfer are conveyed to the community of practice (CoP), the people who
know how to use the real knowledge. However, IT is used to support only knowledge
creation and knowledge inventory, which feed the organizational memory (OM) [9].
For the service desk, the relevant knowledge management approach is problem
solving. Gray [25] presented a framework that categorizes knowledge management
from a problem solving perspective. The framework defines four cells according to
the type of problem and the process supported. Along the horizontal axis are two
classes of problems, new problems and previously solved problems; along the vertical
axis are two processes, problem recognition and problem solving. The primary
function of the service desk is solving both new and previously solved problems.
Gray [25] called solving new problems knowledge creation, and solving previously
solved problems knowledge acquisition.


Several characteristics can be defined that make a KMS successful in the
service desk. The KMS must be able to gather knowledge from humans and other
sources. In an IT outsourcing environment in the banking business, the outsourced IT
service desk is a crucial function of an IT outsourcing provider that takes over IT
functions from its customer, the bank. The bank, however, requires service level
targets based on a service level agreement (SLA) to control the IT service desk
operations [26]. The purpose of the outsourced IT service desk is to support customer
services, driven by technology, on behalf of the bank's business goals. The role of the
IT service desk is to ensure that IT incident tickets are owned, tracked, and monitored
throughout their life cycle.

2.2 Root Cause Analysis
A root cause analysis (RCA) is a structured investigation that aims to identify
the true cause of a problem and the actions necessary to eliminate it [27]. RCA is a
process that identifies contributing factors using a structured approach, with
techniques designed to focus attention on identifying and resolving problems. RCA
also provides objectivity for problem solving, assists in developing solutions, predicts
other problems, gathers contributing incidents, and focuses attention on preventing
recurrences. Root cause analysis techniques are often applied as input to the decision
making process. Root cause analysis identifies and prevents future errors in a
proactive mode [28], and it reveals the real reasons for problems [29]. When the root
causes identified by RCA are eliminated or changed, the recurrence of the specific or
similar problems is prevented; the benefits of RCA are therefore improved service
level agreement (SLA) attainment and enhanced service quality as well as customer
satisfaction.
This study develops not only a knowledge management system (KMS) but also
RCA embedded into the system in order to prevent recurring incidents in the KMRCA
IT service desk system. The KMS is designed to be incorporated into the daily
operation of the service desk to ensure high utilization and maintenance of the
knowledge stores [30]. Moreover, the knowledge-based library of RCA models can be
structured as hierarchically interconnected failure trees; abnormalities in process
operations and output quality can originate from abnormalities in equipment or in
process conditions, possibly due to basic failures [31].







2.3 Case-Based Reasoning
Case-based reasoning (CBR) is widely used in incident resolution; it resolves a
new incident by remembering a previous similar situation and by reusing information
and knowledge of that situation [32, 33]. More specifically, CBR uses a database of
incidents to resolve new incidents. The database can be built through the knowledge
management process or collected from previous cases. In incident resolution, each
case describes an incident and the resolution applied to it. The reasoner resolves new
incidents by adapting relevant cases from the library [34]. In addition, CBR can learn
from previous experiences: when an incident is resolved, the case-based reasoner can
add the incident description and the solution to the case library. The new case,
generally represented as a pair of incident and resolution, is immediately available and
can be considered a new piece of knowledge.
According to Doyle et al. [35], Case-Based Reasoning is different from other
artificial intelligence (AI) approaches in following ways:
(a) Traditional AI approaches rely on general knowledge of an incident
domain and tend to solve incidents from first principles, while CBR systems solve new
incidents by utilizing specific knowledge of past experiences.
(b) CBR supports incremental, sustained learning. After CBR solves an
incident, it makes that case available for future incidents.
The roots of CBR go back to Schank and Abelson's [36] work in cognitive
science in 1977 [37]. They proposed that general knowledge about situations be
recorded as scripts that allow us to set up expectations and perform inferences [36].
Schank [36] then investigated the role that the memory of previous situations and
situation-pattern scripts (MOPs) plays in problem solving and learning [36]. At almost
the same time, Gentner [38] investigated analogical reasoning, which is related to
CBR, while Carbonell [39] explored the role of analogy in learning and plan
generalization [38, 39]. Subsequently, increasing numbers of research papers and
applications were published, and CBR has grown into a field of widespread interest.
It has proven to be a methodology suited to solving "weak theory" problems, where it
is difficult or impossible to elicit first-principle rules from which solutions may be
created [40].



2.3.1 The CBR Cycle
The CBR process can be represented by a schematic cycle, as shown in
Figure 2-1. Aamodt and Plaza [33] described CBR as a cyclical process comprising the
following four REs:
1) RETRIEVE the most similar cases; during this process, the case-based
reasoner searches the database to find the cases closest to the current situation.
2) REUSE the cases to attempt to solve the incident; this process includes using
the retrieved case and adapting it to the new situation. At the end of this process, the
reasoner might propose a solution.
3) REVISE the proposed solution if necessary; since the proposed solution
could be inadequate, this process can correct the first proposed solution.
4) RETAIN the new solution as part of a new case.



FIGURE 2-1 The Case-Based Reasoning Cycle [33].

This process enables CBR to learn and create a new solution and a new case that
should be added to the case base. It should be noted that the Retrieve process in CBR
is different from retrieval in a database. A database query retrieves data only by exact
matching, while CBR can retrieve cases by approximate matching. As shown in
Figure 2-1, the CBR cycle starts with the
description of a new incident, which can be solved by retrieving previous cases and
reusing solved cases, if possible, giving a suggested solution or revising the solution,
retaining the repaired case and incorporating it into the case base.







However, this cycle rarely occurs without human intervention, which is usually
involved in the Retain step. Many application systems and tools, such as some help
desk systems and customer support systems, act mainly as case retrieval systems.
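As a concrete illustration of the RETRIEVE step, the sketch below performs approximate matching over a small hypothetical case base of incidents; the cases, fields, and similarity measure are assumptions for the example, not the thesis's actual implementation.

```python
# Minimal sketch of CBR retrieval over a hypothetical incident case base.
def tokens(text):
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard similarity between two incident descriptions (approximate matching)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

case_base = [
    {"incident": "printer not printing on floor 3", "resolution": "restart print spooler"},
    {"incident": "core banking application not available", "resolution": "fail over to standby server"},
    {"incident": "user forgot login password", "resolution": "reset password and unlock account"},
]

def retrieve(new_incident, cases, k=1):
    """RETRIEVE: return the k most similar past cases."""
    return sorted(cases, key=lambda c: similarity(new_incident, c["incident"]), reverse=True)[:k]

best = retrieve("banking application is not available to branch users", case_base)[0]
print("Reuse candidate:", best["resolution"])
# REVISE would adapt this resolution if needed; RETAIN would append the new
# (incident, resolution) pair back into case_base as a new piece of knowledge.
```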
2.3.2 A Classification of CBR Applications
Althoff [41] suggested a classification method of CBR application as shown in
Figure 2-2. Under this classification scheme, CBR applications can be classified into
two categories as follows:
(a) Classification tasks
(b) Synthesis tasks



FIGURE 2-2 Classification Hierarchy of Case-Based Reasoning Applications [41].

Classification tasks are very common in business and everyday life. A new case
is matched against those in the case-base from which an answer can be given. The
solution from the best matching case is then reused. In fact, most commercial CBR
tools support classification tasks.
Synthesis tasks attempt to get a new solution by combining previous solutions
and there are a variety of constraints during synthesis. Usually, they are harder to
implement. CBR systems that perform synthesis tasks must make use of adaptation
and are usually hybrid systems combining CBR with other techniques [37].



2.4 ITIL-Based IT Service Desk Function
ITIL (Information Technology Infrastructure Library) documents industry best
practice guidance. It has proved its value from the very beginning. Initially, OGC
collected information on how various organisations addressed Service Management,
analysed this and filtered those issues that would prove useful to OGC and to its
customers in UK central government. Other organisations found that the guidance was
generally applicable and markets outside of government were very soon created by
the service industry. Being a framework, ITIL describes the contours of organizing
service management. The models show the goals, general activities, inputs and
outputs of the various processes, which can be incorporated within IT organisations.
ITIL is a widely accepted approach to IT Service Management (ITSM). It provides a
comprehensive set of best practices for IT service management, promoting a
quality approach to achieving business effectiveness and efficiency in the use of
information systems. ITIL is based on the collective experience of commercial and
governmental practitioners worldwide. This has been distilled into one reliable,
coherent approach, which is fast becoming a de facto standard used by some of the
world's leading businesses [42].
2.4.1 IT Service Desk Function in Incident Management
The ITIL-based IT service desk in the incident management process provides a vital
day-to-day contact point between users, customers, IT services, and third-party support
organisations. Service Level Management (SLM) is a prime business enabler for this
function. Strategically, for internal users and external customers the IT service desk is
probably the most important function in an IT organisation. For many, the IT service
desk is their only window on the level of service and professionalism offered by the
whole organisation or a department. This delivers the prime service component of
customer perception and satisfaction. The following gives a brief overview of the
Incident Management and Problem Management processes; the details are in the
Service Support book of the ITIL book series.
2.4.2 Incident Management Process
The primary goal of the Incident Management process is to restore normal
service operation as quickly as possible and minimise the adverse impact on business
operations, thus ensuring that the best possible levels of service quality and







availability are maintained. 'Normal service operation' is defined here as service
operation within Service Level Agreement (SLA) limits.
Examples of categories of Incidents are as follows:
(a) application: service not available, an application bug or query preventing
the Customer from working, disk-usage threshold exceeded, and so forth;
(b) hardware: system down, automatic alert, printer not printing, configuration
inaccessible;
(c) service requests: requests for information, advice, or documentation, or a
forgotten password.
A request for new or additional service (i.e. software or hardware) is often not
regarded as an incident but as a Request for Change (RFC). However, practice shows
that handling of both failures in the infrastructure and of service requests are similar,
and both are therefore included in the definition and scope of the process of Incident
Management. Figure 2-3 shows the Incident Management process overview, including
its inputs, outputs, and activities [42].


FIGURE 2-3 Incident Management Process Overview [42].



Inputs are as follows:
(a) Incident details sourced from service desk, networks or computer operations,
(b) configuration details from Configuration Management Database (CMDB),
(c) response from incident matching against problems and Known Errors
resolution details,
(d) response on RFC to effect resolution for incident(s).
Outputs are as follows:
(a) RFC for Incident resolution; updated Incident record, including resolution
and or Work-arounds,
(b) resolved and closed incidents,
(c) communication to Customers,
(d) management information reports.
Incident Management activities are as follows:
(a) Incident detection and recording,
(b) Classification and initial support,
(c) investigation and diagnosis,
(d) resolution and recovery,
(e) Incident closure,
(f) Incident ownership, monitoring, tracking and communication.
Most IT departments and specialist groups contribute to handling incidents at
some time. The service desk is responsible for monitoring the resolution process of all
registered incidents; in effect, the service desk is the owner of all incidents. The
process is mostly reactive. Incidents that cannot be resolved immediately by the
service desk may be assigned to specialist groups. A resolution or
Work-around should be established as quickly as possible in order to restore the
service to Users with minimum disruption to their work. After resolution of the cause
of the incident and restoration of the agreed service, the incident is closed. Figure 2-4
illustrates the activities during an incident life cycle.









FIGURE 2-4 The Incident Life Cycle [42].

Throughout an incident life-cycle it is important that the Incident record is
maintained. This allows any member of the service team to provide a Customer with
an up-to-date progress report. Example update activities include:
(a) update history details
(b) modify status (e.g. 'new' to 'work-in-progress' or 'on hold')
(c) modify business impact/priority
(d) enter time spent and costs
(e) monitor escalation status
An originally reported Customer description may change as the Incident
progresses. It is, however, important to retain the description of the original
symptoms, both for analysis and so that you can refer to the complaint in the same
terms used in the initial report [42].
Often, departments and specialist support groups other than the service desk are
referred to as second or third line support groups, having more specialist skills, time
or other resources to resolve incidents. In this respect, the service desk would be first
line support. Figure 2-5 illustrates how this terminology relates to the Incident
management activities mentioned in previous paragraphs.





FIGURE 2-5 First, Second, and Third Line Supports [42].

The service desk plays an important role in the Incident Management process,
as follows:
(a) All incidents are reported to and registered by the service desk; where
incidents are generated automatically, the process should still include registration by
the service desk.
(b) The majority of incidents, possibly up to 85% in a highly skilled
environment, will be resolved at the service desk.
(c) The service desk is the independent function monitoring the incident
resolution progress of all registered incidents.








Incidents, the result of failures or errors within the IT infrastructure, result in
actual or potential variations from the planned operation of the IT services. The cause
of incidents may be apparent and that cause can be addressed without the need for
further investigation, resulting in a repair, a Work-around, or an RFC to remove the
error. Successful processing of a Problem record will result in the identification of the
underlying error, and the record can then be converted into a Known Error once a
Work-around and/or an RFC has been developed [42]. This logical flow, from an initial
report to the resolution of an underlying problem, is shown in Figure 2-6.



FIGURE 2-6 Relationship between Incidents.

It can be noted that a problem is the unknown underlying cause of one or more
incidents, and a Known Error is a problem that has been successfully diagnosed and
for which a Work-around is known. An RFC is a Request for Change to any
component of the IT infrastructure or to any aspect of IT services.
When Incident Management finds a Work-around, it will be analysed by the
Problem Management team, who will update the associated Problem record, as shown
in Figure 2-7. An associated Problem record may not exist at this time, for
example, the Work-around may be to send a report by fax due to a communication
line failure, but at this point there may not be a Problem record for the communication
line failure, which the Problem Management team would have to create [42].





FIGURE 2-7 Handling Incident Work-arounds and Resolutions [42].


The process is then that the service desk will link incidents that are clearly the
result of an existing Problem record. It is also possible that the Problem Management
team, while investigating the problem associated with the incident, finds a Work-
around or a resolution for a problem and/or some related incidents [42].
In this case, the Problem Management team should inform the Incident
Management process so that open incidents have their status changed to 'Known
Error' or 'closed' as appropriate. The next subsection describes the Problem
Management process.
2.4.3 Problem Management Process
The goal of Problem Management is to minimise the adverse impact of incidents
and problems on the business that are caused by errors within the IT Infrastructure,
and to prevent recurrence of incidents related to these errors. In order to achieve this
goal, Problem Management seeks to get to the root cause of incidents and then initiate
actions to improve or correct the situation [42].
The Problem Management process has both reactive and proactive aspects. The
reactive aspect is concerned with solving problems in response to one or more
incidents. Proactive Problem Management is concerned with identifying and solving
problems and Known Errors before incidents occur in the first place. The process is
intended to reduce both the number and severity of incidents and problems on the
business. Therefore, part of Problem Management's responsibility is to ensure that
previous information is documented in such a way that it is readily available to
first-line and second-line staff.
The scope of Problem Management process includes Problem control, error
control and proactive Problem Management. In terms of formal definitions, a
'Problem' is an unknown underlying cause of one or more incidents, and a 'Known
Error' is a problem that is successfully diagnosed and for which a Work-around has
been identified.
Inputs to the Problem Management process are as follows:
(a) Incident details from Incident Management
(b) configuration details from the Configuration Management Database (CMDB)
(c) any defined Work-arounds from Incident Management.








The major activities of Problem Management are as follows:
(a) Problem control
(b) Error control
(c) Proactive prevention of problems
(d) Identifying trends
(e) Obtaining management information from Problem Management data
(f) Completion of major problem reviews.
Outputs of the process are as follows:
(a) Known Errors
(b) A Request for Change (RFC)
(c) An updated Problem record, including a solution and or any work-arounds
(d) for a resolved problem, a closed Problem record
(e) response from Incident matching to problems and Known Errors
(f) management information.
A problem is a condition often identified as a result of multiple incidents that
exhibit common symptoms. Problems can also be identified from a single significant
incident, indicative of a single error, for which the cause is unknown, but for which
the impact is significant. A Known Error is a condition identified by successful
diagnosis of the root cause of a problem, and the subsequent development of a Work-
around. Structural analysis of the IT infrastructure, reports generated from support
software, and User-group meetings can also result in the identification of problems
and Known Errors. This is proactive Problem Management. Problem control focuses
on transforming problems into Known Errors. Error control focuses on resolving
Known Errors structurally through the Change Management process [42].
Problem Management differs from Incident Management in that its main
goal is the detection of the underlying causes of an incident and their subsequent
resolution and prevention. In many situations this goal can be in direct conflict with
the goals of Incident Management where the aim is to restore the service to the
Customer as quickly as possible, often through a Work-around, rather than through
the determination of a permanent resolution (for example, by searching for structural
improvements in the IT infrastructure, in order to prevent as many future incidents as
possible). In this respect, therefore, the speed with which a resolution is found is only


of secondary (albeit still of significant) importance. Investigation of the underlying
problem can require some time and can thus delay the restoration of service, causing
downtime but preventing recurrence [42].

2.5 Technologies for Service Desk
A number of technologies are available to assist the service desk function, each
with its advantages and drawbacks. It is important to ensure that the blend of
technology, process, and service desk staff will meet the needs of both the business
and the User.
The technology needs to support business processes, adapting to both current and
future demands. It is also important to understand that with automation comes an
increased need for discipline and accountability. The following are several service
desk technologies:
(a) integrated Service Management and Operations Management systems,
(b) advanced telephone systems for example auto-routing, computer telephony
integration (CTI), voice over internet protocol (VOIP),
(c) interactive voice response (IVR) systems,
(d) electronic messaging, such as voice, video, mobile communications, internet, and
email systems,
(e) fax servers (supporting routing to email accounts),
(f) pager systems,
(g) knowledge, search and diagnostic tools, and
(h) automated operations and network management tools.
In automating the agent-centric help desk, many have focused on computer
telephony integration (CTI). The basis of CTI is to integrate computers and
telephones so they can work together seamlessly and intelligently [10]. The major
hardware technologies are as follows: Automatic call distributor (ACD); voice
response unit (VRU), Interactive voice response unit (IVR), predictive dialing,
headsets, and reader boards [11]. These technologies are used to make the existing
process more efficient by minimizing the agent's idle time and evenly loading the
agents in the help desk. These technologies do not address the problem of knowledge
loss when agents leave nor do they provide information to the agent in helping to
resolve problems.







2.6 IT Service Desk Outsourcing
Information Technology (IT) outsourcing has been one of the critical issues in
organization management [43]. Outsourcing dismantles internal IT departments by
transferring IT employees, facilities, hardware leases, and software licenses to
third-party vendors [44]. Hirschheim and Lacity [45] defined IT outsourcing as the
practice of transferring IT assets, leases, staff, and management responsibility.
Linder [46] argued that transformational outsourcing is an emerging practice in which
companies look outside for help for more fundamental reasons, including 1) to
facilitate rapid organizational change; 2) to launch new strategies; and 3) to reshape
company boundaries.
Most bank organizations tend to outsource IT work by hiring a professional
company to run their IT operations. The IT service desk should be the window onto
the IT service and professionalism offered by the organisation. The intellectual capital
involved in supporting the users and customers is a valuable business asset and should
not be discarded without a clear understanding of the business requirement [42]. There
are two objectives of the IT service desk: one is to provide a single point of contact for
users and customers, and the other is to facilitate the restoration of normal operational
service with minimal business impact on the user or customer, within agreed service
levels and business priorities.
The IT service desk operated by the outsourcing company, called the IT service desk
or Second Level Support (SLS), is the main service function, while the Bank Help
Desk or First Level Support (FLS) provides a day-to-day contact point between
customers, users, the bank's vendors, and IT services. There are two types of incidents,
non-IT and IT incidents. The FLS and the bank's vendors handle non-IT incidents. For
IT incidents, the FLS assigns the incident to the IT service desk (SLS) to resolve, and
the SLS may assign it to Third Level Support teams, including AMS, EOS, NWS, and
vendor support teams. Service Level Management (SLM) is a prime business enabler
for this function.
The outsourced IT service desk is not an actual single point of contact [9], though
service desks or help desks in general serve an important role in the information
technology department by providing the primary point of contact for users to reach

analysts who help them resolve problems with information technology, including
hardware, software, and networks [30]. This is because the IT service desk takes in the
incidents assigned by the bank help desk or First Level Support (FLS), rather than
being contacted directly by users or customers in the first instance; the IT service desk
thus sits between the FLS and Third Level Support (TLS).
Authorized third level support groups should be given access to update the
service desk records. Updating the records ensures that resource usage is properly
accounted for. However, the organization should keep a close watch on what its
supplier is doing.

2.7 Decision Support System
In the past decade, contributions of decision support systems (DSS) for resource
assignment have been proposed in several areas. In R&D project selection, Sun [47]
presented a hybrid knowledge and model approach that integrated mathematical
decision models for the assignment of external reviewers to R&D project proposals;
the purpose of the model was to assign the most appropriate experts to the relevant
proposals. Earlier, Fan [48] proposed a decision support system for proposal grouping,
a hybrid approach in which knowledge rules were designed to deal with proposal
identification and proposal classification, and a genetic algorithm was used to search
for the expected groupings. Next, in the area of decision support for the single-depot
vehicle rescheduling problem, Li [49] presented a system whose aim was to minimize
operation and delay costs; it was designed to obtain optimal vehicle assignments and
reassignments. In naval work, the problem of assigning navy personnel to jobs was
addressed by a guided design search for the interval-bounded sailor assignment
problem proposed by Lewis [50]; the paper offers an expanded interval-bounded
network flow model of the sailor assignment process, creating teams of skilled sailors
to be assigned to ships. In 2003, a decision support system for multi-attribute utility
evaluation based on imprecise assignments was proposed by Jiménez et al. [51]; the
paper describes a decision support system based on an additive or multiplicative
multi-attribute utility model for identifying the optimal strategy. Last but not least, in
research on a rule-based system







for the automatic assignment of technicians to service faults, Lazarov and Shoval [52]
presented a model and prototype system for assigning technicians to handle
computer faults, including hardware, software, and communications. Selection of the
technician most suited to deal with the reported failure was based on assignment
rules, which are correlations between the nature of the fault and the technicians'
skills. The model was evaluated using a simulation test, comparing the results of the
model's assignment process against assignments carried out by experts. The results
showed that the system's assignments were better than the experts'.
The technologies that support service desks are described in Section 2.5.
However, those technologies do not address the drop in resolving performance caused
by incorrect assignments. Incorrect assignment still takes place because of human
error, since the assignment of a resolver group to deal with an incident is performed
manually by IT service desk agents. In fact, service desk management technologies do
not focus on automatic assignment, although the ITIL framework guides the
outsourced IT service desk to resolve incidents by putting in place best practice
processes for IT service desk decision making regarding assignment and reassignment.
This thesis proposes an automatic resolver group assignment function based on text
mining discovery methods, implementing the strongest method as well as validating
the selected method of the model.

2.8 Classification trees
A decision tree is a simple structure in which each branch node represents a choice between a number of alternatives and each leaf node represents a classification or decision. An ordinary tree consists of one root, branches, nodes (places where branches divide), and leaves. In the same way, a decision tree consists of nodes, drawn as circles or cones, and branches, drawn as segments connecting the nodes. A decision tree is drawn from left to right or from the root downwards, which makes it easier to draw. The first node is the root. The end of the chain "root - branch - node - ... - node" is called a "leaf." From each internal node (i.e. a node that is not a leaf) two or more branches may grow. Each node corresponds to a certain characteristic, and the branches correspond to ranges of values. These ranges of values must form a partition of the set of values of the given characteristic [53].


Decision tree algorithms can be applied to solve the problem under discussion. Decision trees represent a supervised approach to classification. The decision trees studied here are from WEKA, a suite of machine learning software written in Java and developed by the University of Waikato, New Zealand, and described in the book on data mining with practical machine learning tools and techniques that accompanies the WEKA software [54]. The study implemented several decision trees, including Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and REPTree. Brief descriptions of the various decision tree methods follow.
2.8.1 Decision Stump
A decision stump [54] is a decision tree of depth one, in which the split at the root level is based on a specific attribute-value pair. A decision stump is a weak machine learning model; such models are often used as components in ensemble learning techniques such as bagging and boosting.
2.8.2 ID3
ID3 [55] constructs simple decision trees using the information gain criterion. At each node the data are partitioned into subsets according to the values of an attribute, and the exact split is determined by examining the entropy of the resulting subsets: the split that yields the largest information gain, i.e. the largest decrease in entropy, is executed. However, the greedy approach it uses cannot guarantee that better trees have not been overlooked.
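
To make the information gain criterion concrete, the following is a minimal sketch in Java of entropy and information gain; the small ticket sample in main is invented for illustration and is not taken from the thesis dataset.

import java.util.*;

public class InformationGain {

    // Shannon entropy (log base 2) of a list of class labels.
    static double entropy(List<String> labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : labels) counts.merge(label, 1, Integer::sum);
        double h = 0.0, n = labels.size();
        for (int c : counts.values()) {
            double p = c / n;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Information gain obtained by partitioning the labels on the given attribute values.
    static double informationGain(List<String> attributeValues, List<String> labels) {
        double before = entropy(labels);
        Map<String, List<String>> partitions = new HashMap<>();
        for (int i = 0; i < labels.size(); i++)
            partitions.computeIfAbsent(attributeValues.get(i), k -> new ArrayList<>()).add(labels.get(i));
        double after = 0.0;
        for (List<String> part : partitions.values())
            after += (part.size() / (double) labels.size()) * entropy(part);
        return before - after;  // decrease in entropy caused by the split
    }

    public static void main(String[] args) {
        // Hypothetical tickets: system-type failure (attribute) vs. assigned resolver group (class).
        List<String> systemType = Arrays.asList("Hardware", "Hardware", "Software", "Network", "Software");
        List<String> resolver   = Arrays.asList("NWS", "OS-EC", "IE-AMS", "VEN", "IE-AMS");
        System.out.printf("Information gain = %.4f%n", informationGain(systemType, resolver));
    }
}

ID3 evaluates this gain for every candidate attribute at a node and splits on the attribute with the largest value.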
2.8.3 J48
The J48 classifier [55, 56] generates an unpruned or pruned C4.5 decision tree; it is the slightly modified implementation of C4.5 in WEKA machine learning. The C4.5 algorithm generates a classification decision tree for the given dataset by recursive partitioning of the data. The tree is grown using a depth-first strategy. The algorithm considers all the possible tests that can split the dataset and selects the test that gives the best information gain. For each discrete attribute, one test with as many outcomes as the number of distinct values of the attribute is considered. For each continuous attribute, binary tests involving every distinct value of the attribute are considered.









2.8.4 NBTree
The naïve Bayesian tree learner, NBTree [57], combines naïve Bayesian
classification and decision tree learning. In NBTree, a local naïve Bayes is deployed
on each leaf of a traditional decision tree, and an instance is classified using the local
naive Bayes on the leaf into which it falls. The algorithm for learning an NBTree is
similar to C4.5. After a tree is grown, a naive Bayes is constructed for each leaf using
the data associated with that leaf. An NBTree classifies an example by sorting it to a
leaf and applying the naïve Bayes to that leaf to assign a class label to it. NBTree
frequently achieves higher accuracy than either a naïve Bayesian classifier or a
decision tree learner.
2.8.5 Random Forest
A random forest [58] is an ensemble of unpruned classification or regression trees induced from bootstrap samples of the training data, using random feature selection in the tree induction process. Prediction is made by aggregating the predictions of the ensemble: majority vote for classification or averaging for regression. A random forest generally exhibits a substantial performance improvement over single-tree classifiers such as CART and C4.5. Its generalization error depends on the strength of the individual trees in the forest and the correlation between them.
2.8.6 Random Tree
A random tree [54] is a tree drawn at random from a set of possible trees. Here random means that each tree in the set has an equal chance of being sampled; in other words, the distribution of trees is uniform. Random trees can be generated efficiently, and combining large sets of random trees generally leads to accurate models. Random tree models have been extensively developed in the field of machine learning in recent years.
2.8.7 REPTree
REPTree is a fast decision tree learner that builds a decision or regression tree using information gain as the splitting criterion and prunes it using reduced-error pruning. It sorts values for numeric attributes only once. Missing values are dealt with using C4.5's method of fractional instances.


2.9 Summary
The objectives of the thesis are relevant to two areas. The first is the performance evaluation of the knowledge management system, based on the search knowledge function, in terms of speed in resolving incidents; the second is the automatic resolver group assignment based on text mining discovery methods, namely decision tree algorithms. The following is a summary of the review.
2.9.1 Knowledge management system and its performance evaluation
This section summarizes the reviews of knowledge management, root cause analysis, case-based reasoning, the ITIL-based IT service desk (which includes the service desk function, incident management, and problem management), and technologies for the service desk, in particular the CTI system used in the IT service desk system.
Knowledge can be categorized into two different types, tacit and explicit, which
also differ in the level of structure of the organization [1]. Knowledge management
(KM) is the business process of managing the organization’s knowledge by means of
systematic and organizational specific procedures for acquiring, organizing,
sustaining, applying, sharing, and renewing both tacit knowledge and explicit
knowledge by employees to enhance the organizational performance and to create
value [2, 3].
In highly competitive business environments, managing tacit knowledge, which includes the true value-added intellectual assets of an organization, is an essential task for maintaining the organization's core competency [4]. In addition, the knowledge base is able to support the service desk environment. Thus, it can be concluded that a knowledge management system (KMS) is composed of five processes: (1) knowledge capturing; (2) knowledge creation; (3) knowledge storing, or knowledge inventory; (4) knowledge sharing; and (5) knowledge transfer, which are elaborated through the community of practice, because this is how people develop real knowledge. Both knowledge creation and knowledge inventory are related to IT and therefore become organisational memory (OM), which can be a source of an organization's competitive advantage [9].
Knowledge management is a discipline that provides strategy, process, and
technology to share and leverage information and expertise that will increase human’s
level of understanding to more effectively solve problems and make decisions [20].







According to the ITIL guidance processes, the main purpose of incident management is to minimise interruption to business activities and ensure availability of service. In addition, in the ITIL best practice approach, the service desk owns the entire process, regardless of who actually manages the various tasks. It appears unlikely that the service desk's role in incident management will extend beyond being an interface between internal users and external customers [8].
The intention of this thesis is to propose a model of knowledge management with root cause analysis, called the KMRCA IT service desk, and to develop a prototype of the KMRCA IT service desk system for IT service desk outsourcing. The system is able to improve the performance of the IT service desk function in terms of speed in resolving incidents. Case-based reasoning, as covered in the literature review, can be applied to search for similar previous cases to resolve an incident.
2.9.2 Decision support system of automatic resolver group assignment
This section summarizes the review of decision support systems focusing on resource assignment in various areas. Although there are several papers on decision support systems for resource assignment, none of that research applied text mining discovery methods. For example, the research on automatic assignment of technicians to service faults [52] used a rule-based system in which the rules were created by experts who are knowledgeable about how to solve various service faults.
The KMRCA IT service desk system requires an automatic resolver group assignment function. The function attempts to match the most suitable resolver group with the symptoms of the incident. Text mining discovery methods are used to find the strongest method for the model to classify the suitable resolver group. In fact, text mining is data mining applied to information extracted from text. It can be broadly defined as a knowledge-intensive process in which a user interacts with a document collection over time using suitable analysis tools.

CHAPTER 3
METHODOLOGY

This chapter outlines the research process, provides a rationale for the research methodologies that were chosen, and presents the proposed model and a prototype of the KMRCA IT service desk system.

3.1 Research Process
The following describes the operational steps of the research process that this thesis carried out step by step.
3.1.1 Formulate research problems
The thesis reviewed the literature described in Chapter 2 and then formulated the problems and identified the hypotheses introduced in Chapter 1.
3.1.2 Conceptualize a research design
The purpose of the thesis is to evaluate the performance of the KMRCA IT service desk system using design of experiment and simulation. The main function of the system is the search knowledge function; when the agents use this function, the system can resolve incidents faster than the previous system. The 2^k factorial design of experiment is widely used to find the factors that influence the defined variables used as key performance indicators (KPIs). The simulation study is used to represent both systems, and the simulation results of the two systems are compared in terms of speed in resolving incidents.
3.1.3 Construct tools for data collection
The thesis is an empirical study; a sample of 14,440 incident call records was collected over 4 months (April to July 2006) from the Tivoli CTI system of the IT service desk outsourcing in the bank. The selected tools used to analyse the data include the Arena simulation software package, the Input Analyzer in Arena, Minitab 15 statistical analysis software, WEKA machine learning, and MS Excel spreadsheets with data filtering.




3.1.4 Select a sample
This step selects a sample; the accuracy of the findings largely depends on how the sample is selected. The thesis selected two samples to support the two objectives of the study. Firstly, a selected sample of 12,198 calls was used for the performance evaluation in the simulation study and design of experiment. Secondly, from the same source, the full sample of 14,440 cases was used for the text mining discovery methods of the automatic resolver group assignment approach.
3.1.5 Write a research proposal
After all the preparatory work was done, this step put everything together in a way that provides adequate information for the advisor(s) and others. The thesis was proposed under the topic Knowledge Management System Improvement towards Service Desk of IT Outsourcing in Banking Business: Evaluation its Performance. The final title is the same as the proposed topic but without "Evaluation its Performance".
The literature review belongs not only to the first step of formulating a research problem but also to several other steps, including research design, data collection, and writing the thesis document, because new literature continues to appear throughout the research.

3.2 Information Collection and Requirement Analysis
3.2.1 Information Collection
The objective of the study is to evaluate the performance of the KMRCA IT service desk system, and the research hypothesis is that the average time in resolving incidents of all severities, excluding Severity 1, is lower than that of the previous IT service desk system. Thus, the underlying incident data of 12,198 calls were collected from the Tivoli CTI system of the IT service desk for four separate weeks randomly selected from the four-month period from April to July 2006. A sample of the incident data is shown in Appendix A, A-1, Figure A-1.
In this sample, the columns contain various information about the IT incidents, including ticket number, open date, open time, resolve date, resolve time, severity, system-type failures, assigned resolver group, incident descriptions, incident resolutions, caller details, and so forth. In line with the research objectives, the thesis focuses on the performance evaluation, for which the relevant data are the time and severity columns.





3.2.2 Requirement Analysis
The data are analysed according to the objectives of the performance evaluation using computer simulation. The study selects the Arena discrete-event simulation software package to analyse the data and to build the conceptual model for the computer simulation.
3.2.2.1 The rate of incoming calls
The analysis concerns the nature of the data, in particular the inter-arrival times of calls coming to the bank help desk, where the agents create IT incident tickets and send them to the IT service desk, and the service times for resolving those incidents. The analysis found that the rate of incoming calls during the day differs between business days and holidays. Table 3-1 shows the rate of calls by time of day for business days and holidays.

TABLE 3-1 The Rate of Incident Calls during Time in Business Day and Holiday
Time Business Day (calls/hr.) Holiday (calls/hr.)
8:00 - 10:00 25.75 1.68
10:01 - 12:00 18.15 2.53
12:01 - 13:00 8.83 0.92
13:01 - 15:00 16.38 2.79
15:01 - 17:00 12.55 2.28
17:01 - 18:00 6.16 0.68


3.2.2.2 The percentage of incident calls by severity
Next, the percentage of incident calls by severity, i.e. the frequency of incident calls at each severity level, is shown in Table 3-2.

TABLE 3-2 Percentage of Incident Calls by Severity
Severity Number of Calls Percentage (%)
1 86 0.71
2 395 3.24
3 11,680 95.75
4 37 0.30


As shown in Table 3-2, the rank of number of calls and their percentage is
Severity 3 (11,680, 95.75%), Severity 2 (395, 3.24%), Severity 1 (86, 0.71%), and
Severity 4 (37, 0.30%).



3.2.2.3 Incident Classification
The incidents are classified into five categories, as shown in Table 3-3, with their frequency of occurrence from the Tivoli CTI system. A Pareto phenomenon is observed, whereby the top three problem categories account for 98.02% of the total calls received.

TABLE 3-3 Classification of Calls by Incident Category
Incident Category No. of Incidents Percent of Frequency
1) Hardware 6,454 52.91
2) Software 3,981 32.63
3) Network 1,522 12.48
4) Power Supply 211 1.73
5) Operations 30 0.25

3.3 Constructing an Instrument for Data Collection
3.3.1 Goodness-of-fit Test Method
Since the data consist of times between arrivals and service times in resolving incidents, it is necessary to understand the methodology for fitting curves to the nature of the data so that the data pattern can be represented in the computer simulation.
The quality of a curve fit is based primarily on the square error criterion, which is defined as the sum of {f_i - f(x_i)}^2 over all histogram intervals. In this expression f_i refers to the relative frequency of the data for the i-th interval, and f(x_i) refers to the relative frequency for the fitted probability distribution function. This last value is obtained by integrating the probability density across the interval. If the cumulative distribution is known explicitly, then f(x_i) is determined as F(x_i) - F(x_{i-1}), where F refers to the cumulative distribution, x_i is the right interval boundary and x_{i-1} is the left interval boundary. If the cumulative distribution is not known explicitly, then f(x_i) is determined by numerical integration.
The Chi-square and Kolmogorov-Smirnov tests provide goodness-of-fit tests for non-integer data. Their results are presented in the form of a p-value, which is the largest value of the type-I error probability that still allows the distribution to fit the data. The higher the p-value, the better the fit. For example, if the p-value is greater than 0.05, the null hypothesis of a good fit would not be rejected at the 0.05 level.
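
As a minimal illustration of the square error criterion described above, the following Java sketch computes the criterion for a hypothetical histogram of inter-arrival times against an assumed exponential fit; the interval boundaries, relative frequencies, and mean are invented for illustration and are not the thesis data.

import java.util.function.DoubleUnaryOperator;

public class SquareErrorFit {

    // Square error criterion: sum over histogram intervals of
    // (observed relative frequency - fitted relative frequency)^2,
    // where the fitted relative frequency is F(right boundary) - F(left boundary).
    static double squareError(double[] boundaries, double[] observedRelFreq, DoubleUnaryOperator cdf) {
        double sum = 0.0;
        for (int i = 0; i < observedRelFreq.length; i++) {
            double fitted = cdf.applyAsDouble(boundaries[i + 1]) - cdf.applyAsDouble(boundaries[i]);
            double diff = observedRelFreq[i] - fitted;
            sum += diff * diff;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Hypothetical histogram of inter-arrival times (minutes): boundaries and observed relative frequencies.
        double[] boundaries = {0.0, 2.0, 4.0, 6.0, 8.0, 10.0};
        double[] observed   = {0.45, 0.25, 0.15, 0.10, 0.05};

        double mean = 3.0;  // assumed mean of the fitted distribution, for illustration only
        DoubleUnaryOperator exponentialCdf = x -> 1.0 - Math.exp(-x / mean);

        System.out.printf("Square error = %.6f%n", squareError(boundaries, observed, exponentialCdf));
    }
}

The Input Analyzer reports this kind of square error for each candidate distribution so that the functions can be ranked against one another.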





Table 3-4 summarizes the probability distributions that can be fitted to the data, for which an enabled distribution function is calculated by the Input Analyzer. The summary file provides the most complete compilation of information describing the curve fit. Selecting the Fit All summary item causes a dialog to appear showing the results of the best-fit calculations. All of the applicable distribution functions are listed along with their corresponding square errors, ranked from best to worst. This listing permits one function to be compared with another for the current data file.

TABLE 3-4 Summary of Probability Distributions for Computer Simulation
Distribution (Keyword)   Parameters
Beta (BETA)              Beta, Alpha
Continuous (CONT)        CumP1, Val1, ..., CumPn, Valn
Discrete (DISC)          CumP1, Val1, ..., CumPn, Valn
Erlang (ERLA)            ExpoMean, k
Exponential (EXPO)       Mean
Gamma (GAMM)             Beta, Alpha
Johnson (JOHN)           Gamma, Delta, Lambda, Xi
Lognormal (LOGN)         LogMean, LogStd
Normal (NORM)            Mean, StdDev
Poisson (POIS)           Mean
Triangular (TRIA)        Min, Mode, Max
Uniform (UNIF)           Min, Max
Weibull (WEIB)           Beta, Alpha


3.3.2 Goodness-of-fit Test of Time between incident arrivals
A discrete event simulation package called Arena [59] is used to imitate the conceptual models of the IT service desk system and the KMRCA IT service desk system. A full exposition of the simulation model is available in Simulation with Arena. The time between arrivals of incident calls was analysed using the Input Analyzer, a standard component of the Arena environment. Figure 3-1 shows the pattern of the time between arrivals of incident calls fitted to a Weibull distribution.



FIGURE 3-1 Input Analyzer Results
The distribution summary from the Input Analyzer is as follows:
(a) Distribution : Weibull
(b) Expression : WEIB (3.64, 0.905)
(c) Square Error : 0.001045
(d) Chi-Square test, corresponding p-value : 0.706
The Input Analyzer can be used to determine the quality of fit of probability distribution functions to the input data and to compare distribution functions by square error (Sq. Error), as shown in Table 3-5.

TABLE 3-5 Comparison of Square Error by Function
Function Sq. Error
Weibull 0.00104
Gamma 0.00161
Lognormal 0.00181
Exponential 0.00279
Erlang 0.00279
Beta 0.00360
Normal 0.07030
Triangular 0.10300
Uniform 0.13200





However, the lowest square error does not mean that the distribution function is suited to the data until the p-value is evaluated by a goodness-of-fit test. The goodness-of-fit tests use the following hypotheses:
(a) H0: The distribution adequately describes the data
(b) H1: The distribution does not adequately describe the data
Under these hypotheses, if the p-value > 0.05 at the 95% confidence level, H0 is not rejected, which means the distribution fits the data in the test case.
Another view of the goodness-of-fit test is given by a probability plot. Figure 3-2 shows the probability plot of the time between incident arrivals. The graph was generated with the Minitab 15 statistical analysis software package. Since the data points follow the straight line, the p-value is > 0.250, and the AD statistic (the Anderson-Darling statistic, which measures how well the data follow a particular distribution) is 0.424, it can be concluded that, at an alpha level of 0.05, the Weibull distribution provides a good fit for the time between incident arrivals. Therefore, the fitted distribution can be used in the simulation instead of the default exponential inter-arrival time.

[Figure: Weibull probability plot of call arrivals with 95% CI; Shape 1.011, Scale 3.318, N = 98, AD = 0.404, p-value > 0.250]

FIGURE 3-2 Probability Plot of Time between Arrivals

The simulation model was verified to ensure that the IT service desk system works properly in terms of Arena functionalities and that the entities of the incident calls follow the same path as described in the conceptual model shown in Appendix C, C-1.


The verification was done using the trace element, which is adopted within a discrete model to generate a detailed trace report of entity processing. The simulation was run for 4 replications of 22 working days during prime time from 8:00 a.m. to 8:00 p.m. The trace output allows the sequence of an entity to be followed as it flows through the system, from entity creation until entity disposal.
The entity is an incident ticket whose process flow follows the intended design. To verify the output, the model was run with different replication numbers to confirm that it works properly under different conditions. After verifying the operation of the simulation model, it was validated. In order to reduce variation, four replications were conducted with different random number streams on the simulation model. A t-test with a 95% confidence level was conducted to compare the results of the simulation model with the results recorded for the actual system based on the data collected from the Tivoli CTI system. For each variable, the null hypothesis of no difference between the systems was not rejected at the 95% confidence level, which indicates that the simulation model adequately represents the actual system's behaviour.
3.3.3 Goodness-of-fit Test of Service Time in Resolving Incidents
The simulation process requires an expression for the distribution fitted to the time in resolving incidents; therefore, the resolving time by severity was analysed to fit a suitable distribution using the Input Analyzer. Table 3-6 shows the results of the goodness-of-fit test.

TABLE 3-6 A Good-of-fit Test of Time in Resolving Incidents by Severity
Severity Distribution Expression Sq. Error p-value
1 Lognormal LOGN (2.37, 4.74) 0.002295 0.158
2 Lognormal LOGN (4.19, 6.46) 0.003581 0.078
3 Lognormal LOGN (7.87, 11.1) 0.015237 0.053
4 Beta 144*BETA(0.248,1.27) 0.037923 0.039

Similarly, the distribution fit of the service time in resolving incidents can be assessed from a probability plot by viewing how the points fall about the fitted line, as shown in Figure 3-3.








[Figure: probability plots of resolving time by severity with 95% CI. Severity 1, Lognormal: Loc 0.1583, Scale 1.099, N = 84, AD = 0.543, p-value = 0.158. Severity 2, Lognormal: Loc 0.8753, Scale 1.071, N = 90, AD = 0.669, p-value = 0.078. Severity 3, Lognormal: Loc 1.212, Scale 0.9604, N = 89, AD = 0.736, p-value = 0.053. Severity 4, Beta: Shape 0.6166, Scale 38.25, N = 37, AD = 0.842, p-value = 0.039]

FIGURE 3-3 Probability Plot for Resolving Time by Severity

3.4 The Proposed KMRCA IT Service Desk Framework
This section illustrates a typical IT service desk system, the conceptual model of the IT service desk for simulation modeling, the KMRCA IT service desk framework, the Incident management and Problem management processes, the search knowledge procedure, and a comparison of the typical IT service desk and KMRCA IT service desk systems.
3.4.1 A Typical IT Service Desk Outsourcing
The IT service desk is a crucial function of an IT outsourcing provider that takes over IT functions from a bank. The bank sets service level targets, based on the service level agreement (SLA), to control the IT service desk operations. The purpose of the IT service desk outsourcing is to support customer services on behalf of the bank's technology-driven business goals.


The role of the IT service desk is to ensure that IT incident tickets are owned,
tracked, and monitored throughout their life cycle. Figure 3-4 shows a Typical IT
service desk outsourcing overview.



FIGURE 3-4 A Typical IT Service Desk Outsourcing Overview

There are three main agent levels in the end-to-end incident resolving process: (1) first level support, called FLS, the Bank help desk agents; (2) second level support, called SLS, the IT service desk outsourcing agents; and (3) third level support, called TLS, the resolver groups. This thesis focuses on the IT service desk outsourcing, which includes the IT service desk agents and the technical resolver groups. The Tivoli CTI technology is used as the interface among the three levels of agents so that they can work simultaneously on the current incident ticket to be resolved within the target time. Internal users or external customers contact the FLS agents directly with various incident reports. They can contact the FLS in several ways, such as by telephone call, fax, email, and the internet. The FLS divides the incident reports into two types, depending on whether the incident is IT-related: non-IT incidents and IT incidents. Both types are reported to the FLS agents, who review the reports in terms of incident type, assign an initial severity, complete the necessary incident descriptions, and then open the tickets one by one without duplication.







Non-IT incident tickets are resolved by the bank's resolvers, while IT incident tickets are assigned to the IT service desk outsourcing, or SLS agents, to resolve. Consequently, the SLS agents review and validate the assigned IT incident ticket for adequacy and correctness based on the outsourcing scope, incident types, and severity criteria. If the assignment is not correct, both the FLS and SLS are requested to resolve the issue. A valid IT incident ticket may be resolved by the SLS agents using the knowledge management system [9] or be assigned to the resolver groups, or TLS, to resolve the incident. TLS agents include five main resolver groups: (1) EOS, (2) IE-AMS, (3) NWS, (4) OS-EC, and (5) VEN.
To resolve incidents effectively, IT service desk agents act according to the Incident management and Problem management processes, whose details are described in the next section. The IT service desk agents take ownership of the assigned incident and attempt to resolve it by searching for essential information from several sources such as the data store, file server, and Internet. If the incident requires deeper technical expertise, the IT service desk agent decides to assign the incident to the technical resolver groups. Figure 3-5 shows the information flow of the IT service desk.

[Figure: Customers/Users contact the Bank Help Desk (First Level Support), which passes IT incidents to the IT Service Desk of IT Outsourcing (Second Level Support); the service desk draws on the Internet, File Server, and Data Store, decides whether to assign a resolver, and either provides the SLS resolution or assigns the ticket to the Resolver Groups (Third Level Support): 1) AMS Support, 2) EOS Support, 3) NWS Support, 4) Operation Support, and 5) Vendor Support]


FIGURE 3-5 Information Flow of IT Service Desk




3.4.2 Conceptual Model of IT Service Desk
Figure 3-6 shows the conceptual model of the IT service desk system, in which the incidents flow through the three agent levels: 1) FLS agents; 2) SLS agents; and 3) resolver groups. The conceptual model is translated into the simulation model.

FIGURE 3-6 A Conceptual Model of IT Service Desk System

The severity is determined according to the following criteria, based on the impact on the bank's business and the urgency required (a simple code sketch of these rules follows the definitions).
Severity 1 means a “critical” severity problem, (a major system, application or
network failure impacting on a large number of users and having a critical impact to
the user’s business) and where no workaround is available.
Severity 2 means a “high” severity problem and a workaround may be available.
In other words, one component of a system application or network has failed
impacting on a small number of users; or a fault which may have a potential “critical”
impact if not resolved quickly; or a problem impacting 1 user and the impact is
significant, such as end of month financials.
Severity 3 means a “moderate” severity problem (impact is moderate and only
to 1 user) and a workaround is available.
Severity 4 means a “low” severity problem (no impact to the user) and a
workaround is available.
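
As referenced above, the following Java sketch expresses the severity criteria as a simple rule; the inputs (major failure, number of users impacted, workaround availability) are a simplification of the textual criteria for illustration only and are not the bank's actual classification logic.

public class SeverityRules {

    enum Severity { CRITICAL_1, HIGH_2, MODERATE_3, LOW_4 }

    // A rough rule-of-thumb mapping of the criteria above to a severity level.
    static Severity severity(boolean majorFailure, int usersImpacted, boolean workaroundAvailable) {
        if (majorFailure && usersImpacted > 1 && !workaroundAvailable) return Severity.CRITICAL_1; // Severity 1
        if (usersImpacted > 1 || (usersImpacted == 1 && majorFailure)) return Severity.HIGH_2;     // Severity 2
        if (usersImpacted == 1 && workaroundAvailable) return Severity.MODERATE_3;                 // Severity 3
        return Severity.LOW_4;                                                                     // Severity 4
    }

    public static void main(String[] args) {
        System.out.println(severity(true, 500, false)); // prints CRITICAL_1
        System.out.println(severity(false, 1, true));   // prints MODERATE_3
    }
}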
According to the severity criteria, when the FLS agents create an incident ticket they also assign an initial severity to it. If the incident ticket is IT-related, it is called an IT incident ticket and is assigned to the SLS agents to resolve. The SLS agents then check whether the incident ticket is within the outsourcing scope and whether the assigned severity is correct, and attempt to resolve the incident. If the ticket is solved at the second level, the incident is closed. If the incident cannot be completed at the second level, it is assigned to the relevant technical resolver group responsible for resolving the incident.
3.4.3 KMRCA IT Service Desk Outsourcing Model
Because the Bank owns the first level support, called the Bank help desk agents, which initiates support and provides the vital day-to-day contact point between internal users and external customers, the IT service desk agents are not quite a single point of contact (SPOC) [9], while the resolver groups have the more specialist skills, time, and resources needed to resolve the assigned incidents. The IT service desk system also suffers from high staff turnover, especially among technical staff, and from recurring incidents. Thus, the thesis proposes the framework of the KMRCA IT service desk as shown in Figure 3-7.



FIGURE 3-7 A Proposed Framework of KMRCA IT Service Desk System

The model embeds the KMRCA into the functions of the IT service desk outsourcing. In fact, the KMRCA is the KMS acting as the outsourcing organization's memory, providing resolutions and the results of root cause analysis in order to prevent recurring incidents or problems. Besides, the KMS enables IT service desk agents to increase the speed of resolving incidents. With the KMRCA, the agents can search for similar cases in the knowledge database so that the time taken to resolve an incident is reduced. Figure 3-8 shows the model of the KMRCA IT service desk outsourcing.



FIGURE 3-8 Information Flow of KMRCA IT Service Desk System

As shown in Figure 3-8, the KMRCA database includes knowledge of incident resolutions, results of root cause analysis, and so forth. The IT service desk resolves incidents by accessing many different information and knowledge sources via the KMRCA.
The KMRCA IT service desk approach serves as an intermediary between the service desk agent and all data, information, and knowledge sources. The sources range from files on the agent's computer and access to the database to communication with other agents and access to the Internet. Case-based reasoning systems enable help desks to store and share knowledge in the form of cases, while resolving an incident remains the responsibility of the IT service desk agent.
However, the incident may be assigned to the relevant resolver group to resolve it. No matter who resolves the incident, the resolution is recorded in the knowledge database once it has been resolved.






3.4.4 Incident Management and Problem Management processes
Based on ITIL, the IT service desk function lies within the Incident management process. The implementation of the KMS IT service desk system extends the process to the incident management and problem management performed by the IT service desk agents, as shown in Figure 3-9. The short process flow shows several activities of the incident management and problem management processes; the details of the Incident management and Problem management processes are shown in Appendix B.



FIGURE 3-9 KMRCA IT Service Desk Process



3.4.5 Search Knowledge Procedure of KMRCA IT Service Desk
When IT service desk agents use the KMRCA IT service desk system, they perform searching using the search knowledge procedure shown in Figure 3-10.




FIGURE 3-10 Search Knowledge Procedure

The search knowledge procedure has the following steps (a code sketch of the agent-side branch, Steps 1 to 6, is given after the list):
1) The IT service desk agent reviews the incident information and the urgency required.
2) The IT service desk agent determines whether the ticket requires escalation.
(a) If yes, proceed to Step 3 to escalate the ticket to the relevant resolver group.
(b) If no, proceed to Step 4 to search for similar cases in the KMRCA database.
3) The IT service desk agent escalates the ticket to the relevant resolver group.
4) The IT service desk agent searches for similar cases in the KMRCA database.
5) Was the incident resolved?
(a) If yes, proceed to Step 6 to provide the resolution.
(b) If no, return to Step 4 and search for similar cases in the KMRCA database again.
6) The IT service desk agent provides the resolution to the FLS (Bank help desk) and updates the KMRCA repository.
7) The resolver group reviews the ticket assigned by the SLS.
8) The resolver group determines whether the incident can be resolved without the KMRCA.
(a) If yes, proceed to Step 9 to resolve the incident without the KMRCA.
(b) If no, proceed to Step 10 to search for similar cases in the KMRCA database.
9) The resolver group resolves the incident without the KMRCA.
10) The resolver group searches for similar cases in the KMRCA database.
11) Was the incident resolved?
(a) If yes, proceed to Step 12 to provide the resolution.
(b) If no, return to Step 10 and search for similar cases in the KMRCA database again.
12) The resolver group provides the resolution to the FLS (Bank help desk) and updates the KMRCA repository.
13) End
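
As referenced above, the following is a minimal Java sketch of the agent-side branch of the search knowledge procedure (Steps 1 to 6). It is illustrative only: the Ticket record and the methods requiresEscalation, searchSimilarCases, escalateToResolverGroup, and provideResolutionAndUpdateKmrca are hypothetical stubs, not functions of the actual KMRCA prototype.

import java.util.List;

public class SearchKnowledgeProcedure {

    // Hypothetical ticket record used only for illustration.
    record Ticket(String id, String description, int severity) {}

    // Agent-side branch of the search knowledge procedure (Steps 1-6).
    static void handleTicket(Ticket ticket) {
        // Steps 1-2: review the incident and decide whether escalation is required.
        if (requiresEscalation(ticket)) {
            escalateToResolverGroup(ticket);                                  // Step 3
            return;
        }
        // Steps 4-5: search the KMRCA database for similar cases
        // (the repeated search loop of Step 5 is simplified to a single lookup here).
        List<String> similarCases = searchSimilarCases(ticket.description());
        if (!similarCases.isEmpty()) {
            provideResolutionAndUpdateKmrca(ticket, similarCases.get(0));     // Step 6
        } else {
            escalateToResolverGroup(ticket);                                  // fall back to the resolver group
        }
    }

    // Stub implementations standing in for the real KMRCA functions.
    static boolean requiresEscalation(Ticket t) { return t.severity() <= 1; }
    static List<String> searchSimilarCases(String description) { return List.of("Restart the print spooler"); }
    static void escalateToResolverGroup(Ticket t) { System.out.println("Escalate " + t.id()); }
    static void provideResolutionAndUpdateKmrca(Ticket t, String resolution) {
        System.out.println("Resolve " + t.id() + " with: " + resolution);
    }

    public static void main(String[] args) {
        handleTicket(new Ticket("INC-001", "Printer not responding", 3));
    }
}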


3.4.6 Comparison of Typical and KMRCA IT Service Desk systems
The comparison of a typical IT Service Desk against the KMRCA IT Service Desk is shown in Figure 3-11. The obvious difference between the two is that the KMRCA IT Service Desk includes the KMRCA system as the central point of information: IT service desk agents search for information through the KMRCA. The KMRCA system connects to several sources of information, such as the data store, file server, and Internet, and also receives updated resolutions from the resolver groups. However, essential information such as updated incident resolutions has to be validated by IT experts acting as domain experts.



FIGURE 3-11 Typical IT Service Desk and KMRCA IT Service Desk






3.4.7 Methodology of System Development
There are many methodologies for the development of information systems: the Systems Development Life Cycle (SDLC), data structure-oriented design, object-oriented design, and prototyping, among others. However, this thesis is concerned primarily with the SDLC.
The Systems Development Life Cycle, referred to variously as the waterfall model or the linear cycle, is a coherent description of the steps taken in the development of information systems. Figure 3-12 shows the system development life cycle (SDLC).


FIGURE 3-12 The System Development Life Cycle (SDLC)

The SDLC methodology is closely associated with what has come to be known as structured systems analysis and design. It involves a series of steps to be undertaken in the development of information systems, as follows:
(a) Problem definition
On receiving a request from the user for systems development, an investigation is conducted to state the problem to be solved; the deliverable is a problem statement.

(b) Feasibility study
The objective here is to clearly define the scope and objectives of the systems project and to identify alternative solutions to the problem defined earlier; the deliverable is a feasibility report.
(c) Systems analysis phase:
The present system is investigated and its specifications documented. They
should contain our understanding of HOW the present system works and WHAT it
does. In addition, the deliverables are specifications of the present system.
(d) Systems design phase
The specifications of the present system are studied to determine what changes will be needed to incorporate the user needs not met by the present system. The output of this phase consists of specifications that describe both WHAT the proposed system will do and HOW it will work. The deliverables are the specifications of the proposed system.
(e) Systems construction
Systems construction includes Programming the system and development of
user documentation for the system as well as the programs. The deliverables are
programs, their documentation, and user manuals.
(f) System testing and evaluation
System testing and evaluation include testing, verification, and validation of the system just built. The deliverables are the test and evaluation results and the system ready to be delivered to the user or client.
Note that the model has many attractive features, such as 1) clearly defined deliverables at the end of each phase, so that the client can take decisions on continuing the project; 2) incremental resource commitment, so the client does not have to make a full commitment to the project at the beginning; and 3) isolation of problems early in the process.










3.4.8 The Prototype of KMRCA IT Service Desk System
The prototype of the KMRCA IT service desk system was developed using the SDLC, from problem definition to system testing and evaluation. It includes several functions based on the whole end-to-end concept of the IT service desk's functionalities. GUI menus for multiple agents can be accessed via the internet by logging on from client machines. In this chapter, the two core functions of the system are the search knowledge function and the decision support function for automatic assignment.
The purpose of the search knowledge function is to find similar cases so that the agents can select one or more of them when resolving the incident. Figure 3-13 displays the search knowledge and input resolution screens. The agents can double-click on the convex lens (magnifying glass) icon on the left-hand side to open the search knowledge menu. The search menu is then displayed as a pop-up, and the agents can enter keywords in the search input field. For example, entering the search keyword 'Printer' and clicking the search button gives several results of similar cases regarding printer failures, which can be drilled down case by case to see their details.



FIGURE 3-13 A Sample Display of Search Knowledge and Input Resolution


The knowledge in this function is organized by the scope of incidents with system-type failures. The classification helps IT service desk agents identify how the incident should be solved and by whom. The incident scope describes the general type of incident failure, such as software, hardware, network, operations, and power supply.
The required knowledge is accessible through several menus, including the search menu and the input resolutions menu, as shown in Figure 3-14.
The identified cases, such as previous incidents that match the present one, may or may not help the agent in resolving the call. In this thesis, the knowledge database stores the cases that are used in the case-based reasoning approach.



FIGURE 3-14 A Sample Display of Searching Results

The automatic resolver group function can initiate automatic resolver group assignment by setting which severities require automatic assignment. Figure 3-15 shows the decision support function for assigning the resolver group.







FIGURE 3-15 A Sample Display of Assign Resolver Group

3.5 Methodology of Automatic Resolver Assignment
3.5.1 Sample and requirement analysis
Raw datasets were provided by the Tivoli system in a spreadsheet of 14,440 incident cases, collected over 4 months (April to July 2006). A sample of the data is shown in Appendix A, Figure A-1. Each column (or attribute) contains information about the IT incident tickets. In this study, we focus on the information in four columns: incident descriptions, system-type failures, component failures, and the assigned resolver groups related to those system-type failures. Table 3-7 shows the number of incidents of the various system types and their resolver groups.

TABLE 3-7 The Number of Incidents of System Types and Resolver Groups
System types EOS IE-AMS NWS OS-EC VEN Total
Hardware 0 0 5,605 1,841 294 7,740
Software 376 400 3,307 148 61 4,292
Network 0 0 308 593 1,120 2,021
Operation 0 6 6 0 18 30
Power Supply 0 0 0 357 0 357
Total 376 406 9,226 2,939 1,493 14,440



3.5.2 The proposed automatic resolver group assignment
The thesis improved the KMRCA IT service desk system by proposing an automatic resolver group assignment function within the system. Figure 3-16 shows the IT service desk outsourcing function with automatic resolver group assignment, and the details of the automatic resolver group assignment are illustrated as a process in Figure 3-17.



FIGURE 3-16 KMRCA IT Service Desk with Automatic Assignment Function



FIGURE 3-17 A Process of Automatic Resolver Group Assignment

The automatic resolver group assignment function is one of the core functions
in the KMRCA IT service desk system. The focal point is the resolver group which
handles the proper allocation of resources to deal with the assigned incident.





The following is the narrative of the automatic resolver group assignment process (a sketch of Steps 5 and 6 is given after the list).
Step 1: Start by entering the IT incident ticket, which includes a text document.
Step 2: Perform keyword-based word extraction.
Step 3: Perform text measures and transform the case-term data for the model classification.
Step 4: Implement the ID3-based method to generate a pattern and to identify a suitable resolver group(s). The rules generated by the ID3 method are shown in Appendix A, A-4: An extended part of the ID3 decision tree results, and A-5: A sample of ID3-based generated rules.
Step 5: Calculate the percentage of matching words for the assigned resolver group and display the results.
Step 6: Determine whether the percentage match is equal to or greater than the specified criterion.
(a) If yes, proceed to Step 8 to assign the resolver group to deal with the incident.
(b) If no, proceed to Step 7 to notify the IT service desk (SLS) to make a decision.
Step 7: Notify the IT service desk (SLS) to make a decision.
Step 8: Assign the resolver group to deal with the incident.
Step 9: Display the results of the assignment.
Step 10: Have the assigned results and generated rules validated by IT experts.
Step 11: Check whether the IT expert has validated the result yet.
(a) If yes, proceed to Step 13 to check whether the result has changed.
(b) If no, proceed to Step 12 to check whether the duration time is still valid.
Step 12: Check whether the duration time is still valid.
(a) If yes, proceed to End.
(b) If no, return to Step 10 to validate the assigned results.
Step 13: Check whether the result has changed.
(a) If yes, proceed along parallel paths to Step 14 to update the keywords.
(b) If no, proceed to End.
Step 14: Update the keywords to keep the generated rules and assignment results in the knowledge database.
Step 15: End of the process.
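
As referenced above, the following Java sketch makes Steps 5 and 6 concrete; the keyword profile of the resolver group, the example ticket keywords, and the 80% matching threshold are all invented for illustration and are not taken from the thesis.

import java.util.*;

public class MatchPercentage {

    // Step 5: percentage of the extracted ticket keywords that match the keyword
    // profile of a candidate resolver group.
    static double matchPercentage(List<String> ticketKeywords, Set<String> groupKeywords) {
        if (ticketKeywords.isEmpty()) return 0.0;
        long matched = ticketKeywords.stream().filter(groupKeywords::contains).count();
        return 100.0 * matched / ticketKeywords.size();
    }

    public static void main(String[] args) {
        // Hypothetical keyword profile for one resolver group and one extracted ticket.
        Set<String> groupKeywords = Set.of("printer", "lan", "router", "switch", "hard disk");
        List<String> ticketKeywords = Arrays.asList("printer", "offline", "lan", "branch");

        double percent = matchPercentage(ticketKeywords, groupKeywords);
        double threshold = 80.0;  // assumed assignment criterion; the actual value would be set by the service desk

        // Step 6: assign automatically if the match meets the criterion, otherwise notify the SLS agent.
        if (percent >= threshold) {
            System.out.printf("Assign the resolver group automatically (%.1f%% match)%n", percent);
        } else {
            System.out.printf("Notify the IT service desk agent to decide (%.1f%% match)%n", percent);
        }
    }
}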



3.5.3 Data preparation and selected model procedure
The raw dataset contains structured information about incident cases as
previously described in Section 3.2.



FIGURE 3-18 Processes of Model Approach for Automatic Assignment
The six steps of the model approach for automatic assignment are: 1) data preparation of the text documents of incident records; 2) document collection, or text corpus; 3) division of the data into training and testing documents; 4) text measures; 5) method selection based on the training documents; and 6) model validation based on the testing documents. Figure 3-18 shows the processes of this model approach for automatic assignment.
3.5.3.1 Data preparation
Data preparation processes [60] include data recognition, parsing, filtering, data cleansing [61], and transformation. The study added data grouping by keywords. Hence, in this case, the data preparation processes are as follows:
(a) Data recognition: this identifies the incident records collected from the Tivoli CTI system as the sample of raw structured data in spreadsheet format.





(b) Data parsing: the purpose of data parsing is to resolve a sentence into its component parts of speech; statements in a computer language have to be parsed. The statements are broken down and the individual words of which the incident report is composed are identified. The study modified LexTo to break the incident documents (Thai and English) down into words. LexTo is a Java word-extraction program for both languages, developed by the National Electronics and Computer Technology Center of Thailand (NECTEC), and it works with the Lexitron dictionary. The study created an additional keyword dictionary and modified the program to use both dictionaries; as a result, the correctness of the word extraction is more than 98.7% of all words. The keywords extracted from the incident dataset are shown in Appendix A, A-2, Figure A-2.
(c) Data filtering: this involves selecting the rows and columns of data for the document collection, or text corpus. Consequently, the text corpus includes several columns: system failure types, sub-system or component failures, incident descriptions, and assigned resolver group.
(d) Data cleaning: the study corrects inconsistent data, checks that the data conform across the columns, and fills in missing values, in particular for the component failures and assigned resolver groups.
(e) Data grouping: the word extraction yields many words, which are then grouped into the words of component and system-type failures. Two kinds of grouping are applied: 1) words with the same meaning, for example the keyword "Hard Disk" having the same meaning as "Hard Drive" or "HD"; and 2) related words in singular or plural form [62]. A sketch of this grouping is given after the list.
(f) Data transformation: the study transforms the data prior to data analysis. Several steps need data transformation, such as word extraction, text measurement, and text mining via WEKA machine learning, which is applied to discover algorithms or methods by comparing several decision tree algorithms to find the most suitable method for the nature of the incident data.
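
As referenced in step (e), the following is a minimal Java sketch of grouping extracted words under a canonical keyword; the synonym map and example words are invented for illustration and are not the dictionary actually used with LexTo.

import java.util.*;

public class KeywordGrouping {

    // Hypothetical synonym map: variants produced by word extraction mapped to a canonical keyword.
    static final Map<String, String> CANONICAL = Map.of(
            "hard drive", "hard disk",
            "hd",         "hard disk",
            "hard disks", "hard disk",
            "printers",   "printer",
            "lan",        "network");

    // Normalize one extracted word: lower-case it and map it to its canonical keyword if one exists.
    static String normalize(String word) {
        String w = word.trim().toLowerCase();
        return CANONICAL.getOrDefault(w, w);
    }

    public static void main(String[] args) {
        List<String> extracted = Arrays.asList("Hard Drive", "HD", "Printers", "LAN");
        for (String word : extracted) {
            System.out.println(word + " -> " + normalize(word));
        }
    }
}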
3.5.3.2 Dataset separation for training and testing
The sample dataset is divided into two sets of documents: (1) a training document set consisting of 66% of the cases and (2) a testing document set consisting of 34% of the cases.


3.5.3.3 Document collection
The document collection, or so-called "text corpus", is the database containing the text fields, which include the sample data. The data are a subset of the incident database. The textual fields are selected columns such as system-type failures, component failures, incident descriptions, and assigned resolver group [63].
3.5.3.4 Text Measures
The purpose of text measures is to find attributes that describe the text, i.e. how many of the keywords (KW1, KW2, ..., KWn, where n is the number of keywords) related to the assigned groups occur in the documents. The study developed a program that computes text measures based upon word counts across the sample of text documents and displays them.
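
The following Java sketch illustrates such a word-count text measure for a single incident description; the keyword list and the description are invented for illustration, and this is not the program developed in the study.

import java.util.*;

public class TextMeasures {

    // Count how often each keyword occurs in one incident description.
    static int[] keywordCounts(String document, List<String> keywords) {
        List<String> tokens = Arrays.asList(document.toLowerCase().split("\\W+"));
        int[] counts = new int[keywords.size()];
        for (int i = 0; i < keywords.size(); i++) {
            String keyword = keywords.get(i).toLowerCase();
            counts[i] = (int) tokens.stream().filter(keyword::equals).count();
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical keyword list (KW1..KWn) and one incident description.
        List<String> keywords = Arrays.asList("printer", "network", "hardware", "disk");
        String incident = "Branch printer cannot print over the network; printer queue stuck";

        System.out.println(Arrays.toString(keywordCounts(incident, keywords)));  // prints [2, 1, 0, 0]
    }
}

One such count vector per document, together with the assigned resolver group as the class label, forms the input to the decision tree methods compared in the next step.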
3.5.3.5 Method Selection
Method discovery is the core of the text mining algorithms. Several decision tree methods (Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and REPTree) were implemented on the training dataset within the WEKA framework of Witten and Frank [54]. Finally, the ID3 decision tree method was found to be the strongest method for the nature of that dataset.
Text mining is data mining applied to information extracted from text. It can be broadly defined as a knowledge-intensive process in which a user interacts with a document collection over time using suitable analysis tools [64]. The text mining handbook by Feldman and Sanger [64] presents a comprehensive discussion of text mining and link detection algorithms and their operation.
3.5.3.6 Model Validation
The ID3-based model is proposed for the automatic resolver group assignment function; the model is illustrated in Figure 3-13. To validate the model, the thesis implemented ID3 within WEKA on the testing dataset; the details of the validation results of the ID3 method are shown in Appendix A, A-3: Evaluation result of the ID3 decision tree method.
To estimate classification performance, 10-fold cross-validation is commonly used [57]. Ten-fold cross-validation helps to prevent over-fitting; the reported accuracy is the average over ten runs in which nine tenths of the sample are used as the training set and the remaining tenth as the testing set.
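
The following is a minimal sketch of building an ID3 tree and evaluating it with 10-fold cross-validation through the WEKA Java API; the file name incidents.arff is a hypothetical placeholder for an ARFF export of the incident data with the assigned resolver group as the last (class) attribute.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.Id3;
import weka.core.Instances;

public class Id3CrossValidation {
    public static void main(String[] args) throws Exception {
        // Load the incident dataset (ARFF format) and mark the last attribute as the class.
        Instances data = new Instances(new BufferedReader(new FileReader("incidents.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // Evaluate the ID3 decision tree with 10-fold cross-validation.
        Id3 tree = new Id3();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));

        System.out.println(eval.toSummaryString("\n=== 10-fold cross-validation ===\n", false));
        System.out.printf("Accuracy of classification: %.4f %%%n", eval.pctCorrect());
    }
}

The same call pattern, with a different classifier object, can be used for the other decision tree methods listed in Section 3.5.3.5.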





3.6 Summary
The purpose of IT service desk is to support services on behalf of the bank’s
technology driven business goals. The role of IT service desk is to ensure that IT
incident tickets are owned, tracked, and monitored throughout their life cycle.
Knowledge management is used as the framework to integrate the technology, people,
and process for improved service desk performance.
The purpose of this methodology chapter is to demonstrate the proposed model and a prototype of the KMRCA IT service desk system. In addition, the descriptions of information collection and data analysis focus on the simulation study used in the performance evaluation. To operate the new system, IT service desk agents and resolver groups have to follow the proposed processes, in particular the search knowledge procedure, so that the agents can leverage the organization's knowledge and resolve incidents faster than they could without the knowledge management system.
The automatic assignment is another core function of the system. The aim of this function is to demonstrate the proposed enhanced model of the decision support system for automatic resolver group assignment and a prototype of the ARGA-ID3 IT service desk system. The system was improved from the KMRCA IT service desk system by embracing the automatic resolver group assignment. A sample is analysed in terms of the correlation between the system-type failures and the resolver groups related to those failures. For the core methodology of text mining discovery methods with classification trees, the strongest method is evaluated by 10-fold cross-validation, which helps to prevent over-fitting.
The text mining discovery algorithm provides an optimized pattern discovery framework for text. In particular, it considers the class of simple combinatorial patterns over phrases and the problem of finding the patterns that optimize a given statistical measure within the whole class of patterns in a large collection of unstructured texts.

CHAPTER 4
EXPERIMENTAL RESULTS

This chapter presents the experimental results in terms of performance evaluation. Section 4.1 shows the results of the text mining discovery methods for the automatic assignment function. The results of the experimental (screening) design, which identifies which factors are important for each influenced variable, are illustrated in Section 4.2. Section 4.3 shows the performance evaluation of the KMRCA IT service desk system, analysed and compared against the previous, typical IT service desk system using a simulation study based on actual data. Finally, the summary is presented in Section 4.4.

4.1 The Results of Text Mining Discovery Methods of the Automatic Assignment Function
In this section, the results are divided into two parts: (1) the comparison results; and (2) the selected method evaluation. The experimental results, in particular the time taken to build the models, are based on an IBM ThinkPad R50e notebook computer with 768 MB of memory and an 80 GB hard disk running at 5,400 rpm. In addition, the software tool used in the experiment is the WEKA machine learning software, version 3.4.12, with the maxheap parameter in RunWeka.ini changed from the default of 128 MB to the maximum value of 1,280 MB in order to support our large dataset.
4.1.1 Comparison results
A comparison of various decision tree methods was conducted within the WEKA framework. Based on the 66% training portion of the sample dataset, 9,530 records, seven classification trees were implemented within WEKA [54] with default parameters: Random Tree, Random Forest, ID3, J48, NBTree, REPTree, and Decision Stump. In the experiment, the accuracy on the sample was obtained using 10-fold cross-validation, which helps to prevent over-fitting. All the experimental results are shown in Tables 4-1 and 4-2. Table 4-1 shows the number and percentage of correctly classified incidents for the various types of decision trees. Table 4-2 shows the speed of building the models, the size of the trees, and the accuracy of classification for the individual classifiers.

TABLE 4-1 The Number and Percentage of Correct Incidents for Various Types of Decision Trees
Decision Tree Classifiers   No. of Correct Instances   No. of Incorrect Instances   Accuracy of Classification (%)
ID3                         8914                       616                          93.5362
Random Tree                 8914                       616                          93.5362
Random Forest               8913                       617                          93.5257
J48                         8896                       634                          93.3473
NBTree                      8890                       640                          92.2844
REPTree                     8866                       664                          92.0325
Decision Stump              7587                       1943                         80.3746

From Table 4-1, it can be seen that ID3 and Random Tree were equally good in terms of the proportion of correct allocations, with Random Forest not far behind. Decision Stump was the worst.

TABLE 4-2 The Speed Compared with the Accuracy of Classification
Decision Tree Classifiers   Time Taken to Build Model (seconds)   Size of Tree   Accuracy of Classification (%)
ID3                         5.15                                  134            93.5362
Random Tree                 20.89                                 167            93.5362
Random Forest               46.96                                 10             93.5257
J48                         19.58                                 83             93.3473
NBTree                      190.54                                1              92.2844
REPTree                     10.39                                 85             92.0325
Decision Stump              0.59                                  1              80.3746

From Table 4-2, Decision Stump is by far the fastest classifier, by an order of magnitude, but it has the highest proportion of misclassifications and produces only a single tree. ID3 is the second fastest classifier, about twice as fast as the next one, and it also has the lowest proportion of misclassifications.


The comparison of the decision tree methods is considered in terms of accuracy and performance, as shown in Tables 4-1 and 4-2, respectively. ID3 and Random Tree give the highest accuracy. However, Random Tree is not well suited to imbalanced samples, although, like Random Forest, it makes it easy to obtain rules from large datasets. Random Tree gives high accuracy but poor performance in terms of the speed to build the model. The performance of ID3, J48, NBTree, REPTree, and Decision Stump is therefore comparable. Decision Stump gives the highest speed but the lowest accuracy, and like NBTree it generates only a single tree, which cannot support the knowledge-based classification. Thus, considering both accuracy and speed, ID3 is the best choice.
4.1.2 Method evaluation
To validate the method of the automatic assignment function, the testing data were evaluated using the default 10-fold cross-validation within the WEKA platform. The testing data consist of 34% of the sample dataset, 4,910 cases. In addition, the IT experts who participated in the experiments also validated the results. The results show that the assignment accuracy was 93.06% of the cases, which indicates that the ID3-based method is well suited for the model of the decision support system for automatic resolver group assignment. The details of the results generated by WEKA machine learning are shown in Appendix A, A-3.

4.2 The Results of Design of Experiment
4.2.1 Design of Experiment and Analysis
Design of experiment (DOE) and optimization techniques were applied when executing
the simulation models of both the current Typical IT service desk and the KMRCA IT
service desk configurations and comparing their results.
The experiments include the study of three factors, which are often used to study
the performance of the process and the system [65]. The objective of the
experimental design is to determine which factors are most influential on the
response of the system. A 2^3 full-factorial experimental design was used to
identify the effects of three factors of interest on eight dependent variables.
Each factor is set at two levels, so eight treatment combinations are run in the
2^3 design. Screening experiments were performed to select the key factors
affecting a response.
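For reference, with the factors coded as -1 (low) and +1 (high), the 2^3 full-factorial design corresponds to the standard response model below; this is the textbook form (see, e.g., [65]) rather than an equation taken from the thesis:

y = \beta_0 + \beta_A x_A + \beta_B x_B + \beta_C x_C + \beta_{AB} x_A x_B + \beta_{AC} x_A x_C + \beta_{BC} x_B x_C + \beta_{ABC} x_A x_B x_C + \varepsilon, \qquad x_A, x_B, x_C \in \{-1, +1\}

Each of the eight treatment combinations fixes one setting of (x_A, x_B, x_C), and the replications of each run provide the error estimate used in the ANOVA.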

64
4.2.2 The Key Factors and Output Variables
González [30] argued that the dependent variables are performance variables
tracked by the service desk, which are common performance measurements. The three
factors are as follows:
(a) Factor A: Time to type incident information and search for the relevant
knowledge in the KMRCA system (minutes).
(b) Factor B: Time to resolve an incident using the KMRCA system (minutes).
(c) Factor C: Time to add new information into the KMRCA system (minutes).
In addition, the dependent output variables are as follows:
O1: Throughput, the total number of calls resolved in a period of time
O2: Time in resolving incidents of Severity 1 (minutes)
O3: Time in resolving incidents of Severity 2 (minutes)
O4: Time in resolving incidents of Severity 3 (minutes)
O5: Time in resolving incidents of Severity 4 (minutes)
O6: Number of incident calls in the AMS queue
O7: Number of incident calls in the EOS queue
O8: Number of incident calls in the NWS queue
The factor values were calculated from the average time consumed by the five
IT service desk staff who used the KMRCA IT service desk system in searching,
resolving, and storing resolutions. In addition, the IT service desk manager, as an
IT expert, confirmed the results. Table 4-3 shows the assigned factor values for the
two levels.

TABLE 4-3 Assigned Factor Values for Two-Level
Factor Low (minutes) High (minutes)
A 0.8 1.2
B TRIA(1.0, 1.6, 3.3) TRIA(2.0, 3.0, 4.8)
C 1.5 2.4
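Factor B is given as a triangular distribution, TRIA(minimum, mode, maximum), which is the input form used by the Arena simulation package [59]. The sketch below is a minimal, self-contained illustration of inverse-transform sampling from such a distribution; the class and method names are hypothetical and are not part of the simulation model.

import java.util.Random;

public class Tria {
    /** Draws one sample from a triangular distribution with minimum a,
     *  mode m, and maximum b (a <= m <= b), by inverting the CDF. */
    static double sample(Random rng, double a, double m, double b) {
        double u = rng.nextDouble();
        double fm = (m - a) / (b - a);                    // CDF value at the mode
        if (u < fm) {
            return a + Math.sqrt(u * (b - a) * (m - a));  // left branch of the triangle
        }
        return b - Math.sqrt((1.0 - u) * (b - a) * (b - m));  // right branch
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        // Low level of Factor B: TRIA(1.0, 1.6, 3.3) minutes.
        for (int i = 0; i < 5; i++) {
            System.out.printf("%.2f%n", sample(rng, 1.0, 1.6, 3.3));
        }
    }
}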


However, a different output variable is needed for each incident severity, since
incidents of different severities follow different paths through the IT service
desk. The analysis of variance (ANOVA) for the full-factorial design is performed to
test whether the main effects or interaction parameters are equal to zero. In the
statistical analysis, factors with a p-value lower than 0.05 are considered
important factors that significantly influence the results.


65
The ANOVA shows that the dependent variable throughput (O1) and the average time in
resolving incidents of Severity 3 (O4) are significantly influenced by the key
factors, as their p-values are lower than 0.05, whereas the other dependent
variables have no factors that affect them significantly, since their p-values are
greater than 0.05 in all cases. Thus, the study focused on five variables:
throughput and the average times in resolving incidents of Severities 1, 2, 3,
and 4. Table 4-4 shows the 2^3 full-factorial design of experiment (DOE) for the
throughput response (O1). The detailed results are shown in Appendix C, C-3 and C-4.

TABLE 4-4 2^3 Full-Factorial Design of DOE for Responses Y of O1
Run Order   A   B   C   Yrep 1   Yrep 2   Yrep 3   Yrep 4
(Yrep = replicated throughput, no. of calls per time period)
1 - - - 3585 3628 3585 3558
2 + - - 3585 3626 3585 3558
3 - + - 3584 3616 3584 3556
4 + + - 3584 3615 3584 3556
5 - - + 3584 3624 3585 3558
6 + - + 3584 3620 3584 3556
7 - + + 3584 3581 3583 3555
8 + + + 3533 3487 3513 3529

Table 4-5 shows the coded design matrix for throughput (O1).

TABLE 4-5 Coded Design Matrix of O1
Run Order   A   B   C   AB   AC   BC   ABC   Ave.(Y)   SD.(Y)   Var.(Y)
1 - - - + + + - 3589.0 28.9 838.0
2 + - - - - + + 3588.5 28.1 787.0
3 - + - - + - + 3585.0 24.5 601.3
4 + + - + - - - 3584.8 24.1 580.9
5 - - + + - - + 3587.8 27.2 470.3
6 + - + - + - - 3586.0 26.2 688.0
7 - + + - - + - 3575.8 13.9 192.9
8 + + + + + + + 3515.5 20.9 435.7

Table 4-6 summarizes the absolute values of the coefficients for the average
throughput response (O1) and the p-values of the factors and their interactions.
From Table 4-6, Factor B, Factor C, and the interaction BC have the strongest
influence on the throughput. In addition, Figure 4-1 shows the Pareto chart of
coefficients for the average response Y of O1.
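In this coded design the reported coefficient of a term is half of its estimated effect. For Factor A, for example (a standard contrast formula, not one quoted from the thesis):

\hat{\beta}_A = \frac{\bar{y}_{A=+1} - \bar{y}_{A=-1}}{2} = \frac{1}{8}\sum_{i=1}^{8} x_{A,i}\,\bar{y}_i

Using the run averages in Table 4-5, \bar{y}_{A=+1} \approx 3568.7 and \bar{y}_{A=-1} \approx 3584.4, which gives |\hat{\beta}_A| \approx 7.85 and agrees with the value 7.844 reported in Table 4-6 up to rounding of the run averages.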

66
TABLE 4-6 Absolute Value of Coefficients for Average O1 and P-Value
A B C AB AC BC ABC
Absolute of Coeff. 7.844 11.281 10.281 7.281 7.656 9.344 7.344
p-value 0.0845 0.0161 0.0268 0.1078 0.0918 0.0424 0.1050


FIGURE 4-1 Pareto of Coefficients for Average Response Y of O1


Another response is the time in resolving incidents of Severity 3. Table 4-7 shows
the absolute values of the coefficients for the average time in resolving incidents
of Severity 3 (O4); all three factors are significant for this response. The
corresponding Pareto chart of coefficients for the average time in resolving
incidents of Severity 3 (O4) is shown in Figure 4-2.

TABLE 4-7 Absolute Value of Coefficients for Average of O4 and P-Value
A B C AB AC BC ABC
Absolute of Coeff. 0.188 0.638 0.438 0.012 0.012 0.012 0.012
p-value 3e-33 6e-46 5e-42 7e-07 7e-07 7e-07 7e-07


FIGURE 4-2 Pareto of Coefficients for Average Response Y of O4



67
4.3 The Results of Performance Evaluation
One objective of the thesis is to evaluate the performance of the KMRCA IT service
desk system using a simulation study, in order to demonstrate that the KMRCA IT
service desk system resolves incidents faster than the previous Typical IT service
desk system. The research hypothesis is therefore that the new system will have a
shorter incident resolution time. A shorter incident resolution time is expected
because the knowledge management system with root cause analysis facilitates
organizational learning and enables IT service desk agents and resolver groups to
share knowledge sources to resolve incidents faster, as well as to prevent recurring
incidents. As a consequence of the reduced resolution time, the throughput should
also be higher.
The hypothesis is that the time in resolving incidents of all severities, except
for critical incidents, will be lower in the KMRCA IT service desk system than in
the previous Typical IT service desk system.
The simulation model was developed to test this hypothesis for both the Typical IT
service desk system and the KMRCA IT service desk system. A simulation enables
service desk agents to perform analyses that capture the entire interrelationship
between callers, agents, skills, and technology [66]. In this case, the simulation
research approach was adopted so that the experiments could be conducted and the
knowledge management system evaluated without interrupting the IT service desk's
daily operations. Furthermore, the simulation model helps to analyze the advantages
that can be obtained with the implementation of the knowledge management system.

4.3.1 Comparison of Test of KMRCA and Typical IT Service Desk Systems
The factors were analyzed at two levels (low, "-", and high, "+"), and their values
were assigned to the incident resolving times by severity in the simulation model;
the resulting responses are shown in Table 4-8. Four replications of each experiment
were run for 22 working days in random order, and the results were recorded for
further statistical analysis. The details of the comparison tests are shown in
Appendix C, C-5 and C-6.

68
TABLE 4-8 Comparison Tests of KMRCA and Typical IT Service Desk Systems
Variables   Observed t-value   Critical t-value   p-value
1) Throughput 22.68 3.182 0.001
2) Average Resolving Time of severity 1 -0.83 3.182 0.466
3) Average Resolving Time of severity 2 0.16 3.182 0.882
4) Average Resolving Time of severity 3 3.26 3.182 0.047
5) Average Resolving Time of severity 4 -0.40 3.182 0.716


The hypothesis is that the average time in resolving incidents for all calls except
critical calls will be lower in the KMRCA IT service desk system than in the current
Typical IT service desk system. Table 4-8 shows the observed t-value and the
critical t-value for a two-tailed test (α/2 = 0.025, 3 degrees of freedom) for each
dependent variable. As shown in Table 4-8, for Throughput and the time in resolving
incidents of Severity 3, the observed t-value is higher than the critical t-value,
which means that H0 is rejected; in other words, the means are not equal. On the
other hand, for the times in resolving incidents of Severities 1, 2, and 4, the
observed t-value is lower than the critical t-value, so H0 is not rejected, and it
is concluded that those means are equal.
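The critical value in Table 4-8, t = 3.182 with 3 degrees of freedom, is consistent with a paired comparison over the four replications. Under that reading (a standard formula, stated here as an assumption rather than quoted from the thesis), the observed statistic for each variable is

t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad n = 4,

where \bar{d} and s_d are the mean and standard deviation of the per-replication differences between the KMRCA and Typical systems, and H_0: \mu_d = 0 is rejected when |t| > t_{0.025,3} = 3.182.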
4.3.2 Comparison Output of KMRCA and Typical Service Desk Systems
Table 4-9 shows the comparison outputs of the KMRCA and Typical IT service desk
systems. The simulation of the KMRCA IT service desk system gave a 16.9% higher
throughput and a 4.8% lower average resolving time for Severity 3 incidents, but
the differences for the other variables were not significant because they failed
the t-test.

TABLE 4-9 Comparison Outputs of KMRCA and Typical IT Service Desk Systems
Variables   KMRCA IT service desk   Typical IT service desk
Throughput (no. of calls per period) 3,531 3,019
Average Resolving Time of severity 1 (min.) 2.75 1.84
Average Resolving Time of severity 2 (min.) 4.26 5.43
Average Resolving Time of severity 3 (min.) 7.11 6.77
Average Resolving Time of severity 4 (min.) 25.22 21.54


69
4.4 Summary
In this chapter, the thesis presents the results of the text mining discovery
methods for the automatic assignment function and the results of the performance
evaluation of the KMRCA IT service desk system.
For the text mining discovery methods, the goal was to discover a suitable decision
tree method based on WEKA machine learning by comparing several decision tree
methods. The ID3 decision tree proved to be the strongest algorithm. The comparison
of decision tree methods shows correctly classified instances in more than 93% of
the cases. The ID3 classifier has the best performance in terms of the speed of
building the model combined with a high classification accuracy. The model was
validated on the training dataset within the WEKA platform with 10-fold
cross-validation, and the accuracy of the model was 93.06% of the cases.
For the performance evaluation of the KMRCA IT service desk system, a computer
simulation was used to quantitatively compare the current Typical IT Service Desk
and the proposed KMRCA IT Service Desk systems. The simulation study showed an
almost 17% increase in throughput and a 4.8% decrease in the average time in
resolving incidents of Severity 3. For the average time in resolving incidents of
Severities 1, 2, and 4, the t-tests failed, so no statistically significant
difference could be concluded with confidence for critical, high, and low priority
incidents. The improvements are significant and justify implementing the knowledge
management system with root cause analysis for moderate-priority incidents
(Severity 3). The design of experiment can also be used to specify the requirements
of the knowledge management system. Furthermore, an advantage of the simulation is
that the study can be performed without interrupting the daily IT service desk
operations.














CHAPTER 5
CONCLUSION

This chapter concludes the experimental results from evaluating performance and
comparing methods, and discusses the advantages of the proposed framework. It also
suggests ways to improve the system, which are proposed as further work.

5.1 Conclusion
This thesis makes three contributions. Firstly, it proposes a framework of a
knowledge management system with root cause analysis, called the KMRCA IT service
desk system. Secondly, it evaluates the performance of the KMRCA IT service desk
system using a simulation study based on actual incident data and compares the
results with the previous Typical IT service desk system. Thirdly, it proposes a
text mining process to discover suitable methods, which includes data preparation,
document collection, text measurement, method selection, and method evaluation
through a classification approach.
The proposed framework of the KMRCA IT service desk system is composed of two main
functions: 1) a knowledge searching function; and 2) an automatic resolver group
assignment function. The performance of the KMRCA IT service desk system was
evaluated in terms of the speed of resolving incidents. The experimental results
indicated that the KMRCA IT service desk approach significantly enhances the
performance of the Typical IT service desk system by giving higher throughput and
reducing the time in resolving incidents. In the study, a computer simulation was
conducted to compare the Typical IT service desk system against the KMRCA IT
service desk system. The simulation study showed an almost 17% increase in
throughput and a 4.8% decrease in the average resolving time of Severity 3
incidents. For Severity 1, Severity 2, and Severity 4, the t-tests failed, so no
statistically significant difference can be concluded with confidence for critical,
high, and low priority incidents. Thus, the advantages are significant and provide
justification for implementing the knowledge management system with root cause
analysis for moderate priority incidents.

72
For the text mining discovery methods, the thesis identifies suitable methods
within WEKA machine learning by comparing several decision tree methods. The ID3
decision tree method proved to be the strongest algorithm. The comparison of
decision tree methods shows correctly classified instances in more than 93% of the
cases. In addition, the ID3 classifier has the better performance in terms of the
speed of building the model, while the size of the tree does not affect the
classification accuracy. The thesis therefore proposes an ID3-based model for
automatic resolver group assignment for IT service desk outsourcing in the bank.
The comprehensibility of the ID3 decision tree indicates the appropriate resolver
group to assign to each type of incident. The method was validated on the training
dataset within the WEKA platform with 10-fold cross-validation, and the correctness
of the model was 93.03% of the cases. The experimental results indicate that ID3,
in terms of generated tree rules and speed, is the optimal method for the automatic
resolver assignment model; it would significantly increase productivity through
more correct assignments and consequently decrease the reassignment turnaround
time. Furthermore, the rules resulting from rule generation from the decision tree
can be kept in a knowledge database in order to support and assist with future
incident resolver assignments.

5.2 Discussion
The simulation output shows that the KMRCA IT service desk system yielded 17%
higher throughput, but the t-test failed at the critical and high priority levels.
The resolving time at those levels is quite limited, which makes IT service desk
agents assign the incident urgently to a resolver group without using the knowledge
management system. For Severity 4, ample time is available for resolving these low
priority incidents, so the agent leaves such an incident until a resolver is
available to resolve it. Thus, the KMRCA IT service desk system is not designed to
support those severities. However, the throughput can be improved by training the
staff before they use the KMRCA system, so that the staff's skill reduces the time
in resolving incidents more than without training.
Although the thesis showed that knowledge management with root cause analysis is
able to enhance IT service desk outsourcing in the banking business, there are
several ways to continue improving the system. Firstly, the IT service desk system
should use automatic resolver group assignment, because manual assignment may lead
to mistakes when agents select a resolver or group to deal with the incident by
hand. When IT service desk agents receive critical incidents that urgently require
resolution, they often assign them immediately to the relevant resolver group
without using the knowledge management system. Critical incident tickets account
for less than one percent of the total, but they have a significant impact on the
whole bank's business processes. In addition, the specification of the knowledge
management system can be defined from the experimental design using the three
factors, which represent the time consumed when agents use the system.

5.3 Future Work
A remaining issue is that assigning one ticket to the most suitable resolver does
not mean the incident ticket is completely closed, since some incidents may require
more than one resolver. For example, when an ATM breaks down and customers cannot
withdraw their money, the incident may be caused by several failures, such as
applications, networks, and the electrical power supply, which involve many
concerned parties. Thus, we will improve the model by focusing on multi-resolver
group assignments.
Another improvement of the IT service desk is to search for relevant knowledge
automatically by using text mining to transform search into knowledge discovery, in
which the process extracts keywords and then proceeds to discover the relevant
knowledge. Although search engines can help find relevant documents, a newer
technology goes beyond simple document retrieval. Text mining makes it possible to
discover new knowledge in the form of trends, anomalies, relationships, and
patterns that span multiple knowledge collections. By extending the way text
databases can be explored, text mining can contribute valuable content analysis and
decision support to the existing knowledge in the organization.


REFERENCES

1. Nonaka, I. and Takeuchi, H. The Knowledge-Creating Company. New York :
Oxford Press, 1995.
2. Allee, V. The Knowledge Evolution: Expanding Organizational Intelligence.
New York : Butterworth-Heinemann, 1997.
3. Alavi, M. and Leidner, D. E. “Knowledge Management Systems: Emerging
Views and Practices From The Field." Proceedings of the 32nd Hawaii
International Conference on System Sciences, IEEE Computer Society (1999) : 239.
4. Davenport, T. H. and Prusak, L. Working Knowledge: How Organizations
Manage What They Know. Boston, Massachusetts : Harvard Business
School Publishing, 2000.
5. Grote, M. H. and Täube, F. A. “When Outsourcing is not an Option: International
Relocation of Investment Bank Research - Or isn't it?” Journal of
International Management. 1-13(2007) : 57-77.
6. Mahnke, V., Overby, M. and Vang, J. “Strategic Outsourcing of IT Services:
Theoretical Stocktaking and Empirical Challenges.” Industry and Innovation.
2-12(2005) : 205–253.
7. Behr, K., Castner, G. and Kim, G. The Value, Effectiveness, Efficiency, and
Security of IT Controls: An Empirical Analysis. University of Oregon, 2004.
8. Forte, D. “Security Standardization in Incident Management: the ITIL Approach.”
Network Security. 1 (2007, January) : 14-16.
9. Phomasakha, P. and Meesad, P. “Knowledge Management System with Root
Cause Analysis for IT Service Desk in Banking Business.” Proceedings of
the 2007 Electrical Engineering/Electronics, Computer, Telecommunications
and Information Technology (ECTI) International Conference, 2(2007), Mae
Fah Luang University, Chiang Rai, Thailand, (2007, May 9-12) : 1209-1212.
10. Cleveland, B. and Mayben, J. Call Center Management on Fast Forward: Succeeding
in Today's Dynamic Inbound Environment. Maryland : Call Center Press, 1997.
11. Anton, J. and Gusting, D. Call Center Benchmarking: How Good Is Good
Enough. Indiana : Purdue University Press, 2000.

76
12. Dawson, K. The Complete Guide to Starting, Running, and Improving Your Call
Center. CMP Books, New York : Focal Press, 1999.
13. Sandborn, S. “Structuring the service desk.” Information World. 23-52(2001) :
28
14. Zhang, J. and Faerman, S. R. “Divergent Approaches and Converging Views :
Drawing Sensible Linkages between Knowledge Management and
Organizational Learning." Proceedings of the 36th Hawaii International
Conference on System Sciences, 2003.
15. Drucker, P. F. The Post-Capitalist Executive Managing in a Time of Great
Change. New York : Penguin, 1995.
16. Suzuki, Y. and Toyama, R. “A Self-evaluation Method of SECI Process in
Knowledge Management.” IEEE International Engineering Management
Conference. 2(2004) : 491- 494.
17. Chen, F. and Burstein, F. “A Dynamic Model of Knowledge Management for
Higher Education Development." Proceedings of the 7th International
Conference on Information Technology Based Higher Education and
Training, 2006 : 173-180.
18. Mertins, K., Heisig, P. and Vorbeck, J. Knowledge Management: Best Practices
in Europe. Berlin : Springer-Verlag, 2001.
19. Meso, P. and Smith, R. “A Resource-based View of Organizational Knowledge
Management Systems.” Journal of Knowledge Management. 3-4(2000) :
224–234.
20. Satyadas, A. and Harigopal, U. “Knowledge Management Tutorial: An Editorial
Overview.” IEEE Transactions on Systems, Man, and Cybernetics-Part C :
Applications and Reviews. 31-4(2001) : 429–437.
21. Sveiby, K.E. “The New Organizational Wealth. Managing and Measuring
Knowledge-Based Assets.” San Francisco : Berrett Koehler Publisher, 1997.
22. Holsapple, C.W. and Joshi K.D. “Organizational knowledge resources.”
Decision Support Systems. 31(2001) : 39–54.
23. Taylor, M.J., Gresty, D. and Askwith, R. “Knowledge for Network Support.”
Information and Software Technology. 43(2001) : 469–475.

77
24. Marcella, R. and Middleton, I. “The Role of the Help Desk in the Strategic
Management of Information Systems.” OCLC Systems and Services. 12-4
(1996) : 4–19.
25. Gray, P.H. “A Problem-solving Perspective on Knowledge Management
Processes.” Decision Support Systems. 31(2001) : 87–102.
26. Frey, N., Matlus, R. and Maure, W. “A Guide to Successful SLA Development
and Management.” Gartner Group Research Strategic Analysis Report, 2000,
October.
27. Anderson, B. and Fagerhaug, T. Root Cause Analysis: Simplified Tools and
Techniques. Milwaukee : ASQ Quality Press, 2000.
28. Doggett, A. M. “Selected Collaborative Problem-Solving Method for Industry.”
Selected paper (2004). Humboldt State University, 2004.
29. Wilson, P. F., Dell, L. D. and Anderson, G. F. Root Cause Analysis : A Tool for
Total Quality Management. Milwaukee : ASQ Quality Press, 1993.
30. González, L. M., Giachetti, R. E. and Ramirez, G. “Knowledge Management-
centric Help Desk : Specification and Performance Evaluation,” Elsevier,
Decision Support Systems. 40(2005) : 389– 405.
31. Weidl, G., Madsen, A. L. and Israelson, S. “Applications of Object-oriented
Bayesian Networks for Condition Monitoring, Root Cause Analysis and
Decision Support on Operation of Complex Continuous Processes.”
Elsevier, Computer and Chemical Engineering. 9-29(2005, 15 August) :
1996-2009.
32. Aamodt, A. A Knowledge Intensive Approach to Problem Solving and Sustained
Learning. PhD. dissertation, University of Trondheim, Norwegian Institute
of Technology, May 1991.
33. Aamodt, A. and Plaza, E. “Case-Based Reasoning: Foundational Issues,
Methodological Variations, and System Approaches.” AI Communications.
7(1994) : 39-59.
34. Reisbeck, C. K. and Schank, R.C. Inside Case-Based Reasoning. Hillsdale,
New Jersey : Lawrence Erlbaum Associates, 1989.
35. Doyle, M., et al. “CBR Net: Smart Technology over a Network.” TCD
Technical Report, 1998, July.

78
36. Schank, R. C. Inside Case Based Reasoning. New Jersey : Erlbaum, 1989.
37. Watson, I. Applying Case-Based Reasoning : Techniques for Enterprise Systems.
San Mateo, California : Morgan Kaufmann, 1997.
38. Gentner, D. “Are Scientific Analogies Metaphors?” Problems and perspectives.
Brighton, UK : Harvester Press, 1982 : 106-132.
39. Carbonell, J. G. Derivational Analogy in PRODIGY : Automating Case
Acquisition, Storage, and Utilization. Boston : Kluwer Academic Publishers,
1993.
40. Kolodner, J. L. Case-Based Reasoning. San Mateo, California : Morgan
Kaufmann, 1993.
41. Althoff, K. -D., et al. A Review of Industrial Case-Based Reasoning Tools.
Oxford : AI Intelligence, 1995.
42. Office of Government Commerce (OGC). Service Support. ITIL Version 2
Library, UK : TSO (The Stationery Office) publisher, 2005.
43. Yang, D.-H., et al. “Developing a decision model for business process
outsourcing.” Elsevier, Computers and Operations Research, 34-12(2007) :
3769-3778.
44. Lacity, M., Willcocks, L. and Feeny, D. Sourcing Information Technology
Capability. A Decision-Making Framework. Information Management:
The Organizational Dimension, Oxford : Oxford University Press, 1996.
45. Hirschheim, R.A. and Lacity, M.C. “The myths and realities of information
technology insourcing.” Communications of the ACM. 2-43(2000) : 99-107.
46. Linder, J. C., Cole, M. I. and Jacobson, A. L. “Business transformation through
outsourcing.” Emerald Strategy and Leadership. 30-4 (2002) : 23-28.
47. Sun, Y. H., et al. “A hybrid knowledge and model approach for reviewer
assignment.” Elsevier, Expert Systems with Applications. 34-2(2008) :
817-824.
48. Fan, Z.-P., et al. “Decision support for proposal grouping: A hybrid approach
using knowledge rule and genetic algorithm.” Elsevier, Expert Systems with
Applications, 2007.

79
49. Li, J.-Q., Borenstein, D. and Mirchandani, P. B. “A decision support system for
the single-depot vehicle rescheduling problem.” Elsevier, Computers &
Operations Research. 34-4(2007) : 1008-1032.
50. Lewis, M. W., Lewis, K. R. and White, B. J. “Guided design search in the
interval-bounded sailor assignment problem.” Elsevier, Computers &
Operations Research. 33-6(2006) : 1664-1680.
51. Jiménez, A., Ríos-Insua, S. and Mateos, A. “A decision support system for
multi-attribute utility evaluation based on imprecise assignments.” Elsevier,
Decision Support Systems. 36- (2003) : 65-79.
52. Lazarov, A. and Shoval, P. “A rule-based system for automatic assignment of
technicians to service faults.” Elsevier, Decision Support Systems.
32(2002) : 343-360.
53. Zhao, Y. and Zhang, Y. “Comparison of decision tree model of finding active
objects.” Advances in Space Research, 2007.
54. Witten, I. and Frank, E. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations. 2nd ed. San Mateo, California :
Morgan Kaufmann, c2005.
55. Quinlan, J. R. Induction of Decision Trees, Readings in Machine Learning.
Morgan Kaufmann, 1990 : 81-106.
56. Mitchell, T. M. Machine Learning. McGraw-Hill, 1997.
57. Kohavi, R. “A study of cross-validation and bootstrap for accuracy estimation
and model selection.” Proceedings of the Fourteenth International Joint
Conference on Artificial Intelligence, 2-12(1995) : 1137–1143.
58. Breiman, L. “Random Forests.” Springer, Machine Learning, 45-1(2001) : 5-32.
59. Kelton, W. D., Sadowski, R. P. and Sturrock, D. T. Simulation with Arena. 3rd
ed. Series in Industrial Engineering and Management Science. Singapore :
McGraw- Hill, c2003.
60. Pyle, D. Data Preparation for Data Mining. San Mateo, California : Morgan
Kaufmann, 1999.
61. Miller, T.W. Data Text Mining: A business applications approach. Prentice Hall,
2005.

80
62. Riloff, E. “Little Words Can Make a Big Difference for Text Classification.”
Proceedings of the 18th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval. 1995 : 130-136.
63. Liu Y., et al. “Handling of Imbalanced Data in Text Classification: Category-
Based Term Weights.” Natural Language Processing and Text Mining,
London : Springer, (2007, March 6) : 171-192.
64. Feldman, R. and Sanger, J. The Text Mining Handbook : Advanced Approaches
in Analysing Unstructured Data. New York : Cambridge University Press,
2007.
65. Law, A.M. and Kelton, W.D. Simulation Modeling and Analysis. 3rd ed.
Singapore : McGraw-Hill Press, c2000.
66. Miller, K. and Bapat, V. “Case Study : Simulation of the Call Center
Environment for Comparing Competing Call Routing Technologies for
Business Case ROI Projection.” IEEE Winter Simulation Conference
Proceedings, Washington DC : IEEE Press, 1999 : 1694–1700.















APPENDIX A

A SAMPLE OF INCIDENT DATASET, SEVERAL RESULTS FOR ANALYSIS OF
TEXT MINING DISCOVERY METHODS, AND METHOD VALIDATION















82
A-1 A Sample of Incident Dataset
Figure A-1 shows a sample of incident data in a spreadsheet (Excel).

No. | Incident Id. | Open Date | Open Time | Resolve Date | Resolve Time | Incident Code | Assigned Gr. | Severity | System Component | Incident Descriptions | Resolution Results
[The figure contains 34 sample incident records opened between 1 and 4 April 2006; the incident descriptions and resolution results are recorded mostly in Thai and are truncated in this extract.]


FIGURE A-1 A Sample of Incident Data

A-2 Pareto histogram of keywords extracted from the incident dataset
Figure A-2 shows a Pareto histogram of keywords extracted from the incident
dataset


[Pareto chart: keyword frequencies from 0 to about 4,000 occurrences with a cumulative-percentage curve from 0% to 100%; the most frequent keywords are Printer, ATM, Personal Comp., WAN, and WIN-2000.]

FIGURE A-2 A Pareto histogram of keywords extracted from the incident dataset



83
A-3 Evaluation Results of Id3 Decision Tree Method
The evaluation results of the Id3 decision tree method are based on the testing
set of 4,909 records.
=== Run information ===
Scheme: weka.classifiers.trees.Id3
Relation: ID3- based Automatic Resolver Group Assignment
Instances: 4909
Attributes:
Anti-Virus
App-NonPC K-Cyber-Banking
ATM K-P-Gateway
Bank-Reference LI
Bar-Code LMS-Report-Mgn.
Bill-Payment LoanReview(Host
BL-Entry Lotus-Notes-DB
Br-App-Re LotusNoteCitrix
Branch LotusNotesClien
Branch-App. LotusNotesServe
Browser LPM
CA Magnetic-Strip
Call-Center MFA-MRA
CardLink MIS
Cash-Connect MS-Office-2OOO
CashAdmin.on-We MS-Office-97
CAT NAV-(PC)
CDM Notebook
CIPS OS/2
CIS PA
CMAS PeopleSoft
CTD-(E-Report) Personal-Comp.
CTR Print-Server
Current Printer
Data-Warehouse Push-Info.DelSy
DCS ROSS
DMS SAFE
e-Booth Saving-Account
EBPP Scanner
EDW Server
FCD Share-Server
FICS SQ
Fin.Accept.Cer. SSMM
FX-on-web Statement
Home-Banking Transact-BP
Host-on-Demand Transact-CC&CL
HQ Update-Passbook
IB Vlink
IBM-EOS WAN
Info-Centrix-CT WIN-2000
Internet-Bankin WIN-98
IVR WIN-NT
KBANKNET WIN-XP
K-BizNet Electrical-Supply

Assign-Group



Test mode: 10-fold cross-validation


=== Classifier model (full training set) ===
Id3
ATM = 0

84
| WAN = 0
| | Electrical-Supply = 0
| | | Update-Passbook = 0
| | | | Printer = 0
| | | | | Data-Warehouse = 0
| | | | | | LotusNoteCitrix = 0
| | | | | | | Personal-Comp. = 0
| | | | | | | | WIN-2000 = 0
| | | | | | | | | Branch = 0
| | | | | | | | | | CDM = 0
| | | | | | | | | | | Internet-Bankin = 0
| | | | | | | | | | | | LotusNotesClien = 0
| | | | | | | | | | | | | K-Cyber-Banking = 0
| | | | | | | | | | | | | | CTR = 0
| | | | | | | | | | | | | | | CardLink = 0
| | | | | | | | | | | | | | | | Home-Banking = 0
| | | | | | | | | | | | | | | | | IB = 0
| | | | | | | | | | | | | | | | | | WIN-NT = 0
| | | | | | | | | | | | | | | | | | | HQ = 0
| | | | | | | | | | | | | | | | | | | | Server = 0
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 0
| | | | | | | | | | | | | | | | | | | | | | Browser = 0
| | | | | | | | | | | | | | | | | | | | | | | CAT = 0
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2OOO = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 0: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 1: NWS

85
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2OOO = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | CAT = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | Browser = 1: NWS
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 1: NWS
| | | | | | | | | | | | | | | | | | | | Server = 1
| | | | | | | | | | | | | | | | | | | | | Print-Server = 0
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 0: NWS
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | | | Print-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | HQ = 1: NWS
| | | | | | | | | | | | | | | | | | WIN-NT = 1: NWS
| | | | | | | | | | | | | | | | | IB = 1: EOS
| | | | | | | | | | | | | | | | Home-Banking = 1: EOS
| | | | | | | | | | | | | | | CardLink = 1: VEN
| | | | | | | | | | | | | | CTR = 1: EOS
| | | | | | | | | | | | | K-Cyber-Banking = 1: EOS
| | | | | | | | | | | | LotusNotesClien = 1: NWS
| | | | | | | | | | | Internet-Bankin = 1: EOS
| | | | | | | | | | CDM = 1: OS-EC
| | | | | | | | | Branch = 1
| | | | | | | | | | Branch-App. = 0: NWS
| | | | | | | | | | Branch-App. = 1: IE-AMS
| | | | | | | | WIN-2000 = 1: NWS
| | | | | | | Personal-Comp. = 1: NWS
| | | | | | LotusNoteCitrix = 1: NWS
| | | | | Data-Warehouse = 1: IE-AMS
| | | | Printer = 1: NWS
| | | Update-Passbook = 1: VEN
| | Electrical-Supply = 1: OS-EC
| WAN = 1: VEN

86
ATM = 1: OS-EC
Time taken to build model: 1.57 seconds
=== Stratified cross-validation ===
=== Summary ===

Correctly Classified Instances 4567 93.0332 %
Incorrectly Classified Instances 342 6.9668 %
Kappa statistic 0.8668
K&B Relative Info Score 404071.9478 %
K&B Information Score 6120.7864 bits 1.2468 bits/instance
Class complexity | order 0 7425.008 bits 1.5125 bits/instance
Class complexity | scheme 11293.8523 bits 2.3006 bits/instance
Complexity improvement (Sf) -3868.8443 bits -0.7881 bits/instance
Mean absolute error 0.0456
Root mean squared error 0.1526
Relative absolute error 20.9496 %
Root relative squared error 46.2673 %
Total Number of Instances 4909

=== Detailed Accuracy By Class ===

TP Rate FP Rate Precision Recall F-Measure Class
0.324 0.003 0.759 0.324 0.454 EOS
0.866 0.003 0.88 0.866 0.873 IE-AMS
0.99 0.129 0.93 0.99 0.959 NWS
0.884 0.01 0.961 0.884 0.921 OS-EC
0.837 0.01 0.91 0.837 0.872 VEN

=== Confusion Matrix ===

a b c d e <-- classified as
44 3 89 0 0 | a = EOS
10 110 7 0 0 | b = IE-AMS
0 9 3074 0 21 | c = NWS
4 3 89 903 22 | d = OS-EC
0 0 48 37 436 | e = VEN
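As a check on the listing above, the per-class precision and recall follow directly from the confusion matrix; for example, for class EOS (its first row and first column):

Precision(EOS) = TP / (TP + FP) = 44 / (44 + 10 + 0 + 4 + 0) = 44 / 58 ≈ 0.759
Recall(EOS) = TP / (TP + FN) = 44 / (44 + 3 + 89 + 0 + 0) = 44 / 136 ≈ 0.324

which match the Precision and TP Rate values reported for EOS in the detailed accuracy table.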

87
A-4 An Extended Part of ID3 Decision Tree Results
Figure A-3 shows an extended part of ID3 decision tree results.



FIGURE A-3 An Extended Part of ID3 Decision Tree

A-5 A Sample of ID3-Based Generating Rules
Figure A-4 shows a sample of ID3-based generating rules.

Attributes (keywords KW1-KW12)                                                    Class
KW1  KW2  KW3       KW4       KW5      KW6          KW7        KW8      KW9      KW10    KW11         KW12  ---  Assign Group
ATM  WAN  E-Supply  Passbook  Printer  D-Warehouse  LotusNote  P-Comp.  Win2000  Branch  Branch-App.  CDM
1 0 0 0 0 0 0 0 0 0 0 0 --- OS-EC
0 1 0 0 0 0 0 0 0 0 0 0 --- VEN
0 0 1 0 0 0 0 0 0 0 0 0 --- OS-EC
0 0 0 1 0 0 0 0 0 0 0 0 --- VEN
0 0 0 0 1 0 0 0 0 0 0 0 --- NWS
0 0 0 0 0 1 0 0 0 0 0 0 --- IE-AMS
0 0 0 0 0 0 1 0 0 0 0 0 --- NWS
0 0 0 0 0 0 0 1 0 0 0 0 --- NWS
0 0 0 0 0 0 0 0 1 0 0 0 --- NWS
0 0 0 0 0 0 0 0 0 1 1 0 --- IE-AMS
0 0 0 0 0 0 0 0 0 1 0 0 --- NWS
0 0 0 0 0 0 0 0 0 0 0 1 --- OS-EC
--- --- --- --- --- --- --- --- --- --- --- --- --- ---


FIGURE A-4 A Sample of ID3-Based Pattern Kept in Knowledge Database
The IF-THEN rules can be presented as follows:
1. IF keyword (KW) = ‘ATM’ THEN Assigned Group is OS-EC ELSE Go to 2,
2. IF keyword (KW) = ‘WAN’ THEN Assigned Group is VEN ELSE Go to 3,
………………………
10. IF keyword (KW) = ‘Branch’ AND ‘Branch-App’ THEN Assigned Group is
IE-AMS ELSE Go to 11,
11. IF keyword (KW) = ‘Branch’ THEN Assigned Group is NWS ELSE Go to 12,
12. IF keyword (KW) = ‘CDM’ THEN Assigned Group is OS-EC ELSE Go to 13,
………………………
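As an illustration only, the ordered IF-THEN rules above could be applied to a new ticket as in the following sketch. The class and method names are hypothetical; a subset of the keyword-to-group pairs is taken from Figure A-4, and the first matching rule wins.

import java.util.*;

public class RuleBasedAssigner {
    /** One rule: if the ticket contains all of the given keywords, assign this group. */
    static class Rule {
        final Set<String> keywords;
        final String group;
        Rule(String group, String... keywords) {
            this.keywords = new HashSet<String>(Arrays.asList(keywords));
            this.group = group;
        }
    }

    // Ordered rules taken from the sample in Figure A-4 (checked top to bottom).
    static final List<Rule> RULES = Arrays.asList(
            new Rule("OS-EC", "ATM"),
            new Rule("VEN", "WAN"),
            new Rule("OS-EC", "E-Supply"),
            new Rule("VEN", "Passbook"),
            new Rule("NWS", "Printer"),
            new Rule("IE-AMS", "D-Warehouse"),
            new Rule("IE-AMS", "Branch", "Branch-App"),
            new Rule("NWS", "Branch"),
            new Rule("OS-EC", "CDM"));

    static String assign(Set<String> ticketKeywords) {
        for (Rule r : RULES) {
            if (ticketKeywords.containsAll(r.keywords)) {
                return r.group;
            }
        }
        return "UNASSIGNED";   // fall back to manual assignment by the agent
    }

    public static void main(String[] args) {
        Set<String> ticket = new HashSet<String>(Arrays.asList("Branch", "Branch-App"));
        System.out.println(assign(ticket));   // prints IE-AMS
    }
}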

ATM = 0
| WAN = 0
| | Electrical-Supply = 0
| | | Update-Passbook = 0
| | | | Printer = 0
| | | | | Data-Warehouse = 0
| | | | | | LotusNoteCitrix = 0
| | | | | | | Personal-Comp. = 0
| | | | | | | | WIN-2000 = 0
| | | | | | | | | Branch = 0
| | | | | | | | | | CDM = 0
| | | | | | | | | | | Internet-Bankin = 0
| | | | | | | | | | | | LotusNotesClien = 0
| | | | | | | | | | | | | K-Cyber-Banking = 0
| | | | | | | | | | | | | | CTR = 0
| | | | | | | | | | | | | | | CardLink = 0
| | | | | | | | | | | | | | | | Home-Banking = 0
| | | | | | | | | | | | | | | | | IB = 0
| | | | | | | | | | | | | | | | | | WIN-NT = 0
| | | | | | | | | | | | | | | | | | | HQ = 0
| | | | | | | | | | | | | | | | | | | | Server = 0
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 0
| | | | | | | | | | | | | | | | | | | | | | Browser = 0
| | | | | | | | | | | | | | | | | | | | | | | CAT = 0
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2000 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 0: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2000 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | CAT = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | Browser = 1: NWS
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 1: NWS
| | | | | | | | | | | | | | | | | | | | Server = 1
| | | | | | | | | | | | | | | | | | | | | Print-Server = 0
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 0: NWS
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | | | Print-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | HQ = 1: NWS
| | | | | | | | | | | | | | | | | | WIN-NT = 1: NWS
| | | | | | | | | | | | | | | | | IB = 1: EOS
| | | | | | | | | | | | | | | | Home-Banking = 1: EOS
| | | | | | | | | | | | | | | CardLink = 1: VEN
| | | | | | | | | | | | | | CTR = 1: EOS
| | | | | | | | | | | | | K-Cyber-Banking = 1: EOS
| | | | | | | | | | | | LotusNotesClien = 1: NWS
| | | | | | | | | | | Internet-Bankin = 1: EOS
| | | | | | | | | | CDM = 1: OS-EC
| | | | | | | | | Branch = 1
| | | | | | | | | | Branch-App. = 0: NWS
| | | | | | | | | | Branch-App. = 1: IE-AMS
| | | | | | | | WIN-2000 = 1: NWS
| | | | | | | Personal-Comp. = 1: NWS
| | | | | | LotusNoteCitrix = 1: NWS
| | | | | Data-Warehouse = 1: IE-AMS
| | | | Printer = 1: NWS
| | | Update-Passbook = 1: VEN
| | Electrical-Supply = 1: OS-EC
| WAN = 1: VEN
ATM = 1: OS-EC
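
The listing above is the raw ID3 output. As a hedged sketch of the rule generation step (the nested structure and names below are hypothetical and are not the system's code), the fragment shows how a small fragment of such a tree could be flattened into IF-THEN rules suitable for storage in the knowledge database.

# Illustrative sketch: a tiny fragment of the tree above, represented as a
# nested dict, is flattened into IF-THEN rules of the form shown in Appendix A.
# The structure and names are hypothetical.
# Each node: {"attribute": name, 0: subtree-or-leaf, 1: subtree-or-leaf}
TREE = {
    "attribute": "ATM",
    1: "OS-EC",
    0: {
        "attribute": "WAN",
        1: "VEN",
        0: {
            "attribute": "Branch",
            1: {"attribute": "Branch-App", 1: "IE-AMS", 0: "NWS"},
            0: "NWS",
        },
    },
}

def tree_to_rules(node, conditions=()):
    """Depth-first walk; each root-to-leaf path becomes one IF-THEN rule."""
    if isinstance(node, str):                       # leaf = resolver group
        cond = " AND ".join(f"{a} = {v}" for a, v in conditions) or "TRUE"
        return [f"IF {cond} THEN Assigned Group is {node}"]
    rules = []
    for value in (1, 0):                            # test the '= 1' branch first
        child = node[value]
        rules += tree_to_rules(child, conditions + ((node["attribute"], value),))
    return rules

if __name__ == "__main__":
    for rule in tree_to_rules(TREE):
        print(rule)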













APPENDIX B

ITIL-BASED KMRCA IT SERVICE DESK PROCESS



B-1 ITIL-Based Incident Management Process
An incident is any event that deviates from the normal operation of a service and
that causes, or may cause, an interruption to, or a reduction in, the quality of that
service. The goal of Incident Management is to restore normal service operation
as quickly as possible. The cause of the incident may be discovered during incident
analysis and resolution. If it is not, and if further investigation is
justified with respect to cost and effort, the Problem Management process is
invoked and a problem record is raised. That process defines activities to investigate
the problem, which is defined as the unknown underlying cause of one or more
incidents. The status of the problem changes to known error when the root
cause is known and a workaround or a permanent resolution has been identified.
The scope of the Incident Management process includes:
(a) Opening an incident record
(b) Updating the incident record throughout the process to reflect its status
(c) Assigning the incident to an incident resolver
(d) Analyzing the incident and performing incident determination
(e) Implementing a workaround or resolution for the incident to perform
recovery of the service
(f) Monitoring incident (request) queues to ensure that all incidents are
resolved within committed service levels and reprioritizing or reassigning or escalating
as necessary.
Note that during the implementation of the workaround or resolution for the
incident, the Incident Management process is not directly responsible for
implementing the solution, but it monitors and records the progress and results
of the solution implementation.
(g) Updating the incident knowledge database to assist with future incident and
problem investigation and diagnosis
(h) Closing the incident record
(i) Calling the Handle and Control Problems operational process where the
root cause of the incident or problem has not been identified.
Figure B-1 shows the Incident Management Process Flow.


FIGURE B-1 IT Incident Management Process Flow

Narrative of Incident Management Process
Steps 1 through 7 are performed by the Bank’s help desk, called FLS (first level
support); Steps 8 through 31 are performed by the IT service desk outsourcing,
called SLS (second level support); and the remaining steps are performed by
Resolver Groups, called TLS (third level support), as follows:
1. Open Incident Record Procedure
Refer to the Open Incident Record procedure to open an incident record with
the incident information.
1. Major Incident?
The Incident Policy defines a severity 1 incident as a major incident.
Follow the policy to determine if the incident is a major
incident.
(a) If it is ‘Yes’, proceed to Handle Major Incident Procedure.
(b) If it is ‘No’, proceed to IT Outsourcing Scope?
2. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure to assign a major incident
owner to handle all required notifications and escalations.
3. IT Outsourcing Scope?
Determine whether the incident is an IT incident and whether its description is
within the IT outsourcing scope, referring to the IT outsourcing contract.
(a) If it is ‘Yes’, proceed to Assign Incident to SLS Resolver.
(b) If it is ‘No’, proceed to Assign Incident to Bank Resolver.
4. Assign Incident to Bank Resolver
Assign a non-IT incident to Bank resolver.
Proceed to End.
5. Assign Incident to SLS Incident Resolver
Assign an IT Incident to SLS Resolver who is responsible for resolving IT
incidents of this type.
6. Update Incident Record with Current Status
Update the incident record to indicate that the incident has been assigned to
an SLS resolver and is awaiting resolution until the incident is closed.





7. Review Incident Record For Completeness
Review the incident record to ensure that its contents are complete.
The incident information includes:
(a) Incident ID
(b) When the incident opened (date and time)
(c) Identified incident severity (1, 2, 3, or 4)
(d) Incident status (open/ assign to/ resolving steps/ close)
(e) System, component, item failure
(f) Caller, Requester (name/ location/ contact no.)
(g) Incident descriptions
(h) SLS owner (who/ when )
(i) TLS owner (who/ when)
8. IT Outsourcing Scope ?
Check if the incident is in IT outsourcing scope.
(a) If it is ‘Yes’, proceed to Additional Information Needed.
(b) If it is ‘No’, proceed to Indicate Incident Type.
9. Indicate Incident Type
If the incident was initially assigned incorrectly because of a wrong scope
and/or a wrong resolver, indicate the request type of the incident and, if
known, the details of whom the incident should most appropriately be
reassigned to, and then request reassignment.
10. Request for Reassignment
Request FLS to review the scope of the incident and reassign it according to
the provided reasons.
11. Additional Information Needed?
Determine if additional information is needed to complete the incident
record.
(a) If it is ‘Yes’, proceed to Contact Appropriate Parties to get More
Information.
(b) If it is ‘No’, proceed to Validate Initial Severity.





12. Validate Initial Severity
Refer to the severities defined by policy (severity 1 is a critical
incident, severity 2 is a high incident, severity 3 is a normal incident, and
severity 4 is a low incident) and validate the initially assigned severity
against the severity policy.
13. Contact Appropriate Parties to get More Information
Contact the most appropriate parties to get more information. Policy should
dictate how many attempts or how long the incident resolver should spend
trying to obtain additional information before this becomes an issue.
14. Required Information Obtained?
After contacting the parties, check whether the required information was obtained.
(a) If it is ‘Yes’, proceed to Update Incident Record with Any
Additional Information
(b) If it is ‘No’, proceed to Document Issue
15. Update Incident Record with Any Additional Information
Update the incident record with any additional information.
16. Document Issue
Document the issue when the required information is not received on time.
17. Perform Escalation
Handle escalations of issues associated with requests. SLS personnel
may escalate request handling at any time by notifying the next higher level
of the contact party that the issue was not resolved, and document the
unsuccessful resolution.
18. Issue Resolved?
Check if the issue is resolved.
(a) If it is ‘Yes’, proceed to Update Incident Record with Any
Additional Information
(b) If it is ‘No’, proceed to Close Incident?
19. Major Incident?
Determine whether the updated incident is a major incident based on the major
incident policy.
(a) If it is ‘Yes’, proceed to Handle Major Incident Procedure.
(b) If it is ‘No’, proceed to Perform Incident Analysis Procedure.



20. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure. A Major Incident owner
needs to be assigned who handles all required notifications and escalations
until the major incident is complete.
21. Perform Incident Analysis Procedure
Refer to the Incident Analysis procedure to gather all required information
about the incident and related incidents and to perform incident determination,
investigation and diagnosis activities.
22. TLS Required?
Determine whether a TLS resolver group is required to resolve the
assigned incident and, if so, which resolver group it should be assigned to.
In particular, compare the incident to the database of incident
records to determine if this is a repeat occurrence of a previous incident. It
may be more effective if the same resolver handles all related incidents.
(a) If it is ‘Yes’, proceed to Assign/ Reassign incident to Appropriate
Incident Resolver Group.
(b) If it is ‘No’, proceed to Attempt to Resolve Incident.
23. Attempt to Resolve Incident
Attempt to resolve the incident based on the SLS resolver’s skills and availability.
24. Knowledge-Based Required?
Determine whether the knowledge base is required to resolve the incident
by searching for similar cases and retrieving the resolutions of previous
incidents in the knowledge database.
(a) If it is ‘Yes’, proceed to Search Required Information from
Knowledge-Based.
(b) If it is ‘No’, proceed to Perform incident Determination Procedure
25. Search Required Information from Knowledge-Based
Search the knowledge database for the information required to resolve the
incident (a minimal illustrative sketch of such a search is given after this narrative).
26. Perform Incident Determination Procedure
Refer to the Perform Incident Determination procedure.






27. Close Incident?
For an actual incident, determine if the incident should be closed due to the
lack of information required to proceed with resolution of the incident.
(a) If it is ‘Yes’, proceed to Inform Requester that Incident will be Closed
(b) If it is ‘No’, proceed to Take Incident Out of SLA Criteria
28. Take Incident Out of SLA Criteria
If the incident should not be closed due to lack of information needed to
proceed with resolution of the incident, take the incident out of SLA criteria
so that it will not be included in SLA attainment reports.
Return to Contact Appropriate Parties to obtain the additional information
required to proceed with resolution of the incident.
29. Inform Requester that Incident will be Closed
If the incident should be closed due to the lack of information needed to
proceed with resolution of the incident, inform the Requester that the
incident will be closed.
30. Update Incident Record with its Close
Update the incident record to indicate that the required information could
not be obtained and that the incident will be closed.
Proceed to End.
31. Assign/ Reassign incident to Appropriate Incident Resolver Group
Determine if the result of Incident Analysis requires the incident to be
reassigned to a different Resolver Group.
(a) If it is ‘Yes’, return to Assign Incident to Incident Resolver to assign
the incident to a new Incident Resolver.
(b) If it is ‘No’, proceed to Actual Incident?
Note that the incident is assigned and/or reassigned to the most appropriate
TLS incident resolver based on skill level and availability within the TLS
Resolver Group.
32. Review for Corrective Assignment
Review the assigned incident to verify that it was routed to the correct resolver group.
33. Correct Assignment?
Determine if the review indicates that the incident was assigned
correctly.
(a) If it is ‘Yes’, proceed to Perform Incident Analysis Procedure to
analyse the incident.
(b) If it is ‘No’, proceed to Indicate Request Type and Reassignment Details


34. Indicate Request Type and Reassignment Details
If the assignment is incorrect, indicate the request type and provide
reassignment details, such as who is the appropriate resolver.
35. Request SLS for Reassignment
Request reassignment; SLS will review the request and reassign the incident.
36. Perform Incident Analysis Procedure
Refer to the Incident Analysis procedure to gather all required information
about the incident and related incidents and to perform incident
determination, investigation and diagnosis activities.
37. Knowledge-based Required?
Determine whether the knowledge base is required to obtain the required
information.
(a) If it is ‘Yes’, proceed to Search Required Information from
Knowledge-Based
(b) If it is ‘No’, proceed to Attempt to Resolve Incident
38. Search Required Information from Knowledge-Based
Search the knowledge database for the required information.
39. Attempt to Resolve Incident
Attempt to resolve the incident based on skills and availability.
40. Close Incident?
Determine whether to close the incident once incident processing is complete.
(a) If it is ‘Yes’, proceed to Close Incident Procedure
(b) If it is ‘No’, proceed to Recovery Required?
41. Recovery Required?
If the incident is an actual incident, determine if recovery from the incident
is required prior to implementation of a permanent solution.
(a) If it is ‘Yes’, proceed to Perform Incident Recovery.
(b) If it is ‘No’, proceed to Handle and Control Problems.
42. Perform Incident Recovery
If recovery of the incident is required prior to permanent resolution of the
incident, proceed with Perform Incident Recovery as follows.
(a) Review the Recovery Plan with affected parties
(b) Check if the required recovery is within entitlement
(c) Check if the service request is required

(d) Determine whether to request a change
(e) Update incident record to indicate recovery result either successful or
unsuccessful
43. Was Incident Recovered?
Determine if the Perform Incident Recovery was successful in recovering
from the incident.
(a) If it is ‘Yes’, proceed to Incident Permanently Resolved or Agreed
Workaround Applied?
(b) If it is ‘No’, proceed to Close Incident Record Procedure.
44. Incident Permanently Resolved or Agreed Workaround Applied?
Determine if the Perform Incident Recovery provided a permanent
resolution for the incident. That is, is the recovery action or bypass
acceptable as a permanent solution?
(a) If it is ‘Yes’, proceed to Add Resolution to Knowledge-Based.
(b) If it is ‘No’, proceed to Problem Management Process
Refer to the Problem Management process to develop a permanent
solution for the problem.
Note that a problem is the unknown underlying cause of one or more
incidents. The status of the problem is transformed to known error when
both the root cause is known and a temporary workaround or a permanent
resolution has been identified.
Proceed to End.
45. Add Resolution to Knowledge-Based
Add the resolutions to the knowledge database to assist with future incident
and problem investigation and diagnosis.
46. RCA Required?
Follow the policy to determine if a RCA is required for the recovered incident
for which the recovery action is acceptable as a permanent resolution.
(a) If it is ‘Yes’, proceed to Handle and Control Problems (RCA).
(b) If it is ‘No’, proceed to Close Incident Record.
47. Close Incident Record Procedure
When processing of the incident has completed either successfully or
unsuccessfully, proceed according to the Close Incident Record procedure
to close the associated incident record.
48. End
End of Incident Management Process
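
Steps 24-25 and 37-38 above rely on searching the knowledge database for similar previous incidents. The fragment below is a minimal, hypothetical sketch of such a keyword-overlap search; the stored data, field names, and scoring are assumptions for illustration, and the thesis's actual searching knowledge function is not reproduced here.

# Minimal keyword-overlap search over previously resolved incidents.
# Hypothetical data and scoring; a real service desk would use a richer
# text-mining similarity measure.
KNOWLEDGE_DB = [
    {"keywords": {"atm", "cash", "dispense"},
     "resolution": "Restart the ATM controller service and verify the cash unit."},
    {"keywords": {"branch", "printer", "passbook"},
     "resolution": "Reload the passbook printer firmware and re-run the update job."},
]

def search_similar_cases(description, top_n=3):
    """Return stored resolutions ranked by keyword overlap with the new incident."""
    words = set(description.lower().split())
    scored = []
    for case in KNOWLEDGE_DB:
        overlap = len(words & case["keywords"])
        if overlap:
            scored.append((overlap, case["resolution"]))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [resolution for _, resolution in scored[:top_n]]

if __name__ == "__main__":
    print(search_similar_cases("ATM cannot dispense cash at head office"))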


Figure B-2 shows Open Incident Record Procedure


FIGURE B-2 Open Incident Record Flow

Narrative of Open Incident Record Procedure
1. Incident Record Already Open?
Check if an incident record has already been opened for the incident.
(a) If it is ‘Yes’, proceed to Review Open Incident Policy
(b) If it is ‘No’, proceed to Return
2. Review Open Incident Policy
Review the Open Incident policy, in particular the details of items such as:
(a) Who is authorized to open incident records?
(b) What information is required when opening an incident?
3. Open an incident record for the incident.
Open an incident record for the incident with required information.
The required information to be included in an incident record is (an illustrative record structure is sketched after this procedure):
(a) Incident ID
(b) Date and Time when open incident Record

(c) Incident description
(d) Outage detail, in particular the failing component/resource and the date/time
the incident occurred
(e) Incident severity based on business impact
(f) Incident requester (requester’s name, location and contact no.)
(g) Incident status (open/ assign resolver/ necessary resolving steps/ close)
4. Gather Required Information
Gather required information based on policy to complete the incident record.
5. Entitle?
Follow the policy to determine if the Requester is entitled to raise this incident.
(a) If it is ‘Yes’, proceed to Assign Severity to Incident
(b) If it is ‘No’, proceed to Document Entitle Failure Detail
6. Document Entitle failure Detail
If the Requester was not entitled to raise this incident, document the details
of the entitlement failure in preparation for calling the Handle Service
Entitlement Failure.
7. Handle Service Entitlement Failure
Handle Service Entitlement Failure resolves entitlement failures for
requested services and updates request records to reflect the disposition of
entitlement failures. The incident shall be assessed against the service
contracts, in particular the IT outsourcing contract. An alternative for
entitlement may be proposed with authorized approval.
8. Continue?
Determine if the decision was made in the Handle Service Entitlement
Failure to continue with the incident.
(a) If it is ‘Yes’, proceed to Assign Severity to Incident
(b) If it is ‘No’, proceed to Return
9. Assign Severity to Incident
Assign severity based on severity definition and its policy to the incident.
Proceed to Return.
10. Return
Return to the Incident Management Process
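
The fields required in step 3 above and in step 7 of the Incident Management narrative can be viewed as a simple record structure. The sketch below is only an illustration of those fields; the field names are paraphrased and do not represent the actual system schema.

# Illustrative incident record mirroring the fields required when an incident
# is opened (Open Incident Record, step 3). Field names are paraphrased;
# this is not the actual system schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class IncidentRecord:
    incident_id: str
    opened_at: datetime                    # date and time the record was opened
    description: str                       # incident description
    failing_component: str                 # outage detail: failing component/resource
    severity: int                          # 1 critical, 2 high, 3 normal, 4 low
    requester: str                         # requester name, location, contact no.
    status: str = "open"                   # open / assigned / resolving / closed
    sls_owner: Optional[str] = None        # second level support owner
    tls_owner: Optional[str] = None        # third level support owner
    resolving_steps: list = field(default_factory=list)

if __name__ == "__main__":
    rec = IncidentRecord(
        incident_id="INC-000123",
        opened_at=datetime.now(),
        description="Passbook printer at branch 042 jams on update",
        failing_component="Passbook printer",
        severity=3,
        requester="Branch 042 teller, ext. 1234",
    )
    print(rec)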

Figure B-3 shows Handle Major Incident Procedure



FIGURE B-3 Handle Major Incident Flow

Narrative of Handle Major Incident Procedure
1. Gather Information for Major incident
If the incident is associated with a major incident, collect all related
information regarding the incident such as:
(a) Services/ applications/ resources affected
(b) Affected service owners
(c) Estimated duration of any associated outages
2. Major Incident Criteria Met?
Determine if the criteria for a major incident have been met based on
major incident severity 1, which has the greatest business impact in terms
of the availability of a specific service, application, or network.

(a) If it is ‘Yes’, proceed to Assign major incident Owner
(b) If it is ‘No’, proceed to Inform Requester that Incident Not Major
Incident with Reasons
3. Inform Requester that Incident Not Major Incident with Reasons
Inform the Requester that the incident is not a major incident, with the reason
why the incident was not assigned severity 1.
4. Assign Major Incident Owner
Assign a major incident owner who handles all required notifications and
escalations until the resolution is complete.
5. Coordinate Recovery for Major incident.
Coordinate relevant resources for major incident recovery and effectively
manage the recovery activities to minimize the duration of the incident.
6. Major Incident Notification
Perform the major incident notification as follows:
(a) Analyze the incident in detail and take whatever actions are necessary to
confirm whether or not the associated service is actually down or is
severely degraded.
(b) If the service is actually down, urgently notify all affected parties of the
service outage (the management team and service recovery teams) by short
message and/or email, with ongoing status updates as required (an
illustrative notification message is sketched after this narrative).
(c) If the service is not actually down or severely degraded, notify the
appropriate service providers so that they may handle the incident.
7. Perform Problem Management Process
Perform the Problem Management process to permanently resolve the incident.
8. Major Incident Review Required?
Determine if the criteria for conducting a major incident review have been
met, based on incident severity 1, whose business impact particularly
affects the availability of a specific service, application, or network.
(a) If it is ‘Yes’, proceed to Perform Major Incident Review.
(b) If it is ‘No’, proceed to Notify All Parties.

9. Perform Major Incident Review
Assemble appropriate parties in preparation to conduct a review of an
incident.
10. Notify All Parties
Inform all participants either that a major incident review is not needed or
that the criteria for conducting an incident review have not been met.
Proceed to Return.
11. Return
Return to the Incident Management process
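
Step 6 above calls for notifying affected parties by short message and/or email with an ongoing status. The fragment below is a hedged sketch of composing such a notification text only; the fields and wording are assumed for illustration, and actual SMS or email delivery is outside the scope of this appendix.

# Illustrative composition of a severity 1 notification message.
# Only the message formatting is shown; delivery channels are out of scope here.
from datetime import datetime

def compose_major_incident_notice(incident_id, service, status, eta_minutes):
    """Build the ongoing-status text sent to management and recovery teams."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    return (f"[SEV1 {incident_id}] {service} outage - status: {status}; "
            f"estimated recovery in {eta_minutes} min (as of {stamp}).")

if __name__ == "__main__":
    print(compose_major_incident_notice("INC-000456", "Internet Banking",
                                        "recovery in progress", 45))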

Figure B-4 shows Perform Incident Analysis Procedure



FIGURE B-4 Perform Incident Analysis Flow

Narrative of Perform Incident Analysis Procedure
1. Collect Incident Symptom and Configuration Item Impact Info
Collect all available data about the incident, its symptoms, severity and
associated configuration data based on its component.



2. Identify Any Related Occurrence
Identify any related occurrences of the incident and analyze them against similar
previous cases.
3. Need To Reproduce Incident?
Determine if there is a need to reproduce the incident to obtain additional
information to understand the exact environment in which the incident
occurred.
(a) If it is ‘Yes’, proceed to Reproduce Proper Incident
(b) If it is ‘No’, proceed to Analyse Available Incident Data
4. Reproduce Proper Incident
If there is a need to reproduce the incident to gather additional insight about
the incident, attempt to reproduce the incident.
5. Incident Reproducible?
Determine if the incident is reproducible.
(a) If it is ‘Yes’, proceed to Update Incident Record with Additional
Details
(b) If it is ‘No’, proceed to Analyse Available Incident Data
6. Update Incident Record with Additional Details
Update the incident record with additional details.
7. Analyse Available Incident Data
Analyze all available incident data to validate that the incident was assigned
to the correct resolver group.
8. Correct Assignment?
Determine if the incident was assigned to the correct resolver group based on
the review of the incident record and all incident data.
(a) If it is ‘Yes’, proceed to Perform Incident Determination Procedure
(b) If it is ‘No’, proceed to Indicate Request Type
9. Indicate Request Type
If the incident record was incorrectly assigned, indicate the request type and
document the reassignment details in preparation for calling the reassignment
request.


10. Request for Reassignment
Request reassignment of the incident to the correct resolver
group and return to Assign/Reassign Incident to Appropriate
Incident Resolver to assign the incident to a new incident resolver.
11. Perform Incident Determination Procedure
If the incident was assigned to the correct resolver, proceed to perform
Incident Determination procedure to continue with incident analysis and
development of a Recovery Plan.
12. Return
Return to the Incident Management Process

Figure B-5 shows Incident Determination Procedure



FIGURE B-5 Incident Determination Flow


Narrative of Incident Determination Procedure
1. Initiate Incident Determination
Analyze all available incident data and initiate normal incident
determination activities. It should identify all single points of failure.
2. Actual Incident?
Determine if the reported incident is indeed an actual incident.
(a) If it is ‘Yes’, proceed to Determine Incident Impact
(b) If it is ‘No’, proceed to Action Required?
3. Action Required?
Determine if any action is required.
(a) If it is ‘Yes’, proceed to Perform Appropriate Action
(b) If it is ‘No’, proceed to Update Incident Record to Indicate that
Incident is Not an Actual Incident
4. Update Incident Record to Indicate that Incident is Not an Actual Incident
Update the incident record to indicate that the incident is not an actual incident.
Proceed to Return
5. Perform Appropriate Action
Perform the appropriate action for the non-actual incident and then check whether
notification is required.
6. Notification Required?
Determine if the notification is required.
(a) If it is ‘Yes’, proceed to Notify Appropriate Parties to Perform Action
(b) If it is ‘No’, proceed to Update Incident Record with Current Status
7. Notify Appropriate parties to perform Action
Notify appropriate parties to perform action for non-actual incident.
8. Determine Incident Impact
Determine the incident impact, in particular on crucial services,
components, applications, and networks.
9. Determine to Adjust Severity
Determine whether to adjust the assigned severity. A severity adjustment, either
up or down, will be notified to the FLS for negotiation.


10. Major Incident?
Based on the Major Incident policy, determine if the incident is a major incident.
(a) If it is ‘Yes’, proceed to Handle Major Incident Procedure
(b) If it is ‘No’, proceed to Recovery Required?
11. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure to assign a major incident
owner to the incident and to handle all required notifications and escalations.
12. Recovery Required?
Determine if there is any recovery required to the incident.
(a) If it is ‘Yes’, proceed to Perform Backup and Recovery
(b) If it is ‘No’, proceed to Update Incident Record with Current Status
13. Perform Backup and Recovery
Perform recovery according to the Backup and Recovery procedure.
14. Update Incident Record with Current Status
Update incident record with the current status.
15. Return
Return to Incident Management Process


Figure B-6 shows Close Incident Record Procedure



FIGURE B-6 Close Incident Record Flow

Narrative of Close Incident Record Procedure
1. Review Close Incident Policy
Review the Close Incident policy for the account. The policy shall define:
(a) Who can close incident records
(b) Required closure concurrence, if any
(c) Required notifications, if any
2. Closure Concurrence Required?
Follow the policy to determine if concurrence to close the incident is
required.
(a) If it is ‘Yes’, proceed to Obtain Closure Concurrence from Appropriate
Parties.
(b) If it is ‘No’, proceed to Close Incident Record.


3. Obtain Closure Concurrence from Appropriate Parties
If concurrence to close the incident is required, follow the Close Incident
policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained?
Determine if concurrence to close the incident was obtained from all
appropriate parties.
(a) If it is ‘Yes’, proceed to Close Incident Record.
(b) If it is ‘No’, proceed to Document Closure Issue.
5. Close Incident Record
Close the incident record, ensuring that it contains all the
required information, including the closing status and code, and the recovery
and resolution dates and times.
6. Notification Required?
Follow the Notification policy to determine if notification is required that the
incident has been closed.
(a) If it is ‘Yes’, proceed to Notify Appropriate Parties.
(b) If it is ‘No’, proceed to Return.
7. Notify Appropriate Parties
If notification is required, follow the Notification policy to notify the
appropriate parties that the incident has been closed and its closing status.
The following personnel are to be notified that a severity 1 incident has been
closed:
(a) Incident Coordinator
(b) Requester/ User
(c) Designated customer incident liaison
8. Return
Proceed to Return.

B-7 ITIL-Based Problem Management Process
The scope of the Problem Management process includes:
(a) Reviewing problem and incident trend analysis
(b) Opening a problem record
(c) Performing RCA (root cause analysis)
(d) Assigning the problem to an appropriate problem resolver
(e) Developing a permanent resolution plan
(f) Implementing the permanent resolution plan
(g) Closing the problem record

Figure B-7 shows the Problem management process flow



FIGURE B-7 IT Problem Management Process Flow

Narrative of Problem Management Process:
The problem management process has two purposes. One is to perform
preventive action by analyzing problem and incident trends in order to determine
and provide an action plan (the ‘ongoing’ path). The other is to handle each problem
as required by the incident management process (the ‘as required for each problem’ path).

The Ongoing path includes one procedure.
1. Review Problem and Incident Trend Analysis procedure
Refer to the Review Problem and Incident Trend Analysis procedure to
analyse negative trends in the incident and problem processes. It determines
whether to provide an action plan in terms of preventive action.
Proceed to End
As Required for each problem path includes the following.
1. Open Problem Record Procedure
Refer to Open Problem Record procedure.
2. Request for RCA?
Determine if the problem was opened for a request to perform a Root Cause
Analysis for a negative process trend.
(a) If it is ‘Yes’, proceed to Perform Root Cause Analysis Procedure.
(b) If it is ‘No’, proceed to Assign to Problem Resolver Procedure.
3. Perform Root Cause Analysis Procedure
Refer to Perform Root Cause Analysis procedure
Proceed to End
4. Assign to Problem Resolver Procedure
Refer to Assign to Problem Resolver procedure
5. Develop Permanent Resolution Plan Procedure
Refer to Develop Permanent Resolution Plan procedure
6. Was Resolution Developed?
Determine if the resolution Plan was developed.
(a) If it is ‘Yes’, proceed to Implement Permanent Resolution Plan
Procedure
(b) If it is ‘No’, proceed to Close Problem Record Procedure
7. Implement Permanent Resolution Plan Procedure.
Refer to Implement Permanent Resolution Plan Procedure.
8. Was Resolution Successful?
Determine if the resolution was successful.
(a) If it is ‘Yes’, proceed to Close Problem Record procedure.
(b) If it is ‘No’, proceed to Proceed to Another Effective Resolution Plan

9. Proceed to Another Effective Resolution Plan
If the resolution plan was implemented unsuccessfully, document the issue and
proceed to another effective resolution plan.
Proceed to Develop Permanent Resolution Plan Procedure
10. Close Problem Record procedure
Refer to Close Problem Record procedure
11. End
End of Problem Management Process

Figure B-8 shows Review Problem and Incident Trend Analysis Procedure



FIGURE B-8 Review Problem and Incident Trend Analysis

Narrative of Review Problem and Incident Trend Analysis
1. Review problem and incident trend analysis
Review problem and incident trend analysis to proactively determine
potential problems that have not yet been identified by the occurrence of an
incident or recurring data that might indicate an unidentified problem.


2. Preventive Action Required?
Determine whether specific targeted actions need to be taken to investigate,
resolve and prevent a potential problem, based on the outcome of data
gathering and trend analysis.
(a) If it is ‘Yes’, proceed to Document Required for Preventive Action.
(b) If it is ‘No’, proceed to Review Action Plan in Regular Management
Meeting.
3. Document Required for Preventive Action.
Document the requirement for preventive action together with the trend analysis
output. Notify the affected services of the preventive action result, covering
emerging trends and possible improvement areas.
4. Review Action Plan in Regular Management Meeting
Review the action plan information with management at regular review
meetings to ensure that the information is understood and acted on.
5. Action Plan Required?
Does the review indicate that a further action plan is required to handle any
service issues?
(a) If it is ‘Yes’, proceed to Develop Action plan.
(b) If it is ‘No’, proceed to End.
6. Develop Action plan
Develop the required action plan.
7. Handle Action Plan Implementation for Completion
Handle the action plan implementation to monitor implementation and
completion of the action plan.
8. Return
Return to the Problem Management Process


Figure B-9 shows Open Problem Record Procedure



FIGURE B-9 Open Problem Record Flow

Narrative of Open Problem Record Procedure
1. Problem Record Already Open?
Check if a problem record has already been opened for the incident.
(a) If it is ‘Yes’, proceed to Review Open Problem Policy
(b) If it is ‘No’, proceed to Return
2. Update Problem Record Which Is Already Open
Update the problem record to indicate that the problem is already open.
3. Review Open Problem Policy
Review the Open Problem policy, in particular the details of items such as:
(a) Who is authorized to open problem records?
(b) What information is required when opening a problem record?

4. Open Problem Record
Open a problem record for the problem with the required information.
The information required to open a problem record is as follows:
(a) Incident details gathered and recorded in the incident record
(b) Associated incidents
5. Multiple Incidents?
Determine if the problem is associated with multiple incidents.
6. Coordinate Incident to Problem Record
Coordinate the related incidents with the problem record.
7. Gather Required Information
Gather required information based on policy to complete the problem record
8. Entitle?
Follow the policy to determine if the problem requester is entitled to raise
this problem.
9. Document Entitle failure Detail
If the Requester was not entitled to raise this problem, document the details
of the entitlement failure in preparation for handling service entitlement
failure.
10. Handle Service Entitlement Failure
Handle Service Entitlement Failure resolves entitlement failures for
requested services and updates request records to reflect the disposition of
entitlement failures. The problem shall be assessed against the service
contracts, in particular the IT outsourcing contract. An alternative for
entitlement may be proposed with authorized approval.
11. Continue?
Determine if the decision was made in the handle service entitlement failure
to continue with the problem.
12. Match Severity to Incident
Match the severity to the problem based on the severity definition.
Proceed to Return.
13. Return
Return to the Problem Management Process

Figure B-10 shows Perform Root Cause Analysis Procedure



FIGURE B-10 Perform Root Cause Analysis Flow

Narrative of Perform Root Cause Analysis
1. Assign RCA Owner
Assign an owner for the Root Cause Analysis. The owner is responsible
for managing the Root Cause Analysis through its completion.
2. Gather Problem Related RCA
Gather all available problem data related to RCA, including:
(a) The problem record
(b) Any details about associated service outage



Steps 3 through 5 and Steps 6 through 8 are performed in parallel.
3. Analyse Problem
Analyze the problem data. In particular, look for common:
(a) Symptoms, patterns of occurrence, user environments, etc.
(b) Exception events
4. Identify Contributing Factors
Based on the problem data analysis, identify any factors that contributed to
the problem.
5. Determine Probable Cause
Choose the most likely problem cause or causes from the contributing
factors.
Proceed to Analysis Complete?
6. Monitor RCA
Monitor the progress of the Root Cause Analysis to ensure that it is on
schedule.
7. Action Required?
Determine if any action is required to complete the Root Cause Analysis.
(a) If it is ‘Yes’, proceed to Take Appropriate Actions.
(b) If it is ‘No’, proceed to Analysis Complete?
8. Take Appropriate Actions
Take whatever actions are necessary to complete the Root Cause Analysis on
schedule.
Return to Monitor Root Cause Analysis to continue to monitor the progress
of the Root Cause Analysis.
9. Analysis Complete?
Determine if the Root Cause Analysis has been completed.
(a) If it is ‘Yes’, proceed to Document Final RCA Result
(b) If it is ‘No’, proceed to Prepare Interim RCA Result
10. Prepare Interim RCA Result
If the analysis is not yet complete, prepare an interim report that documents
the Root Cause Analysis findings to date.
Return in parallel to Analyze Problem and Monitor RCA to complete the
analysis.

11. Document Final RCA Result
If the analysis is complete, document the results of the Root Cause Analysis.
Include findings from the problem data analysis, explanations of
contributing factors, and an indication of the probable cause(s).
12. Review RCA with Appropriate Parties
Review the Root Cause Analysis results with the appropriate parties; for
example, the Problem Coordinator and all affected service owners.
13. Result Accepted?
Determine if the Root Cause Analysis results were accepted.
(a) If it is ‘Yes’, proceed to Root Cause Found?
(b) If it is ‘No’, return in parallel to Analyze Problem and Monitor RCA
to repeat the Root Cause Analysis.
14. Root Cause Found?
Determine if a root cause of a problem was found.
(a) If it is ‘Yes’, proceed to Update Final RCA Results to Knowledge Database
(b) If it is ‘No’, proceed to Update Problem Record with Current Status
15. Update Final RCA Results to Knowledge Database
Update the knowledge database with the root cause analysis result (an
illustrative RCA entry is sketched after this narrative). Based on the knowledge
database update policy, the database may be updated to reflect the RCA
results for all problems and negative process trends.
16. Update Problem Record with Current Status
Update the problem record with the current status of the problem; either:
(a) Root cause of the problem identified
(b) No root cause found
Proceed to Return.
17. Notify RCA Result to Appropriate Parties
Follow the notification policy to notify the appropriate parties of the RCA
results, in particular the service accounts to which the RCA is applicable.
Proceed to Return.
18. Return
Return to either the Problem Management Process or
the Develop Permanent Resolution Plan procedure
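
Step 15 above stores the final RCA results in the knowledge database. The fragment below is an assumed illustration of what such a stored RCA entry might contain, based on the items documented in the final RCA result; it is not the system's actual schema.

# Illustrative RCA entry added to the knowledge database (assumed fields,
# based on the items documented in the final RCA result).
rca_entry = {
    "problem_id": "PRB-000078",
    "related_incidents": ["INC-000123", "INC-000130"],
    "contributing_factors": ["Outdated printer firmware", "Peak-hour load"],
    "probable_cause": "Firmware defect in passbook printer driver",
    "permanent_resolution": "Deploy updated firmware to all branch passbook printers",
    "status": "known error",
}

def store_rca_result(knowledge_db, entry):
    """Append the RCA entry so future incidents can reuse the findings."""
    knowledge_db.append(entry)

if __name__ == "__main__":
    db = []
    store_rca_result(db, rca_entry)
    print(len(db), "RCA record(s) stored")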

Figure B-11 shows Assign Problem to Appropriate Problem Resolver Procedure



FIGURE B-11 Assign Problem to Appropriate Problem Resolver Flow

Narrative of Assign Problem to Appropriate Problem Resolver
1. Review Problem Record
Review the problem record to determine whom it should be assigned to.
2. Correct Assignment?
Determine if the problem was initially assigned to the correct Resolver Group
when the problem was opened.
(a) If it is ‘Yes’, proceed to Assign Problem to Problem Resolver.
(b) If it is ‘No’, proceed to Indicate Request Type.
3. Indicate Request Type
If the problem was initially assigned to the wrong resolver, indicate the
request problem type and, if known, the details of whom the problem should
be reassigned to in preparation for calling the reassign request.
4. Request for Reassignment
Request reassignment so that the problem is assigned to the most appropriate resolver.
Proceed to Review Problem Record
5. Assign Problem to Problem Resolver
Assign problem to the problem resolver based on skill level and availability.


6. Update Problem Record with Current Status
Update the problem record to indicate that the problem has been assigned to
an appropriate problem resolver and is awaiting problem analysis and
development of a permanent resolution plan.
7. Return
Return to the Problem Management Process

Figure B-12 shows Developing Permanent Resolution Plan Procedure



FIGURE B-12 Developing Permanent Resolution Plan

Narrative of Developing Permanent Resolution Plan
1. Review Associated incident and Related Configuration Items (CIs)
Review all recorded available data about the incident(s), symptoms, severity
and associated configuration items based on component or application or
network categorization.
2. Identify Any Related Occurrences
Identify any related occurrences of the problem and analyze similar
problems, comparing the problem to the database of records to determine if
this is a repeat occurrence of a previous problem or known error.
3. RCA Required?
Determine if a Root Cause Analysis is required for the problem.
(a) If it is ‘Yes’, proceed to Perform Root Cause Analysis Procedure
(b) If it is ‘No’, proceed to Investigate Possible Solution
4. Perform Root Cause Analysis Procedure
If a RCA is required, proceed to the Perform Root Cause Analysis procedure
to determine the most likely cause of the problem.
5. Investigate Possible Solutions
Investigate possible permanent solutions for the problem. Potential resolutions
may be searched for and selected from the knowledge database.
6. Potential Resolution Identified?
Determine if any potential resolutions were identified.
(a) If it is ‘Yes’, proceed to Select Resolution.
(b) If it is ‘No’, proceed to Update Problem Record to be Closed without
any Resolution.
7. Update Problem Record to be Closed without Any Resolution
If no potential resolution was identified, update the problem
record to indicate that the problem will be closed due to the lack of a known
error or possible resolution.
8. Select Resolution
If potential resolutions were found, select what appears to be the best
permanent solution for the problem.


9. Finalize Resolution
Finalize the selected resolution.
Proceed to Return.
10. Develop Resolution Plan and Test Resolution Plan
Develop the permanent resolution plan for the selected resolution and test it.
11. Review Resolution Plan with Appropriate Parties
Review the resolution plan with the appropriate parties, for example the
Problem Coordinator and the affected service owners.
12. Issue Occurred?
Check if any issue occurred during the review of the resolution plan.
(a) If it is ‘Yes’, proceed to Document Issue.
(b) If it is ‘No’, proceed to Update Problem Record with Current Status.
13. Document Issue
Document the issue raised during the review of the resolution plan.
14. Issue Resolved?
Check if the documented issue was resolved.
(a) If it is ‘Yes’, proceed to Update Problem Record with Current Status.
(b) If it is ‘No’, proceed to Develop Resolution Plan and Test Resolution Plan.
15. Update Problem Record with Current Status
If the Permanent Resolution Plan is acceptable, update the problem record to
indicate that the solution is ready to be implemented to permanently resolve
the problem. Change the status of the problem to Known Error.
Proceed to Return.
16. Return
Return to the Problem Management Process







Figure B-13 shows Implement Permanent Resolution Plan Procedure



FIGURE B-13 Implement Permanent Resolution Plan Flow

Narrative of Implement Permanent Resolution Plan
1. Initiate Resolution Plan
Initiating the Permanent Resolution Plan involves two parallel activities:
(a) Implementation: performed by external operational processes
(b) Coordination: performed by the Problem Resolver to monitor the
overall execution of the Permanent Resolution Plan and to record the
implementation results.
2. Monitor Resolution plan Implementation
Monitor the implementation of the Permanent Resolution Plan against the
target schedule.



3. Adjustment Required?
Determine if any adjustment to the Permanent Resolution Plan is needed to
ensure resolution of the problem in known error status within committed
service levels.
(a) If it is ‘Yes’, proceed to Adjust Resolution Plan
(b) If it is ‘No’, proceed to Implement Resolution Plan
4. Adjust Resolution Plan
If adjustments to the Permanent Resolution Plan are needed to resolve the
problem in known error status within committed service levels, escalate to the
implementers as required to apply corrective action and adjust the plan
accordingly.
5. Review Resolution Plan Adjustment with Appropriate Resolver
Coordinate the adjusted plan with all affected resolvers to review the
resolution plan adjustment.
6. Update Problem Record with Adjusted Resolution Plan Details
Update the problem record with details of the modified Permanent
Resolution Plan.
7. Implement Resolution Plan
Perform Implementation of Resolution Plan to continue with the resolution
of the problem in known error status.
8. Implement Complete?
Determine if implementation of the solution is complete.
(a) If it is ‘Yes’, proceed to Successful?
(b) If it is ‘No’, proceed to Update Problem Record with Implemented
Resolution Unsuccessful
9. Successful?
Determine if the implemented resolution successfully resolved the problem.
(a) If it is ‘Yes’, proceed to Update Problem Record with Implemented
Resolution Successful
(b) If it is ‘No’, proceed to Update Problem Record with Implemented
Resolution Unsuccessful

10. Update Problem Record with Implemented Resolution Unsuccessful
If the problem was not resolved, update the problem record to indicate that
the Permanent Resolution Plan was not successful.
Note: The problem remains in known error status until it is permanently
fixed by a change.
11. Update Problem Record with Implemented Resolution Successful
If the problem was resolved successfully, update the problem record to
indicate that the problem in known error status has been resolved. Be sure to
enter the resolution date and time. The record should include brief details of the
resolution so that these are available to assist with future incident and
problem investigation and diagnosis.
12. Notify Appropriate Parties
Notify the Requester, the Problem Coordinator, affected service owners, and
a customer-designated problem liaison of the outcome of implementing the
Permanent Resolution Plan, following the notification policy.
Proceed to Return.
13. Return
Return to the Problem Management Process


Figure B-14 shows the Close Problem Record procedure.



FIGURE B-14 Close Problem Record Flow

Narrative of Close Problem Procedure
1. Review Close Problem Policy

Review the Close Problem policy for the account. The policy shall define:
(a) Who can close problem records
(b) Required closure concurrence, if any
(c) Required notifications, if any
2. Closure Concurrence Required?
Follow the policy to determine if concurrence to close the problem is
required.
(a) If it is ‘Yes’, proceed to Obtain Closure Concurrence from
Appropriate Parties.
(b) If it is ‘No’, proceed to Close Problem Record.



3. Obtain Closure Concurrence from Appropriate Parties
If concurrence to close the problem is required, follow the Close Problem
policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained?
Determine if concurrence to close the problem was obtained from all
appropriate parties.
(a) If it is ‘Yes’, proceed to Close Problem Record.
(b) If it is ‘No’, proceed to Document Closure Issue.
5. Close Problem Record
Close the problem record. Ensure that the problem record contains all the
required information, including the closing status code, and the recovery and
resolution dates and times.
6. Notification Required?
Follow the Notification policy to determine if notification is required that
the problem has been closed.
(a) If it is ‘Yes’, proceed to Notify Appropriate Parties.
(b) If it is ‘No’, proceed to Return.
7. Notify Appropriate Parties
If notification is required, follow the Notification policy to notify the
appropriate parties that the problem has been closed and of its closing status.
The appropriate parties may include:
(a) Problem Coordinator
(b) Requester/ User
(c) Designated customer problem liaison
8. Return
Return to the Problem Management Process
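
The closure flow above can also be read as a small decision routine. The following is a minimal Python sketch of that flow, given for illustration only; the record, policy, and notify objects are hypothetical placeholders and are not part of the KMRCA prototype described in this thesis.

def close_problem_record(record, policy, notify):
    """Minimal sketch of the Close Problem Record flow (Figure B-14)."""
    # 1-2. Review the Close Problem policy and check whether closure concurrence is required.
    if policy.closure_concurrence_required:
        # 3-4. Obtain closure concurrence from the appropriate parties.
        if not policy.obtain_closure_concurrence(record):
            record.document_closure_issue()
            return record                      # closure issue documented; record is not closed
    # 5. Close the record with the required closure information.
    record.close()
    # 6-7. Notify the appropriate parties if the Notification policy requires it.
    if policy.notification_required:
        for party in ("Problem Coordinator", "Requester/User", "Designated customer problem liaison"):
            notify(party, record)
    # 8. Return to the Problem Management Process.
    return record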














APPENDIX C

SIMULATION MODELS AND SIMULATION RESULTS


C-1 Simulation Model of Typical IT Service Desk System
A simulation model of the Typical IT service desk system is shown in Figure C-1.

[Model layout: IT Incident Call Arrivals → Assign IT Incident Ticket → Assign Severity (branch probabilities 0.303, 95.753, and 3.238 percent, else) → Assign Severity 1–4 → Resolving Severity 1–4 → Ticket Severity 1–4 Resolved.]


FIGURE C-1 Simulation Model for IT Service Desk System

The details of the simulation model are described by the following SIMAN code:


;
;
; Model statements for module: BasicProcess.Create 1 (IT Incident Call
Arrivals)
;

14$ CREATE, 1,MinutesToBaseTime(0.0),Entity
1:MinutesToBaseTime(WEIB( 3.64, 0.903 )):NEXT(15$);

15$ ASSIGN: IT Incident Call Arrivals.NumberOut=IT Incident
Call Arrivals.NumberOut + 1:NEXT(9$);


;
;
; Model statements for module: BasicProcess.Assign 1 (Assign IT
Incident Ticket)
;
9$ ASSIGN: Picture=Picture.Ball:NEXT(0$);





;
;
; Model statements for module: BasicProcess.Decide 1 (Assign Severity)
;
0$ BRANCH, 1:
With,(0.303)/100,10$,Yes:
With,(95.753)/100,11$,Yes:
With,(3.238)/100,12$,Yes:
Else,13$,Yes;

;
;
; Model statements for module: BasicProcess.Assign 5 (Assign Servirity
1)
;
13$ ASSIGN: Entity.Type=Severity 1:
Picture=Picture.Red Ball:
S1 resolving time=LOGN(2.37, 4.74):
S1 time arrival=TNOW:NEXT(1$);


;
;
; Model statements for module: BasicProcess.Process 1 (Resolving
Severity 1)
;
1$ ASSIGN: Resolving Severity 1.NumberIn=Resolving
Severity 1.NumberIn + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP+1;
23$ QUEUE, Resolving Severity 1.Queue;
22$ SEIZE, 1,VA:
Resource 1,1:NEXT(21$);

21$ DELAY: MinutesToBaseTime(S1 resolving time),,VA;
20$ RELEASE: Resource 1,1;
68$ ASSIGN: Resolving Severity 1.NumberOut=Resolving
Severity 1.NumberOut + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP-1:NEXT(8$);


;
;
; Model statements for module: BasicProcess.Dispose 4 (Ticket Severity
1 Resolved)
;
8$ ASSIGN: Ticket Severity 1 Resolved.NumberOut=Ticket
Severity 1 Resolved.NumberOut + 1;
71$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 2 (Assign Servirity
4)
;
10$ ASSIGN: Entity.Type=Severity 4:
Picture=Picture.Green Ball:
S4 time arrival=TNOW:
S4 resolving
time=144*BETA(0.248,1.27):NEXT(4$);




;
;
; Model statements for module: BasicProcess.Process 4 (Resolving
Severity 4)
;
4$ ASSIGN: Resolving Severity 4.NumberIn=Resolving
Severity 4.NumberIn + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP+1;
75$ QUEUE, Resolving Severity 4.Queue;
74$ SEIZE, 3,VA:
Resource 1,1:NEXT(73$);

73$ DELAY: MinutesToBaseTime(S4 resolving time),,VA;
72$ RELEASE: Resource 1,1;
120$ ASSIGN: Resolving Severity 4.NumberOut=Resolving
Severity 4.NumberOut + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP-1:NEXT(5$);


;
;
; Model statements for module: BasicProcess.Dispose 1 (Ticket Severity
4 Resolved)
;
5$ ASSIGN: Ticket Severity 4 Resolved.NumberOut=Ticket
Severity 4 Resolved.NumberOut + 1;
123$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 3 (Assign Servirity
3)
;
11$ ASSIGN: S3 resolving time T2=LOGN(7.87, 11.1):
S3 resolving time T1=WEIB(5.94, 0.67):
Entity.Type=Severity 3:
Picture=Picture.Blue Ball:
S3 time arrival=TNOW:NEXT(3$);


;
;
; Model statements for module: BasicProcess.Process 3 (Resolving
Severity 3)
;
3$ ASSIGN: Resolving Severity 3.NumberIn=Resolving
Severity 3.NumberIn + 1:
Resolving Severity 3.WIP=Resolving Severity
3.WIP+1;
127$ QUEUE, Resolving Severity 3.Queue;
126$ SEIZE, 2,VA:
Resource 1,1:NEXT(125$);

125$ DELAY: MinutesToBaseTime(S3 resolving time T2),,VA;
124$ RELEASE: Resource 1,1;
172$ ASSIGN: Resolving Severity 3.NumberOut=Resolving
Severity 3.NumberOut + 1:
Resolving Severity 3.WIP=Resolving Severity
3.WIP-1:NEXT(6$);




;
;
; Model statements for module: BasicProcess.Dispose 2 (Ticket Severity
3 Resolved)
;
6$ ASSIGN: Ticket Severity 3 Resolved.NumberOut=Ticket
Severity 3 Resolved.NumberOut + 1;
175$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 4 (Assign Servirity
2)
;
12$ ASSIGN: Picture=Picture.Yellow Ball:
Entity.Type=Severity 2:
S2 time arrival=TNOW:
S2 resolving time=LOGN(4.61, 9.4):NEXT(2$);


;
;
; Model statements for module: BasicProcess.Process 2 (Resolving
Severity 2)
;
2$ ASSIGN: Resolving Severity 2.NumberIn=Resolving
Severity 2.NumberIn + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP+1;
179$ QUEUE, Resolving Severity 2.Queue;
178$ SEIZE, 1,VA:
Resource 1,1:NEXT(177$);

177$ DELAY: MinutesToBaseTime(S2 resolving time),,VA;
176$ RELEASE: Resource 1,1;
224$ ASSIGN: Resolving Severity 2.NumberOut=Resolving
Severity 2.NumberOut + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP-1:NEXT(7$);


;
;
; Model statements for module: BasicProcess.Dispose 3 (Ticket Severity
2 Resolved)
;
7$ ASSIGN: Ticket Severity 2 Resolved.NumberOut=Ticket
Severity 2 Resolved.NumberOut + 1;
227$ DISPOSE: Yes;
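
For readers who wish to experiment with the model outside Arena, the following is a minimal sketch of the same queueing logic written in Python with SimPy and NumPy. It is an illustration under stated assumptions, not the thesis prototype: the resolver pool capacity, the single FIFO resource (the Arena model seizes Resource 1 with different priorities per severity), and the run length are assumed here, and Arena's LOGN(LogMean, LogStd) and WEIB(Beta, Alpha) parameterisations are converted to NumPy's conventions.

import numpy as np
import simpy

rng = np.random.default_rng(2007)

def arena_lognormal(log_mean, log_std):
    # Arena's LOGN takes the mean and standard deviation of the lognormal variate
    # itself; NumPy wants the mean and sigma of the underlying normal distribution.
    sigma2 = np.log(1.0 + (log_std / log_mean) ** 2)
    mu = np.log(log_mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2))

def arena_weibull(beta, alpha):
    # Arena's WEIB(Beta, Alpha) uses Beta as the scale and Alpha as the shape.
    return beta * rng.weibull(alpha)

def resolving_time(severity):
    if severity == 1:
        return arena_lognormal(2.37, 4.74)
    if severity == 2:
        return arena_lognormal(4.61, 9.4)
    if severity == 3:
        return arena_lognormal(7.87, 11.1)      # S3 resolving time T2 in the listing
    return 144 * rng.beta(0.248, 1.27)           # severity 4

def incident(env, desk, severity, log):
    with desk.request() as req:                  # wait for a free resolver
        yield req
        service = resolving_time(severity)
        yield env.timeout(service)
        log[severity].append(service)

def arrivals(env, desk, log):
    # Branch probabilities from the Assign Severity decide module:
    # 0.303% -> severity 4, 95.753% -> severity 3, 3.238% -> severity 2, else -> severity 1.
    severities = [4, 3, 2, 1]
    probs = [0.00303, 0.95753, 0.03238, 0.00706]
    while True:
        yield env.timeout(arena_weibull(3.64, 0.903))   # minutes between incident calls
        env.process(incident(env, desk, int(rng.choice(severities, p=probs)), log))

log = {s: [] for s in (1, 2, 3, 4)}
env = simpy.Environment()
desk = simpy.Resource(env, capacity=6)           # assumed resolver pool size
env.process(arrivals(env, desk, log))
env.run(until=12 * 60)                           # one simulated 12-hour business day (assumed)
for s, times in sorted(log.items()):
    if times:
        print(f"Severity {s}: {len(times)} tickets, mean resolving time {np.mean(times):.2f} min")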



C-2 Simulation Model of KMRCA IT Service Desk System
A simulation model of the KMRCA IT service desk system is shown in Figure C-2.
[Model layout: IT Incident Call Arrivals → Assign IT Incident Ticket → Assign Severity (branch probabilities 0.303, 95.753, and 3.238 percent, else) → Assign Severity 1–4 → Resolving Severity 1, 2, and 4, and Resolving Severity 3 in three stages (by Factor A, Factor B, and Factor C) → Ticket Severity 1–4 Resolved.]


FIGURE C-2 Simulation Model of KMRCA IT Service Desk System

The SIMAN code of the simulation model is as follows:
;
;
; Model statements for module: BasicProcess.Create 1 (IT Incident Call
Arrivals)
;

16$ CREATE, 1,MinutesToBaseTime(0.0),Entity
1:MinutesToBaseTime(WEIB( 3.16, 0.903)):NEXT(17$);

17$ ASSIGN: IT Incident Call Arrivals.NumberOut=IT Incident
Call Arrivals.NumberOut + 1:NEXT(9$);

;
;
; Model statements for module: BasicProcess.Assign 1 (Assign IT
Incident Ticket)
;
9$ ASSIGN: Picture=Picture.Ball:NEXT(0$);

;
; Model statements for module: BasicProcess.Decide 1 (Assign Severity)
;
0$ BRANCH, 1:
With,(0.303)/100,10$,Yes:
With,(95.753)/100,11$,Yes:
With,(3.238)/100,12$,Yes:
Else,13$,Yes;

;
; Model statements for module: BasicProcess.Assign 5 (Assign Servirity
1)
;
13$ ASSIGN: Entity.Type=Severity 1:
Picture=Picture.Red Ball:
S1 resolving time=LOGN(2.37, 4.74):
S1 time arrival=TNOW:NEXT(1$);

;
;
; Model statements for module: BasicProcess.Process 1 (Resolving
Severity 1)
;
1$ ASSIGN: Resolving Severity 1.NumberIn=Resolving
Severity 1.NumberIn + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP+1;
51$ STACK, 1:Save:NEXT(25$);
25$ QUEUE, Resolving Severity 1.Queue;
24$ SEIZE, 1,VA:
Resource 1,1:NEXT(23$);
23$ DELAY: S1 resolving time,,VA:NEXT(66$);
66$ ASSIGN: Resolving Severity 1.WaitTime=Resolving
Severity 1.WaitTime + Diff.WaitTime;
30$ TALLY: Resolving Severity
1.WaitTimePerEntity,Diff.WaitTime,1;
32$ TALLY: Resolving Severity
1.TotalTimePerEntity,Diff.StartTime,1;
56$ ASSIGN: Resolving Severity 1.VATime=Resolving Severity
1.VATime + Diff.VATime;
57$ TALLY: Resolving Severity
1.VATimePerEntity,Diff.VATime,1;
22$ RELEASE: Resource 1,1;
71$ STACK, 1:Destroy:NEXT(70$);
70$ ASSIGN: Resolving Severity 1.NumberOut=Resolving
Severity 1.NumberOut + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP-1:NEXT(8$);
;
;
; Model statements for module: BasicProcess.Dispose 4 (Ticket Severity
1 Resolved)
;
8$ ASSIGN: Ticket Severity 1 Resolved.NumberOut=Ticket
Severity 1 Resolved.NumberOut + 1;
73$ DISPOSE: Yes;

;
;
; Model statements for module: BasicProcess.Assign 2 (Assign Servirity
4)
;
10$ ASSIGN: Entity.Type=Severity 4:
Picture=Picture.Green Ball:
S4 time arrival=TNOW:
S4 resolving
time=144*BETA(0.248,1.27):NEXT(4$);

;
;
; Model statements for module: BasicProcess.Process 4 (Resolving
Severity 4)
;

4$ ASSIGN: Resolving Severity 4.NumberIn=Resolving
Severity 4.NumberIn + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP+1;
103$ STACK, 1:Save:NEXT(77$);

77$ QUEUE, Resolving Severity 4.Queue;
76$ SEIZE, 3,VA:
Resource 1,1:NEXT(75$);

75$ DELAY: S4 resolving time,,VA:NEXT(118$);

118$ ASSIGN: Resolving Severity 4.WaitTime=Resolving
Severity 4.WaitTime + Diff.WaitTime;
82$ TALLY: Resolving Severity
4.WaitTimePerEntity,Diff.WaitTime,1;
84$ TALLY: Resolving Severity
4.TotalTimePerEntity,Diff.StartTime,1;
108$ ASSIGN: Resolving Severity 4.VATime=Resolving Severity
4.VATime + Diff.VATime;
109$ TALLY: Resolving Severity
4.VATimePerEntity,Diff.VATime,1;
74$ RELEASE: Resource 1,1;
123$ STACK, 1:Destroy:NEXT(122$);

122$ ASSIGN: Resolving Severity 4.NumberOut=Resolving
Severity 4.NumberOut + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP-1:NEXT(5$);

;
; Model statements for module: BasicProcess.Dispose 1 (Ticket Severity
4 Resolved)
;
5$ ASSIGN: Ticket Severity 4 Resolved.NumberOut=Ticket
Severity 4 Resolved.NumberOut + 1;
125$ DISPOSE: Yes;

;
; Model statements for module: BasicProcess.Assign 3 (Assign Servirity
3)
;
11$ ASSIGN: S3 resolving time T2=TRIA(2,3,4.5):
S3 resolving time T3=2.4:
S3 resolving time T1=1.2:
Entity.Type=Severity 3:
Picture=Picture.Blue Ball:
S3 time arrival=TNOW:NEXT(3$);
;
; Model statements for module: BasicProcess.Process 3 (Resolving
Severity 3 by Factor A)
;
3$ ASSIGN: Resolving Severity 3 by Factor
A.NumberIn=Resolving Severity 3 by Factor A.NumberIn + 1:
Resolving Severity 3 by Factor A.WIP=Resolving
Severity 3 by Factor A.WIP+1;
155$ STACK, 1:Save:NEXT(129$);

129$ QUEUE, Resolving Severity 3 by Factor A.Queue;
128$ SEIZE, 2,VA:
Resource 1,1:NEXT(127$);
127$ DELAY: S3 resolving time T1,,VA:NEXT(170$);
170$ ASSIGN: Resolving Severity 3 by Factor
A.WaitTime=Resolving Severity 3 by Factor A.WaitTime + Diff.WaitTime;

134$ TALLY: Resolving Severity 3 by Factor
A.WaitTimePerEntity,Diff.WaitTime,1;
136$ TALLY: Resolving Severity 3 by Factor
A.TotalTimePerEntity,Diff.StartTime,1;
160$ ASSIGN: Resolving Severity 3 by Factor
A.VATime=Resolving Severity 3 by Factor A.VATime + Diff.VATime;
161$ TALLY: Resolving Severity 3 by Factor
A.VATimePerEntity,Diff.VATime,1;
126$ RELEASE: Resource 1,1;
175$ STACK, 1:Destroy:NEXT(174$);
174$ ASSIGN: Resolving Severity 3 by Factor
A.NumberOut=Resolving Severity 3 by Factor A.NumberOut + 1:
Resolving Severity 3 by Factor A.WIP=Resolving
Severity 3 by Factor A.WIP-1:NEXT(14$);
;
; Model statements for module: BasicProcess.Process 5 (Resolving
Severity 3 by Factor B)
;
14$ ASSIGN: Resolving Severity 3 by Factor
B.NumberIn=Resolving Severity 3 by Factor B.NumberIn + 1:
Resolving Severity 3 by Factor B.WIP=Resolving
Severity 3 by Factor B.WIP+1;
206$ STACK, 1:Save:NEXT(180$);
180$ QUEUE, Resolving Severity 3 by Factor B.Queue;
179$ SEIZE, 2,VA:
Resource 1,1:NEXT(178$);
178$ DELAY: S3 resolving time T2,,VA:NEXT(221$);
221$ ASSIGN: Resolving Severity 3 by Factor
B.WaitTime=Resolving Severity 3 by Factor B.WaitTime + Diff.WaitTime;
185$ TALLY: Resolving Severity 3 by Factor
B.WaitTimePerEntity,Diff.WaitTime,1;
187$ TALLY: Resolving Severity 3 by Factor
B.TotalTimePerEntity,Diff.StartTime,1;
211$ ASSIGN: Resolving Severity 3 by Factor
B.VATime=Resolving Severity 3 by Factor B.VATime + Diff.VATime;
212$ TALLY: Resolving Severity 3 by Factor
B.VATimePerEntity,Diff.VATime,1;
177$ RELEASE: Resource 1,1;
226$ STACK, 1:Destroy:NEXT(225$);
225$ ASSIGN: Resolving Severity 3 by Factor
B.NumberOut=Resolving Severity 3 by Factor B.NumberOut + 1:
Resolving Severity 3 by Factor B.WIP=Resolving
Severity 3 by Factor B.WIP-1:NEXT(15$);
;
;
; Model statements for module: BasicProcess.Process 6 (Resolving
Severity 3 by Factor C)
;
15$ ASSIGN: Resolving Severity 3 by Factor
C.NumberIn=Resolving Severity 3 by Factor C.NumberIn + 1:
Resolving Severity 3 by Factor C.WIP=Resolving
Severity 3 by Factor C.WIP+1;
257$ STACK, 1:Save:NEXT(231$);

231$ QUEUE, Resolving Severity 3 by Factor C.Queue;
230$ SEIZE, 2,VA:
Resource 1,1:NEXT(229$);

229$ DELAY: S3 resolving time T3,,VA:NEXT(272$);

272$ ASSIGN: Resolving Severity 3 by Factor
C.WaitTime=Resolving Severity 3 by Factor C.WaitTime + Diff.WaitTime;
236$ TALLY: Resolving Severity 3 by Factor
C.WaitTimePerEntity,Diff.WaitTime,1;

238$ TALLY: Resolving Severity 3 by Factor
C.TotalTimePerEntity,Diff.StartTime,1;
262$ ASSIGN: Resolving Severity 3 by Factor
C.VATime=Resolving Severity 3 by Factor C.VATime + Diff.VATime;
263$ TALLY: Resolving Severity 3 by Factor
C.VATimePerEntity,Diff.VATime,1;
228$ RELEASE: Resource 1,1;
277$ STACK, 1:Destroy:NEXT(276$);
276$ ASSIGN: Resolving Severity 3 by Factor
C.NumberOut=Resolving Severity 3 by Factor C.NumberOut + 1:
Resolving Severity 3 by Factor C.WIP=Resolving
Severity 3 by Factor C.WIP-1:NEXT(6$);
;
; Model statements for module: BasicProcess.Dispose 2 (Ticket Severity
3 Resolved)
;
6$ ASSIGN: Ticket Severity 3 Resolved.NumberOut=Ticket
Severity 3 Resolved.NumberOut + 1;
279$ DISPOSE: Yes;
;
; Model statements for module: BasicProcess.Assign 4 (Assign Servirity
2)
;
12$ ASSIGN: Picture=Picture.Yellow Ball:
Entity.Type=Severity 2:
S2 time arrival=TNOW:
S2 resolving time=LOGN(4.61, 9.4):NEXT(2$);
;
; Model statements for module: BasicProcess.Process 2 (Resolving
Severity 2)
;
2$ ASSIGN: Resolving Severity 2.NumberIn=Resolving
Severity 2.NumberIn + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP+1;
309$ STACK, 1:Save:NEXT(283$);
283$ QUEUE, Resolving Severity 2.Queue;
282$ SEIZE, 1,VA:
Resource 1,1:NEXT(281$);
281$ DELAY: S2 resolving time,,VA:NEXT(324$);
324$ ASSIGN: Resolving Severity 2.WaitTime=Resolving
Severity 2.WaitTime + Diff.WaitTime;
288$ TALLY: Resolving Severity
2.WaitTimePerEntity,Diff.WaitTime,1;
290$ TALLY: Resolving Severity
2.TotalTimePerEntity,Diff.StartTime,1;
314$ ASSIGN: Resolving Severity 2.VATime=Resolving Severity
2.VATime + Diff.VATime;
315$ TALLY: Resolving Severity
2.VATimePerEntity,Diff.VATime,1;
280$ RELEASE: Resource 1,1;
329$ STACK, 1:Destroy:NEXT(328$);
328$ ASSIGN: Resolving Severity 2.NumberOut=Resolving
Severity 2.NumberOut + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP-1:NEXT(7$);

;
; Model statements for module: BasicProcess.Dispose 3 (Ticket Severity
2 Resolved)
;
7$ ASSIGN: Ticket Severity 2 Resolved.NumberOut=Ticket
Severity 2 Resolved.NumberOut + 1;
331$ DISPOSE: Yes;
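
A quick sanity check on the severity-3 path of this model: a severity-3 ticket passes through three consecutive resolving stages with delays of 1.2 minutes (Factor A), TRIA(2, 3, 4.5) minutes (Factor B), and 2.4 minutes (Factor C), so the expected total is 1.2 + (2 + 3 + 4.5)/3 + 2.4 ≈ 6.77 minutes, which agrees with the severity-3 times reported for this configuration (for example, Tables C-15 and C-17). Other standard orders in the design of experiments use different factor levels and therefore give different totals. A one-line check in Python:

# Expected severity-3 resolving time in the KMRCA model of Figure C-2:
# Factor A delay (1.2 min) + Factor B delay ~ TRIA(2, 3, 4.5) + Factor C delay (2.4 min).
factor_a = 1.2
factor_b = (2 + 3 + 4.5) / 3        # mean of a triangular(min, mode, max) distribution
factor_c = 2.4
print(round(factor_a + factor_b + factor_c, 2))   # 6.77 minutes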

C-3 Simulation Results for Design of Experiments
This section of the appendix presents the simulation results used as inputs to the
design of experiments (DOE): a 2³ full factorial run in standard order (8 runs),
each with 4 replications. Tables C-1 to C-16 show the entity detail summaries of
Time (e.g., Table C-1) and of Number of Entities (e.g., Table C-2) for the 1st to
the 8th standard orders, respectively.
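
For reference, the following is a minimal Python sketch of how the coded 2³ design matrix in standard (Yates) order could be generated; the −1/+1 coding and the generic factor labels A, B, and C are illustrative conventions, not output of the thesis prototype.

from itertools import product

# Coded design matrix of a 2^3 full factorial in standard (Yates) order:
# factor A varies fastest and factor C slowest; -1 is the low level, +1 the high level.
factors = ("A", "B", "C")
design = [dict(zip(factors, (a, b, c))) for c, b, a in product((-1, 1), repeat=3)]
for run, levels in enumerate(design, start=1):
    print(f"Std order {run}: {levels}")
# In the study each of the 8 runs was replicated 4 times, with the responses
# (e.g. throughput and severity-3 resolving time) taken from Tables C-1 to C-16.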

TABLE C-1 Entity Detail Summary of Time by 1st Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    4.27     4.27     4.25     4.27
Severity 4   18.36    27.81    30.69    24.89
Total        29.12    40.27    41.80    36.42

TABLE C-2 Entity Detail Summary of Number of Entities by 1st Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        19       19
Severity 2      117      117       110      110       103      103       103      103
Severity 3    3,434    3,434     3,484    3,482     3,457    3,454     3,457    3,454
Severity 4        9        9        16       16         9        9         9        9
Total         3,585    3,585     3,630    3,628     3,588    3,585     3,564    3,558

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.

TABLE C-3 Entity Detail Summary of Time by 2nd Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    4.67     4.67     4.65     4.67
Severity 4   18.36    27.81    30.69    24.89
Total        29.52    40.67    42.20    36.82

TABLE C-4 Entity Detail Summary of Number of Entities by 2nd Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,434     3,484    3,480     3,457    3,454     3,410    3,404
Severity 4        9        9        16       16         9        9        16       16
Total         3,585    3,585     3,630    3,626     3,588    3,585     3,564    3,558

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.

TABLE C-5 Entity Detail Summary of Time by 3rd Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    5.57     5.57     5.55     5.57
Severity 4   18.36    27.81    30.69    24.89
Total        30.42    41.57    43.10    37.72

TABLE C-6 Entity Detail Summary of Number of Entities by 3rd Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,433     3,484    3,470     3,457    3,453     3,410    3,402
Severity 4        9        9        16       16         9        9        16       16
Total         3,585    3,584     3,630    3,616     3,588    3,584     3,564    3,556

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.


TABLE C-7 Entity Detail Summary of Time by 4th Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    5.97     5.97     5.95     5.97
Severity 4   18.36    27.81    30.69    24.89
Total        30.82    41.97    43.50    38.12

TABLE C-8 Entity Detail Summary of Number of Entities by 4th Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,433     3,484    3,469     3,457    3,453     3,410    3,402
Severity 4        9        9        16       16         9        9        16       16
Total         3,585    3,584     3,630    3,615     3,588    3,584     3,564    3,556

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.



TABLE C-9 Entity Detail Summary of Time by 5th Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    5.17     5.17     5.15     5.17
Severity 4   18.36    27.81    30.69    24.89
Total        30.02    41.17    42.70    37.32

TABLE C-10 Entity Detail Summary of Number of Entities by 5th Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,433     3,484    3,478     3,457    3,454     3,410    3,404
Severity 4        9        9        16       16         9        9        16       16
Total         3,585    3,584     3,630    3,624     3,588    3,585     3,564    3,558

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.


TABLE C-11 Entity Detail Summary of Time by 6th Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    5.57     5.57     5.55     5.57
Severity 4   18.36    27.81    30.69    24.89
Total        30.42    41.57    43.10    37.72

TABLE C-12 Entity Detail Summary of Number of Entities by 6th Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,433     3,484    3,474     3,457    3,453     3,410    3,402
Severity 4        9        9        16       16         9        9        16       16
Total         3,585    3,584     3,630    3,620     3,588    3,584     3,564    3,556

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.


TABLE C-13 Entity Detail Summary of Time by 7th Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    6.47     6.47     6.45     6.47
Severity 4   18.36    30.42    30.69    24.89
Total        31.32    45.08    44.00    38.62

TABLE C-14 Entity Detail Summary of Number of Entities by 7th Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,433     3,484    3,437     3,457    3,452     3,410    3,401
Severity 4        9        9        16       14         9        9        16       16
Total         3,585    3,584     3,630    3,581     3,588    3,583     3,564    3,555

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.



TABLE C-15 Entity Detail Summary of Time by 8th Std Order
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    6.77     6.78     6.75     6.77
Severity 4    0.49    45.85    20.77    21.54
Total        13.75    60.82    34.38    35.58

TABLE C-16 Entity Detail Summary of Number of Entities by 8th Std Order
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,390     3,484    3,355     3,457    3,388     3,410    3,386
Severity 4        9        1        16        2         9        3        16        5
Total         3,585    3,533     3,630    3,487     3,588    3,513     3,564    3,529

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.

C-4 The Results of Design of Experiment (DOE)
The results of the experimental design for Throughput and for Time in resolving
incidents of severity 3 are shown in Figure C-3 and Figure C-4, respectively.


FIGURE C-3 DOE Results of Throughput




FIGURE C-4 DOE Results of Time in Resolving Incidents of Severity 3

C-5 Simulation Results for the Comparison Test
The simulation results provided for the comparison test were run for 4
replications. Tables C-17 to C-20 show the entity detail summaries of Time in
resolving incidents and of Number of Entities.

TABLE C-17 KMRCA IT Service Desk; Entity Detail Summary of Time
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    2.51     2.28     2.55     1.84
Severity 2    3.98     5.91     4.31     5.43
Severity 3    6.77     6.77     6.75     6.77
Severity 4    0.49    45.85    37.95    21.54
Total        13.74    60.82    51.55    35.57

TABLE C-18 KMRCA IT Service Desk; Entity Detail Summary of Number of Entities
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       25       25        20       20        19       19        29       29
Severity 2      117      117       110      110       103      103       109      109
Severity 3    3,434    3,397     3,484    3,360     3,457    3,375     3,410    3,388
Severity 4        9        1        16        2         9        4        16        5
Total         3,585    3,540     3,630    3,492     3,588    3,501     3,564    3,531

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.


TABLE C-19 Typical IT Service Desk; Entity Detail Summary of Time
Time in Resolving Incidents (minutes)
             Rep 1    Rep 2    Rep 3    Rep 4
Severity 1    1.61     1.15     2.15     2.75
Severity 2    5.92     4.97     4.99     4.26
Severity 3    7.28     6.96     7.61     7.11
Severity 4   18.99    22.11    24.58    25.22
Total        33.79    35.19    39.33    39.34

TABLE C-20 Typical IT Service Desk; Entity Detail Summary of Number of Entities
                  Rep 1             Rep 2             Rep 3             Rep 4
             Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out    Nr. In  Nr. Out
Severity 1       27       27        20       20        21       21        21       21
Severity 2      100      100       105      104       117      111        88       88
Severity 3    3,017    2,994     2,898    2,889     3,085    2,979     2,986    2,973
Severity 4       10        9         7        7        10        5         9        9
Total         3,154    3,130     3,030    3,020     3,233    3,116     3,104    3,091

Note : ‘Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.

C-6 Summary of Comparison Test Results
The statistical t-test results comparing the KMRCA IT service desk and the
Typical IT service desk on the significant variables are shown in Table C-21.

TABLE C-21 Summary of Comparison Test Results
Replication KMRCA Typical S1-T S1-K S2-T S2-K S3-T S3-K S4-T S4-K
1 3,540 3,130 1.61 2.51 5.92 3.98 7.29 6.77 18.99 0.49
2 3,492 3,020 1.15 2.28 4.97 5.91 6.97 6.77 22.11 45.85
3 3,501 3,116 2.15 2.55 4.95 4.31 7.63 6.75 24.58 37.95
4 3,531 3,091 2.75 1.84 4.26 5.43 7.10 6.77 25.22 21.54

Note that S1-T, S1-K, S2-T, …, S4-K denote the average time in resolving
incidents of Severity 1 for the Typical IT service desk, of Severity 1 for the
KMRCA IT service desk, of Severity 2 for the Typical IT service desk, and so on,
through Severity 4 for the KMRCA IT service desk, respectively.
Below are the t-test results, which were generated by Minitab 15.
a) Throughput; Paired T-Test and CI: KMRCA, Typical

Paired T for KMRCA - Typical

N Mean StDev SE Mean
KMRCA 4 3516.0 23.1 11.6
Typical 4 3089.3 48.9 24.5
Difference 4 426.8 37.6 18.8


95% CI for mean difference: (366.9, 486.6)
T-Test of mean difference = 0 (vs not = 0): T-Value = 22.68 P-Value = 0.000


b) Time in resolving of Severity 1; Paired T-Test and CI: S1-T, S1-K

Paired T for S1-T - S1-K

N Mean StDev SE Mean
S1-T 4 1.915 0.691 0.345
S1-K 4 2.295 0.326 0.163
Difference 4 -0.380 0.912 0.456


95% CI for mean difference: (-1.832, 1.072)
T-Test of mean difference = 0 (vs not = 0): T-Value = -0.83 P-Value = 0.466


c) Time in resolving of Severity 2; Paired T-Test and CI: S2-T, S2-K

Paired T for S2-T - S2-K

N Mean StDev SE Mean
S2-T 4 5.025 0.682 0.341
S2-K 4 4.907 0.912 0.456
Difference 4 0.118 1.457 0.729


95% CI for mean difference: (-2.201, 2.436)
T-Test of mean difference = 0 (vs not = 0): T-Value = 0.16 P-Value = 0.882


d) Time in resolving of Severity 3; Paired T-Test and CI: S3-T, S3-K

Paired T for S3-T - S3-K

N Mean StDev SE Mean
S3-T 4 7.248 0.287 0.143
S3-K 4 6.765 0.010 0.005
Difference 4 0.483 0.296 0.148


95% CI for mean difference: (0.012, 0.953)
T-Test of mean difference = 0 (vs not = 0): T-Value = 3.26 P-Value = 0.047


e) Time in resolving of Severity 4; Paired T-Test and CI: S4-T, S4-K

Paired T for S4-T - S4-K

N Mean StDev SE Mean
S4-T 4 22.7 2.8 1.4
S4-K 4 26.5 20.1 10.0
Difference 4 -3.73 18.64 9.32


95% CI for mean difference: (-33.39, 25.93)
T-Test of mean difference = 0 (vs not = 0): T-Value = -0.40 P-Value = 0.716
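

The Minitab results above can be cross-checked with any statistics package. The following is a minimal sketch using SciPy (assuming SciPy is available) together with the throughput figures from Table C-21; it reproduces the paired t-test in (a).

from scipy import stats

# Throughput per replication from Table C-21.
kmrca   = [3540, 3492, 3501, 3531]
typical = [3130, 3020, 3116, 3091]

result = stats.ttest_rel(kmrca, typical)
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# Prints roughly t = 22.7 and p = 0.0002, matching the Minitab output in (a),
# where the P-Value is reported as 0.000 after rounding.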









BIOGRAPHY

Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : Knowledge Management System Improvement towards
Service Desk of IT Outsourcing in Banking Business
Major Field : Information Technology

Biography
Padej worked as a senior process architect at IBM Solutions Delivery Company, a
strategic IT outsourcing company, working on site at KASIKORNBANK from April
2004 to May 2007. His role as process architect was to implement several
ITIL-based processes for the outsourcing of KASIKORNBANK, in particular the IT
service desk function within the incident management process. Before joining
IBM, from October 1996 to March 2004, he worked as a quality assurance manager
at SIAMTELTECH Computer Company, an IT system integrator focusing on the areas
of banking, financial institutions, and telecommunications, serving customers
such as CAT and TOT.
For his education and certification, he earned a Bachelor of Engineering degree
in electronics and telecommunication engineering from King Mongkut’s Institute
of Technology Ladkrabang (KMITL) in 1991 and a Master of Engineering degree in
industrial engineering management from King Mongkut’s Institute of Technology
North Bangkok (KMITNB) in 1996. He was certified in ITIL Foundation in 2004.
Furthermore, he holds a license for professional practice as an associate
electrical engineer (telecommunication and electronics), and he is a member of
the Council of Engineers (COE) and of the Engineering Institute of Thailand
under H.M. the King's Patronage (EIT).
His research interests include IT service management (ITSM) for improving
organizational IT outsourcing, simulation studies, knowledge management systems
for the IT service desk, text mining discovery algorithms and classification,
and IT disaster recovery planning (DRP).
Padej’s home address is 23/123 Ladprao Road, Chankasem, Chatujak, Bangkok 10900,
Thailand, and his email is padejp@gmail.com.

Name Thesis Title

: :

Mr. Padej Phomasakha Na Sakolnakorn Knowledge Management System Improvement towards Service Desk of IT Outsourcing in Banking Business

Major Field

:

Information Technology King Mongkut’s University of Technology North Bangkok

Thesis Advisor Co-Advisor

: :

Assistant Professor Dr. Phayung Meesad Dr. Gareth Clayton 2007 Abstract

Academic Year :

In business, knowledge is an organizational asset that enables corporations to sustain competitive advantages. In addition to increasing the demands of IT outsourcing to deliver world-class services, the Information Technology Infrastructure Library (ITIL) is a key concept to provide the high quality service, and the IT service desk is a crucial function for a whole concept of IT service management. Three current problems include 1) technical staff turnover is very high; 2) more than sixty percent of all resolving time is spent to resolve the repeat incidents; and 3) the assigned resolver group to deal with the incident may be inaccurate due to human error. Thus, this thesis proposes a framework for a knowledge management system with root cause analysis so, called KMRCA IT service desk system and evaluates its performance. The system is composed of two main functions, a searching knowledge function, and an automatic assignment function. This thesis evaluated the performance of the searching knowledge function using a simulation study and concluded that the system could significantly reduce time in resolving incidents. Moreover, my thesis enhances the framework to select the most suitable resolver group to deal with the incident using Text mining discovery methods. The ID3 decision tress method could increase productivity and decrease reassignment turnaround times. Furthermore, the rules resulting from the rule generation from the decision tree could be properly kept in a knowledge database in order to support and assist with future assignments. (Total 153 pages) Keywords : knowledge management, service desk, outsourcing, text mining, ITIL, performance evaluation, simulation study, and decision tree. ______________________________________________________________ Advisor ii

: นายเผด็จ พรหมสาขา ณ สกลนคร : ระบบการจัดการความรูเพื่อปรับปรุงการใหบริการแกไข ปญหาไอทีจากหนวยงานภายนอกใหกับธุรกิจธนาคาร สาขาวิชา : เทคโนโลยีสารสนเทศ มหาวิทยาลัยเทคโนโลยีพระจอมเกลาพระนครเหนือ อาจารยที่ปรึกษาวิทยานิพนธหลัก : ผูชวยศาสตราจารย ดร. พยุง มีสัจ อาจารยที่ปรึกษาวิทยานิพนธรวม : ดร. การเร็ธ เคลตัน ปการศึกษา : 2550 บทคัดยอ ในเชิงธุรกิจไดกลาวถึงความรูวาเปนสินทรัพยที่สําคัญขององคกรที่ผลักดันใหเกิดความ ไดเปรียบทางการแขงขันเชิงกลยุทธ สําหรับการจัดจางบริหารจัดการระบบงานสารสนเทศจาก ภายนอกองคกรที่ใหบริการอยางมีคุณภาพโดยที่ ไอทิล (ITIL) เปนปจจัยสําคัญ ซึ่งการใหบริการ แกไขปญหา นั้นเปนสวนที่สําคัญสําหรับการบริหารจัดการของการใหบริการดานสารสนเทศ จากปญหาหลักสามประการคือ 1) ผูชํานาญเฉพาะดานมีอัตราการลาออกสูง 2) มากกวา60% ของเวลาทั้งหมดถูกใชไปกับการแกไขปญหาที่เกิดซ้ําและ 3) การมอบหมายงานที่ไมเหมาะสม เนื่องจากความผิดพลาดของมนุษย ดังนั้นงานวิจยนี้ไดนําเสนอขอบขายงานของระบบการจัดการ ั ความรูกับการแกไขปญหาทีตนเหตุ และทําการประเมินผลความสําเร็จของระบบ KMRCA IT ่ service desk โดยระบบมีการทํางานหลัก 2 สวนคือ การคนหาความรู และ การมอบหมายงานแบบ อัตโนมัติ การวิจัยไดประเมินผลความสําเร็จของการคนหาความรูโดยการจําลองสถานการณ และ ผลสรุปแสดงใหเห็นวาระบบที่นําเสนอนันไดลดเวลาแกไขปญหาอยางมีนัยสําคัญ ยิ่งไปกวานั้นได ้ ปรับปรุงขอบขายของงานวิจัยใหครอบคลุม การมอบหมายงานใหกับกลุมของผูแกไขปญหาแบบ อัตโนมัติโดยใชเทคนิคการทําเหมืองขอความ เพื่อหาวิธที่เหมาะสมกับระบบโดยใช ตนไมตัดสินใจ ี ซึ่งผลของตนไมตัดสินใจแบบ ID3 นั้นใหผลที่มีความถูกตองมากกวา และไดนําไปสูการมอบหมาย ผูแกไขปญหาที่เหมาะสมในแตละปญหาแบบอัตโนมัติ นอกจากนี้ผลลัพธจากกฎที่ไดจากตนไม ตัดสินใจนําไปจัดเก็บไวในฐานขอมูลของความรู เพื่อชวยสนับสนุนการมอบหมายในครั้งตอไป (วิทยานิพนธมีจํานวนทั้งสิ้น 153 หนา) คําสําคัญ : การจัดการความรู การใหบริการแกไขปญหา การบริหารจากภายนอกองคกร ไอทิล เหมืองขอความ การประเมินสมรรถนะ การจําลองสถานการณ และตนไมตัดสินใจ ___________________________________________________ อาจารยที่ปรึกษาวิทยานิพนธหลัก
iii

ชื่อ ชื่อวิทยานิพนธ

This thesis could not be complete without my wife and all people in my family particular Dad and Mom who have supported me since I was born. would like to thank at points on my advisor. Assist. Dr. Choochart Haruechaiyasak whose knowledge and technical suggestions about text mining discovery algorithms in particular word extraction and machine learning to facilitate the approach of automatic resolve group assignment in place of the IT service desk agent’s tasks. He has been actively interested in my work and has always been available to advise me. Gareth Clayton whose advances research methodology. Assist. Prof.ACKNOWLEDGEMENTS I wish to express my gratitude to a number of people who became involved with this thesis. Prof. Thanks to Taweesak Suwanjaritkul and Pisit Thongngok whose knowledge with regard to Visual Basic programming and SQL server 2005 database management that made the prototype of KMRCA IT service desk system worked effectively. Utomporn Phalavonk whose advocate of scheduling and recommendations of graduate college’s regulations made me complete in my planning and performing administrative tasks. Dr. particular statistics and simulation techniques providing to me both concepts and real practices with consciously and unconsciously ideas how good is good enough in experimental design should be taken together that make him a great mentor. I would like to show my faithful thank to Assoc. Phayung Meesad. I am very grateful for his motivation. Foremost. I would like to sincerely thank to Dr. and Dr. I would like to thank Dr. Moreover. I would like to thank my advisors. Prof. I. enthusiasm. and immense knowledge. He also contributes on my work to be onboard of international publishing. Thanks to members of IT admin staff whose works made the most of my administrative documents done during my study at the university. Phayung whose support and guidance made my thesis work possible. Gareth Clayton for providing me with the opportunity to complete my PhD thesis at King Mongut’s University of Technology North Bangkok. especially. Padej Phomasakha Na Sakolnakorn iv . Dr.

1 Knowledge Management 2.1 Background and Statement of the Problem 1.5 Utilization of the Study Chapter 2 Literature Review 2.3 Constructing an Instrument for Data Collection 3.7 Decision Support System 2.2 Root Cause Analysis 2.8 Classification trees 2.2 Objectives 1.TABLE OF CONTENTS Page Abstract (in English) Abstract (in Thai) Acknowledgements List of Tables List of Figures Chapter 1 Introduction 1.4 Scope of the Study 1.4 ITIL-Based IT Service Desk Function 2.5 Technologies for Service Desk 2.2 Information Collection and Requirement Analysis 3.5 Methodology of Automatic Resolver Assignment 3.3 Case-Based Reasoning 2.6 IT Service Desk Outsourcing 2.4 The Proposed KMRCA IT Service Desk Framework 3.6 Summary ii iii iv vii viii 1 1 3 3 3 5 7 7 10 11 14 22 23 24 25 28 31 31 32 34 39 53 59 v .1 Research Process 3.9 Summary Chapter 3 Methodology 3.3 Hypothesis 1.

1 Conclusion 5.TABLE OF CONTENTS (CONTINUED) Page Chapter 4 Experimental Results 4.1 The Results of Text Mining Discovery Methods of Automatic Assign Function 4.4 Summary Chapter 5 Conclusion 5.2 The Results of Design of Experiment 4.3 Future Work References Appendix A Appendix B Appendix C Biography 61 63 67 69 71 71 72 73 75 81 89 129 153 61 vi .3 The Results of Performance Evaluation 4.2 Discussion 5.

LIST OF TABLES Table 3-1 3-2 3-3 3-4 3-5 3-6 3-7 4-1 The Rate of Incident Calls during Time in Business Day and Holiday Percentage of Incident Calls by Severity Classification of Calls by Incident Category Summary of Probability Distributions for Computer Simulation Comparison of Square Error by Function A Good-of-fit Test of Time in Resolving Incidents by Severity The Number of Incidents of System Types and Resolver Groups The Number and Percentage of Correct Incident for Various Types of Decision Trees 4-2 4-3 4-4 4-5 4-6 4-7 4-8 4-9 The Speed Compared with the Accuracy of Classification Assigned Factor Values for Two-Level 2 Full Factorial Design of DOE for Responses Y of O1 Coded Design Matrix of O1 Absolute Value of Coefficients for Average O1 and P-Value Absolute Value of Coefficients for Average O4 and P-Value Comparison Tests of KMRCA and Typical IT Service Desk Systems 3 Page 33 33 34 35 36 38 53 62 62 64 65 65 66 66 68 Comparison Outputs of KMRCA and Typical IT Service Desk Systems 68 vii .

LIST OF FIGURES Figure 2-1 2-2 2-3 2-4 2-5 2-6 2-7 3-1 3-2 3-3 3-4 3-5 3-6 3-7 3-8 3-9 The Case-Based Reasoning Cycle Classification Hierarchy of Case-Based Reasoning Applications Incident Management Process Overview The Incident Life Cycle First. and Third Line Supports Relationship between Incidents Handling Incident Work-arounds and Resolutions Input Analyzed Results Probability Plot of Time between Arrivals Probability Plot for Resolving Time by Severity A Typical IT Service Desk Outsourcing Overview Information Flow of IT Service Desk A Conceptual Model of IT Service Desk System A Proposed Framework of KMRCA IT Service Desk System Information Flow of KMRCA IT Service Desk System KMRCA IT Service Desk Process Page 12 13 15 17 18 19 19 36 37 39 40 41 42 43 44 45 46 48 49 51 52 53 54 54 56 66 66 3-10 Search Knowledge Procedure 3-11 Typical IT Service Desk and KMRCA IT Service Desk 3-12 The System Development Life Cycle (SDLC) 3-13 A Sample Display of Search Knowledge and Input Resolution 3-14 A Sample Display of Searching Results 3-15 A Sample Display of Assign Resolver Group 3-16 KMRCA IT Service Desk with Automatic Assignment Function 3-17 A Process of Automatic Resolver Group Assignment 3-18 Processes of Model Approach for Automatic Assignment 4-1 4-2 Pareto of Coefficients for Average Response Y of O1 Pareto of Coefficients for Average Response Y of O4 viii . Second.

As the case study. organizing. Consequently. IT service desk as a second level support (SLS) will resolve the assigned incidents from the FLS by ensuring that the incident is in the outsourcing scope and still owned.CHAPTER 1 INTRODUCTION 1. The bank takes ownership of the help desk agent called the first level support (FLS) which acts as more than just an interface for internal users and external customers. sharing. 2. 8].1 Background and Statement of the Problem Knowledge management is the business process of managing the organization’s knowledge by means of systematic and organizational specific processes for acquiring. takes over IT functions formerly conducted within the boundaries of the firm [5. . applying. The IT service desk is a crucial function of incident management driven by alignment with the business objectives of the enterprise that requires IT support. IT outsourcings are understood as a process in which certain service providers. The primary objective of the IT service desk is to resolve incidents related to IT in the organization. 6]. but also to create value [1. the banks in Thailand also need to reduce costs and to improve their quality of services by strategic information technology (IT) outsourcing such as data processing and system development to the third parties. it appears that the IT service desk outsourcing’s role is not quite a single point of contact [9]. The ITIL defines a set of the best practice processes to align IT services to business needs and constitutes the framework for IT service management [7. 4]. Due to the rapid change in technology and competition among global financial institutions. tracked. external to organizations. 3. and renewing both tacit knowledge and explicit knowledge by employees not only to enhance the organizational performance. and monitored throughout its life cycle. sustaining. balancing theirs operations and achieving desired service level targets while IT Infrastructure Library (ITIL) has become a strategic tool for efficiency and effectiveness of IT outsourcing providers to provide a competitive approach.

1. Because the resolver group assignments are still performed manually by IT service desk agents.2 For the technologies regarding service desk.1 The employee turnover is very high.2 More than sixty percent of all resolving time is spent to resolve the repeat incident [13]. Most efforts at improving service desk performance have been to make the current system more efficient through applications of information technologies.1. voice response unit (VUR) and interactive voice response unit (IVR) [11].1. The last problem of underlying for the incorrect resolver group assignment can be resolved by means of automatic assignment approach. For the reason that service desk staff store significant knowledge regarding the systems such as business processes. particularly for technical employees [12].3 The assigned resolver group to deal with the incident may be mistaken due to human errors. In resolving the incident effectively. 1. and support teams. many organizations have focused on computer telephony integration (CTI). Those technologies do not address the problem of resolving performance dropped due to incorrect assignments. The first of two problems can be resolved by keeping employee’s knowledge along with the organization by knowledge management approach and to conduct the way to prevent the recurring incidents by using root cause analysis. The major hardware technologies are as follows: automatic call distributor (ACD). and technologies and if they leave their knowledge often goes with them. This thesis identifies three problems as follows: 1. The activities are becoming the primary internal IT service desk functions of the outsourcing and they are the potential to provide the competitive advantages. applications. The basis of CTI is to integrate computers and telephones so that they can work together seamlessly and intelligently [10]. These technologies are used to make the existing process more efficient by focusing on minimizing the agent’s idle time.1. The Text mining discovery methods can find out the suitable methods such as decision trees to support the correct assign and the rule resulting from the rule generation from the decision tree could be properly kept in a knowledge database in order to support and assist with further assignments. . IT service desk agents must be very knowledgeable of their service supports.

if the p-value is small then the result is called statistically significant and the null hypothesis is rejected in favour of the alternative hypothesis. 1. Incorrectly rejecting the null hypothesis is a Type I error.2. the defined hypothesis of the alternative hypothesis (H1) is the average time in resolving incidents for all calls except for critical calls will be lower in KMRCA IT service desk system than the currently Typical IT service desk system and null hypothesis (H0) is that the average time in resolving incident of the both systems are the same.1 This study focuses on the performance evaluation in terms of throughput and average time taken in resolving incidents. H0 : µ1 = µ2 .2. The statistical hypothesis test approach is to calculate the probability that the observed effect will occur if the null hypothesis is true. respectively. .1 To propose a framework for knowledge management system with root cause analysis based on ITIL best practice for IT service desk outsourcing in the banking business called KMRCA IT service desk system. If not. Two rival hypotheses are compared by a statistical hypothesis test.2 Objectives The objectives of this dissertation are as follows: 1.4.2 To evaluate the performance of the KMRCA IT service desk system before-and-after usage by using experimental design and simulation study. 1. incorrectly failing to reject it is a Type II error. 1. and H1 : µ1 < µ2 . Therefore.4 Scope of the Study The scope of this dissertation is as follows: 1.3 1. then the null hypothesis is not rejected. where µ1 and µ2 are the average time in resolving incidents of KMRCA IT service desk system and the average time in resolving incidents of Typical IT service desk.3 Hypothesis For the reason that the performance of KMRCA IT service desk system will be higher than the Typical IT service desk system in terms of speed in resolving incidents. In other words.

IT service desk outsourcing includes IT service desk agents and five resolver groups.4.4. Random Forest.4. NWS (network service).2 The performance evaluation is to compare before-and-after employment KMRCA IT service desk system by using simulation study within Arena[56] software package and design of experiment of 23 factorial design. For performance evaluation using simulation study. a sample of incident data collected from Tivoli CTI system of IT service desk outsourcing of selected 12. including EOS (enterprise operating service).4. 1. OS-EC (operation service). a searching knowledge function based on case-based reasoning.9 For the study of automatic resolver assign.8 For performance evaluation. J48. 1.4 1.198 calls (prime time on the working days) for 4-month during April to July 2006. 1.4.7 The resolver groups are always available when they receive the assigned incidents from the IT service desk agents. Random Tree and REPTree. 1.198 calls during the prime time on the working days since the aim needs the simulation output as real as possible. Incident management process and problem management process. 1. .440 cases for 4-month during April to July 2006.3 For the framework. but determine to assign correctly as relevant symptoms of the incident. Decision stump. a sample of incident data collected from Tivoli CTI system of IT service desk outsourcing of all 14. and an automatic resolver group assign function based on the method generating from text mining discovery algorithms.6 The text mining discovers algorithms is to find out the strongest methods by comparing seven decision trees within WEKA [65] machine learning. system development life cycle (SDLC) method. ID3.5 The proposed KMRCA IT service desk system developed based on system analysis.4. Obviously. the sample sizes are different from each other because there are on the different sides of the study objectives.4. NBTree.440 cases because the main purpose of the study requires all data to execute to the system no matter what time concerns.4. and VEN (vendor service). In addition. the system composes of two main functions. IE-AMS (application management service). a sample size is selected 12. Another of automatic resolver group assignment.4 ITIL-based KMRCA IT service desk processes include IT service desk function. 1. 1. a sample size is all 14.

5. case based reasoning (CBR). Finally. ITIL-based IT service desk. 1. 1. but also the knowledge acquisitions that are the rules resulting from the rule generating from the decision tree method. including knowledge management (KM).5.1 The Performance evaluation using simulation study and experimental design can be adopted to find out the specification of the knowledge management system.5.5 Utilization of the Study 1. The acquired knowledge can be kept to support and assist to the further assignments.5 1. technologies for IT service desk. 1. 1. IT service desk outsourcing. conclusion and future work are presented in Chapter 5. The details of the proposed model frameworks are illustrated in Chapter 3. For example.2 The simulation study is also used to evaluate KMRCA IT service desk system’s performance without interrupting the daily IT service desk’s operations. Moreover. Chapter 2 describes literature review. decision support system (DSS) for resource assignments and classification trees. the performance evaluation of KMRCA IT Service Desk can be applied to the other service desk functions to identify the KMRCA specifications and then it can be modified according to the organization’s requirements.5.5.3 The ITIL-based IT service desk function in incident management and problem management processes can be adopted and adapted to the organizational outsourcing to deal with the ITIL certification. This thesis organizes the remainders as follows. .4 The data preparation process and text mining discovery algorithm method can be applied to the empirical studies that need data pre-processing and transforming the results to find the strongest method for the classification approach. the way of simulation can be applied in several industries’ processes in time being concern in order to manage constrictions of the system. Chapter 4 gives results of the study and discussion. root cause analysis (RCA).5 The suitable decision tree-based in the function of IT service desk system provides not only automatic resolver group assign.

the increase of one does not come at the decrease of the other.1 Knowledge Management The study of knowledge management started from Polanyi’s Tacit Dimension. The IT service desk outsourcing is describes in Section 2. including knowledge management. The field of knowledge management has also been developed by the experience and philosophy of Eastern society. and focused on the definition of knowledge but not on the systematic effort of managing it [14]. 2. Firstly. is learned through a process of personal experience. and technologies for service desks. Drucker [15] documented the transformation from a capitalist to a Knowledge Society. or labor. root cause analysis.5 describe ITILbased service desk function. Peter Drucker [15] is among the first who advocated the advent of a knowledge society.4 and 2. tacitness and explicitness are distinct dimensions. . 2.CHAPTER 2 LITERATURE REVIEW This chapter describes the review of several literatures with regard to the study.1. and then to know.2. noting that the foremost economic resource is no longer capital. the summary is shown in Section 2.3. since tacit knowing is an essential element of any kind of knowledge and is acquired through personal experience called indwelling. The conceptualization of KM was not developed until knowledge became central to production and innovation in the 1990s. Sections 2. and case-based reasoning which are illustrated in Sections 2. Decision support system considering resource assignment and Classification trees are illustrated in Sections 2.8.6. Polanyi’s work was situated in a philosophical context.7 and 2.9. the ability to identify the outside objects. any effort to achieve absolute detachment. land. Rather. which began shortly after World War II. In the Post-Capitalist Society [15]. Secondly. His analysis emphasized several key concepts. it is and will be knowledge [15]. Moreover. the objective of knowledge is misdirected and self defeating. and 2. Thirdly.

Knowledge management (KM) is the process of managing the organization’s knowledge by means of systematic and organizational processes conducted by employees to enhance the organizational performance and create value [1. the knowledge management is about acquisition and storage of employees' knowledge and making the knowledge accessible to other employees within the organization [3. but includes organizational issues. combination. 2. and internalization processes by the SECI model that becomes popular in knowledge management today. is a pioneer work in mapping explicit and implicit knowledge. 18. Organizations should therefore seek and share a combination of tacit and explicit knowledge with suppliers and other parties in the value chain to satisfy customer needs in a highly competitive environment. the SECI processes of knowledge management may be considered comparable to the project management for organizing a project and guiding it to success [16]. as well as individual. 20]. This SECI model or SECI processes explain the organizational knowledge creation theory and serve as a method of understanding how an organization creates a new product. For the organizations.8 Nonaka and Takeuchi’s Knowledge-Creating Company [1]. They introduced the socialization. and organizational knowledge into one matrix describing called the dynamics of knowledge creation. based on experience in Japanese companies. on the other hand. new process. This concept is easily understood by focusing on the project in the system solution business in which creation of a new product or new process that leads to success. group. 3]. has been driven by practices and development in information and data management [4]. intranet and internet. or new organisation structure. assumes information resource management together with the cultural change which is important in the KM implementation process [17]. In order to find out the problem or solution. For this reason. KM is more than just the advantage of technology. externalization. The development of KM. Nonaka and Takeuch [1] have extensively studied knowledge in the organization and developed a model that . 19. it recreates a new environment while producing new knowledge or information are from the inside organization. Though many success cases in business activity indicate efficient and effective implementation of SECI an innovative organization does not simply solve the existing problems or process external information for adapting to environmental changes.

a company changes individual's knowledge into organizational knowledge [21]. However. Tacit knowledge is defined as personal. (1) knowledge capturing or knowledge discovery. Organizational knowledge is knowledge held by the organization. and (5) knowledge transfer which are working in cycle and the knowledge sharing and knowledge transfer are conveyed to the community of practice (CoP) which people know how to use the real knowledge. Gray [25] called this knowledge creation. much of the knowledge is from experiential learning [23. A challenge is how to transfer the knowledge gained by individuals into organizational knowledge. The organization maintains the organizational knowledge in organizational knowledge resources which are operated on by human or computer processes that manipulate the knowledge to create value for the organization [22]. “a process that amplifies the knowledge created by individuals and crystallizes it as part of the knowledge network of the organization. For the service desk. (4) knowledge sharing. Along the horizontal axis they defined two classes of problems as new problems and previously solved problems. Through knowledge management. the IT is used to support only knowledge creation and knowledge inventory that are conducted to the organizational memory (OM) [9]. the relevant knowledge management approach is of problem solving. (2) knowledge creation. Explicit knowledge is factual and easily codified so that it can be formally documented and transmitted. Along the vertical axis they define two processes of problem recognition and problem solving.” In a service desk environment.9 describes knowledge as existing in two forms. The primary function of the service desk is problem solving of both new and previously solved problems. When solving new problems. The framework was defined four cells according to the type of problem and the process supported. Nonaka and Takeuchi [1] defined organizational learning as. . (3) knowledge inventory or storing knowledge. 24]. Phomasakha and Meesad [9] reviewed several knowledge management system (KMS) from several literatures regarding knowledge management systems and proposed the KMS compose of five processes. Gray [25] presented a framework that categorizes knowledge management according to a problem solving perspective. context-specific knowledge that is difficult to formalize and communicate. Solving previously solved problems was called knowledge acquisition.

Several characteristics can be defined that will make a KMS successful in the service desk. The KMS must be able to gather knowledge from humans and other sources, and it is designed to be incorporated into the daily operation of the service desk to ensure high utilization and maintenance of the knowledge stores [30]. In an environment of IT outsourcing in the banking business, IT service desk outsourcing is a crucial function of an IT outsourcing provider who takes over IT functions from its customer, the bank. The purpose of the IT service desk outsourcing is to support customer services on behalf of the bank's technology-driven business goals. The role of the IT service desk is to ensure that IT incident tickets are owned, tracked, and monitored throughout their life cycle. However, the bank desires service level targets based on a service level agreement (SLA) to control the IT service desk operations [26]. This study therefore develops not only a knowledge management system (KMS), but also embeds RCA into the system in order to prevent recurring incidents in the KMRCA IT service desk system.

2.2 Root Cause Analysis
A root cause analysis (RCA) is a structured investigation that aims to identify the true cause of a problem and the actions necessary to eliminate it [27]. RCA is a process for identifying causal factors using a structured approach, with techniques designed to provide a focus on identifying and resolving problems. Root cause analysis identifies and prevents future errors in a proactive mode [28]. The RCA also provides objectivity for problem solving: it gathers contributing incidents, predicts other problems, assists in developing solutions, and focuses attention on preventing recurrences. Root cause analysis will tell the real reasons for problems [29]. The results of RCA, when eliminated or changed, will prevent the recurrence of the specific or similar problems, and therefore the benefits of the RCA are to improve service level agreement (SLA) attainment and to enhance quality of services as well as customer satisfaction. The techniques of root cause analysis are often applied as input to the decision making process. Moreover, the knowledge-based library of RCA models could be hierarchically structured into interconnected failure trees; abnormalities in process operations and output quality can originate from abnormalities in equipment or in process conditions, possibly due to basic failures [31].

2.3 Case-Based Reasoning
Case-Based Reasoning (CBR) is widely used in resolving incidents; it is able to resolve a new incident by remembering a previous similar situation and by reusing the information and knowledge of that situation [32, 33]. The reasoner resolves new incidents by adapting relevant cases from a case library [34]. CBR has proven itself to be a methodology suited to solving "weak theory" incidents where it is difficult or impossible to elicit first-principle rules from which solutions may be created [40].

In 1977, Schank and Abelson's [36] work brought CBR from research into cognitive science [37]. They proposed that general knowledge about situations be recorded as scripts that allow us to set up expectations and perform inferences [36]. Schank [36] then investigated the role that the memory of previous situations and situation patterns (scripts, MOPs) plays in incident solving and learning [36]. At almost the same time, Gentner [38] investigated analogical reasoning, which is related to CBR, while Carbonell [39] explored the role of analogy in learning and plan generalization [38, 39]. Subsequently, increasing numbers of research papers and applications were published, and CBR has grown into a field of widespread interest.

In resolving incidents, CBR uses a database of incidents to resolve new incidents. The database can be built through the knowledge management process, or it can be collected from previous cases. More specifically, each case describes an incident and the resolution to that incident. According to Doyle et al. [35], Case-Based Reasoning is different from other artificial intelligence (AI) approaches in the following ways: (a) traditional AI approaches rely on general knowledge of an incident domain and tend to solve incidents from first principles, while CBR systems solve new incidents by utilizing specific knowledge of past experiences; (b) CBR supports incremental, sustained learning. When an incident is resolved, the case-based reasoner can add the incident description and the solution to the case library. After CBR solves an incident, it makes the case available for future incidents. The new case, in general represented as a pair of incident and resolution, is immediately available and can be considered as a new piece of knowledge. In this way, CBR can learn from previous experiences.

2.3.1 The CBR Cycle
The CBR process can be represented by a schematic cycle. Aamodt and Plaza [33] described CBR as a cyclical process comprising four REs, as follows:
1) RETRIEVE the most similar cases: the case-based reasoner searches the case base to find the case most approximate to the current situation.
2) REUSE the cases to attempt to solve the incident: this process includes using the retrieved case and adapting it to the new situation, giving a suggested solution.
3) REVISE the proposed solution if necessary: since the proposed solution could be inadequate, this process can correct the first proposed solution.
4) RETAIN the new solution as a part of a new case: retaining the repaired case and incorporating it into the case base.

FIGURE 2-1 The Case-Based Reasoning Cycle [33]

As shown in Figure 2-1, the CBR cycle starts with the description of a new incident, which can be solved by retrieving previous cases and reusing solved cases. At the end of this process, the reasoner might propose a solution, and if possible revise it. This process enables CBR to learn and create a new solution and a new case that should be added to the case base. It should be noted that the Retrieve process in CBR is different from a query in a database: a database only retrieves data using exact matching, while CBR can retrieve cases using approximate matching.
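To make the contrast between exact database lookup and approximate retrieval concrete, the following is a minimal sketch in Python of the Retrieve and Retain steps. The tiny case library, the word-overlap similarity measure, and the field names are illustrative assumptions, not the implementation used in this thesis.

# Minimal sketch of the CBR Retrieve and Retain steps for incident cases.
# The case library and the similarity measure are illustrative assumptions.

def similarity(a: str, b: str) -> float:
    """Approximate match: word-overlap (Jaccard) score of two descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

case_library = [
    {"incident": "user cannot print to branch printer", "resolution": "restart print spooler"},
    {"incident": "teller application not responding",   "resolution": "clear application cache and relaunch"},
]

def retrieve(new_incident: str) -> dict:
    """Return the most similar past case instead of requiring an exact match."""
    return max(case_library, key=lambda c: similarity(new_incident, c["incident"]))

def retain(new_incident: str, revised_resolution: str) -> None:
    """Retain the (possibly revised) solution as a new case in the library."""
    case_library.append({"incident": new_incident, "resolution": revised_resolution})

best = retrieve("branch printer not printing for user")
print(best["resolution"])   # reuse (and possibly revise) the suggested solution
retain("branch printer not printing for user", "restart print spooler service")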

However, this cycle rarely occurs without human intervention, which is usually involved in the Retain step. In fact, many application systems and tools act as case retrieval systems, such as some help desk systems and customer support systems.

2.3.2 A Classification of CBR Applications
Althoff [41] suggested a classification method for CBR applications, as shown in Figure 2-2. Under this classification scheme, CBR applications can be classified into two categories as follows: (a) classification tasks and (b) synthesis tasks.

FIGURE 2-2 Classification Hierarchy of Case-Based Reasoning Applications [41]

Classification tasks are very common in business and everyday life. A new case is matched against those in the case base, from which an answer can be given. The solution from the best matching case is then reused. Usually, most commercial CBR tools support classification tasks. Synthesis tasks attempt to obtain a new solution by combining previous solutions, and there are a variety of constraints during synthesis, so they are harder to implement. CBR systems that perform synthesis tasks must make use of adaptation and are usually hybrid systems combining CBR with other techniques [37].

2.4. OGC collected information on how various organisations addressed Service Management.1 IT Service Desk Function in Incident Management ITIL-based IT service desk in incident management process provides a vital day-to-day contact point between users. Other organisations found that the guidance was generally applicable and markets outside of government were very soon created by the service industry. IT services and third-party support organisations. Service Level Management (SLM) is a prime business enable for this function. customers. thus ensuring that the best possible levels of service quality and . The following is given a brief of Incident Management and Problem management processes which the details are in the Service Support book of ITIL book series. which can be incorporated within IT organisations. coherent approach. general activities.4. analysed this and filtered those issues that would prove useful to OGC and to its customers in UK central government. Strategically. It has proved its value from the very beginning. The models show the goals. inputs and outputs of the various processes. It provides a comprehensive a set of best practice for the IT service management. ITIL is wildly accepted approach IT Service Management (ITSM). 2. ITIL describes the contours of organizing service management.14 2. for internal users and external customers the IT service desk is probably the most important function in an IT organisation. For many.4 ITIL-Based IT Service Desk Function ITIL (Information Technology Infrastructure Library) documents industry best practice guidance. Initially. This has been distilled into one reliable. the IT service desk is their only window on the level of service and professionalism offered by the whole organisation or a department. promoting a quality approach to archiving business effectiveness and efficiency in the use of information system. This delivers the prime service component of customer perception and satisfaction. which is fast becoming a de facto stand used by some of the world’s leading businesses [42]. ITIL is based on the collective experience of commercial and governmental practitioners worldwide. Being a framework.2 Incident Management Process The primary goal of the Incident Management process is to restore normal service operation as quickly as possible and minimise the adverse impact on business operations.

'Normal service operation' is defined here as service operation within Service Level Agreement (SLA) limits. Examples of categories of incidents are as follows: (a) application, such as service not available, an application bug or query preventing the Customer from working, or a disk-usage threshold exceeded; (b) hardware, such as system down, automatic alert, printer not printing, configuration inaccessible, and so forth; (c) service requests, such as requests for information, advice, or documentation, or a forgotten password. A request for new or additional service (i.e. software or hardware) is often not regarded as an incident but as a Request for Change (RFC). However, practice shows that the handling of failures in the infrastructure and of service requests is similar, and both are therefore included in the definition and scope of the process of Incident Management. Figure 2-3 shows the Incident Management Process overview, which includes its inputs, outputs, and activities [42].

FIGURE 2-3 Incident Management Process Overview [42]

Inputs are as follows: (a) incident details sourced from the service desk, networks, or computer operations; (b) configuration details from the Configuration Management Database (CMDB); (c) response from incident matching against problems and Known Errors; (d) response on RFC to effect resolution for incident(s).

Outputs are as follows: (a) RFC for incident resolution and an updated incident record, including resolution and/or Work-arounds; (b) resolved and closed incidents; (c) communication to Customers; (d) management information reports.

Incident Management activities are as follows: (a) incident detection and recording; (b) classification and initial support; (c) investigation and diagnosis; (d) resolution and recovery; (e) incident closure; (f) incident ownership, monitoring, tracking, and communication.

A resolution or Work-around should be established as quickly as possible in order to restore the service to Users with minimum disruption to their work. After resolution of the cause of the incident and restoration of the agreed service, the incident is closed. The process is mostly reactive. Most IT departments and specialist groups contribute to handling incidents at some time. The service desk is responsible for monitoring the resolution process of all registered incidents; in effect, the service desk is the owner of all incidents. Incidents that cannot be resolved immediately by the service desk may be assigned to specialist groups. Figure 2-4 illustrates the activities during an incident life cycle.

FIGURE 2-4 The Incident Life Cycle [42]

Often, departments and specialist support groups other than the service desk are referred to as second or third line support groups, having more specialist skills, time, or other resources to resolve incidents. In this respect, the service desk would be first line support. Figure 2-5 illustrates how this terminology relates to the Incident Management activities mentioned in the previous paragraphs.

Throughout an incident life-cycle it is important that the incident record is maintained. This allows any member of the service team to provide a Customer with an up-to-date progress report. Example update activities include: (a) update history details; (b) modify status (e.g. 'new' to 'work-in-progress' or 'on hold'); (c) modify business impact/priority; (d) enter time spent and costs; (e) monitor escalation status. An originally reported Customer description may change as the incident progresses. It is, however, important to retain the description of the original symptoms, both for analysis and so that the complaint can be referred to in the same terms used in the initial report [42].

FIGURE 2-5 First, Second, and Third Line Supports [42]

The service desk plays an important role in the Incident Management process, as follows: (a) all incidents are reported to and registered by the service desk; where incidents are generated automatically, the process should still include registration by the service desk; (b) the majority of incidents, possibly up to 85% where the service desk is highly skilled, will be resolved at the service desk; (c) the service desk is the independent function monitoring the resolution progress of all registered incidents.

Incidents, the result of failures or errors within the IT infrastructure, result in actual or potential variations from the planned operation of the IT services. The cause of incidents may be apparent, and that cause can be addressed without the need for further investigation, resulting in a repair, a Work-around, or an RFC to remove the error. Successful processing of a Problem record will result in the identification of the underlying error, and the record can then be converted into a Known Error once a Work-around and/or RFC has been developed [42]. This logical flow, from an initial report to the resolution of an underlying problem, is shown in Figure 2-6.

FIGURE 2-6 Relationship between Incidents

It can be noted that a problem is the unknown underlying cause of one or more incidents, and a Known Error is a problem that has been successfully diagnosed and for which a Work-around is known. In addition, an RFC is a Request for Change to any component of an IT infrastructure or to any aspect of IT services. When Incident Management finds a Work-around, it will be analysed by the Problem Management team, who will update the associated Problem record, as shown in Figure 2-7. An associated Problem record may not exist at this time; for example, the Work-around may be to send a report by fax due to a communication line failure, but at this point there may not be a Problem record for the communication line failure, which the Problem Management team would have to create [42].

FIGURE 2-7 Handling Incident Work-arounds and Resolutions [42].

The process is then that the service desk will link incidents that are clearly the result of an existing Problem record. It is also possible that the Problem Management team, while investigating the problem associated with the incident, finds a Work-around or a resolution for a problem and/or some related incidents [42]. In this case, the Problem Management team should inform the Incident Management process in order that open incidents have their status changed to 'Known Error' or 'closed' as appropriate. The next part describes the Problem Management process.

2.4.3 Problem Management Process
The goal of Problem Management is to minimise the adverse impact of incidents and problems on the business that are caused by errors within the IT infrastructure, and to prevent recurrence of incidents related to these errors. In order to achieve this goal, Problem Management seeks to get to the root cause of incidents and then initiate actions to improve or correct the situation [42]. The Problem Management process has both reactive and proactive aspects. The reactive aspect is concerned with solving problems in response to one or more incidents. Proactive Problem Management is concerned with identifying and solving problems and Known Errors before incidents occur in the first place. The process is intended to reduce both the number and severity of incidents and problems on the business. Therefore, part of Problem Management's responsibility is to ensure that previous information is documented in such a way that it is readily available to first-line and second-line staff. The scope of the Problem Management process includes problem control, error control, and proactive Problem Management. In terms of formal definitions, a 'Problem' is an unknown underlying cause of one or more incidents, and a 'Known Error' is a problem that has been successfully diagnosed and for which a Work-around has been identified.

Inputs to the Problem Management process are as follows: (a) incident details from Incident Management; (b) configuration details from the Configuration Management Database (CMDB); (c) any defined Work-arounds from Incident Management.

The major activities of Problem Management are as follows: (a) problem control; (b) error control; (c) proactive prevention of problems; (d) identifying trends; (e) obtaining management information from Problem Management data; (f) completion of major problem reviews. Problem control focuses on transforming problems into Known Errors. Error control focuses on resolving Known Errors structurally through the Change Management process [42].

Outputs of the process are as follows: (a) Known Errors; (b) a Request for Change (RFC); (c) an updated Problem record, including a solution and/or any Work-arounds; (d) for a resolved problem, a closed Problem record; (e) response from incident matching to problems and Known Errors; (f) management information.

A problem is a condition often identified as a result of multiple incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown but for which the impact is significant. A Known Error is a condition identified by successful diagnosis of the root cause of a problem and the subsequent development of a Work-around. Structural analysis of the IT infrastructure, reports generated from support software, and User-group meetings can also result in the identification of problems and Known Errors; this is proactive Problem Management.

Problem Management differs from Incident Management in that its main goal is the detection of the underlying causes of an incident and their subsequent resolution and prevention. In many situations this goal can be in direct conflict with the goals of Incident Management, where the aim is to restore the service to the Customer as quickly as possible, often through a Work-around, rather than through the determination of a permanent resolution (for example, by searching for structural improvements in the IT infrastructure in order to prevent as many future incidents as possible). In this respect, the speed with which a resolution is found is only of secondary (albeit still significant) importance.

Investigation of the underlying problem can require some time and can thus delay the restoration of service, causing downtime but preventing recurrence [42].

2.5 Technologies for Service Desk
Service desk technology refers to the number of technologies that are available to assist the service desk functions. The technology needs to support business processes, adapting to both current and future demands. It is important to ensure that the blend of technology, process, and service desk staff will meet the needs of both the business and the User. It is also important to understand that with automation comes an increased need for discipline and accountability. Several service desk technologies are as follows: (a) integrated Service Management and Operations Management systems; (b) advanced telephone systems, for example auto-routing and predictive dialing; (c) interactive voice response (IVR) systems; (d) electronic mail, such as voice, video, and internet; (e) fax servers (supporting routing to email accounts); (f) pager systems; (g) knowledge, search, and diagnostic tools; and (h) automated operations and network management tools.

The major hardware technologies are as follows: automatic call distributor (ACD), interactive voice response unit (IVR), voice response unit (VRU), computer telephony integration (CTI), voice over internet protocol (VOIP), email systems, mobile communications, headsets, video, and reader boards [11]. In automating the agent-centric help desk, many have focused on computer telephony integration (CTI). The basis of CTI is to integrate computers and telephones so they can work together seamlessly and intelligently [10]. These technologies are used to make the existing process more efficient by minimizing the agent's idle time and evenly loading the agents in the help desk. However, these technologies do not address the problem of knowledge loss when agents leave, nor do they provide information to the agent to help resolve problems.

2.6 IT Service Desk Outsourcing
Information Technology (IT) outsourcing has been one of the critical issues in organization management [43]. Hirschheim and Lacity [45] defined IT outsourcing as the practice of transferring IT assets, staff, leases, and management responsibility to third-party vendors [45]. Outsourcing dismantles internal IT departments by transferring IT employees, facilities, hardware leases, and software licenses to third-party vendors [44]. Linder [46] argued that transformational outsourcing, where companies look outside for help for more fundamental reasons, is an emerging practice, including 1) to facilitate rapid organizational change, 2) to launch new strategies, and 3) to reshape company boundaries. Most bank organizations tend to outsource IT work by hiring a professional company to run their IT operations.

The IT service desk should be the window on the IT service and professionalism offered by the organisation. Service Level Management (SLM) is a prime business enabler for this function. The intellectual capital in supporting the users and customers is a valuable business asset and should not be discarded without a clear understanding of the business requirement [42]. There are two objectives of the IT service desk: one is to provide a single point of contact for users and customers, and the other is to facilitate the restoration of normal operational service with minimal business impact on the user or customer within agreed service levels and business priorities.

In the bank, a Bank Help Desk or First Level Support (FLS) provides a day-to-day contact point between customers, users, the bank's vendors, and IT services. There are two types of incidents, Non-IT and IT incidents. The FLS and the bank's vendors handle the Non-IT incidents. For an IT incident, the FLS will assign it to the IT service desk or Second Level Support (SLS) to resolve, and the SLS may assign it to Third Level Support teams, including AMS, EOS, NWS, and Vendor support teams. The IT service desk performed by the outsourcing company, called the IT service desk or Second Level Support (SLS), is the main service function. In this arrangement, the IT service desk outsourcing is not an actual single point of contact [9], though general service desks or help desks serve an important role in the information technology department by providing the primary point of contact for users to contact analysts to help them resolve problems with information technology, including hardware, software, and networks [30].

Because the IT service desk takes in incidents assigned from the bank help desk or First Level Support (FLS) and does not directly contact the users or customers in the first instance, the IT service desk abides in the middle of the FLS and Third Level Support (TLS). However, the bank should closely monitor what its supplier is performing. The authorized third level supports should be allowed access to update the service desk records, and the process of updating the records will ensure that resource usage is properly accounted for.

2.7 Decision Support System
In the past decade, contributions of decision support systems (DSS) for resource assignments have been proposed in several areas. In R&D project selection, Sun [47] presented a hybrid knowledge and model approach for reviewer assignments, which integrated mathematical decision models for the assignment of external reviewers to R&D project proposals. The purpose of the model was to assign the most appropriate expert to relevant proposals. Fan [48] proposed a decision support system for proposal grouping, which is a hybrid approach in which knowledge rules were designed to deal with proposal identification and proposal classification, and a genetic algorithm was used to search for the expected groupings. Next was the area of decision support for the single-depot vehicle rescheduling problem presented by Li [49]; the aim of the system was to minimize operation and delay costs, and it was designed to obtain optimal vehicle assignments and reassignments. In the Navy, the problem of assigning navy personnel to jobs was resolved by a guided design search in the interval-bounded sailor assignment problem proposed by Lewis [50]. The paper offers an expanded interval-bounded network flow model of the sailor assignment process, creating teams of skilled sailors to be assigned to ships. Before the research above, in 2003, a decision support system for multi-attribute utility evaluation based on imprecise assignments was proposed by Jiménez et al. [51]. The paper describes a decision support system based on an additive or multiplicative multi-attribute utility model for identifying the optimal strategy.

Last but not least, in research on a rule-based system for the automatic assignment of technicians to service faults, Lazarov and Shoval [52] presented a model and prototype system for the assignment of technicians to handle computer faults, including hardware, software, and communications. Selection of the technician most suited to deal with the reported failure was based on assignment rules, which are correlations between the nature of the fault and the technicians' skills. The model was evaluated using a simulation test, comparing the results of the model assignment process against assignments carried out by experts. The results showed that the system's assignments were better than the experts'.

The technologies that support service desks are described in Section 2.5. In fact, technologies for service desk management do not focus on automatic assignment, because the assignment of the resolver group to deal with the incident is performed manually by IT service desk agents. Although the ITIL framework guides the IT service desk outsourcing to resolve incidents by putting in place best practice processes for IT service desk decision making regarding assignment and reassignment, this thesis found that those technologies do not address the issue of resolving performance dropping due to incorrect assignments. Incorrect assignment still takes place because of human error. This thesis therefore proposes a function of automatic resolver group assignment based on text mining discovery methods, implementing the strongest method as well as validating the selected method of the model.

2.8 Classification Trees
A decision tree is a simple structure in which each branch node represents a choice between a number of alternatives and each leaf node represents a classification or decision. The ordinary tree consists of one root, branches, nodes (places where branches are divided), and leaves. The end of the chain "root – branch – node – … – node" is called a "leaf." In the same way, the decision tree consists of nodes, which stand for circles or cones, and branches, which stand for segments connecting the nodes. A decision tree is drawn from left to right or beginning from the root downwards, so it is easier to draw. The first node is the root. From each internal node (i.e., not a leaf) two or more branches may grow out. Each node corresponds to a certain characteristic, and the branches correspond to ranges of values. These ranges of values must give a partition of the set of values of the given characteristic [53].

The decision tree algorithms can be applied to solve the problem under discussion. Decision trees represent a supervised approach to classification. The decision trees studied are from WEKA, a suite of machine learning software written in Java, developed by the University of Waikato, New Zealand, and described in a book on data mining with practical machine learning tools and techniques using the WEKA software [54]. The study implemented several decision trees, including Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and REPTree. The following are brief descriptions of the various decision tree methods.

2.8.1 Decision Stump
A Decision Stump [54] consists of a decision tree with only a single depth, where the split at the root level is based on a specific attribute per value pair. It splits the data into two parts; the exact criterion is determined by examining the entropy of the two subsets, and the split that results in the largest information gain, or decrease in entropy, is executed. A decision stump is a weak machine learning model, and such models are often used as components in ensemble learning techniques such as bagging and boosting.

2.8.2 ID3
ID3 [55] has been found to construct simple decision trees and can be described using the information gain criterion. However, the approach it uses cannot guarantee that better trees have not been overlooked.

2.8.3 J48
A J48 [55, 56] classifier generates an unpruned or a pruned C4.5 decision tree; it is a slightly modified C4.5 in WEKA machine learning. The C4.5 algorithm generates a classification decision tree for the given dataset by recursive partitioning of the data. The tree is grown using a depth-first strategy. The algorithm considers all the possible tests that can split the data set and selects the test that gives the best information gain. For each discrete attribute, one test with as many outcomes as the number of distinct values of the attribute is considered. For each continuous attribute, binary tests involving every distinct value of the attribute are considered.
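The information-gain criterion used by ID3, and by the entropy-based splits described above, can be sketched as follows. This is a minimal illustration; the toy attribute values and resolver-group labels are assumptions, not the thesis data.

# Minimal sketch of ID3's splitting criterion: choose the attribute whose split
# yields the largest information gain (i.e. the largest decrease in entropy).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Gain obtained by splitting `labels` on the discrete attribute `values`."""
    n = len(labels)
    split_entropy = 0.0
    for v in set(values):
        subset = [label for x, label in zip(values, labels) if x == v]
        split_entropy += len(subset) / n * entropy(subset)
    return entropy(labels) - split_entropy

# Toy data: does the incident description mention "network"? -> resolver group.
mentions_network = ["yes", "yes", "no", "no", "no"]
resolver_group   = ["NWS", "NWS", "AMS", "AMS", "EOS"]
print(information_gain(mentions_network, resolver_group))  # higher gain => better split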

2.8.4 NBTree
The naïve Bayesian tree learner, NBTree [57], combines naïve Bayesian classification and decision tree learning. In an NBTree, a local naïve Bayes is deployed on each leaf of a traditional decision tree. An NBTree classifies an example by sorting it to a leaf and applying the naïve Bayes of that leaf to assign a class label to it. The algorithm for learning an NBTree is similar to C4.5: after a tree is grown, a naïve Bayes is constructed for each leaf using the data associated with that leaf, and an instance is classified using the local naïve Bayes of the leaf into which it falls. NBTree frequently achieves higher accuracy than either a naïve Bayesian classifier or a decision tree learner.

2.8.5 Random Forest
A random forest [58] is an ensemble of unpruned classification or regression trees, induced from bootstrap samples of the training data, using random feature selection in the tree induction process. Prediction is done by aggregating the predictions of the ensemble: majority vote for classification or averaging for regression. A random forest generally exhibits a substantial performance improvement over a single tree classifier such as CART or C4.5. Its generalization error depends on the strength of the individual trees in the forest and the correlation between them.

2.8.6 Random Tree
A random tree [54] is a tree drawn at random from a set of possible trees. "Random" means that each tree in the set of trees has an equal chance of being sampled; another way of saying this is that the distribution of trees is uniform. Random trees can be generated efficiently, and the combination of large sets of random trees generally leads to accurate models. Random tree models have been extensively developed in the field of machine learning in recent years.

2.8.7 REPTree
A REPTree is a fast decision tree learner which builds a decision/regression tree using information gain as the splitting criterion and prunes it using reduced-error pruning. It only sorts values for numeric attributes once. Missing values are dealt with using C4.5's method of fractional instances.

2.9 Summary
The objectives of the thesis are relevant in two areas. The first is the performance evaluation of the knowledge management system based on the search knowledge function, in terms of speed in resolving incidents; the second is the automatic resolver group assignment based on text mining discovery methods, which are decision tree algorithms. The following is a summary of the review.

2.9.1 Knowledge management system and its performance evaluation
This section summarises the reviews of knowledge management, root cause analysis, case-based reasoning, the ITIL-based IT service desk (which includes the service desk function, incident management, and problem management), and technologies for the service desk, in particular the CTI system used in the IT service desk system.

Knowledge management (KM) is the business process of managing the organization's knowledge by means of systematic and organization-specific procedures for acquiring, organizing, sustaining, applying, sharing, and renewing both tacit and explicit knowledge by employees to enhance organizational performance and to create value [2, 3]. Knowledge management is a discipline that provides strategy, process, and technology to share and leverage information and expertise that will increase humans' level of understanding to more effectively solve problems and make decisions [20]. Knowledge can be categorized into two different types, tacit and explicit, which also differ in their level of structure in the organization [1]. In highly competitive business environments, managing tacit knowledge, which includes the true value-added intellectual assets of an organization, is an essential task to maintain the organization's core competency [4]. Thus, it can be concluded that the knowledge management system (KMS) is composed of five processes, including (1) knowledge capturing, (2) knowledge creation, (3) knowledge storing or knowledge inventory, (4) knowledge sharing, and (5) knowledge transfer, which are elaborated to the community of practice because this is how people develop real knowledge. Both knowledge creation and knowledge inventory are related to IT and therefore become the organisational memory (OM), which enables them to be sources of the organization's competitive advantage [9]. In addition, the knowledge base is able to support the service desk environment.

According to the ITIL guidance processes, the ITIL best practice approach, the service desk owns the entire process, regardless of who actually manages the various tasks. The main purpose of incident management is to minimise interruption in business activities and ensure availability of service. It appears unlikely that the service desk's roles in incident management will extend beyond an interface between internal users and external customers [8]. The intention of this thesis is to propose a model of knowledge management with root cause analysis, called the KMRCA IT service desk, and to develop a prototype of the KMRCA IT service desk system for IT service desk outsourcing. Case-based reasoning, as reviewed in the literature, can be applied to search for similar previous cases to resolve an incident. The system is able to improve the performance of the IT service desk function in terms of speed in resolving incidents.

2.9.2 Decision support system of automatic resolver group assignment
This section summarises the reviews of decision support systems focusing on resource assignments in various areas. Though there are several papers on decision support systems regarding resource assignment, there is no research that has applied text mining discovery methods. For example, the research on automatic assignment of technicians to service faults [52] used a rule-based system in which the rules were created by experts who are knowledgeable about how to solve the various service faults. The KMRCA IT service desk system requires an automatic resolver group assignment function. The function attempts to match the most suited resolver group with the symptom of the incident. Text mining is data mining applied to information extracted from text. It can be broadly defined as a knowledge-intensive process in which a user interacts with a document collection over time by using suitable analysis tools. The text mining discovery methods are used to search for the strongest method of the model to classify the suited resolver group.
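A minimal sketch of this idea, classifying incident descriptions into resolver groups with a decision tree over text features, is shown below. It uses scikit-learn purely for illustration rather than the WEKA toolkit applied in this thesis, and the example descriptions and group labels are assumptions.

# Sketch: turn free-text incident descriptions into term features and train a
# decision tree to predict the resolver group. Illustrative data only; the
# thesis itself applied the WEKA learners (ID3, J48, etc.) to the bank's tickets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

descriptions = [
    "branch router down, no network connectivity",
    "core banking application error on posting",
    "ATM hardware failure, card reader jammed",
    "cannot connect to LAN from teller workstation",
]
resolver_groups = ["NWS", "IE-AMS", "VEN", "NWS"]

vectorizer = CountVectorizer()                 # bag-of-words text features
X = vectorizer.fit_transform(descriptions)
tree = DecisionTreeClassifier(criterion="entropy").fit(X, resolver_groups)

new_ticket = ["network link to branch unstable"]
print(tree.predict(vectorizer.transform(new_ticket)))   # e.g. ['NWS']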

CHAPTER 3 METHODOLOGY

This chapter outlines the research process, provides a rationale for the research methodologies that were chosen, and demonstrates the proposed model and a prototype of the KMRCA IT service desk system.

3.1 Research Process
The following are the operational steps of the research process that this thesis carried out step-by-step.

3.1.1 Formulate research problems
The thesis reviewed the literature described in Chapter 2 and then formulated the problems and identified the hypotheses that are introduced in Chapter 1.

3.1.2 Conceptualize a research design
The purpose of the thesis is to evaluate the performance of the KMRCA IT service desk system by using design of experiment and simulation. The main function of the system is a search knowledge function; when the agents use the function, the system can resolve incidents faster than the previous system. The simulation study is used to represent both systems, and the simulation results for the two systems are compared in terms of speed in resolving incidents. The 2k factorial design of experiment is widely used to find the factors that influence the defined variables used as key performance indicators (KPIs).

3.1.3 Construct tools for data collection
The thesis is an empirical study, with a sample of incident records of 14,440 calls collected over 4 months during April-July 2006 from the Tivoli CTI system of the IT service desk outsourcing in the bank. The selected tools were used to analyse the data, including the Arena simulation software package, the Input Analyzer in Arena, Minitab 15 statistical analysis, WEKA machine learning, and MS Excel spreadsheets and data filtering.

3.1.4 Select a sample
This step is selecting a sample; the accuracy of the findings largely depends upon the way the sample is selected.

3.1.5 Write a research proposal
After all the preparatory work is done, this step puts everything together in a way that provides adequate information for the advisor(s) and others. The thesis was proposed under the topic Knowledge Management System Improvement towards Service Desk of IT Outsourcing in Banking Business: Evaluation its Performance; the final title has remained the same as the topic proposal but without "Evaluation its Performance". The review of literature belongs not only to the first step of formulating a research problem, but also to several other steps, including research design, data collection, and writing the thesis document, because new literature has appeared continuously since the research started.

3.2 Information Collection and Requirement Analysis
3.2.1 Information Collection
The objective of the study is to evaluate the performance of the KMRCA IT service desk system, and the research hypothesis is that the average time in resolving incidents of all severities, excluding severity 1, is lower than in the previous IT service desk system. In line with the research objectives, the thesis selected two samples to support the two objectives of the study. Firstly, the thesis focuses on the performance evaluation, for which the data include several columns of time and severity; thus, the underlying incident data were collected for 12,198 calls from the Tivoli CTI system of the IT service desk for four separate weeks randomly selected from the four-month period during April to July 2006. Secondly, from the same sample, the selected sample of all 14,440 cases was used in the text mining discovery methods of the automatic resolver group assignment approach. A sample of the incident data is shown in Appendix A, Figure A-1. The columns contain several pieces of information about an IT incident, including ticket no., open date, open time, resolve date, resolve time, severity, system-type failure, incident description, incident resolution, assigned resolver group, caller details, and so forth.

3.2.2 Requirement Analysis
The data are analysed based on the objectives of the performance evaluation using computer simulation. The study selects the Arena discrete-event simulation software package to analyse the data and to build the conceptual model for the computer simulation.

3.2.2.1 The rate of incoming calls
The nature of the data, in particular the inter-arrival time of calls coming to the bank help desk, where the agents create the IT incident ticket and send it to the IT service desk to resolve, and the service time in resolving those incidents, is to be analysed. The thesis analysed the data and found that the rates of incoming calls during business days and holidays are different. Table 3-1 shows the rate of calls by time of day for business days and holidays.

TABLE 3-1 The Rate of Incident Calls during Time in Business Day and Holiday (calls/hr for business days and for holidays over the periods 8:00-10:00, 10:01-12:00, 12:01-13:00, 13:01-15:00, 15:01-17:00, and 17:01-18:00)

3.2.2.2 The percentage of incident calls by severity
Next is the percentage of incident calls by severity; the frequency of the number of incident calls by severity is shown in Table 3-2.

TABLE 3-2 Percentage of Incident Calls by Severity
Severity 1: 86 calls (0.71%)
Severity 2: 395 calls (3.24%)
Severity 3: 11,680 calls (95.75%)
Severity 4: 37 calls (0.30%)

As shown in Table 3-2, the rank of the number of calls and their percentages is Severity 3 (11,680, 95.75%), Severity 2 (395, 3.24%), Severity 1 (86, 0.71%), and Severity 4 (37, 0.30%).
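As a minimal illustration of how the frequencies behind Tables 3-1 and 3-2 could be derived from the incident log, the following sketch assumes a CSV export of the Tivoli CTI data with 'open_time' and 'severity' columns; the file name and column names are assumptions, not the actual export format.

# Sketch: tabulate incident calls by severity and by hour of day from an
# assumed CSV export of the CTI log (file and column names are illustrative).
import pandas as pd

tickets = pd.read_csv("incident_log.csv", parse_dates=["open_time"])

by_severity = tickets["severity"].value_counts().sort_index()
print((by_severity / len(tickets) * 100).round(2))   # percentage of calls by severity

tickets["hour"] = tickets["open_time"].dt.hour
calls_per_hour = tickets.groupby("hour").size()      # raw call counts per hour of day
print(calls_per_hour)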

3.2.2.3 Incident Classification
The incidents are classified into categories as shown in Table 3-3, with their frequency of occurrence recorded by the Tivoli CTI system. A Pareto phenomenon is observed whereby the top three problem categories account for 98.02% of the total calls received.

TABLE 3-3 Classification of Calls by Incident Category
1) Hardware: 6,454 incidents (52.91%)
2) Software: 3,981 incidents (32.63%)
3) Network: 1,522 incidents (12.48%)
4) Power Supply: 211 incidents (1.73%)
5) Operations: 30 incidents (0.25%)

3.3 Constructing an Instrument for Data Collection
3.3.1 Goodness-of-fit Test Method
Since the data consist of the time between arrivals and the service time in resolving incidents, it is necessary to know the basic methodology for fitting a curve to the nature of the data, which represents the data pattern in the computer simulation. The quality of a curve fit is based primarily on the square error criterion, which is defined as the sum of {fi - f(xi)}², summed over all histogram intervals. In this expression, fi refers to the relative frequency of the data for the ith interval, xi is the right interval boundary, xi-1 is the left interval boundary, and f(xi) refers to the relative frequency for the fitted probability distribution function. This last value is obtained by integrating the probability density across the interval. If the cumulative distribution is known explicitly, then f(xi) is determined as F(xi) - F(xi-1), where F refers to the cumulative distribution. If the cumulative distribution is not known explicitly, then f(xi) is determined by numerical integration. The Chi-square and Kolmogorov-Smirnov results provide goodness-of-fit tests for non-integer data. These results are presented in the form of a p-value, which is the largest value of the type-I error probability that allows the distribution to fit the data. The higher the p-value, the better the fit. For example, if the p-value is greater than 0.05, then the null hypothesis of a good fit would not be rejected at the 0.05 level.
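The square error criterion described above can be computed directly. The following sketch, using an assumed sample of inter-arrival times and a simple exponential candidate fit via SciPy, is illustrative rather than a reproduction of the Input Analyzer.

# Sketch of the square error criterion: sum over histogram intervals of
# (observed relative frequency - fitted relative frequency)^2.
import numpy as np
from scipy import stats

data = np.random.default_rng(1).weibull(0.9, size=500) * 3.6   # assumed sample (minutes)

counts, edges = np.histogram(data, bins=20)
f_obs = counts / counts.sum()              # f_i: observed relative frequency per interval

loc, scale = 0.0, data.mean()              # simple exponential candidate fit
cdf = stats.expon.cdf(edges, loc=loc, scale=scale)
f_fit = np.diff(cdf)                       # f(x_i) = F(x_i) - F(x_{i-1})

square_error = np.sum((f_obs - f_fit) ** 2)
print(square_error)                        # smaller values indicate a better fit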

3.3.2 Goodness-of-fit Test of Time between Incident Arrivals
A discrete event simulation package called Arena [59] is used to imitate the conceptual models of the IT service desk system and the KMRCA IT service desk system. A full exposition of the simulation model is available in Simulation with Arena. Table 3-4 shows the summary of probability distributions that will be fitted to the data.

TABLE 3-4 Summary of Probability Distributions for Computer Simulation
Beta (BETA): Beta, Alpha
Continuous (CONT): CumP1, Val1, …, CumPn, Valn
Discrete (DISC): CumP1, Val1, …, CumPn, Valn
Erlang (ERLA): ExpoMean, k
Exponential (EXPO): Mean
Gamma (GAMM): Beta, Alpha
Johnson (JOHN): Gamma, Delta, Lambda, Xi
Lognormal (LOGN): LogMean, LogStd
Normal (NORM): Mean, StdDev
Poisson (POIS): Mean
Triangular (TRIA): Min, Mode, Max
Uniform (UNIF): Min, Max
Weibull (WEIB): Beta, Alpha

The time between arrivals of incident calls is analysed using the Input Analyzer, which is a standard component of the Arena environment. By selecting Fit All, the summary item causes a dialog to appear, showing the results of the best-fit calculations. All of the applicable distribution functions are listed, ranked from best to worst, along with their corresponding square errors. This listing permits one function to be compared with another for the current data file. If an enabled distribution function has been calculated by the Input Analyzer, the summary file provides the most complete compilation of information describing the curve fit. Figure 3-1 shows the pattern of the time between arrivals of incident calls fitted with a Weibull distribution.

FIGURE 3-1 Input Analyzer Results

The distribution summary from the Input Analyzer is as follows:
(a) Distribution : Weibull
(b) Expression : WEIB(3.64, 0.905)
(c) Square Error : 0.001045
(d) Chi-Square test, corresponding p-value : 0.706

The Input Analyzer can be used to determine the quality of fit of probability distribution functions to the input data, and to compare distribution functions by square error (Sq. Error), as shown in Table 3-5.

TABLE 3-5 Comparison of Square Error by Function
Weibull 0.00104
Gamma 0.00161
Lognormal 0.00181
Exponential 0.00279
Erlang 0.00279
Beta 0.00360
Normal 0.07030
Triangular 0.10300
Uniform 0.13200
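A comparable check can be sketched with SciPy: fit the Weibull shape and scale by maximum likelihood, then test the null hypothesis that the data follow the fitted distribution. The sample is simulated here, and a Kolmogorov-Smirnov test stands in for the Anderson-Darling and chi-square statistics reported by the Input Analyzer and Minitab; none of the numbers below are the thesis values.

# Sketch: fit a Weibull distribution to inter-arrival times and run a
# goodness-of-fit test (H0: the fitted distribution adequately describes the data).
import numpy as np
from scipy import stats

interarrival = np.random.default_rng(7).weibull(0.9, size=98) * 3.6   # assumed sample

shape, loc, scale = stats.weibull_min.fit(interarrival, floc=0)       # MLE fit, location fixed at 0
stat, p_value = stats.kstest(interarrival, "weibull_min", args=(shape, loc, scale))

print(f"shape={shape:.3f}, scale={scale:.3f}, KS p-value={p_value:.3f}")
# p-value > 0.05  ->  do not reject H0 at the 95% confidence level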

However, the lowest square error does not mean that the distribution function is suited to the data until the p-value is evaluated by a goodness-of-fit test. The goodness-of-fit tests use the following hypotheses:
(a) H0 : The distribution adequately describes the data
(b) H1 : The distribution does not adequately describe the data
By this hypothesis, if the p-value > 0.05 at a 95% confidence interval, H0 is accepted, which means the distribution fits according to the test case.

Another view of the goodness-of-fit test is given by a probability plot. Figure 3-2 shows the probability plot of the time between incident arrivals; the graph was generated with the Minitab 15 statistical analysis software package. The data points follow the straight line, the p-value is greater than 0.250, and the AD statistic (the Anderson-Darling statistic, which measures how well the data follow a particular distribution) is 0.404. Therefore, it can be concluded that, at an alpha-level of 0.05, the Weibull distribution provides a good fit for the time between incident arrivals, and the fitted distribution can be used in the simulation instead of a default exponential inter-arrival time.

FIGURE 3-2 Probability Plot of Time between Arrivals (Weibull, 95% CI, Minitab 15)

The simulation model was verified to ensure that the IT service desk system works properly in terms of Arena functionalities and that the entities of the incident calls follow the same path as described in the conceptual model shown in Appendix C, Figure C-1.

The verification was done by using the trace element that is adopted within a discrete model to generate a detailed trace report of entity processing, from entity creation until entity disposal. The Trace output allows following the sequence of an entity as it flows through the system. The entity is an incident ticket, whose process flow was designed as intended, and the output was verified. In addition, the model was run with different replication numbers to verify that it works properly under different conditions.

After verifying the operation of the simulation model, it was validated. The simulation was run for 4 replications of 22 working days during prime time, from 8:00 am to 8:00 pm. In order to reduce variation, the four replications were conducted with different random number streams on the simulation model. A t-test with a 95% confidence level was conducted to compare the results of the simulation model with the results recorded for the actual system based on the data collected from the Tivoli CTI system (a minimal sketch of this comparison is given at the end of this subsection). For each variable, the null hypothesis of no difference between the systems could not be rejected at the 95% confidence level, which indicates that the simulation model adequately represents the actual system's behaviour.

3.3.3 Goodness-of-fit Test of Service Time in Resolving Incidents
The simulation process requires an expression of the fitted distribution for the time in resolving incidents; therefore the resolving time by severity was analysed to fit a suitable distribution using the Input Analyzer. Table 3-6 shows the results of the goodness-of-fit test.

TABLE 3-6 A Goodness-of-fit Test of Time in Resolving Incidents by Severity
Severity 1: Lognormal, Sq. Error 0.003581, p-value 0.158
Severity 2: Lognormal, Sq. Error 0.015237, p-value 0.053
Severity 3: Lognormal, Sq. Error 0.002295, p-value 0.078
Severity 4: Beta, Sq. Error 0.037923, p-value 0.039

Similarly, the distribution fit for the service time in resolving incidents can be assessed from a probability plot by viewing how the points fall about the fitted line, as shown in Figure 3-3.

FIGURE 3-3 Probability Plots for Resolving Time by Severity (Lognormal, 95% CI, for Severities 1-3; Beta, 95% CI, for Severity 4)
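The validation comparison referred to above can be sketched as follows. The replication values are placeholders rather than the thesis data, and a two-sample t-test from SciPy stands in for whichever exact t-test procedure was applied.

# Sketch of model validation: t-test between simulated and actual mean
# resolving times (placeholder values, one per replication of 22 days).
from scipy import stats

simulated_mean_resolve = [41.2, 39.8, 42.5, 40.1]   # minutes, per replication (assumed)
actual_mean_resolve    = [40.6, 41.9, 39.5, 42.0]   # minutes, from CTI records (assumed)

t_stat, p_value = stats.ttest_ind(simulated_mean_resolve, actual_mean_resolve)
if p_value > 0.05:
    print("No significant difference: the model adequately represents the actual system")
else:
    print("Significant difference: revisit the model assumptions")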

3.4 The Proposed KMRCA IT Service Desk Framework
This section illustrates a typical IT service desk system, the conceptual model of the IT service desk for simulation modeling, the KMRCA IT service desk framework, the Incident Management and Problem Management processes, the search knowledge procedure, and a comparison of the typical IT service desk and KMRCA IT service desk systems.

3.4.1 A Typical IT Service Desk Outsourcing
The IT service desk is a crucial function of an IT outsourcing provider who takes over IT functions from a bank. The purpose of the IT service desk outsourcing is to support customer services on behalf of the bank's technology-driven business goals. However, the bank desires service level targets based on the service level agreement (SLA) to control the IT service desk operations.

The role of the IT service desk is to ensure that IT incident tickets are owned, tracked, and monitored throughout their life cycle. Figure 3-4 shows a typical IT service desk outsourcing overview.

FIGURE 3-4 A Typical IT Service Desk Outsourcing Overview

There are three main agent levels in the end-to-end incident resolving process. These are (1) First Level Support, called FLS, which is the bank help desk agents; (2) Second Level Support, called SLS, which is the IT service desk outsourcing agents; and (3) Third Level Support, called TLS, which is the resolver groups. This thesis focuses on the IT service desk outsourcing, which includes the IT service desk agents and the technical resolver groups. The Tivoli CTI technology provides the interface among the three levels of agents so that they can work simultaneously on the current incident ticket and resolve it by the target time.

The internal users or external customers directly contact the FLS agents with various incident reports. They can contact the FLS in several ways, such as telephone call, email, fax, and internet. The incident reports can be divided by the FLS into two types depending upon whether the incident is IT related: one type is Non-IT incidents and the other is IT incidents. Both are reported to the FLS agents, and then the agents review the reports in terms of incident types, initiate the severity, complete the necessary incident descriptions, and then open the tickets one by one without duplication.

The Non-IT incident tickets are resolved by the bank's resolvers, while IT incident tickets are assigned to the IT service desk outsourcing, or SLS agents, to resolve. The SLS agents review and validate the assigned IT incident ticket for adequacy and correctness based on the outsourcing scope, incident types, and severity criteria. If the assignment is not correct, both the FLS and SLS will be requested to resolve the issue. The valid IT incident ticket may be resolved by the SLS agents using the knowledge management system [9] or be assigned to the resolver groups, or TLS, to resolve the incident. The IT service desk agents take ownership of the assigned incident and attempt to resolve it by searching for essential information from several sources such as the data store, file server, and internet. If the incident needs a highly technical resolver, the IT service desk agent will decide to assign the incident to the technical resolver groups. The TLS agents include five main resolver groups: (1) EOS, (2) IE-AMS, (3) NWS, (4) OS-EC, and (5) VEN. To resolve incidents effectively, IT service desk agents perform actions based on the Incident Management process and the Problem Management process, whose details are described in the next section. Figure 3-5 shows the information flow of the IT service desk.

FIGURE 3-5 Information Flow of IT Service Desk (Customers/Users, Bank Help Desk as First Level Support, IT Service Desk of IT Outsourcing as Second Level Support, and Resolver Groups as Third Level Support: AMS, EOS, NWS, Operation, and Vendor Support, with Data Store, File Server, and Internet as resolution sources)

3.4.2 Conceptual Model of IT Service Desk
Figure 3-6 shows the conceptual model of the IT service desk system, in which the incidents flow through the three agent levels: 1) FLS agents, 2) SLS agents, and 3) resolver groups. The conceptual model is conveyed to the simulation model.

FIGURE 3-6 A Conceptual Model of IT Service Desk System

When the FLS agents create the incident ticket, they also initiate the severity of the ticket. According to the severity criteria, the determination of severity is based on the impact on the bank's business and the urgency required, and it is assigned according to the following criteria. Severity 1 means a "critical" severity problem (a major system, application, or network failure impacting a large number of users and having a critical impact on the user's business, such as end-of-month financials) where no workaround is available, or a fault which may have a potentially "critical" impact if not resolved quickly. Severity 2 means a "high" severity problem: one component of a system, application, or network has failed, impacting a small number of users, or a problem impacting 1 user where the impact is significant, and a workaround may be available. Severity 3 means a "moderate" severity problem (the impact is moderate and only to 1 user) and a workaround is available. Severity 4 means a "low" severity problem (no impact to the user) and a workaround is available. If the incident ticket is related to IT, in other words a so-called

If the incident ticket is related to IT, a so-called IT incident ticket, it is assigned to the SLS agents to resolve that incident. In fact, the SLS agents check whether the incident ticket is within the outsourcing scope and whether the assigned severity is correct, and then the agent attempts to resolve the incident. If the ticket is solved at the second level, the incident is closed; if it cannot be completed at the second level, it is assigned to the relevant technical resolver group that is responsible for resolving the incident.
3.4.3 KMRCA IT Service Desk Outsourcing Model
Because the Bank owns the first level support, the Bank help desk agents who initiate support and provide the vital day-to-day contact point between internal users and external customers, the IT service desk agents are not quite a single point of contact (SPOC) [9], while the resolver groups have the more specialist skills and may not always have the time or resources to resolve the assigned incidents. Likewise, the issues of high staff turnover, especially among the technical staff of the IT service desk, and of recurring incidents exist in the IT service desk system. Thus, the thesis proposes the framework of the KMRCA IT service desk, as shown in Figure 3-7.
FIGURE 3-7 A Proposed Framework of KMRCA IT Service Desk System
The model puts the KMRCA into the IT service desk functions of the IT service desk outsourcing. KMRCA is the KMS of organizational outsourcing memory that provides resolutions and the results of root cause analysis in order to prevent recurring incidents or problems. Therefore, the KMS enables IT service desk agents to

increase the speed of resolving incidents. With the KMRCA, the agents can search for similar cases in the knowledge database, so the time taken to resolve an incident is reduced. Resolving an incident is the responsibility of the IT service desk agent; however, the incident may also be assigned to the relevant resolver group to resolve it. No matter who resolves the incident, the resolution is kept in the knowledge database after the resolution is finished. Figure 3-8 shows the information flow of the KMRCA IT service desk. The KMRCA IT service desk approach serves as an intermediary between the service desk agent and all data, information, and knowledge sources. The IT service desk resolves the incident by accessing many different information and knowledge sources via the KMRCA; the sources range from files on the agent's computer, access to the database, communication with other agents, to access to the Internet, and so forth. The KMRCA database includes knowledge of incident resolutions, results of root cause analysis, and so on, while case-based reasoning systems enable help desks to store and share knowledge in the form of cases. Figure 3-1 shows the model for the KMRCA IT service desk outsourcing.
FIGURE 3-8 Information Flow of KMRCA IT Service Desk System

3.4.4 Incident Management and Problem Management Processes
The IT service desk function based on ITIL lies within the Incident management process. A short process flow shows several activities of the incident management and problem management processes; the full details of the Incident management and Problem management processes are given in Appendix B. However, the implementation of the KMS IT service desk system changes the incident management and problem management process performed by the IT service desk agents, and the revised process is shown in Figure 3-9.
FIGURE 3-9 KMRCA IT Service Desk Process

3.4.5 Search Knowledge Procedure of KMRCA IT Service Desk
When IT service desk agents use the KMRCA IT service desk system, they perform searching by using the search knowledge procedure shown in Figure 3-10.
FIGURE 3-10 Search Knowledge Procedure
The narrative of the search knowledge procedure has the following steps:
1) IT service desk agent reviews the incident information and the urgency required.
2) IT service desk agent determines whether the ticket requires escalation.
(a) If yes, proceed to Step 3, Escalate the ticket to the relevant resolver group.
(b) If no, proceed to Step 4, Search for similar cases from the KMRCA.

3) IT service desk agent escalates the ticket to the relevant resolver group, then proceeds to Step 7.
4) IT service desk agent searches for similar cases in the KMRCA database.
5) Was the incident resolved?
(a) If yes, proceed to Step 6, Provide the resolution.
(b) If no, escalate the ticket to the relevant resolver group and proceed to Step 7.
6) IT service desk agent provides the resolution to the FLS or Bank help desk and updates it into the KMRCA repository.
7) Resolver group reviews the ticket assigned from the SLS.
8) Resolver group determines whether the KMRCA is required in resolving the incident.
(a) If yes, proceed to Step 10, Search similar cases from the KMRCA database.
(b) If no, proceed to Step 9, Resolve the incident without the KMRCA.
9) Resolver group resolves the incident without the KMRCA.
10) Resolver group searches for similar cases in the KMRCA database.
11) Was the incident resolved?
(a) If yes, proceed to Step 12, Provide the resolution.
(b) If no, return to Step 10 and search for similar cases in the KMRCA database again.
12) Resolver group provides the resolution to the FLS or Bank help desk and updates it into the KMRCA repository.
13) End
A minimal sketch of the search step used at Steps 4 and 10 is given below.
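The following is a minimal, illustrative sketch of the "search similar cases" step. The function names, the in-memory case list, and the keyword-overlap scoring are assumptions made for illustration only; the thesis system searches the KMRCA knowledge database rather than a Python list.

```python
# Illustrative sketch of Steps 4 and 10: rank stored cases by keyword overlap.
# The structures and names here are assumptions, not the thesis implementation.

def tokenize(text):
    """Split an incident description into lower-cased keywords."""
    return set(text.lower().split())

def search_similar_cases(incident_description, knowledge_base, top_n=5):
    """Return the stored cases whose descriptions share the most keywords with the query."""
    query = tokenize(incident_description)
    scored = []
    for case in knowledge_base:
        overlap = len(query & tokenize(case["description"]))
        if overlap > 0:
            scored.append((overlap, case))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [case for _, case in scored[:top_n]]

# Example usage with two stored resolutions
knowledge_base = [
    {"description": "printer offline after driver update", "resolution": "reinstall driver"},
    {"description": "ATM line down at branch", "resolution": "reset WAN link"},
]
print(search_similar_cases("branch printer offline", knowledge_base))
```

In practice the returned cases would be displayed to the agent, who selects one or more of them as candidate resolutions before deciding whether to escalate.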

3.4.6 Comparison of the Typical and KMRCA IT Service Desk Systems
The comparison of a typical IT service desk against the KMRCA IT service desk is shown in Figure 3-11. Obviously, the difference between the two is that the KMRCA IT service desk includes the KMRCA system as the central point of information. The KMRCA system connects to several sources to acquire information, such as the data store, file server, and Internet, as well as receiving updated resolutions from the resolver groups, and the IT service desk agents search this information through the KMRCA. However, essential information, such as updated incident resolutions, has to be validated by IT experts via a domain expert.
FIGURE 3-11 Typical IT Service Desk and KMRCA IT Service Desk

3.4.7 Methodology of System Development
There are many methodologies for the development of information systems: the Systems Development Life Cycle (SDLC), Prototyping, Data Structure-Oriented design, and Object-Oriented design, among others. However, this thesis is concerned primarily with the SDLC. The Systems Development Life Cycle is referred to variously as the waterfall model or the linear cycle; the methodology is a coherent description of the steps taken in the development of information systems. Figure 3-12 shows the system development life cycle (SDLC).
FIGURE 3-12 The System Development Life Cycle (SDLC)
The SDLC methodology is closely associated with what has come to be known as structured systems analysis and design. It involves a series of steps to be undertaken in the development of information systems, as follows:
(a) Problem definition. On receiving a request from the user for systems development, an investigation is conducted to state the problem to be solved; the deliverable is a problem statement.

(b) Feasibility study. The objective here is to define the scope and objectives of the systems project clearly and to identify alternative solutions to the problem defined earlier; the deliverable is a feasibility report. In addition, the client does not have to make a full commitment to the project at the beginning.
(c) Systems analysis phase. The present system is investigated and its specifications documented. They should contain our understanding of HOW the present system works and WHAT it does; the deliverables are the specifications of the present system.
(d) Systems design phase. The specifications of the present system are studied to determine what changes will be needed to incorporate the user needs not met by the present system. The output of this phase consists of the specifications of the proposed system, which must describe both WHAT the proposed system will do and HOW it will work; the deliverables are the specifications of the proposed system.
(e) Systems construction. Systems construction includes programming the system and developing user documentation for the system as well as the programs. The deliverables are the programs, their documentation, and user manuals.
(f) Systems testing and evaluation. Systems testing and evaluation include testing, verification, and validation of the system just built, as well as the system ready to be delivered to the user or client; the deliverables are the test and evaluation results.
Note that the model has many attractive features, such as 1) clearly defined deliverables at the end of each phase, so that the client can take decisions on continuing the project; 2) incremental resource commitment; and 3) isolation of problems early in the process.

3.4.8 The Prototype of the KMRCA IT Service Desk System
The prototype of the KMRCA IT service desk system was developed using the SDLC, from problem definition through to system testing and evaluation. It includes several functions based on the whole end-to-end concept of the IT service desk's functionalities. In this chapter, however, the two core functions of the system are the searching knowledge function and the decision support function of automatic assignment. The GUI menus for multiple agents can be connected via the internet, logging on from client machines. The purpose of the searching knowledge function is to find similar cases so that the agents can select one or more of them in resolving the incident. Figure 3-13 displays the search knowledge and input resolution screens. The agents double-click the convex lens icon to enter the search knowledge menu; the search menu is then displayed as a pop-up and the agents can enter keywords in the input search field. For example, entering the search keyword 'Printer' and clicking the search button returns several similar cases related to printer failures, which can be drilled down case by case to obtain the details.
FIGURE 3-13 A Sample Display of Search Knowledge and Input Resolution

The knowledge function is organized by the scope of the incidents it deals with, the system-type failures. The incident scope describes the general types of incident failures, such as software, hardware, network, operations, and power supply. The required knowledge is accessible through several menus, including the search menu and the input resolution menu, as shown in Figure 3-14.
FIGURE 3-14 A Sample Display of Searching Results
In fact, the knowledge database stores the many cases that are used in the case-based reasoning approach. Some identified cases, such as previous incidents that match the present one, may or may not help the agent in resolving the call. The classification is intended to help IT service desk agents identify how, and by whom, the incident can be solved effectively. In this thesis, the automatic resolver group assignment function can be initiated by setting which severities require automatic assignment. Figure 3-15 shows the decision support function of assigning the resolver group.

FIGURE 3-15 A Sample Display of Assign Resolver Group
3.5 Methodology of Automatic Resolver Assignment
3.5.1 Sample and requirement analysis
Raw datasets are provided by the Tivoli system in a spreadsheet of 14,440 incident cases, collected over four months (April to July 2006). Each column (attribute) contains information about the IT incident tickets. In this study, however, we focus on the information in four columns: the incident descriptions, the system-type failures, the component failures, and the assigned resolver groups related to those system-type failures. Table 3-7 shows the number of incidents of the various system types and their resolver groups.
TABLE 3-7 The Number of Incidents of System Types and Resolver Groups
System types    EOS    IE-AMS    NWS     OS-EC   VEN     Total
Hardware          0         0    5,605   1,841     294    7,740
Software        376       400    3,307     148      61    4,292
Network           0         0      308     593   1,120    2,021
Operation         0         6        6       0      18       30
Power Supply      0         0        0     357       0      357
Total           376       406    9,226   2,939   1,493   14,440
A sample of the data is shown in Appendix A, Figure A-1.
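A breakdown such as Table 3-7 can be produced directly from the raw ticket export by cross-tabulating the system-type failure against the assigned resolver group. The sketch below is illustrative only: the column names and the sample rows are assumptions about the spreadsheet layout, not the thesis data.

```python
# Illustrative sketch: cross-tabulate system-type failures against resolver groups.
# Column names and the example rows are assumptions about the exported spreadsheet.

import pandas as pd

tickets = pd.DataFrame({
    "system_type":    ["Hardware", "Software", "Network", "Hardware", "Power Supply"],
    "resolver_group": ["NWS", "EOS", "VEN", "OS_EC", "OS_EC"],
})

summary = pd.crosstab(tickets["system_type"], tickets["resolver_group"], margins=True)
print(summary)
```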

3.5.2 The Proposed Automatic Resolver Group Assignment
The thesis improves the KMRCA IT service desk system by proposing an automatic resolver group assignment function within the system. Figure 3-16 shows the function of the IT service desk outsourcing with automatic resolver group assignment, and the details of the automatic resolver group assignment are illustrated as a process in Figure 3-17.
FIGURE 3-16 KMRCA IT Service Desk with Automatic Assignment Function
FIGURE 3-17 A Process of Automatic Resolver Group Assignment
The automatic resolver group assignment function is one of the core functions in the KMRCA IT service desk system. The focal point is the resolver group, which handles the proper allocation of resources to deal with the assigned incident.

The following is the narrative of the automatic resolver group assignment process; a sketch of the matching step follows the list.
Step 1: Start by entering the IT incident ticket, which includes a text document.
Step 2: Perform keyword-based word extraction.
Step 3: Perform the text measures and transform the case-term data for the classification model.
Step 4: Implement the ID3-based method to generate a pattern and to identify a suitable resolver group(s). The rules generated by the ID3 method are shown in Appendix A, A-4: An extended part of the ID3 decision tree results, and A-5: A sample of the ID3-based generated rules.
Step 5: Calculate the percentage of matching words for the assigned resolver group and display the results.
Step 6: Determine whether the matching percentage is equal to or more than the specified criterion.
(a) If yes, proceed to Step 8, Assign the resolver group to deal with the incident.
(b) If no, proceed to Step 7, Notify the IT service desk or SLS to make a decision.
Step 7: Notify the IT service desk or SLS to make a decision.
Step 8: Assign the resolver group to deal with the incident.
Step 9: Display the results of the assignment.
Step 10: Validate the assigned results and the generated rules by IT experts.
Step 11: Check whether the IT expert has validated the result yet.
(a) If yes, proceed to Step 12, Check if the duration time is valid.
(b) If no, proceed to Step 10, Validate the assigned results.
Step 12: Check whether the duration time is valid.
(a) If yes, proceed to Step 13, Check if the result is changed.
(b) If no, proceed to the End.
Step 13: Check whether the result has been changed.
(a) If yes, proceed to Step 14, Update keywords.
(b) If no, proceed to the End.
Step 14: Update keywords, keeping the generated rules and assignment results in the knowledge database.
Step 15: End of the process.
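The decision made at Steps 5 to 8 can be illustrated with a minimal sketch: the share of extracted ticket keywords that match each resolver group's keyword profile is computed, and the ticket is auto-assigned only when the best match reaches a preset criterion. The group profiles and the 60 % threshold below are assumed example values, not figures taken from the thesis.

```python
# Illustrative sketch of Steps 5-8 of the automatic assignment process.
# The keyword profiles and the criterion are assumptions for demonstration only.

RESOLVER_KEYWORDS = {
    "NWS":   {"printer", "pc", "win", "lan", "branch"},
    "OS_EC": {"atm", "power", "cdm", "host"},
    "VEN":   {"wan", "link", "router", "vendor"},
}

def match_percentages(ticket_keywords):
    """Return the share of ticket keywords found in each group's profile."""
    ticket = set(ticket_keywords)
    return {
        group: 100.0 * len(ticket & profile) / len(ticket)
        for group, profile in RESOLVER_KEYWORDS.items()
    }

def assign_resolver(ticket_keywords, criterion=60.0):
    """Auto-assign if the best match meets the criterion, otherwise refer to the SLS agent."""
    scores = match_percentages(ticket_keywords)
    group, best = max(scores.items(), key=lambda item: item[1])
    return group if best >= criterion else "NOTIFY_SLS"

print(assign_resolver(["atm", "power", "down"]))   # OS_EC
print(assign_resolver(["unknown", "failure"]))     # NOTIFY_SLS
```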

3.5.3 Data Preparation and Selected Model Procedure
The raw dataset contains structured information about incident cases, as previously described in Section 3.5.1. Figure 3-18 shows the processes of this model approach for automatic assignment.
FIGURE 3-18 Processes of Model Approach for Automatic Assignment
The six steps of the model approach for automatic assignment are: 1) data preparation with the text documents of incident records; 2) document collection, or text corpus; 3) division of the data into training documents and testing documents; 4) text measures; 5) method selection based on the training documents; and 6) model validation based on the testing documents.
3.5.3.1 Data preparation
Data preparation processes [60] include data recognition, parsing, filtering, data cleansing [61], and transformation; this study adds data grouping by keywords. Hence, the data preparation processes are as follows:
(a) Data recognition. This identifies the incident records collected from the Tivoli CTI system as the sample of raw structured data in spreadsheet format.

(b) Data parsing. The purpose of data parsing is to resolve a sentence into its component parts of speech, just as statements in a computer language have to be parsed. The statements are broken down and the individual words of which the incident report is composed are identified. The study modified LexTo to break the incident documents (Thai and English) into words. LexTo is a Java word-extraction program for both languages, developed by the National Electronics and Computer Technology Center of Thailand (NECTEC), and it works with the Lexitron dictionary. The study created an additional keyword dictionary and modified the program to use both dictionaries. Consequently, the correctness of the word extraction is more than 98.7 % of all words. The keywords extracted from the incident dataset are shown in Appendix A, Figure A-2.
(c) Data filtering. This involves selecting the rows and columns of data to be carried forward into the document collection, or text corpus.
(d) Data cleaning. The study corrects inconsistent data, checking that the data conform across the columns and filling in missing values, in particular for the component failures and assigned resolver groups.
(e) Data grouping. From the word extraction, which yields many words, the words are grouped into the keywords of component and system-type failures. Two kinds of words are grouped: 1) words with the same meaning, for example the keyword "Hard Disk" having the same meaning as "Hard Drive" or "HD"; and 2) the relevant words, whether singular or plural [62]. A sketch of this grouping step is given after this subsection.
(f) Data transformation. Several steps need data transformation, such as word extraction, text measurement, and text mining via WEKA machine learning, which is applied to discover algorithms or methods by comparing several decision tree algorithms to find the most suitable method for the nature of the incident data. Therefore, the study transforms the data prior to data analysis.
3.5.3.2 Dataset separation for training and testing
The sample dataset is divided into two documents: (1) a training document consisting of 66 % of the samples, and (2) a testing document consisting of 34 % of the cases.
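The grouping and normalisation described in step (e) can be illustrated as follows. The synonym dictionary and component groups below are examples only; the thesis uses its own keyword dictionary alongside the Lexitron dictionary in LexTo, whose internal API is not reproduced here.

```python
# Minimal sketch of the data-grouping step: extracted words are normalised with a
# synonym dictionary and mapped to component keywords. Dictionary contents are
# illustrative assumptions, not the thesis keyword dictionary.

SYNONYMS = {
    "hard drive": "hard disk",
    "hd": "hard disk",
    "monitors": "monitor",
}

COMPONENT_GROUPS = {
    "hard disk": "Hardware",
    "monitor": "Hardware",
    "win 2000": "Software",
}

def normalise(word):
    word = word.lower().strip()
    return SYNONYMS.get(word, word)

def group_keywords(extracted_words):
    """Map extracted words to (keyword, component group) pairs, dropping unknown words."""
    grouped = []
    for word in extracted_words:
        keyword = normalise(word)
        if keyword in COMPONENT_GROUPS:
            grouped.append((keyword, COMPONENT_GROUPS[keyword]))
    return grouped

print(group_keywords(["HD", "Monitors", "cable"]))
# [('hard disk', 'Hardware'), ('monitor', 'Hardware')]
```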

3.5.3.3 Document collection
The document collection, or so-called "text corpus", is the database containing the text fields. The data is a subset of the incident database, and the textual fields are selected columns such as the system-type failures, component failures, incident descriptions, and assigned resolver group [63].
3.5.3.4 Text measures
The purpose of the text measures is to find attributes that describe the text, in order to know how many keywords (KW1, KW2, ..., KWn, where n is the number of words) related to the assigned groups are in the documents. The study developed a program that provides text measures based upon word counts across the sample of the text documents. The model is illustrated in Figure 3-13, which displays the text measures, including a sample of the data.
3.5.3.5 Method selection
Method discovery is the core of text mining algorithms. Text mining is data mining applied to information extracted from text. It can be broadly defined as a knowledge-intensive process in which a user interacts with a document collection over time by using suitable analysis tools [64]. A text mining handbook written by Feldman and Sanger [64] presents a comprehensive discussion of text mining and link detection algorithms and their operations. Several decision tree methods, namely Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and REPTree, were implemented within the WEKA framework by Witten and Frank [54], based upon the training dataset. To estimate the classification performance, 10-fold cross-validation is commonly used [57]; it helps to prevent overfitting, and the resulting accuracy is the average over ten runs in which nine tenths of the sample are used as the training set and the remaining tenth as the testing set. Finally, the ID3 decision tree method was found to be the strongest method for the nature of this dataset.
3.5.3.6 Model validation
The proposed ID3-based model is used for the automatic resolver group assignment function. To validate the model, the thesis implemented the ID3 method within WEKA based on the testing dataset; the details of the validation results of the ID3 method are shown in Appendix A, A-3: Evaluation result of the ID3 decision tree method.
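The method-selection step can be illustrated with the sketch below. The thesis performs this comparison inside WEKA; the sketch uses scikit-learn as an illustrative analogue, with a keyword-count feature matrix and 10-fold cross-validation. ID3 itself is not available in scikit-learn, so an entropy-based decision tree stands in for it, and the feature and class data are random placeholders rather than the thesis dataset.

```python
# Illustrative analogue of the method-selection step (the thesis uses WEKA, not sklearn).
# Data, column meanings, and model choices here are assumptions for demonstration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 10))   # keyword counts KW1..KW10 per ticket
y = rng.integers(0, 5, size=200)         # assigned resolver group (5 classes)

candidates = {
    "entropy tree (ID3-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "gini tree": DecisionTreeClassifier(criterion="gini", random_state=0),
    "random forest": RandomForestClassifier(n_estimators=10, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```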

In addition to the core methodologies of the text mining discovery methods of classification trees, the text mining discovery algorithm gives an optimized pattern discovery framework for text: in particular, the class of simple combinatorial patterns over phrases, and the problem of finding the patterns that optimize a given statistical measure within the whole class of patterns in a large collection of unstructured texts. A sample is analysed in terms of the correlation between the system-type failures and the resolver groups related to those failures. For the automatic assignment, the strongest method is evaluated by 10-fold cross-validation, which helps to prevent overfitting.
3.6 Summary
The purpose of the IT service desk is to support services on behalf of the bank's technology-driven business goals, and its role is to ensure that IT incident tickets are owned, tracked, and monitored throughout their life cycle. Knowledge management is used as the framework to integrate the technology, people, and processes for improved service desk performance. To operate the new system, the IT service desk agents and resolver groups have to follow the proposed processes, in particular the search knowledge procedure, so that the agents can leverage the organization's knowledge and resolve incidents faster than working without the knowledge management system. The purpose of this methodology is to demonstrate the proposed model and a prototype of the KMRCA IT service desk system. In addition, the automatic resolver group assignment is another core function of the system; the system was improved from the KMRCA IT service desk system by embracing the automatic resolver group assignment. The aim of this function is to demonstrate the proposed enhanced model of a decision support system for automatic resolver group assignment and a prototype of the ARGA-ID3 IT service desk system. Besides, the descriptions of the information collection and data analysis focus on the simulation study, which is used in the performance evaluation.

CHAPTER 4
EXPERIMENTAL RESULTS
This chapter provides the experimental results, described in terms of performance evaluation. Section 4.1 shows the results of the text mining discovery methods for the automatic assignment function. Section 4.2 presents the results of the experimental design, where a screening design is used to identify which factors are important for each influence variable. Section 4.3 shows the performance evaluation of the KMRCA IT service desk system, which is analysed and compared against the previous, typical IT service desk system using a simulation study based on actual data. Finally, the summary is presented in Section 4.4.
4.1 The Results of Text Mining Discovery Methods of the Automatic Assignment Function
In this section, the results are divided into two parts: (1) the comparison results and (2) the selected method evaluation.
4.1.1 Comparison results
The comparison of various decision tree methods was conducted and implemented within the WEKA framework. Seven classification trees were implemented, namely Random Tree, Random Forest, NBTree, ID3, J48, REPTree, and Decision Stump, within WEKA [54] with default parameters. The software tool used in the experiment was the WEKA machine learning software version 3.4.12, with the maxheap parameter in RunWeka.ini changed to the maximum value of 1,280 MB instead of the default of 128 MB, in order to support our large dataset. In the experiment, the accuracy on the sample was obtained using 10-fold cross-validation, which helps to prevent overfitting. The results are based on the 66 % portion of the sample dataset, 9,530 records, and the experimental results, in particular the time taken to build the models, were obtained on an IBM ThinkPad R50e notebook computer with 768 MB of memory and an 80 GB hard disk running at 5,400 rpm. All the experimental results are shown in Tables 4-1 and 4-2: Table 4-1 shows the number and percentage of correct incidents for the various types of decision trees, and Table 4-2 shows the speed to build the models, the size of the trees, and the accuracy of classification for the individual classifiers.

TABLE 4-1 The Number and Percentage of Correct Incidents for Various Types of Decision Trees
Decision Tree Classifiers   No. of Correct instances   No. of Incorrect instances   Accuracy of Classification (%)
ID3                         8914                       616                          93.5362
Random Tree                 8914                       616                          93.5362
Random Forest               8913                       617                          93.5257
J48                         8896                       634                          93.3473
NBTree                      8890                       640                          93.2844
REPTree                     8866                       664                          93.0325
Decision Stump              7587                       1943                         80.3746
From Table 4-1, it can be seen that ID3 and Random Tree were equally good in terms of the proportion of correct allocations, with Random Forest not far behind; Decision Stump was the worst.
TABLE 4-2 The Speed Compared with the Accuracy of Classification
Decision Tree Classifiers   Time Taken to Build Models (seconds)   Size of Tree   Accuracy of Classification (%)
ID3                         5.58                                   134            93.5362
Random Tree                 190.89                                 167            93.5362
Random Forest               46.15                                  10             93.5257
J48                         20.96                                  83             93.3473
NBTree                      19.54                                  1              93.2844
REPTree                     10.39                                  85             93.0325
Decision Stump              0.59                                   1              80.3746
From Table 4-2, Decision Stump is by far the fastest classifier, by an order of magnitude, but it has the highest proportion of misclassifications and produces only one tree. ID3 is the second fastest classifier, about twice as fast as the next one, and it also had the lowest proportion of misclassifications.

The comparison of the decision tree methods is considered in terms of accuracy and performance, as shown in Tables 4-1 and 4-2. ID3 and Random Tree give the highest accuracy among the methods, and the performance of ID3, J48, NBTree, REPTree, and Decision Stump is comparable. The Random Tree gives high accuracy, but poor performance in terms of the speed to build the model, and it is not well suited to imbalanced samples. Decision Stump gives the highest speed but the lowest accuracy, and it generates only one tree, like NBTree, which cannot support knowledge-based classification, even though it is easy to obtain rules from large datasets, as with Random Forest. Thus, considering both accuracy and speed, ID3 is the best choice.
4.1.2 Method evaluation
To validate the method of the automatic assignment function, the testing data, consisting of 34 % of the sample dataset, 4,910 cases, were evaluated using the default 10-fold cross-validation within the WEKA platform. The results show that the assignment accuracy was 93.06 % of the cases, which indicates that the ID3-based method is well suited to the model of a decision support system for automatic resolver group assignment. In addition, the IT experts who participated in the experiments also validated the results, and the details of the results generated by the WEKA machine learning tool are shown in Appendix A, A-3.
4.2 The Results of the Design of Experiment
4.2.1 Design of experiment and analysis
Design of experiment (DOE) and optimization techniques were used when executing the simulation models of both the current, typical IT service desk and the KMRCA IT service desk configurations and comparing their results. Such experiments are often used to study the performance of the process and the system [65]. The objective of the experimental design is to determine which factors are most influential on the response of the system, and performing screening experiments selects the key factors affecting a response. The experiments include the study of three factors using a 2^3 full-factorial design, which identifies the effects of the three factors of interest on eight dependent variables; each factor is set at two levels, and the eight treatment combinations are run in the 2^3 design.

4.2.2 The Key Factors and Output Variables
According to González [30], the dependent variables are performance variables tracked by the service desk, which are common performance measurements. The three factors are as follows:
(a) Factor A: Time to type the incident information and search the relevant knowledge from the KMRCA system (minutes).
(b) Factor B: Time to resolve an incident using the KMRCA system (minutes).
(c) Factor C: Time to add new information into the KMRCA system (minutes).
The factor values were calculated from the average time consumed by the five IT service desk staff who used the KMRCA IT service desk system in searching, resolving, and keeping resolutions. In addition, the IT service desk manager, as an IT expert, confirmed the results. Table 4-3 shows the assigned factor values for the two levels; a sketch of sampling the triangular service times of factor B follows at the end of this subsection.
TABLE 4-3 Assigned Factor Values for Two Levels
Factor           A      B                 C
Low (minutes)    0.8    TRIA(1.0, 3.3)    1.5
High (minutes)   1.2    TRIA(2.0, 3.8)    2.4
The dependent output variables are as follows:
O1: Throughput, the total number of calls resolved in a period of time
O2: Time in resolving incidents of Severity 1 (minutes)
O3: Time in resolving incidents of Severity 2 (minutes)
O4: Time in resolving incidents of Severity 3 (minutes)
O5: Time in resolving incidents of Severity 4 (minutes)
O6: Number of incident calls in the AMS queue
O7: Number of incident calls in the EOS queue
O8: Number of incident calls in the NWS queue
However, a different output variable is needed for each incident severity, since the severities follow different paths through the IT service desk. The analysis of variance (ANOVA) for the full-factorial design is done to test whether the main effects or interaction parameters are equal to zero. In statistical analysis, the factors with a p-value lower than 0.05 are considered important factors that significantly influence the results.
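Factor B in Table 4-3 is expressed as a triangular (TRIA) service time, as used by the Arena simulation model. The sketch below shows, only as an assumption-laden illustration, how such a time could be sampled if the model were rebuilt outside Arena; the mode value is an assumption, since only the ends of the range are quoted above.

```python
# Minimal sketch of sampling a TRIA service time (minutes) for factor B.
# numpy's triangular distribution takes (left, mode, right); the mode here is assumed.

import numpy as np

rng = np.random.default_rng(42)
resolve_times = rng.triangular(left=1.0, mode=2.0, right=3.3, size=5)
print(resolve_times.round(2))
```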

The ANOVA analysis shows that the dependent variable of throughput (O1) and the variable of average time in resolving incidents of severity 3 (O4) are significantly influenced by the three key factors, which were significant because their p-values are lower than 0.05; the other dependent variables have no factors that affect them significantly, since in all other cases the p-values are greater than 0.05. The details of these results are shown in Appendix C, C-3 and C-4. Accordingly, the study focused on five variables: the throughput and the average times in resolving incidents of severities 1, 2, 3, and 4. Table 4-4 shows the 2^3 factorial design of the design of experiment (DOE) for the throughput responses, with four replications (Yrep 1 to Yrep 4) recorded for each of the eight runs.
TABLE 4-4 2^3 Full Factorial Design of DOE for Responses Y of O1 (throughput, no. of calls per time period)
Run Order   Yrep 1   Yrep 2   Yrep 3   Yrep 4
1           3585     3628     3585     3558
2           3585     3626     3585     3558
3           3584     3616     3584     3556
4           3584     3615     3584     3556
5           3584     3624     3585     3558
6           3584     3620     3584     3556
7           3584     3581     3583     3555
8           3533     3487     3513     3529
Table 4-5 shows the coded design matrix of the throughput (O1), with columns for A, B, C, AB, AC, BC, and ABC and the average, standard deviation, and variance of the response for each run.
TABLE 4-5 Coded Design Matrix of O1
Table 4-6 summarises the absolute values of the coefficients for the average throughput response (O1) and the p-values by factor and interaction, and Figure 4-1 shows the corresponding Pareto chart of the coefficients. From Table 4-6, Factor A, Factor B, and the interaction AB have the strongest influence on the throughput.

TABLE 4-6 Absolute Value of Coefficients for Average O1 and P-Value (absolute coefficient and p-value for each of the factors A, B, and C and the interactions AB, AC, BC, and ABC)
FIGURE 4-1 Pareto of Coefficients for Average Response Y of O1
The other significant response is the time in resolving incidents of severity 3. Table 4-7 shows the absolute values of the coefficients for the average time in resolving incidents of severity 3 (O4), for which all three factors are significant, and Figure 4-2 shows the corresponding Pareto chart of the coefficients.
TABLE 4-7 Absolute Value of Coefficients for Average of O4 and P-Value (absolute coefficient and p-value for each of the factors A, B, and C and the interactions AB, AC, BC, and ABC)
FIGURE 4-2 Pareto of Coefficients for Average Response Y of O4
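The coefficients behind Pareto charts such as Figures 4-1 and 4-2 can be estimated directly from a 2^3 full-factorial design: with the factors coded as -1 and +1, each coefficient is half the difference between the average response at the high and low settings of that factor or interaction. The sketch below is illustrative only; the response values are made-up averages, not the recorded simulation outputs.

```python
# Minimal sketch: effect coefficients for a 2^3 full-factorial design.
# The coded design is in standard order; the response values are illustrative only.

import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=3)))   # coded levels for A, B, C
A, B, C = design[:, 0], design[:, 1], design[:, 2]
response = np.array([3589., 3588., 3587., 3586., 3585., 3584., 3575., 3515.])  # example run averages

terms = {"A": A, "B": B, "C": C, "AB": A * B, "AC": A * C, "BC": B * C, "ABC": A * B * C}
# For +/-1 coded orthogonal columns, the least-squares coefficient is mean(response * column).
coefficients = {name: float(np.mean(response * column)) for name, column in terms.items()}

for name, coeff in sorted(coefficients.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {coeff:+.3f}")
```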

4.3 The Results of the Performance Evaluation
The objective of the thesis is to evaluate the performance of the KMRCA IT service desk system by using a simulation study. The concept of the KMRCA IT service desk can be evaluated with a simulation study because a simulation enables service desk agents to perform analysis that captures the entire interrelationship between callers, agents, skills, and technology [66]. To demonstrate the concept that the KMRCA IT service desk system resolves incidents faster than the previous, typical IT service desk system, the simulation model research approach is adopted, so that the experiments can be conducted and the knowledge management system evaluated without interrupting the IT service desk's daily operations. The developed simulation model is used to test the hypothesis comparing the typical IT service desk system and the KMRCA IT service desk system. The research hypothesis is that the new system will have a shorter incident resolution time than the previous system: a shorter incident resolution time will occur because the knowledge management system with root cause analysis will facilitate organizational learning and will enable IT service desk agents and resolver groups to share knowledge sources to resolve incidents faster, as well as preventing recurring incidents. As a consequence of reducing the time in resolving incidents, the throughput should also be higher. In this case, the simulation model helps to analyze the advantages that can be obtained from implementing the knowledge management system. According to the hypothesis, the time in resolving incidents of all severities except for critical incidents will be lower in the KMRCA IT service desk system than in the previous, typical IT service desk system. The details of the comparison tests are shown in Appendix C, C-5 and C-6.
4.3.1 Comparison Test of the KMRCA and Typical IT Service Desk Systems
The factors were analyzed at two levels (low or "-" and high or "+"), and their values replaced the resolving time by severity assigned in the simulation model. Four replications of each experiment were run for 22 working days in a random order, and the results were recorded for further statistical analysis; the results of the responses are shown in Table 4-8.

TABLE 4-8 Comparison Tests of the KMRCA and Typical IT Service Desk Systems (observed t-value, critical t-value, and p-value for the five variables: throughput and the average resolving times of severities 1 to 4)
Table 4-8 shows the values of the observed t-statistic against the critical t-value with two tails (alpha/2 = 0.025 and 3 degrees of freedom), which is 3.182 for each dependent variable. The hypothesis is that the average time in resolving incidents for all calls except critical calls will be lower in the KMRCA IT service desk system than in the current service desk system. As shown in Table 4-8, for the throughput and the time in resolving incidents of severity 3 the observed t-value is higher than the critical t-value, so H0 is rejected; in other words, those means are not equal. On the other hand, for the times in resolving incidents of severity 1 and severity 2 the observed t-value is lower than the critical t-value, so H0 is not rejected and it is concluded that those means are equal; the results for the other variables were likewise not significant because they failed the t-test.
4.3.2 Comparison Outputs of the KMRCA and Typical Service Desk Systems
Table 4-9 shows the comparison outputs of the KMRCA and typical IT service desk systems. The simulation of the KMRCA IT service desk system gave 16.9 % more throughput and decreased the average resolving time of severity 3 incidents by 4.8 %.
TABLE 4-9 Comparison Outputs of the KMRCA and Typical IT Service Desk Systems (throughput in calls per period and average resolving times of severities 1 to 4 in minutes, for the KMRCA and the typical IT service desk)
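The comparison test behind Table 4-8 can be sketched as a paired t-test over the four simulation replications of each system, compared against the two-tailed critical value t(0.025, 3) = 3.182. In the sketch below the replication values are placeholders, and treating the replications as paired (which gives the 3 degrees of freedom quoted above) is an assumption about how the test was set up.

```python
# Illustrative sketch of the comparison test: paired t-test on four replications.
# The replication values are placeholders, not the recorded simulation outputs.

from scipy import stats

kmrca_throughput   = [3628, 3626, 3624, 3620]   # hypothetical replications
typical_throughput = [3101, 3095, 3102, 3098]   # hypothetical replications

t_obs, p_value = stats.ttest_rel(kmrca_throughput, typical_throughput)
critical_t = stats.t.ppf(1 - 0.025, df=3)        # about 3.182

print(f"observed t = {t_obs:.2f}, p = {p_value:.4f}, critical t = {critical_t:.3f}")
if abs(t_obs) > critical_t:
    print("Reject H0: the mean throughputs differ.")
else:
    print("Do not reject H0: no significant difference.")
```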

4.4 Summary
In this chapter, the thesis has presented the results of the text mining discovery methods for the automatic assignment function and the results of the performance evaluation of the KMRCA IT service desk system. For the text mining discovery methods, the aim was to discover suitable decision tree methods based on WEKA machine learning by comparing several decision tree methods. The comparison results of the decision tree methods show correctly classified instances in more than 93 % of the cases, and the ID3 decision tree is the strongest algorithm: the ID3 classifier has the best performance in terms of the speed to build the model combined with a high classification accuracy. The model was validated based on the training dataset within the WEKA platform with 10-fold cross-validation, and the accuracy of the model was 93.06 % of the cases.
For the results of the performance evaluation of the KMRCA IT service desk system, a computer simulation was used to quantitatively compare the current, typical IT service desk and the proposed KMRCA IT service desk systems; an advantage of the simulation is that the study can be performed without interrupting the daily IT service desk operations. Furthermore, with the design of experiment, the results can be used to design the specifications of the knowledge management system. The simulation study result showed almost a 17 % increase in throughput and a 4.8 % decrease in the average time in resolving incidents of severity 3. For the average times in resolving incidents of severities 1, 2, and 4, the t-tests failed, and no statistically significant difference could be concluded with confidence for the critical, high, and low priority incidents. Finally, the improvements are significant and provide justification for implementing the knowledge management system with root cause analysis for the moderate-priority incidents, the incidents of severity 3.

CHAPTER 5
CONCLUSION
This chapter concludes the experimental results from evaluating the performance and comparing the methods, and discusses the advantages of the proposed framework. It also suggests ways to improve the system, proposed as further work.
5.1 Conclusion
This thesis makes three contributions. Firstly, the thesis proposes a framework of a knowledge management system with root cause analysis, the so-called KMRCA IT service desk system. The proposed framework of the KMRCA IT service desk system is composed of two main functions: 1) the searching knowledge function, and 2) the automatic resolver group assignment function. Secondly, the thesis evaluates the performance of the KMRCA IT service desk system by using a simulation study based on actual incident data and compares the results with the previous, typical IT service desk system. The performance of the KMRCA IT service desk system was evaluated in terms of the speed in resolving incidents: in the study, a computer simulation was conducted to compare the typical IT service desk system against the KMRCA IT service desk system. The experimental results indicated that the KMRCA IT service desk approach significantly enhances the performance of the typical IT service desk system by giving more throughput and reducing the time in resolving incidents. The simulation study result showed almost a 17 % increase in throughput and a 4.8 % decrease in the average resolving time of severity 3 incidents. At severity 1, severity 2, and severity 4 the t-tests failed, and therefore no statistically significant difference can be concluded with confidence for the critical, high, and low priority incidents. Thus, the advantages are significant and provide justification for implementing the knowledge management system with root cause analysis on the moderate priority incidents. Thirdly, the thesis proposes a process of text mining to discover methods, which includes data preparation, document collection, text measurement, method selection, and method evaluation through a classification approach.

For the text mining discovery methods, the thesis discovers the suitable methods within WEKA machine learning by comparing several decision tree methods. The comparison results of the decision tree methods show correctly classified instances in more than 93 % of the cases, and the ID3 decision tree method is the strongest algorithm. In addition, the ID3 classifier has better performance in terms of the speed to build the model, while the size of the tree does not affect the classification accuracy. The proposed ID3-based model is for the automatic resolver group assignment of the IT service desk outsourcing in the bank. The method of the model was validated based on the training dataset within the WEKA platform with 10-fold cross-validation, and the accuracy of the model was 93.03 % of the cases. The comprehensibility of the ID3 decision tree indicates the appropriate resolver group to be assigned to deal with each type of incident. The experimental results indicate that ID3, in terms of generated tree rules and speed, is the optimal method for the automatic resolver assignment model, which would significantly increase productivity by producing more correct assignments and thereby decrease the reassignment turnaround time. Furthermore, the rules resulting from the rule generation of the decision tree could be kept in the knowledge database in order to support and assist with future incident resolver assignments.
5.2 Discussion
The simulation output shows that the KMRCA IT service desk system yielded 17 % higher throughput, but the t-tests failed at the critical and high priority levels, since the resolving time for those incidents is quite limited, which makes the IT service desk agents urgently assign them to the resolver groups without using the knowledge management system. For severity 4, there is plenty of time to resolve a low priority incident, so the agents leave such an incident until a resolver is available to resolve it; consequently, the KMRCA IT service desk system is not designed to support those severities. Although the thesis proved that knowledge management with root cause analysis is able to enhance the IT service desk outsourcing in the banking business, there are several ways to continue improving the system. Firstly, the throughput can be improved by training the staff before they use the KMRCA system, so that the staff's skill can reduce the time in resolving incidents further than without training. Finally, the IT service desk system

should provide automatic resolver group assignment, because a manual assignment may introduce mistakes when agents select the resolver or group to deal with the incident by hand. In the circumstance where IT service desk agents receive critical incidents that urgently require resolving, they often assign them immediately to the relevant resolver group without using the knowledge management system. The number of critical incident tickets is less than one percent, but they have a significant impact on the whole bank's business processes. In addition, the specification of the knowledge management system can be defined from the experimental design by the three factors of time consumed when the agents use the system.

5.3 Future Work
The remaining issue is that assigning one ticket to the most suitable resolver does not mean that the incident ticket is closed completely, since some incidents may require more than one resolver. For example, an incident in which an ATM breaks down, so that customers cannot withdraw their money, may be caused by several failures, such as applications, networks, and the electrical power supply, which impact many parties that need to be involved. Thus, we will improve the model by focusing on multi-resolver group assignments. Another improvement of the IT service desk is to search the relevant knowledge automatically by using text mining, transforming search into knowledge discovery, in which the process extracts keywords and then proceeds to discover the relevant knowledge. Although search engines can help to find relevant documents, the new technology goes beyond simple document retrieval: text mining makes it possible to discover new knowledge in the form of trends, anomalies, relationships, and patterns that span multiple knowledge collections. By extending the way text databases can be explored, text mining can contribute valuable content analysis and decision support to the existing knowledge in the organization.

REFERENCES
1. Nonaka, I. and Takeuchi, H. The Knowledge-Creating Company. New York : Oxford Press, 1995. 2. Allee, V. The Knowledge Evolution: Expanding Organizational Intelligence. New York : Butterworth-Heinemann, 1997. 3. Alavi, M. and Leidner, D. E. “Knowledge Management Systems: Emerging Views and Practices From The Field.” Proceedings of the 32nd Hawaii International Conference on System, IEEE Computer Society (1999) : 239. 4. Davenport, T. H. and Prusak, L. Working Knowledge: How Organizations Manage What They Know. Boston, Massachusetts : Harvard Business School Publishing, 2000. 5. Grote, M. H. and Täube, F. A. “When Outsourcing is not an Option: International Relocation of Investment Bank Research - Or isn't it?” Journal of International Management. 1-13(2007) : 57-77. 6. Mahnke, V., Overby, M. and Vang, J. “Strategic Outsourcing of IT Services: Theoretical Stocktaking and Empirical Challenges.” Industry and Innovation. 2-12(2005) : 205–253. 7. Behr, K., Castner, G. and Kim, G. The Value, Effectiveness, Efficiency, and Security of IT Controls: An Empirical Analysis. University of Oregon, 2004. 8. Forte, D. “Security Standardization in Incident Management: the ITIL Approach.” Network Security. 1 (2007, January) : 14-16. 9. Phomasakha, P. and Meesad, P. “Knowledge Management System with Root Cause Analysis for IT Service Desk in Banking Business.” Proceedings of the 2007 Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI) International Conference, 2(2007), Mae Fah Luang University, Chiang Rai, Thailand, (2007, May 9-12) : 1209-1212. 10. Clevel, B. and Mayben, J. Call Center Management on Fast Forward: Succeeding Today’s Dynamic Inbound environment. Maryland : Call Center Press, 1997. 11. Anton, J. and Gusting, D. Call Center Benchmarking: How Good Is Good Enough. Indiana : Purdue University Press, 2000.

76 12. Dawson, K. The Complete Guide to Starting, Running, and Improving Your Call Center. CMP Books, New York : Focal Press, 1999. 13. Sandborn, S. “Structuring the service desk.” Information World. 23-52(2001) : 28 14. Zhang, J. and Faerman, S. R. “Divergent Approaches and Converging Views : Drawing Sensible Linkages between Knowledge Management and Organizational Learning.” Proceedings of the 36th Hawaii International Conference on System Sciences, 2003. 15. Drucker, P. F. The Post-Capitalist Executive Managing in a Time of Great Change. New York : Penguin, 1995. 16. Suzuki, Y. and Toyama, R. “A Self-evaluation Method of SECI Process in Knowledge Management.” IEEE International Engineering Management Conference. 2(2004) : 491- 494. 17. Chen, F. and Burstein, F. “A Dynamic Model of Knowledge Management for Higher Education Development.” Proceedings of the 7th International conference on Information Technology Based Higher Education and Training , 2006 : 173-180. 18. Mertins, K., Heisig, P. and Vorbeck, J. Knowledge Management: Best Practices in Europe. Berlin : Springer-Verlag, 2001. 19. Meso, P. and Smith, R. “A Resource-based View of Organizational Knowledge Management Systems.” Journal of Knowledge Management. 3-4(2000) : 224–234. 20. Satyadas, A. and Harigopal, U. “Knowledge Management Tutorial: An Editorial Overview.” IEEE Transactions on Systems, Man, and Cybernetics-Part C : Applications and Reviews. 31-4(2001) : 429–437. 21. Sveiby, K.E. “The New Organizational Wealth. Managing and Measuring Knowledge-Based Assets.” San Francisco : Berrett Koehler Publisher, 1997. 22. Holsapple, C.W. and Joshi K.D. “Organizational knowledge resources.” Decision Support Systems. 31(2001) : 39–54. 23. Taylor, M.J., Gresty, D. and Askwith, R. “Knowledge for Network Support.” Information and Software Technology. 43(2001) : 469–475.

24. Gray, P. "A Problem-solving Perspective on Knowledge Management Processes." Decision Support Systems. 31(2001) : 87-102.
25. Marcella, R. and Middleton, I. "The Role of the Help Desk in the Strategic Management of Information Systems." OCLC Systems and Services. 12-4(1996) : 4-19.
26. Matlus, R. and Maure, W. "A Guide to Successful SLA Development and Management." Gartner Group Research Strategic Analysis Report, October 2000.
27. Aamodt, A. and Plaza, E. "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches." AI Communications. 7(1994) : 39-59.
28. Aamodt, A. A Knowledge Intensive Approach to Problem Solving and Sustained Learning. PhD dissertation, University of Trondheim, Norwegian Institute of Technology, May 1991.
29. Reisbeck, C. and Schank, R. Inside Case-Based Reasoning. Hillsdale, New Jersey : Lawrence Erlbaum Associates, 1989.
30. Gonzalez, L., Giachetti, R. and Ramirez, G. "Knowledge Management-centric Help Desk : Specification and Performance Evaluation." Decision Support Systems. 40(2005) : 389-405.
31. Doyle, M., et al. "CBR Net: Smart Technology over a Network." TCD Technical Report, July.
32. Andersen, B. and Fagerhaug, T. Root Cause Analysis : Simplified Tools and Techniques. Milwaukee : ASQ Quality Press, 2000.
33. Wilson, P., Dell, L. and Anderson, G. Root Cause Analysis : A Tool for Total Quality Management. Milwaukee : ASQ Quality Press, 1993.
34. Doggett, A. M. "Selected Collaborative Problem-Solving Method for Industry." Selected paper, Humboldt State University, 2004.
35. Weidl, G., Madsen, A. L. and Israelson, S. "Applications of Object-oriented Bayesian Networks for Condition Monitoring, Root Cause Analysis and Decision Support on Operation of Complex Continuous Processes." Computer and Chemical Engineering. 9-29(2005, 15 August) : 1996-2009.

36. Gentner, D. "Are Scientific Analogies Metaphors?" Problems and Perspectives. Brighton, UK : Harvester Press, 1982 : 106-132.
37. Kolodner, J. Case-Based Reasoning. San Mateo, California : Morgan Kaufmann, 1993.
38. Schank, R., et al. Inside Case Based Reasoning. New Jersey : Erlbaum, 1989.
39. Carbonell, J. Derivational Analogy in PRODIGY : Automating Case Acquisition, Storage, and Utilization. Boston : Kluwer Academic Publishers, 1993.
40. Watson, I. Applying Case-Based Reasoning : Techniques for Enterprise Systems. San Mateo, California : Morgan Kaufmann, 1997.
41. Althoff, K.-D., et al. A Review of Industrial Case-Based Reasoning Tools. Oxford : AI Intelligence, 1995.
42. Lacity, M. and Hirschheim, R. "The Myths and Realities of Information Technology Insourcing." Communications of the ACM. 2-43(2000) : 99-107.
43. Willcocks, L., Lacity, M. and Feeny, D. Sourcing Information Technology Capability : A Decision-Making Framework. Information Management: The Organizational Dimension. Oxford : Oxford University Press, 1996.
44. Linder, J. "Business Transformation through Outsourcing." Emerald Strategy and Leadership. 30-4(2002) : 23-28.
45. Office of Government Commerce (OGC). ITIL Version 2 Library, Service Support. UK : TSO (The Stationery Office) publisher, 2007.
46. Fan, Z.-P., et al. "Decision Support for Proposal Grouping: A Hybrid Approach Using Knowledge Rule and Genetic Algorithm." Elsevier, Expert Systems with Applications.
47. Yang, D.-H., et al. "Developing a Decision Model for Business Process Outsourcing." Elsevier, Computers and Operations Research. 34-12(2007) : 3769-3778.
48. Sun, Y.-H., et al. "A Hybrid Knowledge and Model Approach for Reviewer Assignment." Elsevier, Expert Systems with Applications. 34-2(2008) : 817-824.

49. Jiménez, A., Ríos-Insua, S. and Mateos, A. "A Decision Support System for Multi-attribute Utility Evaluation Based on Imprecise Assignments." Decision Support Systems. 36(2003) : 65-79.
50. Lazarov, A. and Shoval, P. "A Rule-based System for Automatic Assignment of Technicians to Service Faults." Decision Support Systems. 32(2002) : 343-360.
51. Li, J.-Q., Borenstein, D. and Mirchandani, P. "A Decision Support System for the Single-depot Vehicle Rescheduling Problem." Computers & Operations Research. 34-4(2007) : 1008-1032.
52. Lewis and White. "Guided Design Search in the Interval-bounded Sailor Assignment Problem." Computers & Operations Research. 33-6(2006) : 1664-1680.
53. Quinlan, J. Induction of Decision Trees. Readings in Machine Learning. San Mateo, California : Morgan Kaufmann, 1990 : 81-106.
54. Witten, I. H. and Frank, E. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. 2nd ed. San Mateo, California : Morgan Kaufmann, c2005.
55. Mitchell, T. Machine Learning. Singapore : McGraw-Hill, 1997.
56. Breiman, L. "Random Forests." Springer, Machine Learning. 45-1(2001) : 5-32.
57. Kohavi, R. "A Study of Cross-validation and Bootstrap for Accuracy Estimation and Model Selection." Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. 2-12(1995) : 1137-1143.
58. Kelton, W., Sadowski, R. and Sturrock, D. Simulation with Arena. 3rd ed. McGraw-Hill, Series in Industrial Engineering and Management Science, c2003.
59. Zhao, Y. and Zhang, Y. "Comparison of Decision Tree Model of Finding Active Objects." Advances in Space Research, 2007.
60. Pyle, D. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
61. Miller, T. W. Data and Text Mining: A Business Applications Approach. Prentice Hall, 2005.

62. Riloff, E. "Little Words Can Make a Big Difference for Text Classification." Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1995 : 130-136.
63. Liu, Y., et al. "Handling of Imbalanced Data in Text Classification: Category-Based Term Weights." Natural Language Processing and Text Mining. London : Springer, (2007, March 6) : 171-192.
64. Feldman, R. and Sanger, J. The Text Mining Handbook : Advanced Approaches in Analysing Unstructured Data. New York : Cambridge University Press, 2007.
65. Miller, K. D. and Bapat, V. "Case Study : Simulation of the Call Center Environment for Comparing Competing Call Routing Technologies for Business Case ROI Projection." IEEE Winter Simulation Conference Proceedings. Washington DC : IEEE Press, 1999 : 1694-1700.
66. Law, A. M. and Kelton, W. D. Simulation Modeling and Analysis. 3rd ed. Singapore : McGraw-Hill Press, c2000.

APPENDIX A
A SAMPLE OF THE INCIDENT DATASET, SEVERAL RESULTS FOR THE ANALYSIS OF TEXT MINING DISCOVERY METHODS, AND METHOD VALIDATION

A-1 A Sample of Incident Dataset
Figure A-1 shows a sample of the incident data in a spreadsheet (Excel), with columns for the incident Id, open date and time, resolve date and time, incident code, assigned resolver group, severity, system, component, incident description, and resolution result.
FIGURE A-1 A Sample of Incident Data
A-2 Pareto Histogram of Keywords Extracted from the Incident Dataset
Figure A-2 shows a Pareto histogram of the keywords extracted from the incident dataset; the most frequent keywords include Printer, ATM, Personal Comp., WAN, and WIN 2000.
FIGURE A-2 A Pareto Histogram of Keywords Extracted from the Incident Dataset
Severity System Component TFB-00897593 1/4/2006 16:43:14 3/4/2006 18:04:45 CLOSED OS_EC 3 Hardware ATM TFB-00897595 1/4/2006 16:55:00 2/4/2006 7:17:46 CLOSED VEN 3 Network WAN TFB-00897596 1/4/2006 16:57:28 3/4/2006 18:06:59 CLOSED OS_EC 3 Hardware ATM TFB-00897594 1/4/2006 17:32:18 3/4/2006 13:08:40 CLOSED OS_EC 2 Power Supply ระบบไฟฟา TFB-00897598 1/4/2006 18:35:44 3/4/2006 18:09:08 CLOSED OS_EC 3 Hardware ATM TFB-00897600 1/4/2006 19:18:14 3/4/2006 18:18:37 CLOSED OS_EC 3 Hardware ATM TFB-00897602 1/4/2006 19:23:08 3/4/2006 18:20:32 CLOSED OS_EC 3 Hardware ATM TFB-00897623 2/4/2006 7:42:29 3/4/2006 11:42:41 CLOSED OS_EC 2 Software Data Warehouse TFB-00897624 2/4/2006 7:51:34 4/4/2006 17:24:44 CLOSED OS_EC 3 Power Supply ระบบไฟฟา TFB-00897625 2/4/2006 8:42:47 4/4/2006 17:42:04 CLOSED OS_EC 4 Hardware ATM TFB-00897626 2/4/2006 8:46:46 4/4/2006 17:40:07 CLOSED OS_EC 3 Hardware ATM TFB-00897628 2/4/2006 10:00:28 2/4/2006 13:55:42 CLOSED VEN 3 Network ATM TFB-00897630 2/4/2006 10:18:41 2/4/2006 10:57:47 CLOSED VEN 3 Network WAN TFB-00897634 2/4/2006 12:41:52 4/4/2006 17:46:48 CLOSED OS_EC 3 Hardware ATM TFB-00897709 3/4/2006 8:24:14 3/4/2006 16:06:48 CLOSED NWS 3 Software WIN 2000 TFB-00897713 3/4/2006 8:26:20 3/4/2006 16:39:04 CLOSED NWS 3 Software WIN NT TFB-00897717 3/4/2006 8:29:18 3/4/2006 12:13:42 CLOSED NWS 3 Network HQ TFB-00897657 3/4/2006 8:30:27 3/4/2006 13:17:55 CLOSED NWS 2 Network Branch TFB-00897720 3/4/2006 8:31:10 3/4/2006 14:27:54 CLOSED NWS 3 Software WIN 2000 TFB-00897725 3/4/2006 8:34:17 3/4/2006 12:18:41 CLOSED NWS 3 Hardware Personal Comp.. LINE DOWN จนท. แจง Home banking อากาEOS เขา Check ทีเครือง Web พบว ่ ่ : RAT19 ฝาย สท. ้ : PU270 Server COM695 user k ปราโมทย test ok : พหล ชัน 5 ติดตอคุณ จีรศกดิ์ โทร 0recovery data /user test ok ้ : PC Standalone / จอภาพมืด / Mทําการ จอ Monoter Dijital 1K63309 : สาขาบางกระบือ ติดตอคุณทรงศัก ไดเพิม ram 8 mb และเปลียน batter ่ ่ : Cash service / type 4722 s/n 41-ทําการเปลียนชุด mechanic ่ : รหัสสาขา024 สาขาเยาวราช เครืองป th k สรศักดิ์ ทําการแกไขเปลียน ่ ibm ่ : cashier / พิมพงานได 1-2 บรรทัด แชางพอเจตน ไดปรับแกนหัวพิมพ ตอน : GBS / พิมพงานทางดานซายของกรชางพอเจตน ไดปรับระยะหัวเข็ม ตอน : เครืองพิมพ 9055 ตําแหนงงาน CSO างไพสิทธิ์ ไดเปลียน motor ตอนนีใ ่ ช ่ ้ : ฝาย บจ.82 A-1 A Sample of Incident Dataset Figure A-1 shows a sample of incident data in spreadsheet (Excel). 2587 2586 2585 3514 2583 2581 2577 2563 2562 2561 2560 2558 2557 2556 4103 237 3998 811 4006 559 43 60 212 551 550 439 2884 2955 3149 2716 3150 1114 2926 3926 Incident Id. แจง ระบบ Push ลูกคา AIS account 0991208631 อาการ text : สาขาสํานักสีลม ติดตอคุณ ราตรี โทร05/04/2006 14. W AN W I N 2000 Lot us Not eCit rix Lot usN ot esC lien ร ะ บ บ ไฟ ฟ า Updat e P ass book W I N NT K B A NK N E T Dat a W arehous e B ranc h S erver HQ M S O f f ic e 2O O O A pp-NonP C B row ser M agnet ic S t rip Lot us Not es DB K -Cy ber B ank ing N ot ebook CDM LP M I nt ernet B ank in O S/2 M F A M RA V link C ardLink B ranch A pp. Open Date Open Time Resolve Date Resolve Time Incident Code Assigned Gr. TFB-00897731 3/4/2006 8:38:29 3/4/2006 14:55:23 CLOSED NWS 3 Hardware Personal Comp. A c c ept . แจง Notebook ผ user k พิศษฐ test ok ิ : อาคารสีลม ชัน8 ติดตอคุณอมรา IBMSD(theppitat) install windows ้ : 1403003A0956 // ชัน 19 อาคารรา Confirm by K. 
RO SS CA e-B oot h K -B iz Net NA V (P C) SQ C urrent IB IVR P ush I nf o.55 reinstall w2k\\user : ตําแหนงงาน PBO จอภาพเบลอ Mชางจํานงค ไดเปลียน monitor s/n 5 ่ : ฝาย ลส.Kripit.user te : PC =>ลงโปรแกรมใหมไมได User 5/04/06 14. D elS y B L E nt ry LM S -R eport M gn.DelSy TFB-00898801 4/4/2006 11:18:28 5/4/2006 14:53:44 CLOSED NWS 3 Hardware Printer TFB-00898806 4/4/2006 11:24:58 5/4/2006 14:55:50 CLOSED NWS 3 Software WIN 2000 TFB-00898807 4/4/2006 11:25:10 17/4/2006 17:09:29 CLOSED NWS 3 Software Lotus Notes DB TFB-00898808 4/4/2006 11:25:48 5/4/2006 14:56:12 CLOSED NWS 3 Software WIN 2000 TFB-00898818 4/4/2006 11:33:51 5/4/2006 10:43:37 CLOSED NWS 3 Hardware Personal Comp.55 reinstall w2k\\user : PHA15 ติดตอคุณ จันทรพันธ โทร 0re-install lotus notes R6 . on W e D CS DM S Hom e B ank ing CT R P eopleS of t F CD SSM M W I N 98 B ar Code F I CS F X on w eb B ill P ay m ent EDW SAF E CAT C T D (E -R eport ) K -P -G at ew ay CI P S I B M -E O S B r A pp-R e F in.
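The Pareto ordering in Figure A-2 can be reproduced by counting how often each extracted keyword occurs across the incident records and accumulating the percentages in descending order. The short Python sketch below illustrates the idea; the sample data and names are illustrative assumptions only, not the tooling actually used to produce the figure.

from collections import Counter

# Illustrative only: one list of extracted keywords per incident record.
incident_keywords = [
    ["ATM"], ["Printer"], ["Printer"], ["WAN"], ["Personal-Comp."],
    ["ATM"], ["Printer"], ["WIN-2000"], ["ATM"], ["Lotus-Notes-DB"],
]

counts = Counter(kw for record in incident_keywords for kw in record)
total = sum(counts.values())

cumulative = 0.0
print(f"{'Keyword':<16}{'Count':>7}{'Cum. %':>9}")
for keyword, count in counts.most_common():          # descending frequency
    cumulative += 100.0 * count / total
    print(f"{keyword:<16}{count:>7}{cumulative:>8.1f}%")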

A-3 Evaluation Results of Id3 Decision Tree Method
The evaluation results of the Id3 decision tree method are based on the testing documents of 4,909 records.

=== Run information ===
Scheme:     weka.classifiers.trees.Id3
Relation:   ID3-based Automatic Resolver Group Assignment
Instances:  4909
Attributes: Anti-Virus, App-NonPC, ATM, Bank-Reference, Bar-Code, Bill-Payment, BL-Entry, Br-App-Re, Branch, Branch-App., Browser, CA, Call-Center, CardLink, Cash-Connect, CashAdmin.on-We, CAT, CDM, CIPS, CIS, CMAS, CTD-(E-Report), CTR, Current, Data-Warehouse, DCS, DMS, e-Booth, EBPP, EDW, FCD, FICS, Fin.Accept.Cer., FX-on-web, Home-Banking, Host-on-Demand, HQ, IB, IBM-EOS, Info-Centrix-CT, Internet-Bankin, IVR, KBANKNET, K-BizNet, K-Cyber-Banking, K-P-Gateway, LI, LMS-Report-Mgn., LoanReview(Host, Lotus-Notes-DB, LotusNoteCitrix, LotusNotesClien, LotusNotesServe, LPM, Magnetic-Strip, MFA-MRA, MIS, MS-Office-2OOO, MS-Office-97, NAV-(PC), Notebook, OS/2, PA, PeopleSoft, Personal-Comp., Print-Server, Printer, Push-Info.DelSy, ROSS, SAFE, Saving-Account, Scanner, Server, Share-Server, SQ, SSMM, Statement, Transact-BP, Transact-CC&CL, Update-Passbook, Vlink, WAN, WIN-2000, WIN-98, WIN-NT, WIN-XP, Electrical-Supply, Assign-Group (class)
Test mode:  10-fold cross-validation

=== Classifier model (full training set) ===

Id3

ATM = 0
|  WAN = 0
|  |  Electrical-Supply = 0
|  |  |  Update-Passbook = 0
|  |  |  |  Printer = 0
|  |  |  |  |  Data-Warehouse = 0
|  |  |  |  |  |  LotusNoteCitrix = 0
|  |  |  |  |  |  |  ... (the tree continues through the remaining keyword attributes,
|  |  |  |  |  |  |      ending with e-Booth = 0: IE-AMS and e-Booth = 1: NWS; the complete
|  |  |  |  |  |  |      structure is reproduced as Figure A-3 in Section A-4)
|  |  |  |  |  |  LotusNoteCitrix = 1: NWS
|  |  |  |  |  Data-Warehouse = 1: IE-AMS
|  |  |  |  Printer = 1: NWS
|  |  |  Update-Passbook = 1: VEN
|  |  Electrical-Supply = 1: OS-EC
|  WAN = 1: VEN
ATM = 1: OS-EC

Time taken to build model: 1.57 seconds

=== Stratified cross-validation ===
=== Summary ===

Correctly Classified Instances        4567               93.0332 %
Incorrectly Classified Instances       342                6.9668 %
Kappa statistic                          0.8668
K&B Relative Info Score             404071.9478 %
K&B Information Score                 6120.7864 bits      1.2468 bits/instance
Class complexity | order 0            7425.008  bits      1.5125 bits/instance
Class complexity | scheme            11293.8523 bits      2.3006 bits/instance
Complexity improvement     (Sf)      -3868.8443 bits     -0.7881 bits/instance
Mean absolute error                      0.0456
Root mean squared error                  0.1526
Relative absolute error                 20.9496 %
Root relative squared error             46.2673 %
Total Number of Instances             4909

=== Detailed Accuracy By Class ===

TP Rate   FP Rate   Precision   Recall   F-Measure   Class
  0.324     0.003       0.759    0.324       0.454   EOS
  0.866     0.003       0.88     0.866       0.873   IE-AMS
  0.99      0.129       0.93     0.99        0.959   NWS
  0.884     0.01        0.961    0.884       0.921   OS-EC
  0.837     0.01        0.91     0.837       0.872   VEN

=== Confusion Matrix ===

    a    b     c    d    e   <-- classified as
   44    3    89    0    0 |  a = EOS
   10  110     7    0    0 |  b = IE-AMS
    0    9  3074    0   21 |  c = NWS
    4    3    89  903   22 |  d = OS-EC
    0    0    48   37  436 |  e = VEN
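The per-class figures in the Detailed Accuracy table follow directly from this confusion matrix: for each class, recall (the TP rate) is the diagonal count divided by its row total, precision is the diagonal count divided by its column total, and the F-measure is their harmonic mean. The small Python sketch below recomputes them as a check; it is an illustration, not part of the thesis tooling.

import numpy as np

classes = ["EOS", "IE-AMS", "NWS", "OS-EC", "VEN"]
cm = np.array([                      # rows = actual class, columns = predicted class
    [44,    3,   89,   0,    0],
    [10,  110,    7,   0,    0],
    [ 0,    9, 3074,   0,   21],
    [ 4,    3,   89, 903,   22],
    [ 0,    0,   48,  37,  436],
])

for i, name in enumerate(classes):
    tp = cm[i, i]
    recall = tp / cm[i, :].sum()      # row total = all actual instances of the class
    precision = tp / cm[:, i].sum()   # column total = all predictions of the class
    f_measure = 2 * precision * recall / (precision + recall)
    print(f"{name:<8} precision={precision:.3f} recall={recall:.3f} F={f_measure:.3f}")

accuracy = np.trace(cm) / cm.sum()    # 4567 / 4909 = 0.9303
print(f"Overall accuracy: {accuracy:.4f}")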

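The evaluation above was produced with Weka's Id3 learner under 10-fold cross-validation. As a rough, non-authoritative analogue, the sketch below runs an entropy-based decision tree with 10-fold cross-validation in scikit-learn; the entropy criterion only approximates ID3 on binary keyword features, and the toy feature matrix stands in for the real one-hot keyword attributes rather than the thesis dataset.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Toy one-hot keyword matrix; columns could stand for ATM, WAN, Printer,
# Branch and Branch-App. Repeated so that 10-fold CV has enough rows per fold.
X = np.array([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
] * 20)
y = np.array(["OS-EC", "VEN", "NWS", "IE-AMS", "NWS"] * 20)

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"Mean accuracy over 10 folds: {scores.mean():.4f}")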

A-4 An Extended Part of ID3 Decision Tree Results
Figure A-3 shows an extended part of the ID3 decision tree results.
[Figure A-3 reproduces the complete ID3 decision tree. The root node tests the ATM keyword (ATM = 1: OS-EC), followed in depth by WAN, Electrical-Supply, Update-Passbook, Printer, Data-Warehouse, and the remaining keyword attributes down to e-Booth. The leaves assign NWS for workstation and office-software keywords (e.g., Printer, Personal-Comp., WIN-2000/NT/XP/98, Lotus Notes, MS Office, Browser, Notebook, Scanner); IE-AMS for application-system keywords (e.g., Data-Warehouse, Saving-Account, SAFE, EDW, FICS, ROSS, PeopleSoft, Bill-Payment, DMS); OS-EC for ATM, CDM, and Electrical-Supply; VEN for WAN, Update-Passbook, CardLink, CAT, SSMM, and K-P-Gateway; and EOS for Internet-Bankin, Home-Banking, K-Cyber-Banking, IB, CTR, Print-Server, and Share-Server.]

FIGURE A-3 An Extended Part of ID3 Decision Tree

A-5 A Sample of ID3-Based Generating Rules
Figure A-4 shows a sample of ID3-based generating rules.
                                   Attributes (extracted keywords)                                      Class
KW1  KW2  KW3       KW4       KW5      KW6        KW7        KW8       KW9      KW10    KW11       KW12   Assign
ATM  WAN  E-Supply  Passbook  Printer  D-Warehou  LotusNote  P-Comput  Win2000  Branch  Branch-Ap  CDM    Groups
 1    0    0         0         0        0          0          0         0        0       0          0     OS-EC
 0    1    0         0         0        0          0          0         0        0       0          0     VEN
 0    0    1         0         0        0          0          0         0        0       0          0     OS-EC
 0    0    0         1         0        0          0          0         0        0       0          0     VEN
 0    0    0         0         1        0          0          0         0        0       0          0     NWS
 0    0    0         0         0        1          0          0         0        0       0          0     IE-AMS
 0    0    0         0         0        0          1          0         0        0       0          0     NWS
 0    0    0         0         0        0          0          1         0        0       0          0     NWS
 0    0    0         0         0        0          0          0         1        0       0          0     NWS
 0    0    0         0         0        0          0          0         0        1       1          0     IE-AMS
 0    0    0         0         0        0          0          0         0        1       0          0     NWS
 0    0    0         0         0        0          0          0         0        0       0          1     OS-EC
---  ---  ---       ---       ---      ---        ---        ---       ---      ---     ---        ---    ---

FIGURE A-4 A Sample of ID3-Based Pattern Kept in Knowledge Database

The IF-THEN rules could be presented as in the following:
1. IF keyword (KW) = ‘ATM’ THEN Assigned Group is OS-EC ELSE go to 2,
2. IF keyword (KW) = ‘WAN’ THEN Assigned Group is VEN ELSE go to 3,
   ………………………
10. IF keyword (KW) = ‘Branch’ AND ‘Branch-App’ THEN Assigned Group is IE-AMS ELSE go to 11,
11. IF keyword (KW) = ‘Branch’ THEN Assigned Group is NWS ELSE go to 12,
12. IF keyword (KW) = ‘CDM’ THEN Assigned Group is OS-EC ELSE go to 13,
   ………………………
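Ordered IF-THEN rules of this form can be applied directly as a small rule engine. The following is a minimal sketch in Python, assuming the rules are stored in the knowledge database as (required keyword set, resolver group) pairs in priority order; the function name, the fallback group, and the example rule list are illustrative only, not the thesis implementation.

# Minimal sketch (not the thesis implementation): apply ID3-derived IF-THEN
# rules, stored in priority order, to assign a resolver group.
RULES = [
    ({"ATM"}, "OS-EC"),
    ({"WAN"}, "VEN"),
    ({"Electrical-Supply"}, "OS-EC"),
    ({"Update-Passbook"}, "VEN"),
    ({"Printer"}, "NWS"),
    ({"Data-Warehouse"}, "IE-AMS"),
    ({"Branch", "Branch-App"}, "IE-AMS"),   # more specific rule first
    ({"Branch"}, "NWS"),
    ({"CDM"}, "OS-EC"),
]

DEFAULT_GROUP = "SLS-REVIEW"   # hypothetical fallback when no rule fires


def assign_resolver_group(extracted_keywords):
    """Return the resolver group of the first rule whose keywords all occur."""
    keywords = set(extracted_keywords)
    for required, group in RULES:
        if required <= keywords:           # all required keywords are present
            return group
    return DEFAULT_GROUP                   # manual review by the service desk


if __name__ == "__main__":
    print(assign_resolver_group({"Branch", "Branch-App"}))  # -> IE-AMS
    print(assign_resolver_group({"Printer"}))               # -> NWS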

APPENDIX B

ITIL-BASED KMRCA IT SERVICE DESK PROCESS

B-1 ITIL-Based Incident Management Process
An incident is any event that deviates from the normal operation of a service and that causes, or may cause, an interruption to, or a reduction in, the quality of that service. The goal of Incident Management is to recover standard service operation as quickly as possible.

The scope of the Incident Management process includes:
(a) Opening an incident record
(b) Updating the incident record throughout the process to reflect its status
(c) Assigning the incident to an incident resolver
(d) Analyzing the incident and performing incident determination
(e) Implementing a workaround or resolution for the incident to perform recovery of the service
(f) Monitoring incident (request) queues to ensure that all incidents are resolved within committed service levels, and reprioritizing, reassigning, or escalating as necessary
(g) Updating the incident knowledge database to assist with future incident and problem investigation and diagnosis
(h) Closing the incident record
(i) Calling the Handle and Control Problems operational process where the root cause of the incident or problem has not been identified

It may be that, as a result of incident analysis and resolution, the incident cause is discovered. If this is not the case, and if further investigation is justified in respect of cost and effort, the Problem Management process is solicited and a problem record is raised; a problem is defined as the unknown underlying cause of one or more incidents. The process defines activities to investigate the problem. The status of the problem is transformed to a known error when both the root cause is known and a workaround or a permanent resolution has been identified. Note that during the implementation of the workaround or resolution for the incident, the Incident Management process is not directly responsible for the implementation of the solution, but it will monitor and record the progress and results of the solution implementation. Figure B-1 shows the Incident Management Process Flow.

FIGURE B-1 IT Incident Management Process Flow
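The process scope listed above (open, update, assign, analyze, resolve, update the knowledge database, close) implies a simple lifecycle for each incident record. The sketch below models that record and its status transitions in Python; the field names mirror the incident dataset in Appendix A, but the class itself is only an illustration of the flow, not the system's actual data model.

# Illustrative model of an incident record moving through the Incident
# Management process (open -> assigned -> resolved -> closed).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class IncidentRecord:
    incident_id: str
    description: str
    severity: int                      # 1 = critical ... 4 = low
    system: str
    component: str
    status: str = "OPEN"
    assigned_group: Optional[str] = None
    opened_at: datetime = field(default_factory=datetime.now)
    resolved_at: Optional[datetime] = None
    resolution: Optional[str] = None

    def assign(self, group: str) -> None:
        self.assigned_group = group
        self.status = "ASSIGNED"

    def resolve(self, resolution: str) -> None:
        self.resolution = resolution
        self.resolved_at = datetime.now()
        self.status = "RESOLVED"

    def close(self, knowledge_db: list) -> None:
        # Scope item (g): keep the resolution for future incident diagnosis.
        knowledge_db.append((self.description, self.resolution))
        self.status = "CLOSED"


if __name__ == "__main__":
    kb: list = []
    inc = IncidentRecord("TFB-00897593", "ATM down at branch", 3, "Hardware", "ATM")
    inc.assign("OS-EC")
    inc.resolve("Line restored by on-site engineer")
    inc.close(kb)
    print(inc.status, len(kb))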

Narrative of Incident Management Process
The following Step 1 through Step 7 are performed by the Bank's help desk, called FLS (first level support); Step 8 through Step 31 are performed by the IT service desk outsourcing, called SLS (second level support); and the remaining steps are performed by the Resolver Groups, called TLS (third level support), as follows:
1. Open Incident Record Procedure
Refer to the Open Incident Record procedure to open an incident record for the incident information.
2. Major Incident?
The Incident Policy defines incident severity 1 as a Major incident. Follow the policy to determine if the incident is a major incident.
(a) If it is ‘Yes’, proceed to Handle Major Incident Procedure.
(b) If it is ‘No’, proceed to IT Outsourcing Scope?
3. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure to assign a major incident owner to handle all required notifications and escalations.
4. IT Outsourcing Scope?
Determine whether the incident is an IT incident and whether its description is within the IT outsourcing scope, referring to the IT outsourcing contract.
(a) If it is ‘Yes’, proceed to Assign Incident to SLS Resolver.
(b) If it is ‘No’, proceed to Assign Incident to Bank Resolver.
5. Assign Incident to Bank Resolver
Assign a non-IT incident to a Bank resolver. Proceed to End.
6. Assign Incident to SLS Incident Resolver
Assign an IT incident to the SLS Resolver who is responsible for resolving IT incidents of this type, and Update Incident Record with Current Status: update the incident record to indicate that the incident has been assigned to a SLS Resolver and is awaiting resolution until the incident is closed.

7. Review Incident Record for Completeness
Review the incident record to ensure that its contents are complete. The incident information includes:
(a) Incident ID
(b) When the incident was opened (date and time)
(c) Identified incident severity (1, 2, 3, or 4)
(d) Incident status (open/ assigned to/ resolving steps/ closed)
(e) System, component, item failure
(f) Caller/Requester (name/ location/ contact no.)
(g) Incident descriptions
(h) SLS owner (who/ when)
(i) TLS owner (who/ when)
8. IT Outsourcing Scope?
Check if the incident is in the IT outsourcing scope.
(a) If it is ‘Yes’, proceed to Additional Information Needed?
(b) If it is ‘No’, proceed to Indicate Incident Type.
9. Indicate Incident Type
If the incident was initially assigned wrongly due to a wrong scope and/or a wrong resolver, indicate the request type of the incident and, if known, the details of to whom the incident would most appropriately be reassigned, and request reassignment.
10. Request for Reassignment
Request FLS to review the scope of the incident and reassign it for the provided reasons.
11. Additional Information Needed?
Determine if additional information is needed to complete the incident record.
(a) If it is ‘Yes’, proceed to Contact Appropriate Parties to get More Information.
(b) If it is ‘No’, proceed to Validate Initial Severity.

12. Validate Initial Severity
Refer to the severity defined by policy: severity 1 is a critical incident, severity 2 is a high incident, severity 3 is a normal incident, and severity 4 is a low incident. Validate the initially assigned severity according to the severity policy.
13. Major Incident?
Determine whether the updated incident is a major incident based on the major incident policy.
(a) If it is ‘Yes’, proceed to Handle Major Incident Procedure.
(b) If it is ‘No’, proceed to Perform Incident Analysis Procedure.
14. Contact Appropriate Parties to get More Information
Contact the most appropriate parties to get more information.
15. Required Information Obtained?
Check with the contacted parties whether the required information has been obtained. Policy should dictate how many attempts, or how long, the incident resolver should spend trying to obtain additional information before this becomes an issue.
(a) If it is ‘Yes’, proceed to Update Incident Record with Any Additional Information.
(b) If it is ‘No’, proceed to Document Issue.
16. Update Incident Record with Any Additional Information
Update the incident record with any additional information.
17. Document Issue
Document the issue when the required information is not received on time.
18. Perform Escalation
Handle escalations of issues associated with requests. SLS or other personnel may escalate request handling at any time by notifying the higher level of the contact party that the issue was not resolved, and documenting the unsuccessful resolution.
19. Issue Resolved?
Check if the issue is resolved.
(a) If it is ‘Yes’, proceed to Update Incident Record with Any Additional Information.
(b) If it is ‘No’, proceed to Close Incident?

(a) If it is ‘Yes’. searching similar cases and getting their resolutions of the previous incident in the knowledge database. Search Required Information from Knowledge-Based The knowledge database is required to search the required information to resolve the incident. It may be more effective if the same resolver handles all related incidents. Perform Incident Analysis Procedure Refer to the Incident Analysis procedure to gather all required information about the incident and related incidents and to perform incident determination. In particular. TLS Required? Determine to whether the TLS resolver groups are required to resolve the assigned incident. Knowledge-Based Required? Determine if the Knowledge-based is requited to resolve the incident. compare the incident to the database of incident records to determine if this is a repeat occurrence of a previous incident. 21. 26.95 20. Attempt to Resolve Incident Attempt to resolve the incident with SLS resolve’s skills and availability. It needs to assign a Major Incident owner who handles all required notifications and escalations the request until the major incident is complete. 23. (b) If it is ‘No’. proceed to Perform incident Determination Procedure 25. 22. investigation and diagnosis activities. (b) If it is ‘No’. 24. proceed to Assign/ Reassign incident to Appropriate Incident Resolver Group. proceed to Search Required Information from Knowledge-Based. proceed to Attempt to Resolve Incident. (a) If it is ‘Yes’. Perform Incident Determination Procedure Refer to Perform Incident Procedure . The determination of resolver groups whom it should be assigned to. Handel Major Incident Procedure Refer to the Handle Major Incident procedure.

27. Close Incident?
For an actual incident, determine if the incident should be closed due to the lack of information required to proceed with resolution of the incident.
(a) If it is ‘Yes’, proceed to Inform Requester that Incident will be Closed.
(b) If it is ‘No’, proceed to Take Incident Out of SLA Criteria.
28. Inform Requester that Incident will be Closed
If the incident should be closed due to the lack of information needed to proceed with resolution of the incident, inform the Requester that the incident will be closed.
29. Update Incident Record with its Close
Update the incident record to indicate that the required information could not be obtained and that the incident will be closed. Proceed to End.
30. Take Incident Out of SLA Criteria
If the incident should not be closed due to lack of information needed to proceed with resolution of the incident, take the incident out of the SLA criteria so that it will not be included in SLA attainment reports. Return to Contact Appropriate Parties to obtain the additional information required to proceed with resolution of the incident.
31. Assign/ Reassign Incident to Appropriate Incident Resolver Group
Determine if the result of Incident Analysis reassigned the incident to a different Resolver Group.
(a) If it is ‘Yes’, return to Assign Incident to Incident Resolver to assign the incident to a new Incident Resolver.
(b) If it is ‘No’, proceed to Actual Incident? Note that the incident is assigned and/or reassigned to the most appropriate TLS incident resolver based on skill level and availability within the TLS Resolver Group.
32. Review for Corrective Assignment
Review the assigned incident for the correct resolver group.
33. Correct Assignment?
Determine if the review indicates that the incident has been assigned correctly.
(a) If it is ‘Yes’, proceed to Perform Incident Analysis Procedure to analyse the incident.
(b) If it is ‘No’, proceed to Indicate Request Type and Reassignment Details.

34. Indicate Request Type and Reassignment Details
If there is an incorrect assignment, indicate the request type and provide reassignment details, such as who is appropriate to resolve the incident.
35. Request SLS for Reassignment
Request reassignment; SLS will review and reassign.
36. Perform Incident Analysis Procedure
Refer to the Incident Analysis procedure to gather all required information about the incident and related incidents and to perform incident determination, investigation, and diagnosis activities.
37. Knowledge-based Required?
Determine if the knowledge base is required to get the required information.
(a) If it is ‘Yes’, proceed to Search Required Information from Knowledge-Based.
(b) If it is ‘No’, proceed to Attempt to Resolve Incident.
38. Search Required Information from Knowledge-Based
Search the required information from the knowledge database.
39. Attempt to Resolve Incident
Attempt to resolve the incident based on skills and availability.
40. Close Incident?
Determine whether to close the incident when processing of the incident has been completed.
(a) If it is ‘Yes’, proceed to Close Incident Procedure.
(b) If it is ‘No’, proceed to Recovery Required?
41. Recovery Required?
If the incident is an actual incident, determine if recovery from the incident is required prior to implementation of a permanent solution.
(a) If it is ‘Yes’, proceed to Perform Incident Recovery.
(b) If it is ‘No’, proceed to Handle and Control Problems.
42. Perform Incident Recovery
If recovery of the incident is required prior to permanent resolution of the incident, proceed with Perform Incident Recovery as follows:
(a) Review the Recovery Plan with affected parties
(b) Check if the required recovery is an entitlement
(c) Check if a service request is required

(d) Determine whether to request a change
(e) Update the incident record to indicate whether the recovery result was successful or unsuccessful
43. Was Incident Recovered?
Determine if Perform Incident Recovery was successful in recovering from the incident.
(a) If it is ‘Yes’, proceed to Incident Permanently Resolved or Agreed Workaround Applied?
(b) If it is ‘No’, proceed to Handle and Control Problems (RCA).
44. Incident Permanently Resolved or Agreed Workaround Applied?
Determine if Perform Incident Recovery provided a permanent resolution for the incident. That is, is the recovery action or bypass acceptable as a permanent solution?
(a) If it is ‘Yes’, proceed to RCA Required?
(b) If it is ‘No’, proceed to the Problem Management Process. Refer to the Problem Management process to develop a permanent solution for the problem. Note that a problem is the unknown underlying cause of one or more incidents; the status of the problem is transformed to a known error when both the root cause is known and a temporary workaround or a permanent resolution has been identified.
45. RCA Required?
Follow the policy to determine if a root cause analysis (RCA) is required for the recovered incident for which the recovery action is acceptable as a permanent resolution.
(a) If it is ‘Yes’, proceed to Handle and Control Problems (RCA).
(b) If it is ‘No’, proceed to Add Resolution to Knowledge-Based.
46. Add Resolution to Knowledge-Based
Add the resolution to the knowledge database to assist with future incident and problem investigation and diagnosis. Proceed to Close Incident Record Procedure.
47. Close Incident Record Procedure
When processing of the incident has completed, either successfully or unsuccessfully, proceed according to the Close Incident Record procedure to close the associated incident record. Proceed to End.
48. End
End of the Incident Management Process.
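Steps 22 and 38 above search the knowledge database for similar previous cases and their resolutions. The following is a minimal keyword-overlap sketch of such a lookup, given only as an assumption for illustration; the thesis's actual searching knowledge function is described in the main chapters.

# Minimal sketch of a knowledge-base lookup: rank stored incidents by
# keyword overlap (Jaccard similarity) with the new incident description.
knowledge_db = [
    ({"ATM", "line", "down"}, "Network line reset by vendor"),
    ({"Printer", "blurred"}, "Replaced printer head"),
    ({"WIN-2000", "blue", "screen"}, "Reinstalled Windows 2000"),
]


def search_similar(new_keywords, top_n=3):
    """Return the top-N previous resolutions ranked by Jaccard similarity."""
    new_keywords = set(new_keywords)
    scored = []
    for stored_keywords, resolution in knowledge_db:
        union = new_keywords | stored_keywords
        score = len(new_keywords & stored_keywords) / len(union) if union else 0.0
        scored.append((score, resolution))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]


if __name__ == "__main__":
    for score, resolution in search_similar({"ATM", "down"}):
        print(f"{score:.2f}  {resolution}")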

Figure B-2 shows the Open Incident Record Procedure.

FIGURE B-2 Open Incident Record Flow

Narrative of Open Incident Record Procedure
1. Incident Record Already Open?
Check if an incident record has already been opened for the incident.
(a) If it is ‘Yes’, proceed to Return.
(b) If it is ‘No’, proceed to Review Open Incident Policy.
2. Review Open Incident Policy
Review the Open Incident policy, in particular the details for items such as:
(a) Who is authorized to open incident records?
(b) What information is required when opening an incident?
3. Open an Incident Record
Open an incident record for the incident with the required information. The required information to be included in an incident record is:
(a) Incident ID
(b) Date and time when the incident record was opened

(c) Incident description
(d) Outage detail, in particular the failing component/resource and the date/time the incident occurred
(e) Incident severity based on business impact
(f) Incident requester (requester's name, location, and contact no.)
(g) Incident status (open/ assigned resolver/ necessary resolving steps/ closed)
4. Gather Required Information
Gather the required information, based on policy, to complete the incident record.
5. Entitle?
Follow the policy to determine if the Requester is entitled to raise this incident. The incident shall be checked against the service contracts, in particular the IT outsourcing contract.
(a) If it is ‘Yes’, proceed to Match Severity to Incident.
(b) If it is ‘No’, proceed to Document Entitle Failure Detail.
6. Document Entitle Failure Detail
If the Requester was not entitled to raise this incident, document the details of the entitlement failure in preparation for calling Handle Service Entitlement Failure.
7. Handle Service Entitlement Failure
Handle Service Entitlement Failure resolves entitlement failures for requested services and updates request records to reflect the disposition of entitlement failures. It may propose an alternative for entitlement with authorized approval.
8. Continue?
Determine if the decision was made in Handle Service Entitlement Failure to continue with the incident.
(a) If it is ‘Yes’, proceed to Assign Severity to Incident.
(b) If it is ‘No’, proceed to Return.
9. Assign Severity to Incident
Assign a severity to the incident based on the severity definition and its policy. Proceed to Return.
10. Return
Return to the Incident Management Process.
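Steps 3 and 4 of this procedure list the information required before an incident record is opened. The small sketch below checks those required fields before creating the record; the field names simply restate the list above, and the function is hypothetical, not the system's actual implementation.

# Illustrative check of the required fields from the Open Incident Record
# procedure before an incident record is opened.
REQUIRED_FIELDS = [
    "incident_id", "opened_at", "description", "outage_detail",
    "severity", "requester", "status",
]


def open_incident_record(data: dict) -> dict:
    """Open an incident record only if every required field is present."""
    missing = [f for f in REQUIRED_FIELDS if not data.get(f)]
    if missing:
        raise ValueError(f"Cannot open incident record; missing: {missing}")
    record = dict(data)
    record["status"] = "OPEN"
    return record


if __name__ == "__main__":
    record = open_incident_record({
        "incident_id": "TFB-00900001",
        "opened_at": "2006-04-05 09:00",
        "description": "Branch printer prints only two lines",
        "outage_detail": "Passbook printer, branch 024",
        "severity": 3,
        "requester": "Branch 024 teller",
        "status": "NEW",
    })
    print(record["status"])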

Figure B-3 shows the Handle Major Incident Procedure.

FIGURE B-3 Handle Major Incident Flow

Narrative of Handle Major Incident Procedure
1. Gather Information for Major Incident
If the incident is associated with a major incident, collect all related information regarding the incident, such as:
(a) Services/ applications/ resources affected
(b) Affected service owners
(c) Estimated duration of any associated outages
2. Major Incident Criteria Met?
Determine if the criteria for conducting an incident review have been met, based on major incident severity 1, which has the greatest business impact in terms of the availability of a specific service, application, or network.

proceed to Inform Requester that Incident Not Major Incident with Reasons 3. (a) If it is ‘Yes’. (c) If the service is not actually down or severely degraded. Major Incident Review Required? Determine if the criteria for conducting an incident review have been met based on incident severity 1 that business impact in a particular the availability of specific service. 7. 8. take whatever actions are necessary to confirm whether or not the associated service is actually down or is severely degraded. proceed to Assign major incident Owner (b) If it is ‘No’. Assign Major Incident Owner Assign a major incident owner who handles all required notifications and escalations until the resolution is complete. (b) If the service is actually down. Coordinate Recovery for Major incident. or network. application. Coordinate relevant resources for major incident recovery and effectively manage the recovery activities to minimize the duration of the incident. notify the appropriate service providers so that they may handle the incident. proceed to Inform Requester that Incident Not Major Incident with Reasons . urgently provide notification to all affected parties of the service outage (management team and service recovery teams) by short massaging and or email with an ongoing status as required. proceed to Assign major incident Owner (b) If it is ‘No’. Major Incident Notification Perform the major incident notification as the following: (a) Analyze the incident in detail. 4.102 (a) If it is ‘Yes’. Inform Requester that Incident Not Major Incident with Reasons Inform the Requester that the incident is not a major incident with reason why the incident was not assigned to severity 1. 5. 6. Perform Problem Management Process Perform Problem Management process to permanently resolve.

Notify All Parties Inform all participants either that a major incident review is not needed or that the criteria for conducting an incident review have not been met. . Proceed to Return.103 9. severity and associated configuration data based on its component. 11. Perform Major Incident Review Assemble appropriate parties in preparation to conduct a review of an incident. 10. its symptoms. Collect Incident Symptom and Configuration Item Impact Info Collect all available data about the incident. Return Return to the Incident Management process Figure B-4 shows Perform Incident Analysis Procedure FIGURE B-4 Perform Incident Analysis Flow Narrative of Perform Incident Analysis Procedure 1.

2. Identify Any Related Occurrence Identify any related occurrences of the incident and analyze them against similar previous cases.
3. Need to Reproduce Incident? Determine if there is a need to reproduce the incident to obtain additional information and to understand the exact environment in which the incident occurred. (a) If it is 'Yes', proceed to Reproduce Proper Incident. (b) If it is 'No', proceed to Analyse Available Incident Data.
4. Reproduce Proper Incident If there is a need to reproduce the incident to gather additional insight about it, attempt to reproduce the incident.
5. Incident Reproducible? Determine if the incident is reproducible. (a) If it is 'Yes', proceed to Update Incident Record with Additional Details. (b) If it is 'No', proceed to Analyse Available Incident Data.
6. Update Incident Record with Additional Details Update the incident record with the additional details.
7. Analyse Available Incident Data Analyze all available incident data to validate that the incident was assigned to the correct resolver group.
8. Correct Assignment? Determine if the incident was assigned to the correct resolver group, based on the review of the incident record and all incident data. (a) If it is 'Yes', proceed to the Perform Incident Determination Procedure. (b) If it is 'No', proceed to Indicate Request Type.
9. Indicate Request Type If the incident record was incorrectly assigned, indicate the request type and document the reassignment details in preparation for calling the reassign request.
10. Request for Reassignment Request reassignment to reassign the incident to the correct resolver group, and return to Assign/Reassign Incident to Appropriate Incident Resolver to assign the incident to a new incident resolver.
11. Perform Incident Determination Procedure If the incident was assigned to the correct resolver, proceed to the Perform Incident Determination procedure to continue with incident analysis and development of a recovery plan.
12. Return Return to the Incident Management Process
Figure B-5 shows the Incident Determination Procedure
FIGURE B-5 Incident Determination Flow

Narrative of Incident Determination Procedure
1. Initiate Incident Determination Analyze all available incident data and initiate normal incident determination activities.
2. Actual Incident? Determine if the reported incident is indeed an actual incident. (a) If it is 'Yes', proceed to Determine Incident Impact. (b) If it is 'No', proceed to Update Incident Record to Indicate that Incident is Not an Actual Incident.
3. Update Incident Record to Indicate that Incident is Not an Actual Incident Update the incident record to indicate that the incident is not an actual incident. Proceed to Action Required?
4. Action Required? Determine if any action is required. (a) If it is 'Yes', proceed to Perform Appropriate Action. (b) If it is 'No', proceed to Return.
5. Perform Appropriate Action Perform the appropriate action for the non-actual incident and check whether notification is required.
6. Notification Required? Determine if notification is required. (a) If it is 'Yes', proceed to Notify Appropriate Parties to Perform Action. (b) If it is 'No', proceed to Return.
7. Notify Appropriate Parties to Perform Action Notify the appropriate parties to perform the action for the non-actual incident. Proceed to Return.
8. Determine Incident Impact Determine the impact of the incident on particular crucial services. It should be identified across all single points of failure: components, applications, and networks.
9. Determine to Adjust Severity Determine whether to adjust the assigned severity. A negotiable severity, either up or down, will be notified to the FLS to settle by negotiation.
10. Major Incident? Based on the Major Incident policy, determine if the incident is a major incident. (a) If it is 'Yes', proceed to the Handle Major Incident Procedure. (b) If it is 'No', proceed to Recovery Required?
11. Handle Major Incident Procedure Refer to the Handle Major Incident procedure to assign a major incident owner to the incident and to handle all required notifications and escalations.
12. Recovery Required? Determine if any recovery is required for the incident. (a) If it is 'Yes', proceed to Perform Backup and Recovery. (b) If it is 'No', proceed to Update Incident Record with Current Status.
13. Perform Backup and Recovery Perform recovery according to the Backup and Recovery procedure.
14. Update Incident Record with Current Status Update the incident record with the current status.
15. Return Return to the Incident Management Process

Figure B-6 shows the Close Incident Record Procedure
FIGURE B-6 Close Incident Record Flow
Narrative of Close Incident Record Procedure
1. Review Close Incident Policy Review the Close Incident policy for the account. The policy shall define: (a) Who can close incident records (b) Required closure concurrence, if any (c) Required notifications, if any
2. Closure Concurrence Required? Follow the policy to determine if concurrence to close the incident is required. (a) If it is 'Yes', proceed to Obtain Closure Concurrence from Appropriate Parties. (b) If it is 'No', proceed to Close Incident Record.
3. Obtain Closure Concurrence from Appropriate Parties If concurrence to close the incident is required, follow the Close Incident policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained? Determine if concurrence to close the incident was obtained from all appropriate parties. (a) If it is 'Yes', proceed to Close Incident Record. (b) If it is 'No', proceed to Document Closure Issue.
5. Close Incident Record Close the incident record, ensuring that the incident record contains all the required information, including the closing status, code, and recovery and resolution dates and times.
6. Notification Required? Follow the Notification policy to determine if notification that the incident has been closed is required. (a) If it is 'Yes', proceed to Notify Appropriate Parties. (b) If it is 'No', proceed to Return.
7. Notify Appropriate Parties If notification is required, follow the Notification policy to notify the appropriate parties that the incident has been closed and of its closing status. The following personnel are to be notified that a severity 1 incident has been closed: (a) Incident Coordinator (b) Requester/User (c) Designated customer incident liaison
8. Return Proceed to Return.

B-7 ITIL-Based Problem Management Process
The scope of the Problem Management process includes: (a) Reviewing problem and incident trend analysis (b) Opening a problem record (c) Performing root cause analysis (RCA) (d) Assigning the problem to the appropriate problem resolver (e) Developing a permanent resolution plan (f) Implementing the permanent resolution plan (g) Closing the problem record
Figure B-7 shows the Problem Management process flow
FIGURE B-7 IT Problem Management Process Flow
Narrative of Problem Management Process: There are two purposes of the problem management process. One is to perform preventive action by analyzing problem and incident trends to determine whether to provide an action plan (path 'ongoing'). The other is to handle each problem as required from the incident management process (path 'as required for each problem').

The Ongoing path includes one procedure:
1. Review Problem and Incident Trend Analysis Procedure Refer to the Review Problem and Incident Trend Analysis procedure to analyse negative trends in the incident and problem processes. It will determine whether to provide an action plan in terms of preventive action. Proceed to End.
The As Required for Each Problem path includes the following:
1. Open Problem Record Procedure Refer to the Open Problem Record procedure.
2. Request for RCA? Determine if the problem was opened as a request to perform a Root Cause Analysis for a negative process trend. (a) If it is 'Yes', proceed to the Perform Root Cause Analysis Procedure. (b) If it is 'No', proceed to the Assign to Problem Resolver Procedure.
3. Perform Root Cause Analysis Procedure Refer to the Perform Root Cause Analysis procedure. Proceed to End.
4. Assign to Problem Resolver Procedure Refer to the Assign to Problem Resolver procedure.
5. Develop Permanent Resolution Plan Procedure Refer to the Develop Permanent Resolution Plan procedure.
6. Was Resolution Developed? Determine if the resolution plan was developed. (a) If it is 'Yes', proceed to the Implement Permanent Resolution Plan Procedure. (b) If it is 'No', proceed to the Close Problem Record Procedure.
7. Implement Permanent Resolution Plan Procedure Refer to the Implement Permanent Resolution Plan procedure.
8. Was Resolution Successful? Determine if the resolution was successful. (a) If it is 'Yes', proceed to the Close Problem Record procedure. (b) If it is 'No', proceed to Proceed to Another Effective Resolution Plan.

9. Proceed to Another Effective Resolution Plan If the resolution plan was implemented unsuccessfully, document the issue and proceed to another effective resolution plan. Proceed to the Develop Permanent Resolution Plan Procedure.
10. Close Problem Record Procedure Refer to the Close Problem Record procedure.
11. End End of Problem Management Process
Figure B-8 shows the Review Problem and Incident Trend Analysis Procedure
FIGURE B-8 Review Problem and Incident Trend Analysis (flow: Start; 1. Review Problem and Incident Analyses; 2. Preventive Action Required?; 3. Document Required for Preventive Action; 4. Review Action Plan in Regular Management Meeting; 5. Action Plan Required?; 6. Develop Action Plan; 7. Handle Action for Completion; Return)
Narrative of Review Problem and Incident Trend Analysis
1. Review Problem and Incident Trend Analysis Review problem and incident trend analysis to proactively determine potential problems that have not yet been identified by the occurrence of an incident, or recurring data that might indicate an unidentified problem.
2. Preventive Action Required? Determine whether specific targeted actions need to be taken to investigate, resolve and prevent a potential problem, based on the outcome of data gathering and trend analysis. (a) If it is 'Yes', proceed to Document Required for Preventive Action. (b) If it is 'No', proceed to End.
3. Document Required for Preventive Action Document the requirement for preventive action together with the trend analysis output. Proceed to Review Action Plan in Regular Management Meeting.
4. Review Action Plan in Regular Management Meeting Review the action plan information with management at regular review meetings to ensure that the information is understood and acted on.
5. Action Plan Required? Does the review indicate that a further action plan is required to handle any service issues? (a) If it is 'Yes', proceed to Develop Action Plan. (b) If it is 'No', proceed to Return.
6. Develop Action Plan Develop the required action plan.
7. Handle Action Plan Implementation for Completion Handle the action plan implementation to monitor the implementation and completion of the action plan. Notify the preventive action results, covering emerging trends and possible improvement areas, to the affected services.
8. Return Return to the Problem Management Process

Figure B-9 shows the Open Problem Record Procedure
FIGURE B-9 Open Problem Record Flow
Narrative of Open Problem Record
1. Problem Record Already Open? Check if a problem record has already been opened for the incident. (a) If it is 'Yes', proceed to Update Problem Record which Is Already Open. (b) If it is 'No', proceed to Review Open Problem Policy.
2. Update Problem Record which Is Already Open Update the problem record to note that the problem is already opened. Proceed to Return.
3. Review Open Problem Policy Review the Open Problem policy, in particular the details for items such as: (a) Who is authorized to open problem records? (b) What information is required when opening a problem?
4. Open Problem Record Open a problem record for the problem with the required information. The information required to open a problem record is as follows: (a) Incident details gathered and recorded in the incident record (b) Associated incidents
5. Multiple Incidents? Determine if the problem is associated with multiple incidents.
6. Coordinate Incident to Problem Record Coordinate the incident(s) to the problem record.
7. Gather Required Information Gather the required information based on policy to complete the problem record.
8. Match Severity to Incident Match the problem severity to the problem based on the severity definition.
9. Entitle? Follow the policy to determine if the problem requester is entitled to raise this problem; the problem shall be checked against the service contracts, in particular the IT outsourcing contract. If the requester is not entitled, proceed to Document Entitlement Failure Detail.
10. Document Entitlement Failure Detail If the Requester was not entitled to raise this problem, document the details of the entitlement failure in preparation for handling the service entitlement failure.
11. Handle Service Entitlement Failure Handle Service Entitlement Failure resolves entitlement failures for requested services and updates request records to reflect the disposition of the entitlement failures. It may propose an alternative for entitlement with authorized approval.
12. Continue? Determine if the decision was made in Handle Service Entitlement Failure to continue with the problem; if not, proceed to Return.
13. Return Return to the Problem Management Process

116 Figure B-10 shows Perform Root Cause Analysis Procedure FIGURE B-10 Perform Root Cause Analysis Flow Narrative of Perform Root Cause Analysis 1. Gather Problem Related RCA Gather all available problem data related to RCA. Assign RCA Owner Assign an ownership for the Root Cause Analysis. 2. The owner is responsible for managing the Root Cause Analysis through its completion. including: (a) The problem record (b) Any details about associated service outage .

Take Appropriate Actions Take whatever actions are necessary to complete the Root Cause Analysis on schedule. In particular. Monitor RCA Monitor the progress of the Root Cause Analysis to ensure that it is on schedule. proceed to Document Final RCA Result (b) If it is ‘No’. user environments. (b) Exception events 4. Return to Monitor Root Cause Analysis to continue to monitor the progress of the Root Cause Analysis. 7. patterns of occurrence. etc. Action Required? Determine if any action is required to complete the Root Cause Analysis. Return in parallel to Analyze Problem and Monitor RCA to complete the analysis. 9.117 Steps 3 through 5 and Steps 6 through 8 are performed in parallel. look for common: (a) Symptoms. . Identify Contribution Factors Based on the problem data analysis. (a) If it is ‘Yes’. identify any factors that contributed to the problem. proceed to Analysis Complete? 8. prepare an interim report that documents the Root Cause Analysis findings to date. Proceed to Analysis Complete? 6. proceed to Take Appropriate Actions. proceed to Prepare Interim RCA Result 10. (a) If it is ‘Yes’. 3. (b) If it is ‘No’. Determine Probable Cause Choose the most likely problem cause or causes from the contributing factors. Prepare Interim RCA Result If the analysis is not yet complete. Analysis Complete? Determine if the Root Cause Analysis has been completed. 5. Analyse Problem Analyze the problem data.

118 11. Document Final RCA Result If the analysis is complete, document the results of the Root Cause Analysis. Include findings from the problem data analysis, explanations of contributing factors, and an indication of the probable cause(s). 12. Review RCA with Appropriate Parties Review the Root Cause Analysis results with the appropriate parties; for example, the Problem Coordinator and all affected service owners. 13. Result Accepted? Determine if the Root Cause Analysis results were accepted. (a) If it is ‘Yes’, proceed to Root Cause Found? (b) If it is ‘No’, return in parallel to Analyze Problem and Monitor RCA to repeat the Root Cause Analysis. 14. Root Cause Found? Determine if a root cause of a problem was found.
(a) If it is ‘Yes’, proceed to Update Final RCA Results to Knowledge Database

(b) If it is ‘No’, proceed to Update Problem Record with Current Status 15. Update Final RCA Results to Knowledge Database Update the root cause analysis result to knowledge database. Based on the update knowledge database policy, it may be updated to reflect the RCA results for all problems and negative process trends. 16. Update Problem Record with Current Status Update the problem record with the current status of the problem; either: (a) Root cause of the problem identified (b) No root cause found Proceed to Return. 17. Notify RCA Result to Appropriate Parties Follow the notification policy to notify the appropriate parties of the RCA results particular the service accounts that the RCA is applicable. Proceed to Return. 18. Return Return to either the Problem Management Process or Development Resolution Plan
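Step 15 above stores accepted RCA results in the knowledge database so that later incidents and problems with similar symptoms can reuse them. A minimal sketch of such an update-and-lookup routine is shown below; the JSON-lines file, field names, and word-overlap matching rule are illustrative assumptions for this appendix, not the knowledge database design of the KMRCA system.

import json
from pathlib import Path

KB_FILE = Path("rca_knowledge_base.jsonl")  # hypothetical storage location

def add_rca_result(problem_id: str, symptoms: str, root_cause: str, resolution: str) -> None:
    """Append an accepted RCA result to the knowledge base (one JSON record per line)."""
    record = {"problem_id": problem_id, "symptoms": symptoms,
              "root_cause": root_cause, "resolution": resolution}
    with KB_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

def find_similar(symptom_text: str, min_shared_words: int = 2) -> list:
    """Return stored RCA records sharing at least min_shared_words words with the new symptoms."""
    if not KB_FILE.exists():
        return []
    query = set(symptom_text.lower().split())
    matches = []
    with KB_FILE.open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            shared = query & set(record["symptoms"].lower().split())
            if len(shared) >= min_shared_words:
                matches.append(record)
    return matches

# Example usage with invented data
add_rca_result("PRB-0042", "ATM switch timeout during end-of-day batch",
               "Overlapping batch window saturates switch CPU", "Reschedule batch job")
print(find_similar("switch timeout during batch posting"))

With such a store in place, the Develop Permanent Resolution Plan procedure can consult prior root causes before starting a new analysis.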

119 Figure B-11 shows Assign Problem to Appropriate Problem Resolver Procedure

FIGURE B-11 Assign Problem to Appropriate Problem Resolver Flow
Narrative of Assign Problem to Appropriate Problem Resolver
1. Review Problem Record Review the problem record to determine to whom it should be assigned.
2. Correct Assignment? Determine if the problem was initially assigned to the correct resolver group when the problem was opened. (a) If it is 'Yes', proceed to Assign Problem to Problem Resolver. (b) If it is 'No', proceed to Indicate Request Type.
3. Indicate Request Type If the problem was initially assigned to the wrong resolver, indicate the request problem type and, if known, the details of whom the problem should be reassigned to, in preparation for calling the reassign request.
4. Request for Reassignment Request reassignment, assigning the problem to the most appropriate resolver. Proceed to Review Problem Record.
5. Assign Problem to Problem Resolver Assign the problem to the problem resolver based on skill level and availability.

120 6. Update Problem Record with Current Status Update the problem record to indicate that the problem has been assigned to an appropriate problem resolver and is awaiting problem analysis and development of a permanent resolution plan. 7. Return Return to the Problem Management Process Figure B-12 shows Developing Permanent Resolution Plan Procedure

FIGURE B-12 Developing Permanent Resolution Plan

7. (b) If it is ‘No’. proceed to the Perform Root Cause Analysis procedure to determine the most likely cause of the problem. (a) If it is ‘Yes’.121 Narrative of Developing Permanent Resolution Plan 1. comparing the problem to the database of records to determine if this is a repeat occurrence of a previous problem or known error. proceed to Select Resolution. Perform Root Cause Analysis Procedure If a RCA is required. RCA Required? Determine if a Root Cause Analysis is required for the problem. . 2. Investigate Possible Solutions Investigate possible permanent solutions for the problem. Potential Resolution Identified Determine if any potential resolutions were identified. Review Associated incident and Related Configuration Items (CIs) Review all recorded available data about the incident(s). update the problem record to indicate that the problem will be closed due to the lack of a known error or possible resolution. Identify Any Related Concurrences Identify any related occurrences of the problem and analyze similar problems. It may search and select potential resolution from the Knowledge Database. severity and associated configuration items based on component or application or network categorization. Update Problem Record to be Closed without Any Resolution If there is no any potential resolutions was identified. proceed to Perform root Cause Analysis Procedure (b) If it is ‘No’. 8. symptoms. 5. proceed to Investigate Possible Solution 4. 6. 3. proceed to Update Problem Record to be Closed without any Resolution. (a) If it is ‘Yes’. select what appears to be the best permanent solution for the problem. Select Resolution If potential resolutions were found.

122 9. Review Resolution plan with Appropriate Parties Match problem severity based on definition to the problem. update the problem record to indicate that the solution is ready to be implemented to permanently resolve the problem. Proceed to Return. Develop Resolution Plan and Test Resolution Plan Match problem severity based on definition to the problem. 14. Issue Resolved? Check if a problem record has already been opened for the incident. Return Return to the Problem Management Process . Finalize Resolution Finalize possible resolution Proceed to Return. 10. 12. Proceed to Return. (a) If it is ‘Yes’. Proceed to Return. 11. 16. Update Problem Record with Current Status If the Permanent Resolution Plan is acceptable. proceed to Return 13. Document Issue Match problem severity based on definition to the problem. Change the status of the problem to Known Error. Issue Occurred? Check if a problem record has already been opened for the incident. (a) If it is ‘Yes’. Proceed to Return. proceed to Review Open Problem Policy (b) If it is ‘No’. proceed to Review Open Problem Policy (b) If it is ‘No’. proceed to Return 15.
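The Investigate Possible Solutions step in this procedure notes that the resolver may search the knowledge database and select a potential resolution from it. One common way to rank stored resolutions against a new problem description is TF-IDF cosine similarity, sketched below in Python with scikit-learn. This is an illustrative text-mining approach in the spirit of the thesis, not the exact retrieval method of the KMRCA system, and the sample knowledge-base entries are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge-base entries: (stored problem description, stored resolution)
knowledge_base = [
    ("core banking batch job hangs at reconciliation step", "Restart the batch scheduler agent"),
    ("teller terminal cannot connect to branch server", "Reset the branch VPN tunnel"),
    ("ATM network switch timeout during peak hours", "Fail over to the secondary switch"),
]

descriptions = [desc for desc, _ in knowledge_base]
vectorizer = TfidfVectorizer(stop_words="english")
kb_matrix = vectorizer.fit_transform(descriptions)          # TF-IDF vectors of stored problems

def suggest_resolutions(new_problem: str, top_n: int = 2):
    """Rank stored resolutions by cosine similarity to the new problem description."""
    query_vec = vectorizer.transform([new_problem])
    scores = cosine_similarity(query_vec, kb_matrix).ravel()
    ranked = sorted(zip(scores, knowledge_base), key=lambda p: p[0], reverse=True)
    return [(round(score, 2), resolution) for score, (_, resolution) in ranked[:top_n]]

print(suggest_resolutions("branch server connection lost at teller counter"))

The top-ranked stored resolutions would then feed the Select Resolution step, with a human resolver making the final choice.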

123 Figure B-13 shows Implement Permanent Resolution Plan Procedure FIGURE B-13 Implement Permanent Resolution Plan Flow Narrative of Implement Permanent Resolution Plan 1. Initiate Resolution Plan Initiate the Permanent Resolution Plan involves two parallel procedures: (a) Implementation performed by external operational processes (b) Coordination: performed by the Problem Resolver to monitor the overall execution of the Permanent Resolution Plan and to record the implementation results. 2. . Monitor Resolution plan Implementation Monitor the implementation of the Permanent Resolution Plan against the target schedule.

proceed to Successful? (b) If it is ‘No’. Successful? Determine if the decision was made in the handle service entitlement failure to continue with the problem. proceed to Update Problem Record with Implemented Resolution Unsuccessful 9. proceed to Implement Resolution Plan 4. Implement Resolution Plan Perform Implementation of Resolution Plan to continue with the resolution of the problem in known error status. 8. 5. (a) If it is ‘Yes’. proceed to Adjust Resolution Plan (b) If it is ‘No’. 6. 7. proceed to Update Problem Record with Implemented Resolution Unsuccessful . (a) If it is ‘Yes’. Adjust Resolution Plan If adjustments to the Permanent Resolution Plan are needed to resolve the problem in known error status within committed service levels. Implement Complete? Determine if implementation of the solution is complete. escalate the implementers as required to apply corrective action and adjust the plan accordingly. (a) If it is ‘Yes’. Review Resolution Plan Adjustment with Appropriate Resolver Coordinate the adjusted plan with all affected resolver to review the resolution plan adjustment. proceed to Update Problem Record with Implemented Resolution Successful (b) If it is ‘No’.124 3. Update Problem Record with Adjusted Resolution Plan Details Update the problem record with details of the modified Permanent Resolution Plan. Adjustment Required? Determine if any adjustment to the Permanent Resolution Plan is needed to ensure resolution of the problem in known error status within committed service levels.

Be sure to enter the resolution date and time. It should be following g the notification policy to notify the appropriate parties of the outcome of implementing the Permanent Resolution Plan. update the problem record to indicate that the Permanent Resolution Plan was not successful. and a customer-designated problem liaison of the outcome of implementing the Permanent Resolution Plan. Update Problem Record with Implemented Resolution Successful If the problem was resolved successfully. The record should brief details of the resolution so that these are available to assist with future incident and problem investigation and diagnosis. Update Problem Record with Implemented Resolution Unsuccessful If the problem was not resolved. 13.125 10. 12. Notify Appropriate Parties Notify the Requester. Return Return to the Problem Management Process . Proceed to Return. the Problem Coordinator. Note: The problem remains in known error status until it is permanently fixed by a change. update the problem record to indicate that the problem in known error status has been resolved. affected service owners. 11.

Figure B-14 shows the Close Problem Record Procedure
FIGURE B-14 Close Problem Record Flow
Narrative of Close Problem Record Procedure
1. Review Close Problem Policy Review the Close Problem policy for the account. The policy shall define: (a) Who can close problem records (b) Required closure concurrence, if any (c) Required notifications, if any
2. Closure Concurrence Required? Follow the policy to determine if concurrence to close the problem is required. (a) If it is 'Yes', proceed to Obtain Closure Concurrence from Appropriate Parties. (b) If it is 'No', proceed to Close Problem Record.
3. Obtain Closure Concurrence from Appropriate Parties If concurrence to close the problem is required, follow the Close Problem policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained? Determine if concurrence to close the problem was obtained from all appropriate parties. (a) If it is 'Yes', proceed to Close Problem Record. (b) If it is 'No', proceed to Document Closure Issue.
5. Close Problem Record Close the problem record. Ensure that the problem record contains all the required information, including the closing status, code, and recovery and resolution dates and times.
6. Notification Required? Follow the Notification policy to determine if notification that the problem has been closed is required. (a) If it is 'Yes', proceed to Notify Appropriate Parties. (b) If it is 'No', proceed to Return.
7. Notify Appropriate Parties If notification is required, follow the Notification policy to notify the appropriate parties that the problem has been closed and of its closing status: (a) Problem Coordinator (b) Requester/User (c) Designated customer problem liaison
8. Return Return to the Problem Management Process

APPENDIX C SIMULATION MODELS AND SIMULATION RESULTS .

130 C-1 Simulation Model of Typical IT Service Desk System A simulation model of IT service desk system is shown in Figure C-1. 0. . . .NumberOut + 1:NEXT(9$).Create 1 (IT Incident Call 14$ CREATE.Entity 1:MinutesToBaseTime(WEIB( 3.MinutesToBaseTime(0. .NumberOut=IT Incident Call Arrivals. .Ball:NEXT(0$).0).Assign 1 (Assign IT Incident Ticket) . . IT Incident Call A rrivals A ssign IT Incident Ticket A ssign S ervirity 4 Resolving S everity 4 Ticket S everity 4 Resolved 0 0 0 A ssign S ervirity 3 Resolving S everity 3 Ticket S everity 3 Resolved Assign Severity 0. BasicProcess. Model statements for module: Arrivals) .64. 15$ ASSIGN: IT Incident Call Arrivals. 1.753 3. Model statements for module: BasicProcess.303 95. 9$ ASSIGN: Picture=Picture.238 El s e 0 0 A ssign S ervirity 2 Resolving S everity 2 Ticket S everity 2 Resolved 0 0 A ssign S ervirity 1 Resolving S everity 1 Ticket S everity 1 Resolved 0 0 FIGURE C-1 Simulation Model for IT Service Desk System The details of the simulation model can be described by the SIMAN Code is in the following: .903 )):NEXT(15$).

Yes. 1.248. Resolving Severity 1.Type=Severity 1: Picture=Picture.12$. 0$ Model statements for module: BRANCH.753)/100. 8$ ASSIGN: Ticket Severity 1 Resolved.27):NEXT(4$). Model statements for module: BasicProcess. .Type=Severity 4: Picture=Picture. .Process 1 (Resolving Severity 1) .Dispose 4 (Ticket Severity 1 Resolved) . .74): S1 time arrival=TNOW:NEXT(1$).1:NEXT(21$). .Assign 2 (Assign Servirity Entity. ASSIGN: . 13$ Model statements for module: BasicProcess. .Red Ball: S1 resolving time=LOGN(2. 22$ SEIZE.Yes: With.VA: Resource 1.37. .NumberOut + 1.10$.1.(0. . . . 4.(3.238)/100. . Model statements for module: BasicProcess.131 . 10$ Model statements for module: BasicProcess.Yes: Else. 1$ ASSIGN: Resolving Severity 1.303)/100.11$.Decide 1 (Assign Severity) 1: With. MinutesToBaseTime(S1 resolving time).NumberIn=Resolving Severity 1.(95.Assign 5 (Assign Servirity ASSIGN: Entity. Resource 1.NumberOut + 1: 1.. . 4) . .13$. .WIP-1:NEXT(8$).Yes: With. 21$ DELAY: 20$ RELEASE: 68$ ASSIGN: Severity 1.Queue.VA.WIP=Resolving Severity 1.Green Ball: S4 time arrival=TNOW: S4 resolving time=144*BETA(0. 1) . 71$ DISPOSE: Yes.NumberOut=Ticket Severity 1 Resolved.NumberIn + 1: Resolving Severity 1.WIP=Resolving Severity .NumberOut=Resolving Resolving Severity 1. Resolving Severity 1.WIP+1.1. BasicProcess. . 23$ QUEUE.

WIP=Resolving Severity 3.Assign 3 (Assign Servirity ASSIGN: S3 resolving time T2=LOGN(7. . 3) .NumberIn=Resolving Severity 3.VA.NumberIn + 1: Resolving Severity 3.67): Entity. Model statements for module: BasicProcess.VA. Resolving Severity 4.Process 4 (Resolving Severity 4) . 75$ QUEUE.1. Resource 1. 5$ ASSIGN: Ticket Severity 4 Resolved.WIP+1. Resolving Severity 4. .87. Model statements for module: BasicProcess.VA: Resource 1. Resolving Severity 3.NumberOut=Ticket Severity 4 Resolved. 73$ DELAY: 72$ RELEASE: 120$ ASSIGN: Severity 4.NumberOut + 1: 3. 126$ SEIZE.1:NEXT(125$). 0. . Resource 1. 4$ ASSIGN: Resolving Severity 4. MinutesToBaseTime(S4 resolving time). .WIP-1:NEXT(5$).NumberOut=Resolving Resolving Severity 3. 123$ DISPOSE: Yes. 2.1. .WIP=Resolving Severity .VA: Resource 1.WIP+1. Resolving Severity 3. 11.WIP-1:NEXT(6$). .Queue. .94.132 . 74$ SEIZE.. MinutesToBaseTime(S3 resolving time T2). Model statements for module: BasicProcess.1): S3 resolving time T1=WEIB(5..NumberIn + 1: Resolving Severity 4. 127$ QUEUE. 11$ Model statements for module: BasicProcess.NumberOut=Resolving Resolving Severity 4.NumberOut + 1. .WIP=Resolving Severity 4.1:NEXT(73$).NumberIn=Resolving Severity 4.NumberOut + 1: 4. 125$ DELAY: 124$ RELEASE: 172$ ASSIGN: Severity 3. .Queue.Type=Severity 3: Picture=Picture.Blue Ball: S3 time arrival=TNOW:NEXT(3$).WIP=Resolving Severity .Process 3 (Resolving Severity 3) .Dispose 1 (Ticket Severity 4 Resolved) . 3$ ASSIGN: Resolving Severity 3. . 3.

WIP+1. 1.NumberOut=Ticket Severity 3 Resolved.Yellow Ball: Entity.WIP-1:NEXT(7$). . . 175$ DISPOSE: Yes.NumberOut=Resolving Resolving Severity 2. 177$ DELAY: 176$ RELEASE: 224$ ASSIGN: Severity 2. .. .WIP=Resolving Severity . Model statements for module: BasicProcess. 2) .WIP=Resolving Severity 2. MinutesToBaseTime(S2 resolving time).1.1:NEXT(177$).NumberOut + 1: 2.NumberOut=Ticket Severity 2 Resolved.61. 227$ DISPOSE: Yes. 6$ ASSIGN: Ticket Severity 3 Resolved. Resolving Severity 2. Resource 1. .Assign 4 (Assign Servirity ASSIGN: Picture=Picture.VA.NumberOut + 1. . Resolving Severity 2. . 178$ SEIZE. 2$ ASSIGN: Resolving Severity 2. 9. 7$ ASSIGN: Ticket Severity 2 Resolved.NumberOut + 1. Model statements for module: BasicProcess.NumberIn + 1: Resolving Severity 2.Dispose 3 (Ticket Severity 2 Resolved) . 12$ Model statements for module: BasicProcess.NumberIn=Resolving Severity 2.4):NEXT(2$).VA: Resource 1.133 . . Model statements for module: BasicProcess. 179$ QUEUE.Dispose 2 (Ticket Severity 3 Resolved) .Type=Severity 2: S2 time arrival=TNOW: S2 resolving time=LOGN(4.Process 2 (Resolving Severity 2) .Queue. . . .

10$. 0$ Model statements for module: BRANCH.(0. Model statements for module: BasicProcess.Create 1 (IT Incident Call 16$ CREATE.134 C-2 Simulation Model of KMRCA IT Service Desk System Simulation Model of KMRCA IT service desk system is shown in Figure C-2.303 95. BasicProcess.753)/100.0).Yes: With. 0.Decide 1 (Assign Severity) 1: With.MinutesToBaseTime(0. .903)):NEXT(17$). .Yes: Else. .Entity 1:MinutesToBaseTime(WEIB( 3.Yes. .11$.(3. 1.13$. . .238 Els e 0 A ssign S ervirity 2 Resolving S everity 2 Ticket S everity 2 Resolved 0 0 A ssign S ervirity 1 Resolving S everity 1 Ticket S everity 1 Resolved 0 0 FIGURE C-2 Simulation Model of KMRCA IT Service Desk System The SIMAN code of the simulation model is in the following: .NumberOut + 1:NEXT(9$). Model statements for module: Arrivals) .16.Ball:NEXT(0$).(95. . 9$ ASSIGN: Picture=Picture.NumberOut=IT Incident Call Arrivals.12$.Yes: With. BasicProcess. 17$ ASSIGN: IT Incident Call Arrivals. A ssign S ervirity 4 Resolving S everity 4 Ticket S everity 4 Resolved 0 0 IT Incident Call A rrivals A ssign IT Incident Ticket Resolving S everity 3 by Factor A Resolving S everity 3 by Factor B 0 0 0 A ssign S ervirity 3 Resolving S everity 3 by Factor C Ticket S everity 3 Resolved 0 Assign Severity 0.303)/100.238)/100. . .Assign 1 (Assign IT Incident Ticket) .753 3.

74): S1 time arrival=TNOW:NEXT(1$).1.37. 13$ Model statements for module: BasicProcess. 57$ TALLY: Resolving Severity 1.1.VATime + Diff.WaitTime + Diff. 4) .1.Process 1 (Resolving Severity 1) .VATime=Resolving Severity 1.1:NEXT(23$). 22$ RELEASE: Resource 1. .1. . . 71$ STACK. 1$ ASSIGN: Resolving Severity 1.Diff.Queue. 1) .WIP=Resolving Severity 1.VATimePerEntity.Assign 5 (Assign Servirity ASSIGN: Entity.WIP=Resolving Severity 1.WaitTime=Resolving Severity 1. . 4.WaitTimePerEntity. 23$ DELAY: S1 resolving time.VA: Resource 1.NumberIn + 1: Resolving Severity 1.VATime.Process 4 (Resolving .NumberIn=Resolving Severity 1. . 70$ ASSIGN: Resolving Severity 1. 56$ ASSIGN: Resolving Severity 1. 32$ TALLY: Resolving Severity 1.Red Ball: S1 resolving time=LOGN(2.Dispose 4 (Ticket Severity 1 Resolved) .Green Ball: S4 time arrival=TNOW: S4 resolving time=144*BETA(0.Type=Severity 4: Picture=Picture.NumberOut + 1.VATime.Diff. 8$ ASSIGN: Ticket Severity 1 Resolved.248. . 73$ DISPOSE: Yes. .WIP+1.27):NEXT(4$). ASSIGN: BasicProcess. .1. Model statements for module: BasicProcess. 1. .Type=Severity 1: Picture=Picture. 10$ Model statements for module: BasicProcess. . 51$ STACK. 25$ QUEUE. Model statements for module: Severity 4) . 1:Save:NEXT(25$). 24$ SEIZE. .WaitTime.NumberOut=Resolving Severity 1. Model statements for module: BasicProcess. .WaitTime. . 1:Destroy:NEXT(70$). 66$ ASSIGN: Resolving Severity 1.NumberOut=Ticket Severity 1 Resolved.Diff.WIP-1:NEXT(8$).135 .StartTime..TotalTimePerEntity. 30$ TALLY: Resolving Severity 1.Assign 2 (Assign Servirity Entity.NumberOut + 1: Resolving Severity 1. Resolving Severity 1.VA:NEXT(66$).

NumberOut + 1: 4.WaitTime + Diff.NumberOut=Ticket Severity 4 Resolved. .1. 3) .NumberIn + 1: 4. QUEUE.Diff.Dispose 1 (Ticket Severity 4 Resolved) .2: Entity. 127$ DELAY: S3 resolving time T1.1:NEXT(75$). 170$ ASSIGN: Resolving Severity 3 by Factor A. 109$ TALLY: Resolving Severity 4. 122$ ASSIGN: Severity 4.WIP+1. 123$ STACK.NumberOut + 1.WIP=Resolving Severity STACK. 11$ Resolving Severity 4. .Diff.Diff.5): S3 resolving time T3=2.WaitTime. 75$ DELAY: 118$ ASSIGN: Resolving Severity 4.WaitTime=Resolving Severity 3 by Factor A.NumberOut=Resolving Resolving Severity 4.4: S3 resolving time T1=1. 125$ DISPOSE: Yes.1:NEXT(127$).VA: Resource 1.Queue. 2. 74$ RELEASE: Resource 1.NumberIn + 1: Resolving Severity 3 by Factor A.VATime.StartTime..VATime.1. 3. .1. Resolving Severity 4.VATimePerEntity..VATime + Diff. 1:Save:NEXT(77$). 1:Destroy:NEXT(122$).Queue. .NumberIn=Resolving Resolving Severity 4. SEIZE. QUEUE.Type=Severity 3: Picture=Picture. 82$ TALLY: Resolving Severity 4.TotalTimePerEntity.Assign 3 (Assign Servirity ASSIGN: S3 resolving time T2=TRIA(2. 129$ 128$ Resolving Severity 3 by Factor A.VA:NEXT(170$). 108$ ASSIGN: Resolving Severity 4.WaitTime + Diff.WaitTime=Resolving Severity 4. S4 resolving time. . 155$ STACK. Model statements for module: BasicProcess.WIP=Resolving Severity Model statements for module: BasicProcess.NumberIn=Resolving Severity 3 by Factor A.VATime=Resolving Severity 4.WIP+1. .WaitTimePerEntity. SEIZE.WaitTime.WIP=Resolving Severity 3 by Factor A. 103$ 77$ 76$ Resolving Severity 4.1.VA: Resource 1.Process 3 (Resolving Severity 3 by Factor A) . 5$ ASSIGN: Ticket Severity 4 Resolved.136 4$ ASSIGN: Severity 4. 84$ TALLY: Resolving Severity 4.WIP-1:NEXT(5$).VA:NEXT(118$). 1:Save:NEXT(129$).3.4. .Blue Ball: S3 time arrival=TNOW:NEXT(3$). Model statements for module: BasicProcess.WaitTime. 3$ ASSIGN: Resolving Severity 3 by Factor A.

WIP=Resolving Severity 3 by Factor B.Process 5 (Resolving Severity 3 by Factor B) .Diff.NumberIn + 1: Resolving Severity 3 by Factor B. .1.WaitTime. . S3 resolving time T3. .WaitTime.1. 160$ ASSIGN: Resolving Severity 3 by Factor A. Model statements for module: BasicProcess.VATime + Diff.WaitTime.WIP-1:NEXT(15$).NumberIn=Resolving Severity 3 by Factor C.Diff.VATime.VATimePerEntity.VATime.VATime. 180$ QUEUE.StartTime.WaitTimePerEntity.VATime + Diff.1:NEXT(178$). 179$ SEIZE. 14$ ASSIGN: Resolving Severity 3 by Factor B.WaitTimePerEntity.WIP=Resolving Severity 3 by Factor B.Queue.WIP-1:NEXT(14$). 1:Save:NEXT(180$).VA: Resource 1. 212$ TALLY: Resolving Severity 3 by Factor B.WIP+1.Diff. Model statements for module: BasicProcess. 236$ TALLY: Resolving Severity 3 by Factor C.VA:NEXT(221$). 2.Queue.137 134$ TALLY: Resolving Severity 3 by Factor A.NumberOut + 1: Resolving Severity 3 by Factor A.VATimePerEntity.WaitTimePerEntity.VATime=Resolving Severity 3 by Factor B. 1:Destroy:NEXT(225$). 231$ 230$ QUEUE. 185$ TALLY: Resolving Severity 3 by Factor B. . SEIZE. Resolving Severity 3 by Factor C. 175$ STACK.1.StartTime.VATime=Resolving Severity 3 by Factor A.WaitTime.WIP=Resolving Severity 3 by Factor C.TotalTimePerEntity. .WaitTime. 174$ ASSIGN: Resolving Severity 3 by Factor A. 1:Destroy:NEXT(174$).NumberIn=Resolving Severity 3 by Factor B. 226$ STACK.Diff. 126$ RELEASE: Resource 1.NumberIn + 1: Resolving Severity 3 by Factor C.NumberOut + 1: Resolving Severity 3 by Factor B.1:NEXT(229$). 187$ TALLY: Resolving Severity 3 by Factor B. . 206$ STACK.Diff.WIP=Resolving Severity 3 by Factor A.1.Diff..1..WaitTime + Diff.NumberOut=Resolving Severity 3 by Factor B.1.VATime.WaitTime=Resolving Severity 3 by Factor C.1. 177$ RELEASE: Resource 1.1.VA:NEXT(272$). 225$ ASSIGN: Resolving Severity 3 by Factor B.TotalTimePerEntity. 136$ TALLY: Resolving Severity 3 by Factor A.NumberOut=Resolving Severity 3 by Factor A. 2.WaitTime + Diff. 221$ ASSIGN: Resolving Severity 3 by Factor B. Resolving Severity 3 by Factor B.WaitTime=Resolving Severity 3 by Factor B. 15$ ASSIGN: Resolving Severity 3 by Factor C. 161$ TALLY: Resolving Severity 3 by Factor A. 229$ DELAY: 272$ ASSIGN: Resolving Severity 3 by Factor C.Diff.WIP+1. 178$ DELAY: S3 resolving time T2.VA: Resource 1.1.Process 6 (Resolving Severity 3 by Factor C) . 211$ ASSIGN: Resolving Severity 3 by Factor B. 1:Save:NEXT(231$). 257$ STACK.

Dispose 2 (Ticket Severity 3 Resolved) ..Queue. . 1:Destroy:NEXT(328$). 314$ ASSIGN: Resolving Severity 2. Model statements for module: BasicProcess. .VATime.NumberOut=Ticket Severity 2 Resolved.WaitTimePerEntity.VATime. 279$ DISPOSE: Yes.1. Model statements for module: BasicProcess. 288$ TALLY: Resolving Severity 2. 331$ DISPOSE: Yes.NumberOut=Resolving Severity 2. 282$ SEIZE.NumberOut + 1.Diff.WIP-1:NEXT(6$). . 263$ TALLY: Resolving Severity 3 by Factor C.WaitTime.1. .VATimePerEntity.TotalTimePerEntity. 280$ RELEASE: Resource 1. 328$ ASSIGN: Resolving Severity 2.WIP=Resolving Severity 3 by Factor C.StartTime. 1:Destroy:NEXT(276$).NumberIn + 1: Resolving Severity 2. 309$ STACK.VATimePerEntity.1.VATime.Assign 4 (Assign Servirity 2) .NumberOut + 1.VATime + Diff. 1.TotalTimePerEntity.WaitTime + Diff.WIP-1:NEXT(7$).1. 329$ STACK. .1. 6$ ASSIGN: Ticket Severity 3 Resolved.NumberIn=Resolving Severity 2. .Process 2 (Resolving Severity 2) .Diff.Diff.NumberOut=Ticket Severity 3 Resolved. 277$ STACK.VATime=Resolving Severity 2.4):NEXT(2$).Diff. 7$ ASSIGN: Ticket Severity 2 Resolved.WaitTime.Dispose 3 (Ticket Severity 2 Resolved) .Type=Severity 2: S2 time arrival=TNOW: S2 resolving time=LOGN(4.NumberOut + 1: Resolving Severity 2. .Diff.NumberOut=Resolving Severity 3 by Factor C.StartTime.NumberOut + 1: Resolving Severity 3 by Factor C. Model statements for module: BasicProcess. 281$ DELAY: S2 resolving time.WIP=Resolving Severity 2. 228$ RELEASE: Resource 1. 283$ QUEUE.138 238$ TALLY: Resolving Severity 3 by Factor C.VATime=Resolving Severity 3 by Factor C. 9. Resolving Severity 2. 276$ ASSIGN: Resolving Severity 3 by Factor C.WIP=Resolving Severity 2.WaitTime=Resolving Severity 2.VA: Resource 1. .1:NEXT(281$).1. 262$ ASSIGN: Resolving Severity 3 by Factor C.1.Yellow Ball: Entity. 1:Save:NEXT(283$).WIP+1.VATime + Diff. 290$ TALLY: Resolving Severity 2. 12$ ASSIGN: Picture=Picture. . Model statements for module: BasicProcess. 324$ ASSIGN: Resolving Severity 2. 315$ TALLY: Resolving Severity 2.VATime.VA:NEXT(324$). 2$ ASSIGN: Resolving Severity 2.61.

In’ is number of the input and ‘Nr. In Nr.43 4.81 40.89 36.558 Note : ‘Nr.80 Rep 4 1.31 4.482 16 3.51 3.42 TABLE C-2 Entity Detail Summary of Number of Entities by 1st Std Order Rep 1 Nr.457 9 3.27 Rep 3 2.585 20 110 3.84 5.36 29.139 C-3 Simulation Results for Design of Experiments This Appendix illustrates the simulation results that are provided to as the inputs of experimental design (DOE) with 23 full factorial running standard order of 8 times for each of 4 replications.55 4.457 9 3. Out Rep 4 Nr.628 19 103 3. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.630 20 110 3.434 9 3.585 25 117 3. In Nr.69 41. TABLE C-1 Entity Detail Summary of Time by 1st Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.25 30. C-16 show entity detail summary of Time (in Table C-1) and entity detail summary of Number of Entities ( in Table C-2) by the 1st to the 8th Standard orders. .564 19 103 3.27 27.454 9 3.434 9 3.12 Rep 2 2. respectively. In Nr.585 19 103 3.454 9 3.98 4.484 16 3. Out’ is number of the output.28 5. Out Rep 2 Nr.91 4.588 19 103 3. …. C-2.27 24.27 18. Tables C-1. In Nr. Out Rep 3 Nr.

In Nr.20 Rep 4 1.36 29.585 29 109 3.410 16 3.484 16 3.434 9 3.404 16 3.43 4. In’ is number of the input and ‘Nr. Out Rep 4 Nr.91 4.588 19 103 3. In Nr.585 25 117 3. Out’ is number of the output.28 5.65 30. In Nr.51 3. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3. .454 9 3.67 24.626 19 103 3.98 4.67 27.89 36.558 Note : ‘Nr.140 TABLE C-3 Entity Detail Summary of Time by 2nd Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2. In Nr.585 20 110 3.434 9 3.457 9 3.67 Rep 3 2.564 29 109 3.630 20 110 3.55 4. Out Rep 2 Nr. Out Rep 3 Nr.84 5.81 40.480 16 3.52 Rep 2 2.69 42.31 4.67 18.82 TABLE C-4 Entity Detail Summary of Number of Entities by 2nd Std Order Rep 1 Nr.

10 Rep 4 1. Out’ is number of the output.51 3.89 37.57 24.564 29 109 3. .584 20 110 3.616 19 103 3.556 Note : ‘Nr.410 16 3.433 9 3.84 5.57 27.434 9 3.588 19 103 3.57 18. Out Rep 2 Nr.630 20 110 3. In’ is number of the input and ‘Nr.457 9 3.81 41.402 16 3. In Nr.141 TABLE C-5 Entity Detail Summary of Time by 3rd Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.31 5. Out Rep 3 Nr.42 Rep 2 2. In Nr.584 29 109 3.484 16 3.453 9 3. In Nr.57 Rep 3 2.28 5.69 43.585 25 117 3.55 30.55 4.43 5.470 16 3. Out Rep 4 Nr. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3. In Nr.91 5.98 5.36 30.72 TABLE C-6 Entity Detail Summary of Number of Entities by 3rd Std Order Rep 1 Nr.

36 30.28 5.564 29 109 3.584 20 110 3.615 19 103 3.91 5.82 Rep 2 2.584 29 109 3.50 Rep 4 1.51 3.588 19 103 3. . In Nr.97 Rep 3 2.81 41. Out Rep 4 Nr. Out Rep 2 Nr. In’ is number of the input and ‘Nr. In Nr. Out’ is number of the output.84 5.469 16 3.457 9 3.89 38.31 5.95 30.453 9 3.43 5.142 TABLE C-7 Entity Detail Summary of Time by 4th Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.410 16 3. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.97 24.556 Note : ‘Nr.97 18.585 25 117 3.433 9 3. In Nr.12 TABLE C-8 Entity Detail Summary of Number of Entities by 4th Std Order Rep 1 Nr.630 20 110 3.484 16 3.55 4.97 27. In Nr.98 5.434 9 3. Out Rep 3 Nr.402 16 3.69 43.

84 5. In Nr.91 5.17 24.98 5. Out Rep 2 Nr.81 41.457 9 3.51 3.143 TABLE C-9 Entity Detail Summary of Time by 5th Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.584 20 110 3.36 30.434 9 3. Out Rep 4 Nr.69 42.585 29 109 3.32 TABLE C-10 Entity Detail Summary of Number of Entities by 5th Std Order Rep 1 Nr. . Out Rep 3 Nr.15 30. In Nr.433 9 3. In’ is number of the input and ‘Nr.588 19 103 3.564 29 109 3. In Nr.454 9 3.410 16 3.17 27.630 20 110 3.70 Rep 4 1.31 5.28 5.89 37.02 Rep 2 2.55 4. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.484 16 3.17 Rep 3 2.404 16 3.585 25 117 3.43 5.558 Note : ‘Nr. In Nr. Out’ is number of the output.478 16 3.624 19 103 3.17 18.

81 41.42 Rep 2 2.457 9 3.69 43.57 24.630 20 110 3.51 3.57 18.89 37.43 5.564 29 109 3.474 16 3.72 TABLE C-12 Entity Detail Summary of Number of Entities by 6th Std Order Rep 1 Nr. In’ is number of the input and ‘Nr. Out’ is number of the output.584 20 110 3.55 30.91 5.434 9 3.620 19 103 3. In Nr. Out Rep 3 Nr.453 9 3.585 25 117 3.57 27.588 19 103 3.584 29 109 3.55 4. In Nr.484 16 3.144 TABLE C-11 Entity Detail Summary of Time by 6th Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.36 30. In Nr.84 5.410 16 3.98 5. Out Rep 4 Nr.433 9 3.31 5. Out Rep 2 Nr. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.28 5.57 Rep 3 2. In Nr.10 Rep 4 1.402 16 3. .556 Note : ‘Nr.

98 6.36 31.630 20 110 3.42 45.84 5.145 TABLE C-13 Entity Detail Summary of Time by 7th Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.585 25 117 3. Out Rep 3 Nr.410 16 3. Out’ is number of the output. .62 TABLE C-14 Entity Detail Summary of Number of Entities by 7th Std Order Rep 1 Nr.91 6.31 6.437 14 3.47 30. In’ is number of the input and ‘Nr.47 24.433 9 3.434 9 3. In Nr. Out Rep 2 Nr.32 Rep 2 2.583 29 109 3.47 18.28 5.69 44.45 30. In Nr.555 Note : ‘Nr.401 16 3.43 6.00 Rep 4 1.89 38.08 Rep 3 2. In Nr.564 29 109 3.452 9 3.55 4. In Nr. Out Rep 4 Nr.584 20 110 3.51 3.484 16 3.581 19 103 3.588 19 103 3.457 9 3.

564 29 109 3.390 3.58 TABLE C-16 Entity Detail Summary of Number of Entities by 8th Std Order Rep 1 Nr.434 9 3. In’ is number of the input and ‘Nr.84 5.78 45. In Nr.529 3. Out Rep 4 Nr.38 Rep 4 1.533 16 3.54 35.585 25 117 20 110 20 110 3.77 0. Out Rep 3 Nr.49 13.588 19 103 3.28 5.51 3.386 5 3.85 60. In Nr.75 20.146 TABLE C-15 Entity Detail Summary of Time by 8th Std Order Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.457 9 3.410 16 3. Out Rep 2 Nr.75 Rep 2 2. In Nr.31 6.484 1 3.77 21.630 Note : ‘Nr.77 34.91 6.355 2 3.43 6. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.487 19 103 3. .388 3 3. Out’ is number of the output.82 Rep 3 2. In Nr.513 29 109 3.98 6.55 4.

respectively. FIGURE C-3 DOE Results of Throughput .147 C-4 The Results of Design of Experiment (DOE) The results of experimental design of Throughput and Time in resolving incident of severity 3 as shown in Figure C-3 and Figure C-4.

148 FIGURE C-4 DOE Results of Time in Resolving Incidents of Severity 3 .
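Figures C-3 and C-4 summarize the 2^3 full factorial experiment (eight standard-order runs with four replications each) for throughput and severity-3 resolving time. The effect estimates behind such plots can be computed directly from the run means, as in the hedged numpy sketch below; the eight response values are invented placeholders, since the flattened tables above do not preserve the original numbers.

import numpy as np
from itertools import product

# 2^3 design matrix in standard (Yates) order: columns are factors A, B, C coded -1/+1.
design = np.array([[a, b, c] for c, b, a in product([-1, 1], repeat=3)])

# Mean response per standard-order run (e.g., throughput averaged over 4 replications).
# These eight numbers are placeholders, not the thesis's measured values.
y = np.array([3.55, 3.56, 3.55, 3.56, 3.55, 3.56, 3.53, 3.45])

labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
columns = [design[:, 0], design[:, 1], design[:, 2],
           design[:, 0] * design[:, 1], design[:, 0] * design[:, 2],
           design[:, 1] * design[:, 2], design[:, 0] * design[:, 1] * design[:, 2]]

# Effect of a contrast column = (mean response at +1) minus (mean response at -1).
for label, col in zip(labels, columns):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"effect({label}) = {effect:+.4f}")

Large main or interaction effects flagged this way correspond to the significant terms highlighted in the DOE output of Figures C-3 and C-4.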

149 C-5 Simulation Results for the Comparison Test The simulation results that are provided for comparison Test.501 29 109 3. Entity Detail Summary of Number of Entities Rep 1 Nr. Out’ is number of the output.564 29 109 3.540 20 110 3.95 51.43 6. .98 6. Entity Detail Summary of Time Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 2.55 Rep 4 1.630 20 110 3. In Nr.397 1 3.77 0.91 6. running for 4 replications.585 25 117 3.77 21.375 4 3.588 19 103 3. Out Rep 2 Nr.484 16 3.54 35.388 5 3.77 45.51 3.82 Rep 3 2. Table C-17 to Table C-20 show the summary of entity details of Time in resolving incident and an entity details of Number of Entities.410 16 3.31 6.55 4. In Nr.74 Rep 2 2.49 13.85 60.28 5.434 9 3.84 5.457 9 3. In’ is number of the input and ‘Nr. Out Rep 3 Nr. In Nr. In Nr.75 37.492 19 103 3. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 25 117 3.57 TABLE C-18 KMRCA IT Service Desk.531 Note : ‘Nr. TABLE C-17 KMRCA IT Service Desk. Out Rep 4 Nr.360 2 3.

In’ is number of the input and ‘Nr. In Nr.020 21 117 3.030 20 104 2.898 7 3. Out Severity 1 Severity 2 Severity 3 Severity 4 Total 27 100 3.11 35.973 9 3.150 TABLE C-19 Typical IT Service Desk.889 7 3.15 4.11 25.154 27 100 2.34 TABLE C-20 Typical IT Service Desk. In Nr. Out Rep 4 Nr.61 24.97 6. Entity Detail Summary of Number of Entities Rep 1 Nr.22 39. Out Rep 2 Nr.104 21 88 2.58 39.979 5 3.75 4.116 21 88 2.96 22.33 Rep 4 2.085 10 3. Entity Detail Summary of Time Time in Resolving Incidents (minutes) Rep 1 Severity 1 Severity 2 Severity 3 Severity 4 Total 1.28 18.79 Rep 2 1.994 9 3.19 Rep 3 2. In Nr. Out Rep 3 Nr.091 Note : ‘Nr.99 33.017 10 3.15 4. .26 7.233 21 111 2.99 7.92 7.61 5. Out’ is number of the output.130 20 105 2.986 9 3. In Nr.

75 S1-K 2.912 SE Mean 0.380 StDev 0.345 0.000 b) Time in resolving of Severity 1.020 3.….84 S2-T 5.29 6.8 KMRCA Typical Difference 95% CI for mean difference: (366.072) T-Test of mean difference = 0 (vs not = 0): T-Value = -0. Paired T-Test and CI: KMRCA. a) Throughput.540 3.85 37.531 Typical 3.61 1.98 5.915 2.97 4.466 .163 0.83 P-Value = 0. Time in resolving incident of Severity 4 of KMRCA IT service desk.95 21.97 7.99 22.….49 45.116 3.22 S4-K 0. S1-K. Typical Paired T for KMRCA .77 6.326 0. S1-K Paired T for S1-T .691 0.63 7.6) T-Test of mean difference = 0 (vs not = 0): T-Value = 22.5 18.51 2.9 37.501 3.15 2.456 S1-T S1-K Difference 95% CI for mean difference: (-1. 486. S2-T. 1.75 6.9.58 25.77 S4-T 18. Paired T-Test and CI: S1-T.6 24. The below are the t-test results which were generated by Minitab 15.26 S2-K 3.S1-K N 4 4 4 Mean 1.1 48.68 P-Value = 0.295 -0.54 It is note that S1-T.95 4.3 426.43 S3-T 7.92 4.11 24. S4-K are average time in resolving incident of Severity 1 of Typical IT service desk.15 2.91 4.151 C-6 Summary of Comparison Test Results The statistical t-test results of Comparison of the KMRCA IT service desk and Typical IT service desk by significant variables as shown in Table C-21.6 SE Mean 11. respectively. average time in resolving incident of Severity 1 of KMRCA IT service desk.0 3089.77 6.130 3.31 5.Typical N 4 4 4 Mean 3516. TABLE C-21 Summary of Comparison Test Results Replication 1 2 3 4 KMRCA 3.832.28 2. average time in resolving incident of Severity 2 of KMRCA IT service desk.492 3.10 S3-K 6.091 S1-T 1.55 1.8 StDev 23.

S4-K N 4 4 4 Mean 22.152 c) Time in resolving of Severity 2.765 0.4 10.73 StDev 2.882 d) Time in resolving of Severity 3.047 e) Time in resolving of Severity 4.483 StDev 0.64 SE Mean 1.118 StDev 0.287 0.40 P-Value = 0. Paired T-Test and CI: S4-T.148 S3-T S3-K Difference 95% CI for mean difference: (0.5 -3.907 0. S3-K Paired T for S3-T . 2.7 26.296 SE Mean 0.S2-K N 4 4 4 Mean 5.457 SE Mean 0.0 9.S3-K N 4 4 4 Mean 7. Paired T-Test and CI: S2-T.26 P-Value = 0.143 0.1 18.201.8 20.341 0.32 S4-T S4-K Difference 95% CI for mean difference: (-33.005 0.456 0. 25.025 4. Paired T-Test and CI: S3-T.729 S2-T S2-K Difference 95% CI for mean difference: (-2. S4-K Paired T for S4-T .436) T-Test of mean difference = 0 (vs not = 0): T-Value = 0. 0.953) T-Test of mean difference = 0 (vs not = 0): T-Value = 3.912 1.93) T-Test of mean difference = 0 (vs not = 0): T-Value = -0.012.16 P-Value = 0.716 . S2-K Paired T for S2-T .010 0.248 6.39.682 0.
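The Minitab output above applies paired t-tests because the KMRCA and typical service-desk models are compared replication by replication on common random conditions. The same test can be reproduced with SciPy, as in the sketch below; the four paired values are illustrative placeholders standing in for one pair of columns from Table C-21, not the exact figures, since the extracted table is not fully legible.

from scipy import stats

# Paired observations: one value per simulation replication (placeholder numbers).
kmrca   = [6.10, 6.63, 6.95, 6.75]   # e.g., severity-3 resolving time, KMRCA model
typical = [7.97, 7.26, 7.92, 7.99]   # e.g., severity-3 resolving time, typical model

t_stat, p_value = stats.ttest_rel(typical, kmrca)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value on the real Table C-21 data would mirror the Minitab result above and
# support the conclusion that the KMRCA model reduces severity-3 resolving time.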

BIOGRAPHY
Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : Knowledge Management System Improvement towards Service Desk of IT Outsourcing in Banking Business
Major Field : Information Technology
Biography
Padej worked as a senior process architect at IBM Solutions Delivery Company, a strategic IT outsourcing company, working on site at KASIKORNBANK from April 2004 to May 2007. The purpose of the process architect role was to implement several ITIL-based processes for the outsourcing of KASIKORNBANK, in particular the IT service desk function within the incident management process. He was certified in ITIL Foundation in 2004. Before joining IBM, from October 1996 to March 2004, he worked as a quality assurance manager at SIAMTELTECH Computer Company, an IT system integrator focusing on the areas of banking business, financial institutes, and telecommunication providers such as CAT and TOT.
For his education and certification, he earned a Bachelor of Engineering degree in electronics and telecommunication engineering from King Mongkut's Institute of Technology Ladkrabang (KMITL) in 1991 and a Master of Engineering degree in industrial engineering management from King Mongkut's Institute of Technology North Bangkok (KMITNB) in 1996. Furthermore, he has been certified with a license for professional practice as an associate electrical engineer (telecommunication and electronics), and he has been a member of the Council of Engineers (COE) and of the Engineering Institute of Thailand under H.M. the King's Patronage (EIT).
His research interests include IT service management (ITSM) for improving organizational IT outsourcing, knowledge management systems for the IT service desk, simulation studies, text mining discovery algorithms and classification, and IT disaster recovery planning (DRP).
Padej's home address is 23/123 Ladprao Road, Chan Kasem, Chatuchak, Bangkok, Thailand 10900, and his email is padejp@gmail.com.
