
KNOWLEDGE ACQUISITION AND PROCESS PLANNING

K Venkateshwaran

20071D0403

CAD/CAM

Mechanical Department

VNRVJIET.

Under the Guidance of:

CH Priyadarshini

Assistant Professor

Mechanical Department

VNRVJIET

INTRODUCTION

The knowledge acquisition and management process is intended to provide a
straightforward means of seeing that knowledge is developed, represented, and
maintained appropriately to meet a project's strategic goals. It replaces what is
usually an ad hoc process with one that is coordinated with the strategic goals of
the organization. With the advent of new tools to better represent and use
knowledge resources, and the expansion of responsibilities for employees, it is
important for organizations to take a more systematic approach to representing
knowledge in order to better leverage it.

The knowledge acquisition and management process is intended as one possible
approach. It emphasizes personal responsibility for and ownership of knowledge
resources by the individuals who will use those resources (Kim & Mauborgne,
1997). It includes a variety of opportunities to quantitatively monitor the success of
the process, and to pinpoint areas for improvement.

The process is intended to be implemented as part of an application development
effort, with specific individual responsibilities to be determined. It is critical that
these individuals be members of the client organization who will remain in their
roles after the development effort concludes.
PROCESS ROLES

There are three roles currently identified in this process. Multiple individuals may
act in each role; conversely, one individual may act in multiple roles. The roles are:

 Initiator: The initiator serves as the owner of the knowledge concept to be
represented. The identification of knowledge may be reactive (e.g., in
response to an alarm or failure), or it may be proactive (e.g., as a result of
analysing log files, and attempting to devise an automated approach to the
most common problems).

 Knowledge Steward: The knowledge steward acts as the gatekeeper for the
knowledge base. They are responsible for taking the knowledge concept,
determining whether the knowledge is new, redundant, or a
modification/upgrade/extension of existing knowledge, and completing the
specification of what needs to be added to the system. The knowledge
steward is also responsible for re-examining the process on a regular basis,
identifying and building in support for new knowledge types, and ensuring
that the process is working successfully. The knowledge steward serves as
the "day to day" process owner.

 Implementor: The implementor is responsible for taking a completed
specification, and generating and implementing a plan for representing that
knowledge, using the set of technology options available. The implementor
is rated on their performance by both the knowledge steward (on how
closely the delivery met the specification), and the initiator (on whether the
delivery met the practical need).

The Process Definition describes these roles and their interactions in more detail.

PROCESS DEFINITION

This process describes the transition from a knowledge "concept" (basically
anything that an interested party believes to be useful to the organization as a
whole) to a formalized knowledge asset.

Process Input

The input to the process is any knowledge concept. The use of the rather vague
term concept is deliberate; the goal is to be as inclusive as possible, permitting the
specification of any knowledge that may be useful to the organization. The most
common forms of knowledge in most organizations are documentation (printed
or on-line), computer code, and the minds of employees.

Some examples of knowledge concepts are:


 Alarm/event information and the associated failures which cause them.
 Procedures.
 Problem/resolution descriptions (which may be constructed out of multiple
alarms, failures, and procedures).
 Rules (e.g., ignore all events of type X from machine Y in time window Z;
they are caused by their routine maintenance procedures).
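A rule of the kind just described ("ignore all events of type X from machine Y in time window Z") can be sketched as a simple filter. The sketch below is illustrative only; the event fields, rule structure, and the `LINK_DOWN` event type are assumptions, not part of any specific system:

```python
from datetime import time

# Hypothetical suppression rule: ignore events of a given type, from a given
# machine, within a daily maintenance window (all names are illustrative).
SUPPRESSION_RULES = [
    {"event_type": "LINK_DOWN", "machine": "Y",
     "window": (time(2, 0), time(4, 0))},  # routine maintenance window
]

def is_suppressed(event_type, machine, event_time, rules=SUPPRESSION_RULES):
    """Return True if the event matches a suppression rule."""
    for rule in rules:
        start, end = rule["window"]
        if (event_type == rule["event_type"]
                and machine == rule["machine"]
                and start <= event_time <= end):
            return True
    return False

print(is_suppressed("LINK_DOWN", "Y", time(3, 15)))  # True: inside window
print(is_suppressed("LINK_DOWN", "Y", time(5, 0)))   # False: outside window
```

Once such a rule is captured as a knowledge asset, it can be applied automatically rather than living in the head of whoever noticed the routine maintenance pattern.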

The knowledge concept should be a specific example of some type of knowledge
(e.g., "We should be ignoring the MVS message XYZ on machine ABC; it's
cluttering up our information display, and we can't do anything about it"), rather
than general suggestions with no specific examples (e.g., "can we do something
about eliminating unnecessary errors, alarms, and messages").

In the initial stages of development, it is anticipated that there will be no firm
criteria for rejecting knowledge concepts; if the concept is inadequately specified,
it will be up to the knowledge steward to work with the initiator to develop their
concept into something that can be implemented.

As the process matures, and tools to automate process flow are developed to
support it, more formal criteria for judging knowledge concepts will emerge.

Process Tasks

The process flowchart in Figure 1 defines and documents the proposed process.
The chart also captures the relationships between the roles defined above (initiator,
steward, implementor) and specific tasks.

Figure 1. Process Flowchart


The specific tasks identified in the process flowchart are described below.

Generate Specification. The process is initiated by the creation of a specification
that describes the knowledge concept to be represented. The information to be
included in the specification is dependent upon the types of information that make
up the knowledge concept (e.g., the data required to fully specify an alarm may be
considerably different from the data required in describing the basic procedure to
determine whether an IP is "pingable").
The specification may be generated by the initiator (most likely working in a
reactive mode to a problem they've experienced) or by a knowledge steward
working proactively to identify new knowledge for the system.

Submit Specification to Steward. After the concept is defined in sufficient detail
in the preliminary specification, it is then handed off to the knowledge steward. If
the concept was initiated by the knowledge steward, this step is omitted.

Should the process be supported by a trouble ticketing system, or another system
automating process flow, this step in the process would be the forwarding of the
ticket to the steward, or the act of specification generation itself.

Validate Request. The criteria for whether a knowledge concept should be further
developed include:

 Uniqueness: Clearly, it is undesirable to redundantly represent knowledge.
The knowledge steward must assess whether the knowledge is new, or
whether it represents a useful change or extension to the existing knowledge
base. As software-based tools supporting the process are developed, it is
anticipated that searching and browsing tools will be the primary aids in
determining uniqueness.
 Scope: Knowledge, even if accurate, may not be sufficiently relevant.
The knowledge steward must judge whether the knowledge falls within the
organization's scope of responsibilities. For example, a procedure on how to
adjust desk chairs might be useful, but its position with regard to the
organization's scope is debatable.
 Utility: Knowledge, even if accurate, may not be useful. The knowledge
steward must assess the relationship between the cost to represent the
knowledge (from both implementation and maintenance perspectives)
versus the potential benefit of that knowledge. For example, representing
knowledge related to an application due to be phased out in 3-6 months is
seldom a worthwhile investment.

This judgment should be made within a set number of days.
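The uniqueness check is the criterion most readily aided by software. A first-pass keyword search might look like the sketch below; the knowledge-base structure, entry texts, and the overlap threshold are assumptions, intended only to illustrate how a steward's judgment could be supported rather than replaced:

```python
# Hypothetical in-memory knowledge base: id -> descriptive text.
knowledge_base = {
    "KB-001": "Ignore MVS message XYZ on machine ABC during backups",
    "KB-002": "Procedure to determine whether an IP address is pingable",
}

def find_similar(concept_text, kb=knowledge_base):
    """Return existing entries sharing keywords with the proposed concept,
    as a first-pass aid to the steward's uniqueness judgment."""
    words = set(concept_text.lower().split())
    matches = {}
    for kb_id, text in kb.items():
        overlap = words & set(text.lower().split())
        if len(overlap) >= 2:  # crude threshold; would be tuned in practice
            matches[kb_id] = sorted(overlap)
    return matches

print(find_similar("ping an IP address to check reachability"))
```

A real deployment would use the searching and browsing tools mentioned above; the point here is only that candidate duplicates can be surfaced automatically before the steward decides.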

Complete Specification. The knowledge steward is responsible for adding
sufficient detail to the concept so that it can be handed off for implementation. The
specification must illustrate how the new knowledge is to be integrated into the
existing knowledge structures, or alternatively show how the existing structures are
to be modified or replaced.

As part of completing the specification, information should be gathered to
objectively assess the value of the knowledge. Potential data points for this task
include:

 Number of customers impacted (establish an appropriate small set of
ranges).
 Quality of customer impact (e.g., unusable, annoyance, intermittent, quirk).
 Impact to support plan (e.g., violation, impending impact).
 Impact to on-line commitment (e.g., violation, impending impact).
 Impact to production cycle (e.g., stopped, impending impact).
 Availability of workarounds (e.g., no workaround, manual workaround,
alternate automated process exists).
 Revenue impact (establish an appropriate small set of ranges).
 Impact to system response time (e.g., 0-20% of normal, 21-50% of normal,
51-80% of normal, 81-99% of normal; alternatively, use a percentage from 0
to 100).
 Impact to functionality (e.g., 0-20% of normal, 21-50% of normal, 51-80%
of normal, 81-99% of normal; alternatively, use a percentage from 0 to 100).

For each of the above data items, points should be assigned (e.g., if the problem
addressed results in a system coming down to 0-20% of normal functionality,
assign 100 points; if the problem results only in a slowdown to 81-99% of normal
functionality, assign 0 points). As a result, each knowledge concept will have its
own unique score.
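The point-scoring scheme might be sketched as follows. The point tables below cover only two of the data items, and the specific values are illustrative assumptions chosen to mirror the functionality example in the text; a real deployment would define and calibrate a table for every item:

```python
# Illustrative point tables for two of the data items above.
FUNCTIONALITY_POINTS = {"0-20%": 100, "21-50%": 60, "51-80%": 30, "81-99%": 0}
WORKAROUND_POINTS = {"no workaround": 50, "manual workaround": 25,
                     "alternate automated process exists": 0}

def score_concept(assessment):
    """Sum points across the assessed data items, giving each knowledge
    concept its own unique score for prioritizing implementation."""
    total = 0
    total += FUNCTIONALITY_POINTS.get(assessment.get("functionality"), 0)
    total += WORKAROUND_POINTS.get(assessment.get("workaround"), 0)
    return total

concept = {"functionality": "0-20%", "workaround": "manual workaround"}
print(score_concept(concept))  # 125
```

Because the score is a simple sum, the relative weights of the data items can be adjusted later without changing the mechanism.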

Scoring has a dual purpose:

1. It drives the prioritization for implementation. Direct assignment of a
priority (e.g., critical) is explicit, but is often subjective; the reasoning
behind that assignment is often obscure, and what may be critical to one
person may not be critical to another. Use of the scoring approach provides
a more objective and adjustable means of determining priority.
2. It provides an objective means of judging contributions to the knowledge
base (see Indicators and Measurements, below).

Alternatively, any prioritization scheme already in use may be substituted. For
example, an automated analysis of an event log may identify and rank a number of
problems, which would then be introduced into the knowledge acquisition process.

Submit Specification to Implementation. Once the steward has the knowledge
concept described in sufficient detail, the specification is then handed off for
implementation.

Should the process be supported by a trouble ticketing system, or another system
automating process flow, this step in the process would simply be the forwarding
of the ticket to the implementor, or the completion of the specification itself.

Validate Specification. The implementor, if unable to generate an implementation
plan from the specification, may choose to reject the specification, returning the
specification to the steward and requesting elaboration on specific points.

The rate of rejection should be monitored closely. A high rate may indicate
routinely inadequate specifications, in which case process adjustments may be
necessary.

Generate Implementation Plan. The implementor, using the specification, creates
a plan for representing the knowledge in one or more of the technologies available.

The plan is placed in a location accessible to both the initiator and the knowledge
steward, for review. Should the process be supported by a trouble ticketing system,
or another system automating process flow, this step in the process would be the
completion of the implementation plan (and automatic availability for review).

Implement Plan. The implementor, using the plan and any subsequent feedback
from the initiator or the knowledge steward, represents the knowledge according to
the plan.

Test Implementation. Following implementation, but prior to release, the
implementor ensures that the implementation of the knowledge conforms to the
plan.

Release Implementation. The implementor makes the knowledge available to the
community.

Evaluate Outcome. The knowledge steward ensures that the implemented
knowledge conforms to the specification. A rating on this success is captured.

The initiator ensures that the implemented knowledge satisfies the need in the real
world.

If the initiator is not satisfied with the outcome, they are obligated to identify
specific problems and work with the knowledge steward and/or the implementor to
resolve those problems. It should be noted that only the initiator may end the
process.

If the initiator is satisfied with the outcome, a rating on the success is captured, and
the process ends.
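The task sequence above can be sketched as a simple state machine. The state names follow the tasks as described, but the dictionary encoding, the rework loops shown, and the enforcement function are assumptions about how an automated workflow tool might represent the flow:

```python
# Allowed transitions between process tasks (hand-offs and rework loops).
TRANSITIONS = {
    "generate_specification": ["submit_to_steward"],
    "submit_to_steward": ["validate_request"],
    "validate_request": ["complete_specification"],        # or rejection
    "complete_specification": ["submit_to_implementation"],
    "submit_to_implementation": ["validate_specification"],
    "validate_specification": ["generate_plan",
                               "complete_specification"],  # rejection loop
    "generate_plan": ["implement_plan"],
    "implement_plan": ["test_implementation"],
    "test_implementation": ["release_implementation"],
    "release_implementation": ["evaluate_outcome"],
    "evaluate_outcome": ["done", "implement_plan"],  # only initiator ends it
}

def advance(current, nxt):
    """Move a concept to its next task, enforcing the defined flow."""
    if nxt not in TRANSITIONS.get(current, []):
        raise ValueError(f"illegal transition: {current} -> {nxt}")
    return nxt

state = advance("test_implementation", "release_implementation")
print(state)
```

Encoding the flow this way makes the hand-offs explicit and gives a trouble ticketing system a single place to reject out-of-order steps.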

Process Output

The output of the process is the delivery of a formalized knowledge asset that
meets the needs of the individual who originally instigated its creation, as well as
the organization's needs. The characteristics of the deliverable include:
 The asset is in a form that cannot "walk out the door."
 The asset is in a form that encourages reuse (i.e., can be used without the
need to represent it in another format).

Because of the ownership or "customer" role of the initiator, the process is not
complete until the initiator evaluates the deliverable and provides a rating on the
quality of the deliverable. The initiator should provide the following feedback:

 Did the deliverable meet the specified requirement?
 Did the deliverable solve the real-world need that initiated the process?
 Were you satisfied with the process and its outcome in this case?

If the deliverable fails to successfully address the need, the initiator must provide
feedback to the steward (and if necessary, to the implementor) on specific
deficiencies, and appropriate changes should result. These occasions where the
process receives a "failing grade" should be viewed as learning opportunities and
evaluated during regular process reviews.

INDICATORS AND MEASUREMENTS

Indicators and measures provide actual data on whether critical client requirements
are being met by the process. In this case, the clients of the process are the
"consumers" of the organization's knowledge, and the management responsible for
meeting their organization's strategic goals.

In establishing indicators and measurements, the following assumptions are made:

 Do not establish a measure if results will not actually be collected.
 Do not collect results if the data will not be used to baseline or continuously
improve a process and its deliverable.

Process Indicators

Process indicators, or measures, are taken at critical points within the process and
serve as early warning signs that something is wrong (e.g., randomly testing
machine parts leaving a work cell to ensure that all parts are within established
tolerances). Process indicators are a means of quality assurance because corrective
action can be taken before delivering the output to the client.

Potential process indicators for the knowledge acquisition process include:

 Has specification been completed for the knowledge types involved? As the
process evolves, this measure should increasingly be used to keep
inadequately specified concepts from being handed off from one role to
another.
 Does the knowledge concept meet the criteria for development? Again, the
criteria will emerge more fully as the process evolves. If possible, initiators
should have the tools necessary to make a preliminary determination of
whether the knowledge concept is appropriate (e.g., by looking up key
concept terms in a data dictionary or on-line knowledge repository).

 Does the implementation plan meet the specified need? Following the
completion of the implementation plan, there should be an opportunity,
however brief, for interested parties to validate the plan. The knowledge
steward, in particular, should be responsible for either explicitly or tacitly
approving the plan (e.g., tacit approval may be given by not objecting within
a specified review period).

 Does the implementation meet the specification? The implementor has the
responsibility to re-examine the implementation and determine whether it
fulfils the agreed-upon plan (and hence, the specification), prior to release.

 Does the implementation meet the need? Before the process can be
completed, the initiator must review the delivered implementation and
ensure that the original real-world need has been met.

Quality Indicators

Quality Indicators or measures are usually taken at the end of the process when
defects require rework (e.g., Mean Time to Failure, Mean Time to Repair). Quality
Indicators provide feedback on the overall success of the process in meeting the
client needs.

Potential quality indicators for the knowledge acquisition process include:

 Mean Time to Repair (MTTR). When the purpose of a project is to improve
customer service, the success of the knowledge acquisition process may be
reflected in a steady decrease of the MTTR (considering both failures that
may be addressed and resolved automatically, as well as those failures for
which trouble tickets are opened).

The MTTR may also be used to illustrate that the same level of service is being
provided with reduced headcount, or with less skilled employees.
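The MTTR computation itself is straightforward; a minimal sketch over resolved trouble tickets follows, where the ticket timestamps are hypothetical sample data:

```python
from datetime import datetime

# Hypothetical trouble tickets: (opened, resolved) timestamps.
tickets = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 11, 0)),   # 2 h
    (datetime(2024, 1, 2, 14, 0), datetime(2024, 1, 2, 15, 0)),  # 1 h
    (datetime(2024, 1, 3, 8, 0), datetime(2024, 1, 3, 11, 0)),   # 3 h
]

def mttr_hours(tickets):
    """Mean Time to Repair, in hours, across resolved tickets."""
    total = sum((resolved - opened).total_seconds()
                for opened, resolved in tickets)
    return total / len(tickets) / 3600

print(mttr_hours(tickets))  # 2.0
```

Tracking this value per reporting period, rather than per ticket, is what reveals the steady decrease (or plateau) the indicator is meant to expose.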

 Outcome Ratings. How well (e.g., on scale of 1-5) do the initiator and/or
knowledge steward rate the resolution? How many implementations were
returned for rework one or more times before they were accepted?

 Usage Statistics. How often are specific, implemented "chunks" of
knowledge (e.g., a specific procedure) accessed/used? Usage patterns may
reveal that certain representations of knowledge are rarely or never used,
making them potential candidates for rework or culling. These patterns may
also serve to bias implementors towards those representations that receive
more use, or suggest needed upgrades in the underlying technologies.

 Cycle Time Statistics. A host of statistical measures are available, including
the time from initiation to delivery, time between initiator submission and
addition into implementation queue, time between submission by steward
and acceptance by implementation, average time in implementation queue,
difference between promised and actual delivery, time between addition to
queue and delivery, etc. Some of these may be useful in improving the
process; others may prove meaningless. Carefully selecting time-based
statistics and tying them into process improvement (and discarding those
that don't work) is part of the process evolution.
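Most of these cycle-time measures reduce to differences between timestamped process milestones. A sketch follows; the milestone names and dates are hypothetical, chosen to match the hand-offs described in the process definition:

```python
from datetime import datetime

# Hypothetical timestamped milestones for one knowledge concept.
events = {
    "initiated": datetime(2024, 3, 1),
    "submitted_by_steward": datetime(2024, 3, 4),
    "accepted_by_implementation": datetime(2024, 3, 6),
    "promised_delivery": datetime(2024, 3, 10),
    "delivered": datetime(2024, 3, 12),
}

def days_between(events, start, end):
    """Elapsed days between two recorded milestones."""
    return (events[end] - events[start]).days

print(days_between(events, "initiated", "delivered"))          # 11
print(days_between(events, "submitted_by_steward",
                   "accepted_by_implementation"))              # 2
print(days_between(events, "promised_delivery", "delivered"))  # 2 days late
```

Recording milestones once and deriving every statistic from them keeps the measures cheap to add or discard as the process evolves.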

Using Process and Quality Indicators

In reality, it is easy for a process to exist on paper, and for alternate, informal
processes to spring up. To ensure that this does not happen, it is imperative to:

 evolve the process, to meet the needs of its users, and
 align the process to the self-interest of its intended users.

Process and quality indicators are critical elements in both of these efforts.

Evolving the Process. The knowledge acquisition process will require, at a
minimum, frequent minor adjustments during the first year of use, as experience is
gained in using the available tools and technologies. The process indicators are
used as pointers to where adjustments are necessary. A major rework of the process
may also become necessary at some point, if the quality indicators plateau below
the desired levels. Ongoing project planning should take knowledge acquisition
process reviews and adjustments into account.

Aligning the Process. People generally act out of self-interest. If there is no
perceived benefit or consequence to using a process, the process will be ignored. If
there is no perceived benefit, and only negative consequences (i.e., users will be
punished for failure to use the process), the process will be utilized at the lowest
level possible.

In the process definition above, the initiator is viewed as the owner of their own
knowledge, and has been given the final approval over whether the formal
implementation of that knowledge meets the real-world need. This is intended to
make it clear to them that there is a benefit in using the process to get their
knowledge into the system. Hopefully, the initiators will see this as an
improvement on the current process, in which ideas are "tossed over the wall" to
other groups, sometimes never to be seen again.

Illustrating benefit is, by itself, not sufficient to guarantee compliance.
Consequently, it is recommended that the process be directly linked to positive
(and negative) consequences. This should be done by linking the process to an
existing employee evaluation process or program.

Specifically, designated individuals (initiators and knowledge stewards) should be
held responsible for generating a specific amount of knowledge during a given
time frame. Rather than demanding a certain number of knowledge concepts, it
may be more meaningful to assign an expected number of "points" (from the
scoring mechanism described above), so that there is a built-in bias towards high-
impact knowledge concepts, and away from "junk" knowledge (knowledge
submitted to the system for no reason other than to satisfy a bureaucratic
requirement).

After the process has matured, and expectations of what a knowledge concept
specification includes become clearer, it may also be beneficial to factor the
rejection rate for an initiator's knowledge concepts into the evaluation process as
well.

Knowledge stewards should additionally be held accountable for the amount of
time it takes them to complete a knowledge specification for submittal to
implementation. Again, an emphasis on points will bias the process towards high-
impact knowledge. Another factor to consider for knowledge stewards is the rate of
rejection by implementors of completed knowledge specifications.

Implementors should be judged based on the amount of knowledge implemented,
tested, and released. Additionally, the ratings given to the implementation by
knowledge stewards and initiators should be considered.

THE ONTOLOGY

An ontology is a formal specification of the vocabulary to be used in specifying
knowledge (Gruber, 1991). It may be thought of as a network of objects, each of
which has attributes or properties unique to that object (and potentially sharable
with specializations of that object, i.e., a child object shares or inherits many of the
attributes and relations of the parent object), and named relations to other objects.

The purpose of the ontology is to provide a uniform, text-based intermediate
representation of the knowledge types specific to a development effort, that is
understandable by either humans or machines. The intermediate representation
provides a means of describing knowledge, at any level of granularity, without
expert knowledge of the specific technologies that will be used to implement that
knowledge. This representation is useful on a number of levels.

 For clients, who need to describe knowledge to be added to the system, the
ontology offers a standard vocabulary, and guidance in creating a precise
specification.

 For developers, who need to understand what knowledge is already in the
system, the intermediate representation provides a rich description of the
knowledge and an index to its representation within specific technologies.

 For technical writers, who need to unambiguously describe behaviour, it
offers a domain-specific (but technology-independent) vocabulary.

Use of the ontology as an intermediate knowledge representation form also allows
the underlying technologies to be upgraded or replaced, as needed.

The ontology should expand on any work already done to standardize the
terminology used in a given domain, to include objects of all types relevant to the
project.

The anticipated evolution of the ontology is that it will begin with the identification
of certain simple terms (e.g., domain concepts) and their arrangement within a
network or hierarchy. As experience is gained in representing the knowledge,
complex terms will emerge that act as a means of functionally grouping a number
of simple objects together (e.g., a problem/resolution might consist of one or more
events, the associated underlying failures, and one or more procedures).
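The object/attribute/inheritance view described above can be sketched with classes. The simple terms (event, procedure) and the complex term (problem/resolution) follow the text's examples, but the attribute names and the relation-dictionary encoding are assumptions:

```python
class OntologyObject:
    """Base object: named attributes plus named relations to other objects;
    child classes inherit attributes and relations from their parents."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes
        self.relations = {}  # relation name -> list of related objects

    def relate(self, relation, other):
        self.relations.setdefault(relation, []).append(other)

class Event(OntologyObject):      # simple term
    pass

class Procedure(OntologyObject):  # simple term
    pass

class ProblemResolution(OntologyObject):
    """Complex term: functionally groups simple objects together."""
    pass

# A problem/resolution built from one event and one procedure.
ev = Event("MVS message XYZ", source="machine ABC")
proc = Procedure("suppress message", steps=["add filter rule"])
pr = ProblemResolution("noisy MVS message")
pr.relate("caused_by", ev)
pr.relate("resolved_by", proc)

print([o.name for o in pr.relations["resolved_by"]])  # ['suppress message']
```

The inheritance in this sketch mirrors the parent/child sharing of attributes and relations described above: a specialized event type would subclass `Event` and add only what differs.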

One metaphor for this is LEGO building bricks; a fixed set of objects is defined,
each with its own properties, and ways in which it can be connected to other
objects. Users may choose to instantiate objects, assemble them in a precisely pre-
defined manner (similar to buying a LEGO model, and assembling it as defined in
the instructions), elaborate on a pre-defined model, or assemble them in some
novel but useful fashion. Ideally, all of these approaches are supported (although
computerized support for some of these ideas may not be available immediately).

A software-based tool supporting the ontology should be able to do a number of
different things:

 Creation: Given a simple or complex object type, lead the user, with form
and menu-based interfaces, through supplying the necessary information.
Knowledge of an underlying representation, such as the Knowledge
Interchange Format (Genesereth & Fikes, 1992), should not be required.
 Browsing: Enter the ontology from any point, and browse the representation
in the manner of a hypertext document. Alternatively, enter keywords and
view all related terms. The task of figuring out whether something is already
in the system should be made as easy as possible.

 Guidance: Use the attributes and relations that have already been defined to
dynamically identify object types, or guide the creation of new object types.
For example, present a menu of attributes and relations to the user. As they
cumulatively select the appropriate terms that they know they need to
define, they see a list of all those object types that contain that term. The list
should quickly get small enough for the user to either identify the correct
object type, or realize that they need a new object type (in which case, the
terms they've selected would serve as the basis for the formal specification
of that type).
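The guidance interaction just described, narrowing candidate object types as the user selects attributes, reduces to set filtering. The type/attribute catalog below is hypothetical, meant only to show the narrowing behaviour:

```python
# Hypothetical catalog: object type -> the attributes/relations it contains.
CATALOG = {
    "alarm": {"severity", "source", "timestamp", "message"},
    "procedure": {"steps", "owner", "prerequisites"},
    "problem_resolution": {"severity", "source", "steps", "resolution"},
}

def candidate_types(selected, catalog=CATALOG):
    """Return the object types containing every attribute selected so far;
    an empty result suggests the user needs a new object type."""
    return sorted(t for t, attrs in catalog.items()
                  if selected <= attrs)

print(candidate_types({"severity"}))           # ['alarm', 'problem_resolution']
print(candidate_types({"severity", "steps"}))  # ['problem_resolution']
print(candidate_types({"severity", "owner"}))  # [] -> new type needed
```

Each additional selection can only shrink the candidate list, which is what lets the user converge quickly on an existing type or conclude that a new one is needed.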

In addition, it is expected that any software developed would actively support the
knowledge acquisition and management process, and be easily modified in
response to changes or refinements in the process.

CURRENT STATUS AND FUTURE DIRECTIONS

This document describes one possible approach to the knowledge acquisition and
management process. It has been proposed as a means of addressing the specific
knowledge needs of a U S WEST project to better manage hardware and software
event management for an internal help desk. The process is currently undergoing
testing to evaluate its utility and identify weaknesses. To fully integrate this
process into a project will require further work.

Process Implementation and Validation. If the process described here is viewed
as a preferable approach to the knowledge acquisition and management process,
the next step is to implement the process. Implementation would require:

 Identifying individuals to carry out the process.

 Formalizing the process flow.

 Identification and agreement on the ontology.

 Identification of a preliminary set of metrics to track.

 Alignment of the knowledge acquisition and management process with
employee evaluations for the set of employees affected.

 Establishment of a process review cycle.


The knowledge acquisition process can be implemented through the use of paper
forms and careful record-keeping, but many of its benefits can only be realized
through software-based tools that facilitate the creation and maintenance of the
knowledge generated by the process. A rapid development effort of a tool with a
suitable front end (e.g., a WWW browser for a heterogeneous client hardware base)
is a possible approach; if rapid development is not possible (i.e., tools and
developers capable of quickly modifying the interface are not available),
development should occur after the process has had at least a few months to
stabilize. Development of software to support the process described herein is a
future objective, along with refinement of the process itself.
