
Project Report

ON
DIFFERENCE BETWEEN AN EXPERT SYSTEM &
ARTIFICIAL INTELLIGENCE

In partial fulfillment of the requirements for the


Award of the Degree of
BACHELOR OF COMMERCE (PROF.)

Submitted To: MR. SACHIN SRIVASTAVA, Coordinator
Submitted By: ABHISHEK VERMA, B.Com (Prof.) – 1st Year, System ID: 2014014943

School of Business Studies (Sharda University)


GREATER NOIDA
CANDIDATE’S DECLARATION
I do hereby declare that this project report entitled “DIFFERENCE BETWEEN AN EXPERT SYSTEM & ARTIFICIAL INTELLIGENCE” has been prepared by me under the avid guidance and supervision of Mr. Sachin Srivastava (Coordinator) in partial fulfillment of the requirements for the degree of B.Com (Professional) at Sharda University during the session 2015-17.

To the best of my knowledge & belief, this is my own work and has not
been submitted anywhere earlier for any other degree.

(Abhishek Verma)
ACKNOWLEDGEMENT

When I embarked on this project, it appeared to be an onerous task. Slowly, as I progressed, I realized that I was not alone after all! There were friends and well-wishers who, with their magnanimous and generous help and support, made it a far easier affair.

I wish to express my gratitude to all the concerned persons who extended their kind help, guidance and suggestions, without which it would not have been possible for me to complete this project report.

I am deeply indebted to my guide, Mr. Sachin Srivastava (Coordinator), not only for his valuable and enlightened guidance but also for the freedom he allowed me during this project work.

My heart goes out to my parents, who bore with a smile all the troubles I caused them during the entire study period and beyond.

(ABHISHEK VERMA)
TABLE OF CONTENTS

ABSTRACT

INTRODUCTION

EXPERT SYSTEM COMPONENTS:

BUILDING AN EXPERT SYSTEM:

REPRESENTING THE KNOWLEDGE:

WORKING WITH THE KNOWLEDGE:

DEVELOPMENT TOOLS:

CONCEPTS IN KNOWLEDGE ACQUISITION:

INTELLIGENT EDITORS:

DISADVANTAGES OF RULE-BASED EXPERT SYSTEMS:

EXPERT SYSTEMS AND ARTIFICIAL INTELLIGENCE:

APPLICATION OF EXPERT SYSTEM:

IMPORTANCE OF EXPERT SYSTEM:

LIMITATIONS

CONCLUSION
ABSTRACT

An expert system is a computer program that contains stored knowledge and

solves problems in a specific field in much the same way that a human expert

would. The knowledge typically comes from a series of conversations between the

developer of the expert system and one or more experts. The completed system

applies the knowledge to problems specified by a user.

One promising application is to use expert systems as the foundation of training devices that

act like human teachers, instead of like the sophisticated page-turners that

characterize traditional computer-aided instruction. The idea is to combine one

expert system that provides domain knowledge with another expert system that

has the know-how to present the domain knowledge in a learnable way. The

system could then vary its presentation style to fit the needs of the individual

learner.
Expert Systems Evolution.

3.1 Timeline.

Expert systems technology rests on a broad background. Fig. 2.1 presents the major milestones in the development of expert systems.


As one can see, the history of expert systems starts around 1943 with the Post

production rules. The initial trend was towards intelligent systems that relied on

powerful methods of reasoning rather than making use of domain knowledge.

They were also designed to cover any domain rather than a specialised area.

The shortcoming of this approach was that the machine had to discover

everything from first principles in every new domain. In contrast, a human expert

would rely on domain knowledge for high performance.

3.2 Milestones.

3.2.1 The Beginning.

It all started in the late 50s and early 60s, when several programs aimed at

general problem solving were written. Post (Ref 3) was the first to use the

production systems in symbolic logic, proving that any system of mathematics or

logic could be written as a certain type of production rule system. The basic idea

was that any system of mathematics or logic is just a set of rules specifying how to change one string of symbols into another. The major difference

between a Post production system and a human is that the manipulation of


strings is only based on syntax and not on any semantics or understanding of the

underlying knowledge. A Post production system consists of a group of

production rules. The limitation of the Post production system is that, if two or

more rules apply, there is no control specifying whether to apply one, some or all

rules, and in what order. The Post production rules are not adequate for writing

practical programs because of the lack of a control strategy.

At the same time (around 1943), McCulloch and Pitts developed a mathematical

modelling of neurons, which later gave birth to the connectionist programming

paradigm. Hebb gave an explanation of the learning process of the neurons in

1949.

The next step happened around 1954, with the introduction of a control structure

for the Post production system, by Markov. A so-called Markov algorithm is an

ordered group of productions, applied in order of priority to an input string. The

termination of the Markov algorithm is triggered by either a production not

applicable to the string or by the application of a rule terminated with a period. The Markov algorithm has a definite control strategy. The higher priority rules are

used first. If not applicable, the algorithm tries the lower priority rules. Although

applicable to a real expert system, the Markov algorithm may take a long time to

execute on a system with many rules. If the response time is too long, the system

will not be used no matter how good it is.
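As an informal illustration of this control strategy, the following short Python sketch applies an ordered list of string-rewriting rules, always trying the highest priority rule first and stopping when a terminal rule fires or when no rule applies. The rules themselves are invented for the example and are not taken from any real system.

# Illustrative sketch of a Markov algorithm (ordered string-rewriting rules).
# The rules below are made up for the example; a rule marked terminal=True
# plays the role of a production "terminated with a period".

RULES = [
    # (pattern, replacement, terminal) -- listed in priority order
    ("ab", "b", False),
    ("b",  "c", False),
    ("c",  "DONE", True),
]

def markov(string, rules, max_steps=1000):
    for _ in range(max_steps):
        for pattern, replacement, terminal in rules:   # highest priority first
            if pattern in string:
                string = string.replace(pattern, replacement, 1)
                if terminal:                            # terminal rule: stop
                    return string
                break                                   # restart from the top
        else:
            return string                               # no rule applies: stop
    return string

print(markov("aab", RULES))   # aab -> ab -> b -> c -> DONE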


The possible improvement to Markov's approach of always trying all the rules was, of course, to be able to apply any rule in a non-sequential order. This would

be possible if the algorithm already knew about all the rules. This was to happen

later on, around 1977 - 1979.

3.2.2 The human problem solving model.

An outstanding example of a problem solving program was to be the General

Problem Solver, described in the published work Human Problem Solving (Ref

2), by Newell and Simon. Their work demonstrated that much of the human

problem-solving (cognition) could be expressed as IF..THEN production rules.

Newell and Simon regarded knowledge as modular, each rule corresponding to a small module of knowledge called a chunk. In this model, human memory is organised as loosely arranged chunks, with links to other related chunks. Sensory input provides stimuli to the brain and triggers appropriate rules in long-term memory, which then produce a suitable response. Long-term memory is where the knowledge is stored; short-term memory, however, is used for temporary knowledge storage during problem solving.


Human problem solving must also include a so-called cognitive processor. This is

the equivalent of the inference engine in a modern expert system. The cognitive

processor will try to find the rules to be activated by the appropriate stimuli. If

more than one rule is to be activated, then a conflict resolution must be

performed in order to decide which rule will be activated first (priority

assignment).

Newell and Simon not only managed to create a credible model of human problem solving, but also laid the basis of the modern rule-based expert systems. The concept of knowledge chunks brings with it the problem of granularity in the design of expert systems. Too coarse a granularity means several chunks of knowledge are blended together in one rule; too fine a granularity makes an isolated rule difficult to understand.

A. A change of concept.

The General Problem Solver reflected the trend of the sixties to attempt to produce general-purpose systems that heavily relied on reasoning and not on

domain knowledge. Both approaches have their advantages and disadvantages

though.
Domain knowledge may be powerful, but it is limited to a particular domain.

Applying it to another domain involves skill rather than genuine expertise.

In the early seventies it was already obvious that the only way to achieve

machines comparable in performance with human counterparts was to use

domain knowledge.

In many cases, human experts rely less on reasoning and a lot more on a vast

knowledge of heuristics (practical knowledge) and experience. When a human

expert is confronted with a substantially different problem, he/she must use

reasoning from first principles, and he or she is no better in this respect than any other person.

Therefore, expert systems relying solely on reasoning are doomed to failure. The

factors that contributed to the creation of the modern rule-based systems are

presented in Fig. 2.2.

Fig. 2.2 Creation of the modern rule-based expert system (Ref. 1, p. 15)

In 1965, work began on DENDRAL, the first expert system, aimed at identifying chemical constituents. However, DENDRAL was a unique system in

which the knowledge base was mixed together with the inference engine. This

made the task of reusing it to build a new expert system almost impossible.

1972-73 brought the development of the MYCIN system, intended to diagnose

illnesses. MYCIN was a step forward because:


• it has demonstrated that AI may be used in real-life situations;

• MYCIN was used to test and prototype the explanation facility, the

automatic knowledge acquisition (bypassing the explicit knowledge coding) and

even intelligent tutoring (found in the GUIDON system);

• the concept of expert system shell was demonstrated to be viable.

MYCIN explicitly separated the knowledge base from the inference engine,

making the core of the system reusable. A new expert system could therefore be built by emptying out the knowledge base of MYCIN and reusing its inference and explanation modules. Removing the medical knowledge from MYCIN in this way produced the shell EMYCIN (Empty MYCIN).

Returning to the improvements in control algorithms: in 1979 Charles R. Forgy, at Carnegie Mellon University, developed a fast pattern-matching algorithm which stores information about rules in a network. The RETE algorithm is fast

because it does not match facts against every rule on every recognise-act cycle.

Instead, it looks for changes in matches. Because the static data does not change,

it can be ignored and the processing speed increases dramatically. This algorithm

was used to develop the OPS (Official Production System) language and shell.

The foundations of the modern rule-based expert systems are shown in Fig. 2.3.
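The idea of matching only the changes, rather than every fact against every rule, can be illustrated with a toy Python sketch. This is not the real RETE network; the facts and rules are invented, and only the central idea (re-examining just the rules that mention a newly derived fact) is shown.

# Toy illustration of the idea behind RETE-style matching: process only the
# *changes* (newly derived facts) instead of re-matching all rules against
# all facts on every recognise-act cycle. The rules and facts are invented.

RULES = [
    ({"a", "b"}, "c"),   # IF a AND b THEN c
    ({"c"},      "d"),   # IF c THEN d
]

def forward_chain(rules, initial_facts):
    facts = set()
    agenda = list(initial_facts)          # queue of new (changed) facts
    while agenda:
        new = agenda.pop()
        if new in facts:
            continue
        facts.add(new)
        # Only rules whose conditions mention the new fact are re-examined.
        for conditions, conclusion in rules:
            if new in conditions and conditions <= facts and conclusion not in facts:
                agenda.append(conclusion)
    return facts

print(forward_chain(RULES, {"a", "b"}))   # {'a', 'b', 'c', 'd'} (set order may vary)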

A variation of the DENDRAL system, called META-DENDRAL, was developed in 1978. This program used induction to infer new rules of chemical structure. META-DENDRAL has been one of the most important attempts to defeat the acquisition bottleneck, where the knowledge has to be extracted from a human expert and explicitly coded into the knowledge base.

Another example of meta-reasoning was the TEIRESIAS knowledge acquisition

program for MYCIN. In case of an erroneous diagnosis, TEIRESIAS would

interactively lead an expert through the reasoning MYCIN had performed, until the point where the mistake was made was discovered. The acquired rule would first be tested for compatibility with the existing rules and, if no conflicts were found, added to the knowledge base. Otherwise, TEIRESIAS would query the

user about the conflict.

Meta-knowledge in TEIRESIAS can be either a control strategy (on how the rules

must be applied), or validation-and-verification meta-knowledge (whether the

rule is in the correct form so it can be added to the knowledge base).


Fig. 2.3 The foundation of the rule-based expert systems (Ref. 1, p. 32)
3.2.3 A new trend.

Around 1980, companies started producing commercial expert systems, accompanied by powerful software for expert system development. An

example is the Automated Reasoning Tool (ART, released later on in 1985). Also

in 1980, new specialised hardware was developed, optimised to run the expert

system software faster.

The LISP machines were specially designed to run the most used expert system

development base language of the period, LISP. The native assembly language,

operating system, etc. were actually created in LISP (LISt Processing language).

As one can imagine, this type of dedicated technology came at a price. While being more efficient and faster at running LISP code, the dedicated machines had price tags around $100,000. Obviously there was a need for more affordable

software that could run on common machines.

1985 is another milestone in the development of expert systems, with the

introduction of CLIPS by NASA. Written in C, CLIPS is portable (i.e. suitable for a

large number of platforms), fast (it uses the RETE pattern matching algorithm

developed by Forgy in 1979) and cheap or free for some users. CLIPS can run on

any machine that supports standard C. CLIPS made expert system development and study accessible to large groups of potential


users. This is always vital in promoting the acceptance of new technologies. 1990 saw the revival of neural networks with the Artificial Neural Systems (ANS, refer section E, part 1). In 1992, 11 shell programs were available for Macintosh, 29 for MS-DOS, 4 for UNIX platforms and 12 for dedicated mainframe applications (Ref. 7).

By 1994, commercial ANS were being offered (refer section E, part 3) by various companies at different levels of price and features. The number of expert system patents in the US grew steadily from 20 in 1988 to over 900 in 1996 (Ref. 7).

The GUESS system (Generically Used Expert Scheduling System), by Liebovitz et al., was devised in 1997, using an object-oriented, constraint-based AI toolkit

approach to perform scheduling.

The current (1997-) trend cannot escape the influence of the Internet. Envisaged

applications are:

• intelligent customer and vendor agents for use on the web in E-commerce;

• intelligent search engines;


• interagent communications methods relating to AI in E-commerce.
1. INTRODUCTION:

Within the last ten years, artificial intelligence-based computer

programs called expert systems have received a great deal of attention. The

reason for all the attention is that these programs have been used to solve an

impressive array of problems in a variety of fields. Well-known examples include

computer system design, locomotive repair, and gene cloning.

How do they do it? An expert system stores the knowledge of one or more human

experts in a particular field. The field is called a domain. The experts are called

domain experts. A user presents the expert system with the specifics of a problem

within the domain. The system applies its stored knowledge to solve the problem.

Conventional programming languages, such as FORTRAN and C, are

designed and optimized for the procedural manipulation of data (such as

numbers and arrays). Humans, however, often solve complex problems using

very abstract, symbolic approaches which are not well suited for implementation

in conventional languages. Although abstract information can be modeled in

these languages, considerable programming effort is required to transform the

information to a format usable with procedural programming paradigms.


One of the results of research in the area of artificial intelligence has

been the development of techniques which allow the modeling of information at

higher levels of abstraction. These techniques are embodied in languages or tools

which allow programs to be built that closely resemble human logic in their

implementation and are therefore easier to develop and maintain. These

programs, which emulate human expertise in well defined problem domains, are

called expert systems. The availability of expert system tools, such as CLIPS, has

greatly reduced the effort and cost involved in developing an expert system.

The goal of expert systems research was to program into a computer the

knowledge and experience of an expert. Expert systems are used in medicine,

business management, for searching for natural resources and much more.

Randall Davis of MIT is quoted as saying, "Expert systems can be experts - you

come to them for advice. Or they can be coworkers, on a more equal footing. Or

they could be assistants to an expert. All along the spectrum there are very useful

systems that can be built."

Expert Systems came at a time when AI research was shifting "from a power-

based strategy for achieving intelligence to a knowledge-based approach." Where

formerly researchers were aiming at making more powerful systems that used

clever techniques and tricks, they instead began to look at programming more

knowledge into machines.
The rise in expert systems research brought with it a new type of

engineering: knowledge engineering. It was the job of a knowledge engineer to

take the knowledge of an expert and convert that knowledge into a form that

could be understood by computers. This was quite a long process. In the case

of Mycin, several physicians were presented with cases and asked to diagnose the

patient, propose a treatment and explain the reasoning behind their decisions.

From these interactions with the experts, knowledge engineers came up with a set

of rules to be programmed into the computer. Then the computer was tested on

several cases and its conclusions matched with those of human experts. When

necessary, rules were added or modified to fix errors in the computer's program.

However, expert systems still had their limitations. It took a lot of

time to build the system with many hours of testing and debugging. Also expert

systems lacked what Douglas Hofstadter called "the more subtle things that constitute human intelligence, such as intuition." Sometimes we know things but can't really explain exactly why we know them. We often make decisions on this

type of intuition. Since engineers didn't have any defined rules for how intuition

worked, they couldn't program intuition into computers.

Also lacking in expert systems was the ability to learn from their own mistakes. If a human expert came to an incorrect conclusion, she would be able

to understand and learn from her mistake in order to avoid making the same or
similar mistakes in the future. An expert system, however, could not learn from

its mistakes and would not be able to avoid making the same mistake in the

future. Once an expert system was found to have an error, the only way to fix that

error was to have it reprogrammed by an engineer.

2. EXPERT SYSTEM COMPONENTS:

The part of the expert system that stores the knowledge is called the

knowledge base. The part that holds the specifics of the to-be-solved problem is

(somewhat misleadingly) called the global database (a kind of "scratch pad"). The

part that applies the knowledge to the problem is called the inference engine.

As is the case with most contemporary computer programs, expert

systems typically have friendly user interfaces. A friendly interface doesn't make

the system's internals work any more smoothly, but it does enable inexperienced

users to specify problems for the system to solve and to understand the system's

conclusions.
3. BUILDING AN EXPERT SYSTEM:

Expert systems are the end-products of knowledge engineers. To build

an expert system that solves problems in a given domain, a knowledge engineer

starts by reading domain-related literature to become familiar with the issues and

the terminology. With that as a foundation, the knowledge engineer then holds

extensive interviews with one or more domain experts to "acquire" their

knowledge. Finally, the knowledge engineer organizes the results of these

interviews and translates them into software that a computer can use.

The interviews take the most time and effort of any of these stages. They

often stall system development. For this reason, developers use the term

knowledge acquisition bottleneck to characterize this phase.

Anyway, expert systems are meant to solve real problems which

normally would require a specialized human expert (such as a doctor or a

mineralogist). Building an expert system therefore first involves extracting the

relevant knowledge from the human expert. Such knowledge is often heuristic in

nature, based on useful ``rules of thumb'' rather than absolute certainties.

Extracting it from the expert in a way that can be used by a computer is generally

a difficult task, requiring its own expertise.


A knowledge engineer has the job of extracting this knowledge and building the

expert system knowledge base.

The core components of expert system are the knowledge base and inference

engine.

1. Knowledge base: A store of factual and heuristic knowledge. An ES tool

provides one or more knowledge representation schemes for expressing

knowledge about the application domain. Some tools use both frames

(objects) and IF-THEN rules. In PROLOG the knowledge is represented as

logical statements.
2. Inference engine: Inference mechanisms for manipulating the symbolic

information and knowledge in the knowledge base to form a line of

reasoning in solving a problem. The inference mechanism can range from

simple modus ponens backward chaining of IF-THEN rules to case-based

reasoning.


3. Knowledge acquisition subsystem: A subsystem to help experts build

knowledge bases. Collecting knowledge needed to solve problems and

build the knowledge base continues to be the biggest bottleneck in

building expert systems.

4. Explanation subsystem: A subsystem that explains the system's

actions. The explanation can range from how the final or intermediate

solutions were arrived at to justifying the need for additional data.

5. User interface: The means of communication with the user. The user

interface is generally not a part of the ES technology, and was not given

much attention in the past. However, it is now widely accepted that the

user interface can make a critical difference in the perceived utility of a

system regardless of the system's performance.


A first attempt at building an expert system is unlikely to be very successful.

This is partly because the expert generally finds it very difficult to express exactly

what knowledge and rules they use to solve a problem. Knowledge acquisition for

expert systems is a big area of research, with a wide variety of techniques

developed. However, generally it is important to develop an initial prototype

based on information extracted by interviewing the expert, then iteratively refine

it based on feedback both from the expert and from potential users of the expert

system.

In order to do such iterative development from a prototype it is important

that the expert system is written in a way that it can easily be inspected and

modified. The system should be able to explain its reasoning (to expert, user and

knowledge engineer) and answer questions about the solution process. Updating

the system shouldn't involve rewriting a whole lot of code - just adding or

deleting localized chunks of knowledge.

The most widely used knowledge representation scheme for expert systems

is rules (sometimes in combination with frame systems). Typically, the rules

won't have certain conclusions - there will just be some degree of certainty that

the conclusion will hold if the conditions hold. Statistical techniques are used to

determine these certainties. Rule-based systems, with or without certainties, are

generally easily modifiable and make it easy to provide reasonably helpful traces
of the system's reasoning. These traces can be used in providing explanations of

what it is doing.

Expert systems have been used to solve a wide range of problems in

domains such as medicine, mathematics, engineering, geology, computer science,

business, law, defense and education. Within each domain, they have been used

to solve problems of different types. Types of problem include diagnosis (e.g., of a system fault, disease or student error); design (of a computer system, hotel, etc.);

and interpretation (of, for example, geological data). The appropriate problem

solving technique tends to depend more on the problem type than on the domain.

4. REPRESENTING THE KNOWLEDGE:

The format that a knowledge engineer uses to capture the knowledge is

called a knowledge representation. The most popular knowledge representation

is the production rule (also called the if-then rule). Production rules are intended

to reflect the "rules of thumb" that experts use in their day-to-day work. These

rules of thumb are also referred to as heuristics. A knowledge base that consists

of rules is sometimes called a rule base.


To clarify things a bit, imagine that we have set out to build an expert

system that helps us with household repairs. Toward this end, we have held

lengthy interviews with an experienced plumber. Here are two if-then rules that

the plumber has told us. (Bear in mind that a working expert system can contain

tens of thousands of rules.)

Rule 1:

If you have a leaky faucet

And

If the faucet is a compression faucet

And
If the leak is in the handle

Then tighten the packing nut.

Rule 2:

If you've tightened the packing nut

And

If the leak persists

Then replace the packing.

(A compression faucet has two handles, one for hot water, and the other for cold.

The "packing" and the "packing nut" are two items that sit under a faucet

handle.) In each rule, the lines that specify the problem situation (the lines that

begin with "if" and "and") are called the antecedent of the rule. The line that

specifies the action to take in that situation (the line that begins with "then") is

called the consequent.
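To make the terminology concrete, the two plumbing rules above could be written down as simple data structures, each with an antecedent (the list of "if"/"and" lines) and a consequent (the "then" line). The Python form below is only a sketch; a real tool such as CLIPS has its own rule syntax.

# Rule 1 and Rule 2 from the plumbing example, written as plain data:
# each rule has an antecedent (all conditions must hold) and a consequent.

rules = [
    {   # Rule 1
        "antecedent": ["you have a leaky faucet",
                       "the faucet is a compression faucet",
                       "the leak is in the handle"],
        "consequent": "tighten the packing nut",
    },
    {   # Rule 2
        "antecedent": ["you've tightened the packing nut",
                       "the leak persists"],
        "consequent": "replace the packing",
    },
]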

5. WORKING WITH THE KNOWLEDGE:

In the two rules in the preceding section, note that the consequent of Rule

1 appears (slightly changed) in the first line of the antecedent of Rule 2. This kind
of inter-rule connection is crucial to expert system operation. An inference engine

uses one of two strategies to examine interconnected rules.

In one strategy, the inference engine starts with a possible solution and

tries to gather information that verifies this solution. Faced with a leaky faucet

(and knowing our two rules), an inference engine that follows this strategy would

try to prove that the packing should be replaced. In order to do this, the inference

engine looks first at Rule 2 (because its consequent contains the conclusion that

the inference engine is trying to prove), then at Rule 1 (because its consequent

matches a statement from Rule 2's antecedent). This process is called backward

chaining.

When the inference engine needs information about the problem that

isn't in the knowledge base, the system questions the user (the question-answer

type of interaction typifies backward-chaining systems). The user's answers

become part of the problem specification in the global database. Because

backward chaining starts with a goal (the solution it tries to verify), it is said to be

goal-driven.
In the other strategy, the user begins by entering all the specifics of a

problem into the system, which the system stores in its global database. After

this, the system usually does not question the user further. The inference engine

inspects the problem specifications and then looks for a sequence of rules that

will help it form a conclusion.

In our leaky faucet example, the system user might specify the problem

as a leaky compression faucet with a leak in the handle. The inference engine

examines Rule 1 (because its antecedent matches the specifics of the problem),

and then Rule 2 (because its antecedent contains a statement from Rule 1's

consequent). In our example, Rule 2's consequent is the conclusion. This process

is called forward chaining. Because this process starts with data (specification of

a problem), it is said to be data-driven.
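A data-driven interpreter for the two faucet rules can be sketched in a few lines of Python. The condition names below are shortened forms of the rule text, and Rule 1's consequent is treated as a fact once the rule fires so that Rule 2 can chain on it; this is only an illustrative sketch, not the code of any particular expert system tool.

# Forward chaining: start from the problem facts in the global database and
# keep firing rules whose antecedents are satisfied, adding each consequent
# as a new fact, until nothing further can be concluded.

RULES = [
    (["leaky_faucet", "compression_faucet", "leak_in_handle"], "tightened_packing_nut"),
    (["tightened_packing_nut", "leak_persists"],               "replace_packing"),
]

def forward_chain(rules, facts):
    facts = set(facts)                      # the "global database"
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if consequent not in facts and all(c in facts for c in antecedent):
                facts.add(consequent)       # fire the rule
                changed = True
    return facts

# The user reports a leaky compression faucet, a leak in the handle, and
# (after following Rule 1's advice) that the leak persists.
problem = {"leaky_faucet", "compression_faucet", "leak_in_handle", "leak_persists"}
print(forward_chain(RULES, problem))        # the result includes 'replace_packing'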

6. DEVELOPMENT TOOLS:

One of the first expert system development tools was a by-product of one

of the first expert systems. In the 1970s, Stanford researchers developed a system

called MYCIN. MYCIN contains a multitude of rules (coded in the computer


language LISP) that represent medical knowledge and enable the system to

perform medical diagnoses.

MYCIN's developers reasoned that removing the medical diagnosis rules

should not affect the workings of the system's inference engine and global

database. With the medical knowledge gone (resulting in a system they named

EMYCIN-"E" for "empty"), they also reasoned that they could insert knowledge

from other domains into the knowledge base and thus build a fully functioning

expert system. Because they were right on both counts, the expert system shell

was born.

An expert system shell contains pre-coded expert system components

(including backward chaining, forward chaining, or both). The knowledge

engineer just adds the knowledge.
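The shell idea can be pictured in miniature with the Python sketch below: the inference engine knows nothing about any domain, and becomes a plumbing, medical or any other adviser purely according to the rule set loaded into its (initially empty) knowledge base. This is only a toy sketch of the concept, not the EMYCIN architecture, and the rule shown is invented.

# A toy "shell": a reusable forward-chaining engine with an empty knowledge
# base. Loading a different rule set gives a different expert system without
# changing the engine itself.

class Shell:
    def __init__(self):
        self.rules = []                     # the empty knowledge base

    def load(self, rules):
        self.rules = rules                  # the knowledge engineer adds the knowledge

    def run(self, facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in self.rules:
                if consequent not in facts and all(c in facts for c in antecedent):
                    facts.add(consequent)
                    changed = True
        return facts

plumbing_rules = [(["leaky_faucet", "leak_in_handle"], "tighten_packing_nut")]
shell = Shell()                             # the "empty" system
shell.load(plumbing_rules)                  # now it is a plumbing adviser
print(shell.run({"leaky_faucet", "leak_in_handle"}))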

Mycin was one of the earliest expert systems, and its design has strongly

influenced the design of commercial expert systems and expert system shells. Its

job was to diagnose and recommend treatment for certain blood infections. To do

the diagnosis ``properly'' involves growing cultures of the infecting organism.

Unfortunately this takes around 48 hours, and if doctors waited until this was

complete their patient might be dead! So, doctors have to come up with quick

guesses about likely problems from the available data, and use these guesses to
provide a ``covering'' treatment where drugs are given which should deal with

any possible problem.

Mycin was developed partly in order to explore how human experts make

these rough (but important) guesses based on partial information. However, the

problem is also a potentially important one in practical terms - there are lots of

junior or non-specialized doctors who sometimes have to make such a rough

diagnosis, and if there is an expert tool available to help them then this might

allow more effective treatment to be given. In fact, Mycin was never actually used

in practice. This wasn't because of any weakness in its performance - in tests it

outperformed members of the Stanford medical school. It was as much because

of ethical and legal issues related to the use of computers in medicine - if it gives

the wrong diagnosis, who do you sue??

Anyway Mycin represented its knowledge as a set of IF-THEN rules with

certainty factors. The following is an English version of one of Mycin's rules:

IF the infection is primary-bacteremia

AND the site of the culture is one of the sterile sites

AND the suspected portal of entry is the gastrointestinal tract

THEN there is suggestive evidence (0.7) that the infection is bacteroid.


The 0.7 is roughly the certainty that the conclusion will be true given the

evidence. If the evidence is uncertain the certainties of the bits of evidence will be

combined with the certainty of the rule to give the certainty of the conclusion.
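As a rough sketch of how such a calculation might look, the certainty of a conjunction of premises is commonly taken as the minimum of their individual certainties, and that value is then scaled by the rule's own certainty factor. The premise certainties below are assumed values chosen only for illustration.

# Sketch of a MYCIN-style certainty calculation for one rule: the combined
# certainty of the (ANDed) evidence is the minimum of the individual
# certainties, scaled by the rule's own certainty factor (0.7 above).

def rule_certainty(evidence_cfs, rule_cf):
    combined_evidence = min(evidence_cfs)          # AND of the premises
    return rule_cf * max(0.0, combined_evidence)   # no support if evidence is not positive

evidence = [0.9, 1.0, 0.8]                         # assumed certainties of the three premises
print(round(rule_certainty(evidence, 0.7), 2))     # 0.56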

Mycin was written in Lisp, and its rules are formally represented as Lisp

expressions. The action part of the rule could just be a conclusion about the

problem being solved, or it could be an arbitrary lisp expression. This allowed

great flexibility, but removed some of the modularity and clarity of rule-based systems, so the facility had to be used with care.

Anyway, Mycin is a (primarily) goal-directed system, using the basic

backward chaining reasoning strategy that we described above. However, Mycin

used various heuristics to control the search for a solution (or proof of some

hypothesis). These were needed both to make the reasoning efficient and to

prevent the user being asked too many unnecessary questions.

One strategy is to first ask the user a number of more or less preset

questions that are always required and which allow the system to rule out totally

unlikely diagnoses. Once these questions have been asked the system can then

focus on particular, more specific possible blood disorders, and go into full
backward chaining mode to try and prove each one. This rules out a lot of unnecessary search, and also follows the pattern of human patient-doctor

interviews.

The other strategies relate to the way in which rules are invoked. The first

one is simple: given a possible rule to use, Mycin first checks all the premises of

the rule to see if any are known to be false. If so there's not much point using the

rule. The other strategies relate more to the certainty factors. Mycin will first

look at rules that have more certain conclusions, and will abandon a search once

the certainties involved get below 0.2.

A dialogue with Mycin is longer and somewhat more complex. There are

three main stages to the dialogue. In the first stage, initial data about the case is

gathered so the system can come up with a very broad diagnosis. In the second stage, more directed questions are asked to test specific hypotheses. At the end of this stage a diagnosis is proposed. In the third stage, questions are asked to

determine an appropriate treatment, given the diagnosis and facts about the

patient. This obviously concludes with a treatment recommendation. At any stage

the user can ask why a question was asked or how a conclusion was reached, and

when treatment is recommended the user can ask for alternative treatments if the

first is not viewed as satisfactory.


Mycin, though pioneering much expert system research, also had a

number of problems which were remedied in later, more sophisticated

architectures. One of these was that the rules often mixed domain knowledge,

problem solving knowledge and ``screening conditions'' (conditions to avoid

asking the user silly or awkward questions - e.g., checking that the patient is not a child

before asking about alcoholism). A later version called NEOMYCIN attempted to

deal with these by having an explicit disease taxonomy to represent facts about

different kinds of diseases. The basic problem solving strategy was to go down the

disease tree, from general classes of diseases to very specific ones, gathering

information to differentiate between two disease subclasses (i.e., if disease1 has

subtypes disease2 and disease3, and you know that the patient has the disease1,

and subtype disease2 has symptom1 but not disease3, then ask about symptom1.)
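That taxonomy-driven strategy can be sketched as a simple walk down a tree, asking at each level about the symptom that separates one subtype from its siblings. The tree, subtype names and symptoms below are the invented placeholders already used in the paragraph above, not NEOMYCIN's actual knowledge.

# Toy sketch of taxonomy-driven questioning: descend the disease tree, asking
# about the symptom that one subtype has and its sibling does not.

TREE = {"disease1": ["disease2", "disease3"]}      # subtypes of each general class
SYMPTOMS = {"disease2": "symptom1",                # the differentiating symptom
            "disease3": "symptom2"}

def diagnose(disease, ask):
    """Refine a confirmed general disease class down to a specific subtype."""
    while disease in TREE:                         # still a class with subtypes
        for subtype in TREE[disease]:
            if ask(f"Does the patient have {SYMPTOMS[subtype]}?"):
                disease = subtype                  # descend into this subtype
                break
        else:
            return disease                         # no subtype confirmed; stay general
    return disease

# Example run with canned answers: symptom1 absent, symptom2 present.
answers = {"Does the patient have symptom1?": False,
           "Does the patient have symptom2?": True}
print(diagnose("disease1", lambda q: answers[q]))  # disease3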

There were many other developments from the MYCIN project. For

example, EMYCIN was really the first expert shell developed from Mycin. A new

expert system called PUFF was developed using EMYCIN in the new domain of

lung disorders. And a system called NEOMYCIN was developed for training

doctors, which would take them through various example cases, checking their

conclusions and explaining where they went wrong.


7. CONCEPTS IN KNOWLEDGE ACQUISITION:

7.1 DEFINITION:

Knowledge acquisition is the process of adding new knowledge to a

Knowledge Base and refining or otherwise improving knowledge that was

previously acquired. Acquisition is usually associated with some purpose such as

expanding the capabilities of a system or improving its performance at some

specified task. Acquired knowledge may consist of facts, rules, concepts,

procedures, and heuristic information. Sources of this knowledge may include one or more of the following:

1. Experts in domain of interest

2. Text books

3. Technical papers

4. Data Bases

5. Reports

6. The environment
"Knowledge acquisition is a bottleneck in the construction of expert systems. The

knowledge engineer's job is to act as a go-between to help an expert build a

system. Since the knowledge engineer has far less knowledge of the domain than

the expert, however, communication problems impede the process of

transferring expertise into a program. The vocabulary initially used by the

expert to talk about the domain with a novice is often inadequate for problem-

solving; thus the knowledge engineer and expert must work together to extend

and refine it. One of the most difficult aspects of the knowledge engineer's task is

helping the expert to structure the domain knowledge, to identify and formalize

the domain concepts." (Hayes-Roth, Waterman & Lenat, 1983)

Thus, the basic model for knowledge engineering has been that the knowledge

engineer mediates between the expert and knowledge base, eliciting knowledge

from the expert, encoding it for the knowledge base, and refining it in

collaboration with the expert to achieve acceptable performance. Figure 1 shows

this basic model with manual acquisition of knowledge from an expert followed

by interactive application of the knowledge with multiple clients through an

expert system shell:

 The knowledge engineer interviews the expert to elicit his or her

knowledge;
 The knowledge engineer encodes the elicited knowledge for the knowledge

base;

 The shell uses the knowledge base to make inferences about particular

cases specified by clients;

 The clients use the shell's inferences to obtain advice about particular

cases.

The computer itself is an excellent tool for helping the expert to structure the

knowledge domain and, in recent years, research on knowledge acquisition has

focused on the development of computer-based acquisition tools. Many such

tools are designed to be used directly by the expert with the minimum of

intervention by the knowledge engineer, and emphasize facilities for visualizing

the domain concepts.

Figure 1 Basic knowledge engineering


Interactive knowledge acquisition and encoding tools can greatly reduce

the need for the knowledge engineer to act as an intermediary but, in most

applications, they leave a substantial role for the knowledge engineer. As shown

in Figure 2, knowledge engineers have responsibility for:

 Advising the experts on the process of interactive knowledge elicitation;

 Managing the interactive knowledge acquisition tools, setting them up

appropriately;

 Editing the unencoded knowledge base in collaboration with the experts;

 Managing the knowledge encoding tools, setting them up appropriately;

 Editing the encoded knowledge base in collaboration with the experts;

 Validating the application of the knowledge base in collaboration with the

experts;

 Setting up the user interface in collaboration with the experts and clients;

 Training the clients in the effective use of the knowledge base in

collaboration with the expert by developing operational and training

procedures.
Figure 2 Knowledge engineering with interactive tools

This use of interactive elicitation can be combined with manual elicitation

and with the use of the interactive tools by knowledge engineers rather than, or in

addition to, experts. Knowledge engineers can:

 Directly elicit knowledge from the expert;

 Use the interactive elicitation tools to enter knowledge into the knowledge

base.

Figure 2 specifies multiple knowledge engineers since the tasks above may

require the effort of more than one person, and some specialization may be

appropriate. Multiple experts are also specified since it is rare for one person to
have all the knowledge required, and, even if this were so, comparative elicitation

from multiple experts is itself a valuable knowledge elicitation technique.

Validation is shown in Figure 2 as a global test of the shell in operation with the

knowledge base, that is of overall inferential performance. However, validation

may also be seen as a local feature of each step of the knowledge engineers'

activities: the experts' proper use of the tools needs validation; the operation of

the tools themselves needs validation; the resultant knowledge base needs

validation; and so on. Attention to quality control through validating each step of

the knowledge acquisition process is key to effective system development.

7.2 TYPES OF LEARNING:

One can develop learning classifications based on the type of knowledge

representation used (predicate calculus, rules, frames), the type of knowledge

learned (concepts, game playing, problem solving), or by the area of application

(medical diagnosis, scheduling, prediction and so on). The classification used here is independent of the knowledge domain and the representation scheme used. It is based on the type of inference strategy employed or the methods used in the learning process. There are five different methods:

1. Memorization

2. Direct Instruction
3. Analogy

4. Induction

5. Deduction

The first, learning by memorization, is the simplest form of learning. It

requires the least amount of inference and is accomplished by simply copying

the knowledge in the same form that it will be used directly into the

Knowledge Base. We use this type of learning when we memorize

multiplication tables.

The second is Direct Instruction, which is a slightly more complex form of learning. This type of learning requires more inference than memorization learning, since the knowledge must be transformed into an operational form before being integrated into the Knowledge Base. We use this type of learning when a teacher presents a number of facts directly to us in a well-organized manner.

The third type listed, analogical learning, is the process of learning a

new concept through the use of similar known concepts. We use this type of

learning when solving problems on an exam where previously learned


examples serve as a guide or when we learn to drive a truck using our

knowledge of car driving. We make frequent use of analogical learning.

The fourth type, induction, is also one that is used frequently by humans. It is a powerful form of learning, but it also requires more inference than the first two methods. This form of learning requires the use of inductive inference, a form of invalid but useful inference: for example, we learn the concept of color or sweet taste after experiencing the sensation associated with several examples of colored objects or sweet foods.

The last type of acquisition is deductive learning, in which new facts or relationships are logically derived from known facts.


8. INTELLIGENT EDITORS:

Expert systems sometimes require hundreds or even thousands of rules to reach an acceptable level of performance.

Intelligent editors act as an interface between a domain expert and an Expert System. They permit a domain expert to interact directly with the system without the need for an intermediary to code the knowledge. The

Expert carries on a dialog with the editors in a restricted subset of English

which includes a domain-specific vocabulary. The editor has direct access to

the knowledge in the Expert System and knows the structure of that

knowledge. Through the editors, an expert can create, modify and delete rules

without knowledge of the internal structure of the rules.

The editor assists the expert in building and refining a Knowledge Base by recalling rules related to some specific topic, and reviewing and modifying the rules, if necessary, to better reflect the expert's meaning and intent. When faulty or deficient knowledge is found, the problem can then be corrected.


8.1 Choosing a Problem:

Writing an expert system generally involves a great deal of time and money.

To avoid costly and embarrassing failures, people have developed a set of

guidelines to determine whether a problem is suitable for an expert system

solution:

1. The need for a solution must justify the costs involved in development.

There must be a realistic assessment of the costs and benefits involved.

2. Human expertise is not available in all situations where it is needed. If

the ``expert'' knowledge is widely available it is unlikely that it will be

worth developing an expert system. However, in areas like oil exploration

and medicine there may be rare specialized knowledge which could be

cheaply provided by an expert system, as and when required, without

having to fly in your friendly (but very highly paid) expert.

3. The problem may be solved using symbolic reasoning techniques. It

shouldn't require manual dexterity or physical skill.

4. The problem is well structured and does not require (much) common

sense knowledge. Common sense knowledge is notoriously hard to capture

and represent. It turns out that highly technical fields are easier to deal
with, and tend to involve relatively small amounts of well formalized

knowledge.

5. The problem cannot be easily solved using more traditional computing

methods. If there's a good algorithmic solution to a problem, you don't

want to use an expert system.

6. Cooperative and articulate experts exist. For an expert system project to

be successful it is essential that the experts are willing to help, and don't

feel that their job is threatened! You also need any management and

potential users to be involved and have positive attitudes to the whole

thing.

7. The problem is of proper size and scope. Typically you need problems that

require highly specialized expertise, but would only take a human expert a

short time to solve.

It should be clear that only a small range of problems are appropriate for

expert system technology. However, given a suitable problem, expert

systems can bring enormous benefits. Systems have been developed, for

example, to help analyze samples collected in oil exploration, and to help

configure computer systems. Both these systems are (or were) in active

use, saving large amounts of money.


8.2 A Simple Example:

This is much better explained through a simple example. Suppose that we have

the following rules:

1. IF: engine_getting_petrol
   AND: engine_turns_over
   THEN: problem_with_spark_plugs

2. IF NOT: engine_turns_over
   AND NOT: lights_come_on
   THEN: problem_with_battery

3. IF NOT: engine_turns_over
   AND: lights_come_on
   THEN: problem_with_starter

4. IF: petrol_in_fuel_tank
   THEN: engine_getting_petrol

Our problem is to work out what's wrong with our car given some observable

symptoms. There are three possible problems with the car:

problem_with_spark_plugs, problem_with_battery, problem_with_starter.

We'll assume that we have been provided with no initial facts about the

observable symptoms.
In the simplest goal-directed system we would try to prove each

hypothesized problem (with the car) in turn. First the system would try to prove

``problem_with_spark_plugs''. Rule 1 is potentially useful, so the system would

set the new goals of proving ``engine_getting_petrol'' and

``engine_turns_over''. Trying to prove the first of these, rule 4 can be used, with

the new goal of proving ``petrol_in_fuel_tank''. There are no rules which conclude

this (and the system doesn't already know the answer), so the system will ask the

user:

Is it true that there's petrol in the fuel tank?

Let's say that the answer is yes. This answer would be recorded, so that the user

doesn't get asked the same question again. Anyway, the system now has proved

that the engine is getting petrol, so it now wants to find out if the engine turns over.

As the system doesn't yet know whether this is the case, and as there are no rules

which conclude this, the user will be asked:

Is it true that the engine turns over?

Let's say this time the answer is no. There are no other rules which can be used to

prove ``problem_with_spark_plugs'' so the system will conclude that this is not


the solution to the problem, and will consider the next hypothesis:

problem_with_battery. It is true that the engine does not turn over (the user has

just said that), so all it has to prove is that the lights don't come on. It will ask

the user

Is it true that the lights come on?

Suppose the answer is no. It has now proved that the problem is with the battery.

Some systems might stop there, but usually there might be more than one

solution (e.g., more than one fault with the car), or it will be uncertain which of

various solutions is the right one. So usually all hypotheses are considered. It will

try to prove ``problem_with_starter'', but given the existing data (the lights do not come on) the proof will fail, so the system will conclude that the problem is with the

battery. A complete interaction with our very simple system might be:

System: Is it true that there's petrol in the fuel tank?

User: Yes.

System: Is it true that the engine turns over?

User: No.

System: Is it true that the lights come on?

User: No.

System: I conclude that there is a problem with battery.
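The dialogue above can be reproduced with a very small backward-chaining interpreter. The Python sketch below is only an illustration of the strategy described in this section: to prove a goal it tries every rule that concludes it, asks the user only about facts that no rule concludes, and remembers answers so that no question is asked twice.

# A minimal backward-chaining sketch for the four car rules above. Goals of
# the form "not X" succeed when X cannot be proved (here: when the user says
# no), and user answers are cached so no question is repeated.

RULES = [
    (["engine_getting_petrol", "engine_turns_over"],  "problem_with_spark_plugs"),
    (["not engine_turns_over", "not lights_come_on"], "problem_with_battery"),
    (["not engine_turns_over", "lights_come_on"],     "problem_with_starter"),
    (["petrol_in_fuel_tank"],                         "engine_getting_petrol"),
]

answers = {}                                       # remembered user answers

def ask(fact):
    if fact not in answers:
        reply = input(f"Is it true that {fact.replace('_', ' ')}? (yes/no) ")
        answers[fact] = reply.strip().lower().startswith("y")
    return answers[fact]

def prove(goal):
    """True if the goal can be established from the rules or from the user."""
    positive = not goal.startswith("not ")
    fact = goal if positive else goal[4:]
    concluding = [r for r in RULES if r[1] == fact]
    if concluding:                                 # try every rule that concludes it
        holds = any(all(prove(c) for c in conditions)
                    for conditions, _ in concluding)
    else:
        holds = ask(fact)                          # an askable fact: question the user
    return holds if positive else not holds

for hypothesis in ("problem_with_spark_plugs",
                   "problem_with_battery",
                   "problem_with_starter"):
    if prove(hypothesis):
        print(f"I conclude that there is a {hypothesis.replace('_', ' ')}.")

Run interactively with the answers from the dialogue above (yes, no, no), this sketch asks the same three questions in the same order and prints only the battery conclusion.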


9. Advantages and Disadvantages:

9.1 Advantages of Expert Systems:

 Permanence - Expert systems do not forget, but human experts may

 Reproducibility - Many copies of an expert system can be made, but

training new human experts is time-consuming and expensive

 If there is a maze of rules (e.g. tax and auditing), then the expert system

can "unravel" the maze

 Efficiency - can increase throughput and decrease personnel costs

o Although expert systems are expensive to build and maintain, they

are inexpensive to operate

o Development and maintenance costs can be spread over many users

o The overall cost can be quite reasonable when compared to

expensive and scarce human experts

o Cost-savings:

Wages - (elimination of a room full of clerks)

 Consistency - With expert systems, similar transactions are handled in the

same way. The system will make comparable recommendations for like

situations.

Humans are influenced by

o recency effects (most recent information having a disproportionate

impact on judgment)
o Primacy effects (early information dominates the judgment).

 Documentation - An expert system can provide permanent

documentation of the decision process

 Completeness - An expert system can review all the transactions, whereas a human expert can only review a sample

 Timeliness - Fraud and/or errors can be prevented. Information is

available sooner for decision making

 Breadth - The knowledge of multiple human experts can be combined to

give a system more breadth than a single person is likely to achieve

 Reduce risk of doing business

o Consistency of decision making

o Documentation

o Achieve Expertise

 Entry barriers - Expert systems can help a firm create entry barriers for

potential competitors

 Differentiation - In some cases, an expert system can differentiate a

product or can be related to the focus of the firm (XCON)

 Computer programs are best in those situations where a structure already exists or can be elicited


9.2 Disadvantages of Rule-Based Expert Systems:

 Common sense - In addition to a great deal of technical

knowledge, human experts have common sense. It is not yet

known how to give expert systems common sense.

 Creativity - Human experts can respond creatively to unusual situations,

expert systems cannot.

 Learning - Human experts automatically adapt to changing

environments; expert systems must be explicitly updated. Case-based

reasoning and neural networks are methods that can incorporate learning.

 Sensory Experience - Human experts have available to them a wide

range of sensory experience; expert systems are currently dependent on

symbolic input.

 Degradation - Expert systems are not good at recognizing when no

answer exists or when the problem is outside their area of expertise.


10. EXPERT SYSTEMS AND ARTIFICIAL INTELLIGENCE:

Expert Systems are computer programs that are derived from a branch of

computer science research called Artificial Intelligence (AI). AI's scientific goal is

to understand intelligence by building computer programs that exhibit intelligent

behavior. It is concerned with the concepts and methods of symbolic inference, or

reasoning, by a computer, and how the knowledge used to make those inferences

will be represented inside the machine.

Of course, the term intelligence covers many cognitive skills, including the

ability to solve problems, learn, and understand language; AI addresses all of

those. But most progress to date in AI has been made in the area of problem

solving -- concepts and methods for building programs that reason about

problems rather than calculate a solution.

AI programs that achieve expert-level competence in solving problems in

task areas by bringing to bear a body of knowledge about specific tasks are called

knowledge-based or expert systems. Often, the term expert systems is reserved

for programs whose knowledge base contains the knowledge used by human

experts, in contrast to knowledge gathered from textbooks or non-experts. More

often than not, the two terms, expert systems (ES) and knowledge-based systems

(KBS) are used synonymously. Taken together, they represent the most
widespread type of AI application. The area of human intellectual endeavor to be

captured in an expert system is called the task domain. Task refers to some goal-

oriented, problem-solving activity. Domain refers to the area within which the

task is being performed. Typical tasks are diagnosis, planning, scheduling,

configuration and design.

Building an expert system is known as knowledge engineering and its

practitioners are called knowledge engineers. The knowledge engineer must

make sure that the computer has all the knowledge needed to solve a problem.

The knowledge engineer must choose one or more forms in which to represent

the required knowledge as symbol patterns in the memory of the computer -- that

is, he (or she) must choose a knowledge representation. He must also ensure that

the computer can use the knowledge efficiently by selecting from a handful of

reasoning methods. We first describe the components of expert systems.

10.1 The Building Blocks of Expert Systems:

Every expert system consists of two principal parts: the knowledge base; and the

reasoning, or inference, engine.


The knowledge base of expert systems contains both factual and heuristic

knowledge. Factual knowledge is that knowledge of the task domain that is

widely shared, typically found in textbooks or journals, and commonly agreed

upon by those knowledgeable in the particular field.

Heuristic knowledge is the less rigorous, more experiential, more judgmental

knowledge of performance. It is the knowledge of good practice, good judgment,

and plausible reasoning in the field. It is the knowledge that underlies the "art of

good guessing."
Knowledge representation formalizes and organizes the knowledge. One widely

used representation is the production rule, or simply rule. A rule consists of an IF

part and a THEN part (also called a condition and an action). The IF part lists a

set of conditions in some logical combination. The piece of knowledge

represented by the production rule is relevant to the line of reasoning being

developed if the IF part of the rule is satisfied; consequently, the THEN part can

be concluded, or its problem-solving action taken. Expert systems whose

knowledge is represented in rule form are called rule-based systems.

Another widely used representation, called the unit (also known as frame,

schema, or list structure) is based upon a more passive view of knowledge. The

unit is an assemblage of associated symbolic knowledge about an entity to be

represented. Typically, a unit consists of a list of properties of the entity and

associated values for those properties.

Since every task domain consists of many entities that stand in various

relations, the properties can also be used to specify relations, and the values of

these properties are the names of other units that are linked according to the

relations. One unit can also represent knowledge that is a "special case" of

another unit, or some units can be "parts of" another unit.
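A unit can be pictured as a small table of property/value pairs whose values may simply be the names of other units. The Python sketch below uses invented entities; the "is-a" slot stands for the "special case" link and the "parts" slot for the "part of" link mentioned above, and a property is looked up by following the "is-a" link when a unit does not carry it itself.

# Sketch of unit (frame) representation: each unit is a set of property/value
# pairs, and some values name other units. Entities are invented for
# illustration only.

units = {
    "faucet":             {"is-a": "plumbing_fixture",
                           "controls": "water_flow"},
    "compression_faucet": {"is-a": "faucet",                  # a "special case" of faucet
                           "parts": ["packing", "packing_nut"],
                           "handles": 2},
}

def get(unit, prop):
    """Look a property up, inheriting along "is-a" links when it is missing."""
    while unit in units:
        if prop in units[unit]:
            return units[unit][prop]
        unit = units[unit].get("is-a")             # climb to the more general unit
    return None

print(get("compression_faucet", "controls"))       # water_flow, inherited from "faucet"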


The problem-solving model, or paradigm, organizes and controls the

steps taken to solve the problem. One common but powerful paradigm involves

chaining of IF-THEN rules to form a line of reasoning. If the chaining starts from

a set of conditions and moves toward some conclusion, the method is called

forward chaining. If the conclusion is known (for example, a goal to be achieved)

but the path to that conclusion is not known, then reasoning backwards is called

for, backward chaining. These problem-solving methods are built into program

modules called inference engines or inference procedures that manipulate and

use knowledge in the knowledge base to form a line of reasoning.

The knowledge base an expert uses is what he learned at school, from

colleagues, and from years of experience. Presumably the more experience he

has, the larger his store of knowledge. Knowledge allows him to interpret the

information in his databases to advantage in diagnosis, design, and analysis.

Though an expert system consists primarily of a knowledge base and an

inference engine, a couple of other features are worth mentioning: reasoning with

uncertainty, and explanation of the line of reasoning.


Knowledge is almost always incomplete and uncertain. To deal with

uncertain knowledge, a rule may have associated with it a confidence factor or a

weight. The set of methods for using uncertain knowledge in combination with

uncertain data in the reasoning process is called reasoning with uncertainty. An

important subclass of methods for reasoning with uncertainty is called "fuzzy

logic," and the systems that use them are known as "fuzzy systems."
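One well-known concrete scheme is the certainty-factor calculus used in the MYCIN family of systems; the short sketch below shows only its simplest case, combining two independent pieces of positive evidence for the same conclusion:

    # MYCIN-style certainty factors (simplest case): each supporting rule carries
    # a confidence in (0, 1], and two positive factors combine as
    # cf1 + cf2 * (1 - cf1), so confidence grows but never exceeds 1.

    def combine_positive(cf1, cf2):
        return cf1 + cf2 * (1 - cf1)

    print(combine_positive(0.6, 0.5))   # 0.8 -- stronger than either rule alone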

Because an expert system uses uncertain or heuristic knowledge, its conclusions may themselves be open to question. When an

answer to a problem is questionable, we tend to want to know the rationale. If the

rationale seems plausible, we tend to believe the answer. So it is with expert

systems. Most expert systems have the ability to answer questions of the form:

"Why is the answer X?" Explanations can be generated by tracing the line of

reasoning used by the inference engine. The most important ingredient in any

expert system is knowledge. The power of expert systems resides in the specific,

high-quality knowledge they contain about task domains. AI researchers will

continue to explore and add to the current repertoire of knowledge

representation and reasoning methods. But in knowledge resides the power.

Because of the importance of knowledge in expert systems and because the

current knowledge acquisition method is slow and tedious, much of the future of

expert systems depends on breaking the knowledge acquisition bottleneck and on codifying and representing a large knowledge infrastructure.


10.2 Knowledge engineering:

It is the art of designing and building expert systems, and knowledge

engineers are its practitioners. We stated earlier that knowledge engineering is an

applied part of the science of artificial intelligence which, in turn, is a part of

computer science. Theoretically, then, a knowledge engineer is a computer

scientist who knows how to design and implement programs that incorporate

artificial intelligence techniques. The nature of knowledge engineering is

changing, however, and a new breed of knowledge engineers is emerging. We'll

discuss the evolving nature of knowledge engineering later.


Today there are two ways to build an expert system. They can be built from

scratch, or built using a piece of development software known as a "tool" or a

"shell." Before we discuss these tools, let's briefly discuss what knowledge

engineers do. Though different styles and methods of knowledge engineering

exist, the basic approach is the same: a knowledge engineer interviews and

observes a human expert or a group of experts and learns what the experts know,

and how they reason with their knowledge. The engineer then translates the

knowledge into a computer-usable language, and designs an inference engine, a

reasoning structure, that uses the knowledge appropriately. He also determines

how to integrate the use of uncertain knowledge in the reasoning process, and

what kinds of explanation would be useful to the end user.

Next, the inference engine and facilities for representing knowledge and for

explaining are programmed, and the domain knowledge is entered into the

program piece by piece. The discovery and accumulation of techniques of machine reasoning and knowledge representation is generally the work of artificial intelligence research. The discovery and accumulation of knowledge of a task

domain is the province of domain experts. Domain knowledge consists of both

formal, textbook knowledge, and experiential knowledge -- the expertise of the

experts.
10.3 Tools, Shells, and Skeletons:

Compared to the wide variation in domain knowledge, only a small number

of AI methods are known that are useful in expert systems. That is, currently

there are only a handful of ways in which to represent knowledge, or to make

inferences, or to generate explanations. Thus, systems can be built that contain

these useful methods without any domain-specific knowledge. Such systems are

known as skeletal systems, shells, or simply AI tools.

Building expert systems by using shells offers significant advantages. A system

can be built to perform a unique task by entering into a shell all the necessary

knowledge about a task domain. The inference engine that applies the knowledge

to the task at hand is built into the shell. If the program is not very complicated

and if an expert has had some training in the use of a shell, the expert can enter

the knowledge himself.
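To illustrate this division of labour, here is a purely hypothetical sketch: the shell supplies a generic rule loader and inference engine that never change, and the expert supplies nothing but a plain-text knowledge file in an invented "a & b -> c" notation:

    # Hypothetical shell front end: domain knowledge lives in a text file of
    # IF/THEN rules; the loader and the generic engine stay the same across tasks.

    def load_rules(path):
        rules = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                      # skip blanks and comments
                conditions, conclusion = line.split("->")
                rules.append({"if": {c.strip() for c in conditions.split("&")},
                              "then": conclusion.strip()})
        return rules

    # A file such as cooker_rules.txt (name invented) might contain:
    #   temperature_low & steam_valve_closed -> open_steam_valve
    # The generic forward-chaining engine sketched earlier is then run over
    # load_rules("cooker_rules.txt") together with the observed symptoms.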

Many commercial shells are available today, ranging in size from shells on

PCs, to shells on workstations, to shells on large mainframe computers. They

range in price from hundreds to tens of thousands of dollars, and range in

complexity from simple, forward-chained, rule-based systems requiring two days

of training to those so complex that only highly trained knowledge engineers can
use them to advantage. They range from general-purpose shells to shells custom-

tailored to a class of tasks, such as financial planning or real-time process control.

Although shells simplify programming, in general they don't help with

knowledge acquisition. Knowledge acquisition refers to the task of endowing

expert systems with knowledge, a task currently performed by knowledge

engineers. The choice of reasoning method, or a shell, is important, but it isn't as

important as the accumulation of high-quality knowledge. The power of an expert

system lies in its store of knowledge about the task domain -- the more

knowledge a system is given, the more competent it becomes.

11. APPLICATION OF EXPERT SYSTEM:

1. Different types of medical diagnosis.

2. Identification of chemical component structure.

3. Diagnosis of complex electronic and electro-mechanical devices.

4. Diagnosis of software development projects.

5. Forecasting crop damage.

6. Numerous applications related to space planning and exploration.

7. The design of very large scale integration (VLSI) systems.


8. Numerous military applications, from battlefield assessment to ocean

surveillance.

9. Teaching students specialized tasks.

10. Location of faults in computers and communication systems.

12. IMPORTANCE OF EXPERT SYSTEM:

A good example that illustrates the importance of expert systems is the diagnostic system developed by the Campbell Soup Company. Campbell Soup uses large sterilizers, or cookers, to cook soups and other canned products at its eight plants located throughout the country. Some of the larger cookers hold up to 68,000 cans of food for short cooking periods. When

difficult maintenance problems occur with the cookers, the fault must be

found and corrected quickly or the batch of the foods being prepared will

spoil. The company was dependent on a single expert to diagnose and cure the more difficult problems that occurred.


Since this expert was due to retire in a few years, taking his expertise with him, the company decided to develop an expert system to diagnose these difficult problems. After some months of development, with assistance from Texas Instruments, the company produced an expert system that ran on a PC.

The system has about 150 rules in its knowledge base with which to diagnose

more complex cooker problems. The system has also been used to provide

training to new maintenance personnel. Cloning multiple copies for each of the eight locations cost the company only a few pennies per copy. Furthermore, the system cannot retire, and its performance remains consistent. It has proven to be

a real asset to the company.


13. LIMITATIONS:

1. Not widely used or tested

2. Limited to relatively narrow problems

3. Cannot readily deal with “mixed” knowledge

4. Like all software, may contain errors

5. Cannot refine own knowledge base

6. Difficult and expensive to develop and maintain

7. Raise legal and ethical concerns


14. CONCLUSION:

One promising direction is to use expert systems as the foundation of training devices that act

like human teachers, instead of like the sophisticated page-turners that

characterize traditional computer-aided instruction. The idea is to combine one

expert system that provides domain knowledge with another expert system that

has the know-how to present the domain knowledge in a learnable way. The

system could then vary its presentation style to fit the needs of the individual

learner. While this concept is not new, today's powerful PCs are starting to put

such trainers, called ICAI (Intelligent Computer Assisted Instruction) systems,

within everybody's reach. Tomorrow's shells will routinely allow developers to

embed inference engines into other kinds of programs.

Embeddable inference engines set up a number of fascinating potential

breakthroughs. Will word processors become so intelligent that they grasp the

gist of what we want to write and then write it better than we can? Will databases

comprehend the information we need, help us find it, and then suggest a search

for supplementary information? Will smart spreadsheets tell us the kinds of

models we should use in our projections and then build them for us? The

possibilities are limited only by the knowledge bases and inference engines that

reside in our heads.
