
INDEX

SYLLABUS

Unit IV
1. Game playing techniques
1.1. Minimax procedure
2. Alpha-beta cut-offs
3. Planning
4. Study of the block world problem in robotics
5. Introduction to understanding and natural language processing

Unit V
6. Introduction to learning
7. Various techniques used in learning
8. Introduction to neural networks
9. Applications of neural networks
9.1. Real-life applications
9.2. Neural networks and neuroscience
10. Common sense
11. Reasoning
12. Some examples of expert systems
12.1. Advantages
12.2. Disadvantages

REFERENCES


IT 833 Artificial Intelligence
Branch: Information Technology, VIII Semester
Course: Artificial Intelligence

Unit I: Meaning and definition of artificial intelligence, various types of production systems, characteristics of production systems, study and comparison of breadth first search and depth first search techniques, other search techniques like hill climbing, best first search, A* algorithm, AO* algorithms etc., and various types of control strategies.

Unit II: Knowledge representation, problems in representing knowledge, knowledge representation using propositional and predicate logic, comparison of propositional and predicate logic, resolution, refutation, deduction, theorem proving, inferencing, monotonic and non-monotonic reasoning.

Unit III: Probabilistic reasoning, Bayes' theorem, semantic networks, scripts, schemas, frames, conceptual dependency, fuzzy logic, forward and backward reasoning.

Unit IV: Game playing techniques like minimax procedure, alpha-beta cut-offs etc., planning, study of the block world problem in robotics, introduction to understanding and natural language processing.

Unit V: Introduction to learning, various techniques used in learning, introduction to neural networks, applications of neural networks, common sense, reasoning, some examples of expert systems.


Unit IV

1. Game playing techniques
Game playing has been a major topic of AI since the very beginning. Beside the attraction of the topic to people, it is also because of its close relation to "intelligence" and its well-defined states and rules. The most commonly used AI technique in games is search. In some other problem-solving activities, state change is solely caused by the action of the system itself. However, in multi-player games, states also depend on the actions of other players (systems), who usually have different goals.

A special situation that has been studied most is the two-person zero-sum game, where the two players have exactly opposite goals; that is, each state can be evaluated by a score from one player's viewpoint, and the other's viewpoint is exactly the opposite. This type of game is common and easy to analyze, though not all competitions are zero-sum! There are perfect information games (such as Chess and Go) and imperfect information games (such as Bridge and games where dice are used). Given sufficient time and space, an optimum solution can usually be obtained for the former by exhaustive search, though not for the latter. However, for most interesting games, such a solution is usually too inefficient to be practically used.

2. Minimax procedure
Minimax is a decision rule used in decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst-case (maximum loss) scenario. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision making in the presence of uncertainty.

For a two-person zero-sum perfect-information game, if the two players take turns to move, the minimax procedure can solve the problem given sufficient computational resources. This algorithm assumes each player takes the best move in each step.

First, we distinguish two types of nodes in the state graph, MAX and MIN. Minimax procedure: starting from the leaves of the tree (with final scores with respect to one player, MAX), go backwards towards the root (the starting state). At each step, one player (MAX) takes the action that leads to the highest score, while the other player (MIN) takes the action that leads to the lowest score. All nodes in the tree will be scored, and the path from root to the actual result is the one on which all nodes have the same score.

Because of computational resource limitations, the search depth is usually restricted (determined by the depth of the search tree), and estimated scores generated by a heuristic function are used in place of the actual score in the above procedure. Example: Tic-tac-toe, with the difference of possible win paths as the heuristic function.
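The procedure can be written as a short recursive function. The sketch below is only an illustration (it is not from the original notes): it assumes a hypothetical game interface with helper functions successors(state), is_terminal(state) and heuristic(state), the last giving a score from MAX's viewpoint, and it uses the depth limit described above.

def minimax(state, depth, maximizing):
    # Return the minimax value of `state` from MAX's viewpoint.
    # `successors`, `is_terminal` and `heuristic` are assumed to be supplied
    # by the particular game (e.g. Tic-tac-toe).
    if depth == 0 or is_terminal(state):
        return heuristic(state)          # estimated or final score for MAX
    if maximizing:                       # MAX picks the highest-scoring child
        return max(minimax(s, depth - 1, False) for s in successors(state))
    else:                                # MIN picks the lowest-scoring child
        return min(minimax(s, depth - 1, True) for s in successors(state))

def best_move(state, depth):
    # MAX's move: the successor with the highest minimax value.
    return max(successors(state), key=lambda s: minimax(s, depth - 1, False))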

3. Alpha-beta cut-offs
Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.

In alpha-beta pruning, each MAX node has an alpha value, which never decreases, and each MIN node has a beta value, which never increases. These values are set and updated when the value of a child is obtained. Search is depth-first, and stops at any MIN node whose beta value is smaller than or equal to the alpha value of its parent, as well as at any MAX node whose alpha value is greater than or equal to the beta value of its parent.
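The following sketch adds the alpha and beta bounds to the minimax function above; again it is only an illustration, with the same assumed game helpers (successors, is_terminal, heuristic).

import math

def alphabeta(state, depth, alpha, beta, maximizing):
    # Minimax value of `state` with alpha-beta pruning.
    # `alpha` is the best value MAX is guaranteed so far on the current path,
    # `beta` the best value MIN is guaranteed so far.
    if depth == 0 or is_terminal(state):
        return heuristic(state)
    if maximizing:
        value = -math.inf
        for s in successors(state):
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # the MIN parent would never allow this branch
                break                # cut-off: remaining children are not generated
        return value
    else:
        value = math.inf
        for s in successors(state):
            value = min(value, alphabeta(s, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:        # the MAX parent would never allow this branch
                break                # cut-off
        return value

# Initial call from the root (a MAX node):
# alphabeta(root_state, depth, -math.inf, math.inf, True)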

Examples: in the following partial trees, the other children of node (5) do not need to be generated. In the first tree, the MIN node (5) already has a value no greater than 0, which is below the alpha value (3) of its MAX parent; in the second tree, the MAX node (5) already has a value no smaller than 8, which is above the beta value (5) of its MIN parent.

(1)MAX[>=3] ----(2)MIN[==3] ----(3)MAX[==5]
            |               |---(4)MAX[==3]
            |
            |---(5)MIN[<=0] ----(6)MAX[==0]
                            |---X
                            |---X

(1)MIN[<=5] ----(2)MAX[==5] ----(3)MIN[==5]
            |               |---(4)MIN[==3]
            |
            |---(5)MAX[>=8] ----(6)MIN[==8]
                            |---X
                            |---X

This method is used in a Prolog program that plays Tic-tac-toe.

4. Planning
The planning problem in Artificial Intelligence is about the decision making performed by intelligent creatures like robots, humans, or computer programs when trying to achieve some goal. It involves choosing a sequence of actions that will (with a high likelihood) transform the state of the world, step by step, so that it will satisfy the goal. The world is typically viewed to consist of atomic facts (state variables), and actions make some facts true and some facts false.

In the following we discuss a number of ways of formalizing planning, and show how the planning problem can be solved automatically. We will only focus on the simplest AI planning problem, characterized by the restriction to one agent in a deterministic environment that can be fully observed. More complex forms of planning can be formalized, e.g. in the framework of Markov decision processes, with uncertainty about the effects of actions and therefore without the possibility to predict the results of a plan with certainty.
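As a concrete illustration of this simplest setting (one agent, deterministic, fully observable), the sketch below searches forward from the initial state over STRIPS-style actions with preconditions, add lists and delete lists. The class and the example action are made up for illustration; they are not from the original notes.

from collections import deque

class Action:
    # A STRIPS-style action: applicable when its preconditions hold; applying
    # it makes the facts in `adds` true and the facts in `deletes` false.
    def __init__(self, name, preconds, adds, deletes):
        self.name, self.preconds = name, frozenset(preconds)
        self.adds, self.deletes = frozenset(adds), frozenset(deletes)

    def applicable(self, state):
        return self.preconds <= state

    def apply(self, state):
        return (state - self.deletes) | self.adds

def plan(initial, goal, actions):
    # Breadth-first forward search; returns a list of action names or None.
    initial, goal = frozenset(initial), frozenset(goal)
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                      # all goal facts are true
            return path
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None

# Example: make on(A,B) true by stacking block A onto block B.
acts = [Action("stack(A,B)",
               {"clear(A)", "clear(B)", "ontable(A)"},
               {"on(A,B)"},
               {"clear(B)", "ontable(A)"})]
print(plan({"clear(A)", "clear(B)", "ontable(A)", "ontable(B)"},
           {"on(A,B)"}, acts))                 # -> ['stack(A,B)']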

The most basic planning problem is one instance of the general s-t reachability problem for succinctly represented transition graphs, which has other important applications in Computer Aided Verification (reachability analysis, model-checking), discrete event-systems diagnosis, Intelligent Control, and so on. All of the methods described below are equally applicable to all of these other problems as well, and many of these methods were initially developed and applied in the context of these other problems.

5. Study of the block world problem in robotics
Many areas of Computer Science use simple, abstract domains for both analytical and empirical studies. For example, an early AI study of planning and robotics (STRIPS) used a block world in which a robot arm performed tasks involving the manipulation of blocks. In this problem you will model a simple block world under certain rules and constraints. Rather than determine how to achieve a specified state, you will "program" a robotic arm to respond to a limited set of commands.

The problem is to parse a series of commands that instruct a robot arm in how to manipulate blocks that lie on a flat table. Initially there are n blocks on the table (numbered from 0 to n-1) with block bi adjacent to block bi+1 for all 0 <= i < n-1, as shown in the diagram below:

The valid commands for the robot arm that manipulates blocks are:

move a onto b
where a and b are block numbers, puts block a onto block b after returning any blocks that are stacked on top of blocks a and b to their initial positions.

move a over b
where a and b are block numbers, puts block a onto the top of the stack containing block b, after returning any blocks that are stacked on top of block a to their initial positions.

pile a onto b
where a and b are block numbers, moves the pile of blocks consisting of block a, and any blocks that are stacked above block a, onto block b. All blocks on top of block b are moved to their initial positions prior to the pile taking place. The blocks stacked above block a retain their original order when moved.

pile a over b
where a and b are block numbers, puts the pile of blocks consisting of block a, and any blocks that are stacked above block a, onto the top of the stack containing block b. The blocks stacked above block a retain their order when moved.

quit
terminates manipulations in the block world.

Any command in which a = b or in which a and b are in the same stack of blocks is an illegal command. All illegal commands should be ignored and should have no effect on the configuration of blocks.

Input
The input begins with an integer n on a line by itself representing the number of blocks in the block world. You may assume that 0 < n < 25. The number of blocks is followed by a sequence of block commands, one command per line. Your program should process all commands until the quit command is encountered. You may assume that all commands will be of the form specified above. There will be no syntactically incorrect commands.

Output
The output should consist of the final state of the blocks world. Each original block position numbered i (0 <= i < n, where n is the number of blocks) should appear followed immediately by a colon. If there is at least a block on it, the colon must be followed by one space, followed by a list of blocks that appear stacked in that position with each block number separated from other block numbers by a space. Don't put any trailing spaces on a line.

There should be one line of output for each block position (i.e., n lines of output where n is the integer on the first line of input).

Sample Input
10
move 9 onto 1
move 8 over 1
move 7 over 1
move 6 over 1
pile 8 over 6
pile 8 over 5
move 2 over 1
move 4 over 9
quit

Sample Output
0: 0
1: 1 9 2 4
2:
3: 3
4:
5: 5 8 7 6
6:
7:
8:
9:
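One possible way to simulate the command set (a rough sketch, not a verified solution to the exercise) is to keep each position as a list and return blocks to their home positions before a "move" or before an "onto" target:

def block_world(n, commands):
    # Simulate the block-world commands and return the final stacks.
    stacks = [[i] for i in range(n)]             # position i initially holds block i

    def find(b):                                 # (stack index, height) of block b
        for i, s in enumerate(stacks):
            if b in s:
                return i, s.index(b)

    def clear_above(b):                          # return blocks above b to their homes
        i, h = find(b)
        for x in stacks[i][h + 1:]:
            stacks[x].append(x)
        del stacks[i][h + 1:]

    for line in commands:
        parts = line.split()
        if parts[0] == "quit":
            break
        a, b = int(parts[1]), int(parts[3])
        ia, _ = find(a)
        ib, _ = find(b)
        if a == b or ia == ib:                   # illegal command: ignore it
            continue
        if parts[0] == "move":                   # "move" clears everything above a
            clear_above(a)
        if parts[2] == "onto":                   # "onto" clears everything above b
            clear_above(b)
        ia, ha = find(a)                         # positions may have changed
        ib, _ = find(b)
        moved = stacks[ia][ha:]                  # block a plus anything still above it
        del stacks[ia][ha:]
        stacks[ib].extend(moved)
    return stacks

stacks = block_world(10, ["move 9 onto 1", "move 8 over 1", "move 7 over 1",
                          "move 6 over 1", "pile 8 over 6", "pile 8 over 5",
                          "move 2 over 1", "move 4 over 9", "quit"])
for i, s in enumerate(stacks):
    print(str(i) + ":" + "".join(" " + str(b) for b in s))

Run on the sample input above, this sketch reproduces the sample output.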

6. Introduction to understanding and natural language processing
Natural language understanding is a subtopic of natural language processing in artificial intelligence that deals with machine reading comprehension. The process of disassembling and parsing input is more complex than the reverse process of assembling output in natural language generation, because of the occurrence of unknown and unexpected features in the input and the need to determine the appropriate syntactic and semantic schemes to apply to it, factors which are predetermined when outputting language.

The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes; for instance, text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require in-depth understanding of the text, but is far more complex than the management of simple queries to database tables with fixed schemata. There is considerable commercial interest in the field because of its application to news-gathering, text categorization, voice-activation, archiving and large-scale content analysis.

Unit V

1. Introduction to learning
Learning can be described as a relatively permanent change that occurs in behavior as a result of experience. Learning occurs in various regimes. For example, it is possible to learn how to use a word processor as a result of following particular instructions, or to learn to open a lock as a result of trial and error. It is also possible to learn a skill by practicing it until the performance converges on the desired model. One begins by paying attention to what needs to be done, but with more practice, and once the internal model of what ought to happen is set, one will need to monitor only the trickier parts of the performance. Automatic performance of some skills by the brain points out that the brain is capable of doing things in parallel, i.e. one part is devoted to the skill whilst another part mediates conscious experience.

There's no decisive definition of learning, but here are some that do justice:

· "Learning denotes changes in a system that ... enables a system to do the same task more efficiently the next time." --Herbert Simon
· "Learning is constructing or modifying representations of what is being experienced." --Ryszard Michalski
· "Learning is making useful changes in our minds." --Marvin Minsky

2. Various techniques used in learning
The following components are part of any learning problem:

Task: The behavior or task that is being improved.
Data: The experiences that are used to improve performance in the task.
Measure of improvement: How the improvement is measured - for example, new skills that were not present initially, increasing accuracy in prediction, or improved speed.

Figure 1: Offline and online decomposition of an agent

Consider the agent internals of Figure 1. The problem of learning is to take in prior knowledge and data (e.g., about the experiences of the agent) and to create an internal representation (the knowledge base) that is used by the agent as it acts. This internal representation could be the raw experiences themselves, but it is typically a compact representation that summarizes the data. The problem of inferring an internal representation based on examples is often called induction, and can be contrasted with deduction, which is deriving consequences of a knowledge base, and abduction, which is hypothesizing what may be true about a particular case. (Abduction is a form of reasoning where assumptions are made to explain observations; for example, if an agent were to observe that some light was not working, it can hypothesize what is happening in the world to explain why the light was not working. An intelligent tutoring system could try to explain why a student gives some answer in terms of what the student understands and does not understand.)

There are two principles that are at odds in choosing a representation scheme:

 The richer the representation scheme, the more useful it is for subsequent problem solving. For an agent to learn a way to solve a problem, the representation must be rich enough to express a way to solve the problem.
 The richer the representation, the more difficult it is to learn. A very rich representation is difficult to learn because it requires a great deal of data, and often many different hypotheses are consistent with the data.

Figure 2: The role of representations in solving problems

The representations required for intelligence are a compromise between many desiderata (see Section 2). The ability to learn the representation is one of them, but it is not the only one.

Learning techniques face the following issues:

Task
Virtually any task for which an agent can get data or experiences can be learned. The most commonly studied learning task is supervised learning: given some input features, some target features, and a set of training examples where the input features and the target features are specified, predict the target features of a new example for which the input features are given. This is called classification when the target features are discrete and regression when the target features are continuous.
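A minimal illustration of supervised classification in this sense (the data and the learner are made up for illustration) is a nearest-neighbour learner: the training examples pair input features with a discrete target, and the target of a new example is predicted by copying the target of the closest training input.

# Hypothetical training examples: ((weight in g, colour score), fruit type).
train = [((120, 0.9), "orange"), ((130, 0.8), "orange"),
         ((150, 0.2), "apple"),  ((170, 0.3), "apple")]

def predict(x, examples):
    # 1-nearest-neighbour: return the target of the closest training input.
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return min(examples, key=lambda ex: dist(ex[0], x))[1]

print(predict((160, 0.25), train))   # -> 'apple'

If the target were a number (for example, a price) rather than a label, the same setup would be a regression problem.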

Other learning tasks include learning classifications when the examples are not already classified (unsupervised learning), learning what to do based on rewards and punishments (reinforcement learning), learning to reason faster (analytic learning), and learning richer representations such as logic programs (inductive logic programming) or Bayesian networks.

Feedback
Learning tasks can be characterized by the feedback given to the learner. In supervised learning, what has to be learned is specified for each example. Supervised classification occurs when a trainer provides the classification for each example. Supervised learning of actions occurs when the agent is given immediate feedback about the value of each action. Unsupervised learning occurs when no classifications are given and the learner must discover categories and regularities in the data. Feedback often falls between these extremes, such as in reinforcement learning, where the feedback in terms of rewards and punishments occurs after a sequence of actions. This leads to the credit-assignment problem of determining which actions were responsible for the rewards or punishments. For example, a user could give rewards to the delivery robot without telling it exactly what it is being rewarded for. The robot then must either learn what it is being rewarded for or learn which actions are preferred in which situations. It is possible that it can learn what actions to perform without actually determining which consequences of the actions are responsible for rewards.

Representation
For an agent to use its experiences, the experiences must affect the agent's internal representation. Much of machine learning is studied in the context of particular representations (e.g., decision trees, neural networks, or case bases). This chapter presents some standard representations to show the common features behind learning.

Online and offline
In offline learning, all of the training examples are available to an agent before it needs to act. In online learning, training examples arrive as the agent is acting.

An agent that learns online requires some representation of its previously seen examples before it has seen all of its examples. As new examples are observed, the agent must update its representation. Typically, an agent never sees all of its examples. Active learning is a form of online learning in which the agent acts to acquire useful examples from which to learn. In active learning, the agent reasons about which examples would be useful to learn from and acts to collect these examples.

Measuring success
Learning is defined in terms of improving performance based on some measure. To know whether an agent has learned, we must define a measure of success. The measure is usually not how well the agent performs on the training experiences, but how well the agent performs for new experiences. In classification, being able to correctly classify all training examples is not the problem. For example, consider the problem of predicting a Boolean feature based on a set of examples. Suppose that there were two agents P and N. Agent P claims that all of the negative examples seen were the only negative examples and that every other instance is positive. Agent N claims that the positive examples in the training set were the only positive examples and that every other instance is negative. Both of these agents correctly classify every example in the training set but disagree on every other example. Success in learning should not be judged on correctly classifying the training set but on being able to correctly classify unseen examples. Thus, the learner must generalize: go beyond the specific given examples to classify unseen examples. A standard way to measure success is to divide the examples into a training set and a test set. A representation is built using the training set, and then the predictive accuracy is measured on the test set. Of course, this is only an approximation of what is wanted; the real measure is its performance on some future task.
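The train/test split can be illustrated with a small self-contained sketch (the data set and the nearest-neighbour learner below are made up for illustration):

import random

random.seed(1)
# Made-up labelled examples: the target says whether the two features sum to more than 1.
points = [(random.random(), random.random()) for _ in range(60)]
examples = [((a, b), "high" if a + b > 1 else "low") for a, b in points]

def predict(x, train):
    # 1-nearest-neighbour prediction, using the training set only.
    dist = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return min(train, key=lambda ex: dist(ex[0], x))[1]

# Divide the examples into a training set and a test set, build the
# representation from the training set, and measure accuracy on the test set.
random.shuffle(examples)
train_set, test_set = examples[:40], examples[40:]
correct = sum(predict(x, train_set) == target for x, target in test_set)
print("test accuracy:", correct / len(test_set))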

Bias
The tendency to prefer one hypothesis over another is called a bias. Consider the agents N and P defined earlier. Saying that a hypothesis is better than N's or P's hypothesis is not something that is obtained from the data - both N and P accurately predict all of the data given - but is something external to the data. The hypotheses adopted by P and N disagree on all further examples, and, if a learning agent cannot choose some hypotheses as better, the agent will not be able to resolve this disagreement. Without a bias, an agent will not be able to make any predictions on unseen examples. To have any inductive process make predictions on unseen data, an agent requires a bias. What constitutes a good bias is an empirical question about which biases work best in practice; we do not imagine that either P's or N's biases work well in practice.

Learning as search
Given a representation and a bias, the problem of learning can be reduced to one of search. Learning is a search through the space of possible representations, trying to find the representation or representations that best fit the data given the bias. Unfortunately, the search spaces are typically prohibitively large for systematic search, except for the simplest of examples. Nearly all of the search techniques used in machine learning can be seen as forms of local search through a space of representations. The definition of the learning algorithm then becomes one of defining the search space, the evaluation function, and the search method.

Noise
In most real-world situations, the data are not perfect. Noise exists in the data (some of the features have been assigned the wrong value), there are inadequate features (the features given do not predict the classification), and often there are examples with missing features. One of the important properties of a learning algorithm is its ability to handle noisy data in all of its forms.

Interpolation and extrapolation
For cases in which there is a natural interpretation of "between," such as where the prediction is about time or space, interpolation involves making a prediction between cases for which there are data. Extrapolation involves making a prediction that goes beyond the seen examples. Extrapolation is usually much more inaccurate than interpolation. For example, in ancient astronomy, the Ptolemaic system and the heliocentric system of Copernicus made detailed models of the movement of the solar system in terms of epicycles (cycles within cycles).

The parameters for the models could be made to fit the data very well, and they were very good at interpolation; however, the models were very poor at extrapolation. An agent must be careful if its test cases mostly involve interpolating between data points, but the learned model is used for extrapolation. As another example, it is often easy to predict a stock price on a certain day given data about the prices on the days before and the days after that day. It is very difficult to predict the price that a stock will be tomorrow, and it would be very profitable to be able to do so.

3. Introduction to neural networks
Artificial Intelligence has had its fair share from the field of neuroscience. Neuroscience is the study of the nervous system, particularly the brain. How the brain enables human beings to think has remained a mystery until the present day, but significant leaps and bounds in the field have enabled scientists to come close to the nature of the thought processes inside a brain.

Neural networks can be loosely separated into neural models, network models and learning rules. The earliest mathematical models of the neuron pre-date McCulloch and Pitts, who developed the first network models to explain how signals pass from one neuron to another within the network. Wiener's work allowed McCulloch and Pitts to describe how these different connection types would affect the operation of the network. When you hear of a network being described as a feed-forward or feedback network, they are describing how the network connects neurons in one layer to neurons in the next.

Neural network research has gone through a number of lulls: new methods have been created, have shown brief promise, have been over-promoted, and have suffered from some setback. However, scientists have always come back to the technology because it is a real attempt to model neural mechanisms despite the hype.

4. Applications of neural networks
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
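As a small illustration of inferring a function from observations, the sketch below trains a tiny feed-forward network on the XOR function using plain gradient descent. The architecture and the numbers are made up for illustration, and the exact result depends on the random initialisation.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # observations
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 4 hidden units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                    # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)                  # forward pass: output layer

    # Backward pass for squared error, using sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training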

a). Real-life applications
The tasks artificial neural networks are applied to tend to fall within the following broad categories:
 Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
 Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
 Data processing, including filtering, clustering, blind source separation and compression.
 Robotics, including directing manipulators, prosthesis.
 Control, including computer numerical control.

Application areas include system identification and control (vehicle control, process control, natural resources management), quantum chemistry, game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications (e.g. automated trading systems), data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

Artificial neural networks have also been used to diagnose several cancers. An ANN-based hybrid lung cancer detection system named HLND improves the accuracy of diagnosis and the speed of lung cancer radiology. These networks have also been used to diagnose prostate cancer. The diagnoses can be used to make specific models taken from a large group of patients compared to information of one given patient. The models do not depend on assumptions about correlations of different variables. Colorectal cancer has also been predicted using the neural networks. Neural networks could predict the outcome for a patient with colorectal cancer with more accuracy than the current clinical methods. After training, the networks could predict multiple patient outcomes from unrelated institutions.

b). Neural networks and neuroscience
Theoretical and computational neuroscience is the field concerned with the theoretical analysis and the computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behavior, the field is closely related to cognitive and behavioral modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models), and theory (statistical learning theory and information theory).

5. Common sense
In artificial intelligence research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know. The commonsense knowledge problem is the ongoing project in the field of knowledge representation (a sub-field of artificial intelligence) to create a commonsense knowledge base: a database containing all the general knowledge that most people possess, represented in a way that it is available to artificial intelligence programs that use natural language or make inferences about the ordinary world. Such a database is a type of ontology, of which the most general are called upper ontologies.

The problem is considered to be among the hardest in all of AI research because the breadth and detail of commonsense knowledge is enormous. Any task that requires commonsense knowledge is considered AI-complete: to be done as well as a human being does it, it requires the machine to appear as intelligent as a human being. These tasks include machine translation, object recognition, text mining and many others. To do these tasks perfectly, the machine simply has to know what the text is talking about or what objects it may be looking at, and this is impossible in general unless the machine is familiar with all the same concepts that an ordinary person is familiar with.

Information in a commonsense knowledge base may include, but is not limited to, the following:

 An ontology of classes and individuals
 Parts and materials of objects
 Properties of objects (such as color and size)
 Functions and uses of objects
 Locations of objects and layouts of locations
 Locations of actions and events
 Durations of actions and events
 Preconditions of actions and events
 Effects (postconditions) of actions and events
 Subjects and objects of actions
 Behaviors of devices
 Stereotypical situations or scripts
 Human goals and needs
 Emotions
 Plans and strategies
 Story themes
 Contexts

6. Reasoning
To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, "Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum."

An example of the latter is, "Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure." The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour, until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules. There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation.

7. Some examples of expert systems
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as if-then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of AI software.

An expert system is divided into two sub-systems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.
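A minimal sketch of these two sub-systems (the facts and rules are invented for illustration) is a forward-chaining inference engine over if-then rules:

# Knowledge base: known facts plus if-then rules (conditions -> conclusion).
facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
]

def forward_chain(facts, rules):
    # Inference engine: apply rules to the known facts to deduce new facts
    # until no rule adds anything new.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)            # the rule fires
                changed = True
    return facts

print(forward_chain(facts, rules))
# -> {'has_fever', 'has_rash', 'suspect_measles', 'recommend_doctor_visit'} (order may vary)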

a). Advantages
The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.

Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: simply invoke the inference engine. This also was a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or year typically associated with complex IT projects.

b). Disadvantages
The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems seem at least as critical as knowledge acquisition. These problems were essentially the same as those of any other large system: integration, access to large databases, and performance.

Performance was especially problematic because early expert systems were built using tools such as Lisp, which executed interpreted rather than compiled code. Interpreting provided an extremely powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages of the time, such as C. System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcomed in most corporate IT environments - programming languages such as Lisp and Prolog and hardware platforms such as Lisp Machines and personal computers.

As a result, a great deal of effort in the later stages of expert system tool development was focused on integration with legacy environments such as COBOL, integration with large database systems, and porting to more standard platforms. These issues were resolved primarily by the client-server paradigm shift, as PCs were gradually accepted in the IT world as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.

References:
1. Rich E. and Knight K., "Artificial Intelligence", TMH, New Delhi.

2. Nilsson N. J., "Principles of Artificial Intelligence", Springer Verlag, Berlin.