
Chapter 4: LEARNING, PLANNING AND EXPLANATION IN EXPERT SYSTEMS

Learning: Learning is acquiring new knowledge, behaviors, skills, values, preferences

or understanding, and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning, the agent is rewarded for good responses and punished for bad ones.

Learning Classification: Learning can be classified as described below:

1. Unsupervised learning: In machine learning, unsupervised learning is a class of problems in which one seeks to determine how the data are organized. It is distinguished from supervised learning (and reinforcement learning) in that the learner is given only unlabeled examples. Unsupervised learning is closely related to the problem of density estimation in statistics. It also encompasses many other techniques that seek to summarize and explain key features of the data. One form of unsupervised learning is clustering. Another example is blind source separation based on Independent Component Analysis (ICA). Among neural network models, the Self-Organizing Map (SOM) and Adaptive Resonance Theory (ART) are commonly used unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART

networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.

2. Supervised Learning: Supervised learning is a machine learning technique for deducing a function from training data. The training data consist of pairs of input objects (typically vectors) and desired outputs. The output of the function can be a continuous value (called regression) or a class label of the input object (called classification). The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e., pairs of input and target output).

3. Empirical Learning: Empirical learning presupposes little background knowledge relevant to the task at hand, so the main concern is to hypothesize a concept or rule primarily on the basis of the observational data supplied to the system. Many possible hypotheses can explain an observation, but the main problem is to find the most preferred explanation. Thus the main inference scheme of these systems is inductive. The empirical learning task is described as follows:

Given: Observational statements (OS) about an object, phenomenon, or process; and background knowledge (BK), which includes domain concepts, the preference criterion for choosing among competing hypotheses, and inductive rules of inference.

Determine: Explanation knowledge (EK) that, if true, logically entails the observations and is most plausible or, in general, most desirable among all other such hypotheses according to a given preference criterion.

Planning: Planning is retrieving that knowledge and finding out whether procedural knowledge is better retained and more easily accessed. Therefore, one should develop and use cognitive procedures when learning information.
Procedures can include shortcuts for completing a task (e.g., using memorized facts to solve multiplication problems), as well as memory strategies that increase the distinctive meaning of information.

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or value) of the available choices. In classical planning problems, the agent can assume that it is the only thing acting on the world and can be certain of the consequences of its actions. However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as this becomes necessary, which requires the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. It can involve agents planning for a common goal, an agent coordinating the plans of others (plan merging), or agents refining their own plans while negotiating over tasks or resources. A typical planner takes three inputs: a description of the initial state of the world, a description of the desired goal, and a set of possible actions, all encoded in a formal language such as STRIPS. The planner produces a sequence of actions that lead from the initial state to a state meeting the goal.

Explanation: An explanation is a set of statements constructed to describe a set of facts, clarifying the causes, context, and consequences of those facts. An explanation facility is the part of an expert system that explains the reasoning of the system to the user. The explanation can range from how the final or intermediate solutions were arrived at to justifying the need for additional data. It allows the user to ask why the system asked some question and how it reached some conclusion. For example, suppose there is an expert system that can suggest the most appropriate juice to a user based on the ingredients of the meal. The expert system will return an answer suggesting the best juice for the meal.
The explanation facility can be used to show the rules that the expert system used to reach the conclusion that this is the best juice.

TYPES OF LEARNING: We learn new knowledge through different methods, depending on the type of material to be learned, the amount of relevant knowledge we already possess, and the environment in which learning takes place. One can develop learning taxonomies based on the knowledge representation used (predicate calculus, rule based, frame based), the type of knowledge learned (concept learning, game playing,

problem solving), or by the area of application (medical diagnosis, scheduling, prediction, and so on).

Five different learning methods under this taxonomy are:

1. Memorization (rote learning)
2. Direct instruction (by being told)
3. Analogy
4. Induction
5. Deduction

1. Memorization (rote learning): Memorization is the simplest form of learning. It requires the least amount of inference and is accomplished by simply copying the knowledge, in the same form in which it will be used, directly into the knowledge base. We use this type of learning when we memorize the multiplication table, for example.

2. Direct instruction (by being told): This type of learning requires more inference than rote learning, since the knowledge must be transformed into an operational form before being integrated into the knowledge base. We use this type of learning when a teacher presents a number of facts directly to us in a well-organized manner.

3. Analogy: Analogical learning is the process of learning a new concept or solution through the use of similar concepts or solutions. We use this type of learning when solving problems on an exam, where previously learned examples serve as a guide, or when we learn to drive a truck using our knowledge of car driving.

4. Induction: This type of learning is also one that is used frequently by humans. It is a powerful form of learning which, like analogical learning, requires more inference than the first two methods. This form of learning requires the use of inductive inference, a form of inference that is logically invalid but useful. We use inductive learning when we formulate a general concept after seeing a number of instances or examples of the concept. For example, we learn the concepts of color or sweet taste after experiencing the sensations associated with several examples of colored objects or sweet foods.

5. Deduction: Deductive learning is accomplished through a sequence of deductive inference steps using known facts. From the known facts, new facts or relationships are logically derived. For example, we could learn deductively that Sue is the cousin of Bill if we have knowledge of Sue's and Bill's parents and rules for the cousin relationship. Deductive learning usually requires more inference than other methods, but the inference method used is a valid form of inference.

GENERAL LEARNING MODEL: Learning requires that new knowledge structures be created from some form of input stimulus. This new knowledge must then be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained, where the feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge.

General Learning Model: The environment has been included as part of the overall learner system. The environment may be regarded either as a form of nature which produces random stimuli, or as a more organized training source, such as a teacher, which provides carefully selected training examples for the learner component. The actual form of environment used will depend on the particular learning paradigm. Inputs to the learner component may be physical stimuli of some type or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base. This same knowledge is used by the performance component to carry out some task, such as solving a problem, playing a game, or classifying instances of some concept.

When given a task, the performance component produces a response describing its actions in performing the task. The critic module then evaluates this response relative to an optimal response. Feedback, indicating whether or not the performance was acceptable, is then sent by the critic module to the learner component for its subsequent use in modifying the structures in the knowledge base. The cycle described above may be repeated a number of times until the performance of the system has reached some acceptable level or a known learning goal has been reached.

Several factors affect learning performance in addition to the form of representation used. They include the type of training provided, the form and extent of any initial background knowledge, the type of feedback provided, and the learning algorithms used. The type of training used in a system can have a strong effect on performance, much the same as it does for humans. Training may consist of randomly selected instances or of examples that have been carefully selected and ordered for presentation. Many forms of learning can be characterized as a search through a space of possible hypotheses or solutions. To make learning more efficient, it is necessary to constrain this search process or reduce the search space. This can be achieved through the use of background knowledge, which can be used to constrain the search space, or through control operations which limit the search process.
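The learner / performance / critic cycle described above can be sketched in code. This is a toy illustration only: the task (rote-memorizing input/label pairs) and all names are made up for the example, not taken from any particular system.

```python
# Sketch of the general learning model: environment -> learner ->
# knowledge base -> performance component -> critic -> feedback to learner.

def run_learning_cycle(training_examples, max_cycles=10):
    knowledge_base = {}                      # structures built by the learner

    def performance_component(x):
        # Use current knowledge to respond to a task instance.
        return knowledge_base.get(x, "unknown")

    def critic(response, optimal):
        # Evaluate the response relative to the optimal response.
        return response == optimal           # feedback: acceptable or not

    for cycle in range(max_cycles):
        errors = 0
        for x, label in training_examples:   # stimuli from the environment
            response = performance_component(x)
            if not critic(response, label):  # feedback is negative
                knowledge_base[x] = label    # learner modifies the KB
                errors += 1
        if errors == 0:                      # performance now acceptable
            return knowledge_base, cycle
    return knowledge_base, max_cycles

kb, cycles = run_learning_cycle([("2x3", "6"), ("2x4", "8")])
print(kb, cycles)
```

After one corrective pass the knowledge base reproduces the training examples, so the critic reports acceptable performance on the next cycle and the loop stops.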
Feedback is essential to the learner component, since otherwise it would never know whether the knowledge structures in the knowledge base were improving or whether they were adequate for the performance of the given tasks. The feedback must be accurate; otherwise the system could never learn what is good or bad. If the feedback is reliable and carries useful information, the learner should be able to build up a useful corpus of knowledge quickly. On the other hand, if the feedback is noisy or unreliable, the learning process may be very slow and the resultant knowledge incorrect.

Factors affecting learning performance: Learning algorithms determine to a large extent how successful a learning system will be. The algorithms control the search to find and build the knowledge structures.

PERFORMANCE MEASURES: How can we evaluate the performance of a given system, or compare the relative performance of two different systems? The following characteristics are used to compare the relative performance of different learning methods:

1. Generality (scope of method): This is the most important performance measure for learning methods. Generality is the measure of the ease with which the method can be adapted to different domains of application. A completely general algorithm is one with a fixed or self-adjusting configuration that can learn or adapt in any environment or application domain. At the other extreme are methods which function in a single domain only. Methods with some degree of generality will function well in at least a few domains.

2. Efficiency: The efficiency of a method is the measure of the average time required to construct the target knowledge structures from some specified initial state.

3. Robustness: Robustness is the ability of a learning system to function with unreliable feedback and with a variety of training examples, including noisy ones.

4. Efficacy: The efficacy of a system is a measure of the overall power of the system.

5. Ease of implementation: Ease of implementation relates to the complexity of the programs and data structures and the resources required for developing the system.

REAL TIME EXPERT SYSTEM:-

Real-time expert systems are on-line knowledge-based systems that combine analytical process models with conventional process control and heuristics to judge and interpret sensory data, while reasoning about the past, present, and future to assess ongoing developments and plan appropriate actions. A real-time system is software whose correct functioning depends both on the results produced by the system and on the time at which those results are produced.

Hard Real Time System: A hard real-time system is one whose operation is incorrect if results are not produced according to the timing specification.

Soft Real Time System: A soft real-time system is one whose operation is degraded if results are not produced according to specified timing requirements. The real-time system model depends on stimulus/response timing. Stimuli fall into two classes:

Periodic stimuli: Stimuli which occur at predictable time intervals.

Aperiodic stimuli: Stimuli that are unpredictable and are signaled using the computer's interrupt mechanism.

The sensors associated with the system generate periodic stimuli. The responses are directed to a set of actuators that control hardware units which influence the system's environment.

Real Time System: A real-time system is one in which the correctness of the computation depends not only upon its logical correctness but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred. It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behavior requires that the system be predictable. It is also desirable that the system attain a high degree of utilization while satisfying its timing constraints.

For example, consider a robot that has to pick up something from a conveyor belt. The piece is moving, and the robot has a small window in which to pick up the object. If the robot is late, the piece won't be there anymore, and the job will have been done incorrectly even though the robot went to the right place. If the robot is early, the piece won't be there yet and the robot may block it.

Real time in operating systems means the ability of the operating system to provide a required level of service in a bounded response time; it is an interactive system with better response time. In practice, systems are usually mixtures of hard and soft real-time tasks. For example, a real-time process attempting to recognize images may have only a few hundred microseconds in which to resolve each image, while a process that attempts to position a servo motor may have tens of milliseconds in which to process its data.
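The hard/soft distinction can be illustrated with a small sketch. All timing numbers and the linear degradation model below are hypothetical illustrations, not from the text:

```python
# Toy illustration of hard vs. soft real-time behaviour.

def hard_task_ok(response_time_ms, deadline_ms):
    # Hard real time: missing the deadline means the operation is incorrect.
    return response_time_ms <= deadline_ms

def soft_task_value(response_time_ms, deadline_ms, full_value=1.0):
    # Soft real time: the result still has value after the deadline, but its
    # usefulness degrades (here linearly, one of many possible models).
    if response_time_ms <= deadline_ms:
        return full_value
    lateness = response_time_ms - deadline_ms
    return max(0.0, full_value - lateness / deadline_ms)

print(hard_task_ok(12, 10))     # deadline missed -> operation is incorrect
print(soft_task_value(12, 10))  # late, but the result retains some value
```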

A Real Time System

CHARACTERISTICS OF REAL TIME EXPERT SYSTEMS:

1. Asynchronous event handling (continuous operation of processes)
2. Guaranteed reaction and response times (the world waits for no agent)
3. Procedural representation of knowledge (often no declarative form is available)
4. Handling of multiple problems simultaneously (multitasking)
5. Reactive and goal-directed behavior
6. Focus of attention
7. Ability to deal with incomplete and inaccurate data

ARCHITECTURE OF REAL TIME EXPERT SYSTEMS: The architecture of a real-time expert system consists of:

1. Database: The database stores the data of the knowledge sources and provides the means to manage them. The main problems are connected with the concurrent access of several parallel inference tasks trying to access data (or objects) in the database. Important issues in the organization of the database are: a possible representation of temporal knowledge; updating and retrieving of information from the database; and uncertainty management.

2. Formalism of Knowledge Sources: In a knowledge source, a collection of activities is combined. It represents a separate task which has to be executed using its own inference engine. The reasoning mechanism in such a knowledge source must be efficiently implemented. An example is the RETE algorithm, which builds a network compiled from the conditional parts of the rules and propagates changes in working memory through it.

3. Control Components: The control component determines which knowledge source is scheduled with the highest priority to meet the deadline. The control module decides when a knowledge source is allowed to access the database. Scheduling the tasks to be performed by the knowledge sources can be carried out in various ways. The most common solutions are priority scheduling, deadline scheduling, and progressive scheduling.
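The first two scheduling policies can be sketched as follows. This is a toy illustration: the task names and numbers are made up, and a real control component would also handle preemption and progressive scheduling:

```python
import heapq

# Two simple scheduling policies for knowledge-source tasks.

def priority_schedule(tasks):
    # Priority scheduling: run the highest-priority knowledge source first.
    # Each task is a (priority, name) tuple; higher priority runs earlier.
    return [name for _, name in sorted(tasks, reverse=True)]

def deadline_schedule(tasks):
    # Deadline scheduling (earliest deadline first).
    # Each task is a (deadline_ms, name) tuple; nearest deadline runs first.
    heap = list(tasks)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(priority_schedule([(2, "diagnose"), (5, "alarm"), (1, "log")]))
print(deadline_schedule([(300, "diagnose"), (50, "alarm"), (900, "log")]))
```

Both calls order the urgent "alarm" task first, but for different reasons: one by assigned priority, the other by proximity of its deadline.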

NEURAL NETWORK EXPERT SYSTEMS:

NEURAL NETWORKS: An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.

WHY USE NEURAL NETWORKS? Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an expert in the category of information it has been given to analyze. This expert can then be used to

provide projections given new situations of interest and answer "what if" questions. Other advantages include:

1. Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-organization: an ANN can create its own organization or representation of the information it receives during learning time.
3. Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Fault tolerance via redundant information coding: partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

HOW THE HUMAN BRAIN LEARNS: In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurones. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.

Components of a neuron

The Synapse

FROM HUMAN NEURONES TO ARTIFICIAL NEURONES: We construct these neural networks by first trying to deduce the essential features of neurones and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurones is incomplete and our computing power is limited, our models are necessarily gross idealisations of real networks of neurones.

The neuron model

A SIMPLE NEURON: An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.

A simple neuron

FIRING RULES: A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained. A simple firing rule can be implemented using the Hamming distance technique. The rule goes as follows: take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). The patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the nearest pattern in the 1-taught set than with the nearest pattern in the 0-taught set. If there is a tie, the pattern remains in the undefined state.

For example, a 3-input neuron is taught to output 1 when the input (X1, X2, X3) is 111 or 101, and to output 0 when the input is 000 or 001. Then, before applying the firing rule, the truth table is:

As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the nearest pattern is 000, which belongs to the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus the output stays undefined (0/1). By applying the firing rule to every column, the following truth table is obtained:
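The Hamming-distance firing rule above can be implemented directly and checked against the chapter's own example (the function name `fire` is our own):

```python
# Hamming-distance firing rule for a trained node.
# taught_1: patterns that cause the node to fire; taught_0: patterns that don't.

def fire(pattern, taught_1, taught_0):
    def hamming(a, b):
        # Number of positions in which two equal-length patterns differ.
        return sum(x != y for x, y in zip(a, b))

    d1 = min(hamming(pattern, t) for t in taught_1)  # nearest 1-taught pattern
    d0 = min(hamming(pattern, t) for t in taught_0)  # nearest 0-taught pattern
    if d1 < d0:
        return 1          # closer to the 1-taught set: fire
    if d0 < d1:
        return 0          # closer to the 0-taught set: do not fire
    return None           # tie: output stays undefined (0/1)

taught_1 = ["111", "101"]   # taught to output 1
taught_0 = ["000", "001"]   # taught to output 0
print(fire("010", taught_1, taught_0))   # nearest is 000 -> does not fire
print(fire("011", taught_1, taught_0))   # equidistant -> undefined
```

This reproduces the worked example: 010 maps to 0 (nearest pattern 000), while 011 stays undefined because it is equally close to patterns from both taught sets.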

The difference between the two truth tables is called the generalization of the neuron. The firing rule thus gives the neuron a sense of similarity and enables it to respond sensibly to patterns not seen during training.

ARCHITECTURE OF NEURAL NETWORKS:

Feed-forward networks: Feed-forward ANNs allow signals to travel one way only, from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.

Feedback networks: Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are dynamic; their state changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent.

An example of a simple feed-forward network Network layers: The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of input units is connected to a layer of hidden units, which is connected to a layer of output units.

The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.

The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units. This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
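A forward pass through such a three-layer network might be sketched as follows. The sigmoid activation function and all weight values here are illustrative assumptions, not values from the text, and the network is untrained:

```python
import math

# Minimal forward pass: input layer -> hidden layer -> output layer.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each row of `weights` holds one unit's connection weights; each unit's
    # activity is the squashed weighted sum of the previous layer's activity.
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

def forward(inputs, hidden_weights, output_weights):
    hidden = layer(inputs, hidden_weights)   # hidden activity depends on the
                                             # inputs and input->hidden weights
    return layer(hidden, output_weights)     # output depends on hidden activity
                                             # and hidden->output weights

hidden_w = [[0.5, -0.2], [0.3, 0.8]]   # 2 hidden units, each with 2 inputs
output_w = [[1.0, -1.0]]               # 1 output unit over the 2 hidden units
print(forward([1.0, 0.0], hidden_w, output_w))
```

Modifying the input-to-hidden weights changes when each hidden unit is active, which is exactly the freedom the text describes: the hidden units choose what they represent.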

An example of a complicated network

We also distinguish single-layer and multi-layer architectures. The single-layer organization, in which all units are connected to one another, constitutes the most general case and has more potential computational power than hierarchically structured multi-layer organizations. In multi-layer networks, units are often numbered by layer instead of following a global numbering.

Perceptrons: The most influential work on neural nets in the 1960s went under the heading of "perceptrons", a term coined by Frank Rosenblatt. The perceptron turns out to be a neuron with weighted inputs and some additional, fixed preprocessing. Units labeled A1, A2, ..., Ap are called association units, and their task is to extract specific, localized features from the input images. Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended a lot further.
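A single perceptron-style unit can be sketched as weighted inputs compared against a threshold. The AND example and its weight/threshold values are our own illustration, not from the text:

```python
# A perceptron unit in the spirit of Rosenblatt's model: weighted inputs,
# a threshold, and a binary output.

def perceptron(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: a 2-input unit whose weights and threshold realize logical AND.
and_weights, and_threshold = [1.0, 1.0], 1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, perceptron([a, b], and_weights, and_threshold))
```

Only the input (1, 1) reaches the threshold, so the unit fires exactly when both inputs are on.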

APPLICATIONS OF NEURAL NETWORKS:

1. Sales forecasting
2. Industrial process control
3. Customer research
4. Data validation
5. Risk management
6. Target marketing
7. Neural networks in medicine (electronic noses, modeling and diagnosing the cardiovascular system, the "instant physician")
8. Neural networks in business (marketing)

FUZZY LOGIC

A fuzzy expert system is an expert system that uses a collection of fuzzy membership functions and rules, instead of Boolean logic, to reason about data. The rules in a fuzzy expert system are usually of a form similar to the following:

If x is low and y is high then z = medium

where x and y are input variables (names for known data values), z is an output variable (a name for a data value to be computed), low is a membership function (fuzzy subset) defined on x, high is a membership function defined on y, and medium is a membership function defined on z. The antecedent (the rule's premise) describes to what degree the rule applies, while the conclusion (the rule's consequent) assigns a membership function to each of one or more output variables. Most tools for working with fuzzy expert systems allow more than one conclusion per rule. The set of rules in a fuzzy expert system is known as the rule base or knowledge base. The general inference process proceeds in three (or four) steps.

1. Under FUZZIFICATION, the membership functions defined on the input variables are applied to their actual values to determine the degree of truth for each rule premise.

2. Under INFERENCE, the truth value for the premise of each rule is computed and applied to the conclusion part of each rule. This results in one fuzzy subset being assigned to each output variable for each rule. Usually only MIN or PRODUCT is used as the inference rule. In MIN inferencing, the output membership function is clipped off at a height corresponding to the rule premise's computed degree of truth (fuzzy logic AND). In PRODUCT inferencing, the output membership function is scaled by the rule premise's computed degree of truth.

3. Under COMPOSITION, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable. Again, usually MAX or SUM is used. In MAX composition, the combined output fuzzy subset is constructed by taking the pointwise maximum over all of the fuzzy subsets assigned to the variable by the inference rule (fuzzy logic OR). In SUM composition, the combined output fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy subsets assigned to the output variable by the inference rule.

4. Finally there is the (optional) DEFUZZIFICATION, which is used when it is useful to convert the fuzzy output set to a crisp number. There are more defuzzification methods than you can shake a stick at (at least 30). Two of the more common techniques are the CENTROID and MAXIMUM methods. In the CENTROID method, the crisp value of the output variable is computed by finding the variable value of the center of gravity of the membership function for the fuzzy value. In the MAXIMUM method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable.

Extended Example: Assume that the variables x, y, and z all take on values in the interval [0, 10], and that the following membership functions and rules are defined:

low(t) = 1 - (t/10)
high(t) = t/10

Rule 1: if x is low and y is low then z is high
Rule 2: if x is low and y is high then z is low
Rule 3: if x is high and y is low then z is low
Rule 4: if x is high and y is high then z is high

Notice that instead of assigning a single value to the output variable z, each rule assigns an entire fuzzy subset (low or high).

Notes:

1. In this example, low(t) + high(t) = 1.0 for all t. This is not required, but it is fairly common.

2. The value of t at which low(t) is maximum is the same as the value of t at which high(t) is minimum, and vice versa. This is also not required, but fairly common.

3. The same membership functions are used for all variables. This isn't required, and is also *not* common.

In the fuzzification sub-process, the membership functions defined on the input variables are applied to their actual values to determine the degree of truth for each rule premise. The degree of truth for a rule's premise is sometimes referred to as its ALPHA. If a rule's premise has a nonzero degree of truth (if the rule applies at all...) then the rule is said to FIRE. For example,
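the fuzzification step for the extended example can be computed directly. Using x = 0.0 and y = 3.2 (the input values used later in this example), the four rule premises get the following degrees of truth:

```python
# Fuzzification for the extended example: low(t) = 1 - t/10, high(t) = t/10.
# Each rule's alpha is the fuzzy AND (min) of its premise memberships.

def low(t):
    return 1.0 - t / 10.0

def high(t):
    return t / 10.0

x, y = 0.0, 3.2
alphas = {
    "rule1": min(low(x), low(y)),     # if x is low  and y is low
    "rule2": min(low(x), high(y)),    # if x is low  and y is high
    "rule3": min(high(x), low(y)),    # if x is high and y is low
    "rule4": min(high(x), high(y)),   # if x is high and y is high
}
print(alphas)
```

Rules 1 and 2 fire with alphas 0.68 and 0.32 respectively, while rules 3 and 4 have zero degree of truth (x is not at all "high"), matching the values used in the inference step below.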

In the inference sub-process, the truth value for the premise of each rule is computed and applied to the conclusion part of each rule. This results in one fuzzy subset being assigned to each output variable for each rule. MIN and PRODUCT are two INFERENCE METHODS or INFERENCE RULES. In MIN inferencing, the output membership function is clipped off at a height corresponding to the rule premise's computed degree of truth. This corresponds to the traditional interpretation of the fuzzy logic AND operation. In PRODUCT inferencing, the output membership function is scaled by the rule premise's computed degree of truth.

For example, let's look at rule 1 for x = 0.0 and y = 3.2. As shown in the table above, the premise degree of truth works out to 0.68. For this rule, MIN inferencing will assign z the fuzzy subset defined by the membership function:

Rule1(z) = z/10, if z <= 6.8
         = 0.68, if z >= 6.8

For the same conditions, PRODUCT inferencing will assign z the fuzzy subset defined by the membership function:

Rule1(z) = 0.68 * high(z) = 0.068 * z

In the composition sub-process, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable. MAX composition and SUM composition are two COMPOSITION RULES. In MAX composition, the combined output fuzzy subset is constructed by taking the pointwise maximum over all of the fuzzy subsets assigned to the output variable by the inference rule. In SUM composition, the combined output fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy subsets assigned to the output variable by the inference rule. Note that this can result in truth values greater than one! For this reason, SUM composition is only used when it will be followed by a defuzzification method, such as the CENTROID method, that doesn't have a problem with this odd case. Otherwise, SUM composition can be combined with normalization and is therefore a general-purpose method again.

For example, assume x = 0.0 and y = 3.2. MIN inferencing would assign the following four fuzzy subsets to z:

Rule1(z) = z/10, if z <= 6.8
         = 0.68, if z >= 6.8
Rule2(z) = 0.32,     if z <= 6.8
         = 1 - z/10, if z >= 6.8
Rule3(z) = 0.0
Rule4(z) = 0.0

MAX composition would result in the fuzzy subset:

Fuzzy(z) = 0.32, if z <= 3.2
         = z/10, if 3.2 <= z <= 6.8
         = 0.68, if z >= 6.8

PRODUCT inferencing would assign the following four fuzzy subsets to z:

Rule1(z) = 0.068 * z
Rule2(z) = 0.32 - 0.032 * z
Rule3(z) = 0.0
Rule4(z) = 0.0

SUM composition would result in the fuzzy subset:

Fuzzy(z) = 0.32 + 0.036 * z

Sometimes it is useful to just examine the fuzzy subsets that are the result of the composition process, but more often this FUZZY VALUE needs to be converted to a single number, a CRISP VALUE. This is what the defuzzification sub-process does. Two of the more common techniques are the CENTROID and MAXIMUM methods. In the CENTROID method, the crisp value of the output variable is computed by finding the variable value of the center of gravity of the membership function for the fuzzy value. In the MAXIMUM method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable. There are several variations of the MAXIMUM method that differ only in what they do when there is more than one variable value at which this maximum truth value occurs. One of these, the AVERAGE-OF-MAXIMA method, returns the average of the variable values at which the maximum truth value occurs.

For example, using MAX-MIN inferencing and AVERAGE-OF-MAXIMA defuzzification results in a crisp value of 8.4 for z, while using PRODUCT-SUM inferencing and CENTROID defuzzification results in a crisp value of 5.6 for z, as follows. Earlier we stated that all variables (including z) take on values in the range [0, 10]. To compute the centroid of a function f(z), we divide the moment of the function by the area of the function. To compute the moment of f(z), we compute the integral of z*f(z) dz, and to compute the area of f(z), we compute the integral of f(z) dz. In this case, we would compute the area as the integral from 0 to 10 of (0.32 + 0.036*z) dz, which is (0.32 * 10 + 0.018 * 100) = (3.2 + 1.8) = 5.0, and the moment as the integral from 0 to 10 of (0.32*z + 0.036*z*z) dz, which is (0.16 * 10 * 10 + 0.012 * 10 * 10 * 10) = (16 + 12) = 28. Finally, the centroid is 28/5, or 5.6. Sometimes the composition and defuzzification processes are combined, taking advantage of mathematical relationships that simplify the process of computing the final output variable values.

FUZZY SETS: A fuzzy set has the ability to classify elements into a continuous set using the concept of degree of membership. The characteristic function, or membership function, not only gives 0 or 1 but can also give values between 0 and 1.

Example: Consider the outside ambient temperature. Classical set theory can only classify the temperature as hot or cold (i.e., either 1 or 0). It cannot interpret a temperature between 20°F and 100°F. In other words, the characteristic function of classical logic for the above example is given by

The boundary 50F is taken because classical logic cannot interpret intermediate values. On the other hand, fuzzy logic solves the above problem with a membership function, for example the linear one given by:

hot(T) = { 0,             if T <= 20F
         { (T - 20)/80,   if 20F < T < 100F
         { 1,             if T >= 100F

The above membership function is shown in the table, and a graph of the membership function for the fuzzy temperature variable is shown in the figure. The degree of coldness is taken as the complement of the degree of hotness.
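Returning to the worked inference example above (x = 0.0, y = 3.2), the composition and centroid-defuzzification arithmetic can be checked with a short sketch. The rule subsets and the 5.6 result come from the text; the helper names and the numeric-integration approach are illustrative, not taken from any particular fuzzy logic library.

```python
# Sketch of the composition and defuzzification steps for the worked
# example (x = 0.0, y = 3.2, all variables on [0, 10]).

def max_composition(subsets, z):
    """Point-wise maximum over the fuzzy subsets assigned to z (MAX)."""
    return max(f(z) for f in subsets)

def sum_composition(subsets, z):
    """Point-wise sum over the fuzzy subsets assigned to z (SUM)."""
    return sum(f(z) for f in subsets)

# MIN-inferenced rule subsets for x = 0.0, y = 3.2:
min_rules = [
    lambda z: min(z / 10.0, 0.68),        # rule 1: high(z) clipped at 0.68
    lambda z: min(0.32, 1.0 - z / 10.0),  # rule 2: low(z) clipped at 0.32
    lambda z: 0.0,                        # rules 3 and 4 do not fire
    lambda z: 0.0,
]

# PRODUCT-inferenced rule subsets:
prod_rules = [
    lambda z: 0.068 * z,          # 0.68 * high(z)
    lambda z: 0.32 - 0.032 * z,   # 0.32 * low(z)
    lambda z: 0.0,
    lambda z: 0.0,
]

def centroid(f, lo=0.0, hi=10.0, n=20000):
    """Crisp value = moment / area, by midpoint-rule numeric integration."""
    dz = (hi - lo) / n
    zs = [lo + (i + 0.5) * dz for i in range(n)]
    area = sum(f(z) for z in zs) * dz
    moment = sum(z * f(z) for z in zs) * dz
    return moment / area

# SUM composition of the PRODUCT-inferenced subsets: 0.32 + 0.036*z
fuzzy_sum = lambda z: sum_composition(prod_rules, z)
crisp = centroid(fuzzy_sum)
print(round(crisp, 1))   # 5.6, matching the analytic centroid 28/5
```

The MAX-composed subset can be spot-checked the same way: `max_composition(min_rules, 2.0)` gives 0.32, `max_composition(min_rules, 8.0)` gives 0.68, matching the piecewise Fuzzy(z) above.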

Membership function of temperature

Membership function for the degree of hotness and degree of coldness

Comparison of neural networks with conventional computing

Neural computing:
a) Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem.
b) Neural networks learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
c) Neural systems, like our own brains, are well suited to situations that have no clear algorithmic solution and are able to manage noisy, imprecise data.
d) They are able to manage the variability of data obtained in the real world.
e) Input data is a statistical pattern.

Conventional computing:
a) Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem.
b) Conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
c) The computer must be told in advance, in great detail, exactly what to do. However, even relatively simple tasks for people, such as recognizing faces, are very difficult to express in a rigid algorithm.
d) Conventional computers are often unable to manage the variability of data obtained in the real world.
e) Input data is exact: 1, 2, 3 and so on.

Comparison of human expertise with machine expertise

Human expertise:
a) About 10^14 neurons
b) Parallel computing
c) Signal speed: 100+ m/sec
d) Product of natural evolution
e) Uses knowledge in the form of rules of thumb

Machine expertise:
a) CPU (Central Processing Unit)
b) Serial computing
c) Approximately the speed of light
d) Designable
e) Uses knowledge expressed in the form of rules

Comparison of real-time expert systems with normal expert systems

Real-time expert system:
a) Tasks are carried out more quickly.
b) Real-time systems can be designed to perform tasks in places where it is extremely dangerous for people to work (e.g. nuclear reactors, chemical factories, North Sea oil platforms).
c) Humans get bored when carrying out the same task time after time. Real-time systems are able to carry out the same tasks over and over again without making mistakes.
d) They can be hard or soft real-time systems.
e) This guarantees 100% accuracy.
f) Highly flexible.

Normal expert system:
a) Tasks are carried out less quickly.
b) These systems are for normal usage.
c) Normal expert systems are also able to carry out the same tasks over and over again without making mistakes, but they consume more time.
d) They come under the soft category.
e) This guarantees 100% accuracy only if the data input is correct.
f) Less flexible.

Comparison of neural networks with fuzzy expert systems

Property                                    Fuzzy expert system    Neural network
Data required to construct the system       Minimal                Considerable
Expert knowledge to construct the system    Considerable           Minimal

Important points to remember:

Learning: Learning is acquiring new knowledge, behaviors, skills, values, preferences or understanding, and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning, the agent is rewarded for good responses and punished for bad ones.

Planning: Planning is retrieving that knowledge and determining whether procedural knowledge is better retained and more easily accessed. Therefore, one should develop and use cognitive procedures when learning information. Procedures can include shortcuts for completing a task (e.g., using fast 10s to solve multiplication problems), as well as memory strategies that increase the distinctive meaning of information.

Explanation: An explanation is a set of statements constructed to describe a set of facts which clarifies the causes, context, and consequences of those facts. An explanation facility is the part of an expert system that explains the reasoning of the system to the user. The explanation can range from how the final or intermediate solutions were arrived at to justifying the need for additional data. It allows the user to ask why the system asked some question and how it reached some conclusion. For example, suppose there is an expert system that can suggest the most appropriate juice to a user based on the ingredients of the meal. The expert system will return an answer suggesting the best juice for the meal. The explanation facility can be used to show the rules that the expert system used to come to the conclusion that this is the best juice.

TYPES OF LEARNING: We learn new knowledge through different methods, depending on the type of material to be learned, the amount of relevant knowledge we already possess, and the environment in which learning takes place. One can develop learning taxonomies based on the knowledge representation used (predicate calculus, rule-based, frame-based), the type of knowledge learned (concepts, game playing, problem solving), or the area of application (medical diagnosis, scheduling, prediction, and so on).

Five different learning methods under this taxonomy are:
1. Memorization (rote learning)
2. Direct instruction (by being told)
3. Analogy
4. Induction
5. Deduction

1. Memorization (rote learning): Memorization is the simplest form of learning. It requires the least amount of inference and is accomplished by simply copying the knowledge, in the same form in which it will be used, directly into the knowledge base. We use this type of learning when we memorize multiplication tables, for example.
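Rote learning, i.e. copying knowledge verbatim into the knowledge base, can be sketched as a simple lookup table. The dictionary-based "knowledge base" and the helper names here are illustrative only.

```python
# Illustrative sketch of rote learning: knowledge is copied into the
# knowledge base in the same form it will be used, with no inference.

knowledge_base = {}

def memorize(item, answer):
    """Store the fact exactly as given -- no transformation, no inference."""
    knowledge_base[item] = answer

def recall(item):
    """Retrieval is a direct lookup; unseen items are simply unknown."""
    return knowledge_base.get(item, "unknown")

# Memorizing part of a multiplication table, as in the example above:
for a in range(1, 10):
    for b in range(1, 10):
        memorize((a, b), a * b)

print(recall((7, 8)))    # 56 -- retrieved, not computed
print(recall((12, 12)))  # unknown -- rote learning cannot generalize
```

The second call illustrates the limitation noted in the text: with no inference, the system can only answer about facts it has stored.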

2. Direct instruction (by being told): This type of learning requires more inference than rote learning, since the knowledge must be transformed into an operational form before being integrated into the knowledge base. We use this type of learning when a teacher presents a number of facts directly to us in a well-organized manner.

3. Analogy: Analogical learning is the process of learning a new concept or solution through the use of similar concepts or solutions. We use this type of learning when solving problems on an exam, where previously learned examples serve as a guide, or when we learn to drive a truck using our knowledge of car driving.

4. Induction: This type of learning is also one that is used frequently by humans. It is a powerful form of learning which, like analogical learning, requires more inference than the first two methods. This form of learning requires the use of inductive inference, a form of invalid but useful inference. We use inductive learning when we formulate a general concept after seeing a number of instances or examples of the concept. For example, we learn the concepts of color or sweet taste after experiencing the sensations associated with several examples of colored objects or sweet foods.
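Forming a general concept from a number of instances can be sketched in code. The fragment below is a minimal sketch loosely in the spirit of the classic Find-S algorithm: from positive examples alone, it keeps the most specific description that covers them all. The attributes and examples are invented for illustration; "?" means "any value is acceptable".

```python
# Inductive generalization from positive examples only (Find-S style).

def generalize(hypothesis, example):
    """Relax each attribute that disagrees with the new example to '?'."""
    return tuple(h if h == e else "?" for h, e in zip(hypothesis, example))

def induce(examples):
    """Fold generalize over all positive examples of the concept."""
    hypothesis = examples[0]
    for ex in examples[1:]:
        hypothesis = generalize(hypothesis, ex)
    return hypothesis

# Positive instances of the concept "sweet food": (taste, texture, colour)
examples = [
    ("sweet", "soft", "red"),
    ("sweet", "hard", "red"),
    ("sweet", "soft", "green"),
]
print(induce(examples))  # ('sweet', '?', '?') -- the induced general concept
```

As the text notes, the conclusion is not logically guaranteed (a fourth example could contradict it), which is exactly what makes induction an invalid but useful form of inference.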

5. Deduction: Deduction is accomplished through a sequence of deductive inference steps using known facts. From the known facts, new facts or relationships are logically derived. For example, we could learn deductively that Sue is the cousin of Bill if we have knowledge of Sue's and Bill's parents and rules for the cousin relationship. Deductive learning usually requires more inference than the other methods. The inference used here is a valid form of inference.

GENERAL LEARNING MODEL: Learning requires that new knowledge structures be created from some form of input stimulus. This new knowledge must then be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained, where the feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge.

General Learning Model

The environment has been included as part of the overall learner system. The environment may be regarded either as a form of nature which produces random stimuli, or as a more organized training source, such as a teacher, which provides carefully selected training examples for the learner component. The actual form of environment used will depend on the particular learning paradigm. Inputs to the learner component may be physical stimuli of some type, or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base. This same knowledge is used by the performance component to carry out some task, such as solving a problem, playing a game, or classifying instances of some concept. When given a task, the performance component produces a response describing its actions in performing the task. The critic module then evaluates this response relative to an optimal response. Feedback, indicating whether or not the performance was acceptable, is then sent by the critic module to the learner component for its subsequent use in modifying the structures in the knowledge base. The cycle described above may be repeated a number of times until the performance of the system has reached some acceptable level, or until a known learning goal has been reached.
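The environment / learner / knowledge base / performance / critic cycle described above can be sketched as a toy loop. Every component below is a stand-in: the task (learning a single threshold), the target value, and the update rule are invented purely to make the cycle concrete.

```python
# Toy instance of the general learning model: environment -> performance
# component -> critic -> learner, repeated until performance is acceptable.

import random

TARGET = 7.0                      # the optimal threshold, known only to the critic

def environment():
    """Produces training stimuli (here, random task inputs)."""
    return random.uniform(0.0, 10.0)

def performance(knowledge, x):
    """Uses the knowledge base to carry out the task."""
    return "high" if x >= knowledge["threshold"] else "low"

def critic(x, response):
    """Compares the response to the optimal one; returns feedback."""
    optimal = "high" if x >= TARGET else "low"
    return response == optimal

def learner(knowledge, x, ok):
    """Modifies the knowledge base when feedback says we were wrong."""
    if not ok:
        # Move the threshold toward the misclassified input.
        knowledge["threshold"] += 0.1 * (x - knowledge["threshold"])

random.seed(0)
kb = {"threshold": 2.0}           # initial knowledge base
for _ in range(5000):             # repeat the cycle many times
    x = environment()
    ok = critic(x, performance(kb, x))
    learner(kb, x, ok)
print(round(kb["threshold"], 1))  # converges near the critic's target of 7.0
```

The loop shows why feedback quality matters, as the following paragraph argues: if the critic returned noisy or unreliable feedback, the threshold would wander instead of converging.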

Several factors affect learning performance in addition to the form of representation used. They include the type of training provided, the form and extent of any initial background knowledge, the type of feedback provided, and the learning algorithms used. The type of training used in a system can have a strong effect on performance, much the same as it does for humans. Training may consist of randomly selected instances, or of examples that have been carefully selected and ordered for presentation. Many forms of learning can be characterized as a search through a space of possible hypotheses or solutions. To make learning more efficient, it is necessary to constrain this search process or reduce the search space. This can be achieved through the use of background knowledge, which can constrain the search space, or through control operations which limit the search process. Feedback is essential to the learner component, since otherwise it would never know whether the knowledge structures in the knowledge base were improving or whether they were adequate for the performance of the given tasks. The feedback must be accurate: if the feedback is reliable and carries useful information, the learner should be able to build up a useful corpus of knowledge quickly. On the other hand, if the feedback is noisy or unreliable, the learning process may be very slow and the resultant knowledge incorrect.

Factors affecting learning performance

Learning algorithms determine to a large extent how successful a learning system will be. The algorithms control the search to find and build the knowledge structures.

PERFORMANCE MEASURES

How can we evaluate the performance of a given system, or compare the relative performance of two different systems? The following characteristics can be used to compare the relative performance of different learning methods:

1. Generality (scope of method): This is one of the most important performance measures for learning methods. Generality is a measure of the ease with which the method can be adapted to different domains of application. A completely general algorithm is one with a fixed or self-adjusting configuration that can learn or adapt in any environment or application domain. At the other extreme are methods which function in a single domain only. Methods which have some degree of generality will function well in at least a few domains.

2. Efficiency: The efficiency of a method is a measure of the average time required to construct the target knowledge structures from some specified initial state.

3. Robustness: Robustness is the ability of a learning system to function with unreliable feedback and with a variety of training examples, including noisy ones.

4. Efficacy: The efficacy of a system is a measure of the overall power of the system.

5. Ease of implementation: Ease of implementation relates to the complexity of the programs and data structures, and the resources required for developing the system.

REAL TIME EXPERT SYSTEM:-

Real-time expert systems are on-line knowledge-based systems that combine analytical process models with conventional process control and heuristics to judge and interpret sensory data, while reasoning about the past, present, and future to assess ongoing developments and plan appropriate actions. A real-time system is software where the correct functioning of the system depends both on the results produced by the system and on the time at which those results are produced.

Hard Real Time System: It is a system whose operation is incorrect if results are not produced according to the timing specification.

Soft Real Time System: It is a system whose operation is degraded if results are not produced according to the specified timing requirements. The real-time system model depends on stimulus/response timing. Stimuli fall into two classes:

Periodic stimuli: stimuli which occur at predictable time intervals.
Aperiodic stimuli: stimuli that are unpredictable and signaled using the computer's interrupt mechanism.

The sensors associated with the system generate periodic stimuli. The responses are directed to a set of actuators that control hardware units which influence the system's environment.

Real Time System: A real-time system is one in which the correctness of the computation depends not only upon its logical correctness but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.

It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behavior requires that the system be predictable. It is also desirable that the system attain a high degree of utilization while satisfying the timing constraints. For example, consider a robot that has to pick up something from a conveyor belt. The piece is moving and the robot has a small window in which to pick up the object. If the robot is late, the piece won't be there anymore, and thus the job will have been done incorrectly, even though the robot went to the right place. If the robot is early, the piece won't be there yet and the robot may block it. This is like real time in operating systems: the ability of the operating system to provide a required level of service in a bounded response time. It is an interactive system with better response time. In practice, systems are usually mixtures of hard and soft real-time tasks. For example: a real-time process attempting to recognize images may have only a few hundred microseconds in which to resolve each image, while a process that attempts to position a servo motor may have tens of milliseconds in which to process its data.
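The hard/soft distinction above can be sketched as a simple deadline check: a hard real-time task is simply incorrect when its deadline is missed, while a soft one is degraded but still usable. Times, deadlines, and task names below are invented illustrations, not measurements.

```python
# Illustrative sketch of hard vs. soft real-time deadline handling.
# Times are in milliseconds.

def evaluate(task, response_time_ms, deadline_ms, hard):
    """Classify a task's outcome from its response time and deadline."""
    if response_time_ms <= deadline_ms:
        return "correct"
    # A missed hard deadline is a failure; a missed soft deadline
    # merely degrades the quality of service.
    return "failure" if hard else "degraded"

# The image-recognition process has a hard sub-millisecond budget;
# the servo-positioning task has a softer tens-of-milliseconds budget.
print(evaluate("recognize image", 0.5, 0.3, hard=True))    # failure
print(evaluate("position servo", 25.0, 40.0, hard=False))  # correct
print(evaluate("position servo", 55.0, 40.0, hard=False))  # degraded
```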

A Real Time System

CHARACTERISTICS OF REAL TIME EXPERT SYSTEMS
- Asynchronous event handling (continuous operation of processes)
- Guaranteed reaction and response times (the world waits for no agent)
- Procedural representation of knowledge (often no declarative representation is available)
- Handling of multiple problems simultaneously (multitasking)
- Reactive and goal-directed behavior
- Focus of attention
- Dealing with incomplete, inaccurate data

ARCHITECTURE OF REAL TIME EXPERT SYSTEM

The architecture of a real-time expert system consists of:

1. Database: It is used to store the data of the knowledge sources and provides the means to manage them. The main problems are connected to the concurrent access of several parallel inference tasks trying to access data (or objects) in the database. Important issues in the organization of the database are:
- A suitable representation of temporal knowledge.
- Updating and retrieving of information from the database.
- Uncertainty management.

2. Formalism of knowledge sources: In a knowledge source, a collection of activities is combined. It represents a separate task which has to be executed using its own inference engine. The reasoning mechanism in such a knowledge source must be efficiently implemented. An example is the RETE algorithm, which builds a network compiled from the condition parts of the rules and matches it incrementally as inputs change in working memory.

3. Control components: The control component determines which knowledge source is scheduled with the highest priority to meet the deadline. The control module decides when a knowledge source is allowed to access the database. Scheduling the tasks to be performed by the knowledge sources can be carried out in various ways. The most common solutions are:
- Priority scheduling
- Deadline scheduling
- Progressive scheduling
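Deadline scheduling, the second option above, can be sketched with a priority queue: at each step the control component runs the pending knowledge-source task with the earliest deadline (earliest-deadline-first). The task names and deadlines are invented for illustration.

```python
# A minimal earliest-deadline-first (EDF) sketch for scheduling the
# knowledge sources of a real-time expert system.

import heapq

def deadline_schedule(tasks):
    """tasks: list of (deadline_ms, name). Returns names in execution order."""
    heap = list(tasks)
    heapq.heapify(heap)              # the heap orders tuples by deadline first
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # always run the nearest deadline next
        order.append(name)
    return order

pending = [(50, "alarm analysis"), (10, "sensor validation"), (30, "trend check")]
print(deadline_schedule(pending))
# ['sensor validation', 'trend check', 'alarm analysis']
```

Priority scheduling would look the same with a static priority in place of the deadline; progressive scheduling additionally allows a task to return a rough answer early and refine it if time remains.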

NEURAL NETWORK EXPERT SYSTEM: NEURAL NETWORK

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.

WHY USE NEURAL NETWORKS

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an expert in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and to answer "what if" questions. Other advantages include:

1. Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-organization: an ANN can create its own organization or representation of the information it receives during learning time.
3. Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Fault tolerance via redundant information coding: partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

The Synapse

ARTIFICIAL NEURONES

We construct these neural networks by first trying to deduce the essential features of neurones and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurones is incomplete and our computing power is limited, our models are necessarily gross idealisations of real networks of neurones.

The neuron model

A SIMPLE NEURON: An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.

A simple neuron

FIRING RULES: A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained. A simple firing rule can be implemented using the Hamming distance technique. The rule goes as follows:

1. Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). The patterns not in the collection then cause the node to fire if, on comparison, they have more input elements in common with the nearest pattern in the 1-taught set than with the nearest pattern in the 0-taught set. If there is a tie, the pattern remains in the undefined state. For example, a 3-input neuron is taught to output 1 when the input (X1, X2 and X3) is 111 or 101, and to output 0 when the input is 000 or 001. Then, before applying the firing rule, the truth table is:

X1:  0   0   0   0   1   1   1   1
X2:  0   0   1   1   0   0   1   1
X3:  0   1   0   1   0   1   0   1
OUT: 0   0  0/1 0/1 0/1  1  0/1  1

As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the nearest pattern is 000, which belongs in the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus the output stays undefined (0/1). By applying the firing rule in every column, the following truth table is obtained:

X1:  0   0   0   0   1   1   1   1
X2:  0   0   1   1   0   0   1   1
X3:  0   1   0   1   0   1   0   1
OUT: 0   0   0  0/1 0/1  1   1   1
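The Hamming-distance firing rule can be written down directly. The sketch below reproduces the 3-input example above; patterns are bit strings, and a tie between the taught sets leaves the output undefined ("0/1").

```python
# The Hamming-distance firing rule for a single artificial neuron.

def hamming(a, b):
    """Number of positions in which two bit-string patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fire(pattern, taught_1, taught_0):
    """Fire iff the pattern is nearer the 1-taught set than the 0-taught set."""
    d1 = min(hamming(pattern, t) for t in taught_1)
    d0 = min(hamming(pattern, t) for t in taught_0)
    if d1 < d0:
        return "1"
    if d0 < d1:
        return "0"
    return "0/1"                      # tie: output stays undefined

taught_1 = ["111", "101"]             # patterns taught to fire
taught_0 = ["000", "001"]             # patterns taught not to fire

print(fire("010", taught_1, taught_0))  # 0   -- nearest pattern is 000
print(fire("011", taught_1, taught_0))  # 0/1 -- equally distant, undefined
print(fire("110", taught_1, taught_0))  # 1   -- the neuron generalizes to fire
```

Running `fire` over all eight input patterns reproduces the generalized truth table, including the two entries (011 and 100) that remain undefined.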

The difference between the two truth tables is called the generalization of the neuron. Therefore the firing rule gives the neuron a sense of similarity and enables it to respond sensibly to patterns not seen during training. ARCHITECTURE OF NEURAL NETWORKS

Feed-forward networks: Feed-forward ANNs allow signals to travel one way only: from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.

Feedback networks: Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are dynamic; their state changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent.

An example of a simple feed-forward network

Network layers: The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of input units is connected to a layer of hidden units, which is connected to a layer of output units.

- The activity of the input units represents the raw information that is fed into the network.
- The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
- The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
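The three-layer computation described above can be sketched as a forward pass: each hidden unit's activity comes from the input activities and the input-to-hidden weights, and the output from the hidden activities and the hidden-to-output weights. The weight values and the sigmoid squashing function below are arbitrary illustrative choices, not from the text.

```python
# Sketch of a forward pass through a three-layer feed-forward network.

import math

def sigmoid(x):
    """Squash a weighted sum into the unit interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One layer: each unit takes a weighted sum of all inputs, squashed."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

def forward(inputs, w_in_hidden, w_hidden_out):
    hidden = layer(inputs, w_in_hidden)      # input units -> hidden units
    return layer(hidden, w_hidden_out)       # hidden units -> output units

w_in_hidden = [[0.5, -1.0], [1.5, 2.0]]      # 2 input units -> 2 hidden units
w_hidden_out = [[1.0, -1.0]]                 # 2 hidden units -> 1 output unit

output = forward([1.0, 0.0], w_in_hidden, w_hidden_out)
print(len(output), 0.0 < output[0] < 1.0)    # one output activity, in (0, 1)
```

Changing `w_in_hidden` changes when each hidden unit becomes active, which is exactly the sense in which a hidden unit "chooses what it represents".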

An example of a complicated network

We also distinguish single-layer and multi-layer architectures. The single-layer organization, in which all units are connected to one another, constitutes the most general case and is of more potential computational power than hierarchically structured multi-layer organizations. In multi-layer networks, units are often numbered by layer instead of following a global numbering.

Perceptrons: The most influential work on neural nets in the 60s went under the heading of perceptrons, a term coined by Frank Rosenblatt. The perceptron turns out to be a neuron with weighted inputs and some additional, fixed preprocessing. Units labeled A1, A2, ..., Aj, ..., Ap are called association units, and their task is to extract specific, localized features from the input images. Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended a lot further.

APPLICATIONS OF NEURAL NETWORKS
- Sales forecasting
- Industrial process control
- Customer research
- Data validation
- Risk management
- Target marketing
- Neural networks in medicine
- Electronic noses
- Modeling and diagnosing the cardiovascular system
- Instant physician
- Neural networks in business
- Marketing

FUZZY LOGIC

A fuzzy expert system is an expert system that uses a collection of fuzzy membership functions and rules, instead of Boolean logic, to reason about data. The rules in a fuzzy expert system are usually of a form similar to the following:

If x is low and y is high then z = medium

where x and y are input variables (names for known data values), z is an output variable (a name for a data value to be computed), low is a membership function (fuzzy subset) defined on x, high is a membership function defined on y, and medium is a membership function defined on z. The antecedent (the rule's premise) describes to what degree the rule applies, while the conclusion (the rule's consequent) assigns a membership function to each of one or more output variables. Most tools for working with fuzzy expert systems allow more than one conclusion per rule. The set of rules in a fuzzy expert system is known as the rule base or knowledge base. The general inference process proceeds in three (or four) steps.

1. Under FUZZIFICATION, the membership functions defined on the input variables are applied to their actual values, to determine the degree of truth for each rule premise.

2. Under INFERENCE, the truth value for the premise of each rule is computed and applied to the conclusion part of each rule. This results in one fuzzy subset being assigned to each output variable for each rule. Usually only MIN or PRODUCT are used as inference rules. In MIN inferencing, the output membership function is clipped off at a height corresponding to the rule premise's computed degree of truth (fuzzy logic AND). In PRODUCT inferencing, the output membership function is scaled by the rule premise's computed degree of truth.

3. Under COMPOSITION, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable. Again, usually MAX or SUM are used. In MAX composition, the combined output fuzzy subset is constructed by taking the point-wise maximum over all of the fuzzy subsets assigned to the output variable by the inference rule (fuzzy logic OR). In SUM composition, the combined output fuzzy subset is constructed by taking the point-wise sum over all of the fuzzy subsets assigned to the output variable by the inference rule.

4. Finally there is the (optional) DEFUZZIFICATION, which is used when it is useful to convert the fuzzy output set to a crisp number. There are more defuzzification methods than you can shake a stick at (at least 30). Two of the more common techniques are the CENTROID and MAXIMUM methods. In the CENTROID method, the crisp value of the output variable is computed by finding the variable value of the center of gravity of the membership function for the fuzzy value. In the MAXIMUM method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable.

The boundary 50F is taken because classical logic cannot interpret intermediate values. On the other hand, fuzzy logic solves the above problem with a membership function as given by

The above membership function is shown in Table, a graph of the membership function for the fuzzy temperature variable is shown in Figure. The degree of coldness is taken as the complement of the degree of hotness.

Membership function of temperature

Membership function for the degree of hotness and degree of coldness
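The MAX composition and the CENTROID/MAXIMUM defuzzification steps described above can be sketched as follows. This is an illustrative sketch: the function names and the uniform-sampling approximation are assumptions, not a fixed API.

```python
def max_compose(subsets):
    # MAX composition: pointwise maximum over all output fuzzy subsets
    # assigned by the inference rules (fuzzy logic OR).
    return lambda x: max(mu(x) for mu in subsets)

def centroid(mu, lo, hi, steps=1000):
    # CENTROID defuzzification: the crisp value is the centre of gravity
    # of the membership function, approximated by sampling [lo, hi].
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den if den else (lo + hi) / 2.0

def maximum(mu, lo, hi, steps=1000):
    # MAXIMUM defuzzification: a variable value at which the fuzzy subset
    # attains its maximum truth value.
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(xs, key=mu)
```

For a symmetric triangular output subset peaking at 5 on [0, 10], both methods yield a crisp value of 5; they differ once the combined subset is skewed.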

Comparison of neural networks with conventional computing

Neural computing
a) Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem.
b) Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
c) Neural systems, like our own brains, are well suited to situations that have no clear algorithmic solution and are able to manage noisy, imprecise data.
d) They are able to manage the variability of data obtained in the real world.
e) Input data can be a statistical pattern.

Conventional computing
a) Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem.
b) Conventional computers use a cognitive approach to problem solving: the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted into a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
c) The computer must be told in advance, in great detail, the exact steps to follow. However, even tasks that are relatively simple for people, such as recognizing faces, are very difficult to express as a rigid algorithm.
d) Conventional computers are often unable to manage the variability of data obtained in the real world.
e) Input data is exact numerical values (1, 2, 3 and so on).
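The "learning by example" point can be illustrated with a minimal perceptron sketch: nothing about the target function is programmed in; the weights are adjusted from labelled examples. The training rule and constants below are illustrative assumptions, not taken from the text.

```python
def train_perceptron(examples, lr=0.1, epochs=50):
    # Adjust weights from labelled examples instead of programming the task.
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out  # error signal drives the weight updates
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# The network learns logical AND purely from labelled examples:
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(and_examples)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

After training, predict reproduces the AND truth table, yet its behaviour was never spelled out as explicit instructions, which is exactly the contrast drawn with conventional computing above.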

Q. Give the advantages of neural network architecture.
Ans: Advantages
a) A neural network can perform tasks that a linear program cannot.
b) When an element of the neural network fails, the network can continue without any problem because of its parallel nature.
c) A neural network learns and does not need to be reprogrammed.
d) It can be implemented in any application without any problem.

Comparison of human expertise with machine expertise

Human expertise
a) About 10^14 neurons
b) Parallel computing
c) Speed: about 100 m/sec
d) Product of natural evolution
e) Uses knowledge in the form of rules of thumb

Machine expertise
a) CPU (Central Processing Unit)
b) Serial computing
c) Approximately the speed of light
d) Designable
e) Uses knowledge expressed in the form of rules

Comparison of real-time expert systems with normal expert systems

Real-time expert system
a) Tasks are carried out more quickly.
b) Real-time systems can be designed to perform tasks in places where it is extremely dangerous for people to work (e.g. nuclear reactors, chemical factories, North Sea oil platforms).
c) Humans get bored when carrying out the same task time after time; real-time systems are able to carry out the same tasks over and over again without making mistakes.
d) Can be hard or soft real-time systems.
e) This guarantees 100% accuracy.
f) Highly flexible.

Normal expert system
a) Tasks are carried out less quickly.
b) These systems are for normal usage.
c) Normal expert systems are able to carry out the same tasks over and over again without making mistakes, but this consumes time.
d) They come only under the soft category.
e) This guarantees 100% accuracy only if the input data is correct.
f) Less flexible.

Comparison of neural networks with fuzzy expert systems

Property                                   Fuzzy expert system    Neural network
Data required to construct the system      Minimal                Considerable
Expert knowledge to construct the system   Considerable           Minimal
