
**Summary of Book Chapters with Classification of Approaches**

**Part I**

Foundations of Soft Computing and Intelligent Control Systems

**CHAPTER 1**

**Outline of a Computational Theory of Perceptions Based on Computing with Words**

**L.A. ZADEH, Berkeley Initiative in Soft Computing (BISC), University of California, Berkeley, California, USA**


Perceptions play a pivotal role in human cognition. The literature on perceptions is enormous, encompassing thousands of papers and books in the realms of psychology, linguistics, philosophy, and brain science, among others [64]. And yet, what does not exist is a theory in which perceptions are treated as objects of computation. A preliminary version of such a theory, called the computational theory of perceptions (CTP), is outlined in this chapter.

The computational theory of perceptions is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and without any computations. Everyday examples of such tasks are parking a car, driving in city traffic, playing golf, cooking a meal, and summarizing a story. Underlying this remarkable capability is the brain’s ability to manipulate perceptions—perceptions of time, distance, force, direction, speed, shape, color, likelihood, intent, truth and other attributes of physical and mental objects.

A basic difference between measurements and perceptions is that, in general, measurements are crisp and quantitative whereas perceptions are fuzzy and qualitative. Furthermore, the finite ability of the human brain to resolve detail and store information necessitates a partitioning of objects (points) into granules, with a granule being a clump of objects (points) drawn together by indistinguishability, similarity, proximity, or functionality [62]. For example, a perception of age may be described as young, with young being a context-dependent granule of the variable Age (Figure 1). Thus, in general, perceptions are both fuzzy and granular, or for short, f-granular. In this perspective, use of perceptions may be viewed as a human way of achieving fuzzy data compression.
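A context-dependent granule such as *young* can be sketched as a fuzzy membership function on the Age axis. The following minimal Python sketch uses a trapezoidal shape whose breakpoints are illustrative assumptions, not values taken from Figure 1:

```python
def trapezoid(u, a, b, c, d):
    """Trapezoidal membership function: 1 on [b, c], linear shoulders on
    [a, b] and [c, d], 0 outside [a, d]."""
    if u <= a or u >= d:
        return 0.0
    if b <= u <= c:
        return 1.0
    if u < b:
        return (u - a) / (b - a)
    return (d - u) / (d - c)

# "young" as a fuzzy granule of Age (breakpoints are assumed for illustration).
def mu_young(age):
    return trapezoid(age, 0, 0, 25, 40)
```

For example, an age of 20 belongs fully to *young*, an age of 30 belongs partially, and an age of 45 not at all; the granule compresses a whole range of ages into one word.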

**FIGURE 1** Crisp and fuzzy granulation of *Age*. Note that *young* is context-dependent.

One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurement. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in city traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize nontrivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks.

It may be argued that underlying the underachievements is the lack of machinery for the manipulation of perceptions. A case in point is the problem of automation of driving in city traffic—a problem that is certainly not academic in nature. Human drivers do it routinely, without any measurements and without any computations. Now assume that we had a limitless number of sensors to measure anything that we might want. Would this be of any help in constructing a system that would do the driving on its own? The answer is a clear no. Thus, in this instance, as in others, progressing from perceptions to measurements does not solve the problem.

To illustrate a related point, consider an example in which we have a transparent box containing black and white balls. Suppose that the question is: What is the probability that a ball drawn at random is black? Now, if I can count the balls in the box and the proportion of black balls is, say, 0.7, then my answer would be 0.7. If I cannot count the balls but my visual perception is that most are black, the traditional approach would be to draw on subjective probability theory. Using an elicitation procedure in this theory would lead to a numerical value of the desired probability, say, 0.7. By so doing, I have quantified my perception of the desired probability; but can I justify the procedure that led me to the numerical value?

Countertraditionally, employing CTP would yield the following answer: If my perception is that most balls are black, then the probability that a ball drawn at random is black is *most*, where *most* is interpreted as a fuzzy proportion (Figure 2). Thus, in the traditional approach the data are imprecise but the answer is precise. In the countertraditional approach, imprecise data induce an imprecise answer [63].
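The fuzzy proportion *most* can itself be sketched as a membership function over [0, 1]. The breakpoints below are assumptions chosen for illustration, not the definition given in Figure 2:

```python
def mu_most(p):
    """Degree to which a proportion p in [0, 1] qualifies as 'most'
    (trapezoidal sketch; breakpoints 0.5 and 0.75 are assumed)."""
    return max(0.0, min(1.0, (p - 0.5) / 0.25))

# A crisp proportion of 0.7 black balls is compatible with the perception
# "most balls are black" to an intermediate degree, while 0.4 is not
# compatible at all and 0.9 is fully compatible.
```

This makes the countertraditional answer concrete: a crisp measurement is graded against the fuzzy proportion rather than replaced by a single subjective number.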

**FIGURE 2** Definition of *most* and related perceptual quantifiers. Note that *most* is context-dependent.

An interesting point is that even if I know that 80% of the balls are black, it may suffice—for some purposes—to employ a perception of the desired probability rather than its numerical value. In this instance, we are moving, countertraditionally, from measurements to perceptions to exploit the tolerance for imprecision. This is the basic rationale for the use of words in place of numbers in many of the applications of fuzzy logic, especially in the realm of consumer products [41, 63].

An additional rationale is that for humans it is easier to base a decision on information that is presented in a graphical or pie-chart form rather than as reams of numbers. In this case, as in many others, there is a significant advantage in moving from measurements to perceptions.

An important conclusion that emerges is that, in addition to methodologies in which we follow the tradition of moving from perceptions to measurements, we need methodologies in which we move, countertraditionally, from measurements to perceptions (Figure 3). It is this conclusion that serves as the genesis for the computational theory of perceptions. More fundamentally, it is a conclusion that has important implications for the course of the evolution of science, especially in those fields, e.g., economics, in which perceptions play an important role.

**FIGURE 3** Evolution of science.

In the computational theory of perceptions, perceptions are not dealt with directly. Rather, they are dealt with through their description as words or propositions expressed in a natural or synthetic language. Simple examples of perceptions described in a natural language are the following:

• several large balls

• Mary is young

• Mary is much older than Carol

• Robert is very honest

• overeating causes obesity

• precision carries a cost

• Mary is telling the truth

• it is likely that Robert knows Mary

• it is very likely that there will be a significant increase in the price of oil in the near future

In fact, a natural language may be viewed as a system for describing perceptions. There are many ways in which perceptions can be categorized. A system that is important for our purposes is the following:

1. Perceptions that are descriptions of unary relations

2. Perceptions that are descriptions of binary relations

3. Perceptions that are descriptions of functions

Examples (see Figure 4):

**FIGURE 4** Perception-based function representation.

if *X* is *small* then *Y* is *small*

if *X* is *medium* then *Y* is *large*

if *X* is *large* then *Y* is *small*

4. Perceptions that are descriptions of systems. A system is assumed to be associated with sequences of inputs *X*1, *X*2, *X*3, …; sequences of outputs *Y*1, *Y*2, *Y*3, …; sequences of states *S*1, *S*2, *S*3, …; the state transition function

*St*+1 = *f*(*St*, *Xt*), *t* = 1, 2, 3, …

and the output function

*Yt* = *g*(*St*, *Xt*)

in which *Xt*, *Yt*, and *St* denote the values of input, output, and state at time *t*. The inputs, outputs, states, *f*, and *g* are assumed to be perception-based.

Perception of dependencies involving functions and relations plays a pivotal role in human decision making. Thus, when we have to decide which action to take, we base our choice on the knowledge of a perception-based model that relates the outputs (consequences) to the inputs (actions). This is what we do when we park a car, in which case what we know is a perception-based model of the kinematics of the car that we are parking. In a more general setting, the perception-based structure of decision making is shown in Figure 6.

**FIGURE 5** Perception-based model of a system.

**FIGURE 6** Perception-based human decision-making and control.

By adopting as the point of departure in CTP the assumption that perceptions are described in a natural or synthetic language, we are moving the problem of manipulation of perceptions into familiar territory where we can deploy a wide variety of available methods. However, based as they are on predicate logic and various meaning-representation schemes, the available methods do not have sufficient expressive power to deal with perceptions. For example, the meaning of the simple perception *most Swedes are tall* cannot be represented through the use of any meaning-representation scheme based on two-valued logic. If this is the case, then how could we represent the meaning of *it is very unlikely that there will be a significant increase in the price of oil in the near future*?

In CTP, what is used for purposes of meaning-representation and computing with perceptions is a fuzzy-logic-based methodology called computing with words (CW) [61]. As its name suggests, computing with words is a methodology in which words are used in place of numbers for computing and reasoning. In CW, a word is viewed as a label of a granule, that is, a clump of points (objects) drawn together by indistinguishability, similarity, proximity, or functionality. Unless otherwise specified, a granule is assumed to be a fuzzy set. In this sense, granulation may be interpreted as fuzzy partitioning. For example, when granulation is applied to a human head, the resulting granules are the nose, forehead, scalp, cheeks, chin, etc.

Basically, there are four principal rationales for the use of CW.

(a) The *don’t know *rationale. In this case, the values of variables and/or parameters are not known with sufficient precision to justify the use of conventional methods of numerical computing. An example is decision-making with poorly defined probabilities and utilities.

(b) The *don’t need *rationale. In this case, there is a tolerance for imprecision that can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. An example is the problem of parking a car. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

(c) The *can’t solve *rationale. In this case, the problem cannot be solved through the use of numerical measurements and numerical computing. An example is the problem of automation of driving in city traffic.

(d) The *can’t define *rationale. In this case, a concept that we wish to define is too complex to admit of definition in terms of a set of numerical criteria. A case in point is the concept of causality. Causality is an instance of what may be called an *amorphic *concept.

The point of departure in CW—the initial data set (IDS)—is a collection of propositions expressed in a natural language. A premise—that is, a constituent proposition in IDS—is interpreted as an implicit constraint on an implicit variable. For purposes of computation, the premises are expressed as canonical forms that serve to explicitate the implicit variables and the associated constraints. Typically, a canonical form is expressed as *X* isr *R*, where *X* is a variable, *R* is a constraining relation, and isr is a variable copula in which a value of the variable *r* defines the way in which *R* constrains *X* (Figure 7). The constraints can assume a variety of forms, among which are possibilistic, probabilistic, veristic, functional, and random-set types [63].
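As a data-structure sketch, a canonical form *X* isr *R* can be held as a triple (X, r, R). The class below is purely illustrative (the names and the rendering convention are assumptions, not part of the text):

```python
from dataclasses import dataclass

@dataclass
class CanonicalForm:
    """Sketch of a canonical form 'X isr R': the copula variable r selects
    how the relation R constrains the variable X."""
    X: str  # constrained variable, e.g. "Age(Mary)"
    r: str  # copula value, e.g. "d" (disjunctive/possibilistic), "p" (probabilistic)
    R: str  # constraining relation, e.g. "young"

    def __str__(self):
        # By convention, isd is abbreviated to the plain copula "is".
        copula = "is" if self.r == "d" else f"is{self.r}"
        return f"{self.X} {copula} {self.R}"

cf = CanonicalForm("Age(Mary)", "d", "young")
```

Printing `cf` yields "Age(Mary) is young", mirroring how explicitation places the constrained variable and the constraining relation in evidence.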

**FIGURE 7** Translation/explicitation and canonical form.

In CW, the constraints are propagated from premises to conclusions through the use of rules of inference in fuzzy logic, among which the generalized extension principle plays the principal role. The induced constraints are retranslated into a natural language, yielding the terminal data set (TDS). The structure of CW is summarized in Figure 8.

**FIGURE 8** Structure of computing with words.

A key component of CW is a system for representing the meaning of propositions expressed in a natural language. This system is referred to as *constraint-centered semantics of natural languages* (CSNL). A summary of CSNL is presented in the following.

Issues related to meaning representation have long played major roles in linguistics, logic, the philosophy of language, and AI. In the literature on meaning, one can find a large number of methods of meaning representation based on two-valued logic. The problem, as was alluded to already, is that conventional methods of meaning representation do not have sufficient expressive power to represent the meaning of perceptions.

In a departure from convention, the constraint-centered semantics of natural languages is based on fuzzy logic. In essence, the need for fuzzy logic is dictated by the f-granularity of perceptions. The concept of f-granularity is pervasive in human cognition and plays a key role in fuzzy logic and its applications. By contrast, f-granularity is totally ignored in traditional logical systems. Considering the basic importance of f-granularity, it is hard to rationalize this omission.

The key idea underlying CSNL is that the meaning of a proposition is a constraint on a variable, with the understanding that a proposition is an instantiation of a relation. In the case of mathematical languages, the constraint is explicit, as in *a* ≤ *X* ≤ *b*. By contrast, in the case of natural languages, the constraints are, in general, implicit. Thus, meaning representation in CSNL is, in effect, a procedure that explicitates the constrained variable and the relation that constrains it. The concept of CSNL is closely related to that of test-score semantics [58].

In more specific terms, the point of departure in CSNL is a set of four basic assumptions.

1. A proposition, *p*, is an answer to a question, *q*. In general *q *is implicit rather than explicit in *p*.

2. The meaning of *p* is a constraint on an instantiated variable. In general, both the variable and the constraint to which it is subjected are implicit in *p*. The canonical form of *p*, *CF*(*p*), places in evidence the constrained variable and the constraining relation (Figure 7).

3. A proposition, *p*, is viewed as a carrier of information. The canonical form of *p *defines the information that *p *carries.

4. In CTP, reasoning is viewed as a form of computation. Computation with perceptions is based on propagation of constraints from premises (antecedent propositions) to conclusions (consequent propositions).

In one form or another, manipulation of constraints plays a central role in a wide variety of methods and techniques, among which are mathematical programming, constraint programming, logic programming, and qualitative reasoning. However, in these methods and techniques, the usual assumption is that a constraint on a variable *X *is expressible as *X *∈ *A*, where *A *is a crisp set, e.g., *a *≤ *X *≤ *b*. In other words, conventional constraints are crisp and possibilistic in the sense that what they constrain are the possible values of variables.

If our goal is to represent the meaning of a proposition drawn from a natural language as a constraint on a variable, then what is needed is a variety of constraints of different types—a variety that includes the standard constraint *X* ∈ *A* as a special case. This is what underlies the concept of a generalized constraint [60] in the constraint-centered semantics of natural languages.

A generalized constraint is represented as

*X* isr *R*

where isr, pronounced ezar, is a variable copula that defines the way in which *R* constrains *X*. More specifically, the role of *R* in relation to *X* is defined by the value of the discrete variable *r*. The values of *r* and their interpretations are defined below.

As an illustration, when *r* = *e*, the constraint is an equality constraint and is abbreviated to =. When *r* takes the value *d*, the constraint is disjunctive (possibilistic) and isd, abbreviated to is, leads to the expression

*X* is *R*

in which *R* is a fuzzy relation that constrains *X* by playing the role of the possibility distribution of *X* [17, 60]. Additional examples are shown in Figure 9.

**FIGURE 9** Examples of constraints.

As was alluded to already, the key idea underlying the constraint-centered semantics of natural languages is that the meaning of a proposition, *p*, may be represented as a generalized constraint on a variable. Schematically, this is represented as

*p* → *X* isr *R*

with the understanding that the target language of translation is the language of generalized constraints, that is, the Generalized Constraint Language (GCL) [63] (Figure 10). In this perspective, translation is viewed as an explicitation of the constrained variable, *X*, the defining copula variable, *r*, and the constraining relation, *R*. In general, *X*, *r*, and *R* are implicit rather than explicit in *p*. Furthermore, *X*, *r*, and *R* depend on the question to which *p* is an answer. The result of translation/explicitation is the canonical form of *p*.

**FIGURE 10** Generalized constraint language.

As a very simple example, consider the proposition

*p*: *Mary is young*

Assuming that the question is: How old is Mary?, the meaning of *p* would be represented as the canonical form

*Age*(*Mary*) is *young*

where *Age*(*Mary*) is the constrained variable; *young* is the constraining relation; and the constraint defines the possibility distribution of *Age*(*Mary*). If the membership function of *young* is defined as shown in Figure 1, then the same function defines the possibility distribution of *Age*(*Mary*). More specifically, if the grade of membership of, say, 25 in *young* is 0.8, then the possibility that Mary is 25 given that Mary is young is 0.8.
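Numerically, the constraint equates possibility with membership: Poss{Age(Mary) = u} = μ*young*(u). A discretized sketch, with grades that are assumed values chosen to match the 0.8-at-25 example:

```python
# Possibility distribution of Age(Mary) induced by "Mary is young".
# The membership grades below are illustrative assumptions.
mu_young = {20: 1.0, 25: 0.8, 30: 0.5, 35: 0.2, 40: 0.0}

def poss_age_mary(u):
    """Poss{Age(Mary) = u} given the constraint 'Age(Mary) is young'."""
    return mu_young.get(u, 0.0)
```

Thus the possibility that Mary is 25 is 0.8, exactly the grade of membership of 25 in *young*.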

Similarly, if

*p*: *Mary is not very young*

the constrained variable is *Age*(*Mary*) but the constraining relation becomes

(²*young*)′

where ²*young* is an intensified version of *young*, that is, the result of applying the intensifier *very* to *young*, and (²*young*)′ is the complement of ²*young*, that is, the result of applying the negation *not* to *very young*.
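Under the common fuzzy-logic reading of these modifiers, *very* intensifies by squaring and *not* takes the complement, so the relation (²*young*)′ has membership 1 − μ*young*(u)². A sketch, in which the linear shoulder of *young* is an assumption:

```python
def very(mu):
    """Intensifier: squaring concentrates the membership function."""
    return lambda u: mu(u) ** 2

def complement(mu):
    """Negation: pointwise complement of the membership function."""
    return lambda u: 1.0 - mu(u)

def mu_young(u):
    # Illustrative: full membership up to 25, linear shoulder on [25, 40].
    return max(0.0, min(1.0, (40.0 - u) / 15.0))

mu_not_very_young = complement(very(mu_young))
```

For an age with μ*young* = 0.5, the grade of membership in *not very young* is 1 − 0.25 = 0.75.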

Proceeding further, consider the proposition

*p*: *Mary is much older than Carol*

In this case, the constrained variable is

(*Age*(*Mary*), *Age*(*Carol*))

and the canonical form of *p* is

(*Age*(*Mary*), *Age*(*Carol*)) is *much.older*

where *much.older* is the constraining relation.

The constraining relation *much.older* may be interpreted as a perception that is defined by a collection of fuzzy if–then rules relating perceived values of *Age*(*Mary*) and *Age*(*Carol*).

An important point that is made in this example is that, in general, the constraining relation *R *may be defined as a composite perception which is described by a collection of fuzzy if–then rules, with each rule playing the role of an elementary perception.

As a further example, consider the proposition (perception)

*p*: *most Swedes are tall*

In this case, the constrained variable, *X*, is the proportion of tall Swedes among Swedes, which is expressed as the relative sigma-count [59]:

*X* = Σ*Count*(*tall.Swedes*/*Swedes*)

More specifically, if we have a population of Swedes {Swede1, …, Swede*N*}, with the height of Swede*i*, *i* = 1, …, *N*, being *hi* and the grade of membership of *hi* in *tall* being μ*tall*(*hi*), then

*X* = (1/*N*) Σ*i* μ*tall*(*hi*), *i* = 1, …, *N*

Thus, the constrained variable is

(1/*N*) Σ*i* μ*tall*(*hi*)

and the constraining relation is *most*. The canonical form of *p* may be expressed as

(1/*N*) Σ*i* μ*tall*(*hi*) is *most*
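The relative sigma-count, and the degree to which a population satisfies the perception, can be computed directly. The membership functions for *tall* and *most* below are illustrative assumptions:

```python
def mu_tall(h_cm):
    """Assumed membership of a height (cm) in 'tall': shoulder on [170, 190]."""
    return max(0.0, min(1.0, (h_cm - 170.0) / 20.0))

def mu_most(p):
    """Assumed membership of a proportion in 'most': shoulder on [0.5, 0.75]."""
    return max(0.0, min(1.0, (p - 0.5) / 0.25))

def sigma_count(heights):
    """Relative sigma-count of tall Swedes: average membership in 'tall'."""
    return sum(mu_tall(h) for h in heights) / len(heights)

heights = [165, 175, 182, 190, 195]
truth = mu_most(sigma_count(heights))  # degree of "most Swedes are tall"
```

The sigma-count averages *grades* of tallness rather than counting crisp talls, which is precisely why the constrained variable is a proportion on [0, 1].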

A concept that plays an important role in the representation of meaning is that of the depth of explicitation. In essence, depth of explicitation is a measure of the difficulty of representing the meaning of a proposition, *p*, as an explicit constraint on a variable, that is, as its canonical form *CF*(*p*). As an illustration, the following propositions are listed in the order of *increasing *depth.

• *Mary is young *

• *Mary is not very young *

• *Mary is much older than Carol *

• *most Swedes are tall *

• *Carol lives in a small city near San Francisco *

• *Robert is honest *

• *high inflation causes high interest rates *

• *it is very unlikely that there will be a significant increase in the price of oil in the near future*.

For purposes of illustration, consider the problem of explicitation of the last proposition in the list,

*p*: *it is very unlikely that there will be a significant increase in the price of oil in the near future*

Assume that the constrained variable is the probability of the event *E*, where

*E*: *significant increase in the price of oil in the near future*

Furthermore, assume that *E* contains the secondary event:

*E*secondary: *increase in the price of oil in the near future*

The relationship between *E* and *E*secondary is represented as a semantic network in Figure 11.

**FIGURE 11** Semantic network representation of the perception *it is very unlikely that there will be a significant increase in the price of oil in the near future*.

Assuming that the labels *significant increase* and *near future* are defined by their membership functions as shown in Figure 12, we can compute, for any specified price time-function, the degree to which it fits the event *E*. In effect, this means that *E* plays the role of a fuzzy event [55].

**FIGURE 12** Fuzzy events *E* and *E*secondary.

The constraining relation is the fuzzy probability *very unlikely*, which is related to *likely* by

*very.unlikely* = ²(*ant*(*likely*))

where *ant*(*likely*) is the antonym of *likely* and ²(*ant*(*likely*)) is the result of intensification of *ant*(*likely*) by the intensifier *very* (Figure 13). Combining these results, the canonical form of *p* may be expressed as

*Prob*(*E*) is ²(*ant*(*likely*))

In the computational theory of perceptions, reasoning with perceptions is viewed as a process of generalized constraint propagation from premises to a conclusion, with the conclusion playing the role of an answer to a question.

**FIGURE 13** Definitions of *likely*, *unlikely*, and *very likely*.
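The relation between these labels can be sketched with standard definitions: the antonym mirrors a membership function about the midpoint of the probability scale, μ*ant*(*likely*)(p) = μ*likely*(1 − p), and *very* intensifies by squaring. The S-shaped μ*likely* below is an assumption, not the curve of Figure 13:

```python
def mu_likely(p):
    """Assumed membership of a probability p in 'likely': shoulder on [0.4, 0.8]."""
    return max(0.0, min(1.0, (p - 0.4) / 0.4))

def ant(mu):
    """Antonym: mirror the membership function about the midpoint of [0, 1]."""
    return lambda p: mu(1.0 - p)

def very(mu):
    """Intensifier: squaring concentrates the membership function."""
    return lambda p: mu(p) ** 2

# very.unlikely = intensification of the antonym of likely
mu_very_unlikely = very(ant(mu_likely))
```

Small probabilities then receive full membership in *very unlikely*, while probabilities that are at all *likely* receive little or none.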

As a very simple example, assume that the premises are the perceptions

*p*1: *Mary is young*

*p*2: *Carol is a few years older than Mary*

and the question is

*q*: *How old is Carol?*

Explicitation of *p*1 and *p*2 leads to the constraints

*Age*(*Mary*) is *young*

*Age*(*Carol*) is (*Age*(*Mary*) + *few*)

Applying the rules of constraint propagation that are discussed in the following leads to a constraint that constitutes the answer to *q*. Specifically,

*Age*(*Carol*) is (*young* + *few*)

In this expression, *young* and *few* play the role of fuzzy numbers and their sum can be computed through the use of fuzzy arithmetic [28].
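The fuzzy-arithmetic sum of two fuzzy numbers can be carried out level by level on α-cuts: at each level α, the cut of the sum is the interval sum of the cuts. A sketch with triangular stand-ins for *young* and *few* (the parameters are assumptions, not values from the text):

```python
def tri_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def add_cuts(cut1, cut2):
    """Interval addition: the alpha-cut of the sum of two fuzzy numbers."""
    return (cut1[0] + cut2[0], cut1[1] + cut2[1])

young = (20.0, 25.0, 35.0)  # triangular stand-in for "young"
few = (2.0, 4.0, 7.0)       # triangular stand-in for "few years"

# Alpha-cut of young + few at alpha = 0.5:
cut = add_cuts(tri_cut(*young, 0.5), tri_cut(*few, 0.5))
```

Sweeping α over [0, 1] reconstructs the full membership function of the sum, so the answer to the question remains a fuzzy number rather than a point value.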

The basic structure of the reasoning process in CTP is shown in Figure 14. The first step involves description of the given perceptions as propositions expressed in a natural language, resulting in the initial data set IDS. The second step involves translation of propositions in IDS into the generalized constraint language GCL, resulting in antecedent constraints. The third step involves translation of the question into GCL. The fourth step involves augmentation of antecedent constraints, resulting in the augmented data set ADS. ADS consists of constraints induced by IDS and the external knowledge base set KBS. Application of rules governing generalized constraint propagation to constraints in ADS leads to consequent constraints in the terminal data set TDS. The fifth and last step involves retranslation of consequent constraints into the answer to *q*.

**FIGURE 14** Basic structure of reasoning with perceptions.

Generalized constraint propagation is a process that involves successive application of rules of combination and modification of generalized constraints. In generic form, the principal rules governing constraint propagation are of the type exemplified by the compositional rule

*X* isr *A*; (*X*, *Y*) iss *B* ⟹ *Y* ist *C*

These rules coincide with the generic form of rules of inference in fuzzy logic.

In these rules, the dependence of *C* on *A* and *B* is determined by *r*, *s*, and *t*. For example, in the case of possibilistic constraints, *r* = *s* = *t* = blank, and the compositional rule assumes the form

*X* is *A*; (*X*, *Y*) is *B* ⟹ *Y* is *A* · *B*

where *A* · *B* is the composition of relations *A* and *B*. Similarly, in the case of probabilistic constraints, we have

*X* isp *A*; (*Y*|*X*) isp *B* ⟹ *Y* isp *A* · *B*

where *B* is the conditional probability distribution of *Y* given *X*, and the consequent constraint expresses the familiar Bayesian rule of combination of probabilities.
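For possibilistic constraints on finite domains, the composition *A* · *B* reduces to sup–min composition: μ*C*(y) = sup*x* min(μ*A*(x), μ*B*(x, y)). A sketch with assumed grades:

```python
def compose(A, B, X, Y):
    """Sup-min composition: A maps x -> grade, B maps (x, y) -> grade.
    Returns the induced possibility distribution y -> grade."""
    return {y: max(min(A[x], B.get((x, y), 0.0)) for x in X) for y in Y}

# Illustrative finite domains and membership grades (all assumed).
X = ["x1", "x2"]
Y = ["y1", "y2"]
A = {"x1": 0.8, "x2": 0.4}
B = {("x1", "y1"): 0.6, ("x1", "y2"): 1.0,
     ("x2", "y1"): 0.9, ("x2", "y2"): 0.2}

C = compose(A, B, X, Y)
```

For instance, μ*C*(y1) = max(min(0.8, 0.6), min(0.4, 0.9)) = 0.6: the constraint on *X* is propagated through the joint constraint to induce a constraint on *Y*.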

The rules governing generalized constraint propagation become more complex when the constituent constraints are heterogeneous. For example, if the constraint on *X* is probabilistic:

*X* isp *P*

and the constraint on (*X*, *Y*) is possibilistic:

(*X*, *Y*) is *Q*

then the constraint on *Y* is of random-set type, i.e.:

*Y* isrs *R*

Such constraints play a central role in the Dempster–Shafer theory of evidence [48].

The principal rule of inference in the computational theory of perceptions is the *generalized extension principle* [63]. For possibilistic constraints, it may be expressed as

*f*(*X*) is *R* ⟹ *g*(*X*) is *g*(*f*−1(*R*))

In this constraint-propagation rule, *f*(*X*) is *R* plays the role of an antecedent constraint that is an explicitation of a given perception or perceptions; *X* is the constrained variable; *f* is a given function; *R* is a relation that constrains *f*(*X*); *g* is a given function; and *f*−1(*R*) is the preimage of *R*. In effect, *f*(*X*) is *R* is a generalized constraint that represents the information conveyed by the antecedent perception(s), while *g*(*X*) is *g*(*f*−1(*R*)) defines the induced generalized constraint on a specified function of *X* (Figure 15).

**FIGURE 15** Generalized extension principle.

As a simple illustration of the generalized extension principle, assume that the initial data set is the perception

*p*: *most Swedes are tall*

and the question is

*q*: *What is the average height of Swedes?*

Referring to our previous discussion of the example, we note that the canonical forms of *p* and *q* are

(1/*N*) Σ*i* μ*tall*(*hi*) is *most*

(1/*N*) Σ*i* *hi* is ?*B*

where ?*B* denotes the relation that constrains the answer to *q*.

Using the generalized extension principle reduces the determination of ?*B* to the solution of the constrained maximization problem [63]:

μ*B*(*v*) = sup*h*1, …, *hN* μ*most*((1/*N*) Σ*i* μ*tall*(*hi*))

subject to the constraint

*v* = (1/*N*) Σ*i* *hi*

The solution is a fuzzy interval since the initial data set is a proposition that describes a perception.

In a general setting, application of the generalized extension principle transforms the problem of reasoning with perceptions into the problem of constrained maximization of the membership function of a variable that is constrained by a question. The example considered above is a simple instance of this process.
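On a small discretized domain, this constrained maximization can be carried out by brute force: enumerate height profiles, keep those whose average equals the candidate value *v*, and maximize μ*most* of the sigma-count over them. All domains and membership functions below are illustrative assumptions:

```python
from itertools import product

def mu_tall(h):
    """Assumed membership of a height (cm) in 'tall'."""
    return max(0.0, min(1.0, (h - 170.0) / 20.0))

def mu_most(p):
    """Assumed membership of a proportion in 'most'."""
    return max(0.0, min(1.0, (p - 0.5) / 0.25))

def mu_B(v, grid, n=3, tol=1e-9):
    """Membership of v in the answer ?B: sup of mu_most(sigma-count) over
    all height profiles on the grid whose average equals v."""
    best = 0.0
    for hs in product(grid, repeat=n):
        if abs(sum(hs) / n - v) < tol:
            prop = sum(mu_tall(h) for h in hs) / n
            best = max(best, mu_most(prop))
    return best

grid = [160.0, 170.0, 180.0, 190.0]
# mu_B(180.0, grid) is the degree to which an average height of 180 cm
# is compatible with "most Swedes are tall".
```

The result is a fuzzy set over candidate averages, in line with the observation that the solution is a fuzzy interval when the initial data set is itself a perception.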

The computational theory of perceptions which is outlined in this chapter may be viewed as a first step toward the development of a better understanding of ways in which the remarkable human ability to reason with perceptions may be mechanized. Eventually, the availability of a machinery for computing with perceptions may have a profound impact on theories in which human decision-making plays an important role. Furthermore, in moving countertraditionally from measurements to perceptions, we may be able to conceive and construct systems with higher MIQ (Machine IQ) than those we have today.

Research supported in part by NASA Grant NAC2-1177, ONR Grant N00014-96-1-0556, ARO Grant DAAH 04-961-0341 and the BISC Program of UC Berkeley.

[1] Albus, J. Outline for a theory of intelligence. *IEEE Trans. Systems, Man, and Cybernetics*. 1991;Vol. 21:473–509.

[2] Albus, J., Meystel, A. A reference model architecture for design and implementation of intelligent control in large and complex systems. *Int. J. Intelligent Control and Systems*. 1996;Vol. 1:15–30.

[3] Baker, A.B. Nonmonotonic reasoning in the framework of situation calculus. *Artificial Intelligence*. 1991;Vol. 49:5–23.

[4] Bobrow, D.G. *Qualitative Reasoning about Physical Systems*. Cambridge, MA: Bradford Books/MIT Press; 1985.

[5] Bowen, J., Lai, R., Bahler, D., Fuzzy semantics and fuzzy constraint networks. Proc. 1st IEEE Conf. Fuzzy Systems. San Francisco, 1992:1009–1016.

[6] Coiera, E.W. Qualitative superposition. *Artificial Intelligence*. 1992;Vol. 56:171–196.

[7] Dague, P. Symbolic reasoning with relative orders of magnitude. In: *Proc. Thirteenth Int. Joint Conf. Artificial Intelligence*. San Mateo, CA: Morgan Kaufmann; 1993.

[8] D’Ambrosio, B. Extending the mathematics in qualitative process theory. In: Widman L.E., Loparo K.A., Nielsen N.R., eds. *Artificial Intelligence, Simulation, and Modeling*. New York: Wiley-Interscience; 1989:133–158.

[9] Davis, E. Constraint propagation with interval labels. *Artificial Intelligence*. 1987;Vol. 24:347–410.

[10] Davis, E. *Representations of Commonsense Knowledge*. San Mateo, CA: Morgan Kaufmann; 1990.

[11] Kleer, J. de. Multiple representations of knowledge in a mechanics problem-solver. In: *Proc. 5th Int. Joint Conf. Artificial Intelligence*. San Mateo, CA: Morgan Kaufmann; 1977:299–304.

[12] Kleer, J. de, Bobrow, D.G. Qualitative reasoning with higher-order derivatives. In: *Proc. 4th National Conf. Artificial Intelligence*. San Mateo, CA: Morgan Kaufmann; 1984.

[13] Kleer, J. de, Brown, J.S. A qualitative physics based on confluences. *Artificial Intelligence*. 1984;Vol. 24:7–83.

[14] Doyle, J., Sacks, E. Markov analysis of qualitative dynamics. *Computational Intelligence*. 1991;Vol. 7:1–10.

[15] Dubois, D., Fargier, H., Prade, H., The calculus of fuzzy restrictions as a basis for flexible constraint satisfaction. Proc. 2nd IEEE Int. Conf. Fuzzy Systems. San Francisco, 1993:1131–1136.

[16] Dubois, D., Fargier, H., Prade, H. Propagation and satisfaction of flexible constraints. In: Yager R.R., Zadeh L.A., eds. *Fuzzy Sets, Neural Networks, and Soft Computing*. New York: Van Nostrand Reinhold; 1994:166–187.

[17] Dubois, D., Fargier, H., Prade, H., Possibility theory in constraint satisfaction problems: handling priority, preference and uncertainty. Applied Intelligence, 1996:287–309.

[18] Falkenhainer, B., Modeling without amnesia: making experience-sanctioned approximations. Proc. 6th Int. Workshop Qualitative Reasoning about Physical Systems. Edinburgh, Scotland, 1992.

[19] Forbus, K.D., Gentner, D. Causal reasoning about quantities. In: *Proc. 5th Annual Conf. of the Cognitive Science Society*. Palo Alto, CA: Lawrence Erlbaum and Associates; 1983:196–206.

[20] Forbus, K.D. Qualitative process theory. *Artificial Intelligence*. 1984;Vol. 24:85–168.

[21] Freuder, E.C. Synthesizing constraint expressions. *Communications of the ACM*. 1978;Vol. 21:958–966.

[22] Freuder, E.C., Snow, P., Improved relaxation and search methods for approximate constraint satisfaction with a maximum criterion. Proc. 8th Biennial Conf. Canadian Society for Computational Studies of Intelligence. Ontario, 1990:227–230.

[23] Geng, J.Z. Fuzzy CMAC neural networks. *J. Intelligent and Fuzzy Systems*. 1995;Vol. 3:87–102.

[24] Hayes, P.J. The naive physics manifesto. In: *Expert Systems in the Micro Electronic Age*. Edinburgh: Edinburgh University Press; 1979.

[25] Hayes, P.J. The second naive physics manifesto. In: Hobbs J.R., Moore R.C., eds. *Formal theories of the Commonsense World*. Norwood, NJ: Ablex Publishing Corp.; 1985:1–36.

[26] Kalagnanam, J., Simon, H.A., Iwasaki, Y. The mathematical bases for qualitative reasoning. *IEEE Expert*. 1991:11–19.

[27] Katai, O., Matsubara, S., Masuichi, H., Ida, M., et al. Synergetic computation for constraint satisfaction problems involving continuous and fuzzy variables by using Occam. In: Noguchi S., Umeo H., eds. *Transputer/Occam, Proc. 4th Transputer/Occam Int. Conf*. Amsterdam: IOS Press, 1992.

[28] Kaufmann, A., Gupta, M.M. *Introduction to Fuzzy Arithmetic: Theory and Applications*. New York: Van Nostrand; 1985.

[29] Klir, G., Yuan, B. *Fuzzy Sets and Fuzzy Logic*. Englewood Cliffs, NJ: Prentice Hall; 1995.

[30] Kuipers, B.J. Commonsense reasoning about causality: deriving behavior from structure. *Artificial Intelligence*. 1984;Vol. 24:169–204.

[31] Kuipers, B.J. *Qualitative Reasoning*. Cambridge, MA: MIT Press; 1994.

[32] Lano, K. A constraint-based fuzzy inference system. In: Barahona P., Pereira L.M., Porto A., eds. *EPIA 91, 5th Portuguese Conf. Artificial Intelligence*. Berlin: Springer-Verlag; 1991:45–59.

[33] Lenat, D., Guha, R. *Building Large Knowledge-Based Systems*. Reading, MA: Addison-Wesley; 1990.

[34] Mares, M. *Computation Over Fuzzy Quantities*. Boca Raton: CRC Press; 1994.

[35] Mavrovouniotis, M.L., Stephanopoulos, G. Reasoning with orders of magnitude and approximate relations. In: *Proc. 6th National Conf. Artificial Intelligence*. San Mateo, CA: Morgan Kaufmann; 1987:626–630.

[36] McCarthy, J., Hayes, P.J. Some philosophical problems from the standpoint of artificial intelligence. In: Meltzer R., Michie D., eds. *Machine Intelligence 4*. Edinburgh: Edinburgh University Press; 1969:463–502.

[37] McDermott, D. A temporal logic for reasoning about processes and plans. *Cognitive Science*. 1982;Vol. 6:101–155.

[38] Meystel, A., Planning in a hierarchical nested controller for autonomous robots. Proc. IEEE 25th Conf Decision and Control. Athens, Greece, 1986.

[39] Novak, V. Fuzzy logic, fuzzy sets, and natural languages. *Int. J. General Systems*. 1991;Vol. 20(No. 1):83–97.

[40] Novak V, Ramik M., Cerny M., Nekola J., eds. Fuzzy Approach to Reasoning and Decision-Making. Boston: Kluwer, 1992.

[41] Pedrycz, W., Gomide, F. *Introduction to Fuzzy Sets*. Cambridge, MA: MIT Press; 1998.

[42] Puccia, C.J., Levins, R. *Qualitative Modeling of Complex Systems*. Cambridge, MA: Harvard University Press; 1985.

[43] Raiman, O. Order of magnitude reasoning. In: *Proc. 5th National Conf. Artificial Intelligence*. San Mateo, CA: Morgan Kaufmann; 1986:100–104.

[44] Raiman, O. Order of magnitude reasoning. *Artificial Intelligence*. 1991;Vol. 51:11–38.

[45] Rasiowa, H., Marek, M. On reaching consensus by groups of intelligent agents. In: Ras Z.W., ed. *Methodologies for Intelligent Systems*. Amsterdam: North-Holland; 1989:234–243.

[46] Sakawa, M., Sawada, K., Inuiguchi, M. A fuzzy satisfying method for large-scale linear programming problems with block angular structure. *European J. Operational Research*. 1995;Vol. 81(No. 2):399–409.

[47] Sandewall, E. Combining logic and differential equations for describing real-world systems. In: *Proc. 1st Int. Conf Principles of Knowledge Representation and Reasoning*. San Mateo, CA: Morgan Kaufmann; 1989:412–420.

[48] Shafer, G. *A Mathematical Theory of Evidence*. Princeton: Princeton University Press; 1976.

[49] Shen, Q., Leitch, R. Combining qualitative simulation and fuzzy sets. In: Faltings B., Struss P., eds. *Recent Advances in Qualitative Physics*. Cambridge, MA: MIT Press, 1992.

[50] Shoham, Y., McDermott, D. Problems in formal temporal reasoning. *Artificial Intelligence*. 1988;Vol. 36:49–61.

[51] Struss, P. Problems of interval-based qualitative reasoning. In: Weld D., de Kleer J., eds. *Qualitative Reasoning about Physical Systems*. San Mateo, CA: Morgan Kaufmann; 1990:288–305.

[52] Vallee, R. *Cognition et Systeme*. Paris: l’Interdisciplinaire Systeme(s); 1995.

[53] Weld, D.S., de Kleer, J. *Readings in Qualitative Reasoning about Physical Systems*. San Mateo, CA: Morgan Kaufmann; 1990.

[54] Yager, R.R. Some extensions of constraint propagation of label sets. *Int. J. Approximate Reasoning*. 1989;Vol. 3:417–435.

[55] Zadeh, L.A. Probability measures of fuzzy events. *J. Mathematical Analysis and Applications*. 1968;Vol. 23:421–427.

[56] Zadeh, L.A. A fuzzy-set-theoretic interpretation of linguistic hedges. *J. Cybernetics*. 1972;Vol. 2:4–34.

[57] Zadeh, L.A. Outline of a new approach to the analysis of complex systems and decision processes. *IEEE Trans. Systems, Man, and Cybernetics*. 1973;Vol. SMC-3:28–44.

[58] Zadeh, L.A. Test-score semantics for natural languages and meaning representation via PRUF. In: Rieger B., ed. *Empirical Semantics*. Bochum, Germany: Brockmeyer; 1982:281–349.

[59] Zadeh, L.A. A computational approach to fuzzy quantifiers in natural languages. *Computers and Mathematics*. 1983;Vol. 9:149–184.

[60] Zadeh, L.A. Outline of a computational approach to meaning and knowledge representation based on the concept of a generalized assignment statement. In: Thoma M., Wyner A., eds. *Proc. Int. Seminar Artificial Intelligence and Man-Machine Systems*. Heidelberg: Springer-Verlag; 1986:198–211.

[61] Zadeh, L.A. Fuzzy logic = computing with words. *IEEE Trans. Fuzzy Systems*. 1996;Vol. 4:103–111.

[62] Zadeh, L.A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. *Fuzzy Sets and Systems*. 1997;Vol. 90:111–127.

[63] Zadeh, L.A. From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions. *IEEE Trans. Circuits and Systems*. 1999;Vol. 45:105–119.

[64] Barsalou, L.W. Perceptual symbol systems. *Behavioural and Brain Sciences*. 1999;Vol. 22:577–660.

**NARESH K. SINHA, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, L8S 4L7, Canada **

**MADAN M. GUPTA, Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Sask. S7N 5A9, Canada **

*The path that leads to scientific discovery often begins when one of us takes an adventurous step into the world of endless possibilities. Scientists intrigued by a mere glimpse of a subtle variation may uncover a clue or link, and from that fragment emerges an idea to be developed and worked into shape. *

Man has always dreamed of creating a portrait of himself, a machine with humanlike attributes such as locomotion, speech, vision, and cognition (memory, learning, thinking, adaptation, intelligence). Through his creative actions, he has been able to realize some of these dreams. In today’s technological society, there are machines possessing some of these human attributes, machines that emulate several human functions with tremendous capacity and capability: transportation systems for human locomotion, communications systems for human speech and vision, and computing systems for low-level human cognition. No doubt the machines that extend human muscular power (cars, tractors, airplanes, trains, robots, etc.) have brought luxury to human life. But who provides control to these mighty machines? Human intelligence, the human cognition.

When the computer first appeared in the early fifties, we admired it as an *artificial brain*, and we thought that we had succeeded in creating a low-level decision-making cognitive machine. We called it artificial intelligence and waited for many potential applications to evolve. However, after several years we realized that so-called *artificial intelligence* (AI) was, indeed, very artificial in nature. In AI, about half of what we hear is *not true*, and the other half is *not possible*.

Now we are moving into a new era of information technology: the technology for processing statistical information and cognitive information, the information originating from the human cognitive faculty. New computing theories with a sound biological basis have been evolving. This new field of computing falls under the category of *neural and soft computing systems*. The new computing technology has been evolving under disciplines such as optical computing, optoelectronics, and molecular computing, and seems to have the potential to surpass the micro-, nano-, and pico-technologies. Neural computing systems have also been shown (theoretically) to supplement the enormous processing power of the von Neumann digital computer. Hopefully, these new computing methods, with the neural architecture as a basis, will be able to provide a thinking machine: the low-level cognitive machine for which scientists have been striving for so long.

Today, we are in the process of designing neural computing-based information processing systems using the biological neural systems as a basis. The highly *parallel *processing and layered neuronal morphology with learning abilities of the human cognitive faculty—the brain—provides us with a new tool for designing a cognitive machine that can learn and recognize complicated patterns like human faces and Japanese characters. The theory of fuzzy logic, the basis for soft computing, provides mathematical power for the emulation of the higher-order cognitive functions—the thought and perception processes. A marriage between these evolving disciplines, such as neural computing, genetic algorithms and fuzzy logic, may provide a new class of computing systems—neural-fuzzy systems—for the emulation of higher-order cognitive power. The chaotic behavior inherent in biological systems—the heart and brain, for example—and the neuronal phenomena and the genetic algorithms are some of the other important subjects that promise to provide robustness to our neural computing systems.

Factors that are to be understood and learned for designing an intelligent control system are primarily the process characteristics, characteristics of the disturbances, and equipment operating practices. It is desirable to acquire and store this knowledge so that it can be easily retrieved and updated. Also, the system should be able to autonomously improve its performance as experience is gained.

Apart from the standard methods in the theory of feedback control systems, papers in the area of intelligent control have used ideas from several areas of artificial intelligence, such as knowledge-based systems (also known as expert systems), neural networks, learning, qualitative simulation, genetic algorithms, and fuzzy control. Although these appear to be a collection of somewhat unrelated techniques, attempts have been made to combine two or more of them to obtain better results. It is felt that it should be possible to combine these ideas in a suitable manner to develop a framework for intelligent control. In fact, several authors have already tried to combine fuzzy logic with neural networks (neuro-fuzzy control) in their approach to the intelligent control of robots.

Some of the earliest attempts at the intelligent control of robots were based on the introduction of some form of learning. The concept of learning in control systems has been around for some time. Learning control techniques have been developed as a means of improving the performance of poorly modelled nonlinear systems by exploiting the experience gained through online interaction with the actual plant. A very lucid description of the motivation and implementation of learning control systems has been given by Farrell and Baker in *Intelligent Control Systems: Theory and Applications* (edited by M.M. Gupta and N.K. Sinha, IEEE Press, 1996). Adaptation has been defined as the *ability to adjust to a specific situation*. On the other hand, learning can be defined as *gaining mastery through experience and storing it in memory*. Thus, a system that treats every different operating situation as new is limited to adaptive operation, whereas a learning system is able to exploit past experiences by correlating them with past situations to improve its performance in the future.

The implementation of learning control requires three capabilities: (i) performance feedback, (ii) memory, and (iii) training. To improve its performance, a learning system (like an adaptive control system) must receive performance feedback information based on the cost function that is to be optimized. This information is then used to adjust the parameters of the controller to improve the process performance. The accumulated knowledge must be stored in memory for future use. The training, or memory-adjustment, process is designed to automatically adjust the parameters of the controller when the process encounters an uncertain situation. It is evident that the main problem in the learning process is achieving speed and efficiency.
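The three capabilities above can be sketched in a few lines of code. The following is a minimal illustration, not an implementation from the chapter: the proportional control law, the gain-update rule, and all names are invented for exposition.

```python
# Minimal sketch of the three learning-control capabilities:
# (i) performance feedback, (ii) memory, (iii) training.
# The update rule and all names are illustrative assumptions.

class LearningController:
    def __init__(self, initial_gain=1.0, learning_rate=0.1):
        self.gain = initial_gain
        self.learning_rate = learning_rate
        self.memory = {}  # (ii) memory: situation -> learned gain

    def control(self, error):
        # Simple proportional law whose gain is tuned by experience.
        return self.gain * error

    def train(self, situation, cost, previous_cost):
        # (i) performance feedback: did the cost improve since last run?
        if cost > previous_cost:
            self.learning_rate *= -0.5  # reverse and shrink the step
        # (iii) training: adjust the controller parameter...
        self.gain += self.learning_rate
        # ...and store the experience for reuse in this situation.
        self.memory[situation] = self.gain

    def recall(self, situation):
        # A learning system reuses stored experience; a purely adaptive
        # system would treat every situation as new.
        return self.memory.get(situation, self.gain)
```

Here the "situation" key stands in for whatever features characterize the operating condition; correlating a new situation with stored experience is precisely where the problems of speed and efficiency arise.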

Another approach to intelligent control is through the use of a knowledge-based system, usually called an expert system. The field of automatic control has, for a long time, focused on the development of algorithms. With the availability of inexpensive digital computers, attempts are now being made to add other elements such as logic, reasoning, sequencing, and heuristics. Knowledge-based control is one alternative for obtaining controllers with improved functional features.

Knowledge-based control systems are based on the operator’s heuristic knowledge of the system to be controlled. The main objective is to extend the range of conventional control algorithms by encoding knowledge and heuristics about identification and adaptation in a supervisory expert system.
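As a deliberately simplified illustration of this idea, the sketch below layers two heuristic rules over a conventional PI control law. The rules, scaling factors, and names are assumptions invented for exposition, not operator knowledge from any real plant.

```python
# Illustrative sketch of a supervisory expert system over a
# conventional PI controller. Rules and factors are invented examples
# of encoded operator heuristics.

def supervise(oscillating, sluggish, kp, ki):
    """Adjust PI gains using heuristic rules about observed behavior."""
    if oscillating:      # heuristic: oscillation means gains too aggressive
        kp *= 0.8
        ki *= 0.8
    elif sluggish:       # heuristic: slow response means gain too timid
        kp *= 1.2
    return kp, ki

def pi_control(error, integral, kp, ki, dt=0.1):
    """Conventional PI law; the supervisor only retunes its gains."""
    integral += error * dt
    u = kp * error + ki * integral
    return u, integral
```

The point of the layered structure is that the conventional algorithm keeps doing the real-time control, while the encoded heuristics extend its operating range.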

An ideal expert control system should have the following objectives:

• It should be able to control in a satisfactory manner a large class of processes that may be time-varying and subject to a variety of disturbances.

• It should require minimal *a priori *knowledge about the process.

• It should be able to make intelligent use of all available prior knowledge.

• It should improve the control performance as it gathers more knowledge about the system.

• It should allow the user to enter performance specifications in simple qualitative terms like *small overshoot *and *as fast as possible*.

• It should monitor the performance of the system and detect problems with sensors, actuators, and other components.

• It should be possible for the user to get information about process dynamics, control performance, and the factors that limit the performance, in an easy manner.

• The heuristic and the underlying knowledge about the control system should be stored in a transparent manner so that it can be easily accessed, examined, and modified.
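To illustrate how a qualitative specification such as *small overshoot* might be entered, it can be encoded as a fuzzy set over the measured overshoot. The breakpoints below (2% and 10%) are illustrative assumptions, not values from the text.

```python
# Hedged sketch: a fuzzy membership function encoding the qualitative
# performance specification "small overshoot". Breakpoints are
# illustrative assumptions.

def small_overshoot(overshoot_pct):
    """Degree (0..1) to which an overshoot, in percent, is 'small'."""
    if overshoot_pct <= 2.0:
        return 1.0                       # fully small
    if overshoot_pct >= 10.0:
        return 0.0                       # not small at all
    return (10.0 - overshoot_pct) / 8.0  # linear transition between
```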

Some of these properties can also be found in many conventional control systems, but no existing control system satisfies all of them. One way to reach these goals is to visualize an expert control engineer as part of the control loop, equipped with a toolbox consisting of many algorithms for control, identification, measurement, monitoring, and
