I.D. Doukas
The terminology below can be found in many variations, slight or otherwise, across different scientific views, books, the Internet and other sources. In any case, the terminology selected here is fully consistent with the related bibliography of this paper. Furthermore, this bibliography is rich enough to allow the reader deeper explorations of the extremely wide scientific field of Artificial Intelligence.
The term intelligence is generally defined as the ability to learn effectively, to react adaptively, to make proper decisions, to communicate in language or images in a sophisticated way, and to understand.
Artificial intelligence (AI), the science and engineering of making intelligent machines, is a term coined by Prof. John McCarthy in 1956. AI is an umbrella term which includes, among others, evolutionary algorithms, genetic programming, artificial neural networks, cellular automata and fuzzy systems (Rajabi et al., 2009), (Kasabov, 1998), (Kalogirou, 2007), (McCarthy, 2007). The main objectives of AI
are to develop methods and systems for solving problems, usually solved by the
intellectual activity of humans, for example, image recognition, language and
speech processing, planning, and prediction, thus enhancing computer information
systems; and to develop models which simulate living organisms and the human
brain in particular, thus improving our understanding of how the human brain
works.
Knowledge (alternatively, the problem of dealing with knowledge) was for many years a research field for psychology and sociology. The evolution of AI transformed the knowledge problem into a problem of representing knowledge in computers.
Knowledge Engineering (KE) is a branch of Artificial Intelligence (AI). It is an area that mainly concentrates on activities involving knowledge (including knowledge acquisition, representation, validation, inference and explanation). It is a discipline devoted to integrating human knowledge into computer systems, that is, to building knowledge-based systems (Ding, 2001).
Heuristic (a word with Greek roots) means discovery. Heuristic methods are based
on experience, rational ideas, and rules of thumb. Heuristics are based more on
common sense than on mathematics. Heuristics are useful, for example, when the
optimal solution needs an exhaustive search that is not realistic in terms of time. In
principle, a heuristic does not guarantee the best solution, but a heuristic solution
can provide a tremendous shortcut in cost and time (Kasabov, 1998).
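To make this shortcut concrete, the following minimal Python sketch contrasts an exhaustive search with a simple nearest-neighbour heuristic on a small travelling-salesman instance; the random city data, the function names and the problem size are illustrative assumptions, not taken from the cited sources.

import itertools
import math
import random

def tour_length(cities, order):
    # Total length of a closed tour visiting the cities in the given order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exhaustive(cities):
    # Optimal but O(n!): quickly becomes unrealistic in terms of time as n grows.
    n = len(cities)
    return min(itertools.permutations(range(n)),
               key=lambda order: tour_length(cities, order))

def nearest_neighbour(cities):
    # Heuristic rule of thumb: always visit the closest unvisited city.
    # Fast, but the best solution is not guaranteed.
    unvisited = set(range(1, len(cities)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[order[-1]], cities[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

cities = [(random.random(), random.random()) for _ in range(8)]
print(tour_length(cities, exhaustive(cities)),
      tour_length(cities, nearest_neighbour(cities)))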
Soft Computing (SC) (Zadeh, 1994), (Kecman, 2001) (a term sometimes written as Softcomputing) is a concept introduced by professor Lotfi A. Zadeh in the early 1990s. SC is an evolving collection of methodologies for the representation of the ambiguity in human thinking. SC refers to a collection of computational
techniques in computer science, artificial intelligence, machine learning and some
engineering disciplines, which attempt to study, model, and analyze very complex
phenomena: those for which more conventional methods have not yielded low cost,
analytic, and complete solutions. Earlier computational approaches could model
and precisely analyze only relatively simple systems. More complex systems aris-
ing in biology, medicine, engineering, earth sciences, ecology, the humanities,
management sciences, and similar fields often remained intractable to conventional
mathematical and analytical methods. The core methodologies of SC are Fuzzy
Logic (FL), Neuro-Computing (NC) (or neural modeling, brain theory, (Artificial)
Neural Networks (ANN)), Probabilistic Reasoning (PR), Evolutionary Computa-
tion (EC) (especially the Genetic Algorithms (GA) and the Evolution Strategies
(ES)), chaotic systems, belief networks, and parts of learning theory. SC aims at exploiting the tolerance for imprecision and uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions.
Overall, there is a rather persistent confusion about the terms AI, SC, KE and their component branches. A few reasons (among others) are: their different ages of appearance on the scientific stage, the involvement of other disciplines (such as computer science, mathematics, etc.), and the fact that there is as yet no clear, official classification of the sectors/branches of AI. For example, FL, ANN, etc. appear to belong both to SC and to AI. In any case, for convenience, these overlaps will be resolved here by considering, for the rest of the paper, all the relevant tools as tools of AI/SC/KE.
2.1. Artificial Neural Networks (ANN) (Du and Swamy, 2006), (Rajabi et al., 2009), (Rao, 1995), (Taylor and Smith, 2006):
2.2. Fuzzy Systems (FS) (Du and Swamy, 2006), (Rajabi et al., 2009), (Syed and Cannon, 2004):
Fuzzy theory was first introduced in 1965 by professor Lotfi A. Zadeh (at the University of California, Berkeley, USA) to deal with conditions of uncertainty. The theory is able to describe many phenomena and variables and provides the ground for deduction, control and decision making under uncertainty. Fuzzy logic (FL) provides a means for treating uncertainty and computing with words. This is especially useful to mimic human recognition, which skillfully copes with uncertainty. Fuzzy systems (FS) are conventionally created from explicit knowledge expressed in the form of fuzzy rules, which are designed based on experts’ experience. A FS can explain its action by its fuzzy rules. FS can also be used for function approximation.
The synergy of FL and ANNs generates neurofuzzy systems, which inherit the
learning capability of ANNs and the knowledge-representation capability of FS.
Classical sets follow Boolean logic (i.e. either an element belongs to a set or not)
whereas fuzzy sets use the concept of degree of membership. The membership functions define the degree to which an input belongs to a fuzzy set. These membership functions are chosen empirically and optimized using sample input/output data. There are three points of view for defining fuzzy membership: the semantic import model, the similarity relation model, and experimental analysis.
In a general view, FS are applicable when the problem knowledge includes heuristic rules, but these rules are vague, ill-defined, approximate, or possibly contradictory.
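As a concrete illustration of degrees of membership, the following minimal Python sketch evaluates a triangular membership function; the function name, the parameter values and the example fuzzy set "warm" are illustrative assumptions rather than anything prescribed by the cited sources.

def triangular_membership(x, a, b, c):
    # Degree of membership of x in a triangular fuzzy set defined by (a, b, c).
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A crisp (Boolean) set either contains 25.0 or it does not; the fuzzy set
# "warm" assigns it a partial degree of membership instead.
print(triangular_membership(25.0, 15.0, 22.0, 30.0))  # prints 0.625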
The Genetic Algorithm (GA) is the best known and most studied among evolutionary algorithms, while the Evolution Strategy is more efficient for numerical optimization. Genetic algorithms require neither data sets nor heuristic rules, only a simple selection criterion to start with; they are very efficient when little is known at the outset (Kasabov, 1998). EC has been applied for the optimization of the structure or parameters of ANNs, FS and neurofuzzy systems. The hybridization of ANN, FL, and EC provides a powerful combination for solving engineering problems.
The first GAs were developed in the early 1970s by John Holland (at the University of Michigan, USA). GAs are inspired by the mechanism of natural selection, where stronger individuals are likely to be the winners in a competing environment. GAs have been defined as:
“... search algorithms based on the mechanics of natural selection and natural ge-
netics. They combine survival of the fittest among string structures with a struc-
tured yet randomized information exchange to form a search algorithm with some
of the innovative flair of human search. In every generation, a new set of artificial
creatures (strings) is created using bits and pieces of the fittest of the old; an occa-
sional new part is tried for good measure. While randomized, GAs are no simple
random walk. They efficiently exploit historical information to speculate on new
search points with expected improved performance”.
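The following minimal Python sketch follows the cycle described in the quotation above (selection of the fittest strings, recombination of "bits and pieces", and an occasional mutation); the toy fitness function and all parameter values are illustrative assumptions.

import random

def fitness(bits):
    # Toy selection criterion: maximize the number of ones in the bit string.
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50, p_mut=0.01):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half ("survival of the fittest").
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            # Recombination: one-point crossover of two parent strings.
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, length - 1)
            child = p1[:cut] + p2[cut:]
            # Mutation: an "occasional new part" flipped at random.
            children.append([b ^ 1 if random.random() < p_mut else b
                             for b in child])
        population = parents + children
    return max(population, key=fitness)

print(evolve())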
Evolution Strategies (ES), introduced in the early 1970s (Mai, 2010), have seen many improvements over the last decades. Nowadays, this approach can be regarded as an alternative to standard optimization techniques in many scientific areas, especially in cases where gradient methods such as the classical least-squares algorithm fail.
Compared to other optimization techniques, ES algorithms are relatively easy to implement, because the main idea behind them is very simple. They are universal, undemanding, close to reality and robust, and they can be considered as a compromise between volume-oriented and path-oriented search strategies. Once implemented, the same algorithm can be applied to a wide range of problems without any major changes. In many cases it is even sufficient just to set up the performance index that is specific to the problem at hand; one rarely needs any additional a priori insight into the mathematical/physical nature of the optimization task.
The only necessary condition for an ES to be applicable to a specific problem is the inherent existence of strong causality, not to be confused with weak causality. On the other hand, there is no guarantee of actually finding the global optimum. In addition, the convergence speed of an ES algorithm may be lower than that of alternative methods that are tuned to a specific problem.
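A minimal (1+1) Evolution Strategy sketch in Python is given below, illustrating that only a performance index is needed and no gradients; the objective function, the step-size adaptation constants and all other parameters are illustrative assumptions.

import random

def performance_index(x):
    # Illustrative performance index to be minimized (sphere function).
    return sum(v * v for v in x)

def one_plus_one_es(dim=3, sigma=1.0, iterations=500):
    parent = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iterations):
        # Offspring: parent plus a normally distributed mutation.
        child = [v + random.gauss(0.0, sigma) for v in parent]
        # Selection: keep the offspring only if it performs at least as well.
        if performance_index(child) <= performance_index(parent):
            parent = child
            sigma *= 1.1   # crude step-size adaptation after a success
        else:
            sigma *= 0.98  # shrink the step size after a failure
    return parent

print(performance_index(one_plus_one_es()))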
Although there are several differences between GA and ES, the barriers between them are nowadays becoming blurred, since both techniques are improved by borrowing ideas from each other.
In optimization problems, if the variables are discrete, they simply do not have derivatives (a good example is the ready-made cross-sectional area of structural members). In such a case, the HS (Harmony Search) algorithm offers an outlet, since it is based on a novel stochastic derivative. The HS algorithm is a phenomenon-mimicking algorithm, inspired by the improvisation process of (especially jazz) musicians. In the HS algorithm, each "musician" (i.e. a decision variable) "plays" (i.e. generates) a "note" (i.e. a value) so that, all together, they find a "best harmony" (i.e. a global optimum). Traditional optimization algorithms detect the direction towards the optimal solution using gradient information. In contrast, the stochastic derivative that characterizes the HS algorithm gives, for each value of a decision variable, a probability of being selected.
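The following minimal Python sketch of Harmony Search illustrates the improvisation idea on a continuous toy objective; the objective function and the parameter values are illustrative assumptions, although the parameter names (harmony memory considering rate, pitch adjusting rate) follow the usual HS terminology.

import random

def objective(x):
    # Illustrative objective to be minimized.
    return sum(v * v for v in x)

def harmony_search(dim=2, memory_size=10, iterations=1000,
                   hmcr=0.9, par=0.3, bandwidth=0.1, low=-5.0, high=5.0):
    # Harmony memory: a set of previously "played" solution vectors.
    memory = [[random.uniform(low, high) for _ in range(dim)]
              for _ in range(memory_size)]
    for _ in range(iterations):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                # Each "musician" (decision variable) picks a "note" from memory...
                value = random.choice(memory)[d]
                if random.random() < par:
                    # ...and occasionally adjusts its pitch slightly.
                    value += random.uniform(-bandwidth, bandwidth)
            else:
                # Otherwise it improvises a completely random note.
                value = random.uniform(low, high)
            new.append(value)
        # Replace the worst harmony in memory if the improvisation is better.
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

print(harmony_search())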
The collective behavior that emerges from a group of social insects has been
dubbed “Swarm Intelligence” (SI). Social insects work without supervision. In
fact, their teamwork is largely self-organized, and coordination arises from the dif-
ferent interactions among individuals in the colony. Although these interactions
might be primitive (one ant merely following the trail left by another, for instance),
taken together they result in efficient solutions to difficult problems (such as find-
ing the shortest route among myriad possible paths in Internet traffic).
SI is a design framework based on social insect behavior. Social insects such as
ants, bees, and wasps are unique in the way these simple individuals cooperate to
accomplish complex, difficult tasks. This cooperation is distributed among the en-
tire population, without any centralized control. Each individual simply follows a
small set of rules influenced by locally available information. This emergent be-
havior results in great achievements that no single member could complete by
themselves. Additional properties that swarm-intelligent systems possess include robustness against individual misbehavior or loss, the flexibility to change quickly in a dynamic environment, and an inherent parallelism or distributed action.
The main advantages of SI are flexibility, robustness and self-organization (i.e. the group needs relatively little supervision or top-down control).
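As a toy illustration of trail-following, the following Python sketch mimics the classical double-bridge situation, where ants choosing between a short and a long path purely on the basis of local pheromone information end up concentrating on the shorter one; the pheromone rules and constants are illustrative assumptions and not a full ant colony optimization algorithm.

import random

def double_bridge(ants=200, evaporation=0.05):
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}
    for _ in range(ants):
        # Each ant follows only local information: the pheromone on each path.
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        # Shorter paths are reinforced more strongly (ants return sooner).
        pheromone[path] += 1.0 / lengths[path]
        # Evaporation keeps the colony able to adapt.
        for key in pheromone:
            pheromone[key] *= (1.0 - evaporation)
    return pheromone

print(double_bridge())  # typically ends with more pheromone on the 'short' path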
2.6. Cellular Automata (CA) (Rajabi, 2009), (Wolfram MathWorld, 2010), (Wolfram, 2000):
On the other hand, the strongest points of ANNs are the representation of nonlinear mappings and their construction through training. As for FS, their behavior is comprehensible and "digestible", thanks to their logical structure and their stepwise inference procedures. As for ANNs, their behavior is summed up simply by the aforementioned «black box».
The possibility of combining these two components into a new system, called neuro-fuzzy control, is a rather recent consideration among scientists. Such a combination results in a new system armed with «weapons» from both the fuzzy and the neural worlds.
The main goal of the design of an intelligent system is to represent as adequately as possible the existing problem-domain knowledge in order to better approximate the goal function, which in most cases is not known a priori. In order for a solution to be achieved, there is a set of different methods to select from.
Figure 3. An indicative qualification of some intelligent systems
Depending on the type of problem and the available knowledge about it, different methods can be recommended for use (Figure 1). If the question is one of data population and available knowledge (expertise), then Figure 2 illustrates a topology of «solution spaces». Finally, a qualitative comparison of some intelligent systems is illustrated in Figure 3 (Taylor and Smith, 2006).
Geodesy and Geomatics offer "fertile land" for many AI/SC/KE applications. There is an increasing diffusion of such methods, and the IAG has already established IAG-WG 4.2.3 (dealing with the application of AI in Engineering Geodesy) (IAG-WG 4.2.3, 2010). A comparison of problems in Geodesy / Geomatics / Engineering Geodesy with problems in AI/SC/KE reveals noteworthy similarities (Kutterer, 2010). Both disciplines use methods based on mathematical stochastics, and both rely on modeling (of variables and parameters). Finally, the issue of learning, put in simple "geodetic words", is equivalent to the procedure of model selection and parameter identification.
The relevant bibliography is in fact huge and growing rapidly. The space here allows only for some indicative bibliography regarding just broad fields of Geodesy/Geomatics. For example, in the GIS field, Kirankumar and Jayaram (2008) offer an excellent review. The conclusion is that modeling of the environment and site selection, the analysis of spatial data, the integration of SC components, and decision support are some of the many essential topics where AI/SC/KE has a powerful influence (Kirankumar and Jayaram, 2008), (Rajabi et al., 2009), (Bartoněk, 2003). Turning to the GNSS area, there are plenty of applications dealing with GPS and navigation, covering a really wide range of cases, from atmospheric issues to geoid and space geodesy (Xu et al., 2002), (Doukas and Ioannidis, 1997), (Coulot et al., 2009), (Crowell, 1992), (Syed and Cannon, 2004), (Liu et al., 2007), (Zaletnyik et al., 2004).
Another excellent review, dealing with AI/SC/KE techniques as applied in Engineering Geodesy, is given by Kutterer (2010), while Adeli (2001) does the same for the science of Civil Engineering. There are plenty of common fields between Engineering Geodesy, Civil Engineering, Geodesy and Geomatics where geodetic methods (combined or not with AI/SC/KE tools) play key roles (for example ground deformation, landslides, geodetic control nets, etc.) (Haberler-Weber, 2005), (Einhorn, 2007), (Carbobe et al., 2008).
4. Conclusions
References