Artificial Intelligence: "Cogito, Ergo Sum"
- Descartes
CHAPTER I
INTRODUCTION
linked to his notion of brains and thought. "Cogito, ergo sum," said Descartes -
"I think, therefore I am" - basing the proof of his very existence on his
of years old. Plato and Aristotle were among the first to divide human
capabilities into two distinct areas: the physical body and the rational mind.
3. What is the nature of the brain? Can thought, feelings and emotions
4. The mechanistic view is the belief that the workings of the mind can
Containing about 100 billion cells with complex interrelations that are still only
but actually be intelligent in the same sense that people are intelligent? Will a
6. No matter what the scientists may say, many people instinctively feel
that the workings of the human mind can never be programmed into a machine
and that no computer will ever have a mind of its own. While conceding that
made to think in the sense that people think, to understand information rather
METHODOLOGY
that have out-performed them in terms of physical ability, and have greatly
enhanced their capabilities. Man's quest for developing more and more
sophisticated machines would reach its ultimate goal if he were able to create
machines that could actually think for themselves. Intelligent machines would
be able to perform tasks hitherto beyond his imagination. This dream of being
able to make a thinking machine, coupled with the invention of computers, has
led to a lot of research into developing artificial intelligence. It has also led to
the limits of man's achievements in this field might be. This study has been
intelligence artificially.
Scope
excluded.
Operational Definitions
10. The special terms used in this dissertation are defined below:-
specific problem.
subject.
themselves and assess what the goals of research in the field have been.
been laid down for judging any success that may be achieved in the
Intelligence. Chapter VII looks at the major projects that have been
VIII an attempt has been made to look into the future and visualise the
civilian fields.
the thinking principle): endowed with the faculty of reason: alert, bright, quick
are trying to develop machines that will possess these attributes. This requires a
responding the same way every time. If the response were to be the
behaviour.)
be placed in context.)
1. Janakiraman, V.S. et al. Foundations of Artificial Intelligence and Expert Systems. New Delhi,
2. Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. New York, Vintage, 1980, p. 26.
a situation.
adjusted accordingly.)
16. These abilities all come very easily to people. In fact, they are often
grouped under the heading of common sense, implying that there is nothing
subsequent paragraphs.
is that it is:
not constant but are linked to the current state of computer science. The
phrase, "at the moment...", in this definition, implies that, when an artificial
3. Rich, Elaine et al. Artificial Intelligence. New Delhi, Tata McGraw-Hill Publishing Company
Limited, 1991, p. 3.
intelligence technology is developed to an extent that humans no longer out-
beings shows that these are all mechanical mental activities. They include:
21. On the other hand, the activities in which human beings can, at the
22. Living beings that display intelligence "make sense out of what they
see and hear; and come up with new ideas seemingly out of thin air, using
common sense to make their way through a world that sometimes seems highly
4. Mishkoff, Henry C. Understanding Artificial Intelligence. New Delhi, B.P.B. Publications, 1986, p. 4.
23. Another definition states that :
26. One definition that takes this approach is that artificial intelligence
is:
reasoning processes."6
27. On the other hand, there is a school of thought that says that, even if
5. Ibid.
6. Janakiraman. Loc. cit.
28. The definitions discussed so far have concentrated on the
solving."7
programmes:
Human beings use heuristics to help them decide what to do. This precludes
7. Buchanan, Bruce G. et al. Rule-Based Expert Systems. Reading, MA, Addison-Wesley, 1984, p. 3.
8. Buchanan, Bruce G. The Encyclopaedia Britannica.
the requirement of having to completely rethink every problem with which one
approach into the process of using artificial intelligence to solve problems with
computers.
processes; and can organise large amounts of information more efficiently than
things. This definition focuses on the fact that this is the essence of intelligence.
useful way to understand it is to look at the areas of research that are being
9. Brattle Research Corporation. Artificial Intelligence and Fifth Generation Computer Technologies,
Boston, p. 5.
CHAPTER IV
Areas of Research
35. The activities that come most naturally to people are actually the
so naturally that people don't have to think about it at all, they may have great
difficulty in describing exactly how they did it. The more difficult a task is for
human beings, the more deliberate and conscious thought has to be devoted to
it. If the precise steps that are necessary to produce a certain result are known,
computer.
(e) Robotics.
EXPERT SYSTEMS
are currently designed to assist experts and not to replace them. Subjects like
system configuration are suitable areas for the development of expert systems.
draw conclusions from stored facts. Although they vary in their design, most
interface.
knowledge base.10
known as the inference engine. The inference engine controls how and
which heuristic search techniques are used to determine how the rules
expert system that communicates with the user. The user must be able
10. A knowledge base contains both declarative knowledge (facts about objects, events and situations)
and procedural knowledge (information about courses of action). Depending on the form of
knowledge representation chosen, the two types of knowledge may be separate or integrated.
to describe his problem to the expert system, and the system must be
able to respond with its recommendations. The user may also want to
ask the system to explain its "reasoning," or the system may request
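The interplay of knowledge base and inference engine described above can be illustrated with a toy forward-chaining interpreter, sketched here in Python for brevity. The facts and rules below are invented for the example and are far simpler than those of any real expert system:

```python
# Toy forward-chaining inference engine: repeatedly fires any rule
# whose conditions are all satisfied by the known facts, until no
# new facts can be derived.

def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return facts

# Hypothetical knowledge base: declarative facts plus production rules
# of the form "IF conditions THEN conclusion".
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]
facts = {"has_fever", "has_rash"}

derived = forward_chain(rules, facts)
```

The separation is the point: the rules can be replaced wholesale without touching the interpreter, which is what allows the knowledge engineer to revise the knowledge base independently of the inference engine.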
39. Knowledge engineers and domain experts work together for the
for the knowledge base11 and the knowledge engineer selects the development
developed, the knowledge engineer develops a set of rules for the domain
tested to see if the correct techniques were chosen. The knowledge engineer
11. To facilitate formalisation, researchers are looking for ways to reduce the amount of time required
to enter the information into the knowledge base. This however remains a relatively complex process.
The goal of simpler development tools is to enable the domain expert to create the knowledge base
himself. With automatic knowledge acquisition, the computer may one day be able to include
information from books in the knowledge base automatically, and with computer learning a computer
could discover and add new facts to its knowledge base.
12. Rule-based vs. Model-based Expert Systems. Most expert systems are rule-based, i.e.,
knowledge is represented as a series of production rules. This is because this technology is relatively
well developed. However, other approaches to expert systems are being investigated, and one of the
most promising is the model-based expert system. Model-based expert systems are especially useful
in diagnosing equipment problems or "troubleshooting". Unlike rule-based systems, which are based
on human expertise, model-based systems are based on knowledge of the structure and behaviour of
the devices they are designed to "understand". In effect, a model-based expert system includes a
"model" of a device that can be used to identify the causes of the device's failure. Although it remains
largely experimental, the model-based approach looks very promising for expert systems designed for
diagnosis and repair.
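The model-based approach can be illustrated in miniature: the system holds a "model" of the expected behaviour of each component of a device, and blames any component whose observed behaviour disagrees with the model. The device and the readings below are invented for the example:

```python
# Toy model-based diagnosis: the "model" records what each healthy
# component should output; a component whose observed output differs
# from the prediction is reported as the suspected failure.

def diagnose(model, observations):
    suspects = []
    for component, predicted in model.items():
        if observations.get(component) != predicted:
            suspects.append(component)
    return suspects

# Invented device model: expected output voltages of two components.
model = {"power_supply": 12.0, "amplifier": 24.0}
observed = {"power_supply": 12.0, "amplifier": 0.0}

faults = diagnose(model, observed)
```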
40. The knowledge engineer then revises the structure and implementation of the expert system until
for mineral and oil deposits, developing rule-based expert systems and
rather than in a computer language. The words would, however, still be typed
understanding printed text. While the technology for recognising scanned text
is already fairly well developed, computers are not very good at understanding
what it means and still cannot understand natural language as well as a typical
four-year-old child. Four problems that cause difficulties in natural language
understanding are:
man with the hammer." This could mean that the man he hit had
with Mary.
vague and inexact terminology. For example, how long is "a long time"? Consider
the following sentences:
(i) "He has been waiting in the doctor's office for a long
time."
time."
The phrase remains the same, and yet, the duration of time
the same phrase. Concepts are often not described with precision. Ones
with a situation.
(d) Incompleteness. One does not always say all of what one
means. Because one shares common experiences with other people, one
can usually omit many details without fear of being misunderstood; one
assumes that one's listeners can "read between the lines". Consider the
following example:
Did John eat the steak? Although it is not stated explicitly in the
story, one would assume that he did. After all, why else should he pay
understand what they are told even if it does not adhere to certain rules
person.
important conclusion that can be drawn from investigating the ways in which
solved when these vast amounts of knowledge are put into computers and
computers develop the ability to handle this information at speeds that permit
(b) Keyword Analysis. Key words in the text are found using
may be overlooked.
parts.
components:
49. All this makes natural language generation programmes even more
failed to recognise the fact that a machine cannot translate without first
understanding. They assumed that if the computer had access to lexical and
translate text from one language to the other. One famous result of this
approach was that when the sentence, "The spirit is willing, but the flesh is
weak", was translated from English into Russian and back to English again by
the same programme, the resulting sentence was "The vodka is good, but the
meat is rotten."
SPEECH PROCESSING
Speech Recognition
that a computer can recognise the words one speaks and understand what they
difficult to implement.
56. Researchers have developed three approaches to speech recognition
spoken with short but distinct pauses between them, thus "isolating"
each word from any context. This is the easiest and, so far, the most
successful technique.
with the rapid pace of normal conversation. This is the most difficult
57. To help analyse speech signal patterns, words can be broken into
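The isolated-word approach described above is commonly explained through template matching: the spoken pattern is compared against stored templates of known words. The sketch below uses dynamic time warping, a standard template-matching technique, although the "features" here are invented single numbers rather than real acoustic measurements:

```python
# Toy dynamic time warping (DTW): aligns two sequences of acoustic
# features and returns the total distance of the best alignment,
# tolerating words spoken faster or slower than the template.

def dtw(a, b):
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[len(a)][len(b)]

def recognise(spoken, templates):
    # Pick the stored template word with the smallest warped distance.
    return min(templates, key=lambda word: dtw(spoken, templates[word]))

# Invented one-dimensional "feature" templates for two words.
templates = {"yes": [1.0, 3.0, 2.0], "no": [5.0, 5.0, 4.0]}
word = recognise([1.0, 1.0, 3.0, 2.0], templates)
```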
Speech Understanding
58. For computers to understand speech, they must select the most
likely meaning of what has been said from several possible interpretations.
Several techniques are used to make the selection. Some speech understanding
programmes begin with the first word in a sentence and attempt to interpret the
words in a sequence. Although this technique has been used successfully, it can
lead to problems if the first word happens to be misinterpreted. In another
technique, called island driving, the programme selects the words within a
sentence that are most likely to have been interpreted correctly. The
programme then tries to connect these "word islands" by selecting the most
interpreted words. This approach is useful because some words (often the most
important ones, fortunately) are enunciated clearly, while other parts of the
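The island-driving strategy described above can be sketched as a toy, in Python for brevity. The word hypotheses, the confidence scores and the small table of plausible word pairs are all invented for the example; real systems work over large lattices of competing hypotheses:

```python
# Toy "island driving": positions whose best word hypothesis is highly
# confident become islands; the remaining gaps are filled with the
# candidate that best connects to a neighbouring island word, using a
# tiny invented table of plausible word pairs.

PLAUSIBLE = {("open", "the"), ("the", "door"), ("shut", "the")}

def island_driving(candidates, threshold=0.8):
    # candidates: one list of (word, confidence) pairs per position
    chosen = [None] * len(candidates)
    for i, cands in enumerate(candidates):           # step 1: islands
        word, conf = max(cands, key=lambda wc: wc[1])
        if conf >= threshold:
            chosen[i] = word
    for i, cands in enumerate(candidates):           # step 2: connect
        if chosen[i] is None:
            def score(wc):
                word, conf = wc
                right = chosen[i + 1] if i + 1 < len(chosen) else None
                bonus = 1.0 if (word, right) in PLAUSIBLE else 0.0
                return conf + bonus
            chosen[i] = max(cands, key=score)[0]
    return chosen

# Invented hypotheses: position 1 is ambiguous ("oven" vs "open").
hyps = [[("please", 0.95)],
        [("oven", 0.55), ("open", 0.50)],
        [("the", 0.90)],
        [("door", 0.92)]]
sentence = island_driving(hyps)
```

The slightly lower-scoring "open" wins the ambiguous position because it connects better with the island word "the", which is the essence of the technique.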
COMPUTER VISION
images. It is far more difficult to make a computer that can understand what it
is seeing. The goal is to make computers that can see and understand their
surroundings. Currently one of the major uses for computer vision is in the
field of robotics.
that requires more computer memory. The main problem, however, remains
making the computer recognise these pictures as images. While the human
always just a group of dots. No matter how many pixels are used to form an
clues that can help determine various features of an image. The clues include:
made to analyse the shades of colour, the purity of colour and the
"seen".
objects.
either the camera or the object. A moving camera gets images of the
object from different angles. These can be analysed in the same way as
analysed, the difficult task of identifying the components of the image begins.
(i) Some edges are not entirely distinct, and actually may
be quite blurred.
from any one vantage point, and parts of objects are often
techniques are those that have been developed for specific domains,
known in advance.
(b) Model-Based Vision. In model-based vision systems,
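The difficulty with indistinct edges noted above can be illustrated with a minimal sketch: an edge shows up wherever brightness changes sharply between neighbouring pixels. The tiny one-row "images" below are invented; real systems use two-dimensional operators such as the Sobel operator:

```python
# Toy edge detection on one image row: mark a pixel as an edge when
# the brightness difference to its right-hand neighbour exceeds a
# threshold. A blurred edge spreads the same change over many pixels,
# so each step falls below the threshold and the edge is missed.

def edge_positions(row, threshold=50):
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

sharp = [10, 10, 10, 200, 200]       # abrupt brightness jump
blurred = [10, 50, 100, 150, 200]    # same change, spread out

sharp_edges = edge_positions(sharp)
blurred_edges = edge_positions(blurred)
```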
ROBOTICS
64. Robotics is one of the technologies in which the most advances have
automotive industries. Over half the world's industrial robots are in Japan.
in a straight path until something physically blocks that path. Servo robots
can alter their own trajectory in response to feedback from a sensing device, such
as a camera.
AUTOMATIC PROGRAMMING
The ultimate aim is to have a computer system that could develop programmes
70. All the research being carried out is an attempt to make computers
These include activities that are considered to require the senses of sight,
hearing and touch. An attempt is, therefore, being made to incorporate sensors
intelligent manner. There are also a number of activities that involve the
minds are quite adept at handling, but which are difficult to programme into computers.
71. The most striking aspect that emerges from most of the research
carried out, is that machines can only do what they are programmed to do.
possess neither human senses nor human brains, demonstrate human behaviour.
CHAPTER V
established.
machine can think. His method has since become known as the Turing test. To
conduct this test, one requires two people and the machine to be evaluated.
One person plays the role of the interrogator, who is in a separate room from
the computer and the other person. The interrogator can ask questions of
either the person or the computer by typing questions and receiving typed
responses. However, the interrogator knows them only as A and B and aims to
determine which is the person and which is the machine. The goal of the
machine is to fool the interrogator into believing that it is the person. If the
machine could succeed at this, then one would conclude that the machine could
think.
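The protocol of the test can be sketched in miniature. Everything below is invented for the illustration: the question, the canned replies and the interrogator's naive guessing rule, which here simply treats an instant, exact arithmetic answer as the give-away:

```python
# Toy imitation game: the interrogator sends the same question to the
# hidden participants A and B, then guesses which one is the machine.
# The machine "passes" if the interrogator's guess is wrong.

import random

def turing_test(person_reply, machine_reply, guesser, rng):
    # Hide the participants behind the labels A and B in random order.
    participants = [("person", person_reply), ("machine", machine_reply)]
    rng.shuffle(participants)
    labels = dict(zip("AB", participants))
    answers = {label: reply("What is 12345 * 678?")
               for label, (_, reply) in labels.items()}
    guessed_label = guesser(answers)          # which label seems mechanical?
    machine_label = next(l for l, (kind, _) in labels.items()
                         if kind == "machine")
    return guessed_label != machine_label     # True: machine fooled the judge

# Invented behaviours: the machine answers instantly and exactly.
person = lambda q: "I'd need a pencil for that."
machine = lambda q: str(12345 * 678)
judge = lambda answers: next(l for l, a in answers.items() if a.isdigit())

passed = turing_test(person, machine, judge, random.Random(0))
```

Here the machine fails the test precisely because it is too good at arithmetic, which echoes Turing's own observation that a convincing machine may have to imitate human fallibility.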
74. The more serious issue, though, is the amount of knowledge that a
machine would need to pass the Turing test. It will be a long time before a
computer passes this test. Some people believe that no machine ever will.
Criticism of the Turing Test
75. A number of criticisms of the Turing test have been raised. These
include:
(a) Even if a machine were to pass the Turing test, there are
the Turing test. According to this, Searle, who did not know Chinese
instructions in English that would correlate the first and second set of
could manipulate the Chinese symbols in a very formal way and provide
76. While it may be a long time before a computer passes the Turing
paraphrase a newspaper story, the best way to rate its level of artificial
programme that meets some performance standard for a particular task.
79. When one sets out to design an artificial intelligence programme, one
should attempt to specify, as well as possible, the criteria for success for that
13. Rich et al. Op. cit. pp. 25-26.
CHAPTER VI
KNOWLEDGE REPRESENTATION
appear to have very little in common except that they are hard. However,
one feature common to all artificial intelligence programmes is that they all
intelligent behaviour.
serious dilemma:
82. One is forced to conclude that the kinds of techniques that will be
useful for solving artificial intelligence problems must exploit knowledge that
would fail.
(e) The knowledge must be used to help overcome its own sheer
computer:
of techniques have been tried, with different ones proving to be better for
relative and approximate, like tall, expensive and normal. Fuzzy logic
taken in certain circumstances. Production rules consist of a
times.
with an initial state and attempt to reach some goal state. To determine which
problem are not explicitly laid down. A search has the following prerequisites:
(a) The initial state description of the problem, e.g., the initial
(b) A set of legal operators that change the state. In chess, this
87. Searching yields the sequence of steps that transforms the initial
solutions are scanned. Assuming that the possible solutions can be represented
as an inverted tree branching out from an initial node, with each possible step
forming a separate branch from the node and each subsequent step branching
out from a node on the branch representing the first step and so on; a search
can be breadth first, where all of one level is evaluated before the next level is
searched, or depth first, where the search follows one branch from the top down
to a position at the bottom before backtracking. In either type, the search ends when a goal state is
reached.
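The two search orders can be illustrated on a small invented tree; the only difference in the sketch below is whether the frontier of unexplored nodes is treated as a queue or as a stack:

```python
# Breadth-first search evaluates a whole level before descending;
# depth-first search follows one branch to the bottom before backing up.

from collections import deque

tree = {"start": ["a", "b"], "a": ["c", "d"], "b": ["e", "goal"],
        "c": [], "d": [], "e": [], "goal": []}

def breadth_first(tree, start, goal):
    frontier, visited = deque([start]), []
    while frontier:
        node = frontier.popleft()         # FIFO: oldest node first
        visited.append(node)
        if node == goal:
            return visited
        frontier.extend(tree[node])

def depth_first(tree, start, goal):
    frontier, visited = [start], []
    while frontier:
        node = frontier.pop()             # LIFO: newest node first
        visited.append(node)
        if node == goal:
            return visited
        frontier.extend(reversed(tree[node]))

bfs_order = breadth_first(tree, "start", "goal")
dfs_order = depth_first(tree, "start", "goal")
```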
90. Depth-first search stores only the current path that it is pursuing.
decide upon this depth. The point at which the search along a particular path is
abandoned is called cut-off depth. The value of cut-off depth is critical because,
if not specified, the search will go on and on increasing the time-complexity
number of branches and the depth to which the search proceeds. The major
to be remembered.
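The cut-off depth just described can be sketched as a limit handed to a recursive depth-first search; the tree and the limits below are invented. A limit that is too small misses the goal, which is why its value is critical:

```python
# Depth-first search with a cut-off depth: a path is abandoned once it
# reaches the limit, so the search always terminates, but a goal lying
# deeper than the cut-off is missed.

def depth_limited(tree, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:
        return False                    # cut-off reached: abandon path
    return any(depth_limited(tree, child, goal, limit - 1)
               for child in tree.get(node, []))

tree = {"start": ["a"], "a": ["b"], "b": ["goal"]}

shallow = depth_limited(tree, "start", "goal", 2)   # goal lies at depth 3
deep = depth_limited(tree, "start", "goal", 3)
```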
forward from the initial state to the goal state or backward from the goal state
when there are fewer possible initial states than goal states. In the reverse
situation, where there are few goal states and many initial states, it may make
93. Since a search tree can be very large, heuristic strategies have been
94. Heuristic Search. The use of heuristics can drastically reduce the
(a) Problems for which no exact algorithms are known and one
states. These numbers are then used to guide the search process.
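Such guiding numbers can be illustrated with a toy best-first search that always expands the state with the lowest heuristic estimate; the graph and the estimates below are invented for the example:

```python
# Toy best-first search: states wait in a priority queue ordered by a
# heuristic estimate of their distance to the goal, so promising
# branches are explored first and unpromising ones may never be touched.

import heapq

graph = {"start": ["a", "b"], "a": ["goal"], "b": [], "goal": []}
estimate = {"start": 3, "a": 1, "b": 2, "goal": 0}

def best_first(graph, estimate, start, goal):
    frontier, expanded = [(estimate[start], start)], []
    while frontier:
        _, node = heapq.heappop(frontier)     # most promising state first
        expanded.append(node)
        if node == goal:
            return expanded
        for child in graph[node]:
            heapq.heappush(frontier, (estimate[child], child))
    return expanded

order = best_first(graph, estimate, "start", "goal")
```

Note that the state "b" is never expanded: the heuristic prunes it, which is the saving the text describes, and also the reason a misleading heuristic can miss the best solution altogether.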
96. By their very nature, heuristic techniques are not foolproof: they do
not guarantee the best solution - or even any solution at all. What heuristic
that are considerably more likely to bear fruit than the "trial-and-error"
both of which have been the basis for artificial intelligence programmes:
as people do.
appears easiest.
More recently, several new neural net architectures have been proposed. These
architectures are loosely called connectionist, and they have been used as the
problem that the skills easiest for the human mind are the most difficult for a
14. Cf. ante p. 10.
done before any predictions can be made as to the potential of these theories.
99. One must also consider the fact that while human brains are highly
parallel devices, most current computing systems are essentially serial engines.
serial computer. Recently, partly because of the existence of the new family of
researchers to look for higher level (i.e., far above the neuron level) theories
that do not require massive parallelism for their implementation. This approach
the binary number system. If they are to display intelligence, digital computers
Created in the early 1950s, it was a fairly low level language and, hence,
difficult to use. It has since been supplanted by higher level, more convenient
15. The human mind relates groups of symbols to each other in various ways. An intelligent
programme must also be able to establish associations between symbols, not merely store them as
unrelated pieces of data. Symbolic processing languages associate symbols by representing them as a
list. A list is represented in a computer's memory as a series of cells. Each cell can contain two parts,
or fields. In a simple list, one field contains a symbol and the other field contains a pointer to the next
cell in the list, thus associating the symbols to each other.
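The cell-and-pointer representation described in the footnote above can be sketched directly, with Python standing in for the machine-level picture; the field names are illustrative:

```python
# A list as a chain of two-field cells: one field holds a symbol, the
# other a pointer to the next cell (None marks the end of the list).

class Cell:
    def __init__(self, symbol, rest=None):
        self.symbol = symbol    # the datum field
        self.rest = rest        # the pointer field

def make_list(*symbols):
    head = None
    for symbol in reversed(symbols):
        head = Cell(symbol, head)
    return head

def to_python(cell):
    # Walk the chain of pointers, collecting the symbols in order.
    out = []
    while cell is not None:
        out.append(cell.symbol)
        cell = cell.rest
    return out

chain = make_list("the", "spirit", "is", "willing")
words = to_python(chain)
```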
16. A LISP programme can actually modify its own programme instructions - or add entirely new
instructions to itself. It is even possible for a LISP programme to write an entirely new LISP
programme. This is especially useful in an artificial intelligence programme that is learning to
perform a new task.
17. PROLOG is the only artificial intelligence programming language other than LISP that is widely
used.
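The idea of a programme writing a new programme, noted above for LISP, can be echoed less elegantly in a sketch where the programme composes source text for a function and then executes it; the "learnt" rule below is invented for the illustration:

```python
# A programme that writes a new programme: it composes source text for
# a function it has "learnt" and executes it, so the new behaviour
# becomes available at run time. LISP makes this natural because
# programmes and data share one list representation.

def learn_function(name, expression):
    source = f"def {name}(x):\n    return {expression}\n"
    namespace = {}
    exec(source, namespace)     # compile and run the generated source
    return namespace[name]

# The "learnt" rule arrives as a string, not as code written anywhere
# in this programme's own text.
double_plus_one = learn_function("double_plus_one", "2 * x + 1")
value = double_plus_one(20)
```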
Developments in Computer Technology
require more computing resources than other programmes. The major artificial
techniques may allow computers to feature the large memory and high
applications.
time. Now that the economics of very large scale integration have
applications.
CHAPTER VII
INTELLIGENCE
MAJOR PROJECTS
may have profound implications for the future of artificial intelligence are:
for artificial intelligence research in the USA than the Department of Defence.
Department has financed much of the artificial intelligence research that has
corporations.
105. The Strategic Computing Programme, announced in 1983, is an
technology.
systems.
107. Autonomous Systems. These are vehicles and other systems that
incorporate computer vision and expert system technologies so that they can
switches, buttons and knobs that cover their control panels demanding precise
for operators to keep up with; there soon may be more devices available on an
ways and to perform particular functions for that pilot. Using speech
recognition and expert system technologies, the pilot will be able to delegate
19. Ibid.
20. Ibid.
111. The Battle Management System, as envisioned by the Defence
decision-making process.
participants and sent ripples through the entire research community and
industrial circles. The proposals were the outcome of research work done by
the Ministry of Trade and Industry and the Institute for New Generation
113. The aims of the Fifth Generation Computer Project are to open up
the new fields of application that are referred to by the terms 'knowledge
114. The Japanese have divided the project for development of fifth
the project.
115. Because the original goals of the project were stated in general and
including Great Britain, Germany, the United States of America, France and
intelligence research.
and Chief Executive Officer of Control Data Corporation (CDC). Norris called
the meeting to explore ways that the companies, normally competitors, could
technology. Although there were some initial fears about such a co-operative
effort, Norris felt that the threat to American technological leadership posed by
ordinated response.
Technology Corporation in early 1983. Unlike the Fifth Generation Project, the
and capable of performing more complex tasks at a higher level of quality and
long-range strategy that might have been difficult for its members to pursue
individually.
21. Mishkoff. Op. cit. p. 243.
(c) Very Large Scale Integration / Computer-Aided Design. An
to complex systems and the very complex very large scale integration
projects:
UK and Europe was also tremendous. In the UK, the government set up a
Telecom. A report called the Alvey Report was submitted which proposed an
investment of 350 million pounds, over a period of five years, for development
are:
INDIAN RESEARCH
systems).
Enterprise, Hyderabad.
CHAPTER VIII
"Can we prevent or contain bloody wars waged in battlefields crammed with 'virtual
realities,' 'artificial intelligence,' and autonomous weapons -- weapons that, once
programmed, will decide on their own when, and towards whom, to fire? Should the
world ban -- or embrace -- a whole new class of weapons designed for bloodless
war?"
work such as judgement and design. It is anticipated that such systems will
have to have the ability to understand languages and images, and will
MILITARY APPLICATIONS
hazardous environments.
22. Cf. ante p. 46.
(c) Expert systems for diagnosis and maintenance of
extraction of low level map features for imagery. Also, maps could be
battle.
(j) Guided bombs that can hit their targets unassisted, well after the
measures.
(k) Natural language and speech interface in weapons could have
put a whole crew into a simulated reality. The rehearsal would make
CIVIL APPLICATIONS
needs. Students read brief instructional material and are then presented
problem.
23. Cf. ante p. 46.
(b) Software Development. Automated programming can
Some examples of the spheres in which such systems could be used are:
planning.
approach determination.
process can affect other parts of the process, thus helping to increase
automate offices are being developed. There are five potential areas for
computers.
is pertinent.
and dialling.
CHAPTER IX
CONCLUSION
126. Before one can answer any question as to how successful attempts
seeks to model or simulate human intelligence will decide how much success
can be claimed.
127. The fact that the processes underlying intelligent behaviour have
yet to be fully analysed makes the quest for artificial intelligence a difficult one.
intelligence as, "the study of how to make computers do things which, at the
moment, people do better"24, can only add to the indefinite nature of this
intelligence remains intelligent for as long as one fails to analyse it; should it be
it do many things that were earlier thought to require intelligence. The question
130. It is the contention of the author of this dissertation that, when one
dissects what the computer has actually done, by looking at the details of the
programme responsible for its "actions", one finds that there is really no
24. Cf. ante p. 8.
and complex and so is the hardware. But they are, at their starting point, the
creations of the human mind. When one looks at most artificial intelligence
problems that have not yet been overcome, one finds that it is not that they are
conceptually impossible but that one has yet to develop computers that are
processing.
131. No matter how complex the software or the hardware gets, the
code in their programmes which cater for this, and make a computer "learn
from its mistakes". All that this really means is that the software is written in
circumstances. When these circumstances are met, the software gets modified
and the next time round, when the computer runs that part of the programme,
from simple programmes, is the fact that they are far more complex and that
they, therefore, make machines do things which one would have thought they
Instead, that tribute is reserved for its actual source: the human programmer. In
134. One of the first revelations that one has, when one becomes
computer literate, is the fact that the computer "knows" nothing. It has a
tremendous capacity for storing and processing information, but it does not
"know" what it is storing or doing. This is something which people who are
matter how much progress is made in various fields of artificial intelligence, the
machine shall remain a dumb platform on which human beings will run brilliant
programmes.
One only sees evidence of it when it manifests itself in behaviour. This seems to
have left people with the handicap of mistaking behaviour for intelligence.
of the intelligence of the programmer and the person who designed the
responsible for the intelligent behaviour are detached from the machine and,
hence, not directly associated with what it does. It is, therefore, easy to ascribe
the "intelligence" to the machine rather than tracing it to where it came from.
People are not used to the idea of intelligence originating at a place far
with a more primitive form of "storing" intelligence. Ever since man learnt to
write, and later, to record sound and images, he has been able to communicate
"intelligently" with people far removed from him and in some cases long dead.
When one reads a book, the book serves as the medium by which intelligence
is carried from the author to the reader. One does not call the book intelligent;
"know" what they are doing and wouldn't "notice the difference" if they were
programmed to do something quite different, one is falling into the same trap
as the man who thought that the thermos flask was man's greatest invention
because he was baffled by the fact that it seemed to know what to keep hot and
138. If one takes the view that artificial intelligence should be akin to
computers fail to measure up on this score. Should one, however, consider that
intelligence, one is talking about something quite different. And, in that case,
139. This debate about artificial intelligence and whether computers are
"intelligent" or not does not in any way detract from the usefulness of the
research being carried out in the field, or of the applications that are expected
change the life of the human race so radically that no speculation as to its
impact can be more than a partial vision of the future. An attempt has been
might be.
BIBLIOGRAPHY
1. Books.
(b) Moto-oka, Tohru et al. The Fifth Generation Computer: The Japanese
Challenge. Chichester, John Wiley & Sons, 1985.
(d) Penrose, Roger. The Emperor's New Mind. London, Vintage Books,
1990.
(e) Sayre, Kenneth M. et al. The Modeling of Mind. New York, Simon and
Schuster, 1963.
(f) Toffler, Alvin et al. War and Anti-war. London, Warner Books, 1994.
2. Reference Books.
(c) Murthy, S.S. et al. Computers and Defence Applications. New Delhi,
Defence Scientific Information and Documentation Centre, 1987.
3. Text Books.
(c) Rich, Elaine et al. Artificial Intelligence. New Delhi, Tata McGraw-Hill
Publishing Company Limited, 1991.