STUDENT ID #: UB23404SEL31719.
MAJOR:
ELECTRICAL ENGINEERING
COURSE:
ASSIGNMENT TITLE:
HONOLULU, HAWAII
From time immemorial, man has had a general tendency to re-engineer his environment, tools and resources, with the sole aim of easing his life. However, the last 70 years have seen a drastic shift from the mere muscle-power component of science and engineering (i.e. levers, machines and robotics) to include brain power, or autonomous systemization. That is, we have experienced a paradigm shift from purely mechanical efficiency concerns to include those of computational efficiency. After the advent of computers, which essentially depended on continual updates, operators and even programmers, the dream of autonomous super-computers took flight. Alongside this dream, computer-controlled machines were created. These computers run on sets of instructions embedded in them by programmers, since they were not, and still are not, autonomous enough to run affairs without human intervention. In later years, following the realization of the computer age, it became theoretically evident that integrating such super-computers with machines would bring the machines to literal life (i.e. creating cyborgs and other fully autonomous robotic systems): intelligent embodiments that, according to popular belief, would eventually enslave the human race (if not obliterate it). One may conclude that this is absurdity; but then, weren't there similar opinions before we took flight into the sky and then into space? Only time will tell.
Questions such as "Who am I?" and "Where am I?" clearly exhibit intelligence, and would somewhat suggest consciousness if the embodiment of intelligence were completely autonomous (e.g. humanity). Other equally autonomous and conscious beings, though less intellectual, include dogs, ants, rats and monkeys, among others. Intelligence that is equivalent to or greater than that of a human is referred to as strong intelligence, whilst that which is less is referred to as weak intelligence. Unlike the latter, which can only solve simple problems, the former is capable of solving complex problems.
The gestation period that characterized the focus on artificial intelligence (AI) as an emerging way to find solutions began sometime in 1943, during the Second World War, and was probably inspired by the war itself: the warring militaries sought cunning ways of eluding the enemy by using intelligent methods, animals and artifacts. In the summer of 1956, John McCarthy coined the phrase "Artificial Intelligence" to describe this new scientific frontier of cunning arts, now commonly referred to by the letters AI. AI is generally relevant to any performance involving intelligence (i.e. intelligent tasks), and is thus perceived as a universal field - so I am compelled to think of AI as the science of general intelligence. The fields that typically constitute AI include economics, linguistics, philosophy, mathematics, computer science, psychology, control theory & cybernetics, and neuroscience.
Basic questions the disciplines ask that make them useful to AI include:
1. Linguistics: How does language relate to thought? (Components and influences include generative grammar, computational linguistics, and the book Syntactic Structures, written by Noam Chomsky in 1957; his models were formal enough to program.)
2. Economics: How should we make decisions in order to obtain optimized results? (Decision theory, utility theory, game theory - inspired by The Theory of Games and Economic Behavior, written by John von Neumann and Oskar Morgenstern in 1944 - and operations research.)
3. Mathematics: What are the formal rules for conclusive deductions? What can be calculated? How do we reason with uncertain data? (Formal logic, e.g. Boolean algebra and the incompleteness theorem; algorithms; and probability theory.)
4. Control theory & cybernetics: How can artifacts operate under their own control (i.e. self-autonomy)? (Stable feedback (homeostatic) systems, including water clocks, thermostats and steam engines; and cybernetics, basically the science of communication and automated control systems in living things and machines.)
5. Philosophy: How can the mind arise from a physical brain? How can we draw valid conclusions from formal rules? Where does knowledge come from, and how can we use it to effect an action? (Empiricism and induction, logical positivism, goal-based analysis and utilitarianism.)
6. Neuroscience: How does the brain process information? (Basically the study of the brain's neurons.)
7. Psychology: How do living things such as humans and animals act? (Behaviorism, cognitive psychology and cognitive science.)
8. Computer engineering (or computer science): How can we build a more efficient computer? (Automatic calculation, and the theory of artificial automata.) The first forms of computers were used to decipher enemy communications (language messages) during the Second World War.
"AI is the art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990).
Even though artificial intelligence cannot be defined with astute precision, owing to the inability to categorically define what is meant by intelligence, the eight diverse disciplines that comprise AI indicate what AI is all about. Thus, by knowing what is involved, we gain a broader understanding of what AI is.
"There is no standard definition of exactly what artificial intelligence is. If you ask five computing professionals to define AI, you are likely to get five different answers" (Munakata, 2008).
The Concise Oxford English Dictionary (11th edition) defines intelligence as "the ability to acquire and apply knowledge and skills". The key phrase is "ability to acquire and apply", so that all the sub-abilities leading to this overall objective are abilities indicative of intelligence. But then, what are the attributes or indications of intelligent behavior that qualify an artifact to be considered intelligent? Attributes such as reasoning, perception, communication, learning, and interaction with a complex environment (including other agents) are indicative of intelligent behavior, whilst automated reasoning, knowledge representation, natural language processing, machine learning, perception, robotics, computer vision, etc. are all components of artificial intelligence (AI), drawn from its various constituent fields. The various definitions of AI ultimately give four broad performance perspectives to which AI generally inclines, based on the cognitive approach, as shown in the table below:
THINKING LIKE A HUMAN:
"The exciting new effort to make computers think . . . machines with minds, in the full and literal sense" (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

THINKING RATIONALLY:
"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)

ACTING LIKE A HUMAN:
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)

ACTING RATIONALLY:
"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)
"The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)
The four perspectives of AI are generally categorized into two main streams of thought: the human approach and the rationalist approach.
Acting rationally
In this approach, AI is viewed as the study and construction of rational agents (an agent is simply something that perceives and acts). When one acts so as to achieve one's goals, given one's beliefs, one is said to be acting rationally. In the "laws of thought" approach to AI, the whole emphasis was on correct inference. Being a rational agent entails making correct inferences: one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals, and then to act on that conclusion. On the other hand, correct inference is not all of rationality, because there are often situations where there is no provably correct thing to do, yet something must still be done. Note also that there are ways of acting rationally that cannot reasonably be said to involve inference: for example, pulling one's hand off a hot stove is a reflex action that is more successful than a slower action taken after careful deliberation. The cognitive skills required for the Turing Test are there to allow rational actions. Thus, we need the ability to represent knowledge and reason with it, because this enables us to reach good decisions in diverse situations.
We need learning because a better idea of how the world works enables us to generate more effective strategies for dealing with it. We need visual perception in order to get a better idea of what an action might achieve: for example, being able to see a tasty morsel helps one to move toward it.
The study of AI as rational-agent design therefore has two advantages:
1. It is more general than the "laws of thought" approach, because correct inference is only a useful mechanism for achieving rationality, not a necessary one.
2. It is more amenable to scientific development.
Machine learning is the process by which a machine uses a sample training set to learn, and then generalizes to the data it receives, based on experience. It must be noted that such machines are not mere levers, but machines integrated with logic circuits, such as computers, designed to make decisions. Machine learning involves adaptation to new circumstances and the detection and extrapolation of patterns. It is all about getting computers to be smart and to learn things by themselves, so that human beings do not have to reprogram them. Let us take handwriting analysis as an example. Machine learning would involve the development of a computer algorithm to recognize and interpret a person's handwriting based on a particular sample set. Although this can be done with relative ease by the human brain, this form of artificial intelligence is very difficult to program in computers. In order to fully understand and appreciate what is meant by machine learning, we first need to understand what it means to learn. By so doing, we will also better understand the different methods by which an agent can learn and thus effectively improve its overall performance.
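As an illustration, a nearest-neighbour classifier is one of the simplest ways a machine can generalize from a training set. The sketch below is a hypothetical toy example, not a real handwriting recognizer: each 3x3 "glyph" is flattened into a vector of nine pixels, and an unseen sample is assigned the label of its closest training example.

```python
# A minimal 1-nearest-neighbour learner: a toy illustration of
# generalizing from a training set to unseen input.

def distance(a, b):
    # Squared Euclidean distance between two pixel vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_set, sample):
    # Label the sample with the class of its nearest training example.
    vec, label = min(training_set, key=lambda pair: distance(pair[0], sample))
    return label

# Two 3x3 "glyphs" flattened to 9 pixels: a vertical stroke and a cross.
training = [
    ((0, 1, 0, 0, 1, 0, 0, 1, 0), "stroke"),
    ((1, 0, 1, 0, 1, 0, 1, 0, 1), "cross"),
]

# A slightly smudged stroke is still classified correctly.
print(predict(training, (0, 1, 0, 0, 1, 0, 1, 1, 0)))  # stroke
```

Real handwriting systems use far richer features and far more data, but the learn-then-generalize pattern is the same.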
The study of cybernetics is concerned with the mathematical properties of feedback systems and treats the human being as an automaton, whereas AI is concerned with the cognitive processes brought into play by the human being in order to perform what we regard as intelligent tasks. (Bonnet, 1985)
During this early period, the research interest was in building general-purpose learning systems that start with little or no initial structure or task-oriented knowledge.
According to Michalski, Carbonell & Mitchell (1984), Rosenblatt's perceptron was an elementary visual system which could be taught to recognize a limited class of patterns. It consisted of a finite grid of light-sensitive cells. Experience with the perceptron spawned the new discipline of pattern recognition and led to the development of the decision-theoretic approach to machine learning. This approach equates learning with the acquisition of linear, polynomial, or other related forms of discriminant functions from a given set of training examples. One of the best-known successful learning systems that utilized such techniques was Samuel's checkers program, which was able to acquire, through learning, a master level of performance.
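A sketch of the perceptron's learning rule may make the idea of acquiring a linear discriminant function concrete. This is a minimal, hypothetical illustration: the weights are nudged toward each misclassified training example until the linear threshold separates the classes (which it can only do when the data are linearly separable).

```python
# A minimal sketch of the perceptron learning rule: acquire a linear
# discriminant function from labelled training examples.

def train_perceptron(samples, epochs=20, lr=1.0):
    # samples: list of (input_vector, target) with target in {0, 1}.
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - output          # 0 when correctly classified
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the (linearly separable) logical OR pattern.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
```

For patterns that are not linearly separable (the famous XOR case), this rule never converges, which is precisely the limitation that stalled perceptron research.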
In the 1960s, AI was characterized by heuristic search. AI workers and researchers abandoned their attempt to build artificial brains from the ground up. Instead, they perceived human thinking as a complex coordination of essentially simple symbol-manipulating tasks; this research came mainly from the work of psychologists and early AI researchers on models of human learning. This paradigm utilized logic or graph-structure representations rather than numerical or statistical methods. The systems learned symbolic descriptions representing higher-level knowledge and made strong structural assumptions about the concepts to be acquired (Michalski, Carbonell & Mitchell, 1984). Work in this paradigm includes research on human concept acquisition and various applied pattern-recognition systems. The most influential workers during this time were Allen Newell and Herbert Simon of Carnegie-Mellon University. They worked on theorem-proving and computer chess, among other things. Their masterwork was a program known as the General Problem Solver (GPS).
The central idea behind GPS was that problem solving is a search through a space of potential solutions. To make the search efficient, it has to be guided by heuristic rules that direct it towards the desired destination. Thus, an automaton wandering around a maze would have to use an exhaustive search technique if it knew nothing about the structure of that maze; but if it had some way of telling when it was getting 'warm', it could normally reach its goal state sooner. Heuristics are not guaranteed to work, and may occasionally lead the search down a blind alley; the heuristics of the time were thus not much good at real-life problems.
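The "getting warm" idea can be sketched as a greedy best-first search over a toy maze, with Manhattan distance to the goal serving as the heuristic. The maze and function below are hypothetical illustrations; as noted above, such a heuristic guides the search but guarantees neither optimality nor success on harder problems.

```python
# A minimal greedy best-first maze search: always expand whichever
# frontier cell the heuristic says is "warmest" (closest to the goal).
import heapq

def best_first(maze, start, goal):
    # maze: list of strings, '#' marks a wall; returns steps taken or None.
    def h(pos):
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    frontier = [(h(start), start, 0)]
    seen = {start}
    while frontier:
        _, (r, c), steps = heapq.heappop(frontier)
        if (r, c) == goal:
            return steps
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                heapq.heappush(frontier, (h((nr, nc)), (nr, nc), steps + 1))
    return None  # frontier exhausted: goal unreachable

maze = ["....",
        ".##.",
        "...."]
```

With no walls in the way of the heuristic, the search walks almost straight to the goal; an exhaustive (uninformed) search would expand many more cells first.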
Learning Agents:
In machine learning, a learning agent is an intelligent agent, defined as an entity capable of perception and action. Typical sensors and effectors include:

Agent            Sensors                          Effectors
Human agent      Eyes, ears, tongue, etc.         Legs, hands, mouth, etc.
Robot agent      Cameras, IR range finders, etc.  Motors
Software agent   File contents, keystrokes, etc.  Screen display, disc writes, file writes, etc.
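The sensor/effector pairings above can be sketched as a minimal agent interface. The class name and the thermostat rule below are hypothetical illustrations (the thermostat being the classic homeostatic system mentioned earlier), not a standard API.

```python
# A minimal sketch of an agent as "an entity capable of perception and
# action": percepts come in through a sensor method, actions go out
# through an effector method.

class ThermostatAgent:
    # Sensor: a temperature reading; effector: a heater switch.
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.temperature = None

    def perceive(self, reading):
        self.temperature = reading  # sensor input

    def act(self):
        # Effector output chosen from the most recent percept.
        return "heat_on" if self.temperature < self.setpoint else "heat_off"

agent = ThermostatAgent(setpoint=20)
agent.perceive(18)
print(agent.act())  # heat_on
```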
(i) The environment: This is the input to the learning system, in which the learner currently finds itself; it is referred to as the problem domain in machine learning.
(ii) The learning element: This is the learning pattern recognizer, and it is an interface between the problem solver and the knowledge base. The manner in which the learning element consults the knowledge base is known as its learning skills. Many of the techniques adopted in the artificial-intelligence community are meant to simulate such learning skills, e.g. rote learning and learning by advice taking; these will be discussed later. There are a number of inference techniques applied by human beings, i.e. induction, deduction, abduction and creation.
Induction:
This is the principle of reasoning to a conclusion about all members of a
classification resulting from the thorough examination of a sample of the class;
broadly, reasoning from the particular to the general (i.e. the inference of a general
law from particular instances).
Deduction:
This is a process of reasoning in which a conclusion follows from the stated premises; reasoning from the general to the specific (i.e. the inference of particular instances by reference to a general law).
Abduction:
This is a form of plausible (rather than strictly deductive) inference. Using statistics and probability theory, abduction may yield the most probable inference among many possible ones.
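Abduction's "most probable inference" can be sketched with Bayes' rule: choose the hypothesis h that maximizes P(h) * P(observation | h). The prior and likelihood numbers below are hypothetical.

```python
# A minimal sketch of abduction as most-probable-explanation inference:
# score each hypothesis by prior * likelihood and take the best.

def abduce(priors, likelihoods, observation):
    # Return the hypothesis h maximizing P(h) * P(observation | h).
    return max(priors, key=lambda h: priors[h] * likelihoods[h].get(observation, 0.0))

priors = {"rain": 0.3, "sprinkler": 0.1}
likelihoods = {
    "rain": {"wet_grass": 0.9},       # P(wet_grass | rain)
    "sprinkler": {"wet_grass": 0.8},  # P(wet_grass | sprinkler)
}
print(abduce(priors, likelihoods, "wet_grass"))  # rain  (0.27 > 0.08)
```

Note that the winning explanation is only plausible, not certain: the sprinkler could still have been the actual cause.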
Together, the three components, namely the learning element, the problem solver, and the knowledge base, constitute the Central Nervous System (CNS) of the model.
Why conscious level?
This model is classified as conscious-level owing to its usual application of symbolic knowledge in the problem-solving procedure. For example, let us imagine that we are to answer a mathematics examination. We will first perceive the question with our eyes: the stimulus is received by the receptors, coded into system signals, and sent to the problem solver. Secondly, we will try to search our memory for the solution (i.e. problem solving). The receptors in this case are our eyes; in problem solving, the problem solver sends a request to the learning element, ordering it to read or find the solution to the question. If the learning element succeeds in finding the answer, it feeds it back to the problem solver, which in turn orders the eyes (receptors) and hands (effectors) to read the question and write the answer down on the answer paper simultaneously. The answer that has been written down then becomes a new stimulus for the system to validate. However, if the learning element fails to find the solution, it may request the problem solver to go on to the next question (depending on the situation), or even ask for more input (i.e. information).
ROTE LEARNING
LEARNING BY ADVICE
LEARNING IN PROBLEM SOLVING
This is actually learning from our own experience, without the aid of an instructor or advisor. This type of learning does not involve an increase in knowledge, but rather an improvement in the methods of solving a problem using our already existing knowledge. Thus our experiences teach us new rules, which in turn direct the problem-solving processes. In machine learning, the problem solver needs to store and consult these rules from time to time; the cost of this can be partially overcome by a utility measure that keeps track of how useful the learnt rules are, deleting them when they outlive their usefulness. Learning in problem solving can be sub-classified as learning by parametric adjustment, learning with macro-operators, and learning by chunking.
(iii) Learning by chunking: This learning classification is rote learning in the context of a production system (chunking is a process similar in flavor to macro-operators). Rules which are useful and always fired together are chunked to form one large production. The idea of chunking was drawn from the psychological literature on memory and problem solving, and its computational basis is in production systems.
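The chunking idea can be sketched as merging productions that always fire in sequence into one larger production: the chunk's conditions are only those the sequence needs from outside, and its actions are everything the sequence produces. The rule representation below is a hypothetical simplification of a real production system.

```python
# A minimal sketch of chunking: rules that always fire together are
# merged into one large production.

def chunk(rules):
    # rules: list of (condition_set, action_set), fired in the given order.
    conditions, actions = set(), set()
    for cond, act in rules:
        conditions |= (cond - actions)  # keep only externally supplied conditions
        actions |= act
    return conditions, actions

# Rule 1 turns fact 'a' into 'b'; rule 2 turns 'b' and 'c' into 'd'.
fired_together = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
# The chunk needs {'a', 'c'} from outside and produces {'b', 'd'} in one step.
combined = chunk(fired_together)
```

After chunking, a single match-and-fire replaces the two-step sequence, which is exactly the speed-up chunking is meant to buy.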
INDUCTIVE LEARNING
Also known as learning from examples, this type of learning is a form of supervised learning that uses specific examples of particular instances to reach general conclusions (i.e. reasoning from the specific to the general). Here, the amount of inference performed by the learner is much greater than that in learning by instruction, as no general concepts are provided by a teacher. Learning from examples does, however, require less inference than learning by analogy, since there are no similar seed concepts from which new concepts can be created. This type of learning can be sub-classified according to the source of the examples:
(i) The source is a teacher: The examples are generated by a teacher who knows the concept well and presents them in a way meant to be as helpful as possible to the learner. If the teacher knows the learner's learning abilities well, he can choose examples to speed the learner's convergence on the required concept.
(ii) The source is the learner itself: Here the learner typically understands its own knowledge state, but does not know the required concept to be acquired.
(iii) The source is the external environment: In this case, the example-generation process is operationally random, as the learner must learn from relatively uncontrolled observations.
(iv) Only positive examples: Although positive examples provide instances of the required concept, they do not provide information for preventing over-generalization of the inferred concepts.
(v) Positive and negative examples: Here, positive examples force generalization while negative examples prevent over-generalization. This is the most typical form of learning from examples.
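A classic sketch of learning from positive and negative examples is a Find-S style algorithm over attribute vectors: positive examples force minimal generalization (replacing mismatched attributes with the wildcard '?'), and negative examples are then checked to detect over-generalization. The weather attributes below are hypothetical toy data.

```python
# A minimal Find-S style learner: generalize a hypothesis over positive
# examples; negative examples flag over-generalization.

def generalize(h, example):
    # Minimally generalize h so that it covers the positive example.
    if h is None:
        return tuple(example)
    return tuple(hv if hv == ev else '?' for hv, ev in zip(h, example))

def matches(h, example):
    return all(hv == '?' or hv == ev for hv, ev in zip(h, example))

def find_s(examples):
    # examples: list of (attribute_tuple, is_positive)
    h = None
    for x, positive in examples:
        if positive:
            h = generalize(h, x)
    # Negatives prevent over-generalization: report whether h excludes them.
    consistent = all(not matches(h, x) for x, pos in examples if not pos)
    return h, consistent

examples = [(("sunny", "warm"), True),
            (("sunny", "cold"), True),
            (("rainy", "cold"), False)]
```

Here the two positives generalize to ("sunny", "?"), and the negative confirms the hypothesis has not over-generalized; with only positives, nothing would stop it collapsing to ("?", "?").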
EXPLANATION-BASED LEARNING (EBL)
This type of learning involves extracting the concept behind the information contained within a single example, and generalizing it to other instances. It broadly requires domain-specific knowledge. In general, the inputs to explanation-based learning programs are as follows:
1. A Goal Concept
2. A Domain Theory (or Knowledge Base)
3. A Training Example
4. An Operationality Criterion
LEARNING FROM OBSERVATION
Also known as unsupervised learning, this type of learning is a very general form of inductive learning that requires the learner to perform more inference than all the other methods of learning discussed. It includes theory-formation tasks, discovery systems, the creation of classification criteria to form taxonomies, and other similar tasks, all without the benefit of an external teacher. Learning by observation tends to span several concepts to be acquired rather than only one. Much like learning in problem solving, it involves gleaning information without the use of a teacher, but it focuses much more on extracting knowledge than on strategies or operations in problem solving. We may sub-classify learning from observation according to the level of interaction with the environment:
1. Passive observation: The learner classifies and taxonomizes observations of multiple aspects of the environment.
2. Active experimentation: The learner perturbs the environment to observe the results of its perturbations. Experiments may be random, or dynamically focused according to how interesting they are or according to theoretical constraints.
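Unsupervised learning from passive observation can be sketched with a tiny k-means clustering: with no teacher and no labels, the learner forms its own classes from raw observations. The one-dimensional data points below are hypothetical.

```python
# A minimal 1-D k-means: form k classes from unlabelled observations.
import random

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)  # arbitrary initial class centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each observation to its nearest cluster center.
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

observations = [1, 2, 3, 10, 11, 12]
```

Run on these observations with k = 2, the learner discovers the two natural groups (centered near 2 and 11) without ever being told they exist.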
LEARNING BY ANALOGY

REFERENCES:
1. Anderson, Dave; McNeill, George. (1992). Artificial Neural Networks Technology. A DACS State-of-the-Art Report. Kaman Sciences Corporation, Utica, New York.
2. Bellman, R. E. (1978). An Introduction to Artificial Intelligence: Can Computers Think? Boyd & Fraser Publishing Company, San Francisco.
3. Charniak, E.; McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley, Reading, Massachusetts.
4. Haugeland, J. (editor) (1985). Artificial Intelligence: The Very Idea. MIT Press, Cambridge, Massachusetts.
5. Haykin, Simon. (1999). Neural Networks: A Comprehensive Foundation. 2nd edition. Pearson Education, Inc., McMaster University, Ontario, Canada.
6. Kelly, K. (1996). The Logic of Reliable Inquiry. Oxford University Press.
7. Luger, G. F.; Stubblefield, W. A. (1993). Artificial Intelligence: Structures and Strategies for Complex Problem Solving. 2nd edition. Benjamin/Cummings, Redwood City, California.
8. Michalski, R. S.; Carbonell, J.; Mitchell, T. (1983). Machine Learning: An Artificial Intelligence Approach. Tioga Publishing Co., Palo Alto, California.
9. Michalski, Ryszard S.; Carbonell, Jaime G.; Mitchell, Tom M. (1984). Machine Learning: An Artificial Intelligence Approach (Symbolic Computation). Springer-Verlag, Berlin, Heidelberg, New York, Tokyo.
10. Munakata, Toshinori. (2008). Fundamentals of the New Artificial Intelligence. 2nd edition. Springer-Verlag London Limited.
11. Perlovsky, Leonid I. (2001). Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, New York.
12. Kecman, Vojislav. (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models. MIT Press, Cambridge, Massachusetts.
13. Kurzweil, Ray. (1990). The Age of Intelligent Machines. MIT Press, Cambridge, Massachusetts.
14. Rich, E.; Knight, K. (1991). Artificial Intelligence. 2nd edition. McGraw-Hill, New York.
15. Russell, Stuart; Norvig, Peter. (1995). Artificial Intelligence: A Modern Approach. 1st edition. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
16. Russell, Stuart; Norvig, Peter. (2010). Artificial Intelligence: A Modern Approach. 3rd edition. Pearson Education, Inc., Upper Saddle River, New Jersey.
17. Schalkoff, R. J. (1990). Artificial Intelligence: An Engineering Approach. McGraw-Hill, New York.
18. Winston, Patrick H. (1992). Artificial Intelligence. 3rd edition. Addison-Wesley, Reading, Massachusetts. ISBN 0201533774.
19. Witten, Ian H.; Frank, Eibe; Hall, Mark A. (2011). Data Mining: Practical Machine Learning Tools and Techniques. 3rd edition. Elsevier Inc.