i. The science of making machines do things which would require intelligence if they were done by a human.
(Marvin Minsky)
ii. A set of goals meant to address a class of problems, using a set of methods by a set of people
2. In your own words, describe the significance of the Turing Test in artificial intelligence (AI) research.
The Turing Test was originally designed to test the possibility of a machine passing as human through question and
answer sessions. The test Turing proposed is that the computer should be interrogated by a human via a teletype, and
passes the test if the interrogator cannot tell if there is a computer or a human at the other end.
Turing’s “test” is a useful tool for testing machine intelligence. To pass it, the computer would need to possess the
following capabilities:
natural language processing to enable it to communicate successfully in English (or some other human
language);
knowledge representation to store information provided before or during the interrogation;
automated reasoning to use the stored information to answer questions and to draw new conclusions;
machine learning to adapt to new circumstances and to detect and extrapolate patterns.
3. With reference to the history of artificial intelligence, by the late 1960s, most of the basic ideas and
concepts necessary for neural computing had already been formulated (Cowan, 1980). However, only in
the mid-1980s did the solution emerge. Explain clearly any three reasons why the field of artificial
neural networks (ANNs) was reborn in the 1980s.
Physicists such as Hopfield (1982) used techniques from statistical mechanics to analyze the storage and
optimization properties of networks, leading to significant cross-fertilization of ideas. In 1982 several
events caused a renewed interest. John Hopfield of Caltech presented a paper to the National Academy
of Sciences. Hopfield's approach was not to simply model brains but to create useful devices. With
clarity and mathematical analysis, he showed how such networks could work and what they could do.
Yet, Hopfield's biggest asset was his charisma. He was articulate, likeable, and a champion of a dormant
technology.
At the same time, another event occurred. A conference was held in Kyoto, Japan. This conference was
the US-Japan Joint Conference on Cooperative/Competitive Neural Networks. Japan subsequently
announced their Fifth Generation effort. US periodicals picked up that story, generating a worry that
the US could be left behind. Soon funding was flowing once again.
By 1985 the American Institute of Physics began what has become an annual meeting - Neural Networks
for Computing. By 1987, the Institute of Electrical and Electronic Engineers’ (IEEE) first International
Conference on Neural Networks drew more than 1,800 attendees.
Psychologists including David Rumelhart and Geoff Hinton continued the study of neural net models of
memory.
The real impetus came in the mid-1980s when at least four different groups reinvented the back-
propagation learning algorithm first found in 1969 by Bryson and Ho. The algorithm was applied to
many learning problems in computer science and psychology, and the widespread dissemination of the
results in the collection Parallel Distributed Processing (Rumelhart and McClelland, 1986) caused great
excitement.
By 1989, at the Neural Networks for Defense meeting, Bernard Widrow told his audience that they were
engaged in World War IV ("World War III never happened"), where the battlefields are world trade and
manufacturing. The 1990 US Department of Defense Small Business Innovation Research Program
named 16 topics which specifically targeted neural networks with an additional 13 mentioning the
possible use of neural networks.
4. Fully explain the Goal based agent and Utility based agent
Unlike simple reflex agents, which respond immediately to percepts, goal-based and utility-based agents choose actions by considering future consequences and how desirable the resulting states are.
Goal Based Agents
These act so that they will achieve their goal(s). Consider an agent driving a taxi. At a road junction, the taxi can
turn left, right, or go straight on. The right decision depends on where the taxi is trying to get to. In other words,
as well as a current state description, the agent needs some sort of goal information, which describes situations
that are desirable—for example, being at the passenger's destination. The agent program can combine this with
information about the results of possible actions (the same information as was used to update internal state in
the reflex agent) in order to choose actions that achieve the goal.
Utility Based Agents
These act so as to try to maximize their own "happiness." A complete specification of the utility function allows
rational decisions in two kinds of cases, where goals are inadequate.
First, when there are conflicting goals, the function specifies the appropriate trade-off.
Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty,
utility provides a way in which the likelihood of success can be weighed up against the importance of the goals.
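The trade-off described above can be sketched in code. This is an illustrative sketch only; the taxi actions, probabilities, and utility values are invented for the example, not taken from the text.

```python
# Illustrative sketch of utility-based action selection: each action leads
# to outcomes with some probability, and the agent picks the action whose
# expected utility is highest. All numbers below are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical taxi example: the fast route risks a traffic jam.
actions = {
    "fast_route": [(0.7, 10), (0.3, -5)],   # EU = 7.0 - 1.5 = 5.5
    "safe_route": [(1.0, 4)],               # EU = 4.0
}
print(choose_action(actions))  # fast_route
```

This is exactly the sense in which utility lets likelihood of success be weighed against the importance of each goal: the probabilities and the utilities enter the same sum.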
The initial state, set of states, goal test, and path cost function define a problem.
An initial state: the state the agent starts in.
A set of states: all states reachable from the initial state by any sequence of actions (the state space, i.e. the
initial state plus a successor function generating all possible successor states).
A path cost function: assigns a numeric cost to each path.
A goal test: determines whether a given state is a goal state.
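The four components above can be captured as a small class. This is a minimal sketch; the toy route-finding graph and its costs are invented for illustration.

```python
# A minimal sketch of a search problem with the four components listed
# above: initial state, successor function (state space), path cost, and
# goal test. The toy map and costs are invented.

class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial     # the state the agent starts in
        self.goal = goal
        self.graph = graph         # successor function: state -> {state: step_cost}

    def successors(self, state):
        """All (next_state, step_cost) pairs reachable in one action."""
        return self.graph.get(state, {}).items()

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, path):
        """Numeric cost of a path given as a list of states."""
        return sum(self.graph[a][b] for a, b in zip(path, path[1:]))

# Toy map: A --1-- B --2-- C, and a direct edge A --4-- C.
problem = Problem("A", "C", {"A": {"B": 1, "C": 4}, "B": {"C": 2}})
print(problem.goal_test("C"))              # True
print(problem.path_cost(["A", "B", "C"]))  # 3
```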
6. Write a search algorithm for a simple problem-solving agent (formulate, search, execute) that takes
a problem as input and returns a solution in the form of an action sequence.
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this will lead to a
solution quickly. It evaluates nodes using just the heuristic function
f(n) = h(n)
h(n) = estimated cost of cheapest path from node n to a goal node.
h(n) = 0 if n is a goal
A* Search is the most widely used form of best-first search. It evaluates nodes by combining g(n), the cost to reach the
node, and h(n), the estimated cost to get from the node to the goal:
f(n) = g(n) + h(n)
f(n) = estimated cost of the cheapest solution through n
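The evaluation f(n) = g(n) + h(n) can be sketched compactly. This is an illustrative implementation, not the definitive one; the toy graph and heuristic values are invented. With f(n) = h(n) alone the same loop becomes greedy best-first search, and with h(n) = 0 it becomes uniform-cost search.

```python
import heapq

# A compact A* sketch: nodes are ordered by f(n) = g(n) + h(n), where g is
# the cost so far and h is the heuristic estimate to the goal.

def a_star(graph, h, start, goal):
    """graph: state -> {neighbor: step_cost}; h: state -> heuristic estimate."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nbr, cost in graph.get(state, {}).items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float("inf")

# Toy graph and an admissible heuristic (both invented for the example).
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}}
h = {"A": 2, "B": 2, "C": 0}.get
path, cost = a_star(graph, h, "A", "C")
print(path, cost)  # ['A', 'B', 'C'] 3
```

Note how A* prefers the path through B (g = 3) over the direct edge A to C (g = 4), exactly because f ranks the frontier by cost so far plus estimated cost to go.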
8. Create a knowledge base for the following facts using predicate logic
i. Marcus was a man.
ii. Marcus was a Pompeian.
iii. All Pompeians were Romans.
iv. Caesar was a ruler.
v. All Pompeians were either loyal to Caesar or hated him.
vi. Everyone is loyal to someone.
vii. People only try to assassinate rulers they are not loyal to.
viii. Marcus tried to assassinate Caesar.
ix. Was Marcus loyal to Caesar?
Answers
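A standard first-order formulation of facts (i) to (viii) is sketched below; the predicate names are a conventional choice, not fixed by the question. Answering (ix) also requires the commonsense axiom that every man is a person.

```latex
\begin{align*}
&\mathit{man}(\mathit{Marcus}) \\
&\mathit{Pompeian}(\mathit{Marcus}) \\
&\forall x\; \mathit{Pompeian}(x) \rightarrow \mathit{Roman}(x) \\
&\mathit{ruler}(\mathit{Caesar}) \\
&\forall x\; \mathit{Pompeian}(x) \rightarrow \mathit{loyalto}(x,\mathit{Caesar}) \lor \mathit{hate}(x,\mathit{Caesar}) \\
&\forall x\,\exists y\; \mathit{loyalto}(x,y) \\
&\forall x\,\forall y\; \mathit{person}(x) \land \mathit{ruler}(y) \land \mathit{tryassassinate}(x,y) \rightarrow \lnot\mathit{loyalto}(x,y) \\
&\mathit{tryassassinate}(\mathit{Marcus},\mathit{Caesar}) \\
&\forall x\; \mathit{man}(x) \rightarrow \mathit{person}(x) \quad \text{(added commonsense axiom)}
\end{align*}
```

For (ix): from man(Marcus) we get person(Marcus); combining this with ruler(Caesar) and tryassassinate(Marcus, Caesar) in the assassination rule yields ¬loyalto(Marcus, Caesar), so Marcus was not loyal to Caesar.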
9. I married a widow (let’s call her W) who has a grown-up daughter (call her D). My father (F), who
visited us quite often, fell in love with my stepdaughter and married her. Hence, my father became
my son-in-law and my stepdaughter became my mother. Some months later, my wife gave birth to a
son (S1), who became the brother-in-law of my father, as well as my uncle. The wife of my father, that
is, my stepdaughter, also gave birth to a son (S2).
Using predicate calculus, create a set of expressions that represent the situation in the above story.
Add expressions defining basic family relationships such as the definition of father-in-law and use
modus ponens on this system to prove the conclusion that “I am my own grandfather.”
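One way to set this up is sketched below; the predicate names and the decision to collapse step-relations into ordinary kinship are modelling assumptions, not the only possible formalisation.

```latex
% Facts from the story (I = narrator, W = wife, D = stepdaughter, F = father):
\begin{align*}
&\mathit{married}(I, W), \quad \mathit{mother}(W, D), \quad
  \mathit{married}(F, D), \quad \mathit{father}(F, I) \\
% Kinship rules (step-relations treated as ordinary ones):
&\forall x\,\forall y\,\forall z\; \mathit{married}(x, y) \land \mathit{father}(x, z)
  \rightarrow \mathit{mother}(y, z) \\
&\forall x\,\forall y\,\forall z\; \mathit{mother}(x, y) \land \mathit{mother}(y, z)
  \rightarrow \mathit{grandmother}(x, z) \\
&\forall x\,\forall y\,\forall z\; \mathit{married}(x, y) \land \mathit{grandmother}(y, z)
  \rightarrow \mathit{grandfather}(x, z)
\end{align*}
```

Three applications of modus ponens give the conclusion: married(F, D) and father(F, I) yield mother(D, I); mother(W, D) and mother(D, I) yield grandmother(W, I); finally married(I, W) and grandmother(W, I) yield grandfather(I, I), i.e. I am my own grandfather.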
Heuristic Search: Classically, a heuristic is a rule of thumb. In heuristic search, we generally use one or more
heuristic functions to determine the better candidate states among a set of legal states that could be generated from a
known state. The heuristic function, in other words, measures the fitness of the candidate states. The better the
selection of the states, the fewer the intermediate states needed to reach the goal. However, the most
difficult task in heuristic search problems is the selection of the heuristic functions. One has to select them intuitively,
so that in most cases they prune the search space correctly.
Uncertainty can be defined as the lack of the exact knowledge that would enable us to reach a perfectly reliable
conclusion.
Uncertainty refers to a value that cannot be determined during a consultation. It is a property of environments that
are
–Partially observable, or
–Stochastic (probabilistic)
For the expert system, suppose all rules in the knowledge base are represented in the following form:
IF E is true
THEN H is true (with probability p)
This rule implies that if event E occurs, then the probability that event H will occur is p.
17. Two methods of dealing with uncertainty are certainty factors (CFs) and Bayesian probability.
i. Explain the two methods mentioned above.
ii. What new ideas does the theory of certainty factors bring in?
Certainty Factors
Certainty factors are values used to approximate the degree to which we think a rule in a rule-based system is correct.
These values range from –1 to +1. The negative value indicates predominance of opposing evidence, while the
positive value indicates a predominance of confirming evidence for the rule being correct.
e.g. If interest rates = fall (CF = 0.6) AND
Taxes = Reduced (CF = 0.8)
Then Stock Market = Rise (CF = 0.9)
The certainty factor of the stock market rising then becomes:
Min(0.6, 0.8) * 0.9 = 0.54.
If there are two rules talking about the same point then the certainty factor of the point is considered as the
maximum of the CFs of the two rules.
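The two combination rules stated above can be sketched directly in code; the stock-market numbers are taken from the example in the text.

```python
# Sketch of the CF combination rules as stated in the text: a conjunction
# of premises takes the minimum premise CF, scaled by the rule's own CF,
# and two rules concluding the same point take the maximum.

def cf_rule(premise_cfs, rule_cf):
    """CF of a rule's conclusion: min over its premises, times the rule CF."""
    return min(premise_cfs) * rule_cf

def cf_combine(cf1, cf2):
    """Two rules supporting the same conclusion (the text's max rule)."""
    return max(cf1, cf2)

# Stock-market example from the text: CFs 0.6 and 0.8 on the premises,
# 0.9 on the rule itself.
rise = cf_rule([0.6, 0.8], 0.9)
print(rise)  # ≈ 0.54
```

For comparison, the classic MYCIN system combines two positive CFs for the same conclusion as cf1 + cf2(1 − cf1) rather than taking the maximum; the max rule above follows this text.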
Bayesian probability
Bayes’ theorem states: P(H|E) = P(E|H) × P(H) / P(E), where H is the hypothesis and E is the evidence.
Bayes’ theorem is only valid if we know all the conditional probabilities relating to the evidence in question. In fact, as
we consider more and more evidence it quickly becomes computationally difficult to use Bayes’ theorem, quite apart
from the problem of obtaining and representing all the conditional probabilities. Because of this, Bayes’ theorem is
rarely used in practice. However, it is important as it is a well-known, sound way of dealing with the probabilities of hypotheses given
evidence, and as such provides a kind of standard for assessing other approaches.
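A small numerical sketch shows Bayes' theorem in use, with P(E) expanded by total probability over H and not-H. All the probability values are invented for illustration.

```python
# Numerical illustration of Bayes' theorem, P(H|E) = P(E|H) P(H) / P(E),
# where P(E) = P(E|H) P(H) + P(E|not H) P(not H). Numbers are invented.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the two conditional likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Prior P(H) = 0.1, likelihoods P(E|H) = 0.9 and P(E|not H) = 0.2:
posterior = bayes(0.1, 0.9, 0.2)
print(round(posterior, 3))  # 0.333
```

Even this one-hypothesis case needs three separate probabilities as input, which illustrates the text's point: with many pieces of evidence, obtaining and combining all the required conditional probabilities quickly becomes impractical.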