
UNIVERSITY OF ENGINEERING & TECHNOLOGY LAHORE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


Time Allowed: 3 Hrs Maximum Marks: 100
Artificial Intelligence
Attempt Any Five Questions. All Questions Carry Equal Marks

Question 1:
A- Differentiate between informed and uninformed searches. 5

B- Consider the tree shown below. The numbers on the arcs are the arc lengths; the numbers near states
B, C, and D are the heuristic estimates; all other states have a heuristic estimate of 0. Assume that the
children of a node are expanded in alphabetical order when no other order is specified by the search, and
that the goal is state J. No “Visited” or “Expanded” lists are used. In what order would the states be expanded
by each type of search? Write only the sequence of states expanded by each search.
1- Breadth First  2- Depth First  3- Progressive Deepening Search  4- Best First  5- A*
Please show all the calculations.
15
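For illustration, a minimal Python sketch of the three uninformed searches and the expansion orders they produce. Since the figure is not reproduced here, the tree below is a hypothetical stand-in (states, children and goal are assumptions, not the figure's values):

from collections import deque

# Hypothetical tree standing in for the figure; children listed alphabetically.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": ["J"], "F": [], "G": [], "J": []}
GOAL = "J"

def bfs_order(root):
    # Breadth first: FIFO frontier; stop when the goal is expanded
    order, frontier = [], deque([root])
    while frontier:
        node = frontier.popleft()
        order.append(node)
        if node == GOAL:
            return order
        frontier.extend(TREE[node])
    return order

def dfs_order(root):
    # Depth first: LIFO frontier; push children reversed so the
    # alphabetically first child is popped first
    order, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        order.append(node)
        if node == GOAL:
            return order
        frontier.extend(reversed(TREE[node]))
    return order

def pds_order(root):
    # Progressive deepening: depth-limited DFS with an increasing limit;
    # each pass restarts from the root, so early states are re-expanded
    order = []
    for limit in range(len(TREE)):
        if dls(root, limit, order):
            return order
    return order

def dls(node, limit, order):
    order.append(node)
    if node == GOAL:
        return True
    if limit == 0:
        return False
    return any(dls(child, limit - 1, order) for child in TREE[node])

print(bfs_order("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'J']
print(dfs_order("A"))  # ['A', 'B', 'D', 'E', 'J']
print(pds_order("A"))

Best first and A* follow the same loop with a priority queue ordered by h(n) and g(n) + h(n) respectively; an A* sketch appears under Question 2.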

Question 2:
A- Consider the graph shown below, where the numbers on the links are link costs and the numbers next to
the states are heuristic estimates. Let A be the start state and G the goal state. Simulate A* search with
a strict expanded list on this graph. Show the length of the path and the total estimated cost, along with the
expanded list. 14
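For illustration, a minimal Python sketch of A* with a strict expanded list. The graph and heuristic values below are hypothetical stand-ins for the figure, which is not reproduced here:

import heapq

# Hypothetical graph: GRAPH[state] = [(neighbour, link cost), ...]
GRAPH = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("G", 6)],
         "C": [("G", 3)],
         "G": []}
H = {"A": 4, "B": 3, "C": 2, "G": 0}  # assumed (consistent) heuristic estimates

def a_star(start, goal):
    # Frontier entries: (f = g + h, g, state, path); expanded = strict closed list
    frontier = [(H[start], 0, start, [start])]
    expanded = []
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state in expanded:          # strict: a state is never expanded twice
            continue
        expanded.append(state)
        if state == goal:
            return path, g, expanded
        for nbr, cost in GRAPH[state]:
            if nbr not in expanded:
                heapq.heappush(frontier,
                               (g + cost + H[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf"), expanded

path, cost, expanded = a_star("A", "G")
print("path:", path, "cost:", cost, "expanded list:", expanded)
# path: ['A', 'B', 'C', 'G'] cost: 6 expanded list: ['A', 'B', 'C', 'G']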

B- Prove that A* is an optimal search strategy. 6
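For reference, a sketch of the standard argument, assuming h is admissible (h(n) ≤ h*(n), the true remaining cost to the goal): suppose A* selects a suboptimal goal G2 for expansion while an optimal path of cost C* < g(G2) exists. Some node n on that optimal path is still on the frontier, and

f(n) = g(n) + h(n) ≤ g(n) + h*(n) = C* < g(G2) = f(G2),

so A* would have expanded n before G2, a contradiction. Hence the first goal A* expands is reached by an optimal path.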

Question 3:
A- What are the two consistency conditions in A* search? 4
B- Explain the arc consistency condition in CSP problems. 3

C- Explain the backtracking with forward checking technique for colouring the graph, subject to the constraint
that adjacent regions must not have the same colour. The domain values of the three variables are given inside the
ellipses. 13
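For illustration, a minimal Python sketch of backtracking with forward checking on a hypothetical 3-variable colouring instance (the actual domains are in the figure's ellipses, which are not reproduced here, so the domains below are assumptions). After each assignment, forward checking prunes the assigned value from every unassigned neighbour's domain and backtracks immediately on a wipe-out; it is a limited, one-step form of arc consistency:

# Assumed domains and a fully connected 3-variable constraint graph
DOMAINS = {"X": ["R", "G", "B"], "Y": ["R", "G"], "Z": ["R"]}
NEIGHBOURS = {"X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"]}

def forward_check(var, value, domains):
    # Remove `value` from each unassigned neighbour's domain;
    # return the pruned domains, or None if some domain becomes empty
    pruned = {v: list(d) for v, d in domains.items()}
    for nbr in NEIGHBOURS[var]:
        if nbr in pruned:
            pruned[nbr] = [c for c in pruned[nbr] if c != value]
            if not pruned[nbr]:
                return None  # wipe-out: this value cannot work
    return pruned

def backtrack(assignment, domains):
    if not domains:                      # every variable assigned
        return assignment
    var = next(iter(domains))            # pick an unassigned variable
    rest = {v: d for v, d in domains.items() if v != var}
    for value in domains[var]:
        pruned = forward_check(var, value, rest)
        if pruned is not None:
            result = backtrack({**assignment, var: value}, pruned)
            if result is not None:
                return result
    return None                          # no value works: backtrack

print(backtrack({}, DOMAINS))  # {'X': 'B', 'Y': 'G', 'Z': 'R'}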

Question 4:
A- Use the naïve Bayes classifier on the data given in the table below to classify the last sample in the
table, i.e. calculate its class posterior probabilities. 13
The naïve Bayes assumption states

P(X | Ci) = ∏_{k=1}^{n} P(x_k | Ci)

and the class prior probabilities can be estimated by P(Ci) = Si / S, where Si is the number of training
samples of class Ci and S is the total number of training samples. Bayes' rule is stated as

P(Ci | X) = P(X | Ci) P(Ci) / P(X)
A B C D CLASS
0 1 1 0 1
0 0 1 1 1
1 0 1 0 1
0 0 1 1 1
0 0 0 0 1
1 0 0 1 0
1 1 0 1 0
1 0 0 0 0
1 1 0 1 0
1 0 1 1 0
0 0 1 1 ?
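For illustration, a minimal Python sketch of this calculation on the ten labelled rows above, exactly as the formulas prescribe (no smoothing):

from collections import Counter

# The ten labelled rows from the table (columns A, B, C, D, CLASS)
DATA = [(0, 1, 1, 0, 1), (0, 0, 1, 1, 1), (1, 0, 1, 0, 1), (0, 0, 1, 1, 1),
        (0, 0, 0, 0, 1), (1, 0, 0, 1, 0), (1, 1, 0, 1, 0), (1, 0, 0, 0, 0),
        (1, 1, 0, 1, 0), (1, 0, 1, 1, 0)]
QUERY = (0, 0, 1, 1)  # the unlabelled last row

def naive_bayes(query):
    classes = Counter(row[-1] for row in DATA)          # class counts Si
    scores = {}
    for c, s_i in classes.items():
        rows = [r for r in DATA if r[-1] == c]
        score = s_i / len(DATA)                         # prior P(Ci) = Si / S
        for k, value in enumerate(query):               # product of P(x_k | Ci)
            score *= sum(1 for r in rows if r[k] == value) / s_i
        scores[c] = score                               # proportional to P(Ci | X)
    return scores

print(naive_bayes(QUERY))  # {1: 0.1024, 0: 0.0}

Here P(A=0 | class 0) = 0/5 forces the class-0 score to zero, while class 1 scores 0.5 × (4/5)³ × (2/5) = 0.1024, so the last sample is assigned class 1.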

B- What is the class conditional independence assumption, and why do we use it in naïve Bayes
classification? 4
C- In order to calculate a class's posterior probability we have to multiply all the class conditional
probabilities, which can result in a very small number. Suggest a mathematical solution to
overcome this problem. Note that we are only interested in the maximum posterior probability,
not in the exact value. 3
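One standard remedy, sketched here: because the logarithm is monotonic, the class that maximizes the product also maximizes its logarithm, so classes can be ranked by

log P(Ci) + Σ_{k=1}^{n} log P(x_k | Ci)

instead of P(Ci) ∏_k P(x_k | Ci); the sum of logs does not underflow the way the raw product does.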

Question 5:

A- Choose the appropriate answer 3


I) In constructing Decision Trees for rule extraction
A- We always prefer overfitting
B- We sometimes prefer overfitting
C- We never prefer overfitting
II) Post-pruning requires more computation but gives more reliable and accurate results
A- True
B- False
III) The leaves of a decision tree bear
A- A Function
B- An Attribute
C- A Class
D- A Rule

B- You are an allergist. One of your patients complains of a food allergy but cannot determine its cause.
You ask the patient to keep a journal of all the foods they eat for a week, and whether or not they had an
allergic reaction on each day. Here is the journal. You have to construct a decision tree with zero error.
14

C- Discuss, in no more than five lines, the role and use of entropy in building decision trees. 3
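For illustration, a minimal Python sketch of entropy and information gain, the quantities a decision-tree builder uses to pick the attribute to split on, demonstrated here on the Question 4 table:

from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy H = -sum p * log2(p) over the class proportions p
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, label):
    # Entropy of the whole set minus the weighted entropy after the split
    gain = entropy([r[label] for r in rows])
    for value in {r[attribute] for r in rows}:
        subset = [r[label] for r in rows if r[attribute] == value]
        gain -= (len(subset) / len(rows)) * entropy(subset)
    return gain

# e.g. the gain of splitting the Question 4 table on attribute A (column 0)
table = [(0,1,1,0,1), (0,0,1,1,1), (1,0,1,0,1), (0,0,1,1,1), (0,0,0,0,1),
         (1,0,0,1,0), (1,1,0,1,0), (1,0,0,0,0), (1,1,0,1,0), (1,0,1,1,0)]
print(information_gain(table, 0, 4))  # about 0.61 bits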

Question 6:
A- Select the appropriate answer 3
I) Clustering stops when the cluster centers are moved to the means of their clusters
A- True
B- False
II) Initial cluster centers have to be data points.
A- True
B- False
III) Clustering is an unsupervised classification technique
A- True
B- False
B- Critically analyse the pros and cons of the nearest neighbour algorithm, especially with respect to
noisy data. 5
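For illustration, a minimal Python sketch of k-nearest-neighbour classification on hypothetical data, showing how a single noisy point fools 1-NN while a k = 3 majority vote absorbs it:

from collections import Counter

def knn_classify(train, query, k=1):
    # train: list of ((x, y), label); return the majority label among the
    # k training points closest to `query` (squared Euclidean distance)
    nearest = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2 +
                                          (p[0][1] - query[1]) ** 2)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical data: one noisy "B" point sits inside the "A" region
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"), ((2, 2), "B"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, (2.1, 2.1), k=1))  # B -- fooled by the noisy point
print(knn_classify(train, (2.1, 2.1), k=3))  # A -- the majority vote corrects it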

C- Differentiate between supervised and unsupervised classification 2

D- Suppose that the task is to cluster the following 8 points, with (x, y) representing their locations, into
three clusters:
A1(2, 10), A2(2, 5), A3(8, 4), B1(5, 8), B2(7, 5), B3(6, 4), C1(1, 2), C2(4, 9)
The distance function is Euclidean distance. Suppose we initially assign A1, B1 and C1 as the centre of
each cluster, respectively. Use the k-means algorithm to show the 3 cluster centres after the FIRST round of
execution. 10
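For illustration, a minimal Python sketch of the first k-means round on these eight points, with A1, B1 and C1 as the initial centres:

POINTS = {"A1": (2, 10), "A2": (2, 5), "A3": (8, 4), "B1": (5, 8),
          "B2": (7, 5), "B3": (6, 4), "C1": (1, 2), "C2": (4, 9)}

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans_round(centres):
    # Assignment step: attach each point to its nearest centre
    clusters = {i: [] for i in range(len(centres))}
    for p in POINTS.values():
        nearest = min(clusters, key=lambda i: sq_dist(p, centres[i]))
        clusters[nearest].append(p)
    # Update step: move each centre to the mean of its cluster
    return [tuple(sum(coords) / len(pts) for coords in zip(*pts))
            for pts in clusters.values()]

centres = [POINTS["A1"], POINTS["B1"], POINTS["C1"]]
print(kmeans_round(centres))  # [(2.0, 10.0), (6.0, 6.0), (1.5, 3.5)]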

Question 7:
A- In machine learning, discuss the impact of memory, averaging and generalization on the learning
process, with examples. 5
B- What do you understand by cross-validation? How can we improve the learning process using k-
fold cross-validation (see the sketch after part D)? Discuss. 5
C- Explain in your own words what overfitting is and how, in your opinion, overfitting can cause
problems in machine learning. 5
D- In decision trees, once a tree is fully grown we sometimes have to prune its branches in order to
avoid overfitting. Name the types of tree pruning techniques and discuss them
briefly. 5
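For illustration (referenced in part B above), a minimal Python sketch of k-fold cross-validation; the model interface (fit/score) is an assumed placeholder, not a specific library API:

def k_fold_scores(model, X, y, k=5):
    # model: any learner exposing fit(X, y) and score(X, y) (assumed interface).
    # Hold each of the k folds out once: train on the rest, test on the fold.
    # The mean of the k scores estimates generalization better than one split.
    n = len(X)
    scores = []
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k
        model.fit(X[:lo] + X[hi:], y[:lo] + y[hi:])
        scores.append(model.score(X[lo:hi], y[lo:hi]))
    return scores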

Question 8:
A- Parse the following RelClause: “The person that John called ran”. A RelClause consists of the word
“that” followed by a sentence with a missing NP (a gap). The grammar for the parsing process is as follows:

(S ?sg0 ?sg2) :- (NP ?sg0 ?sg1) (VP ?sg1 ?sg2)
(VP ?vpg0 ?vpg1) :- (Verb/tr) (NP ?vpg0 ?vpg1)
(VP ?vpg0 ?vpg0) :- (Verb/itr)
(NP ?npg0 ?npg0) :- (Name)
(NP ?npg0 ?npg0) :- (Det) (Noun) (RelClause)
(NP ((gap NP)) ())
(RelClause) :- “that” (S ((gap NP)) ()) 10
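For illustration, a minimal Python sketch of how this gap-threading grammar parses the sentence; the rules are taken from the grammar above, while the lexicon is an assumed toy lexicon covering only these words:

LEX = {"the": "Det", "person": "Noun", "john": "Name",
       "called": "Verb/tr", "ran": "Verb/itr"}

# Each parser yields (remaining tokens, remaining gap); `gap` is "NP" while an
# NP gap is still to be discharged, and None otherwise.

def parse_np(toks, gap):
    if toks and LEX.get(toks[0]) == "Name":            # (NP ?g0 ?g0) :- (Name)
        yield toks[1:], gap
    if (len(toks) >= 2 and LEX.get(toks[0]) == "Det"   # (NP ?g0 ?g0) :-
            and LEX.get(toks[1]) == "Noun"):           #   (Det) (Noun) (RelClause)
        yield from parse_relclause(toks[2:], gap)
    if gap == "NP":                                    # (NP ((gap NP)) ()):
        yield toks, None                               #   discharge gap, consume no input

def parse_vp(toks, gap):
    if toks and LEX.get(toks[0]) == "Verb/tr":         # (VP ?g0 ?g1) :- (Verb/tr) (NP ?g0 ?g1)
        yield from parse_np(toks[1:], gap)
    if toks and LEX.get(toks[0]) == "Verb/itr":        # (VP ?g0 ?g0) :- (Verb/itr)
        yield toks[1:], gap

def parse_s(toks, gap):                                # (S ?g0 ?g2) :- (NP ?g0 ?g1) (VP ?g1 ?g2)
    for rest, g1 in parse_np(toks, gap):
        yield from parse_vp(rest, g1)

def parse_relclause(toks, gap):                        # (RelClause) :- "that" (S ((gap NP)) ())
    if toks and toks[0] == "that":
        for rest, g_out in parse_s(toks[1:], "NP"):
            if g_out is None:                          # embedded S must use up the gap
                yield rest, gap

toks = "the person that john called ran".split()
ok = any(rest == [] and g is None for rest, g in parse_s(toks, None))
print("parse succeeded" if ok else "parse failed")     # parse succeeded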

B- Write down the problems with top-down and bottom-up parsing. 5

C- What are the two types of ambiguity? Find the ambiguity in the following sentences and explain how it
can be removed. 5

London had snow yesterday.
It also had fog.
It fell to a depth of 1 meter.
It will continue cold today.
