
Birla Institute of Technology & Science, Pilani
Work Integrated Learning Programmes Division

Second Semester 2021-2022
Comprehensive Examination (EC-3 Regular)

Course No.       : IS ZC444
Course Title     : ARTIFICIAL INTELLIGENCE
Nature of Exam   : Open Book
Weightage        : 40%
No. of Pages     : 3
Duration         : 2 Hours
No. of Questions : 5
Date of Exam     : 20/05/2022 (AN)
Note to Students:
1. Please follow all the Instructions to Candidates given on the cover page of the answer book.
2. All parts of a question should be answered consecutively. Each answer should start from a fresh page.
3. Assumptions made, if any, should be stated clearly at the beginning of your answer.

Q.1 [5+2.5+1.5=9 Marks]


Apply alpha-beta pruning to the game tree depicted below and answer the following
questions:

a) Show complete step-by-step calculations for all the nodes. List the nodes pruned
by alpha pruning and the nodes pruned by beta pruning.
b) Prove that the answers obtained by alpha-beta pruning and the minimax algorithm
are the same for the given problem.
c) When is the application of a static evaluation function preferred over the utility
value of a node in a game tree?
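For reference, the pruning procedure asked for in part a) can be sketched as follows. This is an illustrative implementation only; the tree from the question's diagram is not reproduced here, so `example_tree` is a hypothetical stand-in.

```python
# Minimax with alpha-beta pruning over a nested-list game tree.
# Leaves are plain numbers (utility values); internal nodes are lists.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf"), pruned=None):
    """Return the minimax value of `node`; pruned subtrees are collected in `pruned`."""
    if pruned is None:
        pruned = []
    if not isinstance(node, list):          # leaf: static utility value
        return node
    if maximizing:
        value = float("-inf")
        for i, child in enumerate(node):
            value = max(value, alphabeta(child, False, alpha, beta, pruned))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cut-off: remaining children pruned
                pruned.extend(node[i + 1:])
                break
        return value
    else:
        value = float("inf")
        for i, child in enumerate(node):
            value = min(value, alphabeta(child, True, alpha, beta, pruned))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cut-off: remaining children pruned
                pruned.extend(node[i + 1:])
                break
        return value

# Illustrative tree: MAX root, three MIN children, leaf utilities.
example_tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(example_tree, True))        # -> 3, same value plain minimax returns
```

Because every cut-off only discards subtrees that can no longer influence the value propagated to the root, the root value always matches plain minimax, which is the basis of the argument needed for part b).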

Q.2 [1+5+2=8 Marks]


In this snake game (the head of the snake is represented by the dark green cell and the
body by the green shaded cells), the snake has to learn to eat as many apples
(represented by red cells) as possible without dying. Eating an apple makes the snake
grow longer. The snake's head hitting either the wall or its own body is penalized by
death, leading to a game restart. A restart returns the game to the same start state
depicted in the given diagram.

Note: The cost model includes -5 for each action. In addition, eating an apple adds a
reward of +50, and hitting a wall or the snake's own body adds a penalty of -100. The
Q-table is initialized to the value +10, the learning rate is 0.7 and the discount factor
is 0.5. It is mandatory to show the update of the Q-table at the end of every iteration.
A partial structure of the Q-table is sufficient for each iteration.

a) List the partial structure of the reward table, transition table and Q-table with
partially filled values.
b) Apply the vanilla Q-learning algorithm, as discussed, from the start state (i.e., F8)
for the sequence of actions: moveEast, moveEast, moveEast.
c) Briefly explain the significance of the learning rate, using the given problem as an
example. What happens when the learning rate is set to 100%?
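The vanilla Q-learning update required in part b) can be sketched with the question's parameters (learning rate 0.7, discount factor 0.5, Q-table initialized to +10, step cost -5). The state names and the single worked step below are illustrative assumptions, since the actual grid comes from the question's diagram.

```python
# Vanilla Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

from collections import defaultdict

ALPHA, GAMMA, INIT = 0.7, 0.5, 10.0
ACTIONS = ["moveNorth", "moveSouth", "moveEast", "moveWest"]

Q = defaultdict(lambda: INIT)               # Q[(state, action)], initialized to +10

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return Q[(state, action)]

# One illustrative step: moveEast from F8 with the plain -5 action cost
# (no apple eaten, no collision), all neighbouring Q-values still at +10.
# New value: 10 + 0.7 * (-5 + 0.5*10 - 10) = 3.0
print(q_update("F8", "moveEast", -5, "G8"))
```

With a learning rate of 100% (alpha = 1), the old Q-value is discarded entirely and each update overwrites the entry with the latest sample, which is the behaviour part c) asks about.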

Q.3 [4+3+2=9 Marks]

Suppose the assumptions of a Markov model are applicable to sequence prediction for an AI
agent which moves about in the famous Wumpus World problem, as shown in the diagram.
Assume that only three sensor readings, i.e., Stench (S), Breeze (B) and Glitter (G), can
be detected at every cell.
Every cell state can be grouped as either a Safe (F) or Unsafe (~F) cave cell.
a) If the observed sensor readings for two consecutive moves of the agent are known,
what is the state of the cell during the second observation?
b) For the natural language sentences below, with only the given part-of-speech tags
and the EndOfSentence marker, extract and build the complete Initial table, Transition
table and Emission table.

Fruit is falling.
Noun Verb Verb

Tree bears fruit.
Noun Verb Noun

Prize is the fruit of labor.
Noun Verb Determiner Noun Preposition Noun

Tree is falling.
Noun Verb Verb

Falling pride is no prize.
Verb Noun Verb Determiner Noun

Pride tree bears.
Noun Noun Verb

c) Explain, with representations, a scenario that necessitates Hidden Markov Model based
inference, relevant only to the given Natural Language Processing problem in part b).
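The count-based construction of the three tables in part b) can be sketched as follows; dividing each row by its total turns the counts into the probabilities an HMM would use. The word/tag pairs are transcribed from the sentences above, with EOS standing for EndOfSentence.

```python
# Build Initial, Transition and Emission count tables from the tagged corpus.

from collections import Counter

tagged = [
    [("fruit", "Noun"), ("is", "Verb"), ("falling", "Verb")],
    [("tree", "Noun"), ("bears", "Verb"), ("fruit", "Noun")],
    [("prize", "Noun"), ("is", "Verb"), ("the", "Determiner"),
     ("fruit", "Noun"), ("of", "Preposition"), ("labor", "Noun")],
    [("tree", "Noun"), ("is", "Verb"), ("falling", "Verb")],
    [("falling", "Verb"), ("pride", "Noun"), ("is", "Verb"),
     ("no", "Determiner"), ("prize", "Noun")],
    [("pride", "Noun"), ("tree", "Noun"), ("bears", "Verb")],
]

initial, transition, emission = Counter(), Counter(), Counter()
for sentence in tagged:
    tags = [tag for _, tag in sentence] + ["EOS"]   # close each sentence with EOS
    initial[tags[0]] += 1                           # tag that starts the sentence
    for prev, nxt in zip(tags, tags[1:]):
        transition[(prev, nxt)] += 1                # tag-to-tag counts
    for word, tag in sentence:
        emission[(tag, word)] += 1                  # tag-to-word counts

print(initial["Noun"])                  # -> 5 (five of the six sentences start with Noun)
print(transition[("Noun", "Verb")])     # -> 6
print(emission[("Verb", "is")])         # -> 4
```

The tables are "hidden" in the HMM sense because only the words are observed at inference time while the tag sequence must be inferred, which is the scenario part c) asks you to describe.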

Q.4 [4+1=5 Marks]

Consider the initial and goal states of the problem below and answer the following questions.
Given is an N-tile puzzle where each immediate neighbor of the empty tile (shown as an
unnumbered gray tile) is allowed to swap positions with the empty tile. An agent
should find a path, i.e., a series of swaps, from the Initial state to the Goal state.
Note: To understand the definition of neighbors: in the Goal state below, the tiles
numbered 1 and 3 are neighbors of the empty tile.

Start State:

3 1 2
4 5 8
6 7 _

a) Apply the hill climbing algorithm with the fitness function (f) below and show the
neighborhood exploration with step-by-step node evaluation until termination. "Tile"
denotes both the empty tile and a numbered tile.
f(state) = number of misplaced tiles
b) If 1/(0.5N) is the observed probability of reaching the solution in a single random
run of the algorithm, what is the expected number of random restarts required to
find the solution?
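The exploration asked for in part a) can be sketched as below. The goal layout is an assumption: the diagram did not survive extraction, so the goal is reconstructed from the hint that tiles 1 and 3 neighbor the empty tile (blank in the top-left corner).

```python
# Steepest-ascent hill climbing on the 8-puzzle with
# f(state) = number of misplaced tiles (lower is better; the blank
# counts as a tile, per the question's definition).

GOAL = ((0, 1, 2), (3, 4, 5), (6, 7, 8))    # 0 is the empty tile (assumed layout)

def misplaced(state):
    return sum(state[r][c] != GOAL[r][c] for r in range(3) for c in range(3))

def neighbors(state):
    """Yield every state reachable by one swap with the empty tile."""
    (r, c), = [(r, c) for r in range(3) for c in range(3) if state[r][c] == 0]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield tuple(map(tuple, grid))

def hill_climb(state):
    while True:
        best = min(neighbors(state), key=misplaced)
        if misplaced(best) >= misplaced(state):   # no improving neighbor: terminate
            return state
        state = best

start = ((3, 1, 2), (4, 5, 8), (6, 7, 0))
result = hill_climb(start)
print(misplaced(result))                    # fitness at termination (may be a local optimum)
```

For part b), one common reading is the geometric distribution: with success probability p = 1/(0.5N) = 2/N per random run, the expected number of runs to the first success is 1/p = N/2, i.e., (N/2 - 1) restarts after the initial run.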

Q.5 Answer the questions below with appropriate, plagiarism-free numerical examples. Vague
and plagiarized answers will be penalized. [4+2+3=9 Marks]
a. As an AI designer and analyst, propose a design to extend the minimax algorithm to a
four-player game where all players alternate turns, each following their opponent's turn.
Players are grouped two per team, and players on the same team have to play in a
cooperative mode. Explain the logic behind your answer with any game of your choice.
b. Relate the use of knowledge representation using semantic nets to the field of natural
language processing, with examples.
c. Is it possible that a heuristic designed for an informed search technique turns out to be
admissible but not consistent? Explain with a numerical example.
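The kind of numerical example part c) expects can be checked mechanically. The three-node graph below is an illustrative construction, not taken from the exam: a heuristic is admissible if it never exceeds the true cost to the goal, and consistent if it obeys the triangle inequality h(n) <= c(n, n') + h(n') on every edge.

```python
# Graph: A --1--> B --1--> G (goal). True costs to goal: A=2, B=1, G=0.
cost = {("A", "B"): 1, ("B", "G"): 1}
h_true = {"A": 2, "B": 1, "G": 0}
h = {"A": 2, "B": 0, "G": 0}        # never overestimates, but drops too fast

admissible = all(h[n] <= h_true[n] for n in h)
consistent = all(h[n] <= c + h[m] for (n, m), c in cost.items())

print(admissible)    # -> True:  h(n) <= true cost everywhere
print(consistent)    # -> False: h(A)=2 > c(A,B) + h(B) = 1 + 0 = 1
```

The edge A->B violates consistency because the heuristic falls by 2 across an edge of cost 1, even though no individual estimate overestimates the true cost.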
