
CIA I INTERNAL EXAMINATION ANSWER KEY

PART A (5x2=10 marks)

1. Define static environment.

An environment is static if it does not change while the agent is deciding on an action. The agent therefore does not need to keep track of the passage of time.

2. Name the types of informed search algorithms.

o Best First Search
o A* algorithm

3. List the steps performed by problem solving agent.

o Problem definition.
o Problem analysis
o Knowledge Representation
o Problem-solving

4. What is a global maximum?

It is the best possible state in the state-space diagram, i.e. the state with the highest value of the objective function.

5. Compare local maximum and flat local maximum.

Local maximum: a state which is better than its neighbor states, although another state exists in the landscape which is higher still.

Flat local maximum: a flat region of the landscape where all the neighbor states of the current state have the same value.

PART B (2x13=26 marks)

6a) Demonstrate Best first search and A* algorithm.

Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that moment. It combines aspects of the depth-first and breadth-first search algorithms and uses a heuristic function to guide the search, so best-first search lets us take advantage of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which appears closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue.

Best first search algorithm:


o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
o Step 4: Expand node n and generate the successors of node n.
o Step 5: Check each successor of node n to find whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
o Step 6: For each successor node, the algorithm computes the evaluation function f(n) and checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
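The steps above can be sketched in Python using a heap as the priority queue. This is a minimal illustration: the graph shape and heuristic values in the usage example are hypothetical (the question paper's table is not reproduced here), chosen to mirror the S, B, F, G example below.

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the OPEN-list node with the lowest h(n) until a goal is found."""
    open_list = [(h[start], start, [start])]      # OPEN: priority queue ordered by h(n)
    closed = set()                                 # CLOSED list
    while open_list:                               # Step 2: empty OPEN list => failure
        _, node, path = heapq.heappop(open_list)   # Step 3: remove node with lowest h(n)
        if node == goal:                           # Step 5: goal check
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):           # Step 4: generate successors
            if succ not in closed:                 # Step 6: add unseen nodes to OPEN
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None                                    # failure

# Hypothetical graph and heuristic values shaped like the example below.
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'G': 0, 'I': 9}
```

Calling `greedy_best_first_search(graph, h, 'S', 'G')` on these assumed values follows the same iterations as the worked example and returns the path S, B, F, G.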

Advantages:
o Best first search can switch between BFS and DFS by gaining the advantages
of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.
o It can get stuck in a loop as DFS.
o This algorithm is not optimal.
Example:

Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.

In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.

Expand the successors of S and put S in the CLOSED list.

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ----> B ----> F ----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is also incomplete, even if the given state
space is finite.

A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state. It has combined features of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides the optimal result faster. A* is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty; if it is, return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise continue.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
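The algorithm can be sketched as follows. The edge costs and heuristic values in the usage example are assumptions (the example's table is not reproduced above), chosen so that the optimal path S ---> A ---> C ---> G with cost 6 emerges as in the worked example below.

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).

    graph maps node -> list of (successor, step_cost) pairs.
    If h is admissible (never overestimates), the returned path is optimal.
    """
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g(n) per node
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(succ, float('inf')):   # keep only the cheapest path
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Assumed edge costs and heuristics matching the worked example below.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
```

With these values, `a_star_search(graph, h, 'S', 'G')` returns the path S, A, C, G with total cost 6.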

Advantages:
o A* search generally performs better than other search algorithms.
o A* search is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n)
of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach
any node from start state.

Here we will use the OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}

Iteration 1: {(S--> A, 4), (S-->G, 10)}

Iteration 2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S--->A--->C--->G, which provides the optimal path with cost 6.

6b) Outline the steps performed by problem solving agents in detail.

A reflex agent in AI directly maps states to actions. When such an agent fails to operate in an environment because the state-to-action mapping is too large to store or compute, the problem is handed over to a problem-solving agent, which breaks the large problem into smaller sub-problems and solves them one by one. The final integrated sequence of actions produces the desired outcome.
On the basis of the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, without any internal state visible to the problem-solving algorithm. The problem-solving agent works precisely by defining the problem and its possible solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.
We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.
There are basically three types of problem in artificial intelligence:
1. Ignorable: In which solution steps can be ignored.
2. Recoverable: In which solution steps can be undone.
3. Irrecoverable: In which solution steps cannot be undone.
Steps of problem solving in AI: Problems in AI are directly associated with the nature of humans and their activities, so we need a finite number of steps to solve a problem, which makes human work easy.
The following steps are required to solve a problem:
 Problem definition: Detailed specification of inputs and acceptable system
solutions.
 Problem analysis: Analyse the problem thoroughly.
 Knowledge Representation: collect detailed information about the problem
and define all possible techniques.
 Problem-solving: Selection of best techniques.
Components to formulate the associated problem:
 Initial State: the state from which the AI agent starts working toward the specified goal.
 Action: a function that returns all the possible actions that can be performed from a given state.
 Transition: the transition model describes the state that results from performing an action in a given state and forwards it to the next stage.
 Goal test: this stage determines whether the specified goal has been achieved by the transition model; when the goal is reached, the actions stop and the search moves on to determining the cost of achieving the goal.
 Path costing: this component assigns a numeric cost to achieving the goal. It covers all hardware, software, and human working costs.
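The five components above can be captured in a small Python sketch. The class and method names here are illustrative, not a standard API, and the toy instance (a walk along a number line) is an assumed example.

```python
class Problem:
    """Sketch of the five problem-formulation components."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state        # Initial State
        self.goal_state = goal_state

    def actions(self, state):                     # Action: moves possible in a state
        raise NotImplementedError

    def result(self, state, action):              # Transition: outcome of an action
        raise NotImplementedError

    def goal_test(self, state):                   # Goal test
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action):  # Path costing
        return cost_so_far + 1                    # assumes unit cost per step

# Toy instance: walk along a number line from 0 to 3 in steps of +/-1.
class NumberLine(Problem):
    def actions(self, state):
        return [+1, -1]
    def result(self, state, action):
        return state + action
```

A search algorithm built on this interface only needs `actions`, `result`, `goal_test`, and `path_cost`; the formulation is kept separate from the search strategy.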

7a) Summarize heuristic function in artificial intelligence with 8 puzzle problem.

The state of the 8-puzzle is represented using a 3x3 grid, where each cell can hold
one of the numbered tiles or remain empty (occupied by the blank tile). This grid
serves as a compact and systematic way to capture the configuration of the puzzle.
In a 3x3 grid, each cell can contain one of the following elements:

Numbered tiles, typically from 1 to 8.

A blank tile, represented as an empty cell.

The arrangement of these elements in the grid defines the state of the puzzle. The state represents the current position of the tiles within the grid, which varies as the puzzle is solved.

Significance of State Space:

State space is a critical concept in problem solving, including the 8-puzzle. It provides a structured way to explore and navigate the puzzle's possible states.

State Space Definition:

The state space of the 8-puzzle encompasses all possible states that the puzzle can
transition through, from the initial state to the goal state.

Each state in the state space represents a unique configuration of the puzzle.

Navigating the State Space:

Problem-solving algorithms, like search algorithms, traverse the state space systematically, evaluating different states to find an optimal path from the initial state to the goal state.

The state space's vastness highlights the complexity of the 8-puzzle problem, as
there are numerous potential states to explore.

Search Strategies:
Within the state space, search strategies determine the order in which states are
explored. Algorithms like Breadth-First Search and A* employ various techniques to
efficiently navigate this space.

Understanding the representation of the 8-puzzle state, the concept of initial and
goal states, and the significance of the state space is essential for grasping the
problem-solving process in AI and heuristic search.

How Are AI Techniques Used to Solve the 8-Puzzle Problem?

Introducing Search Algorithms:

Search algorithms play a central role in solving the 8-puzzle problem by systematically exploring possible states and finding a sequence of moves to reach the goal state.

Search algorithms are a fundamental tool in artificial intelligence and problem solving, used to navigate complex state spaces efficiently.

Heuristic functions are a vital component of informed search algorithms, like A*.
They provide estimates of the cost to reach a goal state from a given state. Here's
why they are essential:

Importance of Heuristic Functions: Heuristic functions are used to guide search algorithms by providing a measure of how promising a state is for reaching the goal. In other words, they help the algorithm make informed choices about which states to explore next.

Informed vs. Uninformed Search: Informed search algorithms, like A*, use
heuristic functions to focus on more promising states, making them significantly
more efficient than uninformed algorithms.
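The two heuristics usually quoted for the 8-puzzle, misplaced tiles and Manhattan distance, can be sketched as follows. Representing a state as a flat 9-tuple read row by row, with 0 as the blank, is an assumption of this sketch.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles out of place (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: sum of horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for tile in range(1, 9):                  # tiles 1..8; the blank is ignored
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both heuristics are admissible (they never overestimate the true number of moves), so A* with either of them finds an optimal solution; Manhattan distance dominates misplaced tiles and usually expands fewer states.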
7b) Explain simple hill climbing algorithm.

o Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
o Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill
climbing algorithm is Traveling-salesman Problem in which we need to
minimize the distance traveled by the salesman.
o It is also called greedy local search as it only looks to its good immediate
neighbor state and not beyond that.
o A node of hill climbing algorithm has two components which are state and
value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or
graph as it only keeps a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not
remember the previous states.

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we have taken the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.
8a) Compare between BFS, DFS and DLS with algorithm and example.

o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

o S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed by BFS until the shallowest solution node, where d = depth of the shallowest solution and b = branching factor (number of successors per node):

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
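A BFS sketch using a FIFO queue, matching the description above; the graph in the usage example is hypothetical.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS with a FIFO queue: expands all nodes at depth d before depth d+1,
    so the first path found uses the fewest edges."""
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical graph for illustration.
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C'], 'C': ['G']}
```

On this graph, `breadth_first_search(graph, 'S', 'G')` returns a shallowest path from S to G; swapping the `deque` for a LIFO stack would turn the same skeleton into DFS.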

Depth-first Search

o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
o Root node--->Left node ----> right node.
o It will start searching from root node S, and traverse A, then B, then D and
E, after traversing E, it will backtrack the tree as E has no other successor
and still goal node is not found. After backtracking it will traverse node C and
then G, and here it will terminate as it found goal node.

o Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
o Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm.

Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit ℓ. Depth-limited search removes the drawback of infinite paths in depth-first search: in this algorithm, a node at the depth limit is treated as if it has no further successors.
Example:

Completeness: The DLS algorithm is complete if the solution is above the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
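Depth-limited search can be sketched as a recursive DFS with a cut-off; nodes at the limit are treated as leaves, exactly as described above. The graph in the usage example is hypothetical.

```python
def depth_limited_search(graph, start, goal, limit):
    """DFS with a depth cut-off: nodes at depth == limit are treated as
    having no successors, which avoids infinite paths."""
    def recurse(node, path, depth):
        if node == goal:
            return path
        if depth == limit:               # cut-off: pretend node is a leaf
            return None
        for succ in graph.get(node, []):
            if succ not in path:         # avoid cycles along the current path
                found = recurse(succ, path + [succ], depth + 1)
                if found:
                    return found
        return None
    return recurse(start, [start], 0)

# Hypothetical chain graph: G sits at depth 3 below S.
graph = {'S': ['A'], 'A': ['B'], 'B': ['G']}
```

With `limit=2` the goal at depth 3 is cut off and the search fails; with `limit=3` the full path S, A, B, G is found, illustrating why DLS is complete only when the solution lies within the depth limit.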

8b) Illustrate the nature of environments and its features.

An environment in artificial intelligence is the surrounding of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators. There are several types of environments:
 Fully Observable vs Partially Observable
 Deterministic vs Stochastic
 Competitive vs Collaborative
 Single-agent vs Multi-agent
 Static vs Dynamic
 Discrete vs Continuous
 Episodic vs Sequential
 Known vs Unknown

Environment types

1. Fully Observable vs Partially Observable
 When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
 Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
 An environment is called unobservable when the agent has no sensors at all.
 Examples:
 Chess – the board is fully observable, and so are the opponent’s
moves.
 Driving – the environment is partially observable because what’s
around the corner is not known.
2. Deterministic vs Stochastic
 When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
 A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.
 Examples:
 Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
 Self-Driving Cars – the actions of a self-driving car are not unique; they vary from time to time.
3. Competitive vs Collaborative
 An agent is said to be in a competitive environment when it competes against
another agent to optimize the output.
 The game of chess is competitive as the agents compete with each other to win
the game which is the output.
 An agent is said to be in a collaborative environment when multiple agents
cooperate to produce the desired output.
 When multiple self-driving cars are found on the roads, they cooperate with
each other to avoid collisions and reach their destination which is the output
desired.
4. Single-agent vs Multi-agent
 An environment consisting of only one agent is said to be a single-agent
environment.
 A person left alone in a maze is an example of the single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
 An environment that keeps constantly changing itself when the agent is up
with some action is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment
keeps changing every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an
agent enters.
6. Discrete vs Continuous
 If an environment consists of a finite number of actions that can be deliberated
in the environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The
number of moves might vary with every game, but still, it’s finite.
 An environment in which the actions performed cannot be numbered, i.e. is not discrete, is said to be continuous.
 Self-driving cars are an example of continuous environments as their actions
are driving, parking, etc. which cannot be numbered.
7. Episodic vs Sequential
 In an Episodic task environment, each of the agent’s actions is divided into
atomic incidents or episodes. There is no dependency between current and
previous incidents. In each incident, an agent receives input from the
environment and then performs the corresponding action.
 Example: Consider a pick-and-place robot, which is used to detect defective parts on conveyor belts. Every time, the robot (agent) makes a decision based on the current part alone, i.e. there is no dependency between current and previous decisions.
 In a Sequential environment, the previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.
 Example:
Checkers- Where the previous move can affect all the following moves.
8. Known vs Unknown
In a known environment, the outputs of all probable actions are given.
In an unknown environment, the agent has to gain knowledge of how the environment works before it can make a decision.
