SECTION -C
Q12) ANS. Water Jug Problem:-
We are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring markings on it. There is a pump which can be used to fill the jugs with water.
How can we get exactly 2 gallons of water into the 4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of
integers (X, Y) such that X = 0, 1, 2, 3 or 4 and Y = 0, 1, 2 or 3; X is the number of
gallons of water in the 4-gallon jug and Y the quantity of water in the 3-gallon jug.
The start state is (0, 0) and the goal state is (2, n) for any value of n, as the problem
does not specify how many gallons need to be filled in the 3-gallon jug (0, 1, 2, 3). So
the problem has one initial state and many goal states. Some problems may have
many initial states and one or many goal states.
The operators to be used to solve the problem can be described as shown in Fig.
2.3:
In order to describe the operators completely, here are some assumptions not
mentioned in the problem statement:
1. We can fill a jug from the pump.
2. We can pour water out of a jug onto the ground.
3. We can pour water from one jug into the other.
4. There are no other measuring devices available.
To solve the water jug problem, all we need, in addition to the problem description
given above, is a control structure that loops through a simple cycle: some rule
whose left side matches the current state is chosen, the change described by its
right side is applied to the state, and the resulting state is checked to see
whether it is a goal state.
The loop continues as long as the goal has not been reached. The speed with which the
problem is solved depends on the mechanism (the control structure) used to
select the next operation.
There are several sequences of operators which will solve the problem; two
such sequences are shown in Fig. 2.4:
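As a sketch, the control cycle described above can be implemented as a breadth-first search over the (X, Y) state space. The successor list below plays the role of the operators in Fig. 2.3; the jug capacities, the goal test (X = 2), and the function name are assumptions of this illustration, not the text's own program.

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, goal=2):
    """Breadth-first search over states (x, y): gallons in the 4- and 3-gallon jugs."""
    start = (0, 0)
    parent = {start: None}          # also serves as the visited set
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if x == goal:               # goal state (2, n) for any n
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        successors = [
            (cap_a, y),             # fill the 4-gallon jug from the pump
            (x, cap_b),             # fill the 3-gallon jug from the pump
            (0, y),                 # empty the 4-gallon jug onto the ground
            (x, 0),                 # empty the 3-gallon jug onto the ground
            # pour the 3-gallon jug into the 4-gallon jug until full or empty
            (min(x + y, cap_a), y - (min(x + y, cap_a) - x)),
            # pour the 4-gallon jug into the 3-gallon jug until full or empty
            (x - (min(x + y, cap_b) - y), min(x + y, cap_b)),
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                frontier.append(s)
    return None

print(water_jug_bfs())
```

Because BFS expands states level by level, the sequence it returns is a shortest one; for these capacities it takes six operator applications (seven states including the start).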
In doing so some issues which affect the approach towards the solution are:
1. The rules should be stated as precisely as possible, not merely written because
they are allowable. For example, the first rule states "Fill the 4-gallon jug", but
it would be better written as "If the 4-gallon jug is not already full, fill it."
The rule in its stated form is not wrong, since nothing in the problem forbids
filling an already full jug (perhaps after emptying it).
No doubt this is physically possible, but it is not wise, as it does not change the
problem state. To increase the efficiency of the problem-solving program, it is
important to encode such constraints into the left sides of the rules, so that only
rules that can actually lead toward a solution are applicable.
2. Now consider rules 3 and 4: should these rules be included in the list of
available operators? Emptying an unmeasured amount of water onto the ground is
certainly allowed by the problem statement, but do these rules bring us nearer to a
solution? If the answer is "no", they can be ignored. So only rules that are really
applicable to the problem, and help in arriving at the solution, should be considered.
DFS.
Space complexity: equivalent to how large the fringe can get.
Completeness: DFS is complete if the search tree is finite, meaning that
for a given finite search tree, DFS will come up with a solution if one
exists.
Optimality: DFS is not optimal, meaning the number of steps taken in
reaching the solution, or the cost spent in reaching it, may be high.
Applications:-
The applications of DFS include the inspection of two-edge-connected
graphs, strongly connected graphs, acyclic graphs,
and topological ordering.
As the nodes on a single path from root to leaf are stored in each iteration,
the space requirement to store nodes is linear. With branching
factor b and maximum depth m, the storage space is b·m, i.e. O(bm).
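The linear space bound can be seen in a minimal recursive DFS sketch (the example graph here is hypothetical): at any moment, only the current root-to-node path is held, not whole levels of the tree.

```python
def dfs(graph, start, goal, path=None):
    """Depth-first search. The recursion stack holds one root-to-node path,
    so the space used grows as O(b*m) for branching factor b and max depth m."""
    if path is None:
        path = [start]
    if start == goal:
        return path
    for neighbour in graph.get(start, []):
        if neighbour not in path:            # avoid cycles along the current path
            result = dfs(graph, neighbour, goal, path + [neighbour])
            if result is not None:
                return result
    return None                              # goal not reachable from this branch

# Illustrative graph: A has two children; the goal F sits under C.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F'], 'F': []}
print(dfs(graph, 'A', 'F'))  # ['A', 'C', 'E', 'F']
```

Note that DFS first exhausts the dead-end branch through B before backtracking and trying C, which is why it is complete on finite trees but not optimal in general.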
Time complexity: equivalent to the number of nodes traversed in BFS until
the shallowest solution.
Space complexity: equivalent to how large the fringe can get.
Completeness: BFS is complete, meaning for a given search tree,
BFS will come up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
A graph is bipartite when the graph vertices can be partitioned into two
disjoint sets such that no two adjacent vertices reside in the same set.
Another method of checking whether a graph is bipartite is to check for the
occurrence of an odd cycle in the graph: a bipartite graph must not
contain an odd cycle.
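A minimal sketch of this check (the example graphs below are illustrative): BFS assigns alternating colours to the two sets, and finding an edge whose endpoints share a colour is exactly the signature of an odd cycle.

```python
from collections import deque

def is_bipartite(graph):
    """2-colour the graph with BFS. A graph is bipartite iff no edge joins
    two vertices of the same colour, i.e. iff it contains no odd cycle."""
    colour = {}
    for source in graph:                     # handle disconnected graphs
        if source in colour:
            continue
        colour[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]   # opposite set to u
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False                # odd cycle detected
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # even cycle: bipartite
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # odd cycle: not bipartite
```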
If branching factor (average number of child nodes for a given node) = b and
depth = d, then the number of nodes at level d = b^d.
Disadvantage − Since each level of nodes is saved in order to create the next
one, BFS consumes a lot of memory; the space requirement to store nodes is
exponential.
Its complexity depends on the number of nodes, and it can check for duplicate
nodes.
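Under the uniform-tree assumption above, the level sizes can be computed directly; the numbers below illustrate why BFS's space requirement is exponential in the depth.

```python
def nodes_at_level(b, d):
    """Number of nodes at depth d of a uniform tree with branching factor b."""
    return b ** d

def total_nodes(b, d):
    """Nodes BFS may have generated by depth d: 1 + b + b^2 + ... + b^d
    (geometric series)."""
    return (b ** (d + 1) - 1) // (b - 1) if b > 1 else d + 1

print(nodes_at_level(10, 3))  # 1000 nodes on level 3 alone
print(total_nodes(10, 3))     # 1111 nodes generated in total
```

Each extra level multiplies the frontier by b, so memory, not time, is usually what exhausts first in practice.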
Simple reflex agents ignore the rest of the percept history and act
only on the basis of the current percept. Percept history is the
history of all that an agent has perceived till date. The agent function
is based on the condition-action rule. A condition-action rule is a
rule that maps a state (i.e. a condition) to an action. If the condition is
true, then the action is taken, else not. This agent function only
succeeds when the environment is fully observable. For simple reflex
agents operating in partially observable environments, infinite loops
are often unavoidable. It may be possible to escape from infinite loops
if the agent can randomize its actions. Problems with simple reflex
agents include their very limited intelligence and their lack of knowledge
of the non-perceptual parts of the state.
Goal-based agents
These kinds of agents take decisions based on how far they currently
are from their goal (a description of desirable situations). Their
every action is intended to reduce the distance from the goal. This
allows the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state. The knowledge that
supports its decisions is represented explicitly and can be modified,
which makes these agents more flexible. They usually require search
and planning. The goal-based agent’s behavior can easily be changed.
Utility-based agents
Agents that are developed having their end uses as building
blocks are called utility-based agents. When there are multiple
possible alternatives, utility-based agents are used to decide which one
is best. They choose actions based on a preference
(utility) for each state. Sometimes achieving the desired goal is not
enough. We may look for a quicker, safer, cheaper trip to reach a
destination. Agent happiness should be taken into consideration.
Utility describes how “happy” the agent is. Because of the
uncertainty in the world, a utility agent chooses the action that
maximizes the expected utility. A utility function maps a state onto a
real number which describes the associated degree of happiness.
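A minimal sketch of this decision rule, using made-up actions, probabilities, and utilities for a trip-planning choice (all the numbers and names here are illustrative):

```python
# Each action maps to (probability, utility) pairs over its possible outcomes.
actions = {
    'toll_road':  [(0.9, 8.0), (0.1, 2.0)],   # fast and reliable, costs money
    'back_roads': [(0.6, 9.0), (0.4, 1.0)],   # pleasant, but often jammed
}

def expected_utility(outcomes):
    """The utility function maps outcomes to real numbers; under uncertainty
    the agent weights each by its probability."""
    return sum(p * u for p, u in outcomes)

# The utility-based agent picks the action maximizing expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # toll_road: 0.9*8 + 0.1*2 = 7.4  beats  0.6*9 + 0.4*1 = 5.8
```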
Learning Agent
A learning agent in AI is the type of agent which can learn from its
past experiences or it has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
A learning agent has mainly four conceptual components: the learning
element, the critic, the performance element, and the problem generator.
Q8) ans. The heuristic function is a way to inform the search about the
direction to a goal. It can also be defined as a function that ranks
alternatives in search algorithms at each branching step, based on
available information, to decide which branch to follow.
This solution may not be the best of all the actual solutions to this
problem, or it may simply approximate the exact solution. But it
is still valuable because finding it does not require a prohibitively
long time.
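As a sketch of a heuristic ranking alternatives at each branching step, here is greedy best-first search on a hypothetical 4x4 grid, with Manhattan distance as the heuristic (the grid, start, and goal are assumptions of this illustration):

```python
import heapq

def manhattan(p, goal):
    """Heuristic: grid distance ignoring obstacles."""
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def greedy_best_first(start, goal, passable):
    """At each step, expand the frontier node the heuristic ranks closest
    to the goal. Fast, but the result is not guaranteed to be optimal."""
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in passable and nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None

grid = {(x, y) for x in range(4) for y in range(4)}   # open 4x4 grid
path = greedy_best_first((0, 0), (3, 3), grid)
```

On this obstacle-free grid the heuristic happens to produce a shortest path; with obstacles it may only approximate one, which matches the point above about approximate but cheap solutions.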
OR
A global database
A set of production rules
A control system
Simplicity
Modularity
Modifiability
Knowledge Intensive
Language Independence
The rules in the production system should not conflict with one another:
when a new rule is added to the database, it should be ensured that it
does not conflict with any existing rule.
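A minimal sketch of these components, with a made-up database and rules: a global database (working memory), production rules as condition-action pairs, and a control system that repeatedly fires an applicable rule until the goal test succeeds.

```python
# Global database: the working memory all rules read and write.
database = {'water': 'cold'}

# Production rules: (name, condition on the database, action that updates it).
rules = [
    ('heat', lambda db: db['water'] == 'cold', lambda db: db.update(water='warm')),
    ('boil', lambda db: db['water'] == 'warm', lambda db: db.update(water='boiling')),
]

def control_loop(db, rules, goal):
    """Control system: fire the first rule whose condition matches,
    then re-check the goal; stop with failure if no rule applies."""
    while not goal(db):
        for name, condition, action in rules:
            if condition(db):
                action(db)
                break
        else:
            return False        # no applicable rule: the system halts
    return True

control_loop(database, rules, lambda db: db['water'] == 'boiling')
```

Trying the first matching rule is itself a (trivial) conflict-resolution strategy; real systems use richer strategies such as recency or specificity ordering.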
ARCHITECTURE OF PRODUCTION SYSTEM
o It is also called greedy local search as it only looks to its good immediate neighbor
state and not beyond that.
o A node of hill climbing algorithm has two components which are state and value.
o In this algorithm, we don't need to maintain and handle the search tree or graph
as it only keeps a single current state.
o Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to decide
which direction to move in the search space.
o No backtracking: It does not backtrack the search space, as it does not remember
the previous states.
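The points above can be sketched as a simple hill-climbing loop (the objective function and neighbour step below are illustrative): only the single current state is kept, there is no search tree and no backtracking, and the search stops at the first state that no neighbour improves on.

```python
def hill_climb(state, neighbours, value):
    """Greedy local search: move to the best neighbour while it improves
    on the current state; otherwise stop (a local or global maximum)."""
    while True:
        best = max(neighbours(state), key=value, default=state)
        if value(best) <= value(state):
            return state          # no uphill move left: stop here
        state = best              # keep only the single current state

# Maximise f(x) = -(x - 3)^2 over the integers, stepping by 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))    # climbs 0 -> 1 -> 2 -> 3, then stops
```

On this single-peaked function the loop reaches the global maximum; on the landscapes discussed next (local maxima, plateaus, ridges) the same loop can stop short of it.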
On the Y-axis we have the function, which can be an objective function or a cost
function, and the state space on the X-axis. If the function on the Y-axis is cost,
then the goal of the search is to find the global minimum or a local minimum. If the
function on the Y-axis is an objective function, then the goal of the search is to
find the global maximum or a local maximum.
Different regions in the state space landscape:
Local Maximum: Local maximum is a state which is better than its neighbor states, but
there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It
has the highest value of objective function.
Flat local maximum: It is a flat region of the landscape where all the neighbor
states of the current state have the same value.
1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but there is another state in the landscape that is
higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum problem in
the state space landscape. Create a list of promising paths so that the algorithm can
backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor
states of the current state contain the same value; because of this, the algorithm
cannot find any best direction in which to move. A hill-climbing search might get lost
in the plateau area.
Solution: The solution to the plateau is to take big steps (or very small steps) while
searching. Randomly select a state which is far away from the current state, so that it
is possible for the algorithm to find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area that is higher
than its surrounding areas but itself has a slope, and it cannot be reached in a
single move.
SECTION-A
Q1)ANS. Local Maximum: A local maximum is a peak state in the landscape which is
better than each of its neighboring states, but there is another state in the landscape
that is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum problem in
the state space landscape. Create a list of promising paths so that the algorithm can
backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor
states of the current state contain the same value; because of this, the algorithm
cannot find any best direction in which to move. A hill-climbing search might get lost
in the plateau area.
Solution: The solution to the plateau is to take big steps (or very small steps) while
searching. Randomly select a state which is far away from the current state, so that it
is possible for the algorithm to find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area that is higher
than its surrounding areas but itself has a slope, and it cannot be reached in a
single move.
The environment is where the agent lives and operates; it provides the agent with
something to sense and act upon. An environment is mostly said to be non-deterministic.
Q6) ans. An agent can be anything that perceives its environment through sensors and
acts upon that environment through actuators. An agent runs in a cycle
of perceiving, thinking, and acting.
Agent Function
A rational agent is an agent which has clear preference, models uncertainty, and acts
in a way to maximize its performance measure with all possible actions.
A rational agent is said to perform the right things. AI is about creating rational
agents that use game theory and decision theory in various real-world scenarios.