Introduction:
A reflex agent in AI directly maps states to actions. When such an
agent cannot operate in an environment because the state-to-action
mapping is too large to store or compute directly, the task is handed
to a problem-solving agent, which breaks the large problem into
smaller subproblems and solves them one by one. The final,
integrated sequence of actions produces the desired outcome.
Depending on the problem and its working domain, different types of
problem-solving agents are defined and used at an atomic level,
without any visible internal state, together with a problem-solving
algorithm. The problem-solving agent works precisely by defining the
problem and its possible solutions. So we can say that problem
solving is the part of artificial intelligence that encompasses a
number of techniques, such as trees, B-trees, and heuristic
algorithms, to solve a problem.
We can also say that a problem-solving agent is a result-driven agent
that always focuses on satisfying its goals.
Steps of problem-solving in AI: Problems in AI are directly
associated with the nature of humans and their activities, so we need
a finite number of steps to solve a problem in a way that makes
human work easier.
The following steps are required to solve a problem:
Goal formulation: This is the first and simplest step in
problem-solving. It organizes finite steps to formulate a
target/goal that requires some actions to achieve. Today,
goal formulation is carried out by AI agents.
Problem formulation: This is one of the core steps of problem-
solving; it decides what actions should be taken to achieve the
formulated goal. In AI this core part depends on a software
agent, which consists of the following components to formulate
the associated problem.
Components to formulate the associated problem:
Initial state: The state from which the agent begins; it starts
the AI agent towards the specified goal. All subsequent
problem-solving steps proceed from this state.
Actions: This stage of problem formulation works out, as a
function of a given state (starting from the initial state), all
the possible actions that can be taken in that state.
Transition model: This stage describes what each action does: it
integrates the action chosen in the previous stage with the state
it was applied to, and produces the resulting state, which is
forwarded to the next stage.
Goal test: This stage determines whether the state produced by
the transition model achieves the specified goal. Whenever the
goal is achieved, the agent stops acting and moves on to
determine the cost of achieving the goal.
Path cost: This component assigns a numeric cost to achieving
the goal. It accounts for all hardware, software, and human
working costs.
Characteristics of a problem:
Decomposable to smaller or easier problems.
Solution steps can be ignored or undone.
Predictable problem universe.
Good solutions are obvious.
Uses internally consistent knowledge base.
Requires lots of knowledge or uses knowledge to constrain
solutions.
Requires periodic interaction between human and computer.
Exhaustive searches
In computer science, brute-force search or exhaustive search, also
known as generate and test, is a very general problem-
solving technique and algorithmic paradigm that consists of
systematically enumerating all possible candidates for the solution
and checking whether each candidate satisfies the problem's
statement.
A brute-force algorithm to find the divisors of a natural
number n would enumerate all integers from 1 to n, and check
whether each of them divides n without remainder. A brute-force
approach for the eight queens puzzle would examine all possible
arrangements of 8 pieces on the 64-square chessboard, and, for each
arrangement, check whether each (queen) piece can attack any other.
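The divisor example above can be sketched in a few lines of Python (a minimal illustration of the brute-force idea):

```python
def divisors(n):
    """Brute force: test every integer from 1 to n for divisibility."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(12))  # [1, 2, 3, 4, 6, 12]
```

Note that the loop runs exactly n times, one iteration per candidate, which is what makes the cost proportional to the number of candidate solutions.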
While a brute-force search is simple to implement, and will always
find a solution if it exists, its cost is proportional to the number of
candidate solutions – which in many practical problems tends to grow
very quickly as the size of the problem increases (combinatorial
explosion).[1] Therefore, brute-force search is typically used when the
problem size is limited, or when there are problem-
specific heuristics that can be used to reduce the set of candidate
solutions to a manageable size. The method is also used when the
simplicity of implementation is more important than speed.
This is the case, for example, in critical applications where any errors
in the algorithm would have very serious consequences; or
when using a computer to prove a mathematical theorem. Brute-force
search is also useful as a baseline method when benchmarking other
algorithms or metaheuristics. Indeed, brute-force search can be
viewed as the simplest metaheuristic. Brute force search should not be
confused with backtracking, where large sets of solutions can be
discarded without being explicitly enumerated (as in the textbook
computer solution to the eight queens problem above). The brute-
force method for finding an item in a table – namely, check all entries
of the latter, sequentially – is called linear search.
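The linear search just described can be sketched as follows (a minimal version; the return convention of -1 for "not found" is an illustrative choice):

```python
def linear_search(table, target):
    """Brute-force lookup: check all entries of the table sequentially."""
    for index, entry in enumerate(table):
        if entry == target:
            return index   # position of the first match
    return -1              # target is not present in the table
```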
Heuristic search techniques
A heuristic is a technique for solving a problem faster than classic
methods, or for finding an approximate solution when classic methods
cannot. It is a kind of shortcut, as we often trade one of optimality,
completeness, accuracy, or precision for speed. A heuristic (or
heuristic function) is applied within search algorithms: at each
branching step, it evaluates the available information and makes a
decision on which branch to follow by ranking the alternatives.
A heuristic is any device that is often effective but is not
guaranteed to work in every case.
Heuristic Search Techniques in Artificial Intelligence
Best-First Search
A* Search
Bidirectional Search
Tabu Search
Beam Search
Simulated Annealing
Hill Climbing
Hill Climbing (outline):
1. Evaluate the initial state; if it is a goal state, stop and return
success. Otherwise, make the initial state the current state.
2. Apply an operator to produce a new state and evaluate it; if it is
the goal, return success; if it is better than the current state, make
it the current state. Even if it is not better than the current state,
continue until the solution is reached.
3. Exit.
Best-First Search (outline):
1. Place the start node s on a list called OPEN; create an empty list
called CLOSED.
2. If OPEN is empty, exit with failure.
3. Remove node n (the node with the best score) from OPEN and move it
to CLOSED.
4. Expand node n.
5. If any successor of n is the goal node, return success and trace
the path from the goal node back to s to return the solution.
6. Otherwise, add the successors of n to OPEN, ranked by their
heuristic scores, and loop to step 2.
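The OPEN/CLOSED loop above can be sketched in Python as a greedy best-first search; the graph, heuristic values, and node names in the usage below are illustrative assumptions, not part of the algorithm itself:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: OPEN is a priority queue ordered by h(n)."""
    open_list = [(h[start], start, [start])]   # (score, node, path so far)
    closed = set()
    while open_list:                            # step 2: fail when OPEN empties
        _, node, path = heapq.heappop(open_list)  # step 3: best-scoring node
        if node == goal:                        # step 5: goal reached
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):        # step 4: expand node n
            if succ not in closed:              # step 6: rank successors by h
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None                                 # no path exists

# Illustrative graph: S's neighbors are A and B; A leads to the goal G.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))  # ['S', 'A', 'G']
```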
So, this was all in Heuristic Search Techniques in AI. Hope you like
our explanation.
Iterative Deepening A*
Iterative deepening A* (IDA*) is a graph traversal and path
search algorithm that can find the shortest path between a designated
start node and any member of a set of goal nodes in a weighted graph.
It is a variant of iterative deepening depth-first search that borrows the
idea to use a heuristic function to evaluate the remaining cost to get to
the goal from the A* search algorithm. Since it is a depth-first search
algorithm, its memory usage is lower than in A*, but unlike ordinary
iterative deepening search, it concentrates on exploring the most
promising nodes and thus does not go to the same depth everywhere
in the search tree. Unlike A*, IDA* does not utilize dynamic
programming and therefore often ends up exploring the same nodes
many times.
While the standard iterative deepening depth-first search uses the
search depth as the cutoff for each iteration, IDA* uses the more
informative f(n) = g(n) + h(n), where g(n) is the cost to travel from
the root to node n and h(n) is a problem-specific heuristic estimate
of the cost to travel from n to the goal.
The algorithm was first described by Richard Korf in 1985.
Iterative-deepening-A* works as follows: at each iteration, perform a
depth-first search, cutting off a branch when its total cost exceeds a
given threshold.
This threshold starts at the estimate of the cost at the initial state, and
increases for each iteration of the algorithm. At each iteration, the
threshold used for the next iteration is the minimum cost of all values
that exceeded the current threshold.
Pseudocode
path              current search path (acts like a stack)
node              current node (last node in current path)
g                 the cost to reach the current node
f                 estimated cost of the cheapest path (root..node..goal)
h(node)           estimated cost of the cheapest path (node..goal)
cost(node, succ)  step cost function
is_goal(node)     goal test
successors(node)  node expanding function, expand nodes ordered by g + h(node)
ida_star(root)    return either NOT_FOUND or a pair with the best path and its cost

procedure ida_star(root)
    bound := h(root)
    path := [root]
    loop
        t := search(path, 0, bound)
        if t = FOUND then return (path, bound)
        if t = ∞ then return NOT_FOUND
        bound := t
    end loop
end procedure

procedure search(path, g, bound)
    node := path.last
    f := g + h(node)
    if f > bound then return f
    if is_goal(node) then return FOUND
    min := ∞
    for succ in successors(node) do
        if succ not in path then
            path.push(succ)
            t := search(path, g + cost(node, succ), bound)
            if t = FOUND then return FOUND
            if t < min then min := t
            path.pop()
        end if
    end for
    return min
end procedure
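A Python sketch of IDA*, including the recursive depth-first `search` helper that `ida_star` calls. The weighted graph and heuristic in the usage example are illustrative assumptions (the heuristic is admissible for that graph):

```python
import math

def ida_star(root, h, successors, cost, is_goal):
    """IDA*: repeated depth-first searches with an increasing f-cost bound."""
    bound = h(root)
    path = [root]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # branch exceeds threshold: cut off
        if is_goal(node):
            return "FOUND"
        minimum = math.inf
        for succ in successors(node):
            if succ not in path:          # avoid cycles on the current path
                path.append(succ)
                t = search(g + cost(node, succ), bound)
                if t == "FOUND":
                    return "FOUND"
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t == "FOUND":
            return path, bound
        if t == math.inf:
            return None                   # no path to any goal
        bound = t                         # next threshold: smallest f that exceeded the bound

# Illustrative weighted graph: the cheapest path S -> B -> G has cost 5.
graph = {'S': {'A': 1, 'B': 4}, 'A': {'G': 5}, 'B': {'G': 1}}
h = {'S': 2, 'A': 4, 'B': 1, 'G': 0}
result = ida_star('S', lambda n: h[n], lambda n: graph.get(n, {}),
                  lambda n, s: graph[n][s], lambda n: n == 'G')
print(result)  # (['S', 'B', 'G'], 5)
```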
Properties
Like A*, IDA* is guaranteed to find the shortest path leading from the
given start node to any goal node in the problem graph, if the heuristic
function h is admissible,[2] that is, h(n) ≤ h*(n) for all nodes n,
where h*(n) is the true cost of the shortest path from n to the nearest
goal (the "perfect heuristic").[3]
IDA* is beneficial when the problem is memory constrained. A*
search keeps a large queue of unexplored nodes that can quickly
fill up memory. By contrast, because IDA* does not remember any
node except the ones on the current path, it requires an amount of
memory that is only linear in the length of the solution that it
constructs. Its time complexity is analyzed by Korf et al. under the
assumption that the heuristic cost estimate h is consistent, meaning
that h(n) ≤ cost(n, n′) + h(n′) for all nodes n and all neighbors n′
of n; they conclude that, compared to a brute-force tree search over an
exponential-sized problem, IDA* achieves a smaller search depth (by a
constant factor), but not a smaller branching factor.
Constraint satisfaction
In artificial intelligence and operations research, constraint
satisfaction is the process of finding a solution to a set
of constraints that impose conditions that the variables must satisfy.[1]
A solution is therefore a set of values for the variables that satisfies
all constraints—that is, a point in the feasible region.
The techniques used in constraint satisfaction depend on the kind of
constraints being considered. Often used are constraints on a finite
domain, to the point that constraint satisfaction problems are typically
identified with problems based on constraints on a finite domain.
Such problems are usually solved via search, in particular a form
of backtracking or local search. Constraint propagation is another
method used on such problems; most such methods are incomplete in
general, that is, they may solve the problem or prove it unsatisfiable,
but not always. Constraint propagation methods are also used in
conjunction with search to make a given problem simpler to solve.
Other considered kinds of constraints are on real or rational numbers;
solving problems on these constraints is done via variable
elimination or the simplex algorithm.
Constraint satisfaction originated in the field of artificial
intelligence in the 1970s (see for example (Laurière 1978)). During
the 1980s and 1990s, embeddings of constraints into programming
languages were developed. Languages often used for constraint
programming are Prolog and C++.
As originally defined in artificial intelligence, constraints enumerate
the possible values a set of variables may take in a given world. A
possible world is a total assignment of values to variables
representing a way the world (real or imaginary) could be.[2]
Informally, a finite domain is a finite set of arbitrary elements. A
constraint satisfaction problem on such domain contains a set of
variables whose values can only be taken from the domain, and a set
of constraints, each constraint specifying the allowed values for a
group of variables. A solution to this problem is an evaluation of the
variables that satisfies all constraints. In other words, a solution is
a way of assigning a value to each variable such that all constraints
are satisfied by these values.
In some circumstances, there may exist additional requirements: one
may be interested not only in the solution (and in the fastest or most
computationally efficient way to reach it) but in how it was reached;
e.g. one may want the "simplest" solution ("simplest" in a logical,
non-computational sense that has to be precisely defined). This is
often the case in logic games such as Sudoku.
Solving
Constraint satisfaction problems on finite domains are typically
solved using a form of search. The most used techniques are variants
of backtracking, constraint propagation, and local search. These
techniques are used on problems with nonlinear constraints.
Variable elimination and the simplex algorithm are used for
solving linear and polynomial equations and inequalities, and
problems containing variables with infinite domain. These are
typically solved as optimization problems in which the optimized
function is the number of violated constraints.
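A minimal backtracking solver for a finite-domain CSP can be sketched as follows; the map-coloring instance in the usage example (variables, domains, and the `consistent` check) is a hypothetical illustration:

```python
def backtrack(variables, domains, consistent, assignment=None):
    """Backtracking search: assign variables one at a time, undoing
    any assignment that cannot be extended to a full solution."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):  # check constraints before descending
            assignment[var] = value
            result = backtrack(variables, domains, consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                 # undo and try the next value
    return None                                 # no value works: backtrack further

# Illustrative instance: 3-coloring a triangle of mutually adjacent regions.
neighbors = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
ok = lambda var, value, asg: all(asg.get(n) != value for n in neighbors[var])
solution = backtrack(['A', 'B', 'C'],
                     {v: ['red', 'green', 'blue'] for v in 'ABC'}, ok)
print(solution)
```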
Constraint satisfaction toolkits
Constraint satisfaction toolkits are software
libraries for imperative programming languages that are used to
encode and solve a constraint satisfaction problem.
Cassowary constraint solver, an open source project for
constraint satisfaction (accessible from C, Java, Python and
other languages).
Comet, a commercial programming language and toolkit
Gecode, an open source portable toolkit written in C++
developed as a production-quality and highly efficient
implementation of a complete theoretical background.
Gelisp, an open source portable wrapper[4] of Gecode for Lisp:
http://gelisp.sourceforge.net/
Game Playing:
Game Playing is an important domain of artificial intelligence.
Games don’t require much knowledge; the only knowledge we need
to provide is the rules, legal moves and the conditions of winning or
losing the game.
Both players try to win the game, so each of them tries to make the
best possible move at each turn. Searching techniques like
BFS (Breadth-First Search) are not suitable for this, as the branching
factor is very high, so searching would take a lot of time.
The most common search technique in game playing is the Minimax
search procedure. It is a depth-first, depth-limited search procedure,
used for games like chess and tic-tac-toe.
Step 1: In the first step, the algorithm generates the entire game tree and
applies the utility function to get the utility values for the terminal states.
In the tree diagram below, let A be the initial state of the tree. Suppose the
maximizer takes the first turn, with a worst-case initial value of -∞, and the
minimizer takes the next turn, with a worst-case initial value of +∞.
Step 2: Now, first we find the utility values for the maximizer. Its initial
value is -∞, so we compare each terminal value with the maximizer's initial
value and determine the higher node values; it finds the maximum among them all.
o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step, it is the minimizer's turn, so it compares all node
values with +∞ and finds the third-layer node values.
o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Now it is the maximizer's turn again; it chooses the maximum of all
node values to find the value of the root node. In this game tree there are
only four layers, so we reach the root node immediately, but in real games
there will be many more layers.
o For node A: max(4, -3) = 4
That was the complete workflow of the minimax two player game.
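The walkthrough above can be reproduced with a short recursive implementation. The tree below encodes the example's terminal values; the leaf names (d1, d2, ...) are made up purely for illustration:

```python
def minimax(node, tree, values, maximizing):
    """Return the minimax value of a node: terminals return their
    utility; MAX nodes take the max of children, MIN nodes the min."""
    if node not in tree:                       # terminal state
        return values[node]
    child_values = [minimax(c, tree, values, not maximizing)
                    for c in tree[node]]
    return max(child_values) if maximizing else min(child_values)

# Game tree from the example: A (MAX) -> B, C (MIN) -> D..G (MAX) -> leaves.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
values = {'d1': -1, 'd2': 4, 'e1': 2, 'e2': 6,
          'f1': -3, 'f2': -5, 'g1': 0, 'g2': 7}

print(minimax('A', tree, values, True))  # 4, matching the walkthrough
```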
These types of games have a huge branching factor, and the player has many
choices to decide among. This limitation of the minimax algorithm can be
improved upon by alpha-beta pruning, which is discussed in the next topic.
Alpha-Beta Pruning
The main condition required for alpha-beta pruning is: α >= β
Key points about alpha-beta pruning:
o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper
nodes instead of values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.
Pseudo-code for Alpha-beta Pruning:
function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then        // for Maximizer Player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva
    else                            // for Minimizer Player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break
        return minEva
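The pseudocode translates directly to Python. It is exercised here on the four-layer game tree from the minimax example in the previous topic (the dictionary encoding of that tree, with made-up leaf names, is an illustrative assumption):

```python
import math

def alphabeta(node, tree, values, alpha, beta, maximizing):
    """Minimax with alpha-beta cutoffs: stop expanding children
    of a node as soon as beta <= alpha."""
    if node not in tree:                       # terminal node
        return values[node]
    if maximizing:
        best = -math.inf
        for child in tree[node]:
            best = max(best, alphabeta(child, tree, values, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                  # beta cutoff: prune remaining children
                break
        return best
    else:
        best = math.inf
        for child in tree[node]:
            best = min(best, alphabeta(child, tree, values, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:                  # alpha cutoff
                break
        return best

# Same tree as the minimax example: A (MAX) -> B, C (MIN) -> D..G (MAX) -> leaves.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
values = {'d1': -1, 'd2': 4, 'e1': 2, 'e2': 6,
          'f1': -3, 'f2': -5, 'g1': 0, 'g2': 7}

print(alphabeta('A', tree, values, -math.inf, math.inf, True))  # 4
```

Pruning never changes the root value; it only avoids exploring subtrees that cannot affect it, so the result agrees with plain minimax.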
Step 1: In the first step, the Max player starts with the first move from
node A, where α = -∞ and β = +∞. These values of alpha and beta are passed
down to node B, where again α = -∞ and β = +∞, and node B passes the same
values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The
value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes
the value of α at node D, and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β
changes, as this is Min's turn: β = min(+∞, 3) = 3. At node B, α = -∞ and
β = 3, and these values are passed on to its next child, node E.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The
current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node
E, α = 5 and β = 3, where α >= β, so the right successor of E is pruned,
and the algorithm does not traverse it; the value at node E is 5.
Step 5: In the next step, the algorithm again backtracks the tree, from
node B to node A. At node A, the value of alpha changes to the maximum
available value, 3, as max(-∞, 3) = 3, and β = +∞. These two values are now
passed on to the right successor of A, node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of alpha is compared with its left child, 0:
max(3, 0) = 3, and then with its right child, 1: max(3, 1) = 3, so α stays
3, but the node value of F becomes 1.
Step 7: Node F returns its node value 1 to node C. At C, α = 3, and β
changes to min(+∞, 1) = 1. Now α >= β again, so the remaining child of C,
node G, is pruned, and the algorithm does not compute its subtree.
Step 8: C returns the value 1 to A, and the best value for A is
max(3, 1) = 3. The final game tree shows the nodes that were computed and
the nodes that were never computed. Hence, the optimal value for the
maximizer is 3 in this example.
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the
order in which each node is examined. Move order is an important
aspect of alpha-beta pruning.
In a game, each player has to decide:
o What to do.
o How to decide the move.
o The player needs to think about the opponent as well.
o The opponent is also thinking about what to do.
Each player tries to find out the opponent's response to their actions.
This requires embedded thinking, or backward reasoning, to solve game
problems in AI.
Game tree:
A game tree is a tree whose nodes are the game states and whose edges are
the moves made by the players. A game tree involves an initial state, an
actions function, and a result function.
o It aims to find the optimal strategy for MAX to win the game.
o It follows the approach of depth-first search.
o In the game tree, the optimal leaf node could appear at any depth of
the tree.
o Minimax values are propagated up the tree, starting from the terminal
nodes, until the value of the root is determined.
In a given game tree, the optimal strategy can be determined from the
minimax value of each node, which can be written as MINIMAX(n). MAX
prefers to move to a state of maximum value and MIN prefers to move to a
state of minimum value, so:
MINIMAX(n) =
    UTILITY(n)                            if n is a terminal state
    max of MINIMAX(s) over successors s   if it is MAX's turn to move at n
    min of MINIMAX(s) over successors s   if it is MIN's turn to move at n