
What is Artificial Intelligence?

In the simplest terms, AI (artificial intelligence) refers to systems or
machines that mimic human intelligence to perform tasks and can iteratively
improve themselves based on the information they collect.
AI manifests in a number of forms. A few examples are:
• Chatbots use AI to understand customer problems faster and provide more
efficient answers.
• Intelligent assistants use AI to parse critical information from large
free-text datasets to improve scheduling.
• Recommendation engines can provide automated recommendations for TV shows
based on users' viewing habits.
Applications
1. Education
2. Entertainment
3. Medical
4. Military
5. Business and Manufacturing
6. Automated Planning and scheduling
7. Voice Technology
8. Heavy Industry
Current Trends in AI
1. Deep Learning
2. Machine Learning
3. AI replacing workers
4. Internet of Things
5. Emotional AI
6. AI in shopping and customer service
7. Ethical AI
AI Terms
1. Agents and their Environment
I. Percept
II. Percept Sequence
III. Agent Function
IV. Agent Program
Architecture of Agents [Components of an AI Program]

Agent = Architecture + Program
AI Agents Performing Actions
Role of an Agent Program
State Space Search
• A state space is the set of all configurations
that a given problem and its environment
could achieve. Each configuration is called a
state. A state contains static information; this is
often extracted and held separately, e.g., in
the knowledge base of the agent.
Representation
• In state space search, a state space is formally
represented as a tuple
⟨S, A, Action(s), Result(s, a), Cost(s, a)⟩
where S is the set of possible states, A the set of actions,
Action(s) the actions applicable in state s, Result(s, a) the
state reached by applying action a in s, and Cost(s, a) the
cost of that action.
Example 1: Xs and Os
State space representation of a
problem
• All the states the system can be in are represented as nodes of a graph.
• An action that can change the system from one state to another (e.g. a
move in a game) is represented by a link from one node to another.
• Links may be unidirectional (e.g. Xs and Os, chess, can't go back)
or bi-directional (e.g. geographic move).
• Search for a solution.
• A solution might be:
– Any path from start state to goal state.
– The best (e.g. lowest cost) path from start state to goal state (e.g. Travelling
salesman problem).
• It may be possible to reach the same state through many different paths
(obviously true in Xs and Os).
• There may be loops in the graph (can go round in circle). No loops in Xs
and Os.
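The state-space-as-graph idea above can be sketched in Python; the tiny graph, its node names, and the depth-first path finder below are illustrative assumptions, not part of any particular problem:

```python
# A sketch of the state-space-as-graph idea: states are nodes, actions
# are links, and a solution is a path from the start state to a goal
# state. The graph below is a made-up illustration.

state_space = {                     # node -> states reachable by one action
    "start": ["s1", "s2"],
    "s1": ["goal"],
    "s2": ["s1"],                   # same state reachable via many paths
    "goal": [],
}

def find_a_path(space, state, goal, path=()):
    """Depth-first enumeration of one start-to-goal path."""
    path = path + (state,)
    if state == goal:
        return list(path)
    for nxt in space[state]:
        if nxt not in path:         # avoid going round in circles
            found = find_a_path(space, nxt, goal, path)
            if found:
                return found
    return None

print(find_a_path(state_space, "start", "goal"))  # ['start', 's1', 'goal']
```

Here any start-to-goal path counts as a solution; a cost-aware search (as in the TSP example) would instead compare paths by total cost.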
Example 2 (TSP)
• Travelling salesman problem.
Start at A, visit all cities, return to A. Links
show cost of each trip (distance, money). Find
trip with minimum cost.
Solution is a path. e.g. [A,D,C,B,E,A]
Example 3 (8-PUZZLE)
• The puzzle can be solved by moving the tiles
one by one in the single empty space and thus
achieving the Goal state. Instead of moving
the tiles in the empty space we can visualize
moving the empty space in place of the tile.
The empty space cannot move diagonally and
can take only one step at a time.
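The legal moves of the empty space described above can be sketched as a successor function; the flat-tuple encoding of the board (0 for the blank) is an illustrative assumption:

```python
# A sketch of generating 8-puzzle successor states by "moving the empty
# space" (encoded as 0) up, down, left, or right.

def successors(state):
    """state is a tuple of 9 ints, 0 is the blank; returns next states."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # no diagonal moves
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:                  # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]                    # swap blank and tile
            moves.append(tuple(s))
    return moves

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)  # blank in the centre has 4 moves
print(len(successors(start)))        # 4
```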
Types of Environments in AI
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
Agents in Artificial Intelligence
Agents can be grouped into classes based on
their degree of perceived intelligence and
capability :
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple reflex agents
• Simple reflex agents ignore the rest of the percept history and act
only on the basis of the current percept. Percept history is the
history of all that an agent has perceived to date. The agent function
is based on the condition-action rule. A condition-action rule is a
rule that maps a state (i.e., a condition) to an action. If the condition is
true, then the action is taken; otherwise it is not. This agent function only
succeeds when the environment is fully observable.
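A condition-action agent of this kind can be sketched with the classic two-square vacuum world; the percept format, the locations "A" and "B", and the rules themselves are illustrative assumptions, not from any specific system:

```python
# A minimal simple reflex agent: the current percept alone (no percept
# history) is mapped to an action by condition-action rules.

def reflex_vacuum_agent(percept):
    """percept is (location, status); return an action."""
    location, status = percept
    if status == "Dirty":        # rule: current square dirty -> suck
        return "Suck"
    elif location == "A":        # rule: clean at A -> move right
        return "Right"
    else:                        # rule: clean at B -> move left
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```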
Model-based reflex agents
• It works by finding a rule whose condition matches the current situation. A
model-based agent can handle partially observable environments by the use of a
model about the world. The agent has to keep track of the internal state which is
adjusted by each percept and that depends on the percept history. The current state
is stored inside the agent which maintains some kind of structure describing the part
of the world which cannot be seen.
• Updating the state requires information about :
• how the world evolves independently from the agent, and
• how the agent’s actions affect the world.
Goal-based agents
• These kinds of agents take decisions based on how far they currently are
from their goal (a description of desirable situations). Their every action is
intended to reduce their distance from the goal. This allows the agent a way
to choose among multiple possibilities, selecting the one which reaches a
goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible.
They usually require search and planning. The goal-based agent’s behavior
can easily be changed.
Utility-based agents
• Agents developed with their end uses as building blocks are called
utility-based agents. When there are multiple possible alternatives, to decide
which one is best, utility-based agents are used. They choose actions based on
a preference (utility) for each state. Sometimes achieving the desired goal is not
enough. We may look for a quicker, safer, cheaper trip to reach a destination. Agent
happiness should be taken into consideration. Utility describes how “happy” the
agent is. Because of the uncertainty in the world, a utility agent chooses the action
that maximizes the expected utility. A utility function maps a state onto a real
number which describes the associated degree of happiness.
Learning Agent
• A learning agent in AI is an agent that can learn from its past
experiences; it has learning capabilities.
• Learning element: It is responsible for making improvements by
learning from the environment.
• Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external actions.
• Problem Generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
All search methods can be broadly
classified into two categories:
• Uninformed (or Exhaustive or Blind) methods,
where the search is carried out without any
additional information beyond what is provided in
the problem statement. Some examples include
Breadth First Search, Depth First Search, etc.
• Informed (or Heuristic) methods, where search is
carried out by using additional information to
determine the next step towards finding the
solution. Best First Search is an example of such
algorithms.
Informed Search vs Uninformed Search
• Informed search uses knowledge for the searching process; uninformed
search does not.
• Informed search finds a solution more quickly; uninformed search is
slower by comparison.
• Informed search may or may not be complete; uninformed search is
always complete.
• The cost of informed search is low; the cost of uninformed search is
high.
• Informed search consumes less time; uninformed search consumes
moderate time.
• Informed search provides direction regarding the solution; uninformed
search gives no such suggestion.
• Informed search is less lengthy to implement; uninformed search is
more lengthy.
• Examples of informed search: Greedy Search, A* Search, Graph Search.
Examples of uninformed search: Depth First Search, Breadth First Search.
BFS(Breadth First Search)
BFS (Breadth First Search) is a vertex-based
technique for finding a shortest path in a
graph. It uses a Queue data structure, which
follows first in, first out. In BFS, one vertex is
selected at a time: it is visited and marked,
then its adjacent vertices are visited and stored
in the queue. It is slower than DFS.
BFS(Breadth First Search)
A standard BFS implementation puts each vertex of the graph into
one of two categories:
• Visited
• Not Visited
The purpose of the algorithm is to mark each vertex as visited while
avoiding cycles.
The algorithm works as follows:
1. Start by putting any one of the graph's vertices at the back of a
queue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which
aren't in the visited list to the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
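The four steps above can be sketched in Python; the example graph and its vertex labels are assumptions for illustration (here vertices are marked visited when enqueued, which prevents the same vertex entering the queue twice):

```python
from collections import deque

def bfs(graph, start):
    """Visit every vertex reachable from start, level by level."""
    visited = [start]
    queue = deque([start])          # step 1: seed the queue
    while queue:                    # step 4: repeat until the queue is empty
        vertex = queue.popleft()    # step 2: take the front item
        for nbr in graph[vertex]:   # step 3: enqueue unvisited neighbours
            if nbr not in visited:
                visited.append(nbr)
                queue.append(nbr)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```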
Advantages and Disadvantages of BFS
• Advantages:
1. A BFS will find the shortest path between the
starting point and any other reachable node.
A depth-first search will not necessarily find
the shortest path.
• Disadvantages
1. A BFS on a binary tree generally requires
more memory than a DFS.
DFS (Depth First Search )
DFS (Depth First Search) is an edge-based
technique. It uses the Stack data
structure and performs two stages: first, visited
vertices are pushed onto the stack; second, if
there are no unvisited vertices left, visited
vertices are popped.
Depth First Search

A standard DFS implementation puts each vertex of the graph into
one of two categories:
• Visited
• Not Visited
The purpose of the algorithm is to mark each vertex as visited while
avoiding cycles.
• The DFS algorithm works as follows:
1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which
aren't in the visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
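The four steps above can be sketched in Python on the same kind of illustrative graph; reversing the neighbour list when pushing keeps the left-to-right visiting order:

```python
def dfs(graph, start):
    """Visit every vertex reachable from start, going deep first."""
    visited = []
    stack = [start]                  # step 1: seed the stack
    while stack:                     # step 4: repeat until the stack is empty
        vertex = stack.pop()         # step 2: take the top item
        if vertex not in visited:
            visited.append(vertex)
            # step 3: push unvisited neighbours onto the stack
            for nbr in reversed(graph[vertex]):
                if nbr not in visited:
                    stack.append(nbr)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```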
Advantages and Disadvantages of DFS
• Advantages:
1. Depth-first search on a binary
tree generally requires less memory than
breadth-first.
2. Depth-first search can be easily implemented
with recursion.
• Disadvantages
1. A DFS doesn't necessarily find the shortest path
to a node, while breadth-first search does.
BFS vs DFS
• BFS stands for Breadth First Search; DFS stands for Depth First Search.
• BFS uses a Queue data structure for finding the shortest path; DFS
uses a Stack data structure.
• BFS can be used to find the single-source shortest path in an
unweighted graph, because it reaches a vertex with the minimum number
of edges from the source; in DFS, we might traverse more edges to
reach a destination vertex from the source.
• BFS is more suitable for searching vertices closer to the given
source; DFS is more suitable when there are solutions away from the
source.
• The time complexity of both BFS and DFS is O(V + E) when an adjacency
list is used and O(V^2) when an adjacency matrix is used, where V
stands for vertices and E for edges.
• In BFS, siblings are visited before children; in DFS, children are
visited before siblings.
Heuristic Function
• The purpose of a heuristic function is to guide
the search process along the most profitable path
among all those available.
Best First Search
• Best first search is a traversal technique that decides
which node is to be visited next by checking which
node is the most promising one, and then checking it. For
this it uses an evaluation function to decide the
traversal.
• This best first search technique of tree traversal comes
under the category of heuristic search, or informed
search, techniques.
• The cost of nodes is stored in a priority queue. This
makes the implementation of best-first search similar to
that of breadth first search: we use the priority
queue just as we use a queue for BFS.
Steps
• Create 2 empty lists: OPEN and CLOSED
• Start from the initial node (say N) and put it in the ‘ordered’ OPEN
list
• Repeat the next steps until GOAL node is reached
– If OPEN list is empty, then EXIT the loop returning ‘False’
– Select the first/top node (say N) in the OPEN list and move it to the
CLOSED list. Also capture the information of the parent node
– If N is a GOAL node, then move the node to the Closed list and exit the
loop returning ‘True’. The solution can be found by backtracking the
path
– If N is not the GOAL node, expand node N to generate the ‘immediate’
next nodes linked to node N and add all those to the OPEN list
– Reorder the nodes in the OPEN list in ascending order according to an
evaluation function f(n)
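The OPEN/CLOSED procedure above can be sketched with a priority queue ordered by an evaluation function f(n) = h(n) (greedy best-first); the example graph and heuristic values are illustrative assumptions:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: the OPEN list is a heap ordered by h(n)."""
    open_list = [(h[start], start)]           # the 'ordered' OPEN list
    closed, parent = set(), {start: None}
    while open_list:                          # empty OPEN list -> failure
        _, node = heapq.heappop(open_list)    # pop the best (lowest h) node
        closed.add(node)
        if node == goal:                      # backtrack via parent links
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph[node]:               # expand immediate successors
            if nbr not in closed and nbr not in parent:
                parent[nbr] = node
                heapq.heappush(open_list, (h[nbr], nbr))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(best_first_search(graph, h, "S", "G"))  # ['S', 'A', 'G']
```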
Advantages and Disadvantages of Best
First Search
• Advantages:
1. Can switch between BFS and DFS, thus
gaining the advantages of both.
2. More efficient when compared to DFS.
• Disadvantages:
1. Chances of getting stuck in a loop are
higher.
A*
• The A* algorithm works on the vertices of a graph: it
starts at the object's starting point and then
repeatedly examines the next most promising unexamined
vertex, adding that vertex's neighbors to the set of
vertices to be examined.
• The A* algorithm is popular because it is a technique
used for pathfinding and graph traversal. This
algorithm is used by many web-based maps and games.
• The A* algorithm is optimal. It relies on an open list as
well as a closed list to find a path that is optimal and
complete towards the goal.
Parameters:
A* algorithm has 3 parameters:
• g : the cost of moving from the initial cell to the
current cell; basically, the sum of the costs of all the
steps taken since leaving the first cell.
• h : also known as the heuristic value, it is
the estimated cost of moving from the current cell
to the final cell. The actual cost cannot be
calculated until the final cell is reached; hence, h
is an estimate. We must make sure that h
never overestimates the true cost.
• f : it is the sum of g and h. So, f = g + h
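Using the three parameters above, a minimal A* sketch might look as follows; the weighted graph and heuristic values are illustrative assumptions (h never overestimates the true cost here):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* on a weighted graph: graph[u] is a list of (v, edge_cost)."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                           # cheapest known g per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost                         # g: cost from the start
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                # f = g + h; with an admissible h, A* finds an optimal path
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 4, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'B', 'G'] 5
```

Note how the direct-looking route through A (total cost 6) is rejected in favour of the cheaper route through B.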
Hill climbing
• Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak
value where no neighbor has a higher value.
• The hill climbing algorithm is a technique used for
optimizing mathematical problems. One of the widely
discussed examples of the hill climbing algorithm is the
Traveling Salesman Problem, in which we need to minimize
the distance traveled by the salesman.
• It is also called greedy local search, as it looks only at
its immediate neighbor states and not beyond them.
Contd….
• A node of hill climbing algorithm has two
components which are state and value.
• Hill Climbing is mostly used when a good
heuristic is available.
• In this algorithm, we don't need to maintain
and handle the search tree or graph as it only
keeps a single current state.
Algorithm
1. Evaluate the initial state
2. Loop until the solution is found or no new
operator left.
i. Select and apply the new operator
ii. Evaluate the new state
a. If it is the goal, then quit.
b. If it is better than the current state, then it
becomes the new current state.
Features of Hill Climbing:
• Generate and Test variant: Hill Climbing is a
variant of the Generate and Test method. The
Generate and Test method produces feedback
which helps to decide which direction to move in
the search space.
• Greedy approach: Hill-climbing algorithm search
moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack the search
space, as it does not remember the previous
states.
Different regions in the state space
landscape:
• Local Maximum: Local maximum is a state which is
better than its neighbor states, but there is also
another state which is higher than it.
• Global Maximum: Global maximum is the best
possible state of state space landscape. It has the
highest value of objective function.
• Current state: It is a state in a landscape diagram
where an agent is currently present.
• Flat local maximum: It is a flat space in the landscape
where all the neighbor states of current states have
the same value.
• Ridge: a region higher than its surroundings but with a
slope; a complicated type of landscape for search.
Types of Hill Climbing
• Simple hill Climbing:
• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:
Simple Hill Climbing:
• Simple hill climbing is the simplest way to implement
a hill climbing algorithm. It evaluates only one
neighbor node state at a time and selects the first
one which improves the current cost, setting it as the
current state. It checks only one successor state,
and if that successor is better than the current state,
it moves there; otherwise it stays in the same state.
This algorithm has the following features:
• Less time consuming
• Less optimal solution and the solution is not
guaranteed
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state, if it is goal state then return
success and Stop.
• Step 2: Loop Until a solution is found or there is no new
operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check new state:
– If it is goal state, then return success and quit.
– Else if it is better than the current state then assign new
state as a current state.
– Else if not better than the current state, then return to step 2.
• Step 5: Exit.
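The five steps above can be sketched on a toy objective; the function f(x) = -(x - 3)^2 and the +-1 neighborhood of an integer state are illustrative assumptions:

```python
import random

def f(x):
    """Toy objective with a single peak at x = 3."""
    return -(x - 3) ** 2

def simple_hill_climb(start):
    current = start
    while True:
        neighbours = [current - 1, current + 1]
        random.shuffle(neighbours)            # apply operators in any order
        for n in neighbours:                  # take the FIRST improvement
            if f(n) > f(current):
                current = n
                break
        else:                                 # no neighbour is better: stop
            return current

print(simple_hill_climb(-10))  # 3
```

On this single-peak objective the climb always ends at the global maximum; with multiple peaks it could stop at a local maximum, as discussed below.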
Steepest-Ascent hill climbing:
• The steepest-Ascent algorithm is a variation of
simple hill climbing algorithm. This algorithm
examines all the neighboring nodes of the
current state and selects one neighbor node
which is closest to the goal state. This
algorithm consumes more time, as it searches
through multiple neighbors.
Stochastic hill climbing:
• Stochastic hill climbing does not examine all
of its neighbors before moving. Rather, this
search algorithm selects one neighbor node at
random and decides whether to choose it as the
current state or examine another state.
Problems in Hill Climbing
• Local Maximum
• Plateau
• Ridges
Local Maximum
A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but there is another state also present
which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local
maximum in the state space landscape. Create a list of promising paths
so that the algorithm can backtrack in the search space and explore
other paths as well.
Plateau
A plateau is a flat area of the search space in which all the
neighbor states of the current state contain the same value;
because of this, the algorithm cannot find a best direction to
move. A hill-climbing search might get lost in the plateau
area.
Solution: Take big steps, or very small steps, while
searching. Alternatively, randomly select a state far away
from the current state, so that it is possible for the
algorithm to find a non-plateau region.
Ridges
A ridge is a special form of the local maximum. It is an
area higher than its surrounding areas, but it has a
slope, so the peak cannot be reached in a single move.
Solution: Using bidirectional search, or moving in
different directions, can mitigate this problem.
Advantages of Hill Climbing:

• The hill climbing technique is very useful in job
shop scheduling, automatic programming,
circuit designing, and vehicle routing.
• Hill climbing is also helpful to solve pure
optimization problems where the objective is
to find the best state according to the objective
function.
Beam Search
• A heuristic search algorithm that examines a graph
by extending the most promising node in a limited
set is known as beam search.
Beam search is a heuristic search technique that
always expands the W best nodes at each level. It
progresses level by level and moves downwards only
from the best W nodes at each level. Beam search
constructs its search tree using breadth-first
search: it generates all the successors of the
current level's states at each level of the tree.
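A level-by-level sketch of this idea, keeping only the W best nodes at each level; the graph, the heuristic values, and W = 2 are illustrative assumptions:

```python
import heapq

def beam_search(graph, h, start, goal, w=2):
    """Return True if goal is reached while keeping only w nodes per level."""
    level = [start]
    visited = {start}
    while level:
        if goal in level:
            return True
        successors = []
        for node in level:                    # expand the whole level (BFS)
            for nbr in graph[node]:
                if nbr not in visited:
                    visited.add(nbr)
                    successors.append(nbr)
        # keep only the w most promising successors (smallest h)
        level = heapq.nsmallest(w, successors, key=lambda n: h[n])
    return False

graph = {"S": ["A", "B", "C"], "A": ["G"], "B": [], "C": [], "G": []}
h = {"S": 3, "A": 1, "B": 2, "C": 5, "G": 0}
print(beam_search(graph, h, "S", "G"))  # True
```

Note that with w = 2 the node C is pruned at the first level, so beam search is not complete: a goal hidden behind a pruned node would never be found.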
Tabu Search
• Tabu Search is a commonly used
meta-heuristic for optimizing model
parameters. A meta-heuristic is a general
strategy that is used to guide and control actual
heuristics. Tabu Search is often regarded as
integrating memory structures into local
search strategies.
Examples of Problems to Solve with
TS
• N-Queens Problem
• Traveling Salesman Problem (TSP)
• Minimum Spanning Tree (MST)
• Assignment Problems
• Vehicle Routing
• DNA Sequencing
Advantages and Disadvantages of TS

Advantages
• Can escape local optima by accepting non-improving
solutions
• The Tabu List can be used to avoid cycles and reverting to
old solutions
• Can be applied to both discrete and continuous solutions

Disadvantages
• Number of iterations can be very high
• There are a lot of tunable parameters in this algorithm
Simulated annealing
• Simulated Annealing (SA) is an effective and general form of
optimization. It is useful in finding global optima in the
presence of large numbers of local optima. “Annealing” refers
to an analogy with thermodynamics, specifically with the way
that metals cool and anneal. Simulated annealing uses the
objective function of an optimization problem instead of the
energy of a material.
• Implementation of SA is surprisingly simple. The algorithm is
basically hill-climbing except instead of picking the best
move, it picks a random move. If the selected move improves
the solution, then it is always accepted. Otherwise, the
algorithm makes the move anyway with some probability less
than 1. The probability decreases exponentially with the
“badness” of the move, which is the amount deltaE by which
the solution is worsened (i.e., energy is increased.)
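The accept-or-reject rule described above can be sketched on a toy minimization problem; the objective E(x) = (x - 3)^2, the +-1 random move, and the geometric cooling schedule are all illustrative assumptions:

```python
import math
import random

def energy(x):
    """Toy 'energy' to minimise, with its global minimum at x = 3."""
    return (x - 3) ** 2

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=500):
    current = start
    t = t0
    for _ in range(steps):
        candidate = current + random.choice([-1, 1])   # a random move
        delta_e = energy(candidate) - energy(current)
        # always accept improvements; accept worse moves with
        # probability exp(-deltaE / T), which shrinks as T cools
        if delta_e <= 0 or random.random() < math.exp(-delta_e / t):
            current = candidate
        t *= cooling
    return current

random.seed(0)
print(simulated_annealing(30))
```

Early on (high T) the walk can accept uphill moves and escape local optima; as T cools the rule degenerates into plain hill climbing near the minimum.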
Advantages and Disadvantages
Advantages
• Easy to code, even for complex problems.
• Gives a good solution.
• Statistically guarantees finding an optimal solution.

Disadvantages
• Slow process.
• Cannot tell whether the optimal solution has been found.
