Agenda
• History of AI
• Researchers and computer scientists like Alan Turing, John McCarthy,
Marvin Minsky and Geoffrey Hinton
• Key concepts like Turing Test
• Difference between AI and ML
Definition of AI
Definitions centred on HUMAN behaviour and thought:
• “The exciting new effort to make computers think … machines with minds …” (Haugeland, 1985)
• “Activities that we associate with human thinking, activities such as decision-making, problem solving, learning …” (Bellman, 1978)

Definitions centred on RATIONAL behaviour and thought:
• “The study of mental faculties through the use of computational models” (Charniak and McDermott, 1985)
• “The study of the computations that make it possible to perceive, reason, and act” (Winston, 1992)
AI Foundations?
AI inherited many ideas, viewpoints and techniques from other disciplines:
• Psychology: to investigate the human mind
• Philosophy: theories of reasoning and learning
• Linguistics: the meaning and structure of language
• Mathematics: theories of logic, probability, decision making and computation
• Computer Science: makes AI a reality
The Turing Test
(Can Machine think? A. M. Turing, 1950)
• Requires:
– Natural language processing
– Knowledge representation
– Automated reasoning
– Machine learning
– (vision, robotics) for full test
The Turing Test
The Turing test is an assessment to determine whether a machine is
able to exhibit intelligence indistinguishable from that of a human.
There are now many variations of the Turing test, and as AI technology
continues to advance, new lines of thinking are emerging about how to
determine intelligence. These carry many nuances, and more work
remains to be done in this area.
History of AI
The gestation of Artificial Intelligence (1943-55)
• The first work that is now generally recognized as AI was done by
Warren McCulloch and Walter Pitts (1943).
• They proposed a model of artificial neurons
• Two undergraduate students at Harvard, Marvin Minsky and Dean
Edmonds, built the first neural network computer in 1950.
• The SNARC, as it was called, used 3000 vacuum tubes and a surplus
automatic pilot mechanism from a B-24 bomber to simulate a
network of 40 neurons.
Biological Neural Networks
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators/ effectors
• Robotic agent:
  • Sensors: cameras (for image analysis), infrared range finders, solar sensors
  • Actuators: various motors, speakers, wheels
• Expert system:
  • Example: a cardiologist expert system
Unit 1: Introduction (9/30/2023)
What is an Intelligent Agent?
• Rational agents
  • An agent should strive to "do the right thing",
    based on what it can perceive and the actions it can perform.
  • The right action is the one that will cause the agent to be
    most successful.
• Perfect rationality (the agent knows everything and always takes the correct action)
  • Humans do not satisfy this rationality.
• Bounded rationality
  • Humans use approximations.
• Basic types:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Simple reflex agent
• Selects actions on the basis of the current percept only.

Model-based reflex agent
• Keeps an internal model of the world state (how detailed? e.g. "infers potentially dangerous driver in front").

Goal-based agent
• Keeps track of the world state as well as the set of goals it is trying to achieve: chooses
  actions that will (eventually) lead to the goal(s).
• Considers the "future", e.g. the goal "clean kitchen".
• More flexible than reflex agents → may involve search and planning.
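A simple reflex agent can be sketched in a few lines of code. The two-square vacuum world below is a standard textbook illustration, not something defined on these slides; the function name and percept format are my own choices.

```python
# Simple reflex agent for a two-square vacuum world (illustrative sketch).
# The agent sees only the current percept (location, dirty?) -- no history,
# no internal model: the same percept always produces the same action.

def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "Suck"       # condition-action rule: dirt -> suck
    elif location == "A":
        return "Right"      # otherwise move to the other square
    else:
        return "Left"

print(reflex_vacuum_agent(("A", True)))   # -> Suck
print(reflex_vacuum_agent(("B", False)))  # -> Left
```

Because the rule depends only on the current percept, the agent cannot, for example, remember which squares it has already cleaned; that is exactly the limitation the model-based agent removes.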
Utility-based agents (Module: Decision Making)
• Use a utility function to rank states when goals alone do not decide,
  e.g. preferring a route with no quick turns on the way to the goal state.

Learning agents (Module: Learning)
• Improve their behaviour over time from experience.
Example problem: Pegs and Disks problem
Now we will describe a sequence of actions that can be applied to the initial state.
Step 1: Move A → C
Step 2: Move A → B
Step 3: Move A → C
Step 4: Move B → A
Step 5: Move C → B
Step 6: Move A → B
Step 7: Move C → B
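The move sequence above can be simulated with a short sketch. The slides do not show the initial configuration, so the code assumes four disks all stacked on peg A; the state representation (a dict of stacks, top of stack last) is likewise an assumption.

```python
# Illustrative simulation of the pegs-and-disks move sequence.
# Assumption (not stated on the slides): four disks start on peg A.

def move(state, src, dst):
    """Return a new state with the top disk of src moved onto dst."""
    state = {peg: list(disks) for peg, disks in state.items()}  # copy
    state[dst].append(state[src].pop())
    return state

state = {"A": [4, 3, 2, 1], "B": [], "C": []}   # assumed initial state
moves = [("A", "C"), ("A", "B"), ("A", "C"), ("B", "A"),
         ("C", "B"), ("A", "B"), ("C", "B")]
for step, (src, dst) in enumerate(moves, start=1):
    state = move(state, src, dst)
    print(f"Step {step}: {src} -> {dst}: {state}")
```

Each move is a state-space operator: it maps one state to a successor state, which is exactly the view the search material below builds on.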
Search Strategies
• Problem solving and formulating a problem
• State space search: uninformed and informed search techniques
• Heuristic functions
• A* algorithm
• AO* algorithm
• Hill climbing
• Simulated annealing
• Genetic algorithms
• Constraint satisfaction method
State Space
Search Strategies
● Uninformed search
  ● breadth-first
  ● depth-first
  ● uniform-cost search
  ● depth-limited search
  ● iterative deepening
  ● bi-directional search
  ● constraint satisfaction
● Informed search
  ● best-first search
  ● search with heuristics
Key concepts in search
• Set of states that we can be in
• Including an initial state…
• … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
• Each action results in a new state
• Typically defined by successor function
• Given a state, produces all states that can be reached from it
• Cost function that determines the cost of each action (or path = sequence
of actions)
• Solution: path from initial state to a goal state
• Optimal solution: solution with minimal cost
Search Problem
We are now ready to formally describe a search problem.
A search problem consists of the following:
• S: the full set of states
• s0: the initial state
• A: S → S, the set of operators (actions)
• G: the set of final (goal) states; note that G ⊆ S
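This formal definition can be sketched directly in code. The class and field names below are illustrative choices, and the tiny example graph is invented for demonstration.

```python
# A search problem as defined above: initial state s0, operators A
# (given as a successor function), and goal set G with G subset of S.

from dataclasses import dataclass
from typing import Callable, Iterable, Set

@dataclass
class SearchProblem:
    initial_state: str
    successors: Callable[[str], Iterable[str]]   # the operators A: S -> S
    goal_states: Set[str]                        # G

    def is_goal(self, state: str) -> bool:
        return state in self.goal_states

# Example: states are letters; edges define the operators.
edges = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
problem = SearchProblem("A", lambda s: edges[s], {"G"})
print(problem.is_goal("G"))   # -> True
```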
8-puzzle

Initial state:        Goal state:
  1 _ 2                 1 2 3
  4 5 3                 4 5 6
  7 8 6                 7 8 _

(_ marks the blank square.)

Successors of the initial state, obtained by sliding a tile into the blank:

  1 2 _     _ 1 2     1 5 2
  4 5 3     4 5 3     4 _ 3
  7 8 6     7 8 6     7 8 6
  ...
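A successor function for the 8-puzzle can be sketched as follows. Representing states as flat 9-tuples with the blank encoded as 0 is an implementation choice, not something fixed by the slides; the initial state matches the one shown above, with the blank between 1 and 2 in the top row.

```python
# Successor function for the 8-puzzle (0 marks the blank).
# Given a state, it produces all states reachable by sliding one tile.

def successors(state):
    """state is a tuple of 9 entries, row-major; 0 is the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # blank up/down/left/right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]      # swap blank with the neighbouring tile
            result.append(tuple(s))
    return result

start = (1, 0, 2, 4, 5, 3, 7, 8, 6)   # initial state from the slide
print(len(successors(start)))         # -> 3 (blank is on the top edge)
```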
Uninformed search
• Uninformed search: given a state, we only know whether it is a goal state
or not
• Cannot say one nongoal state looks better than another nongoal state
• Can only traverse state space blindly in hope of somehow hitting a goal
state at some point
• Also called blind search
• Blind does not imply unsystematic!
The basic search algorithm
The search algorithm maintains a list of nodes called the fringe (also known as the open list). The fringe keeps track of the nodes that have been generated but are yet to be explored.
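The fringe-based skeleton can be sketched as follows. Representing the fringe as a deque of paths is an illustrative choice; the key point is that the insertion policy alone determines the search strategy (back of the fringe gives breadth-first, front gives depth-first).

```python
# Generic search skeleton: the fringe (open list) holds generated-but-
# unexplored nodes; where new nodes are inserted decides the strategy.

from collections import deque

def tree_search(initial, successors, is_goal, fifo=True):
    fringe = deque([[initial]])          # fringe of paths, not bare states
    while fringe:
        path = fringe.popleft()          # remove-first
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            if fifo:
                fringe.append(path + [child])      # back of fringe: BFS
            else:
                fringe.appendleft(path + [child])  # front of fringe: DFS
    return None                          # fringe exhausted: failure

# Tiny demo on an invented tree:
edges = {"S": ["A", "B"], "A": ["G"], "B": [], "G": []}
print(tree_search("S", lambda s: edges[s], lambda s: s == "G"))
```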
Evaluating Search strategies
What are the characteristics of the different search algorithms, and how efficient are they? We will look at the
following three factors to measure this.
1. Completeness: Is the strategy guaranteed to find a solution if one exists?
2. Optimality: Does the strategy find the minimal-cost solution?
3. Search cost: the time and memory required to find a solution.
   a. Time complexity: time taken (number of nodes expanded, worst or average case) to find a solution.
   b. Space complexity: space used by the algorithm, measured in terms of the maximum size of the fringe.
Breadth-First Search
• Breadth-first search is the most common search strategy for traversing a tree or
  graph. The algorithm searches breadthwise in a tree or graph, hence the name.
• The BFS algorithm starts searching from the root node of the tree and expands all
  successor nodes at the current level before moving to the nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search
  algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Breadth First Search
Algorithm: Breadth-first search
  Let fringe be a list containing the initial state
  Loop
    if fringe is empty, return failure
    Node ← remove-first(fringe)
    if Node is a goal
      then return the path from the initial state to Node
    else generate all successors of Node
      and add them to the back of fringe
  End Loop
Note that in breadth-first search the newly generated nodes are put at the back of the fringe (the OPEN list). The
nodes are expanded in FIFO (first in, first out) order: the node that enters OPEN earlier is expanded earlier.
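The pseudocode above translates almost line for line into Python. The example graph below is reconstructed from the successor lists implied by the BFS illustration that follows (an assumption about the exact edges); on it the search returns the path A C G, matching the trace.

```python
# Direct translation of the breadth-first pseudocode: the fringe is a
# FIFO queue of paths; newly generated nodes go to the back of the fringe.

from collections import deque

def bfs(initial, successors, is_goal):
    fringe = deque([[initial]])            # fringe holds paths from the root
    while fringe:
        path = fringe.popleft()            # Node <- remove-first(fringe)
        node = path[-1]
        if is_goal(node):
            return path                    # path from initial state to Node
        for child in successors(node):
            fringe.append(path + [child])  # back of fringe: FIFO order
    return None                            # fringe empty -> failure

# Assumed successor lists for the illustrated example:
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D", "G"],
         "D": ["C", "F"], "E": [], "F": [], "G": []}
print(bfs("A", lambda s: graph.get(s, []), lambda s: s == "G"))
# -> ['A', 'C', 'G']
```

As in the trace, no visited set is kept, so D is generated twice; BFS still terminates here because the goal G is found at depth 2.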
Breadth-First Search
BFS illustrated
Step 1: Initially fringe contains only one node corresponding to the source state A.
Figure 3
Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated. They are
placed at the back of fringe.
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put at the back of fringe.
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the back of fringe.
Step 5: Node D is removed from fringe. Its children C and F are generated and added to the back of fringe.
Step 6: Node E is removed from fringe. It has no children.
Step 7: D is expanded, B and F are put in OPEN.
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the path A C G by following the
parent pointers of the node corresponding to G. The algorithm terminates.
Search Demo
https://cs.stanford.edu/people/abisee/tutorial/bfs.html
https://cs.stanford.edu/people/abisee/tutorial/dfs.html
https://cs.stanford.edu/people/abisee/tutorial/greedy.html
https://cs.stanford.edu/people/abisee/tutorial/astar.html
Depth-First Search
Thank You