
Artificial Intelligence

Agenda
• History of AI
• Researchers and computer scientists like Alan Turing, John McCarthy,
Marvin Minsky and Geoffrey Hinton
• Key concepts like Turing Test
• Difference between AI and ML
Definition of AI
• "The exciting new effort to make computers think … machines with minds …" (Haugeland, 1985)
• "Activities that we associate with human thinking, activities such as decision-making, problem solving, learning …" (Bellman, 1978)
• "The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
• "The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)
• "The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)
• "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)
• "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)
• "The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)

In conclusion, these definitions fall into four categories: systems that
think like humans, act like humans, think rationally, or act rationally.
What is Artificial Intelligence?

              HUMAN                            RATIONAL
THOUGHT       Systems that think like humans   Systems that think rationally
BEHAVIOUR     Systems that act like humans     Systems that act rationally
AI Foundations?
AI inherited many ideas, viewpoints and techniques from other disciplines.

• Philosophy: theories of reasoning and learning
• Psychology: investigates the human mind
• Linguistics: the meaning and structure of language
• Mathematics: theories of logic, probability, decision making and computation
• Computer Science: makes AI a reality
The Turing Test
(Can Machine think? A. M. Turing, 1950)

• Requires:
  – Natural language processing
  – Knowledge representation
  – Automated reasoning
  – Machine learning
  – (Vision, robotics) for the full test
The Turing Test
The Turing test is an assessment of whether a machine can exhibit
intelligent behavior indistinguishable from that of a human.
Many variations of the Turing test now exist. As technology continues to
advance with AI at the forefront, new lines of thinking are emerging about
how to determine intelligence, and this remains an active area of work.
History of AI
The gestation of Artificial Intelligence (1943-55)
• The first work that is now generally recognized as AI was done by
Warren McCulloch and Walter Pitts (1943).
• They proposed a model of artificial neurons
• Two undergraduate students at Harvard, Marvin Minsky and Dean
Edmonds, built the first neural network computer in 1950.
• The SNARC, as it was called, used 3000 vacuum tubes and a surplus
automatic pilot mechanism from a B-24 bomber to simulate a
network of 40 neurons.
Biological Neural Networks
• Two interconnected brain cells (neurons)
(Figure: analogy between a biological neuron and an artificial neuron)
McCulloch–Pitts “neuron” (1943)
• Attributes of the neuron:
  • m binary inputs and one binary output (0 or 1)
  • Synaptic weights w_ij
  • Threshold μ_i
Neural Networks and Logic Gates

[Russell & Norvig, 1995]

• Simple neurons can act as logic gates.
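As a sketch of this idea (the weights and thresholds below are illustrative choices, not values from the slides), a McCulloch–Pitts unit with binary inputs can realize AND and OR gates:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (output 1) iff the weighted sum
    of the binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be on (weights 1, 1; threshold 2)
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
# OR gate: any single input suffices (weights 1, 1; threshold 1)
OR = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
```

With threshold 2 both inputs must fire (AND); lowering it to 1 makes either input sufficient (OR). An XOR gate cannot be realized by a single such unit.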


Processing Information in ANN

• A single neuron (processing element – PE) with inputs


and outputs
The birth of AI (1956)
• Dartmouth 1956 workshop for 2 months
• Term “artificial intelligence”
• Fathers of the field introduced
• Logic Theorist: a theorem-proving program by Allen Newell &
Herbert Simon
1952-69
– GPS - Newell and Simon
– Geometry theorem prover - Gelernter (1959)
– Samuel's checkers program that learns (1952)
– McCarthy - Lisp (1958), Advice Taker; Robinson's resolution
Knowledge-based systems (1969-79)
• DENDRAL: molecule structure identification [Feigenbaum et al.]
• Knowledge intensive
• Mycin: medical diagnosis [Feigenbaum, Buchanan, Shortliffe]
• 450 rules; knowledge from experts; no domain theory
• Better than junior doctors
• Certainty factors
• PROSPECTOR: drilling site choice [Duda et al]
• Domain knowledge in NLP
• Knowledge representation: logic, frames...
AI becomes an industry (1980-88)
• R1: first successful commercial expert system, configured
computer systems at DEC; saved 40M$/year
• 1988: DEC had 40 expert systems, DuPont 100...
• 1981: Japan’s 5th generation project
• Software tools for expert systems: Carnegie Group,
Inference, Intellicorp, Teknowledge
• LISP-specific hardware: LISP Machines Inc, TI, Symbolics,
Xerox
• Industry: few M$ in 1980 -> 2B$ in 1988
1987: Interest Drops
• Mid-1980s, different research groups reinvented backpropagation
(originally from 1969)
• Disillusionment on expert systems
• Fear of AI winter
1997 onwards
• 1997: IBM's Deep Blue becomes the first computer to beat the reigning
chess champion
• 2002: Robots begin replacing humans at Amazon
• 2011: Voice assistants (Siri, later Alexa), Google Translate
• 2016-2024: AI becomes an integral part of everything
History of AI
Intelligent Agents
Agents

An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators/effectors.

Ex: a human being, a calculator, etc.

• An agent has a goal: the objective the agent has to satisfy.
• Actions can potentially change the environment.
• An agent perceives the current percept or a sequence of percepts.
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators/effectors.

• Human agent:
  • Sensors: eyes, ears, and other organs
  • Actuators/effectors: hands, legs, mouth, and other body parts

• Robotic agent:
  • Sensors: cameras (picture analysis), infrared range finders, solar sensors
  • Actuators: various motors, speakers, wheels

• Software agent (softbot):
  • Functions as sensors
  • Functions as actuators
  • Ex: Askjeeves.com, google.com

• Expert system
  • Ex: Cardiologist
What is an Intelligent Agent

• Rational Agents
  • An agent should strive to "do the right thing",
    based on what it can perceive and the actions it can perform.
  • The right action is the one that will cause the agent to be
    most successful.
• Perfect rationality (the agent knows everything and always takes the correct action)
  • Humans do not satisfy this kind of rationality.
• Bounded rationality
  • Humans use approximations.

• Definition of a rational agent:
  For each possible percept sequence, a rational agent should select an action
  that is expected to maximize its performance measure.

• Rational = best?
  Yes, but only to the best of its knowledge.
• Rational = optimal?
  Yes, to the best of its abilities and constraints (subject to resources).
Rational Agents PEAS Analysis
• Performance measure: an objective criterion for the success of an
agent's behavior.

• Performance measures of a vacuum-cleaner agent: amount of dirt cleaned
up, amount of time taken, amount of electricity consumed, level of noise
generated, etc.

• Performance measures of a self-driving car: time to reach the
destination (minimize), safety, predictability of behavior for other
agents, reliability, etc.

• Performance measure of a game-playing agent: win/loss percentage
(maximize), robustness, unpredictability (to "confuse" the opponent), etc.
Characterizing a Task Environment

• We must first specify the setting for intelligent agent design.
Example: the task of designing a self-driving car
• Performance measure: safe, fast, legal, comfortable trip
• Environment: roads, other traffic, pedestrians
• Actuators: steering wheel, accelerator, brake, signal, horn
• Sensors: cameras, LIDAR (light detection and ranging), speedometer,
GPS, odometer, engine sensors, keyboard
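The PEAS description above can be captured as a simple data structure; a minimal sketch (the variable name is illustrative, the values are from the slide):

```python
# PEAS description of the self-driving-car task environment
self_driving_car_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip"],
    "Environment": ["roads", "other traffic", "pedestrians"],
    "Actuators":   ["steering wheel", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "LIDAR", "speedometer", "GPS",
                    "odometer", "engine sensors", "keyboard"],
}
```

Writing the four PEAS components down explicitly like this is a useful first step before choosing an agent design.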
Environment types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent):

[Russell & Norvig, Artificial Intelligence: A Modern Approach]


Examples

Task                Observable   Deterministic   Episodic     Static    Discrete     Agents
Crossword           Fully        Deterministic   Sequential   Static    Discrete     Single
Poker               Partially    Stochastic      Sequential   Static    Discrete     Multi
Backgammon          Fully        Stochastic      Sequential   Static    Discrete     Multi
Taxi driving        Partially    Stochastic      Sequential   Dynamic   Continuous   Multi
Part-picking robot  Partially    Stochastic      Episodic     Dynamic   Continuous   Single
Image analysis      Fully        Deterministic   Episodic     Semi      Continuous   Single
Agent types

• Basic types:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Simple reflex agent
• The agent selects actions on the basis of the current percept only.
• Example rule: if the tail-light of the car in front is red, then brake.
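A minimal sketch of such an agent in Python (the percept encoding and the condition-action rule are illustrative assumptions):

```python
def simple_reflex_agent(percept):
    """Map the current percept directly to an action via a
    condition-action rule; the agent keeps no memory of past percepts."""
    if percept.get("tail_light_in_front") == "red":
        return "brake"
    return "drive"

# The agent reacts only to what it currently perceives:
action = simple_reflex_agent({"tail_light_in_front": "red"})  # -> "brake"
```

Because the rule looks only at the current percept, the agent works well when the environment is fully observable and fails when the right action depends on history.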


Model-based reflex agents
(Module: Logical Agents - Representation and Reasoning)
• The agent maintains an internal model of the world. How detailed should it be?
• Example: the agent infers a potentially dangerous driver in front.
• Rule: if "dangerous driver in front," then "keep distance."
Goal-based agents
(Module: Problem Solving)
• Considers the "future", e.g. the goal "clean kitchen".
• The agent keeps track of the world state as well as a set of goals it is
trying to achieve: it chooses actions that will (eventually) lead to the goal(s).
• More flexible than reflex agents; may involve search and planning.
Utility-based agents
(Module: Decision Making)
• Decision-theoretic actions: e.g. trading off faster vs. safer.

Learning agents
(Module: Learning)
• More complicated when the agent needs to learn utility information:
reinforcement learning (based on action payoff).
• Learning agents adapt and improve over time.
(Figure: a learning agent. The performance element takes percepts and
selects actions; the critic learns that "a quick turn is not safe" given
the road conditions, so no quick turn; the problem generator suggests
trying out the brakes on different road surfaces.)
AI core capabilities
• The ability to solve problems: Search, Optimization, Constraint Satisfaction
• The ability to plan: Abstraction
• The ability to deduce: Logic, Reasoning
• The ability to learn: Models, Data, Learning Algorithms
• The ability to handle uncertainty: Bayesian Networks, Hidden Markov Models
• The ability to interface with the real world: Human-Computer Interfaces
Complex Problems and Solutions
Example problem: Pegs and Disks problem

The initial state

Goal State
Now we will describe a sequence of actions that can be applied on the initial state.

Step 1: Move A → C
Step 2: Move A → B
Step 3: Move A → C
Step 4: Move B → A
Step 5: Move C → B
Step 6: Move A → B
Step 7: Move C → B

Example: The n-Queens Problem

• Place n queens on an n-by-n chess board so that no two of them are on
the same row, column, or diagonal.
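The problem can be solved by backtracking search; a sketch of one common formulation (function and variable names are illustrative, not the exact state-space-tree procedure of the slides):

```python
def solve_n_queens(n):
    """Return one placement of n queens as a list where the index is the
    row and the value is the column, or None if no solution exists."""
    def safe(placed, col):
        # A new queen in the next row must not share a column or diagonal
        # with any queen already placed.
        row = len(placed)
        return all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed))

    def backtrack(placed):
        if len(placed) == n:
            return placed
        for col in range(n):
            if safe(placed, col):
                result = backtrack(placed + [col])
                if result is not None:
                    return result
        return None  # dead end: undo and try the next column

    return backtrack([])
```

For example, `solve_n_queens(4)` returns `[1, 3, 0, 2]`: queens in columns 1, 3, 0, 2 of rows 0 to 3.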

State Space Tree of the Four-Queens Problem

Search Strategies
• Problem solving and formulating a problem; state space search:
uninformed and informed search techniques
• Heuristic function,
• A*,
• AO* algorithms ,
• Hill climbing,
• simulated annealing,
• genetic algorithms ,
• Constraint satisfaction method
State Space
Search Strategies
● Uninformed Search ● Informed Search
● breadth-first ● best-first search
● depth-first ● search with heuristics
● uniform-cost search
● depth-limited search
● iterative deepening
● bi-directional search
● constraint satisfaction
Key concepts in search
• Set of states that we can be in
• Including an initial state…
• … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
• Each action results in a new state
• Typically defined by successor function
• Given a state, produces all states that can be reached from it
• Cost function that determines the cost of each action (or path = sequence
of actions)
• Solution: path from initial state to a goal state
• Optimal solution: solution with minimal cost
Search Problem
We are now ready to formally describe a search problem.
A search problem consists of the following:
• S: the full set of states
• s0: the initial state
• A: the set of operators (actions), each mapping a state to a state (A: S → S)
• G: the set of final (goal) states; note that G ⊆ S
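These components can be sketched as a small Python structure (the class and field names are illustrative, not a standard API); the state set S is left implicit, generated by the successor function:

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

@dataclass
class SearchProblem:
    """A search problem: initial state s0, operators A given as a
    successor function, and goal states G given as a membership test."""
    initial: Hashable                                     # s0
    successors: Callable[[Hashable], Iterable[Hashable]]  # operators A
    is_goal: Callable[[Hashable], bool]                   # s in G?

# A toy instance: count from 0 up to 3 by incrementing.
toy = SearchProblem(initial=0,
                    successors=lambda s: [s + 1],
                    is_goal=lambda s: s == 3)
```

Phrasing a problem this way lets one search algorithm run unchanged on any domain that supplies these three pieces.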


Generic search algorithm
• Fringe = set of nodes generated but not expanded
= nodes we know we still have to explore
• fringe := {node corresponding to initial state}
• loop:
• if fringe empty, declare failure
• choose and remove a node v from fringe
• check if v’s state s is a goal state; if so, declare success
• if not, expand v, insert resulting nodes into fringe
• Key question in search: Which of the generated nodes do we expand next?
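The generic algorithm can be sketched in Python with the "which node next?" question left as a parameter (the example graph and the `choose` name are illustrative):

```python
def generic_search(initial, successors, is_goal, choose):
    """Generic search: `choose(fringe)` removes and returns the node
    (stored as a path) to expand next; varying it yields BFS, DFS,
    best-first search, etc."""
    fringe = [[initial]]            # paths; the last element is the node
    while fringe:
        path = choose(fringe)       # choose and remove a node from fringe
        node = path[-1]
        if is_goal(node):
            return path             # success
        for child in successors(node):   # expand, insert into fringe
            fringe.append(path + [child])
    return None                     # fringe empty: failure

# FIFO choice (remove the oldest node) gives breadth-first behaviour:
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
path = generic_search('A', lambda s: graph[s], lambda s: s == 'G',
                      choose=lambda f: f.pop(0))
```

Here `f.pop(0)` (FIFO) returns `['A', 'C', 'G']`; swapping in `f.pop()` (LIFO) would give depth-first behaviour instead.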
8-puzzle

Start state:      Goal state:
1 2 _             1 2 3
4 5 3             4 5 6
7 8 6             7 8 _

8-puzzle
Expanding the start state by sliding a tile into the blank generates the
successor states:

1 2 _             1 2 3        1 _ 2
4 5 3      →      4 5 _        4 5 3
7 8 6             7 8 6        7 8 6
                    ...          ...
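A sketch of the 8-puzzle successor function (the tuple encoding, with 0 for the blank, is an illustrative choice):

```python
def successors(state):
    """Generate all states reachable by sliding one tile into the blank.
    `state` is a tuple of 9 ints in row-major order; 0 marks the blank."""
    b = state.index(0)                  # position of the blank
    row, col = divmod(b, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            t = r * 3 + c               # tile that slides into the blank
        else:
            continue
        s = list(state)
        s[b], s[t] = s[t], s[b]
        result.append(tuple(s))
    return result

start = (1, 2, 0, 4, 5, 3, 7, 8, 6)     # blank in the top-right corner
```

From this corner position only two moves are legal, so `successors(start)` yields two states, matching the branching shown above.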
Uninformed search
• Uninformed search: given a state, we only know whether it is a goal state
or not
• Cannot say one nongoal state looks better than another nongoal state
• Can only traverse state space blindly in hope of somehow hitting a goal
state at some point
• Also called blind search
• Blind does not imply unsystematic!
The basic search algorithm

Let L be a list containing the initial state (L = the fringe)
Loop
    if L is empty, return failure
    Node ← select(L)
    if Node is a goal
        then return Node (the path from the initial state to Node)
    else
        generate all successors of Node, and
        merge the newly generated states into L
End Loop

The search algorithm maintains a list of nodes called the fringe (open
list). The fringe keeps track of the nodes that have been generated but
are yet to be explored.
Evaluating Search Strategies

What are the characteristics of the different search algorithms, and what is their
efficiency? We will look at the following factors to measure this:
1. Completeness: is the strategy guaranteed to find a solution if one exists?
2. Optimality: does the solution have the minimal cost?
3. Search cost: the time and memory required to find a solution.
   a. Time complexity: time taken (number of nodes expanded), worst or average case,
      to find a solution.
   b. Space complexity: space used by the algorithm, measured in terms of the
      maximum size of the fringe.
Breadth-First Search

• Breadth-first search is the most common search strategy for traversing a tree or
graph. The algorithm searches breadthwise in a tree or graph, hence the name.
• BFS starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search
algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Breadth-First Search
Algorithm: Breadth-first search
Let fringe be a list containing the initial state
Loop
    if fringe is empty, return failure
    Node ← remove-first(fringe)
    if Node is a goal
        then return the path from the initial state to Node
    else generate all successors of Node, and
        add the generated nodes to the back of fringe
End Loop

Note that in breadth-first search the newly generated nodes are put at the back of
fringe (the OPEN list). The nodes are expanded in FIFO (first in, first out) order:
the node that enters OPEN earlier is expanded earlier.
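The algorithm can be sketched in Python. The example graph below is reconstructed from the step-by-step BFS trace (children as listed per step), so treat it as an assumption:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes in FIFO order.
    Paths are stored alongside nodes so the solution can be returned.
    Like the slide algorithm, no visited set is kept."""
    fringe = deque([(start, [start])])
    while fringe:
        node, path = fringe.popleft()       # remove-first (FIFO)
        if node == goal:
            return path
        for child in graph.get(node, []):   # generate successors
            fringe.append((child, path + [child]))
    return None

# Graph reconstructed from the trace: A->B,C; B->D,E; C->D,G; D->C,F
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'],
         'D': ['C', 'F'], 'E': [], 'F': [], 'G': []}
```

On this graph, `bfs(graph, 'A', 'G')` returns `['A', 'C', 'G']`, matching the path reported in step 8 of the trace.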
Breadth-First Search
BFS illustrated
Step 1: Initially fringe contains only one node corresponding to the source state A.

Figure 3
Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated. They are
placed at the back of fringe.
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put at the back of fringe.
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the back of fringe.
Step 5: Node D is removed from fringe. Its children C and F are generated and added to the back of fringe.
Step 6: Node E is removed from fringe. It has no children.
Step 7: D (generated again via C) is expanded; C and F are put in OPEN.

Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the path A C G by following the
parent pointers of the node corresponding to G. The algorithm terminates.
Search Demo
https://cs.stanford.edu/people/abisee/tutorial/bfs.html

https://cs.stanford.edu/people/abisee/tutorial/dfs.html

https://cs.stanford.edu/people/abisee/tutorial/greedy.html

https://cs.stanford.edu/people/abisee/tutorial/astar.html
Depth-First Search

• Depth-first search is a recursive algorithm for traversing a tree or
graph data structure.
• It is called depth-first search because it starts from the root node
and follows each path to its greatest depth before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
Depth-First Search
Algorithm

Let fringe be a list containing the initial state
Loop
    if fringe is empty, return failure
    Node ← remove-first(fringe)
    if Node is a goal
        then return the path from the initial state to Node
    else generate all successors of Node, and
        add the generated nodes to the front of fringe
End Loop
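A Python sketch of depth-first search on the same example graph (reconstructed from the traces, so an assumption). A visited set is added to avoid revisiting states, which the bare slide algorithm omits:

```python
def dfs(graph, start, goal):
    """Depth-first search: newly generated nodes go to the front of the
    fringe (a stack), so the most recently found path is explored first."""
    fringe = [(start, [start])]
    visited = set()
    while fringe:
        node, path = fringe.pop()           # remove from the front (LIFO)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children reversed so the first-listed child is expanded first.
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                fringe.append((child, path + [child]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'],
         'D': ['C', 'F'], 'E': [], 'F': [], 'G': []}
```

On this graph, `dfs(graph, 'A', 'G')` returns `['A', 'B', 'D', 'C', 'G']`, matching the solution path of the DFS trace; without the visited set, the C–D cycle could make the search loop forever.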
Depth-First Search
Let us now run Depth First Search on the search space given in Figure
34, and trace its progress.
Step 1: Initially fringe contains only the node for A.
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of fringe.
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.
Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
Step 6: Node G is expanded and found to be a goal node. The solution path A-B-D-C-G is returned and the algorithm
terminates.
Thank You
