
CS8691 – ARTIFICIAL INTELLIGENCE

UNIT I INTRODUCTION

Introduction – Definition – Future of Artificial Intelligence – Characteristics of
Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to
Typical AI Problems
What is Intelligence?

Intelligence is what we use when we don’t know what to do.

Intelligence relates to tasks involving higher mental processes

Examples
Creativity, Solving problems, Pattern recognition, Classification, Learning,
Optimization, Language processing, Knowledge and many more
What is involved in intelligence?
Ability to interact with the real world
to perceive, understand, and act
e.g., speech recognition, understanding, and synthesis
e.g., image understanding
e.g., ability to take actions and have an effect on the world

Reasoning and Planning


modeling the external world, given input
solving new problems, planning, and making decisions
ability to deal with unexpected problems, uncertainties

Learning and Adaptation

we are continuously learning and adapting
our internal models are always being updated
e.g., a baby learning to categorize and recognize animals
What is Artificial Intelligence?
It is the study of how to make computers do things at which, at the moment, people
are better.

The Main Classification of AI


Strong AI (General Intelligence) - An AI system with generalized human cognitive
abilities. When presented with an unfamiliar task, it has enough intelligence to find a
solution

Weak AI (Narrow AI) – An AI system that is designed and trained for a particular
task. Example: a virtual personal assistant, such as Apple’s Siri
The term AI is defined by each author in his or her own way; the definitions
fall into 4 categories:
1. Systems that think like humans
2. Systems that act like humans
3. Systems that think rationally
4. Systems that act rationally

Rationality – A system is rational if it does the “right thing”


1. Acting Humanly: The Turing Test Approach
• The test was proposed by Alan Turing in 1950.

• The computer is asked questions by a human interrogator.

• The computer passes the test if the human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or not.

Programming a computer to pass the test requires it to possess the
following capabilities:
a. Natural Language Processing: to enable it to communicate
successfully in English
b. Knowledge Representation: to store what it knows or receives
c. Automated Reasoning: to use the stored information to answer
questions and to draw new conclusions
d. Machine Learning: to adapt to new circumstances and to detect and
extrapolate patterns

Total Turing Test

This test includes a video signal so that the interrogator can test the
subject's perceptual abilities.
To pass the Total Turing Test the computer will also need
a. Computer Vision – to perceive objects
b. Robotics – to manipulate objects and move about
2. Thinking Humanly: The Cognitive Modeling Approach
We need to get inside the actual workings of the human mind:
(a) through introspection – trying to capture our own thoughts as they go
by; (b) through psychological experiments.

Example
Allen Newell and Herbert Simon, who developed GPS, the "General
Problem Solver", compared the trace of its reasoning steps to traces of human
subjects solving the same problems. The interdisciplinary field of cognitive
science brings together computer models from AI and experimental
techniques from psychology to try to construct precise and testable
theories of the workings of the human mind.

3. Thinking Rationally: "The Laws of Thought" Approach

The emphasis on correct, "right" thinking initiated the field called logic.
Example
AI is a computer science subject. All computer science subjects are easy;
therefore AI is easy.
4. Acting Rationally: The Rational Agent Approach
An agent is just something that acts. Computer agents are expected to
have the following attributes as well:
(a) operating under autonomous control,
(b) perceiving their environment,
(c) persisting over a prolonged time period,
(d) adapting to change.

A rational agent is one that acts so as to achieve the best outcome.


Characteristics of Intelligent Agents
Internal characteristics are

• Learning/reasoning: an agent has the ability to learn from previous experience
and to successively adapt its own behavior to the environment.
• Reactivity: an agent must be capable of reacting appropriately to influences or
information from its environment.
• Autonomy: an agent must have control over both its actions and its internal state.
The degree of the agent's autonomy can be specified; intervention from the user may
be needed only for important decisions.
• Goal-oriented: an agent has well-defined goals and gradually influences its
environment so as to achieve those goals.

External characteristics are

• Communication: an agent often requires interaction with its environment to
fulfill its tasks; the environment may include humans, other agents, and arbitrary
information sources.
• Cooperation: cooperation among several agents permits faster and better solutions
for complex tasks that exceed the capabilities of a single agent.
• Mobility: an agent may navigate within electronic communication networks.
• Character: like a human, an agent may exhibit an external behavior with as many
human-like traits as possible.
Agents and environments
An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.

Example
• A human agent has eyes, ears, and other organs for sensors and hands, legs,
mouth, and other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen, writing files,
and sending network packets.
Percept
We use the term percept to refer to the agent's perceptual inputs at any
given instant.

Percept Sequence
An agent's percept sequence is the complete history of everything the agent
has ever perceived.

Agent function
Mathematically speaking, we say that an agent's behavior is described by the
agent function that maps any given percept sequence to an action.
Agent Program
The agent function for an intelligent agent will be implemented by agent program
Example : the vacuum-cleaner world
This particular world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt
in the square.
It can choose to move left, move right, suck up the dirt, or do nothing.
One very simple agent function is the following:
if the current square is dirty, then suck; otherwise,
move to the other square.
A partial tabulation of this agent function maps each percept to an action, for
example [A, Clean] → Right, [A, Dirty] → Suck, [B, Clean] → Left, [B, Dirty] → Suck.
Agent Program

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
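The same behaviour can be written as a short, runnable Python sketch (the function and
value names below are illustrative choices, not part of the textbook program):

def reflex_vacuum_agent(percept):
    """Map a single percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# Example: the agent perceives square A as dirty.
print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left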

Performance measures
• A performance measure embodies the criterion for success of an agent's behavior.
• When an agent is plunked down in an environment, it generates a sequence of
actions according to the percepts it receives.
• This sequence of actions causes the environment to go through a sequence of states.
• If the sequence is desirable, then the agent has performed well.

Good Behavior: The Concept of Rationality

• A rational agent is one that does the right thing: conceptually speaking, every
entry in the table for the agent function is filled out correctly.
• Obviously, doing the right thing is better than doing the wrong thing.
• The right action is the one that will cause the agent to be most successful.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.

This leads to a definition of a rational agent: For each possible percept sequence, a
rational agent should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has

Omniscience, learning, and autonomy

• An omniscient agent knows the actual outcome of its actions and can act accordingly;
but omniscience is impossible in reality.
• Doing actions in order to modify future percepts (sometimes called information
gathering) is an important part of rationality.
• Our definition requires a rational agent not only to gather information, but also to
learn as much as possible from what it perceives.
• To the extent that an agent relies on the prior knowledge of its designer rather than on
its own percepts, we say that the agent lacks autonomy. A rational agent should be
autonomous: it should learn what it can to compensate for partial or incorrect prior
knowledge.
Task environments
We must think about task environments, which are essentially the "problems" to
which rational agents are the "solutions."

Specifying the task environment

The rationality of the simple vacuum-cleaner agent needs specification of
• the performance measure
• the environment
• the agent's actuators and sensors.

PEAS
• All of these are grouped together under the heading of the task environment.
• We call this the PEAS (Performance, Environment, Actuators, Sensors)
description. In designing an agent, the first step must always be to specify the task
environment as fully as possible.
Properties of task environments

• Fully observable vs. partially observable
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
Fully observable vs. partially observable
• If an agent's sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.
• A task environment is effectively fully observable if the sensors detect all aspects
that are relevant to the choice of action.
• An environment might be partially observable because of noisy and inaccurate
sensors or because parts of the state are simply missing from the sensor data.

Deterministic vs. stochastic.


If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise, it is stochastic.

Episodic vs. sequential

• In an episodic task environment, the agent's experience is divided into atomic
episodes.
• Each episode consists of the agent perceiving and then performing a single action.
Crucially, the next episode does not depend on the actions taken in previous episodes.
For example, an agent that has to spot defective parts on an assembly line bases each
decision on the current part, regardless of previous decisions.
• In sequential environments, on the other hand, the current decision could affect all
future decisions. Chess and taxi driving are sequential.
Discrete vs. continuous
• The discrete/continuous distinction can be applied to the state of the environment, to
the way time is handled, and to the percepts and actions of the agent.
• For example, a discrete-state environment such as a chess game has a finite number of
distinct states. Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location
of the taxi and of the other vehicles sweep through a range of continuous values and do so
smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.).
Single agent vs. multiagent
• An agent solving a crossword puzzle by itself is clearly in a single-agent
environment,
• whereas an agent playing chess is in a two-agent environment.
• Multiagent environments are further classified into two types:
Competitive multiagent environment (e.g., chess)
Cooperative multiagent environment (e.g., taxi driving)
The Structure of Agents
The agent program implements the agent function mapping percepts to actions. This
program runs on some physical device with sensors and actuators, which is
called the architecture:
Agent = Architecture + Program
Agent Programs
An agent program takes the current percept as input from the sensors and returns an
action to the actuators.
There are two types of agent programs:
i. A Skeleton Agent ii. A Table-Driven Agent

i. A Skeleton Agent: The agent program receives only a single percept as its input.
If the percept is a new input, the agent updates its memory with the new percept.

ii. A Table-Driven Agent: A table holds indexed percept sequences together with
their corresponding actions. The incoming percept sequence is looked up in the table
and the corresponding action is performed (a minimal sketch is given after the
drawbacks list below).
Drawbacks:

• Table lookup of percept-action pairs defines all possible condition-action rules
necessary to interact in an environment.

• Problems
– Too big to generate and to store (chess has about 10^120 states, for example)
– No knowledge of non-perceptual parts of the current state
– Not adaptive to changes in the environment; the entire table must be updated if
changes occur
– Looping: can't make actions conditional

• Takes a long time to build the table

• No autonomy

• Even with learning, it would take a long time to learn the table entries
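To make the table-lookup idea concrete, here is a minimal Python sketch of a
table-driven agent, assuming the table is keyed by the whole percept sequence; the tiny
table shown covers only a fragment of the two-square vacuum world.

def make_table_driven_agent(table):
    """Return an agent whose behaviour is fully specified by a lookup table."""
    percepts = []                      # the percept sequence observed so far

    def agent(percept):
        percepts.append(percept)
        # Look up the whole percept sequence; unknown sequences get no action.
        return table.get(tuple(percepts), None)

    return agent

# Illustrative fragment of a table for the two-square vacuum world.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right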
To implement the mapping from percepts to actions, four basic kinds of agent
program are used (a fifth kind, the learning agent, is discussed afterwards):
1. Simple Reflex Agent
2. Model-Based Reflex Agent
3. Goal-Based Agent
4. Utility-Based Agent

1. Simple Reflex Agent

• Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history.

• These agents only succeed in a fully observable environment.

• The simple reflex agent does not consider any part of the percept history during its
decision and action process.

• The simple reflex agent works on the condition-action rule, which means it maps the
current state to an action. An example is a room-cleaner agent: it works only if there is
dirt in the room.
Simple Reflex Agent
The agent program for a simple reflex agent in the two-state vacuum environment
is the REFLEX-VACUUM-AGENT program given earlier.

Characteristics
• Only works if the environment is fully observable.
• Lacking history, it can easily get stuck in infinite loops.
• One solution is to randomize its actions.
Problems with the simple reflex agent design approach:
They have very limited intelligence.
They do not have knowledge of non-perceptual parts of the current state.
The rule set is mostly too big to generate and to store.
They are not adaptive to changes in the environment.

2. Model – Based Agent


The Model-based agent can work in a partially observable environment, and track
the situation.
A model-based agent has two important factors:
Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
Internal State: It is a representation of the current state based on percept
history.
These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
Updating the agent state requires information about:
a. How the world evolves
b. How the agent's actions affect the world.
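A rough Python sketch of this structure is given below; update_state and rules are
hypothetical stand-ins for "how the world evolves" and the condition-action rules, and
the toy vacuum-world model shown is only for illustration.

def make_model_based_agent(update_state, rules):
    """Keep an internal state, update it from each percept, then pick a rule."""
    state = {}          # internal representation of the world
    last_action = None

    def agent(percept):
        nonlocal state, last_action
        # Combine the old state, the last action and the new percept
        # using the agent's model of the world.
        state = update_state(state, last_action, percept)
        last_action = rules(state)      # condition-action rules on the state
        return last_action

    return agent

# Toy model for the vacuum world: remember the last observed status of each square.
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state["at"] = location
    return new_state

def rules(state):
    if state[state["at"]] == "Dirty":
        return "Suck"
    return "Right" if state["at"] == "A" else "Left"

agent = make_model_based_agent(update_state, rules)
print(agent(("A", "Dirty")))   # -> Suck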
3. Goal – Based Agents
Knowledge of the current state of the environment is not always sufficient for
an agent to decide what to do.

The agent needs to know its goal, which describes desirable situations.

Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.

They choose an action so that they can achieve the goal.

These agents may have to consider a long sequence of possible actions
before deciding whether the goal can be achieved. Such consideration of
different scenarios is called search and planning, which makes an agent
proactive.
4. Utility – Based Agents

These agents are similar to goal-based agents but provide an extra
component of utility measurement, which makes them different by providing
a measure of success in a given state.

Utility-based agents act based not only on goals but also on the best way to
achieve the goal.

The utility-based agent is useful when there are multiple possible
alternatives and the agent has to choose the best action to perform.

The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
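One simple way to realise this is to predict the state resulting from each candidate
action and pick the action whose predicted state has the highest utility; the sketch
below assumes hypothetical result and utility functions.

def utility_based_choice(state, actions, result, utility):
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are numbers, actions add to them, utility prefers 10.
result = lambda s, a: s + a
utility = lambda s: -abs(10 - s)
print(utility_based_choice(7, [-1, +1, +2], result, utility))   # -> 2 (7 + 2 = 9 is closest to 10)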
5. Learning Agents

A learning agent in AI is the type of agent that can learn from its past
experiences, i.e., it has learning capabilities.

It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.

A learning agent has mainly four conceptual components, which are:


a. Learning element: It is responsible for making improvements by
learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance
standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze performance, and look for
new ways to improve the performance.
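A bare-bones Python skeleton showing how the four components might fit together
(the class and method names are illustrative, and the behaviour of each component is
left abstract here):

class LearningAgent:
    """Skeleton with the four conceptual components of a learning agent."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # makes improvements
        self.critic = critic                             # feedback vs. a performance standard
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        # Let the learning element improve the performance element using the feedback.
        self.learning_element(feedback, self.performance_element)
        # Occasionally try an exploratory action suggested by the problem generator;
        # otherwise act using the performance element.
        return self.problem_generator(percept) or self.performance_element(percept)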
Problem Solving Agents
A problem-solving agent is one type of goal-based agent: it decides what to do
by considering sequences of actions and selecting one that leads to desirable
states. Once the agent understands the definition of the problem, it has to
search for a solution, and the solution it chooses should maximize its
performance measure.

Steps to maximize the performance measure


1. Goal Formulation: Based on the current situation, a goal is adopted; actions
that result in a failure to reach the goal can be rejected without further
consideration.
2. Problem Formulation: It is the process of deciding what actions and states
to consider for a given goal.
3. Search: The process of looking for a sequence of actions that reaches the
goal is called search.
4. Solution: A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
5. Execution: Once a solution is found, the actions it recommends can be carried out. This
is called the execution phase.
Once the solution has been executed, the agent will formulate a new goal
Well-defined problems and solutions
A problem can be defined by 4 components:
1. Initial State: The starting state, which the agent knows it is in.
2. Successor Function:
• A description of the possible actions available to the agent.
• Given a state x, SUCCESSOR-FN(x) returns a set of <action, successor>
ordered pairs, where each action is a legal action in state x and
each successor is a state that can be reached from x by applying the
action.
• State Space: The set of all states reachable from the initial state by
any sequence of actions. The initial state and the successor function
define the state space. The state space forms a graph in which the
nodes are states and the arcs between the nodes are actions.
• Path: A path in the state space is a sequence of states connected by a
sequence of actions.
3. Goal Test: A test to determine whether a given state is a goal
state. When there is an explicit set of possible goal states, we
can check whether any one of the goal states has been reached.
4. Path Cost: A path cost function assigns a numeric cost to each
path. The problem-solving agent chooses a cost function that
reflects its own performance measure.
The step cost of taking action a in state s to reach state s′ is denoted by c(s, a, s′).
A solution to a problem is an action sequence that leads from the initial state to a goal
state. Solution quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
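These four components can be collected into a single data structure that is handed to a
search algorithm; a minimal Python sketch (field names are illustrative) might look
like:

from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Problem:
    initial_state: Any
    successor_fn: Callable[[Any], Iterable[Tuple[Any, Any]]]  # state -> {(action, successor)}
    goal_test: Callable[[Any], bool]
    step_cost: Callable[[Any, Any, Any], float]               # c(s, a, s')

    def path_cost(self, path):
        """Sum the step costs along a path [(s0, a1, s1), (s1, a2, s2), ...]."""
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in path)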

Formulating Problems
Example
Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. Now, suppose
the agent has a nonrefundable ticket to fly out of Bucharest the following day. In that
case, it makes sense for the agent to adopt the goal of getting to Bucharest. The agent's
task is to find out which sequence of actions will get it to a goal state

This process of looking for such a sequence is called search.

A search algorithm takes a problem as input and returns a solution in the form of an
action sequence. Once a solution is found, the actions it recommends can be carried
out. This is called the execution phase.
Initial state : the initial state for our agent in Romania might be described as In(Arad)

Successor function: Given a particular state x, SUCCESSOR-FN(x) returns a set of
(action, successor) ordered pairs, where each action is one of the legal actions in state x
and each successor is a state that can be reached from x by applying the action. For
example, from the state In(Arad), the successor function for the Romania problem would
return
{(Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind))}

Goal test: The agent's goal in Romania is the singleton set {In(Bucharest)}.

Path cost : The step cost of taking action a to go from state x to state y is denoted by
c(x, a, y).
These 4 elements can be gathered into a data structure that is the input to
problem solving algorithm.
A solution to a problem is a path from initial state to a goal state.
Solution performance is measured by path cost function.
An optimal solution has lowest path cost among all solutions.
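Using the Problem sketch given earlier, a small fragment of the Romania problem could
be written as follows; only a few cities are included and the step costs are set to 1
purely for illustration (the real problem would use road distances).

romania_map = {           # a tiny fragment of the full road map
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
}

romania_problem = Problem(
    initial_state="Arad",
    successor_fn=lambda city: [("Go(%s)" % nbr, nbr)
                               for nbr in romania_map.get(city, [])],
    goal_test=lambda city: city == "Bucharest",
    step_cost=lambda s, a, s2: 1,     # uniform costs, for illustration only
)

print(romania_problem.successor_fn("Arad"))
# -> [('Go(Sibiu)', 'Sibiu'), ('Go(Timisoara)', 'Timisoara'), ('Go(Zerind)', 'Zerind')]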

Abstraction
The process of removing details from a representation is called abstraction.
Consider a similar route-finding problem in India: the actual state corresponding to
In(Coimbatore) includes many details, such as
1. The travelling companion
2. The scenery out of the window
3. How far away the next stop is
4. The condition of the road
5. The weather
These are irrelevant to finding a route to Chennai, so they are left out. The choice of a
good abstraction involves removing as much detail as possible while retaining
validity and ensuring that the abstract actions are easy to carry out.
Example Problems
The problem-solving approach has been applied to a vast array of task environments.

• A toy problem is intended to illustrate or exercise various problem-solving methods.
It can be given a concise, exact description, and it can be used easily by different
researchers to compare the performance of algorithms.

• A real-world problem is one whose solutions people actually care about. Some of the
best-known toy and real-world problems are listed below.
Toy Problems
i)Vacuum world Problem
States: The agent is in one of two locations, each of which might or might not contain
dirt. Thus there are 2 × 2² = 8 possible world states.
Initial state: Any state can be designated as the initial state.
Successor function: three actions (Left, Right, and Suck).
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
ii) 8-puzzle Problem
The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles and a blank
space. A tile adjacent to the blank space can slide into the space. The object is to reach
a specified goal state

States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
Initial state: Any state can be designated as the initial state.
Successor function: This generates the legal states that result from trying the four
actions (blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration (Other goal
configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
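A possible successor function for the 8-puzzle, assuming the state is encoded as a
9-tuple read row by row with 0 marking the blank (one encoding choice among many):

def successors_8puzzle(state):
    """Yield (action, successor) pairs; state is a 9-tuple, 0 marks the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    # action name -> (row change, column change) of the blank
    moves = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            target = r * 3 + c
            board = list(state)
            board[blank], board[target] = board[target], board[blank]
            yield action, tuple(board)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
for action, succ in successors_8puzzle(start):
    print(action, succ)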
iii) 8-queens problem
The goal of the 8-queens problem is to place eight queens on a chessboard such
that no queen attacks any other. (A queen attacks any piece in the same row,
column, or diagonal.)
States: Any arrangement of 0 to 8 queens on the board is a state.
Initial state: No queens on the board.
Successor function: Add a queen to any empty square.
Goal test: 8 queens are on the board, none attacked.
Path cost: Zero (only the search cost matters).
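A compact goal test can be written if each state records the column of the queen placed
in each row (a simplification of the "any empty square" formulation above, used here
only to keep the sketch short):

def attacks(q1, q2):
    """Do queens at (row, col) positions q1 and q2 attack each other?"""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test_8queens(columns):
    """columns[i] is the column of the queen placed in row i (one per row)."""
    queens = list(enumerate(columns))
    return len(columns) == 8 and all(
        not attacks(queens[i], queens[j])
        for i in range(8) for j in range(i + 1, 8))

print(goal_test_8queens((0, 4, 7, 5, 2, 6, 1, 3)))   # -> True (a known solution)
print(goal_test_8queens((0, 1, 2, 3, 4, 5, 6, 7)))   # -> False (queens share a diagonal)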
Real-world problems

Airline travel problem

States: Each is represented by a location (e.g., an airport) and the current time.
Initial state: This is specified by the problem.
Successor function: This returns the states resulting from taking any scheduled flight
(perhaps further specified by seat class and location), leaving later than the current
time plus the within-airport transit time, from the current airport to another.
Goal test: Are we at the destination by some prespecified time?
Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent flyer
mileage awards, and so on.

• The route-finding problem is defined in terms of specified locations and transitions along


links between them. Route-finding algorithms are used in a variety of applications,
such as routing in computer networks, military operations planning, and airline travel
planning systems
The traveling salesperson problem (TSP) is a touring problem in which each city
must be visited exactly once. The aim is to find the shortest tour.

• A VLSI layout problem requires positioning millions of components and connections on


a chip to minimize area, minimize circuit delays, minimize stray capacitances, and
maximize manufacturing yield. The layout problem comes after the logical design
phase, and is usually split into two parts: cell layout and channel routing. In cell
layout, the primitive components of the circuit are grouped into cells, each of which
performs some recognized function. Each cell has a fixed footprint (size and shape) and
requires a certain number of connections to each of the other cells. The aim is to place
the cells on the chip so that they do not overlap and so that there is room for the
connecting wires to be placed between the cells. Channel routing finds a specific route
for each wire through the gaps between the cells.

• Robot navigation is a generalization of the route-finding problem described earlier.


Rather than a discrete set of routes, a robot can move in a continuous space with (in
principle) an infinite set of possible actions and states. For a circular robot moving on a
flat surface, the space is essentially two-dimensional. When the robot has arms and legs
or wheels that must also be controlled, the search space becomes many-dimensional.
Advanced techniques are required just to make the search space finite. In addition to the
complexity of the problem, real robots must also deal with errors in their sensor
readings and motor controls.
Automatic assembly sequencing of complex objects by a robot was first
demonstrated by FREDDY (Michie, 1972). In assembly problems, the aim is to find an
order in which to assemble the parts of some object. If the wrong order is chosen, there
will be no way to add some part later in the sequence without undoing some of the
work already done. Checking a step in the sequence for feasibility is a difficult
geometrical search problem closely related to robot navigation.

Searching for Solutions

Search techniques use an explicit search tree that is generated by the initial state and
the successor function that together define the state space. In general, we may have a
search graph rather than a search tree, when the same state can be reached from
multiple paths
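As a rough illustration of the idea (a generic sketch, not the textbook's exact
pseudocode), a breadth-first search over the Problem structure sketched earlier might
look like:

from collections import deque

def breadth_first_search(problem):
    """Return a list of actions from the initial state to a goal, or None."""
    frontier = deque([(problem.initial_state, [])])   # (state, actions so far)
    explored = set()                                  # avoid revisiting states
    while frontier:
        state, actions = frontier.popleft()
        if problem.goal_test(state):
            return actions
        explored.add(state)
        for action, successor in problem.successor_fn(state):
            if successor not in explored:
                frontier.append((successor, actions + [action]))
    return None

# With the Romania fragment sketched earlier:
print(breadth_first_search(romania_problem))
# -> ['Go(Sibiu)', 'Go(Fagaras)', 'Go(Bucharest)']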