
ARTIFICIAL INTELLIGENCE
MODULE 1:

Introduction and History


Intelligent Agents
Solving Problem by Searching

COURSE CODE: 21AI51


What is Artificial Intelligence?
INTELLIGENCE: We call ourselves Homo sapiens—man the wise—because our intelligence is so
important to us.

• We have tried to understand how we think;

• The field of artificial intelligence attempts not just to understand but also to build intelligent entities.

AI is one of the newest fields in science and engineering. Work started in earnest soon after World War II, and
the name itself was coined in 1956.

• The branch of computer science called AI is said to have been born at a conference held at Dartmouth College, USA, in
1956.

• The scientists attending the conference represented different disciplines: Mathematics, Neurology,
Psychology, Electrical Engineering etc.
• Artificial intelligence (AI) is the ability of machines or software to perform tasks that typically require
human intelligence.
 These tasks include:
• learning, reasoning, speech recognition, problem solving, and identifying patterns.
• A Rational agent in the context of artificial intelligence refers to an entity, typically a computer program
or system, that makes decisions or takes actions aimed at achieving its goals effectively in a given
environment.
Acting Humanly: The Turing test approach
• The Turing test was developed by Alan Turing (a computer scientist) in 1950.

• The Turing test is used to determine whether or not a computer (machine) can think
intelligently like a human.

• The Turing Test is a widely used measure of a machine's ability to demonstrate human-like
intelligence. To pass the test, the computer would need the following capabilities:
• Natural language processing to communicate successfully in a human language;

• Knowledge representation to store what it knows or hears;

• Automated reasoning to answer questions and to draw new conclusions;

• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Total Turing Test: The standard Turing test deliberately avoids direct physical interaction between
the interrogator and the computer, because the physical simulation of a person is
unnecessary for intelligence.

To pass the total Turing test, a robot will need

• Computer vision to perceive objects, and

• Robotics to manipulate objects and move about.


Thinking Humanly: The Cognitive modelling approach
To say that a program thinks like a human, we must know how humans think. We
can learn about human thought in three ways:
• Introspection—trying to catch our own thoughts as they go by;
• Psychological experiments—observing a person in action;
• Brain imaging—observing the brain in action.
• Cognitive modelling is a subfield of artificial intelligence that simulates human
cognition. It's used to understand human cognition and improve human-
computer interaction.
• Simulate human behaviour
• Predict human performance
• Improve human-computer interaction
• Provide human qualities to AI systems
• A system that thinks like a human requires a cognitive-modelling approach.

• The human mind is a black box: we are not fully clear about our own thought processes.

• One has to know the functioning of the brain and its mechanism for processing
information. It is an area of cognitive science.

• Cognitive Science and Artificial Intelligence is a comprehensive interdisciplinary
program that integrates the study of artificial intelligence with the study of
human cognition.

• A neural network is a computing model that processes information in a manner similar to the brain.

• Ex: voice-activated virtual assistants


Thinking Rationally: The “Laws of Thought” approach
• The “laws of thought” approach is a method of artificial intelligence (AI) and
machine learning that uses formal logic and reasoning.
• Systems that think rationally rely on logic, rather than on humans, as the measure of
correctness.
• The goal is to develop AI systems that can make logical decisions based on a
set of rules.
The three fundamental laws of thought are:
• The principle of identity: if any statement is true, then it is true.
• The law of contradiction: no statement can be both true and false.
• The law of excluded middle: every statement is either true or false; there is no third possibility.
For example: given that Socrates is a man and all men are mortal, one
can conclude logically that Socrates is mortal.
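Written in first-order logic (a standard formalization of this example, not taken verbatim from the slides):

\[
\mathit{Man}(\mathit{Socrates}),\quad
\forall x\,\bigl(\mathit{Man}(x) \Rightarrow \mathit{Mortal}(x)\bigr)
\;\vdash\; \mathit{Mortal}(\mathit{Socrates})
\]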
Acting Rationally: The Rational agent approach
• In the rational agent approach, acting rationally means acting to achieve one's goals, given one's beliefs.
• An agent is a system that perceives an environment and acts within that environment.
• A rational agent acts in such a way as to bring maximum benefit to the entity performing
the action.
• An agent is said to act rationally if, given a set of rules, it takes actions to achieve its goals.
 Some characteristics of a rational agent include:
• Operating autonomously
• Perceiving their environment
• Adapting to change
• Creating and pursuing goals
• Making correct inferences
• Making decisions based on logical reasoning
• Reason and draw meaningful conclusions
• Plan sequence of actions to complete a goal
• Solve problems
• Think abstractly
• Comprehend ideas and help computers to communicate in Natural
Language
• Store knowledge provided before or during interrogation
• Learn new ideas from environment and circumstances
• Offer advice based on rules and situations
• Learn new concepts and tasks that require high level of intelligence
Foundations and History of Artificial Intelligence
The State of the Art
Robotic vehicles: With the 132-mile DARPA Grand Challenge in 2005 and driving on streets with traffic in
the 2007 Urban Challenge, the race to develop self-driving cars began in earnest. In 2018,
Waymo test vehicles passed the landmark of 10 million miles driven on public roads
without a serious accident, with the human driver stepping in to take over control only once
every 6,000 miles. Soon after, the company began offering a commercial robotic taxi
service.

Speech Recognition: A traveller calling United Airlines to book a flight can have the
entire conversation guided by an automated speech recognition and dialog management
system.

Game Playing: IBM’s DEEP BLUE became the first computer program to defeat a world
chess champion when it bested Garry Kasparov by a score of 3.5 to 2.5 in a 1997
exhibition match. The value of IBM’s stock increased by $18 billion.
Autonomous Planning and Scheduling: A hundred million miles from Earth,
NASA’s Remote Agent program became the first on-board autonomous planning program
to control the scheduling of operations for a spacecraft. The successor program MAPGEN
plans the daily operations for NASA’s Mars Exploration Rovers, and MEXAR2 did
mission planning—both logistics and science planning—for the European Space Agency’s
Mars Express mission in 2008.

Spam Fighting: Every day, learning algorithms classify over a billion messages as spam,
saving users from having to manually delete messages that might otherwise make up
80% or 90% of their inbox. AI algorithms are trained to recognize
patterns associated with spam content, including common phrases, keywords, and
structural elements.
Logistics Planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do
automated logistics planning and scheduling for transportation. This involved up to
50,000 vehicles, cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters.

Machine Translation: Machine translation in AI refers to the use of algorithms and
computational models to automatically translate text or speech from one language to
another. One prominent example is Google Translate. Machine
translation is a powerful tool for breaking down language barriers, facilitating cross-
cultural communication, and making information more accessible globally.

Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum
cleaners for home use. The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify
the location of snipers.
CHAPTER-2 INTELLIGENT AGENTS

• Agents and Environments

• Good Behaviour: The concept of Rationality

• The Nature of Environments

• Structure of Agents
AGENTS AND ENVIRONMENTS
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• HUMAN AGENT:
eyes, ears, and other organs for sensors
hands, legs, vocal tract for actuators
• ROBOTIC AGENT:
cameras and infrared range finders for sensors
and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets
• The agent function maps from percept histories (percept sequences) to actions: f : P* → A.
• The agent program runs on the physical architecture to produce f.
• An agent’s percept sequence is the complete history of everything the agent
has ever perceived.
• Mathematically speaking, we say that an agent’s behaviour is described by the
agent function that maps any given percept sequence to an action.
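A minimal sketch of this idea in Python; the table entries and names here are illustrative assumptions, not from the slides:

    # A table-driven agent: the agent function f maps a percept sequence
    # (the entire history) to an action.  TABLE is a hypothetical,
    # hand-filled lookup table; real tables are astronomically large.
    percept_history = []

    TABLE = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    def table_driven_agent(percept):
        percept_history.append(percept)                   # remember everything seen
        return TABLE.get(tuple(percept_history), "NoOp")  # f : P* -> A

    print(table_driven_agent(("A", "Clean")))   # -> Right
    print(table_driven_agent(("B", "Dirty")))   # -> Suck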
VACUUM CLEANER WORLD

• Percept (input): location and contents, e.g., [A, Dirty]; the agent perceives where it is
and whether that square is clean or dirty.
• Actions: move Left, move Right, Suck, Do Nothing.
• The agent’s function can be written as a lookup table;
for many agents this is a very large table.
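A tiny condition-action version of the vacuum agent avoids the huge table; a sketch for the two-square world A/B, using the action names above:

    # Vacuum agent for the two-square world: percept = (location, status).
    def vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(vacuum_agent(("A", "Dirty")))   # -> Suck
    print(vacuum_agent(("B", "Clean")))   # -> Left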
GOOD BEHAVIOUR: THE CONCEPT OF RATIONALITY
Rational Agent:
• one that does the right thing,
• every entry in the table for the agent function is filled out correctly.
• doing the right thing is better than doing the wrong thing.
• The right action is the one that will cause the agent to be most successful.
Performance measure:
• embodies the criterion for success of an agent's behaviour.
• When an agent is plunked down in an environment, it generates a sequence of
actions according to the percepts it receives.
• This sequence of actions causes the environment to go through a sequence of
states.
• If the sequence is desirable, then the agent has performed well.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence to date

Rational Agent: For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
Examples of Rational Choice
With respect to Vacuum cleaner

a) Performance measure – awarding points (one point for each clean square at
each time step)
b) Prior knowledge – the geography of the environment (known a priori): clean squares stay
clean and sucking cleans the current square.
c) Actions – Left, Right, Suck, Do Nothing.
d) Percept sequence – perceiving dirt locations: the agent correctly perceives
its location and whether that location contains dirt.
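A small sketch of this performance measure, assuming the world history is recorded as one dict per time step mapping squares to their status (the representation is an illustrative assumption):

    # Award one point for each clean square at each time step.
    def performance(history):
        return sum(status == "Clean"
                   for state in history
                   for status in state.values())

    history = [{"A": "Dirty", "B": "Clean"},   # step 1: one clean square
               {"A": "Clean", "B": "Clean"}]   # step 2: two clean squares
    print(performance(history))                # -> 3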
Omniscience, learning, and autonomy
Rationality is different from omniscience:
• An omniscient agent knows the actual outcome of its actions and can act accordingly.
• Percepts may not supply all relevant information.
• E.g., I am walking along the Champs Élysées one day and I see an old friend across the
street.
• E.g., in a card game, one does not know the cards of the other players.

Rationality is different from perfection:

• Rationality maximizes expected performance, while perfection maximizes actual
performance.
• Performing actions in order to modify future percepts (i.e., information gathering) is
a crucial part of rationality and is closely aligned with exploration.
• An intelligent agent should not only gather information, but also learn.

• The agent’s initial configuration could reflect some prior knowledge of the
environment, but as the agent gains experience, this may be modified and
augmented.

• A rational agent should be autonomous, in the sense that it learns what it
can to compensate for partial or incorrect prior knowledge.

• Ideally, the incorporation of learning allows for the design of a single
rational agent that will succeed in a variety of different environments and
for a variety of tasks.
The Nature of Environments
• we must think about task environments, which are essentially the “problems”
to which rational agents are the “solutions.”
Specifying the Task Environment:
PEAS: Performance measure, Environment, Actuators, Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
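The standard textbook PEAS description for the taxi can be written down directly as structured data; a sketch (the dict layout is just one convenient encoding):

    # PEAS description for an automated taxi driver.
    taxi_peas = {
        "Performance": ["safe", "fast", "legal", "comfortable trip",
                        "maximize profits"],
        "Environment": ["roads", "other traffic", "pedestrians", "customers"],
        "Actuators":   ["steering", "accelerator", "brake",
                        "signal", "horn", "display"],
        "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                        "accelerometer", "engine sensors", "keyboard"],
    }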
Properties of Task Environments
Fully observable (vs. partially observable)

Single agent (vs. multiagent)

Deterministic (vs. stochastic)

Episodic (vs. sequential)

Static (vs. dynamic)

Discrete (vs. continuous)

Known (vs. unknown)


Fully observable (vs. partially observable)
• Is everything the agent requires to choose its actions available to it via its
sensors? If so, it has perfect (full) information.
• If so, the environment is fully observable; if not, parts of the environment are
inaccessible and the agent must make informed guesses about the world.
• In decision theory: perfect information vs. imperfect information.
• If the agent has no sensors at all, then the environment is unobservable.

EXAMPLE:
Crossword: Fully; Poker: Partially; Backgammon: Fully; Taxi driver: Partially; Part-picking robot: Partially; Image analysis: Fully
Single agent (vs. multi agent)
• An agent operating by itself in an environment or there are many agents
working together
• An agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.
• Chess is a competitive multiagent environment.
• The taxi-driving environment is a partially cooperative and partially competitive
multiagent environment.

EXAMPLE:
Crossword: Single; Poker: Multi; Backgammon: Multi; Taxi driver: Multi; Part-picking robot: Single; Image analysis: Single
Deterministic (vs. stochastic)
• Deterministic (vs. stochastic): if the next state of the environment is completely
determined by the current state and the action executed by the agent, then we
say the environment is deterministic; otherwise, it is stochastic.
• If the environment is (mostly) partially observable, it may appear to be stochastic;
if it is fully observable, it appears deterministic.
• We say an environment is uncertain if it is not fully observable or not
deterministic.
EXAMPLE:
Crossword: Deterministic; Poker: Stochastic; Backgammon: Stochastic; Taxi driver: Stochastic; Part-picking robot: Stochastic; Image analysis: Deterministic
Episodic (vs. sequential)

Is the choice of the current action dependent on previous actions?
• If not, then the environment is episodic.
• Each episode consists of the agent perceiving and then performing a single action.
• For example, spotting defective parts on an assembly line.
In non-episodic environments:
• Current choice will affect future actions
• In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and
taxi driving are sequential
EXAMPLE:
Crossword: Sequential; Poker: Sequential; Backgammon: Sequential; Taxi driver: Sequential; Part-picking robot: Episodic; Image analysis: Episodic
Static (vs. dynamic)

Static environments do not change while the agent is deliberating over what to do.
Dynamic environments do change:
• so the agent should consult the world when choosing actions;
• alternatively, anticipate the change during deliberation, or make decisions very fast.
Semidynamic: the environment itself does not change with the passage of
time, but the agent's performance score does (e.g., chess with a clock).
EXAMPLE:
Crossword: Static; Poker: Static; Backgammon: Static; Taxi driver: Dynamic; Part-picking robot: Dynamic; Image analysis: Semi
Discrete (vs. continuous)

• The discrete/continuous distinction can be applied to the state of the
environment, to the way time is handled, and to the percepts and actions of the
agent.
• For example, a discrete-state environment such as a chess game has a finite
number of distinct states, and a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range of
continuous values and do so smoothly over time.
EXAMPLE:
Crossword: Discrete; Poker: Discrete; Backgammon: Discrete; Taxi driver: Continuous; Part-picking robot: Continuous; Image analysis: Continuous
Known (vs. unknown)

• In a known environment, the outcomes for all actions are given.


• Obviously, if the environment is unknown, the agent will have to learn how it works in order
to make good decisions.
• The distinction between known and unknown environments is not the same as the one between
fully and partially observable environments.
• It is quite possible for a known environment to be partially observable—for example, in
solitaire card games, I know the rules but am still unable to see the cards that have not yet
been turned over.
• Conversely, an unknown environment can be fully observable —in a new video game, the
screen may show the entire game state but I still don’t know what the buttons do until I try
them.
SUMMARY
Task environment      Observable  Agents  Deterministic?  Episodic?   Static?  Discrete?
Crossword             Fully       Single  Deterministic   Sequential  Static   Discrete
Poker                 Partially   Multi   Stochastic      Sequential  Static   Discrete
Backgammon            Fully       Multi   Stochastic      Sequential  Static   Discrete
Taxi driver           Partially   Multi   Stochastic      Sequential  Dynamic  Continuous
Part-picking robot    Partially   Single  Stochastic      Episodic    Dynamic  Continuous
Image analysis        Fully       Single  Deterministic   Episodic    Semi     Continuous
The Structure of Agents
• agent = architecture + program

• The agent program takes the current percept as input from the sensors and
returns an action to the actuators.

• The agent program takes only the current percept as input (nothing more is available from
the environment).

• The agent function, in contrast, takes the entire percept history.

• If the agent's actions depend on the entire percept sequence, the agent will have
to remember the percepts.
• Let P be the set of possible percepts and T the lifetime of the agent
(the total number of percepts it receives).
• The lookup table then needs ∑_{t=1}^{T} |P|^t entries.
• Consider playing chess: with |P| = 10 and T = 150, this
requires a table of at least 10^150 entries.
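A quick computation makes the growth concrete; a sketch using the chess figures above:

    # Lookup-table size: sum over t = 1..T of |P|**t entries.
    P, T = 10, 150
    entries = sum(P**t for t in range(1, T + 1))
    print(len(str(entries)))   # 151 digits, i.e. on the order of 10**150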
Agent Types
Four basic types in order of increasing generality:

Reflex Agents-
• Simple reflex agents
• Model Based Reflex agents

Goal-based agents
Utility-based agents
Learning agents
Simple Reflex Agents
• Simple but very limited intelligence.
• Action does not depend on percept history, only on current percept.
• Therefore no memory requirements.
• Environment is fully observable, deterministic, static, episodic, discrete,
and single-agent.
• The agent function is based on the condition-action rule:
• if condition then action.
• E.g.: if car-in-front-is-braking then initiate-braking.
• If you see the brake lights of the car in front, then apply the brakes. The agent
simply takes in a percept, determines which action could be applied, and
does that action.
• The INTERPRET-INPUT function generates an abstracted description of the
current state from the percept,

• The RULE-MATCH function returns the first rule in the set of rules that
matches the given state description.
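A minimal Python sketch of this structure; the rule set and the state description built by interpret_input are illustrative assumptions:

    # Simple reflex agent: acts on the current percept only.
    RULES = [
        # (condition on the state description, action) -- illustrative rules
        (lambda s: s.get("car_in_front_is_braking"), "initiate-braking"),
    ]

    def interpret_input(percept):
        # Build an abstracted state description from the current percept.
        return {"car_in_front_is_braking": percept == "brake-lights"}

    def rule_match(state, rules):
        # Return the action of the first rule whose condition matches.
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"

    def simple_reflex_agent(percept):
        state = interpret_input(percept)
        return rule_match(state, RULES)

    print(simple_reflex_agent("brake-lights"))   # -> initiate-braking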
Advantages:
• Easy to implement.
• Uses much less memory than the table-driven agent.
• Useful when a quick automated response is needed (i.e., a reflex
action).

Disadvantages:
• Simple reflex agents work correctly only in a fully observable environment.
• Partially observable environments get simple reflex agents into trouble.
• E.g., a vacuum-cleaner robot with a defective location sensor can end up in infinite loops.
Model Based Reflex Agent
• A model-based reflex agent is an artificial intelligence agent that incorporates
an internal model of the world to make decisions.
• This type of agent maintains an internal representation or model of the current
state of the world, and it uses this model to decide on actions based on
perceived inputs.
• The internal model helps the agent reason about the consequences of its
actions and plan its behaviour accordingly.
• The agent perceives the current state of the environment through sensors,
obtaining information about its surroundings.
• After taking an action, the agent updates its internal model to reflect the
changes caused by its actions. This update helps the agent refine its
understanding of the environment and improve decision-making in the future.
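A minimal sketch of this loop for the vacuum world; the internal model and the way action effects are predicted are deliberately toy assumptions:

    # Model-based reflex agent: an internal model of the world is updated
    # from each percept and from the predicted effect of its own actions.
    class ModelBasedReflexAgent:
        def __init__(self):
            self.model = {"A": "Unknown", "B": "Unknown"}   # believed world state

        def agent(self, percept):
            location, status = percept
            self.model[location] = status          # revise model from the percept
            if status == "Dirty":
                self.model[location] = "Clean"     # predicted effect of Suck
                return "Suck"
            other = "B" if location == "A" else "A"
            if self.model[other] != "Clean":
                return "Right" if location == "A" else "Left"
            return "NoOp"                          # model says all squares clean

    a = ModelBasedReflexAgent()
    print(a.agent(("A", "Dirty")))   # -> Suck
    print(a.agent(("A", "Clean")))   # -> Right (B is still unknown)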
Goal Based Agents
Is knowing the current state of the environment enough?
• The taxi can go left, right, or straight.
• The correct decision depends on where the taxi is trying to get to.
The agent must have a goal:
• a destination to get to;
• the agent needs some sort of goal information that describes desirable
situations, e.g., being at the passenger's destination.
Uses knowledge about a goal to guide its actions
• E.g., Search, planning
• A reflex agent brakes when it sees brake lights.
• A goal-based agent reasons:
• Brake lights -> the car in front is stopping -> I should stop -> I should apply the brake.
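A toy sketch of that reasoning chain, with a hypothetical transition model result(state, action); all names are illustrative:

    # Goal-based agent: pick an action whose *predicted* result satisfies
    # the goal, instead of reacting directly to the percept.
    def result(state, action):
        # Hypothetical transition model of the world.
        return {"brake": "stopped", "accelerate": "cruising", "NoOp": state}[action]

    def goal_based_agent(percept):
        # Brake lights ahead -> the car in front is stopping -> goal: stop.
        goal = "stopped" if percept == "brake-lights" else "cruising"
        for action in ["brake", "accelerate", "NoOp"]:
            if result("moving", action) == goal:
                return action
        return "NoOp"

    print(goal_based_agent("brake-lights"))   # -> brake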
Utility Based Agents
• A utility-based agent is an artificial intelligence agent designed to make
decisions by evaluating the utility or desirability of different outcomes.
• Unlike simple reflex agents or model-based reflex agents, which operate
based on predefined rules,
• utility-based agents consider the overall utility of possible actions and
choose the action that maximizes expected satisfaction or value.
• The agent selects the action that maximizes its expected utility. This
involves considering the potential outcomes of each action and choosing
the one that leads to the highest overall satisfaction.
• The agent may update its internal model, utility function, or decision-
making strategy based on the observed outcomes, aiming to improve its
performance over time.
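A sketch of the expected-utility choice; the probabilistic outcome model (routes, probabilities, utilities) is an illustrative assumption:

    # Utility-based agent: choose the action with the highest expected
    # utility under a probabilistic model of outcomes.
    def expected_utility(action, outcomes):
        return sum(p * u for p, u in outcomes[action])   # sum of prob * utility

    outcomes = {
        "fast_route": [(0.7, 10), (0.3, -20)],   # quicker, but risk of a jam
        "safe_route": [(1.0, 6)],                # slower, but certain
    }

    def utility_based_agent(outcomes):
        return max(outcomes, key=lambda a: expected_utility(a, outcomes))

    print(utility_based_agent(outcomes))   # -> safe_route (EU 6.0 vs 1.0)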
Learning Agents
• All agents can improve their performance through learning.
A learning agent can be divided into four conceptual components
• Performance element:
• what was previously the whole agent
• input from the sensors, output an action
• Learning element:
• modifies the performance element
• Critic:
• tells how the agent is doing and how the performance element should be modified to do
better
• Problem generator:
• tries to solve the problem differently instead of only optimizing
• suggests exploring new actions -> new, informative experiences
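A skeleton showing how the four components fit together; all names and the simple learning rule are illustrative assumptions:

    # Learning agent: the four conceptual components wired together.
    class LearningAgent:
        def __init__(self):
            self.rules = {}   # knowledge used by the performance element

        def performance_element(self, percept):
            # What was previously the whole agent: percept in, action out.
            return self.rules.get(percept, self.problem_generator())

        def critic(self, reward):
            # Judges how well the agent is doing against the performance measure.
            return reward > 0

        def learning_element(self, percept, action, reward):
            # Modifies the performance element based on the critic's feedback.
            if self.critic(reward):
                self.rules[percept] = action

        def problem_generator(self):
            # Suggests exploratory actions that may lead to new experiences.
            return "try-something-new"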
END OF CHAPTER-2
