
Lecture 2

AI Agents

Marissa B. Ramos, MIT - Instructor


Recall: What is AI?
• A field of study that seeks to explain and emulate
intelligent behavior in terms of computational processes
• The branch of computer science that is concerned with the
automation of intelligent behavior
• Artificial Intelligence is the study of systems that:
– think like humans / think rationally
– act like humans / act rationally
• The study and development of systems that demonstrate
intelligent behavior and carry out actions to achieve the
best outcome.
Types of Artificial Intelligence

• Artificial Intelligence can be divided into types in two
ways: based on the capabilities and based on the
functionality of the AI.
AI Type-1: Based on Capabilities

1. Narrow AI (Weak AI)


• Narrow AI is a type of AI which is able to perform a dedicated task with
intelligence.
• This is the most common and currently available AI.
• Narrow AI cannot perform beyond its field or limitations, as it is only trained
for one specific task. Hence it is also termed weak AI. Narrow AI can fail in
unpredictable ways if it goes beyond its limits.
• Siri, Alexa, Cortana and other virtual assistants are good examples of Narrow
AI, but they operate with a limited pre-defined range of functions.
• IBM's Watson supercomputer also comes under Narrow AI.
• Other examples of Narrow AI are playing chess, purchasing suggestions on
e-commerce sites, self-driving cars, speech recognition, and image
recognition.
AI Type-1: Based on Capabilities

2. General AI
• General AI is a type of intelligence that could perform any intellectual
task as efficiently as a human.
• The idea behind general AI is to build a system that is smart and
can think like a human on its own.
• Currently, no such system exists that qualifies as general
AI and can perform any task as well as a human. General AI needs to
master human-like capabilities such as sensory perception, motor skills,
natural language understanding, human-level creativity, social and
emotional connection, and problem-solving skills.
• Researchers worldwide are now focused on developing machines
with General AI.
• As systems with general AI are still under research, it will take a
great deal of effort and time to develop such systems.
AI Type-1: Based on Capabilities

3. Super AI
• Super AI is a level of intelligence at which
machines surpass human intelligence and can
perform any task better than a human, with cognitive
properties. It is an outcome of general AI.
• Some key characteristics of Super AI include the
ability to think, reason, solve puzzles, make
judgments, plan, learn, and communicate on its own.
• Super AI is still a hypothetical concept of Artificial
Intelligence. Developing such systems in reality
remains a world-changing challenge.
Summary for Type 1 AI
AI Type-2: Based on Functionality

1. Reactive Machines
• Purely reactive machines are the most
basic types of Artificial Intelligence.
• Such AI systems do not store
memories or past experiences for
future actions.
• These machines only focus on the current
scenario and react to it with the best
possible action.
• IBM's Deep Blue system is an example
of reactive machines.
• Google's AlphaGo is also an example
of reactive machines.
AI Type-2: Based on Functionality

2. Limited Memory
• Limited memory machines can store
past experiences or some data for a
short period of time.
• These machines can use stored data
for a limited time period only.
• Self-driving cars are among the best
examples of Limited Memory systems.
These cars can store the recent speed of
nearby cars, the distance to other
cars, the speed limit, and other
information needed to navigate the road.
AI Type-2: Based on Functionality

3. Theory of Mind
• In Psychology, “theory of mind” refers to the ability to attribute
mental state — beliefs, intent, desires, emotion, knowledge — to
oneself and others.
• Theory of Mind AI should understand human emotions,
people, and beliefs, and be able to interact socially like humans.
• This type of AI machine has not yet been developed, but
researchers are making great efforts toward, and progress in,
developing such machines.
• For example, you could yell angrily at Google Maps to take you in
another direction. However, it’ll neither show concern for your
distress nor offer emotional support. Instead, the map application
will return the same traffic report and ETA.
AI Type-2: Based on Functionality

4. Self-Awareness
• Self-awareness AI is the future of Artificial
Intelligence. These machines will be super
intelligent, and will have their own consciousness,
sentiments, and self-awareness.
• These machines will be smarter than the human
mind.
• Self-Awareness AI does not yet exist in reality; it
is still a hypothetical concept.
Summary for Type 2 AI
Intelligent Agents

• Agent: anything that perceives its environment through
sensors and acts upon its environment through actuators
or effectors
• AI: the study of rational agents
• A rational agent carries out the action with the best
outcome after considering past and current percepts
• An AI system is composed of an agent and its environment.
• Agents act in their environment.
• The environment may contain other agents.
AI Agent Terminologies

• Sensor: a device that detects changes in the
environment and sends the information to other
electronic devices. An agent observes its environment
through sensors.
• Actuators: the components of a machine that convert
energy into motion. Actuators are responsible for
moving and controlling a system. An actuator can be
an electric motor, gears, rails, etc.
• Effectors: the devices that actually affect the
environment. Effectors can be legs, wheels, arms, fingers,
wings, fins, and display screens.
Intelligent Agents

• A human agent has sensory organs such as eyes,
ears, nose, tongue and skin that parallel the sensors,
and other organs such as hands, legs and mouth
for effectors.
• A robotic agent has cameras and infrared range
finders for sensors, and various motors as
actuators or effectors.
• A software agent has encoded bit strings as its
program and actions. It can take keystrokes and file
contents as sensory input, act on those inputs,
and display output on the screen.
Intelligent Agents

Agent Function
• a = F(p)
where p is the current percept, a is the action carried out,
and F is the agent function
• F maps percepts to actions
F: P → A
where P is the set of all percepts, and A is the set of all
actions
• In general, an action may depend on all percepts
observed so far, not just the current percept
Agent Function Refined
• a_k = F(p_0, p_1, p_2, …, p_k)
where p_0, p_1, p_2, …, p_k is the sequence of percepts
observed to date, and a_k is the resulting action carried out
• F now maps percept sequences to actions
F: P* → A

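The refined agent function can be sketched in code. This is a minimal illustration using a hypothetical two-cell vacuum world; the percept format, locations, and action names here are invented for the example, not part of the lecture's formalism.

```python
# Sketch of F: P* -> A for an assumed two-cell vacuum world.
# A percept is a (location, status) pair, e.g. ("A", "Dirty").

def agent_function(percept_history):
    """Map the whole percept sequence observed so far to an action.

    This particular agent only consults the latest percept, but since F
    receives the full history, a richer agent could use all of it.
    """
    location, status = percept_history[-1]  # current percept p_k
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

For example, `agent_function([("A", "Dirty")])` yields `"Suck"`, while a clean cell triggers movement to the other cell.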
Structure of Agents
• Agent = architecture + program
– architecture
• device with sensors and actuators
• e.g., A robotic car, a camera, a PC, …
– program
• implements the agent function on the architecture

Specifying the Task Environment

• The environment is where the agent lives.
• It is what the agent operates in; it provides the agent
with something to sense and act upon.
• Specifying it involves the PEAS model.
The PEAS Model

• PEAS is a model describing the task environment an AI
agent works in. When we define an AI agent or rational
agent, we can group its properties under the PEAS
representation model.
• It is made up of four terms:
– Performance Measure: captures agent’s aspiration
– Environment: context, restrictions
– Actuators: indicates what the agent can carry out
– Sensors: indicates what the agent can perceive
Example: PEAS for Self-driving Cars
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road signs,
pedestrians
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer,
accelerometer, sonar.
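The PEAS description above can be grouped into a plain data structure. The class name and field names below are my own framing for illustration; the slide content only specifies the four PEAS categories and the self-driving-car entries.

```python
# A sketch of a PEAS description as a data class (names are illustrative).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # what the agent aspires to
    environment: list   # context and restrictions
    actuators: list     # what the agent can carry out
    sensors: list       # what the agent can perceive

# The self-driving-car example from the slide, encoded as data.
self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
```

Encoding PEAS this way makes it easy to compare the task environments of several agents side by side.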
Example of Agents with their PEAS representation
Properties of Environments
• Discrete / Continuous − If there are a limited number of distinct, clearly defined, states of the environment,
the environment is discrete (For example, chess); otherwise it is continuous (For example, driving).
• Observable / Partially Observable − If it is possible to determine the complete state of the environment at
each time point from the percepts it is observable; otherwise it is only partially observable.
• Static / Dynamic − If the environment does not change while an agent is acting, then it is static; otherwise it
is dynamic.
• Single agent / Multiple agents − The environment may contain other agents which may be of the same or
different kind as that of the agent.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete state of the
environment, then the environment is accessible to that agent.
• Deterministic / Stochastic − If the next state of the environment is completely determined by the current
state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic or
stochastic.
• Episodic / Sequential − In an episodic environment, each episode consists of the agent perceiving and
then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not depend
on the actions in the previous episodes. Episodic environments are much simpler because the agent does
not need to think ahead. In Sequential environment, an agent requires memory of past actions to determine
the next best actions.
Examples
Reasons for “Playing Soccer”

• i. Stochastic – For a given current state and action executed by the agent, the
next state or outcome cannot be exactly determined; e.g., if the agent kicks the ball in
a particular direction, the ball may or may not be stopped by other players,
and the soccer field can change in many different ways depending on how players
move.
• ii. Sequential – The past history of actions in the game can affect the next action
in the game.
• iii. Dynamic – The environment can change while the agent is making a decision;
e.g., the soccer field (environment) changes when a player moves.
• iv. Continuous – The location of the ball or players is continuous. The speed and the
direction (angle) at which the agent hits the ball are continuous.
• v. Partially observable – An agent cannot detect all the things on the soccer field that
can affect its action; e.g., it cannot determine what other players are thinking.
• vi. Multi-agent – There are many agents involved in a soccer game.
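The property pairs above can be recorded as data so that different task environments are easy to compare. The encoding below is my own; the soccer values follow the classification just given, and the chess values follow the discrete/static examples mentioned earlier in the properties list.

```python
# Environment-property profiles as plain dictionaries (encoding is illustrative).
soccer = {
    "observable": "partially", "deterministic": False, "episodic": False,
    "static": False, "discrete": False, "agents": "multi",
}
chess = {
    "observable": "fully", "deterministic": True, "episodic": False,
    "static": True, "discrete": True, "agents": "multi",
}

# Properties on which the two environments differ:
differences = [k for k in soccer if soccer[k] != chess[k]]
```

Such profiles make the contrast explicit: soccer differs from chess on observability, determinism, dynamics, and continuity, while both are sequential and multi-agent.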
Types of Agents
• Agents can be grouped into five classes based on their
degree of perceived intelligence and capability.
• All these agents can improve their performance and generate
better actions over time.
– Simple Reflex Agent
– Model-based Reflex Agent
– Goal-based Agent
– Utility-Based Agent
– Learning Agent
Simple Reflex Agent
• Simple reflex agents are the simplest agents.
• These agents make decisions on the basis of the
current percept and ignore the rest of the percept
history.
• These agents only succeed in fully observable
environments.
• The simple reflex agent does not consider any part
of the percept history during its decision and action
process.
• The simple reflex agent works on the Condition-Action
Rule, which means it maps the current state to an action.
For example, a room-cleaner agent works only if there
is dirt in the room.
• Condition-Action Rule − a rule that maps a state
(condition) to an action.
Ex: if hand is in fire then pull away hand
• Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of non-perceptual
parts of the current state.
o Rule sets are mostly too big to generate and to store.
o They are not adaptive to changes in the environment.
Simple Reflex Agent
The vacuum promises to sense dirt
and debris on your floors and clean
those areas accordingly. This is an
example of a simple reflex agent that
operates on the condition (dirty floors) to
initiate an action (vacuuming).

iRobot's Roomba was introduced in 2002.
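A simple reflex agent in the Roomba spirit can be sketched as a single condition-action rule table. The percept values and action names below are invented for illustration; a real robot vacuum's rule set is far richer.

```python
# A minimal simple reflex agent: one condition-action rule table,
# no percept history (percepts and actions are illustrative).
RULES = {
    "Dirty": "Suck",    # condition: dirt sensed -> action: vacuum here
    "Clean": "MoveOn",  # condition: floor clean -> action: keep moving
}

def simple_reflex_agent(percept):
    """Act on the current percept only; all history is ignored."""
    return RULES[percept]
```

Because the agent maps each percept directly to an action, it needs the environment to be fully observable: if the dirt sensor misses a spot, the agent has no memory with which to come back for it.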


Model-based Reflex Agent

• They use a model of the world to choose their actions.


• The Model-based agent can work in a partially observable
environment, and track the situation.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the
world," so it is called a Model-based agent.
• Internal State: It is a representation of the current state
based on percept history.
• These agents have the model, "which is knowledge of the
world" and based on the model they perform actions.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's actions affect the world
• The current state of the world depends on the percept history
• The rule to be applied next depends on the resulting state:
state’ -> next-state( state, percept )
action -> select-action( state’, rules )
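The next-state / select-action loop can be sketched for a hypothetical two-cell vacuum world where the internal state remembers which cells were last seen dirty. The world, state encoding, and action names are my own illustration; here the rule set is folded into the function body rather than passed in.

```python
# Sketch of a model-based reflex agent for an assumed two-cell world.

def next_state(state, percept):
    """state' -> next-state(state, percept): update the internal model."""
    location, status = percept
    new_state = dict(state)
    new_state[location] = status   # model: last known status of each cell
    new_state["at"] = location     # model: where the agent currently is
    return new_state

def select_action(state, rules=None):
    """action -> select-action(state', rules): decide from the tracked state."""
    here = state["at"]
    if state.get(here) == "Dirty":
        return "Suck"
    # The model lets the agent head for a cell it *remembers* as dirty,
    # even though that cell is not in the current percept.
    for cell, status in state.items():
        if cell != "at" and status == "Dirty":
            return f"GoTo:{cell}"
    return "Idle"

# Example run: perceive a sequence, maintaining the internal state.
state = {}
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Clean")]:
    state = next_state(state, percept)
action = select_action(state)   # everything known is clean -> "Idle"
```

The key difference from the simple reflex agent is the persistent `state`: the agent can act on remembered facts about cells it cannot currently perceive, which is what lets it work in a partially observable environment.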
Model-based Reflex Agent
Goal-based Agent
• Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do. The agent needs
to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based
agent by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved or not.
Such consideration of different scenarios is called searching
and planning, and it makes an agent proactive.
• A goal-based agent is more flexible than a reflex agent since the
knowledge supporting a decision is explicitly modeled, thereby
allowing for modifications.
• Goal − the description of a desirable situation.
• Goals provide for a more sophisticated next-state function.
• Essentially, the agent's rule set is determined by its goals.
• This requires knowledge of future consequences given possible actions.
Goal-based Agent
Google's Waymo driverless cars are good examples of a goal-based agent
when they are programmed with an end destination, or goal, in mind.
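Goal-based search can be sketched as breadth-first search over a road graph: the agent considers sequences of actions before acting, and returns the shortest path to the goal. The road graph below is a made-up example, not Waymo's actual map or planner.

```python
# Sketch of goal-based behavior: search for an action sequence that
# reaches the goal before acting (the city map is hypothetical).
from collections import deque

ROADS = {  # node -> directly reachable neighbors
    "home": ["mall", "school"],
    "mall": ["airport"],
    "school": ["airport", "park"],
    "park": [],
    "airport": [],
}

def plan_route(start, goal):
    """Breadth-first search: return a shortest route to the goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                 # first goal reached is shortest
        for nxt in ROADS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal unreachable from start
```

This is the "searching and planning" step from the slide in miniature: the agent evaluates whole action sequences against the goal rather than reacting to the current percept alone.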
Utility-based Agent

• Utility-based agents act based not only on goals but
also on the best way to achieve the goal.
• The utility-based agent is useful when there are
multiple possible alternatives and the agent has to
choose the best action.
• The utility function maps each state to a real
number that measures how efficiently each action
achieves the goals.
• There may be multiple action sequences that arrive at a goal.
• The agent chooses the action that provides the best level of
“happiness” for the agent.
• These agents are similar to the goal-based agent but
add an extra component of utility measurement,
which distinguishes them by providing a measure of
success at a given state.
Utility-based Agent
• Route recommendation system
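A route-recommendation choice can be sketched as a utility function: several routes all reach the goal, and the utility function maps each outcome to a real number so the agent can pick the best one. The routes and penalty weights below are invented for illustration.

```python
# Sketch of a utility-based route choice among goal-reaching alternatives.
routes = [  # hypothetical routes that all reach the destination
    {"name": "highway",  "minutes": 25, "tolls": 3.0},
    {"name": "downtown", "minutes": 40, "tolls": 0.0},
    {"name": "scenic",   "minutes": 55, "tolls": 0.0},
]

def utility(route):
    """Higher is better: penalize travel time and (weighted) toll cost."""
    return -(route["minutes"] + 10 * route["tolls"])

best = max(routes, key=utility)
```

With these weights the toll-free downtown route wins (utility -40) over the tolled highway (-55); a goal-based agent alone could not rank them, since every route satisfies the goal of arriving.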
Learning Agent

• A learning agent in AI is an agent that can learn from
its past experiences; it has learning capabilities.
• It starts out acting with basic knowledge and then
adapts automatically through learning.
• A learning agent has four main conceptual
components:
• Learning element: responsible for making
improvements by learning from the environment.
• Critic: the learning element takes feedback from the
critic, which describes how well the agent is doing
with respect to a fixed performance standard.
• Performance element: responsible for selecting
external actions.
• Problem generator: responsible for suggesting
actions that will lead to new and informative
experiences.
• Learning agents are able to learn, analyze
performance, and look for new ways to improve
their performance.
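The four components can be illustrated with tabular Q-learning on a tiny corridor world. The world and parameters are invented, and the mapping is my own reading: the value update is the learning element, the reward signal plays the critic, greedy action selection is the performance element, and random exploration stands in for the problem generator.

```python
# Sketch of a learning agent as Q-learning on a 4-cell corridor (goal at 3).
import random

N_STATES, GOAL, ACTIONS = 4, 3, [-1, +1]     # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3        # learning rate, discount, exploration

random.seed(0)                               # reproducible run
for _ in range(300):                         # learning episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:        # problem generator: try something new
            a = random.choice(ACTIONS)
        else:                                # performance element: best-known action
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0  # critic: feedback vs. the standard
        # learning element: improve the value estimate from this experience
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy policy after learning: +1 in a cell means "move right".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
```

Starting from zero knowledge, the agent's learned policy comes to prefer moving right toward the goal in every non-goal cell, illustrating how feedback from the critic gradually improves the performance element.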
Projects to try
• Automatically organize your PDF/source code collections
• Automatically organize your video/music collection
• Find faces in pictures or movies
• Make an automated call center
• Find cliques of friends from social graphs
• Make a dating site
• Predict NFL/NBA/MLB outcomes
• Track a finger on a touch interface
• Categorize physiological data, predict user emotions
• Categorize network traffic or OS activity