
AI Foundations and Applications

2. Intelligent Agent

Thien Huynh-The
HCM City Univ. Technology and Education
Jan, 2023
Announcement
Students should bring a notebook next week for in-class coding with Python
Requirement
Install Python and Jupyter Notebook
Ref: https://www.youtube.com/watch?v=1w-Bm4zpFgs
Install several common packages: numpy, pandas, scikit-learn, and tensorflow
Students are recommended to set up TensorFlow to run on a GPU if one is available.

HCMUTE AI Foundations and Applications 03/18/2024 2


Agents and Environments

Agent
?

Sensors Actuators

Percepts Actions

Environment



Agents

• Definition
• An agent is an entity that perceives its environment through sensors and takes actions through
actuators.
• The agent's behavior is described by the agent function, or policy, that maps percept histories to
actions:

f : P* → A

where P* is the set of all percept sequences and A is the set of actions.
• A human agent has sensory organs such as eyes, ears, nose, tongue, and skin as sensors, and
other organs such as hands, legs, and mouth as effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various motors and
actuators for effectors.
• A software agent receives encoded bit strings as its percepts and produces encoded bit strings
as its actions.



Agent Terminology

• Agent Terminology
• Performance Measure of Agent − the criterion that determines how successful an agent is.
• Behavior of Agent − the action that the agent performs after any given sequence of percepts.
• Percept − the agent's perceptual inputs at a given instant.
• Percept Sequence − the history of everything the agent has perceived to date.
• Agent Function − a map from the percept sequence to an action.



Pacman Game

• Simplified Pacman world


• Percepts: location and content. For example: (left cell, no food).
• Actions: go left, go right, go up, go down, eat, do nothing.



Pacman Agents

• Partial tabulation of a simple Pacman agent function

Percept sequence Action


(left cell, no food) Go right
(left cell, food) Eat
(right cell, no food) Go left
(right cell, food) Eat
(left cell, no food), (left cell, no food) Go right
(left cell, no food), (left cell, food) Eat
… …
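The partial tabulation above can be sketched as a table-driven agent program in Python. This is a minimal illustration, not an efficient design: the percept encoding and the "do nothing" default are assumptions, and a full table would grow exponentially with the length of the percept sequence.

```python
# A table-driven agent: the agent function is a lookup from the entire
# percept history to an action, mirroring the tabulation in the slide.

def make_table_driven_agent(table):
    percepts = []  # percept history accumulated across calls

    def agent(percept):
        percepts.append(percept)
        # Look up the full percept sequence; default if the sequence is unlisted.
        return table.get(tuple(percepts), "do nothing")

    return agent

# Partial table matching the slide (percept = (location, content)).
table = {
    (("left cell", "no food"),): "go right",
    (("left cell", "food"),): "eat",
    (("right cell", "no food"),): "go left",
    (("right cell", "food"),): "eat",
    (("left cell", "no food"), ("left cell", "food")): "eat",
}

agent = make_table_driven_agent(table)
print(agent(("left cell", "no food")))  # go right
print(agent(("left cell", "food")))     # eat (two-percept history row)
```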



What is the Optimal Pacman?

• The four main rules all AI agents must adhere to:


• Rule 1: An AI agent must be able to perceive the environment.
• Rule 2: The environmental observations must be used to make decisions.
• Rule 3: The decisions should result in action.
• Rule 4: The action taken by the AI agent must be rational. Rational actions are actions that
maximize performance and yield the best positive outcome.

• How to formulate the goal of Pacman?


• 1 point per food dot collected up to time t?
• 1 point per food dot collected up to time t, minus one point per move?
• penalize when too many food dots are left uncollected?

• Can it be implemented in a small and efficient agent program?



Rational Agents

• Informally, a rational agent is an agent that does the "right thing".


• A performance measure evaluates a sequence of environment states caused by the
agent's behavior.
• A rational agent is an agent that chooses whichever action maximizes the
expected value of the performance measure, given the percept sequence to date.
• Another explanation: a rational agent is modeled as making choices resulting in
intentions in an attempt to optimize the expected utility with respect to their
desires and consistent with their beliefs.
• Rational agents are self-interested agents.



What Are AI Rational Agents?

• If an agent is able to make good decisions considering all past percepts as well as the
current percept, then the agent is said to be a rational agent.
• In other words, a rational agent is an agent that has the
capability of doing the right thing at the right time.
• Autonomous driving (autopilot system)
• right time but wrong action
• right action but incorrect time
• Before going into details about rational agents let us first see what rationality
actually means.



The Meaning of Rationality

Rationality is defined with respect to four elements:

• Performance measure − the measure that defines the success or failure of an
agent; the performance of each agent will vary with respect to its percepts.
• Environment − the surroundings from which the agent learns and with which it
reacts, with the help of sensors and actuators respectively; the agent may face
different types of environments (discussed later in this lecture) once it is set in motion.
• Actuators − the parts of the agent through which it executes its output.
• Sensors − the parts of the agent that collect information about the
environment.



Ideal Rational Agent & Functionality

• An ideal rational agent is an agent that takes actions that maximize the performance
measure, based on its perceptual history and its built-in knowledge.
• Functionality
• Before an agent is actually put in the environment, the percept sequence and the actions for the
corresponding percepts need to be fed into the agent → it starts functioning with basic inputs
(initial setup)
• Based on these inputs, the agent performs its basic functions and keeps on learning about the
environment. This increases the complexity of the agent's learning.
• An agent constantly learns from the environment, and it upgrades and changes its perceptual
experience. This is usually done with the help of learning techniques such as reinforcement
learning.
• Performing the required actions so that future percepts can be modified is one of the main
parts of rationality → it depends purely on the amount of exploration the agent does.



Examples

Agent Self-driving Car


Performance Measure Comfort, safety, time taken, correct navigation.
Environment Roads, signals, other vehicles, weather, and
pedestrians.
Actuators Steering wheel, brake, horn, accelerator, indicators,
etc.
Sensors Attached cameras, speedometer of the car, GPS,
odometer, etc.



Examples

Agent Vacuum Cleaner


Performance Measure Cleanliness, Battery life, ease of use, efficiency.
Environment Room, floor, furniture, carpets, other objects.
Actuators Wheels, brushes, vacuum extractor.
Sensors Cameras, bump sensor, wall sensor etc.



Examples

Agent Diagnostic System in Hospital


Performance Measure Patient health, minimized costs.
Environment Hospital, staff, patients.
Actuators Diagnostic information, treatments, referrals etc.
Sensors Keyboard (for entering data), patient’s replies, test
reports etc.



Exercise

Agent Smart IoT-based Farming System


Performance Measure ?
Environment ?
Actuators ?
Sensors ?



Environment Types

• Fully observable vs. partially observable


• Whether the agent sensors give access to the complete state of the environment, at each
point in time.
• Deterministic vs. stochastic
• Whether the next state of the environment is completely determined by the current state and
the action executed by the agent.
• Episodic vs. sequential
• Whether the agent's experience is divided into atomic independent episodes.
• Static vs. dynamic
• Whether the environment can change, or the performance measure can change with time.



Environment Types

• Discrete vs. continuous


• Whether the state of the environment, the time, the percepts, and the actions are discrete or continuous.
• Single agent vs. multi-agent
• Whether the environment includes several agents that may interact with each other.
• Known vs unknown
• Reflects the agent's state of knowledge of the "laws of physics" of the environment.



Environment Types

• Are the following task environments fully observable? deterministic? episodic?
static? discrete? single-agent? known?
• Taxi driving
• Medical diagnosis
• Image analysis
• Part-picking robot
• Smart farming
• Smart manufacturing



The Structure of Intelligent Agents

• An intelligent agent is a program that can make decisions or perform a service


based on its environment, user input and experiences.
• These programs can be used to autonomously gather information on a regular,
programmed schedule or when prompted by the user in real time.
• Intelligent agents may also be referred to as a bot, which is short for robot.
• Agent’s structure can be viewed as −
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.



Simple Reflex Agents

• A simple reflex agent is an AI system that follows pre-defined rules to make


decisions.
• It only responds to the current situation without considering the past or future
ramifications.
• A simple reflex agent is suitable for environments with stable rules and
straightforward actions, as its behavior is purely reactive and responsive to
immediate environmental changes.



Simple Reflex Agents

• How does it work?


• A simple reflex agent executes its
functions by following the condition-
action rule, which specifies what action
to take in a certain condition.
• Example
• A rule-based system developed to
support automated customer support
interactions. The system can
automatically generate a predefined
response containing instructions on
resetting the password if a customer’s
message contains keywords indicating a
password reset.
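The customer-support example above can be sketched as a condition-action rule system in Python. The keywords, rules, and canned responses below are illustrative assumptions, not an actual support system.

```python
# A simple reflex agent: each rule is a (condition, action) pair evaluated
# against the CURRENT percept only -- the agent keeps no memory or state.

RULES = [
    (lambda msg: "password" in msg.lower() and "reset" in msg.lower(),
     "Send password-reset instructions"),
    (lambda msg: "refund" in msg.lower(),
     "Route to billing team"),
]

def simple_reflex_agent(message):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(message):
            return action
    return "Escalate to human agent"  # fallback when no rule fires

print(simple_reflex_agent("How do I reset my password?"))
# Send password-reset instructions
```

Note how the agent's limitations fall directly out of the structure: a message it was not explicitly programmed for always hits the fallback.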



Simple Reflex Agents

• Advantages:
• Easy to design and implement, requiring minimal computational resources
• Real-time responses to environmental changes
• Highly reliable in situations where the sensors providing input are accurate and the rules are well designed
• No need for extensive training or sophisticated hardware

• Limitations:
• Prone to errors if the input sensors are faulty or the rules are poorly designed
• Have no memory or state, which limits their range of applicability
• Unable to handle partial observability or changes in the environment they have not been explicitly
programmed for
• Limited to a specific set of actions and cannot adapt to new situations



Model-based Reflex Agents

• A model-based reflex agent performs actions based on the current percept and an internal
state representing the unobservable world. It updates its internal state based on two factors:
• How the world evolves independently of the agent
• How the agent's actions affect the world
• A cautionary model-based reflex agent is a variant of a model-based reflex agent
that also considers the possible consequences of its actions before executing
them.



Model-based Reflex Agents
• A model-based reflex agent follows the
condition-action rule, which specifies the
appropriate action to take in a given
situation.
• But unlike a simple reflex agent, a model-
based agent also employs its internal state
to assess the condition during the decision
and action process.
• The model-based reflex agent operates in
four stages:
• Sense: It perceives the current state of the world
with its sensors.
• Model: It constructs an internal model of the world
from what it sees.
• Reason: It uses its model of the world to decide
how to act based on a set of predefined rules or
heuristics.
• Act: The agent carries out the action that it has
chosen.
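The four stages above can be sketched in Python on a toy two-cell vacuum world. The world, the rules, and the state representation are illustrative assumptions chosen for brevity.

```python
# A model-based reflex agent: unlike a simple reflex agent, it keeps an
# internal model (what it believes about cells it has seen) and consults
# that model, not just the current percept, when choosing an action.

class ModelBasedReflexAgent:
    def __init__(self):
        self.model = {}  # internal state: location -> "clean" / "dirty"

    def act(self, percept):
        location, status = percept          # Sense: current percept
        self.model[location] = status       # Model: update internal state
        # Reason: condition-action rules over percept AND model
        if status == "dirty":
            action = "suck"
            self.model[location] = "clean"  # predict the effect of our action
        elif location == "A" and self.model.get("B") != "clean":
            action = "right"                # B's status unknown or dirty
        elif location == "B" and self.model.get("A") != "clean":
            action = "left"
        else:
            action = "no-op"                # every known cell is clean
        return action                        # Act

agent = ModelBasedReflexAgent()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("A", "clean")))   # right  (B not yet observed)
print(agent.act(("B", "dirty")))   # suck
print(agent.act(("B", "clean")))   # no-op  (model says both cells clean)
```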
Model-based Reflex Agents

• Advantages
• Quick and efficient decision-making based on their understanding of the world
• Better equipped to make accurate decisions by constructing an internal model of the world
• Adaptability to changes in the environment by updating their internal models
• More informed and strategic choices by using the internal state and rules to determine the condition

• Disadvantages
• Building and maintaining models can be computationally expensive
• The models may not capture the real-world environment's complexity very well
• Models cannot anticipate all potential situations that may arise
• Models need to be updated often to stay current
• Models may pose challenges in terms of interpretation and comprehension



Goal-based Agents

• Goal-based agents are AI agents that use information from their environment to
achieve specific goals. They employ search algorithms to find the most efficient
path towards their objectives within a given environment.
• These agents are also known as rule-based agents, as they follow predefined rules
to accomplish their goals and take specific actions based on certain conditions.
• Goal-based agents are easy to design and can handle complex tasks in various
applications like robotics, computer vision, and natural language processing.
• Unlike basic models, a goal-based agent can determine the optimal course of its
decision-making and action-taking processes depending on its desired outcome or
goal.



Goal-based Agents
• Given a plan, a goal-based agent attempts to choose the best strategy to achieve its goals. It
then uses search algorithms and heuristics to find an efficient path to the goal.
• The working pattern of the goal-based agent can
be divided into five steps:
• Perception: The agent perceives its environment using
sensors or other input devices to collect information
about its surroundings.
• Reasoning: The agent analyzes the information
collected and decides on the best course of action to
achieve its goal.
• Action: The agent takes actions to achieve its goal,
such as moving or manipulating objects in the
environment.
• Evaluation: After taking action, the agent evaluates its
progress towards the goal and adjusts its actions, if
necessary.
• Goal Completion: Once the agent has achieved its goal,
it either stops working or begins working on a new goal.
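The search step at the heart of this loop can be sketched with breadth-first search in Python. The grid world, the start cell, and the goal cell are illustrative assumptions.

```python
# A goal-based agent plans a path to its goal with breadth-first search:
# BFS explores states level by level, so the first path that reaches the
# goal is a shortest one.

from collections import deque

def bfs_plan(start, goal, neighbors):
    """Return a list of states from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A 3x3 grid; the agent can move up, down, left, or right within bounds.
def neighbors(cell):
    x, y = cell
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 3 and 0 <= b < 3]

plan = bfs_plan(start=(0, 0), goal=(2, 2), neighbors=neighbors)
print(plan)  # a shortest 5-state path, e.g. [(0,0), (1,0), (2,0), (2,1), (2,2)]
```

Once the plan is computed, "Action" amounts to executing its steps, and "Evaluation" re-plans whenever the observed state diverges from the plan.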
Goal-based Agents

• Advantages
• Simple to implement and understand
• Efficient for achieving a specific goal
• Easy to evaluate performance based on goal completion
• It can be combined with other AI techniques to create more advanced agents
• Well-suited for well-defined, structured environments
• It can be used for various applications, such as robotics, game AI, and autonomous vehicles.

• Disadvantages
• Limited to a specific goal
• Unable to adapt to changing environments
• Ineffective for complex tasks that have too many variables
• Requires significant domain knowledge to define goals



Utility-based Agents

• Utility-based agents are AI agents that make decisions based on maximizing a


utility function or value.
• They choose the action with the highest expected utility, which measures how good
the outcome is.
• This helps them deal with complex and uncertain situations more flexibly and
adaptively.
• Utility-based agents are often used in applications where they have to compare
and select among multiple options, such as resource allocation, scheduling, and
game-playing.



Utility-based Agents

• A utility-based agent aims to choose


actions that lead to a high utility state. To
achieve this, it needs to model its
environment, which can be simple or
complex.
• Then, it evaluates the expected utility of
each possible outcome based on the
probability distribution and the utility
function.
• Finally, it selects the action with the
highest expected utility and repeats this
process at each time step.
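The expected-utility computation described above can be sketched in Python. The actions, outcome probabilities, and utility function below are illustrative assumptions.

```python
# A utility-based agent: for each action, sum utility over outcomes weighted
# by their probabilities, then pick the action with the highest expected utility.

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, state) pairs for one action."""
    return sum(p * utility(state) for p, state in outcomes)

# Two candidate routes for a self-driving car; states are arrival times (minutes).
actions = {
    "highway":  [(0.8, 20), (0.2, 60)],   # usually fast, but risk of a jam
    "backroad": [(1.0, 35)],              # slower but fully predictable
}

utility = lambda minutes: -minutes        # shorter trips have higher utility

best = max(actions, key=lambda a: expected_utility(actions[a], utility))
print(best)  # highway: EU = -(0.8*20 + 0.2*60) = -28, which beats -35
```

Changing the utility function (e.g. heavily penalizing trips over 45 minutes) can flip the decision, which is exactly the flexibility the slide describes.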



Utility-based Agents

• Advantages
• Handles a wide range of decision-making problems
• Learns from experience and adjusts their decision-making strategies
• Offers a consistent and objective framework for decision-making

• Disadvantages
• Requires an accurate model of the environment, failing to do so results in decision-making errors
• Computationally expensive and requires extensive calculations
• Does not consider moral or ethical considerations
• Difficult for humans to understand and validate



Learning Agents

• An AI learning agent is a software agent that can learn from past experiences and
improve its performance.
• It initially acts with basic knowledge and adapts automatically through machine
learning.
• The learning agent comprises four main components:
• Learning Element: It is responsible for learning and making improvements based on the experiences it gains
from its environment.
• Critic: It provides feedback to the learning element on how the agent's performance compares to a
predefined standard.
• Performance Element: It selects and executes external actions based on the information from the learning
element and the critic.
• Problem Generator: It suggests actions that create new and informative experiences for the learning
element to improve its performance.



Learning Agents
• AI learning agents follow a cycle of observing,
learning, and acting based on feedback. They
interact with their environment, learn from
feedback, and modify their behavior for future
interactions.
• Here’s how the cycle works:
• Observation: The learning agent observes its
environment through sensors or other inputs.
• Learning: The agent analyzes data using algorithms and
statistical models, learning from feedback on its actions
and performance.
• Action: Based on what it has learned, the agent acts in
its environment to decide how to behave.
• Feedback: The agent receives feedback about its
actions and performance through rewards, penalties, or
environmental cues.
• Adaptation: Using feedback, the agent changes its
behavior and decision-making processes, updating its
knowledge and adapting to its environment.



• Advantages
• The agent can convert ideas into action based on AI decisions
• Learning intelligent agents can follow basic commands, like spoken instructions, to perform tasks
• Unlike classic agents that perform predefined actions, learning agents can evolve with time
• AI agents consider utility measurements, making them more realistic

• Disadvantages
• Prone to biased or incorrect decision-making
• High development and maintenance costs
• Requires significant computing resources
• Dependence on large amounts of data
• Lack of human-like intuition and creativity



Learning Autonomous Car

• Performance element:
• The current system for selecting actions and driving.
• The critic observes the world and passes information to the learning element.
• E.g., the car makes a quick left turn across three lanes of traffic. The critic observes the shocking
language from the other drivers and reports it as a bad action.
• The learning element tries to modify the performance element to avoid reproducing this
situation in the future.
• The problem generator identifies certain areas of behavior in need of improvement
and suggests experiments.
• E.g., trying out the brakes on different surfaces in different weather conditions.



Reinforcement Learning

• Reinforcement Learning (RL) is a type of machine learning technique that enables
an agent to learn in an interactive environment by trial and error, using feedback
from its own actions and experiences.
• Unlike supervised learning, where the feedback provided to the agent is the correct set
of actions for performing a task, reinforcement learning uses rewards and
punishments as signals for positive and negative behavior.
• Compared to unsupervised learning, reinforcement learning differs in terms of goals:
• Unsupervised learning: find similarities and differences between data points
• Reinforcement learning: find a suitable action model that maximizes the total cumulative
reward of the agent
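This trial-and-error loop can be sketched with minimal tabular Q-learning in Python. The corridor environment, the reward scheme, and all hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning on a 5-cell corridor: the agent starts at cell 0 and
# receives reward 1 for reaching cell 4. It learns purely from this reward
# signal -- no correct actions are ever shown to it.

import random

random.seed(0)
n_states, actions = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != 4:
        if random.random() < epsilon:
            a = random.choice(actions)                 # explore (trial and error)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])  # exploit current knowledge
        s2 = min(max(s + a, 0), n_states - 1)          # environment transition
        r = 1.0 if s2 == 4 else 0.0                    # reward signal
        # Q-learning update: feedback adjusts the action-value estimate
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(4)]
print(policy)  # the learned greedy policy: always move right toward the reward
```

After training, the greedy policy moves right in every state, since the discounted value of reaching the reward always exceeds the value of moving away from it.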



Hierarchical Agents

• Definition
• How it works
• Advantages and disadvantages
• Real-life examples of AI agents



Reading

• In-class assignment
• Describe the mathematical representation of processing steps of the Reinforcement Learning
algorithm
• Reading
• Supervised learning vs. Unsupervised learning vs. Semi-supervised learning
• Regression vs. Classification
• Reading
• What is deep Q learning?

