
ARTIFICIAL INTELLIGENCE

AI AGENTS

An AI agent, also known as an artificial intelligence agent, is a software program or system that can perceive its environment, reason about it, and take actions to achieve specific goals. It is designed to simulate human-like intelligence and behaviour in order to perform tasks autonomously or assist humans in various domains.

A human agent has sensory organs such as eyes, ears, nose, tongue and skin as its sensors, and other organs such as hands, legs and mouth as its effectors.

A robotic agent uses cameras and infrared range finders as its sensors, and various actuators as its effectors.

A software agent has encoded bit strings as its percepts and actions.
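All three kinds of agent share the same perceive-reason-act cycle, which can be sketched in a few lines of Python. The Environment class, percepts and actions below are illustrative assumptions, not a standard API:

```python
# A minimal sketch of the perceive-reason-act loop of an agent.
# The environment and actions here are made up for illustration.

class Environment:
    """Toy environment: a counter the agent tries to drive to zero."""
    def __init__(self, state=3):
        self.state = state

    def percept(self):
        return self.state          # what the agent's "sensors" observe

    def apply(self, action):
        if action == "decrement":
            self.state -= 1

def choose_action(percept):
    """Reasoning step: map the observed state to an action."""
    return "decrement" if percept > 0 else "noop"

env = Environment()
while env.percept() > 0:           # autonomy: no human in the loop
    env.apply(choose_action(env.percept()))

print(env.state)  # 0
```

The loop terminates once the agent's goal state (a counter of zero) is reached, without any human intervention.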
Characteristics of an AI agent
Here are the key components and characteristics of an AI agent:

Perception: An AI agent has the ability to perceive its environment through sensors
or input data. These sensors can include cameras, microphones, or other types of
sensors that capture relevant information from the environment. Perception allows
the agent to gather data and understand the current state of the world.

Reasoning: Once the agent has gathered data from its environment, it uses
reasoning algorithms to make sense of the information and draw conclusions.
Reasoning involves processing the available data, applying logical rules, and using
algorithms to make decisions or predictions based on the acquired knowledge. This
component allows the agent to analyse the current state and plan its actions
accordingly.
Decision-making: Based on the reasoning process, an AI agent makes decisions on
how to act in order to achieve its goals. Decision-making can involve selecting from a
set of predefined actions or generating new actions based on the agent's capabilities.
The agent evaluates the potential outcomes and chooses the action that is most
likely to lead to a desired outcome.

Learning: AI agents can learn from their experiences and improve their performance
over time. Learning can occur through various techniques such as supervised
learning, unsupervised learning, reinforcement learning, or a combination of these
approaches. By learning from data and feedback, an agent can adapt its behavior,
refine its decision-making processes, and become more effective in achieving its
goals.

Autonomy: AI agents are designed to operate autonomously, meaning they can perform tasks without continuous human intervention.
Interaction: AI agents often interact with humans or other agents to fulfill their
goals. This interaction can be through natural language interfaces, graphical user
interfaces, or other communication channels. AI agents may receive instructions from
humans, provide information, or collaborate with humans in a cooperative or
competitive manner. Interaction enables AI agents to assist and augment human
capabilities in various domains.

Domains of application

AI agents find applications in a wide range of domains, including robotics, virtual assistants, autonomous vehicles, video games, recommendation systems, cyber security, healthcare, finance, and many others. Depending on the domain, AI agents can be specialized to excel in specific tasks and environments.
Agents Terminology

Performance Measure of Agent: It is the criterion that determines how successful an agent is.

Behaviour of Agent: It is the action that an agent performs after any given sequence of percepts.

Percept: It is the agent’s perceptual input at a given instant.

Percept Sequence: It is the history of all that an agent has perceived till date.

Agent Function: It is a map from the percept sequence to an action.
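The agent-function definition above can be illustrated as a literal lookup table from percept sequences to actions. The percepts and actions below are hypothetical examples:

```python
# Illustrative agent function: a lookup table keyed on the whole
# percept *sequence*, not just the latest percept. The percepts and
# actions are made up for this example.

percepts = []  # percept sequence: history of everything perceived so far

table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}

def agent_function(percept):
    percepts.append(percept)
    # fall back to "noop" for sequences not covered by the table
    return table.get(tuple(percepts), "noop")

a1 = agent_function("clean")   # sequence ("clean",)          -> "move"
a2 = agent_function("dirty")   # sequence ("clean", "dirty")  -> "suck"
print(a1, a2)
```

Note that the same percept can yield different actions depending on the history, which is exactly why the agent function is defined over percept sequences.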


TYPES OF AI AGENTS
There are several types of AI agents, each designed to address specific tasks and
environments. Here are some of the commonly recognized types of AI agents:

I. Simple Reflex Agents
II. Model-Based Reflex Agents
III. Goal-Based Agents

Simple Reflex Agents

Simple reflex (or reactive) agents are the simplest form of AI agents. They perceive the current state of the environment and directly map it to actions without any memory or internal representation. Reactive agents do not have the ability to form long-term plans or consider past experiences. They react solely based on the current situation and are typically used in real-time systems where immediate responses are required.
EXAMPLE OF SIMPLE REFLEX AGENT

The vacuum agent is a simple reflex agent because its decision is based only on the current location and whether that location contains dirt.
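A minimal sketch of this vacuum agent, assuming the classic two-square vacuum world with locations "A" and "B" and a (location, status) percept; these conventions are assumptions, not given in the text:

```python
# Simple reflex vacuum agent: the action depends only on the current
# percept (location, dirt status), with no memory of past percepts.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"        # condition: current square is dirty
    elif location == "A":
        return "Right"       # clean square A: move to B
    else:
        return "Left"        # clean square B: move to A

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Each condition maps directly to an action, which is the condition-action rule structure described above.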
LOGICAL VIEW OF SIMPLE REFLEX AGENT

Explanation of the diagram:

i. The actions are taken depending upon the condition. If the condition is true, the
relevant action is taken. If it is false, the other action is taken.

ii. The agent takes input from the environment through sensors, and delivers the
output to the environment through actuators.

iii. The colored rectangles denote the current internal state of the agent’s decision
process.

iv. The ovals represent the background information used in the process.
Model-Based Reflex Agents

Model-based reflex agents extend the functionality of reactive agents by incorporating an internal model or representation of the environment. They maintain an internal state that allows them to take into account the history of the environment and make decisions based on the current state as well as past states. These agents can store information about the world and use it to make more informed decisions.

Example of Model-Based Reflex Agents

Self-driving cars are a great example of a model-based reflex agent. The car is equipped with sensors that detect obstacles, such as the brake lights of cars ahead or pedestrians walking on the sidewalk. As it drives, these sensors feed percepts into the car's memory and internal model of its environment.
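A rough sketch of this idea in Python, with made-up percepts and a one-entry world model standing in for the car's internal state:

```python
# Sketch of a model-based reflex agent: like a reflex agent, but it
# maintains an internal model updated from the percept history.
# The driving percepts and rules are simplified illustrations.

class ModelBasedAgent:
    def __init__(self):
        self.model = {"obstacle_ahead": False}  # internal world model

    def update_model(self, percept):
        # remember what the sensors last reported, even if the
        # obstacle is momentarily out of view
        if percept == "brake_lights_ahead":
            self.model["obstacle_ahead"] = True
        elif percept == "road_clear":
            self.model["obstacle_ahead"] = False

    def act(self, percept):
        self.update_model(percept)
        return "brake" if self.model["obstacle_ahead"] else "cruise"

agent = ModelBasedAgent()
print(agent.act("brake_lights_ahead"))  # brake
print(agent.act("no_new_percept"))      # brake (uses stored state)
```

The second call shows the difference from a simple reflex agent: even with an uninformative percept, the stored internal state still drives the decision.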
Goal-Based Agents

Goal-based agents are designed to achieve specific goals or objectives. They possess
a goal or a set of goals to pursue and take actions that are likely to lead to the
fulfilment of those goals. Goal-based agents often use planning and search
algorithms to generate a sequence of actions that maximize the chances of achieving
the desired outcome.

Goal-based AI agents are an extension of model-based AI agents. They can perform all the tasks that model-based AI agents can perform: they work on the current perception of the environment, collected via sensors, together with the knowledge gained from historical events. Both are required for the correct functioning of a model-based agent and of a goal-based agent, but the additional requirement of this model is the expected output, i.e. the goal.
Example of Goal-Based Agent

Google's Waymo driverless cars are good examples of a goal-based agent when they are programmed with an end destination, or goal, in mind. The car will then "think" and make the right decisions in order to deliver the passengers to where they intended to go.

Another example is a machine that performs surgical operations on humans.
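One way a goal-based agent can plan, as mentioned above, is with a search algorithm. The sketch below uses breadth-first search to produce a sequence of stops toward a destination; the road map and place names are invented for illustration:

```python
# Goal-based planning sketch: breadth-first search over a tiny,
# hypothetical road map to find a route to the goal destination.

from collections import deque

roads = {
    "Home":    ["Mall", "School"],
    "Mall":    ["Home", "Airport"],
    "School":  ["Home"],
    "Airport": ["Mall"],
}

def plan_route(start, goal):
    """Return a list of stops from start to goal, or None if unreachable."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # goal test: have we arrived?
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan_route("Home", "Airport"))  # ['Home', 'Mall', 'Airport']
```

Unlike the reflex agents above, the action sequence is chosen because it leads to the goal state, not because of the current percept alone.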


Characteristics of Agent Environment

The environment in which an agent operates greatly influences its behaviour and
performance. Here are some important characteristics of the agent environment:

I. Observable vs. Partially Observable: An environment can be fully observable, meaning the agent has access to complete and accurate information about the environment's state. Alternatively, it can be partially observable, where the agent has limited or noisy information about the state of the environment. Agents must adapt their perception and decision-making strategies accordingly.

II. Deterministic vs. Stochastic: An environment can be deterministic, where the outcomes of actions are completely predictable, or stochastic, where the outcomes are subject to randomness or uncertainty. Agents need to consider the level of uncertainty in the environment when planning and making decisions.
III. Episodic vs. Sequential: In an episodic environment, the agent's actions do not have a long-term impact, and each episode is independent. In a sequential environment, by contrast, the agent's actions affect future states and outcomes. Agents must have the ability to plan and reason over multiple steps in a sequential environment.

IV. Static vs. Dynamic: A static environment does not change while the agent is
making decisions, whereas a dynamic environment can change unpredictably.
Agents operating in dynamic environments need to constantly monitor and
update their internal models to respond effectively to changes in the
environment.

V. Discrete vs. Continuous: An environment can have discrete states and actions,
where there are distinct, separate choices. Alternatively, it can have continuous
states and actions, with a range of possible values. Agents must adapt their
decision-making algorithms and representation techniques to handle the nature
of the environment.
VI. Competitive vs. Cooperative: The agent's environment can involve competition
or cooperation with other agents. In competitive environments, agents may strive
to outperform each other, while in cooperative environments, agents work
together to achieve common goals. Agents need to consider the strategies and
behaviour of other agents when making decisions.

Understanding these characteristics of the agent environment is crucial for designing appropriate AI agents that can effectively perceive, reason, and act in different domains and scenarios.
MODEL PERFORMANCE MEASURES

Model performance measures are used to evaluate the effectiveness and accuracy of
machine learning models. Here are explanations of several commonly used
performance measures:

Confusion Matrix: A confusion matrix is a tabular representation that summarizes the performance of a classification model. It provides a breakdown of the predicted and actual classes by counting the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). From the confusion matrix, other performance metrics such as accuracy, sensitivity, specificity, and precision can be derived.
Accuracy: Accuracy is the most basic performance measure, representing the overall
correctness of the model's predictions. It is calculated as the ratio of the correctly
predicted instances (TP + TN) to the total number of instances (TP + TN + FP + FN).
However, accuracy alone may not be sufficient if the classes are imbalanced.

Sensitivity (Recall or True Positive Rate): Sensitivity measures the ability of a model
to correctly identify positive instances. It is calculated as TP divided by the sum of TP
and FN. Sensitivity indicates the proportion of actual positive instances that are
correctly classified by the model.

Specificity (True Negative Rate): Specificity measures the ability of a model to correctly identify negative instances. It is calculated as TN divided by the sum of TN and FP. Specificity indicates the proportion of actual negative instances that are correctly classified by the model.
Precision: Precision is the measure of the accuracy of positive predictions made by
the model. It is calculated as TP divided by the sum of TP and FP. Precision represents
the proportion of predicted positive instances that are actually positive. It is useful
when the cost of false positives is high.

F1 Score: The F1 score is the harmonic mean of precision and sensitivity. It provides a
balanced measure of a model's performance by considering both false positives and
false negatives. The F1 score is calculated as 2 * (precision * sensitivity) / (precision +
sensitivity).
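The measures above can be computed directly from the four confusion-matrix counts. This is a plain-Python sketch (libraries such as scikit-learn provide equivalent functions), shown with arbitrary example counts:

```python
# Compute the performance measures defined above from the four
# confusion-matrix counts. The counts passed in are example values.

def classification_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

m = classification_metrics(tp=60, tn=50, fp=30, fn=40)
print(m["accuracy"])   # ~0.611
```

Because the F1 score is a harmonic mean, it stays low whenever either precision or sensitivity is low, which is why it is preferred over plain accuracy on imbalanced classes.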
CONFUSION MATRIX

A confusion matrix is a table that is often used to describe the performance of a classification model. Consider a test of T = 180 samples:

                         GROUND TRUTH / ACTUAL
                         POSITIVE      NEGATIVE
Predicted   POSITIVE        60            30
Predicted   NEGATIVE        40            50

Let’s say the above table shows the results of people who tested either positive or negative in a cancer test.

 Out of the total 100 positive samples, 60 are correctly classified as positive and the remaining 40 are misclassified as negative.

 Out of the total 80 negative samples, 50 are correctly classified as negative while the remaining 30 are wrongly classified as positive.
ACCURACY MEASURE

 True Positive (TP): These are cases that are correctly classified as cancerous (yes), and they do have the disease.

 True Negative (TN): The model correctly classified them as non-cancerous (no), and they don’t have the disease.

 False Positive (FP): The model predicted yes, but they actually have no disease.

 False Negative (FN): The model predicted no, but they actually have the disease.
                          ACTUAL CANCER = YES     ACTUAL CANCER = NO
PREDICTED CANCER = YES    TRUE POSITIVE           FALSE POSITIVE
PREDICTED CANCER = NO     FALSE NEGATIVE          TRUE NEGATIVE
ACCURACY
= (TP+TN)/(TP+TN+FP+FN)

TRUE POSITIVE RATE (SENSITIVITY OR RECALL)
= TP/(TP+FN)

FALSE POSITIVE RATE
= FP/(FP+TN)

TRUE NEGATIVE RATE (SPECIFICITY)
= TN/(TN+FP)

PRECISION (WHAT PROPORTION OF PREDICTED POSITIVES IS TRULY POSITIVE?)
= TP/(TP+FP)

F1 SCORE (A FUNCTION OF PRECISION AND RECALL)
= 2*(PRECISION * RECALL)/(PRECISION+RECALL)
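Plugging the counts from the cancer-test table (TP = 60, FP = 30, FN = 40, TN = 50) into these formulas gives a quick worked example:

```python
# Worked example: the formulas above evaluated on the cancer-test
# confusion matrix (TP = 60, FP = 30, FN = 40, TN = 50).

TP, FP, FN, TN = 60, 30, 40, 50

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 110/180
sensitivity = TP / (TP + FN)                    # 60/100
fpr         = FP / (FP + TN)                    # 30/80
specificity = TN / (TN + FP)                    # 50/80
precision   = TP / (TP + FP)                    # 60/90
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} fpr={fpr:.3f}")
print(f"specificity={specificity:.3f} precision={precision:.3f} f1={f1:.3f}")
# accuracy=0.611 sensitivity=0.600 fpr=0.375
# specificity=0.625 precision=0.667 f1=0.632
```

Note how accuracy (0.611) alone hides the fact that 40 of the 100 actual cancer cases were missed, which sensitivity (0.600) makes explicit.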
