INTELLIGENCE
A.I AGENTS
A human agent has sensory organs such as eyes, ears, nose, tongue, and skin for sensors, and other organs such as hands, legs, and mouth for effectors.
A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
A software agent has encoded bit strings as its percepts and actions.
Characteristics of an AI agent
Here are the key components and characteristics of an AI agent:
Perception: An AI agent has the ability to perceive its environment through sensors
or input data. These sensors can include cameras, microphones, or other types of
sensors that capture relevant information from the environment. Perception allows
the agent to gather data and understand the current state of the world.
Reasoning: Once the agent has gathered data from its environment, it uses
reasoning algorithms to make sense of the information and draw conclusions.
Reasoning involves processing the available data, applying logical rules, and using
algorithms to make decisions or predictions based on the acquired knowledge. This
component allows the agent to analyse the current state and plan its actions
accordingly.
Decision-making: Based on the reasoning process, an AI agent makes decisions on
how to act in order to achieve its goals. Decision-making can involve selecting from a
set of predefined actions or generating new actions based on the agent's capabilities.
The agent evaluates the potential outcomes and chooses the action that is most
likely to lead to a desired outcome.
Learning: AI agents can learn from their experiences and improve their performance
over time. Learning can occur through various techniques such as supervised
learning, unsupervised learning, reinforcement learning, or a combination of these
approaches. By learning from data and feedback, an agent can adapt its behavior,
refine its decision-making processes, and become more effective in achieving its
goals.
Basic terminology
Behaviour of an agent: the action that the agent performs after any given sequence of percepts.
Percept sequence: the complete history of everything the agent has perceived to date.
The vacuum agent is a simple reflex agent because the decision is based only on the
current location, and whether the place contains dirt.
LOGICAL VIEW OF SIMPLE REFLEX AGENT
Explanation of the above diagram:
i. The actions are taken depending upon the condition. If the condition is true, the
relevant action is taken. If it is false, the other action is taken.
ii. The agent takes input from the environment through sensors, and delivers the
output to the environment through actuators.
iii. The colored rectangles denote the current internal state of the agent’s decision
process.
iv. The ovals represent the background information used in the process.
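The vacuum agent described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original slides: the two-location world ("A" and "B") and the action names are assumptions. The key property of a simple reflex agent is visible in the code: the chosen action depends only on the current percept, never on the percept history.

```python
# Simple reflex vacuum agent: the action depends ONLY on the current
# percept (location, dirty?), with no memory of past percepts.
def reflex_vacuum_agent(location, dirty):
    """Condition-action rules for a two-location vacuum world."""
    if dirty:                 # condition: the current square has dirt
        return "Suck"
    elif location == "A":     # clean square A -> move to the other square
        return "Right"
    else:                     # clean square B -> move back
        return "Left"

print(reflex_vacuum_agent("A", True))   # Suck
print(reflex_vacuum_agent("A", False))  # Right
print(reflex_vacuum_agent("B", False))  # Left
```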
Model-Based Reflex Agents
Self-driving cars are a great example of a model-based reflex agent. The car is
equipped with sensors that detect obstacles, such as car brake lights in front of them
or pedestrians walking on the sidewalk. As it drives, these sensors feed percepts into
the car's memory and internal model of its environment.
Goal-Based Agents
Goal-based agents are designed to achieve specific goals or objectives. They possess
a goal or a set of goals to pursue and take actions that are likely to lead to the
fulfilment of those goals. Goal-based agents often use planning and search
algorithms to generate a sequence of actions that maximize the chances of achieving
the desired outcome.
Google's Waymo driverless cars are good examples of a goal-based agent when they
are programmed with an end destination, or goal, in mind. The car will then ''think''
and make the right decisions in order to deliver the passenger where they intended
to go.
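The planning-and-search idea behind goal-based agents can be illustrated with a tiny breadth-first search over a road map. The map and node names below are made up for the example; the point is that the agent does not react to the current percept alone, but searches for a whole sequence of actions leading from its current state to the goal.

```python
from collections import deque

# Toy road map; the edges are the moves available in each state.
ROADS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def plan_route(start, goal):
    """Breadth-first search: return a shortest sequence of states
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan_route("A", "E"))  # ['A', 'C', 'E']
```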
The environment in which an agent operates greatly influences its behaviour and
performance. Here are some important characteristics of the agent environment:
IV. Static vs. Dynamic: A static environment does not change while the agent is
making decisions, whereas a dynamic environment can change unpredictably.
Agents operating in dynamic environments need to constantly monitor and
update their internal models to respond effectively to changes in the
environment.
V. Discrete vs. Continuous: An environment can have discrete states and actions,
where there are distinct, separate choices. Alternatively, it can have continuous
states and actions, with a range of possible values. Agents must adapt their
decision-making algorithms and representation techniques to handle the nature
of the environment.
VI. Competitive vs. Cooperative: The agent's environment can involve competition
or cooperation with other agents. In competitive environments, agents may strive
to outperform each other, while in cooperative environments, agents work
together to achieve common goals. Agents need to consider the strategies and
behaviour of other agents when making decisions.
Model performance measures are used to evaluate the effectiveness and accuracy of
machine learning models. Here are explanations of several commonly used
performance measures:
Sensitivity (Recall or True Positive Rate): Sensitivity measures the ability of a model
to correctly identify positive instances. It is calculated as TP divided by the sum of TP
and FN. Sensitivity indicates the proportion of actual positive instances that are
correctly classified by the model.
F1 Score: The F1 score is the harmonic mean of precision and sensitivity. It provides a
balanced measure of a model's performance by considering both false positives and
false negatives. The F1 score is calculated as 2 * (precision * sensitivity) / (precision +
sensitivity).
Accuracy: Accuracy is the most basic performance measure, representing the overall
correctness of the model's predictions. It is calculated as the ratio of the correctly
predicted instances (TP + TN) to the total number of instances (TP + TN + FP + FN).
However, accuracy alone may not be sufficient if the classes are imbalanced.
MODEL PERFORMANCE MEASURES

                          GROUND TRUTH / ACTUAL
                          POSITIVE     NEGATIVE
PREDICTED  POSITIVE          60           30
PREDICTED  NEGATIVE          40           50

Total T = 180
Suppose the table above shows the results of a cancer test for 180 people, each of whom tested either positive or negative.
Out of the 100 actually positive samples, 60 are correctly classified as positive and the remaining 40 are misclassified as negative.
Out of the 80 actually negative samples, 50 are correctly classified as negative while the remaining 30 are wrongly classified as positive.
ACCURACY MEASURE
True Positive (TP): cases correctly classified as cancerous (yes), and the patients do have the disease.
True Negative (TN): cases correctly classified as non-cancerous (no), and the patients do not have the disease.
False Positive (FP): the model predicted yes, but the patients do not have the disease.
False Negative (FN): the model predicted no, but the patients actually do have the disease.
                          ACTUAL CANCER = YES    ACTUAL CANCER = NO
PREDICTED CANCER = YES    TRUE POSITIVE          FALSE POSITIVE
PREDICTED CANCER = NO     FALSE NEGATIVE         TRUE NEGATIVE
ACCURACY
= (TP + TN) / (TP + TN + FP + FN)
F1 SCORE
= 2 * (PRECISION * RECALL) / (PRECISION + RECALL)
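The formulas above can be checked against the worked cancer-test example (TP = 60, FN = 40, FP = 30, TN = 50, T = 180). A short Python sketch:

```python
# Performance measures for the cancer-test confusion matrix:
# TP=60, FN=40, FP=30, TN=50 (total 180 samples).
TP, FN, FP, TN = 60, 40, 30, 50

accuracy = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)          # recall / true positive rate
precision = TP / (TP + FP)
f1 = 2 * (precision * sensitivity) / (precision + sensitivity)

print(f"Accuracy:    {accuracy:.4f}")     # 110/180 ~ 0.6111
print(f"Sensitivity: {sensitivity:.4f}")  # 60/100 = 0.6000
print(f"Precision:   {precision:.4f}")    # 60/90  ~ 0.6667
print(f"F1 score:    {f1:.4f}")           # ~ 0.6316
```

Note how the accuracy (about 61%) alone hides the fact that 40% of the actual cancer cases are missed, which is exactly why sensitivity and F1 are reported alongside it.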
THANK YOU