A boom of AI (1980-1987)
1. Year 1980: After the AI winter, AI came back with expert systems: programs that emulate the decision-making ability of a human expert.
2. Also in 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
Artificial intelligence is defined as the study of rational agents. A rational agent can be anything that makes decisions, such as a person, firm, machine, or software.
Examples of agents:
1. A software agent has keystrokes, file contents, and received network packets acting as sensors, and screen output, files, and sent network packets acting as actuators.
2. A human agent has eyes, ears, and other organs acting as sensors, and hands, legs, mouth, and other body parts acting as actuators.
3. A robotic agent has cameras and infrared range finders acting as sensors, and various motors acting as actuators.
C. Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
Goal-based agents
These agents make decisions based on how far they currently are from their goal (a description of desirable situations).
Utility-based agents
These agents choose actions using a utility function that measures how desirable each state is, which lets them compare different ways of reaching a goal.
Learning Agents:
A learning agent in AI is an agent that can learn from its past experiences. It starts acting with basic knowledge and then adapts automatically through learning.
Following are the four main rules for an AI agent:
1. Rule 1: An AI agent must be able to perceive the environment.
2. Rule 2: The observations must be used to make decisions.
3. Rule 3: The decisions should result in an action.
4. Rule 4: The action taken by the agent must be a rational action.
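The four rules above can be sketched as a tiny perceive-decide-act loop. The vacuum-world percepts and actions below are hypothetical illustrations, not part of the original text.

```python
# A minimal sketch of the four rules as an agent loop; the environment,
# percepts, and action names are hypothetical illustrations.

def simple_reflex_agent(percept):
    """Rules 2 & 3: map the observation directly to a decision/action."""
    location, status = percept
    if status == "dirty":
        return "suck"          # Rule 4: the rational action for a dirty square
    return "move_right" if location == "A" else "move_left"

# Rule 1: perceive the environment (here, a fixed sequence of percepts).
percepts = [("A", "dirty"), ("A", "clean"), ("B", "dirty")]
actions = [simple_reflex_agent(p) for p in percepts]
print(actions)  # ['suck', 'move_right', 'suck']
```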
D. Structure of an AI Agent
Agent = Architecture + Agent program
PEAS Representation
PEAS is a model that describes the task setting in which an AI agent operates:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
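As an illustration, the PEAS components can be written down for a hypothetical automated taxi (a common textbook example); the entries below are illustrative, not exhaustive.

```python
# PEAS description of a hypothetical automated taxi; all entries are
# example values, not a definitive specification.
peas_taxi = {
    "Performance": ["safety", "speed", "legal driving", "comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "horn"],
    "Sensors":     ["cameras", "GPS", "speedometer", "odometer"],
}
for component, examples in peas_taxi.items():
    print(f"{component[0]}: {', '.join(examples)}")
```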
E. Agent Environment in AI
An environment is everything in the world that surrounds the agent but is not part of the agent itself. It can be described as the situation in which the agent operates.
Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view
of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
Fully observable vs Partially Observable:
1. If an agent's sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise, it is partially observable.
2. If an agent has no sensors at all, the environment is called unobservable.
Deterministic vs Stochastic:
1. If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
2. A stochastic environment is random in nature and cannot be determined completely
by an agent.
Episodic vs Sequential:
1. In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
2. However, in a sequential environment, an agent requires memory of past actions to determine the next best action.
Single-agent vs Multi-agent
1. If only one agent is involved in an environment and operates by itself, the environment is called a single-agent environment.
2. However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
Static vs Dynamic:
1. If the environment can change while an agent is deliberating, it is called a dynamic environment; otherwise, it is a static environment.
Discrete vs Continuous:
1. If an environment has a finite number of percepts and actions that can be performed within it, it is called a discrete environment; otherwise, it is a continuous environment.
Known vs Unknown
1. Known and unknown are not actually features of the environment itself; they describe the agent's state of knowledge about how the environment works.
Accessible vs Inaccessible
1. If an agent can obtain complete and accurate information about the environment's state, the environment is called accessible; otherwise, it is inaccessible.
F. Problem formulation
Problem formulation is one of the core steps of problem solving: it decides what actions should be taken to achieve the formulated goal. In AI, this step is carried out by a software agent using a problem description with the following components.
Components used to formulate the problem:
1. Initial state: the state from which the AI agent starts its search toward the specified goal.
2. Actions: the set of possible actions available to the agent in a given state.
3. Transition model: a description of what each action does; it returns the state that results from performing an action in a given state.
4. Goal test: determines whether a given state is the goal state; when the goal is reached, the search stops and the cost of achieving the goal is determined.
5. Path cost: assigns a numeric cost to each path, typically the sum of the costs of the individual actions along it; in practice this may account for hardware, software, and human effort.
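The five components above can be sketched for a toy route-finding problem. The state space (cities and step costs) is an assumed example, not taken from the text.

```python
# Sketch of the five problem-formulation components for a hypothetical
# route-finding problem; cities and step costs are assumed examples.
ROADS = {                       # encodes both transitions and step costs:
    "S": {"A": 2, "D": 1},      # ROADS[state][action] = step cost, where
    "A": {"B": 3},              # an action means "drive to <city>"
    "D": {"E": 4},
    "B": {"G": 2},
    "E": {"G": 1},
}

INITIAL_STATE = "S"             # 1. initial state

def actions(state):             # 2. actions available in a state
    return list(ROADS.get(state, {}))

def result(state, action):      # 3. transition model
    return action               # driving to a city puts us in that city

def goal_test(state):           # 4. goal test
    return state == "G"

def path_cost(path):            # 5. path cost: sum of step costs
    return sum(ROADS[u][v] for u, v in zip(path, path[1:]))

path = ["S", "D", "E", "G"]
print(goal_test(path[-1]), path_cost(path))  # True 6
```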
G. Tree
A tree is a non-linear data structure that represents a hierarchy. It is a collection of nodes linked together to form that hierarchy.
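A minimal sketch of such a hierarchy, assuming a simple node class with child links:

```python
# A minimal tree sketch: each node stores a value and links to its child
# nodes, which together form the hierarchy described above.
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []

    def add_child(self, node):
        self.children.append(node)

root = TreeNode("root")
left, right = TreeNode("left"), TreeNode("right")
root.add_child(left)
root.add_child(right)
print([child.value for child in root.children])  # ['left', 'right']
```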
H. Graph
A graph, like a tree, is a collection of objects or entities known as nodes that are connected to each other through a set of edges. A tree follows rules that determine the relationships between nodes, whereas a graph imposes no such rules: its edges can connect the nodes in any possible way.
Mathematically, a graph can be defined as an ordered pair of a set of vertices and a set of edges, where the vertices are denoted 'V' and the edges 'E':
G = (V, E)
Directed graph: a graph in which every edge has a direction.
1. A directed graph is a finite set of vertices together with a finite set of edges. Both sets may be empty, in which case it is called the empty graph.
2. Each edge is associated with two vertices, called its source and target vertices.
3. We say that the edge connects its source to its target.
4. The order of the two connected vertices is important.
Undirected graph: a graph whose edges have no direction.
1. An undirected graph is a set of nodes and a set of links between the nodes.
2. Each node is called a vertex, each link is called an edge, and each edge connects two
vertices.
3. The order of the two connected vertices is unimportant.
4. An undirected graph is a finite set of vertices together with a finite set of edges. Both
sets might be empty, which is called the empty graph.
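Both kinds of graph can be sketched with an adjacency list; the edge list below is an assumed example, not taken from the text.

```python
# Adjacency-list sketch of G = (V, E); the edge list is an assumed example.
from collections import defaultdict

def build_graph(edges, directed=False):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)       # directed: order of (u, v) matters
        if not directed:       # undirected: the edge links both ways
            adj[v].append(u)
    return dict(adj)

edges = [("A", "B"), ("B", "C")]
print(build_graph(edges, directed=True))   # {'A': ['B'], 'B': ['C']}
print(build_graph(edges, directed=False))  # {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
```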
Artificial Intelligence
The dictionary.com definition of intelligence is the capacity for learning, reasoning, understanding, and similar forms of mental activity.
Across the classic four definitions of AI, one dimension contrasts human-like behaviour with rational behaviour: one column groups the human-like definitions, and the other groups the rational ones.
It is the study of building agents that act rationally. Most of the time, these agents perform
some kind of search algorithm in the background in order to achieve their tasks.
State space representation
Its structure corresponds to the structure of problem solving in two important ways:
1. It allows for a formal definition of a problem as per the need to convert some given
situation into some desired situation using a set of permissible operations.
2. It permits the problem to be solved with the help of known techniques and control strategies, moving through the problem space until the goal state is found.
Greedy Search:
In greedy search, we expand the node that appears closest to the goal node, where "closeness" is estimated by a heuristic h(x).
Question. Find the path from S to G using greedy search. The heuristic value h of each node is shown below the node's name.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the
lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with a
lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is shown in the
search tree below, in blue.
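The traversal described in the solution can be sketched as greedy best-first search. The graph edges and heuristic values below are reconstructed from the worked example; the value for S is an assumed placeholder, since S is expanded first regardless.

```python
# Greedy best-first search: always expand the frontier node with the
# lowest heuristic h(x). Edges and h-values are reconstructed from the
# worked example (S -> D -> E -> G); h(S) is an assumed placeholder.
import heapq

GRAPH = {"S": ["A", "D"], "A": [], "D": ["B", "E"], "B": [], "E": ["G"]}
H = {"S": 6, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}

def greedy_search(start, goal):
    frontier = [(H[start], start, [start])]   # (h, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in GRAPH.get(node, []):
            heapq.heappush(frontier, (H[neighbor], neighbor, path + [neighbor]))
    return None

print(greedy_search("S", "G"))  # ['S', 'D', 'E', 'G']
```

Note that greedy search ignores the cost already incurred along the path; it is fast but not guaranteed to find the cheapest path.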