
What is AI?

AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

1. Acting Humanly (The Turing Test): A judge has to find out which of two hidden agents is a human and which is a machine. To pass the total Turing Test a computer also needs computer vision to perceive objects and robotics to move them.
2. Thinking Humanly: How does a human think? Two ways to determine whether a given program thinks like a human: through introspection and through psychological experiments. After defining a theory, we can express it as a computer program and check whether it behaves like a human.
3. Thinking Rationally: Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts; they may or may not have proceeded to the idea of mechanization. Not all intelligent behavior is mediated by logical deliberation. What is the purpose of thinking? What thoughts should I have?
4. Acting Rationally: Acting rationally = acting so as to achieve one's goals, given one's beliefs. Difference between acting and thinking rationally: correct inference is not all of rationality. An agent also needs the ability to represent knowledge, reason with it, and reach good decisions in a wide variety of situations.

PEAS stands for Performance measure, Environment, Actuators, Sensors. Agent = vacuum cleaner. Performance measure = cleanness, efficiency, battery life, security. Environment = roads, room, table, wood floor. Actuators = wheels, brushes, vacuum extractor, mirror. Sensors = camera, dirt-detection sensor, cliff sensor, bump sensor.

Properties of Environments:
• Deterministic / Non-deterministic: an environment is deterministic if its next state is completely determined by its current state and the action of the agent. In an accessible, deterministic environment the agent need not deal with uncertainty.
• Episodic / Sequential: in an episodic environment subsequent episodes do not depend on the actions that occurred in previous episodes; such environments do not require the agent to plan ahead.
• Static / Dynamic.
• Discrete / Continuous.

Intelligent Agents: an intelligent agent is an entity that perceives its environment via sensors and acts rationally upon that environment with its effectors. Examples: human, robotic, and software agents. An agent gets percepts one at a time and maps this percept sequence to actions. Automated taxi driving system - Percepts: video, sonar, speedometer, odometer, engine sensors, keyboard input, microphone, GPS. Actions: steer, accelerate, brake, horn, speak/display. Goals: maintain safety, reach destination, maximize profits (fuel, tire wear), obey laws, provide passenger comfort. Environment: urban streets, freeways, traffic, pedestrians, weather, customers.

Rational Agents: an agent is an entity that perceives and acts. A rational agent is one that does the right thing; that is, it acts so as to be successful. A performance measure determines how successful an agent is. For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance. Computational limitations make perfect rationality unachievable, so we design the best program for the given machine resources; this class is about constructing rational agents. An ideal rational agent should, for each possible percept sequence, do whatever action will maximize its performance measure, based on (1) the percept sequence and (2) its built-in and acquired knowledge.

Table-Driven Agent: function TABLE-DRIVEN-AGENT(percept) returns action. Static: percepts, a sequence, initially empty; table, a table indexed by percept sequences, initially fully specified. Append percept to the end of percepts; action = LOOKUP(percepts, table); return action. Limitations: the table needed for something as simple as an agent that can play chess would have about 35^100 entries; it would take the designer far too long to build the table; and the agent has no autonomy.

Model-Based Reflex Agent: encode an "internal state" of the world to remember the past as contained in earlier percepts. This is needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept. Use internal states (or models) to deal with a world that is only partially observable. Draw diagram.

Goal-Based Agent: • Chooses actions so as to achieve a (given or computed) goal. • A goal is a description of a desirable situation. • Keeping track of the current state is often not enough - we need to add goals to decide which situations are good. Knowing the world and responding appropriately is not the whole story: anticipating the future involves planning and search. Example: a pathfinding robot.

Utility-Based Agent: • When there are multiple possible alternatives, how do we decide which one is best? • A goal specifies only a crude distinction between a happy and an unhappy state; often we need a more general performance measure that describes the "degree of happiness". • A utility function U: States -> Reals indicates a measure of success or happiness in a given state; it maps a state onto a real number. This improves on the goal-based agent by yielding high-quality behavior in most environments. Example: an automated energy-management system.

How does a utility-based agent differ from a goal-based agent? A utility-based agent focuses on maximizing overall satisfaction by evaluating options based on their utility or value; it considers multiple factors and aims for the best outcome, even if that does not directly align with a specific goal (example: an automated energy-management system). In contrast, a goal-based agent specifically aims to achieve a predefined objective and takes actions that directly contribute to reaching that goal (example: a pathfinding robot).

Limitations of intelligent agents: data availability. Artificial intelligence is highly dependent on past data, but the available data may be soiled or of poor quality, which poses a challenge to the company. Feeding AI wrong or poor-quality data can undermine its efficiency and become a hurdle in the success of a company.

DFS: depth-first search starts from the root node and follows a path until it reaches the end of that path. It always expands the deepest node in the current frontier of the search tree; the search proceeds immediately to the deepest level of the tree, where the nodes have no successors. 1. DFS is faster than BFS. 2. DFS is better when the target is far from the source. 3. DFS uses a stack. DFS is not optimal: the number of steps in reaching the solution, or the cost spent in reaching it, is high.

BFS: breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. BFS is optimal if the path cost is a non-decreasing function of the depth of the node. 1. BFS is slower than DFS. 2. BFS uses a queue to find the shortest path. 3. BFS is better when the target is closer to the source.

Difference between informed and uninformed search: Informed search is a technique that has additional information estimating the distance from the current state to the goal; it helps the search proceed efficiently. Examples of informed search include greedy search and A* search. It may or may not be complete. Uninformed search is a technique that has no additional information about the distance from the current state to the goal; the only information available is that given in the problem definition. Examples of uninformed search include depth-first search (DFS) and breadth-first search (BFS). It is always complete.

Brute force / Blind search: has knowledge only about already explored nodes; no knowledge about how far a node is from the goal state. Heuristic search: estimates the "distance" to the goal state, guides the search process toward the goal state, and prefers states (nodes) that lead close to, and not away from, the goal state.

HEURISTIC FUNCTION: the choice of f, the evaluation function, determines the search strategy. Best-first algorithms include as a component of f a heuristic function, denoted h(n): h(n) = the estimated cost from node n to the goal.

Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. It evaluates nodes using just the heuristic function, that is, f(n) = h(n). It is "greedy" because at each step it tries to get as close to the goal as it can. Its search cost is minimal, but it is not optimal.

A* SEARCH: the most widely known form of best-first search is called A* search. It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal: f(n) = g(n) + h(n). A* search is both complete and optimal.

OPTIMALITY OF A*: the tree-search version of A* is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent. Suppose n' is a successor of n; then g(n') = g(n) + c(n, a, n') for some action a, and we have f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n). The next step is to prove that whenever A* selects a node n for expansion, the optimal path to that node has been found.

Simulation is a model whereas AI is trying to
impute a model. With simulation you can build a model first and then validate it. Very often AI is trying to figure out something from nothing, whereas with simulation you build a model that you can see, test, and validate, and then you use that model to find information within it.
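The best-first strategies above can be illustrated with a minimal sketch of A*, expanding the node with the lowest f(n) = g(n) + h(n). The four-node weighted graph and heuristic table here are made up for the example; note that setting h to zero everywhere turns this into uniform-cost search, and ranking by h(n) alone gives greedy best-first search.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: repeatedly expand the frontier node with the
    lowest f(n) = g(n) + h(n).

    neighbors(n) yields (successor, step_cost) pairs;
    h(n) is the heuristic estimate from n to the goal.
    Returns (path, cost), or (None, inf) if the goal is unreachable.
    """
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0}                         # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Hypothetical weighted graph and an admissible, consistent heuristic.
graph = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)],
         "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}.get

path, cost = a_star("A", "D", lambda n: graph[n], h)
# path == ["A", "B", "C", "D"], cost == 4
```

Because the heuristic is consistent, the first time a node is popped its g value is optimal, which is exactly the graph-search optimality condition stated above.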
Systematicity: the search algorithms that we have seen so far are designed to explore search spaces systematically. This systematicity is achieved by keeping one or more paths in memory and by recording, at each point along the path, which alternatives have been explored and which have not.

Local Search: the algorithm cares only about the final state, not the path taken to reach it. Local search algorithms operate using a single current state (rather than multiple paths) and generally move only to neighbors of that state. Although local search algorithms are not systematic, they have two key advantages: * they use very little memory, usually a constant amount; * they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.

Evaluating Search Strategies:
* Completeness: does the strategy guarantee finding a solution whenever one exists?
* Time complexity: how long (worst or average case) does it take to find a solution? Usually measured in terms of the number of nodes expanded.
* Space complexity: how much space does the algorithm use? Usually measured in terms of the maximum size that the "OPEN" list reaches during the search.
* Optimality/Admissibility: if a solution is found, is it guaranteed to be an optimal one, for example the one with minimum cost?

WELL-DEFINED PROBLEMS: a problem can be defined formally by five components: 1. The initial state that the agent starts in; for example, the initial state for our agent in Romania might be described as In(Arad). 2. A description of the possible actions available to the agent: given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s, and we say that each of these actions is applicable in s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}. 3. A description of what each action does (the transition model). 4. The goal test, which determines whether a given state is a goal state. 5. A path cost function that assigns a numeric cost to each path.

Informed search strategy: informed search is a type of search algorithm that uses domain-specific knowledge to guide its search through a problem space - knowledge beyond the definition of the problem itself - and can therefore find solutions more efficiently than an uninformed strategy can.

Crossover is a genetic operator that combines (mates) two chromosomes (parents) to produce a new chromosome (offspring). The idea behind crossover is that the new chromosome may be better than both of the parents if it takes the best characteristics from each of them. Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of chromosomes to the next. The mutation operator simply inverts the value of the chosen gene, i.e. 0 goes to 1 and 1 goes to 0; this mutation operator can only be used for binary genes.

Evolutionary Algorithms worked example: maximize the function f(x) = x^2 with x in the integer interval [0, 31], i.e. x = 0, 1, ..., 30, 31.
1. Encoding of chromosomes: use a binary representation for integers; 5 bits represent integers up to 31.
2. Randomly generate a set of solutions: 01101, 11000, 01000, 10011.
3. Evaluate the fitness of each member of the population:
(a) Decode each individual into an integer (its phenotype): 01101 -> 13; 11000 -> 24; 01000 -> 8; 10011 -> 19.
(b) Evaluate the fitness according to f(x) = x^2: 13 -> 169; 24 -> 576; 8 -> 64; 19 -> 361.
(c) Expected count = N * Prob_i, where N is the number of individuals in the population (the population size), here N = 4. Draw the first table:

String No  Initial Population  x value  Fitness f(x)=x^2  Prob_i  Expected count N*Prob_i
1          01101               13       169               0.14    0.58
2          11000               24       576               0.49    1.97
3          01000               8        64                0.06    0.22
4          10011               19       361               0.31    1.23
Total (sum)                             1170              1.00    4.00
Average                                 292.5             0.25    1.00
Max                                     576               0.49    1.97

4. Selection: we divide the range into four bins, sized according to the relative fitness of the solutions they represent. Draw the second table (string, Prob_i, associated bin). By generating 4 uniform (0, 1) random values and seeing which bin they fall into, we pick the four strings that will form the basis for the next generation. Draw the third table (random value, bin it falls into, chosen string).
5. Crossover: for the first pair of strings, 01101 and 11000, we randomly select the crossover point (here after bit 4): 0110|1 and 1100|0 produce 01100 and 11001. For the second pair, 11000 and 10011 (crossover after bit 2): 11|000 and 10|011 produce 11011 and 10000.
6. Go back and re-evaluate the fitness of the population (the new generation); draw the final table.
7. Results: 1. The initial population was 01101, 11000, 01000, 10011; after one cycle the new population, which acts as the next initial population, is 01100, 11001, 11011, 10000. 2. The total fitness has gone from 1170 to 1754 in a single generation. 3. The algorithm has already come up with the string 11011 (i.e. x = 27) as a possible solution.
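The one-generation walkthrough above can be reproduced in a short sketch. Note one simplification: in the real algorithm the selection step is random (roulette wheel), so here the selected parents and the crossover points are hard-coded to match the worked numbers.

```python
def decode(bits):
    """5-bit binary chromosome -> integer phenotype."""
    return int(bits, 2)

def fitness(bits):
    """Objective to maximize: f(x) = x**2."""
    return decode(bits) ** 2

def crossover(p1, p2, point):
    """One-point crossover: swap the tails after the given point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, pos):
    """Invert one binary gene: 0 -> 1, 1 -> 0."""
    flipped = "1" if bits[pos] == "0" else "0"
    return bits[:pos] + flipped + bits[pos + 1:]

pop = ["01101", "11000", "01000", "10011"]        # x = 13, 24, 8, 19
total = sum(fitness(s) for s in pop)              # 169 + 576 + 64 + 361 = 1170

# Roulette-wheel selection happened to keep 01101, 11000, 11000, 10011;
# crossover after bit 4 for the first pair, after bit 2 for the second.
c1, c2 = crossover("01101", "11000", 4)           # -> 01100, 11001
c3, c4 = crossover("11000", "10011", 2)           # -> 11011, 10000
new_pop = [c1, c2, c3, c4]
new_total = sum(fitness(s) for s in new_pop)      # 144 + 625 + 729 + 256 = 1754
```

Re-evaluating the offspring confirms the jump in total fitness from 1170 to 1754, with 11011 (x = 27, f = 729) as the current best candidate.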
