
Chapter 2

What is an (Intelligent) Agent?
• An over-used, over-loaded, and misused term.
• Anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through its
effectors to maximize progress towards its goals.
• PAGE (Percepts, Actions, Goals, Environment)
• Task-specific & specialized: well-defined goals and
environment
• The notion of an agent is meant to be a tool for analyzing systems
• It is not different hardware or new programming languages
Intelligent Agents and Artificial Intelligence
• Example: Human mind as a network of thousands or millions of agents
working in parallel. To produce real artificial intelligence, this
school holds, we should build computer systems that also contain many
agents and systems for arbitrating among the agents' competing results.
• Distributed decision-making and control
• Challenges:
  • Action selection: what next action to choose
  • Conflict resolution

Conflict Resolution by Action Selection Agents
• Override: CAA overrides LKA (e.g., collision avoidance overrides lane keeping)
• Arbitrate: if Obstacle is Close then CAA, else LKA
• Compromise: choose an action that satisfies both agents
• Any combination of the above
• Challenges: doing the right thing

The Right Thing = The Rational Action
• Rational Action: the action that maximizes the expected value of the
performance measure given the percept sequence to date
• Rational = Best? Yes, to the best of its knowledge
• Rational = Optimal? Yes, to the best of its abilities (incl. its constraints)
• Rational ≠ Omniscience
• Rational ≠ Clairvoyant
• Rational ≠ Successful

Agent Types
We can split agent research into two main strands:
• Distributed Artificial Intelligence (DAI) – Multi-Agent Systems (MAS)
(1980 – 1990)
• Much broader notion of "agent" (1990s – present): interface, reactive,
mobile, information agents

Rational Agents
• Performance measure: a subjective measure to characterize how
successful an agent is (e.g., speed, power usage, accuracy, money, etc.)
• (degree of) Autonomy: to what extent is the agent able to make
decisions and take actions on its own?

Behavior and performance of IAs
• Perception (sequence) to Action Mapping: f : P* → A
• Ideal mapping: specifies which actions an agent ought to take at any
point in time
• Description: Look-Up-Table, Closed Form, etc.
Look up table
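The table graphic from this slide is not preserved in this
transcription. As a stand-in, here is a minimal Python sketch of a
look-up-table encoding of f : P* → A for the windshield-wiper agent
described on the next slide; the particular table entries are
illustrative assumptions, not from the original deck:

# Look-up-table encoding of the percept-to-action mapping.
# The entries are assumptions for illustration.
WIPER_TABLE = {
    (False, False): "Off",     # not raining, windshield clean
    (False, True):  "Slow",    # dirty only: wipe slowly
    (True,  False): "Medium",  # raining
    (True,  True):  "Fast",    # raining and dirty
}

def wiper_action(raining, dirty):
    # f : P -> A as a direct table lookup (no memory of past percepts)
    return WIPER_TABLE[(raining, dirty)]

print(wiper_action(True, False))  # -> Medium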
A Windshield Wiper Agent
• Goals: Keep windshields clean & maintain visibility
• Percepts: Raining, Dirty
• Sensors: Camera (moisture sensor)
• Effectors: Wipers (left, right, back)
• Actions: Off, Slow, Medium, Fast
• Environment: Inner city, freeways, highways, weather

Closed form
• Output (degree of rotation) = F(distance)
• E.g., F(d) = 10/d (distance cannot be less than 1/10)
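As a sketch, the same closed-form mapping in Python, clamping the input
per the slide's constraint that distance cannot be less than 1/10:

def rotation_degrees(distance):
    # Closed-form mapping: Output = F(distance) = 10 / distance
    d = max(distance, 0.1)  # distance cannot be less than 1/10
    return 10.0 / d

print(rotation_degrees(2.0))   # -> 5.0
print(rotation_degrees(0.01))  # clamped to 0.1 -> 100.0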
Interacting Agents

How is an Agent different from other software?
• Agents are autonomous, that is, they act on behalf of the user
• Agents contain some level of intelligence, from fixed rules to
learning engines that allow them to adapt to changes in the
environment
• Agents don't only act reactively, but sometimes also
proactively

• Agents have social ability, that is, they communicate with the user,
the system, and other agents as required
• Agents may also cooperate with other agents to carry out more complex
tasks than they themselves can handle
• Agents may migrate from one system to another to access remote
resources or even to meet other agents

Environment Types
• Characteristics:
  • Accessible vs. inaccessible: sensors give access to the complete
state of the environment
  • Deterministic vs. nondeterministic: the next state can be determined
based on the current state and the action
  • Episodic vs. nonepisodic (sequential): an episode is each
percept-and-action pair; the quality of an action does not depend on
the previous episode
  • Hostile vs. friendly
  • Static vs. dynamic: dynamic if the environment changes during
deliberation
  • Discrete vs. continuous: e.g., chess vs. driving

Structure of Intelligent Agents
• Agent = architecture + program
• Agent program: the implementation of f : P* → A, the agent's
perception-action mapping (a Python transcription follows these slides)

function Skeleton-Agent(Percept) returns Action
  memory ← UpdateMemory(memory, Percept)
  Action ← ChooseBestAction(memory)
  memory ← UpdateMemory(memory, Action)
  return Action

• Architecture: a device that can execute the agent program (e.g.,
general-purpose computer, specialized device, beobot, etc.)

Using a look-up-table to encode f : P* → A

Agent types
• Reflex agents
• Reflex agents with internal states
• Goal-based agents
• Utility-based agents

Reflex agents
• Reactive: no memory
• Act by stimulus-response to the current state of the environment

Reflex agents with internal states
• W/o the previous state, may not be able to make a decision
• E.g., brake lights at night

Goal-based agents
• Goal information needed to make a decision

Utility-based agents
• How well can the goal be achieved (degree of happiness)?
• What to do if there are conflicting goals? E.g., speed and safety
• Which goal should be selected if several can be achieved?

Reactive agents
• Reactive agents do not have internal symbolic models
• Each reactive agent is simple and interacts with others in a basic way
• Complex patterns of behavior emerge from their interaction
• Benefits: robustness, fast response time
• Challenges: scalability; how intelligent are they, and how do you
debug them?
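A minimal Python transcription of the Skeleton-Agent pseudocode above.
The list-based memory and the placeholder action chooser are
illustrative assumptions; a real agent would supply its own
UpdateMemory and ChooseBestAction logic:

class SkeletonAgent:
    """Direct transcription of the Skeleton-Agent pseudocode."""

    def __init__(self):
        self.memory = []  # internal state; a plain list is an assumption

    def act(self, percept):
        self.memory.append(percept)         # memory <- UpdateMemory(memory, Percept)
        action = self.choose_best_action()  # Action <- ChooseBestAction(memory)
        self.memory.append(action)          # memory <- UpdateMemory(memory, Action)
        return action

    def choose_best_action(self):
        # Placeholder: this is where f : P* -> A would be implemented,
        # e.g., as a look-up table over the percept history.
        return "noop"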

Mobile agents
• Programs that can migrate from one machine to another
• Execute in a platform-independent execution environment
• Require an agent execution environment (places)
• Mobility is neither a necessary nor a sufficient condition for
agenthood
• Practical but non-functional advantages:
  • Reduced communication cost (e.g., from a PDA)
  • Asynchronous computing (when you are not connected)
• Two types:
  • One-hop mobile agents (migrate to one other place)
  • Multi-hop mobile agents (roam the network from place to place)
• Applications:
  • Distributed information retrieval
  • Telecommunication network routing

Information agents
• Manage the explosive growth of information
• Manipulate or collate information from many distributed sources
• Information agents can be mobile or static
• Examples:
  • BargainFinder comparison shops among Internet stores for CDs
  • FIDO the Shopping Doggie (out of service)
  • Internet Softbot infers which internet facilities (finger, ftp,
gopher) to use and when from high-level search requests
• Challenge: ontologies for annotating Web pages (e.g., SHOE)

Summary
• Intelligent Agents:
  • Anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through its effectors to
maximize progress towards its goals
  • PAGE (Percepts, Actions, Goals, Environment)
  • Described as a Perception (sequence) to Action Mapping: f : P* → A
  • Using look-up-table, closed form, etc.
• Agent Types: reflex, state-based, goal-based, utility-based
• Rational Action: the action that maximizes the expected value of the
performance measure given the percept sequence to date

Chapter 1

What is AI?

Acting Humanly: The Turing Test
• Alan Turing's 1950 article Computing Machinery and Intelligence
discussed conditions for considering a machine to be intelligent
• "Can machines think?" ←→ "Can machines behave intelligently?"
• The Turing test (The Imitation Game): operational definition of
intelligence

Acting Humanly: The Full Turing Test
• Computer needs to possess: natural language processing, knowledge
representation, automated reasoning, and machine learning
• Problems: 1) The Turing test is not reproducible, constructive, or
amenable to mathematical analysis. 2) What about physical interaction
with the interrogator and environment?
• Total Turing Test: requires physical interaction and needs perception
and actuation

What would a computer need to pass the Turing test?
• Natural language processing: to communicate with the examiner
• Knowledge representation: to store and retrieve information provided
before or during interrogation
• Automated reasoning: to use the stored information to answer
questions and to draw new conclusions
• Machine learning: to adapt to new circumstances and to detect and
extrapolate patterns
• Vision (for Total Turing test): to recognize the examiner's actions
and various objects presented by the examiner
• Motor control (total test): to act upon objects as requested
• Other senses (total test): such as audition, smell, touch, etc.
• Are there any problems/limitations to the Turing Test?

Thinking Humanly: Cognitive Science
• 1960 "Cognitive Revolution": information-processing psychology
replaced behaviorism
• Cognitive science brings together theories and experimental evidence
to model internal activities of the brain
• What level of abstraction? "Knowledge" or "Circuits"?
• How to validate models?
  • Predicting and testing behavior of human subjects (top-down)
  • Direct identification from neurological data (bottom-up)
  • Building computer/machine simulated models and reproducing results
(simulation)

Thinking Rationally: Laws of Thought
• Aristotle (~450 B.C.) attempted to codify "right thinking": what are
correct arguments/thought processes?
• E.g., "Socrates is a man; all men are mortal; therefore Socrates is
mortal"
• Several Greek schools developed various forms of logic: notation plus
rules of derivation for thoughts
• Problems:
  • Uncertainty: not all facts are certain (e.g., the flight might be
delayed)
  • Resource limitations:
    • Not enough time to compute/process
    • Insufficient memory/disk/etc.

Acting Rationally: The Rational Agent
• Rational behavior: doing the right thing!
• The right thing: that which is expected to maximize the expected
return
• Provides the most general view of AI because it includes:
  • Correct inference ("Laws of thought")
  • Uncertainty handling
  • Resource limitation considerations (e.g., reflex vs. deliberation)
  • Cognitive skills (NLP, AR, knowledge representation, ML, etc.)
• Advantages:
  • More general
  • Its goal of rationality is well defined

How to achieve AI?
• How is AI research done?
• AI research has both theoretical and experimental sides. The
experimental side has both basic and applied aspects.
• There are two main lines of research:
  • One is biological, based on the idea that since humans are
intelligent, AI should study humans and imitate their psychology or
physiology
  • The other is phenomenal, based on studying and formalizing common
sense facts about the world and the problems that the world presents
to the achievement of goals
• The two approaches interact to some extent, and both should
eventually succeed. It is a race, but both racers seem to be walking.
[John McCarthy]

What tasks require AI?
• "AI is the science and engineering of making intelligent machines
which can perform tasks that require intelligence when performed by
humans …" [John McCarthy]
• Tasks that require AI:
  • Solving a differential equation
  • Brain surgery
  • Inventing stuff
  • Playing Jeopardy
  • Playing Wheel of Fortune
  • What about walking?
  • What about grabbing stuff?
  • What about pulling your hand away from fire?
  • What about watching TV?
  • What about day dreaming?

Branches of AI
• Logical AI
• Search
• Natural language processing
• Pattern recognition
• Knowledge representation
• Inference: from some facts, others can be inferred
• Automated reasoning
• Learning from experience
• Planning: to generate a strategy for achieving some goal

• E. and we study what these kinds are and what their basic properties are. Online problem-solving involves acting w/o complete knowledge of the problem and environment Example: Romania On holiday in Romania. • Ontology Study of the kinds of things that exist.g. a new skater in an arena • Sliding problem. Arad. Bucharest AI State of the art • Have the following been achieved by AI? • World-class chess playing • Playing table tennis • Cross-country driving Problem types • Solving mathematical problems • Single-state problem: deterministic. turning left leads you to the bedroom • Contingency problem: nondeterministic. Flight leaves tomorrow from Bucharest Formulate goal:  be in Bucharest Formulate problem:  states: various cities  actions: drive between cities Find solution:  sequence of cities. inaccessible • Express emotions • Agent does not know the exact state (could be in any of the Chapter 3 possible states) Problem-solving agents • May not have sensor at all • Assume states while working towards goal state... • Engage in a meaningful conversation • Can calculate optimal action sequence to reach goal state.g. playing chess.g. • Epistemology Study of the kinds of knowledge that are required for solving problems in the world. In AI. • Many skaters around . e. inaccessible • Must use sensors during execution • Solution is a tree or policy • Often interleave search and execution • E. the programs and sentences deal with various kinds of objects.g.. Sibiu. • Genetic programming • Emotions??? AI Prehistory Note: This is offline problem-solving. going straight will lead you to the kitchen • If you are at the kitchen.. • Understand spoken language • E. Any action will result in an exact state • Observe and understand human emotions • Multiple-state problem: deterministic. Fagaras. accessible • Discover and prove mathematical theories • Agent knows everything about world (the exact state). currently in Arad. walking in a dark room • If you are at the door.

path cost (additive) Basic idea:  e.7. "Arad à Zerind" represents a complex set of possible routes. Clean]. dirt at current location.. i. up.Suck. e.g.  states? integer dirt and robot location Right goes to {2. any real state "in Arad“ must get to some real state "in Zerind" (Abstract) solution =  set of real paths that are solutions in the real world Each abstract action should be "easier" than the original problem Vacuum world state space graph . sum of distances..a.g. For guaranteed realizability.5.. down  goal test? = goal state (given)  path cost? 1 per move [Note: optimal solution of n-Puzzle family is NP-hard] Example: robotic assembly Single-state problem formulation A problem is defined by four items: 1. Suck]  Sensorless. "at Arad" 2. rest stops.  Percept: [L. actions or successor function S(x) = set of action–state  states?: real-valued coordinates of robot joint angles parts pairs of the object to be assembled  e. start in #5 or #7 Solution? [Right. Solution? Right.Left.4. … }  actions? : continuous motions of robot joints 3... x = "at Bucharest"  path cost?: time to execute  implicit. • E. number of actions  offline. S(Arad) = {<Arad à Zerind. Right.6.3.8} e.. etc.e.g.g.k. Checkmate(x) Tree search algorithms 4.g.2.Suck]  goal test? no dirt at all locations Contingency  path cost? 1 per action  Nondeterministic: Suck may Example: The 8-puzzle dirty a clean carpet  Partially observable: location. initial state e.8}  actions? Left.g. Zerind>. detours. start in #5. start in {1.a.g.• Exploration problem: unknown state space Discover and learn about environment while taking actions.~expanding  c(x.g.. can be  goal test?: complete assembly  explicit.4. etc.y) is the step cost.6. Suck Solution? [Right.. Maze Example: vacuum world  Single-state. successors of already-explored states (a. if dirt then Suck]  states? locations of tiles  actions? move blank left.. e. simulated exploration of state space by generating executed. goal test. assumed to be ≥ 0 states) A solution is a sequence of actions leading from the initial state to a goal state Selecting a state space Real world is absurdly complex à state space must be abstracted for problem solving  (Abstract) state = set of real states (Abstract) action = complex combination of real actions  e. right.
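The four-item problem definition above maps directly onto a small
class. Below is a minimal Python sketch for the Romania example; the
road fragment and distances follow the standard AIMA map, and the
interface (successors, goal_test, step_cost) is an assumption reused
by the search sketches later in this chapter:

# Fragment of the Romania map: city -> [(neighbor, road distance in km)]
ROADS = {
    "Arad":    [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu":   [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
}

class RouteProblem:
    """Single-state problem: initial state, successors, goal test, path cost."""

    def __init__(self, initial, goal):
        self.initial = initial   # 1. initial state, e.g., "at Arad"
        self.goal = goal

    def successors(self, state):
        # 2. S(x) = set of <action, state> pairs; the action here is
        #    "drive to <city>", so we return (city, distance) pairs.
        return ROADS.get(state, [])

    def goal_test(self, state):
        return state == self.goal  # 3. explicit goal test

    def step_cost(self, state, successor):
        city, distance = successor
        return distance  # 4. additive path cost: c(x, a, y) >= 0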

Implementation: states vs. nodes
A state is a (representation of) a physical configuration; a node is a
data structure constituting part of a search tree.

Search strategies
A search strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following four criteria:
  Completeness: does it always find a solution if one exists?
  Time complexity: how long does it take, as a function of the number
of nodes generated?
  Space complexity: how much memory does it require (maximum number of
nodes in memory)?
  Optimality: does it guarantee the least-cost solution?
Time and space complexity are measured in terms of:
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)

Uninformed search strategies
Uninformed search strategies use only the information available in the
problem definition:
  Breadth-first search
  Uniform-cost search
  Depth-first search
  Depth-limited search
  Iterative deepening search

Breadth-first search
Expand the shallowest unexpanded node
Implementation: general tree search;
  fringe is a FIFO queue, i.e., new successors go at the end

Properties of breadth-first search
  Complete? Yes (if b is finite)
  Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
  Space? O(b^(d+1)) (keeps every node in memory)
  Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)

Time complexity of breadth-first search
• If a goal node is found on depth d of the tree, all nodes up till
that depth are created.
• Thus: O(b^d)

Space complexity of breadth-first
• The largest number of nodes in the QUEUE is reached on the level d
of the goal node.

Uniform-cost search
Expand the least-cost unexpanded node
Implementation:
  fringe = queue ordered by path cost
Equivalent to breadth-first if step costs are all equal
  Complete? Yes, if step cost ≥ ε
  Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉),
where C* is the cost of the optimal solution
  Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
  Optimal? Yes: nodes are expanded in increasing order of g(n)
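A minimal sketch of breadth-first tree search over the RouteProblem
class from earlier (an assumption of this sketch); replacing the FIFO
deque with a priority queue ordered by path cost g(n) turns the same
loop into uniform-cost search:

from collections import deque

def breadth_first_search(problem):
    # fringe is a FIFO queue: the shallowest unexpanded node comes out
    # first, and new successors go at the end.
    fringe = deque([(problem.initial, [problem.initial])])
    while fringe:
        state, path = fringe.popleft()
        if problem.goal_test(state):
            return path
        for next_state, _cost in problem.successors(state):
            fringe.append((next_state, path + [next_state]))
    return None  # no solution

print(breadth_first_search(RouteProblem("Arad", "Bucharest")))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']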

Depth-first search
Expand the deepest unexpanded node
Implementation:
  fringe = LIFO queue, i.e., put successors at the front

Properties of depth-first search
  Complete? No: fails in infinite-depth spaces, spaces with loops
    Modify to avoid repeated states along the path → complete in
finite spaces
  Time? O(b^m): terrible if m is much larger than d
    but if solutions are dense, may be much faster than breadth-first
  Space? O(bm), i.e., linear space!
  Optimal? No

Time complexity of depth-first: details
• In the worst case, the (only) goal node may be on the right-most
branch, so all O(b^m) nodes are generated.

Space complexity of depth-first
• Only the current path and the unexpanded siblings along it must be
stored: O(bm)

Depth-limited search
= depth-first search with depth limit l, i.e., nodes at depth l have
no successors
Recursive implementation: see the Python sketch after these slides

Iterative deepening search
Iteratively run depth-limited search with limit l = 0, 1, 2, …

Properties of iterative deepening search
  Complete? Yes
  Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
  Space? O(bd)
  Optimal? Yes, if step cost = 1
  Number of nodes generated in a depth-limited search to depth d with
branching factor b:
    N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
  Number of nodes generated in an iterative deepening search to depth
d with branching factor b:
    N_IDS = (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d
  For b = 10, d = 5:
    N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
    N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  Overhead = (123,456 − 111,111) / 111,111 ≈ 11%

Avoiding repeated states
In increasing order of effectiveness and computational overhead:
• do not return to the state we came from, i.e., the expand function
will skip possible successors that are in the same state as the node's
parent
• do not create paths with cycles, i.e., the expand function will skip
possible successors that are in the same state as any of the node's
ancestors
• do not generate any state that was ever generated before, by keeping
track (in memory) of every state generated, unless the cost of
reaching that state is lower than the last time we reached it
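The recursive implementation referenced above was a figure in the
original slides. A minimal sketch of depth-limited search and the
iterative-deepening driver, again assuming the RouteProblem interface
from earlier:

def depth_limited_search(problem, state, limit, path=None):
    # Depth-first search that treats nodes at depth `limit` as having
    # no successors.
    if path is None:
        path = [state]
    if problem.goal_test(state):
        return path
    if limit == 0:
        return None  # cutoff reached
    for next_state, _cost in problem.successors(state):
        result = depth_limited_search(problem, next_state, limit - 1,
                                      path + [next_state])
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_depth=50):
    # Rerun depth-limited search with limits 0, 1, 2, ...: the repeated
    # shallow work costs only ~11% extra (see above) while keeping
    # space linear in the depth, O(bd).
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial, limit)
        if result is not None:
            return result
    return None

print(iterative_deepening_search(RouteProblem("Arad", "Bucharest")))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']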

Summary of algorithms
(The comparison table reconstructed from the properties listed above;
starred entries hold under the stated conditions: b finite, step cost
= 1 or ≥ ε.)

            Breadth-    Uniform-     Depth-   Depth-        Iterative
            first       cost         first    limited       deepening
Complete?   Yes*        Yes*         No       Yes, if l ≥ d Yes
Time        O(b^(d+1))  O(b^⌈C*/ε⌉)  O(b^m)   O(b^l)        O(b^d)
Space       O(b^(d+1))  O(b^⌈C*/ε⌉)  O(bm)    O(bl)         O(bd)
Optimal?    Yes*        Yes          No       No            Yes*

Repeated states
Failure to detect repeated states can turn a linear problem into an
exponential one!

Graph search
(a Python sketch of graph search follows the summary below)

Summary
• Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be explored
• Variety of uninformed search strategies
• Iterative deepening search uses only linear space and not much more
time than other uninformed algorithms
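A minimal sketch of the graph-search variant, implementing the
strongest of the repeated-state policies above: keep every generated
state in an explored set so no state is expanded twice (same assumed
problem interface as the earlier sketches):

from collections import deque

def breadth_first_graph_search(problem):
    explored = {problem.initial}            # every state ever generated
    fringe = deque([(problem.initial, [problem.initial])])
    while fringe:
        state, path = fringe.popleft()
        if problem.goal_test(state):
            return path
        for next_state, _cost in problem.successors(state):
            if next_state not in explored:  # skip repeated states
                explored.add(next_state)
                fringe.append((next_state, path + [next_state]))
    return None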