Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better.
SOME DEFINITIONS OF AI
Building systems that think like humans
- "The exciting new effort to make computers think … machines with minds, in the full and literal sense" -- Haugeland, 1985
- "The automation of activities that we associate with human thinking, such as decision-making, problem solving, learning, …" -- Bellman, 1978

Building systems that act like humans
- "The art of creating machines that perform functions that require intelligence when performed by people" -- Kurzweil, 1990
- "The study of how to make computers do things at which, at the moment, people are better" -- Rich and Knight, 1991

Building systems that think rationally
- "The study of mental faculties through the use of computational models" -- Charniak and McDermott, 1985
- "The study of the computations that make it possible to perceive, reason, and act" -- Winston, 1992

Building systems that act rationally
- "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" -- Schalkoff, 1990
- "The branch of computer science that is concerned with the automation of intelligent behavior" -- Luger and Stubblefield, 1993
It was proposed by Alan Turing in 1950. According to this test, a computer can be considered to be thinking only when a human interviewer, conversing with both an unseen human being and an unseen computer, cannot determine which is which. Setup: two human beings and one computer.
The computer would need to possess the following capabilities:
- Natural language processing: to enable it to communicate successfully in English
- Knowledge representation: to store what it knows or hears
- Automated reasoning: to use the stored information to answer questions and to draw new conclusions
- Machine learning: to adapt to new circumstances and to detect and extrapolate patterns
To pass the total Turing test, the computer will also need:
- Computer vision: to perceive objects
- Robotics: to manipulate objects and move about
Thinking and Acting Humanly
Acting humanly: "If it looks, walks, and quacks like a duck, then it is a duck."
The Turing Test: the interrogator communicates by typing at a terminal with TWO other agents. The human can say and ask whatever s/he likes, in natural language. If the human cannot decide which of the two agents is a human and which is a computer, then the computer has achieved AI. This is an OPERATIONAL definition of intelligence, i.e., one that gives an algorithm for testing objectively whether the definition is satisfied.

Thinking humanly: cognitive modeling. Develop a precise theory of mind, through experimentation and introspection, then write a computer program that implements it. Example: GPS, the General Problem Solver (Newell and Simon, 1961), which tried to model the human process of problem solving in general.

Thinking rationally: the "laws of thought" approach. Capture "correct" reasoning processes. A loose definition of rational thinking: an irrefutable reasoning process. How do we do this? Develop a formal model of reasoning (formal logic) that "always" leads to the "right" answer, then implement this model. How do we know when we've got it right? When we can prove that the results of the programmed reasoning are correct, i.e., the
soundness and completeness of first-order logic. Example: Ram is a III year CSE student. All III year CSE students are good students. Therefore, Ram is a good student.

Acting rationally: act so that desired goals are achieved. The rational agent approach (this is what we'll focus on in this course): figure out how to make correct decisions, which sometimes means thinking rationally and other times means having rational reflexes. Key distinctions: correct inference versus rationality; reasoning versus acting; limited rationality.
RELATION WITH OTHER DISCIPLINES:
- Expert Systems - Natural Language Processor - Speech Recognition - Robotics - Computer Vision - Intelligent Computer-Aided Instruction - Data Mining - Genetic Algorithms
The foundations of AI draw on several disciplines:
- Philosophy: logic and methods of reasoning; mind as a physical system; foundations of learning, language, and rationality
- Mathematics: formal representation and proof; computation and (un)decidability; probability
- Economics: utility, decision theory
- Neuroscience: the physical substrate for mental activity
- Psychology: phenomena of perception; experimental techniques
- Computer engineering: building fast computers
- Control theory: design of systems that maximize an objective function over time
- Linguistics: knowledge representation, grammar
HISTORY OF AI:
- 1943: McCulloch & Pitts propose a Boolean circuit model of the brain
- 1950: Turing's "Computing Machinery and Intelligence"
- 1950s: early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, and Gelernter's Geometry Engine
- 1952-69: "Look, Ma, no hands!" enthusiasm
- 1956: Dartmouth meeting; the name "Artificial Intelligence" is adopted
- 1965: Robinson's complete algorithm for logical reasoning
- 1966-73: AI discovers computational complexity; neural network research almost disappears
- 1969-79: early development of knowledge-based systems
- 1980: AI becomes an industry
- 1986: neural networks return to popularity
- 1987: AI becomes a science
- 1995: the emergence of intelligent agents
Agent = perceive + act, with thinking, reasoning, and planning in between.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Examples: a human agent, a robotic agent. The perception capability is usually called a sensor; the part of the agent taking an action is called an actuator. An agent uses its perception of the environment to make decisions about the actions to take. The actions can depend on the most recent percept or on the entire history (the percept sequence).

Fig: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators.
Agents interact with the environment through sensors and actuators.

Agent Function: the agent function is a mathematical function that maps a sequence of percepts into an action. The function is implemented as the agent program.

Fig: partial tabulation of a simple agent function for the vacuum-cleaner world (two locations, A and B):

Percept sequence          Action
[A, Clean]                Right
[A, Dirty]                Suck
[B, Clean]                Left
[B, Dirty]                Suck
[A, Clean], [A, Clean]    Right
[A, Clean], [A, Dirty]    Suck
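The tabulated agent function can be rendered as a small table-driven agent, sketched here in Python (a minimal sketch; the tuple-based percept encoding and the function names are illustrative assumptions, not part of the original notes):

```python
# Table-driven vacuum-cleaner agent: maps the ENTIRE percept sequence so
# far to an action by looking it up in a (partial) table.

TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def make_table_driven_agent(table):
    percepts = []                          # the full percept sequence so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is not tabulated
    return agent

agent = make_table_driven_agent(TABLE)
print(agent(("A", "Clean")))  # Right
print(agent(("A", "Dirty")))  # Suck
```

Note how quickly the table grows: every extra percept multiplies the number of sequences that must be tabulated, which is why table-driven agents are impractical beyond toy worlds.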
RATIONAL AGENT: a rational agent is one that takes the right decision in every situation.

Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Performance measure: a set of criteria / a test bed for the success of the agent's behavior. Performance measures should be based on the desired effect of the agent on the environment.

Rationality: the agent's rational behavior depends on:
- the performance measure that defines success
- the agent's knowledge of the environment
- the actions it is capable of performing
- the current sequence of percepts

Omniscience: an agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice, although an environment can sometimes be completely known in advance.
Exploration: sometimes an agent must perform an action to gather information (to increase perception).

Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

NATURE OF ENVIRONMENTS:
Task environment: the problem that the agent is a solution to. It includes the performance measure, the environment, the actuators, and the sensors.

Agent type: Taxi driver
- Performance measures: safe, fast, legal, comfortable, maximize profits, minimize costs
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering, accelerator, brake, signal, horn
- Sensors: camera, sonar, speedometer, GPS, keyboard, etc.

Agent type: Medical diagnosis system
- Performance measures: healthy patient, minimize costs, avoid lawsuits
- Environment: patient, hospital, staff
- Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
- Sensors: keyboard (entry of symptoms, findings, patient's answers)

Properties of the Task Environment:
• Fully Observable (vs. Partially Observable)
- The agent's sensors give the complete state of the environment at each point in time
- The sensors detect all aspects that are relevant to the choice of action
- An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data
- Partially observable example: an automated taxi cannot see what other drivers are thinking
• Deterministic (vs. Stochastic)
- The next state of the environment is completely determined by the current state and the action executed by the agent
- Strategic environment: deterministic except for the actions of other agents
- Stochastic example: taxi driving is clearly stochastic in this sense, because one can never predict the behavior of the traffic exactly
• Episodic (vs. Sequential)
- The agent's experience can be divided into episodes, each consisting of what the agent perceives and the single action it takes
- The next episode does not depend on previous episodes; in a sequential environment, by contrast, the current decision affects all future states
• Static (vs. Dynamic)
- The environment does not change while the agent is deliberating
- Semi-dynamic: if the environment does not change with time but changes due to the agent's performance, it is called semi-dynamic
• Discrete (vs. Continuous)
- Depends on the way time is handled in describing states, percepts, and actions
- Chess game: discrete; taxi driving: continuous
• Single Agent (vs. Multi-Agent)
- An agent solving a crossword puzzle by itself is clearly in a single-agent environment; an agent playing chess is in a two-agent environment
- Multi-agent environments can be competitive or cooperative; communication is a key issue in multi-agent environments
Example of Task Environments and Their Classes

STRUCTURE OF AGENT:
Simple Agents
Table-driven agents: the agent function consists of a lookup table of the actions to be taken for every possible state of the environment. This only works for a small number of possible states: if the environment has n variables, each with t possible states, then the table size is t^n.

Four types of agents:
1. Simple reflex agent
2. Model-based reflex agent
3. Goal-based agent
4. Utility-based agent

Simple reflex agent
Definition: a simple reflex agent (SRA) works only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.
- Decides on the action to take based only on the current percept, not on the history of percepts
- Based on condition-action rules: if (condition) then action
Characteristics:
- no plan, no goal
- does not know what it wants to achieve
- does not know what it is doing
Condition-action rule: if condition then action. Ex: a medical diagnosis system.
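A simple reflex agent for the vacuum world can be sketched as follows (a minimal sketch; the particular rule set is an assumption based on the vacuum-world tabulation earlier, and the RULE-MATCH/RULE-ACTION comments refer to the functions described below):

```python
# Simple reflex agent: chooses an action from the CURRENT percept only,
# via condition-action rules; no percept history, no internal state.

RULES = [
    (lambda loc, status: status == "Dirty", "Suck"),
    (lambda loc, status: loc == "A", "Right"),
    (lambda loc, status: loc == "B", "Left"),
]

def simple_reflex_agent(percept):
    loc, status = percept
    for condition, action in RULES:   # RULE-MATCH: first rule whose condition holds
        if condition(loc, status):
            return action             # RULE-ACTION: execute the matched rule

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
```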
Algorithm explanation:
- INTERPRET-INPUT: generates an abstracted description of the current state from the percept.
- RULE-MATCH: returns the first rule in the set of rules that matches the given state description.
- RULE-ACTION: the action of the selected rule is executed for the given percept.

Model-Based Reflex Agents
Definition: an agent which combines the current percept with the old internal state to generate an updated description of the current state. If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe; this usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent. Ex: the braking problem.
Characteristics: a reflex agent with internal state; the sensors do not provide the complete state of the world, so the agent must keep an internal state. Updating the internal world
requires two kinds of knowledge:
- how the world evolves
- how the agent's actions affect the world

Algorithm explanation:
- UPDATE-STATE: responsible for creating the new internal state description.

Goal-based agents: the agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).
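The model-based and goal-based ideas can be combined in one small sketch (a minimal sketch for the vacuum world; the state-update model and the goal of "all squares clean" are illustrative assumptions):

```python
# Model-based, goal-based vacuum agent: keeps an internal model of which
# squares are clean, updates it from each percept (UPDATE-STATE), and
# stops acting once its goal (all squares clean) is satisfied.

class GoalBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal state

    def goal_satisfied(self):
        return all(v == "Clean" for v in self.model.values())

    def step(self, percept):
        loc, status = percept
        self.model[loc] = status          # UPDATE-STATE from the percept
        if status == "Dirty":
            self.model[loc] = "Clean"     # model how Suck changes the world
            return "Suck"
        if self.goal_satisfied():
            return "NoOp"                 # goal reached: nothing left to do
        return "Right" if loc == "A" else "Left"

agent = GoalBasedVacuumAgent()
print(agent.step(("A", "Dirty")))   # Suck
print(agent.step(("A", "Clean")))   # Right
print(agent.step(("B", "Clean")))   # NoOp (model now says both squares clean)
```

The internal model is what lets the agent decide to stop: a simple reflex agent, with no memory of square B, could never conclude that the goal has been reached.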
In some cases the goal is easy to achieve; in others it involves planning, sifting through a search space for possible solutions, and developing a strategy. Could the "car-braking" problem be solved by search and planning? Yes, possibly, but it would not be natural.

Characteristics:
- The action depends on the goal (consideration of the future), e.g., path finding
- Fundamentally different from the condition-action rule
- Appears less efficient, but far more flexible

Utility-based agents
If one state is preferred over another, then it has higher utility for the agent:
Utility-Function(state) = real number (degree of happiness)
The agent is aware of a utility function that estimates how close the current state is to its goal.
Characteristics:
- Generates high-quality behavior
- Maps the internal states to real numbers (e.g., in game playing)
- Looks for the higher utility value (the utility function)

Learning Agents
Agents capable of acquiring new competence through observations and actions. A learning agent has the following components:
- Learning element: suggests modifications to the existing rules, using feedback from the critic
- Performance element: the collection of knowledge and procedures for selecting the driving actions; the choice depends on the learning element
- Critic: observes the world and passes information to the learning element
- Problem generator: identifies areas of behavior that need improvement and suggests experiments
Agent Example: a file manager agent
- Purpose: compress and archive files that have not been used in a while
- Sensors: commands like ls, du, pwd
- Actuators: commands like tar, gzip, cd, rm, cp
- Environment: fully observable (but partially observed), deterministic (strategic), episodic, dynamic, discrete

Agent vs. Program
- Size: an agent is usually smaller than a program
- Purpose: an agent has a specific purpose, while programs are multi-functional
- Persistence: an agent's life span is not entirely dependent on a user launching and quitting it
- Autonomy: an agent doesn't need the user's input to function

Problem Solving Agents
A problem-solving agent is:
- a kind of goal-based agent
- one that finds sequences of actions that lead to desirable states
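The file manager agent above can be sketched in Python (a minimal sketch; the 30-day idleness threshold and the archive file name are assumptions, and a real agent would also remove the archived originals):

```python
# File manager agent: senses the environment (file listing + access times,
# like ls/du) and acts by archiving files that have not been used in a
# while (like tar/gzip).
import os
import tarfile
import time

def sense(directory):
    """Sensor: list regular files and their last-access times."""
    files = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            files.append((path, os.stat(path).st_atime))
    return files

def act(directory, max_idle_days=30, archive_name="old_files.tar.gz"):
    """Actuator: tar+gzip the files idle longer than the threshold."""
    cutoff = time.time() - max_idle_days * 86400
    stale = [path for path, atime in sense(directory) if atime < cutoff]
    if stale:
        with tarfile.open(os.path.join(directory, archive_name), "w:gz") as tar:
            for path in stale:
                tar.add(path, arcname=os.path.basename(path))
    return stale  # the files that were archived
```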
The sequence of steps taken by an intelligent agent to maximize the performance measure: Formulate Goal, Formulate Problem, Search, Execute.
- Goal Formulation: based on the current situation and the agent's performance measure; it is the first step in problem solving.
- Problem Formulation: the process of deciding what actions and states to consider, given a goal.
- Search: the process of looking for the different sequences of actions that lead to a goal.
- Execution: once a solution is found, the actions it recommends can be carried out; this is called the execution phase.

PROBLEMS
Four components of a problem definition:
- Initial state: the state the agent starts in
- Possible actions: described via a successor function, which returns <action, successor> pairs
- Goal test: determines whether a given state is a goal state
- Path cost: a function that assigns a numeric cost to each path; the step cost is the cost of a single action

State space: the state space forms a graph in which the nodes are states and the arcs between nodes are actions.
Path: a path in the state space is a sequence of states connected by a sequence of actions.
Solution: a search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Assuming the environment is • • • • Static Observable Discrete Deterministic . and then choosing the best sequence Searching Process Input to Search Output from Search : Problem : Solution in the form of Action Sequence – – A Problem solving Agent.Solutions • • A Solution to the problem is the path from the initial state to the final state Quality of solution is measured by path cost function – – Optimal Solution has the lowest path cost among other solutions An Agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to a state of known value.
Example: A Simplified Road Map of Part of Romania
- On holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
- Formulate goal: be in Bucharest.
- Formulate problem: states are the various cities; actions are driving between cities.
- Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

TOY PROBLEM
Example 1: Vacuum World
Problem formulation:
- States: 2 x 2^2 = 8 states (with n locations, there are n x 2^n states)
- Initial state: any one of the 8 states
- Successor function: the legal states that result from the three actions (Left, Right, Suck)
- Goal test: all squares are clean
- Path cost: number of steps (each step costs 1)
Fig: state space for the vacuum world; labels on arcs denote the actions L (Left), R (Right), and S (Suck).

Example 2: The 8-Puzzle
- States: locations of the tiles
- Initial state: any one of the states
- Successor function: move the blank Left, Right, Up, or Down
- Goal test: matches the goal configuration shown above
- Path cost: 1 for each step
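The 8-puzzle successor function can be sketched as follows (a minimal sketch; representing a state as a 9-tuple with 0 for the blank is an assumption of this illustration):

```python
# 8-puzzle successor function: states are 9-tuples read row by row,
# with 0 for the blank; an action slides the blank Left/Right/Up/Down.

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def successors(state):
    blank = state.index(0)
    result = []
    for action, delta in MOVES.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue
        if delta in (-1, +1) and target // 3 != blank // 3:
            continue  # horizontal moves must stay on the same row
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        result.append((action, tuple(new)))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)       # blank in the middle
print([a for a, s in successors(start)])  # all four moves are legal here
```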
The 8-puzzle is from the family of "sliding-block puzzles":
- Finding optimal solutions is NP-complete
- The 8-puzzle has 9!/2 = 181,440 reachable states
- The 15-puzzle has approx. 1.3 x 10^12 states
- The 24-puzzle has approx. 1 x 10^25 states

Example 3: The 8-Queens Problem
- Place eight queens on a chessboard such that no queen can attack another queen
- No path cost, because only the final state counts!
- There are incremental formulations and complete-state formulations

Complete-state formulation:
- States: any arrangement of 0 to 8 queens on the board

Improved incremental formulation:
- States: arrangements of n queens, one per column in the leftmost n columns, with no queen attacking another
- Initial state: no queens on the board
- Successor function: add a queen to any empty square (naive incremental), or, in the improved incremental formulation, add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
- Goal test: 8 queens on the board, none attacked
- The naive incremental formulation gives 64 x 63 x … x 57 ≈ 1.8 x 10^14 possible sequences; the improved formulation leaves only 2,057 sequences to investigate

SOME MORE REAL-WORLD PROBLEMS
- Route finding
- Touring (traveling salesman)
- Logistics
- VLSI layout
- Robot navigation
- Learning
- Robotic assembly

Robotic assembly:
- States: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
- Actions: continuous motions of the robot joints
- Goal test: complete assembly
- Path cost: time to execute

Route finding: find the best route between two cities given the type and condition of existing roads and the driver's preferences.
- Used in computer networks, automated travel advisory systems, and airline travel planning systems
- The path cost may include money, seat quality, time of day, and type of airplane

Traveling Salesman Problem (TSP): a salesman must visit N cities.
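The improved incremental 8-queens formulation described above can be sketched as a backtracking search (a minimal sketch; representing a state as a tuple of queen rows, one per filled leftmost column, is an assumption of this illustration):

```python
# Incremental 8-queens: states are tuples of row indices, one queen per
# leftmost column; the successor function only adds non-attacked squares.

def attacks(col1, row1, col2, row2):
    return row1 == row2 or abs(row1 - row2) == abs(col1 - col2)

def successors(state):
    col = len(state)                  # next empty leftmost column
    result = []
    for row in range(8):
        if not any(attacks(c, r, col, row) for c, r in enumerate(state)):
            result.append(state + (row,))
    return result

def solve(state=()):
    if len(state) == 8:
        return state                  # goal test: 8 queens, none attacked
    for nxt in successors(state):
        solution = solve(nxt)
        if solution:
            return solution
    return None

print(solve())  # one valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```

Because successors only ever place non-attacking queens in the leftmost empty column, the search visits the small pruned space of sequences rather than all 64 x 63 x … arrangements.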
However. b) to travel from city a to city b. VLSI layout • • The decision of placement of silicon chips on breadboards is very complex. Closely related to the Hamiltonian-cycle problem. There is usually an integer cost c (a. Given a road map of n cities. the total tour cost must be minimum.• • • • • Each city is visited exactly once and finishing the city started from. find the shortest tour which visits every city on the map exactly once and then return to the original city (Hamiltonian circuit) (Geometric version): – – – – A complete graph of n vertices (on an unit square) Distance between any two vertices: Euclidean distance n!/2n legal tours Find one legal tour that is shortest It’s an NP Complete problem no one has found any really efficient way of solving them for large n. . This includes – – • cell layout channel routing The goal is to place the chips without overlap. (or standard gates on a chip). where the total cost is the sum of the individual cost of each city visited in the tour.
Searching for Solutions
- Generating action sequences
- Data structures for search trees

Generating action sequences:
- What do we know? How to define a problem and how to recognize a solution.
- Finding a solution is done by a search in the state space.
- Maintain and extend a partial solution sequence.

UNINFORMED SEARCH STRATEGIES
Uninformed strategies use only the information available in the problem definition (also known as blind searching). Uninformed search methods:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
BREADTH-FIRST SEARCH
Definition: the root node is expanded first, then all the nodes generated by the root, then their successors, and so on.
Implementation:
- Expand the shallowest unexpanded node
- Place all new successors at the end of a FIFO queue

Properties of Breadth-First Search
- Complete: yes, if b (the maximum branching factor) is finite
- Time: 1 + b + b^2 + … + b^d + b(b^d - 1) = O(b^(d+1)), exponential in d
- Space: O(b^(d+1)); keeps every node in memory, and this is the big problem (an agent that generates nodes at 10 MB/sec will produce about 860 GB in 24 hours)
- Optimal: yes if the cost is 1 per step; not optimal in general

Lessons from Breadth-First Search
- The memory requirements are a bigger problem for breadth-first search than the execution time
- Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances
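Breadth-first search can be sketched as follows (a minimal sketch; the adjacency-dict graph format and function name are assumptions, with the graph chosen to match the S-to-G route-finding example below):

```python
# Breadth-first search: expand the shallowest unexpanded node first,
# using a FIFO queue of paths.
from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])      # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()    # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for successor in graph.get(node, []):
            frontier.append(path + [successor])
    return None

# Route-finding example: S -> {A, B, C}, B -> G, C -> G, A -> D -> G
graph = {"S": ["A", "B", "C"], "A": ["D"], "B": ["G"], "C": ["G"], "D": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # ['S', 'B', 'G']
```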
Ex: Route-finding problem. Task: find the route from S to G using BFS.
[Figures: Steps 1-3 of the breadth-first search tree]
Step 4: [figure]
Answer: the path found at depth level 2, i.e., S-B-G (or S-C-G).
Time complexity: 1 + b + b^2 + … + b^d = O(b^d).

DEPTH-FIRST SEARCH (BACKTRACKING SEARCH)
Definition: expand one node to the deepest level of the tree. If a dead end occurs, backtracking is done to the next immediate previous node, whose remaining successors are then expanded.
- Expand the deepest unexpanded node
- Unexplored successors are placed on a stack until fully explored; nodes are enqueued in LIFO (last-in, first-out) order, i.e., a stack data structure orders the nodes
- Modest memory requirement: it needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path; backtracking therefore uses less memory

Properties of Depth-First Search
- Complete: no, it fails in infinite-depth spaces and in spaces with loops; if modified to avoid repeated states along the path, it is complete in finite spaces
- Time: O(b^m); not great if m (the maximum depth) is much larger than d, but if solutions are dense this may be faster than breadth-first search
- Space: O(bm), i.e., linear space
- Optimal: no

When the search hits a dead end, it can only back up one level at a time, even if the "problem" occurs because of a bad operator choice near the top of the tree; it only does "chronological backtracking".

Advantage: if more than one solution exists, or the number of levels is high, then DFS is best, because exploration is done over only a small portion of the search space.
Disadvantage: not guaranteed to find a solution.

Example: Route-finding problem. Task: find a route from S to G.
[Figures: Steps 1-4 of the depth-first search tree over states S, A, B, C, D, G]
Answer: the path found at depth level 3, i.e., S-A-D-G.

DEPTH-LIMITED SEARCH
Definition: a cutoff (a maximum depth level) is introduced in this search technique to overcome the disadvantage of depth-first search. The cutoff value depends on the number of states. DLS imposes a fixed depth limit on DFS and can be implemented as a simple modification to the general tree-search algorithm or to the recursive DFS algorithm.
A variation of depth-first search that uses a depth limit:
- Alleviates the problem of unbounded trees
- Searches to a predetermined depth l ("ell"); nodes at depth l have no successors
- Same as depth-first search if l = ∞
- Can terminate with two kinds of failure: standard failure indicates no solution; cutoff indicates no solution within the depth limit

Properties of Depth-Limited Search
- Complete: only if l ≥ d (the search fails if the shallowest goal lies beyond the depth limit)
- Time: O(b^l)
- Space: O(bl)
- Optimal: no (in particular when l > d)

Advantage: a cutoff level is introduced into the DFS technique.
Disadvantage: no guarantee of finding an optimal solution.
E.g.: Route-finding problem. Given: a map with states A, B, C, D, E. Task: find a path from A to E.
The number of states in the given map is five, so the cutoff value is four: it is possible to reach the goal state at a maximum depth of four.
[Figures: depth-limited search trees]
Answer: Path = A-B-D-E, Depth = 3.

ITERATIVE DEEPENING SEARCH (OR DEPTH-FIRST ITERATIVE DEEPENING, DFID)
Definition: iterative deepening depth-first search is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits (0, 1, 2, …) until a goal is found.
- Uses depth-first search
- Finds the best depth limit
- Gradually increases the depth limit

Iterative Lengthening Search: the idea is to use an increasing path-cost limit instead of increasing depth limits; the resulting algorithm is called iterative lengthening search.
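Iterative deepening can be sketched as a loop over depth-limited searches (a minimal sketch; the graph format, function names, and `max_depth` cap are assumptions, with the graph chosen to match the A-to-G example below):

```python
# Iterative deepening search: run depth-limited DFS with limits 0, 1, 2, ...
# until a goal is found, combining DFS's low memory with BFS's completeness.

def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None              # cutoff: nothing found within this limit
    for successor in graph.get(node, []):
        result = depth_limited(graph, successor, goal, limit - 1, path + [successor])
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):     # limits 0, 1, 2, ...
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result                  # found at the shallowest possible depth
    return None

# Example below: A -> {B, C, F}, with routes A-B-D-E-G, A-C-E-G, and A-F-G
graph = {"A": ["B", "C", "F"], "B": ["D"], "C": ["E"],
         "D": ["E"], "E": ["G"], "F": ["G"]}
print(iterative_deepening_search(graph, "A", "G"))  # ['A', 'F', 'G']
```

The shortest route A-F-G is found at limit 2, before the deeper routes are ever completed, which is exactly why IDS returns the lowest-depth solution.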
Implementation: [figure]

Properties of Iterative Deepening Search
- Complete: yes
- Time: N(IDS) = (d)b + (d-1)b^2 + … + (1)b^d = O(b^d)
- Space: O(bd)
- Optimal: yes, if step cost = 1; can be modified to explore a uniform-cost tree

Advantages:
- This method is preferred for a large state space and when the depth of the solution is not known
- Memory requirements are modest
- Like BFS, it is complete
- Faster than BFS even though IDS generates repeated states: BFS generates nodes up to level d+1, whereas IDS only generates nodes up to level d

Disadvantage: many states are expanded multiple times.

Lessons from Iterative Deepening Search
- If the branching factor is b and the solution is at depth d, then nodes at depth d are generated once, nodes at depth d-1 are generated twice, etc. Hence b^d + 2b^(d-1) + … + db <= b^d / (1 - 1/b)^2 = O(b^d).
- If b = 4, the worst case is 1.78 x 4^d, i.e., 78% more nodes searched than exist at depth d.
- In general, iterative deepening search is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.

Example: Route-finding problem. Given: a map with states A, B, C, D, E, F, G.
A B C F G G .Task: Find a path from A to G. Limit=0 A Limit=1 A B C F Limit=2 1. A B C F D 2. A B C F D 3.
Candidate solution paths: A-B-D-E-G (limit 4), A-C-E-G (limit 3), A-F-G (limit 2).
Answer: since it is an IDS tree, the path with the lowest depth limit, A-F-G, is selected as the solution path.

BI-DIRECTIONAL SEARCH
Definition: a strategy that simultaneously searches in both directions (forward from the initial state and backward from the goal state) and stops when the two searches meet in the middle.
- Alternate searching from the start state toward the goal and from the goal state toward the start
- Stop when the frontiers intersect
- Requires the ability to generate "predecessor" states
- Works well only when there are unique start and goal states
- Can (sometimes) lead to finding a solution more quickly

Properties of Bidirectional Search:
1. Time complexity: O(b^(d/2))
2. Space complexity: O(b^(d/2))
3. Complete: yes
4. Optimal: yes
Advantages: reduced time complexity and space complexity.
Disadvantages:
- The space requirement is the most significant weakness of bi-directional search.
- If more than one goal state exists, then multiple explicit state searches are required.
- In the backward search, calculating predecessors is a difficult task.
- If the two searches do not meet at all, complexity arises in the search technique.

Ex: Route-finding problem. Given: a map with states A, B, C, D, E. Task: find a path from A to E.
Search forward from A: frontier A, then B, C.
Search backward from E: frontier E, then D, C.
Answer: the frontiers meet at C; the solution path is A-C-E.
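Bidirectional search can be sketched as follows (a minimal sketch with unit steps; the graph format and helper names are assumptions, and predecessor states are generated by inverting the successor relation, with the graph chosen to match the A-to-E example above):

```python
# Bidirectional search: interleave BFS from the start over successors and
# BFS from the goal over predecessors; stop when the frontiers intersect.
from collections import deque

def bidirectional_search(graph, start, goal):
    # Predecessor relation, needed for the backward search.
    reverse = {}
    for node, succs in graph.items():
        for s in succs:
            reverse.setdefault(s, []).append(node)

    fwd = {start: [start]}            # node -> path from start to node
    bwd = {goal: [goal]}              # node -> reversed path from node to goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        node = fq.popleft()           # expand the forward frontier
        for s in graph.get(node, []):
            if s not in fwd:
                fwd[s] = fwd[node] + [s]
                if s in bwd:          # frontiers intersect at s
                    return fwd[s] + bwd[s][-2::-1]
                fq.append(s)
        node = bq.popleft()           # expand the backward frontier
        for p in reverse.get(node, []):
            if p not in bwd:
                bwd[p] = bwd[node] + [p]
                if p in fwd:          # frontiers intersect at p
                    return fwd[p] + bwd[p][-2::-1]
                bq.append(p)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["E"]}
print(bidirectional_search(graph, "A", "E"))  # ['A', 'C', 'E']
```

Each half of the search only reaches about depth d/2, which is where the O(b^(d/2)) time and space bounds above come from.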
COMPARING UNINFORMED SEARCH STRATEGIES
- Completeness: will a solution always be found if one exists?
- Time: how long does it take to find the solution? (often measured as the number of nodes searched)
- Space: how much memory is needed to perform the search? (often measured as the maximum number of nodes stored at once)
- Optimality: will the optimal (least-cost) solution be found?
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree
- m: maximum depth of the state space
- d: depth of the least-cost solution