
ARTIFICIAL INTELLIGENCE

lecture-1 Introduction
-Abhijit Boruah
DUIET

1
ARTIFICIAL INTELLIGENCE
 The branch of computer science concerned with making
computers behave like humans.
 The term was coined in 1956 by John McCarthy at Dartmouth College.
 Artificial intelligence includes:
 game playing: programming computers to play games.
 expert systems: programming computers to make decisions in real-life
situations.
 natural language: programming computers to understand natural human
languages.
 neural networks: simulating intelligence by attempting to reproduce the
types of connections that occur in animal brains.
 robotics: programming computers to see, hear, and react
to other sensory stimuli.

2
Turing Test

To check if a computer can imitate a human.

“Can machines think?”

- Alan Turing (1950)

3
A Reverse Turing Test

It is the computer’s job to figure out whether the questioner is a human
or not (e.g., a CAPTCHA).

4
Requirements of a Turing Test
 To pass the Turing test the computer would need to
possess the following capabilities:
 NLP (Natural Language Processing): to enable it to communicate successfully
in English or any other language.
 KR (Knowledge Representation): to store what it knows or hears.
 Automated reasoning: to use the stored information to answer questions and to
draw new conclusions.
 Machine Learning: to adapt to new circumstances and to detect and
extrapolate patterns.
 For Total Turing Test
 Computer Vision: to perceive objects
 Robotics: to manipulate objects and move about.

5
Artificial Intelligence
 What does AI involve?
 modeling aspects of human cognition by computer
 study of ill-formed problems
 advanced algorithms research
 …… other important stuff!
 Machine learning, data mining, speech, language, vision, web agents …
and you can actually get paid a lot for having fun!

6
Cognitive Science
 The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to try to construct precise and
testable theories of the workings of the human mind.

 Cognitive Modeling approach

 How to get inside the human mind??


 Introspection
 Psychological experiments
 Brain Imaging/ Neuroimaging

7
What is Intelligence??
 Thinking Rationally: The “Laws of Thought” approach,
originating with Aristotle’s syllogisms and governed by logic.

 Acting Rationally: The rational Agent Approach.


 A rational agent is one that acts so as to achieve the best
outcome or, when there is uncertainty, the best expected
outcome.

 One way to act rationally is to reason logically to the
conclusion that a given action will achieve one’s goal,
and then to act on that conclusion.

8
Foundations of AI
 Philosophy
 Can formal rules be used to draw valid conclusions?
 How does the mind arise from a physical brain?
 Where does knowledge come from?
 How does knowledge lead to action?

 Mathematics
 What are the formal rules to draw valid conclusions?
 What can be computed?
 How do we reason with uncertain information?

9
Foundations of AI
 Economics
 How should we make decisions so as to maximize payoff?
 How should we do this when others may not go along?
 How should we do this when the payoff may be far in the
future?

 Neuroscience.
 How do brains process information?

 Psychology & Cognitive Psychology


 How do humans and animals think and act?

10
Foundations of AI
 Computer Engineering
 How can we build an efficient computer?

 Control Theory & Cybernetics


 How can artifacts operate under their own control?

 Linguistics
 How does language relate to thought?

11
The State of the Art (What can AI do today?)
 Robotic vehicles: In 2007 CMU’s BOSS won the Urban Challenge,
safely driving in traffic through the streets of a closed Air Force base,
obeying traffic rules and avoiding pedestrians and other vehicles.

 Speech recognition: The Google Assistant is a virtual assistant


powered by artificial intelligence and developed by Google that is
primarily available on mobile and smart home devices.

12
The State of the Art (What can AI do today?)
 Autonomous planning and scheduling: A hundred million miles
from Earth, NASA’s Remote Agent program became the first on-
board autonomous planning program to control the scheduling of
operations for a spacecraft. [https://ti.arc.nasa.gov/m/pub-archive/125h/0125%20(Jonsson).pdf].

 Game playing: IBM’s DEEP BLUE became the first computer


program to defeat the world champion in a chess match when it
bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition
match (Goodman and Keene, 1997).

 Spam fighting

13
The State of the Art (What can AI do today?)
 Logistics planning: During the Persian Gulf crisis of 1991, U.S.
forces deployed a Dynamic Analysis and Replanning Tool, DART
(Cross and Walker, 1994), to do automated logistics planning and
scheduling for transportation. This involved up to 50,000 vehicles,
cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters.

 Robotics

 Machine Translation

14
15
Agents & Environments
 An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.

16
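The percept-to-action mapping above can be sketched as a simple loop. This is a minimal toy illustration; the counter environment and agent here are hypothetical, not from the lecture:

```python
# Minimal sketch of the agent-environment loop: the agent perceives
# through "sensors" (the percept) and acts through "actuators" (the action).
# All names here are illustrative.

def run(agent_program, environment, steps):
    """Drive the agent-environment interaction for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.percept()      # sensors
        action = agent_program(percept)      # agent function
        environment.execute(action)          # actuators

class ToyEnvironment:
    """A counter the agent can push toward a target value (toy example)."""
    def __init__(self, value, target):
        self.value, self.target = value, target

    def percept(self):
        return self.value, self.target

    def execute(self, action):
        if action == "inc":
            self.value += 1
        elif action == "dec":
            self.value -= 1

def toy_agent(percept):
    value, target = percept
    if value < target:
        return "inc"
    if value > target:
        return "dec"
    return "noop"

env = ToyEnvironment(value=0, target=3)
run(toy_agent, env, steps=5)
print(env.value)  # 3
```

The agent never manipulates the environment directly; it only sees percepts and emits actions, which is the separation the definition above requires.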
Rationality
 For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and the agent’s built-in knowledge.

 What is the right thing to do?


 How do we measure success?
 How do we define that a job has been done rationally?

17
Rationality
 A performance measure is the criterion of success for an
agent’s behavior.

 Design performance measures
 according to what one actually wants in the environment (correct),
 not according to how one thinks the agent should behave (wrong).

 Rationality depends upon four things:


 The performance measure that defines the success criteria
 The agent’s prior knowledge of the environment.
 The actions that the agent can perform.
 The agent’s percept sequence to date.

18
Specifying the task environment
 Problem specification: Performance measures,
Environment, Actuators, Sensors (PEAS).
 Example: An automated taxi driver.
 Performance Measure: Safe, Fast, Legal, Comfortable
trips, Maximize profits.
 Environment: Roads, other traffic, pedestrians, customers.
 Actuators: steering wheel, accelerator, brakes, clutch,
signal, horn.
 Sensors: camera, sonars, speedometer, GPS, engine
sensors etc.

19
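The PEAS specification above can be recorded as a small data structure. A sketch, with field contents mirroring the taxi example; the class layout itself is illustrative:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Problem specification: Performance measure, Environment,
    Actuators, Sensors."""
    performance: tuple
    environment: tuple
    actuators: tuple
    sensors: tuple

automated_taxi = PEAS(
    performance=("safe", "fast", "legal", "comfortable trip", "maximize profits"),
    environment=("roads", "other traffic", "pedestrians", "customers"),
    actuators=("steering wheel", "accelerator", "brakes", "clutch", "signal", "horn"),
    sensors=("cameras", "sonar", "speedometer", "GPS", "engine sensors"),
)
print(automated_taxi.sensors)
```

Writing the specification down this way forces each of the four PEAS slots to be filled in explicitly before any agent design begins.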
Agent: Spam Filter
 Performance measure:
 Minimizing false positives, false negatives.

 Environment:
 A user’s mail account.

 Actuators:
 Mark as spam, delete, etc.

 Sensors:
 Incoming messages, other info about the account.

20
Environment types
 Fully observable(vs. partially observable):
 The agent’s sensors give it access to the complete state of the
environment at each point in time.
 The agent need not keep any internal state to keep track of the
world.

 Deterministic(vs. stochastic):
 If next state of the environment is completely determined by the
current state and the action executed by the agent.
 Else it is stochastic.

21
Environment types
 Episodic( vs. sequential) :
 The agent’s experience is divided into atomic episodes.
 Each episode consists of the agent perceiving and then performing a
single action.
 The choice of action depends only on the episode itself.
 In sequential, the current decision could affect all future
decisions.

22
Environment types
 Static (vs. dynamic) : If environment can change while
agent is deliberating, then environment is dynamic, else it
is static.
 Semi-dynamic: The environment does not change with the passing of time, but
the agent’s performance score does, e.g., playing chess with a clock.

23
Environment types
 Discrete (vs. continuous): The environment provides a fixed
number of distinct percepts, actions, and environment states.
 Single agent (vs. multi-agent): An agent operating by itself
in an environment, e.g., an agent solving a crime scene,
whereas two agents playing chess form a multi-agent
environment (competitive).
What about the taxi driver scenario??
 Partially observable
 Stochastic
 Sequential
 Dynamic
 Continuous
 Multi-agent (co-operative)

24
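The property checklist for a task environment can be kept as a simple mapping. A sketch comparing the taxi with chess played under a clock; the classifications follow the slides, while the dictionary layout is illustrative:

```python
# Each task environment is classified along the six dimensions above.
taxi_driver = {
    "observable": "partially",
    "deterministic": False,       # stochastic
    "episodic": False,            # sequential
    "static": False,              # dynamic
    "discrete": False,            # continuous
    "agents": "multi (co-operative)",
}

chess_with_clock = {
    "observable": "fully",
    "deterministic": True,
    "episodic": False,            # sequential
    "static": "semi-dynamic",     # the clock runs while the agent deliberates
    "discrete": True,
    "agents": "multi (competitive)",
}

print(taxi_driver["observable"], chess_with_clock["static"])
```

Laying the two environments side by side makes it clear why the taxi is among the hardest cases: it sits at the difficult end of every dimension.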
Structure of agent
 The goal of AI is to design an agent program that
implements the agent function, mapping percepts to actions.

 We assume this program will run on some sort of
computing device with physical sensors and actuators, called an
architecture.

agent = architecture + program

25
Hierarchy of Agent Types
Basic kinds of agent programs that embody the
principles underlying almost all intelligent systems:
 Simple reflex agents
 Model based reflex agents.
 Goal based agents.
 Utility based agents.
 Learning Agents

26
Simple reflex agents
 Simplest kinds of agents.
 Selects actions on the basis of the current percept, ignoring the rest
of the percept history.
 A condition-action rule (also called a situation-action rule,
production, or if-then rule), e.g.:
if car-in-front-is-braking then initiate-braking
 Problems
 The rule set is still usually too big to generate and to store.
 Still no knowledge of non-perceptual parts of the state.
 Still not adaptive to changes in the environment; requires the
collection of rules to be updated if changes occur.
 Still can’t make actions conditional on the previous state.

27
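Condition-action rules like the braking rule above can be written directly as code. A minimal sketch using the classic two-square vacuum world; the locations A and B and the action names are an assumed toy setup:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: the action depends only on the current
    percept (location, status), never on the percept history."""
    location, status = percept
    if status == "dirty":        # condition-action rule
        return "suck"
    if location == "A":          # clean square: move to the other one
        return "move_right"
    return "move_left"

print(reflex_vacuum_agent(("A", "dirty")))  # suck
```

Note the limitation listed above: because the agent keeps no state, it will shuttle between clean squares forever; it cannot remember that both squares are already clean.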
A simple reflex agent program

28
Simple reflex agent architecture

29
Model Based Reflex Agents
 Handles a partially observable environment by keeping track of the
part it can’t see.

 Encodes an “internal state” of the world to remember the past as
contained in earlier percepts.

 Needed because sensors do not usually give the entire state of the
world at each input, so perception of the environment is captured
over time. “State” is used to encode different “world states” that
generate the same immediate percept.

 Knowledge about how the world works is called a model of
the world, and an agent using such a model is a model-based agent.
30
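The internal state can be kept as a small model updated from each percept. A sketch extending the two-square vacuum toy; the stopping rule that uses the remembered status of the other square is an illustrative assumption:

```python
class ModelBasedReflexAgent:
    """Vacuum agent that remembers what it has perceived so far (toy sketch)."""

    def __init__(self):
        self.state = {}  # internal model: last known status of each square

    def program(self, percept):
        location, status = percept
        self.state[location] = status        # update the model from the percept
        if status == "dirty":
            return "suck"
        # Use the model: if the other square is already known to be
        # clean, there is nothing left to do.
        other = "B" if location == "A" else "A"
        if self.state.get(other) == "clean":
            return "noop"
        return "move_right" if location == "A" else "move_left"

agent = ModelBasedReflexAgent()
print(agent.program(("A", "dirty")))  # suck
```

Unlike the simple reflex agent, this one can stop: the "noop" branch depends on a world state (the other square's status) that no single percept reveals.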
Model Based Agent Architecture

31
Goal Based Agents
 Keeping track of the current state is often not enough; we
need to add goals to decide which situations are good.

 Choose actions so as to achieve a (given or computed) goal.

 A goal is a description of a desirable situation.

 The agent program can combine the goal with the model (the
information of model-based agents) in order to choose actions
that achieve the goal.
32
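Combining a model (predicted results of actions) with a goal can be sketched as a one-step lookahead. All names here are illustrative:

```python
def goal_based_action(state, goal, actions, result):
    """Pick an action whose predicted outcome satisfies the goal.

    `result(state, action)` is the agent's model of the world;
    `goal` is a predicate describing the desirable situation.
    """
    for action in actions:
        if goal(result(state, action)):
            return action
    return None  # no single action reaches the goal

# Toy model: the state is a position on a line, the goal is position 5.
result = lambda state, action: state + {"left": -1, "right": +1}[action]
goal = lambda state: state == 5

print(goal_based_action(4, goal, ["left", "right"], result))  # right
```

Real goal-based agents extend this lookahead over sequences of actions, which is what search and planning algorithms do.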
Architecture of goal based agent

33
Example: Tracking a Target

• The robot must keep the target in view.
• The target’s trajectory is not known in advance.
• The robot may not know all the obstacles in advance.
• A fast decision is required.
34
Advantages of goal based agents
 Although it appears less efficient, a goal-based agent is more
flexible because the knowledge that supports its decisions is
represented explicitly and can be modified.

35
Utility based Agents
 Goals alone sometimes may not generate high-quality
behavior in some environments.

 When there are multiple possible alternatives, how do we
decide which one is best?

 A goal specifies only a crude distinction between a happy and an
unhappy state, but we often need a more general performance
measure that describes a “degree of happiness.”

36
Utility based agents
 A utility function maps a state (or a sequence of states)
onto a real number, which describes the associated degree
of happiness.
 Utility function U: State → Real, indicating a measure of
success or happiness in a given state.
 It allows rational decisions to be taken in two cases where
goals are inadequate:
 When there are conflicting goals, only some of which can be achieved (e.g.,
speed vs. safety), U provides a trade-off.
 When there are several goals the agent can aim for, none of which can
be achieved with certainty, U provides a way in which the likelihood of
success can be weighed against the importance of the goals.

37
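The speed-versus-safety trade-off above can be made concrete with a small utility function. The weights, states, and action model are all illustrative toys, not from the lecture:

```python
def utility(state):
    """Map a state onto a real number: a weighted trade-off between the
    conflicting goals of safety and speed (weights are illustrative)."""
    return 0.7 * state["safety"] + 0.3 * state["speed"]

def best_action(state, actions, result):
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy model: speeding up trades two units of safety for one of speed.
def result(state, action):
    if action == "speed_up":
        return {"safety": state["safety"] - 2, "speed": state["speed"] + 1}
    return {"safety": state["safety"] + 1, "speed": state["speed"] - 1}

state = {"safety": 5, "speed": 5}
print(best_action(state, ["speed_up", "slow_down"], result))  # slow_down
```

With safety weighted more heavily than speed, the agent prefers slowing down; changing the weights changes the decision, which is exactly the trade-off U is meant to express.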
Architecture for a model-based, utility-based agent

38
Learning Agents
 Learning allows an agent to operate in initially unknown
environments and to become more competent than its
initial knowledge alone might allow.
 A learning agent has four components:
 A learning element, responsible for making improvements.
 A performance element, for selecting external
actions (previously considered to be the entire agent).
 A critic, whose feedback the learning element uses to determine
how the performance element should be modified to do better.
 A problem generator, for suggesting actions that will lead to new
and informative experiences.

39
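The four components can be sketched in one small class. This is a toy illustration; the reward-based rule update standing in for the critic's feedback is an assumption, not the lecture's design:

```python
import random

class LearningAgent:
    """Toy learning agent with the four components named above."""

    def __init__(self, actions):
        self.actions = actions
        self.rules = {}  # knowledge used by the performance element

    def performance_element(self, percept):
        """Select an external action (previously the entire agent)."""
        if percept in self.rules:
            return self.rules[percept]
        return self.problem_generator()

    def problem_generator(self):
        """Suggest an exploratory action to gain new, informative experience."""
        return random.choice(self.actions)

    def learning_element(self, percept, action, critic_feedback):
        """Use the critic's feedback to improve the performance element."""
        if critic_feedback > 0:
            self.rules[percept] = action  # remember actions that worked

agent = LearningAgent(actions=["left", "right"])
agent.learning_element("wall_ahead", "left", critic_feedback=1)
print(agent.performance_element("wall_ahead"))  # left
```

For known percepts the performance element exploits learned rules; for unknown ones the problem generator explores, which is how the agent outgrows its initial knowledge.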
Architecture of learning agent

40
Summary
 An agent perceives and acts in an environment, has an architecture, and is
implemented by an agent program.
 Task environment – PEAS (Performance, Environment, Actuators, Sensors)
 An ideal agent always chooses the action which maximizes its expected
performance, given its percept sequence so far.
 An autonomous learning agent uses its own experience rather than built-in
knowledge of the environment by the designer.
 An agent program maps from percept to action and updates internal state.
 Reflex agents respond immediately to percepts.
 Goal-based agents act in order to achieve their goal(s).
 Utility-based agents maximize their own utility function.
 Representing knowledge is important for successful agent design.
 The most challenging environments are partially observable,
stochastic, sequential, dynamic, continuous, and multi-agent.

41
Assignment
 For each of the following agents, develop a PEAS
description of the task environment:
1. Robot soccer player;
2. Internet book-shopping agent;
3. Autonomous Mars rover;
4. Mathematician's theorem-proving assistant.

 For each of the agent types, characterize the
environment and select a suitable agent design.

42
