
ZAIN MUSHTAQ

70067944    Assignment No. 1
SECTION C
AI
1. Define the following in your own words

a. State of the Art in AI
The state of the art in Artificial Intelligence is deep neural networks, i.e. deep learning. Deep learning models can handle non-textual data such as speech and images, which gives machines the ability to recognize objects.

b. A knowledge-based agent

Knowledge-based agents are agents that maintain an internal state of knowledge, reason over that knowledge, update it after each observation, and then take actions. Such agents represent the world in some formal representation and act intelligently on it.
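The maintain/reason/update/act cycle described above can be sketched as a small TELL/ASK loop. This is only an illustration: the knowledge base here is a hypothetical plain set of facts, not a real logical inference engine.

```python
# Minimal sketch of a knowledge-based agent's control loop.
# The KnowledgeBase here is a hypothetical stand-in: a set of facts
# with membership queries instead of real logical inference.
class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        """Store a new piece of knowledge."""
        self.facts.add(fact)

    def ask(self, query):
        """'Reason' over stored knowledge (here: simple lookup)."""
        return query in self.facts


def kb_agent_step(kb, percept, t):
    """One step of the generic knowledge-based agent loop:
    update knowledge from the observation, decide, record the action."""
    kb.tell(("percept", percept, t))
    action = "act" if kb.ask(("percept", percept, t)) else "wait"
    kb.tell(("action", action, t))
    return action
```

In a real knowledge-based agent, `ask` would run logical inference over the stored sentences rather than a set lookup, but the overall loop has the same shape.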

c. Expert systems
An expert system is a computer program that uses artificial
intelligence (AI) technologies to simulate the judgment and
behavior of a human or an organization that has expert
knowledge and experience in a particular field.
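One common way such systems encode expert knowledge is as if-then rules fired by forward chaining. The sketch below uses invented medical-style rules purely for illustration; a real expert system would have hundreds of rules and an explanation facility.

```python
# Toy rule-based expert system: forward chaining over if-then rules.
# The rules and facts are invented for illustration only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises hold, adding its conclusion,
    and repeat until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```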

d. Machine Learning
Machine learning (ML) is a type of artificial intelligence (AI) that
allows software applications to become more accurate at
predicting outcomes without being explicitly programmed to do
so. Machine learning algorithms use historical data as input to
predict new output values.
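The idea of "historical data in, predictions out" can be shown with the simplest possible learner: a least-squares line fit. The data points below are made up for the example (they follow y = 2x + 1 exactly).

```python
# Toy illustration of learning from historical data: fit a line
# y = a*x + b by least squares, then predict an unseen value.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Historical" input/output pairs (made up for the example: y = 2x + 1).
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a * 5 + b)   # prediction for the unseen input x = 5 -> 11.0
```

Real machine-learning algorithms fit far richer models, but the pattern is the same: parameters are estimated from past data and then used to predict new outputs.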

e. State space
The state space is the set of all states a system can be in. Each state can be represented as a state vector whose components are the state variables; to abstract from the number of inputs, outputs, and states, these variables are expressed as vectors. In AI search problems, the state space is the set of all states reachable from the initial state by any sequence of actions.
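As a concrete example, the two-square vacuum world's state can be written as a vector (agent location, dirt in A, dirt in B), giving a state space of 2 x 2 x 2 = 8 states:

```python
# Enumerate the state space of the two-square vacuum world.
# A state is a vector (agent location, dirt in A, dirt in B).
from itertools import product

locations = ["A", "B"]
dirt = [True, False]
state_space = list(product(locations, dirt, dirt))
print(len(state_space))   # 2 * 2 * 2 = 8 states
```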
2. Suppose that the performance measure is concerned with just the first T time steps of the environment and ignores everything thereafter. Show that a rational agent's action may depend not just on the state of the environment but also on the time step it has reached.

A rational agent maximizes its expected performance, and here only the first T time steps count toward that performance. An action whose payoff arrives after step T is therefore worthless, so the rational choice can change as the deadline approaches even when the environment state does not. For example, early on the agent should be willing to start a task that takes several steps to pay off, but near step T it should prefer an action that pays off immediately, since a long task could no longer finish within the measured window. Hence the rational action depends on the current time step as well as the state.
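This time dependence can be sketched directly. The action names and the 3-step task length below are invented for illustration:

```python
# Sketch: an agent whose choice depends on the current time step,
# because only the first T steps count toward the performance measure.
# Action names and the task length are invented for illustration.
def choose_action(t, T):
    """Start a 3-step task only if it can finish before the deadline T."""
    LONG_TASK_STEPS = 3
    if T - t >= LONG_TASK_STEPS:
        return "start_long_task"   # higher payoff, but needs 3 steps
    return "quick_task"            # small payoff, completes immediately

# Same environment state, different time steps -> different rational actions.
print(choose_action(0, 10))   # start_long_task
print(choose_action(9, 10))   # quick_task
```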

3. For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate.

a. An agent that senses only partial information about the state cannot be perfectly rational.

False. Rationality is judged with respect to the percept sequence the agent actually receives: a rational agent maximizes its expected performance given what it has perceived so far. An agent with only partial information can still do this perfectly well; the variables it cannot observe are no fault of its own.

b. There exist task environments in which no pure reflex agent can behave rationally.

True. A pure reflex agent chooses its action from the current percept alone, so in a partially observable environment where the right action depends on the percept history, no pure reflex agent can behave rationally. For example, a vacuum agent whose percept does not include its location cannot reliably decide whether to move left or right without remembering where it has already been.

c. There exists a task environment in which every agent is rational.

True. Consider an environment in which every action yields exactly the same performance score, for example one whose performance measure awards a constant value regardless of what happens. Nothing the agent perceives or does makes any difference there, so every agent is trivially rational.

d. The input to an agent program is the same as the input to the agent function.

False. The agent function is a mathematical abstraction that maps the entire percept history to an action, whereas the agent program is a concrete implementation running on a physical system: it receives only the current percept as input and must keep any history in internal state.

e. A perfectly rational poker-playing agent never loses.

False. The agent cannot know its opponents' hands, so even perfectly rational play only maximizes its expected winnings; the cards may still fall against it, and given enough games it will eventually lose some of them.

4. For each of the following activities, give a PEAS description of the task environment and characterize it in terms of its properties.

a. Playing soccer.
b. Shopping for used AI books on the Internet.
c. Playing a tennis match.
d. Performing a high jump.
e. Knitting a sweater.

a. Playing soccer
   Performance measure: scoring, no penalties, not allowing the other team to score
   Environment: soccer field, players, goalie, referees, coach, soccer ball, net
   Actuators: legs, head, hands
   Sensors: eyes, ears

b. Shopping for used AI books on the Internet
   Performance measure: low price, cost of procuring an AI book
   Environment: Internet, rival shopping sites, customers
   Actuators: keyboard, mouse
   Sensors: monitor

c. Playing a tennis match
   Performance measure: attaining the highest score to win the match
   Environment: rackets, net, referee, ball, players
   Actuators: human body
   Sensors: eyes, ears

d. Performing a high jump
   Performance measure: attaining maximum height
   Environment: jumping pole, padding, jumper, referee
   Actuators: legs

e. Knitting a sweater
   Performance measure: creating a well-made full sweater
   Environment: knitter, yarn, knitting needles, directions
   Actuators: hands
   Sensors: eyes, touch

5. Discuss possible agent programs for each of the following stochastic versions, keeping in view the vacuum environments:

Murphy's law: twenty-five percent of the time, the Suck action fails to clean the floor if it is dirty and deposits dirt onto the floor if the floor is clean. How is your agent program affected if the dirt sensor gives the wrong answer 10% of the time?

The failure of the Suck action causes no problem at all, as long as we replace the reflex agent's 'Suck' action with 'Suck until clean'. If the dirt sensor gives wrong answers from time to time, the agent might wrongly leave a dirty location and only clean it when its tour brings it back there, or it might stay in each location for several steps to get a more reliable measurement before leaving. Both strategies have their own advantages and disadvantages: the first might leave a dirty location and never return, while the second might wait too long in each location.
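The 'Suck until clean' idea, combined with repeated sensing to cope with the 10% sensor error, can be sketched as follows. The sensor and actuator callables are hypothetical stand-ins for the real environment interface:

```python
# Sketch of 'Suck until clean' under Murphy's law, with a majority
# vote over a noisy dirt sensor (wrong ~10% of the time). The
# read_sensor and suck callables are hypothetical stand-ins for the
# real environment interface.
def sense_dirty(read_sensor, votes=5):
    """Read the noisy sensor several times and take a majority vote,
    making a wrong overall reading much less likely than 10%."""
    hits = sum(1 for _ in range(votes) if read_sensor())
    return hits * 2 > votes

def suck_until_clean(read_sensor, suck, max_tries=20):
    """Keep sucking while the (majority-voted) sensor says dirty.
    Suck may fail 25% of the time under Murphy's law; we just retry."""
    tries = 0
    while sense_dirty(read_sensor) and tries < max_tries:
        suck()
        tries += 1
    return tries
```

The `max_tries` cap is the trade-off discussed above in code form: it bounds how long the agent will wait at one location before moving on.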
