
Artificial Intelligence and Neural Network

Fall 2021, Spring 2022


1.(a) Draw a functional block diagram of an agent and all its components using a real-life
example. A company is planning to design an agent for a rice cooker. How would you design
such a model? Discuss the environment.

Artificial intelligence refers to intelligence demonstrated by machines, unlike the natural
intelligence displayed by humans and animals, which involves consciousness and emotionality.

The world around us is full of agents, such as thermostats, cellphones, and cameras; even we
ourselves are agents.
• Before moving forward, we should first know about sensors, actuators, and effectors.
• Sensor: A sensor is a device that detects changes in the environment and sends this
information to other electronic devices. An agent observes its environment through
sensors.
• Actuators: Actuators are components of a machine that convert energy into motion.
They are responsible for moving and controlling the system. An actuator can
be an electric motor, a gear, a rail, etc.
• Effectors: Effectors are the devices that affect the environment, such as legs,
wheels, arms, fingers, wings, fins, and display screens.
An agent for a rice cooker:
• An intelligent agent is an autonomous entity that acts upon an environment using
sensors and actuators to achieve its goals, and it may learn from the environment to do
so. A thermostat is an example of an intelligent agent, and a rice cooker can be modeled
the same way: its sensors (e.g., a temperature sensor) perceive the state of the rice,
and its actuators (the heating element and mode switch) act on the environment (the pot).
• Following are the four main rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: Decisions should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
A minimal sketch of such an agent for the rice cooker is given below.
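As an illustration of these rules, here is a minimal, hypothetical sketch of a simple reflex agent for a rice cooker. The sensor reading (pot temperature) and the 100 °C threshold are illustrative assumptions, not a real controller specification:

```python
COOK_TEMP = 100.0  # deg C (assumed); once the water boils off, the pot temperature rises past this

def rice_cooker_agent(temperature):
    """Percept: pot temperature (sensor). Action: heater mode (actuator)."""
    if temperature < COOK_TEMP:
        return "HEAT"   # water still present: keep cooking
    return "WARM"       # water absorbed: switch to keep-warm mode

for t in (25.0, 98.0, 103.0):
    print(t, "->", rice_cooker_agent(t))   # HEAT, HEAT, WARM
```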

(b) Distinguish between supervised and unsupervised learning algorithms.

Supervised learning algorithm vs. unsupervised learning algorithm:

• Definition: Supervised learning algorithms try to model relationships and dependencies
between the target prediction output and the input features. Unsupervised learning trains
models on unlabeled datasets and lets them act on that data without any supervision.
• Goal: In supervised learning, the goal is to predict outcomes for new data. In unsupervised
learning, the goal is to get insights from large volumes of new data.
• Complexity: Supervised learning is a comparatively simple method for machine learning;
unsupervised learning models are computationally complex.
• Drawbacks: Supervised learning models can be time-consuming to train; unsupervised
learning methods can produce wildly inaccurate results.
• Applications: Supervised learning models are ideal for spam detection, sentiment analysis,
weather forecasting, and pricing predictions; unsupervised learning is a great fit for anomaly
detection, recommendation engines, customer personas, and medical imaging.
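A minimal contrast of the two paradigms, sketched with scikit-learn (assumed available); the toy data is hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans                    # unsupervised: no labels needed
from sklearn.linear_model import LogisticRegression  # supervised: needs labels

X = np.array([[1.0], [2.0], [8.0], [9.0]])
y = np.array([0, 0, 1, 1])                  # labels available -> supervised learning

clf = LogisticRegression().fit(X, y)        # learns the input -> label mapping
print(clf.predict([[1.5], [8.5]]))          # predicts outcomes for new data: [0 1]

km = KMeans(n_clusters=2, n_init=10).fit(X)  # no labels -> discovers structure itself
print(km.labels_)                           # cluster assignments found from X alone
```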

(c) There are six main groups of environments, and an environment can belong to multiple
groups. Fill in the following table for the real-life examples given in the first column.

The environment is the surrounding world around the agent, and is not part of the agent
itself. It is important to understand the nature of the environment when solving a problem
using artificial intelligence. For example, when programming a chess bot, the environment is
the chessboard; when creating a room-cleaning robot, the environment is the room.
Each environment has its own properties, and agents should be designed so that they can
explore environment states using sensors and act accordingly using actuators. Below, all
types of environments are described with real-life examples.

Fully observable vs. partially observable
In a fully observable environment, the agent knows the complete state of the environment
at any given time; no portion of the environment is hidden from the agent.
Real-life example: While driving a car on the road (environment), the driver (agent) can see
the road conditions, signboards, and pedestrians at any given time and drive accordingly.
So the road is a fully observable environment for a driver.
In a partially observable environment, the agent does not know the complete state of the
environment at a given time.
Real-life example: A card game is a classic partially observable environment. Why only
partially observable? Because although some parts of the environment (e.g., the player's own
hand, the rules of the game) are known to the player (agent), the cards in the opponent's
hand are hidden.

Deterministic vs. stochastic
Deterministic environments are those in which the next state is completely determined by the
current state and the agent's action, so there is no uncertainty in the environment.
Real-life example: A traffic signal is a deterministic environment, since the next signal is
known to a pedestrian (agent).
A stochastic environment is the opposite of a deterministic one: the next state is not fully
predictable for the agent, so randomness exists in the environment.
Real-life example: A radio station is a stochastic environment, because the listener does not
know what the next song will be; playing soccer is likewise stochastic.

Episodic vs. sequential
An episodic environment is one in which each state is independent of the others: the action
taken in one state has nothing to do with the next state.
Real-life example: A support bot (agent) answers one question, then another, and so on, so
each question-answer pair is a single episode.
A sequential environment is one in which the next state depends on the current action, so the
agent's current action can change all future states of the environment.
Real-life example: Playing tennis, where a player observes the opponent's shot and then
chooses an action.

Static vs. dynamic
A static environment remains completely unchanged while the agent is perceiving it.
Real-life example: A room (environment) being cleaned by a cleaning robot (agent) is a
static environment, since the room does not change while it is being cleaned.
A dynamic environment can change while the agent is perceiving it, so the agent must keep
observing the environment while taking actions.
Real-life example: Soccer is a dynamic environment in which the players' positions keep
changing throughout the game, so a player hits the ball while observing the opposing team.

Discrete vs. continuous
A discrete environment consists of a finite number of states, and the agent has a finite
number of actions.
Real-life example: The choice of a move (action) in a tic-tac-toe game is finite, over a
finite number of squares on the board (environment).
In a continuous environment, there can be an infinite number of states, so the possible
actions are also infinite.
Real-life example: In a basketball game, the positions of the players (environment) change
continuously, and a throw (action) toward the basket can have any angle and speed, so there
are infinitely many possibilities.

Single-agent vs. multi-agent
A single-agent environment is one that is explored by a single agent: all actions in the
environment are performed by one agent.
Real-life example: Playing tennis against a wall is a single-agent environment, since there
is only one player.
If two or more agents take actions in the environment, it is known as a multi-agent
environment.
Real-life example: A soccer match is a multi-agent environment.

Example                  | Observable | Deterministic/ | Episodic/  | Static/ | Discrete/  | Single/
                         |            | Stochastic     | Sequential | Dynamic | Continuous | Multi-agent
Brushing teeth           | Fully      | Stochastic     | Sequential | Static  | Continuous | Single
Playing cards            | Partially  | Stochastic     | Sequential | Dynamic | Continuous | Multi-agent
Autonomous vehicles      | Fully      | Stochastic     | Sequential | Dynamic | Continuous | Multi-agent
Playing chess            | Fully      | Deterministic  | Sequential | Static  | Discrete   | Multi-agent
Ordering in a restaurant | Fully      | Deterministic  | Episodic   | Static  | Discrete   | Single
Playing soccer           | Partially  | Stochastic     | Sequential | Dynamic | Continuous | Multi-agent

2.(a) A maze is shown in Figure 2, where square S is the initial position and G is the goal
position. The goal of our agent is to find a way from the initial position to the final
position. The possible actions are moving up, down, left, and right to an adjacent square.
The shaded squares are obstacles. No state is visited twice; label the start state as '1',
the next state as '2', and so on.

[Figure 2: the maze grid, showing start square S (labeled 1), goal square G, and shaded obstacle squares]
Starting from node S, show with an illustration how node G is reached by BFS and by DFS. (A search sketch in code is given below.)
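Since the original figure is not fully recoverable, the grid below is a small placeholder maze ('#' marks an obstacle); the sketch shows how BFS (queue) and DFS (stack) traverse it without visiting any state twice:

```python
from collections import deque

GRID = ["G..",
        ".#.",
        "S.."]          # placeholder maze: S = start, G = goal, '#' = obstacle

def neighbors(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def search(start, goal, bfs=True):
    frontier = deque([(start, [start])])
    visited = {start}                       # no state is visited twice
    while frontier:
        (r, c), path = frontier.popleft() if bfs else frontier.pop()
        if (r, c) == goal:
            return path                     # squares in visiting order: 1, 2, ...
        for nxt in neighbors(r, c):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))

start, goal = (2, 0), (0, 0)                # (row, column) of S and G in GRID
print("BFS:", search(start, goal, bfs=True))
print("DFS:", search(start, goal, bfs=False))
```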
(b) Distinguish between DLS and BFS algorithms.

Parameter BFS DFS DLS


Abbreviation Breadth First Search Depth First Search Depth Limited Search
Tree Traversal Level wise Depth wise Depth wise
Implementation Queue(FIFO) Stack(LIFO) Stack(LIFO)
Memory Required Higher Lower Lower
Backtracking Not allowed Allowed Allowed
Infinite Loops No Yes No
Optimal Yes No No
Time complexity O(b^d) O(b^m) O(b^l), where l is the depth limit

(c) Translate the following to Predicate Logic

1. Every house is a physical object

∀x.(house(x) → physical object(x)),

where house and physical object are unary predicate symbols.

2. Some physical objects are houses

∃x.(physical object(x) ∧ house(x))

3. Peter does not own a house.

¬∃x.(owns(Peter, x) ∧ house(x))

4. “Everybody owns a house”

∀x.∃y.(owns(x, y) ∧ house(y))

5. “Sue owns a house”

∃x.(owns(Sue, x) ∧ house(x))

6. “Somebody does not own a house”

∃x.∀y.(owns(x, y) → ¬house(y))
The truth relation: Let S be the signature consisting of the unary predicates house and human, the
binary predicate owns, and the individual constant Sue. Give an S-interpretation F with F |= G for the
following sentences G:

• there are houses:

∃x.house(x)

• there are human beings:

∃x.human(x)

• no house is a human being:

∀x.(house(x) → ¬human(x))

• some humans own a house:

∃x.∃y.(human(x) ∧ house(y) ∧ owns(x, y))

• Sue is human:

human(Sue) .

• Sue does not own a house:

¬∃x.(owns(Sue, x) ∧ house(x))

• every house has an owner:

∀x.(house(x) → ∃y.owns(y, x))

3.(a) Consider the following tree. What is the minimax value for the root? Use minimax to
determine the best strategy for both players, and give the actions that would be chosen and
their values. How would you re-order the nodes to get maximal pruning when using the
alpha-beta algorithm? Feel free to use the figure to show the ordering.

[Figure: a game tree with Max node A at the root, Min nodes B and C below it, and Max nodes D, E, F and G at the next level]
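Because the leaf values are not recoverable from the figure, the sketch below runs minimax with alpha-beta pruning on a hypothetical tree of the same shape (A is Max, B and C are Min, D-G are Max over two leaves each):

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):      # a leaf: return its utility
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        val = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:                   # prune the remaining children
            break
    return best

# Hypothetical leaf values: B = [D, E] = [[3, 5], [6, 9]], C = [F, G] = [[1, 2], [0, 7]]
tree = [[[3, 5], [6, 9]], [[1, 2], [0, 7]]]
print(alphabeta(tree, maximizing=True))     # minimax value of the root A: 5
```

Re-ordering the children so that each node's strongest child is explored first maximizes the number of branches that alpha-beta can prune.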
(b) There are two jugs, of volume A liters and B liters. Neither has any measuring marks on
it. There is a pump that can be used to fill the jugs with water. How can you get exactly
x liters of water into the A-liter jug, assuming an unlimited supply of water? Let's assume we
have A = 4 liter and B = 3 liter jugs, and we want exactly 2 liters of water in jug A (the
4-liter jug). How will you do this? Write the rules, states, and process for this problem.
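A minimal sketch of the production rules as a breadth-first search over (jug A, jug B) states; the rules are: fill either jug from the pump, empty either jug, or pour one jug into the other until the source is empty or the destination is full:

```python
from collections import deque

A_CAP, B_CAP, TARGET = 4, 3, 2        # from the question: reach 2 liters in jug A

def successors(a, b):
    yield (A_CAP, b)                                      # rule 1: fill A from the pump
    yield (a, B_CAP)                                      # rule 2: fill B from the pump
    yield (0, b)                                          # rule 3: empty A
    yield (a, 0)                                          # rule 4: empty B
    pour = min(a, B_CAP - b); yield (a - pour, b + pour)  # rule 5: pour A into B
    pour = min(b, A_CAP - a); yield (a + pour, b - pour)  # rule 6: pour B into A

frontier, visited = deque([((0, 0), [(0, 0)])]), {(0, 0)}
while frontier:
    (a, b), path = frontier.popleft()
    if a == TARGET:
        print(path)     # a shortest sequence of states ending with 2 liters in jug A
        break
    for state in successors(a, b):
        if state not in visited:
            visited.add(state)
            frontier.append((state, path + [state]))
```

One such solution: (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).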
(c) How can you overcome the infinite loop problem of DFS?
In graph search algorithms (used frequently in AI), DFS's main advantage is space efficiency;
this is its main advantage over BFS. However, if you keep track of visited nodes, you lose
this advantage, since you need to store all visited nodes in memory. Note that the set of
visited nodes grows drastically over time, and for very large or infinite graphs it might not
fit in memory.
Moreover, DFS can descend into an infinite branch (in infinite graphs). An infinite branch is
a branch that does not end (it always has more children) and also does not lead to the target
node, so DFS might keep expanding this branch infinitely and 'miss' the good branch that
leads to the target node.

How can DFS get stuck in a cycle, and what can we do to avoid it?
If you do not check for cycles, then DFS can get stuck in one and never find its target,
whereas BFS always expands out to all nodes at the next depth and therefore will eventually
find its target, even if cycles exist. Put simply: if your graph can have cycles and you are
using DFS, then you must account for cycles, as the sketch below does.
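A minimal sketch of depth-first search with a visited set, which prevents revisiting nodes and therefore breaks cycles (graph search rather than tree search); the toy graph is hypothetical:

```python
def dfs(graph, start, goal, visited=None):
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)                    # mark before expanding: no node visited twice
    for nxt in graph.get(start, []):
        if nxt not in visited:            # skip already-explored nodes -> no cycles
            path = dfs(graph, nxt, goal, visited)
            if path:
                return [start] + path
    return None

# 'A' and 'B' form a cycle; without the visited set this recursion would never end.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["D"]}
print(dfs(graph, "A", "D"))               # ['A', 'B', 'C', 'D']
```

Depth-limited search (DLS) is the other common remedy: it cuts off the search at a fixed depth l, so an infinite branch can never be expanded forever.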

4.(a) We studied the Wumpus World game in class. A different version is presented here.
Find how to reach the square where the gold is located. Also prove that the Wumpus is in
square (1,4) of the cave.

    +--------+--------+--------------+--------+
  4 | Wumpus | Stench | Breeze       | Pit    |
    +--------+--------+--------------+--------+
  3 | Stench | Breeze | Pit          | Breeze |
    +--------+--------+--------------+--------+
  2 |        |        | Breeze, Gold |        |
    +--------+--------+--------------+--------+
  1 | Agent  | Breeze | Pit          | Breeze |
    +--------+--------+--------------+--------+
        1        2           3           4
(b) Let A be a fuzzy set that describes a student, as shown in Figure 3; the elements with
their corresponding maximum membership values are also given:
A = {(P, 0.6), (F, 0.4), (G, 0.2), (VG, 0.2), (E, 0)}

Here, the linguistic variable P represents a Pass student, F stands for a Fair student, G
represents a Good student, VG represents a Very Good student, and E an Excellent student.

Using the weighted average method, find the defuzzified value.

[Figure 3: the membership functions μ(x) plotted over marks x from 50 to 100, with membership values from 0.2 to 0.8 marked on the vertical axis]
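The weighted average method takes the membership-weighted mean of the peak locations. Assuming from Figure 3 that the peaks of P, F, G, VG, and E lie at x = 60, 70, 80, 90, and 100 respectively (an assumption, since the exact figure is not recoverable), the defuzzified value is:

z* = Σ μ(x̄ᵢ)·x̄ᵢ / Σ μ(x̄ᵢ)
   = (0.6·60 + 0.4·70 + 0.2·80 + 0.2·90 + 0·100) / (0.6 + 0.4 + 0.2 + 0.2 + 0)
   = 98 / 1.4
   = 70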
(c) X={x1,x2}, Y={y1,y2}, and Z={z1,z2,z3}. Consider the following fuzzy relations:

R = | 0.7  0.6 |        S = | 0.8  0.5  0.4 |
    | 0.8  0.3 |            | 0.1  0.6  0.7 |

    Relation R                 Relation S

Using max-product composition, find T = R ∘ S.
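A worked sketch of the composition, where each entry is T(xᵢ, zₖ) = maxⱼ [R(xᵢ, yⱼ) · S(yⱼ, zₖ)]:

T(x1, z1) = max(0.7·0.8, 0.6·0.1) = max(0.56, 0.06) = 0.56
T(x1, z2) = max(0.7·0.5, 0.6·0.6) = max(0.35, 0.36) = 0.36
T(x1, z3) = max(0.7·0.4, 0.6·0.7) = max(0.28, 0.42) = 0.42
T(x2, z1) = max(0.8·0.8, 0.3·0.1) = max(0.64, 0.03) = 0.64
T(x2, z2) = max(0.8·0.5, 0.3·0.6) = max(0.40, 0.18) = 0.40
T(x2, z3) = max(0.8·0.4, 0.3·0.7) = max(0.32, 0.21) = 0.32

T = | 0.56  0.36  0.42 |
    | 0.64  0.40  0.32 |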

5.(a) Create a fuzzy control system which models how you might choose to tip at a restaurant.
When tipping, you consider the service and food quality, rated on a scale of 0 to 10. You use
this to leave a tip of between 0% and 25%.
We would formulate this problem as:

Antecedents(inputs)

Service

● Universe (i.e., crisp value range): How good was the service of the wait staff, on a scale of
0 to 10?
● Fuzzy set (i.e., fuzzy value range): poor, acceptable, amazing

Food quality

● Universe: How tasty was the food, on a scale of 0 to 10?


● Fuzzy set : bad, decent, great

Consequents (outputs)

Tip

● Universe: How much should we tip, on a scale of 0% to 25%?


● Fuzzy set: low, medium, high

(a) Design a rule-based system.
(b) Draw the membership functions (use illustrations).
(c) Show that if the service and food quality are great, the tip will be high.
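A minimal sketch of parts (a)-(c) using the scikit-fuzzy control API (assuming the library is available); automf(3) names the input sets 'poor', 'average', 'good', which stand in for the labels above, and the triangular membership points are illustrative choices:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes match the question: quality and service on 0-10, tip on 0-25%.
quality = ctrl.Antecedent(np.arange(0, 11, 1), 'quality')
service = ctrl.Antecedent(np.arange(0, 11, 1), 'service')
tip = ctrl.Consequent(np.arange(0, 26, 1), 'tip')

quality.automf(3)   # auto-generates 'poor', 'average', 'good' membership functions
service.automf(3)

tip['low'] = fuzz.trimf(tip.universe, [0, 0, 13])      # triangular membership functions
tip['medium'] = fuzz.trimf(tip.universe, [0, 13, 25])
tip['high'] = fuzz.trimf(tip.universe, [13, 25, 25])

rules = [  # the rule base for part (a)
    ctrl.Rule(quality['poor'] | service['poor'], tip['low']),
    ctrl.Rule(service['average'], tip['medium']),
    ctrl.Rule(service['good'] | quality['good'], tip['high']),
]

tipping = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
tipping.input['quality'] = 9.5   # great food ...
tipping.input['service'] = 9.5   # ... and great service
tipping.compute()
print(tipping.output['tip'])     # lands near the top of 0-25%, i.e. a high tip (part c)
```

quality.view(), service.view(), and tip.view() plot the membership functions for part (b).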
6.(a) Solve the following knapsack problem using a genetic algorithm. Find which items should
be kept in the knapsack so as to maximize its value without breaking it.

Item    Weight (kg)    Value (TK)


A 5 12
B 3 5
C 7 10
D 2 7
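A minimal GA sketch for this instance; since the question does not state the knapsack's weight capacity, CAPACITY = 10 kg below is an illustrative assumption:

```python
import random

ITEMS = [("A", 5, 12), ("B", 3, 5), ("C", 7, 10), ("D", 2, 7)]   # (name, kg, TK)
CAPACITY = 10   # assumed: the question omits the capacity

def fitness(chrom):
    """Total value of the packed items; 0 if the weight limit is broken."""
    weight = sum(w for gene, (_, w, _) in zip(chrom, ITEMS) if gene)
    value = sum(v for gene, (_, _, v) in zip(chrom, ITEMS) if gene)
    return value if weight <= CAPACITY else 0

def evolve(pop_size=8, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in ITEMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                  # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(ITEMS))      # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [name for gene, (name, _, _) in zip(best, ITEMS) if gene], fitness(best)

print(evolve())   # with capacity 10 the optimum is (['A', 'B', 'D'], value 24)
```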
(b) Write GA using a flowchart.
7.(a) Explain the neural representation of a NAND gate using the perceptron algorithm.
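A minimal sketch of a perceptron computing NAND with fixed weights; w = (-2, -2) and bias b = 3 are one standard choice (any weights satisfying the four truth-table inequalities work):

```python
def perceptron_nand(x1, x2, w=(-2, -2), b=3):
    activation = w[0] * x1 + w[1] * x2 + b
    return 1 if activation > 0 else 0     # step (threshold) activation

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", perceptron_nand(x1, x2))
# 0 0 -> 1, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0  (the NAND truth table)
```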

(b) The cooking scheduling problem discussed in class is given below.

Food items to be cooked: apple pie, burger, and chicken.

Weather: sunny and rainy.

Monday Tuesday Wednesday Thursday Friday Saturday

Apple pie Apple pie Burger Chicken Chicken Apple pie

The cooking is decided based on the weather, sunny or rainy. If the weather is sunny, then you
do not cook for the next day (the same dish is repeated), whereas if the weather is rainy, you
do not go outside and have plenty of time to cook the next item in the list. Draw an RNN for
solving this problem and explain how your model solves it. (A sketch of the recurrence the RNN
must learn is given below.)
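As a plain-Python illustration (not a trained network), the recurrence the RNN must learn is: the hidden state carries yesterday's dish, and the weather input decides whether to repeat it (sunny) or advance to the next dish in the cycle (rainy). The weather sequence below is inferred from the table:

```python
MENU = ["Apple pie", "Burger", "Chicken"]

def step(weather, prev_dish):
    """One unrolled RNN step: input = today's weather, hidden state = today's dish."""
    if weather == "sunny":
        return prev_dish                                   # sunny: no cooking, repeat
    return MENU[(MENU.index(prev_dish) + 1) % len(MENU)]   # rainy: next in the list

dish = "Apple pie"   # Monday's dish
for weather in ["sunny", "rainy", "rainy", "sunny", "rainy"]:   # Mon-Fri, from the table
    dish = step(weather, dish)
    print(weather, "->", dish)   # reproduces Tuesday through Saturday in the table
```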
