
AI unit 2 QB

1] What is knowledge representation? What are the different kinds of knowledge that need to be represented?
2] Write short note on the AI knowledge cycle.
3] Explain the following representation techniques.
4] Write short note on Propositional Logic.
5] Explain the concept of First Order Logic in AI.
6] Write note on.
7] Short note on support vector machines.
8] What is an Artificial Neural Network?
9] What is entropy? How do we calculate it?
10] Explain single-layer feed forward neural networks.
11] Short note on multilayer feed forward neural networks.
12] What is a Backpropagation Neural Network?
13] Note on supervised Learning.
14] Note on nearest neighbour model
15] Write note on deductive reasoning.
16] Short note on Inductive reasoning.
17] How is reasoning done using abductive reasoning?
18] What is the role of planning in AI?
19] Explain the concept of fuzzy logic.
20] Explain any 5 membership functions of a fuzzy logic system.
21] What are parametric models? Give their advantages.

Types of Parametric Models


There are two types of Parametric Models. These are discussed below:
1. Constructive Solid Geometry (CSG)
2. Boundary Representation (BR)

Advantages of Parametric Modelling

The advantages of 3D parametric modelling over conventional 2D drawings are as follows:
- It is able to create flexible designs.
- Enhanced product visualisation, because you can start with basic objects and little detail.
- Enhanced downstream application integration and shortened engineering cycle time.
- New designs can be made using already created design data.
- Rapid design turnaround and improved efficiency.
22] Explain non-parametric models.
23] What is regression? What are its types?

The types of regression are as follows:

1) Simple linear regression
2) Multiple linear regression
3) Polynomial regression
4) Logistic regression
24]Explain-
25] Explain the concept of classification used in machine learning.
26] What is bias? What is variance? What is Bias/variance Tradeoff?
27] Note on overfitting in decision tree.
28] What do you mean by regularization? How does it work?
29] Explain –
30] Describe ensemble learning.
31] What is gradient descent? How does it work?
32] Explain the Restaurant wait problem with respect to decision trees
representation.
33] Discuss different forms of learning models.
34] Differentiate between supervised and unsupervised learning.

Difference between Supervised and Unsupervised Learning:

The distinction between supervised and unsupervised learning depends on whether the learning algorithm uses pattern-class information. Supervised learning assumes the availability of a teacher or supervisor who classifies the training examples into classes, whereas unsupervised learning must identify the pattern-class information as part of the learning process. Supervised learning algorithms use the class membership of each training instance; this information lets them detect pattern misclassifications and feed that error back into the learning process. Unsupervised learning algorithms work with unlabelled instances, which they process blindly or heuristically. Unsupervised learning algorithms often have lower computational complexity and lower accuracy than supervised learning algorithms.
In tabular form:

Aspect                   | Supervised Learning                          | Unsupervised Learning
Input data               | Uses known, labelled data as input           | Uses unlabelled data as input
Computational complexity | Less computationally complex                 | More computationally complex
Real time                | Uses off-line analysis                       | Uses real-time analysis of data
Number of classes        | Number of classes is known                   | Number of classes is not known
Accuracy of results      | Accurate and reliable results                | Moderately accurate and reliable results
Output data              | Desired output is given                      | Desired output is not given
Model                    | Cannot learn models as large and complex as unsupervised learning can | Can learn larger and more complex models than supervised learning
Training data            | Training data is used to infer the model     | Labelled training data is not used
Another name             | Also called classification                   | Also called clustering
Test of model            | We can test our model                        | We cannot test our model
Example                  | Optical character recognition                | Finding a face in an image
35] Difference between linear regression and logistic regression.

Linear regression is a supervised machine learning algorithm for regression. Regression models a target prediction value based on independent variables; it is mostly used for finding the relationship between variables and for forecasting. Regression models differ in the kind of relationship they assume between the dependent and independent variables and in the number of independent variables used. Logistic regression is basically a supervised classification algorithm: in a classification problem, the target variable (or output) y can take only discrete values for a given set of features (or inputs) X.
1. Linear regression is a supervised regression model. Logistic regression is a supervised classification model.

2. Equation of linear regression:
       y = a0 + a1x1 + a2x2 + ... + aixi
   Equation of logistic regression:
       y(x) = e^(a0 + a1x1 + a2x2 + ... + aixi) / (1 + e^(a0 + a1x1 + a2x2 + ... + aixi))
   Here, y = response variable, xi = ith predictor variable, and ai = average effect on y as xi increases by 1.

3. Linear regression predicts a continuous value. Logistic regression predicts 1 or 0.

4. In linear regression no activation function is used. In logistic regression an activation function (the sigmoid) is used to convert the linear regression equation into the logistic regression equation.

5. In linear regression no threshold value is needed. In logistic regression a threshold value is added to turn the predicted probability into a class.

6. In linear regression the Root Mean Square Error (RMSE) is calculated to update the weight values. In logistic regression precision is used to update the weight values.

7. In linear regression the dependent variable should be numeric and the response variable is continuous. In logistic regression the dependent variable has only two categories; logistic regression estimates the odds of an outcome of the dependent variable given a set of quantitative or categorical independent variables.

8. Linear regression is based on least-squares estimation. Logistic regression is based on maximum-likelihood estimation.

9. In linear regression, when we plot the training data, a straight line can be drawn that touches the maximum number of points. In logistic regression, any change in a coefficient changes both the direction and the steepness of the logistic function: positive slopes give an S-shaped curve and negative slopes give a Z-shaped curve.

10. Linear regression is used to estimate the dependent variable when the independent variables change, e.g. predicting the price of houses. Logistic regression is used to calculate the probability of an event, e.g. classifying tissue as benign or malignant.

11. Linear regression assumes a normal (Gaussian) distribution of the dependent variable. Logistic regression assumes a binomial distribution of the dependent variable.

12. Applications of linear regression: financial risk assessment, business insights, market analysis. Applications of logistic regression: medicine, credit scoring, hotel booking, gaming, text editing.
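The two prediction rules in point 2 can be sketched directly in code: logistic regression passes the same linear combination through the sigmoid and then applies a threshold to get a class label. A minimal Python sketch (the coefficient values and inputs are illustrative, not from any fitted model):

```python
import math

def linear_predict(coeffs, xs):
    # y = a0 + a1*x1 + ... + an*xn  (continuous output)
    return coeffs[0] + sum(a * x for a, x in zip(coeffs[1:], xs))

def logistic_predict(coeffs, xs, threshold=0.5):
    # Same linear combination, passed through the sigmoid e^z / (1 + e^z),
    # then thresholded to produce a 0/1 class label.
    z = linear_predict(coeffs, xs)
    p = math.exp(z) / (1 + math.exp(z))
    return p, 1 if p >= threshold else 0

coeffs = [0.5, 2.0]                       # hypothetical a0, a1
print(linear_predict(coeffs, [3.0]))      # 6.5 (continuous value)
p, label = logistic_predict(coeffs, [3.0])
print(label)                              # 1 (p is close to 1 here)
```

This makes point 4 concrete: the only difference between the two predictors is the activation function and the threshold applied on top of the same linear equation.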
36] What are the types of quantifiers used in First Order Logic?
37] What are the similarities and differences between Reinforcement
learning and supervised learning?
Reinforcement learning (RL) and supervised learning (SL) are both paradigms within machine
learning, and they share some similarities, despite having distinct characteristics. Here are some
commonalities between reinforcement learning and supervised learning:

1. Learning from Data:
   - In both RL and SL, the models learn from data. In supervised learning, the model is trained on labelled examples, where each input is associated with a corresponding output. In reinforcement learning, the agent learns from interacting with an environment and receiving feedback (rewards or punishments) based on its actions.
2. Objective of Learning:
   - The ultimate goal in both RL and SL is to generalize from the training data to make accurate predictions or decisions on new, unseen data. In supervised learning, this involves making predictions for new inputs. In reinforcement learning, it involves learning a policy that guides the agent's actions to maximize cumulative rewards.
3. Use of Neural Networks:
   - Both RL and SL can make use of neural networks as the underlying model architecture. Deep learning, particularly deep neural networks, is commonly applied in both paradigms.
4. Optimization Techniques:
   - Similar optimization algorithms can be used for training models in both RL and SL. Techniques such as stochastic gradient descent (SGD) or its variants are commonly employed to update the model parameters and reduce the prediction error.
5. Overfitting and Generalization:
   - Both RL and SL face challenges related to overfitting and the need for generalization. Overfitting occurs when a model performs well on the training data but fails to generalize to new, unseen data. Both paradigms involve techniques to address overfitting and promote generalization.
6. Feature Engineering:
   - Feature engineering, the process of selecting and transforming input features to improve model performance, is relevant in both RL and SL. Effective feature representation is crucial for the success of the learning process.

Despite these similarities, it's important to note that RL and SL differ significantly in their learning
setups and objectives. In supervised learning, the model learns from a dataset with labeled
examples, while in reinforcement learning, the agent learns by interacting with an environment
and receiving feedback in the form of rewards. The distinction lies in the dynamic and sequential
nature of interactions in reinforcement learning compared to the static and independent nature
of data points in supervised learning.
38] Discuss different forms of machine learning.
Machine learning can be broadly categorized into three main types:
supervised learning, unsupervised learning, and reinforcement learning.
Each type serves different purposes and has distinct characteristics:

1. Supervised Learning:
   - Definition: Supervised learning involves training a model on a labelled dataset, where each input is associated with a corresponding output. The model learns the mapping from inputs to outputs based on the provided examples.
   - Objective: The goal is to learn a function that can accurately map new, unseen inputs to their correct outputs.
   - Examples:
     - Classification: predicting the category or class of an input (e.g., spam detection, image classification).
     - Regression: predicting a continuous value (e.g., predicting house prices, temperature forecasting).
2. Unsupervised Learning:
   - Definition: Unsupervised learning deals with unlabelled data, where the algorithm explores the inherent structure or patterns in the data without explicit guidance.
   - Objective: The goal is often to discover hidden patterns, group similar data points, or reduce the dimensionality of the data.
   - Examples:
     - Clustering: grouping similar data points together (e.g., customer segmentation, document clustering).
     - Dimensionality reduction: reducing the number of features while preserving important information (e.g., principal component analysis).
3. Reinforcement Learning:
   - Definition: Reinforcement learning involves an agent learning to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or punishments based on its actions.
   - Objective: The goal is for the agent to learn a policy that maximizes cumulative rewards over time.
   - Examples:
     - Game playing: training an agent to play games (e.g., AlphaGo, reinforcement learning in video games).
     - Robotics: teaching a robot to perform tasks by trial and error.
4. Semi-Supervised Learning:
   - Definition: Semi-supervised learning is a combination of supervised and unsupervised learning. The model is trained on a dataset that contains both labelled and unlabelled examples.
   - Objective: The goal is to leverage the limited labelled data along with the unlabelled data to improve the model's performance.
   - Examples: text classification with a small labelled dataset and a large amount of unlabelled text.
5. Self-Supervised Learning:
   - Definition: Self-supervised learning is a type of unsupervised learning where the algorithm generates its own labels from the data, often by defining pretext tasks.
   - Objective: The goal is to pretrain a model on a task that does not require external labels, and then fine-tune it for a specific downstream task.
   - Examples: predicting missing parts of an image, language model pretraining (e.g., BERT).

These different forms of machine learning cater to a wide range of applications and problem domains, allowing practitioners to choose the most suitable approach based on the nature of the data and the learning task.
39] Explain the modus ponens with an example.
Modus Ponens is a valid form of deductive reasoning that involves drawing a specific
conclusion from two given premises. It can be expressed in the following logical form:

1. If P, then Q.
2. P (Premise 1).

Therefore, Q (Conclusion).

Here's a more concrete explanation with an example:

Premise 1: If it is raining (P), then the streets are wet (Q).
Premise 2: It is raining (P).

Conclusion: Therefore, the streets are wet (Q).

In this example, "If it is raining (P), then the streets are wet (Q)" is the first premise, and "It is
raining (P)" is the second premise. Applying Modus Ponens, you can conclude that
"Therefore, the streets are wet (Q)."

This form of reasoning is based on the idea that if the first premise establishes a connection
between P and Q, and the second premise asserts that P is true, then you can logically infer
that Q must also be true. Modus Ponens is a fundamental rule of inference in classical logic.
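This inference rule is easy to mechanize: keep applying every rule whose antecedent is already known to be true until nothing new can be derived. A small Python sketch (the rule and fact strings are illustrative):

```python
def modus_ponens(implications, facts):
    # implications: set of (P, Q) pairs, each meaning "if P then Q"
    # facts: set of propositions known to be true
    # Repeatedly apply modus ponens until no new fact is derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

rules = {("it is raining", "the streets are wet")}
facts = {"it is raining"}
print(modus_ponens(rules, facts))
```

Running this derives "the streets are wet" from the rule and the fact, exactly as in the example above; this fixed-point style of rule application is also the core of forward chaining.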
40] What are the logical connectives used in propositional logic?

In propositional logic, logical connectives are used to combine or modify propositions to create more complex statements. Here are some common logical connectives used in propositional logic:

1. Conjunction (∧ - AND):
   - The conjunction of two propositions P and Q, denoted P ∧ Q, is true only when both P and Q are true; otherwise, it is false.
   Example: If P represents "It is raining" and Q represents "I have an umbrella," then P ∧ Q could represent "It is raining and I have an umbrella."
2. Disjunction (∨ - OR):
   - The disjunction of two propositions P and Q, denoted P ∨ Q, is true if at least one of P or Q is true.
   Example: Using the same P and Q as above, P ∨ Q could represent "It is raining or I have an umbrella."
3. Negation (¬ - NOT):
   - The negation of a proposition P, denoted ¬P, is true when P is false, and false when P is true.
   Example: If P is "It is raining," then ¬P would be "It is not raining."
4. Implication (→ - IF...THEN):
   - The implication P → Q is true unless P is true and Q is false. In other words, the implication is false only in the case where the antecedent (P) is true but the consequent (Q) is false.
   Example: If P is "It is raining" and Q is "I carry an umbrella," then P → Q could represent "If it is raining, then I carry an umbrella."
5. Biconditional (↔ - IF AND ONLY IF):
   - The biconditional P ↔ Q is true when both P and Q have the same truth value (either both true or both false).
   Example: Using the same P and Q, P ↔ Q could represent "It is raining if and only if I carry an umbrella."
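These truth-functional definitions can be checked mechanically by enumerating all truth assignments. A short Python sketch that prints a truth table for the connectives (the dictionary keys are just display labels):

```python
from itertools import product

# Each connective expressed as a Python function on booleans.
connectives = {
    "P AND Q": lambda p, q: p and q,
    "P OR Q":  lambda p, q: p or q,
    "P -> Q":  lambda p, q: (not p) or q,   # false only when P true, Q false
    "P <-> Q": lambda p, q: p == q,
    "NOT P":   lambda p, q: not p,          # ignores q
}

# Enumerate all four assignments of truth values to P and Q.
for p, q in product([True, False], repeat=2):
    row = ", ".join(f"{name}={f(p, q)}" for name, f in connectives.items())
    print(f"P={p}, Q={q}: {row}")
```

The implication line encodes the one case the text singles out: P → Q is false exactly when P is true and Q is false.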
41] What are the main components of PDDL?
PDDL, which stands for Planning Domain Definition Language, is a
language used to formally describe planning problems in the field of
artificial intelligence. PDDL provides a way to represent the various
components of a planning problem in a standardized format. The main
components of PDDL are as follows:

1. Domain Definition:
   - The domain definition in PDDL specifies the basic elements of the planning problem. It includes information about the types of objects, the actions that can be taken, and the predicates that describe the state of the world. The domain definition typically begins with the define keyword, followed by the specification of types, predicates, and actions.
2. Types:
   - PDDL allows the definition of different types of objects. Types are used to categorize objects in the planning domain. For example, in a logistics domain, types might include "location" and "package."
3. Constants:
   - Constants are instances of types. They represent specific objects in the planning domain. For example, if "location" is a type, then specific locations (e.g., "A", "B") would be constants of the "location" type.
4. Predicates:
   - Predicates describe the properties of or relationships between objects in the planning domain. They are used to define the state of the world and can be either true or false. For instance, a predicate in a logistics domain might be "at(x, y)", representing that object x is at location y.
5. Functions:
   - PDDL allows the definition of numeric functions that can be used to represent quantities or values associated with objects in the planning domain. Functions are used to express numeric aspects of the state.
6. Actions:
   - Actions in PDDL represent the possible transitions between states. Each action has a name, parameters, preconditions, and effects. The preconditions specify the conditions that must be true for the action to be applicable, and the effects describe the changes that the action makes to the state of the world.
7. Goals:
   - The goal specification in PDDL defines the desired state that the planner aims to achieve. Goals are expressed using predicates and specify the conditions that should be true in the final state.
8. Problem Definition:
   - The problem definition in PDDL instantiates the domain definition for a specific planning problem. It includes information about the initial state of the world, the goal state, and any objects specific to the problem.
9. Initial State:
   - The initial state describes the state of the world at the beginning of the planning problem. It specifies the values of predicates that are true initially.
10. Goal State:
   - The goal state defines the conditions that the planner aims to achieve. It is specified using predicates, and the planner's task is to find a sequence of actions that leads from the initial state to a state satisfying the goal conditions.

PDDL is used in the context of automated planning and is often employed by planners and solvers to generate plans that achieve specified goals within a given planning domain. Its structure allows for a formal and standardized representation of planning problems, enabling interchangeability between different planning systems.
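Putting several of these components together, a toy domain and matching problem in PDDL syntax might look as follows (the domain, predicate, and action names here are invented purely for illustration):

```lisp
;; Hypothetical logistics-style domain: one action that moves a package.
(define (domain toy-logistics)
  (:requirements :strips :typing)
  (:types package location)
  (:predicates (at ?p - package ?l - location))
  (:action move
    :parameters (?p - package ?from - location ?to - location)
    :precondition (at ?p ?from)
    :effect (and (not (at ?p ?from))
                 (at ?p ?to))))

;; Problem definition instantiating the domain: initial state and goal.
(define (problem deliver)
  (:domain toy-logistics)
  (:objects pkg - package a b - location)
  (:init (at pkg a))
  (:goal (at pkg b)))
```

A planner given this pair would search for an action sequence from the initial state to the goal; here a single instance of move, with pkg, a, and b bound to its parameters, suffices.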
42] What are the various types of operations which can be performed on
Fuzzy Sets?

Fuzzy sets, introduced by Lotfi A. Zadeh, are sets whose elements have
degrees of membership rather than being strictly in or out of the set.
Various operations can be performed on fuzzy sets to manipulate their
membership degrees and create new fuzzy sets. Here are some
fundamental operations on fuzzy sets:

1. Fuzzy Union (OR operation):
   - The fuzzy union of two fuzzy sets A and B, denoted A ∪ B, is a fuzzy set in which the membership degree of an element is the maximum of its membership degrees in A and B. Mathematically, for each element x:
     μA∪B(x) = max(μA(x), μB(x))
2. Fuzzy Intersection (AND operation):
   - The fuzzy intersection of two fuzzy sets A and B, denoted A ∩ B, is a fuzzy set in which the membership degree of an element is the minimum of its membership degrees in A and B. Mathematically, for each element x:
     μA∩B(x) = min(μA(x), μB(x))
3. Fuzzy Complement (NOT operation):
   - The fuzzy complement of a fuzzy set A, denoted ¬A, is a fuzzy set in which the membership degree of an element is 1 minus its membership degree in A. Mathematically, for each element x:
     μ¬A(x) = 1 − μA(x)
4. Fuzzy Difference (SUB operation):
   - The fuzzy difference between two fuzzy sets A and B, denoted A − B, is a fuzzy set in which the membership degree of an element is the minimum of its membership degree in A and the complement of its membership degree in B. Mathematically, for each element x:
     μA−B(x) = min(μA(x), 1 − μB(x))
5. Fuzzy Cartesian Product:
   - The fuzzy Cartesian product of two fuzzy sets A and B, denoted A × B, results in a fuzzy relation in which the membership degree of an ordered pair is the minimum of the membership degrees of its components. Mathematically, for each pair (x, y):
     μA×B((x, y)) = min(μA(x), μB(y))
6. Fuzzy Composition:
   - Fuzzy composition is used when working with fuzzy relations. Given two fuzzy relations R and S, their composition, denoted R ∘ S, is another fuzzy relation. The membership degree of an ordered pair in the composition is determined by combining the membership degrees through a suitable operation.
7. Fuzzy Extension Principle:
   - The fuzzy extension principle is used to extend operations on crisp sets to fuzzy sets. It involves applying a standard set operation to the membership degrees of corresponding elements in the fuzzy sets.

These operations provide a foundation for fuzzy set theory and are crucial
in fuzzy logic and fuzzy systems. They allow for the manipulation and
combination of fuzzy information, enabling the representation of
uncertainty and vagueness in various applications, such as control systems,
decision-making, and pattern recognition.
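The max/min definitions above translate directly into code. A minimal Python sketch, representing each fuzzy set as a dict from elements of a shared universe of discourse to membership degrees (the sets and their values are illustrative):

```python
def fuzzy_union(mu_a, mu_b):
    # Pointwise maximum over a shared universe of discourse.
    return {x: max(mu_a[x], mu_b[x]) for x in mu_a}

def fuzzy_intersection(mu_a, mu_b):
    # Pointwise minimum over a shared universe of discourse.
    return {x: min(mu_a[x], mu_b[x]) for x in mu_a}

def fuzzy_complement(mu_a):
    # Membership degree becomes 1 minus the original degree.
    return {x: 1 - mu_a[x] for x in mu_a}

# Two hypothetical fuzzy sets over the universe {cold, warm, hot}.
A = {"cold": 0.8, "warm": 0.3, "hot": 0.0}
B = {"cold": 0.2, "warm": 0.6, "hot": 0.9}
print(fuzzy_union(A, B))         # {'cold': 0.8, 'warm': 0.6, 'hot': 0.9}
print(fuzzy_intersection(A, B))  # {'cold': 0.2, 'warm': 0.3, 'hot': 0.0}
```

The fuzzy difference from point 4 is then just fuzzy_intersection(A, fuzzy_complement(B)), which is one reason these three operations are treated as the fundamental ones.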
43] Explain defuzzification process using any suitable method.
Defuzzification is the process of converting a fuzzy set or fuzzy output into a crisp
value that can be used in decision-making or control systems. There are several
methods for defuzzification, and one commonly used method is the Centre of Gravity
(COG) or Centroid method. Here's an explanation of the defuzzification process using
the Centre of Gravity method:

Centre of Gravity (Centroid) Method:

1. Membership Function:
   - Start with a fuzzy set or fuzzy output that has been derived from fuzzy inference. This fuzzy set has a membership function representing the degree of membership of each element in the universe of discourse.
2. Implication:
   - In the fuzzy inference process, you typically get a set of rules and a fuzzy output for each rule. Combine these fuzzy outputs using a suitable aggregation method, such as the maximum or average, to obtain a single fuzzy set that represents the overall fuzzy output.
3. Defuzzification - Centre of Gravity:
   - The Centre of Gravity method involves finding the centre of mass or centre of area of the combined fuzzy set. This centre is a crisp value that represents the defuzzified output.
4. Mathematical Calculation:
   - Mathematically, the centre of gravity is calculated by taking the weighted average of the crisp values, using the membership degrees as weights:
     COG = Σ(Membership Degree × Crisp Value) / Σ(Membership Degree)
   Here, the summation is performed over all elements in the universe of discourse.
5. Interpretation:
   - The resulting COG value is the defuzzified output. It represents the "centre" of the fuzzy set in terms of the universe of discourse. This value is then used as a crisp input for further decision-making or control actions.

Example:

Let's say we have a fuzzy output related to the temperature control of a room, and
the fuzzy set representing the output is defined on the universe of discourse
"Temperature" with membership degrees ranging from 0 to 1. After the inference
process, we have a fuzzy set that looks like a triangle with a peak membership
degree at a certain temperature.
The Centre of Gravity method would calculate the weighted average of the
temperature values within the fuzzy set, considering the membership degrees as
weights. The result would be a crisp temperature value that represents the centre of
mass of the fuzzy set.

This method provides a way to obtain a meaningful and interpretable crisp value
from a fuzzy set, making it suitable for applications in fuzzy control systems and
decision-making.
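The weighted-average formula becomes a one-liner once the universe of discourse is discretized. A Python sketch of the centroid calculation for a triangular fuzzy set like the one in the temperature example (the temperature values and membership degrees are illustrative):

```python
def centroid_defuzzify(universe, memberships):
    # COG = sum(mu(x) * x) / sum(mu(x)), summed over the discretized universe.
    numerator = sum(mu * x for x, mu in zip(universe, memberships))
    denominator = sum(memberships)
    return numerator / denominator

# Hypothetical triangular fuzzy output peaking at 25 degrees.
temps       = [20.0, 22.5, 25.0, 27.5, 30.0]
memberships = [0.0,  0.5,  1.0,  0.5,  0.0]
print(centroid_defuzzify(temps, memberships))   # 25.0
```

Because the triangle here is symmetric, the centroid lands exactly on the peak; for a skewed aggregated set the COG would shift toward the heavier side, which is the behaviour that makes this method popular in fuzzy controllers.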
