YEAR-2022, 2 Marks
1. What is NLP?
Natural language processing (NLP) is a branch of artificial intelligence (AI) that enables computers to comprehend, generate, and manipulate human language. NLP makes it possible to interrogate data using natural language text or voice.
2. How does a truth maintenance system handle inconsistencies?
A truth maintenance system manages uncertain and inconsistent information by tracking the beliefs and assumptions used to derive conclusions, in order to maintain a consistent and coherent view of the world.
3. What are conceptual graphs in AI?
A conceptual graph (CG) is a graph representation for logic based on the semantic networks of artificial intelligence and the existential graphs of Charles Sanders Peirce. Several versions of CGs have been designed and implemented over the past thirty years.
4. What are semantic nets?
In artificial intelligence, a semantic network is a knowledge representation technique for organizing and storing knowledge. Semantic networks are a type of graphical model that shows the relationships between concepts, ideas, and objects in a way that is easy for humans to understand.
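The idea can be sketched in a few lines of Python; the triples, relation names, and the `is_a_ancestors` helper below are illustrative choices for this sketch, not a standard API:

```python
# A tiny semantic network, stored as (node, relation, node) triples.
triples = {
    ("Canary", "is_a", "Bird"),
    ("Bird", "is_a", "Animal"),
    ("Bird", "can", "Fly"),
}

def is_a_ancestors(node, triples):
    """Follow is_a links transitively to collect every ancestor concept."""
    ancestors = set()
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for subj, rel, obj in triples:
            if subj == current and rel == "is_a" and obj not in ancestors:
                ancestors.add(obj)
                frontier.append(obj)
    return ancestors

ancestors_of_canary = is_a_ancestors("Canary", triples)
```

Following the is_a chain lets concepts inherit properties from their ancestors, which is the main reasoning step semantic networks support.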
5. Define knowledge representation.
Knowledge representation equips AI agents with the capabilities to solve the most complex tasks based on what they have learned from the knowledge given to them. The knowledge given to them could be human experiences, problem solutions, if-then rules, responses to specific scenarios, etc.
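As a minimal sketch of the if-then-rules form of knowledge mentioned above, the rule set and `forward_chain` helper below are hypothetical examples, assuming facts and conditions are plain strings:

```python
# Knowledge as if-then rules: (set of condition facts, concluded fact).
# The rule contents here are made up for illustration.
rules = [
    ({"has_fever", "has_cough"}, "may_have_flu"),
    ({"may_have_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts,
    until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough"}, rules)
```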
6. What are problem-solving techniques?
In computer science, problem solving refers to artificial intelligence techniques such as designing efficient algorithms, applying heuristics, and performing root cause analysis to find desirable solutions.
7. What is Hill Climbing?
Hill climbing is an informed local search technique: starting from an initial state, it repeatedly moves to a neighbouring state with a better heuristic value (a real-valued weight assigned to nodes, branches, and goals along a path) and stops when no neighbour improves on the current state.
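A minimal sketch of this idea, assuming a one-dimensional real-valued objective and random neighbour moves (the objective function, step size, and iteration count are illustrative):

```python
import random

def hill_climb(f, x0, step=0.1, iters=1000):
    """Greedy hill climbing: repeatedly propose a random neighbour of x
    and move there only if it scores better under f."""
    x = x0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):  # accept only improving moves
            x = candidate
    return x

random.seed(0)  # seed only so this sketch is reproducible
# Maximize f(x) = -(x - 3)^2, whose single peak is at x = 3.
best = hill_climb(lambda x: -(x - 3) ** 2, x0=0.0)
```

Because only improving moves are accepted, the search climbs toward the peak at x = 3 but, like all hill climbing, it can get stuck on a local maximum of a bumpier objective.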
8. What is Best First Search?
Best first search is a search algorithm that expands nodes according to a particular rule, using a priority queue and a heuristic. It lets a computer evaluate the most promising path through a maze of possibilities. Suppose you are stuck in a big maze and do not know how to exit quickly: best first search uses a heuristic estimate to guide you toward the most promising route.
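The maze intuition can be sketched as greedy best-first search over a small grid; the grid size, `grid_neighbors` function, and Manhattan-distance heuristic below are illustrative assumptions:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node with the
    lowest heuristic estimate h(n) of remaining distance to the goal."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:  # reconstruct the path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# A 4x4 obstacle-free grid "maze": moves in the four compass directions,
# heuristic = Manhattan distance to the exit at (3, 3).
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

path = best_first_search((0, 0), (3, 3), grid_neighbors,
                         lambda p: abs(p[0] - 3) + abs(p[1] - 3))
```

The priority queue always pops the node that looks closest to the goal, which is what distinguishes best first search from plain breadth-first expansion.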
9. What is the Turing test?
In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test. In it, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
10. What is AI?
Artificial intelligence is the science of making machines that can think like humans. It can do things that are considered "smart." AI technology can process large amounts of data in ways humans cannot. The goal of AI is to be able to do things such as recognize patterns, make decisions, and judge like humans.
YEAR-2022, 7 Marks
1. What is AI agent? Explain different agents in AI.
In artificial intelligence, an agent is a computer program or system that is designed to perceive its
environment, make decisions and take actions to achieve a specific goal or set of goals. The agent
operates autonomously, meaning it is not directly controlled by a human operator.
Agents can be classified into different types based on their characteristics, such as whether they
are reactive or proactive, whether they have a fixed or dynamic environment, and whether they are
single or multi-agent systems.
Reactive agents are those that respond to immediate stimuli from their environment and take
actions based on those stimuli. Proactive agents, on the other hand, take initiative and plan ahead
to achieve their goals. The environment in which an agent operates can also be fixed or dynamic.
Fixed environments have a static set of rules that do not change, while dynamic environments are
constantly changing and require agents to adapt to new situations.
Multi-agent systems involve multiple agents working together to achieve a common goal. These agents may have to coordinate their actions and communicate with each other to achieve their objectives.
Agents are used in a variety of applications, including robotics, gaming, and intelligent systems.
They can be implemented using different programming languages and techniques, including
machine learning and natural language processing.
Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out an
action with the best outcome after considering past and current percepts (the agent’s perceptual
inputs at a given instance). An AI system is composed of an agent and its environment. The
agents act in their environment. The environment may contain other agents.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Every agent can perceive its own actions (but not always their effects).
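As a minimal sketch of a reactive agent, here is the classic two-square vacuum world often used to illustrate simple reflex agents; the percept format and action names are illustrative choices:

```python
class SimpleReflexVacuumAgent:
    """Reactive agent for the two-square vacuum world: its action depends
    only on the current percept (location, dirt status), with no memory."""

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Clean square: move to the other square ("A" and "B").
        return "Right" if location == "A" else "Left"

agent = SimpleReflexVacuumAgent()
```

Because the agent keeps no internal state, it is purely reactive; a proactive agent would additionally plan ahead using a model of the environment and its goals.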
2. Explain Alpha-Beta pruning.
As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire sub-trees.
a. Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm, but it removes all the nodes that do not affect the final decision and only slow the algorithm down. Pruning these nodes makes the algorithm faster.
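The procedure above can be sketched as follows; the tree encoding (nested lists with integer leaves) and the `children`/`value` callbacks are illustrative assumptions for this sketch:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning. children(node) returns the list of
    successor positions; value(node) scores a leaf."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cut-off: Min will never allow this branch
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if alpha >= beta:  # alpha cut-off: Max already has a better option
            break
    return best

# Toy game tree as nested lists; integers are leaf evaluations.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
value = lambda n: n
result = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, value)
```

On this tiny tree the leaf 9 is never evaluated: once the right sub-tree is known to be worth at most 2 and Max already has 3, the cut-off fires, yet the returned move value matches plain minimax.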
3. Explain the resolution principle.
The resolution principle, due to Robinson (1965), is a method of theorem proving that proceeds by
constructing refutation proofs, i.e., proofs by contradiction. This method has been exploited in many
automatic theorem provers. The resolution principle applies to first-order logic formulas in Skolemized
form. These formulas are basically sets of clauses each of which is a disjunction of literals. Unification
is a key technique in proofs by resolution.
If two or more positive literals (or two or more negative literals) in a clause C are unifiable and σ is their most general unifier, then Cσ is called a factor of C. For example, P(x) ∨ P(f(y)) ∨ ¬Q(x) is factored to P(f(y)) ∨ ¬Q(f(y)) under σ = {x/f(y)}. In such a factorization, duplicates are removed.
Let C1 and C2 be two clauses with no variables in common, let C1 contain a positive literal L, let C2 contain a negative literal ¬L′, and let σ be the most general unifier of L and L′. Then the clause (C1σ − {Lσ}) ∪ (C2σ − {¬L′σ}) is called a resolvent of C1 and C2.
Generation of a resolvent from two clauses, called resolution, is the sole rule of inference of the
resolution principle. The resolution principle is complete, so a set (conjunction) of clauses is
unsatisfiable iff the empty clause can be derived from it by resolution.
Proof of the completeness of the resolution principle is based on Herbrand's theorem. Since
unsatisfiability is dual to validity, the resolution principle is exercised on theorem negations.
Multiple strategies have been developed to make the resolution principle more efficient. These
strategies help select clause pairs for application of the resolution rule. For instance, linear resolution is
the following deduction strategy. Let S be the initial set of clauses. If C0 ∈ S, a linear deduction of Cn from S with top clause C0 is a deduction in which each Ci+1 is a resolvent of Ci and Bi (0 ≤ i < n), and each Bi is either in S or a resolvent Cj (with j < i).
4. Explain Bayesian inference.
Bayesian inference is a way of making statistical inferences in which the statistician assigns
subjective probabilities to the distributions that could generate the data. These subjective
probabilities form the so-called prior distribution.
After the data is observed, Bayes' rule is used to update the prior, that is, to revise the probabilities
assigned to the possible data generating distributions. These revised probabilities form the so-called
posterior distribution.
In Bayesian inference, we assign a subjective distribution to the set of distributions that could have generated the data, and then we use the data to derive a posterior distribution.
In parametric Bayesian inference, the subjective distribution is assigned to a parameter θ that is put into correspondence with the possible data-generating distributions.
The likelihood: the first building block of a parametric Bayesian model is the likelihood p(x | θ), the probability of the data x given the parameter θ.
The prior: the second building block of a Bayesian model is the prior p(θ), the subjective probability density assigned to the parameter θ.
After observing the data x, we use Bayes' rule to update the prior about the parameter:
p(θ | x) = p(x | θ) p(θ) / p(x).
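A minimal worked example of this updating, assuming a Beta prior on a coin's bias with a binomial likelihood (this conjugate pair makes Bayes' rule a one-line update; the prior and data here are made up for illustration):

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate Bayesian update: a Beta(a, b) prior on a coin's bias theta,
    combined with a binomial likelihood, gives a Beta(a+s, b+f) posterior."""
    return a + successes, b + failures

# Uniform Beta(1, 1) prior; then observe 7 heads and 3 tails.
a_post, b_post = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a_post / (a_post + b_post)  # mean of Beta(8, 4)
```

The posterior mean 8/12 ≈ 0.67 sits between the prior mean 0.5 and the observed frequency 0.7, showing how the data revise the subjective prior.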