Describe Alpha-Beta pruning.

Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. Hence there is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning. It involves two threshold parameters, Alpha and Beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
o The two parameters can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.
Rules to find good ordering:
Following are some rules to find good ordering in alpha-beta pruning:
o Try the best move from the shallowest node first.
o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. Example: in chess, try this order: captures first, then threats, then forward moves, then backward moves.
o We can bookkeep the states, as there is a possibility that states may repeat.
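To make the pruning step concrete, here is a minimal sketch of minimax with alpha-beta pruning; the game tree is a hypothetical nested-list structure (leaves are utilities) assumed only for illustration.

import math

# Minimax with alpha-beta pruning (illustrative sketch).
# A "node" is either a number (leaf utility) or a list of child nodes.
def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(node, (int, float)):       # leaf: return its utility
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)         # raise the Maximizer's guarantee
            if beta <= alpha:                # Minimizer already has a better option elsewhere
                break                        # prune the remaining children
        return best
    else:
        best = math.inf
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)           # lower the Minimizer's guarantee
            if beta <= alpha:
                break
        return best

# A depth-2 tree: the second subtree is cut off after seeing the leaf 2.
print(alphabeta([[3, 5], [2, 9]]))           # prints 3, same as plain minimax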
Describe local search algorithms and optimization problems.

Local Search Algorithms and Optimization Problems
The informed and uninformed search algorithms expand the nodes systematically in two ways:
o keeping different paths in the memory, and
o selecting the best suitable path,
which leads to a solution state required to reach the goal node. But beyond these "classical search algorithms," we have some "local search algorithms" in which the path cost does not matter; they focus only on the solution state needed to reach the goal node.
A local search algorithm completes its task by traversing a single current node rather than multiple paths, and it generally follows the neighbors of that node.
Although local search algorithms are not systematic, they still have the following two advantages:
o Local search algorithms use very little or a constant amount of memory, as they operate only on a single path.
o Most often, they find a reasonable solution in large or infinite state spaces where the classical or systematic algorithms do not work.
Working of a local search algorithm
Let's understand the working of a local search algorithm with the help of an example. Consider a state-space landscape having both:
o Location: It is defined by the state.
o Elevation: It is defined by the value of the objective function or heuristic cost function.
The local search algorithm explores this landscape by finding the following two points:
o Global Minimum: If the elevation corresponds to a cost, then the task is to find the lowest valley, which is known as the Global Minimum.
o Global Maximum: If the elevation corresponds to an objective function, then the task is to find the highest peak, which is called the Global Maximum.
We will understand the working of these points better in the Hill-climbing search. Below are some different types of local searches (a sketch of the first follows this list):
o Hill-climbing Search
o Simulated Annealing
o Local Beam Search
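As a concrete illustration of the first type, here is a minimal hill-climbing sketch; the neighbors and objective functions are hypothetical stand-ins for a real problem.

# Simple hill-climbing: keep moving to the best neighbor until no
# neighbor improves the objective (a local, possibly global, maximum).
def hill_climb(start, neighbors, objective):
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current                   # no uphill move left: stop
        current = best

# Toy landscape: maximize f(x) = -(x - 3)^2 over integer steps of 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))                # reaches 3, the global maximum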
Describe the theory of first-order logic.

First-Order Logic
o First-order logic is another way of knowledge representation in artificial intelligence. It is an extension of propositional logic.
o FOL is sufficiently expressive to represent natural-language statements in a concise way.
o First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that develops information about objects in an easier way and can also express the relationships between those objects.
o First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ...
o Relations: These can be unary relations, such as red, round, or is adjacent, or n-ary relations, such as the sister of, brother of, has color, or comes between.
o Functions: father of, best friend, third inning of, end of, ...
o Like a natural language, first-order logic also has two main parts:
a. Syntax
b. Semantics
Define propositional logic.

Propositional Logic in Artificial Intelligence
Propositional logic (PL) is the simplest form of logic, where all the statements are made by propositions. A proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises from the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
o Propositional logic is also called Boolean logic, as it works on 0 and 1.
o In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
o A proposition can be either true or false, but it cannot be both.
o Propositional logic consists of objects, relations or functions, and logical connectives.
o These connectives are also called logical operators.
o The propositions and connectives are the basic elements of propositional logic.
o Connectives can be described as logical operators which connect two sentences.
o A proposition formula which is always true is called a tautology; it is also called a valid sentence.
o A proposition formula which is always false is called a contradiction.
o A proposition formula which has both true and false values is called a contingency.
o Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.
Syntax of propositional logic:
The syntax of propositional logic defines the allowable sentences for the knowledge representation. There are two types of propositions:
a. Atomic propositions
b. Compound propositions
o Atomic propositions: Atomic propositions are the simple propositions. An atomic proposition consists of a single proposition symbol. These are the sentences which must be either true or false.
o Compound propositions: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
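To illustrate how compound propositions are built from atomic ones with connectives, the sketch below enumerates a truth table for the arbitrarily chosen formula P ∧ ¬Q in plain Python.

from itertools import product

# Truth table for the compound proposition P AND (NOT Q),
# built from the atomic propositions P and Q.
def formula(p, q):
    return p and not q                       # connectives: AND, NOT

print("P     Q     P AND NOT Q")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):<5} {str(q):<5} {formula(p, q)}")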
Hidden Markov Model (HMM)
Markov models are named after Andrey Markov, who first developed them in the early 1900s. Markov models are a type of probabilistic model that is used to predict the future state of a system based on its current state. In other words, Markov models are used to predict the future state based on the current hidden or observed states. A Markov model is a finite-state machine where each state has an associated probability of being in any other state after one step. Markov models can be used to model real-world problems where hidden and observable states are involved. They can be classified into hidden and observable based on the type of information available for making predictions or decisions. Hidden Markov models deal with hidden variables that cannot be directly observed but only inferred from other observations, whereas in an observable model, also termed a Markov chain, hidden variables are not involved.
What are Hidden Markov Models (HMM)?
The hidden Markov model (HMM) is another type of Markov model in which a few states are hidden. This is where an HMM differs from a Markov chain. An HMM is a statistical model in which the system being modeled is a Markov process with unobserved or hidden states. It is a hidden-variable model which can give an observation of another hidden state with the help of the Markov assumption. The hidden state is the term given to the next possible variable, which cannot be directly observed but can be inferred by observing one or more states according to Markov's assumption. The Markov assumption is the assumption that a hidden variable is dependent only on the previous hidden state. Mathematically, the probability of being in a state at a time t depends only on the state at the time (t-1). This is termed a limited horizon assumption. Another Markov assumption states that the conditional distribution over the next state, given the current state, doesn't change over time. This is termed a stationary process assumption.
A Markov model is made up of two components: the state transitions and hidden random variables that are conditioned on each other. A hidden Markov model, however, consists of five important components:
=> Initial probability distribution: An initial probability distribution over states, where πi is the probability that the Markov chain will start in state i. Some states j may have πj = 0, meaning that they cannot be initial states. The initialization distribution defines each hidden variable in its initial condition at time t = 0 (the initial hidden state).
=> One or more hidden states.
=> Transition probability distribution: A transition probability matrix where each aij represents the probability of moving from state i to state j. The transition matrix is used to show the hidden-state-to-hidden-state transition probabilities.
=> A sequence of observations.
=> Emission probabilities: A sequence of observation likelihoods, also called emission probabilities, each expressing the probability of an observation oi being generated from a state i. The emission probability represents the conditional distribution over an observable output for each hidden state.
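A compact sketch of these five components for a hypothetical two-state weather HMM (the states, observations, and numbers are invented for illustration); it samples a hidden-state path and the observation sequence it emits.

import random

# Hypothetical 2-state HMM: hidden weather states, observed activities.
states = ["Rainy", "Sunny"]                  # hidden states
pi = {"Rainy": 0.6, "Sunny": 0.4}            # initial distribution pi_i
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},  # transition matrix a_ij
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},   # emission probabilities
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def sample(dist):
    # Draw one key from a {key: probability} dictionary.
    return random.choices(list(dist), weights=dist.values())[0]

def generate(T):
    # Walk the chain for T steps, emitting one observation per state.
    s = sample(pi)
    path, obs = [s], [sample(B[s])]
    for _ in range(T - 1):
        s = sample(A[s])
        path.append(s)
        obs.append(sample(B[s]))
    return path, obs

print(generate(5))   # e.g. (['Rainy', 'Rainy', ...], ['clean', 'shop', ...])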
Bayesian Belief Network in Artificial Intelligence
A Bayesian belief network is a key computer technology for dealing with probabilistic events and for solving problems which have uncertainty. We can define a Bayesian network as:
"A Bayesian network is a probabilistic graphical model which represents a set of variables and their conditional dependencies using a directed acyclic graph."
It is also called a Bayes network, belief network, decision network, or Bayesian model. Bayesian networks are probabilistic because these networks are built from a probability distribution, and they also use probability theory for prediction and anomaly detection.
Real-world applications are probabilistic in nature, and to represent the relationships between multiple events, we need a Bayesian network. It can also be used in various tasks including prediction, anomaly detection, diagnostics, automated insight, reasoning, time-series prediction, and decision making under uncertainty.
A Bayesian network can be used for building models from data and experts' opinions, and it consists of two parts:
o Directed acyclic graph
o Table of conditional probabilities.
The generalized form of a Bayesian network that represents and solves decision problems under uncertain knowledge is known as an influence diagram.
A Bayesian network graph is made up of nodes and arcs (directed links), where:
o Each node corresponds to a random variable, and a variable can be continuous or discrete.
o Arcs or directed arrows represent the causal relationships or conditional probabilities between random variables. These directed links or arrows connect pairs of nodes in the graph. These links represent that one node directly influences the other node; if there is no directed link, it means that the nodes are independent of each other.
o In the network diagram (not reproduced here), A, B, C, and D are random variables represented by the nodes of the network graph.
o If we consider node B, which is connected to node A by a directed arrow, then node A is called the parent of node B.
o Node C is independent of node A.
The Bayesian network has mainly two components:
o Causal component
o Actual numbers
Each node in the Bayesian network has a conditional probability distribution P(Xi | Parent(Xi)), which determines the effect of the parent on that node. A Bayesian network is based on the joint probability distribution and conditional probability, so let's first understand the joint probability distribution:
Joint probability distribution:
If we have variables x1, x2, x3, ..., xn, then the probabilities of the different combinations of x1, x2, x3, ..., xn are known as the joint probability distribution. The joint probability distribution P[x1, x2, x3, ..., xn] can be written in the following way:
P[x1, x2, x3, ..., xn] = P[x1 | x2, x3, ..., xn] P[x2, x3, ..., xn]
= P[x1 | x2, x3, ..., xn] P[x2 | x3, ..., xn] .... P[xn-1 | xn] P[xn]
In general, for each variable Xi, we can write the equation as:
P(Xi | Xi-1, ..., X1) = P(Xi | Parents(Xi))
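The factorization above can be checked numerically; the sketch below uses a tiny hypothetical two-node network A -> B (the numbers are invented) and compares the factored form against the full joint.

# Tiny Bayesian network A -> B: P(A, B) = P(A) * P(B | A).
P_A = {True: 0.3, False: 0.7}                       # prior on A
P_B_given_A = {True: {True: 0.9, False: 0.1},       # CPT: P(B | A)
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    # Factored joint probability, as given by the network structure.
    return P_A[a] * P_B_given_A[a][b]

# The four joint entries sum to 1, as any distribution must.
total = sum(joint(a, b) for a in (True, False) for b in (True, False))
print(total)                                        # 1.0
print(joint(True, True))                            # P(A=T, B=T) = 0.27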
Forward Chaining and Backward Chaining in AI
In artificial intelligence, forward and backward chaining are among the important topics, but before understanding forward and backward chaining, let's first understand where these two terms come from.
Inference engine:
The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engine was part of an expert system. An inference engine commonly proceeds in two modes, which are:
a. Forward chaining
b. Backward chaining
Horn clause and definite clause:
Horn clauses and definite clauses are forms of sentences which enable the knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require the KB in the form of first-order definite clauses.
Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.
Example: (¬p ∨ ¬q ∨ k) has only one positive literal, k. It is equivalent to p ∧ q → k.
A. Forward Chaining
Forward chaining is also known as forward deduction or the forward reasoning method when using an inference engine. Forward chaining is a form of reasoning which starts with atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved (a minimal code sketch follows at the end of this section).
Properties of forward chaining:
o It is a bottom-up approach, as it moves from bottom to top.
o It is a process of making a conclusion based on known facts or data, starting from the initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
o The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
B. Backward Chaining
Backward chaining is also known as backward deduction or the backward reasoning method when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
o It is known as a top-down approach.
o Backward chaining is based on the Modus Ponens inference rule.
o In backward chaining, the goal is broken into a sub-goal or sub-goals to prove the facts true.
o It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
o The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
o The backward-chaining method mostly uses a depth-first search strategy for proof.
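Here is the promised minimal sketch of forward chaining over propositional definite clauses; the rules and facts form a made-up toy knowledge base, not one from the source.

# Forward chaining on definite clauses: rule = (premises, conclusion).
rules = [({"p", "q"}, "k"),        # p AND q -> k
         ({"k"}, "goal")]          # k -> goal
facts = {"p", "q"}                 # known atomic sentences

changed = True
while changed:                     # repeat until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule (Modus Ponens)
            changed = True

print("goal" in facts)             # True: the goal was derived from the facts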
