1. Define Artificial Intelligence (AI).
AI is a branch of computer science that studies how to make computers do things at which, at the moment, people are better. Applications: game playing, mathematical theorem proving, medicine.
2. List the differences between uninformed and informed search algorithms.
Uninformed search: has no information about the number of steps or the path cost from the current state to the goal state. Known as blind search; it is the less effective search method. Examples: breadth-first search, depth-first search, bidirectional search, uniform-cost search.
Informed search: the path cost from the current state to the goal state is estimated, so the state with the minimum path cost can be selected as the next state. Known as heuristic search; it is the more effective search method. Examples: best-first search, greedy search, A* search.
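As a sketch of an uninformed strategy, breadth-first search can be written in a few lines; the graph, node names, and function below are illustrative, not part of the original answer key:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Uninformed search: expand nodes in FIFO order, shallowest first,
    with no use of path cost or heuristic information."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                          # goal unreachable

# Toy route-finding graph (illustrative)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because it expands shallowest nodes first, BFS is complete and, for unit step costs, returns a shortest path.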
3. Indicate the role of heuristics in guiding search.
Heuristics are useful in constraint satisfaction problems. Here, the analysis is extended by considering heuristics for selecting which variable to instantiate next and for choosing a value for that variable.
4. What is a constraint satisfaction problem?
A constraint satisfaction problem (or CSP) is a special kind of problem that satisfies some additional structural properties beyond the basic requirements for problems in general. In a CSP, the states are defined by the values of a set of variables and the goal test specifies a set of constraints that the values must obey. For example, the 8-queens problem can be viewed as a CSP in which the variables are the locations of each of the eight queens; the possible values are squares on the board; and the constraints state that no two queens can be in the same row, column or diagonal.
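The 8-queens CSP described above can be sketched as a constraint check plus plain backtracking, with one variable per column (value = row index); all names and the solver structure are illustrative:

```python
def consistent(assignment):
    """CSP goal-test constraints for 8-queens: assignment maps column -> row.
    Columns are distinct by construction; reject shared rows and diagonals."""
    cols = list(assignment)
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            c1, c2 = cols[i], cols[j]
            r1, r2 = assignment[c1], assignment[c2]
            if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
                return False
    return True

def backtrack(assignment, n=8):
    """Plain backtracking search: instantiate one variable (column) at a time."""
    if len(assignment) == n:
        return assignment
    col = len(assignment)
    for row in range(n):
        assignment[col] = row
        if consistent(assignment):
            result = backtrack(assignment, n)
            if result is not None:
                return result
        del assignment[col]              # undo and try the next value
    return None

solution = backtrack({})
print(solution)  # a valid placement, e.g. {0: 0, 1: 4, 2: 7, 3: 5, 4: 2, 5: 6, 6: 1, 7: 3}
```

The heuristics mentioned in question 3 (variable and value ordering) would slot into the two loops of `backtrack`.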
5. What is the unification algorithm?
The job of the unification routine, UNIFY, is to take two atomic sentences p and q and return a substitution that would make p and q look the same. (If there is no such substitution, then UNIFY should return fail.) Formally, UNIFY(p, q) = θ where SUBST(θ, p) = SUBST(θ, q); θ is called the unifier of the two sentences.
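A minimal Python sketch of UNIFY, assuming (for illustration) that variables are strings prefixed with "?" and atomic sentences are tuples; the occurs-check is omitted for brevity:

```python
FAIL = None  # UNIFY returns FAIL when no unifier exists

def is_variable(x):
    return isinstance(x, str) and x.startswith("?")

def unify(x, y, theta):
    """Return theta such that SUBST(theta, x) = SUBST(theta, y), else FAIL."""
    if theta is FAIL:
        return FAIL
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):         # unify argument lists element-wise
            theta = unify(xi, yi, theta)
        return theta
    return FAIL

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    if is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    theta = dict(theta)                  # extend the substitution
    theta[var] = x
    return theta

# Knows(John, ?x) unified with Knows(John, Jane)
print(unify(("Knows", "John", "?x"), ("Knows", "John", "Jane"), {}))  # {'?x': 'Jane'}
```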
6. What is active and passive reinforcement learning?
The agent can be a passive learner or an active learner. A passive learner simply watches the world going by and tries to learn the utility of being in various states; an active learner must also act using the learned information, and can use its problem generator to suggest explorations of unknown portions of the environment.
7. How is a machine translation system implemented?
Although there has been no fundamental breakthrough in machine translation, there has been real progress, to the point that there are now dozens of machine translation systems in everyday use that save money over fully manual techniques. One of the most successful is the TAUM-METEO system, developed by the University of Montreal, which translates weather reports from English to French. It works because the language used in these government weather reports is highly stylized and regular. A representative system is SPANAM (Vasconcellos and Leon, 1985), which can translate a Spanish passage into English of comparable quality.
8. Define the term grammar induction.
Grammatical induction, also known as grammatical inference or syntactic pattern recognition, refers to the process in machine learning of learning a formal grammar (usually in the form of re-write rules or productions) from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. Grammatical inference is distinguished from traditional decision rules and other such methods principally by the nature of the resulting model, which in the case of grammatical inference relies heavily on hierarchical substitutions.
9. What is ontological engineering?
Representing very general concepts such as time, change, objects, substances, events, actions, money, and measures is sometimes called ontological engineering. These concepts are important because they show up in one form or another in every domain.
10. State the characteristics of inductive logic programming.
In inductive logic programming (ILP) systems, prior knowledge plays two key roles in reducing the complexity of learning:
1. Because any hypothesis generated must be consistent with the prior knowledge as well as with the new observations, the effective hypothesis space is reduced to include only those theories that are consistent with what is already known.
2. For any given set of observations, the size of the hypothesis required to construct an explanation for the observations can be much reduced, because the prior knowledge is available to help out the new rules in explaining the observations. The smaller the hypothesis, the easier it is to find.
(Related definition: the logical relation between a proposition P and evidence E was studied with the aim of constituting a mathematical discipline called inductive logic, analogous to ordinary deductive logic.)
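The passive learner's utility estimation in question 6 can be sketched as a single temporal-difference update, nudging U(s) toward reward + γ·U(s'); the learning rate, discount factor, and observed transition are illustrative assumptions:

```python
def td_update(U, s, s_next, reward, alpha=0.1, gamma=0.9):
    """One passive TD step: move U(s) toward the observed sample
    reward + gamma * U(s_next). Unseen states default to utility 0."""
    u_s = U.get(s, 0.0)
    u_next = U.get(s_next, 0.0)
    U[s] = u_s + alpha * (reward + gamma * u_next - u_s)
    return U

U = {}
# Observed transition: state "A" -> terminal state "G" with reward +1 (illustrative)
td_update(U, "A", "G", 1.0)
print(U)  # {'A': 0.1}
```

A passive learner applies this update along whatever trajectories it watches; an active learner would additionally choose actions, e.g. via its problem generator.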
11. (a) Different types of agent programs
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears, and other organs for sensors, and hands and other organs for actuators. The agent program implements the mapping between percept sequences and the corresponding actions. Four basic types, in order of increasing generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
(b) Bidirectional search and depth-first search
DFS: expand the deepest unexpanded node. Implementation: the fringe is a LIFO queue, i.e. successors are put at the front. If a dead end occurs, backtracking is done to the next immediate previous node. EXPLAIN WITH DIAGRAM
Properties of depth-first search:
• Complete? No: fails in infinite-depth spaces and in spaces with loops. Modified to avoid repeated states along the current path, it is complete in finite spaces.
• Time? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first.
• Space? O(bm), i.e. linear space!
• Optimal? No.
BIDIRECTIONAL: Bidirectional search is a strategy that searches in both directions simultaneously, i.e. forward from the initial state and backward from the goal. Example: the route-finding problem.
12. (a) Minimax search procedure
Idea: choose the move to the position with the highest minimax value, i.e. the best achievable payoff against best play. E.g., a 2-ply game. ALGORITHM
Properties:
• Complete? Yes (if the tree is finite).
• Optimal? Yes (against an optimal opponent).
• Time complexity? O(b^m).
• Space complexity? O(bm) (depth-first exploration).
• For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible.
(b)(i) Steps in the hill-climbing search algorithm
Problem: depending on the initial state, hill climbing can get stuck in local maxima. Example: the 8-queens problem.
(ii) Alpha-beta pruning
• α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for max. If v is worse than α, max will avoid it, so that branch is pruned.
• β is defined similarly for min.
• Pruning does not affect the final result.
• Good move ordering improves the effectiveness of pruning. With "perfect ordering," time complexity = O(b^(m/2)), which doubles the achievable depth of search.
• A simple example of the value of reasoning about which computations are relevant (a form of metareasoning).
13. (a) Forward and backward chaining algorithms
• FC is data-driven, e.g. unconscious processing, object recognition, routine decisions. It may do lots of work that is irrelevant to the goal.
• BC is goal-driven, appropriate for problem solving, e.g. "Where are my keys?" or "How do I get into a PhD program?" The complexity of BC can be much less than linear in the size of the KB. EXPLAIN WITH EXAMPLE
(b) Non-monotonic and minimalist reasoning
A non-monotonic logic is a formal logic whose consequence relation is not monotonic. Most studied formal logics have a monotonic consequence relation, meaning that adding a formula to a theory never produces a reduction of its set of consequences. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. A monotonic logic cannot handle various reasoning tasks such as reasoning by default (consequences may be derived only because of lack of evidence to the contrary), abductive reasoning (consequences are only deduced as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a consequence must be retracted when the consequence becomes known), and similarly belief revision (new knowledge may contradict old beliefs).
14. (a)(i) Decision tree learning
Aim: find a small tree consistent with the training examples. Idea: (recursively) choose the "most significant" attribute as the root of each (sub)tree.
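The minimax procedure with alpha-beta pruning from question 12 can be sketched on a small 2-ply tree; the tree representation (nested lists, numeric leaves as payoffs for max) and values are illustrative:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning. alpha is the best value found so far
    for max along the path; beta is the best found so far for min."""
    if not isinstance(node, list):        # leaf: return its payoff
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # min would avoid this node: prune
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                 # max would avoid this node: prune
            break
    return value

# 2-ply example: max chooses among three min nodes
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))  # 3
```

Pruning never changes the returned value; it only skips branches that cannot influence the choice at the root.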
(ii) Explanation-based learning
Explanation-based learning generates efficient rules for handling the types of situations that actually arise during execution. In simple domains the situated-automaton approach is feasible and probably more efficient, whereas in very complex domains, especially those with recursive structure, the complete automaton becomes too large or even infinite in size. Explanation-based learning has been used extensively in two robot architectures: RoboSOAR (Laird et al., 1991) and THEO (Mitchell, 1990).
(b) Machine translation systems and learning probabilities
Machine translation, sometimes referred to by the abbreviation MT (also called computer-aided translation, machine-aided human translation or MAHT, and interactive translation), is a sub-field of computational linguistics that investigates the use of computer software to translate text or speech from one natural language to another. At its basic level, MT performs simple substitution of words in one natural language for words in another, but that alone usually cannot produce a good translation of a text, because recognition of whole phrases and their closest counterparts in the target language is needed. Solving this problem with corpus and statistical techniques is a rapidly growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies.
15. (a) Information retrieval and information extraction in detail
Information retrieval: In information retrieval (IR), the task is to choose from a set of documents the ones that are relevant to a query. The query is normally a list of words typed by the user. In early information retrieval systems, the query was a Boolean combination of keywords; for example, the query "(natural and language) or (computational and linguistics)" would be a reasonable query to find documents relevant to that field. Now that so much text is online, it is more common to use the full text, possibly subdivided into sections that each serve as a separate document for retrieval purposes. Sometimes a document is represented by a surrogate, such as the title and a list of keywords and/or an abstract.
(b) Bayesian statistics provides reasoning under various kinds of uncertainty
Bayesian inference is a method of statistical inference in which some kind of evidence or observations are used to calculate the probability that a hypothesis may be true, or else to update its previously calculated probability. The term "Bayesian" comes from the use of Bayes' theorem in the calculation process. Bayes' theorem was deduced in several special cases by Thomas Bayes, and it was then extended to the general theorem by other researchers.
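The update step of Bayesian inference is a one-line application of Bayes' theorem, P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|¬H)P(¬H)]; the numbers below are illustrative, not from the source:

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior P(H|E) from prior P(H), likelihood P(E|H),
    and P(E|not H), normalising over both hypotheses."""
    numerator = likelihood * prior
    evidence = numerator + likelihood_given_not * (1.0 - prior)
    return numerator / evidence

# Illustrative: evidence observed with P(E|H)=0.9, P(E|not H)=0.2, prior P(H)=0.3
posterior = bayes_update(0.3, 0.9, 0.2)
print(round(posterior, 3))  # 0.659
```

Each new piece of evidence feeds the previous posterior back in as the next prior, which is exactly the "update its previously calculated probability" step described above.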