Second Edition

A CLASSICAL APPROACH TO ARTIFICIAL INTELLIGENCE (AI-LISP-PROLOG)
Dr. Munesh Chandra Trivedi
Associate Professor & Dean (Academics)
Rajkiya Engineering College, Azamgarh (U.P.)

KHANNA BOOK PUBLISHING CO. (P) LTD.
4C/4344, Ansari Road, Darya Ganj, New Delhi-110002
Phone: 011-23244447-48   Mobile: +91-9910909320
E-mail: contact@khannabooks.com   Website: www.khannabooks.com

A Classical Approach to ARTIFICIAL INTELLIGENCE
Munesh Chandra Trivedi
Copyright © Khanna Book Publishing Co. (P) Ltd.
This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published, and without a similar condition, including this condition, being imposed on the subsequent purchaser; and that, without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright owner and the above-mentioned publisher of this book.
ISBN: 978-81-90698-89-4
Edition: 2018
Reprint: 2019
Published by: Khanna Book Publishing Co. (P) Ltd., 4C/4344, Ansari Road, Darya Ganj, New Delhi-110 002
Phone: 011-23244447-48   Mobile: +91-9910909320
E-mail: contact@khannabooks.com
Printed in India by: S.P.S. Printers & Binders, Delhi

CONTENTS

1. OVERVIEW OF ARTIFICIAL INTELLIGENCE   1-32
   1.1 INTRODUCTION   1
   1.2 DEFINITIONS OF AI   3
   1.3 IS AUTOMATING INTELLIGENCE POSSIBLE   5
   1.4 MAN VS. COMPUTERS   5
       1.4.1 What computers do better than people?   5
       1.4.2 What people can do better than computers?   6
   1.5 SIMULATION OF SOPHISTICATED AND INTELLIGENT BEHAVIOUR   7
       1.5.1 General Problem Solving   9
       1.5.2 Expert Systems   9
       1.5.3 Natural Language Processing   10
       1.5.4 Computer Vision   10
       1.5.5 Robotics   11
       1.5.6 Others   12
   1.6 HOW AI TECHNIQUES HELP COMPUTERS TO BE SMARTER?   12
   1.7 BRIEF HISTORY OF AI   14
       1.7.1 Turing Test   15
   1.8 BRANCHES OF AI   18
   1.9 NATURAL LANGUAGE   19
   1.10 AUTOMATED REASONING   20
   1.11 VISUAL PERCEPTION   22
       1.11.1 Types of Visual Perception   23
       1.11.2 Automating Visual Perception   23
   1.12 INTELLIGENT AGENTS   24
       1.12.1 Agents and Environments   25
       1.12.2 The Concept of Rationality   25
       1.12.3 Classification of Agents   26
       1.12.4 Application Areas of Intelligent Agents   27
   1.13 MAJOR COMPONENTS OF INTELLIGENT SYSTEM   28
   1.14 IMPORTANT DEFINITIONS AND CONCEPTS   29
   EXERCISE   31

2. PROBLEM SOLVING AND SEARCH   33-74
   2.1 PROBLEM SOLVING BY INTELLIGENT COMPUTERS   33
   2.2 PROBLEM FORMULATION   34
   2.3 STATE SPACE REPRESENTATION   35
   2.4 EXAMPLES OF SEARCH PROBLEMS   37
       2.4.1 Playing Chess—An Example of State Space Search   37
       2.4.2 The Eight Tile Puzzle   39
       2.4.3 The Fifteen Tile Puzzle   40
       2.4.4 Water Jug Problem   42
   2.5 PROBLEM REDUCTION   45
   2.6 PRODUCTION SYSTEMS   45
       2.6.1 Rules of Production Systems   45
       2.6.2 Architecture of Production System   48
   2.7 EXAMPLE OF PRODUCTION SYSTEM—8-PUZZLE PROBLEM   52
   2.8 HEURISTIC SEARCH   53
   2.9 HEURISTIC FUNCTIONS   55
   2.10 TYPES OF HEURISTICS   57
   2.11 CHARACTERISTICS OF PROBLEMS   58
       2.11.1 Is the Problem Decomposable?   59
       2.11.2 Can Solution Steps Be Ignored or Undone?   61
       2.11.3 Is the Universe Predictable (or Role of Planning)?   63
       2.11.4 Is the Good Solution Absolute or Relative? Or Is the Aim Any Solution or the Best Solution?   64
       2.11.5 Is the Solution a State or a Path?   66
       2.11.6 Role of Knowledge   68
   2.12 PROBLEM SOLVING IN AI   69
   SUMMARY   70
   EXERCISE   71

3. SEARCH METHODS   75-132
   3.1 SEARCHING FOR SOLUTIONS   75
   3.2 UNINFORMED SEARCH METHODS   77
       3.2.1 Breadth-First Search   77
       3.2.2 Uniform-Cost Search   78
       3.2.3 Depth-First Search (DFS)   79
       3.2.4 Depth-Limited Search   81
       3.2.5 Iterative Deepening Depth-First Search   82
   3.3 INFORMED SEARCH   84
   3.4 GENERATE AND TEST METHOD   85
   3.5 HILL CLIMBING   85
       3.5.1 Difficulties of Hill-Climbing Method   87
       3.5.2 Determination of Heuristic Function Example   89
   3.6 SIMULATED ANNEALING   91
   3.7 BEAM SEARCH   92
   3.8 BEST-FIRST SEARCH   94
       3.8.1 Comparison between Hill Climbing and Best-First Search   96
   3.9 BRANCH AND BOUND SEARCH   97
       3.9.1 Depth-First Branch and Bound   98
   3.10 FINDING THE BEST SOLUTION—A* SEARCH   99
       3.10.1 Desirable Properties of Heuristic Search Algorithms   103
       3.10.2 Admissibility of A*   104
   3.11 ITERATIVE-DEEPENING A*   104
   3.12 PROBLEM REDUCTION   105
   3.13 AO* ALGORITHM   109
   3.14 CONSTRAINT SATISFACTION   110
   3.15 MEANS-END ANALYSIS   117
   3.16 HEURISTIC VERSUS SOLUTION GUARANTEED ALGORITHMS   120
   SUMMARY   123
   EXERCISE   128

4. PROBLEM SOLVING IN GAMES (Adversarial Search)   133-166
   4.1 INTRODUCTION   133
   4.2 ADVERSARIAL SEARCH   134
   4.3 GAME PLAYING CYCLE   135
   4.4 A SIMPLE GAME TREE   136
   4.5 GAME PLAYING SEARCH   137
   4.6 MINIMAX PROCEDURE   140
       4.6.1 Illustration of Minimax Search   142
       4.6.2 Limitations of Minimax   143
       4.6.3 Negmax Procedure   144
   4.7 ADDITIONAL PRUNING OF GAME TREE   145
   4.8 ILLUSTRATION OF ALPHA-BETA CUT OFF   147
   4.9 ADDITIONAL REFINEMENTS   153
   4.10 HORIZON EFFECT   155
   4.11 ITERATIVE DEEPENING   158
   SUMMARY   161
   EXERCISE   164

5. UNDERSTANDING NATURAL LANGUAGES   167-225
   5.1 INTRODUCTION   167
   5.2 UNDERSTANDING NATURAL LANGUAGES   168
   5.3 NEED OF NATURAL LANGUAGE UNDERSTANDING   169
   5.4 WHY IS NATURAL LANGUAGE UNDERSTANDING DIFFICULT   169
       5.4.1 Natural Language Processing System—SHRDLU   171
   5.5 LEVELS OF KNOWLEDGE USED IN LANGUAGE UNDERSTANDING   172
   5.6 WORKING OF NATURAL LANGUAGE PROCESSING SYSTEM   173
   5.7 SYNTACTIC PROCESSING   177
   5.8 LANGUAGES OF GRAMMARS   177
   5.9 CLASSIFICATION OF GRAMMAR   178
       5.9.1 Transformational Grammars   179
       5.9.2 Semantic Grammars   181
       5.9.3 Systemic Grammars   181
       5.9.4 Fillmore's Case Grammars   182
       5.9.5 Unification Grammars   182
       5.9.6 Context-Free Grammar   182
   5.10 PARSING TECHNIQUES   186
       5.10.1 Top-Down and Bottom-Up Parsing   189
       5.10.2 Syntactic Parsers   191
       5.10.3 Chart Parsers   191
       5.10.4 Finite State Transition Diagrams   192
   5.11 TRANSITION NETWORKS   193
   5.12 CONTEXT-SENSITIVE GRAMMARS   195
   5.13 AUGMENTED TRANSITION NETWORKS   197
       5.13.1 Definite Clause Grammar   200
   5.14 UNIFICATION GRAMMAR   201
   5.15 SEMANTIC PROCESSING   203
   5.16 PRAGMATIC ANALYSIS   205
   5.17 SCHANK'S CONCEPTUAL DEPENDENCY THEORY   206
   5.18 APPLICATIONS OF CD THEORY   215
       5.18.1 MARGIE   215
       5.18.2 SAM   215
       5.18.3 PAM   217
       5.18.4 Other CD-based Natural Language Processing Programs   217
   5.19 SENTENCE GENERATION   219
   5.20 MACHINE TRANSLATION   220
   EXERCISE   222
6. KNOWLEDGE REPRESENTATION   226-247
   6.1 ROLE OF KNOWLEDGE REPRESENTATION IN A.I.   226
       6.1.1 Features of Knowledge Representation   228
   6.2 TYPES OF KNOWLEDGE   228
       6.2.1 Declarative and Procedural Knowledge   228
       6.2.2 Domain Specific Knowledge and Domain Independent (Common Sense) Knowledge   229
   6.3 REPRESENTING KNOWLEDGE   229
       6.3.1 Properties for Knowledge Representation Systems   231
       6.3.2 Advantages and Disadvantages of Knowledge Representations   233
   6.4 APPROACHES TO KNOWLEDGE REPRESENTATION   233
       6.4.1 Simple Relational Knowledge   233
       6.4.2 Inheritable Knowledge   234
       6.4.3 Inferential Knowledge   235
       6.4.4 Procedural Knowledge   235
   6.5 CATEGORIES OF KNOWLEDGE REPRESENTATION SCHEME   235
   6.6 LOGIC   236
       6.6.1 Propositional Logic   236
   6.8 REASONING PATTERNS IN PROPOSITIONAL LOGIC   240
       6.8.1 Resolution   241
       6.8.2 Normal Forms in Propositional Logic   242
   6.9 RESOLUTION IN PROPOSITIONAL LOGIC   243
       6.9.1 Limitations of Propositional Logic   245
   6.10 THE ROLE OF LOGIC IN ARTIFICIAL INTELLIGENCE   246
   6.11 THE ROLE OF ARTIFICIAL INTELLIGENCE IN LOGIC   246
   SUMMARY   247
   EXERCISE   247

7. TECHNIQUES OF KNOWLEDGE REPRESENTATION   248-301
   7.1 FIRST ORDER PREDICATE CALCULUS   248
       7.1.1 Syntax for FOPL   249
       7.1.2 Semantics for FOPL   250
   7.2 QUANTIFIERS   251
   7.3 PROPOSITIONAL VS. FIRST-ORDER INFERENCE   253
       7.3.1 Inference Rules for Quantifiers   254
       7.3.2 Rules for WFF   256
   7.4 CONVERSION TO CLAUSAL FORM   257
   7.5 UNIFICATION   260
   7.6 RESOLUTION IN PREDICATE LOGIC   261
   7.7 COMPARISON WITH OTHER LOGICS   264
   7.8 HORN CLAUSES   265
   7.9 SEMANTIC NETS   266
   7.10 PROPERTIES OF SEMANTIC NETS   271
   7.11 TYPES OF SEMANTIC NETS   272
   7.12 PARTITIONED SEMANTIC NETS   272
   7.13 STRUCTURED REPRESENTATION   275
   7.14 MINSKY'S FRAME SYSTEM THEORY   278
   7.15 CASE GRAMMAR THEORY   279
   7.16 PRODUCTION RULES OR RULES   282
   7.17 FORWARD AND BACKWARD DEDUCTION   285
   7.18 ADVANTAGES OF PRODUCTION RULES   289
   7.19 PROBLEMS WITH PRODUCTION RULES   290
       7.19.1 Conflict Resolution   290
   7.20 APPLICABILITY OF PRODUCTION RULES   291
   7.21 KNOWLEDGE BASE   292
   7.22 THE INTERFACE SYSTEM   293
   SUMMARY   294
   EXERCISE   298

8. EXPERT SYSTEMS   302-341
   8.1 INTRODUCTION   302
   8.2 BASIC ARCHITECTURE OF AN EXPERT SYSTEM   304
       8.2.1 Individuals Involved with Expert Systems   305
       8.2.2 Advantages and Disadvantages   305
   8.3 TYPE OF PROBLEMS SOLVED BY EXPERT SYSTEMS   306
   8.4 FEATURES OF AN EXPERT SYSTEM   308
   8.5 EXPERT SYSTEMS ARCHITECTURES   308
       8.5.1 Rule-based System Architectures   308
       8.5.2 Non-production System Architectures   311
   8.6 INDIVIDUALS INVOLVED WITH EXPERT SYSTEMS   316
   8.7 EXPERT SYSTEM HOW IT SHOULD BE?   317
   8.8 KNOWLEDGE ELICITATION / ACQUISITION   318
       8.8.1 Stages of Knowledge Acquisition   319
       8.8.2 Techniques for Knowledge Elicitation   322
   8.9 EXPERT SYSTEM TOOLS   323
       8.9.1 AI Shells or Expert System Shells   324
       8.9.2 Automating the Creation of the Knowledge Base   325
   8.10 EXISTING EXPERT SYSTEMS   326
       8.10.1 Dendral   327
       8.10.2 Mycin   328
   8.11 APPLICATIONS OF EXPERT SYSTEM TECHNOLOGY   331
   8.12 DOMAIN EXPLORATION   333
   8.13 METAKNOWLEDGE   334
   8.14 EXPERTISE TRANSFER   335
   8.15 SELF EXPLAINING SYSTEM   336
   8.16 DIFFERENCE BETWEEN NEURAL NETWORKS AND EXPERT SYSTEMS   337
   8.17 LIMITATIONS OF EXPERT SYSTEMS   338
   SUMMARY   338
   EXERCISE   340

9. PATTERN RECOGNITION   342-375
   9.1 INTRODUCTION   342
   9.2 THE RECOGNITION AND CLASSIFICATION PROCESS   344
   9.3 APPROACHES FOR RECOGNITION   344
       9.3.1 Structured Description (Syntactic Pattern Recognition)   344
       9.3.2 Statistical Classification (Decision Theoretic Classification)   346
   9.4 SYMBOLIC DESCRIPTION   349
   9.5 LEARNING CLASSIFICATION PATTERNS   353
   9.6 MACHINE PERCEPTION   354
   9.7 COMPUTER VISION   355
   9.8 DIGITIZATION AND SIGNAL PROCESSING   355
   9.9 OBJECT IDENTIFICATION   358
   9.10 SPEECH RECOGNITION   362
       9.10.1 Signal Processing   363
       9.10.2 Noisy Channel Formulation of Statistical Speech Recognition   364
       9.10.3 Approaches of Statistical Speech Recognition   364
       9.10.4 Major Design Issues in Speech Systems   366
       9.10.5 Applications of Speech Recognition   366
   9.11 VISION SYSTEM ARCHITECTURES   367
   9.12 AUTOMATIC NUMBER PLATE RECOGNITION (ANPR)   369
   9.13 FACE RECOGNITION SOFTWARE   370
   9.14 FINGER PRINT RECOGNITION   371
   9.15 ROBOTICS   372
   SUMMARY   374
   EXERCISE   375

10. COMPUTER VISION   376-410
   10.1 WHAT IS COMPUTER VISION?   376
   10.2 APPLICATIONS OF COMPUTER VISION   377
   10.3 STATE OF THE ART   378
   10.4 RELATED FIELDS   378
   10.5 TYPICAL TASKS OF COMPUTER VISION   380
       10.5.1 Recognition   381
       10.5.2 Motion   381
       10.5.3 Scene Reconstruction   382
       10.5.4 Image Restoration   382
   10.6 COMPUTER VISION SYSTEMS   382
   10.7 THE CHALLENGE OF VISION   383
   10.8 IMAGE ACQUISITION   384
       10.8.1 2D Image Input   384
       10.8.2 3D Imaging   385
   10.9 METHODS OF ACQUISITION   387
       10.9.1 Laser Ranging Systems   387
       10.9.2 Structured Light Methods   387
       10.9.3 Moiré Fringe Methods   387
       10.9.4 Shape from Shading Methods   388
       10.9.5 Passive Stereoscopic Methods   389
       10.9.6 Active Stereoscopic Methods   389
   10.10 GEOMETRIC MODELING FOR COMPUTER VISION   389
       10.10.1 Wireframe Models   390
       10.10.2 Set-Theoretic Modelling   390
       10.10.3 Boundary Representation   391
       10.10.4 Desirable Model Properties for Vision   392
   10.11 LINE LABELING   392
       10.11.1 Labelling an Image   395
   10.12 HOUGH TRANSFORMS FOR EDGE LINKING (LINE FINDING)   396
       10.12.1 Line-Finding in Image   396
       10.12.2 To Detect Straight Lines in an Image, We Do   397
   10.13 RELAXATION LABELLING   399
   10.14 STATISTICAL RELAXATION TECHNIQUE   400
   10.15 EDGE DETECTION   401
   10.16 EDGE FOLLOWING   403
   10.17 REGION DETECTION   404
   10.18 RECONSTRUCTION OF OBJECTS   406
   SUMMARY   407
   EXERCISE   409

11. COMPUTER VISION REPRESENTATION   411-439
   11.1 REPRESENTATION AND RECOGNITION   411
       11.1.1 Generalised Cylinders   412
       11.1.2 Aspect Graphs   412
       11.1.3 Skeleton Representations   413
   11.2 OBJECT RECOGNITION   413
       11.2.1 Model-Based Recognition   414
       11.2.2 Geometric Invariants   415
       11.2.3 Recognition using Invariants   416
       11.2.4 Invariants   417
       11.2.5 Invariant Measures   418
   11.3 PATTERN RECOGNITION   419
       11.3.1 Template Matching   419
           11.3.1.1 Pixel Level Template Matching   419
           11.3.1.2 High Level Template Matching   419
       11.3.2 Hough Transforms   420
       11.3.3 Extended Gaussian Images   423
   11.4 MODEL BASED OBJECT RECOGNITION   424
       11.4.1 Tree Search Methods   424
       11.4.2 The Oshima and Shirai Method (1979)   425
       11.4.3 The 2DPO Method (1983)   428
       11.4.4 The ACRONYM Method (1979)   428
       11.4.5 The Grimson and Lozano-Perez Method (1984)   430
       11.4.6 Related Grimson and Lozano-Perez Methods   432
       11.4.7 The Faugeras and Hebert Method (1983)   433
   11.5 RELAXATION LABELING METHODS   434
   11.6 GRAPH SEARCHING   435
   SUMMARY   438
   EXERCISE   438

12. COMMON SENSE   440-453
   12.1 INTRODUCTION   440
   12.2 COMMON SENSE SYSTEM   446
       12.2.1 The Physical World — Qualitative Physics   446
       12.2.2 Modeling the Qualitative World   446
       12.2.3 Reasoning with Qualitative Information   447
   12.3 COMMON SENSE ONTOLOGIES   447
       12.3.1 Time   447
       12.3.2 Space   448
       12.3.3 Materials   448
       12.3.4 Memory Organization   449
   12.4 FORMALIZED NONMONOTONIC REASONING   450
   12.5 SOME FORMALIZATIONS AND THEIR PROBLEMS   450
   12.6 ABILITY, PRACTICAL REASON AND FREE WILL   451
   12.7 THREE APPROACHES TO KNOWLEDGE AND BELIEF   452
   SUMMARY   452
   EXERCISE   453

13. PROGRAMMING LANGUAGES   454-523
   13.1 INTRODUCTION   454
   13.2 LISP PROGRAMMING   455
   13.3 BASIC DATA OBJECTS—LISTS AND ATOMS   457
   13.4 LISP PRIMITIVES OR FUNCTIONS   460
       13.4.1 List Selectors   463
       13.4.2 List Constructors   465
   13.5 USER-DEFINED FUNCTIONS IN LISP   468
       13.5.1 Predicates and Conditionals   471
       13.5.2 Simple Branching Primitives   476
       13.5.3 General Branching Primitives   477
   13.6 RECURSION AND ITERATION   479
       13.6.1 Repeating by Recursion   479
       13.6.2 Repeating by Iteration   480
   13.7 MISCELLANEOUS PRIMITIVES   482
   13.8 PROPERTY LISTS AND ARRAYS   484
   13.9 PROLOG   487
   13.10 FEATURES OF PROLOG   488
   13.11 STRUCTURE OF PROLOG PROGRAM   490
   13.12 LISTS IN PROLOG   494
   13.13 CONTROLLING EXECUTION IN PROLOG   495
   13.14 STARTING PROLOG   495
   13.15 OTHER AI PROGRAMMING LANGUAGES   511
       13.15.1 Small Talk   511
       13.15.2 Example of a SMALLTALK Program   512
       13.15.3 POP (POP-11)   513
   SUMMARY   516
   EXERCISE   516

Chapter 1
OVERVIEW OF ARTIFICIAL INTELLIGENCE

1.1 INTRODUCTION
Artificial Intelligence is a broad field, i.e., different things to different people. The main objective of AI is to make computers do tasks that require human intelligence. People want to automate human intelligence for the following reasons:
(i) To understand human intelligence better.
(ii) To build smarter programs.
(iii) To obtain useful techniques for solving difficult problems.
To a common man, ARTIFICIAL INTELLIGENCE is two words whose dictionary meanings are as follows:
Artificial: Made as a copy of something natural.
Intelligence: The ability to gain and apply knowledge and skills.
However, for a technical person artificial intelligence is a very wide field of science and engineering which makes intelligent machines, and especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence. Complexity in defining artificial intelligence arises because the word 'intelligence' is ill-defined. For a technical person, intelligence is something more than what a dictionary defines in such a simple way. For him, "intelligence is the computational part of the ability to achieve goals in the world." Having stated this, the complexity of intelligence, and in a way of the subject of artificial intelligence, begins. Defining intelligence in this way, we have related intelligence to the human world or to human intelligence. Till today there is not a solid definition of intelligence that does not depend on relating it to human intelligence. We cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others. Also, what may appear intelligent to one person may not be so for another person.
Varying kinds and degrees of intelligence occur in people, many animals and some machines. However, AI is not purely about simulating human intelligence. Sometimes we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Following is a list of tasks that require intelligence:
(i) Speech generation and understanding
(ii) Pattern recognition
(iii) Mathematical theorem proving
(iv) Reasoning
(v) Motion in obstacle-filled space
Summarizing, we can distinguish between intelligence and artificial intelligence as shown in Table 1.1.

Table 1.1
   Intelligence | Artificial Intelligence
1. Natural. | Programmed by human beings.
2. Increases with experience and is also hereditary. | Nothing is hereditary, but systems do learn from experience.
3. Highly refined; no electricity from outside is required to generate output. Rather, knowledge is good for intelligence. | It resides in a computer system, and electrical energy is needed to get output. A knowledge base is required to generate output.
4. No one is an expert. We can always get a better solution from another human being. | Expert systems are made which combine the experiences and ideas of many individual persons.
5. Intelligence increases by supervised or unsupervised teaching. | We can increase AI's capabilities by other means apart from supervised and unsupervised teaching.

The confusion about the word "intelligence", its ill definition and its much broader sphere has led people to divide AI into two classes:
(i) Strong AI
(ii) Weak AI
Strong AI makes the bold claim that computers can be made to think on a level at least equal to humans. Strong AI research deals with the creation of some form of computer-based artificial intelligence that can truly reason and solve problems. People advocating strong AI believe that it will eventually lead to computers whose intelligence greatly exceeds that of human beings. In strong AI the programs are themselves the explanations.
Weak AI simply states that some "thinking-like" features can be added to computers to make them more useful tools. Weak AI research deals with the creation of some form of computer-based artificial intelligence which can reason and solve problems in a limited domain. Hence, such a machine would act in some ways as if it were intelligent, but it would not possess true intelligence. Some AI researchers are of the opinion that the goal of AI should be to build machines that help people in their intellectual tasks rather than do these tasks. "Helping" is called weak AI and "doing" is sometimes referred to as strong AI. We have already started reaching the objectives of weak AI, as we shall see in expert systems and speech recognition in a later part of the book. Objectives of strong AI are still to be reached.

1.2 DEFINITIONS OF AI
• "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
• "The branch of computer science that is concerned with the automation of intelligent behaviour." (Luger and Stubblefield, 1993)
Artificial Intelligence has the following properties:
• Systems that think like humans.
• Systems that think rationally.
• Systems that act like humans.
• Systems that act rationally.

(i) Acting Humanly: The Turing Test
• If the response of a computer to an unrestricted textual natural-language conversation cannot be distinguished from that of a human being, then it can be said to be intelligent.
[Illustration: a short exchange in a Turing-test setting. "Hi! Are you a computer?" "No. My name is Mary." "Are you kidding? I'm Hal and I can't even multiply two-digit numbers!"]
• Loebner Prize: current contest for a restricted form of the Turing test.

(ii) Thinking Humanly: Cognitive Modelling
• The method must not just exhibit behaviour sufficient to fool a human judge but must do it in a way demonstrably analogous to human cognition.
• Requires detailed matching of computer behaviour and timing to detailed measurements of human subjects gathered in psychological experiments.
• Cognitive Science: an interdisciplinary field (AI, psychology, linguistics, philosophy, anthropology) that tries to form computational theories of human cognition.

(iii) Thinking Rationally: Laws of Thought
• Formalize "correct" reasoning using a mathematical model (e.g., of deductive reasoning).
• Logicist Program: encode knowledge in formal logical statements and use mathematical deduction to perform reasoning.
• Problems: formalizing common sense knowledge is difficult, and general deductive inference is computationally intractable.

(iv) Acting Rationally: Rational Agents
• An agent is an entity that perceives its environment and is able to execute actions to change it.
• Agents have inherent goals that they want to achieve (e.g., survive, reproduce).
• A rational agent acts in a way to maximize the achievement of its goals.
• True maximization of goals requires omniscience and unlimited computational abilities.
• Limited rationality involves maximizing goals within the computational and other resources available.

Foundations of AI
Many older disciplines contribute to a foundation for artificial intelligence:
- Philosophy: logic, philosophy of mind, philosophy of science, philosophy of mathematics
- Mathematics: logic, probability theory, theory of computability
- Psychology: behaviorism, cognitive psychology
- Computer Science & Engineering: hardware, algorithms, computational complexity theory
- Linguistics: theory of grammar, syntax, semantics

Expert Systems
The discovery that detailed knowledge of the specific domain can help control search and lead to expert-level performance for restricted tasks.
• First expert system, DENDRAL, for interpreting mass spectrogram data to determine molecular structure, by Buchanan, Feigenbaum and Lederberg (1969).
• Early expert systems developed for other tasks:
- MYCIN: diagnosis of bacterial infection (1975)
- PROSPECTOR: found a molybdenum deposit based on geological data (1979)
- R1: configured computers for DEC (1982)

AI Industry
• Development of numerous expert systems in the early eighties.
• Estimated $2 billion industry by 1988.
• Japanese start the "Fifth Generation" project in 1981 to build intelligent computers based on Prolog logic programming.
• MCC established in Austin in 1984 to counter the Japanese project.
• Limitations become apparent; prediction of an AI Winter: brittleness and domain specificity, knowledge acquisition bottleneck.

1.3 IS AUTOMATING INTELLIGENCE POSSIBLE
AI research makes the assumptions that human intelligence can be reduced to the manipulation of symbols and that it does not matter what medium is used to manipulate these symbols.
Many researchers argue that true intelligence can never be achieved by computers but requires some human property which cannot be simulated. The Turing test, however, considered how you would be able to conclude that a machine was really intelligent. The test involves a human communicating with another human and with a computer, each in another room, using a computer terminal for the communication. The first human can ask the other two any questions, including very subjective ones such as "What do you think of this drama?" If the computer answers so well that the first human cannot tell which of the two others is human, then we say that the computer is intelligent. Hence we can say that automating intelligence is really possible.

1.4 MAN VS. COMPUTERS
There are many definitions of AI. Some of these definitions of AI are based on a sharp understanding of the vital difference which exists between man and the computer (which is merely a machine). Basically, computers are machines which obey rules speedily and accurately. On the other hand, human beings work by intuition in a way that even psychologists do not fully understand. However, an interesting feature of AI is that it covers those operations through which computers are made to do things which at the moment are done by humans. With AI the role of computers changes from something useful to something essential. Hence, we define artificial intelligence as:
AI is the study of how to make computers do things which, at the moment, people do better. (Rich and Knight)
The beauty of this definition lies in the fact that we have avoided defining the meaning of either artificial or intelligence. For a layman this definition would be surprising. To him, the computers appear to take over activities which belong to human beings and are beyond the scope of mere machines. Let us discuss this in more detail.

1.4.1 What computers do better than people?
Computers may perform these so-called 'human' activities even more efficiently than most human beings. For example:
(a) Numerical Computation: Computers are without doubt faster and more accurate than humans in numerical computations. Also, the chance of error is almost zero in the case of computers.
(b) Information Storage: Computers can store huge amounts of information, whereas in human beings only a certain amount of knowledge can be stored.
(c) Repetitive Operations: It is well known that computers do not get bored and do not commit mistakes as fatigue sets in, even when they repeat the same process every day. If a computer is used to print out one thousand copies of a document, all will be similar. This is different if a human being produces such a huge number of copies of a single document.
Despite these actions which are superbly performed by the computers, we remain secure in the belief that there are some activities which humans perform better than these machines.

1.4.2 What people can do better than computers?
People have outperformed computers in activities which involve intelligence. We do not just process information. We understand it, make sense out of what we see and hear, and then come out with new ideas. We use common sense to make our way through a world which sometimes appears highly illogical. Common sense knowledge includes knowing what we know vaguely as well as what we know clearly.
For example, if we were asked to recall the phone number of our college or of a good friend, we would search our memory, trying to retrieve the information. But if we were asked to give the phone number of India's Prime Minister, we would not know the answer and would not even try a retrieval. Now, if we were asked the phone number of Tulsidas (writer of the epic Ramcharitmanas), we would know at once that no answer exists, since telephones were not around in Tulsidas's time.
From the above discussion we see that the definition of AI (as given by Rich and Knight) fails to include areas or problems that cannot be solved either by computers or people. As already mentioned, it avoids defining either artificial or intelligence and at the same time provides a good outline of what constitutes artificial intelligence. If people are more intelligent than computers, and if AI tries to improve the performance of computers in activities which people do better, then the goal of AI is to make computers more 'intelligent'. So, the second definition of AI can be:
AI is the part of computer science concerned with designing intelligent computer systems which exhibit the characteristics we associate with intelligence in human behaviour. (Barr and Feigenbaum, 1981-82)
This definition has two major parts: computer solutions for complex problems, and processes that are similar to human reasoning processes. For the first part, regular conventional software is available. The second part of the definition is the distinguishing feature of AI programs. In other words, we can say that with AI the role of computers changes from something useful to something essential. This is why we need AI.
Continuing with the debate, why is the term 'intelligence' reserved for humans and why are computers not considered to be intelligent? Winston (1984) has remarked that "since the exact definition of intelligence has proven to be extremely elusive, the following is a partial list of characteristics intelligence should possess":
• To respond to situations very flexibly
• To make sense out of ambiguous or contradictory messages
• To attach relative importance to different elements of a situation
• To find similarities between situations despite differences which may separate them
• To draw distinctions between situations despite similarities which may link them.
Two situations may look similar on the surface, yet we are able to note the difference and hence adjust our reaction. Though all these abilities come under common sense, they cannot yet be simulated by the computer. Now consider some activities such as:
• What did you eat at a friend's marriage? You cannot list the mental steps required to remember what you ate at the marriage.
• What muscular contractions are necessary to pick up a cup of tea?
• Can we describe the processes of reading and understanding a book?
The research done by cognitive scientists (scientists who study how human beings learn, reason, store knowledge and use it) helps to explain the workings of human intelligence. This, in turn, has helped workers in the field of AI to simulate that intelligence on a computer. An AI technique is a method that exploits knowledge. Workers in AI use many different techniques to make computers more intelligent.
• One commonly used technique is to determine the process used by humans to produce a particular type of intelligent behaviour and then to simulate that process on a computer.
• The other technique, which is used by cognitive scientists, is to determine those processes which produce human intelligence in a given situation. These processes may then be programmed in an attempt to simulate that behaviour. This AI technique is called modeling or simulation. (In fact, a model of intelligent human behaviour is an effort to simulate that behaviour on a computer to determine if the computer will exhibit the same intelligent behaviour as does a human.)
Three important AI techniques are:
(1) Search: It provides a way of solving problems for which no more direct approach is available, as well as a framework into which any direct techniques that are available can be fitted.
(2) Use of knowledge: It provides a way of solving problems by exploiting the structures of the objects that are involved.
(3) Abstraction: It provides a way of separating important features and variations from the many unimportant ones.

1.5 SIMULATION OF SOPHISTICATED AND INTELLIGENT BEHAVIOUR
The link between cognitive science and computer modeling is a continuous process, as shown in fig. (1). Cognitive scientists develop theories of human intelligence, which are programmed into computer models by AI researchers. The computer models are then used to test the validity of these theories. The feedback from the computer models allows the cognitive scientists to refine their theories, which can then be used to implement better models.
Fig. (1): Relationship between cognitive scientists, computer engineers and AI researchers (cognitive scientists develop theories of human intelligence, computer engineers build the computer models, and the models test the validity of the theories).
Naturally the question arises about the importance of the processes required to simulate human intelligence. Is it the goal of AI to simulate intelligent behaviour with a computer (by any means), or is it truly AI only if we simulate intelligence by using the same techniques as a human? There is a difference of opinion in the AI community about this issue. Some scientists believe that the goal of AI is simply to simulate intelligent behaviour on a computer, using any technique which proves to be effective. Others claim that it is not AI when we simulate intelligence using procedures other than those which might be used by humans. So another definition can be:
AI is that branch of computer science which deals with symbolic, non-algorithmic methods of problem solving. (Buchanan and Shortliffe, 1984)
This definition focuses on two different characteristics of computer programs:
(1) Numeric vs Symbolic: Computers were initially designed to process numbers. Consistent research has shown that people think symbolically rather than numerically, and human intelligence is partially based on our mental ability to manipulate symbols rather than just numbers.
(2) Algorithmic vs Non-algorithmic: An algorithm is a step-by-step procedure with well-defined starting and ending points, which is guaranteed to reach a solution to a specific problem. Computer architecture readily lends itself to this step-by-step approach, since conventional computer programs are based on algorithms.
However, most human processes tend to be non-algorithmic, i.e., our mental activities consist of more than just following logical, step-by-step procedures. AI research continues to be devoted to symbolic and non-algorithmic processing techniques in an attempt to emulate human reasoning processes more closely on a computer.
We have already discussed the importance of the simulation technique as far as human intelligence is concerned. One of the goals of AI is to simulate intelligent behaviour with a computer or by any other means. Areas which are related to AI and somewhat overlap with it include engineering (electrical, mechanical and computer science), linguistics, psychology, cognitive science and philosophy. The different areas in which AI is applied are shown in fig. (5).
Fig. (5): Application areas of AI: General Problem Solving, Expert Systems, Natural Language Processing, Computer Vision, Robotics, and Others.
Now we will discuss these areas in detail.

1.5.1 General Problem Solving: It involves solving a broad range of problems, which includes reasoning about physical objects and their relationships to each other as well as reasoning about actions and their consequences. Several specific problems, such as the water jug problem or the travelling salesman problem, or other general problems such as the Tower of Hanoi, the Monkey and Bananas problem or the Missionaries and Cannibals problem, etc., can be taken up with AI machines; these will be described in detail at relevant places. However, the solution may be approximate or exact, depending upon the structure of the problem domain and the knowledge available. General problem solving has proved to be helpful in solving other less severe problems.
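To make the idea of treating such a problem as a search through a state space concrete, the short Common Lisp sketch below solves the water jug problem (a 4-litre and a 3-litre jug, with the goal of leaving exactly 2 litres in the 4-litre jug) by breadth-first search. It is a minimal illustration written for this discussion, not a program from the book; the state encoding, the function names successors and solve-water-jug, and the particular goal amount are assumptions made for the example.

```lisp
;;; Minimal breadth-first state-space search for the water jug problem.
;;; A state is a list (X Y): X = litres in the 4-litre jug, Y = litres in the 3-litre jug.

(defun successors (state)
  "Return all states reachable from STATE by one legal filling or pouring action."
  (let ((x (first state))
        (y (second state)))
    (remove-duplicates
     (list (list 4 y)                        ; fill the 4-litre jug
           (list x 3)                        ; fill the 3-litre jug
           (list 0 y)                        ; empty the 4-litre jug
           (list x 0)                        ; empty the 3-litre jug
           (let ((pour (min x (- 3 y))))     ; pour 4-litre jug into 3-litre jug
             (list (- x pour) (+ y pour)))
           (let ((pour (min y (- 4 x))))     ; pour 3-litre jug into 4-litre jug
             (list (+ x pour) (- y pour))))
     :test #'equal)))

(defun solve-water-jug (&optional (start '(0 0)) (goal-x 2))
  "Breadth-first search from START until the 4-litre jug holds GOAL-X litres.
Returns the list of states along the first path found."
  (let ((frontier (list (list start)))   ; each frontier entry is a reversed path
        (visited  (list start)))
    (loop while frontier do
      (let* ((path  (pop frontier))
             (state (first path)))
        (when (= (first state) goal-x)
          (return (reverse path)))
        (dolist (next (successors state))
          (unless (member next visited :test #'equal)
            (push next visited)
            (setf frontier (append frontier (list (cons next path))))))))))

;; Example call:
;; (solve-water-jug)
;; => ((0 0) (4 0) (1 3) (1 0) (0 1) (4 1) (2 3))
```

Even this toy version shows the two ingredients discussed above: a rule-like description of the legal moves, and a systematic search that explores states until the goal condition is met.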
1.5.2 Expert Systems: These are AI programs which act as an intelligent adviser or consultant in a specific domain or specified areas. Even an inexperienced user can apply the inferencing capability of an expert system to solve problems and make decisions in a domain as well as an expert. Expert systems (fig. 6) do not replace an expert in that domain but make his knowledge and experience available. Further, expert systems are better than books, because knowledge is obtained only after reading books, whereas in an expert system knowledge is readily available at one place. Expert systems have been built that can diagnose faults in aircraft, radars, etc., as well as diagnose diseases and recommend medicines. Expert systems are also available for computer configuration and financial planning.
Fig. (6): Block diagram showing the structure of an expert system (a user interface connected to an inference engine and a knowledge base).

1.5.3 Natural Language Processing: Natural language means the native language, i.e., the language one speaks. The term natural language is used to distinguish it from computer input terms and languages modeled on natural human languages. The computer's language (machine language) is quite complex, and for the computer to understand natural language is equally complex at present. In order to understand natural language it must know how to:
• Generate;
• Understand;
• Translate.
So a natural language computer is a computer which interacts through natural language. Hence it must have a parser, a knowledge representation system and an output translator, as shown in fig. (7).
Understanding natural language: Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. This is because natural language has developed as an effective communication medium between intelligent beings. It can be seen as transmitting a "mental structure" from one brain to another brain which has a highly similar mental structure. This similarity in contexts helps in generating and understanding highly condensed messages. Thus, natural language understanding is a highly complex problem of encoding and decoding. One of the areas of AI is the creation of programs that are capable of understanding and generating natural language. To build such computer systems, both contextual knowledge and the processes for making effective inferences are required. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
Fig. (7): The major components of a natural language processing system (natural language text input, a parser with a dictionary, a knowledge representation system, and output translation to natural language text or computer code).

1.5.4 Computer Vision: Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory and technology for building artificial systems that obtain information from images or multi-dimensional data. The world is composed of three-dimensional objects, but the inputs to the human eye and to computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Computer vision is a technique for a computer system to search beyond the data it is given and to find out about the real world by analyzing and evaluating visual information. By search and pattern matching techniques a computer can pick up key features, and then identify features a human eye can miss. A computer vision system is shown in fig. (8).
Fig. (8): A computer vision system (a camera feeding an analog-to-digital converter, a binary version of the image held in memory, and an AI vision program performing search and pattern matching).
In AI, computer vision is studied as a matrix of intensity values. A visual scene may be encoded by sensors and represented as a matrix of intensity values. These are processed by detectors that search for primitive picture components like line segments, simple curves, corners, etc. These in turn are processed to infer information regarding the objects of the scene. The ultimate aim is to represent the scene by a suitable model. Useful applications of computer vision include aerial photographs, remote sensing, fingerprints, paperless banking and surveillance.
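As a small illustration of how a matrix of intensity values can be processed by a primitive feature detector, the Common Lisp sketch below marks the pixels where brightness changes sharply between horizontal or vertical neighbours, which is roughly where line segments and edges lie. It is only a toy gradient-based detector written for this discussion; the function name edge-map and the threshold value are assumptions, not part of any standard library or of the book's own programs.

```lisp
;;; Toy edge detector over a 2-D matrix of intensity values.
;;; A pixel is marked 1 when the intensity difference to its right or
;;; lower neighbour exceeds THRESHOLD, otherwise 0.

(defun edge-map (image &optional (threshold 50))
  "IMAGE is a 2-D array of grey levels. Returns a same-sized array of 0/1 edge marks."
  (let* ((rows  (array-dimension image 0))
         (cols  (array-dimension image 1))
         (edges (make-array (list rows cols) :initial-element 0)))
    (dotimes (r rows edges)
      (dotimes (c cols)
        (let ((dx (if (< (1+ c) cols)
                      (abs (- (aref image r (1+ c)) (aref image r c)))
                      0))
              (dy (if (< (1+ r) rows)
                      (abs (- (aref image (1+ r) c) (aref image r c)))
                      0)))
          (when (> (max dx dy) threshold)
            (setf (aref edges r c) 1)))))))

;; Example: a dark left half and a bright right half produce a vertical edge.
;; (edge-map #2A((10 10 200 200)
;;               (10 10 200 200)
;;               (10 10 200 200)))
;; => #2A((0 1 0 0) (0 1 0 0) (0 1 0 0))
```

Real vision systems, discussed in Chapters 10 and 11, use far more robust operators, but the principle of turning raw intensity values into symbolic picture components is the same.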
1.5.5 Robotics: Human beings are able to move successfully in their environment and to manipulate things, avoid obstacles, drive a car, etc. These tasks, though performed unconsciously by humans, involve a great deal of complexity. When we try to program machines to perform the same tasks, we observe that this requires many of the capabilities used in solving more intellectually demanding problems. Robotics is the field of engineering devoted to duplicating the physical capabilities of human beings; it also attempts to mimic human mental abilities. Robots differ from AI programs, which usually operate in a computer-simulated world, whereas robots operate in the physical world. As an example, consider making a move in chess. An AI program can search millions of nodes in a game tree without ever having to sense or touch anything in the real world. A complete chess-playing robot, on the other hand, must be capable of grasping pieces, visually interpreting board positions, and carrying out a host of other actions. The field has expanded into a complete study in itself.
The Sony Corp. of Japan has recently designed a 23-inch tall humanoid robot, called "SDR-4X", which has a photographic memory, an extensive vocabulary and a juke-box-like knowledge of music. It is a robot designed to live with people in homes and costs as much as a luxury car. It can carry on simple conversations with its 60,000-word vocabulary, recognize colour, dodge obstacles in its path and even sing once programmed with music and lyrics. This robot can even be programmed to recognize 10 people through their faces, stored as digital images shot with its camera, and their voices, picked up through seven microphones. It also remembers their names. It has sensors on the bottom of its feet to help it walk on uneven surfaces such as carpeting and has been programmed to tumble without falling apart and then get up on its own. A walking robot called Asimo greets visitors at showrooms. Entertainment robots talk with children, play simple games and draw pictures. Robots can help rehabilitation patients who need to strengthen their legs.

1.5.6 Others
As AI is a developing field, it is expected that there are many areas in which AI techniques for the simulation of sophisticated and intelligent behaviour will be used in the near future. For example, game playing is one such area. Game playing requires seeing patterns, making plans, searching combinations, judging alternative moves and learning from experience. These are skills which are also involved in our daily tasks. In many ways, game playing has provided a simple proving ground for many of AI's powerful ideas. Therefore, game playing has dominated as a key research area of AI. Today we can buy machines that can play master-level chess for a few thousand rupees. Another example is robot soccer. Like any soccer team, soccer-playing robots are specialized as goal tenders, strikers and defenders. IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match in 1997.
Another field which is fast developing is that of speech recognition. Speech recognition is the computer's ability to accept spoken words as dictation or to follow voice commands by using software. We can now instruct computers using speech, and more such systems are already being developed.
The above is a brief discussion of the various areas of AI. In the next three articles, we will discuss natural languages, automated reasoning and visual perception in a more detailed fashion. Complete details form the various chapters in the later part of the book. There are many areas which we have still not identified. However, one thing is for sure: the areas of AI are going to expand dramatically in the near future.

1.6 HOW AI TECHNIQUES HELP COMPUTERS TO BE SMARTER?
Computers cannot have experience, but they can study and learn. For this they should have knowledge. But what is knowledge?
Knowledge is more than simply data or information. Knowledge consists of:
(i) Facts;
(ii) Concepts;
(iii) Theories;
(iv) Procedures; and
(v) Relationships between them.
All these entities form a knowledge base. When AI techniques (search techniques) are applied to this knowledge base, a smarter computer results. This smarter computer can reason, take decisions, make judgements, etc. Fig. (2) shows how computers become intelligent by infusing inferencing capability into them. In other words, given a knowledge base and inferencing capability, the computer can be made a useful tool to enhance the capabilities of human beings.
Fig. (2): AI computing (input questions and problems are answered by a knowledge base combined with an inference capability, producing answers and solutions as output).
In order to understand how a computer becomes smarter, let us first look at the role of the human brain. The human brain addresses the following queries:
• How does a human being store knowledge?
• How does a human being use this knowledge?
• How does a human being learn?
• How does a human being reason?
The study of how these actions are performed collectively is the aim of AI and is called cognitive science. In conventional computing, the computer is given data and is told how to solve a problem. In AI computing, by contrast, the computer is given knowledge about a domain and some inferencing capability is added. The computer is then told what the problem is, but not how to solve it. AI computing is capable of explaining how a particular conclusion was reached and why requested information was needed during the consultation. Hence it gives the user a chance to assess and understand the system's reasoning ability. The differences between conventional and AI computing are depicted in Table 1.2.

Table 1.2: Comparison of Conventional and AI Computing
Dimension | Conventional Computing | AI Computing
Processing | Primarily algorithmic | Includes symbolic conceptualization
Nature of input | Must be complete | Can be incomplete
Search approach | Frequently based on algorithms | Frequently uses rules and heuristics
Explanation | Usually not provided | Provided
Focus | Data, information | Knowledge
Maintenance and update | Usually difficult | Relatively easy; changes can be made in self-contained modules
Reasoning capability | No | Yes

Once a knowledge base of facts and their logical associations is built up, some means of using it to solve problems must be developed. How does AI software reason with, or infer from, this knowledge base? The basic techniques used to interact with the collected knowledge base are search techniques and pattern matching. Given some initial start-up information, the AI software searches the knowledge base looking for specific conditions or patterns. The computer literally hunts around until it finds the best answer it can give, based on the knowledge it has.
While AI problem solving does not take place directly by algorithmic processes (algorithms, of course, are used to implement the search processes), the programs perform symbolic manipulations which cause problems to be solved in a way which more closely approximates the way a human brain works. Virtually all digital computers are algorithmic in their operation, based on the von Neumann concept: instructions stored in memory are executed sequentially to perform some desired operation. The question is how symbolic processing is done on the kind of machine required for AI computing. The obvious answer is that algorithmic software is written in such a way as to permit the representation and manipulation of symbols.
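The short Common Lisp sketch below illustrates this point on an ordinary algorithmic machine: facts and rules are stored as lists of symbols, and a simple matcher repeatedly applies any rule whose conditions are already present in the knowledge base until nothing new can be inferred. It is a minimal forward-chaining illustration written for this discussion, not the book's own program; the fact and rule formats, the single variable ?x, and the names *facts*, *rules* and infer-all are assumptions made for the example.

```lisp
;;; A tiny forward-chaining inference sketch.
;;; Facts are lists such as (BIRD TWEETY); each rule has the form
;;; (conditions conclusion), where the variable ?X stands for an individual.

(defparameter *facts* '((bird tweety) (has-feathers tweety) (fish nemo)))

(defparameter *rules*
  '((((bird ?x) (has-feathers ?x)) (can-fly ?x))
    (((fish ?x))                   (can-swim ?x))))

(defun bind (pattern individual)
  "Return PATTERN with every occurrence of ?X replaced by INDIVIDUAL."
  (substitute individual '?x pattern))

(defun individuals (facts)
  "Collect the individuals mentioned as the second element of each fact."
  (remove-duplicates (mapcar #'second facts)))

(defun apply-rules (facts rules)
  "Return FACTS extended by every rule conclusion whose conditions all hold."
  (dolist (rule rules facts)
    (dolist (who (individuals facts))
      (let ((conditions (mapcar (lambda (c) (bind c who)) (first rule)))
            (conclusion (bind (second rule) who)))
        (when (and (every (lambda (c) (member c facts :test #'equal)) conditions)
                   (not (member conclusion facts :test #'equal)))
          (push conclusion facts))))))

(defun infer-all (facts rules)
  "Keep applying the rules until no new facts are produced."
  (let ((new (apply-rules facts rules)))
    (if (= (length new) (length facts))
        facts
        (infer-all new rules))))

;; (infer-all *facts* *rules*)
;; adds (CAN-FLY TWEETY) and (CAN-SWIM NEMO) to the original facts.
```

Everything the program manipulates is a symbol or a list of symbols; the underlying machine still executes an ordinary algorithm, which is exactly the point made above.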
1.7 BRIEF HISTORY OF AI
Disciplines that have contributed ideas, viewpoints and techniques to AI are many, and they can be considered to be the foundations of AI. They are philosophy, mathematics, economics, neuroscience, psychology, engineering (electrical, mechanical, computer and control), cognitive science and linguistics. Philosophers of 400 B.C. made AI conceivable by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what action is to be taken. Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements. They also set the groundwork for understanding computation and reasoning about algorithms. Economists formalized the problem of making decisions that maximize the expected outcome to the decision-maker. Psychologists adopted the idea that humans and animals can be considered information-processing machines. Linguists showed that language use fits into this model. Computer engineers provided the artifacts that make AI applications possible; the great advances in the speed and memory of computers have made it possible to run AI programs. Control theory has made possible the design of devices that act optimally on the basis of feedback from the environment.
While knowledge of the history of AI is not essential to understand the subject, we study it to interpret current developments. Our approach here will be to concentrate on a very small number of people, events and ideas. The important people associated with AI are Alan Turing, Warren McCulloch, Marvin Minsky, Allen Newell, Herbert Simon, John McCarthy, etc. The most important event in the history of AI is the Dartmouth College summer workshop of 1956, which is considered to be the official birth date of AI. However, we ignore many people, ideas and events that are also important and focus on only three things. The Dartmouth conference and the Chinese Room are discussed in brief, whereas the Turing test will be discussed in detail.
[Footnote: A symbol is a letter, word or number which represents objects, processes and their relationships. Objects can be people, things, ideas, concepts, events or statements of facts. By using symbols, it is possible to create the knowledge base of facts and concepts, and the relationships which exist between them. Then various processes are used to manipulate the symbols to solve a problem.]
The Dartmouth Conference. In the summer of 1956 a two-month workshop was organized at Dartmouth. The attendance list at this workshop reads like a present-day who's who in the field: John McCarthy (creator of LISP), Marvin Minsky (leading AI researcher) and Claude Shannon (founder of information theory), along with seven other people, attended this workshop. The workshop did not lead to any new breakthrough, but it did introduce all the major figures (and in another way many disciplines) to each other. The new name for the field, "Artificial Intelligence", was thus coined.

1.7.1 Turing Test
The idea of Artificial Intelligence originated from the historic experiment called the Turing Test. This test provides an answer to the question "Can machines think?" in operational language. The British mathematician Alan Turing is one of the founders of computer science and the father of artificial intelligence.
More than 50 years ago he predicted the advent of "thinking machines". In his time computers were slow. Turing left a benchmark test for an intelligent computer: it must fool a person into thinking that the computer is human. The test he proposed, now known as the TURING TEST, is performed in two phases. In the first phase (fig. 3a), the interrogator isolates himself from a man and a woman. The same questions are asked of both the man and the woman through a neutral medium, say a teletypewriter, and each party is isolated in a separate room to eliminate visual or audible clues. The questions asked include calculations such as the multiplication of big numbers and also some questions on lyrics and English literature. In the second phase (fig. 3b) the man is replaced by a computer without the knowledge of the interrogator. The interrogator cannot distinguish between man, woman or machine; rather, he knows them only as A and B.
Fig. (3): Pictorial representation of the Turing test, (a) first phase and (b) second phase.
Paraphrased in terms of intelligence, the Turing test may be stated as: "If conversation with a computer is indistinguishable from that with a human, the computer is displaying intelligence." In other words, if we cannot tell the difference between a person (natural intelligence) and a machine (artificial intelligence), they must be the same. If the interrogator could not distinguish between a man imitating a woman and a computer imitating a man, the computer succeeded in passing the test. Or, in other words, the goal of the machine was to fool the interrogator into believing that it is a person. If the machine succeeds at this, then it is concluded that the machine can think.
In "Computing Machinery and Intelligence", Alan Turing made the claim that by the year 2000 computers would be able to pass the Turing test at a reasonably sophisticated level. In particular, the average interrogator would not be able to identify the computer correctly more than 70 per cent of the time after a five-minute conversation. AI has not quite lived up to Turing's claims, but quite a bit of progress has been made. He argued that if the machine could successfully pretend to be human to a knowledgeable observer, then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human while the machine would try to fool the observer.
From the above discussion we observe that the important features of the Turing test are:
(i) Without going into the debate on the nature of intelligence, it gives us an objective notion of intelligence, i.e., it gives a standard for determining intelligence.
(ii) It avoids questions such as whether the computer uses the appropriate internal processes, or whether or not the machine is actually conscious of its actions.
(iii) It eliminates any bias in favour of living organisms. The interrogator focuses solely on the content of the answers to questions.
Any machine, computer or system needs to possess the following capabilities to pass the Turing test:
• Natural language processing: to understand natural language and communicate successfully in English.
• Knowledge representation: to store what it knows or hears.
• Automated reasoning: to use the stored information to answer questions.
It must be able to reason out and draw new conclusions.
• Machine learning: to adapt to new changes and to detect patterns.
The Turing test avoided direct physical interaction between the interrogator and the computer. The so-called total Turing test includes a video signal so that the interrogator can test the subject's perceptual abilities, for which the computer also needs:
• Computer vision: to detect and perceive objects.
• Robotics: to manipulate objects and move about.
These six disciplines compose most of AI. The others, as well as these branches, are discussed in the following article.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human. The Turing test has come under severe criticism for the following reasons:
(i) Some people argue that the Turing test makes machine intelligence fit into a human mould. Perhaps machine intelligence is simply different from human intelligence. Therefore, trying to evaluate it in human terms is a fundamental mistake.
(ii) Even if a machine passes the Turing test, we cannot say much about the proficiency it has achieved. It is simply the level of proficiency of the programmer who has programmed it. In other words, the machine is simply demonstrating the intelligence of human beings.
HAL has fooled child language experts into thinking that it is a toddler with an understanding of about 200 words and a 50 word vocabulary which it uses in short, infantile sentences Dr. Goren talks to HAL and reads him stories in much the same way a mother teaches her young child to learn about colours, food and animals. The Israeli hi-tech computer company aims over next 10 years to develop HAL into an “adult” computer program which can do what no computer has ever done before, passing the turning test. If this becomes true the distinction between real flesh and blood, old-fashioned and the new kind, will start to blur. But at present AI computers are far less “intelligent” than the human beings (Fig. 4) Al Computers Human Low High S ~ Conventional Some Superior Computers animals Aliens Fig, (4). Spectrum of intelligence of “Brain Power".
