KMC-101
ARTIFICIAL INTELLIGENCE
(A) ARTIFICIAL
(B) INTELLIGENCE
• ARTIFICIAL : IT MEANS THAT IT IS MADE AS A COPY OF SOMETHING NATURAL.
• INTELLIGENCE : INTELLIGENCE HAS BEEN DEFINED IN MANY DIFFERENT WAYS
SUCH AS IN TERMS OF ONE'S CAPACITY FOR LOGIC, ABSTRACT THOUGHT,
UNDERSTANDING, SELF-AWARENESS, COMMUNICATION, LEARNING, EMOTIONAL
KNOWLEDGE, MEMORY, PLANNING, CREATIVITY AND PROBLEM SOLVING.
• IT CAN ALSO BE MORE GENERALLY DESCRIBED AS THE ABILITY TO PERCEIVE
AND/OR RETAIN KNOWLEDGE OR INFORMATION AND APPLY IT TO ITSELF OR
OTHER INSTANCES OF KNOWLEDGE OR INFORMATION CREATING REFERABLE
UNDERSTANDING MODELS OF ANY SIZE, DENSITY, OR COMPLEXITY, DUE TO ANY
CONSCIOUS OR SUBCONSCIOUS IMPOSED WILL OR INSTRUCTION TO DO SO.
DEFINITIONS OF AI
“ARTIFICIAL INTELLIGENCE IS A BRANCH OF COMPUTER SCIENCE DEALING
WITH THE SIMULATION OF INTELLIGENT BEHAVIOR IN COMPUTERS.” WHEN A
MACHINE CAN MAKE INTELLIGENT DECISIONS, IT CAN BE REFERRED TO AS
ARTIFICIALLY INTELLIGENT. PEOPLE OFTEN USE THE TERMS MACHINE
LEARNING, DEEP LEARNING, AND AI SYNONYMOUSLY. HOWEVER, DEEP
LEARNING IS A SUBSET OF MACHINE LEARNING, AND MACHINE LEARNING IS A
SUBSET OF AI.
DESIGN GOALS OF AI
THE AI SURGE BEGAN WITH SIX MAJOR DESIGN GOALS AS FOLLOWS:
• TEACH MACHINES TO REASON IN ORDER TO PERFORM SOPHISTICATED MENTAL TASKS LIKE
PLAYING CHESS, PROVING MATHEMATICAL THEOREMS, AND OTHERS.
• KNOWLEDGE REPRESENTATION FOR MACHINES TO INTERACT WITH THE REAL WORLD AS HUMANS
DO — MACHINES NEEDED TO BE ABLE TO IDENTIFY OBJECTS, PEOPLE, AND LANGUAGES.
PROGRAMMING LANGUAGE LISP WAS DEVELOPED FOR THIS VERY PURPOSE.
• TEACH MACHINES TO PLAN AND NAVIGATE AROUND THE WORLD WE LIVE IN. WITH THIS, MACHINES
COULD AUTONOMOUSLY MOVE AROUND BY NAVIGATING THEMSELVES.
• ENABLE MACHINES TO PROCESS NATURAL LANGUAGE SO THAT THEY CAN UNDERSTAND
LANGUAGE, CONVERSATIONS AND THE CONTEXT OF SPEECH.
• TRAIN MACHINES TO PERCEIVE THE WAY HUMANS DO: TOUCH, SIGHT, HEARING, TASTE, AND SMELL.
• GENERAL INTELLIGENCE THAT INCLUDES EMOTIONAL INTELLIGENCE, INTUITION, AND CREATIVITY.
AI REBIRTH AND REVOLUTION
THERE HAVE BEEN FOUR SUCCESSIVE CATALYSTS IN THE AI REBIRTH AND REVOLUTION:
• THE DEMOCRATIZATION OF AI KNOWLEDGE THAT BEGAN WHEN WORLD-CLASS RESEARCH
CONTENT WAS MADE AVAILABLE TO THE MASSES.
• DATA AND COMPUTING POWER (CLOUD AND GPU) THAT MADE AI ACCESSIBLE TO THE MASSES
WITHOUT ENORMOUS UPFRONT INVESTMENT OR BEING A MEGA-CORPORATION.
• EVEN WITH ACCESS TO DATA AND COMPUTING POWER, YOU HAD TO BE AN AI SPECIALIST TO
LEVERAGE IT. HOWEVER, IN 2015, THERE WAS A PROLIFERATION OF NEW TOOLS AND
FRAMEWORKS THAT MADE EXPLORING AND OPERATIONALIZING PRODUCTION-LEVEL AI
FEASIBLE TO THE MASSES.
• IN THE PAST TWO YEARS, AI AS A SERVICE HAS TAKEN THIS A STEP FURTHER, ENABLING EASIER
PROTOTYPING, EXPLORATION, AND EVEN BUILDING SOPHISTICATED, USE-CASE-SPECIFIC
AIS IN THE PRODUCT. PLATFORMS LIKE AZURE AI, AWS AI, GOOGLE
CLOUD AI, IBM CLOUD AI, AND MANY MORE PROVIDE AI AS A SERVICE.
HISTORY OF AI
INTELLECTUAL ROOTS OF AI DATE BACK TO THE EARLY STUDIES OF THE NATURE OF
KNOWLEDGE AND REASONING. THE DREAM OF MAKING A COMPUTER IMITATE
HUMANS ALSO HAS A VERY EARLY HISTORY.
ARISTOTLE (384-322 BC) DEVELOPED AN INFORMAL SYSTEM OF SYLLOGISTIC LOGIC,
WHICH IS THE BASIS OF THE FIRST FORMAL DEDUCTIVE REASONING SYSTEM.
EARLY IN THE 17TH CENTURY, DESCARTES PROPOSED THAT BODIES OF ANIMALS ARE
NOTHING MORE THAN COMPLEX MACHINES.
PASCAL IN 1642 MADE THE FIRST MECHANICAL DIGITAL CALCULATING MACHINE.
IN THE 19TH CENTURY, GEORGE BOOLE DEVELOPED A BINARY ALGEBRA
REPRESENTING (SOME) "LAWS OF THOUGHT."
CHARLES BABBAGE & ADA BYRON WORKED ON PROGRAMMABLE MECHANICAL
CALCULATING MACHINES.
IN THE LATE 19TH CENTURY AND EARLY 20TH CENTURY, MATHEMATICAL
PHILOSOPHERS LIKE GOTTLOB FREGE, BERTRAND RUSSELL, ALFRED NORTH
WHITEHEAD, AND KURT GÖDEL BUILT ON BOOLE'S INITIAL LOGIC CONCEPTS TO
DEVELOP MATHEMATICAL REPRESENTATIONS OF LOGIC PROBLEMS.
• THE ADVENT OF ELECTRONIC COMPUTERS PROVIDED A REVOLUTIONARY ADVANCE
IN THE ABILITY TO STUDY INTELLIGENCE.
• IN 1943 MCCULLOCH & PITTS DEVELOPED A BOOLEAN CIRCUIT MODEL OF THE BRAIN.
THEY WROTE THE PAPER “A LOGICAL CALCULUS OF THE IDEAS IMMANENT IN NERVOUS
ACTIVITY”, WHICH EXPLAINED HOW IT IS POSSIBLE FOR NEURAL NETWORKS TO
COMPUTE.
• MARVIN MINSKY AND DEAN EDMONDS BUILT THE SNARC IN 1951, WHICH IS THE
FIRST RANDOMLY WIRED NEURAL NETWORK LEARNING MACHINE (SNARC STANDS
FOR STOCHASTIC NEURAL-ANALOG REINFORCEMENT COMPUTER). IT WAS A NEURAL
NETWORK COMPUTER THAT USED 3000 VACUUM TUBES AND A NETWORK OF 40
NEURONS.
• IN 1950 TURING WROTE AN ARTICLE ON “COMPUTING MACHINERY AND INTELLIGENCE”
WHICH ARTICULATED A COMPLETE VISION OF AI.
• IN 1956 A FAMOUS CONFERENCE TOOK PLACE AT DARTMOUTH. THE CONFERENCE
BROUGHT TOGETHER THE FOUNDING FATHERS OF ARTIFICIAL INTELLIGENCE FOR THE
FIRST TIME. AT THIS MEETING THE TERM “ARTIFICIAL INTELLIGENCE” WAS ADOPTED.
• BETWEEN 1952 AND 1956, SAMUEL DEVELOPED SEVERAL PROGRAMS FOR PLAYING
CHECKERS. IN 1956, NEWELL & SIMON’S LOGIC THEORIST WAS PUBLISHED. IT IS
CONSIDERED BY MANY TO BE THE FIRST AI PROGRAM. IN 1959, GELERNTER DEVELOPED A
GEOMETRY ENGINE. IN 1961 JAMES SLAGLE (PHD DISSERTATION, MIT) WROTE A SYMBOLIC
INTEGRATION PROGRAM, SAINT. IT WAS WRITTEN IN LISP AND SOLVED CALCULUS
PROBLEMS AT THE COLLEGE FRESHMAN LEVEL. IN 1963, THOMAS EVANS’S PROGRAM
ANALOGY WAS DEVELOPED, WHICH COULD SOLVE IQ-TEST-TYPE ANALOGY PROBLEMS.
• IN 1963, EDWARD A. FEIGENBAUM & JULIAN FELDMAN PUBLISHED COMPUTERS
AND THOUGHT, THE FIRST COLLECTION OF ARTICLES ABOUT ARTIFICIAL
INTELLIGENCE.
• IN 1965, J. ALAN ROBINSON INVENTED A MECHANICAL PROOF PROCEDURE, THE
RESOLUTION METHOD, WHICH ALLOWED PROGRAMS TO WORK EFFICIENTLY WITH
FORMAL LOGIC AS A REPRESENTATION LANGUAGE. IN 1967, THE DENDRAL
PROGRAM (FEIGENBAUM, LEDERBERG, BUCHANAN, SUTHERLAND AT STANFORD)
WAS DEMONSTRATED, WHICH COULD INTERPRET MASS SPECTRA OF ORGANIC
CHEMICAL COMPOUNDS. THIS WAS THE FIRST SUCCESSFUL KNOWLEDGE-BASED
PROGRAM FOR SCIENTIFIC REASONING. IN 1969 THE SRI ROBOT, SHAKEY,
DEMONSTRATED COMBINING LOCOMOTION, PERCEPTION AND PROBLEM SOLVING.
IN THE 1980S AND 1990S, AI RESEARCH BROADENED INTO AREAS SUCH AS:
• INTELLIGENT TUTORING,
• CASE-BASED REASONING, AND
• UNCERTAIN REASONING.
REACTIVE MACHINES
THE MOST BASIC TYPE OF AI. AN EXAMPLE IS IBM’S DEEP BLUE
SUPERCOMPUTER. THIS IS THE SAME COMPUTER THAT BEAT THE WORLD’S THEN
GRANDMASTER GARRY KASPAROV. THE AI TEAMS DO NOT USE ANY TRAINING SETS
TO FEED THE MACHINES, NOR DO THE LATTER STORE DATA FOR FUTURE
REFERENCE.
LIMITED MEMORY
THESE MACHINES CAN USE PAST EXPERIENCE TO INFORM DECISIONS. SELF-
DRIVEN CARS ARE THE PERFECT EXAMPLE. THESE MACHINES ARE FED WITH DATA
AND ARE TRAINED WITH OTHER CARS’ SPEED AND DIRECTION, LANE MARKINGS,
AND TRAFFIC SIGNALS OVER TIME.
THEORY OF MIND
WE ARE NOT THERE YET. THEORY OF MIND IS THE CONCEPT WHERE THE BOTS
WILL UNDERSTAND HUMAN EMOTIONS AND THOUGHTS AND ADJUST THEIR
BEHAVIOR ACCORDING TO THE
REQUIREMENT.
SELF-AWARENESS
THESE MACHINES ARE THE EXTENSION OF THE CLASS III TYPE OF AI. IT IS ONE
STEP AHEAD OF UNDERSTANDING HUMAN EMOTIONS: MACHINES THAT HAVE A
SENSE OF SELF. THIS IS FAR BEYOND WHERE WE
STAND TODAY.
TYPES OF AI
STRONG AI AIMS TO BUILD MACHINES THAT CAN TRULY REASON AND SOLVE
PROBLEMS THE WAY HUMANS DO. WEAK AI, BY CONTRAST, TARGETS NARROW
TASKS, AND MOST OF THE AI APPLICATIONS WE USE TODAY HAVE WEAK AI AS
THE TECHNOLOGY BEHIND THEM: THE SIRI APP, THE SUGGESTIONS THAT APPEAR
WHILE SEARCHING ON GOOGLE, AMAZON’S ALEXA, AND THE LIST
CAN GO ON.
MARKETING
AI HAS INFLUENCED THE MARKETING SECTOR IN THE MOST PHENOMENAL WAY
POSSIBLE. THERE WAS A TIME WHEN PEOPLE USED TO STEER CLEAR OF
MARKETING GIMMICKS DUE TO A LACK OF TRUST. HOWEVER, TIMES HAVE
CHANGED. RETAIL BUSINESSES HAVE FOUND A SUBTLE WAY OF MARKETING
THESE DAYS. AI-POWERED RECOMMENDATION SYSTEMS ARE QUITE APT AT
MAKING SUGGESTIONS THAT ARE TOO GOOD TO BE IGNORED: THE
AMAZON SUGGESTIONS TO BUY A PRODUCT BASED ON YOUR PREVIOUS
PURCHASES, NETFLIX MOVIE RECOMMENDATIONS, AND WALMART’S STRATEGY
OF PLACING BEER AND DIAPERS TOGETHER AFTER IDENTIFYING
THE PATTERNS OF FREQUENT BUYERS. THESE ARE ALL MARKETING STRATEGIES
IMPLEMENTED BY BUSINESSES BASED ON CUSTOMERS’ PURCHASE DATA.
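As a sketch of how such pattern-based suggestions can work, the following pure-Python example counts which items are bought together in a handful of invented baskets and recommends frequent co-purchases. The baskets and the `recommend` helper are hypothetical teaching stand-ins, not any retailer's actual algorithm.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (invented data for illustration).
baskets = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"beer", "chips"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item, top_n=2):
    """Suggest the items most frequently co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("beer"))  # items most often co-purchased with beer
```

A real recommender would work from millions of baskets and add measures like support and confidence, but the core idea of mining co-occurrence patterns is the same.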
BANKING
AI HAS MADE ITS WAY TO BANKING AND HAS BROUGHT DRASTIC CHANGES IN
TERMS OF FRAUD DETECTION, CUSTOMER SUPPORT, IDENTIFYING THE LIKELY
DEFAULTERS OF CREDIT PAYMENTS, ETC. BASED ON SALARY, AGE, AND
PREVIOUS CREDIT CARD HISTORY, REPUTED BANKS USE THE DATA TO
PREDICT LIKELY DEFAULTERS BEFORE THEY ISSUE CREDIT CARDS.
ALSO, THE TOP BANKS RELY ON AI AND DEEP LEARNING TECHNOLOGIES TO
DETECT FRAUDULENT PRACTICES IN CUSTOMERS’ PAST TRANSACTIONS
AND THEN PREVENT THEM BY TAKING APPROPRIATE MEASURES WELL IN
ADVANCE.
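The default-prediction idea described above can be sketched as a tiny logistic scoring function. The applicant data and the weights below are invented for illustration; a real bank would learn such weights from historical repayment data rather than set them by hand.

```python
import math

# Invented applicant features: (salary in $1000s, age, past missed payments).
applicants = {
    "A": (25, 22, 4),
    "B": (90, 45, 0),
    "C": (40, 30, 1),
}

# Hypothetical hand-set weights standing in for a trained model:
# lower salary, younger age, and more missed payments raise default risk.
W_SALARY, W_AGE, W_MISSED, BIAS = -0.04, -0.03, 0.9, 1.0

def default_probability(salary, age, missed):
    """Logistic score: squashes a weighted sum into a 0..1 risk estimate."""
    z = BIAS + W_SALARY * salary + W_AGE * age + W_MISSED * missed
    return 1 / (1 + math.exp(-z))

for name, feats in applicants.items():
    risk = default_probability(*feats)
    print(name, round(risk, 2), "flag" if risk > 0.5 else "ok")
```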
FINANCE
THE FINANCE SECTOR IS THRIVING AS IT RELIES ON DATA SCIENTISTS TO
MAKE PREDICTIONS THAT DICTATE FINANCIAL DEALINGS AND STOCK
MARKET TRADING.
THE MACHINES ARE FED WITH A HUMONGOUS AMOUNT OF DATA THAT THEY
PROCESS WITHIN A SHORT SPAN OF TIME, IDENTIFY THE PATTERNS, PROVIDE
INSIGHTS, AND THEN MAKE PREDICTIONS.
AS THERE IS LITTLE SCOPE FOR ERROR, FINANCIAL ORGANIZATIONS
DEPEND ON THE MACHINE-GENERATED PREDICTIONS TO IMPROVE STOCK
MARKET TRADING AND PROFITS.
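One simple, illustrative instance of "identify the patterns, then predict" is a moving-average crossover on price data. The prices below are invented, and this is a teaching sketch of pattern detection, not a trading strategy.

```python
# Invented daily closing prices for illustration.
prices = [110, 108, 106, 104, 103, 102, 104, 107, 111, 115]

def moving_average(series, window):
    """Average of the last `window` values at each position (None until filled)."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

short = moving_average(prices, 3)
long = moving_average(prices, 5)

# A crossover signal: the short-term average rising above the long-term one,
# a classic (and very crude) pattern used to flag an upward trend.
signals = [
    i for i in range(1, len(prices))
    if short[i - 1] is not None and long[i - 1] is not None
    and short[i - 1] <= long[i - 1] and short[i] > long[i]
]
print(signals)  # days on which the short average crosses above the long one
```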
AGRICULTURE
• AGRICULTURE HAS BEEN ONE OF THE OLDEST FORMS OF OCCUPATION IN THE
WORLD. FARMERS THESE DAYS USE THE TRENDS IN AI FOR IMPROVING
AGRICULTURAL ACCURACY AND PRODUCTIVITY.
• A BERLIN-BASED FIRM PEAT DEVELOPED AN AGRICULTURAL APP CALLED
PLANTIX. THIS APP CAN PREDICT THE NUTRIENT DEFECTS AND FERTILITY ISSUES
OF THE SOIL JUST FROM THE IMAGES. WHAT’S MORE, THE APP ALSO SUGGESTS
SOLUTIONS AND SOIL RESTORATION TECHNIQUES. THE START-UP ALSO CLAIMS
THAT THE APP IS EFFICIENT IN MAKING THE PREDICTIONS WITH 95% ACCURACY.
HEALTHCARE
AI IN HEALTHCARE DEPENDS ON TRUST
BETWEEN THE SYSTEM AND THE PEOPLE WHO BUILD AND USE IT.
WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?
AI DEPENDS ON LABELED AND UNLABELED DATA AS WELL AS THE SYSTEMS
THAT STORE AND ACCESS IT. THE AVAILABILITY OF DATA AND THE SPEED AT
WHICH TODAY'S COMPUTERS CAN PROCESS IT ARE REASONS WHY AI IS
EXPLODING TODAY. AI SYSTEMS ARE REALLY GOOD AT CLASSIFYING,
CATEGORIZING, AND PARTITIONING MASSIVE AMOUNTS OF DATA TO MAKE THE
MOST RELEVANT PIECES AVAILABLE FOR HUMANS TO ANALYZE AND MAKE
DECISIONS. ENGINEERS MUST CONSIDER THE DATA ITSELF--PROVENANCE,
SECURITY, QUALITY, AND ALIGNING TEST AND TRAINING DATA--AND THE
HARDWARE AND SOFTWARE SYSTEMS THAT SUPPORT THAT DATA.
ONE AI, MANY ALGORITHMS. WHEN WE TALK ABOUT AI, ML, AND DEEP
LEARNING, WE ARE REALLY TALKING ABOUT A WHOLE FAMILY OF ALGORITHMS. AI IS
NOT A NEW FIELD, AND MANY OF THE ALGORITHMS IN USE TODAY WERE
DEVELOPED DECADES AGO. WHAT MAKES AN AI SYSTEM DIFFERENT IS THAT IT
PRODUCES A RESPONSE TO NEVER-SEEN-
BEFORE SITUATIONS THAT IS INSIGHTFUL AND HAS A VERY GOOD PROBABILITY OF BEING
CORRECT. WITH A TRADITIONAL SOFTWARE
SYSTEM THAT DOES NOT INCORPORATE AI, YOU CAN BUILD IT IN ISOLATION, TEST IT IN
ISOLATION, AND THEN DEPLOY IT AND BE CERTAIN IT IS GOING TO BEHAVE JUST AS IT DID IN
THE LAB. AN AI SYSTEM DEPENDS ON THE CONDITIONS UNDER WHICH THE AI RUNS AND
WHAT THE AI SYSTEM IS SENSING, AND THIS CONTEXT ADDS ANOTHER LEVEL OF
COMPLEXITY.
TOP 10 EMERGING TECHNOLOGIES FOR 2020
1. AI
2. CLOUD COMPUTING
3. IOT
4. SERVERLESS COMPUTING
5. BIOMETRICS
6. AUGMENTED REALITY/VIRTUAL REALITY
7. BLOCKCHAIN
8. ROBOTICS
9. NATURAL LANGUAGE PROCESSING
10. QUANTUM COMPUTING
EMERGING TECHNOLOGIES (CLOUD
COMPUTING)
CLOUD COMPUTING IS THE DELIVERY OF ON-DEMAND COMPUTING SERVICES, FROM
APPLICATIONS TO STORAGE AND PROCESSING POWER, OVER
THE INTERNET.
CLOUD COMPUTING SERVICES COVER A VAST RANGE OF OPTIONS NOW, FROM THE
BASICS OF STORAGE, NETWORKING, AND PROCESSING POWER THROUGH TO NATURAL
LANGUAGE PROCESSING AND ARTIFICIAL INTELLIGENCE.
EMERGING TECHNOLOGIES (AR/VR)
• AUGMENTED REALITY (AR) IS A BLEND OF THE DIGITAL WORLD AND PHYSICAL
ELEMENTS TO CREATE AN ARTIFICIAL ENVIRONMENT. APPS DEVELOPED USING AR
TECHNOLOGY FOR MOBILE OR DESKTOP BLEND DIGITAL COMPONENTS INTO THE REAL WORLD.
THE FULL FORM OF AR IS AUGMENTED REALITY. EXAMPLE: AR TECHNOLOGY HELPS TO DISPLAY
SCORE OVERLAYS ON TELECASTED SPORTS GAMES AND POP OUT 3D PHOTOS, TEXT MESSAGES,
AND EMAILS.
• VIRTUAL REALITY (VR) IS A COMPUTER-GENERATED SIMULATION OF AN ALTERNATE WORLD OR
REALITY. IT IS USED IN 3D MOVIES AND VIDEO GAMES. IT HELPS TO CREATE SIMULATIONS SIMILAR
TO THE REAL WORLD AND "IMMERSE" THE VIEWER USING COMPUTERS AND SENSORY DEVICES
LIKE HEADSETS AND GLOVES. APART FROM GAMES AND ENTERTAINMENT, VIRTUAL REALITY IS
ALSO USED FOR TRAINING, EDUCATION, AND SCIENCE. THE FULL FORM OF VR IS VIRTUAL REALITY.
ETHICS OF ARTIFICIAL INTELLIGENCE
• TRANSPARENCY : SOME ORGANIZATIONS DEVELOPING AI ARE OPEN SOURCE AND
TRANSPARENT IN THEIR EFFORTS. BEN GOERTZEL AND DAVID HART CREATED OPENCOG AS AN
OPEN-SOURCE FRAMEWORK FOR AI DEVELOPMENT, AND OPENAI IS A RESEARCH
COMPANY CREATED BY ELON MUSK, SAM ALTMAN AND OTHERS TO DEVELOP OPEN-SOURCE AI
THAT BENEFITS HUMANITY.
• AUTONOMOUS VEHICLES : WITH SELF-DRIVING CARS ON THE ROAD, THERE IS
DEBATE AS TO THE LEGAL LIABILITY OF THE RESPONSIBLE PARTY IF THESE CARS GET
INTO ACCIDENTS. IN ONE REPORT WHERE A DRIVERLESS CAR HIT A PEDESTRIAN, THE
DRIVER WAS INSIDE THE CAR BUT THE CONTROLS WERE FULLY IN THE HANDS OF THE
COMPUTER. THIS LED TO A DILEMMA OVER WHO WAS AT FAULT FOR THE ACCIDENT.
ETHICS OF ARTIFICIAL INTELLIGENCE
• WEAPONIZATION OF ARTIFICIAL INTELLIGENCE : SOME EXPERTS AND ACADEMICS HAVE
QUESTIONED THE USE OF ROBOTS FOR MILITARY COMBAT, ESPECIALLY WHEN SUCH ROBOTS
ARE GIVEN SOME DEGREE OF AUTONOMY. ON OCTOBER 31, 2019, THE UNITED STATES
DEPARTMENT OF DEFENSE’S DEFENSE INNOVATION BOARD PUBLISHED THE DRAFT OF A
REPORT RECOMMENDING PRINCIPLES FOR THE ETHICAL USE OF ARTIFICIAL INTELLIGENCE BY
THE DEPARTMENT OF DEFENSE THAT WOULD ENSURE A HUMAN OPERATOR WOULD ALWAYS BE
ABLE TO LOOK INTO THE ‘BLACK BOX’ AND UNDERSTAND THE KILL-CHAIN PROCESS.
A HISTORY OF DATA : DATA IS PART OF THE FABRIC OF LIFE AND SOCIETY, AND
HAS BEEN FOR A LONG TIME. THE HISTORY OF DATA IS A LONG STORY DETAILING
HOW PEOPLE HAVE RECORDED, STORED, AND USED INFORMATION.
WHAT IS A DATA ACQUISITION SYSTEM?
SENSOR: A DEVICE THAT CONVERTS A PHYSICAL PHENOMENON INTO A MEASURABLE
ELECTRICAL SIGNAL, SUCH AS A THERMOCOUPLE.
SIGNAL CONDITIONER: THIS IS A DEVICE THAT FILTERS THE ANALOG SIGNAL PICKED UP BY SENSORS
BEFORE CONVERTING IT INTO DIGITAL INFORMATION. YOU CAN AMPLIFY THE SIGNAL, ATTENUATE IT,
FILTER IT, CALIBRATE IT OR ISOLATE IT. THE INFORMATION OBTAINED FROM REALITY CAN BE TOO
WEAK OR TOO NOISY TO BE USED DIRECTLY.
ANALOG TO DIGITAL SIGNAL CONVERTER: IT IS THE KEY TO ANY DATA ACQUISITION PROCESS. IT IS A
CHIP THAT TRANSFORMS THE SIGNAL CAPTURED FROM REALITY INTO INFORMATION THAT CAN BE
READ BY A COMPUTER FOR STORAGE AND
ANALYSIS.
DATA MINING
DATA MINING IS THE STEP IN THE KNOWLEDGE DISCOVERY PROCESS IN WHICH
PATTERNS ARE EXTRACTED ONCE THE DATA
ARE PREPARED FOR MINING. THE DATA MINING STEP MAY INTERACT WITH THE USER
OR A KNOWLEDGE BASE, AND THE INTERESTING PATTERNS ARE PRESENTED FOR
EVALUATION. HOWEVER, IN INDUSTRY, IN MEDIA, AND IN THE RESEARCH MILIEU, THE TERM
DATA MINING IS OFTEN USED TO REFER TO THE ENTIRE KNOWLEDGE DISCOVERY PROCESS
(PERHAPS BECAUSE THE TERM IS SHORTER THAN KNOWLEDGE DISCOVERY FROM DATA).
THEREFORE, WE ADOPT A BROAD VIEW OF DATA MINING FUNCTIONALITY: DATA MINING IS THE
PROCESS OF DISCOVERING INTERESTING PATTERNS AND KNOWLEDGE FROM LARGE AMOUNTS
OF DATA. THE DATA SOURCES CAN INCLUDE DATABASES, DATA WAREHOUSES, THE WEB, OTHER
INFORMATION REPOSITORIES, OR DATA THAT ARE STREAMED INTO THE SYSTEM DYNAMICALLY.
DATA ACQUISITION SYSTEM
MODERN DIGITAL DATA ACQUISITION SYSTEMS CONSIST OF FOUR ESSENTIAL
COMPONENTS THAT FORM THE ENTIRE MEASUREMENT CHAIN OF PHYSICS
PHENOMENA:
• SENSORS
• SIGNAL CONDITIONING
• ANALOG-TO-DIGITAL CONVERTER
• COMPUTER WITH DAQ SOFTWARE
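The four-component chain above can be simulated end to end in a few lines. The sensor voltages, the gain of 1000, and the 12-bit, 10 V ADC below are all invented values chosen for illustration.

```python
import math

# Simulated measurement chain: sensor -> signal conditioning -> ADC -> software.

def sensor(t):
    """Pretend thermocouple: a few millivolts, varying with time."""
    return 0.004 + 0.001 * math.sin(t)  # volts

def condition(v, gain=1000.0):
    """Signal conditioner: amplify the weak analog signal into the ADC's range."""
    return v * gain

def adc(v, v_ref=10.0, bits=12):
    """Analog-to-digital converter: quantize 0..v_ref volts into 2**bits levels."""
    levels = 2 ** bits
    code = int(max(0.0, min(v, v_ref)) / v_ref * (levels - 1))
    return code

# Computer with DAQ software: sample, digitize, and log the readings.
samples = [adc(condition(sensor(t))) for t in range(5)]
print(samples)  # integer ADC codes in the range 0..4095
```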
THE PURPOSES OF DATA ACQUISITION
DATA ACQUISITION IMPROVES THE QUALITY AND RELIABILITY OF THE PRODUCTS
PEOPLE USE.
FOR EXAMPLE, WHEN EVALUATING A NEW SUSPENSION DESIGN, CAR MAKERS
OFTEN RELIED ON THE OPINIONS OF TEST DRIVERS AS TO HOW THE SUSPENSION “FELT” TO THEM.
WITH THE INVENTION AND DEVELOPMENT OF DATA ACQUISITION SYSTEMS, WHICH COULD
COLLECT DATA FROM A WIDE VARIETY OF SENSORS, THESE KINDS OF SUBJECTIVE OPINIONS WERE
REPLACED WITH OBJECTIVE MEASUREMENTS.
• DATA PROCESSING STARTS WITH DATA IN ITS RAW FORM AND CONVERTS IT INTO
A MORE READABLE FORMAT (GRAPHS, DOCUMENTS, ETC.), GIVING IT THE FORM
AND CONTEXT NECESSARY TO BE INTERPRETED BY COMPUTERS AND UTILIZED
BY EMPLOYEES THROUGHOUT AN ORGANIZATION.
SIX STAGES OF DATA PROCESSING
3. DATA INPUT : THE CLEAN DATA IS THEN ENTERED INTO ITS DESTINATION
AND TRANSLATED INTO A LANGUAGE THAT IT CAN UNDERSTAND. DATA INPUT IS THE
FIRST STAGE IN WHICH RAW DATA BEGINS TO TAKE THE FORM OF USABLE
INFORMATION.
4. DATA PROCESSING : THE DATA IS PROCESSED FOR INTERPRETATION, OFTEN
USING MACHINE LEARNING ALGORITHMS, THOUGH THE
PROCESS ITSELF MAY VARY SLIGHTLY DEPENDING ON THE SOURCE OF DATA BEING
PROCESSED AND ITS INTENDED USE.
5. DATA OUTPUT : THE DATA IS TRANSLATED INTO A READABLE FORM (GRAPHS, IMAGES,
PLAIN TEXT, ETC.). MEMBERS OF THE COMPANY OR INSTITUTION CAN NOW BEGIN
TO USE THE DATA IN THEIR OWN PROJECTS.
6. DATA STORAGE : AFTER
ALL OF THE DATA IS PROCESSED, IT IS THEN STORED FOR FUTURE USE. WHILE
SOME OF IT MAY BE USED IMMEDIATELY, MUCH OF IT WILL SERVE A PURPOSE LATER.
DATA VISUALIZATION HELPS PEOPLE SEE AND UNDERSTAND PATTERNS IN
DATA. BY USING VISUAL ELEMENTS LIKE CHARTS, GRAPHS, AND MAPS, DATA
VISUALIZATION MAKES IT EASIER
TO ACQUIRE KNOWLEDGE.
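A minimal sketch of the data-processing stages described above, using invented sensor-log strings as the raw input:

```python
# Toy end-to-end run of the data-processing stages, with invented raw data.
raw = ["12.5", "13.0", "", "bad", "12.8"]          # 1. collection: raw strings

clean = []                                          # 2. preparation: drop invalid entries
for item in raw:
    try:
        clean.append(float(item))
    except ValueError:
        pass

database = {"readings": clean}                      # 3. input: load into a destination

summary = {                                         # 4. processing: compute statistics
    "count": len(database["readings"]),
    "mean": sum(database["readings"]) / len(database["readings"]),
}

# 5. output: translate into a readable form
print(f"{summary['count']} readings, mean {summary['mean']:.2f}")

archive = {"raw": raw, "summary": summary}          # 6. storage: keep for future use
```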
MACHINE LEARNING
• REINFORCEMENT LEARNING : AN AGENT LEARNS BY INTERACTING WITH AN
ENVIRONMENT, TAKING ACTIONS THAT MAXIMIZE A
CUMULATIVE REWARD.
GENERAL BLOCK DIAGRAM OF A LEARNING SYSTEM
MACHINE LEARNING
• THE INPUT PROGRAM IS A GENERAL PROGRAM EXECUTED FOR THE SOLUTION OF
A PROBLEM. THE RESULTS ARE REPORTED AS OUTPUT. THESE RESULTS ARE
ALSO STORED BACK TO ACQUIRE KNOWLEDGE FOR FUTURE USE. THIS
SIMPLY MEANS THAT, IF A SIMILAR PROBLEM IS REQUIRED TO BE SOLVED
NEXT TIME, THE RESULTS CAN AUTOMATICALLY BE TAKEN FROM
PREVIOUSLY ACQUIRED KNOWLEDGE. A SYSTEM IS SAID TO BE LEARNING IF
IT NOT ONLY REPEATS THE SAME TASK MORE EFFECTIVELY
BUT ALSO HANDLES SIMILAR TASKS IN A RELATED DOMAIN. LEARNING COVERS A
RANGE OF PHENOMENA:
• KNOWLEDGE ACQUISITION
• SKILL REFINEMENT
• KNOWLEDGE ACQUISITION : THE SYSTEM
EXECUTES ONE PROGRAM AND REMEMBERS THE PROCESS FOR FUTURE USE.
• SKILL REFINEMENT : WITH REPEATED PRACTICE, THE SYSTEM’S SKILLS
GET REFINED.
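The "store results back for future use" behavior described above is essentially memoization, sketched here with an invented stand-in computation:

```python
# The cache of stored results plays the role of the acquired knowledge.
knowledge_base = {}

def solve(problem):
    """General program: solves `problem`, consulting stored results first."""
    if problem in knowledge_base:          # similar problem seen before:
        return knowledge_base[problem]     # reuse previously acquired knowledge
    result = sum(range(problem + 1))       # stand-in for an expensive computation
    knowledge_base[problem] = result       # store the result back for future use
    return result

print(solve(100))   # computed from scratch
print(solve(100))   # answered from stored knowledge
```

Python's `functools.lru_cache` packages the same idea; the explicit dictionary here just makes the "knowledge store" visible.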
LEARNING AGENT
• MODELS
• PROPOSITIONAL AND FOL RULES
• DECISION TREES
• LINEAR SEPARATORS
• NEURAL NETWORKS
• GRAPHICAL MODELS
• COMPLETE THE DESIGN. RUN THE LEARNING ALGORITHM ON THE GATHERED TRAINING SET.
TUNE THE PARAMETERS BY MEASURING
PERFORMANCE ON A SUBSET (CALLED THE VALIDATION SET) OF THE TRAINING SET, OR VIA CROSS-
VALIDATION.
• LEARNING TO RANK: WHEN THE INPUT IS A SET OF OBJECTS AND THE DESIRED
OUTPUT IS A RANKING OF THOSE OBJECTS.
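The hold-out validation set and cross-validation mentioned above can be sketched in pure Python. The dataset and the majority-label "model" below are invented stand-ins for a real learner; only the splitting logic is the point.

```python
import random

# Invented dataset: (x, label) pairs.
data = [(x, x % 2) for x in range(100)]

def accuracy(train, test):
    """Toy 'model': predicts the majority label of the training set."""
    majority = round(sum(label for _, label in train) / len(train))
    return sum(label == majority for _, label in test) / len(test)

random.seed(0)
random.shuffle(data)

# Hold-out validation: reserve a subset of the training data for tuning.
validation, training = data[:20], data[20:]
print("hold-out accuracy:", accuracy(training, validation))

# k-fold cross-validation: every example is used for validation exactly once.
k = 5
fold_size = len(data) // k
scores = []
for i in range(k):
    fold = data[i * fold_size : (i + 1) * fold_size]
    rest = data[: i * fold_size] + data[(i + 1) * fold_size :]
    scores.append(accuracy(rest, fold))
print("cross-validation mean:", sum(scores) / k)
```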
FUNDAMENTAL PROBLEMS IN NLP
• NLP REQUIRES THE SYSTEM TO ANALYZE THE SENTENCE
AND RETRIEVE THE CORRECT MEANING. AMBIGUITY OF
WORDS AND THEIR MEANING IN THEIR RESPECTIVE
CONTEXT IS THE MAJOR PROBLEM IN NLP. FOR EXAMPLE,
“I SAW THE MAN WITH THE TELESCOPE” CAN MEAN EITHER
THAT THE SPEAKER USED THE TELESCOPE OR THAT THE
MAN WAS CARRYING IT.
THE STEPS IN NLP
MORPHOLOGY → SYNTAX → SEMANTICS → PRAGMATICS
THE STEPS IN NLP (CONT.)
• MORPHOLOGY: CONCERNS THE WAY WORDS ARE BUILT UP FROM SMALLER
MEANING-BEARING UNITS (E.G., COME, COME(S), COM(ING)).
• SYNTAX: CONCERNS HOW WORDS ARE PUT TOGETHER TO FORM CORRECT
SENTENCES AND WHAT STRUCTURAL ROLE EACH WORD HAS.
• SEMANTICS: CONCERNS WHAT WORDS MEAN AND HOW THESE MEANINGS
COMBINE IN SENTENCES TO FORM SENTENCE MEANINGS.
• PRAGMATICS: CONCERNS HOW SENTENCES ARE USED IN DIFFERENT
SITUATIONS AND HOW USE AFFECTS THE INTERPRETATION OF THE
SENTENCE.
• DISCOURSE: CONCERNS HOW THE IMMEDIATELY PRECEDING SENTENCES
AFFECT THE INTERPRETATION OF THE NEXT SENTENCE.
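As an illustration of the first three levels above, here is a deliberately naive analysis of one sentence. The suffix rules, the tiny lexicon, and the fixed subject/verb/object positions are all invented for teaching purposes; real NLP systems are far more general.

```python
sentence = "the dogs chased cats"

# MORPHOLOGY: split words into stem + meaning-bearing suffix.
def morphology(word):
    for suffix in ("ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return (word[: -len(suffix)], suffix)
    return (word, "")

tokens = [morphology(w) for w in sentence.split()]
print(tokens)

# SYNTAX: assign a structural role with a tiny invented lexicon.
lexicon = {"the": "DET", "dog": "NOUN", "cat": "NOUN", "chas": "VERB"}
tags = [lexicon.get(stem, "UNK") for stem, _ in tokens]
print(tags)

# SEMANTICS: combine the word meanings into a crude sentence meaning,
# assuming a fixed subject-verb-object word order.
subject, verb, obj = tokens[1][0], tokens[2][0], tokens[3][0]
print({"action": verb, "agent": subject, "patient": obj})
```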
NATURAL LANGUAGE GENERATION
NATURAL LANGUAGE UNDERSTANDING
SPOKEN DIALOGUE SYSTEM
• PHONOLOGY
• MORPHOLOGY
• SYNTAX
• SEMANTICS
• PRAGMATICS
• DISCOURSE
PHONOLOGY
• THIS LEVEL DEALS WITH THE INTERPRETATION OF SPEECH SOUNDS WITHIN AND
ACROSS WORDS. THERE ARE THREE TYPES OF RULES USED IN PHONOLOGICAL
ANALYSIS:
• PHONETIC RULES - FOR SOUNDS WITHIN WORDS
• PHONEMIC RULES - FOR VARIATIONS OF PRONUNCIATION WHEN WORDS ARE SPOKEN TOGETHER
• PROSODIC RULES - FOR FLUCTUATION IN STRESS AND INTONATION ACROSS A SENTENCE
DEEP LEARNING
• DEEP LEARNING IS A MACHINE LEARNING TECHNIQUE THAT TEACHES COMPUTERS
TO PERFORM CLASSIFICATION
TASKS DIRECTLY FROM IMAGES, TEXT, OR SOUND. DEEP LEARNING MODELS CAN
ACHIEVE STATE-OF-THE-ART ACCURACY IN MANY OF THESE
APPLICATIONS.
HOW DEEP LEARNING WORKS
• MOST DEEP LEARNING METHODS USE NEURAL NETWORK ARCHITECTURES, WHICH IS
WHY DEEP LEARNING MODELS ARE OFTEN REFERRED TO AS DEEP NEURAL NETWORKS.
• THE TERM “DEEP” USUALLY REFERS TO THE NUMBER OF HIDDEN LAYERS IN THE NEURAL
NETWORK.
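A minimal forward pass through a network with three hidden layers makes the notion of depth concrete. The weights here are random and untrained, and the layer sizes are arbitrary; this only shows the layered structure, not a trained model.

```python
import random

random.seed(1)

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by ReLU."""
    return [
        relu(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

def make_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

# Three hidden layers between the input and the output: the network's "depth".
sizes = [4, 8, 8, 8, 1]
network = [make_layer(a, b) for a, b in zip(sizes, sizes[1:])]

activations = [0.5, -0.2, 0.1, 0.9]   # an arbitrary input vector
for weights, biases in network:
    activations = layer(activations, weights, biases)
print(activations)                     # the network's single output value
```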