
ARTIFICIAL INTELLIGENCE FOR ENGINEERS

KMC-101
ARTIFICIAL INTELLIGENCE

• THE TERM ‘ARTIFICIAL INTELLIGENCE’ WAS COINED BY MCCARTHY IN 1956 AT
THE DARTMOUTH WORKSHOP.
• THERE ARE TWO IDEAS IN THE TERM:

(A) ARTIFICIAL

(B) INTELLIGENCE
• ARTIFICIAL : IT MEANS THAT IT IS MADE AS A COPY OF SOMETHING NATURAL.
• INTELLIGENCE : INTELLIGENCE HAS BEEN DEFINED IN MANY DIFFERENT WAYS
SUCH AS IN TERMS OF ONE'S CAPACITY FOR LOGIC, ABSTRACT THOUGHT,
UNDERSTANDING, SELF-AWARENESS, COMMUNICATION, LEARNING, EMOTIONAL
KNOWLEDGE, MEMORY, PLANNING, CREATIVITY AND PROBLEM SOLVING.
• MORE GENERALLY, IT CAN BE DESCRIBED AS THE ABILITY TO PERCEIVE OR
RETAIN KNOWLEDGE OR INFORMATION AND TO APPLY IT TO ITSELF OR TO OTHER
KNOWLEDGE OR INFORMATION, CREATING UNDERSTANDING MODELS OF ANY SIZE,
DENSITY, OR COMPLEXITY, IN RESPONSE TO ANY CONSCIOUS OR SUBCONSCIOUS
WILL OR INSTRUCTION TO DO SO.
DEFINITIONS OF AI

ACCORDING TO THE FATHER OF ARTIFICIAL INTELLIGENCE, JOHN MCCARTHY, IT IS
“THE SCIENCE AND ENGINEERING OF MAKING INTELLIGENT MACHINES,
ESPECIALLY INTELLIGENT COMPUTER PROGRAMS”.


DEFINITIONS OF AI

ARTIFICIAL INTELLIGENCE IS A WAY OF MAKING A COMPUTER, A
COMPUTER-CONTROLLED ROBOT, OR SOFTWARE THINK INTELLIGENTLY, IN A
MANNER SIMILAR TO THE WAY INTELLIGENT HUMANS THINK.


DEFINITIONS OF AI

AI MAY BE DEFINED AS THE BRANCH OF COMPUTER SCIENCE THAT IS CONCERNED
WITH THE AUTOMATION OF INTELLIGENT BEHAVIOR. (LUGER – 1993)

THE EXCITING NEW EFFORT TO MAKE COMPUTERS THINK… MACHINES WITH
MINDS, IN THE FULL AND LITERAL SENSE. (HAUGELAND – 1985)

“THE AUTOMATION OF ACTIVITIES THAT WE ASSOCIATE WITH HUMAN THINKING,
ACTIVITIES SUCH AS DECISION MAKING, PROBLEM SOLVING, LEARNING
…” (BELLMAN – 1978)
DEFINITIONS OF AI

• “THE ART OF CREATING MACHINES THAT PERFORM FUNCTIONS THAT REQUIRE
INTELLIGENCE, WHEN PERFORMED BY PEOPLE”. (KURZWEIL – 1990)

• “THE STUDY OF HOW TO MAKE COMPUTERS DO THINGS AT WHICH, AT THE MOMENT,
PEOPLE ARE BETTER”. (RICH AND KNIGHT – 1991)

• THE STUDY OF MENTAL FACULTIES THROUGH THE USE OF COMPUTATIONAL
MODELS. (CHARNIAK AND MCDERMOTT)
AN OVERVIEW TO AI

EVOLUTION OF AI:
“ARTIFICIAL INTELLIGENCE IS A BRANCH OF COMPUTER SCIENCE DEALING
WITH THE SIMULATION OF INTELLIGENT BEHAVIOR IN COMPUTERS.” WHEN A
MACHINE CAN MAKE INTELLIGENT DECISIONS, IT CAN BE REFERRED TO AS BEING
INTELLIGENT, ARTIFICIALLY. WE MOSTLY SEE PEOPLE USING THE TERMS
MACHINE LEARNING, DEEP LEARNING, AND AI SYNONYMOUSLY. HOWEVER, DEEP
LEARNING IS A SUBSET OF MACHINE LEARNING, AND MACHINE LEARNING IS A
SUBSET OF AI. 
EVOLUTION OF AI
DESIGN GOALS OF AI
THE AI SURGE BEGAN WITH SIX MAJOR DESIGN GOALS AS FOLLOWS:
• TEACH MACHINES TO REASON IN ORDER TO PERFORM SOPHISTICATED MENTAL TASKS LIKE
PLAYING CHESS, PROVING MATHEMATICAL THEOREMS, AND OTHERS.
• KNOWLEDGE REPRESENTATION FOR MACHINES TO INTERACT WITH THE REAL WORLD AS HUMANS
DO — MACHINES NEEDED TO BE ABLE TO IDENTIFY OBJECTS, PEOPLE, AND LANGUAGES.
THE PROGRAMMING LANGUAGE LISP WAS DEVELOPED FOR THIS VERY PURPOSE.
• TEACH MACHINES TO PLAN AND NAVIGATE AROUND THE WORLD WE LIVE IN. WITH THIS, MACHINES
COULD AUTONOMOUSLY MOVE AROUND BY NAVIGATING THEMSELVES.
• ENABLE MACHINES TO PROCESS NATURAL LANGUAGE SO THAT THEY CAN UNDERSTAND
LANGUAGE, CONVERSATIONS AND THE CONTEXT OF SPEECH.
• TRAIN MACHINES TO PERCEIVE THE WAY HUMANS DO- TOUCH, FEEL, SIGHT, HEARING, AND TASTE.
• GENERAL INTELLIGENCE THAT INCLUDES EMOTIONAL INTELLIGENCE, INTUITION, AND CREATIVITY.
AI REBIRTH AND REVOLUTION
THERE HAVE BEEN FOUR SUCCESSIVE CATALYSTS IN THE AI REBIRTH AND REVOLUTION:
• THE DEMOCRATIZATION OF AI KNOWLEDGE THAT BEGAN WHEN WORLD-CLASS RESEARCH
CONTENT WAS MADE AVAILABLE TO THE MASSES.
• DATA AND COMPUTING POWER (CLOUD AND GPU) THAT MADE AI ACCESSIBLE TO THE MASSES
WITHOUT ENORMOUS UPFRONT INVESTMENT OR BEING A MEGA-CORPORATION.
• EVEN WITH ACCESS TO DATA AND COMPUTING POWER, YOU HAD TO BE AN AI SPECIALIST TO
LEVERAGE IT. HOWEVER, IN 2015, THERE WAS A PROLIFERATION OF NEW TOOLS AND
FRAMEWORKS THAT MADE EXPLORING AND OPERATIONALIZING PRODUCTION-LEVEL AI
FEASIBLE TO THE MASSES.
• IN THE PAST TWO YEARS, AI AS A SERVICE HAS TAKEN THIS A STEP FURTHER, ENABLING EASIER
PROTOTYPING, EXPLORATION, AND EVEN BUILDING SOPHISTICATED, INTELLIGENT,
USE-CASE-SPECIFIC AIS INTO PRODUCTS. PLATFORMS LIKE AZURE AI, AWS AI, GOOGLE
CLOUD AI, IBM CLOUD AI, AND MANY MORE PROVIDE AI AS A SERVICE.
HISTORY OF AI
INTELLECTUAL ROOTS OF AI DATE BACK TO THE EARLY STUDIES OF THE NATURE OF
KNOWLEDGE AND REASONING. THE DREAM OF MAKING A COMPUTER IMITATE
HUMANS ALSO HAS A VERY EARLY HISTORY.
ARISTOTLE (384-322 BC) DEVELOPED AN INFORMAL SYSTEM OF SYLLOGISTIC LOGIC,
WHICH IS THE BASIS OF THE FIRST FORMAL DEDUCTIVE REASONING SYSTEM.
EARLY IN THE 17TH CENTURY, DESCARTES PROPOSED THAT BODIES OF ANIMALS ARE
NOTHING MORE THAN COMPLEX MACHINES.
PASCAL IN 1642 MADE THE FIRST MECHANICAL DIGITAL CALCULATING MACHINE.
IN THE 19TH CENTURY, GEORGE BOOLE DEVELOPED A BINARY ALGEBRA
REPRESENTING (SOME) "LAWS OF THOUGHT."
HISTORY OF AI
CHARLES BABBAGE & ADA BYRON WORKED ON PROGRAMMABLE MECHANICAL
CALCULATING MACHINES.
IN THE LATE 19TH CENTURY AND EARLY 20TH CENTURY, MATHEMATICAL
PHILOSOPHERS LIKE GOTTLOB FREGE, BERTRAND RUSSELL, ALFRED NORTH
WHITEHEAD, AND KURT GÖDEL BUILT ON BOOLE'S INITIAL LOGIC CONCEPTS TO
DEVELOP MATHEMATICAL REPRESENTATIONS OF LOGIC PROBLEMS.
HISTORY OF AI
• THE ADVENT OF ELECTRONIC COMPUTERS PROVIDED A REVOLUTIONARY ADVANCE
IN THE ABILITY TO STUDY INTELLIGENCE.
• IN 1943 MCCULLOCH & PITTS DEVELOPED A BOOLEAN CIRCUIT MODEL OF BRAIN.
THEY WROTE THE PAPER “A LOGICAL CALCULUS OF IDEAS IMMANENT IN NERVOUS
ACTIVITY”, WHICH EXPLAINED HOW IT IS POSSIBLE FOR NEURAL NETWORKS TO
COMPUTE.
• MARVIN MINSKY AND DEAN EDMONDS BUILT THE SNARC IN 1951, THE FIRST
RANDOMLY WIRED NEURAL NETWORK LEARNING MACHINE (SNARC STANDS FOR
STOCHASTIC NEURAL-ANALOG REINFORCEMENT COMPUTER). IT WAS A NEURAL
NETWORK COMPUTER THAT USED 3000 VACUUM TUBES AND A NETWORK OF 40
NEURONS.
HISTORY OF AI
•IN 1950 TURING WROTE AN ARTICLE ON “COMPUTING MACHINERY AND INTELLIGENCE”
WHICH ARTICULATED A COMPLETE VISION OF AI.
•IN 1956 A FAMOUS CONFERENCE TOOK PLACE IN DARTMOUTH. THE CONFERENCE
BROUGHT TOGETHER THE FOUNDING FATHERS OF ARTIFICIAL INTELLIGENCE FOR THE
FIRST TIME. IN THIS MEETING THE TERM “ARTIFICIAL INTELLIGENCE” WAS ADOPTED.
•BETWEEN 1952 AND 1956, SAMUEL DEVELOPED SEVERAL PROGRAMS FOR PLAYING
CHECKERS. IN 1956, NEWELL & SIMON’S LOGIC THEORIST WAS PUBLISHED. IT IS
CONSIDERED BY MANY TO BE THE FIRST AI PROGRAM. IN 1959, GELERNTER DEVELOPED A
GEOMETRY ENGINE. IN 1961, JAMES SLAGLE (PHD DISSERTATION, MIT) WROTE A SYMBOLIC
INTEGRATION PROGRAM, SAINT. IT WAS WRITTEN IN LISP AND SOLVED CALCULUS
PROBLEMS AT THE COLLEGE FRESHMAN LEVEL. IN 1963, THOMAS EVANS’S PROGRAM
ANALOGY WAS DEVELOPED, WHICH COULD SOLVE IQ-TEST-TYPE ANALOGY PROBLEMS.
HISTORY OF AI
•IN 1963, EDWARD A. FEIGENBAUM & JULIAN FELDMAN PUBLISHED COMPUTERS
AND THOUGHT, THE FIRST COLLECTION OF ARTICLES ABOUT ARTIFICIAL
INTELLIGENCE.
•IN 1965, J. ALLEN ROBINSON INVENTED A MECHANICAL PROOF PROCEDURE, THE
RESOLUTION METHOD, WHICH ALLOWED PROGRAMS TO WORK EFFICIENTLY WITH
FORMAL LOGIC AS A REPRESENTATION LANGUAGE. IN 1967, THE DENDRAL
PROGRAM (FEIGENBAUM, LEDERBERG, BUCHANAN, SUTHERLAND AT STANFORD)
WAS DEMONSTRATED, WHICH COULD INTERPRET MASS SPECTRA OF ORGANIC
CHEMICAL COMPOUNDS. THIS WAS THE FIRST SUCCESSFUL KNOWLEDGE-BASED
PROGRAM FOR SCIENTIFIC REASONING. IN 1969 THE SRI ROBOT, SHAKEY,
DEMONSTRATED COMBINING LOCOMOTION, PERCEPTION AND PROBLEM SOLVING.
HISTORY OF AI

•THE YEARS FROM 1969 TO 1979 MARKED THE EARLY DEVELOPMENT OF
KNOWLEDGE-BASED SYSTEMS.

•IN 1974, MYCIN DEMONSTRATED THE POWER OF RULE-BASED SYSTEMS FOR
KNOWLEDGE REPRESENTATION AND INFERENCE IN MEDICAL DIAGNOSIS AND
THERAPY. KNOWLEDGE REPRESENTATION SCHEMES WERE DEVELOPED. THESE
INCLUDED FRAMES, DEVELOPED BY MINSKY. LOGIC-BASED LANGUAGES LIKE
PROLOG AND PLANNER WERE DEVELOPED.
HISTORY OF AI

• THE 1990'S SAW MAJOR ADVANCES IN ALL AREAS OF AI, INCLUDING THE FOLLOWING:
• MACHINE LEARNING, DATA MINING
• INTELLIGENT TUTORING
• CASE-BASED REASONING
• MULTI-AGENT PLANNING, SCHEDULING
• UNCERTAIN REASONING
• NATURAL LANGUAGE UNDERSTANDING AND TRANSLATION
• VISION, VIRTUAL REALITY, GAMES, AND OTHER TOPICS


HISTORY OF AI
• TODAY’S AI SYSTEMS HAVE BEEN ABLE TO ACHIEVE LIMITED SUCCESS IN SOME OF THESE TASKS.
• IN COMPUTER VISION, THE SYSTEMS ARE CAPABLE OF FACE RECOGNITION
• IN ROBOTICS, WE HAVE BEEN ABLE TO MAKE VEHICLES THAT ARE MOSTLY AUTONOMOUS.
• IN NATURAL LANGUAGE PROCESSING, WE HAVE SYSTEMS THAT ARE CAPABLE OF SIMPLE MACHINE
TRANSLATION.
• TODAY’S EXPERT SYSTEMS CAN CARRY OUT MEDICAL DIAGNOSIS IN A NARROW DOMAIN
• SPEECH UNDERSTANDING SYSTEMS ARE CAPABLE OF RECOGNIZING SEVERAL THOUSAND WORDS OF
CONTINUOUS SPEECH
• PLANNING AND SCHEDULING SYSTEMS HAVE BEEN EMPLOYED IN SCHEDULING
EXPERIMENTS WITH THE HUBBLE TELESCOPE.
• LEARNING SYSTEMS ARE CAPABLE OF TEXT CATEGORIZATION INTO ABOUT 1,000 TOPICS
• IN GAMES, AI SYSTEMS CAN PLAY AT THE GRAND MASTER LEVEL IN
CHESS (WORLD CHAMPION), CHECKERS, ETC.
VARIOUS APPROACHES TO AI
BASED ON THE WAYS THE MACHINES BEHAVE, THERE ARE FOUR TYPES OF
ARTIFICIAL INTELLIGENCE APPROACHES –
1. REACTIVE MACHINES
2. LIMITED MEMORY
3. THEORY OF MIND
4. SELF-AWARENESS.
REACTIVE MACHINES

THESE MACHINES ARE THE MOST BASIC FORM OF AI APPLICATIONS. AN EXAMPLE OF A
REACTIVE MACHINE IS DEEP BLUE, IBM’S CHESS-PLAYING SUPERCOMPUTER, THE SAME
COMPUTER THAT BEAT THE THEN WORLD CHAMPION GARRY KASPAROV. THE AI TEAMS
DO NOT USE ANY TRAINING SETS TO FEED THE MACHINES, NOR DO THE LATTER STORE
DATA FOR FUTURE REFERENCE. BASED ON THE MOVE MADE BY THE OPPONENT, THE
MACHINE DECIDES/PREDICTS THE NEXT MOVE.
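THE REACTIVE IDEA (CHOOSING A MOVE PURELY FROM THE CURRENT POSITION, WITH NO STORED HISTORY) CAN BE SKETCHED WITH A MINIMAX SEARCH ON A TOY TAKE-AWAY GAME. THE GAME RULES AND CODE BELOW ARE ILLUSTRATIVE ONLY; DEEP BLUE'S ACTUAL SEARCH WAS FAR MORE SOPHISTICATED.

```python
def moves(n):
    """Legal moves in a toy take-away game: remove 1 or 2 sticks."""
    return [m for m in (1, 2) if m <= n]

def minimax(n, maximizing):
    """Value of the position for the maximizing player (+1 win, -1 loss).
    Whoever takes the last stick wins. Only the current state is examined;
    nothing is learned or remembered between positions."""
    if n == 0:
        # the previous player took the last stick and therefore won
        return -1 if maximizing else 1
    values = [minimax(n - m, not maximizing) for m in moves(n)]
    return max(values) if maximizing else min(values)

def best_move(n):
    """React to the current position by picking the best look-ahead move."""
    return max(moves(n), key=lambda m: minimax(n - m, False))
```

FROM 4 STICKS THE REACTIVE PLAYER TAKES 1, LEAVING THE OPPONENT ON A LOSING MULTIPLE OF 3; THE DECISION IS RECOMPUTED FROM SCRATCH AT EVERY TURN.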


LIMITED MEMORY

THESE MACHINES BELONG TO THE CLASS II CATEGORY OF AI APPLICATIONS.
SELF-DRIVING CARS ARE THE PERFECT EXAMPLE. THESE MACHINES ARE FED WITH
DATA AND ARE TRAINED, OVER TIME, ON OTHER CARS’ SPEED AND DIRECTION,
LANE MARKINGS, TRAFFIC LIGHTS, CURVES OF ROADS, AND OTHER IMPORTANT
FACTORS.
THEORY OF MIND

THIS IS WHERE WE ARE NOW, STRUGGLING TO MAKE THIS CONCEPT WORK; HOWEVER,
WE ARE NOT THERE YET. THEORY OF MIND IS THE CONCEPT WHERE BOTS WILL BE
ABLE TO UNDERSTAND HUMAN EMOTIONS AND THOUGHTS, AND HOW TO REACT TO
THEM. IF AI-POWERED MACHINES ARE EVER TO MINGLE WITH US AND MOVE AROUND
WITH US, UNDERSTANDING HUMAN BEHAVIOR IS IMPERATIVE, AND REACTING TO
SUCH BEHAVIORS ACCORDINGLY IS THE REQUIREMENT.
SELF-AWARENESS

THESE MACHINES ARE THE EXTENSION OF THE CLASS III TYPE OF AI. THIS IS ONE
STEP AHEAD OF UNDERSTANDING HUMAN EMOTIONS: THE PHASE WHERE AI TEAMS
BUILD MACHINES WITH A SELF-AWARENESS FACTOR PROGRAMMED INTO THEM.
BUILDING SELF-AWARE MACHINES SEEMS FAR-FETCHED FROM WHERE WE STAND
TODAY.
TYPES OF AI

STRONG AI AIMS TO BUILD MACHINES THAT CAN TRULY REASON AND SOLVE
PROBLEMS. THESE MACHINES SHOULD BE SELF-AWARE, AND THEIR OVERALL
INTELLECTUAL ABILITY NEEDS TO BE INDISTINGUISHABLE FROM THAT OF A
HUMAN BEING. EXCESSIVE OPTIMISM IN THE 1950S AND 1960S CONCERNING
STRONG AI HAS GIVEN WAY TO AN APPRECIATION OF THE EXTREME DIFFICULTY OF
THE PROBLEM. STRONG AI MAINTAINS THAT SUITABLY PROGRAMMED MACHINES
ARE CAPABLE OF COGNITIVE MENTAL STATES.


TYPES OF AI

WEAK AI: IT DEALS WITH THE CREATION OF SOME FORM OF COMPUTER-BASED
ARTIFICIAL INTELLIGENCE THAT CANNOT TRULY REASON AND SOLVE PROBLEMS,
BUT CAN ACT AS IF IT WERE INTELLIGENT. WEAK AI HOLDS THAT SUITABLY
PROGRAMMED MACHINES CAN SIMULATE HUMAN COGNITION.


TYPES OF AI
APPLIED AI AIMS TO PRODUCE COMMERCIALLY VIABLE "SMART" SYSTEMS SUCH
AS, FOR EXAMPLE, A SECURITY SYSTEM THAT IS ABLE TO RECOGNIZE THE FACES
OF PEOPLE WHO ARE PERMITTED TO ENTER A PARTICULAR BUILDING. APPLIED AI
HAS ALREADY ENJOYED CONSIDERABLE SUCCESS.
TYPES OF AI
COGNITIVE AI: COMPUTERS ARE USED TO TEST THEORIES ABOUT HOW THE
HUMAN MIND WORKS--FOR EXAMPLE, THEORIES ABOUT HOW WE RECOGNISE
FACES AND OTHER OBJECTS, OR ABOUT HOW WE SOLVE ABSTRACT PROBLEMS.
WHAT ARE THE USES OF ARTIFICIAL
INTELLIGENCE?

WE HAVE BEEN COHABITING WITH AI AND ITS APPLICATIONS WITHOUT REALIZING
THE TECHNOLOGY BEHIND THEM: THE SIRI APP, THE SUGGESTIONS THAT APPEAR
WHILE SEARCHING ON GOOGLE, THE AMAZING AMAZON ALEXA, AND THE LIST GOES
ON.
MARKETING
AI HAS INFLUENCED THE MARKETING SECTOR IN THE MOST PHENOMENAL WAY
POSSIBLE. THERE WAS A TIME WHEN PEOPLE USED TO STEER CLEAR OF
MARKETING GIMMICKS DUE TO A LACK OF TRUST. HOWEVER, TIMES HAVE
CHANGED. RETAIL BUSINESSES HAVE FOUND A SUBTLE WAY OF MARKETING
THESE DAYS. AI-POWERED RECOMMENDATION SYSTEMS ARE QUITE APT AT
MAKING PERFECT SUGGESTIONS THAT ARE TOO GOOD TO BE IGNORED: THE
AMAZON SUGGESTIONS TO BUY A PRODUCT BASED ON YOUR PREVIOUS
PURCHASES, NETFLIX MOVIE RECOMMENDATIONS, AND WALMART’S STRATEGY
OF PLACING BEER AND DIAPERS TOGETHER AFTER IDENTIFYING THE PATTERNS
OF FREQUENT BUYERS. THESE ARE ALL MARKETING STRATEGIES IMPLEMENTED
BY BUSINESSES BASED ON CUSTOMERS’ PURCHASE DATA.
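THE "FREQUENTLY BOUGHT TOGETHER" PATTERN BEHIND SUCH RECOMMENDATIONS CAN BE SKETCHED WITH SIMPLE CO-PURCHASE COUNTING. THE PRODUCT NAMES AND BASKETS BELOW ARE MADE UP FOR ILLUSTRATION; REAL RECOMMENDERS USE FAR RICHER MODELS.

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(product, baskets):
    """Suggest the item most frequently bought together with `product`."""
    pairs = co_purchase_counts(baskets)
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None

# hypothetical purchase histories
baskets = [
    ["beer", "diapers", "chips"],
    ["beer", "diapers"],
    ["bread", "butter"],
    ["beer", "chips"],
]
```

WITH THESE TOY BASKETS, `recommend("diapers", baskets)` RETURNS THE ITEM MOST OFTEN CO-PURCHASED WITH DIAPERS.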
BANKING
AI HAS MADE ITS WAY TO BANKING AND HAS BROUGHT DRASTIC CHANGES IN
TERMS OF FRAUD DETECTION, CUSTOMER SUPPORT, IDENTIFYING THE LIKELY
DEFAULTERS OF CREDIT PAYMENTS, ETC. BASED ON THE SALARY, AGE, AND
PREVIOUS CREDIT CARD HISTORY, THE REPUTED BANKS USE THE DATA TO
PREDICT THE LIKELY DEFAULTERS BEFORE THEY ISSUE CREDIT CARDS.
ALSO, THE TOP BANKS RELY ON AI AND DEEP LEARNING TECHNOLOGIES TO DETECT
THE FRAUDULENT PRACTICES OF CUSTOMERS BASED ON PAST BEHAVIOR, AND THEN
PREVENT FRAUD BY TAKING APPROPRIATE MEASURES WELL IN ADVANCE.
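A MINIMAL SKETCH OF HOW SALARY, AGE, AND CREDIT HISTORY MIGHT BE COMBINED INTO A DEFAULT-RISK SCORE IS SHOWN BELOW. THE WEIGHTS AND THRESHOLD ARE INVENTED FOR ILLUSTRATION AND DO NOT REFLECT ANY REAL BANK'S MODEL.

```python
import math

def default_risk(salary, age, missed_payments):
    """Toy risk score in (0, 1): higher means more likely to default.
    The weights are illustrative only, not from any real bank's model."""
    # roughly standardize the inputs, then combine them linearly
    z = 2.0 * missed_payments - salary / 50_000 - (age - 18) / 40
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability

def issue_card(salary, age, missed_payments, threshold=0.5):
    """Approve a card only when the predicted risk is below the threshold."""
    return default_risk(salary, age, missed_payments) < threshold
```

A STEADY EARNER WITH NO MISSED PAYMENTS SCORES LOW RISK AND IS APPROVED; REPEATED MISSED PAYMENTS PUSH THE SCORE TOWARD 1 AND THE CARD IS DENIED.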
FINANCE
THE FINANCE SECTOR IS THRIVING AS IT RELIES ON DATA SCIENTISTS TO
MAKE PREDICTIONS THAT DICTATE FINANCIAL DEALINGS AND STOCK MARKET
TRADING.
THE MACHINES ARE FED A HUMONGOUS AMOUNT OF DATA, WHICH THEY PROCESS
WITHIN A SHORT SPAN OF TIME TO IDENTIFY PATTERNS, PROVIDE INSIGHTS, AND
THEN MAKE PREDICTIONS.
AS THERE IS LITTLE SCOPE FOR ERROR, FINANCIAL ORGANIZATIONS DEPEND ON
THESE MACHINE-GENERATED PREDICTIONS TO IMPROVE STOCK MARKET TRADING
AND PROFITS.
AGRICULTURE
• AGRICULTURE HAS BEEN ONE OF THE OLDEST FORMS OF OCCUPATION IN THE
WORLD. FARMERS THESE DAYS USE THE TRENDS IN AI FOR IMPROVING
AGRICULTURAL ACCURACY AND PRODUCTIVITY.
• A BERLIN-BASED FIRM, PEAT, DEVELOPED AN AGRICULTURAL APP CALLED
PLANTIX. THIS APP CAN PREDICT NUTRIENT DEFICIENCIES AND FERTILITY ISSUES
OF THE SOIL JUST FROM IMAGES. WHAT’S MORE, THE APP ALSO SUGGESTS
SOLUTIONS AND SOIL RESTORATION TECHNIQUES. THE START-UP ALSO CLAIMS
THAT THE APP MAKES ITS PREDICTIONS WITH 95% ACCURACY.
HEALTHCARE

THIS IS ANOTHER INDUSTRY THAT IS BOOMING WITH THE PRESENCE OF AI
APPLICATIONS. ARTIFICIAL INTELLIGENCE HAS PLAYED A MAJOR ROLE IN MAKING
PREDICTIONS IN THE FIELD OF DIAGNOSTICS. AN AI ALGORITHM OUTPERFORMED
DOCTORS IN DETECTING BREAST CANCER WITH THE HELP OF MAMMOGRAMS.


WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?

AI DEPENDS ON THE HUMAN ELEMENT. AI AUGMENTS, BUT DOES NOT REPLACE,
HUMAN KNOWLEDGE AND EXPERTISE. THIS BASIC UNDERSTANDING AFFECTS
ENGINEERS OF AI SYSTEMS IN TWO DIMENSIONS: HUMAN-MACHINE TEAMING AND
THE PROBABILISTIC NATURE OF AI "ANSWERS." ENGINEERS DEVELOPING AI
SYSTEMS MUST ACCOUNT FOR HUMAN-MACHINE TEAMING--THE INTERACTIONS
BETWEEN THE SYSTEM AND THE PEOPLE WHO BUILD AND USE IT.
WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?
AI DEPENDS ON LABELED AND UNLABELED DATA AS WELL AS THE SYSTEMS
THAT STORE AND ACCESS IT. THE AVAILABILITY OF DATA AND THE SPEED AT
WHICH TODAY'S COMPUTERS CAN PROCESS IT ARE REASONS WHY AI IS
EXPLODING TODAY. AI SYSTEMS ARE REALLY GOOD AT CLASSIFYING,
CATEGORIZING, AND PARTITIONING MASSIVE AMOUNTS OF DATA TO MAKE THE
MOST RELEVANT PIECES AVAILABLE FOR HUMANS TO ANALYZE AND MAKE
DECISIONS. ENGINEERS MUST CONSIDER THE DATA ITSELF--PROVENANCE,
SECURITY, QUALITY, AND ALIGNING TEST AND TRAINING DATA--AND THE
HARDWARE AND SOFTWARE SYSTEMS THAT SUPPORT THAT DATA. 
WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?

ONE AI, MANY ALGORITHMS. WHEN WE TALK ABOUT AI, ML, AND DEEP
LEARNING, WE ARE REFERRING TO MANY DIFFERENT ALGORITHMS, MANY
DIFFERENT APPROACHES, NOT ALL OF WHICH ARE NEURAL-NETWORK BASED. AI IS
NOT A NEW FIELD, AND MANY OF THE ALGORITHMS IN USE TODAY WERE
GENERATED IN THE 1950S, 1960S, OR 1970S.


WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?
THE INSIGHT IS THE BENEFIT OF AI. ENGINEERS FACE THE REALITY THAT IT IS IMPOSSIBLE
TO TEST A SYSTEM IN EVERY SITUATION IT WILL EVER ENCOUNTER. AN AI SYSTEM ADDS
CAPABILITY FOR ENGINEERS BECAUSE IT CAN FIND AN ANSWER TO NEVER-SEEN-BEFORE
SITUATIONS THAT IS INSIGHTFUL AND HAS A VERY GOOD PROBABILITY OF BEING
CORRECT. HOWEVER, IT IS NOT NECESSARILY CORRECT; IT IS PROBABILISTIC.


WHAT SHOULD ALL ENGINEERS KNOW ABOUT
AI?
AN AI SYSTEM DEPENDS ON THE SYSTEM UNDER WHICH IT RUNS. WHEN BUILDING A
SYSTEM THAT DOES NOT INCORPORATE AI, YOU CAN BUILD IT IN ISOLATION, TEST IT IN
ISOLATION, AND THEN DEPLOY IT AND BE CERTAIN IT IS GOING TO BEHAVE JUST AS IT DID IN
THE LAB. AN AI SYSTEM DEPENDS ON THE CONDITIONS UNDER WHICH THE AI RUNS AND
WHAT THE AI SYSTEM IS SENSING, AND THIS CONTEXT ADDS ANOTHER LEVEL OF
COMPLEXITY.
TOP 10 EMERGING TECHNOLOGIES FOR 2020
1. AI
2. CLOUD COMPUTING
3. IOT
4. SERVERLESS COMPUTING
5. BIOMETRICS
6. AUGMENTED REALITY/VIRTUAL REALITY
7. BLOCKCHAIN
8. ROBOTICS
9. NATURAL LANGUAGE PROCESSING
10. QUANTUM COMPUTING
EMERGING TECHNOLOGIES (CLOUD
COMPUTING)

CLOUD COMPUTING IS THE ON-DEMAND AVAILABILITY OF COMPUTER SYSTEM
RESOURCES, ESPECIALLY DATA STORAGE (CLOUD STORAGE) AND COMPUTING
POWER, WITHOUT DIRECT ACTIVE MANAGEMENT BY THE USER. THE TERM IS
GENERALLY USED TO DESCRIBE DATA CENTERS AVAILABLE TO MANY USERS OVER
THE INTERNET.
EMERGING TECHNOLOGIES (CLOUD COMPUTING)

• RATHER THAN OWNING THEIR OWN COMPUTING INFRASTRUCTURE OR DATA
CENTERS, COMPANIES CAN RENT ACCESS TO ANYTHING FROM APPLICATIONS TO
STORAGE FROM A CLOUD SERVICE PROVIDER.
• ONE BENEFIT OF USING CLOUD COMPUTING SERVICES IS THAT FIRMS CAN AVOID
THE UPFRONT COST AND COMPLEXITY OF OWNING AND MAINTAINING THEIR
OWN IT INFRASTRUCTURE, AND INSTEAD SIMPLY PAY FOR WHAT THEY USE,
WHEN THEY USE IT.
• IN TURN, PROVIDERS OF CLOUD COMPUTING SERVICES CAN BENEFIT FROM
SIGNIFICANT ECONOMIES OF SCALE BY DELIVERING THE SAME SERVICES TO A
WIDE RANGE OF CUSTOMERS.
EMERGING TECHNOLOGIES (CLOUD
COMPUTING)

CLOUD COMPUTING SERVICES COVER A VAST RANGE OF OPTIONS NOW, FROM THE
BASICS OF STORAGE, NETWORKING, AND PROCESSING POWER THROUGH TO
NATURAL LANGUAGE PROCESSING AND ARTIFICIAL INTELLIGENCE AS WELL AS
STANDARD OFFICE APPLICATIONS. PRETTY MUCH ANY SERVICE THAT DOESN'T
REQUIRE YOU TO BE PHYSICALLY CLOSE TO THE COMPUTER HARDWARE THAT
YOU ARE USING CAN NOW BE DELIVERED VIA THE CLOUD.


EMERGING TECHNOLOGIES (IOT)

THE INTERNET OF THINGS, OR IOT, REFERS TO THE BILLIONS OF PHYSICAL
DEVICES AROUND THE WORLD THAT ARE NOW CONNECTED TO THE INTERNET, ALL
COLLECTING AND SHARING DATA. THANKS TO THE ARRIVAL OF SUPER-CHEAP
COMPUTER CHIPS AND THE UBIQUITY OF WIRELESS NETWORKS, IT'S POSSIBLE TO
TURN ANYTHING, FROM SOMETHING AS SMALL AS A PILL TO SOMETHING AS BIG
AS AN AEROPLANE, INTO A PART OF THE IOT. CONNECTING UP ALL THESE
DIFFERENT OBJECTS AND ADDING SENSORS TO THEM ADDS A LEVEL OF DIGITAL
INTELLIGENCE TO DEVICES THAT WOULD OTHERWISE BE DUMB, ENABLING THEM
TO COMMUNICATE REAL-TIME DATA WITHOUT INVOLVING A HUMAN BEING. THE
INTERNET OF THINGS IS MAKING THE FABRIC OF THE WORLD AROUND US
SMARTER AND MORE RESPONSIVE, MERGING THE DIGITAL AND PHYSICAL
UNIVERSES.
EMERGING TECHNOLOGIES (SERVERLESS
COMPUTING )

SERVERLESS COMPUTING IS A METHOD OF PROVIDING BACKEND SERVICES ON AN
AS-USED BASIS. A SERVERLESS PROVIDER ALLOWS USERS TO WRITE AND DEPLOY
CODE WITHOUT THE HASSLE OF WORRYING ABOUT THE UNDERLYING
INFRASTRUCTURE. A COMPANY THAT GETS BACKEND SERVICES FROM A
SERVERLESS VENDOR IS CHARGED BASED ON ITS COMPUTATION AND DOES NOT
HAVE TO RESERVE AND PAY FOR A FIXED AMOUNT OF BANDWIDTH OR NUMBER OF
SERVERS, AS THE SERVICE IS AUTO-SCALING. NOTE THAT DESPITE THE NAME
SERVERLESS, PHYSICAL SERVERS ARE STILL USED, BUT DEVELOPERS DO NOT NEED
TO BE AWARE OF THEM.
EMERGING TECHNOLOGIES (SERVERLESS
COMPUTING )

• SERVERLESS COMPUTING ALLOWS DEVELOPERS TO PURCHASE BACKEND
SERVICES ON A FLEXIBLE ‘PAY-AS-YOU-GO’ BASIS, MEANING THAT DEVELOPERS
ONLY HAVE TO PAY FOR THE SERVICES THEY USE. THIS IS LIKE SWITCHING FROM
A CELL PHONE DATA PLAN WITH A MONTHLY FIXED LIMIT, TO ONE THAT ONLY
CHARGES FOR EACH BYTE OF DATA THAT ACTUALLY GETS USED.
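THE PHONE-PLAN ANALOGY CAN BE MADE CONCRETE WITH A TOY COST COMPARISON. THE PRICES BELOW ARE INVENTED FOR ILLUSTRATION AND ARE NOT ANY VENDOR'S ACTUAL PRICING.

```python
def fixed_plan_cost(monthly_fee, requests):
    """A reserved-server plan: the fee is flat, no matter the usage."""
    return monthly_fee

def serverless_cost(requests, price_per_million=0.20):
    """Pay-as-you-go: charged only for the requests actually served."""
    return requests / 1_000_000 * price_per_million
```

A LOW-TRAFFIC SITE SERVING 500,000 REQUESTS PAYS ONLY CENTS UNDER THE PAY-AS-YOU-GO MODEL, WHILE A FIXED PLAN COSTS THE SAME WHETHER OR NOT THE RESERVED CAPACITY IS USED.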
• THE TERM ‘SERVERLESS’ IS SOMEWHAT MISLEADING, AS THERE ARE STILL
SERVERS PROVIDING THESE BACKEND SERVICES, BUT ALL OF THE SERVER SPACE
AND INFRASTRUCTURE CONCERNS ARE HANDLED BY THE VENDOR. SERVERLESS
MEANS THAT THE DEVELOPERS CAN DO THEIR WORK WITHOUT HAVING TO
WORRY ABOUT SERVERS AT ALL.
EMERGING TECHNOLOGIES (AUGMENTED
REALITY/VIRTUAL REALITY)

• AUGMENTED REALITY (AR) IS A PERFECT BLEND OF THE DIGITAL WORLD AND PHYSICAL
ELEMENTS TO CREATE AN ARTIFICIAL ENVIRONMENT. APPS DEVELOPED USING AR
TECHNOLOGY FOR MOBILE OR DESKTOP BLEND DIGITAL COMPONENTS INTO THE REAL WORLD.
THE FULL FORM OF AR IS AUGMENTED REALITY. EXAMPLE: AR TECHNOLOGY HELPS DISPLAY
SCORE OVERLAYS ON TELECAST SPORTS GAMES AND POP OUT 3D PHOTOS, TEXT MESSAGES,
AND EMAILS.
• VIRTUAL REALITY (VR) IS A COMPUTER-GENERATED SIMULATION OF AN ALTERNATE WORLD OR
REALITY. IT IS USED IN 3D MOVIES AND VIDEO GAMES. IT HELPS TO CREATE SIMULATIONS SIMILAR
TO THE REAL WORLD AND "IMMERSE" THE VIEWER USING COMPUTERS AND SENSORY DEVICES
LIKE HEADSETS AND GLOVES. APART FROM GAMES AND ENTERTAINMENT, VIRTUAL REALITY IS
ALSO USED FOR TRAINING, EDUCATION, AND SCIENCE. THE FULL FORM OF VR IS VIRTUAL REALITY.
EMERGING TECHNOLOGIES (BLOCKCHAIN)

• BLOCKCHAIN IS A SHARED, IMMUTABLE LEDGER THAT FACILITATES THE PROCESS OF RECORDING
TRANSACTIONS AND TRACKING ASSETS IN A BUSINESS NETWORK. AN ASSET CAN BE TANGIBLE (A
HOUSE, CAR, CASH, LAND) OR INTANGIBLE (INTELLECTUAL PROPERTY, PATENTS, COPYRIGHTS,
BRANDING). VIRTUALLY ANYTHING OF VALUE CAN BE TRACKED AND TRADED ON A BLOCKCHAIN
NETWORK, REDUCING RISK AND CUTTING COSTS FOR ALL.
• BUSINESS RUNS ON INFORMATION. THE FASTER IT’S RECEIVED AND THE MORE ACCURATE IT IS, THE
BETTER. BLOCKCHAIN IS IDEAL FOR DELIVERING THAT INFORMATION BECAUSE IT PROVIDES
IMMEDIATE, SHARED, AND COMPLETELY TRANSPARENT INFORMATION STORED ON AN IMMUTABLE
LEDGER THAT CAN BE ACCESSED ONLY BY PERMISSIONED NETWORK MEMBERS. A BLOCKCHAIN
NETWORK CAN TRACK ORDERS, PAYMENTS, ACCOUNTS, PRODUCTION, AND MUCH MORE. AND BECAUSE
MEMBERS SHARE A SINGLE VIEW OF THE TRUTH, YOU CAN SEE ALL DETAILS OF A TRANSACTION
END-TO-END, GIVING YOU GREATER CONFIDENCE, AS WELL AS NEW EFFICIENCIES AND OPPORTUNITIES.
EMERGING TECHNOLOGIES (ROBOTICS)

• ROBOTICS IS THE INTERSECTION OF SCIENCE, ENGINEERING, AND TECHNOLOGY THAT
PRODUCES MACHINES, CALLED ROBOTS, THAT SUBSTITUTE FOR (OR REPLICATE) HUMAN
ACTIONS.
• ROBOTICS IS THE DESIGN, CONSTRUCTION, AND USE OF MACHINES (ROBOTS) TO PERFORM
TASKS DONE TRADITIONALLY BY HUMAN BEINGS. ROBOTS ARE WIDELY USED IN SUCH
INDUSTRIES AS AUTOMOBILE MANUFACTURE TO PERFORM SIMPLE REPETITIVE TASKS,
AND IN INDUSTRIES WHERE WORK MUST BE PERFORMED IN ENVIRONMENTS HAZARDOUS
TO HUMANS. MANY ASPECTS OF ROBOTICS INVOLVE ARTIFICIAL INTELLIGENCE; ROBOTS
MAY BE EQUIPPED WITH THE EQUIVALENT OF HUMAN SENSES SUCH AS VISION, TOUCH,
AND THE ABILITY TO SENSE TEMPERATURE.
EMERGING TECHNOLOGIES (NATURAL LANGUAGE
PROCESSING)

• NATURAL LANGUAGE PROCESSING, USUALLY SHORTENED TO NLP, IS A BRANCH
OF ARTIFICIAL INTELLIGENCE THAT DEALS WITH THE INTERACTION BETWEEN
COMPUTERS AND HUMANS USING NATURAL LANGUAGE.
• THE ULTIMATE OBJECTIVE OF NLP IS TO READ, DECIPHER, UNDERSTAND, AND
MAKE SENSE OF HUMAN LANGUAGES IN A MANNER THAT IS VALUABLE.
• MOST NLP TECHNIQUES RELY ON MACHINE LEARNING TO DERIVE MEANING
FROM HUMAN LANGUAGES.
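A FIRST STEP IN MOST NLP PIPELINES IS TURNING RAW TEXT INTO COUNTABLE TOKENS, AS IN THIS MINIMAL BAG-OF-WORDS SKETCH. THE REGEX AND THE WORD-COUNT REPRESENTATION ARE ONE SIMPLE CHOICE AMONG MANY.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(text):
    """Represent text as word counts -- a common starting point before
    machine learning models derive meaning from language."""
    return Counter(tokenize(text))
```

FOR EXAMPLE, `bag_of_words("To be, or not to be")` COUNTS EACH WORD REGARDLESS OF CASE OR PUNCTUATION.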
EMERGING TECHNOLOGIES (QUANTUM COMPUTING )

• QUANTUM COMPUTING IS THE USE OF QUANTUM PHENOMENA SUCH AS
SUPERPOSITION AND ENTANGLEMENT TO PERFORM COMPUTATION. COMPUTERS
THAT PERFORM QUANTUM COMPUTATIONS ARE KNOWN AS QUANTUM
COMPUTERS. QUANTUM COMPUTERS ARE BELIEVED TO BE ABLE TO SOLVE
CERTAIN COMPUTATIONAL PROBLEMS, SUCH AS INTEGER FACTORIZATION
(WHICH UNDERLIES RSA ENCRYPTION), SUBSTANTIALLY FASTER THAN
CLASSICAL COMPUTERS. THE STUDY OF QUANTUM COMPUTING IS A SUBFIELD
OF QUANTUM INFORMATION SCIENCE.
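SUPERPOSITION CAN BE ILLUSTRATED BY SIMULATING A SINGLE QUBIT AS A PAIR OF AMPLITUDES AND APPLYING A HADAMARD GATE. THIS IS A CLASSICAL SIMULATION SKETCH FOR INTUITION, NOT A REAL QUANTUM COMPUTATION.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1),
    producing an equal superposition from a basis state."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities of |0> and |1> are the squared
    magnitudes of the amplitudes."""
    a0, a1 = state
    return (abs(a0) ** 2, abs(a1) ** 2)
```

STARTING FROM |0>, ONE HADAMARD GATE YIELDS A 50/50 SUPERPOSITION, AND APPLYING IT AGAIN RETURNS THE QUBIT TO |0>.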
ETHICS OF ARTIFICIAL INTELLIGENCE

THE ETHICS OF ARTIFICIAL INTELLIGENCE IS THE BRANCH OF THE ETHICS OF
TECHNOLOGY SPECIFIC TO ARTIFICIALLY INTELLIGENT SYSTEMS. IT IS SOMETIMES
DIVIDED INTO A CONCERN WITH THE MORAL BEHAVIOR OF HUMANS AS THEY
DESIGN, MAKE, USE AND TREAT ARTIFICIALLY INTELLIGENT SYSTEMS, AND A
CONCERN WITH THE BEHAVIOR OF MACHINES, IN MACHINE ETHICS. IT ALSO
INCLUDES THE ISSUE OF A POSSIBLE SINGULARITY DUE TO SUPERINTELLIGENT AI.


ETHICS OF ARTIFICIAL INTELLIGENCE
THE TERM "ROBOT ETHICS" (SOMETIMES "ROBOETHICS") REFERS TO THE
MORALITY OF HOW HUMANS DESIGN, CONSTRUCT, USE AND TREAT ROBOTS. IT
CONSIDERS BOTH HOW ARTIFICIALLY INTELLIGENT BEINGS MAY BE USED TO
HARM HUMANS AND HOW THEY MAY BE USED TO BENEFIT HUMANS.
• ROBOT RIGHTS: "ROBOT RIGHTS" IS THE CONCEPT THAT PEOPLE SHOULD HAVE
MORAL OBLIGATIONS TOWARDS THEIR MACHINES, AKIN TO HUMAN RIGHTS OR
ANIMAL RIGHTS. IT HAS BEEN SUGGESTED THAT ROBOT RIGHTS (SUCH AS A
RIGHT TO EXIST AND PERFORM ITS OWN MISSION) COULD BE LINKED TO ROBOT
DUTY TO SERVE HUMANITY, ANALOGOUS TO LINKING HUMAN RIGHTS WITH
HUMAN DUTIES BEFORE SOCIETY.
ETHICS OF ARTIFICIAL INTELLIGENCE
THREAT TO HUMAN DIGNITY: JOSEPH WEIZENBAUM ARGUED IN 1976 THAT AI TECHNOLOGY
SHOULD NOT BE USED TO REPLACE PEOPLE IN POSITIONS THAT REQUIRE RESPECT AND
CARE, SUCH AS:
• A CUSTOMER SERVICE REPRESENTATIVE (AI TECHNOLOGY IS ALREADY USED TODAY FOR
TELEPHONE-BASED INTERACTIVE VOICE RESPONSE SYSTEMS)
• A THERAPIST
• A NURSEMAID FOR THE ELDERLY
• A SOLDIER
• A JUDGE
• A POLICE OFFICER
ETHICS OF ARTIFICIAL INTELLIGENCE

• TRANSPARENCY, ACCOUNTABILITY, AND OPEN SOURCE: BILL HIBBARD ARGUES THAT
BECAUSE AI WILL HAVE SUCH A PROFOUND EFFECT ON HUMANITY, AI DEVELOPERS ARE
REPRESENTATIVES OF FUTURE HUMANITY AND THUS HAVE AN ETHICAL OBLIGATION TO BE
TRANSPARENT IN THEIR EFFORTS. BEN GOERTZEL AND DAVID HART CREATED OPENCOG AS AN
OPEN SOURCE FRAMEWORK FOR AI DEVELOPMENT. OPENAI IS A NON-PROFIT AI RESEARCH
COMPANY CREATED BY ELON MUSK, SAM ALTMAN AND OTHERS TO DEVELOP OPEN-SOURCE AI
BENEFICIAL TO HUMANITY. THERE ARE NUMEROUS OTHER OPEN-SOURCE AI DEVELOPMENTS.


ETHICS OF ARTIFICIAL INTELLIGENCE

• BIASES IN AI SYSTEMS: AI HAS BECOME INCREASINGLY INHERENT IN FACIAL
AND VOICE RECOGNITION SYSTEMS. SOME OF THESE SYSTEMS HAVE REAL
BUSINESS APPLICATIONS AND DIRECTLY IMPACT PEOPLE. THESE SYSTEMS ARE
VULNERABLE TO BIASES AND ERRORS INTRODUCED BY THEIR HUMAN CREATORS.
ALSO, THE DATA USED TO TRAIN THESE AI SYSTEMS CAN ITSELF HAVE BIASES.
FOR INSTANCE, FACIAL RECOGNITION ALGORITHMS MADE BY MICROSOFT, IBM,
AND FACE++ ALL HAD BIASES WHEN IT CAME TO DETECTING PEOPLE'S GENDER;
THESE AI SYSTEMS WERE ABLE TO DETECT THE GENDER OF WHITE MEN MORE
ACCURATELY THAN THAT OF DARKER-SKINNED MEN.
ETHICS OF ARTIFICIAL INTELLIGENCE

• LIABILITY FOR SELF-DRIVING CARS: AS THE WIDESPREAD USE OF AUTONOMOUS
CARS BECOMES INCREASINGLY IMMINENT, NEW CHALLENGES RAISED BY FULLY
AUTONOMOUS VEHICLES MUST BE ADDRESSED. RECENTLY, THERE HAS BEEN
DEBATE AS TO THE LEGAL LIABILITY OF THE RESPONSIBLE PARTY IF THESE CARS GET
INTO ACCIDENTS. IN ONE REPORT WHERE A DRIVERLESS CAR HIT A PEDESTRIAN, THE
DRIVER WAS INSIDE THE CAR BUT THE CONTROLS WERE FULLY IN THE HANDS OF
COMPUTERS. THIS LED TO A DILEMMA OVER WHO WAS AT FAULT FOR THE ACCIDENT.
ETHICS OF ARTIFICIAL INTELLIGENCE
• WEAPONIZATION OF ARTIFICIAL INTELLIGENCE: SOME EXPERTS AND ACADEMICS HAVE
QUESTIONED THE USE OF ROBOTS FOR MILITARY COMBAT, ESPECIALLY WHEN SUCH ROBOTS
ARE GIVEN SOME DEGREE OF AUTONOMY. ON OCTOBER 31, 2019, THE UNITED STATES
DEPARTMENT OF DEFENSE'S DEFENSE INNOVATION BOARD PUBLISHED THE DRAFT OF A
REPORT RECOMMENDING PRINCIPLES FOR THE ETHICAL USE OF ARTIFICIAL INTELLIGENCE BY
THE DEPARTMENT OF DEFENSE THAT WOULD ENSURE A HUMAN OPERATOR WOULD ALWAYS BE
ABLE TO LOOK INTO THE 'BLACK BOX' AND UNDERSTAND THE KILL-CHAIN PROCESS.
HOWEVER, A MAJOR CONCERN IS HOW THE REPORT WILL BE IMPLEMENTED.


UNIT - II

DATA & ALGORITHMS


DATA

A HISTORY OF DATA: DATA IS PART OF THE FABRIC OF LIFE AND SOCIETY — AND
HAS BEEN FOR A LONG TIME. THE HISTORY OF DATA IS A LONG STORY DETAILING
THE EVOLUTION OF DATA COLLECTION, STORAGE AND PROCESSING. IT’S SAID
THAT KNOWLEDGE IS POWER. WELL, DATA IS KNOWLEDGE, AND WE’RE NOW
SEEING THE POWER THAT OUR DATA HOLDS.


WHAT IS A DATA ACQUISITION SYSTEM?

DATA ACQUISITION, OR DAQ AS IT IS OFTEN CALLED, IS THE PROCESS OF
DIGITIZING DATA FROM THE WORLD AROUND US SO IT CAN BE DISPLAYED,
ANALYZED, AND STORED IN A COMPUTER. A SIMPLE EXAMPLE IS THE PROCESS OF
MEASURING THE TEMPERATURE IN A ROOM AS A DIGITAL VALUE USING A SENSOR
SUCH AS A THERMOCOUPLE.
WHAT IS A DATA ACQUISITION SYSTEM?

A DATA ACQUISITION SYSTEM CONSISTS OF THREE KEY ELEMENTS: A SENSOR, A
SIGNAL CONDITIONER, AND AN ANALOG-TO-DIGITAL CONVERTER (ADC).
SENSOR: A DEVICE, ALSO KNOWN AS A TRANSDUCER, CAPABLE OF
TRANSFORMING CONDITIONS OF REALITY, SUCH AS TEMPERATURE OR
MOTION, INTO AN ELECTRICAL SIGNAL THAT CAN BE MEASURED AND
ANALYZED WITH A COMPUTER.


WHAT IS A DATA ACQUISITION SYSTEM?

SIGNAL CONDITIONER: THIS IS A DEVICE THAT FILTERS THE ANALOG SIGNAL PICKED UP BY SENSORS
BEFORE IT IS CONVERTED INTO DIGITAL INFORMATION. IT CAN AMPLIFY THE SIGNAL, ATTENUATE IT,
FILTER IT, CALIBRATE IT, OR ISOLATE IT. THE INFORMATION OBTAINED FROM REALITY CAN BE TOO
NOISY AND TOO DANGEROUS TO TREAT DIRECTLY WITHOUT PRIOR FILTERING.
ANALOG-TO-DIGITAL CONVERTER: THIS IS THE KEY TO ANY DATA ACQUISITION PROCESS. IT IS A
CHIP THAT TRANSFORMS THE SIGNAL CAPTURED FROM REALITY INTO INFORMATION THAT CAN BE
INTERPRETED BY A PROCESSOR. THIS DATA IS TRANSFERRED TO A PC VIA A BUS FOR FURTHER
ANALYSIS.
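THE ADC STAGE CAN BE SKETCHED AS SIMPLE QUANTIZATION OF A VOLTAGE INTO A DIGITAL CODE. THE REFERENCE VOLTAGE, BIT RESOLUTION, AND LINEAR TEMPERATURE MAPPING BELOW ARE ASSUMED FOR ILLUSTRATION.

```python
def adc_convert(voltage, v_ref=5.0, bits=10):
    """Quantize an analog voltage into a digital code, like the ADC stage
    of a DAQ system (reference voltage and resolution are assumptions)."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range

def code_to_temperature(code, bits=10, t_min=0.0, t_max=100.0):
    """Map a digital code back to a temperature reading, assuming a
    linear sensor spanning t_min..t_max over the full voltage range."""
    return t_min + code / (2 ** bits - 1) * (t_max - t_min)
```

WITH A 5 V REFERENCE AND 10 BITS, A 2.5 V THERMOCOUPLE SIGNAL QUANTIZES TO A CODE NEAR 511, WHICH MAPS BACK TO ROUGHLY 50 DEGREES.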
WHY YOU SHOULD HAVE DATA ACQUISITION SYSTEM

DATA ACQUISITION IS KEY IN VERY DIVERSE FIELDS SUCH AS HEALTH SCIENCES
RESEARCH, CIVIL ENGINEERING, INDUSTRY, ETC.
• IMPROVES THE EFFICIENCY AND RELIABILITY OF PROCESSES OR
MACHINERY. STEEL MILLS, UTILITIES, AND RESEARCH LABS HAVE SOME KIND
OF DATA ACQUISITION DEVICE THAT SILENTLY MONITORS SOME PARAMETER.
THE COLLECTED DATA CAN BE USED TO IMPROVE EFFICIENCY, ENSURE
RELIABILITY, OR ENSURE THAT MACHINERY OPERATES SAFELY.
• PROBLEMS ARE ANALYZED AND SOLVED FASTER.  WITH THE USE OF REAL-
TIME DATA ACQUISITION SYSTEMS, MEASUREMENTS ARE GENERATED AND
DISPLAYED WITHOUT DELAY. THANKS TO THIS SYSTEM, A TECHNICIAN CAN
INTERVENE FASTER IN ANY PROBLEM AND MAKE THE MACHINE REACH
OPTIMUM PERFORMANCE IN LESS TIME.
WHY YOU SHOULD HAVE DATA ACQUISITION SYSTEM

• DATA REDUNDANCY IS REDUCED. WITH THE APPLICATION OF A SYSTEM OF THIS TYPE,
COMPANIES NO LONGER KEEP DUPLICATE DATA AND ADOPT A TECHNOLOGY THAT
FACILITATES THE ANALYSIS OF THE INFORMATION OBTAINED, AS IT ALLOWS THEM TO
WORK WITHOUT ANY NOISE THAT HINDERS THE ANALYSIS.
• DECREASED UPDATE ERRORS. THIS TYPE OF SYSTEM AUTOMATES DATA ENTRY
PROCESSES THAT WERE PREVIOUSLY DONE BY HAND. AUTOMATION REDUCES ERRORS
BY ELIMINATING HUMAN ERROR AND MISPLACEMENT.
• INCREASED DATA INTEGRATION AND REDUCED RELIANCE ON OTHER PROGRAMS. THE
FEWER PROGRAMS THAT INTERVENE IN A PROCESS, THE MORE AGILE IT WILL BE.
THANKS TO A DAQ SYSTEM, THE INFORMATION IS COMPLETE AND CORRECT WITHOUT
HAVING TO RELY ON OTHER TYPES OF APPLICATIONS.
WHY YOU SHOULD HAVE DATA ACQUISITION
SYSTEM
• IMPROVED ACCESS TO DATA FOR USERS THROUGH THE USE OF HOST AND
QUERY LANGUAGES. WITH THESE SYSTEMS IT IS EASIER TO ACCESS THE
DATABASE AND RETRIEVE INFORMATION FOR PROCESSING AND ANALYSIS.
• IMPROVES DATA SECURITY. BY AUTOMATING THE PROCESS OF CAPTURING
DATA FROM REALITY, THE HUMAN FACTOR IS NO LONGER INVOLVED AND THE
SECURITY RISKS ASSOCIATED WITH THIS PROCEDURE ARE REDUCED.
• DATA ENTRY, STORAGE AND RETRIEVAL COSTS ARE REDUCED. THESE THREE
PROCESSES ARE CHEAPER BECAUSE DATA IS ENTERED FASTER, TAKES UP LESS
SPACE, AND CAN BE RETRIEVED IN LESS TIME.
WHY YOU SHOULD HAVE A DATA ACQUISITION SYSTEM
• QUALITY CONTROL. A SYSTEM OF THIS TYPE CAN CONFIRM THAT A SYSTEM IS
MEETING THE DESIGN SPECIFICATIONS AND THAT A PRODUCT MEETS THE USER’S
NEEDS. IN ADDITION, YOU CAN TEST WHETHER A PRODUCT HAS THE QUALITY
REQUIRED FOR MARKETING AND DETECT THOSE THAT ARE DEFECTIVE.

• SUPERVISION OF PROCESSES WITHOUT HUMAN INTERACTION. WITH SUCH A SYSTEM, THE COMPANY’S VARIOUS PROCEDURES ARE TRACKED AND MONITORED TO IDENTIFY AND RESOLVE FAULTS FASTER.
DATA MINING
MANY PEOPLE TREAT DATA MINING AS A SYNONYM FOR ANOTHER POPULARLY USED TERM, KNOWLEDGE DISCOVERY FROM DATA, OR KDD, WHILE OTHERS VIEW DATA MINING AS MERELY AN ESSENTIAL STEP IN THE PROCESS OF KNOWLEDGE DISCOVERY. THE KNOWLEDGE DISCOVERY PROCESS IS SHOWN IN THE FIGURE AS AN ITERATIVE SEQUENCE OF THE FOLLOWING STEPS:
1. DATA CLEANING (TO REMOVE NOISE AND INCONSISTENT DATA)
2. DATA INTEGRATION (WHERE MULTIPLE DATA SOURCES MAY BE COMBINED)
3. DATA SELECTION (WHERE DATA RELEVANT TO THE ANALYSIS TASK ARE RETRIEVED FROM THE
DATABASE)
4. DATA TRANSFORMATION (WHERE DATA ARE TRANSFORMED AND CONSOLIDATED INTO FORMS APPROPRIATE FOR MINING BY PERFORMING SUMMARY OR AGGREGATION OPERATIONS)
5. DATA MINING (AN ESSENTIAL PROCESS WHERE INTELLIGENT METHODS ARE APPLIED TO EXTRACT DATA
PATTERNS)
6. PATTERN EVALUATION (TO IDENTIFY THE TRULY INTERESTING PATTERNS REPRESENTING KNOWLEDGE
BASED ON INTERESTINGNESS MEASURES)
7. KNOWLEDGE PRESENTATION (WHERE VISUALIZATION AND KNOWLEDGE REPRESENTATION
TECHNIQUES ARE USED TO PRESENT MINED KNOWLEDGE TO USERS)
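The seven steps above can be sketched end-to-end on a toy record set. A minimal sketch, assuming invented function names and made-up data; this is not a real mining library, only an illustration of the sequence:

```python
def clean(records):
    # Step 1: data cleaning - drop records with missing values.
    return [r for r in records if None not in r.values()]

def select(records, fields):
    # Step 3: data selection - keep only fields relevant to the task.
    return [{f: r[f] for f in fields} for r in records]

def transform(records, field):
    # Step 4: data transformation - consolidate by aggregation (sum per id).
    totals = {}
    for r in records:
        totals[r["id"]] = totals.get(r["id"], 0) + r[field]
    return totals

def mine(totals, threshold):
    # Step 5: data mining - extract a trivial "pattern": ids above a threshold.
    return {k for k, v in totals.items() if v > threshold}

source_a = [{"id": "x", "amount": 5}, {"id": "y", "amount": None}]
source_b = [{"id": "x", "amount": 7}, {"id": "z", "amount": 2}]
records = source_a + source_b              # Step 2: data integration
records = clean(records)
records = select(records, ["id", "amount"])
totals = transform(records, "amount")
patterns = mine(totals, threshold=3)       # Steps 6-7 would evaluate/present these
print(patterns)                            # {'x'}
```

Steps 6 and 7 (pattern evaluation and presentation) would judge and visualize the resulting set for a user.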
DATA MINING
STEPS 1 THROUGH 4 ARE DIFFERENT FORMS OF DATA PRE-PROCESSING, WHERE DATA ARE PREPARED FOR MINING. THE DATA MINING STEP MAY INTERACT WITH THE USER OR A KNOWLEDGE BASE. THE INTERESTING PATTERNS ARE PRESENTED TO THE USER AND MAY BE STORED AS NEW KNOWLEDGE IN THE KNOWLEDGE BASE.


DATA MINING
THE PRECEDING VIEW SHOWS DATA MINING AS ONE STEP IN THE KNOWLEDGE DISCOVERY PROCESS, ALBEIT AN ESSENTIAL ONE BECAUSE IT UNCOVERS HIDDEN PATTERNS FOR EVALUATION. HOWEVER, IN INDUSTRY, IN MEDIA, AND IN THE RESEARCH MILIEU, THE TERM DATA MINING IS OFTEN USED TO REFER TO THE ENTIRE KNOWLEDGE DISCOVERY PROCESS (PERHAPS BECAUSE THE TERM IS SHORTER THAN KNOWLEDGE DISCOVERY FROM DATA). THEREFORE, WE ADOPT A BROAD VIEW OF DATA MINING FUNCTIONALITY: DATA MINING IS THE PROCESS OF DISCOVERING INTERESTING PATTERNS AND KNOWLEDGE FROM LARGE AMOUNTS OF DATA. THE DATA SOURCES CAN INCLUDE DATABASES, DATA WAREHOUSES, THE WEB, OTHER INFORMATION REPOSITORIES, OR DATA THAT ARE STREAMED INTO THE SYSTEM DYNAMICALLY.
DATA ACQUISITION SYSTEM
MODERN DIGITAL DATA ACQUISITION SYSTEMS CONSIST OF FOUR ESSENTIAL
COMPONENTS THAT FORM THE ENTIRE MEASUREMENT CHAIN OF PHYSICS
PHENOMENA: 
• SENSORS
• SIGNAL CONDITIONING
• ANALOG-TO-DIGITAL CONVERTER
• COMPUTER WITH DAQ SOFTWARE 
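The measurement chain above can be mimicked in a few lines. A hedged sketch: the millivolt readings, amplifier gain, and an ideal 12-bit, 0-5 V ADC are all invented numbers, not a specific device:

```python
def condition(signal_mv, gain=100):
    # Signal conditioning: amplify raw sensor millivolts into the ADC range (volts).
    return [mv * gain / 1000.0 for mv in signal_mv]

def adc(volts, bits=12, v_ref=5.0):
    # Ideal analog-to-digital conversion: quantize each voltage to an integer code.
    levels = 2 ** bits
    return [min(levels - 1, int(v / v_ref * levels)) for v in volts]

raw_mv = [0.0, 12.5, 25.0, 50.0]   # hypothetical sensor output in millivolts
volts = condition(raw_mv)          # 0.0, 1.25, 2.5, 5.0 V after conditioning
codes = adc(volts)                 # digital samples handed to the DAQ software
print(codes)                       # [0, 1024, 2048, 4095]
```

A real system would stream such codes over a bus to the PC, where the DAQ software stores and visualizes them.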
THE PURPOSES OF DATA ACQUISITION
The primary purpose of a data acquisition system is to acquire and store the data. But such systems are also intended to provide real-time and post-recording visualization and analysis of the data. Furthermore, most data acquisition systems have some analytical and report generation capability built in.
A recent innovation is the combination of data acquisition and control, where a DAQ system is connected tightly and synchronized with a real-time control system. You can read more about this topic in the related article: “Merging Data Acquisition with a Real-Time Control System”.
THE PURPOSES OF DATA ACQUISITION
 
Engineers in different applications have various requirements, of course, but these key capabilities are present in varying proportions:
•Data recording
•Data storing
•Real-time data visualization
•Post-recording data review
•Data analysis using various mathematical and statistical calculations
•Report generation
IMPORTANCE OF DATA ACQUISITION SYSTEMS
DATA ACQUISITION SYSTEMS OR DAQ DEVICES ARE ESSENTIAL IN THE TESTING OF PRODUCTS, FROM AUTOMOBILES TO MEDICAL DEVICES - BASICALLY, ANY ELECTROMECHANICAL DEVICE THAT PEOPLE USE.
BEFORE DATA ACQUISITION, PRODUCTS WERE TESTED IN AN UNSTRUCTURED, HIGHLY SUBJECTIVE MANNER. FOR EXAMPLE, WHEN TESTING A NEW SUSPENSION IN AN AUTOMOBILE, ENGINEERS OFTEN RELIED ON THE OPINIONS OF TEST DRIVERS AS TO HOW THE SUSPENSION “FELT” TO THEM.
WITH THE INVENTION AND DEVELOPMENT OF DATA ACQUISITION SYSTEMS, WHICH COULD COLLECT DATA FROM A WIDE VARIETY OF SENSORS, THESE KINDS OF SUBJECTIVE OPINIONS WERE REPLACED WITH OBJECTIVE MEASUREMENTS. THESE COULD EASILY BE REPEATED, COMPARED, ANALYSED MATHEMATICALLY AND VISUALIZED IN MANY WAYS.


WHAT IS DATA PROCESSING?
• DATA PROCESSING OCCURS WHEN DATA IS COLLECTED AND TRANSLATED INTO USABLE INFORMATION. USUALLY PERFORMED BY A DATA SCIENTIST OR TEAM OF DATA SCIENTISTS, DATA PROCESSING MUST BE DONE CORRECTLY SO AS NOT TO NEGATIVELY AFFECT THE END PRODUCT, OR DATA OUTPUT.

• DATA PROCESSING STARTS WITH DATA IN ITS RAW FORM AND CONVERTS IT INTO
A MORE READABLE FORMAT (GRAPHS, DOCUMENTS, ETC.), GIVING IT THE FORM
AND CONTEXT NECESSARY TO BE INTERPRETED BY COMPUTERS AND UTILIZED
BY EMPLOYEES THROUGHOUT AN ORGANIZATION.
SIX STAGES OF DATA PROCESSING
1. DATA COLLECTION : COLLECTING DATA IS THE FIRST STEP IN DATA PROCESSING. DATA IS PULLED FROM AVAILABLE SOURCES, INCLUDING DATA LAKES AND DATA WAREHOUSES. IT IS IMPORTANT THAT THE DATA SOURCES AVAILABLE ARE TRUSTWORTHY AND WELL-BUILT SO THE DATA COLLECTED (AND LATER USED AS INFORMATION) IS OF THE HIGHEST POSSIBLE QUALITY.

SIX STAGES OF DATA PROCESSING
2. DATA PREPARATION : ONCE THE DATA IS COLLECTED, IT ENTERS THE DATA PREPARATION STAGE. DATA PREPARATION, OFTEN REFERRED TO AS “PRE-PROCESSING”, IS THE STAGE AT WHICH RAW DATA IS CLEANED UP AND ORGANIZED FOR THE FOLLOWING STAGE OF DATA PROCESSING. DURING PREPARATION, RAW DATA IS DILIGENTLY CHECKED FOR ANY ERRORS. THE PURPOSE OF THIS STEP IS TO ELIMINATE BAD DATA (REDUNDANT, INCOMPLETE, OR INCORRECT DATA) AND BEGIN TO CREATE HIGH-QUALITY DATA FOR THE BEST BUSINESS INTELLIGENCE.
SIX STAGES OF DATA PROCESSING
3. DATA INPUT : THE CLEAN DATA IS THEN ENTERED INTO ITS DESTINATION (PERHAPS A CRM LIKE SALESFORCE OR A DATA WAREHOUSE LIKE REDSHIFT) AND TRANSLATED INTO A LANGUAGE THAT THE DESTINATION SYSTEM CAN UNDERSTAND. DATA INPUT IS THE FIRST STAGE IN WHICH RAW DATA BEGINS TO TAKE THE FORM OF USABLE INFORMATION.
SIX STAGES OF DATA PROCESSING
4. PROCESSING : DURING THIS STAGE, THE DATA INPUT TO THE COMPUTER IN THE PREVIOUS STAGE IS ACTUALLY PROCESSED FOR INTERPRETATION. PROCESSING IS DONE USING MACHINE LEARNING ALGORITHMS, THOUGH THE PROCESS ITSELF MAY VARY SLIGHTLY DEPENDING ON THE SOURCE OF DATA BEING PROCESSED (DATA LAKES, SOCIAL NETWORKS, CONNECTED DEVICES, ETC.) AND ITS INTENDED USE (EXAMINING ADVERTISING PATTERNS, MEDICAL DIAGNOSIS FROM CONNECTED DEVICES, DETERMINING CUSTOMER NEEDS, ETC.).


SIX STAGES OF DATA PROCESSING
5. DATA OUTPUT/INTERPRETATION : THE OUTPUT/INTERPRETATION STAGE IS THE STAGE AT WHICH DATA IS FINALLY USABLE BY NON-DATA SCIENTISTS. IT IS TRANSLATED, READABLE, AND OFTEN IN THE FORM OF GRAPHS, VIDEOS, IMAGES, PLAIN TEXT, ETC. MEMBERS OF THE COMPANY OR INSTITUTION CAN NOW BEGIN TO SELF-SERVE THE DATA FOR THEIR OWN DATA ANALYTICS PROJECTS.


SIX STAGES OF DATA PROCESSING
6. DATA STORAGE : THE FINAL STAGE OF DATA PROCESSING IS STORAGE. AFTER ALL OF THE DATA IS PROCESSED, IT IS THEN STORED FOR FUTURE USE. WHILE SOME INFORMATION MAY BE PUT TO USE IMMEDIATELY, MUCH OF IT WILL SERVE A PURPOSE LATER ON. PLUS, PROPERLY STORED DATA IS A NECESSITY FOR COMPLIANCE WITH DATA PROTECTION LEGISLATION LIKE GDPR. WHEN DATA IS PROPERLY STORED, IT CAN BE QUICKLY AND EASILY ACCESSED BY MEMBERS OF THE ORGANIZATION WHEN NEEDED.
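The six stages can be walked through on a toy data set. A minimal sketch under invented assumptions (the comma-separated "sales" rows and stage functions are made up for illustration only):

```python
raw = ["north,100", "south,", "north,50", "south,70"]   # 1. collection

def prepare(rows):
    # 2. preparation: drop incomplete rows, parse values into typed records
    out = []
    for row in rows:
        region, value = row.split(",")
        if value:
            out.append((region, int(value)))
    return out

def process(records):
    # 4. processing: aggregate the cleaned records per region
    totals = {}
    for region, value in records:
        totals[region] = totals.get(region, 0) + value
    return totals

records = prepare(raw)           # 3. input: clean data enters the system
totals = process(records)
report = sorted(totals.items())  # 5. output: readable, ordered summary
storage = dict(report)           # 6. storage: kept for later retrieval
print(report)                    # [('north', 150), ('south', 70)]
```

Note how the incomplete "south," row is eliminated in preparation, so only high-quality records reach processing.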


DATA VISUALIZATION
DATA VISUALIZATION IS THE GRAPHICAL REPRESENTATION OF INFORMATION AND DATA. BY USING VISUAL ELEMENTS LIKE CHARTS, GRAPHS, AND MAPS, DATA VISUALIZATION TOOLS PROVIDE AN ACCESSIBLE WAY TO SEE AND UNDERSTAND TRENDS, OUTLIERS, AND PATTERNS IN DATA.


ADVANTAGES OF DATA VISUALIZATION
• OUR EYES ARE DRAWN TO COLORS AND PATTERNS. WE CAN QUICKLY IDENTIFY RED FROM BLUE, SQUARE FROM CIRCLE. OUR CULTURE IS VISUAL, INCLUDING EVERYTHING FROM ART AND ADVERTISEMENTS TO TV AND MOVIES.
• DATA VISUALIZATION IS ANOTHER FORM OF VISUAL ART THAT GRABS OUR INTEREST AND KEEPS OUR EYES ON THE MESSAGE. WHEN WE SEE A CHART, WE QUICKLY SEE TRENDS AND OUTLIERS. IF WE CAN SEE SOMETHING, WE INTERNALIZE IT QUICKLY. IT’S STORYTELLING WITH A PURPOSE. IF YOU’VE EVER STARED AT A MASSIVE SPREADSHEET OF DATA AND COULDN’T SEE A TREND, YOU KNOW HOW MUCH MORE EFFECTIVE A VISUALIZATION CAN BE.
EXAMPLES OF DATA VISUALIZATION

THE DIFFERENT TYPES OF VISUALIZATIONS


COMMON GENERAL TYPES OF DATA VISUALIZATION:
• CHARTS
• TABLES
• GRAPHS
• MAPS
• INFOGRAPHICS
• DASHBOARDS
EXAMPLES OF DATA VISUALIZATION
MORE SPECIFIC EXAMPLES OF METHODS TO VISUALIZE DATA:
• AREA CHART
• BAR CHART
• BOX-AND-WHISKER PLOTS
• BUBBLE CLOUD
• BULLET GRAPH
• CARTOGRAM
• CIRCLE VIEW
• DOT DISTRIBUTION MAP
• GANTT CHART
• HEAT MAP
• HIGHLIGHT TABLE
• HISTOGRAM
• MATRIX
• NETWORK
• POLAR AREA
• RADIAL TREE
• SCATTER PLOT (2D OR 3D)
• STREAMGRAPH
• TEXT TABLES
• TIMELINE
• TREEMAP
• WEDGE STACK GRAPH
• WORD CLOUD
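In practice a charting library (e.g. matplotlib) would render these; as a dependency-free sketch, the bar-chart idea can be shown with text. The yearly values are invented:

```python
def bar_chart(data, width=20):
    # Render each value as a bar of '#' characters, scaled to the largest value.
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:<8}{bar} {value}")
    return "\n".join(lines)

print(bar_chart({"2019": 40, "2020": 55, "2021": 90}))
```

Even this crude chart makes the upward trend visible at a glance, which a table of raw numbers does not.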
WHICH TECHNOLOGIES ARE USED?
THERE ARE TWO TYPES:
STATISTICS : STATISTICS STUDIES THE COLLECTION, ANALYSIS, INTERPRETATION
OR EXPLANATION, AND PRESENTATION OF DATA. DATA MINING HAS AN INHERENT
CONNECTION WITH STATISTICS.
A STATISTICAL MODEL IS A SET OF MATHEMATICAL FUNCTIONS THAT DESCRIBE
THE BEHAVIOR OF THE OBJECTS IN A TARGET CLASS IN TERMS OF RANDOM
VARIABLES AND THEIR ASSOCIATED PROBABILITY DISTRIBUTIONS. STATISTICAL
MODELS ARE WIDELY USED TO MODEL DATA AND DATA CLASSES.
FOR EXAMPLE, IN DATA MINING TASKS LIKE DATA CHARACTERIZATION AND
CLASSIFICATION, STATISTICAL MODELS OF TARGET CLASSES CAN BE BUILT.
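As a small illustration of the statistical-model idea above, the sketch below fits a normal distribution (mean and variance) to each of two hypothetical target classes and classifies a new observation by the class whose model assigns it higher density. All measurement values are invented:

```python
import math

def fit_gaussian(xs):
    # Characterize a class by the mean and variance of its observations.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var

def density(x, mean, var):
    # Probability density of x under a normal distribution N(mean, var).
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

class_a = [4.9, 5.1, 5.0, 5.2]
class_b = [7.8, 8.1, 8.0, 8.3]
model = {"a": fit_gaussian(class_a), "b": fit_gaussian(class_b)}

# Classify a new value by the class whose fitted model gives it higher density.
x = 5.4
label = max(model, key=lambda c: density(x, *model[c]))
print(label)   # a
```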
WHICH TECHNOLOGIES ARE USED?
• MACHINE LEARNING : MACHINE LEARNING INVESTIGATES HOW COMPUTERS CAN LEARN (OR IMPROVE THEIR PERFORMANCE) BASED ON DATA. A MAIN RESEARCH AREA IS FOR COMPUTER PROGRAMS TO AUTOMATICALLY LEARN TO RECOGNIZE COMPLEX PATTERNS AND MAKE INTELLIGENT DECISIONS BASED ON DATA. FOR EXAMPLE, A TYPICAL MACHINE LEARNING PROBLEM IS TO PROGRAM A COMPUTER SO THAT IT CAN AUTOMATICALLY RECOGNIZE HANDWRITTEN POSTAL CODES ON MAIL AFTER LEARNING FROM A SET OF EXAMPLES.


WHICH TECHNOLOGIES ARE USED?
• SUPERVISED LEARNING IS BASICALLY A SYNONYM FOR CLASSIFICATION. THE SUPERVISION IN THE LEARNING COMES FROM THE LABELED EXAMPLES IN THE TRAINING DATA SET. FOR EXAMPLE, IN THE POSTAL CODE RECOGNITION PROBLEM, A SET OF HANDWRITTEN POSTAL CODE IMAGES AND THEIR CORRESPONDING MACHINE-READABLE TRANSLATIONS ARE USED AS THE TRAINING EXAMPLES, WHICH SUPERVISE THE LEARNING OF THE CLASSIFICATION MODEL.
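A toy supervised classifier makes the labeled-example idea concrete. A hedged sketch: the 2-D points stand in for image features, the labels supervise the learning, and the nearest-centroid rule is just one simple choice of model:

```python
def train(examples):
    # examples: (point, label) pairs; the labels are the supervisory signal.
    groups = {}
    for point, label in examples:
        groups.setdefault(label, []).append(point)
    # The learned "model": the mean (centroid) of each labeled class.
    return {label: (sum(x for x, _ in pts) / len(pts),
                    sum(y for _, y in pts) / len(pts))
            for label, pts in groups.items()}

def predict(centroids, point):
    # Classify a new point by the nearest class centroid.
    px, py = point
    return min(centroids, key=lambda label:
               (centroids[label][0] - px) ** 2 + (centroids[label][1] - py) ** 2)

training = [((0, 0), "0"), ((1, 0), "0"), ((9, 9), "1"), ((8, 9), "1")]
centroids = train(training)
print(predict(centroids, (1, 1)))   # prints 0
```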
WHICH TECHNOLOGIES ARE USED?
• UNSUPERVISED LEARNING IS ESSENTIALLY A SYNONYM FOR CLUSTERING. THE LEARNING PROCESS IS UNSUPERVISED SINCE THE INPUT EXAMPLES ARE NOT CLASS LABELED. TYPICALLY, WE MAY USE CLUSTERING TO DISCOVER CLASSES WITHIN THE DATA. FOR EXAMPLE, AN UNSUPERVISED LEARNING METHOD CAN TAKE, AS INPUT, A SET OF IMAGES OF HANDWRITTEN DIGITS. SUPPOSE THAT IT FINDS 10 CLUSTERS OF DATA. THESE CLUSTERS MAY CORRESPOND TO THE 10 DISTINCT DIGITS OF 0 TO 9, RESPECTIVELY. HOWEVER, SINCE THE TRAINING DATA ARE NOT LABELED, THE LEARNED MODEL CANNOT TELL US THE SEMANTIC MEANING OF THE CLUSTERS FOUND.
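The clustering idea can be shown with a minimal 1-D k-means: no labels are given, and the algorithm only groups similar values. The data, the two starting centers, and the iteration count are all invented for illustration:

```python
def kmeans(xs, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: put each point with its nearest center.
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.9]
centers, clusters = kmeans(data, centers=[0.0, 5.0])
print(sorted(round(c, 2) for c in centers))   # [1.0, 9.13]
```

The algorithm discovers the two groups, but, as the slide says, it cannot tell us what the groups mean.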
WHICH TECHNOLOGIES ARE USED?
• SEMI-SUPERVISED LEARNING IS A CLASS OF MACHINE LEARNING TECHNIQUES THAT MAKE USE OF BOTH LABELED AND UNLABELED EXAMPLES WHEN LEARNING A MODEL. IN ONE APPROACH, LABELED EXAMPLES ARE USED TO LEARN CLASS MODELS AND UNLABELED EXAMPLES ARE USED TO REFINE THE BOUNDARIES BETWEEN CLASSES. FOR A TWO-CLASS PROBLEM, WE CAN THINK OF THE SET OF EXAMPLES BELONGING TO ONE CLASS AS THE POSITIVE EXAMPLES AND THOSE BELONGING TO THE OTHER CLASS AS THE NEGATIVE EXAMPLES.
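One round of the approach described above can be sketched in a few lines: class models (here simply centroids) are learned from a handful of labeled points, then unlabeled points are assigned and the models refitted. All numbers are invented:

```python
def centroid(points):
    return sum(points) / len(points)

labeled = {"pos": [8.0, 9.0], "neg": [1.0, 2.0]}   # the labeled examples
unlabeled = [8.5, 1.4, 7.9]                        # no labels given

centers = {c: centroid(pts) for c, pts in labeled.items()}
# Assign each unlabeled point to the nearer class model ...
for x in unlabeled:
    nearest = min(centers, key=lambda c: abs(centers[c] - x))
    labeled[nearest].append(x)
# ... and refit, refining the boundary between the two classes.
centers = {c: centroid(pts) for c, pts in labeled.items()}
print(centers)
```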
WHAT KINDS OF PATTERNS CAN BE MINED?
THERE ARE A NUMBER OF DATA MINING FUNCTIONALITIES. THESE INCLUDE CHARACTERIZATION AND DISCRIMINATION; THE MINING OF FREQUENT PATTERNS, ASSOCIATIONS, AND CORRELATIONS; CLASSIFICATION AND REGRESSION; CLUSTERING ANALYSIS; AND OUTLIER ANALYSIS. DATA MINING FUNCTIONALITIES ARE USED TO SPECIFY THE KINDS OF PATTERNS TO BE FOUND IN DATA MINING TASKS. IN GENERAL, SUCH TASKS CAN BE CLASSIFIED INTO TWO CATEGORIES: DESCRIPTIVE AND PREDICTIVE. DESCRIPTIVE MINING TASKS CHARACTERIZE PROPERTIES OF THE DATA IN A TARGET DATA SET. PREDICTIVE MINING TASKS PERFORM INDUCTION ON THE CURRENT DATA IN ORDER TO MAKE PREDICTIONS.
WHAT KINDS OF PATTERNS CAN BE MINED?
DATA ENTRIES CAN BE ASSOCIATED WITH CLASSES OR CONCEPTS. IT CAN BE USEFUL TO DESCRIBE INDIVIDUAL CLASSES AND CONCEPTS IN SUMMARIZED, CONCISE, AND YET PRECISE TERMS. SUCH DESCRIPTIONS OF A CLASS OR A CONCEPT ARE CALLED CLASS/CONCEPT DESCRIPTIONS. THESE DESCRIPTIONS CAN BE DERIVED USING (1) DATA CHARACTERIZATION, BY SUMMARIZING THE DATA OF THE CLASS UNDER STUDY (OFTEN CALLED THE TARGET CLASS) IN GENERAL TERMS, OR (2) DATA DISCRIMINATION, BY COMPARISON OF THE TARGET CLASS WITH ONE OR A SET OF COMPARATIVE CLASSES (OFTEN CALLED THE CONTRASTING CLASSES), OR (3) BOTH DATA CHARACTERIZATION AND DISCRIMINATION.
MACHINE LEARNING
• A COMPUTER PROGRAM IS SAID TO LEARN FROM EXPERIENCE E WHEN ITS PERFORMANCE P AT A TASK T IMPROVES WITH EXPERIENCE E. (TOM MITCHELL, MACHINE LEARNING, 1997)
• LEARNING DENOTES CHANGES IN THE SYSTEM THAT ARE ADAPTIVE IN THE SENSE THAT THEY ENABLE THE SYSTEM TO DO THE SAME TASK OR TASKS MORE EFFICIENTLY AND MORE EFFECTIVELY NEXT TIME. (HERBERT SIMON, 1983)
MACHINE LEARNING
• LEARNING IS CONSTRUCTING OR MODIFYING REPRESENTATIONS OF WHAT IS BEING EXPERIENCED.
• LEARNING IS MAKING USEFUL CHANGES IN THE WORKING OF OUR MIND.
• AN AGENT IS LEARNING IF IT IMPROVES ITS PERFORMANCE ON FUTURE TASKS AFTER MAKING OBSERVATIONS ABOUT THE WORLD.
MACHINE LEARNING
• A BRANCH OF ARTIFICIAL INTELLIGENCE, CONCERNED WITH THE DESIGN AND DEVELOPMENT OF ALGORITHMS THAT ALLOW COMPUTERS TO EVOLVE BEHAVIORS BASED ON EMPIRICAL DATA.
• AS INTELLIGENCE REQUIRES KNOWLEDGE, IT IS NECESSARY FOR THE COMPUTERS TO ACQUIRE KNOWLEDGE.
MACHINE LEARNING
• MACHINE LEARNING IS THE STUDY OF HOW TO BUILD COMPUTER SYSTEMS THAT ADAPT AND IMPROVE WITH EXPERIENCE. IT IS A SUBFIELD OF ARTIFICIAL INTELLIGENCE AND INTERSECTS WITH COGNITIVE SCIENCE, INFORMATION THEORY, AND PROBABILITY THEORY.
• LEARNING ALLOWS THE AGENT TO OPERATE IN INITIALLY UNKNOWN ENVIRONMENTS AND TO BECOME MORE COMPETENT THAN ITS INITIAL KNOWLEDGE ALONE MIGHT ALLOW.
MACHINE LEARNING
MACHINE LEARNING IS PARTICULARLY ATTRACTIVE IN
SEVERAL REAL LIFE PROBLEMS BECAUSE OF THE
FOLLOWING REASONS:
• SOME TASKS CANNOT BE DEFINED WELL EXCEPT BY
EXAMPLE
• WORKING ENVIRONMENT OF MACHINES MAY NOT BE KNOWN AT DESIGN TIME.
• EXPLICIT KNOWLEDGE ENCODING MAY BE DIFFICULT OR NOT AVAILABLE
• ENVIRONMENTS CHANGE OVER TIME
• BIOLOGICAL SYSTEMS LEARN.
MACHINE LEARNING

• APPLICATION OF THE MACHINE LEARNING:


• DATA MINING AND KNOWLEDGE DISCOVERY
• SPEECH/IMAGE/VIDEO (PATTERN) RECOGNITION
• ADAPTIVE CONTROL
• AUTONOMOUS VEHICLES/ROBOTICS
• DECISION SUPPORT SYSTEM
• BIOINFORMATICS
MACHINE LEARNING
• THE MAJOR PARADIGMS OF MACHINE LEARNING ARE AS FOLLOWS:
• ROTE LEARNING IS A MEMORIZATION TECHNIQUE BASED ON REPETITION. THE IDEA IS THAT ONE WILL BE ABLE TO QUICKLY RECALL THE MEANING OF THE MATERIAL THE MORE ONE REPEATS IT. SOME OF THE ALTERNATIVES TO ROTE LEARNING INCLUDE MEANINGFUL LEARNING, ASSOCIATIVE LEARNING, AND ACTIVE LEARNING.
MACHINE LEARNING
• RULE INDUCTION IS AN AREA OF MACHINE LEARNING IN WHICH FORMAL RULES ARE EXTRACTED FROM A SET OF OBSERVATIONS. THE RULES EXTRACTED MAY REPRESENT A FULL SCIENTIFIC MODEL OF THE DATA, OR MERELY REPRESENT LOCAL PATTERNS IN THE DATA. THIS INVOLVES THE PROCESS OF LEARNING BY EXAMPLE, WHERE A SYSTEM TRIES TO INDUCE A GENERAL RULE FROM A SET OF OBSERVED INSTANCES.
• ANALOGY MEANS LEARNING FROM SIMILARITY.
MACHINE LEARNING
• IN THE FIELD OF ARTIFICIAL INTELLIGENCE, A GENETIC ALGORITHM (GA) IS A SEARCH HEURISTIC THAT MIMICS THE PROCESS OF NATURAL SELECTION. THIS HEURISTIC (ALSO SOMETIMES CALLED A METAHEURISTIC) IS ROUTINELY USED TO GENERATE USEFUL SOLUTIONS TO OPTIMIZATION AND SEARCH PROBLEMS. GENETIC ALGORITHMS BELONG TO THE LARGER CLASS OF EVOLUTIONARY ALGORITHMS (EA), WHICH GENERATE SOLUTIONS TO OPTIMIZATION PROBLEMS USING TECHNIQUES INSPIRED BY NATURAL EVOLUTION, SUCH AS INHERITANCE, MUTATION, SELECTION, AND CROSSOVER.
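A tiny genetic algorithm makes the inheritance/mutation/selection/crossover loop concrete. A hedged sketch: the fitness function f(x) = x(31 − x) over 5-bit integers, the population size, and the rates are all invented for illustration:

```python
import random

random.seed(0)

def fitness(x):
    # The function being maximized; its peak lies near x = 15 or 16.
    return x * (31 - x)

def crossover(a, b):
    # One-point crossover on 5-bit chromosomes: low bits from b, high bits from a.
    point = random.randint(1, 4)
    mask = (1 << point) - 1
    return (a & ~mask) | (b & mask)

def mutate(x, rate=0.1):
    # Flip each of the 5 bits independently with probability `rate`.
    for bit in range(5):
        if random.random() < rate:
            x ^= 1 << bit
    return x

population = [random.randint(0, 31) for _ in range(8)]
for _ in range(30):
    # Selection: the fitter half survive as parents (elitism).
    parents = sorted(population, key=fitness, reverse=True)[:4]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(4)]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```

Because the parents are carried over each generation, the best fitness found never decreases.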
MACHINE LEARNING
• REINFORCEMENT LEARNING IS AN AREA OF MACHINE LEARNING INSPIRED BY BEHAVIORIST PSYCHOLOGY, CONCERNED WITH HOW SOFTWARE AGENTS OUGHT TO TAKE ACTIONS IN AN ENVIRONMENT SO AS TO MAXIMIZE SOME NOTION OF CUMULATIVE REWARD.
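The reward-maximization idea can be sketched with tabular Q-learning on a toy environment (not any specific system from the slides): a 5-state corridor where the agent starts in state 0 and receives reward 1 only on reaching state 4. All constants are invented:

```python
import random

random.seed(1)
ACTIONS = [-1, +1]                   # move left or right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != 4:
        if random.random() < epsilon:                    # explore
            a = random.choice(ACTIONS)
        else:                                            # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(4, max(0, s + a))                       # bounded corridor
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: move toward reward + discounted best future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)]
print(policy)   # the learned policy moves right from every state: [1, 1, 1, 1]
```

The agent is never told the correct action; it infers one purely from the cumulative reward signal.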
GENERAL BLOCK DIAGRAM OF
LEARNING SYSTEM
MACHINE LEARNING
• THE INPUT PROGRAM IS A GENERAL PROGRAM EXECUTED FOR THE SOLUTION OF A PROBLEM. THE RESULTS ARE REPORTED AS OUTPUT. THESE RESULTS ARE ALSO STORED BACK TO ACQUIRE KNOWLEDGE FOR FUTURE USE. THIS SIMPLY MEANS THAT, IF A SIMILAR PROBLEM IS REQUIRED TO BE SOLVED NEXT TIME, THE RESULTS CAN AUTOMATICALLY BE TAKEN FROM PREVIOUSLY ACQUIRED KNOWLEDGE. A SYSTEM IS SAID TO BE LEARNING IF IT NOT ONLY PERFORMS REPETITIONS OF THE SAME TASK MORE EFFECTIVELY BUT ALSO SIMILAR TASKS OF A RELATED DOMAIN. LEARNING COVERS A RANGE OF PHENOMENA:
• KNOWLEDGE ACQUISITION
• SKILL REFINEMENT
MACHINE LEARNING
• KNOWLEDGE ACQUISITION REFERS TO A SITUATION IN WHICH A COMPUTER EXECUTES ONE PROGRAM AND REMEMBERS THE PROCESS FOR FUTURE USE. KNOWLEDGE ACQUISITION IS THE PROCESS OF EXTRACTING, STRUCTURING AND ORGANIZING KNOWLEDGE FROM ONE SOURCE, USUALLY HUMAN EXPERTS, SO IT CAN BE USED IN SOFTWARE SUCH AS AN EXPERT SYSTEM. THIS IS OFTEN THE MAJOR OBSTACLE IN BUILDING AN EXPERT SYSTEM.


MACHINE LEARNING
• SKILL REFINEMENT REFERS TO THE ABILITY BY WHICH, BY DOING SOME TASK REPEATEDLY, PEOPLE TEND TO SOLVE THE TASK IN LESS TIME. WITH PRACTICE, SKILLS GET REFINED.
LEARNING AGENT

Model of Learning Agent


LEARNING AGENT
A LEARNING AGENT CAN BE SPLIT INTO THE 4 PARTS SHOWN IN THE DIAGRAM.
• THE LEARNING ELEMENT IS RESPONSIBLE FOR IMPROVEMENTS; IT CAN MAKE A CHANGE TO ANY OF THE KNOWLEDGE COMPONENTS IN THE AGENT. ONE WAY OF LEARNING IS TO OBSERVE PAIRS OF SUCCESSIVE STATES IN THE PERCEPT SEQUENCE; FROM THIS THE AGENT CAN LEARN HOW THE WORLD EVOLVES. FOR UTILITY-BASED AGENTS AN EXTERNAL PERFORMANCE STANDARD IS NEEDED TO TELL THE CRITIC WHETHER THE AGENT’S ACTION HAS A GOOD OR A BAD EFFECT ON THE WORLD.
LEARNING AGENT
• THE PERFORMANCE ELEMENT IS RESPONSIBLE FOR SELECTING EXTERNAL ACTIONS; IT CORRESPONDS TO THE COMPLETE AGENTS DISCUSSED PREVIOUSLY.
• THE LEARNING AGENT GAINS FEEDBACK FROM THE CRITIC ON HOW WELL THE AGENT IS DOING AND DETERMINES HOW THE PERFORMANCE ELEMENT SHOULD BE MODIFIED, IF AT ALL, TO IMPROVE THE AGENT.
• FOR EXAMPLE, WHEN YOU WERE IN SCHOOL YOU WOULD TAKE A TEST AND IT WOULD BE MARKED; THE TEST IS THE CRITIC. THE TEACHER WOULD MARK THE TEST, SEE WHAT COULD BE IMPROVED, AND INSTRUCT YOU HOW TO DO BETTER NEXT TIME; THE TEACHER IS THE LEARNING ELEMENT AND YOU ARE THE PERFORMANCE ELEMENT.
LEARNING AGENT
• THE LAST COMPONENT IS THE PROBLEM GENERATOR. THE PERFORMANCE ELEMENT ONLY SUGGESTS ACTIONS THAT IT CAN ALREADY DO, SO WE NEED A WAY OF GETTING THE AGENT TO EXPERIENCE NEW SITUATIONS; THIS IS WHAT THE PROBLEM GENERATOR IS FOR. THIS WAY THE AGENT KEEPS ON LEARNING.
• FOR EXAMPLE, COMING BACK TO THE SCHOOL ANALOGY: IN SCIENCE, WITH YOUR KNOWLEDGE AT THAT TIME, YOU WOULD NOT HAVE THOUGHT OF PLACING A MASS ON A SPRING, BUT THE TEACHER SUGGESTED THE EXPERIMENT; YOU DID IT, AND THIS TAUGHT YOU MORE AND ADDED TO YOUR KNOWLEDGE BASE.
LEARNING AGENT
• ALL FOUR COMPONENTS OF THE AGENT ARE CRITICALLY IMPORTANT, BUT THE MOST IMPORTANT IS THE DESIGN OF THE LEARNING ELEMENT. THE DESIGN OF THE LEARNING ELEMENT IS AFFECTED BY THREE MAJOR ISSUES:
• WHICH COMPONENTS OF THE PERFORMANCE ELEMENT ARE TO BE LEARNED?
• WHAT REPRESENTATION IS USED FOR THESE COMPONENTS?
• WHAT FEEDBACK IS AVAILABLE TO LEARN THESE COMPONENTS?
LEARNING SYSTEM MODEL
LEARNING SYSTEM MODEL
• THE FIGURE SHOWN ABOVE IS A TYPICAL LEARNING SYSTEM MODEL. IT CONSISTS OF THE FOLLOWING COMPONENTS:
1. LEARNING ELEMENT
2. KNOWLEDGE BASE
3. PERFORMANCE ELEMENT
4. FEEDBACK ELEMENT
5. STANDARD SYSTEM
LEARNING SYSTEM MODEL
• LEARNING ELEMENT : IT RECEIVES AND PROCESSES THE INPUT OBTAINED FROM A PERSON (I.E., A TEACHER), FROM REFERENCE MATERIAL LIKE MAGAZINES, JOURNALS, ETC., OR FROM THE ENVIRONMENT AT LARGE.
• KNOWLEDGE BASE : THIS IS SOMEWHAT SIMILAR TO A DATABASE. INITIALLY IT MAY CONTAIN SOME BASIC KNOWLEDGE. THEREAFTER IT RECEIVES MORE KNOWLEDGE, WHICH MAY BE NEW AND SO BE ADDED AS IT IS, OR IT MAY REPLACE THE EXISTING KNOWLEDGE.
LEARNING SYSTEM MODEL
• PERFORMANCE ELEMENT : IT USES THE UPDATED KNOWLEDGE BASE TO PERFORM SOME TASKS OR SOLVE SOME PROBLEMS AND PRODUCES THE CORRESPONDING OUTPUT.
• FEEDBACK ELEMENT : IT RECEIVES TWO INPUTS, ONE FROM THE PERFORMANCE ELEMENT AND ONE FROM THE STANDARD (OR IDEALIZED) SYSTEM, AND IDENTIFIES THE DIFFERENCES BETWEEN THE TWO. THE FEEDBACK IS USED TO DETERMINE WHAT SHOULD BE DONE IN ORDER TO PRODUCE THE CORRECT OUTPUT.
LEARNING SYSTEM MODEL
• STANDARD SYSTEM : IT IS A TRAINED PERSON OR A COMPUTER PROGRAM THAT IS ABLE TO PRODUCE THE CORRECT OUTPUT. IN ORDER TO CHECK WHETHER THE MACHINE LEARNING SYSTEM HAS LEARNED WELL, THE SAME INPUT IS GIVEN TO THE STANDARD SYSTEM. THE OUTPUTS OF THE STANDARD SYSTEM AND OF THE PERFORMANCE ELEMENT ARE GIVEN AS INPUTS TO THE FEEDBACK ELEMENT FOR COMPARISON. THE STANDARD SYSTEM IS ALSO CALLED THE IDEALIZED SYSTEM.
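A toy realization of this model can be written in a few lines. A hedged sketch under invented assumptions: the "standard system" is the true function 3x, the knowledge base is a single coefficient, and the feedback element reports the difference that the learning element uses to adjust the knowledge:

```python
def standard_system(x):
    # The idealized system always produces the correct output.
    return 3.0 * x

weight = 0.0           # knowledge base: one learned coefficient

for step in range(50):
    x = 2.0                          # the same input goes to both systems
    produced = weight * x            # performance element output
    target = standard_system(x)      # standard system output
    error = target - produced        # feedback element: difference of the two
    weight += 0.1 * error            # learning element updates the knowledge

print(round(weight, 3))   # ≈ 3.0: the system has learned the standard behavior
```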
CLASSIFICATION OF LEARNING SYSTEM
• GOAL/TASK/TARGET FUNCTION
• PREDICTION: TO PREDICT THE DESIRED OUTPUT FOR A GIVEN INPUT BASED ON PREVIOUS
INPUT/OUTPUT PAIRS. EG. TO PREDICT THE VALUE OF A STOCK GIVEN OTHER INPUTS LIKE
MARKET INDEX, INTEREST RATES ETC.
• CATEGORIZATION: TO CLASSIFY AN OBJECT INTO ONE OF SEVERAL CATEGORIES BASED ON
FEATURES OF THE OBJECT. EG. A ROBOTIC VISION SYSTEM TO CATEGORIZE A MACHINE
PART INTO ONE OF THE CATEGORIES, SPANNER, HAMMER ETC BASED ON THE PARTS’
DIMENSION AND SHAPE
• CLUSTERING: TO ORGANIZE A GROUP OF OBJECTS INTO HOMOGENOUS SEGMENTS. EG. A
SATELLITE IMAGE ANALYSIS SYSTEM WHICH GROUPS LAND AREAS INTO FOREST, URBAN
AND WATER BODY, FOR BETTER UTILIZATION OF NATURAL RESOURCES.
• PLANNING: TO GENERATE AN OPTIMAL SEQUENCE OF ACTIONS TO SOLVE A PARTICULAR PROBLEM. EG. AN UNMANNED AIR VEHICLE WHICH PLANS ITS PATH TO OBTAIN A SET OF PICTURES AND AVOID ANTI-AIRCRAFT GUNS.
CLASSIFICATION OF LEARNING SYSTEM

• MODELS
• PROPOSITIONAL AND FOL RULES

• DECISION TREES

• LINEAR SEPARATORS

• NEURAL NETWORKS

• GRAPHICAL MODELS

• TEMPORAL MODELS LIKE HIDDEN MARKOV MODELS


CLASSIFICATION OF LEARNING SYSTEM
• LEARNING RULES: LEARNING RULES ARE OFTEN TIED UP WITH THE MODEL OF LEARNING USED. SOME COMMON RULES ARE GRADIENT DESCENT, LEAST SQUARE ERROR, EXPECTATION MAXIMIZATION AND MARGIN MAXIMIZATION.
• EXPERIENCES: LEARNING ALGORITHMS USE EXPERIENCES IN THE
FORM OF PERCEPTIONS OR PERCEPTION ACTION PAIRS TO IMPROVE
THEIR PERFORMANCE. THE NATURE OF EXPERIENCES AVAILABLE
VARIES WITH APPLICATIONS.
• SUPERVISED LEARNING: IN THIS TYPE OF LEARNING A TEACHER OR
ORACLE IS AVAILABLE WHICH PROVIDES THE DESIRED ACTION
CORRESPONDING TO A PERCEPTION. A SET OF PERCEPTION ACTION
PAIR PROVIDES WHAT IS CALLED A TRAINING SET.
CLASSIFICATION OF LEARNING SYSTEM
• UNSUPERVISED LEARNING: IN UNSUPERVISED LEARNING NO TEACHER IS AVAILABLE. THE LEARNER ONLY DISCOVERS PERSISTENT PATTERNS IN THE DATA, CONSISTING OF A COLLECTION OF PERCEPTIONS. THIS IS ALSO CALLED EXPLORATORY LEARNING. EG. FINDING OUT MALICIOUS NETWORK ATTACKS FROM A SEQUENCE OF ANOMALOUS DATA PACKETS.
• ACTIVE LEARNING: IN THIS TYPE OF LEARNING NOT ONLY IS A TEACHER AVAILABLE, THE LEARNER HAS THE FREEDOM TO ASK THE TEACHER FOR SUITABLE PERCEPTION-ACTION EXAMPLE PAIRS WHICH WILL HELP THE LEARNER TO IMPROVE ITS PERFORMANCE. EG. CONSIDER A NEWS RECOMMENDER SYSTEM WHICH TRIES TO LEARN A USER’S PREFERENCES AND CATEGORIZE NEWS ARTICLES AS INTERESTING OR UNINTERESTING TO THE USER.
CLASSIFICATION OF LEARNING SYSTEM
• REINFORCEMENT LEARNING : IN THIS TYPE OF LEARNING A TEACHER IS AVAILABLE, BUT THE TEACHER, INSTEAD OF DIRECTLY PROVIDING THE DESIRED ACTION CORRESPONDING TO A PERCEPTION, RETURNS REWARDS AND PUNISHMENTS TO THE LEARNER FOR ITS ACTIONS. EG. A ROBOT IN AN UNKNOWN TERRAIN, WHERE IT GETS A PUNISHMENT WHEN IT HITS AN OBSTACLE AND A REWARD WHEN IT MOVES SMOOTHLY.
SUPERVISED LEARNING
• SUPERVISED LEARNING IS THE MACHINE LEARNING TASK OF INFERRING A FUNCTION FROM LABELED TRAINING DATA. THE TRAINING DATA CONSIST OF A SET OF TRAINING EXAMPLES. IN SUPERVISED LEARNING, EACH EXAMPLE IS A PAIR CONSISTING OF AN INPUT OBJECT (TYPICALLY A VECTOR) AND A DESIRED OUTPUT VALUE (ALSO CALLED THE SUPERVISORY SIGNAL).
SUPERVISED LEARNING
• A SUPERVISED LEARNING ALGORITHM ANALYZES THE TRAINING DATA AND PRODUCES AN INFERRED FUNCTION, WHICH IS CALLED A CLASSIFIER (IF THE OUTPUT IS DISCRETE) OR A REGRESSION FUNCTION (IF THE OUTPUT IS CONTINUOUS).
• THE INFERRED FUNCTION SHOULD PREDICT THE CORRECT OUTPUT VALUE FOR ANY VALID INPUT OBJECT.
SUPERVISED LEARNING
• THIS REQUIRES THE LEARNING ALGORITHM TO GENERALIZE FROM TRAINING DATA TO UNSEEN SITUATIONS IN A “REASONABLE” WAY.
• THE PARALLEL TASK IN HUMAN AND ANIMAL PSYCHOLOGY IS OFTEN REFERRED TO AS CONCEPT LEARNING.
SUPERVISED LEARNING PROCESS: TWO STEPS
 Learning (training): Learn a model using the training data
 Testing: Test the model using unseen test data to assess the model accuracy

Accuracy = Number of correct classifications / Total number of test cases
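The accuracy formula above is a one-liner in code. The predicted and actual labels here are invented test-run data:

```python
predicted = ["cat", "dog", "dog", "cat", "dog"]
actual    = ["cat", "dog", "cat", "cat", "dog"]

# Accuracy = number of correct classifications / total number of test cases
correct = sum(p == a for p, a in zip(predicted, actual))
accuracy = correct / len(actual)
print(accuracy)   # 0.8
```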
SUPERVISED LEARNING
• IN ORDER TO SOLVE A GIVEN PROBLEM OF SUPERVISED LEARNING, ONE HAS TO PERFORM THE FOLLOWING STEPS:
• DETERMINE THE TYPE OF TRAINING EXAMPLES. BEFORE DOING ANYTHING ELSE, THE USER SHOULD DECIDE WHAT KIND OF DATA IS TO BE USED AS A TRAINING SET. IN THE CASE OF HANDWRITING ANALYSIS, FOR EXAMPLE, THIS MIGHT BE A SINGLE HANDWRITTEN CHARACTER, AN ENTIRE HANDWRITTEN WORD, OR AN ENTIRE LINE OF HANDWRITING.
• GATHER A TRAINING SET. THE TRAINING SET NEEDS TO BE REPRESENTATIVE OF THE REAL-WORLD USE OF THE FUNCTION. THUS, A SET OF INPUT OBJECTS IS GATHERED AND CORRESPONDING OUTPUTS ARE ALSO GATHERED, EITHER FROM HUMAN EXPERTS OR FROM MEASUREMENTS.
SUPERVISED LEARNING
• DETERMINE THE INPUT FEATURE REPRESENTATION OF THE LEARNED FUNCTION. THE ACCURACY OF THE LEARNED FUNCTION DEPENDS STRONGLY ON HOW THE INPUT OBJECT IS REPRESENTED. TYPICALLY, THE INPUT OBJECT IS TRANSFORMED INTO A FEATURE VECTOR, WHICH CONTAINS A NUMBER OF FEATURES THAT ARE DESCRIPTIVE OF THE OBJECT. THE NUMBER OF FEATURES SHOULD NOT BE TOO LARGE, BECAUSE OF THE CURSE OF DIMENSIONALITY, BUT SHOULD CONTAIN ENOUGH INFORMATION TO ACCURATELY PREDICT THE OUTPUT.
• DETERMINE THE STRUCTURE OF THE LEARNED FUNCTION AND CORRESPONDING LEARNING ALGORITHM. FOR EXAMPLE, THE ENGINEER MAY CHOOSE TO USE SUPPORT VECTOR MACHINES OR DECISION TREES.
SUPERVISED LEARNING
• COMPLETE THE DESIGN. RUN THE LEARNING ALGORITHM ON THE GATHERED TRAINING SET. SOME SUPERVISED LEARNING ALGORITHMS REQUIRE THE USER TO DETERMINE CERTAIN CONTROL PARAMETERS. THESE PARAMETERS MAY BE ADJUSTED BY OPTIMIZING PERFORMANCE ON A SUBSET (CALLED A VALIDATION SET) OF THE TRAINING SET, OR VIA CROSS-VALIDATION.
• EVALUATE THE ACCURACY OF THE LEARNED FUNCTION. AFTER PARAMETER ADJUSTMENT AND LEARNING, THE PERFORMANCE OF THE RESULTING FUNCTION SHOULD BE MEASURED ON A TEST SET THAT IS SEPARATE FROM THE TRAINING SET.
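The last two steps can be sketched concretely. Under invented assumptions, the "model" is a threshold classifier on 1-D values whose true labeling rule is y = 1 iff x > 5; the threshold plays the role of a control parameter tuned on a validation set, with accuracy then measured on a separate test set:

```python
# Hand-made (x, label) pairs; labels follow the hidden rule y = 1 iff x > 5.
validation = [(1.0, 0), (3.5, 0), (4.6, 0), (5.4, 1), (7.0, 1)]
test       = [(0.8, 0), (4.9, 0), (5.1, 1), (6.6, 1), (9.2, 1)]

def accuracy(threshold, examples):
    # Fraction of examples the threshold classifier gets right.
    return sum((x > threshold) == bool(y) for x, y in examples) / len(examples)

# Adjust the control parameter by optimizing performance on the validation set.
candidates = [2.0, 4.0, 5.0, 6.0]
best = max(candidates, key=lambda t: accuracy(t, validation))

# Evaluate the tuned function on the held-out test set.
print(best, accuracy(best, test))   # 5.0 1.0
```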


GENERALIZATIONS OF SUPERVISED LEARNING
• THERE ARE SEVERAL WAYS IN WHICH THE STANDARD SUPERVISED LEARNING PROBLEM CAN BE GENERALIZED:
• SEMI-SUPERVISED LEARNING : IN THIS SETTING, THE DESIRED OUTPUT VALUES ARE PROVIDED ONLY FOR A SUBSET OF THE TRAINING DATA. THE REMAINING DATA IS UNLABELED.
• ACTIVE LEARNING: INSTEAD OF ASSUMING THAT ALL OF THE TRAINING EXAMPLES ARE GIVEN AT THE START, ACTIVE LEARNING ALGORITHMS INTERACTIVELY COLLECT NEW EXAMPLES, TYPICALLY BY MAKING QUERIES TO A HUMAN USER. OFTEN, THE QUERIES ARE BASED ON UNLABELED DATA, WHICH IS A SCENARIO THAT COMBINES SEMI-SUPERVISED LEARNING WITH ACTIVE LEARNING.
GENERALIZATIONS OF SUPERVISED LEARNING
• STRUCTURED PREDICTION: WHEN THE DESIRED OUTPUT VALUE IS A COMPLEX OBJECT, SUCH AS A PARSE TREE OR LABELED GRAPH, THEN STANDARD METHODS MUST BE EXTENDED.
• LEARNING TO RANK: WHEN THE INPUT IS A SET OF OBJECTS AND THE DESIRED OUTPUT IS A RANKING OF THOSE OBJECTS, THEN AGAIN THE STANDARD METHODS MUST BE EXTENDED.


ADVANTAGES AND DISADVANTAGES
• THE FOREMOST ADVANTAGE OF SUPERVISED LEARNING IS THAT ALL CLASSES OR ANALOG OUTPUTS MANIPULATED BY THE ALGORITHMS OF THIS PARADIGM ARE MEANINGFUL TO HUMANS. IT CAN BE EASILY USED FOR DISCRIMINATIVE PATTERN CLASSIFICATION AND FOR DATA REGRESSION. BUT IT ALSO HAS SEVERAL DISADVANTAGES. THE FIRST ONE IS CAUSED BY THE DIFFICULTY OF COLLECTING SUPERVISION OR LABELS.
ADVANTAGES AND DISADVANTAGES
• WHEN THERE IS A HUGE VOLUME OF INPUT DATA, IT IS PROHIBITIVELY EXPENSIVE, IF NOT IMPOSSIBLE, TO LABEL ALL OF IT. FOR EXAMPLE, IT IS NOT A TRIVIAL TASK TO LABEL A HUGE SET OF IMAGES FOR IMAGE CLASSIFICATION. SECOND, AS NOT EVERYTHING IN THE REAL WORLD HAS A DISTINCTIVE LABEL, THERE ARE UNCERTAINTIES AND AMBIGUITIES IN THE SUPERVISION OR LABELS. FOR EXAMPLE, THE MARGIN FOR SEPARATING THE TWO CONCEPTS OF “HOT” AND “COLD” IS NOT DISTINCT, AND IT IS DIFFICULT TO NAME AN OBJECT THAT LIES IN AN INTERMEDIATE SITUATION BETWEEN THE TWO. THESE DIFFICULTIES MAY LIMIT THE APPLICATIONS OF THE SUPERVISED LEARNING PARADIGM IN SOME SCENARIOS. TO OVERCOME THESE LIMITATIONS IN PRACTICE, OTHER LEARNING PARADIGMS, SUCH AS UNSUPERVISED LEARNING, SEMI-SUPERVISED LEARNING, REINFORCEMENT LEARNING, ACTIVE LEARNING, OR SOME MIXED LEARNING APPROACHES CAN BE CONSIDERED.
UNSUPERVISED LEARNING
• UNSUPERVISED LEARNING STUDIES HOW SYSTEMS CAN LEARN TO REPRESENT PARTICULAR INPUT PATTERNS IN A WAY THAT REFLECTS THE STATISTICAL STRUCTURE OF THE OVERALL COLLECTION OF INPUT PATTERNS.
UNSUPERVISED LEARNING
• IN MACHINE LEARNING, THE PROBLEM OF UNSUPERVISED LEARNING IS THAT OF TRYING TO FIND HIDDEN STRUCTURE IN UNLABELED DATA. SINCE THE EXAMPLES GIVEN TO THE LEARNER ARE UNLABELED, THERE IS NO ERROR OR REWARD SIGNAL TO EVALUATE A POTENTIAL SOLUTION. THIS DISTINGUISHES UNSUPERVISED LEARNING FROM SUPERVISED LEARNING AND REINFORCEMENT LEARNING.
UNSUPERVISED LEARNING
• UNSUPERVISED LEARNING IS CLOSELY RELATED TO THE PROBLEM OF DENSITY ESTIMATION IN STATISTICS. HOWEVER, UNSUPERVISED LEARNING ALSO ENCOMPASSES MANY OTHER TECHNIQUES THAT SEEK TO SUMMARIZE AND EXPLAIN KEY FEATURES OF THE DATA. MANY METHODS EMPLOYED IN UNSUPERVISED LEARNING ARE BASED ON DATA MINING METHODS USED TO PREPROCESS DATA.
UNSUPERVISED LEARNING
• APPROACHES TO UNSUPERVISED LEARNING INCLUDE:
• CLUSTERING (E.G., K-MEANS, MIXTURE MODELS, HIERARCHICAL
CLUSTERING),
• HIDDEN MARKOV MODELS,
• BLIND SIGNAL SEPARATION USING FEATURE EXTRACTION
TECHNIQUES FOR DIMENSIONALITY REDUCTION, E.G.:
• PRINCIPAL COMPONENT ANALYSIS,
• INDEPENDENT COMPONENT ANALYSIS,
• NON-NEGATIVE MATRIX FACTORIZATION,
• SINGULAR VALUE DECOMPOSITION.
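The clustering entry in the list above (k-means) can be sketched in a few lines of plain Python. The 1-D points, the fixed iteration count, and the fallback for empty clusters are illustrative simplifications; a real implementation would use a library such as scikit-learn:

```python
# Minimal k-means sketch: alternate between assigning points to the nearest
# centroid and moving each centroid to the mean of its assigned points.

import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster ends up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

pts = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two obvious groups
centroids, clusters = kmeans(pts, k=2)
```

On this toy data the two centroids settle near 1.0 and 9.1, recovering the two groups without any labels.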
UNSUPERVISED LEARNING
• THERE ARE ACTUALLY TWO APPROACHES TO UNSUPERVISED LEARNING.
• THE FIRST APPROACH IS TO TEACH THE AGENT NOT BY GIVING EXPLICIT
CATEGORIZATION, BUT BY USING SOME SORT OF REWARD SYSTEM TO INDICATE
SUCCESS.
• THIS TYPE OF TRAINING WILL GENERALLY FIT INTO THE DECISION PROBLEM
FRAMEWORK BECAUSE THE GOAL IS NOT TO PRODUCE A CLASSIFICATION BUT TO
MAKE DECISIONS THAT MAXIMIZE REWARDS.
• THIS APPROACH NICELY GENERALIZES TO THE REAL WORLD, WHERE AGENTS
MIGHT BE REWARDED FOR DOING CERTAIN ACTIONS AND PUNISHED FOR DOING
OTHERS.
UNSUPERVISED LEARNING
• OFTEN, A FORM OF REINFORCEMENT LEARNING CAN BE USED FOR
UNSUPERVISED LEARNING, WHERE THE AGENT BASES ITS ACTIONS ON THE
PREVIOUS REWARDS AND PUNISHMENTS WITHOUT NECESSARILY EVEN
LEARNING ANY INFORMATION ABOUT THE EXACT WAYS THAT ITS ACTIONS
AFFECT THE WORLD.
• IN A WAY, ALL OF THIS INFORMATION IS UNNECESSARY BECAUSE BY LEARNING A
REWARD FUNCTION, THE AGENT SIMPLY KNOWS WHAT TO DO WITHOUT ANY
PROCESSING BECAUSE IT KNOWS THE EXACT REWARD IT EXPECTS TO ACHIEVE
FOR EACH ACTION IT COULD TAKE.
• THIS CAN BE EXTREMELY BENEFICIAL IN CASES WHERE CALCULATING EVERY
POSSIBILITY IS VERY TIME CONSUMING. ON THE OTHER HAND, IT CAN BE VERY
TIME CONSUMING TO LEARN BY, ESSENTIALLY, TRIAL AND ERROR.
• BUT THIS KIND OF LEARNING CAN BE POWERFUL BECAUSE IT ASSUMES NO PRE-
DISCOVERED CLASSIFICATION OF EXAMPLES.
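The reward-driven idea above can be sketched as a simple trial-and-error learner that tracks the average reward of each action and then "simply knows what to do". The action set and reward function are invented for illustration, and this is a bandit-style simplification, not a full reinforcement-learning algorithm:

```python
# Reward-driven learning sketch: explore actions at random, record average
# rewards, and afterwards pick the action with the best learned value.
# The agent never models HOW its actions affect the world, only the rewards.

import random

def learn_action_values(actions, reward_fn, trials=1000, seed=1):
    random.seed(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(trials):
        a = random.choice(actions)          # trial and error
        totals[a] += reward_fn(a)
        counts[a] += 1
    return {a: totals[a] / max(counts[a], 1) for a in actions}

# Hypothetical environment: action "b" pays the most on average.
rewards = {"a": 1.0, "b": 3.0, "c": 2.0}
values = learn_action_values(list(rewards),
                             lambda a: rewards[a] + random.gauss(0, 0.1))
best = max(values, key=values.get)
```

As the slide notes, the trial-and-error phase can be slow, but once the values are learned, acting is just a lookup.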
UNSUPERVISED LEARNING
• A SECOND TYPE OF UNSUPERVISED LEARNING IS CALLED CLUSTERING. IN THIS
TYPE OF LEARNING, THE GOAL IS NOT TO MAXIMIZE A UTILITY FUNCTION, BUT
SIMPLY TO FIND SIMILARITIES IN THE TRAINING DATA.
• THE ASSUMPTION IS OFTEN THAT THE CLUSTERS DISCOVERED WILL MATCH
REASONABLY WELL WITH AN INTUITIVE CLASSIFICATION. FOR INSTANCE,
CLUSTERING INDIVIDUALS BASED ON DEMOGRAPHICS MIGHT RESULT IN A
CLUSTERING OF THE WEALTHY IN ONE GROUP AND THE POOR IN ANOTHER.
• ALTHOUGH THE ALGORITHM WON’T HAVE NAMES TO ASSIGN TO THESE
CLUSTERS, IT CAN PRODUCE THEM AND THEN USE THOSE CLUSTERS TO ASSIGN
NEW EXAMPLES TO ONE OR THE OTHER OF THE CLUSTERS.
• THIS IS A DATA-DRIVEN APPROACH THAT CAN WORK WELL WHEN THERE IS
SUFFICIENT DATA.
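Assigning new examples to discovered clusters, as described above, reduces to a nearest-centroid lookup. The centroid values here are hypothetical outputs of a prior clustering run, standing in for, say, "low income" and "high income" groups:

```python
# Assign a new example to the nearest discovered cluster centre (1-D case).

def assign(example, centroids):
    """Return the index of the closest centroid by squared distance."""
    return min(range(len(centroids)),
               key=lambda i: (example - centroids[i]) ** 2)

centroids = [1.0, 9.0]            # hypothetical centroids from clustering
assert assign(1.4, centroids) == 0   # lands in the first cluster
assert assign(8.2, centroids) == 1   # lands in the second cluster
```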
COMPARISON OF SUPERVISED AND UNSUPERVISED
LEARNING
WHAT KINDS OF PATTERNS CAN BE MINED?
DATA CHARACTERIZATION IS A SUMMARIZATION OF THE GENERAL CHARACTERISTICS OR
FEATURES OF A TARGET CLASS OF DATA. THE DATA CORRESPONDING TO THE USER-SPECIFIED
CLASS ARE TYPICALLY COLLECTED BY A QUERY. THE OUTPUT OF DATA CHARACTERIZATION
CAN BE PRESENTED IN VARIOUS FORMS. EXAMPLES INCLUDE PIE CHARTS, BAR CHARTS,
CURVES, MULTIDIMENSIONAL DATA CUBES, AND MULTIDIMENSIONAL TABLES, INCLUDING
CROSSTABS. THE RESULTING DESCRIPTIONS CAN ALSO BE PRESENTED AS GENERALIZED
RELATIONS OR IN RULE FORM (CALLED CHARACTERISTIC RULES).
NATURAL LANGUAGE PROCESSING
• A SUB-FIELD OF ARTIFICIAL INTELLIGENCE (AI)
• AN INTERDISCIPLINARY SUBJECT
➢AIM:
• TO BUILD INTELLIGENT COMPUTERS THAT CAN INTERACT WITH HUMAN
BEINGS LIKE A HUMAN BEING!
NATURAL LANGUAGE PROCESSING
NATURAL LANGUAGE PROCESSING IS A THEORETICALLY MOTIVATED
RANGE OF COMPUTATIONAL TECHNIQUES FOR ANALYZING AND
REPRESENTING NATURALLY OCCURRING TEXTS/SPEECH AT ONE OR
MORE LEVELS OF LINGUISTIC ANALYSIS FOR THE PURPOSE OF
ACHIEVING HUMAN-LIKE LANGUAGE PROCESSING FOR A RANGE OF
TASKS OR APPLICATIONS.
WHAT IS NATURAL LANGUAGE PROCESSING?
• NATURAL LANGUAGE PROCESSING
• PROCESSES INFORMATION CONTAINED IN NATURAL LANGUAGE TEXT.
• ALSO KNOWN AS COMPUTATIONAL LINGUISTICS (CL), HUMAN
LANGUAGE TECHNOLOGY (HLT), OR NATURAL LANGUAGE
ENGINEERING (NLE).
• CAN MACHINES UNDERSTAND HUMAN LANGUAGE?
• DEFINE ‘UNDERSTAND’.
• UNDERSTANDING IS THE ULTIMATE GOAL. HOWEVER, ONE
DOESN’T NEED TO FULLY UNDERSTAND TO BE USEFUL.
WHAT IS NATURAL LANGUAGE PROCESSING?
• ANALYZE, UNDERSTAND AND GENERATE HUMAN LANGUAGES JUST LIKE
HUMANS DO.
• APPLYING COMPUTATIONAL TECHNIQUES TO THE LANGUAGE DOMAIN.
• TO EXPLAIN LINGUISTIC THEORIES, AND TO USE THE THEORIES TO BUILD SYSTEMS
THAT CAN BE OF SOCIAL USE.
• STARTED OFF AS A BRANCH OF ARTIFICIAL INTELLIGENCE.
• BORROWS FROM LINGUISTICS, PSYCHOLINGUISTICS, COGNITIVE SCIENCE &
STATISTICS.
• MAKE COMPUTERS LEARN OUR LANGUAGE RATHER THAN WE LEARN THEIRS.
WHY STUDY NLP?

• A HALLMARK OF HUMAN INTELLIGENCE.


• TEXT IS THE LARGEST REPOSITORY OF HUMAN KNOWLEDGE
AND IS GROWING QUICKLY.
• EMAILS, NEWS ARTICLES, WEB PAGES, IM, SCIENTIFIC ARTICLES,
INSURANCE CLAIMS, CUSTOMER COMPLAINT LETTERS,
TRANSCRIPTS OF PHONE CALLS, TECHNICAL DOCUMENTS,
GOVERNMENT DOCUMENTS, PATENT PORTFOLIOS, COURT
DECISIONS, CONTRACTS, ……
• ARE WE READING ANY FASTER THAN BEFORE?
WHY ARE LANGUAGE TECHNOLOGIES NEEDED?
• MANY COMPANIES WOULD MAKE A LOT OF MONEY IF THEY
COULD USE COMPUTER PROGRAMS THAT UNDERSTOOD TEXT
OR SPEECH. JUST IMAGINE IF A COMPUTER COULD BE USED FOR:
• ANSWERING THE PHONE, AND REPLYING TO A QUESTION
• UNDERSTANDING THE TEXT ON A WEB PAGE TO DECIDE WHO IT
MIGHT BE OF INTEREST TO
• TRANSLATING A DAILY NEWSPAPER FROM JAPANESE TO ENGLISH
(AN ATTEMPT IS MADE TO DO THIS ALREADY)
• UNDERSTANDING TEXT IN JOURNALS / BOOKS AND BUILDING AN
EXPERT SYSTEM BASED ON THAT UNDERSTANDING
NLP APPLICATIONS
• QUESTION ANSWERING
• WHO IS THE FIRST TAIWANESE PRESIDENT?
• TEXT CATEGORIZATION/ROUTING
• E.G., CUSTOMER E-MAILS.
• TEXT MINING
• FIND EVERYTHING THAT INTERACTS WITH BRCA1.
• MACHINE (ASSISTED) TRANSLATION
• LANGUAGE TEACHING/LEARNING
• USAGE CHECKING
• SPELLING CORRECTION
• IS THAT JUST DICTIONARY LOOKUP?
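A sketch of why spelling correction is more than dictionary lookup: a misspelling is not in the dictionary at all, so candidates have to be ranked, for example by Levenshtein edit distance. The tiny dictionary below is purely illustrative:

```python
# Spelling correction by minimum edit distance rather than exact lookup.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (one rolling row)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete from a
                                     dp[j - 1] + 1,      # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def correct(word, dictionary):
    """Return the dictionary word closest to the misspelling."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = ["language", "processing", "machine", "learning"]
assert correct("lnaguage", words) == "language"
```

Real spelling correctors go further still, weighting edits by keyboard layout and word frequency, which is exactly why this is an NLP problem and not a lookup.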
APPLICATION AREAS
• TEXT-TO-SPEECH & SPEECH RECOGNITION
• NATURAL LANGUAGE DIALOGUE INTERFACES TO
DATABASES
• INFORMATION RETRIEVAL
• INFORMATION EXTRACTION
• DOCUMENT CLASSIFICATION
• DOCUMENT IMAGE ANALYSIS
• AUTOMATIC SUMMARIZATION
• TEXT PROOFREADING – SPELLING & GRAMMAR
• MACHINE TRANSLATION
• STORY UNDERSTANDING SYSTEMS
• PLAGIARISM DETECTION
• CAN YOU THINK OF ANYTHING ELSE?
BIG DEAL
• L = WORDS + RULES + EXCEPTIONS.
• AMBIGUITY AT ALL LEVELS.
• WE SPEAK DIFFERENT LANGUAGES.
• AND LANGUAGE IS A CULTURAL ENTITY.
• SO THEY ARE NOT EQUIVALENT.
• HIGHLY SYSTEMATIC BUT ALSO COMPLEX.
• KEEPS CHANGING: NEW WORDS, NEW RULES AND NEW EXCEPTIONS.
• SOURCES: ELECTRONIC TEXTS / PRINTED TEXTS / ACOUSTIC
SPEECH SIGNALS; THEY ARE NOISY.
• LANGUAGE LOOKS OBVIOUS TO US, BUT IT IS A BIG DEAL ☺!
Where does it fit in the CS taxonomy?
• Computers
  • Databases
  • Artificial Intelligence
    • Robotics
    • Natural Language Processing
      • Information Retrieval
      • Machine Translation
      • Language Analysis
        • Semantics
        • Parsing
    • Search
  • Algorithms
  • Networking
FUNDAMENTAL PROBLEMS IN NLP
• NLP REQUIRES THE SYSTEM TO ANALYZE THE SENTENCE
AND RETRIEVE THE CORRECT MEANING. AMBIGUITY OF
THE WORDS AND THEIR MEANING IN THEIR RESPECTIVE
CONTEXT IS THE MAJOR PROBLEM IN NLP.
• WORDS USED BY ONE SET OF PEOPLE COULD HAVE A
DIFFERENT MEANING FOR A DIFFERENT SET OF PEOPLE.
• THE FUNCTIONAL STRUCTURE OF THE SENTENCE ITSELF
CAN GIVE RISE TO AMBIGUITIES.
FUNDAMENTAL PROBLEMS IN NLP
• EXTENSIVE USE OF PRONOUNS INCREASES AMBIGUITY. CONSIDER THE
FOLLOWING: “RAVI WENT TO THE SUPERMARKET. HE FOUND
HIS FAVORITE BRAND OF COFFEE POWDER IN RACK FIVE. HE PAID FOR IT
AND LEFT.” THE QUESTION IS: TO WHAT OBJECT DOES THE PRONOUN “IT”
REFER: THE SUPERMARKET, THE COFFEE POWDER, OR RACK FIVE?
• CONJUNCTIONS USED IN NATURAL LANGUAGE TO AVOID REPETITION
OF PHRASES ALSO CAUSE NLP PROBLEMS. FOR EXAMPLE, CONSIDER THE
SENTENCE: “RAM AND SHYAM WENT TO A RESTAURANT. RAM
HAD A CUP OF COFFEE AND SHYAM TEA.” HERE THE TERM “HAD A CUP OF”
IS SUPPRESSED; THIS IS UNDERSTOOD BY HUMANS
BUT MIGHT BE DIFFICULT FOR THE MACHINE.
FUNDAMENTAL PROBLEMS IN NLP
• ELLIPSIS IS A MAJOR PROBLEM WHICH NLP SYSTEMS FIND
DIFFICULT TO MANAGE.
• IN ELLIPSIS, ONE DOES NOT STATE SOME WORDS EXPLICITLY,
BUT LEAVES IT TO THE AUDIENCE TO FILL THEM IN. AN
EXAMPLE OF THIS TYPE IS “WHAT IS THE LENGTH OF
RIVER GANGA? AND OF RIVER KAVERI?”
PROCESS SHOWING NLP
BRIEF HISTORY OF NLP
• 1940S –1950S: FOUNDATIONS
• DEVELOPMENT OF FORMAL LANGUAGE THEORY (CHOMSKY, BACKUS, NAUR, KLEENE)
• PROBABILITIES AND INFORMATION THEORY (SHANNON)
• 1957 – 1970S:
• USE OF FORMAL GRAMMARS AS BASIS FOR NATURAL LANGUAGE PROCESSING (CHOMSKY,
KAPLAN)
• USE OF LOGIC AND LOGIC BASED PROGRAMMING (MINSKY, WINOGRAD, COLMERAUER, KAY)
• 1970S – 1983:
• PROBABILISTIC METHODS FOR EARLY SPEECH RECOGNITION (JELINEK, MERCER)
• DISCOURSE MODELING (GROSZ, SIDNER, HOBBS)
• 1983 – 1993:
• FINITE STATE MODELS (MORPHOLOGY) (KAPLAN, KAY)
• 1993 – PRESENT:
• STRONG INTEGRATION OF DIFFERENT TECHNIQUES, DIFFERENT AREAS.
TERMINOLOGIES OF NLP
• SPEECH SYNTHESIS: SPEECH SYNTHESIS IS THE COMPUTER-
GENERATED SIMULATION OF HUMAN SPEECH. IT IS USED TO
TRANSLATE WRITTEN INFORMATION INTO AURAL INFORMATION
WHERE THAT IS MORE CONVENIENT, ESPECIALLY FOR MOBILE
APPLICATIONS SUCH AS VOICE-ENABLED E-MAIL AND UNIFIED
MESSAGING. IT IS ALSO USED TO ASSIST THE VISION-IMPAIRED
SO THAT, FOR EXAMPLE, THE CONTENTS OF A DISPLAY SCREEN
CAN BE AUTOMATICALLY READ ALOUD TO A BLIND USER.
SPEECH RECOGNITION
• IN COMPUTER SCIENCE AND ELECTRICAL ENGINEERING,
SPEECH RECOGNITION (SR) IS THE TRANSLATION OF
SPOKEN WORDS INTO TEXT. IT IS ALSO KNOWN AS
"AUTOMATIC SPEECH RECOGNITION" (ASR), "COMPUTER
SPEECH RECOGNITION", OR JUST "SPEECH TO TEXT" (STT).
NATURAL LANGUAGE UNDERSTANDING

• NATURAL LANGUAGE UNDERSTANDING IS A SUBTOPIC OF NATURAL


LANGUAGE PROCESSING IN ARTIFICIAL INTELLIGENCE THAT DEALS WITH
MACHINE READING COMPREHENSION.
• THE PROCESS OF DISASSEMBLING AND PARSING INPUT IS MORE COMPLEX
THAN THE REVERSE PROCESS OF ASSEMBLING OUTPUT IN NATURAL
LANGUAGE GENERATION BECAUSE OF THE OCCURRENCE OF UNKNOWN
AND UNEXPECTED FEATURES IN THE INPUT AND THE NEED TO DETERMINE
THE APPROPRIATE SYNTACTIC AND SEMANTIC SCHEMES TO APPLY TO IT,
FACTORS WHICH ARE PRE-DETERMINED WHEN OUTPUTTING LANGUAGE.
THE STEPS IN NLP
• DISCOURSE
• PRAGMATICS
• SEMANTICS
• SYNTAX
• MORPHOLOGY
(LISTED FROM THE HIGHEST LEVEL OF ANALYSIS DOWN TO THE LOWEST)
THE STEPS IN NLP (CONT.)
• MORPHOLOGY: CONCERNS THE WAY WORDS ARE BUILT UP FROM SMALLER
MEANING-BEARING UNITS (E.G., COMES = COME + S).
• SYNTAX: CONCERNS HOW WORDS ARE PUT TOGETHER TO FORM CORRECT
SENTENCES AND WHAT STRUCTURAL ROLE EACH WORD HAS.
• SEMANTICS: CONCERNS WHAT WORDS MEAN AND HOW THESE MEANINGS
COMBINE IN SENTENCES TO FORM SENTENCE MEANINGS.
• PRAGMATICS: CONCERNS HOW SENTENCES ARE USED IN DIFFERENT
SITUATIONS AND HOW USE AFFECTS THE INTERPRETATION OF THE
SENTENCE.
• DISCOURSE: CONCERNS HOW THE IMMEDIATELY PRECEDING SENTENCES
AFFECT THE INTERPRETATION OF THE NEXT SENTENCE.
NATURAL LANGUAGE GENERATION
NATURAL LANGUAGE UNDERSTANDING
SPOKEN DIALOGUE SYSTEM
User → Speech Recognition → Semantic Interpretation → Discourse Interpretation
→ Dialogue Management → Response Generation → Speech Synthesis → User
PARTS OF THE SPOKEN DIALOGUE
SYSTEM
• SIGNAL PROCESSING:
• CONVERT THE AUDIO WAVE INTO A SEQUENCE OF FEATURE VECTORS.
• SPEECH RECOGNITION:
• DECODE THE SEQUENCE OF FEATURE VECTORS INTO A SEQUENCE OF
WORDS.
• SEMANTIC INTERPRETATION:
• DETERMINE THE MEANING OF THE WORDS.
• DISCOURSE INTERPRETATION:
• UNDERSTAND WHAT THE USER INTENDS BY INTERPRETING UTTERANCES IN
CONTEXT.
• DIALOGUE MANAGEMENT:
• DETERMINE SYSTEM GOALS IN RESPONSE TO USER UTTERANCES BASED ON
USER INTENTION.
• SPEECH SYNTHESIS:
• GENERATE SYNTHETIC SPEECH AS A RESPONSE.
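The pipeline above can be sketched as composed stages. Every function body here is a stub invented for illustration, standing in for a real component (a real recognizer decodes audio, a real interpreter does far more than a keyword check):

```python
# Spoken dialogue pipeline sketch: each stage consumes the previous
# stage's output, following the component names on the slide.

def speech_recognition(audio):        # audio wave -> words (stubbed)
    return audio["transcript"]

def semantic_interpretation(words):   # words -> meaning
    return {"intent": "ask_time"} if "time" in words else {"intent": "unknown"}

def dialogue_management(meaning):     # meaning -> system goal
    return "tell_time" if meaning["intent"] == "ask_time" else "clarify"

def response_generation(goal):        # goal -> response words
    return "It is 3 pm." if goal == "tell_time" else "Could you rephrase?"

def dialogue_turn(audio):
    words = speech_recognition(audio)
    meaning = semantic_interpretation(words)
    goal = dialogue_management(meaning)
    return response_generation(goal)  # speech synthesis would then voice this

reply = dialogue_turn({"transcript": "what time is it"})
```

The point of the sketch is the shape of the system: each slide bullet corresponds to one function boundary in the pipeline.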
LEVELS OF LANGUAGE ANALYSIS

• PHONOLOGY
• MORPHOLOGY
• SYNTAX
• SEMANTICS
• PRAGMATICS
• DISCOURSE
PHONOLOGY
• THIS LEVEL DEALS WITH THE INTERPRETATION OF SPEECH SOUNDS WITHIN AND
ACROSS WORDS. THERE ARE THREE TYPES OF RULES USED IN PHONOLOGICAL
ANALYSIS:
• PHONETIC RULES - FOR SOUNDS WITHIN WORDS
• PHONEMIC RULES - FOR VARIATIONS OF PRONUNCIATION WHEN WORDS ARE SPOKEN
TOGETHER
• PROSODIC RULES - FOR FLUCTUATIONS IN STRESS AND INTONATION ACROSS A SENTENCE
SPEECH – SO IS IT DIFFICULT ?
• “IT'S VERY HARD TO WRECK A NICE BEACH ”
• PRONUNCIATION OF DIFFERENT SPEAKERS
• PACE OF SPEECH
• SPEECH AMBIGUITY – HOMONYMS
• I ATE EIGHT CAKES
• THAT BAND IS BANNED
• I WENT TO THE MALL NEAR BY TO BUY SOME FOOD
• THE FINNISH WERE THE FIRST ONES TO FINISH
• I KNOW NO JAMES BOND.
MORPHOLOGY: WHAT IS A WORD?
• MORPHOLOGY IS ALL ABOUT THE WORDS.
• MAKE MORE WORDS FROM LESS ☺.
• STRUCTURES AND PATTERNS IN WORDS
• ANALYZES HOW WORDS ARE FORMED FROM MINIMAL UNITS OF
MEANING, OR MORPHEMES, E.G., DOGS= DOG+S.
• WORDS ARE A SEQUENCE OF MORPHEMES.
• MORPHEME – SMALLEST MEANINGFUL UNIT IN A WORD. FREE & BOUND.
• INFLECTIONAL MORPHOLOGY – SAME PART OF SPEECH
• BUSES = BUS + ES
• CARRIED = CARRY + ED
• DERIVATIONAL MORPHOLOGY – CHANGE POS.
• DESTRUCT + ION = DESTRUCTION (NOUN)
• BEAUTY + FUL = BEAUTIFUL (ADJECTIVE)
• AFFIXES – PREFIXES, SUFFIXES & INFIXES
• RULES GOVERN THE FUSION.
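The inflectional examples above (BUSES = BUS + ES, CARRIED = CARRY + ED) can be sketched as suffix-stripping rules. The rule table is a deliberately tiny illustration, the segmentation of "carried" differs slightly from the slide, and real morphological analysers handle far more cases:

```python
# Minimal inflectional analysis: strip a known suffix and undo the
# accompanying spelling change (e.g. "carr" -> "carry").

RULES = [("ies", "y"), ("es", ""), ("s", ""), ("ied", "y"), ("ed", "")]

def analyze(word):
    """Return (stem, suffix) for the first matching rule, else (word, '')."""
    for suffix, repl in RULES:
        if word.endswith(suffix):
            # Strip the suffix and restore the stem's spelling.
            return word[: -len(suffix)] + repl, suffix
    return word, ""

assert analyze("dogs") == ("dog", "s")
assert analyze("buses") == ("bus", "es")
assert analyze("carried") == ("carry", "ied")
```

Note how the rules govern the fusion at the morpheme boundary, as the slide says: the same "-s" morpheme surfaces as "s" in "dogs" but "es" in "buses".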
LEXICAL
• AT THIS LEVEL, NLP SYSTEMS INTERPRET THE MEANING OF
INDIVIDUAL WORDS. SEVERAL TYPES OF PROCESSING
CONTRIBUTE TO WORD-LEVEL UNDERSTANDING.
SYNTACTIC
• THIS LEVEL FOCUSES ON ANALYZING THE WORDS IN A SENTENCE SO AS TO
UNCOVER THE GRAMMATICAL STRUCTURE OF THE SENTENCE. THIS
REQUIRES BOTH A GRAMMAR AND A PARSER. THE OUTPUT OF THIS LEVEL OF
PROCESSING IS A REPRESENTATION OF THE SENTENCE THAT REVEALS THE
STRUCTURAL DEPENDENCY RELATIONSHIPS BETWEEN THE WORDS.
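As an illustration of the kind of structure this level produces, here is a toy recursive-descent parser for an invented two-rule grammar. The grammar and lexicon are assumptions for the sketch, not a claim about how real parsers work:

```python
# Toy grammar and parser: S -> NP VP ; NP -> Det N ; VP -> V NP.
# The parser returns a nested-tuple parse tree over the words.

LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def parse_np(words):
    assert LEXICON[words[0]] == "Det" and LEXICON[words[1]] == "N"
    return ("NP", ("Det", words[0]), ("N", words[1])), words[2:]

def parse_vp(words):
    assert LEXICON[words[0]] == "V"
    np, rest = parse_np(words[1:])
    return ("VP", ("V", words[0]), np), rest

def parse_sentence(words):
    np, rest = parse_vp_input = parse_np(words)
    vp, rest = parse_vp(rest)
    assert rest == [], "trailing words not covered by the grammar"
    return ("S", np, vp)

tree = parse_sentence("the dog chased the cat".split())
```

The resulting tree makes the structural roles explicit: "the dog" is the subject NP, "chased the cat" the VP, which is exactly the "representation that reveals structure" the slide describes.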


SEMANTIC
• THIS IS THE LEVEL AT WHICH THE POSSIBLE MEANINGS OF A SENTENCE ARE
DETERMINED BY FOCUSING ON THE INTERACTIONS AMONG
WORD-LEVEL MEANINGS IN THE SENTENCE. THIS LEVEL OF PROCESSING
CAN INCLUDE THE SEMANTIC DISAMBIGUATION OF WORDS WITH
MULTIPLE SENSES, IN A WAY ANALOGOUS TO HOW SYNTACTIC
DISAMBIGUATION OF WORDS THAT CAN FUNCTION AS MULTIPLE PARTS OF
SPEECH IS ACCOMPLISHED AT THE SYNTACTIC LEVEL.

DISCOURSE
• THE DISCOURSE LEVEL FOCUSES ON THE PROPERTIES OF THE
TEXT AS A WHOLE THAT CONVEY MEANING BY MAKING
CONNECTIONS BETWEEN ITS COMPONENT SENTENCES.
PRAGMATIC
• THIS LEVEL IS CONCERNED WITH THE PURPOSEFUL USE OF LANGUAGE IN
SITUATIONS AND UTILIZES CONTEXT OVER AND ABOVE THE CONTENTS OF
THE TEXT FOR UNDERSTANDING. THE GOAL IS TO EXPLAIN HOW EXTRA
MEANING IS READ INTO TEXTS WITHOUT ACTUALLY BEING ENCODED IN
THEM. PRAGMATICS IS A SUBFIELD OF LINGUISTICS AND SEMIOTICS THAT
STUDIES THE WAYS IN WHICH CONTEXT CONTRIBUTES TO MEANING.
PRAGMATICS ENCOMPASSES SPEECH ACT THEORY, CONVERSATIONAL
IMPLICATURE, TALK IN INTERACTION AND OTHER APPROACHES TO LANGUAGE
BEHAVIOR IN PHILOSOPHY, SOCIOLOGY, LINGUISTICS AND ANTHROPOLOGY.
PRAGMATIC

• UNLIKE SEMANTICS, WHICH EXAMINES MEANING THAT IS CONVENTIONAL OR


"CODED" IN A GIVEN LANGUAGE, PRAGMATICS STUDIES HOW THE
TRANSMISSION OF MEANING DEPENDS NOT ONLY ON STRUCTURAL AND
LINGUISTIC KNOWLEDGE (E.G., GRAMMAR, LEXICON, ETC.) OF THE SPEAKER
AND LISTENER, BUT ALSO ON THE CONTEXT OF THE UTTERANCE, ANY PRE-
EXISTING KNOWLEDGE ABOUT THOSE INVOLVED, THE INFERRED INTENT OF
THE SPEAKER, AND OTHER FACTORS. IN THIS RESPECT, PRAGMATICS EXPLAINS
HOW LANGUAGE USERS ARE ABLE TO OVERCOME APPARENT AMBIGUITY, SINCE
MEANING RELIES ON THE MANNER, PLACE, TIME ETC. OF AN UTTERANCE.
DEEP LEARNING
DEEP LEARNING IS A MACHINE LEARNING TECHNIQUE THAT TEACHES COMPUTERS
TO DO WHAT COMES NATURALLY TO HUMANS: LEARN BY EXAMPLE. DEEP
LEARNING IS A KEY TECHNOLOGY BEHIND DRIVERLESS CARS, ENABLING THEM TO
RECOGNIZE A STOP SIGN, OR TO DISTINGUISH A PEDESTRIAN FROM A LAMPPOST. IT
IS THE KEY TO VOICE CONTROL IN CONSUMER DEVICES LIKE PHONES, TABLETS,
TVS, AND HANDS-FREE SPEAKERS. DEEP LEARNING IS GETTING LOTS OF
ATTENTION LATELY, AND FOR GOOD REASON: IT IS ACHIEVING RESULTS THAT WERE
NOT POSSIBLE BEFORE.
DEEP LEARNING

IN DEEP LEARNING, A COMPUTER MODEL LEARNS TO PERFORM CLASSIFICATION

TASKS DIRECTLY FROM IMAGES, TEXT, OR SOUND. DEEP LEARNING MODELS CAN

ACHIEVE STATE-OF-THE-ART ACCURACY, SOMETIMES EXCEEDING HUMAN-LEVEL

PERFORMANCE. MODELS ARE TRAINED BY USING A LARGE SET OF LABELLED DATA

AND NEURAL NETWORK ARCHITECTURES THAT CONTAIN MANY LAYERS.


WHY DEEP LEARNING MATTERS
WHILE DEEP LEARNING WAS FIRST THEORIZED IN THE 1980S, THERE ARE TWO
MAIN REASONS IT HAS ONLY RECENTLY BECOME USEFUL:

• DEEP LEARNING REQUIRES LARGE AMOUNTS OF LABELED DATA. FOR EXAMPLE,


DRIVERLESS CAR DEVELOPMENT REQUIRES MILLIONS OF IMAGES AND
THOUSANDS OF HOURS OF VIDEO.

• DEEP LEARNING REQUIRES SUBSTANTIAL COMPUTING POWER. HIGH-


PERFORMANCE GPUS HAVE A PARALLEL ARCHITECTURE THAT IS EFFICIENT FOR
DEEP LEARNING. WHEN COMBINED WITH CLUSTERS OR CLOUD COMPUTING,
THIS ENABLES DEVELOPMENT TEAMS TO REDUCE TRAINING TIME FOR A DEEP
LEARNING NETWORK FROM WEEKS TO HOURS OR LESS.
EXAMPLES OF DEEP LEARNING AT WORK
DEEP LEARNING APPLICATIONS ARE USED IN INDUSTRIES FROM AUTOMATED DRIVING TO
MEDICAL DEVICES.
• AUTOMATED DRIVING: AUTOMOTIVE RESEARCHERS ARE USING DEEP LEARNING TO
AUTOMATICALLY DETECT OBJECTS SUCH AS STOP SIGNS AND TRAFFIC LIGHTS. IN ADDITION,
DEEP LEARNING IS USED TO DETECT PEDESTRIANS, WHICH HELPS DECREASE ACCIDENTS.
• AEROSPACE AND DEFENCE: DEEP LEARNING IS USED TO IDENTIFY OBJECTS FROM SATELLITES
THAT LOCATE AREAS OF INTEREST, AND IDENTIFY SAFE OR UNSAFE ZONES FOR TROOPS.
• MEDICAL RESEARCH: CANCER RESEARCHERS ARE USING DEEP LEARNING TO
AUTOMATICALLY DETECT CANCER CELLS. TEAMS AT UCLA BUILT AN ADVANCED MICROSCOPE
THAT YIELDS A HIGH-DIMENSIONAL DATA SET USED TO TRAIN A DEEP LEARNING APPLICATION
TO ACCURATELY IDENTIFY CANCER CELLS.
EXAMPLES OF DEEP LEARNING AT WORK

• INDUSTRIAL AUTOMATION: DEEP LEARNING IS HELPING TO IMPROVE WORKER SAFETY

AROUND HEAVY MACHINERY BY AUTOMATICALLY DETECTING WHEN PEOPLE OR

OBJECTS ARE WITHIN AN UNSAFE DISTANCE OF MACHINES.

• ELECTRONICS: DEEP LEARNING IS BEING USED IN AUTOMATED HEARING AND SPEECH

TRANSLATION. FOR EXAMPLE, HOME ASSISTANCE DEVICES THAT RESPOND TO YOUR

VOICE AND KNOW YOUR PREFERENCES ARE POWERED BY DEEP LEARNING

APPLICATIONS.
HOW DEEP LEARNING WORKS

• MOST DEEP LEARNING METHODS USE NEURAL NETWORK ARCHITECTURES, WHICH IS

WHY DEEP LEARNING MODELS ARE OFTEN REFERRED TO AS DEEP NEURAL NETWORKS.

• THE TERM “DEEP” USUALLY REFERS TO THE NUMBER OF HIDDEN LAYERS IN THE NEURAL

NETWORK. TRADITIONAL NEURAL NETWORKS ONLY CONTAIN 2-3 HIDDEN LAYERS,

WHILE DEEP NETWORKS CAN HAVE AS MANY AS 150.


HOW DEEP LEARNING WORKS
DEEP LEARNING MODELS ARE TRAINED BY USING LARGE SETS OF LABELLED DATA
AND NEURAL NETWORK ARCHITECTURES THAT LEARN FEATURES DIRECTLY FROM
THE DATA WITHOUT THE NEED FOR MANUAL FEATURE EXTRACTION.
HOW DEEP LEARNING WORKS
• ONE OF THE MOST POPULAR TYPES OF DEEP NEURAL NETWORKS IS KNOWN
AS CONVOLUTIONAL NEURAL NETWORKS (CNN OR CONVNET). A CNN
CONVOLVES LEARNED FEATURES WITH INPUT DATA, AND USES 2D
CONVOLUTIONAL LAYERS, MAKING THIS ARCHITECTURE WELL SUITED TO
PROCESSING 2D DATA, SUCH AS IMAGES.
• CNNS ELIMINATE THE NEED FOR MANUAL FEATURE EXTRACTION, SO YOU DO
NOT NEED TO IDENTIFY FEATURES USED TO CLASSIFY IMAGES. THE CNN WORKS
BY EXTRACTING FEATURES DIRECTLY FROM IMAGES. THE RELEVANT FEATURES
ARE NOT PRETRAINED; THEY ARE LEARNED WHILE THE NETWORK TRAINS ON A
COLLECTION OF IMAGES. THIS AUTOMATED FEATURE EXTRACTION MAKES DEEP
LEARNING MODELS HIGHLY ACCURATE FOR COMPUTER VISION TASKS SUCH AS
OBJECT CLASSIFICATION.
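The convolution operation described above can be sketched directly in NumPy: slide a small filter over an image and record a response at every position. In a CNN the filter values are learned during training; here the filter is a hand-written edge detector and the 4x4 "image" is invented for illustration:

```python
# 2-D convolution sketch (technically cross-correlation, as in most
# deep learning libraries): sum of elementwise products at each position.

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)   # dark left, bright right
edge = np.array([[-1, 1],
                 [-1, 1]], dtype=float)         # responds to left-to-right edges
response = conv2d(image, edge)
```

The response is large exactly where the dark-to-bright boundary sits, which is the sense in which a CNN "extracts features directly from images".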
WHAT'S THE DIFFERENCE BETWEEN MACHINE
LEARNING AND DEEP LEARNING?
DEEP LEARNING IS A SPECIALIZED FORM OF MACHINE LEARNING. A MACHINE
LEARNING WORKFLOW STARTS WITH RELEVANT FEATURES BEING MANUALLY
EXTRACTED FROM IMAGES. THE FEATURES ARE THEN USED TO CREATE A MODEL
THAT CATEGORIZES THE OBJECTS IN THE IMAGE. WITH A DEEP LEARNING
WORKFLOW, RELEVANT FEATURES ARE AUTOMATICALLY EXTRACTED FROM
IMAGES. IN ADDITION, DEEP LEARNING PERFORMS “END-TO-END LEARNING” –
WHERE A NETWORK IS GIVEN RAW DATA AND A TASK TO PERFORM, SUCH AS
CLASSIFICATION, AND IT LEARNS HOW TO DO THIS AUTOMATICALLY.
WHAT'S THE DIFFERENCE BETWEEN MACHINE
LEARNING AND DEEP LEARNING?
ANOTHER KEY DIFFERENCE IS DEEP LEARNING ALGORITHMS SCALE WITH DATA,
WHEREAS SHALLOW LEARNING CONVERGES. SHALLOW LEARNING REFERS TO
MACHINE LEARNING METHODS THAT PLATEAU AT A CERTAIN LEVEL OF
PERFORMANCE WHEN YOU ADD MORE EXAMPLES AND TRAINING DATA TO THE
NETWORK.
