CS 440/ECE 448
Fall 2020
Margaret Fleck
Introduction

Layout of the field


Core reasoning techniques

discrete/logic-based
statistical (e.g. Bayesian)
neural nets
engineering (e.g. signal processing, 3D geometry, optics, kinematics, ...)

Application areas

"Core AI" (basic general reasoning)


Mathematical applications (e.g. theorem proving)
Computer Vision: describing what's in a digitized picture
Natural Language/Speech: understanding text or speech input, also generating language that is fluent and
keeps track of context.
Robotics: planning high-level sequences of actions down to mechanical design
Games (e.g. Chess, Poker)
Other (computational biology, recognition of music and other non-speech sounds, smart agriculture, ...)

Mathematical applications have largely split off into their own field, separate from AI. The same will probably happen soon to game playing. However, both application areas played a large role in early AI.

We're going to see popular current techniques (e.g. neural nets), but also selected historical techniques that may still be relevant.

Intelligent agents
Viewed in very general terms, the goal of AI is to build intelligent agents. An agent has three basic components (tied together in the sketch below):

sensory inputs (cameras, microphones, touch, keyboard input)
output actions (move left, say "pumpernickel", place a call to Alistair)
goal: what actions are reasonable given specific inputs
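
To make this concrete, here is a minimal sketch of the sense-decide-act loop in Python, using an invented two-cell vacuum world loosely inspired by the Roomba below. The environment, percepts, and actions are all made up for this illustration.

    # An invented two-cell vacuum world: the agent senses its location
    # and whether that cell is dirty, then chooses an action.
    class VacuumWorld:
        def __init__(self):
            self.dirty = {"A": True, "B": True}   # both cells start dirty
            self.location = "A"

        def percept(self):
            # Sensory input: (where am I, is this cell dirty?)
            return self.location, self.dirty[self.location]

        def apply(self, action):
            # Output actions change the world.
            if action == "suck":
                self.dirty[self.location] = False
            else:
                self.location = "B" if action == "right" else "A"

    def reflex_agent(percept):
        """Goal: clean both cells. Maps each percept directly to an action."""
        location, is_dirty = percept
        if is_dirty:
            return "suck"
        return "right" if location == "A" else "left"

    world = VacuumWorld()
    for _ in range(4):                            # the sense-decide-act loop
        world.apply(reflex_agent(world.percept()))
    print(world.dirty)                            # {'A': False, 'B': False}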

Some agents are extremely simple, e.g. a Roomba:

Roomba (from iRobot)

We often imagine intelligent agents that have a sophisticated physical presence, like a human or animal. Current
robots are starting to move like people/animals but have only modest high-level planning ability.

Boston Dynamics robot (video)

Other agents may live mostly or entirely inside the computer, communicating via text or simple media (e.g.
voice commands). This would be true for a chess playing program or an intelligent assistant to book airplane
tickets. A well-known fictional example is the HAL 9000 system from the movie "2001: A Space Odyssey"
which appears mostly as a voice plus a camera lens (below).

(from Wikipedia) (dialog) (video)

When designing an agent, we need to consider

what environment is it intended to operate in?
exactly what inputs and actions does it have?
what do we mean by good/bad performance and how do we quantify it?

Many AI systems work well only because they operate in limited environments. A famous sheep-shearing robot from the 1980's depended on the sheep being tied down to limit their motion. Face identification systems at security checkpoints may depend on the fact that people consistently look straight ahead when herded through the gates.

Simple agents like the Roomba may have very direct connections between sensing and action, with very fast response and almost nothing that could qualify as intelligence. Smarter agents may be able to spend a lot of time thinking, e.g. as in playing chess. They may maintain an explicit model of the world and the set of alternative actions they are choosing between, as in the sketch below.
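
Here is a hedged sketch of that deliberative style, in contrast to the reflex agent above. The simulate and score functions are hypothetical placeholders standing in for a real world model and evaluation function.

    # A deliberative agent imagines the result of each candidate action
    # using an explicit world model, then picks the best-scoring one.
    def deliberate(world_model, actions, simulate, score):
        """Return the action whose predicted outcome scores highest."""
        best_action, best_value = None, float("-inf")
        for action in actions:
            predicted = simulate(world_model, action)  # imagined next state
            value = score(predicted)                   # how desirable is it?
            if value > best_value:
                best_action, best_value = action, value
        return best_action

    # Toy usage: the "world" is a number, actions add to it, and we
    # prefer states close to a target of 10.
    best = deliberate(
        world_model=7,
        actions=[-1, 2, 5],
        simulate=lambda state, a: state + a,
        score=lambda state: -abs(state - 10),
    )
    print(best)  # 2, since 7 + 2 = 9 is closest to 10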

AI researchers often plan to build a fully-featured end-to-end system, e.g. a robot child. However, most research
projects and published papers consider only one small component of the task, e.g.

identify the objects in a picture
transcribe this audio clip into written English
plan motion for a robot arm to fit two parts together

History: major trends


Over the decades, AI algorithms have gradually improved. Two main reasons:

Improvements in scientific theories
Increase in computing power and amount of training data

The second has been the more important. Some approaches popular today were originally proposed in the 1940's but could not be made to work without sufficient computing power.

AI has cycled between two general approaches to algorithm design:

"Smart" models, discrete and logic-based.


"Blind" statistical algorithms

Most likely neither extreme position is correct.

AI results have consistently been overly hyped, sometimes creating major tension with grant agencies.

History: details
Early 20th century

No clear notion of a computer (though interesting philosophy)

1930's and 1940's

First computers (Atanasoff 1941, ENIAC 1943-44), able to do almost nothing.

ENIAC, mid 1940's (from Wikipedia)

On-paper models that are insightful but can't be implemented. Some of these foreshadow approaches that work
now.

Alan Turing, 1930's and 1940's
Zellig Harris, 1940's onwards
McCulloch and Pitts (early neural nets)

1950-1970's
Computers with a functioning CPU but (by today's standards) very slow, with absurdly little memory. E.g. the IBM 1130 (1965-72) came with at most tens of kilobytes of main memory. The VAX 11/780 had many terminals sharing one CPU.

IBM 1130, late 1960's (from Wikipedia)

Graphics mostly used pen plotters, or perhaps graphics paper that had to be stored in a fridge. Mailers avoided certain gateways because they were unreliable. Prototypes appeared of great tools like the mouse, the GUI, Unix, and refresh-screen graphics.

AI algorithms are very stripped down and discrete/symbolic. Chomsky's Syntactic Structures (1957) proposed
very tight constraints on linguistic models in an attempt to make them learnable from tiny datasets.

1980's-1990's
Computers now look like tower PC's, eventually also laptops. Memory and disk space are still tight. Horrible
peripherals like binary (black/white) screens and monochrome surveillance cameras. HTTP servers and internet
browsers appear in the 1990's:

Lisp Machine, 1980's (from Wikipedia)

NCSA Mosaic, 1990's (from Wikipedia)

AI starts to use more big-data algorithms, especially at sites with strong resources. First Linguistic Data Consortium (LDC) datasets appear: Switchboard-1 and TIMIT in 1993. Fred Jelinek starts to get speech recognition working usefully using statistical n-gram models (sketched below). Linguistic theories were still very discrete/logic-based. Jelinek famously said in the mid/late 1980's: "Every time I fire a linguist, the performance of our speech recognition system goes up."
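
As a rough illustration of the n-gram idea (a minimal sketch, not Jelinek's actual system), here is a bigram model: estimate P(word | previous word) by counting adjacent word pairs in a corpus. The toy corpus is invented.

    # Bigram language model: P(word | previous word) from pair counts.
    from collections import Counter, defaultdict

    def train_bigrams(corpus):
        counts = defaultdict(Counter)
        for sentence in corpus:
            tokens = ["<s>"] + sentence.split() + ["</s>"]
            for prev, word in zip(tokens, tokens[1:]):
                counts[prev][word] += 1
        return counts

    def prob(counts, prev, word):
        total = sum(counts[prev].values())
        return counts[prev][word] / total if total else 0.0

    model = train_bigrams(["the cat sat", "the cat ran"])
    print(prob(model, "the", "cat"))  # 1.0: "cat" always follows "the" here
    print(prob(model, "cat", "sat"))  # 0.5: "sat" and "ran" equally likely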

This century

Modern computers and internet.

Algorithms achieve strong performance by gobbling vast quantities of training data, e.g. Google's algorithms use the entire web to simulate a toddler. Neural nets "learn to the test," i.e. learn what's in the dataset, but not always with the constraints that we humans assume implicitly. So we see strange, unexpected failures ("adversarial examples", sketched below). There is speculation that learning might need to be guided by some structural constraints.
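
For concreteness, here is a hedged sketch of one standard way such adversarial examples are constructed (the fast gradient sign method). It assumes a trained PyTorch classifier and a batched, correctly-classified input with its true class label; none of these appear in the original notes.

    # Fast gradient sign method (FGSM) sketch; `model`, `image`, and
    # `label` are assumed inputs, not defined in these notes.
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel a tiny step in the direction that increases
        # the loss: imperceptible to people, but it can flip the output.
        return (image + epsilon * image.grad.sign()).detach()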

What still doesn't work

("Three men drinking tea" by a Microsoft AI program, from New Scientist,


2019)

(from CNN, 2019)

Boston Dynamics robot falling down (from The Guardian, 2017)
