Week 1 Introduction To The Machine Learning Course
The bulk of the work in the early days of computer science was focused on low-level
matters: applying the new computer tools to very standardized tasks.
I think the answer is that the people who were instrumental in the development of
computer science had, from the beginning, the visions and ambitions now
articulated in the agenda for artificial intelligence.
Evidently they expected that, after some decades of preliminary work, the original
ambitions for the use of computers would be put high on the agenda again.
Defining Statement for Artificial Intelligence
"We think that a significant advance can be made in one or more of these problems if a carefully selected
group of scientists work on it together for a summer."
Julian Bigelow, chief engineer for the von Neumann computer at Princeton in 1946
Nathaniel Rochester, author of the first assembler for the first commercial computer
In 1955 Allen Newell, Herbert A. Simon and Cliff Shaw created the Logic Theorist, the first program
deliberately engineered to mimic the problem-solving skills of a human being. It has been called "the first
Artificial Intelligence program". It could prove theorems in Whitehead and Russell's Principia Mathematica, and it
introduced key artificial intelligence techniques such as list processing, means-ends analysis and heuristic search.
In 1958 John McCarthy created the first version of Lisp, based on the lambda calculus and using list processing.
It is the second-oldest high-level programming language still in widespread use today; only Fortran is older, by one year.
Oliver Selfridge created the Pandemonium architecture in 1959, one of the first computational
models of pattern recognition for images.
In 1959 Simon, Newell and Shaw created the General Problem Solver (GPS), a computer program intended to
work as a universal problem-solving machine. Any problem that can be expressed as a set of well-formed
formulas (WFFs) or Horn clauses can, in principle, be solved by GPS.
Early Work in Machine Learning (1950–1960)
Arthur Samuel coined the term Machine Learning in 1959. Samuel joined IBM's Poughkeepsie Laboratory in
1956 and pioneered some of the earliest machine learning work, in the context of programs that played checkers.
Samuel, Arthur L. (1959). "Some Studies in Machine Learning Using the Game of Checkers". IBM Journal of Research and Development.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.368.2254&rep=rep1&type=pdf
McCulloch and Pitts introduced neural networks as a model of computation as early as 1943.
Warren McCulloch and Walter Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity". In: Bulletin of Mathematical Biophysics, Vol. 5, 1943.
http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf
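The core idea of the McCulloch-Pitts model can be sketched as a simple threshold unit: the neuron fires (outputs 1) when enough excitatory inputs are active and no inhibitory input is active. The sketch below is a minimal modern illustration of that idea, not code from the 1943 paper; the function name and parameters are illustrative.

```python
def mp_neuron(inputs, threshold, inhibitory=()):
    """McCulloch-Pitts unit: binary inputs, fires (returns 1) iff no
    inhibitory input is active and the number of active excitatory
    inputs reaches the threshold."""
    if any(inputs[i] for i in inhibitory):
        return 0
    excitatory = [x for i, x in enumerate(inputs) if i not in inhibitory]
    return 1 if sum(excitatory) >= threshold else 0

# Basic logic gates expressed as threshold units, in the spirit of the paper:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], threshold=1)
NOT = lambda a:    mp_neuron([a], threshold=0, inhibitory=(0,))
```

Because such units can realize Boolean logic, networks of them form a model of computation, which is what made the 1943 paper foundational.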
Marvin Minsky and Dean Edmonds built SNARC in 1951, the first neural network machine, able to learn.
John Holland introduced Genetic Algorithms in 1960, based on Darwin's theory of evolution.
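The evolutionary idea behind genetic algorithms (selection of the fittest, crossover of parents, random mutation) can be sketched in a few lines. This is a toy modern illustration, not Holland's original formulation; the fitness function, rates and population sizes are arbitrary choices for the example.

```python
import random

def genetic_algorithm(fitness, length=10, pop_size=20, generations=50,
                      mutation_rate=0.1, seed=0):
    """Toy GA over bit strings: truncation selection, one-point
    crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):             # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "One-max" problem: evolve a bit string with as many 1s as possible.
best = genetic_algorithm(fitness=sum)
```

The loop mirrors natural selection: fitter candidates reproduce, their offspring mix parental traits, and mutation keeps the population exploring.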
Ray Solomonoff published work on non-semantic machine learning in 1956; he termed his first contribution an
"Inductive Inference Machine". In 1960 he also published his first work on algorithmic probability.
http://world.std.com/~rjs/An%20Inductive%20Inference%20Machine1957.pdf
NPTEL