
Computational learning theory (or just learning theory) is a subfield of
artificial intelligence devoted to studying the design and analysis of
machine learning algorithms.
Theoretical results in machine learning mainly deal with a
type of inductive learning called supervised learning.
In supervised learning, an algorithm is given samples that
are labeled in some useful way.
For example, the samples might be descriptions of
mushrooms, and the labels could be whether or not the
mushrooms are edible. The algorithm takes these previously
labeled samples and uses them to induce a classifier. This
classifier is a function that assigns labels to samples
including the samples that have never been previously seen
by the algorithm. The goal of the supervised learning
algorithm is to optimize some measure of performance such
as minimizing the number of mistakes made on new
samples.
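The mushroom example above can be sketched in code. This is a minimal, hypothetical illustration, not the algorithm the slides have in mind: the dataset is made up, and a simple 1-nearest-neighbour rule stands in for the induced classifier.

```python
# Supervised learning sketch: induce a classifier from labelled samples,
# then apply it to a sample the algorithm has never seen.

def nearest_neighbour(train, sample):
    """Label `sample` with the label of the closest training sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda fl: dist(fl[0], sample))
    return label

# (features, label) pairs; the features (e.g. cap size, stem length)
# and labels are illustrative, not real mushroom data.
train = [
    ((2.0, 1.0), "edible"),
    ((2.2, 1.1), "edible"),
    ((7.0, 4.0), "poisonous"),
    ((6.5, 3.8), "poisonous"),
]

print(nearest_neighbour(train, (2.1, 0.9)))  # an unseen sample -> "edible"
```

The classifier is just a function from samples to labels, exactly as described: it assigns labels even to samples it was never trained on.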
In addition to performance bounds, computational learning theory
studies the time complexity and feasibility of learning. In
computational learning theory, a computation is considered feasible
if it can be done in polynomial time. There are two kinds of time
complexity results:
An algorithm is said to be of polynomial time if its running time is upper bounded by a
polynomial expression in the size of the input for the algorithm, i.e., T(n) = O(n^k) for some
constant k.
Positive results: showing that a certain class of functions is
learnable in polynomial time.
Negative results: showing that certain classes cannot be learned in
polynomial time.
Negative results often rely on commonly believed, but as yet unproven,
assumptions, such as:
Computational complexity: P ≠ NP (the P versus NP problem);
Cryptography: one-way functions exist.
The question is whether, for all problems for which an algorithm can verify a
given solution quickly (that is, in polynomial time) [NP],
an algorithm can also find that solution quickly [P].
Since the former describes the class of
problems termed NP, while the latter describes
P, the question is equivalent to asking whether
all problems in NP are also in P.
Machine learning is getting computers to
program themselves. If programming is
automation, then machine learning is
automating the process of automation.
Writing software is the bottleneck: we don't
have enough good developers. Let the data
do the work instead of people. Machine
learning is the way to make programming
scalable.
Traditional programming: data and a program
are run on the computer
to produce the output.
Machine learning: data and the output
are run on the computer to create a
program. This program can then be used
in traditional programming.
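The contrast above can be sketched in a few lines. The data and the fitting rule below are illustrative assumptions; the point is only that in one case a person writes the program, and in the other the program is induced from (input, output) pairs.

```python
# Traditional programming: a person writes the program (the rule).
def double(x):
    return 2 * x

# Machine learning: data plus desired outputs produce a program.
# Here a one-parameter least-squares fit recovers the rule from examples.
data = [(1, 2), (2, 4), (3, 6)]                       # inputs and outputs
slope = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

learned = lambda x: slope * x                         # the induced "program"
print(double(5), learned(5))                          # both map 5 -> 10
```

The learned function can then be used like any hand-written one, which is what "this program can be used in traditional programming" means.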
Machine learning is like farming or
gardening: the seeds are the algorithms,
the nutrients are the data, the gardener is
you, and the plants are the programs.
Every machine learning algorithm has three components:
Representation: how to represent knowledge.
Examples include decision trees, sets of rules, instances,
graphical models, neural networks, support vector
machines, model ensembles and others.
Evaluation: the way to evaluate candidate programs
(hypotheses). Examples include accuracy, precision and
recall, squared error, likelihood, posterior probability,
cost, margin, entropy, K-L divergence and others.
Optimization: the way candidate programs are
generated, known as the search process. Examples include
combinatorial optimization, convex optimization and
constrained optimization.
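The three components can be seen in a deliberately tiny sketch. Everything here is a made-up toy: the representation is a threshold rule ("decision stump"), the evaluation is accuracy, and the optimization is exhaustive search over candidate thresholds.

```python
# A toy 1-D dataset of (feature, label) pairs -- illustrative only.
data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

def stump(threshold):
    # Representation: the hypothesis predicts 1 iff x > threshold.
    return lambda x: int(x > threshold)

def accuracy(h):
    # Evaluation: fraction of samples the hypothesis labels correctly.
    return sum(h(x) == y for x, y in data) / len(data)

# Optimization: search the candidate thresholds for the best hypothesis.
best = max((x for x, _ in data), key=lambda t: accuracy(stump(t)))
print(best, accuracy(stump(best)))
```

Swapping in a richer representation (a tree, a neural network), a different evaluation (squared error, likelihood), or a smarter search (convex optimization) gives the familiar zoo of learning algorithms.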
Supervised learning: (also called inductive
learning) training data includes desired outputs.
"This is spam, this is not": the learning is supervised.
Unsupervised learning: training data does not
include desired outputs. An example is clustering. It is
hard to tell what is good learning and what is not.
Semi-supervised learning: training data
includes a few desired outputs.
Reinforcement learning: rewards come from a
sequence of actions. AI researchers like it; it is the most
ambitious type of learning.
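To make the unsupervised case concrete, here is a minimal clustering sketch. The 1-D points and the choice of two clusters are assumptions for illustration; the update is a simplified k-means-style step, with no labels anywhere in the data.

```python
# Unsupervised learning sketch: group unlabelled points into two clusters.
points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]      # no desired outputs given

# Start with two guessed centres, then alternate assignment and update.
c1, c2 = min(points), max(points)
for _ in range(5):                            # a few refinement passes
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(sorted(g1), sorted(g2))                 # the discovered groups
```

Note that nothing tells the algorithm whether these groups are the "right" ones, which is exactly why it is hard to tell what good unsupervised learning is.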
