
https://en.wikipedia.org/wiki/Naive_Bayes_classifier

https://en.wikipedia.org/wiki/Thomas_Bayes

Bayes’ theorem was named after Thomas Bayes (1701–1761), who studied how to compute a
distribution for the probability parameter of a binomial distribution (in modern terminology). Bayes’s
unpublished manuscript was significantly edited by Richard Price before it was posthumously read at
the Royal Society. Price edited[11] Bayes’s major work “An Essay towards solving a Problem in the
Doctrine of Chances” (1763), which appeared in Philosophical Transactions,[12] and contains Bayes’
theorem. Price wrote an introduction to the paper which provides some of the philosophical basis
of Bayesian statistics. In 1765, he was elected a Fellow of the Royal Society in recognition of his
work on the legacy of Bayes.[13][14]
The French mathematician Pierre-Simon Laplace reproduced and extended Bayes's results in 1774,
apparently unaware of Bayes's work.[note 1][15] The Bayesian interpretation of probability was developed
mainly by Laplace.[16]
Stephen Stigler used a Bayesian argument to conclude that Bayes’ theorem was discovered
by Nicholas Saunderson, a blind English mathematician, some time before Bayes;[17][18] that
interpretation, however, has been disputed.[19] Martyn Hooper[20] and Sharon McGrayne[21] have
argued that Richard Price's contribution was substantial:
By modern standards, we should refer to the Bayes–Price rule. Price discovered Bayes’ work,
recognized its importance, corrected it, contributed to the article, and found a use for it. The modern
convention of employing Bayes’ name alone is unfair but so entrenched that anything else makes
little sense.[21]

A naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions.
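
(To spell out what that independence assumption means, in notation of my own rather than the quoted source's: for a class C and features x1, ..., xn, Bayes' theorem gives P(C | x1, ..., xn) ∝ P(C) · P(x1, ..., xn | C), and the "naive" step is to assume the features are conditionally independent given the class, so the joint likelihood factors into P(x1 | C) · P(x2 | C) · ... · P(xn | C). That factorization is the whole trick; the rest is just Bayes' theorem.)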

Bayes' theorem was named after the Reverend Thomas Bayes (1702–61), who studied how
to compute a distribution for the probability parameter of a binomial distribution. After Bayes'
death, his friend Richard Price edited and presented this work in 1763 as “An Essay
towards solving a Problem in the Doctrine of Chances”.

So it is safe to say that Bayes classifiers have been around since the 2nd half of the 18th
century.

Especially as Stephen Stigler suggested in 1983 (Stephen M. Stigler, "Who Discovered
Bayes' Theorem?", The American Statistician 37(4):290–296) that Bayes' theorem was
discovered by Nicholas Saunderson some time before Bayes. On the other hand, Edwards
disputed that interpretation in 1986 (A. W. F. Edwards, "Is the Reference in Hartley
(1749) to Bayesian Inference?", The American Statistician 40(2):109–110).
Which takes us back to the safe assumption of "2nd half of the 18th century" again: a
naive Bayes classifier is a simple probabilistic classifier based on applying Bayes'
theorem, and what makes it "naive" is that it comes with strong (naive) independence
assumptions. But practically, it's the same theorem.
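
To make the "same theorem plus an independence assumption" point concrete, here is a minimal sketch of such a classifier in Python. The toy weather data, the function names, and the Laplace smoothing constant are illustrative assumptions of mine, not anything taken from the sources above; the sketch is only meant to show the prior P(C) and the per-feature likelihoods P(x_i | C) being multiplied together.

# Minimal naive Bayes over categorical features -- an illustrative sketch,
# not a reference implementation.
from collections import Counter, defaultdict
from math import log

def train(examples):
    """examples: list of (feature_tuple, label) pairs; returns the counts needed for prediction."""
    class_counts = Counter()
    feature_counts = defaultdict(lambda: defaultdict(Counter))  # [label][i][value] -> count
    feature_values = defaultdict(set)                           # [i] -> distinct values seen
    for features, label in examples:
        class_counts[label] += 1
        for i, value in enumerate(features):
            feature_counts[label][i][value] += 1
            feature_values[i].add(value)
    return class_counts, feature_counts, feature_values

def predict(features, class_counts, feature_counts, feature_values, alpha=1.0):
    """Pick the class maximizing log P(C) + sum_i log P(x_i | C), with Laplace smoothing."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in class_counts.items():
        score = log(count / total)  # log prior P(C)
        for i, value in enumerate(features):
            # Smoothed estimate of P(x_i | C); alpha keeps unseen values from giving log(0).
            num = feature_counts[label][i][value] + alpha
            den = count + alpha * len(feature_values[i])
            score += log(num / den)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

if __name__ == "__main__":
    data = [
        (("sunny", "hot"), "no"),
        (("sunny", "mild"), "yes"),
        (("rainy", "mild"), "yes"),
        (("rainy", "hot"), "no"),
    ]
    model = train(data)
    print(predict(("sunny", "mild"), *model))  # prints "yes" for this toy data

The log-space sums are just the products from the factorization above, written that way to avoid numerical underflow; everything else is 18th-century Bayes.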
