
Bayes’ Theorem: Lust for Glory!

BY RICHARD CARRIER / ON DECEMBER 1, 2011 / 74 COMMENTS

My talk at Skepticon IV on the importance of Bayes’ Theorem to skepticism is now available on YouTube (Bayes’
Theorem: Lust for Glory!). (My slides in that on the UFO case don’t show the whole text because I had to use Darrel
Ray’s computer at the last minute [thx D!] which didn’t have the right font; but I speak most of it out, so you don’t miss
anything. There were some other font goofs, but that’s the only one you’ll notice. Oh, and the slide near the end that
everyone laughs at but you can’t see on the video, says “Ockham’s razor will cut a bitch.” Oh yeah she will!)

For a handy web page on using and understanding Bayes’ Theorem (which I’ll soon be
improving with an even more versatile applet) see my Bayesian Calculator. And
besides my book Proving History: Bayes’s Theorem and the Quest for the
Historical Jesus which has since become available (and is now the first place you
should go to learn about Bayes’ Theorem and Bayesian reasoning), the other books I
recommend in the video are: Innumeracy: Mathematical Illiteracy and Its
Consequences by John Allen Paulos (I also recommend his Beyond Numeracy and
A Mathematician Reads the Newspaper); Proofiness: The Dark Arts of
Mathematical Deception by Charles Seife; The Theory That Would Not Die by
Sharon Bertsch McGrayne; Math Doesn’t Suck by Danica McKellar (this is the only one of her series that you need, and
everyone should buy, but if you want to gift her higher grade math books to a teen you know, she also has Kiss My Math
and Hot X: Algebra Exposed!, Girls Get Curves: Geometry Takes Shape, and more to come; I didn’t have time to
also mention another woman who advocates for wider math literacy, so I will here, although it’s less useful than
McKellar’s, since it doesn’t teach math but only why you might like learning it more than you thought: The Calculus
Diaries: How Math Can Help You Lose Weight, Win in Vegas, and Survive a Zombie Apocalypse by Jennifer
Ouellette); and The Mathematical Palette by Ronald Staszkow and Robert Bradshaw (get a used one, since new copies
are priced at “textbook robbery” levels; you might get stuck with an old edition when buying used, but they’re all good) and
101 Things Everyone Should Know About Math by Marc Zev, Kevin Segal and Nathan Levy.
In addition to Proving History, which is now my most comprehensive treatment of Bayesian
reasoning for laymen, the books in which I also discuss and apply Bayes’ Theorem are The
Christian Delusion (TCD) and The End of Christianity (TEC), both edited by John
Loftus. In TCD, in my chapter “Why the Resurrection Is Unbelievable,” I only mention Bayes
(and show the math) in the endnotes, but you can see how those translate what I otherwise say in
that chapter in plain English, and thus see an application of Bayes’ Theorem in action. That
chapter refutes previous attempts to use Bayesian reasoning to prove the miraculous resurrection
of Jesus (by Swinburne and the McGrews, for example), by showing the correct way to do it,
and how using the correct facts changes everything. (TCD also has my chapter explaining why
Christianity isn’t responsible for modern science, contrary to a popular claim of late, but I don’t translate my argument there
into Bayes, though I could.)

In TEC I have two chapters deploying Bayes’ Theorem, and both explicitly discuss and use it
from the get go. One proves the entire Christian religion false just from considering how it began,
and that gives you a good look at how Bayesian reasoning opens your eyes to things you might
have overlooked before, or confirms what you intuitively knew but couldn’t articulate the logic of.
The other uses Bayes to prove every design argument false, including creationism, divine
biogenesis, and the fine tuning argument (among some others). In fact, I show how the fine tuning
of the physical constants actually proves God doesn’t exist. Quite conclusively in fact. And in
saying that I’m just explaining in ordinary language what two independent teams of expert
mathematicians already proved (I cite their work in the chapter). (TEC also has my most
controversial chapter, peer reviewed by several professors of philosophy, proving my theory of godless morality correct,
and Christian morality defective, but I didn’t translate that into Bayes, though again I could have.)

Although I might punt a lot to Proving History, this is the place to ask questions about my Skepticon talk or my use of
Bayes’ Theorem in TCD, TEC, or elsewhere. Feel free to query me on any of that here.


74 comments
RYAN • DECEMBER 1, 2011, 8:24 PM

Hi Rick,
Re: the fine-tuning argument disproving God’s existence, I think you’re referring to the work of Ikeda and
Jeffreys which you cite in a chapter of “The End of Christianity.” I believe I have some original insight into that
argument. This will be long, but it should be worth it:

An online work by Michael Ikeda and Bill Jeffreys [3] argues that fine-tuning is evidence against the existence of
God. They reason that if naturalism (the view that
no spirits exist, only the natural world) is true then it is
necessarily the case that human beings will only find
themselves in a universe which naturally allows for their
existence. On the other hand, God could perform miracles
in order to allow humans to exist in a world that is not
favorable to life, which means that the hypothesis of
theism renders it less than 100% likely that we would find
ourselves in a universe which naturally supports life.

When I first read this I was very puzzled. I viewed it as a silly argument. What would the universe
look like if it were unsuitable to life but life existed
because God was frequently performing miracles to keep
living things alive? Presumably, such a constant set of
miracles would be regularly occurring and could be
described by generalizations (scientific “laws” are nothing
more than generalizations).

But such generalizations would form the basis of the scientific laws that the humans within this hypothetical
universe would have, and so they would still necessarily have to have scientific laws that were congenial to their
existence. If that’s the case, then we could not sensibly say that there is any probability (on the hypothesis of
naturalism or theism) that humans might observe a universe with laws that do not support
their existence, and therefore the argument put forward
by Ikeda and Jeffreys fails as an argument against theism.
However, as I started thinking about it a bit more I
realized that it is logically possible to have a universe with
two sets of scientific laws: one set governing the behavior
of inanimate matter and one set governing the behavior of
matter that composes living bodies. In fact, at one point in
history scientists did seriously consider the hypothesis that
life itself might be above certain laws, such as the law of
increasing entropy (this hypothesis has long been
discredited, of course). In a genuinely supernatural
universe such a thing is quite possible, and observing a
dual set of laws would be highly suggestive of nature itself
being built with certain goals in mind. On the other hand, a
universe not governed by any supreme or supernatural
intelligence would be unable to distinguish between life
and non-life, consciousness and unconsciousness, and
hence would not “know” to assign different laws to the
bodies of living, conscious beings like us.
Needless to say this is not the way our universe
works, and the fact that it does not ought to qualify as
(at least marginal) evidence for naturalism and against
supernaturalism.

All of this is an abridged version of some material that I have published in a book called “Selected Essays”
available here:
http://www.lulu.com/product/ebook/selected-essays/17358956

RICHARD CARRIER • DECEMBER 2, 2011, 8:47 AM

The conclusion is even clearer than that. See my chapter in TEC. The other advocate is
Sober (whom Ikeda & Jeffreys later acknowledged, having only discovered his work after
they published). I expand their analysis with even more examples, clarifying the actual
significance of the points they are already making.

HEDDLE • DECEMBER 7, 2011, 3:27 AM

The Ikeda & Jeffreys paper is utter nonsense and demonstrates yet again why in
a perfect world both Heisenberg and Bayes would be off limits to
philosophers. There is a reason why no scientist, of any religious stripe,
puzzling over apparent fine tuning (properly defined as having nothing to
do with probability, but simply: the apparent sensitivity of habitability
to the values of some physical constants) ever invokes Ikeda and
Jefferys–because they know that talk of “sufficiently inscrutable and
powerful gods” is woo and has no bearing on science and cannot prove
anything about the real world–even results they might find advantageous.

Strangely enough their actual result is trivial: if fine tuning is real and the
constants appear to be a random draw (i.e., low probability) then the
anthropic principle (manifested through a multiverse) is the simplest
explanation–as it is the only view that makes such a prediction. Duh. You
don’t need Bayes to make the point more complicated than it is.

Of course if fine tuning is real and the constants are high probability
(imagine they are shown to be inevitable–prob = 1) it’s a different story.

But in either case their paper is a joke with no value whatsoever to science–only to woo-meisters.

RICHARD CARRIER • DECEMBER 7, 2011, 3:51 PM

That isn’t Ikeda and Jeffreys’ argument. That you can’t even correctly describe their argument doesn’t give me
any confidence in your judgment in the matter. You
also forgot Sober, who independently confirmed their
result. You also seem to be contradicting yourself, at
once calling the argument nonsense that you then
immediately declare so obviously correct as to be
trivial. Pick a lane. And to top it all off, you aren’t even
exhibiting any sign of having read my chapter on the
matter. Adhere to my entirely reasonable posting
requirements or I’ll be kicking your posts off as
spam.

HEDDLE • DECEMBER 7, 2011, 4:09 PM

Try reading more carefully.

Their argument is nonsense because they employ Bayes’ theorem when it is not necessary. Surely you
understand (well maybe not) that a paper can get
the right answer and still be nonsense. Perhaps you are
like a freshman who argues “but I got the right
answer!!”

There is a reason why the paper languishes stale on a U Tx website.

I don’t need to read your chapter. You wrote:


In fact, I show how the fine tuning of the
physical constants actually proves God
doesn’t exist. Quite conclusively in fact.

It is simply not possible to prove “quite conclusively” that the fine tuning of the physical constants proves
God doesn’t exist. It’s secular woo. That is why no
physicist or cosmologist will quote Ikeda & Jefferys,
Sober or you in a scientific peer reviewed article as
having solved the fine tuning problem. We’ll never see:

Leonard Susskind: Hmm. I was going to demonstrate how the String Landscape saves us
from the apparent fine tuning and the mischief it
causes among the IDers. Silly me! I didn’t know
that Ikeda, Jefferys, Sober, and now Richard
Carrier have made that argument superfluous! They
have used Bayes’ theorem to prove that fine tuning
has disproved God! I’m certainly convinced. What
a service!

I of course don’t give a rat’s ass if someone of your caliber treats my post as “spam”. I know your work is
a sham–and you know that I (and other scientists)
know–that’s good enough for me.

RICHARD CARRIER • DECEMBER 9, 2011, 12:31 PM

Heddle, you are not responding to anything I have written anymore. Quote my chapter or either of the
Sober and Ikeda-Jeffreys articles and respond to their
actual content. That you make excuses for not needing
to only proves which of us is uninterested in facts or
truth. That’s all I or anyone needs to know about your
epistemology. Address what we actually say. Or take
your marbles and go home.

PATRICK • DECEMBER 2, 2011, 10:17 PM

Depending on whether you’re advancing your own argument, or just refuting someone else’s
use of a bayesian fine tuning argument, you can get a lot more brute force than that.

If someone defines “fine tuning” in terms of a bunch of cosmological constants, it’s easy to
come up with example universes that aren’t finely tuned, which would support life, and which
are possible under theism but not naturalism. For example, an entire universe consisting only
of a single star with a single planet.

RICHARD CARRIER • DECEMBER 9, 2011, 4:18 PM
Patrick, correct. That’s actually part of the argument of Sober and Ikeda
& Jeffreys. The number of possible life-bearing universes that a god can
make is vast, and most of them aren’t much like this one. Whereas the
number of possible life-bearing universes nature can make is extremely
few, and they all look essentially just like this one (which is to say, not in
physical appearance necessarily, but in gross attributes: they will all be
vast in size, extremely ancient, almost entirely lethal to life, and yet finely
tuned to make life a statistical byproduct of it). Thus P(e|God) is low, but
P(e|~God) is high. Fine tuning is therefore not evidence for the existence
of God, but evidence against the existence of God. There’s more to it than
that, which I cover in my chapter’s survey of their argument, but that’s a
key point of it.
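
(To make the direction of that update concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration; they are not estimates from my chapter or from the Ikeda-Jeffreys and Sober papers.)

```python
# Toy Bayes' Theorem calculation illustrating the direction of the update
# described above. All numbers are invented for illustration only.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(h|e) by Bayes' Theorem."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# e = "a vast, ancient, almost entirely lethal, yet finely tuned universe"
prior_god = 0.5   # start agnostic
p_e_god = 0.1     # few of the life-bearing universes a god could make look like this
p_e_nogod = 0.9   # nearly all naturally possible life-bearing universes do

print(posterior(prior_god, p_e_god, p_e_nogod))  # 0.1: the evidence lowers P(God)
```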

AUSTERITY • DECEMBER 2, 2011, 7:14 AM

Richard,

Do you have any book recommendations on the subject of Bayesian reasoning in philosophy? Specifically on the
subject of Bayesian reasoning as proper inductive reasoning. I have ET Jaynes’ Probability Theory: The Logic of
Science, but would like to know of others, perhaps something a bit shorter, that I could recommend to others.

RICHARD CARRIER • DECEMBER 2, 2011, 8:40 AM

Bayesian Epistemology by Bovens and Hartmann is a good place to start, but you might
want to read the Stanford entry on “Bayesian Epistemology” first and then look over
its bibliography (Bovens & Hartmann is on it). My book treats some of the critical objections
listed there. They are much easier to overcome than the article lets on. But one example I
leave out of the book (except to reference it) is my Basic Empiricism epistemology, as that
(or something like it) is already granted as axiomatic by mainstream historians, but since you
are probably more interested in that, see my Epistemological Endgame, Defending
Naturalism as a Worldview (at least the first three sections, the third having “basic
empiricism” as its title), and my earlier note on Bayesian induction.

JOSH • DECEMBER 2, 2011, 8:25 AM

It seems like a weakness of this type of analysis is how it handles highly improbable events. This is a good thing, because
the level of evidence required should be much higher for improbable events, but at the same time, we know that
improbable events do occur.
I’m trying to think of an example where a bayesian analysis would ‘prove’ something didn’t happen in the past
that we know did occur. This would happen when we have large amounts of missing evidence. While I’m having
trouble thinking of an example from history, it might be best to use your example from the video with the
flares / meteor / ufo’s.

If the person running the analysis never knew about the flares possibility, they would have concluded that it’s a
meteor, and using the words you’re using ‘proven’ something that wasn’t true.

I guess my critique really revolves around using the word ‘prove.’ While we know that Bayes’ Theorem has a
rigorous mathematical base and can be very useful for making predictions, I’m more skeptical of its reliability in
projecting hypotheses into the past. Even when making predictions, we wouldn’t say that we’ve proven what
will happen. Less absolute language seems appropriate (e.g. “given this information, a bayesian analysis indicates
this is what is likely”).

RICHARD CARRIER • DECEMBER 2, 2011, 9:16 AM

“I’m trying to think of an example where a bayesian analysis would ‘prove’ something
didn’t happen in the past that we know did occur. This would happen when we have
large amounts of missing evidence.” Actually, no. A Bayesian analysis would prove in such
a case that we don’t know whether it happened, not that it didn’t happen. I actually spend a
lot of time analyzing different examples of this in Proving History.

The example you give, of a person ignorant of flares, that’s not a case of “proves false when
we know it’s true” because in that scenario we wouldn’t know the flares explanation was
true, so it is not an example of what you are talking about. As soon as you know about flares,
you know it’s flares. This is the beauty of Bayesian epistemology: it explains the logic of
updating your conclusions as new evidence arises, thus verifying our intuition that knowledge
is always revisable. The possibility of our not knowing something is actually included in the
probability we calculate. For example, in the video I came to around 95% chance it’s
meteors, when we are in a state of ignorance about flares, which means there was a 1 in 20
chance it was something else. If we discover the flares, then we know this was in that 1 in 20
(which is why I made the specific point that we shouldn’t get too cocky when we have such a
high probability of being wrong). Thus at no point are we affirming “it can’t possibly be
something else we don’t yet know about.” Rather, we only affirm that that will be unlikely,
given the data available to us. Which is logically correct.
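
(Here is a minimal sketch of that kind of update in Python. The priors and likelihoods are made up for illustration; they are not the numbers from my slides.)

```python
# Sketch of how a Bayesian update absorbs a newly discovered hypothesis.
# Priors and likelihoods are made up for illustration.

def posterior(priors, likelihoods):
    """Normalize prior x likelihood over whatever hypotheses are on the table."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: round(joint[h] / total, 3) for h in joint}

# Before we know about flares: "meteor" vs. a catch-all "something else."
print(posterior(
    priors={"meteor": 0.5, "something else": 0.5},
    likelihoods={"meteor": 0.9, "something else": 0.05},
))  # meteor ~0.95, something else ~0.05: the "1 in 20" left open

# After learning about flares, the catch-all splits, and the evidence
# fits flares far better than meteors.
print(posterior(
    priors={"meteor": 0.5, "flares": 0.25, "something else": 0.25},
    likelihoods={"meteor": 0.1, "flares": 0.9, "something else": 0.05},
))  # flares now dominates; meteor drops sharply
```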

This ties into the talks given by Galef and Greenberg. A good bayesian will know when their
background knowledge is weak, and this will translate to the probabilities. For example, if
you know you have not researched UFO phenomena and therefore you know there are lots
of things you don’t know, then your probability estimates are going to be lower, specifically
to account for what you already know is a higher probability that anything you (in your
ignorance) think of may be wrong. An example is doing home renovations: if project after
project you learn you were way overconfident in your ability to carry out the job you
planned, then your estimate of the prior probability that you will do well on the next job goes
down, to the point that your ignorance is now fully represented in the math, and you’ll
probably start considering hiring contractors or taking classes first before attempting another
renovation. If you were not operating with built-in biases in your brain, or were aware of
them, you would have started with a low prior right from the get go, knowing full well that
you are probably overestimating your ability from day one. But that is again the effect of
background knowledge (in this case, knowledge of the biases in your brain, or of the true
likelihood of spontaneous competence).

But you are right that people trade too much on the ambiguity of the word “proven.” I don’t
think I did that in the talk. However, “proven” always means some probability of being wrong
(think Cartesian Demon), so the only issue is what threshold you intend to assign as
warranting the label “proven.” Scientists actually explore this boundary all the time (in physics
something like a 1 in 10,000 chance of being wrong is considered indicating a conclusion is
“proven,” yet even they accept that proven conclusions are still tentative and revisable), but
where that boundary is varies by context. Thus, for example, when a historian says something
is “proven” sometimes they know, and their colleagues know, that this may mean nothing
more than a 1 in 100 chance of being wrong, simply because we already accept that in that
context conclusions are never as certain as in, for example, physics, a completely different
epistemic context. This is fine as long as people don’t forget the context-dependency of their
language.

Christian apologists make this mistake all the time, but only because they are prone to black-
and-white thinking and thus struggle to conceive of even the idea that there is some
probability they are wrong. They thus always talk about conclusions (pro or con) being
definite or certain or indisputable, and don’t reflect on just what they can honestly mean by
that, since logically they cannot mean 100% (even a formal logical proof vetted by a hundred
experts still has a nonzero probability of being invalid, since there is some small probability
that all one hundred experts missed an error in it). But vanishingly small probabilities of being
wrong still warrant strong language (like proven, certain, definite, indisputable, etc.), and
there are some such conclusions in history (I discuss some examples in Proving History, but
there are so many they tend to be overlooked for being so obvious, e.g. that there was a
Roman Empire). But you are right that we should be more clear in what we mean whenever a
conclusion is contentious or, more importantly, being used in a way that presents risk (e.g.
making policy decisions based on historical conclusions).

Since risk can be measured, it’s easy to compare that risk to the epistemic certainty and
calculate the effective cost of trusting a conclusion. For example, if the cost of being wrong is
huge, then you need a much higher epistemic certainty; whereas when the cost of being
wrong is low, you don’t. All of this can be worked out using Bayes’ Theorem and standard
theories of utility. Of course, the risk itself may be in doubt (think Pascal’s Wager) and thus
has its own Bayesian probability to begin with, which can be absurdly low. But that’s a
different discussion entirely. In the end, Bayes’ Theorem actually teaches the very point you
are making: that we need to be more aware of the probability of being wrong whenever we
make assertions of certitude.
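
(A minimal sketch of that cost comparison, with purely hypothetical numbers:)

```python
# Sketch of weighing epistemic certainty against the cost of being wrong.
# The "costs" are in arbitrary units, purely hypothetical.

def expected_loss(p_wrong, cost_if_wrong):
    return p_wrong * cost_if_wrong

print(expected_loss(0.01, 10))             # low stakes: a 1-in-100 error risk costs little
print(expected_loss(0.01, 1_000_000))      # high stakes: the same certainty is reckless
print(expected_loss(0.000001, 1_000_000))  # high stakes demand far greater certainty
```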


BEN • DECEMBER 2, 2011, 4:48 PM

Obligatory Dumb and Dumber quote “So you’re sayin there’s a chance!”
to give all the Christians hope despite all this Bayesian crap:
http://www.youtube.com/watch?v=gqdNe8u-Jsg

RICHARD CARRIER • DECEMBER 9, 2011, 4:20 PM

And she was being generous, too.

HMM • DECEMBER 3, 2011, 2:56 AM

Easily the best talk of Skepticon, congratulations.

Curious: does Bayes’ theorem finally give us a good argument against solipsism? A solipsist might say they can’t know other people really exist
and that they’re just making the fewest assumptions. What do
Bayesianism and Kolmogorov complexity have to say about this?

RICHARD CARRIER • DECEMBER 9, 2011, 4:07 PM

Yes.

Solipsism still requires an explanation for what you are cognating. There are only two logically possible
explanations: random chance, or design.

It’s easy to show that the probability that your stream of consciousness is a product of random chance is
absurdly low (see Boltzmann brains, for example). In
simple form, if we assume no prior knowledge or
assumptions (other than logic and our raw
uninterpreted experience), the prior probability of
solipsism becomes 0.5 but the likelihood of the
evidence on solipsism is then vanishingly small
(approaching zero), since chance events would sooner
produce a relative chaos than an organized stream of
complex consciousness, whereas the likelihood of that
same evidence on a modest scientific realism is
effectively 100%. Work the math and the probability
of chance-based solipsism is necessarily vanishingly
small (albeit not zero, but close enough for any
concern). Conclusion: random solipsism would sooner
produce a much weirder experience.

That leaves some sort of design hypothesis, namely your mind is cleverly making everything up, just so.
Which requires your mind to be vastly more intelligent
and resourceful and recollectful than you experience
yourself being, since you so perfectly create a reality
for yourself that remains consistent and yet that you
can’t control with your mind. So you control absolutely
everything, yet control next to nothing, a contradiction
in terms, although an extremely convoluted system of
hypotheses could eliminate that contradiction with
some elaborate device explaining why your
subconscious is so much more powerful and brilliant
and consistent and mysterious than your conscious self
is. The fact that you have to develop such a vastly
complex model of how your mind works, just to get
solipsism to make the evidence likely (as likely as it
already is on modest scientific realism), necessarily
reduces the prior probability by as much, and thus the
probability of intelligent solipsism is likewise vanishingly
small. Conclusion: intelligent solipsism would sooner
result in your being more like a god, i.e. you would
have vast or total control over your reality.

One way to think of the latter demarcation of prior probability space is similar to the thermodynamic
argument against our having a Boltzmann brain:
solipsism is basically a cartesian demon scenario,
only the demon is you; so think of all the possible
cartesian demons, from “you can change a few things
but not all,” to “you can change anything you want,”
and then you’ll see the set of all possible solipsistic
states in which you would have obvious supernatural
powers (the ability to change aspects of reality) is
vastly larger than the set of all possible solipsistic states
in which you can’t change anything except in exactly
the same way as a modest scientific realism would
produce. In other words, we’re looking at an
incredible coincidence, where the version of solipsism
that is realized just “happens” to be exactly identical in
all observed effects to non-solipsism. And the prior
probability space shared by that extremely rare
solipsism is a vanishingly small fraction of all logically
possible solipsisms. Do the math and the probability of
an intelligent solipsism is vanishingly small.

This all assumes you have no knowledge making any version of solipsism more likely than another. And we
are effectively in that state vis-a-vis normal
consciousness. However we are not in that state vis-a-
vis other states of consciousness, e.g. put “I just
dropped acid” or “I am sleeping” in your background
knowledge and that entails a much higher probability
that you are in a solipsistic state, but then that will be
because the evidence will be just as such a hypothesis
would predict: reality starts conforming to your whim
or behaving very weirdly in ways peculiar to your own
desires, expectations, fears, etc. Thus “subjective”
solipsism is then not a vanishingly small probability. But
“objective” solipsism would remain so (wherein reality
itself is a product of your solipsistic state), since for
that to explain all the same evidence requires extremely
improbable coincidences again, e.g. realism explains
why you need specific conditions of being drugged or
sleeping to get into such a state, and why everything
that happens or changes in the solipsistic state turns out
not to have changed or happened when you exit that
state, and why the durations and limitations and side
effects and so on all are as they are, whereas pure
solipsism doesn’t come with an explanation for any of
that, there in that case being no actual brain or
chemistry or “other reality” to return to, and so on, so
you would have to build all those explanations in to get
objective solipsism to predict all the same evidence,
and that reduces the prior. By a lot.

There is no logically consistent way to escape the conclusion that solipsism is exceedingly improbable.

TRINA • DECEMBER 2, 2011, 11:07 AM

Unless this has been subjected to OPEN PEER REVIEW I am not going to put any time into it.

MATTY • DECEMBER 3, 2011, 2:27 AM

If that’s your standard for everything you read or watch you must spend a lot of time doing
nothing.

KOL • DECEMBER 3, 2011, 2:03 PM

Richard,

Would you show me how to use the equation to determine the likelihood of “face-palm”
events from a specific statement?

It can’t help me now but others might benefit.

RICHARD CARRIER • DECEMBER 9, 2011, 2:17 PM

You need to give me a specific example.

ABB3W • DECEMBER 2, 2011, 11:19 AM

Are you familiar with the paper “Minimum Description Length Induction, Bayesianism and Kolmogorov
Complexity”, by Paul M. B. Vitányi and Ming Li?

RICHARD CARRIER • DECEMBER 2, 2011, 3:08 PM

No, I had not known of it until now. But thanks for the reference. Yudkowsky would be
pleased. It ties his version of Ockham’s Razor to mine, which I was already sure could be
done, but it’s nice to see that confirmed.

MUFFIT • DECEMBER 3, 2011, 10:02 AM

Richard, can you set up a twitter account that ‘tweets’ your blog headlines so we can follow you in our twitter
feed?

Super thanks, excited about you blogging!

RICHARD CARRIER • DECEMBER 9, 2011, 2:39 PM

I confess I know next to nil about twitter. So I don’t know if that’s worth the labor. A twitter
feed that only flashes my blog headlines? Is that really needed? If you can actually click
through to the blog then you are already on a device that can run an RSS feed. So why not
just do that? That’s not a rhetorical question. I just don’t understand the advantage of
twittering what I already feed. So feel free to educate me!

QUANTHEORY • DECEMBER 3, 2011, 12:58 PM

I just wanted to pop in and mention that, without having seen or read any of your work, a few months back, I
wrote a similar argument about apologetic “excuses” for God to that given in your presentation. I was fairly
certain that it was the most direct, slam dunk argument against most of the apologetic arguments in the class I
was discussing. However, it’s still a bit reassuring to see someone well-regarded in the atheist movement say
much the same thing, at least insofar as I don’t feel like a lone crackpot physicist doing math about God in an
out-of-the-way corner of the internet.

So thanks for that. Now I’m going to have to read your books.

Also interesting to think about how Bayes theorem can formalize informal but valid reasoning. I remember that
my path out of Christianity came from gradually refining my ideas about God to see what kind of God was most
probable, which eventually got me to the point where the most probable sort of “God” was not much of a God
at all, but a sort of naturalist ordering principle, at which point it made sense to drop the term altogether.

QUANTHEORY • DECEMBER 3, 2011, 1:04 PM

Well, let me correct myself. I had seen some of your internet postings, but not much about
your application of Bayes to religion.

JULIEN ROUSSEAU • DECEMBER 3, 2011, 5:17 PM

Could you please post your slides from the talk online because it is sometimes hard to read them on the youtube
video.

Thank you.

Also, I recently discovered infidels.org and have been reading quite a lot of your writings there and find it very
interesting so let me take the occasion to thank you for that.
RICHARD CARRIER • DECEMBER 9, 2011, 3:02 PM

Thanks.

You aren’t missing much in the slides, since I speak out everything important during the talk,
and you get to see everything funny, except that one joke that I explain in this blog entry
already.

The slides only run in Keynote, and not the same way on every platform. The file is nearly 60
megabytes large and won’t run as embedded media. That makes it pretty useless to put it
online. It would take you forever to download the file, and you probably couldn’t even run it
when you did. My server won’t even allow keynote files to be uploaded anyway. I’ve tried
removing the animations and exporting to PDF but I keep getting a corrupt file. So I’ve given
up on that for now. Sorry.

JT512 • DECEMBER 3, 2011, 11:16 PM

I’m surprised you don’t use the odds form of Bayes Theorem in your presentations to non-mathematical
audiences. I think it’s not only much more useful, but much easier to grasp.

RICHARD CARRIER • DECEMBER 7, 2011, 6:38 PM

I disagree. Everyone I’ve tutored on this has a much easier time understanding the evidence-
weights model than the odds model. It’s also easier to run mathematically without making
mistakes (because it has built-in checks), and easier to critique the premises (since it stratifies
every problem into three numbers, three propositions that can be easily framed as ordinary
language sentences about the probability of events). (For those who are curious, by the
“odds form” he means this.)

JT512 • DECEMBER 8, 2011, 4:13 PM

Oy, that notation! More plainly, the odds form of Bayes theorem is

Odds(H1|D) = P(D|H1)/P(D|H0) × Odds(H1),

where H0 is the null hypothesis, H1 is the alternative hypothesis, and D
are the observed data. The term on the far right, the prior odds of H1,
equals P(H1)/[1–P(H1)], which shows that the two forms of Bayes
theorem have as their only inputs the exact same three probabilities. So,
your third objection is not valid.

Your second objection, that the conventional form of Bayes theorem is “easier to run mathematically” seems difficult to justify in light of the fact
that the odds form is the mathematically simpler one. It’s so simple that
you can do the calculation in your head.

Finally, from the point of view of understanding Bayesian reasoning, I find the odds form superior, because it cleanly separates the weight of the
evidence—the first term on the right-hand side (the “Bayes factor”)—
from the weight of prior opinion—the second term on the right-hand side
(the prior odds)—and clearly shows how each should inform one’s
opinion of the plausibility of the hypothesis.

Jay
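
(For anyone following along, here is a minimal sketch of the odds-form calculation Jay describes; the example numbers are made up.)

```python
# Minimal sketch of the odds form of Bayes' theorem described above.
# The example numbers are made up.

def posterior_probability(p_d_given_h1, p_d_given_h0, prior_h1):
    bayes_factor = p_d_given_h1 / p_d_given_h0    # weight of the evidence
    prior_odds = prior_h1 / (1 - prior_h1)        # weight of prior opinion
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1 + posterior_odds)  # convert odds back to a probability

# Evidence three times likelier on H1 than H0, prior P(H1) = 0.25:
print(posterior_probability(0.6, 0.2, 0.25))  # 0.5
```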

RICHARD CARRIER • DECEMBER 9, 2011, 12:26 PM

The fact that no one here knows what you are talking
about makes my point for me.

JT512 • DECEMBER 9, 2011, 12:30 PM

That’s a disappointing, and obviously fallacious, response.

RICHARD CARRIER • DECEMBER 9, 2011, 12:49 PM

It’s not fallacious if it’s true.

JT512 • DECEMBER 9, 2011, 1:23 PM

Look, it’s not a “fact” that nobody here knows what I’m talking about. And even if that premise is true, it
doesn’t follow that the conventional form of Bayes
theorem is the pedagogically superior one for Bayesian
hypothesis testing. I don’t have a 45-minute video, a
website, an online calculator, a confusing pdf, and like
three books explaining the form of Bayes theorem I
prefer, some subset of which mostly everyone “here”
has likely seen. You do.

Did I really need to explain that to you?

RICHARD CARRIER • DECEMBER 9, 2011, 2:14 PM

You’ve inspired me to look into the pedagogy behind teaching the odds form and I can see some better
ways to teach it in that form than I thought. So you
may be on to something that I can incorporate in my
book. To be sure I’d like to see you take a stab at a
quick tutorial. Solve the following using the odds form,
but with minimal notation and the simplest possible
arrangement, as if speaking to a novice, i.e. I’d like to
see how you would simplify the teaching of bayesian
reasoning this way.

P(e|h) = .95
P(e|~h) = .33
P(h) = .6
P(h|e) =
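
(For readers who want to check the arithmetic on those inputs, here is a quick sketch running both forms:)

```python
# Quick check of the exercise above, run in both the standard form and
# the odds form; they necessarily agree.
p_e_h, p_e_not_h, p_h = 0.95, 0.33, 0.6

# Standard form: P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|~h)P(~h)]
standard = (p_e_h * p_h) / (p_e_h * p_h + p_e_not_h * (1 - p_h))

# Odds form: posterior odds = Bayes factor x prior odds
posterior_odds = (p_e_h / p_e_not_h) * (p_h / (1 - p_h))
odds_form = posterior_odds / (1 + posterior_odds)

print(round(standard, 3), round(odds_form, 3))  # both ~0.812
```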

BRIAN MACKER • DECEMBER 4, 2011, 10:18 AM

Richard,

I just watched your video, and bought “The End of Christianity” – John W. Loftus to read your chapters when I
get it. I will be buying your next book.
I’m a computer scientist, and fellow atheist.

Currently I am a pan-critical rationalist and being one I am more than willing to change my position should
something better come along. I already know Bayes Theory and in fact deduced it for myself as a kid. You seem
to think it applies much more broadly than I do.



“Since all empirical/inductive reasoning is probabilistic (always have some chance of being
wrong). There is always some chance of being wrong, and those chances vary. And since
probability is by definition mathematical, some things are more probable than others some
things are less probable than others, ah, it follows that the logic of correct reasoning has to be
mathematical. It’s the only way you can model how this kind of reasoning works.” – From
your talk

Ok, be careful here. Yes, we use logic when reasoning, but reasoning doesn’t boil down to mere logic as far as I
can tell. Reasoning is the matter of following some algorithm, and there are lots of potentially valid algorithms to
follow. This is true of empirical reasoning also, and many more statements are empirical than probably you
realize (based on your claims here).


“And when you’re working it out, you do the math and figure out what the correct model is, Bayes
Theorem is what you get.” – From your talk

Bayes Theorem (the formula), or something broader, some algorithm? One can come up with many different
methods to use Bayes Theorem in an algorithm. Bayes Theorem doesn’t seem to have information on how to
gather the initial probabilities, or what to do after the formula is applied. Nor does it indicate how to generate
the various hypotheses upon which it is calculating probabilities. What do I do with these probabilities other than
generate more probabilities?

In one of my high school classes in the 1970s a teacher asked a question of the class having to do with lie
detectors, polygraph tests. The assumptions were that the polygraph was 99% accurate, and that 1% of students
had cheated on an exam. The question was to determine the odds that someone chosen at random would fail
the lie detector and yet be innocent. I had no idea what Bayes Theorem was but easily deduced the answer
(because it is about deduction, not induction) as being only 50%. Obviously 1% of the 99% innocent
will fail the polygraph or .99% of all students, and 99% of the 1% guilty will fail or .99% of all students. The total
testing guilty by the polygraph is .99% + .99% = 1.98%. The percent of all found guilty who are innocent is
.99%/1.98% which is 50%.
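
(A quick sketch of that calculation, for anyone who wants to run the numbers themselves:)

```python
# Sketch of the lie-detector example: the probability that a student who
# fails the polygraph is actually innocent.

def p_innocent_given_fail(accuracy, p_guilty):
    p_innocent = 1 - p_guilty
    fail_and_innocent = (1 - accuracy) * p_innocent  # false positives
    fail_and_guilty = accuracy * p_guilty            # true positives
    return fail_and_innocent / (fail_and_innocent + fail_and_guilty)

print(p_innocent_given_fail(0.99, 0.01))  # 0.5: half of those flagged are innocent
print(p_innocent_given_fail(0.75, 0.25))  # also 0.5, the balancing point discussed below
```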

The teacher then claimed that we shouldn’t use polygraphs in our judicial systems for this reason, they just don’t
work. I picked the numbers here to work out at 50% but I believe he picked numbers where the odds of finding
the innocent guilty were higher in order to bolster his position. I think his actual example had around two
innocent people falsely convicted for each guilty person.

When using Bayes the initial percentages matter quite a bit. In fact, here I picked a balancing point where the
answer was 50%. There are an infinite number of other assumptions for the initial accuracy and innocent percents
that will yield an answer of 50%. All one need do is assume that the polygraph accuracy is equal to the percent
of innocent people. A 75% accurate test with 75% innocent students will on average find one innocent student
guilty for each guilty student found guilty.
In order to understand what the math is doing I look for such boundary conditions and special cases. Another
boundary condition is when the accuracy of the polygraph is 50%. When that is the case the final answer of how
many innocents are convicted exactly matches their proportion in the initial population. An accuracy rate
increasingly higher than 50% will tend to cause a false conviction rate increasingly lower than the percentage of
innocents in the population.

For innocent rates increasingly higher than the accuracy rate the polygraph will convict increasingly higher
proportions of innocent people. The converse is also true, the results of using the polygraph get increasingly
better.

Now as a matter of fact the teacher was incorrect. He didn’t recognize that he was using Bayes inside a larger
algorithm. It matters what that outer algorithm is. If the outer algorithm is to test students at random (or to test
everyone) and punish them if they fail the polygraph then his conclusion is correct. However there are other
algorithms that are either worse or better, or both worse and better, depending on your desires. One could, for
instance, just punish everyone if any cheating occurred. Obviously a polygraph with greater than 50% accuracy
is superior to that. Also obviously basing punishment purely on random testing with a polygraph is inferior to not
doing anything when the number of innocents is higher than the polygraph accuracy.

What about superior algorithms? Well any test that tends to reduce the percent of innocent parties will tend to
reinforce the accuracy of the polygraph. A 99% accurate test with 1% innocent rate will tend to falsely convict
only one hundredth of one percent in comparison to all the guilty parties netted. That suggests that one could use
an algorithm with a prior filter to narrow the pool down first to lower the proportion of the innocent to the guilty
before applying the polygraph. You only apply the polygraph to those found guilty by the prior test.

For example you could first restrict your population by testing only the students who passed the test, or only
those students whose answers exactly replicated those of another student, or only those students who scored
higher than they did on similar tests in the past, or those students who moved up in rank in comparison to other
students. You don’t even need to know the actual percentages. You just have to know that it tends to reduce
the proportion of innocents.

I think the main problem is that the general populace doesn’t understand math and will think 99% accuracy =
99% chance of guilt with bad results. One could solve both this issue and also take advantage of the improvement
in accuracy provided by the polygraph by applying it after the jury has made its decision. So maybe you use the
polygraph to find people innocent only after a jury convicts. In order to improve things there would be some
threshold polygraph accuracy required which would depend on the average accuracy of the jury. One might be
able to classify that jury accuracy further depending on the type of evidence used to convict, was it
circumstantial, and the intelligence of the jury.

Of course, one can use Bayes Theorem as a means to choose between the various algorithms. Some algorithms
will ultimately generate better percentages of certainty than others. However Bayes Theorem itself says nothing
about how to go about this. This is in fact a meta-algorithm, an algorithm about algorithms using Bayes Theorem
as a criteria to select between them.

Ultimately this outer algorithm is in fact not Bayes Theorem since his theorem is not in fact an algorithm. I believe
that any such algorithm is in fact Darwinistic in nature, in that it is a form of trial and error.


“Now this statement is a bold statement, there are mathematicians who will argue with me on
this, there are philosophers who will argue with me on this, but I am going to prove it in a book
coming soon, which I will talk about in a moment. And this is my bold statement here, is that,
“Bayes’ Theorem is the mathematical model for all correct reasoning about empirical claims.
Every time you reason correctly you are following Bayes Theorem even if you don’t know it,
and if you aren’t following it you aren’t reasoning correctly.” – From your talk

I’ll buy your book but I don’t have high hopes here unless you expand upon Bayes Theorem to add on some
kind of algorithm.

I hope you realize that your claim here is in fact an empirical one. You are claiming something that is true about
the world. Since this is an empirical claim about all empirical claims it is self-referential. This tends to open it up
to various attacks. Maybe you considered this and have addressed these attacks but if not you should.

I’m not sure what you mean by “follow” here so I’m not sure how to attack the claim. If by “follow” you mean
that all correct reasoning must not violate Bayes theorem given assumptions that are isomorphic to those used
to derive the theorem then it is trivial. That is true of all such theorems. That does not however mean that any
such theorem subsumes all others.

I haven’t actually been keeping up with the Bayesian revolution other than to have read some stuff by Yudkowsky
and comments by random people who are true believers. I didn’t bother to investigate further because I already
understood Bayes better than those trying to convert me, and the fact that they made all sorts of additional false
claims. For example, many do not understand Popper and Critical Rationalism, and many have never heard of
Pan-Critical Rationalism.

They have made the claim, for instance, that Bayesianism subsumes Popperianism, the way Einstein’s theories
subsume Newton’s. They haven’t shown how to my satisfaction. How can a very narrow theorem regarding
probabilities subsume what is in fact an algorithm? It can’t as far as I can tell. Popper and Bartley both recognized
that with science they were dealing with a trial and error algorithm. What one uses as the selection methods (and there
can be many) is iteratively open ended but also subject to selection. So it is in fact a meta-algorithm with
regards to what selection criteria to use during the “trial” phase. As far as I can see the Bayesian theorem is
something one plugs into this portion of the broader Pan-Critical Rationalist algorithm.

Bayesians have also made the claim that ultimately I must be using induction. However, as a Pan-Critical
Rationalist I don’t see how this is true. My ultimate algorithm is trial and error, not induction. One can
understand this with the Theory of Natural Selection. Even though in the end Natural Selection can only create
creatures tailored to situations which have been “observed” by past generations the algorithm does not depend
on any form of inductive reasoning. What it does is throws guesses (mutations) out there to be tested and
rejected by selection (a kind of falsification). I’m certainly using Bayes Theorem when I do some kinds of
empirical reasoning, but I only use it as part (the selective part) of a larger algorithm.

Perhaps I’m guilty of having a hammer and thinking everything is a nail, but from my perspective that is what you
Bayesians look like you are doing. I at least recognize that I have both an algorithmic hammer (pan critical
rationalism) and different sized nail set tools (Bayes theorem being one of them). You Bayesians seem to think a
nail set is a hammer. I will admit however that I don’t fully understand my hammer, and that perhaps it is a little
bent, and perhaps it will work better with a better nail set.

I will say that I am getting a bit irritated with all the pseudo-Bayesians telling me that I use empirical induction
and that is how I know my bedroom floor has not turned into a frothing pit of werewolves every morning. [Yes,
someone really posited this to me] Frankly, I don’t even consider the probabilities involved so that can’t be the
algorithm I am using. There is an infinite set of such possible alternate theories and frankly my brain just isn’t
big enough to do the calculations involved. I may, in some cases, use Bayes Theorem to choose between
possibilities, but not in this case. In this case I have a model of how wooden floors behave that I have accepted,
and it doesn’t have any mechanism to turn into a single, let alone multiple werewolves, so it never crosses my
mind. Hell, I don’t even have a model for how a human could turn into a werewolf, and that is the normal claim.

If by “following” you meant “using” then I would have to disagree with that too. Using simple logic where the
probabilities are 100% and 0% on truth or falsehood really doesn’t count as “using Bayes Theorem”. So when I
disprove someone’s empirical claim by showing a contradiction in that claim, that doesn’t really count as
“following Bayes Theorem” in that regard.

BRIAN MACKER • DECEMBER 5, 2011, 6:02 AM

BTW, I do not actually believe polygraphs should be used, and for many reasons. My point
was to show that the teacher’s reason was wrong. I won’t go into the various reasons.

RICHARD CARRIER • DECEMBER 7, 2011, 6:34 PM

You mean the teacher’s premises were fictional. Their reasoning is still
correct, e.g. too many false positives, in a system which we desire to have
few to none. It sounds to me like they were just creating a toy example to
illustrate that point, so your response seemed a bit pedantic to me. I doubt
they had a scientific study showing that 1 out of every 100 polygraph
subjects is falsely identified as lying. Rather, that clearly looks like a
number they made up as a hypothetical. Obviously any realistic debate
requires exploring what false positive rate we will deem acceptable, and
then finding out if scientific studies can get polygraph testing under that
rate, and under what conditions. I doubt your teacher was unaware of that
fact. They just wanted to illustrate how much harm can be done right
under our nose if we don’t know how to do the basic math. In other
words, I suspect they were trying to teach you how to do the math, the
very math that you yourself then just did (using different premises,
specifically to explore when it would be acceptable to use a polygraph). I
doubt they were claiming their fictional premises were true. (Or if they
were, that is what would be stupid, not the fact that Bayes’ Theorem is
the correct way to analyze the problem.)

At any rate, for the good analysis of when polygraph use would be most
sound (and including your 50/50 example, which suggests your teacher
was working from a stock analogy) see Jonathan Marin’s He Said She
Said.
BRIAN MACKER • DECEMBER 8, 2011, 5:14 PM

Fair enough but I don’t think the teacher was wrong because the premises are wrong. In fact, I think the
teacher was right but for the wrong reasons. The same exact argument could also be made for many
other forms of evidence. If we have to exclude
polygraphs for this specific reason then we need to
exclude all these others too. An accuracy rate of 90-
99% is damn good and most evidence we currently
allow in court would be hard pressed to meet this
standard (confessions, eyewitness testimony,
fingerprints, blood type, etc).

Again, I don’t want to get into the long discussion of why we should actually exclude polygraphs. My first
concern would be that the real accuracy for
polygraphs is much closer to 50% in the first place.
Which is indeed amplified by Bayes. My second
concern with allowing polygraphs is that most people
are math illiterates, and have a poor intuition for math.
They would tend to think 80% correct means more
than it does and give it an over weighing against other
evidence.

A polygraph that was 90-99% accurate could actually be quite useful depending on your goals. If you are
very concerned about accidentally convicting an
innocent person (for example in a death penalty
situation) then you could always use a 99% accurate
polygraph test for exculpatory purposes only, to
overturn a conviction (sort of like we are using DNA
tests now).

BTW, thank you for responding.

RICHARD CARRIER • DECEMBER 9, 2011, 12:06 PM

But I don’t think the teacher was wrong because the premises are wrong. In fact, I think the teacher
was right but for the wrong reasons. The same
exact argument made could also be made for many
other forms of evidence. If we have to exclude
polygraphs for this specific reason then we need to
exclude all these others too.

Any evidence that was that bad, sorry, no, it would not
be good evidence. If DNA evidence convicted the
wrong person 1 out of every 100 times we would
never use it. The only reason DNA is such good
evidence is that its false positive rate is millions or
billions to one, not a hundred to one. And though
fingerprinting has not been formally tested the same
way, its false positive rate must be comparable (as
otherwise we would have thousands of cases of
fingerprint misidentification confirmed by
corresponding DNA evidence, and we don’t, in fact
“being mistaken for someone else” on account of
identical fingerprints has never happened to my
knowledge).

An accuracy rate of 90-99% is damn good…

No, it isn’t. It’s damn shitty. Would you get into a car
that has a 1 in 100 chance of exploding? I sure hope
not. So why would you convict someone on a similar
condition of uncertainty? A 1 in 100 chance of being
wrong is not “beyond a reasonable doubt,” since it
would mean for every 100 juries, 1 innocent man gets
sent up. I would not want to be on a jury that acts so
recklessly, much less would I want to be tried by one!

1 in 100 is good enough for other things, where the risks of being wrong are low, or where there is no
alternative (e.g. a “1 in 100 chance a UFO is a
spaceship” compels us to conclude it’s probably not a
spaceship–even though we must confess uncertainty,
we can by no means conclude it “is” a spaceship in
such a case and thus it would be irrational to act as if it
was).

…and most evidence we currently allow in court would be hard pressed to meet this standard
(confessions, eyewitness testimony, fingerprints, blood type, etc.)

First, courts are now turning against the reliability of confessions and eyewitness testimony for the very
reason that they are indeed unreliable. Second,
fingerprints are extremely unlikely to be that unreliable
(as I noted above, having accidentally identical
fingerprints is so rare as to be nonexistent as far as I
know). Third, no one is ever convicted on blood type.
By itself blood type is solely an exclusionary test, and
this is exactly what gets argued in a trial (that the
accused has the same blood type is not evidence of
guilt, it only serves not to exclude her), and it can count
as part of a collection of evidence that is together
much less likely to be in error than “1 in 100,” but the
reason lie detectors can’t be used that way is that
juries are so hugely biased by them (a failed lie
detector test causes juries to hugely overestimate the
likelihood of guilt, so much so that such evidence is
simply too prejudicial to allow, without adding a
massive education section to the trial, which hardly
anyone ever wants to pay for; notably, this is the same
reason the rules of evidence for eyewitnesses are being
radically revised: juries hugely overestimate the
probability of guilt on an eyewitness id), whereas they
correctly grasp the probabilities of misidentification by
blood type.

Again, I don’t want to get into the long discussion of why we should actually exclude polygraphs. My
first concern would be that the real accuracy for
polygraphs is much closer to 50% in the first place.
Which is indeed amplified by Bayes. My second
concern with allowing polygraphs is that most
people are math illiterates, and have a poor
intuition for math. They would tend to think 80%
correct means more than it does and give it an
over weighing against other evidence.

I think we agree on both points.

A polygraph that was 90-99% accurate could actually be quite useful depending on your goals. If
you are very concerned about accidentally
convicting an innocent person (for example in a
death penalty situation) then you could always use
a 99% accurate polygraph test for exculpatory
purposes only, to overturn a conviction (sort of like
we are using DNA tests now).

Releasing 1 out of every 100 guilty murderers is not a good tool to deploy. I can’t see any reason to suggest
using it. The reason DNA is used is that it can’t
release so many guilty people, because it is massively
more reliable than that. But using lie detectors to
commute death penalties to life sentences would be
viable, since the risk is then much lower.
RICHARD CARRIER • DECEMBER 7, 2011, 5:48 PM

Brian, most of your concerns will indeed be answered in Proving History.

The only one that isn’t relevant is your conflating epistemic queries with utility functions. I
explicitly limited my talk (in one of the very first slides) to ascertaining whether empirical
claims are true (lit. “how we determine the most likely explanation of a body of evidence”).
Making decisions about actions requires a utility function, i.e. what you value and applying
that to measures of risk. Risk measures are Bayesian. Value measures are not, except in a
subsidiary way that negates your concern. They are not even probabilistic. Thus your
discussion of polygraphs has nothing to do with what I mean, since that involves answering
questions such as “is punishing a random 50% better than punishing all 100%?” which is
Bayesian only when asking for “the probability that it is true that punishing a random 50% is
better than punishing all 100%,” yet that probability is so easily demonstrated to be extremely
high, you don’t need to run the numbers. You are thus confusing “x has a value of y” with
“the probability is z that x has a value of y.” Bayes’ Theorem tells you z. Thus it didn’t even
occur to you to query how you knew the statement “punishing a random 50% is better than
punishing all 100%” was true or what probability you would assign its being false…whereas
once you start thinking about it in those terms, then it becomes obvious that it’s Bayes’
Theorem all the way down. (See my discussion of how values and morals reduce to empirical
facts in the last chapter of The End of Christianity.)

Otherwise your example of narrowing the reference class before applying a polygraph only
proves my case: that’s a very Bayesian thing to do, which can be accomplished by updating
your priors, or directly resampling the new reference class. In fact, the reason this works is
precisely explained by Bayes’ Theorem, and that’s my point. You are reasoning like a
Bayesian even when you don’t know it…and notably, this is precisely when you are
reasoning correctly. In my talk I gave a directly analogous example with people named Jayne
(using the hidden gender guessing game, instead of a polygraph).
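
A minimal sketch of that equivalence, with invented numbers: updating a broad prior on membership in the narrower reference class gives exactly the same answer as sampling the narrower class directly, because both are the same application of Bayes’ Theorem.

```python
# Hypothetical population: 30% lie about X; liars are likelier to have a motive.
p_liar = 0.30
p_motive_given_liar = 0.80
p_motive_given_honest = 0.20

# Route 1: Bayesian update of the broad prior on the evidence "has a motive".
posterior = (p_liar * p_motive_given_liar) / (
    p_liar * p_motive_given_liar + (1 - p_liar) * p_motive_given_honest)

# Route 2: directly "resample" the narrowed reference class (people with a motive).
liars_with_motive = p_liar * p_motive_given_liar
everyone_with_motive = liars_with_motive + (1 - p_liar) * p_motive_given_honest
direct_rate = liars_with_motive / everyone_with_motive

print(posterior, direct_rate)  # both ~0.632: the two routes agree exactly
```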

Likewise, when you quite rightly note that there are many different ways to run a Bayesian
thought process without even knowing we are (i.e. many “algorithms” that run a Bayesian
function on a data set) you are confirming the very argument of Proving History. It’s like
Windows or any other OS to binary machine language: it’s all binary machine language, no matter what operating system you are using or even what software language you are coding in (be it HTML or Java or C++). It can all be reduced to the same thing. And when it doesn’t so reduce, it
doesn’t run. Analogously, algorithms for determining the probability that a proposition is true
are either valid or invalid, and if valid, always reduce to Bayes’ Theorem, whether you need
to so reduce them or not (and often you needn’t, but the reduction still confirms why any
higher-order method is valid).

And yes, this is even true for P=1 and P=0 propositions in strict logic (e.g. Aristotelian
syllogisms). There is no sense in which using 0s and 1s is “not” Bayesian. That you don’t
need to reduce an Aristotelian syllogism to a Bayesian equation (since multiplying with 0s and
1s gives you a foregone conclusion before you even need to write the formula) is irrelevant to
the fact that it does so reduce, which fact further verifies the proposition that all valid
reasoning does. Likewise you don’t need to develop a formal proof from ZFC set theory that
1 + 1 = 2 because the conclusion is obvious without it, yet the fact that you can reduce 1 + 1
= 2 to such a formal proof is precisely what validates arithmetic as logically sound.
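
For anyone who wants to see that limiting case worked out, here is a minimal sketch (my own illustration, not from the talk): plug deductive 0s and 1s into the theorem and the conclusion is forced before any real calculation is needed.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(h|e) for a hypothesis and its negation (two-term form of Bayes' Theorem)."""
    num = prior_h * p_e_given_h
    return num / (num + (1 - prior_h) * p_e_given_not_h)

# Deductive case: the evidence is impossible unless h is true (P(e|~h) = 0),
# so any nonzero prior forces P(h|e) = 1, a foregone conclusion.
print(posterior(0.5, 1.0, 0.0))  # -> 1.0

# Ordinary inductive case, for contrast.
print(posterior(0.5, 0.9, 0.3))  # -> 0.75
```
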
Finally, it’s not relevantly correct to say “Bayes’ Theorem doesn’t seem to have information
on how to gather the initial probabilities,” since I assume you don’t mean to confuse the form
of an argument (which ensures the logical validity of an argument) with ascertaining the truth
of its premises (as necessary for establishing an argument is sound). The latter is just the
output of another Bayesian calculation. And so on in turn. Even when the math is so obvious
it needn’t be done (as in your werewolf-seething floor posit). It’s still Bayes’ Theorem all the
way down (all the way down to P=1 propositions about raw uninterpreted experience: see
my Epistemological End Game). And it’s this fact that establishes your assumptions are
logically valid. Like a formal proof that 1 + 1 = 2, you don’t need the proof any more once
you’ve established the validity of the higher-order process (e.g. arithmetic) that such proofs
justify. But if you were so inclined, you could produce that proof, and it would be the answer
to the question “why is it logically valid to conclude that 1 + 1 = 2?”

Likewise hypothesis development, which is just another product of a growing reference class:
for the set of all conceived hypotheses is an element of the background knowledge [b] that all
Bayesian conclusions are conditional on. As new hypotheses are conceived, that class
changes, and thus so might the Bayesian results. But the proper logical effect of this is always
perfectly modeled by Bayes’ Theorem. Thus, again, it’s Bayes’ Theorem all the way down.
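
A toy sketch of that point, with invented numbers: posteriors are always computed over whatever hypotheses have so far been conceived, so conceiving a new one changes the partition in b and can shift every result.

```python
def posteriors(priors, likelihoods):
    """Normalize prior x likelihood over the hypotheses currently on the table."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: round(j / total, 3) for h, j in joint.items()}

# With only two hypotheses conceived:
print(posteriors({"h1": 0.5, "h2": 0.5}, {"h1": 0.8, "h2": 0.2}))
# {'h1': 0.8, 'h2': 0.2}

# A newly conceived h3 enters the background knowledge; priors are reapportioned,
# and the same evidence now supports h1 less strongly.
print(posteriors({"h1": 0.4, "h2": 0.4, "h3": 0.2},
                 {"h1": 0.8, "h2": 0.2, "h3": 0.9}))
# {'h1': 0.552, 'h2': 0.138, 'h3': 0.31}
```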

In the meantime you can always be delightfully silly and spray paint “Mind the Werewolves”
on the floor by your bed in Johnston typeface.

BRIAN MACKER • DECEMBER 8, 2011, 6:47 PM

Thanks for the reply. I’m aware of some of the things you seem to think
I’m confused on. I do understand the difference between the probabilities,
values, initial premises, etc.

I think you are interpreting my understanding as misunderstanding on these basic topics because I’m misinterpreting your position and that I am therefore miscommunicating, asking the wrong questions, etc. Since you have already done the very hard work of writing a book (several actually), and this is going to be a pointless discussion until I am crystal clear on your position I will not bother clarifying my comment above anymore.

I’ll just assume that you understood my points, although I am not sure you
have. I’ll also assume I don’t understand yours.

I had already considered whether Pan Critical Rationalism (the algorithm) could be justified on Bayesian grounds, and I think the answer is yes. If that is all you are saying then I have no quibble. (Note: early Popper is different from later Popper, which is different from Pan Critical Rationalism.)

Obviously any method that violates Bayesian theory, or other rules of logic, is not valid (if I assume the premises of probabilistic logic are a valid model of reality). I can hold such beliefs in the applicability of math tentatively (à la Popper) until and unless I have credible evidence to the contrary.

I just don’t see the supposed conflict between Pan Critical Rationalism and Bayes’ theorem (à la Yudkowsky). I’m not even sure you hold that same position. I think Bayes merely expands on Pan Critical Rationalism, but I also think Bayes was discovered by a multi-level process of trial and error. Therefore I see Bayes as derivative via process (even if the process that produced it is reduced from Bayes).

Bayes Theorem in that way would be a well tested empirical “guess”. One
can use a Popperian algorithm to discover Bayes without even being
aware of what you are doing, and perhaps *obeying* Bayes all along. In
a sense we are all taking advantage of math all the time without being
aware that we are in many more ways than reasoning.

Question: Do you think any general trial and error algorithm is in fact always Bayesian? Do you think natural selection is Bayesian? Certainly, natural selection generates theories about how the world works, empirical theories. It is a method of empirical knowledge collection. Our eyes, for example, operate on certain theories about perspective, shading, color, etc. These theories must fundamentally be mathematically valid (when they are accurate and do not result in optical illusions). These empirical theories are recorded in our genes.

I’ll buy and read the book. I understand Bayes’ theory very deeply; I think perhaps Bayesianism might be some broader set of claims I’m not aware of.

RICHARD CARRIER • DECEMBER 9, 2011, 11:44 AM

“I think you are interpreting my understanding as misunderstanding on these basic topics because I’m misinterpreting your position and that I am therefore miscommunicating, asking the wrong questions, etc.”

Yes, that’s entirely possible. It’s happened before!

I’m not sure, for example, why you are concerned about the genesis of a tool. That’s called the genetic fallacy. It doesn’t matter where an idea comes from (randomly picked from a bag, revealed by faeries, made up on a hunch, arrived at from trial and error, learned from your dad, …), its validity and soundness are determined by entirely independent factors (such as logical consistency and coherence with accumulated data).

Thus your statement “Bayes Theorem in that way would be a well tested empirical ‘guess’” doesn’t
make any sense to me. Bayes’ Theorem is a logically
proven theorem: it is necessarily true. That’s a lot
better than a guess. Perhaps you mean its application
to certain contexts, as that would require testing, but all
that such testing would determine is the practicality of
applying BT, not the correctness of its conclusions
given the assigned data, which would not need to be
tested, because it is already known to be certainly true.
In short, if the prior probability and the likelihoods
cannot be empirically denied, then neither can the
conclusion BT then entails. That fact doesn’t have to
be tested. It’s just true. Full stop. (Although a logical
proof is “tested” in a trivial sense: see my treatment of
this point in Sense and Goodness without God
II.3.2, pp. 53-54). And for any empirical question
there is always some prior probability and likelihoods
that cannot be denied (owing to the method of arguing
a fortiori as I explained in the talk, using resurrection
as an example). Uncertainty only arises when you want
the conclusion to be X and you can only get that
conclusion by using a prior or a likelihood that can be
denied (or at least doubted), but that’s a defect of your
unreasonable desire to want the conclusion to come
out one way, not a defect of BT. In fact it is BT that
then demonstrates that desire to be epistemically
defective. At any rate, you are right, I will elaborate on
these points a lot more in Proving History.

Do you think any general trial and error algorithm is in fact always Bayesian?

Yes. Or rather, the less Bayesian it is, the less reliable that algorithm will be. (Since there are logically invalid
methods that “work,” they just don’t work reliably.
Confirmation bias, for instance, as exhibited in our
perceptual organs that produce pareidolia.)

Do you think natural selection is Bayesian?

No. “Natural” selection is not a thinking process and has no goals. It thus is not concerned with testing
hypotheses. It can produce organs that are, but then
it’s those organs that may or may not be Bayesian. As
it happens, brains have partly evolved to be Bayesian
computers: some of the older and more autonomic
(and thus most evolved) brain centers operate very
reliably with innate Bayesian functions (recently
demonstrated for certain features of the human
perceptual system), but not all brain systems do this;
thus brains as a whole rely on a more diverse hodge-
podge of heuristic functions, whatever “works” that
evolution “chanced upon” by accident. In fact, brains
rely not on single heuristics at all but a system of them
acting as checks against each other. For example, our
brains are wired for agency overdetection (we “over
detect” motives in events, like bushes rustling) because
a rapid detection system can’t be overly complex and
simple systems always err and so the only choice is
which way to err: underdetection or overdetection;
underdetection kills more, so overdetectors were
selected to survive. But then our brains have systems
that check this error (we turn our heads and investigate
and assess if it’s windy etc.; we learn about our biases
and develop tools to get more accurate results; and so
on), so the overall effect of these interacting systems may
be Bayesian, and at any rate it will be more reliable in
direct proportion to how Bayesian it is. Which is why
we should train ourselves to be more Bayesian.
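
One way to make the “which way to err” point concrete is a quick expected-cost comparison (all numbers invented): when a miss is far more costly than a false alarm, the crude detector that over-detects is the cheaper error to be stuck with.

```python
# Invented numbers: a crude detector must err one way or the other.
p_predator = 0.01        # how often a rustling bush really is a predator
cost_false_alarm = 1     # wasted vigilance
cost_miss = 1000         # getting eaten

# Over-detector: treats every rustle as an agent (no misses, many false alarms).
overdetect_cost = (1 - p_predator) * cost_false_alarm
# Under-detector: ignores every rustle (no false alarms, misses every predator).
underdetect_cost = p_predator * cost_miss

print(overdetect_cost, underdetect_cost)  # 0.99 vs 10.0: over-detection is cheaper
```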

Our eyes, for example, operate on certain theories about perspective, shading, color, etc. These theories must fundamentally be mathematically valid (when they are accurate and do not result in optical illusions). These empirical theories are recorded in our genes.

In this case you are right. Science just recently proved the systems you are talking about have evolved to
operate using the function described in Bayes’
Theorem. As are other brain systems (see the recent
Science News article on this). In fact a whole field is
dedicated to examining these kinds of features of
neurocognition. Wikipedia even has an article on it.
But not all optical systems have evolved that way (e.g.
pareidolia clearly is not operating on Bayesian
principles but on a principle of overdetection).

LYKEX • DECEMBER 5, 2011, 5:25 AM

A good, clear talk. I had next to no knowledge of this subject beforehand, but I was happily taking notes and
feel that I now get at least the basics.

I’ve just tested it by trying to calculate the probability of the Jesus-turning-water-into-wine story being true. I
came up with 1/450,000 and I was being pretty generous to the Christian position.

RICHARD CARRIER • DECEMBER 7, 2011, 4:55 PM

Interesting. If you feel inclined, you may share your analysis here.

LYKEX • DECEMBER 10, 2011, 9:42 AM

I’d be happy to.

The hypothesis is that the water-to-wine story was a real historical event. The evidence
available is the biblical texts. The background knowledge relates to how often water tends to
turn into wine, absent normal fermentation.

P(h/b) I set at 10^-6, literally one in a million. I looked up statistics on sales of bottled water.
In my own little country, Denmark, several million bottles are sold every year and yet no
stories of water bottles filled with wine ever surface.
Clearly the chance of a volume of water suddenly turning into wine is infinitesimally small,
easily smaller than one in a million, but I went with the approach of being as kind as possible
to the opposing view.

P(e/h.b) I set at one. Arguably, it’s quite possible for a real event to be forgotten over such a
long span of time, even such a remarkable event as that. However, again, I was being kind.

P(-h/b) should really have been 0.999999, but I was doing the calculation by hand, so I just
shortened it to 0.9. If anything, this favours the theist argument once again. Besides, we’re
only looking for an approximation.

Finally P(e/-h.b). This was the number I had most trouble with. I thought about using the
argument that there are miracle stories from other religions. It’s unlikely that a Christian would be willing to accept some of the stories told about Hindu sages or Muhammad, for example.
If we reject the validity of such stories, the probability of stories accumulating despite having
no factual basis must be set quite high. In the end, I went with a simple 50/50, since I was
reasonably sure that I could get any hypothetical theist to accept at least that.

That leaves us with:

P(h/e.b) = (10^-6 * 1) / ((10^-6 * 1) + (0.9 * 0.5)) ≈ 10^-6 / 0.45 = 1/450,000

The more I’ve thought about this, the more I’ve come to agree that this is simply a technical
representation of how good thinking is done in the first place. You evaluate the claim, the
evidence for it and any alternative explanations for that evidence.
You can even derive Ockham’s Razor from it. If you have competing hypotheses that equally
explain the evidence, the probabilities would favour the one most in tune with what we
already know about the world, i.e. the one introducing the least number of new elements.
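
A quick sketch of that derivation, with invented numbers: when two hypotheses predict the evidence equally well, the likelihoods cancel and the posterior simply tracks the priors, which is where the hypothesis positing the fewest new elements wins.

```python
def posterior(prior, likelihood, rival_prior, rival_likelihood):
    """Posterior for one of two exhaustive hypotheses."""
    return prior * likelihood / (prior * likelihood + rival_prior * rival_likelihood)

# Both hypotheses predict the evidence equally well...
p_e_given_simple = p_e_given_complex = 0.9
# ...but the one positing fewer new elements gets the higher prior.
print(posterior(0.7, p_e_given_simple, 0.3, p_e_given_complex))  # -> 0.7 (the prior)
```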

I’ll be reading up on this subject more when I have the time. It’s quite fascinating and I’d like
a better understanding of it, especially the underlying math. Sadly, I’ve much neglected my
math, but that’s all the more reason to get back in the game.

I’ve had the chance to get a peek at more of your work. It seems of consistently high quality.
I’ll probably be getting my hands on some of those books you mentioned. Rock on.

RICHARD CARRIER • DECEMBER 13, 2011, 11:40 AM

“P(-h/b) should really have been 0.999999, but I was doing the
calculation by hand, so I just shortened it to 0.9.”

Just FYI, you should never do that, because the difference between .9 and .999999 is actually huge (their complements differ by a factor of 100,000: an event that fails to occur 1 time in 10 fails 100,000 times more often than an event that fails only 1 time in 1,000,000), and meddling with the math that way can easily lead you into error. It’s best to be precise within the limits of your own assumptions, and only round off at the conclusion. Needless to say, though, the result ends up only more strongly for your conclusion.
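
For the curious, here is a quick check of both versions of the calculation above (same inputs as in the comment): the rounded converse prior gives about 1 in 450,000 and the exact one about 1 in 500,000, so precision here does indeed only strengthen the conclusion.

```python
def posterior(prior_h, converse_prior, p_e_h, p_e_not_h):
    """P(h|e.b) with the prior and its converse supplied separately, as in the comment."""
    return prior_h * p_e_h / (prior_h * p_e_h + converse_prior * p_e_not_h)

print(posterior(1e-6, 0.9, 1.0, 0.5))       # ~2.22e-06  (about 1 in 450,000)
print(posterior(1e-6, 1 - 1e-6, 1.0, 0.5))  # ~2.00e-06  (about 1 in 500,000)
```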

The only other issue one might challenge is the assumption that h belongs
to the class of spontaneous transmutation. One could argue that water to
wine only occurs in the presence of Jesus (or some comparable
supernatural agent), and therefore the water bottle industry is not an
appropriate reference class. A more appropriate reference class in this
case would be claims of transmutation and comparable (conjuration, etc.),
or opportunities therefor: that is, the number of all the religious figures who
could in principle pray for God to transmute a material under controlled
conditions but do not because they know it won’t work, tells against it
ever working at any other religious figure’s behest (like Jesus). Stating that
Jesus is in a class of his own is special pleading (there is no non-circular
data to support that) and therefore he remains in the reference class of all
religious figures when determining prior probability (since prior to any
evidence, he is no more likely to be successful at that than anyone else in
his confirmed class). That a religious figure could transmute a material
once in his life has a prior probability of (s+1)/(n+2), where s is 0 (we
have not one confirmed case) and n is the total of all such religious figures
who could have done this; a safe bet is that there have been at least a
thousand of those who have so far had access to controlled conditions
(that equation is called Laplace’s Rule). So you get a much higher prior
of 1/1002 = 0.000998 [etc.] and a converse prior of 0.999002 [etc.]. That
plus your appropriate likelihood of 1 and generous .5 for the other
likelihood, and you get:

(0.000998 x 1) / ((0.000998 x 1) + (0.999002 x 0.5)) = 0.000998 / (0.000998 + 0.499501) = 0.000998 / 0.500499 [etc.] = 0.001994 [etc.] = about 502 to 1 against.
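
For anyone who wants to reproduce those numbers, here is a short sketch using the figures just given (the 1,000 opportunities is the “safe bet” stipulated above):

```python
def laplace_prior(successes, trials):
    """Laplace's Rule of Succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

def posterior(prior, p_e_h, p_e_not_h):
    return prior * p_e_h / (prior * p_e_h + (1 - prior) * p_e_not_h)

prior = laplace_prior(0, 1000)      # no confirmed transmutations in ~1,000 opportunities
print(prior)                        # ~0.000998
print(posterior(prior, 1.0, 0.5))   # ~0.001994, i.e. about 500 to 1 against
```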

Because that’s based on generous estimates, this means if someone


presented evidence they (or Jesus) had transmuted something (without
technological intervention) that was about 500 times less likely on any
other explanation, I would consider the case worth examining more
closely because it is a candidate for being true, and a more rigorous
analysis would be warranted. But any less evidence than that and I have
no reason to consider it at all. The odds are already 500 to 1 the story is
made up. And if I gathered evidence that other stories in the Gospels are
made up, then the odds this one was made up would likewise rise.

The math gets more complicated when you consider multiple hypotheses,
e.g. ~h includes h1 “Jesus deployed a conjuring trick like the Bacchic
temples of that time did” and h2 “the story was wholly made up after
Jesus died”; likewise the result changes when you consider the evidence
as a whole, e.g. that such an astounding miracle was never heard of by
Mark, Matthew, or Luke (despite Luke even saying he followed
everything precisely), and only gets reported in John decades after the
others wrote, is much more probable on “made up” than on “actually
happened,” so P(e|h) even if we’re generous is not 1 but at best .33
(because e includes the silence of all those earlier authors, which h does
not explain as well as h2 does).

BERTRAM CABOT • DECEMBER 9, 2011, 12:25 AM

Great talk. I have tested it by trying to calculate the probability that all existence, life, mind and reason itself are the product of mindless undirected forces.

I came up with approximately 1/25,000,000,000,000.

And I gave the benefit of the doubt to the Philosophical Naturalist position.

RICHARD CARRIER • DECEMBER 9, 2011, 12:55 PM


Interesting. Let’s see the analysis.

LYKEX • DECEMBER 11, 2011, 5:26 PM

Interesting. I’d love to see how you got there.

I’m a little leery of the approach, though, considering that your hypothesis is really a
composite of several separate ideas with differing amounts of evidence for them.
E.g. we know for a fact that matter can be produced without intervention from any sentient
beings, but we have no such verification for life. We have viable theories, but we’ve never
actually directly observed the occurrence of life.

As such, I think I would prefer to evaluate each separate hypothesis on its own, rather than
lumping them together.

RICHARD CARRIER • DECEMBER 13, 2011, 11:43 AM

LykeX, re: “the occurrence of life,” I’m not sure what you are responding
to. But I analyze whether the origin of life argues for or against intelligent
design in Bayesian terms in The End of Christianity pp. 289-92, with
references.

KARIMGHANTOUS • DECEMBER 9, 2011, 2:45 AM

I really liked your talk so thanks! I still don’t fully understand it but I’ll get there eventually.

Perhaps you could have made it clear that the values can be rough approximations – i.e. when you convert
qualitative statements (e.g. ‘very likely’) into quantitative ones (e.g. 0.85) you don’t have to stress about being
precise. This is what I didn’t get when you brought up BT a while ago on your previous blog. I can imagine this
is one of the first objections you’d get.

I’ll definitely be buying the McGrayne book. I did not know that BT was used so widely. That really is eye-
opening. I dare say that applying BT can be quite fun.

You once made a point that schools teach too much of the wrong maths to students. I’m beginning to see why:
BT and statistics are so much more valuable to most people than some of the things that we actually get taught.

I have a weird question: can we use BT to help us win lotteries?


BEN • DECEMBER 9, 2011, 2:53 AM

“I have a weird question: can we use BT to help us win lotteries?”

Haha, I think Bayes’ theorem would decisively illustrate why you shouldn’t even *play* the
lottery.

RICHARD CARRIER • DECEMBER 9, 2011, 11:12 AM

I actually make that point, that you don’t have to be exact, in the talk. I spend several minutes
on it, and even mention the same point several other times throughout. I’ll cover it in more
detail in Proving History.

On winning lotteries: no. Unless winning lottery numbers are not randomly selected. Then
you could use BT to detect what numbers are more or less likely to win. But lotteries seem
pretty well randomized to me, and even if they have unknown regularities, the deviations from
random are not likely to be large enough to be worth the labor of detecting them anyway.

RICHARD CARRIER • DECEMBER 13, 2011, 12:02 PM

Note: I have changed the order in which the comments display. See here for why and how to
compensate.

LYKEX • DECEMBER 13, 2011, 12:04 PM

Hey, Richard

I was responding to Bertram Cabot’s post. The nesting of comments seems to have disappeared, causing the
confusion.
Thanks for your response to my other post. Your critiques are valid. I’ll have to think more on it.

RICHARD CARRIER • DECEMBER 14, 2011, 3:38 PM

Note on comment posting and response delays: please see the recent post on my current status.

POTIRA • DECEMBER 15, 2011, 2:37 PM

I took my time and subtitled your Skepticon IV talk in Brazilian Portuguese. I wanted to show it to some friends
whose English skills would cripple comprehension. It was unlikely anyone else would do it, so there…

http://www.universalsubtitles.org/pt/videos/YA51kvO5cfeV/pt-br/202883/

PS: You talk way too fast, and have an absurd record of incomplete sentences.

RICHARD CARRIER • DECEMBER 30, 2011, 5:11 PM

Oh, yes, Potira, I do indeed speak very colloquially. I actually think that’s how we
should speak, because it’s less elitist and more readily understandable. And sometimes funny
(with every intention of being so).

Anyway, the subtitled video, that’s really neat. Thanks!

GGDFAN777 • APRIL 10, 2013, 7:35 AM

Ikeda and Jeffreys’ argument fails. As astrophysicist Luke Barnes points out:

“On the other hand, the argument of Ikeda and Jeffreys obviously fails, since they don’t even take into account
the fine-tuning of the universe for intelligent life. They only take into account the fact that our universe is life-
friendly, not that life-friendliness is rare in the set of all possible universes.”

For more details see:

http://letterstonature.wordpress.com/2010/10/26/terms-and-conditions-a-fine-tuned-critique-of-ikeda-and-jeffreys-part-1/

http://letterstonature.wordpress.com/2010/11/05/what-do-you-know-a-fine-tuned-critique-of-ikeda-and-jefferys-part-2/

See also his critiques of Victor Stenger:


http://arxiv.org/pdf/1112.4647v1.pdf

http://letterstonature.wordpress.com/2012/05/02/in-defence-of-the-fine-tuning-of-the-universe-for-intelligent-life/

Another interesting article that might be related where Sober, Ikeda and Jeffreys are mentioned:

http://arxiv.org/pdf/0802.4013v2

RICHARD CARRIER • APRIL 10, 2013, 3:42 PM

If that is what Barnes said, then he clearly doesn’t know what he’s talking about. For Ikeda
& Jeffreys very specifically address that issue. For Barnes to claim they don’t means he is
either lying or hasn’t really read their argument, or doesn’t understand it.

If he can’t be bothered to get that right, I do not trust he has gotten anything right. I can see
no reason to waste time reading him.

At any rate, I make their argument clear and irrefutable in my chapter on this in The End of
Christianity.

LUKE BARNES • APRIL 23, 2013, 5:26 PM

Ikeda and Jeffreys say:

“Our basic argument starts with a few very simple assumptions. We believe that anyone who accepts that the
universe is “fine-tuned” for life would find it difficult not to accept these assumptions. They are:

a) Our universe exists and contains life.

b) Our universe is “life friendly,” that is, the conditions in our universe (such as physical laws, etc.) permit or are
compatible with life existing naturalistically.

c) Life cannot exist in a universe that is governed solely by naturalistic law unless that universe is “life-friendly.”

We will show that if assumptions (a-c) are true, then the observation that our universe is “life-friendly” can never
be evidence against the hypothesis that the universe is governed solely by naturalistic law.”

These assumptions (a-c) are all true, but none of them describe the fine-tuning of the universe for intelligent life.
b) states that our universe is able to sustain life via only the operation of natural causes. c) serves as a definition:
a life friendly universe is by definition a universe which can sustain life using only natural causes.
None of this deals with the fine-tuning of the universe for intelligent life, which states (roughly) that in the set of
possible universes, the subset that permits the existence of intelligent life is very small. Leonard Susskind puts it
as follows: “The Laws of Physics … are almost always deadly. In a sense the laws of nature are like East Coast
weather: tremendously variable, almost always awful, but on rare occasions, perfectly lovely.” (from The Cosmic
Landscape).

The argument from fine-tuning doesn’t aim to show that the universe is not governed solely by natural law. It
aims to show that the laws of nature themselves are the product of an intelligent cause. It doesn’t try to show that
God needs to stick his finger into the universe to create and sustain life, which would otherwise not form. Rather,
God is posited as the explanation for the fact that our universe has laws which can produce life by natural causes
alone. Thus, the argument of Ikeda and Jeffreys is aimed at the wrong target.

And speaking of not knowing what one is talking about, anyone who thinks that “whereas in the 19th century
there were some twenty to forty “physical constants,” there are now only around six”
(http://www.infidels.org/library/modern/richard_carrier/finetuning.html) is laughably ignorant of modern
physics. That statement is as embarrassing as a philosopher asserting that Aristotle was Belgian. There are at
least 31: 26 from particle physics (http://math.ucr.edu/home/baez/constants.html) and at least 5 from
cosmology (http://arxiv.org/pdf/astro-ph/0511774v3.pdf).

RICHARD CARRIER • APRIL 23, 2013, 6:06 PM


These assumptions (a-c) are all true, but none of them describe the fine-
tuning of the universe for intelligent life.

Yes, they do: (b) encompasses all facts pertaining to it, including fine tuning. Their argument
goes on to prove that fine tuning can never be evidence of design. Never. Not ever. No
matter how finely tuned the tuning is. Because godless universes can only ever be finely
tuned. In other words, if there is no God, then there cannot ever be observed any universe
but a finely tuned universe. A finely tuned universe is a 100% expected observation on
atheism. Because we could never be anywhere else.

If you don’t understand this, then try reading my re-presentation of the argument in The End
of Christianity, which spells it out in plainer terms with more examples and analogies and a
discussion of what the math is doing in their argument. I wrote that chapter specifically for
that purpose.

As to ginning up the constants by referencing the Standard Model, you are falsely assuming
the Standard Model properties are random and not derivative of more basic constants. If one
grants that assumption (we do not know if it is the case, and what is not known cannot
generate knowledge, but whatever), then yes, you can say there are many more constants
than the core six. You would be declaring yourself against Superstring Theory in that case.
Which is fine. But no less speculative than affirming Superstring Theory. So as arguments
go, that’s a wash.

LUKE BARNES • JUNE 10, 2013, 11:53 PM

Nope. b) is a statement about the properties of this universe. Fine-tuning cases are about
how the universe would have been if this or that constant were slightly different, so they are
not statements about this universe. They are counterfactuals. Ikeda and Jeffreys’ argument
makes no reference to what the universe would be like if things were different, and so doesn’t
address fine-tuning.

“A finely tuned universe is a 100% expected observation on atheism”. Given that life exists, a
life-friendly universe is expected, given atheism. That’s what Ikeda and Jeffreys proved. The
question that fine-tuning raises is why life and its life-permitting universe exist at all. That is
not expected on atheism, since on atheism there is no a priori expectation for what exists.
That life exists cannot be the explanation for why a life-permitting universe exists at all.

What “core six”? What are you talking about? There never were only six fundamental
constants. Did you misinterpret the title of Rees’ book “Just Six Numbers”?

“you are falsely assuming the Standard Model properties are random and not derivative of
more basic constants”. Nope. The fine-tuning of the constants of the standard model can be
translated into limits on GUT parameters, and string theory merely changes the label on these
constants from “parameters of the Lagrangian” to “parameters of the solution” without
affecting their need to be fine-tuned. As such, string theory “has anthropic principle
written all over it” (http://arxiv.org/pdf/1112.4647v2.pdf). String theory provides the meta-
theory against which the parameters of the standard model can be seen as randomly (or at
least non-determinately) assigned parameters.

RICHARD CARRIER • JUNE 12, 2013, 7:01 PM

Your first paragraph is false. Jeffreys’ argument does reference what the
universe would be like if things were different: we wouldn’t be here to
observe it. That’s in their math. That is, in fact, crucial to their argument.

Your second paragraph does not address their argument, which is that we
know of no way a life-bearing universe can exist without a God unless it
is finely tuned. Certainly, if you can describe a non-fine-tuned universe
(which was not created or designed or governed by any God) that would
produce intelligent life, then you can have the beginning of a counter-
argument to their point. So, can you?

(They are not arguing, BTW, that “life exists explains why a life-permitting
universe exists,” and if you think that is what they are arguing, you neither
understand their argument nor even how Bayes’ Theorem works
generally.)

Your third paragraph suggests you don’t know what the fundamental
constants are, or what of them remain when we discount the trivia of the
Standard Model parameters. But that’s a red herring, since the number of
constants is irrelevant to the Ikeda-Jeffreys-Sober argument. Which you
still show no sign of understanding.

LUKE BARNES • JUNE 13, 2013, 4:43 AM

If the universe weren’t life permitting, then we wouldn’t be here to observe it. That’s the
anthropic principle. That’s still not the fine-tuning of the universe for intelligent life: in the set of
possible universes, the subset of life-permitting universes is very small. Ikeda and Jeffreys
don’t take into account the fact that life-permitting universes are rare in the set of possible
universes.

An analogy. Suppose I survive an explosion at close range. The shrapnel destroys the entire
room around me, except for a remarkably me-shaped outline around where I am standing.

a) I exist after the explosion.


b) The explosion was me-friendly – the initial conditions of the explosion were such that,
without the shrapnel pieces needing to be guided on their post-explosion trajectory, they
“naturally” (i.e. ballistically) miss my body.
c) I would not survive an unguided explosion unless the initial conditions were me-friendly.

From this, I conclude that my survival of the explosion is not evidence against the hypothesis
that the explosion pieces were unguided post-explosion.

Now consider premise d …


d) In the set of initial conditions for the explosion, the subset that are me-friendly is very
small.

My point: c) and d) are not the same. c) is essentially a tautology. d) is not necessarily true. c)
would still be true if the explosion were a balloon popping, but d) would be false in that case.
d) suggests we investigate the possibility that the explosion was rigged in our favour.

I didn’t say that Ikeda and Jeffreys argued as such. I made that point in response to your
claim that “A finely tuned universe is a 100% expected observation on atheism”. That claim is
false.

p(A life permitting universe | Atheism, life exists) = 1 (by the anthropic principle)
p(A life permitting universe | Atheism) is not 1, since there is a possible world in which a
universe exists with no life-forms and no God.

I gave a Bayesian argument to show that, even assuming the premises of Ikeda and Jeffreys
including the anthropic principle, the fine-tuning of the universe increases the probability that
the physics of our universe (parameters, initial conditions, laws of nature) were not effected
indifferently of the requirements of intelligent life (with caveats). So, Bayesian Man, where did
I go wrong?

Frank Wilczek: “At present, and for the past 35 years or so, the irreducible laws of physics –
that is, the laws which we don’t know how to derive from other ones – can be summarized in
the so-called standard model. So the standard model appears, for the present, to be the most
appropriate a priori context in which to frame the definition of fundamental constants. … A
fundamental constant is a parameter whose value we must supply in order to specify the
Lagrangian of the standard model. … Like the standard model of fundamental physics, the
standard model of cosmology requires us to specify the values of a few parameters. Given
those parameters, the equations of the standard model of cosmology describe important
features of the content and large-scale structure of the universe”.
(http://www.frankwilczek.com/Wilczek_Easy_Pieces/416_Fundamental_Constants.pdf)

He’s a Nobel prize winning particle physicist. But do go on. It entertains me to see a historian
make a complete fool of himself while discussing physics with a physicist. Tell us about the
“trivia” of the standard model. Explain why they should be discounted in a discussion of the
fundamental properties of our universe. What, according to you, are the “six” “core”
constants of nature? I ask again … what are you talking about?

RICHARD CARRIER • JUNE 14, 2013, 9:57 AM

You simply need to read my chapter on this (chapter twelve of The End
of Christianity). Everything you are ineptly attempting to argue is
already refuted there, and I write chapters like that specifically so I don’t
have to keep repeating myself. So address my actual arguments there, or
admit you can’t and go away.


I didn’t say that Ikeda and Jeffreys argued as such. I made
that point in response to your claim that “A finely tuned
universe is a 100% expected observation on atheism”. That
claim is false.

It is only false if you can come up with a way to get a life bearing universe
without a god or fine tuning.

So, please present one. Or admit you can’t.


LUKE BARNES • JUNE 14, 2013, 9:26 PM

“You simply need to read my chapter on this”

Deal. If the library can track down a copy.

RICHARD CARRIER • JUNE 17, 2013, 11:57 AM

Any library with WorldCat Interlibrary Loan will be able to. And that’s nearly every library in the U.S. (I don’t know about Australia). You can also get the book on Kindle for under nine dollars U.S. (and nearly any mobile device can run the Kindle app now).

Then respond to the arguments in my chapter.

GGDFAN777 • APRIL 25, 2013, 1:18 PM

When you say:


“if there is no God, then there cannot ever be observed any universe but a finely tuned universe.”

that may be true if you define a ‘finely tuned universe’ as a universe whose constants/laws allow for life.
However, it is not the case that given naturalism and the fact that we are alive we should expect to observe that
“in the set of possible universes, the subset that permits the existence of intelligent life is very small”, which is
actually very surprising! and is what fine tuning is really about.

So, it seems to me that when we take ‘F’ to be: “in the set of possible universes, the subset that permits the
existence of intelligent life is very small”, it is not the case that P(F|LN)=1, contra Ikeda/Jefferys. In fact, on this
view P(F|LN)<<1 and P(L|NF) < P(L|N).

Moreover, when you say:


"A finely tuned universe is a 100% expected observation on atheism." It seems to me here, the only thing that
results from this is that:

"If i observe that I am alive, I should observe a universe that has constants/laws that allow for life."

But it doesn't follow from atheism that "I will observe that I am alive in a finely tuned universe" (for all we know,
a universe without any life could exist)

Nor that "the subset of possible universes that permits the existence of intelligent life is very small"
RICHARD CARRIER • APRIL 26, 2013, 10:48 AM


However, it is not the case that given naturalism and the fact that we are
alive we should expect to observe that “in the set of possible universes, the
subset that permits the existence of intelligent life is very small”, which is
actually very surprising! and is what fine tuning is really about.

But that’s the problem–we do not have any observations of other universes; and we could
only ever observe a fine-tuned one. Thus “in the set of possible universes, the subset that
permits the existence of intelligent life is very small” has no effect whatever on the probability
that this universe is or is not designed. It is a logical fallacy to argue from “in the set of
possible universes, the subset that permits the existence of intelligent life is very small” to the
conclusion that this universe was more likely designed than not. And that is what Ikeda &
Jefferys demonstrate. And as I show in The End of Christianity, they’re right (their
argument was even independently demonstrated by Sober).


So, it seems to me that when we take ‘F’ to be: “in the set of possible
universes, the subset that permits the existence of intelligent life is very
small”, it is not the case that P(F|LN)=1, contra Ikeda/Jefferys. In fact, on
this view P(F|LN)<<1 and P(L|NF) < P(L|N).

Your math is wrong.

First, what we observe is U (a finely tuned universe). And P(U|~design) does in fact = 1.
Because P(~U|~design) = 0 (i.e. we could never observe a non finely tuned universe,
because we would never be in one).

If we talk about naturalism specifically as a hypothesis, N, then P(~U|N) = 0 (because we
could never observe ~U, as we would then not exist), which entails P(U|N) = 1. It makes no
difference whatever that “in the set of possible universes, the subset that permits the existence
of intelligent life is very small.”

In fact, P(F|design) is also very small–indeed, it’s exactly the same as P(F|~design), because regardless of whether god exists or not and regardless of whether U was designed or not, F is still always true. Therefore F can never be evidence for one or the other.
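
In odds form the point takes two lines (the particular value below is an arbitrary placeholder): if F is equally probable on design and on ~design, the Bayes factor is 1 and the odds cannot move, no matter how small that shared probability is.

```python
def update_odds(prior_odds, p_e_given_h, p_e_given_alt):
    """Odds form of Bayes' Theorem: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (p_e_given_h / p_e_given_alt)

p_f = 1e-9  # arbitrary: however rare life-permitting universes are among possible ones
print(update_odds(1.0, p_f, p_f))  # -> 1.0: F moves the odds not at all, either way
```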

“If i observe that I am alive, I should observe a universe that has
constants/laws that allow for life.”

But it doesn’t follow from atheism that “I will observe that I am alive in a
finely tuned universe” (for all we know, a universe without any life could
exist)

Yes, it does. Because on atheism you can only ever exist in a finely tuned universe (there is
no other kind of universe you could ever find yourself in). That “a universe without any life
could exist” is moot because we can never, ever, be in such a universe, and thus we can
never, ever observe being in such a universe. Therefore, the probability of observing such a
universe on atheism is zero (we can allow for the exception of one day being able to observe
such a universe from this one, but that’s not relevant here, since such an observation would
then also be 100% expected on atheism…unless it was accomplished by demonstrably divine
assistance, and not just some godless physics).


Nor that “the subset of possible universes that permits the existence of
intelligent life is very small”

That “the subset of possible universes that permits the existence of intelligent life is very small”
is equally true on theism and naturalism. It’s not as if, if God exists, that it would cease to be
the case that “the subset of possible universes that permits the existence of intelligent life is
very small.” So that cannot be evidence of anything.

GGDFAN777 • APRIL 25, 2013, 1:27 PM

Moreover, with respect to the constants, William Lane Craig had this to say on his website (btw, I’m not saying that I endorse everything he says; I don’t know enough of the relevant physics, but I thought it might be interesting):

” Carrier is mistaken when he asserts that there are only about six physical constants in contemporary physics;
on the contrary, the standard model of particle physics involves a couple dozen or so. The figure six may be
derived from Sir Martin Rees’ book Just Six Numbers (New York: Basic Books, 2000), in which he focuses
attention on six of these constants which must be finely tuned for our existence. But this is just a selection of the
constants there are, and new constants, unknown in the 19th century, like the so-called cosmological constant,
which must be fine-tuned to one part in 10^120 in order for life to exist, are being discovered as physics
advances.

In addition to these constants, there are also the arbitrary quantities which serve as boundary conditions on
which the laws of nature operate, such as the level of entropy in the early universe, which are also fine-tuned for
life. If one may speak of a pattern, it would be that fine-tuning, like a stubborn bump in the carpet, just won’t go
away: when it is suppressed in one place, it pops up in another. Moreover, although some of the constants may
be related so that a change in the value of one will upset the value another, others of the constants, not to
mention the boundary conditions, are not interdependent in this way. In any case, there’s no reason at all to
suspect so happy a coincidence that such changes would exactly compensate for one another so that in the
aftermath of such an alteration life could still exist. It appears that the fine-tuning argument is here to stay.”

BRIAN PANSKY • APRIL 27, 2014, 11:04 AM

Certainly Carrier would agree that the fine-tuning argument is “here to stay”. But Carrier shows that the conclusion of that argument is in favor of atheism.

In Craig’s last line it seems he thinks that the number of constants that need to be adjusted makes a difference to the conclusion of the fine-tuning argument. As if 6 constants would yield the conclusion that atheism is true, but some unspecified number (7? 12? 4000?) would yield the conclusion that theism is true.

Craig, or someone, will have to show the math as to how that conclusion changes, because I am not seeing it.
