
What is knowledge?
We’ve been talking about rational belief (or justified belief, the beliefs we ought to
have, etc.), but I said at the beginning that epistemology is the theory of knowledge.
We’ve not said much about knowledge. Let’s talk about knowing.



The ‘traditional’ analysis


Everyone agrees that there are things to know that nobody believes and things we believe that we do
not know. What distinguishes knowledge from mere belief?
Truth?
If a belief is false, it won’t constitute knowledge. If it is true, might it constitute knowledge?
Objection: If we use manipulation to convince the 9am class that there is no life after death and then
convince the 10am class that there is, we have belief. One of these beliefs is true. So, do we have
knowledge?
Response: No, but that’s because their reasons aren’t good. Maybe what we need for knowledge is to
have a true belief that’s held for good reasons.

The ‘traditional’ analysis


Progression.
Knowledge isn’t mere belief. (Because one has to be true and the other needn’t be.)
Knowledge isn’t mere true belief. (Because one has to be held for good reason and the other
needn’t be.)
So, here’s a view:
JTB: A person knows that something is true iff they believe that it is true and they are justified
in this belief because it is based on good reasons.
Is this view any good?

The ‘traditional’ analysis
So, here’s a view:
JTB: A person knows that something is true iff they believe that it is true and they
are justified in this belief because it is based on good reasons.
Is this view any good? Maybe not. Here’s a problem case:
In a text that dates to around the year 770 CE, the Indian philosopher Dharmottara
offers the following case: A fire has just been lit to roast some meat. The fire hasn’t
started sending up any smoke, but the smell of the meat has attracted a cloud of
insects. From a distance, an observer sees the dark swarm above the horizon and
mistakes it for smoke. ‘There’s a fire burning at that spot,’ the distant observer says.
Does the observer know that there is a fire burning in the distance?

A patch
Here’s a diagnosis of what went wrong.
The belief in question is true. It’s also based on good enough evidence to be justified.
But it’s also based on a mistaken assumption—the assumption that it’s smoke that we
see in the distance.
NFL: A person knows that something is true iff they believe that it is true and they
are justified in this belief because it is based on good reasons without being based
on any false assumptions.
This seems to rule out the case that Dharmottara had in mind, but it faces problems
of its own.
A patch
Consider
NFL: A person knows that something is true iff they believe that it is true and they are justified
in this belief because it is based on good reasons without being based on any false
assumptions.
One objection is that it rules out too much knowledge:
Father Christmas: Tiny Tim’s parents tell him that Father Christmas will bring presents this
year. They’ve purchased them already. He now believes he’ll have presents this year.
Another is that it treats things as knowledge that aren’t knowledge.
Fake barns: Using his reliable perceptual faculties, Barney forms a true belief that the object in
front of him is a barn. Barney is indeed looking at a barn. Unbeknownst to Barney, however,
he is in an epistemically unfriendly environment when it comes to making observations of this
sort, since most objects that look like barns in these parts are in fact barn façades.

A better diagnosis?
It seems that even if our beliefs aren’t based on falsehoods, they might be true as a
matter of luck.
To know that p, it has to be not at all an accident that you’re right about p. In
Gettier cases, there’s often a lot of luck that stands between a person and the
facts.
Three ways of fleshing out the details:
1. Causal theory
2. Sensitivity
3. Safety


Causal theory
Goldman’s causal theory. What we need to add to belief and truth to get knowledge is some sort of proper
connection between your belief and the world. In the examples considered thus far, it might seem that the
subjects lack knowledge because there’s not a proper causal connection between the subjects’ beliefs and the
facts. So, why not replace the justification condition with this: the fact that p causes the subject’s belief that p?
Dretske raises an interesting problem for this view. Tom correctly believes that mountain A erupted. His
justification for believing this is that there is solidified lava all around it. But not far from mountain A there is
mountain B. They’re connected in this way: if mountain A had not erupted then mountain B would have
erupted instead. Had B erupted, it would have produced the same pattern of solidified lava.
According to the causal theory, Tom knows that mountain A erupted, because the fact that he believes that
mountain A erupted is causally connected in an appropriate way with the fact that mountain A erupted. But
Tom does not know that it was A that erupted.
Also, think about fake barns cases. It seems the causal connection between the perceiver and the barn is the
same in the case where fakes are nearby (so no knowledge) as normal cases in which there are no fakes
(knowledge).

Sensitivity
According to Nozick:
Sensitivity: You know that p iff: (i) p is true and (ii) You believe p in such a way
that if p weren’t true, you wouldn’t believe p.
To determine whether you meet the sensitivity condition, we have to ask whether your
correct belief is one you would still hold if it had been the case that what you believed
was false. If not, your belief is sensitive. If you would, your belief is insensitive.
(NB: We’re skipping over some details and refinements).



Sensitivity
To determine whether the sensitivity condition is met, you need to know how to evaluate subjunctive conditionals. To determine whether the conditional
as a whole is true, you’re supposed to engage in a bit of pretense: if, contrary to fact, the antecedent had been true, would the consequent have been true
as well? If so, the conditional is true. If not, it is not.
Consider an example:  
(1) If kangaroos didn’t have tails, they would topple over. 
In evaluating (1), you’re supposed to imagine how things would have been if they were as similar to the way things actually are apart from what has to be
changed in order to make it the case that kangaroos don’t have tails.  Let’s suppose that kangaroos rest on their tails to balance, so removing them would
be like removing a leg from a three-legged stool.  We wouldn’t expect them to remain upright if we removed a support and put nothing new in place.  So,
we’d evaluate (1) as true.  Contrast that with this one:  
(2) If kangaroos didn’t have tails, they would take to driving cars. 
In the course of constructing our imaginary scenario we’re supposed to keep fixed as much as we can while changing things just enough to make the
antecedent of (2) true.  By taking away a kangaroo’s tail, we wouldn’t endow it with powers and abilities it doesn’t have now.  Chopping off a part that it
needs to balance wouldn’t bestow upon it the ability to shift gears, put the keys into the ignition, read the driver’s manual, etc.  We’d evaluate (2) as false. 
Notice that we’d evaluate (2) as false even if we think that it’s logically conceivable that there are kangaroos that drive.  God could send a team of angels
and neuroscientists to work over these kangaroos and turn them into drivers, let’s suppose.  That wouldn’t make (2) true. 
In evaluating (2), we don’t ask whether it’s logically possible for the consequent to be true in some imagined scenario where the antecedent is true, but
whether the consequent is true in the closest possibilities in which the antecedent is.  The possibility just described in which kangaroos do have the power
to drive cars is very remote indeed, much more remote than the possibility in which they lose their tails and don’t acquire any new remarkable abilities.   
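The closest-worlds recipe just described can be sketched in code. This is a toy Lewis-style model of my own construction; the worlds, their distances, and the facts at each are illustrative assumptions, not part of the text:

```python
# Toy model: a subjunctive "if A had been the case, C would have been" is true
# iff C holds at the closest world where A holds (assumed worlds and distances).
worlds = [
    {"dist": 0, "tailless": False, "topples": False, "drives": False},  # actual world
    {"dist": 1, "tailless": True,  "topples": True,  "drives": False},  # tails removed, all else kept fixed
    {"dist": 9, "tailless": True,  "topples": False, "drives": True},   # remote: re-engineered driving kangaroos
]

def would(antecedent, consequent):
    """Evaluate 'if antecedent, would consequent' at the closest antecedent-world."""
    closest = min((w for w in worlds if w[antecedent]), key=lambda w: w["dist"])
    return closest[consequent]

print(would("tailless", "topples"))  # True: (1) comes out true
print(would("tailless", "drives"))   # False: (2) comes out false despite the remote driving world
```

The remote driving world doesn’t make (2) true because `would` never consults it: only the closest antecedent-world matters.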


Sensitivity
With this in mind, let’s consider Fake Barns.  Remember that the countryside was populated with realistic fakes.  While
her belief was true, it doesn’t seem Carol knows that it’s true.  According to Nozick, Carol cannot know that there’s a
barn in front of her unless this is so:
(*) if there hadn’t been a barn in front of her, Carol wouldn’t have believed that there was.
It doesn’t seem that (*) is satisfied. Had she not had a real barn in front of her when she formed her belief, she would
have looked at one of the fakes and formed the (false) belief that it was a barn.

In Volcanoes, it also seems that the sensitivity condition isn’t satisfied:  
(*) if the first mountain hadn’t erupted and Doris relied on the same evidence and background beliefs as she actually
did to determine whether the lava came from this mountain, she wouldn’t believe that the lava came from this
mountain. 
It seems that (*) is false. If M hadn’t erupted, N would have and it would have distributed the lava in just the same way it
was actually distributed. Doris would still attribute the lava to M, not N. 

Sensitivity
Not only does the account handle the problem cases we’ve considered thus far, it seems to help us solve a
puzzle about lottery cases.  Suppose your friend gives you a ticket for a lottery drawing that will take place
tomorrow. You know the odds of winning are astronomically small. The drawing is held. Can you know that
your ticket lost without checking the paper?  Intuitively, it seems not. Suppose you check the paper and
check your number against the winning number.  Your ticket had 67-12-34-52-1. The winning ticket, according
to the paper, was 13-4-55-11-9.  Now it seems you know that your ticket lost. 

If you think about it, it’s sort of strange to think that you can’t know that your ticket lost in advance but can
know that your ticket lost once you check the paper. The probability that the paper makes a mistake is
greater than the probability that you’d be wrong about whether your ticket lost.
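The arithmetic behind that last observation can be sketched as follows. The lottery format (five numbers drawn from 1–69) and the newspaper’s misprint rate are assumptions for illustration, not figures from the text:

```python
# Back-of-the-envelope comparison: how likely is each way of being wrong
# that "my ticket lost"?
from math import comb

p_ticket_wins = 1 / comb(69, 5)   # chance the odds-based belief would be wrong
p_paper_error = 1 / 100_000       # assumed rate at which the paper misprints results

print(f"P(my ticket wins) = {p_ticket_wins:.2e}")   # about 8.90e-08
print(f"P(paper misprint) = {p_paper_error:.2e}")   # 1.00e-05
print(p_paper_error > p_ticket_wins)                # True: the paper is the shakier source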

Sensitivity
Nozick’s account provides a plausible explanation of these two puzzling intuitions.  If we can’t know that a ticket
will lose just by considering the odds, how can we come to know that the ticket lost by reading about the results
in a paper? 
Consider two claims:
(*) if my ticket hadn’t lost and I formed my belief about my ticket just by considering the probabilities, I wouldn’t
have believed that it lost.
(**) if my ticket hadn’t lost and I formed my belief about my ticket by checking the paper, I wouldn’t have believed
that it lost.
The first claim is false. If you formed your belief about your ticket just on the basis of your knowledge of the
probabilities and you were lucky enough to hold a winning ticket, you’d still have believed that the ticket would
lose given the very high probability that it would. Thus, your belief about the ticket doesn’t
satisfy Nozick’s sensitivity condition and his theory says that you don’t know that the ticket is a loser. The
second claim is true. If your ticket had won, the paper would have reported that your number was the winning
number and you wouldn’t have believed that your ticket lost.

Sensitivity
Here’s an interesting argument for scepticism. Let’s say that a brain in a vat (i.e., BIV) is a brain in a lab that’s
been stimulated by a neuroscientist so that it undergoes experiences indistinguishable from your own.
Consider the following:
P1. I am not in a position to judge knowingly that I’m not a BIV.
P2. If I am in a position to judge knowingly that I have hands, I am in a position to judge knowingly that
I’m not a handless BIV.
C. I am not in a position to judge knowingly that I have hands.

Dretske and Nozick can account for some of the intuitions that underwrite the argument from ignorance
without giving in to skepticism.
The argument is valid and they concede that the argument’s first premise is true, but they reject the
argument’s conclusion and offer an explanation as to why we should reject the closure principle and the
argument’s second premise.


Sensitivity
Still, there’s something weird about this.
This view says that we can know that we have hands but cannot know that we’re not handless brains in vats even if
it’s true that handless brains in vats do not have hands.

Should we deny closure?


CP: If you know p and know that p entails q, you can come to know q by means of competent deduction.

Consider some challenge cases.


Hands: I have hands. If I have hands, I’m not a handless BIV. Thus, I’m not a handless BIV.
Zebra: This is a zebra. If this is a zebra, it is not a cleverly painted mule. Thus, this is not a cleverly painted mule.

To some, denying closure is very costly.


Sensitivity
Moreover, it seems that some insensitive beliefs might constitute knowledge. Consider my dog, Agnes. And
consider two possibilities.

1. Agnes speaks German.

We know that this is false.

What about this?

2. Agnes secretly speaks German.

Does our belief that this is false pass the sensitivity test? (If Agnes did secretly speak German, nothing would tip us off, so we’d still believe she didn’t.) But don’t we know that it’s false?

Safety
According to the sensitivity condition, you cannot know p unless: if p were false, you would not believe p. We get safety
by contraposing sensitivity: if you were to believe p, p.

What’s the difference between safety and sensitivity? At first, there might seem to be little difference, but an example
might help.  Suppose we’re evaluating archers hunting rabbits. There’s a difference between saying:
(i) If Jane were to shoot, she’d hit.
(ii) If Jill weren’t going to hit, she wouldn’t take the shot.

With (i) we get something closer to the safety condition. The intuitive idea behind (i) is that there are no nearby
possibilities in which Jane tries to hit the rabbit and fails. The intuitive idea behind (ii), which is intended to be the
analogue of sensitivity, is that if the arrow would’ve missed the rabbit, Jill wouldn’t have taken a shot.  Remember that the
safety condition says, in effect, that you couldn’t easily have been mistaken. There’s no nearby possibility in which your
actual belief is mistaken. What sensitivity says, in effect, is that in the nearest possibility in which your belief content is
mistaken, you won’t endorse that content.    

Safety
Advantages?

Preserves closure but does not commit us to skepticism.


Handles the same cases that sensitivity does.
Avoids counterexamples to sensitivity.

The first point probably needs some discussion.

Consider the following propositions: (H) I have hands. (NB) I’m not a handless BIV.

As we know, H entails NB. As we’ve seen, a belief in H might be sensitive even if a belief in NB isn’t. [In the nearest world
in which you are a BIV (very distant!), you’d still believe that you had hands and weren’t a BIV.] In the worlds in which H is
safe, NB will be safe, too. If, say, there’s no world near this one in which you believe you have hands for roughly the
reasons that you do in which you are mistaken, there’s no world near this one in which you’ll be a handless BIV. [If there’s
a nearby possibility in which you are a handless BIV, that’s a nearby possibility in which ¬H.] So, if you like closure it looks
as if you’d have some reason to reject sensitivity. There’s no obvious conflict between closure and safety.
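The contrast can be made concrete with a toy possible-worlds model. Everything here (the worlds, the distances, the nearness threshold) is my illustrative construction, not the text’s:

```python
# Each world: (distance from actuality, facts true there, propositions believed there).
# H = "I have hands", NB = "I'm not a BIV".
worlds = [
    (0, {"H", "NB"}, {"H", "NB"}),   # actual world: embodied, normal
    (1, {"H", "NB"}, {"H", "NB"}),   # nearby ordinary variants
    (1, {"NB"},      set()),         # nearby accident world: hands lost, and you'd notice
    (9, set(),       {"H", "NB"}),   # remote BIV world: no hands, but beliefs unchanged
]

NEARBY = 2  # assumed threshold separating "close" worlds from remote ones

def sensitive(p):
    """In the closest world where p is false, is p not believed?"""
    closest = min((w for w in worlds if p not in w[1]), key=lambda w: w[0])
    return p not in closest[2]

def safe(p):
    """In every nearby world where p is believed, is p true?"""
    return all(p in facts for d, facts, beliefs in worlds
               if d < NEARBY and p in beliefs)

print(sensitive("H"), sensitive("NB"))  # True False
print(safe("H"), safe("NB"))            # True True
```

The model mirrors the text: belief in H is sensitive (at the closest no-hands world you’d notice), belief in NB is insensitive (the closest BIV world leaves your beliefs intact), yet both are safe, because the BIV world is too remote to count as nearby.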

Safety
Chute There’s a garbage chute in Ernest’s building.  When he wants to take out his trash, he drops the bag
into the chute and it travels down the chute into a bin in the basement. The building is very well maintained,
so there’s no worry that the bag will get caught on the walls of the chute and snag. Ernest drops his bag into
the chute and believes that his trash is now in the basement. His belief is true, but it’s also true that if his bag
had snagged on the way down, he still would have believed that his bag was in the basement.

Sosa thinks that this case shows a virtue that safety has over sensitivity. Even if nothing would
have clued us in to failure had we failed, there’s no need for such feedback so long as the belief
couldn’t easily have been mistaken.

Scepticism
Introduction
Most of us think most of the time that we each know quite a lot:
• I know that I have two sisters (knowledge of the external world, of other minds,
of the past);
• You know that Arsenal will not win the league this year (knowledge of the
future, of mathematics);
• We all know that people shouldn’t try to buy and sell people (knowledge of
right and wrong).
And so on.
This picture might be right, but it’s one that can be difficult to defend in light of
arguments for scepticism. In this lecture, I’ll introduce aspects of the sceptical puzzle.


Introduction
Nagel’s historical introduction presents a kind of sceptical position, that of the
Academic sceptics who rejected the Stoics’ conception of knowledge.
The Stoic picture was something like this. The world makes its impressions upon us
(e.g., think about auditory or visual experience). We then sometimes judge (believe,
infer, think, conclude, find that we’re convinced) that something is true or we don’t
judge that things are as they appear to be.
The Stoics thought that we sometimes judge or believe without knowing because we
rely on impressions that aren’t sufficiently good (e.g., hazy or indistinct impressions
that lead us to falsely believe there is water nearby when it’s just a mirage). When,
however, we believe based on impressions that couldn’t be mistaken, we come to know.

Introduction
The Academic sceptics agree with the Stoics that knowledge requires these impressions that couldn’t
lead us astray, but they deny that we have such impressions.
It might seem that a friend is approaching, but couldn’t she have a twin? Isn’t this possibility enough
to demonstrate the fallibility of the impression? Isn’t this enough to show that we cannot know?
Some ancient sceptics and philosophers in the Indian tradition thought that ‘learning’ that we have
little to no knowledge can be good for us. Once we abandon attempts to know by recognising that
our impressions are fallible, Śrīharśa (11th century author of The Sweets of Refutation) thought that we
could achieve a kind of tranquility.
We’re going to bracket these debates about whether it would be good for us to abandon aspirations
to know and just focus on some sceptical arguments themselves to see if they are any good or if
there are any good responses to them.

Universal scepticism
The thesis of universal ignorance is that nothing can be known.
• This is a funny thesis because if it’s true, we couldn’t know that it’s true. Not even
our best arguments could give us knowledge here.
• Might it be true, however, even if unknowable? We cannot know whether the
number of stars is even (or odd), so it’s not as if there’s anything incoherent in the
idea that something might be true but unknowable. (Indeed, it would be weird if we
thought that everything true was knowable!)

Universal scepticism
Descartes’s narrator (in his Meditations) thinks there’s a way to show that the thesis of
universal ignorance is false.
Even if we were to imagine a malevolent being with all possible powers intended to
deceive us, there would be limits to its abilities:
Even then, if he is deceiving me I undoubtedly exist: let him deceive me all he can,
he will never bring it about that I am nothing while I think I am something. So after
thoroughly thinking the matter through I conclude that this proposition, I am, I
exist, must be true whenever I assert it or think it (1641/1986, §1).

Universal scepticism
When the malevolent deceiver aims to deceive, the deceiver wants to create a ‘mismatch’
between appearance and reality, exploiting our dispositions to take appearance at face
value. Thus, we think we’re in the good scenario when we’re actually in the bad:
Good: It appears there’s an apple here & there’s an apple here.
Bad: It appears there’s an apple here & there’s no apple here.
But there are limits to this. As the narrator notes, we cannot get a good-bad pair in this
case:
Good: It appears that I exist as a conscious, thinking thing & I exist as a conscious,
thinking thing.
Bad: It appears that I exist as a conscious, thinking thing & I don’t really exist as a
conscious or as a thinking thing.

Nearly universal scepticism?


The upshot is that there are limits even to what an all-powerful being could do if it
were intent on deceiving us, so that’s some reason to think that the thesis of universal
ignorance is not really defensible.
The thesis of nearly universal ignorance, however, remains untouched. For any
proposition such that it’s possible for an all-powerful being to make it appear true
when it’s not, it seems we might be mistaken. And wouldn’t that be enough to deprive us
of knowledge in most things?
Sure, we’d know that we exist as thinking things, but we wouldn’t know that we existed
as embodied things, that we’ve existed for any duration, or that there’s anything in
reality beyond our thoughts and appearances. Maybe we could do a little bit of math.
But that’s not much.

The BIV Argument


To make the possibility of nearly universal ignorance vivid, think about the brain in a vat case (BIV).

Conjecture. Neuroscience will progress to the point that scientists will be able to sustain a functioning brain
outside of a body (e.g., in a vat of nutrients). This would be a brain in a vat (i.e., a BIV). It also seems that
they will learn how to stimulate the BIV so that some subject undergoes experiences indistinguishable from
the experiences that you’re undergoing right now. They’ll also be able to create false memories
indistinguishable from yours. In short, they’ll be able to create mental lives subjectively indistinguishable
from yours.

If this seems possible, this argument might seem persuasive:


The Radical Skeptical Argument
P1. You do not know that you are not a bodiless BIV.
P2. If you do not know that you are not a bodiless BIV, you do not know you have a hand.
C. Thus, you do not know you have a hand.
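Treating the two ‘you know…’ claims as unanalyzed atomic sentences, the argument is propositionally valid (P1 together with P2 yields C by modus ponens). A brute-force truth-table check, offered purely as a sketch of mine, confirms this:

```python
# Validity check for the argument's propositional shape. The atoms abbreviate
# k_biv = "you know you're not a bodiless BIV", k_hand = "you know you have a hand".
from itertools import product

def valid(premises, conclusion):
    """True iff the conclusion holds in every valuation where all premises hold."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=2)
               if all(p(*vals) for p in premises))

p1 = lambda k_biv, k_hand: not k_biv              # P1: you don't know you're not a BIV
p2 = lambda k_biv, k_hand: k_biv or (not k_hand)  # P2: if not k_biv, then not k_hand
c  = lambda k_biv, k_hand: not k_hand             # C: you don't know you have a hand

print(valid([p1, p2], c))   # True: the argument form is valid
print(valid([p2], c))       # False: P2 alone doesn't force the conclusion
```

Since the form is valid, any response must reject a premise (as the Mooreans and the closure-deniers do) rather than fault the reasoning.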

The BIV Argument


The Radical Skeptical Argument
P1. You do not know that you are not a bodiless BIV.
P2. If you do not know that you are not a bodiless BIV, you do not know you have a hand.
C. Thus, you do not know you have a hand.

Why should we accept the first premise? One idea (very rough) is that things would seem the same if it
were true that you were a BIV. If your experiences were the same, how could you use them to tell which
possibility obtained?

What about the second? It’s an implication of a more general principle:


Closure Principle: If S knows that p, and S competently deduces q from p, thereby forming a belief that q
on the basis of this deduction while retaining her knowledge that p, then S knows that q.



The BIV Argument


The Inconsistent Radical Skeptical Triad
(I) One is unable to know the denials of radical skeptical hypotheses.
(II) The closure principle.
(III) One has widespread everyday knowledge.

Let’s consider some responses to the sceptical puzzle.


The Moorean Shift


Here’s G.E. Moore’s ‘proof’ of the external world:
I can prove now, for instance, that two human hands exist. How? By holding up my two hands, and saying, as I
make a certain gesture with the right hand, ‘Here is one hand’, and adding, as I make a certain gesture with the
left, ‘and here is another’. And if, by doing this, I have proved ipso facto the existence of external things, you will
all see that I can also do it now in numbers of other ways: there is no need to multiply examples (1939/1959, 146).

In short then, Moore—looking at his hands—offers the following piece of anti-skeptical reasoning.
M1. Here are two hands.
M2. If hands exist, then there is an external world.
M3. So there is an external world.

It might seem that Moore's proof proves nothing at all, but he thinks that it does because it meets three conditions
for an adequate proof: (i) The premises differ from the conclusion; (ii) The premises are known to be true; (iii) The
conclusion follows from the premises.


The Moorean Shift


Many complain that there’s something unsatisfying in Moore’s response to the sceptic.
Few are satisfied by Moore’s proof when they first encounter it, but it can be hard to say precisely what the
problem is. Do we think that he hasn’t satisfied his own standards? It’s clear that (i) and (iii) are met. If we were to say that the
problem with his proof was that he didn’t satisfy (ii), we'd be siding with the skeptic. If, on the other hand, we
were to grant that (ii) might have been satisfied, we’d have to concede that he satisfied his standards for an
adequate proof. Perhaps we think those standards aren’t adequate, but then it looks like we’d have to say that
someone can successfully derive conclusions from things that you know without proving them to be the case.
That’s sort of strange, isn’t it?

The Moorean Shift


Crispin Wright has tried to pinpoint the problem with Moore’s proof as follows. As Wright sees it, if your
appearance as of having a hand is to lend support to the premise that you have a hand as opposed to the
premise that the computer is implementing a phase of its program that requires you to suffer the illusion of
having a hand, you need to already have some positive warrant for believing there’s an external world (as
opposed to, say, the bizarre BIV scenario).
What this means is that Moore’s Proof is such that having justification to believe the conclusion (that there is an
external world) is itself among the conditions that make you have the justification you purport to have for the
premise (that ‘here is a hand’). And so, on this line of thinking, the structure of the argument is such that the
justification you have for the premise can’t ‘transmit’ to the conclusion, since justification for the conclusion would
already have been needed in order to be justified in believing the premise. In short, Moore’s proof suffers from a kind
of ‘transmission failure’.
We can think of this ‘transmission failure’ as similar to offering a kind of circular argument—assuming part of
what we’re trying to prove in offering the proof. But maybe Moore’s point is more subtle than this. Maybe he’s
asking us to consider whether his standards are any less certain than those employed by the sceptical argument.
The Moorean Shift
One thing that’s frustrating (for me) about Moore’s response is that it doesn’t tell us what we should reject in
responding to the sceptic.
You can say that common sense or something you believe is more certain than the conclusion of an argument to
the contrary, but isn’t part of the game trying to provide a diagnosis of where unsound arguments go wrong or
why puzzling thoughts are so gripping? Isn’t part of the aim here to provide a systematic account of something
that’s difficult to understand? Moore does none of this.
It seems, though, that there are some ways of developing the anti-sceptical view in keeping with Moore’s
outlook. Let’s briefly consider those…

Response 1: Deny closure


The sceptical argument rests on the assumption that we couldn’t know one thing if we couldn’t know something else.
This assumes, in other words, a kind of connection between the things that can be known and a corresponding
connection between the unknowability of different things.

But we’ve already seen that Nozick’s view challenges this. On the sensitivity theory of knowledge, knowledge requires
sensitive belief (i.e., a belief we wouldn’t hold if it were false):
• Beliefs about sceptical scenarios couldn’t constitute knowledge (because we’d still believe we were not BIVs if we
were BIVs);
• Beliefs about mundane things can constitute knowledge (because we wouldn’t still believe we had hands if we didn’t
have them).

Objection: but the denial of closure sounds really, really bizarre.


Objection: the implications of the sensitivity view are quite strange (e.g., speaking German vs. secretly speaking
German).


Response 2: Deny fallibility
Why do we think that we cannot know the denials of sceptical hypotheses?
Here’s a standard answer:
You have the same evidence regardless of whether you’re in the good case or you’re a BIV (e.g., your visual
experiences).
Thus, you cannot know on the basis of this evidence whether you’re in the good case or whether you’re a BIV.
Some will deny the premise. They’ll say that we actually have evidence that the BIV doesn’t have. For example,
we see that we have hands. The BIV only seems to see that it has hands. Thus, our evidence (i.e., that we see that
we have hands) is not part of their evidence. So, whilst our evidence (in line with a reading of the Stoic view)
cannot give us knowledge if it’s fallible, we have infallible evidence.
Objection: But do we?
Objection: But does this really solve what we want solved?
Response 3: Embrace fallibilism
Why do we think that we cannot know the denials of sceptical hypotheses?
Here’s a standard answer:
You have the same evidence regardless of whether you’re in the good case or you’re a BIV (e.g., your visual
experiences).
Thus, you cannot know on the basis of this evidence whether you’re in the good case or whether you’re a BIV.
Some will accept the premise but deny that having merely fallible grounds deprives us of knowledge.
Recall Sosa’s idea that knowledge is success that’s attributable to our abilities. This only requires that we wouldn’t
have failed in similar situations, not that we wouldn’t have failed in any possible situation no matter how bizarre.
Our evidence gives us knowledge when there’s no ‘easy’ possibility in which we have that evidence but get things
wrong (much in the way that our attempts or tries give us creditable success when there aren’t ‘easy’ possibilities
in which we try but fail).
Lewisian Contextualism
Lewis doesn’t think that any of the straight responses to scepticism works, but he also
doesn’t think that the sceptical view is right.
On the one hand, he says, it’s mad to embrace scepticism. On the other, it seems that
the fallibilist view that we need in order to avoid scepticism borders on incoherence
(e.g., Agnes knows that it’s raining but she might be mistaken about that!?!)
He thinks that we need a view that allows us truly to be said to ‘know’ without
embracing fallibilism. His proposal:
Lewis: We know when our evidence eliminates all of the error possibilities.
Lewisian Contextualism
Recall again the Inconsistent Radical Skeptical Triad:
(I) One is unable to know the denials of radical skeptical hypotheses.
(II) The closure principle.
(III) One has widespread everyday knowledge.
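Why these three are jointly inconsistent can be made explicit. This is a sketch in our own notation, not the slides’: ‘E’ is any everyday proposition (e.g., that one has hands), ‘SK’ is a radical sceptical hypothesis, and ‘K’ is knowledge:

```latex
% From (III): KE, and we grasp the entailment, so K(E -> not-SK).
% From (II), closure: (KE and K(E -> not-SK)) -> K(not-SK).
% By modus ponens: K(not-SK), which contradicts (I).
\frac{\mathrm{K}E \qquad \mathrm{K}(E \rightarrow \neg SK)}{\mathrm{K}(\neg SK)}
\;\;(\text{closure})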
Contextualists think that the inconsistent radical skeptical triad is a kind of rigged game. There is a way, they think, for us to
manage to hold all three of (I)-(III), at least so long as we are willing to think a little differently than others have about the
meaning of the word ‘knows’.
Lewis thinks that the straight responses to the skeptical argument miss some important features of the language we use when we
describe each other as ‘knowing’ things.
There is not one privileged relation between a thinker and some fixed set of facts that we can say is the relation picked out by
our use of ‘knows’. There are many relations we can pick out using this word that can hold between larger and smaller sets of
facts depending upon what kind of conversational context we’re in.
Lewisian Contextualism
To understand Lewis’s proposal, it helps to think a little bit about language, reality,
and truth.
We normally think that the truth/falsity of a sentence is a function of two things:
• What the sentence means;
• What the world is like.
For some sentences it seems that we can hold the features of the world fixed and use
the same sentence twice, stating something true on one occasion and false on the
other.
Lewisian Contextualism
For some sentences it seems that we can hold the features of the world fixed and use
the same sentence twice, stating something true on one occasion and false on the
other.
Examples. ‘Donald is my husband’ is true when Melania says it but false when we say
it. ‘Mitt is incredibly rich’ might be true if we say it but might be false if uttered in a
meeting of billionaires trying to decide who gets to join their club. ‘Nebraska is flat’
might be true if we’re talking about organising a marathon or bike race but not if we’re
talking about things to play billiards on.
Lewisian Contextualism
Consider ‘is rich’ in more detail.
Imagine we’ve arranged everyone in London on the basis of their wealth.
Least wealth … [insert millions of stick figures] … most wealth.
In some conversational settings (e.g., between locals in a low income area worried about gentrification), the set of
people who count as ‘rich’ might be quite large. In others (e.g., between billionaires about who they should invite over
for champagne and grilled swan), the set of people who count as ‘rich’ might be quite small.
It might be, for example, that in the first conversation, ‘The President of KCL is rich’ comes out as true and in the second,
‘The President of KCL is rich’ comes out as false. These sentences would then express distinct propositions since they’d
have different truth-conditions. (We’d think that arguing about this can (sometimes) rest on mistaken presuppositions
and doubt that there’s some privileged class of individuals that would be the uniquely correct set of people to count as
‘rich’ without taking account of features of the conversational context (e.g., beliefs and interests of conversational
participants).)
Lewisian Contextualism
One more example.
All the glasses are clean.
Scenario 1. We’re in the philosophy bar cleaning up and that might be true iff the 25
glasses that are stored behind the bar have been washed.
Scenario 2. We’re in the government wondering if anyone might be drinking from
glasses that had been used by someone carrying the virus without being washed. This
wouldn’t be true just because the 25 glasses in the philosophy bar had been washed.
When we use words like ‘all’, we often have a contextually determined group in mind
and that can change from one conversation to another.
Lewisian Contextualism
When Lewis says that we ‘know’ something iff our evidence eliminates ‘all’ the error possibilities, here’s the rough idea.
• Consider the vast range of possible universes that God could have had in mind before creating this world.
• In some, things are just like this. There are people, penguins, etc. and they see each other, interact with each other, and there are
no BIVs, no deceiving demons, etc.
• In some possible universes, there are experiences like ours but no penguins because there are BIVs or deceiving demons.
• In some conversational settings (e.g., courtrooms) the possibilities we’re interested in form a relatively small set (e.g., we focus on
normal universes and ask questions about what’s happening in them and questions about which of them our evidence rules out).
• In other conversational settings (e.g., philosophy classes) the possibilities we’re interested in form a much larger set and include
lots of far out possibilities.
• The evidence we have (e.g., our appearances, apparent memories, etc.) ‘rules’ out lots of things if we confine our attention to the
smaller set of normal possibilities and next to nothing if we shift our focus to the larger set that includes abnormal possibilities.
• This is akin to the way in which something we said was ‘flat’ in one setting is said to be ‘bumpy’ in another. In one setting, our
standard for bumps is quite strict. In the other, it’s quite loose. As we change the standards for counting bumps, we change the
standards for determining what’s flat.
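The rough idea above can be sketched as a context-relative relation. The notation here is ours, not Lewis’s: ‘R_c’ is the set of error possibilities relevant in conversational context c:

```latex
% S 'knows' p in context c iff S's evidence eliminates every
% relevant possibility in which p is false:
\mathrm{K}_c(S, p) \iff
\forall w \in R_c:\;
(w \text{ is uneliminated by } S\text{'s evidence}) \rightarrow (p \text{ is true at } w)
```

In a courtroom, R_c contains only normal possibilities and everyday ‘knowledge’ attributions come out true; in the philosophy classroom, R_c includes BIV-worlds, and the very same attributions come out false.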
Lewisian Contextualism
In this way, Lewis thinks that he can vindicate the intuitions in the Radical Skeptical Triad
(I) One is unable to know the denials of radical skeptical hypotheses.
(II) The closure principle.
(III) One has widespread everyday knowledge.
Start with (III). In some conversational settings (e.g., courtrooms), since we confine our attention to relatively normal possibilities,
there is no possibility in which we have our experiences but we’re radically mistaken, so we can rely on them. So, in this setting, we
can be said to ‘know’.
Consider (II). So long as we realise that ‘know’ can pick out relations that hold between us and just a handful of facts or between us
and loads of facts, we can see that there’s no univocal reading of ‘knows’ that should make us worry about closure.
Now, (I). When we’re actively considering radical sceptical hypotheses, we’re in a setting where we think about abnormal possibilities
in which we have our evidence but we’re wrong. In this setting, the word ‘knows’ picks out a relation between us and very few facts,
so (I) is true.
The ‘solution’ here is that ‘knows’ sometimes is used in a non-demanding way and sometimes in more demanding ways to say
different things. The failure to register this is responsible for philosophical puzzlement.