The existence of the square root of two.

How could there possibly be any doubt about the existence of the square root of two?
This is a very reasonable question, but one that few people dare to ask when their
lecturers solemnly stand in front of them and prove that the square root of two exists,
either by defining x to be the supremum of all rationals r such that r²<2 and proving that
x²=2, or by applying the intermediate value theorem (which itself confuses people by
seeming too obvious to need a proof).

Imagine that you did not know any advanced mathematics (if you actually don't, then
that is fine) and were confronted by somebody who denied the existence of the square
root of two. What would you say? The conversation might go something like this.

What do you mean when you say that the square root of two doesn't exist?

I mean the obvious thing: if you take any real number x and square it, the answer is
never 2.

But what about 1.41421356237309... ?

What about it? You haven't told me how the sequence continues.

Well, all I'm doing is taking the decimal expansion of the square root of two.

That sounds pretty circular to me.

You're right. I'm sorry. But it isn't really as circular as it sounds. What I mean is that I
am calculating the decimal expansion of the real number x with the property that x²=2.

That still sounds circular. Aren't you still assuming that a number with this property
exists?

No, because I can tell you how to calculate the sequence of digits, and that will be my
proof that the number exists.

Go on then.

Well, 1²=1<2 and 2²=4>2, so I know that the number must be one point something.
Then by trial and error I discover that 1.4²=1.96<2 and 1.5²=2.25>2, so the decimal
expansion must start with 1.4. I then just continue this process: if I have calculated the
first 38 digits, say, then I try all the possibilities for the 39th, picking the largest one that
results in a number whose square is less than 2.
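
If it helps, here is a rough sketch of that procedure in Python (the function name and the trick of comparing scaled integers are just one way of mine to do the trial and error exactly, nothing more than that):

```python
def sqrt2_digits(n):
    """Return the first n decimal digits of the square root of 2 after the point,
    found by the trial-and-error procedure just described.

    A truncation t with k decimal places satisfies t*t < 2 exactly when the
    integer t*10**k, squared, is less than 2*10**(2*k), so every comparison
    can be done with exact integers rather than floating-point numbers.
    """
    a = 1                      # the truncation so far, scaled to an integer
    digits = []
    for k in range(1, n + 1):
        a *= 10
        # try 9, 8, ..., 0 and keep the largest digit whose square stays below 2
        for d in range(9, -1, -1):
            if (a + d) ** 2 < 2 * 10 ** (2 * k):
                a += d
                digits.append(d)
                break
    return digits

print(sqrt2_digits(10))        # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]
```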

All right, I see what it is you are doing, and that it leads to an unambiguously defined
infinite decimal. But what makes you call that a real number and what makes you so
sure that it squares to two?

I don't understand the first question. Surely a real number is something with a possibly
infinite decimal expansion.

That sounds fishy to me. I notice that you say something with a decimal expansion,
rather than just a decimal expansion. So what is the actual thing, of which you calculate
the expansion?

Well, it's just a number ... you know, something like 1, 2, 3, ... or 25/38 or pi or the square
root of 3.

I notice you didn't have the courage to say the square root of 2! I'm beginning to think
that you don't really have any idea what a real number is. You've given me a few
examples, but you haven't said what they have in common.

I think you are being quite unnecessarily pedantic. Just think of the number line. It's got
all the numbers on it, in order (ignoring the complex numbers for now). We know that
some numbers are irrational, but we can still describe them, by means of their decimal
expansions.

Describe what?

Positions on the number line, lengths, whatever you want to call them.

That's no good at all. What is this number line that you assume I am familiar with? What
is a length? Note that for the second question you can't fob me off with an answer about
rulers and so on, because they only work to a certain accuracy.

All right, I take the point, but I still don't think it is a serious problem. If it makes you
feel better, I shall simply define a real number to be a decimal expansion.

So when you say "the real number x" what you really mean is "the decimal expansion
x"?

Well, it's not always what I think of when I talk about real numbers, but if you insist on
a precise definition, then I can fall back on this one.

Does that mean that 0.999999.... and 1 are different numbers?

Oh yes, I forgot about that. Different decimal expansions correspond to different real
numbers except in cases like 2.439999999.... equalling 2.44. So I suppose my definition
is that real numbers are finite or infinite decimals except that a finite decimal can also be
written as, and is considered equal to, the "previous" finite decimal with an infinite
string of nines on the end. Happy now?

We've hardly started, because you haven't told me how to do arithmetic with these real
numbers of yours, and you certainly haven't convinced me that there is a real number
that squares to give 2.

Are you going to ask me how to multiply two infinite decimals together?

Yes.

Well, you just do it in the obvious way, by a sort of infinite long multiplication.

It sounds to me as though long multiplication would be a pretty accurate description of
whatever process you have in mind, but I notice that you are somewhat vague about it.

Do I really have to go into this? Surely you can see how it would work.

No I can't.

Well, let's take the example of the square root of two. If you take the numbers 1, 1.4,
1.41, 1.414 and so on, then their squares, 1, 1.96, 1.9881, 1.999396 and so on, get closer
and closer to 2. On the other hand, if you take the numbers 2, 1.5, 1.42, 1.415 and so on,
their squares, 4, 2.25, 2.0164, 2.002225 and so on, also get closer and closer to 2. And
they don't just get closer and closer, but they get as close to 2 as you like, as long as you
take enough digits. It follows that x²=2, where x is once again the infinite decimal
resulting from the procedure I described before.
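
You don't have to take my word for the figures either: here is a small Python check with exactly those truncations (the Decimal type is there only to keep the multiplications exact, and the extra fifth pair is my own addition to the list):

```python
from decimal import Decimal

# The lower and upper truncations quoted above, with one more step of each.
lower = ["1", "1.4", "1.41", "1.414", "1.4142"]
upper = ["2", "1.5", "1.42", "1.415", "1.4143"]

for a, b in zip(lower, upper):
    print(Decimal(a) * Decimal(a), Decimal(b) * Decimal(b))

# 1             4
# 1.96          2.25
# 1.9881        2.0164
# 1.999396      2.002225
# 1.99996164    2.00024449
```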

I don't see that it follows at all.

Oh come on. It's obvious that x² can't be less than 2, because if it was, then it would
have a decimal expansion which at some point was definitely less than 1.99999....
Suppose for example that the decimal expansion of x² began 1.9999736... This leads to
an easy contradiction, because x is obviously greater than 1.41421 (which is given by
the first few digits of x) and 1.41421²=1.9999899241, which is definitely bigger than
any number that starts 1.9999736... A similar argument shows, for any number y less
than 2, that x² > y, and for any number z greater than 2, that x² < z. Therefore x²=2.

I have several objections to what you have just said. Perhaps the most fundamental one
is how you are so certain that x² exists at all. You seem to assume it in your argument.
You don't really calculate x² at all. The only calculation involved is with finite parts of
x. Looking at the eventual argument you came up with, which is quite ingenious I
suppose, it appears to rest on the following assumption: if a and b are positive real
numbers and a < b, then a² < b². That is basically what you were using. I accept that this
rule is true for finite decimals. However, at no point did you tell me how to put infinite
decimals in order, and you have yet to justify that this rule applies to them. You also
used the famous law of trichotomy when you said that x² must either be less than two,
greater than two or equal to two and then ruled out the first two possibilities. Finally,
your argument seemed to be saying that if x² is anything at all, then it is 2.

Well, I hope it's pretty obvious how to put infinite decimals in order.

I must admit, it is.

The law of trichotomy is then pretty easy to justify.

I suppose it's not that hard. Anyway, I'm prepared to accept it because it seems to me
that the main problem is with the assumption that x² exists in the first place.

I'm tempted to say that of course it exists, and I've shown you how to calculate it, but
you'll probably tell me that I assumed that it existed in the course of my calculation.

Exactly.

But surely you see how I did the calculation? I mean, that is how I work out x².

Work out what? What are you working out, if you're not even sure it exists?

But I am sure it exists. The argument I gave is what I mean by calculating x².

Go on.

Well, I suppose you could say that I define x² to be 2, simply because the finite
truncations of x square to numbers that get arbitrarily close to 2. In fact, what my
argument showed is that if I want to define x² in some way, consistent with the principle
that 0 < a < b implies that a² < b², then I have no choice but to define it to be 2.
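
If you like, here is a rough Python sketch of that idea (the function is just an illustration of mine, and it only ever reports an interval, which is all that finite truncations can tell you):

```python
from fractions import Fraction

def square_bounds(truncation, k):
    """Given the first k decimal places of a positive real x (as a string),
    return exact lower and upper bounds on x squared.

    If a is the truncation, then a <= x <= a + 10**-k, and since squaring
    preserves order for positive numbers, a**2 <= x**2 <= (a + 10**-k)**2.
    As k grows the two bounds squeeze together, and the number caught between
    all of them is what I am proposing to *define* as x squared.
    """
    a = Fraction(truncation)
    step = Fraction(1, 10 ** k)
    return a * a, (a + step) * (a + step)

# With six decimal places of 1.41421356..., both bounds are within 2*10**-6 of 2.
lo, hi = square_bounds("1.414213", 6)
print(float(lo), float(hi))    # roughly 1.999998 and 2.000001
```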

I see. So you are basically defining the square root of two into existence. Smart move.

Well, it's more than that. Although I haven't given the details, the ideas I have discussed
can be used to show that the real numbers, defined as decimals, can be added, multiplied
and so on, in a natural way, and that if one adopts this natural way as a definition, one
finds that there is indeed a number x that squares to 2. (See here for more details.) In other
words, I didn't just define the square root of two. Rather, I defined an entire number
system and showed, by which I mean actually proved, that the square root of 2 exists in
that system.

Oh, well if that's what you mean by the existence of the square root of two, then I
suppose I accept it.
