FOM: feasibility discussion

Anatoly Vorobey mellon at pobox.com
Fri Oct 16 20:28:39 EDT 1998


You, Harvey Friedman, wrote this on Tue, Oct 13, 1998 at 03:30:57AM +0100: 
> Can my postings
> 19:Long Sequences 7/31/98
> script/can't run away 9/15/98
> 21:Long Sequences/Update 10/13/98
> 
> concerning the investigation of n(1),n(2),n(3),n(4),m(1),m(2),m(3),m(4)
> 
> be integrated into the discussion, and/or perhaps bring a wider audience
> into some aspects of the discussion, such as a wide variety of
> mathematicians?

I can't readily systematize all the possible foundational questions raised
by very large numbers such as n(3) vs. the concept of feasibility and
a finite universe. But perhaps some of the questions below, formulated
as a sort of thinking aloud rather than an attempt at systematic exposition,
will seem interesting. 

Consider n(3). We can regard it as an example of an incomprehensibly
large number. Elsewhere you say that n(3) > A7(184). A7(184) is probably 
already incomprehensibly large. The lower bounds on n(4) that you give are
much more convincing yet, but I think n(3) will suffice. 
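
For readers who haven't seen the earlier postings: as I understand the
definition there, n(k) is the length of the longest sequence x_1,...,x_n
over a k-letter alphabet such that no block x_i,...,x_{2i} is a subsequence
of a later block x_j,...,x_{2j} (i < j, 2j <= n). For small k the values
are still feasible, and a brute-force search (a Python sketch; the helper
names are mine) recovers n(1)=3 and n(2)=11:

```python
from itertools import product

def is_subsequence(a, b):
    """True if a can be obtained from b by deleting elements."""
    it = iter(b)
    return all(c in it for c in a)

def is_valid(x):
    """No block x_i..x_{2i} embeds in a later block x_j..x_{2j}."""
    n = len(x)
    for i in range(1, n // 2 + 1):          # 2i <= n
        for j in range(i + 1, n // 2 + 1):  # i < j, 2j <= n
            if is_subsequence(x[i - 1:2 * i], x[j - 1:2 * j]):
                return False
    return True

def n_of(k, cap):
    """Length of the longest valid sequence over {1..k}, trying lengths up to cap."""
    best = 0
    for length in range(1, cap + 1):
        if any(is_valid(x) for x in product(range(1, k + 1), repeat=length)):
            best = length
    return best
```

Since a prefix of a valid sequence is itself valid, the search could stop
at the first length with no valid sequence; I've kept the sketch simple.
For k=3 this brute force is, of course, hopeless - which is the point.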

Right now, you don't *know* what the value of n(3) is (you just know it's
very, very large). Presumably, you would want to find out. 
Suppose for a second, unlikely as it seems, that you manage to prove
that n(3) turns out to be *exactly* A100(200). The first question that
arises then is: what exactly does it *mean*?

What is it that you proved? From one point of view, you simply showed
that some sentence which captures "n(3)=A100(200)" is provable in, for
instance, ZFC. But that alone can hardly provide you with the *meaning*
of such a result; pure formalism cannot do that. 

On the other hand, your feeling would perhaps be that you actually
*found* what n(3) is. You knew its definition but you didn't know 
"what" it is, "how big" it is, and so on; A100(200), as you may feel,
provides you with the answer to such questions. But is that really so?

After all, both n(x) and A_x(y) are, trivially, recursive functions. 
Does A100(200) really give more information about the number than n(3),
and if so, what kind of information? (Note that I deliberately
talk about *numbers*; placing n(x) into the Schwichtenberg-Wainer
hierarchy is quite a different matter). For both n(3) and A100(200) there
are easy recursive computations which, when followed, "will" "find" what
both n(3) and A100(200) "are" (the scare quotes are all justified, it
would seem); for instance, Turing machines which write binary
representations of n(3) or A100(200) on the tape after they are finished
working. For both, these computations are actually unfeasible and
probably cannot be carried out in practice. One may answer that A100(200)
gives more information about the number, that it perhaps allows one to
deduce some of its properties; but n(3) also gives some information about
the number (via the definition of n(x)), and perhaps allows one to
deduce some other properties (for instance, we immediately know that
n(3) is odd). 
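
For concreteness, here is one standard version of the A_k hierarchy (I
don't have the exact definition from the earlier postings in front of me,
so treat the base case as an assumption): A_1(n) = 2n, and A_{k+1}(n) is
the n-fold iterate of A_k starting from 1. Then A_2(n) = 2^n, A_3(n) is
an exponential tower of 2's of height n, and the "easy recursive
computation" is just a few lines:

```python
def A(k, n):
    """A_1(n) = 2n; A_{k+1}(n) = the n-fold iterate of A_k applied to 1."""
    if k == 1:
        return 2 * n
    r = 1
    for _ in range(n):
        r = A(k - 1, r)
    return r
```

The computation terminates for every k and n, but already A(4, 4) is an
exponential tower of 2's of height 65536, so actually running it for
A7(184), let alone n(3), is out of the question - which is exactly the
sense in which these computations are unfeasible.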

Even if we could, for instance, discover,
using A100(200), what the length of the binary string denoting
the number is, and couldn't do that using n(3), that wouldn't necessarily
mean that A100(200) is more useful than n(3). Even an exponential tower
of 2's of height 1 trillion, which you give as an example of a bad lower bound
for n(3) in another posting, doesn't seem to advance our understanding much,
despite the fact that we "know" its binary length; what is it? - oh, the
length is just the exponential tower of 2's of height 1 trillion minus one;
hmm, that doesn't seem to help much... Moreover, even if we did find the
binary length of our number based on its equality to A100(200), how would
whatever description of that length we obtain be better than [log(n(3))]+1?
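
The bookkeeping here can at least be checked at feasible heights: if
tower(h) denotes an exponential tower of 2's of height h, then
tower(h) = 2^tower(h-1), so its binary length is exactly tower(h-1) + 1 -
itself just as unfeasible when h is a trillion. A quick sketch (the
helper name is mine):

```python
def tower(h):
    """Exponential tower of 2's of height h: tower(1)=2, tower(2)=4, tower(3)=16, ..."""
    return 2 ** tower(h - 1) if h > 0 else 1

# The binary length of tower(h) is tower(h-1) + 1, one level down the tower:
for h in range(1, 6):
    assert tower(h).bit_length() == tower(h - 1) + 1
```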

So, again, what is it you achieved when
you proved (as we assumed you did) that n(3)=A100(200), and did you in
any way "find" the number by doing so? 

A finitist/feasibilist may feel that the above offers some evidence that
at these staggeringly large values, the concept of number itself "breaks
down"; different ways of naming the number only amount to different unfeasible
recursive ways to "reach" it, and do not justify a feeling that the number
has somehow been "found". None of the problems described above would occur 
if, for example, you were to prove that n(3)=1000; and your intuitive 
feeling that you have "found" what n(3) is would definitely be justified 
in that case. Perhaps, a finitist/feasibilist might say, we are only justified
in treating something as a "number" if we can at least imagine a physical
collection consisting of that many objects; or if such a collection could
at least in principle exist in the universe. 

After treating equality in this fashion, let us come back to earth, to
inequality. You didn't, after all, prove that n(3)=A100(200); but you did
prove that n(3)>A7(184). What does *that* mean?

Well, I know what it means when I prove, for instance, that 2^10 > 10^3.
My intuitive meaning of that is that I can imagine a collection of 2^10
objects, and that 10^3 of them will not exhaust all of it. But I cannot
imagine such a collection in the case of n(3) and A7(184). They are both
unfeasible. Perhaps n(3)>A7(184) and perhaps n(3)<A7(184); I don't see
how to attach any real meaning to either of these assertions, except
that one is a sentence provable in ZFC and the other is refutable there.
I don't even see why either of them should strike me as more evident than
the other.

Moreover, I don't even see how to attach any meaning to the phrase
"a very bad lower bound" (such as the exponential tower of height 1
trillion for n(3)). How is p, an exponential tower of height 1 trillion,
a "very bad" lower bound for n(3), and how is an exponential tower of
height p a "much better", but still "very bad", lower bound? How on
earth do we assign any meaning to such terms?

What does it mean for X to be "a lot less" than Y?
One way we could try to attach an intuitive meaning to this is to assert
that the binary length of X is "a lot less" than the binary length of Y;
this allows us, for example, to see that 8 is "a lot less" than 2^56. 
But for the unfeasible numbers we are dealing with here, passing from
a number to the length of its binary numeral still leaves it unfeasible,
and leaves us in the dark about whether the first number's binary length
is "a lot less" than the second's. To assess that, you need to take
the binary length of the binary length again; but the problem is, in
order to get to anything feasible, you must continue taking logarithms
an unfeasible number of times. 
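
In other words, what is being iterated here is essentially the log*
("iterated logarithm") function: count how many times you must replace a
number by its binary length before it becomes small. For a tower of 2's
that count is essentially the height, so the point can be made at
feasible sizes (the names below are mine):

```python
def log_star(n):
    """How many times bit_length must be applied before n drops to 2 or below."""
    steps = 0
    while n > 2:
        n = n.bit_length()
        steps += 1
    return steps

def tower(h):
    """Exponential tower of 2's of height h."""
    return 2 ** tower(h - 1) if h > 0 else 1
```

So for a number like the tower of height 1 trillion, reaching anything
feasible takes about a trillion applications of "take the binary length";
and for n(3) the number of applications needed is itself unfeasible.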

So, a finitist/feasibilist may ask, what kind of information, insight,
knowledge, does n(3)>A7(184) offer you? And maybe the right way for
us to attach a meaning to "X > Y" is to be able to imagine, in principle,
a collection of X objects, Y of which do not exhaust all of it? And
perhaps a formal system which recognizes such limitations on the 
concepts of number, its largeness, relative size, etc. should be useful?

I hope the above is not too murky/tedious. I also hope that finitists/
feasibilists will forgive me if they find my description of their
concerns/ideas inaccurate or irrelevant in some places; I am not one
of them, just playing devil's advocate for this very interesting
point of view, and trying to understand the issues involved. 

Finally, a question: what does one use to prove that n(k) exists for any
k? Formally, does one need full ZFC, and if not, what suffices? And
the same questions for the bounds, such as n(3)>A7(184): what does
one need/use to prove them?

Sincerely,
Anatoly.

-- 
Anatoly Vorobey,
mellon at pobox.com http://pobox.com/~mellon/
"Angels can fly because they take themselves lightly" - G.K.Chesterton
