FOM: Was sind und was sollen die Zahlen

Vaughan R. Pratt pratt at cs.Stanford.EDU
Wed Nov 5 00:31:17 EST 1997


Those of you who've met and talked with Yessenin-Volpin may have had my
initial reaction to Vladimir Sazonov's post.  My experience has been
that it is difficult to see Y-V's point of view.  On rereading it more
carefully, however, I see that Vladimir is making the identical point I
was planning to make in one of my several half-completed on-hold
replies to Harvey, Martin, etc.  Let me try to develop his (and Rohit
Parikh's) point in even more detail.

The point is that 2^1000 is not a feasible number, in the sense that
one can't hope to touch every member of it, at least not classically (a
kiloqubit quantum computer can easily "touch" that many points of
Hilbert space as soon as it passes from a pure to a mixed state, though
it can't extract more than 1000 bits from it).
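
For a sense of scale, a rough check (Python serves here purely as a
calculator; the comparison to the atom count of the observable universe
is the standard ballpark estimate, not anything from the argument
above):

    n = 2 ** 1000
    print(len(str(n)))   # 302 decimal digits, about 1.07 x 10^301,
                         # far more than the ~10^80 atoms usually
                         # estimated for the observable universe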

We can easily visit individual points of that large space, in fact we
visit points of a 2^10000000000 element space when we write to the sort
of 1GB hard drive that sells in Fry's for around $100 these days.  But
try testing directly whether the drive is capable of distinguishing
every pair of recordable bit patterns.  Such a direct test is not even
feasible for a 128-byte memory!
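
To see just how hopeless that direct test is, here is the same
arithmetic spelled out (Python again as a mere calculator; the only
input is 128 bytes = 1024 bits):

    bits = 128 * 8                          # 1024 recordable bits
    patterns = 2 ** bits                    # distinct bit patterns
    pairs = patterns * (patterns - 1) // 2  # unordered pairs to distinguish
    print(len(str(patterns)))               # 309 decimal digits
    print(len(str(pairs)))                  # 617 decimal digits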

Now, how many times can one exponentiate?  Well, let's try starting
from 0.  We go 1, 2, 4, 16, 65536, and the next number is not
feasible---we can write it down, very tediously, but we cannot come
near exploring its space, which is much bigger than 2^1000.  So we can
only exponentiate 5 times max before becoming infeasible.
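
Spelled out, with Python once more as the calculator (the starting
point 0 and the five steps are exactly those above):

    x = 0
    for step in range(5):
        x = 2 ** x
        print(x)           # 1, 2, 4, 16, 65536
    x = 2 ** x             # the sixth exponentiation: 2^65536
    print(len(str(x)))     # 19729 decimal digits: writable, tediously,
                           # but its space dwarfs even 2^1000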

Now you may feel that this effect, if relevant at all to mathematics, is
only relevant to finite numbers, and is meaningless even for small
cardinals like \beth_i for i = 0, 1, 2, ..., since we have no way of
experiencing the difference: all infinite numbers are beyond our
experience, so what has feasibility to do with them?

I would disagree with this as follows.  Infinite numbers give us a
convenient and easily manipulated handle on either large numbers or
variable numbers.  The finite powers of omega accomplish this, with
omega viewed as an abstract limit at each of finitely many nested
levels of finite iteration.  The fact that they are infinite does not
mean that they are intrinsically beyond our reach so much as that they
are elusive: when pinned down they may turn out to be ordinary finite
numbers easily within reach.

With this perspective, it becomes reasonable to postulate a certain
relativistic or fractal or scalability viewpoint: certain phenomena
like feasibility apply at all scales.  For any quantity that is
feasible but nontrivially so, its exponential is likely to be
infeasible.  By scalability this principle should apply no matter where
in the number spectrum one sits.

Now suppose we are working at some cardinal c for some good reason, and
we suddenly jump to 2^c.  If c is a "feasible but sizable" cardinal,
then by scalability 2^c is very likely not to be feasible.  Even if it
is feasible, five exponentiations are *guaranteed* to demolish it.

For this reason alone I tend to take cardinals like \aleph_5 with a
grain of salt.

Another reason I'm suspicious of the results of repeated exponentiation
is my concern for neglected topology.  When c is discrete, 2^c should
have a *very* coarse topology, so coarse in fact that 2^2^c should be
c.  This is metatopology; actually it is Chu spaces (now that I've told
you a little about them in the previous message), with discrete c being
an A x 2^A Chu space (the 2^A half consists of the open sets and A is
the set of points), 2^c being a 2^A x A Chu space (amounting to a
complete atomic Boolean algebra, i.e. a power set properly
"topologized"), and 2^2^c being the original A x 2^A Chu space back
again (exponentiation, despite its naive definition for Chu spaces,
turns out to do this, by respecting topology in the way it should have
been respected all along instead of being casually cast aside for so
many decades).
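
A minimal sketch of that bookkeeping for a three-point set, taking
"exponentiation = transposition" as stated here rather than deriving it
from the Chu machinery (the 0/1 matrix with points as rows and open
sets as columns just transcribes the A x 2^A description above; the
choice of Python and of three points is purely illustrative):

    from itertools import product

    def discrete(n):
        # Discrete Chu space on n points: one column per subset (open set).
        cols = list(product((0, 1), repeat=n))               # the 2^n subsets
        return [[col[i] for col in cols] for i in range(n)]  # n x 2^n matrix

    def dual(m):
        # Per the text: exponentiating by the dualizing object transposes.
        return [list(row) for row in zip(*m)]

    c = discrete(3)              # A x 2^A:  3 x 8
    p = dual(c)                  # 2^A x A:  8 x 3, the "power set"
    print(len(c), len(c[0]))     # 3 8
    print(len(p), len(p[0]))     # 8 3
    print(dual(p) == c)          # True: 2^2^c is c back again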

What's wrong with casting aside topology?  Well, *this* is where the
real exponentiation takes place!  When you exponentiate a Chu space
(literally form 2^\A as the Chu maps from \A to the 2x1 Chu space
serving as the dualizing object), you merely transpose it, leaving its
size unchanged.  Repeated exponentiation merely keeps flipping it back
and forth.  But suppose you catch an initially discrete space A x 2^A on
the odd cycle, as 2^A x A, and you discard the topology.  What you end
up with is the space 2^A x 2^2^A.  One side jumps two exponentials!
Bearing in mind the limit of 5, this means that if you discard topology
more than twice in any sequence of operations (which might include lots
of "free" exponentiations realized as transpositions), you are at great
risk of reaching an infeasible object.  The apparently simple act of
discarding topology is that expensive!  (There is a similar phenomenon
in the theory of reversible computing: as long as you keep on computing
reversibly, there is no mandatory kT ln 2 energy loss per erased bit; you
only have to pay that if you try to clean up by erasing everything!)
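
The cost of that discarding step can be tracked purely in sizes, on the
same reading as above (a space summarized by the pair (number of
points, number of open sets); the three-point example is again
arbitrary):

    def discrete_size(n):
        return (n, 2 ** n)            # A x 2^A

    def dual_size(shape):
        points, opens = shape
        return (opens, points)        # a "free" exponentiation: a transpose

    def forget_topology(shape):
        points, _ = shape             # throw the open sets away ...
        return discrete_size(points)  # ... then re-discretize what is left

    c = discrete_size(3)              # (3, 8)
    e = dual_size(c)                  # (8, 3): same size, merely flipped
    print(forget_topology(e))         # (8, 256): one side jumped two exponentials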

Harvey will of course complain that this is *still* not enough detail
to satisfy him; I fear nothing short of a complete textbook with every
last detail is going to satisfy Harvey.  Well, it's a fair enough
request, sigh.

Vaughan
