[FOM] Re: The Myth of Hypercomputation
Timothy Y. Chow
tchow at alum.mit.edu
Thu Feb 19 13:50:02 EST 2004
Aatu Koskensilta wrote:
> Assume that some physical theory T becomes accepted - say T is
> some form of quantum gravity or string theory or whatnot - and someone
> finds out that some physical set up corresponds to a mechanism
> which decides Pi-1 sentences *and* gives counterexamples when the
> sentence is not true. We could well have as much confidence in T as in
> any of our now common theories. In addition, we could test the
> predictions about Pi_1 sentences we know to be true, and some we know to
> be false, and see that the results make sense.
This is roughly the picture that Toby Ord paints. I think that there
are a couple of crucial points missing from this broad-brush view.
The first point is that when we say that we believe that a certain
physical theory T is true, there is always an implicit rider that the
theory T is only believed to be true inside certain limits that are
accessible to us experimentally.
There is a strawman view, more plausible in the heyday of classical
physics than today, that physical theories are "theories of everything"
in the sense that they describe the universe all the way from the
infinitesimally small to the infinitely large. This view may sometimes
be reinforced in the classroom; I was just reading Littlewood's
"Miscellany" the other day, where he talks about a rod balanced vertically
on the floor of a train, hinged at the bottom, and argues that some
initial position of the rod will keep it from touching the ground
for a fortnight. The argument requires some severe
idealizations, extrapolating the validity of classical physics to realms
where we know it is not valid. Pace Littlewood, the notion that a
physical theory tells us *exactly* what the world is like is nonsense.
Our confidence in a physical theory cannot extend to realms beyond what
we can measure, regardless of how much testing we do in the realm that
we *can* measure.
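To see just how severe the idealizations are, here is a quick numerical
sketch of the intermediate-value argument that Littlewood has in mind.
Everything in it is invented for illustration: a one-meter rod,
g = 9.8 m/s^2, and a made-up sinusoidal acceleration profile standing in
for the train. One starting angle makes the rod fall to one side, another
makes it fall to the other side, so by continuity there are starting
angles in between that keep it up for as long as you like; bisection
hunts for them.

    import math

    g, L_rod = 9.8, 1.0            # gravity (m/s^2), rod length (m): assumed values

    def train_accel(t):
        # Invented stand-in for the train's horizontal acceleration (m/s^2).
        return 2.0 * math.sin(0.5 * t)

    def fall(theta0, t_max=60.0, dt=1e-3):
        """Crudely integrate theta'' = (3/(2L)) * (g*sin(theta) - a(t)*cos(theta)),
        with theta measured from the vertical and the rod starting at rest.
        Returns (side, time): side is +1 or -1 when |theta| first exceeds pi/2,
        or 0 if the rod is still up at t_max."""
        theta, omega, t = theta0, 0.0, 0.0
        while t < t_max:
            alpha = (1.5 / L_rod) * (g * math.sin(theta)
                                     - train_accel(t) * math.cos(theta))
            omega += alpha * dt
            theta += omega * dt
            t += dt
            if abs(theta) > math.pi / 2:
                return (1 if theta > 0 else -1, t)
        return (0, t_max)

    # Littlewood's intermediate-value argument, made numerical: bisect between
    # a starting angle that falls one way and one that falls the other way.
    lo, hi = -0.5, 0.5             # radians from vertical
    assert fall(lo)[0] == -1 and fall(hi)[0] == +1
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        side, t_up = fall(mid)
        print(f"bracket width {hi - lo:.1e} rad   rod stays up {t_up:5.1f} s")
        if side == 0:
            break
        if side < 0:
            lo = mid
        else:
            hi = mid

With these made-up parameters, double-precision arithmetic can keep the
rod up for only about ten seconds; to survive a fortnight, the initial
angle would (roughly speaking) have to be specified to something like
millions of digits, far past the point where classical mechanics, or a
perfectly rigid rod, means anything.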
The other point missing from the Ord-Koskensilta broad-brush picture is
that we give experimental results a different kind of epistemological
status than we give theories. Suppose I calculate
mathematically with a physical theory and generate an unexpected
experimental prediction. Suppose furthermore that the physical theory
has been strongly confirmed so far. Let me pose the question: Why bother
doing the experiment? If the theory has been confirmed in the past,
and you've proved mathematically that this experimental consequence
must follow if the theory is correct, then don't you *know* what the
result of the experiment will be? And if the experiment is carried
out and gives the "wrong" result, you can just take your notebook to
Mother Nature and explain to her where she made her mistake, right?
Of course not: something that can be directly confirmed experimentally
confers a certain kind of confidence that cannot be achieved via
theoretical extrapolation sans experiment.
The word "compute" has a strong connotation of carrying the kind of
confidence that a direct experiment confers. I do not deny that we
can have the same kind of confidence in a Pi-1 sentence that we have
in extrapolations of physical theories. After all, we have that kind
of confidence already in certain sentences (e.g., the consistency of
ZFC). We don't need hypercomputation for that. However, the way
people talk about hypercomputation has the strong implication, if not
the outright assertion, that if a hypercomputer were to "compute" that
ZFC is consistent, then we would acquire the kind of certainty in its
truth that we have in the results of direct experiments and ordinary
computations and other finitely verifiable facts; in particular, there
would be a *qualitative* change in our confidence. This is what I don't
see hypercomputation (at least the incarnations I've seen) providing.
---
Now, let me shift gears slightly and argue the other side to some extent.
One distinction that I have not really made in my previous posts here
is the distinction between interpolation and extrapolation. The actual
number of controlled experiments we perform is not only finite, but very
small. Modern computers are built assuming certain physical principles,
and although these physical principles have been tested, they have
certainly not been tested exhaustively. We routinely assume that our
computers will behave the same under a wide variety of conditions---
different locations on the earth, different temperatures, different
times, etc. Hardly any of these assumptions has been directly tested,
yet we treat the results of the computations of the computer as having
effectively the same certainty as a controlled physical experiment would
have. So aren't even ordinary finite computations vulnerable to the kind
of skeptical argument that I have been proposing---that they depend on
extrapolating a physical theory to a previously untested realm?
I think this is a good objection, and it shows that any sort of wedge
I want to drive between direct empirical verification by finite
experiments and theoretical predictions has to be constructed carefully.
My initial feeling is that, as I said, the distinction between
interpolation and extrapolation is the right response. Our experience
suggests to us that if a particular experimental result holds for a
selected set of conditions within a certain range, then it is likely
to hold for suitably interpolated conditions. On the other hand, if
my computer works fine at 0, 20, and 40 degrees Celsius, I would be
very rash to claim that it will also work at 60, 80, 100, 120, ...
degrees Celsius or at -20, -40, -60, ... degrees Celsius. Or if
Newtonian mechanics works fine for velocities of 10, 100, 1000
meters per second, I can confidently interpolate to 500 meters per
second, but I can't confidently extrapolate to arbitrarily high
velocities.
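To put rough numbers on the Newtonian example (standard formulas; the
1 kg mass and the list of speeds are just for illustration), compare the
Newtonian and relativistic kinetic energies:

    import math

    c = 299_792_458.0                  # speed of light in m/s

    def newtonian_ke(m, v):
        return 0.5 * m * v * v

    def relativistic_ke(m, v):
        beta2 = (v / c) ** 2
        gamma = 1.0 / math.sqrt(1.0 - beta2)
        # (gamma - 1) m c^2, written to avoid cancellation at small v
        return m * c * c * beta2 * gamma * gamma / (gamma + 1.0)

    for v in (10.0, 100.0, 500.0, 1000.0, 0.1 * c, 0.5 * c, 0.9 * c):
        ratio = relativistic_ke(1.0, v) / newtonian_ke(1.0, v)
        print(f"v = {v:12.4e} m/s   relativistic / Newtonian KE = {ratio:.12f}")

At 500 m/s the two agree to better than one part in 10^11, so
interpolating within the tested range is safe; at half the speed of
light they differ by roughly 24 percent.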
Therefore, if someone can give a plausible account of hypercomputation
that relies only on interpolation rather than extrapolation (in some
sense that I have admittedly not defined precisely yet), or alternatively
that relies only on *untested* rather than *untestable* realms, then
I might be convinced.
I think this is going to be difficult, for the following reasons (again
sketched out intuitively since I haven't honed these arguments yet).
Roughly speaking, interpolation of finite experiments still leads to
finite experiments. I could still, in principle, carry out finite,
controlled experiments in the interpolated region to directly verify
the theoretical predictions. Therefore, allowing interpolation is not
going to circumvent the finite/infinite barrier. To solve the halting
problem, on the other hand, is almost certainly going to involve some
kind of infinities. The reason is that to be confident that a
hypercomputer solves the halting problem, we have to convince ourselves
that what the hypercomputer is concerning itself with is *sufficiently
similar* to a Turing machine churning away indefinitely. Otherwise, the
hypercomputer will be solving not the halting problem but some other
problem. It is hard to see how we could achieve this sufficient
similarity without embedding infinity *somehow* into the physical setup,
in a way that requires extrapolation of theories to untestable realms.
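To make the finite/infinite barrier concrete, here is a toy step-bounded
simulator (the machine encoding is my own, chosen for brevity). Any
finite observation of a machine can certify that it halts, but it can
never certify that it runs forever; that asymmetry is exactly what a
purported hypercomputer would have to get around.

    def run_bounded(delta, max_steps):
        """Run a Turing machine (delta[(state, symbol)] = (write, move, next_state))
        on a blank tape for at most max_steps steps.  A finite observation can
        report "halts", but never "runs forever"."""
        tape, head, state = {}, 0, "A"
        for _ in range(max_steps):
            key = (state, tape.get(head, 0))
            if key not in delta:              # no applicable rule: the machine halts
                return "halts"
            write, move, state = delta[key]
            tape[head] = write
            head += move
        return "unknown"                      # all a finite run can ever tell us

    halts_soon = {("A", 0): (1, +1, "B")}     # state B has no rules, so it halts
    runs_forever = {("A", 0): (1, +1, "A")}   # marches right writing 1s forever

    print(run_bounded(halts_soon, 10**6))     # -> halts
    print(run_bounded(runs_forever, 10**6))   # -> unknown, for any bound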
---
Let me add one more footnote about why I have been harping on the
finite/infinite distinction rather than the recursive/non-recursive
distinction. Focusing on the recursive/non-recursive distinction
makes it sound like we have computers today that compute any recursive
function. However, it is obvious that we do not; a "recursive function"
is inherently an infinite object, and computers are finite. The concept
of a recursive function is practical *only because for any instance of
interest, we can remove all infinities*. The input size, the space
usage, the time usage, and so on, can all be taken to be finite.
Any finite partial function can be extended to a non-recursive function,
so we have computers today that can compute non-recursive functions (up
to a certain point, of course; but we can do no better than that for
recursive functions either). Conversely, if we could build a computer
that computes a *recursive* function with *infinities intact*---i.e.,
it truly has infinite memory---that would be as significant as a computer
that computes a non-recursive function. Therefore the real issue is
finite versus infinite.
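As a toy illustration of the last point, take the Busy Beaver function
Sigma(n), the largest number of 1s a halting n-state, 2-symbol Turing
machine can leave on its tape. It is a standard example of a
non-recursive function, yet the restriction of it that we could ever
actually query is a finite lookup table, which is no more and no less
than what a physical computer gives us for a recursive function.

    # Established values of Sigma(n) for n <= 4 (if I have remembered them
    # correctly); Sigma itself is non-recursive.
    KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

    def busy_beaver(n):
        """Agrees with the non-recursive function Sigma on a finite domain;
        provably, no algorithm extends this to all n."""
        if n in KNOWN_SIGMA:
            return KNOWN_SIGMA[n]
        raise NotImplementedError("beyond the finite table")

    print(busy_beaver(4))   # -> 13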
Tim