[FOM] Testing Hypercomputers
dmytro at mit.edu
Wed Mar 7 18:30:37 EST 2012
Suppose that hypercomputers get built that, for example, efficiently
solve the halting problem for feasible inputs. The nonrecursive nature
of hypercomputers raises the question of how we can know that they
perform as advertised. The answer lies in getting results that appear
correct, combined with the view that nature is non-perverse. The full
argument would be subtle and would depend on the physical theory and the
computer design, but at the highest level it is analogous to the argument
that the world existed in 10000 B.C., as opposed to being created in
9999 B.C. as if it had existed in 10000 B.C., with possible additional
arguments based on human access to mathematical truth.
However, even then there might be a question of interpretation. Suppose
that a computer purports to compute the truth predicate of second order
arithmetic, and the results appear fine except that it claims that all
real numbers are constructible. There are two interpretations:
A. All real numbers are constructible.
B. Since there are nonconstructible real numbers, the theory's use of
quantification over all real numbers is incorrect and should be replaced
with quantification over constructible real numbers. Future physical
theories might allow computation with nonconstructible real numbers.
I think that B is correct, but others may prefer A.
It is also instructive to see to what extent we can trust the results
from purported hypercomputers (ones that claim to efficiently solve the
halting problem) if the assumption of non-perverseness of physical laws
is relaxed. To simplify exposition at the price of formal accuracy, I
will partially conflate 'polynomial time' with 'feasible'.
Assume that computation outside of these hypercomputers is limited to
polynomial time, except for randomness. There are three scenarios:
1. If there is no randomness, then the trustworthy results would be
limited to solving NP problems with feasible inputs. Even then there
may be limitations, because negative solutions to NP problems do not
appear to be feasibly verifiable in general (that is, NP is not known
to equal coNP).
2. If there is enough randomness but all hypercomputers are capable of
communicating with each other despite not being connected to each other,
then the relevant complexity class is IP (Interactive Polynomial Time),
which equals PSPACE, so we would be able to solve polynomial space
problems with feasible inputs.
3. If there is enough randomness and two (or more; the number does not
matter) hypercomputers can be isolated from each other, then the
relevant complexity class is MIP (Multi-Prover Interactive Proofs),
which equals NEXPTIME, so we would be able to solve nondeterministic
exponential time problems with feasible inputs.
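The asymmetry in scenario 1 can be made concrete. A hypercomputer's "yes" answer to an NP problem comes with a certificate that an ordinary polynomial-time machine can check; a "no" answer has no such short certificate unless NP = coNP. A minimal sketch, using SAT as the NP problem (the encoding and function names here are illustrative, not from the post):

```python
# Positive answers to NP problems (here, SAT) come with certificates that an
# ordinary polynomial-time machine can check, so a hypercomputer's "yes"
# answers are independently verifiable.

def verify_sat(clauses, assignment):
    """Check a claimed satisfying assignment in time linear in the formula.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where k means variable k is true and -k means it is false.
    assignment: dict mapping variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or x2) and (not x1 or x2)
clauses = [[1, 2], [-1, 2]]
print(verify_sat(clauses, {1: True, 2: True}))   # True: certificate checks out
print(verify_sat(clauses, {1: True, 2: False}))  # False: invalid certificate
# A claimed "no" answer has no analogous short certificate unless NP = coNP.
```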
Note 1: The upper bounds on complexity are based on what computation is
sufficient to fool the rest of the world under the assumptions, and
these bounds turn out to be exact (up to a polynomial transformation of
input size). While the theorems are ordinarily stated for complexity
classes, the relevant constants have been proved to be small, so the
intuitive explanation in terms of feasible inputs makes sense.
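To illustrate why randomness does real work in scenarios 2 and 3, here is a toy interactive proof: the classic graph non-isomorphism protocol, in which a polynomial-time verifier with coin flips checks a claim it could not decide alone. The verifier secretly permutes one of two graphs; a truthful unbounded prover can always say which graph the permuted copy came from, while if the graphs were isomorphic any prover would be caught with probability 1/2 per round. The graphs and code below are an illustrative sketch, not anything from the original post:

```python
import random
from itertools import permutations

def permute(graph, perm):
    """Apply a vertex permutation to a graph given as a set of edges."""
    return frozenset(frozenset(perm[v] for v in edge) for edge in graph)

def honest_prover(g0, challenge, n):
    """Unbounded prover: answers 0 iff the challenge is a permuted copy of g0."""
    if any(permute(g0, p) == challenge for p in permutations(range(n))):
        return 0
    return 1

def verifier_accepts(g0, g1, n, rounds=20):
    """Accept the claim 'g0 and g1 are non-isomorphic' unless the prover slips."""
    for _ in range(rounds):
        bit = random.randrange(2)          # verifier's private coin
        perm = list(range(n))
        random.shuffle(perm)
        challenge = permute(g0 if bit == 0 else g1, perm)
        if honest_prover(g0, challenge, n) != bit:
            return False                   # prover caught; reject
    return True

# Triangle vs. path on 3 vertices: genuinely non-isomorphic,
# so the honest prover passes every round.
triangle = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
path = frozenset({frozenset({0, 1}), frozenset({1, 2})})
print(verifier_accepts(triangle, path, 3))  # True
```

The full IP = PSPACE and MIP = NEXPTIME results replace this toy problem with protocols for arbitrary PSPACE or NEXPTIME claims, but the mechanism (private randomness plus unbounded provers) is the same.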
Note 2: The word 'feasible' is vague, and the least unfeasible number
depends on the instance. For example, consider the informal claim that
the sum of two feasible numbers is feasible: if the least unfeasible
number for the sum is 2^32, then the least unfeasible number for the
summands is 2^31. Alternatively, the claim can mean that (under the
relevant probability distribution) the sum of two feasible numbers is
likely to be feasible, or that the sum operator only marginally
increases the degree of unfeasibility.
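The bookkeeping in Note 2 can be checked in a few lines. A minimal sketch, with the 2^32 threshold taken from the note purely for illustration:

```python
# If the least unfeasible value for a sum is 2**32, then insisting that both
# summands be feasible pushes the least unfeasible summand down to 2**31.

SUM_THRESHOLD = 2**32                    # least unfeasible value for the sum
SUMMAND_THRESHOLD = SUM_THRESHOLD // 2   # least unfeasible value per summand

def summands_feasible(a, b):
    return a < SUMMAND_THRESHOLD and b < SUMMAND_THRESHOLD

# The largest feasible summands still give a feasible sum:
a = b = SUMMAND_THRESHOLD - 1
assert summands_feasible(a, b)
assert a + b < SUM_THRESHOLD   # (2**31 - 1) + (2**31 - 1) = 2**32 - 2
```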