[FOM] Physical Theories and Hypercomputation
Dmytro Taranovsky
dmytro at mit.edu
Sun Mar 11 09:02:30 EDT 2012
On 03/10/2012 11:12 AM, Vaughan Pratt wrote:
> In response to Dmytro Taranovsky I would say simply that any
> speculation about the computational capability of nature that ignores
> the implications of quantum mechanics is dead on arrival.
I was writing in general terms that are applicable to both classical and
quantum physics. In both cases, the key to recursiveness is the finite
and approximate nature of observations (which in quantum mechanics is
reinforced by the uncertainty principle). In both cases, singularities
are a potential loophole.
In quantum mechanics, under the many-worlds (multiverse)
interpretation, time evolution is exact and deterministic, as in a
classical system. However, to relate the theory to our experience, it
is commonly presented in terms of a classical observer interacting with
a quantum system. Some states that make sense classically, such as a
particle with both an exact position and an exact momentum, do not
exist in a quantum system; this is the origin of the uncertainty
principle.
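As a small numerical illustration (my own sketch in Python, with hbar
set to 1): a Gaussian wave packet saturates the Heisenberg bound
Delta_x * Delta_p >= hbar/2, and no state does better, so a state with
both an exact position and an exact momentum has no quantum counterpart.

    import numpy as np

    hbar = 1.0  # natural units
    N, L = 4096, 80.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    sigma = 1.0
    psi = np.exp(-x**2 / (4 * sigma**2))         # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

    # position spread (the mean is 0 by symmetry)
    delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

    # momentum distribution via FFT; p = hbar * k = 2*pi*hbar*frequency
    phi = np.fft.fft(psi)
    p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
    w = np.abs(phi)**2
    w /= w.sum()
    delta_p = np.sqrt(np.sum(p**2 * w))

    print(delta_x * delta_p)  # about 0.5 = hbar/2, the minimum allowed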
For some classical theories, a source of hypercomputation is a computer
with infinitely self-shrinking parts, which is ruled out by ordinary
quantum physics. However, some quantum theories have their own issues
in the form of potential divergences, such as the contribution of
states with arbitrarily high energy to the result. For quantum field theories,
convergence is a major open question, and renormalizability only
partially addresses it. If the absence of divergences is proved (which
is far from certain), then one might be able to show that (ignoring
potential nonrecursiveness of physical constants) under appropriate
assumptions, effective time evolution under the Standard Model is in BQP.
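To make the self-shrinking loophole concrete, here is a toy Python
sketch (my own illustration, not part of any theory above) of the
Zeno-style schedule such machines rely on: if step n takes 2^-n
seconds, any finite number of steps completes in under 2 seconds, and
the full infinite run would finish by t = 2 -- which is exactly what
quantum limits on miniaturization rule out.

    # Toy Zeno schedule: step n of the shrinking machine takes 2**-n seconds.
    def elapsed(num_steps):
        return sum(2.0 ** -n for n in range(num_steps))

    for n in (10, 30, 50):
        print(n, elapsed(n))  # approaches, but never reaches, 2.0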
Vaughan Pratt also makes an intuitive argument about the limited
precision of observations. One correction is that 34 digits of
precision corresponds to an observer with about 1 joule of energy and 1
second of time; potential precision increases linearly with E*t (and
hence the number of digits of precision increases logarithmically). An
interesting problem would be to formalize the argument and to prove --
or refute -- that for certain quantum theories, the fine-structure
constant (or some other constant) cannot be computed to 100 decimal
places in a reasonable amount of time. One question here is how
sensitive a many-particle system can be to the precise values of the
constants.
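To make the arithmetic explicit (a back-of-the-envelope Python sketch;
the E*t/hbar scaling is a heuristic reading of the bound, not a
derivation):

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s

    def digits_of_precision(energy_joules, time_seconds):
        # Heuristic: distinguishable states scale like E*t/hbar,
        # so digits of precision scale like log10(E*t/hbar).
        return math.log10(energy_joules * time_seconds / hbar)

    print(digits_of_precision(1.0, 1.0))    # about 34 digits for 1 J, 1 s
    print(digits_of_precision(1.0, 100.0))  # 100x the time buys only ~2 more digits

Under this scaling, each additional digit costs a factor of 10 in E*t,
which is why computing a constant to 100 decimal places would require
absurd resources.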
Sincerely,
Dmytro Taranovsky