[FOM] Physical Theories and Hypercomputation
Antonino Drago
drago at unina.it
Thu Mar 8 16:41:41 EST 2012
The subject was debated in J. Phil. Logic 1993ff. by Hellman, Bridges and
Billinge. In particular, Scedrov dealt with constructive differential
equations in J. Pure Appl. Algebra 33 (1984). I contributed papers
published in E. Agazzi et al. (eds.): Logica e Filosofia della Scienza,
CLUEB, Bologna, 1986, II, 267-272; in V. Fano et al. (eds.):
Proceedings of SILFS 1999, Rubbettino, Cosenza, 2001, 161-180; and in A.
Garola and A. Rossi (eds.): Foundations of Quantum Mechanics, World
Scientific, Singapore, 2000, 167-193.
In your words, hypercomputation (in constructive terms, undecidable
problems) comes out mainly from physical principles.
E.g., in mechanics, Newton's version of the inertia principle requires
being able to decide that the force is exactly null, i.e. with infinite
accuracy; since the effects of a force are defined only by the subsequent
principle, one cannot exclude that even a minimal force generates large
deviations through a non-linear effect (hence there is no justification
for this version of the inertia principle obtained by placing the
test body very far from other bodies).
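A toy illustration in Python (my own sketch, with invented names): from a
measurement of finite accuracy one can sometimes certify that a force is
non-zero, but one can never certify that it is exactly null, which is the
decision this version of the principle would require.

    def force_certainly_nonzero(measured_force, accuracy):
        """Return True only when the measurement interval
        [measured_force - accuracy, measured_force + accuracy]
        excludes zero; False means 'undecided', never 'exactly null'."""
        return abs(measured_force) > accuracy

    print(force_certainly_nonzero(1e-6, 1e-9))   # True: certainly non-zero
    print(force_certainly_nonzero(0.0, 1e-9))    # False: still undecided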
Best regards
Antonino Drago
----- Original Message -----
From: "Dmytro Taranovsky" <dmytro at mit.edu>
To: "fom" <fom at cs.nyu.edu>
Sent: Thursday, March 08, 2012 12:30 AM
Subject: [FOM] Physical Theories and Hypercomputation
> This posting explains the relation between physical theories and
> computability.
>
> Physical theories are ordinarily stated in terms of real numbers and
> quantities that are reducible to real numbers, such as continuous
> functions between (specific) separable metric spaces. Real numbers are a
> priori infinite quantities, that is, they can store infinite information,
> which raises the question of why the theories nonetheless tend to be
> computable. The answer is that the finite nature of observations,
> combined with the approximate nature of physical theories, naturally
> leads to a continuous time-evolution relation. For example, if distance
> is only predicted approximately because corrections from a more accurate
> but unknown theory are neglected, it does not appear relevant whether a
> predicted non-zero distance is rational or irrational. A continuous
> function is recursive relative to some real constant. Continuity means
> that one can construct finite approximations to the theory with
> arbitrarily high specified quality/precision -- for example by quantizing
> space-time and limiting floating-point precision -- and recursiveness
> allows these approximations to be (uniformly) computable.
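>
> A minimal sketch (mine, in Python; the Lipschitz constant L, the bound M,
> and the step-size rule are illustrative assumptions, not part of any
> particular theory) of how continuity plus recursiveness yields uniformly
> computable approximations: the requested precision eps determines a
> finite step size, so only finitely many arithmetic operations are needed.
>
>     import math
>
>     def euler_approx(f, y0, T, eps, L, M):
>         """Approximate y(T) for the autonomous ODE y' = f(y), y(0) = y0,
>         to within eps (ignoring floating-point rounding; a rigorous
>         version would use rational or interval arithmetic).  Assumes
>         T > 0, f Lipschitz with constant L > 0, and |f| <= M along the
>         solution.  The classical Euler error bound
>             |error| <= (M*h/2) * (exp(L*T) - 1)
>         dictates the step size h."""
>         h = 2.0 * eps / (M * (math.exp(L * T) - 1.0))
>         n = max(1, math.ceil(T / h))
>         h = T / n
>         y = y0
>         for _ in range(n):
>             y = y + h * f(y)
>         return y
>
>     # Example: y' = -y, y(0) = 1, so y(1) = exp(-1), correct to 1e-4.
>     print(euler_approx(lambda y: -y, 1.0, 1.0, 1e-4, L=1.0, M=1.0))
>     print(math.exp(-1.0))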
>
> There are 3 ways in which physical theories can lead to hypercomputation:
> * failure of continuity
> * non-recursive physical constants
> * non-recursive initial conditions
>
> An empirical fact is that in current and past physical theories, physical
> constants are either recursive or only predicted approximately (if
> predicted at all). This should not be surprising, since we do not know of
> natural non-recursive real numbers (natural as real numbers) or even of
> natural non-recursive continuous functions between real numbers.
> Recursiveness in terms of freely choosable (within a certain range)
> constants also follows (ordinarily not formally) from the form of typical
> theories: a differential equation for the time evolution whose terms are
> elementary functions, as sketched below.
> Even if some physical constants are non-recursive, under current physics
> they appear to act only as a sparse oracle -- so the worst-case time
> complexity of the halting problem would remain exponential or worse --
> because of their limited number and because of the difficulty of
> arbitrary-precision measurements.
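>
> As a sketch of the "elementary functions" point above (mine; the
> particular equation and the constant names g and k are invented), the
> right-hand side of such an evolution law can be evaluated to any
> requested precision, uniformly from finite decimal approximations of its
> constants, e.g. with Python's mpmath library:
>
>     import mpmath
>
>     def rhs(y, t, g, k, digits):
>         """Evaluate the made-up right-hand side dy/dt = -k*y + g*sin(t)
>         to roughly 'digits' significant digits.  All inputs are decimal
>         strings, read at the working precision; since sin, *, and + are
>         computable uniformly in their arguments, the value is recursive
>         uniformly in the constants g and k."""
>         with mpmath.workdps(digits):
>             y, t, g, k = [mpmath.mpf(s) for s in (y, t, g, k)]
>             return -k * y + g * mpmath.sin(t)
>
>     # 50-digit evaluation from finite decimal approximations.
>     print(mpmath.nstr(rhs('1', '0.5', g='9.81', k='0.1', digits=50), 50))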
>
> In addition to specifying time evolution, a physical theory may also
> partially specify initial conditions, and nonrecursive initial conditions
> may allow hypercomputation.
>
> Continuous time evolution may still lead to singularities (unless the
> state space is compact), which can be dealt with in three ways:
> * Modify the theory to prevent singularities. This is the usual solution,
> since near a singularity some quantities become arbitrarily large and
> thus likely imply new physics. However, there is no universal guarantee
> that the more accurate theory will not have the same (or some other)
> singularity.
> * Apply the theory except at singularities and define the transition at
> singularities. The result is usually nonrecursive (assuming that
> singularities are common enough for a computer to be able to steer itself
> into one).
> * Prohibit initial conditions that lead to singularities. In some cases,
> a theory leads to singularities unless some factors are precisely canceled
> out, which potentially requires non-recursive initial conditions. As to
> potential hypercomputation, assume that time evolution as a function of
> initial conditions is recursive up to a potential singularity. Note that
> even if, for a certain initial condition, time evolution does not lead to
> a singularity, it may still approach a singularity at a superrecursive
> speed, which may lead to a superrecursive slowdown of the computation of
> its time evolution. Thus, there are 2 cases:
> - If there is a solution without superrecursive slowdown for computing
> approximations, then hypercomputation that is valid for all initial
> conditions (those that do not lead to a singularity) is limited to the
> separation of disjoint recursively enumerable sets. Separation of
> disjoint r.e. sets would not solve the halting problem, but, if it were
> available efficiently (for a pair that is universal under polynomial-time
> reducibility), it would make every recursive predicate computable in
> polynomial time.
> - In the general case, we can do arbitrary hyperarithmetic computation.
> This occurs even if we have just 2 degrees of freedom, one of which can
> be a constant of motion. For example, for every recursive omega-consistent
> theory T, there is a smooth recursive f with all derivatives recursive
> such that y'(t) = f(a, y) has a solution that does not go to infinity at
> finite t iff 'a' encodes (in a standard way) an omega-model of T (a and y
> are real numbers and a is constant; see the restatement below). For all T
> containing ATR_0 with Skolem functions, this allows arbitrary
> hyperarithmetic hypercomputation.
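>
> Restating the claimed construction in symbols (my notation):
>
>   \bigl(\exists\, y:\ y'(t) = f(a, y(t)),\ y \text{ defined for all } t \in [0,\infty)\bigr)
>   \;\Longleftrightarrow\;
>   a \text{ encodes (in a fixed standard way) an } \omega\text{-model of } T,
>
> with f recursive, smooth, with all derivatives recursive, and with the
> real parameter a held constant along the evolution.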
>
> With respect to the Standard Model, we do not know (as of March 2012)
> whether it is generally consistent (as in singularity-free time
> evolution), or inconsistent, or requires special initial conditions, and
> if so, whether that can be used for hypercomputation. Furthermore, while
> recursive approximations to the Standard Model are known (for example,
> through lattice gauge theories), it is not known whether they converge in
> the limit to the theory itself, and even infinitesimal (i.e. arbitrarily
> short from a fixed starting point) time evolution might be problematic
> (which may potentially lead to additional nonrecursiveness).
>
> For theories that allow hypercomputation, hypercomputation -- by its
> nature and unlike ordinary predictions -- tends to depend on infinite
> precision of certain parts of the theory, and physical theories are
> usually approximate. Thus, hypercomputation predictions usually involve
> applying the theory fundamentally outside its ordinary domain, so it is
> reasonable to be skeptical unless partial hypercomputation or some form of
> infinite precision of the theory has been tested. One method that
> arguably does not depend on infinite precision is the following: if a
> discrete physical process decays (that is, loses frequency) at a
> sufficiently fast rate (as a function of the number of past events), this
> can be used for (very slow) arbitrary hyperarithmetic hypercomputation
> regardless of what that rate is, provided that the process still
> generates an unlimited number of events.
>
> The best argument for hypercomputation is based on the physical laws
> optimality principle: Physical laws are the best possible. The
> optimality principle is related to the anthropic principle, and also to
> the widely held view that physical laws are beautiful. Once one is
> convinced that it is better to have a universe in which nonrecursive
> nonrandom processes are possible, one applies the optimality principle to
> conclude that nonrecursive nonrandom processes are possible in the actual
> universe.
>
> Sincerely,
> Dmytro Taranovsky
> http://web.mit.edu/dmytro/www/main.htm