[FOM] Andrew Hodges on Kieu and Calude
martin at eipye.com
Thu Mar 11 12:01:28 EST 2004
The comments below are by Andrew Hodges. He is Turing's biographer and a
physicist, a former student of Roger Penrose. Although he is not an FOM
subscriber, I post this with his permission.
I note that you stress in your recent paper that Calude's identification of
the halting problem with the 'coin' search is mistaken and misleading.
However perhaps one could say more about this dubious identification. The
crux lies in his quantum operator which has the assumed effect of working
out n steps of a Turing machine operation, for all n simultaneously. It is
because he writes this down that he assumes that the infinite sequence of
states of the Turing machine is *immediately accessible,* thus reducing
the halting problem to an inspection problem. He does not discuss whether
any actual physical operator can in fact compute infinitely many steps of a
Turing machine in a finite time. He just assumes it.
In the standard theory of quantum computing, a quantum computation is
broken down into primitive standard operations on q-bits with quantum
gates, and then a complexity theory is derived, yielding its well-known
results (e.g. the possible exponential speed-up). If Calude described his
operator in *this* way, i.e. building it out of primitive gates, it would
certainly seem to need an infinite time. (Conversely, if
quantum-computationists used Calude's assumptions, they could claim that
all finite quantum computations, however long, take no time at all!)
Perhaps Calude has in mind some ingenious theory of quantum processes that
gets round this difficulty, but if so, he doesn't discuss it in the paper.
As it stands, he seems to get his starting-point only by ignoring this
standard groundwork of quantum computation theory.
The point is that any actual quantum operator has to be realised in terms of
interacting particles/fields, and all interactions take time; infinitely
many interactions will take an infinite time.
Some people may like to call Calude's idea a 'theoretical' solution of the
halting problem, thus leaving aside the question of real physical
implementation. But the whole idea of the decision problem as formulated by
Church et al. is that a solution must be an 'effective' procedure, as
opposed to one merely postulated or imagined. This must hold as rigorously
for putative 'quantum' procedures as for classical procedures.
If you are going to assume a quantum operator that does infinitely many
things in a finite time, you might just as well assume a classical
operation that does the same (e.g. the trivial idea of a Turing machine
that can magically perform its nth step in time 2^(-n).)
I have a similar query regarding Calude's sophisticated stochastic theory
using Wiener measure. Its starting point is the allocation of 'prior
probabilities' that a given Turing machine halts on the nth step. It is
easy to see that, given an arbitrary Turing machine to test,
there is no allocation of such probabilities that a rational person can
make. (It is obvious that there are infinitely many machines which halt on
the nth step, and that the set of machines halting on the (n+1)th step is
at least as large as the set of machines halting on the nth step. The only
allocation of 'probabilities' consistent with this fact is the allocation
of 0 to 'halting on the nth step' and 1 to 'never halting.' This would
wreck the proposed procedure.)
But suppose for the sake of argument that we accept some non-trivial 'prior
probabilities' summing to 1, with a non-zero probability of never halting.
Now, forget the quantum mechanics and just keep on stepping the classical
Turing machine. If it halts, we have a decision. If not, we keep on going,
and at a certain point, the Bayesian principle used by Calude will tell us
that there is a probability (1 - epsilon) that the machine never halts.
Epsilon can be made as small as we like, and it will just determine a
longer, finite, number of steps to try before we call it a day and say the
machine is 'probably' non-halting. Is this a 'stochastic solution' of the
problem? No, it obviously tells us *nothing whatsoever* about the real
problem being studied and is a complete artifice introduced by the
assumption of prior probabilities.
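Here is a sketch of that trivial 'stochastic test' in Python. The particular prior (probability p_never of never halting, a geometric distribution over halting times for the rest) is my own construction for illustration, not Calude's measure; the point is only that Bayes' rule mechanically drives the posterior of 'never halts' towards 1 as the machine fails to halt, telling us nothing about the actual problem.

```python
# Assumed prior (my illustration): P(never halts) = p_never, and
# P(halts at step n) = (1 - p_never) * (1 - r) * r**(n - 1) for n >= 1.
# The geometric tail P(halts after step T) is (1 - p_never) * r**T.

def posterior_never_halts(T: int, p_never: float = 0.5, r: float = 0.9) -> float:
    """P(never halts | no halt in the first T steps), by Bayes' rule."""
    tail = (1.0 - p_never) * r ** T
    return p_never / (p_never + tail)

def steps_for_confidence(eps: float, p_never: float = 0.5, r: float = 0.9) -> int:
    """Smallest T with posterior >= 1 - eps: a finite 'stopping rule'."""
    T = 0
    while posterior_never_halts(T, p_never, r) < 1.0 - eps:
        T += 1
    return T
```

Whatever epsilon is chosen, the rule says 'run for some finite T, then declare probably-non-halting' -- an artifice of the assumed prior, not a solution.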
Possibly Calude has in mind that the prior probabilities should be assigned
taking into account some other information about the machine, e.g. its
total number of configurations. In this case there could be genuine prior
probabilities. The trouble with this is that knowledge of the prior
probabilities would then be equivalent to solving the classical decision
problem! There may well be all sorts of variants of these ideas possible -
my point is that (1) Calude doesn't discuss them and (2) he does not show
that his proposed 'stochastic solution' is anything better than the trivial
and meaningless 'stochastic' test I have described above.
Kieu avoids the problem of demanding the running of a Turing machine for
infinitely many steps by using the equivalence with the problem of finding
solutions to a Diophantine equation. Thus, he is evaluating a polynomial
for infinitely many arguments instead.
I can't believe that this reformulation can work magic, and allow an
infinite amount of computation to be done in a finite time. His Hamiltonian
operating on the nth occupation number state must be equivalent to putting
n particles through a physical process, and one would expect this to take
something like n times as long as putting one particle through the process.
What is the Hamiltonian, anyway? As with the Calude operator, we have to
ask how it is to be made an *effective* procedure. At first sight, the
evaluation of polynomials looks a lighter task for a quantum operator than
the running of Turing machines, but this may be because one is thinking of
modelling the polynomial in an analog form, with coefficients modelled by
physical quantities like an electric field. (Kieu's method seems to make no
essential use of the discreteness of the polynomial coefficients, only of
the integer variables on which it is evaluated.) The trouble with this is,
as you have pointed out, that the coefficients require *infinite
precision*. The Hamiltonian H=(2m^2 - n^2)^2 has infinitely many states
with 'energy' 1, and none with 'energy' 0. But H will have infinitely many
zeroes if that coefficient 2 is allowed to vary by any amount whatever.
Infinite precision is not 'effective'.
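A brute-force check of this example in Python. With the coefficient exactly 2, H = (2m^2 - n^2)^2 never vanishes (sqrt(2) is irrational) and takes the value 1 infinitely often, at the solutions of the Pell equation n^2 - 2m^2 = +/-1; nudge the coefficient to, say, 2.25 = 9/4 (my choice of perturbation, purely for illustration) and a zero appears at (m, n) = (2, 3).

```python
# 'Energy' of the occupation-number state (m, n) for coefficient c.
def H(m: int, n: int, c: float = 2.0) -> float:
    return (c * m * m - n * n) ** 2

# Exact coefficient 2: the minimum over a finite grid of positive integers
# is 1, attained at Pell solutions such as (1, 1), (2, 3), (5, 7), ...
assert min(H(m, n) for m in range(1, 200) for n in range(1, 200)) == 1.0

# Perturbed coefficient 2.25 = 9/4: a zero-energy state exists at (2, 3).
assert H(2, 3, c=2.25) == 0.0
```

So an arbitrarily small error in realising the coefficient changes the answer to the decision problem: that is what 'infinite precision' means physically.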
Maybe we *could* insist that the Hamiltonian be built entirely out of
operators with discrete integer eigenvalues (exploiting the fact that the
Diophantine polynomial has integer coefficients). This would get us out of
the need for infinite precision. But in this case, it seems to me that it
would be equivalent to performing, by quantum operations, bit-by-bit
operations of unary arithmetic on the input integer n. Basically, this
would be just as complicated as putting a T-machine through n steps, and we
are back to the problem of Calude's 'operator.'
At the very least, some argument should be supplied by Kieu to explain why
in fact, despite these obvious difficulties, his Hamiltonian operator could
be implemented in an effective way in a finite time.
More generally, there is very little discussion of the 'infinite limit' in
Kieu's paper, even though infinitude is *of the essence* in this
discussion, and finite approximations are of no real value. A lot of his
theory is for two-state cases, with a rather vague assertion that this
suffices because eventually two states will dominate.
I suspect that the infinite limit doesn't actually work for his theory as
he thinks. As his theory never seems to use the fact that the Hamiltonian
is an integer-valued polynomial, as far as I can see it would work just as
well if it were real-valued. (Or have I missed something in the theory of
adiabatic cooling?) Thus we could apply it to (sqrt(2)*m - n)^2 which, as
m and n range over integers, can never vanish, but attains arbitrarily small
values for sufficiently large m and n. How would his idea of converging to
the 'ground state' apply in this case? There are infinitely many states of
which the infimum is 0, but 0 is never attained. I suspect that this might
show up a difficulty in the infinite limit.
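A quick numerical check of this claim. Along the continued-fraction convergents n/m of sqrt(2), which satisfy the recurrence (m, n) -> (m + n, 2m + n), we have 2m^2 - n^2 = +/-1, so (sqrt(2)m - n)^2 = 1/(sqrt(2)m + n)^2: strictly positive but tending to 0. The 'energies' have infimum 0 with no ground state.

```python
import math

# Energies (sqrt(2)*m - n)**2 along the convergents n/m of sqrt(2):
# (1, 1), (2, 3), (5, 7), (12, 17), (29, 41), ...
def convergent_energies(k: int):
    m, n = 1, 1
    out = []
    for _ in range(k):
        out.append((math.sqrt(2) * m - n) ** 2)
        m, n = m + n, 2 * m + n
    return out

energies = convergent_energies(10)
assert all(e > 0 for e in energies)                        # 0 never attained
assert all(a > b for a, b in zip(energies, energies[1:]))  # decreasing to 0
```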
There is certainly something very odd about his assertions on page 6 about
'truncating' the number of states according to which Diophantine equation
one is looking at. The trivial example of H=(2m^2 - n^2)^2 shows that the
lowest energy states (H=1) can continue for ever. How then can any
'truncation' be valid? More deeply, his remark goes against the whole
spirit of a method which must be applicable to any and every Diophantine
equation, as it suggests taking some first step which depends on inspecting
the equation to be considered. But worse, we know from computability theory
that there is no algorithm which, supplied with a general Diophantine
equation, can give you a bound on its solutions. So any such inspection
would be a waste of time. I confess I have not been able to see whether
this discussion of 'truncation' is essential to his argument about taking
the infinite limit. But if it is not, I cannot see why he brings it up, and
if it is, then it must render his argument invalid.
In the concluding section, Kieu claims that there is no conflict with the
classical insolubility of the problem; probability, he says, makes
all the difference. But the classical (Turing) argument shows the
non-existence of a halt-testing machine by applying it to itself. If, as
Kieu suggests, his quantum-theoretic test could be made into a classical
procedure (by simulating the solution of the Schroedinger equation on a
computer), then it looks as if you could still get a similar contradiction
by applying it to itself - and getting a probability which is both more
than 1/2 and less than 1/2. Anyway, I do not see that probability *in
itself* makes all the difference. Thus this section does not convince me of
the lack of conflict and I am more inclined to suspect something is wrong
with the theory as applied to infinitely many states.
If there *is* a solution to a Diophantine equation, there is no problem -
you can find it trivially just by ordinary computation over all integer
values until you reach a solution, in finite time. In this case Kieu
offers no improvement on the obvious. But his positive claim is that for
each insoluble equation there is a time T, after which you know that there
is near-certainty of there being *no* solution for any value, however
large. This suggests that after time T, the system has 'sensed' the
non-existence of any zero at or above 2^(2^(2^(2^.....[as many as you like].
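The positive half of this is just classical semi-decidability, which a few lines of Python make plain (the polynomial 2m^2 - n^2 - 1, with solution (1, 1), is my example, not Kieu's):

```python
# Dovetailed search: enumerate (m, n) in shells of increasing max(m, n)
# until the polynomial vanishes.  This halts iff a solution exists within
# the cap; it can never certify that no solution exists.
def find_zero(p, max_k=10**6):
    for k in range(max_k):
        for m in range(k + 1):
            for n in range(k + 1):
                if max(m, n) == k and p(m, n) == 0:
                    return (m, n)
    return None

assert find_zero(lambda m, n: 2 * m * m - n * n - 1) == (1, 1)
```

It is the negative half, ruling out *all* of the infinitely many remaining values in a finite time T, that carries the whole weight of Kieu's claim.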
I find this hard to believe. The adiabatic cooling method which is the
basis of Kieu's idea, is claimed only to give a square-root speed-up in
search routines, the same speed-up as comes from more conventional
quantum-computational schemes. How is it then that it can speed up an
infinite search *more than exponentially* - and so arrive at a finite time?
andrew.hodges at wadh.ox.ac.uk
Visiting Scholar UC Berkeley
Professor Emeritus, NYU
martin at eipye.com
(Add 1 and get 0)